The Palgrave Handbook of International Cybercrime and Cyberdeviance [1st ed.] 9783319784397, 9783319784403


English Pages [1467] Year 2020


Thomas J. Holt Adam M. Bossler Editors

The Palgrave Handbook of International Cybercrime and Cyberdeviance

The Palgrave Handbook of International Cybercrime and Cyberdeviance

Thomas J. Holt • Adam M. Bossler Editors

The Palgrave Handbook of International Cybercrime and Cyberdeviance With 44 Figures and 68 Tables

Editors Thomas J. Holt College of Social Science, School of Criminal Justice Michigan State University East Lansing, MI, USA

Adam M. Bossler Department of Criminal Justice and Criminology Georgia Southern University Statesboro, GA, USA

ISBN 978-3-319-78439-7
ISBN 978-3-319-78440-3 (eBook)
ISBN 978-3-319-78441-0 (print and electronic bundle)
https://doi.org/10.1007/978-3-319-78440-3

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Introduction

Almost the entire world now depends on the Internet and computer systems to manage all facets of daily life, from mundane activities like email to the management of critical water, power, and sewage systems. The global connectivity of the Internet and the proliferation of technology make it possible for these systems to be misused by deviants, criminals, nation-states, and extremist groups. It is imperative that we understand both how to design better infrastructure and how to secure these systems. Similarly, we must understand the human actors who seek to harm them. Though the computer science and engineering research literature has grown in tandem with the Internet, social science research has been slower to respond to the ways that humans interact with and abuse technology. In fact, the bulk of criminological research related to cybercrime has emerged only in the last two decades. These studies appear in such a range of outlets and cover such a diversity of issues that it is often difficult to assess the full breadth of knowledge available on any given topic in this area.

This handbook is intended as a reference work to help readers quickly grasp the current state of the field with respect to technology use and abuse. The book is global in scope, drawing on young and established scholars alike to share their knowledge. It also provides a robust examination of cybercrime from multiple perspectives, including criminology, sociology, and political science. The legal, theoretical, and policy frameworks that guide researchers and practitioners in the field are also explored in depth. Finally, this work addresses a wide variety of technology abuses, including but not limited to computer hacking, fraud, sexual behavior, intimate partner violence, terrorism, hate, and warfare. To that end, this work is separated into seven distinct parts.
The first part, Foundations, examines our basic understanding of cybercrime, definitions for these offenses, and the overall technical and social structures of cyberspace. The second part provides an overview of the various legal structures that govern the prosecution of cybercrime around the world, as well as an understanding of the nature of policing cybercriminality generally. Examining how criminological theories can be used to better understand cybercrime offending and victimization is the focus of the third part, which covers most of the discipline's main theories of offending to date. From this point, the book's parts delve into the unique nature of cybercrimes. The fourth part examines various aspects of computer hacking and


its subculture. The fifth part considers a wide variety of online economic offenses, such as phishing, romance fraud, data breaches, identity theft, online counterfeit products, and digital piracy. The sixth part covers online sexual deviance and crime, including but not limited to sexting, image-based sexual abuse, online sex trafficking and prostitution, and child sexual exploitation. Finally, the seventh part considers different aspects of violence facilitated by the Internet, such as cyberbullying, cyberstalking, online vigilantism, online violence against women, gang violence, hate speech, violent extremism, and cyberwarfare.

In sum, these two volumes represent the current state of our knowledge regarding cybercrime and technology-enabled offending. As technology continues to evolve, our understanding of technology misuse will need to evolve as well. This is one area of criminology in which there will always be new aspects of offending to examine. We hope that this work helps students understand both the wide variety of crimes encompassed under the term "cybercrime" and how legal systems around the world have responded. In addition, we hope that this handbook identifies avenues of inquiry for future researchers and contemporary scholars who may wish to expand their areas of study.

Contents

Volume 1

Part I   Foundations . . . . . 1

1   Defining Cybercrime . . . . . 3
    Brian K. Payne

2   Historical Evolutions of Cybercrime: From Computer Crime to Cybercrime . . . . . 27
    Kyung-Shick Choi, Claire S. Lee, and Eric R. Louderback

3   Technology Use, Abuse, and Public Perceptions of Cybercrime . . . . . 45
    Steven Furnell

4   Race, Social Media, and Deviance . . . . . 67
    Roderick Graham

5   The Dark Web as a Platform for Crime: An Exploration of Illicit Drug, Firearm, CSAM, and Cybercrime Markets . . . . . 91
    Roberta Liggett, Jin R. Lee, Ariel L. Roddy, and Mikaela A. Wallin

6   Organized Crime and Cybercrime . . . . . 117
    Anita Lavorgna

7   Cybersecurity as an Industry: A Cyber Threat Intelligence Perspective . . . . . 135
    Sagar Samtani, Maggie Abate, Victor Benjamin, and Weifeng Li

8   Surveillance, Surveillance Studies, and Cyber Criminality . . . . . 155
    Brian Nussbaum and Emmanuel Sebastian Udoh

9   Technology as a Means of Rehabilitation: A Measurable Impact on Reducing Crime . . . . . 183
    Cynthia McDougall and Dominic A. S. Pearson

10  Datasets for Analysis of Cybercrime . . . . . 207
    C. Jordan Howell and George W. Burruss

Part II   Legislative Frameworks and Law Enforcement Responses . . . . . 221

11  The Legislative Framework of the European Union (EU) Convention on Cybercrime . . . . . 223
    José de Arimatéia da Cruz

12  Data Breaches and GDPR . . . . . 239
    Elif Kiesow Cortez

13  Cybercrime Legislation in the United States . . . . . 257
    Adam M. Bossler

14  Legislative Frameworks: The United Kingdom . . . . . 281
    Patrick Bishop

15  Cybercrime in India: Laws, Regulations, and Enforcement Mechanisms . . . . . 305
    Sesha Kethineni

16  Legislative Frameworks Against Cybercrime: The Budapest Convention and Asia . . . . . 327
    Lennon Y. C. Chang

17  Cybercrime and Legislation in an African Context . . . . . 345
    Philip N. Ndubueze

18  Cybercrime Initiatives South of the Border: A Complicated Endeavor . . . . . 365
    José de Arimatéia da Cruz and Norah Godbee

19  Police and Extralegal Structures to Combat Cybercrime . . . . . 385
    Thomas J. Holt

20  Police Legitimacy in the Age of the Internet . . . . . 403
    Johnny Nhan and Neil Noakes

21  Forensic Evidence and Cybercrime . . . . . 425
    Marcus Rogers

Part III   Crime Theory . . . . . 447

22  Deterrence in Cyberspace: An Interdisciplinary Review of the Empirical Literature . . . . . 449
    David Maimon

23  Routine Activities . . . . . 469
    Billy Henson

24  Environmental Criminology and Cybercrime: Shifting Focus from the Wine to the Bottles . . . . . 491
    Fernando Miró-Llinares and Asier Moneva

25  Subcultural Theories of Crime . . . . . 513
    Thomas J. Holt

26  Deviant Instruction: The Applicability of Social Learning Theory to Understanding Cybercrime . . . . . 527
    Jordana N. Navarro and Catherine D. Marcum

27  Applying the Techniques of Neutralization to the Study of Cybercrime . . . . . 547
    Russell Brewer, Sarah Fox, and Caitlan Miller

28  The General Theory of Crime . . . . . 567
    George E. Higgins and Jason Nicholson

29  General Strain Theory and Cybercrime . . . . . 583
    Carter Hay and Katherine Ray

30  Critical Criminology and Cybercrime . . . . . 601
    Adrienne L. McCarthy and Kevin F. Steinmetz

31  Feminist Theories in Criminology and the Application to Cybercrimes . . . . . 623
    Alison J. Marganski

32  The Psychology of Cybercrime . . . . . 653
    Alison Attrill-Smith and Caroline Wesson

33  Internet Addiction and Cybercrime . . . . . 679
    Bernadette Schell

34  Biosocial Theories . . . . . 705
    Chad Posick

Volume 2

Part IV   Hacking . . . . . 723

35  Computer Hacking and the Hacker Subculture . . . . . 725
    Thomas J. Holt

36  Hacktivism: Conceptualization, Techniques, and Historical View . . . . . 743
    Marco Romagna

37  Global Voices in Hacking (Multinational Views) . . . . . 771
    Marleen Weulen Kranenbarg

38  Malicious Software Threats . . . . . 793
    Fawn T. Ngo, Anurag Agarwal, Ramakrishna Govindu, and Calen MacDonald

39  Cybercrime-as-a-Service Operations . . . . . 815
    Thomas S. Hyslip

Part V   Online Economic Crimes . . . . . 847

40  Social Engineering . . . . . 849
    Jan-Willem Bullée and Marianne Junger

41  Spam-Based Scams . . . . . 877
    Alex Kigerl

42  Phishing and Financial Manipulation . . . . . 899
    Byung Lee and Seung Yeop Paek

43  Romance Fraud . . . . . 917
    Cassandra Cross

44  Data Breaches and Carding . . . . . 939
    Ramakrishna Ayyagari

45  Organized Financial Cybercrime: Criminal Cooperation, Logistic Bottlenecks, and Money Flows . . . . . 961
    E. R. (Rutger) Leukfeldt, E. W. (Edwin) Kruisbergen, E. R. (Edward) Kleemans, and R. A. (Robert) Roks

46  Identity Theft: Nature, Extent, and Global Response . . . . . 981
    Katelyn A. Golladay

47  Counterfeit Products Online . . . . . 1001
    Jay P. Kennedy

48  Digital Piracy . . . . . 1025
    Kevin Jennings and Adam M. Bossler

Part VI   Sexual Deviance and Crime . . . . . 1047

49  Sexual Subcultures and Online Spaces . . . . . 1049
    Tina Hebert Deshotels and Craig J. Forsyth

50  Dating and Sexual Relationships in the Age of the Internet . . . . . 1067
    Daniel C. Semenza

51  Sexting and Social Concerns . . . . . 1087
    Kimberly O’Connor and Michelle Drouin

52  Image-Based Sexual Abuse: A Feminist Criminological Approach . . . . . 1109
    Nicola Henry and Asher Flynn

53  Revenge Pornography . . . . . 1131
    Karen Holt and Roberta Liggett

54  The Rise of Sex Trafficking Online . . . . . 1151
    Jonathan A. Grubb

55  Prostitution and Sex Work in an Online Context . . . . . 1177
    Christine Milrod and Martin Monto

56  Child Sexual Exploitation: Introduction to a Global Problem . . . . . 1203
    Kathryn C. Seigfried-Spellar and Virginia Soldino

57  The Past, Present, and Future of Online Child Sexual Exploitation: Summarizing the Evolution of Production, Distribution, and Detection . . . . . 1225
    Bryce Garreth Westlake

Part VII   Violence . . . . . 1255

58  Risk and Protective Factors for Cyberbullying Perpetration and Victimization . . . . . 1257
    Denise Wilson, Kirsten Witherup, and Allison Ann Payne

59  Cyberstalking . . . . . 1283
    Bradford W. Reyns and Erica R. Fissel

60  The Rise of Online Vigilantism . . . . . 1307
    Joshua Smallridge and Philip Wagner

61  Intimate Partner Violence and the Internet: Perspectives . . . . . 1333
    Shelly Clevenger and Mia Gilliam

62  Negative Emotions Set in Motion: The Continued Relevance of #GamerGate . . . . . 1353
    Torill Elvira Mortensen and Tanja Sihvonen

63  Social Media, Strain, and Technologically Facilitated Gang Violence . . . . . 1375
    Timothy R. Lauger, James A. Densley, and Richard K. Moule Jr.

64  Hate Speech in Online Spaces . . . . . 1397
    Matthew Costello and James Hawdon

65  The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research . . . . . 1417
    Ryan Scrivens, Paul Gill, and Maura Conway

66  Cyberwarfare as Realized Conflict . . . . . 1437
    Robert J. Elder

Index . . . . . 1471

About the Editors

Prof. Thomas J. Holt

Dr. Thomas J. Holt is a professor in the School of Criminal Justice at Michigan State University, and his research focuses on cybercrime, cyberterrorism, and police responses to these phenomena. His work has been published in various peer-reviewed journals including British Journal of Criminology, Crime & Delinquency, Deviant Behavior, and Terrorism and Political Violence. He has co-authored multiple books including Cybercrime and Digital Forensics: An Introduction and Cybercrime in Progress: Theory and Prevention of Technology-Enabled Offenses. Dr. Holt is also a fellow at the Netherlands Institute for the Study of Crime and Law Enforcement, a founding member of the European Society of Criminology’s Working Group on Cybercrime, and the director of the International Interdisciplinary Research Consortium on Cybercrime, a global association of scholars in the social and technical sciences whose research considers cybercrime and cybersecurity.

Prof. Adam M. Bossler

Dr. Adam M. Bossler is Professor and Chair of Criminal Justice and Criminology at Georgia Southern University. He earned his doctorate in criminology and criminal justice from the University of Missouri–St. Louis. Dr. Bossler is an active member of the International Interdisciplinary Research Consortium on Cybercrime (IIRCC) as well as the European Society of Criminology’s Working Group on Cybercrime. Bossler teaches courses in policing, cybercrime, and criminal behavior. His research primarily focuses on examining the application of traditional criminological theories to various forms of cybercrime offending and


victimization and the law enforcement response to cybercrime. His research has been funded by the National Science Foundation, Bureau of Justice Assistance, and the United Kingdom Home Office. He is a co-author of three books: Cybercrime and Digital Forensics: An Introduction, 2nd edition (Routledge); Cybercrime in Progress: Theory and Prevention of Technology-Enabled Offenses (Routledge) (winner of the 2017 Academy of Criminal Justice Sciences’ International Section Outstanding Book Award); and Policing Cybercrime and Cyberterror (Carolina Academic Press). Some of his recent peer-reviewed work can be found in Criminology and Public Policy, International Journal of Offender Therapy and Comparative Criminology, Journal of Contemporary Criminal Justice, and Deviant Behavior.

Contributors

Maggie Abate WellCare Health Plans Inc., Tampa, FL, USA
Anurag Agarwal Lutgert College of Business, Florida Gulf Coast University, Fort Myers, FL, USA
Alison Attrill-Smith Cyberpsychology Research Group, Department of Psychology, University of Wolverhampton, Wolverhampton, UK
Ramakrishna Ayyagari University of Massachusetts – Boston, Boston, MA, USA
Victor Benjamin Department of Management Information Systems, W.P. Carey School of Business, Arizona State University, Phoenix, AZ, USA
Patrick Bishop Cyber Threats Research Centre, Hilary Rodham Clinton School of Law, Swansea University, Swansea, UK
Adam M. Bossler Department of Criminal Justice and Criminology, Georgia Southern University, Statesboro, GA, USA
Russell Brewer University of Adelaide, Adelaide, SA, Australia
Jan-Willem Bullée Linköping University, Linköping, Sweden
George W. Burruss Department of Criminology, University of South Florida, Tampa, FL, USA
Lennon Y. C. Chang School of Social Sciences, Monash University, Clayton, VIC, Australia
Kyung-Shick Choi Boston University, Boston, MA, USA
Shelly Clevenger Illinois State University, Normal, IL, USA
Maura Conway School of Law and Government, Dublin City University, Dublin, Ireland
Matthew Costello Department of Sociology, Anthropology and Criminal Justice, Clemson University, Clemson, SC, USA
Cassandra Cross Faculty of Law, School of Justice, Queensland University of Technology, Brisbane, QLD, Australia


José de Arimatéia da Cruz College of Behavior and Social Science, Georgia Southern University, Savannah, GA, USA; U.S. Army War College, Strategic Studies Institute, Carlisle, PA, USA
James A. Densley Metropolitan State University, St. Paul, MN, USA
Michelle Drouin Purdue University Fort Wayne, Fort Wayne, IN, USA
Robert J. Elder Volgenau School of Engineering, George Mason University, Fairfax, VA, USA
Erica R. Fissel University of Central Florida, Orlando, FL, USA
Asher Flynn Monash University, Clayton, Australia
Craig J. Forsyth University of Louisiana at Lafayette, Lafayette, LA, USA
Sarah Fox University of Adelaide, Adelaide, SA, Australia
Steven Furnell Centre for Security, Communications and Network Research, University of Plymouth, Plymouth, UK
Paul Gill Department of Security and Crime Science, University College London, London, UK
Mia Gilliam Indiana State University, Bloomington, IN, USA
Norah Godbee Savannah, GA, USA
Katelyn A. Golladay University of Wyoming, Laramie, WY, USA
Ramakrishna Govindu College of Business, University of South Florida Sarasota-Manatee, Sarasota, FL, USA
Roderick Graham Old Dominion University, Norfolk, VA, USA
Jonathan A. Grubb Department of Criminal Justice and Criminology, College of Behavioral and Social Sciences, Georgia Southern University, Statesboro, GA, USA
James Hawdon Department of Sociology, Center for Peace Studies and Violence Prevention, Virginia Tech, Blacksburg, VA, USA
Carter Hay Florida State University, Tallahassee, FL, USA
Tina Hebert Deshotels Jacksonville State University, Jacksonville, AL, USA
Nicola Henry RMIT University, Melbourne, Australia
Billy Henson Department of Criminology and Criminal Justice, Mount St. Joseph University, Cincinnati, OH, USA
George E. Higgins Department of Criminal Justice, University of Louisville, Louisville, KY, USA


Karen Holt School of Criminal Justice, Michigan State University, East Lansing, MI, USA
Thomas J. Holt College of Social Science, School of Criminal Justice, Michigan State University, East Lansing, MI, USA
C. Jordan Howell Department of Criminology, University of South Florida, Tampa, FL, USA
Thomas S. Hyslip Norwich University, Northfield, VT, USA
Kevin Jennings Georgia Southern University, Statesboro, GA, USA
Marianne Junger University of Twente, Enschede, The Netherlands
Jay P. Kennedy School of Criminal Justice, Center for Anti-Counterfeiting and Product Protection, Michigan State University, East Lansing, MI, USA
Sesha Kethineni Department of Justice Studies, Prairie View A&M University, Prairie View, TX, USA
Elif Kiesow Cortez The Hague University of Applied Sciences, The Hague, The Netherlands
Alex Kigerl Washington State University, Spokane, WA, USA
E. R. (Edward) Kleemans Vrije Universiteit Amsterdam, Amsterdam, Netherlands

E. W. (Edwin) Kruisbergen Research and Documentation Centre, Dutch Ministry of Justice and Security, The Hague, Netherlands
Timothy R. Lauger Department of Criminology and Criminal Justice, Niagara University, Lewiston, NY, USA
Anita Lavorgna University of Southampton, Southampton, UK
Byung Lee Central Connecticut State University, New Britain, CT, USA
Claire S. Lee School of Criminology and Justice Studies, University of Massachusetts, Lowell, MA, USA
Jin R. Lee School of Criminal Justice, Michigan State University, East Lansing, MI, USA
E. R. (Rutger) Leukfeldt Netherlands Institute for the Study of Crime and Law Enforcement (NSCR), The Hague University of Applied Sciences, Amsterdam, Netherlands
Weifeng Li Department of Management Information Systems, Terry College of Business, University of Georgia, Athens, GA, USA
Roberta Liggett School of Criminal Justice, Michigan State University, East Lansing, MI, USA


Eric R. Louderback Division on Addiction, Cambridge Health Alliance, a Harvard Medical School Teaching Hospital, Medford, MA, USA
Calen MacDonald College of Arts and Sciences, Emory University, Atlanta, GA, USA
David Maimon Georgia State University, Atlanta, GA, USA
Catherine D. Marcum Department of Government and Justice Studies, Appalachian State University, Boone, NC, USA
Alison J. Marganski Le Moyne College, Syracuse, NY, USA
Adrienne L. McCarthy Department of Sociology, Anthropology and Social Work, Kansas State University, Manhattan, KS, USA
Cynthia McDougall Department of Psychology, University of York, York, UK
Caitlan Miller University of Adelaide, Adelaide, SA, Australia
Christine Milrod Los Angeles, CA, USA
Fernando Miró-Llinares CRIMINA Research Center for the Study and Prevention of Crime, Miguel Hernandez University, Elche, Spain
Asier Moneva CRIMINA Research Center for the Study and Prevention of Crime, Miguel Hernandez University, Elche, Spain
Martin Monto University of Portland, Portland, OR, USA
Torill Elvira Mortensen IT University of Copenhagen, Copenhagen, Denmark
Richard K. Moule Jr. University of South Florida, Tampa, FL, USA
Jordana N. Navarro Department of Criminal Justice, The Citadel, Charleston, SC, USA
Philip N. Ndubueze Federal University, Dutse, Nigeria
Fawn T. Ngo Department of Social Sciences, College of Liberal Arts and Social Sciences, University of South Florida Sarasota-Manatee, Sarasota, FL, USA
Johnny Nhan Texas Christian University, Fort Worth, TX, USA
Jason Nicholson Department of Criminology, University of West Georgia, Carrollton, GA, USA
Neil Noakes Fort Worth Police Department, Texas Christian University, Fort Worth, TX, USA
Brian Nussbaum University at Albany, State University of New York, Albany, NY, USA
Kimberly O’Connor Purdue University Fort Wayne, Fort Wayne, IN, USA
Seung Yeop Paek State University of New York at Oswego, Oswego, NY, USA


Allison Ann Payne Department of Sociology and Criminology, Villanova University, Radnor Township, PA, USA
Brian K. Payne Old Dominion University, Norfolk, VA, USA
Dominic A. S. Pearson Department of Psychology, University of Portsmouth, Portsmouth, UK
Chad Posick Criminal Justice and Criminology, Georgia Southern University, Statesboro, GA, USA
Katherine Ray Florida State University, Tallahassee, FL, USA
Bradford W. Reyns Weber State University, Ogden, UT, USA
Ariel L. Roddy School of Criminal Justice, Michigan State University, East Lansing, MI, USA
Marcus Rogers Department of Computer and Information Technology, Purdue University, West Lafayette, IN, USA
R. A. (Robert) Roks Erasmus University Rotterdam, Rotterdam, Netherlands
Marco Romagna Centre of Expertise Cyber Security, The Hague University of Applied Sciences, The Hague, The Netherlands
Sagar Samtani Department of Information Systems and Decision Sciences, Muma College of Business, University of South Florida, Tampa, FL, USA
Bernadette Schell Faculty of Management, Laurentian University, Sudbury, ON, Canada
Ryan Scrivens School of Criminal Justice, Michigan State University, East Lansing, MI, USA
Emmanuel Sebastian Udoh University at Albany, State University of New York, Albany, NY, USA
Kathryn C. Seigfried-Spellar Purdue University, West Lafayette, IN, USA
Daniel C. Semenza Department of Sociology, Anthropology, and Criminal Justice, Rutgers University, Camden, NJ, USA
Tanja Sihvonen University of Vaasa, Vaasa, Finland
Joshua Smallridge Fairmont State University, Fairmont, WV, USA
Virginia Soldino University Research Institute of Criminology and Criminal Science, University of Valencia, Valencia, Spain
Kevin F. Steinmetz Department of Sociology, Anthropology and Social Work, Kansas State University, Manhattan, KS, USA
Philip Wagner University of Wisconsin-Parkside, Kenosha, WI, USA


Mikaela A. Wallin School of Criminal Justice, Michigan State University, East Lansing, MI, USA
Caroline Wesson Cyberpsychology Research Group, Department of Psychology, University of Wolverhampton, Wolverhampton, UK
Bryce Garreth Westlake San Jose State University, San Jose, CA, USA
Marleen Weulen Kranenbarg Vrije Universiteit (VU) Amsterdam, Amsterdam, The Netherlands
Denise Wilson Department of Sociology and Criminology, Villanova University, Radnor Township, PA, USA
Kirsten Witherup Department of Behavioral Sciences, York College of Pennsylvania, York, PA, USA

Part I Foundations

1   Defining Cybercrime

Brian K. Payne

Contents
Introduction . . . . . 4
Why Cybercrime Definitions Matter . . . . . 4
Challenges Defining Cybercrime . . . . . 7
  History of Cybercrime Concept . . . . . 7
  The Atypical Nature of Cybercrime . . . . . 11
  The Breadth of “Cyberspace” . . . . . 11
  The Global Nature of Cybercrime . . . . . 13
  The Scarcity of Criminological Research on Cybercrime . . . . . 14
  The Multidisciplinary Nature of Cybercrime . . . . . 14
Conceptualizing Cybercrime . . . . . 16
  Cybercrime as Traditional Crime . . . . . 16
  Cybercrime as Deviant Behavior . . . . . 17
  Cybercrime as a Legal Issue . . . . . 18
  Cybercrime as a Political Issue . . . . . 19
  Cybercrime as a White-Collar Crime . . . . . 19
  Cybercrime as a Social Construction . . . . . 20
  Cybercrime as a Technological Problem . . . . . 21
Conclusion . . . . . 22
Cross-References . . . . . 22
References . . . . . 23

B. K. Payne (*)
Old Dominion University, Norfolk, VA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_1

Abstract

As the study of cybercrime has evolved, researchers have explored how to best define the term. To date, no universal definition of cybercrime has been developed. In this chapter, attention is given to why cybercrime definitions matter, challenges that arise in developing cybercrime definitions, and frameworks used


to conceptualize the term. Specifically, making the case that cybercrime definitions impact estimates about the extent of cybercrime, policies used to respond to the problem, strategies used to prevent the behavior, and theories used to explain the behavior, it is demonstrated that cybercrime can be conceptualized as traditional criminal activity, deviant behavior, a legal issue, a political issue, a white-collar crime, the product of a social construction, or a technological problem. This framework is guided by recognition that cybercrime is global in nature, committed in the vast area called cyberspace, different from many other crimes, infrequently studied in criminal justice/criminology, and best understood through a multidisciplinary lens.

Keywords

Cybercrime · Computer crime · Cybercriminal · Digital crime · Internet crime

Introduction

It has been said that beauty is in the eye of the beholder. This simple statement refers to the fact that different individuals will hold different views about the same object or phenomenon. Much in the same way, one might say that behaviors that constitute cybercrime are in the eye of the beholder. Two individuals might look at the same behavior and come to different conclusions about whether that behavior should be categorized as cybercrime. In effect, definitions of cybercrime are in the eye of the beholder.

Along this line, philosopher Stephen Toulmin (1961, p. 18) said this about definitions: “Definitions are like belts. The shorter they are, the more elastic they need to be.” This statement concedes that it is incredibly difficult to develop short and consistent definitions. It also tacitly recognizes that multiple definitions exist for whatever is being defined. Just as there are multiple types of belts (The Man’s Guide to Ultimate Belts identifies no fewer than 15 different types!), there are also multiple ways that cybercrime has been conceptualized. While cybercrime definitions are varied and tied to specific factors, it is insufficient to suggest that cybercrime is individually determined and leave it at that. Instead, focusing on why cybercrime definitions matter, challenges defining the concept, and the various ways cybercrime is conceptualized will provide a foundation from which cybercrime can be defined.

Why Cybercrime Definitions Matter

Returning to the belt example given above, like belts on a baggy pair of pants, definitions are needed. Regarding cybercrime specifically, definitions matter for at least seven overlapping reasons: (1) the way individuals define cybercrime will determine estimates about the extent of cybercrime; (2) definitions of cybercrime will impact consequences (or individual responses) to specific behaviors;

1 Defining Cybercrime

(3) cybercrime definitions have implications for the way criminologists try to explain cyber offending; (4) definitions will guide intervention strategies used to respond to certain types of behavior; (5) definitions will guide prevention strategies used to prevent certain behaviors; (6) definitions will determine the types of research methodologies used to study the behavior; and (7) definitions will determine how academics teach about the behavior.

One fundamental point about definitions of cybercrime is that these definitions will determine estimates about the extent of cybercrime. The FBI’s Internet Crime Complaint Center (IC3), for instance, defines cybercrime as those Internet crimes perpetrated against individuals, regardless of whether the individuals were victimized (e.g., receiving a phishing email would apparently be counted in the statistics). As a result, their estimates about the extent of cybercrime include Internet-related experiences of individuals reported to the FBI. In contrast, the Ponemon Institute defines cybercrime as cyber-related offenses committed against businesses (Ponemon 2017). Consequently, estimates about the extent of cybercrime from Ponemon are different from those offered by IC3. Driving home the point that definitions influence estimates, one author team writes that “the lack of a clear sense of what constitutes cybercrime presents a barrier to tracking inclusive cybercrime data” (Finklea and Theohary 2012, np).

The way that cybercrime is defined may also influence the consequences (or individual responses) to specific behaviors. Generally speaking, once behaviors are defined as illegal, individuals respond differently to those behaviors. For example, some individuals might take pleasure in engaging in behavior simply because it is against the rules. Researchers have written about the “joy of violence” in reference to the pleasure individuals get out of engaging in violence (Kerbs and Jolley 2007).
Is it possible that some individuals get pleasure out of cybercrime primarily because certain behaviors have been defined as illegal? Consider this analogy: some youth get pleasure out of trespassing on physical property. Others might get a similar pleasure out of trespassing in “cyber property.” While there might be “joy” from certain cybercrimes, others might break “cyber rules” simply to make their tasks easier. Instances where workers circumvent security strategies to make it easier for them to do their jobs come to mind. Alternatively, the conceptualization of behaviors as rule violations might cause individuals to react more negatively to others when those individuals break socially proscribed rules. In fact, the very premise of labeling theory is that criminal labels assigned to individuals stem from societal decisions to label certain behaviors as criminal.

Definitions of cybercrime also have ramifications for the application of criminological theories. Following a Durkheimian perspective, for instance, some might define cybercrime as normal behavior. From this perspective, then, cybercrime could be seen as fulfilling four functions: (1) cybercrime might warn us about specific problems that need to be addressed (a growth in Internet frauds resulted in new ways to make online shopping safer); (2) cybercrime might actually lead to social change (hacktivism might be an example); (3) cybercrime might promote community integration (some would say the Democratic party became more solidified after


allegations about cyber misdeeds by Russians in the 2016 election); and (4) definitions of cybercrime provide examples of acceptable and unacceptable behavior.

Lest it be assumed that cybercrime definitions only relate to structural theories, consider that other criminological theories also stress the importance of definitions. Sutherland’s differential association theory posits that individuals will be more likely to commit crime if they learn an excess of definitions favorable to crime (e.g., “The specific direction of motives and drives is learned from definitions of the legal codes as favorable or unfavorable. A person becomes delinquent because of an excess of definitions favorable to violation of law over definitions unfavorable to violation of the law”). Conflict theory suggests that those with power create definitions of crime in order to control those without power. Recent efforts to tax online sales would be an example conflict theorists might use to demonstrate this use of power in the cyber world. Neutralization theory, which has been widely applied to cybercrime, presupposes that individuals’ definitions of appropriate and inappropriate behavior change through a rationalization process. And, as noted above, labeling theory is – at its very core – based on the process of how criminological definitions are created.

While sociological and criminological conceptualizations of cybercrime result in the use of those theories to explain cyber offending, those using more technological definitions of cybercrime might point to different theories to address the behavior. Conceptualizing cybercrime as being possible because of digital technologies, one author applies actor-network theory to explain the behavior. Rarely seen in the criminological literature, this theory draws attention to how “socio-technical ‘humanchine’ networks (comprised of humans and technologies as actors) intersect through translation and create agency” (Luppicini 2014, p. 36).
How cybercrime is conceptualized will also impact the interventions and policies used to respond to these offenses (Moitra 2004). Those defining cybercrime from a more sociological or psychological orientation, for instance, will prescribe policies and interventions designed to change user behaviors. Alternatively, those defining cybercrime more from a technological orientation might turn to engineering or another technological field for solutions. Consider policies designed to respond to cyber bullying. Defining the problem as a sociological problem would result in policies and interventions aiming to change the values, beliefs, and behaviors of those who are bullying.

In a similar way, definitions of cybercrime will dictate how best to prevent these offenses in the first place. The same analogy can be used. Those following more of a sociological framework might focus on educating potential offenders and victims as a strategy to prevent cyber bullying. Those following a technological framework might focus on using big data techniques to develop algorithms and apps that would allow parents and guardians to more easily monitor cyber behaviors of juveniles, thereby providing a tool that could deter cyber bullying.

Cybercrime definitions also drive (or at least are related to) research strategies. Here again, contrasting social science definitions with technological definitions is useful. Those defining cybercrime from a more social science orientation, for instance, might be more prone to use social science research strategies to study cybercrime. Surveys, interviews, field studies, analysis of web forums, and other strategies where data is gathered directly from research subjects are those methods


that would be connected to cybercrime studies examining the topic from a social science perspective. Those defining cybercrime from a more technological orientation would probably use other strategies. Modeling and simulation techniques, honey pots, and laboratory experiments would be more in line with studies defining cybercrime as a technological issue. Strengths and weaknesses arise in each type of method. The social science strategies, for example, have been criticized for being intrusive. The technological strategies (such as honey pots) have been criticized for ignoring human behavior. Other technical studies have been critiqued on the grounds that the findings “[are] limited to details on the methods of attack and . . . [lack] insight about who the attackers are and how they differ from ‘traditional’ criminals” (Diamond and Bachmann 2015, p. 25).

Definitions of cybercrime will also determine how academics/professors teach about the behavior. Table 1 shows seven course descriptions for cyber offending courses taught at seven different universities. A few patterns are striking. First, the title of the course seems to drive the topics considered in the course. Second, some courses appear to define cybercrime from more of a technological framework, others appear to define it more from a social science orientation, and others appear to blend multiple orientations. Finally, and much less important to be sure, is the fact that criminologists cannot even agree on whether cybercrime is one word, two words (cyber crime), hyphenated (cyber-crime), or plural (cybercrimes)! On the surface, this is not a big deal. However, the fact that there is confusion about the appropriate semantics for the behavior is indicative of the conceptual confusion surrounding the topic of cybercrime.

Challenges Defining Cybercrime

Six dynamics make it harder to define cybercrime than other types of crime. These include (1) the history/evolution of the cybercrime concept, (2) the atypical nature of cybercrime, (3) the breadth of cyberspace, (4) the global nature of cybercrime, (5) the absence of empirical research on the topic, and (6) the multidisciplinary nature of cybercrime.

History of Cybercrime Concept

A simple review of cybercrime in criminal justice scholarship confirms that the term has not been around for long. A quick review of Criminal Justice Abstracts to see how often the term has been used in articles and publications indexed in the database shows that the term has only recently been widely used. Prior to the introduction of the term, the phrase “computer crime” was used. Parker (1976) has been credited with developing one of the earliest computer crime taxonomies. Parker’s taxonomy defines computer crimes as those in which a computer is (1) the object of a crime, (2) the environment where a crime occurs, (3) the instrument for committing a crime, and (4) the symbol of a crime (e.g., when individuals deceptively say they are using a

computer program or computer resources to inform decisions that lead to criminal behavior).

Table 1 Cybercrime course titles and course descriptions

American Intercontinental University, Cybercrimes: This hands-on introductory course provides students with the knowledge and skills necessary to begin a computer-based investigation. The course begins with an overview of computer forensics and then proceeds to introduce forensics tools, concepts, and documentation of evidence/procedures. The course uses common and accepted incident response policies and procedures for previewing and securing digital evidence. Topics include the basics of computer evidence and basic forensic methodology.

Austin Peay State University, Fundamentals of Cybercrime: This course offers an intense examination of network security defense techniques and countermeasures. Defense fundamentals are explained in great detail. Topics include network defense techniques, cybercrime and cyberspace law, cyberterrorism, intrusion detection and incident response, disaster recovery, and computer forensics.

Florida Atlantic University, Computer Crime: This course provides an overview of computer crime from a criminal justice perspective. It also examines computer crime prevention, computer security, legal and social issues, and modern investigative methodologies.

Florida International University, Cyber Crime: Examines the types, extent, and response to cybercrime, including the constitutional protections and procedural law that govern its detection and prosecution.

High Point University, CRJ 2100 Cyber-Crime: This course examines criminal exploitation in the digital world. The course is divided into two parts. The first part provides students with an understanding of the seemingly mysterious world of crimes involving computers. We will examine the basic components of a computer, a network, and other digital devices. This will be followed by an examination of categories of cyber-crime including hacking, identity theft, cyber-stalking, digital piracy, and child pornography. The second part of the course will address the legality of cyber-crime and the interaction of “hackers” and cyber-criminals with the criminal justice system. Famous cases will be examined to showcase the difficulty in combating cyber-crime.

Troy University, Cyber Crime: This course will introduce the topics of computer crime and computer forensics. Students will be required to learn different aspects of computer crime and ways to uncover, protect, and exploit digital evidence. Students will be exposed to different types of tools, both software and hardware, and an exploration of the legal issues affected by on-line and computer-related criminal conduct. The course will examine the evolution of criminal law relative to the development of new technology.

University of Maryland, Cyber Crime: Cybercrime research has grown in visibility and importance during the last two decades. Nevertheless, despite the growing public interest in cybercrime and its consequences for businesses and individuals, only scant attention has been given in the criminological discipline to investigation and understanding of this new type of crime. The purpose of this course is to introduce students to the technical, social, and legal aspects of cybercrime as well as expose students to theories and tools that enable scientific exploration of this phenomenon. In the first few weeks of the semester, we will learn about the computer and the Internet and discuss several definitions and typologies of cybercrime. Then we will discuss the hacker, the victim, and the IT manager; review various theories of crime causation; and assess the relevance of these theories in the context of cyberspace. We will then describe several technical tools that allow the collection of data from the Internet. We will conclude with a discussion on the legal issues affected and created by online crime.

Source: Retrieved from each university’s course catalog

About a decade later, Hollinger and Lanza-Kaduce (1988) authored one of the first pieces of criminological scholarship on the topic, focusing on how computer crime came to be criminalized. Subsequently, the Budapest Convention on Cybercrime, which was signed in 2001 and is regarded as one of the most important international treaties promoting a collaborative response to cybercrime, characterized these offenses in a way that loosely paralleled Parker’s taxonomy.

After the advent of the term computer crime, other terms began to be introduced by researchers interested in these behaviors. These other terms and their definitions include the following:

• Digital crime is “any criminal activity which involves the use of computers and networks or any other digital devices” (Sabell et al. 2012, np).
• Electronic crime is “an illegal act that is carried out using a computer or electronic media” (Kardya and Mitrou 2012, np).
• Internet crime is defined by the Internet Crime Complaint Center as “any illegal activity involving one or more components of the Internet, such as websites, chat rooms, and/or email. Internet crime involves the use of the Internet to communicate false or fraudulent representations to consumers. These crimes may include, but are not limited to, advance-fee schemes, non-delivery of goods or services, computer hacking, or employment/business opportunity schemes” (Federal Bureau of Investigation n.d., np).
• Network crime has been described as “not only . . . a form of data interception, but also involves intruding into networks to gain unauthorized access to data, or even change data, destroy data, make unauthorized use of the network resource, etc.” (Wang 2012, p. 1).
• Technocrime includes a wide range of crimes committed in the technological system (Friedrichs 2009).
• Virtual crime refers to crimes committed in virtual reality settings or virtual games (Lastowka and Hunter 2004).

Table 2 provides additional details on these terms and describes the limitations of each.

Table 2 Concepts related to cybercrime

Computer crime
Why it was introduced: Researchers, academics, and policy makers understood that new behaviors related to computer use resulted in the commission of criminal acts.
Limitations: As technology has evolved, and the number of devices connected to the Internet has increased, crimes once limited to computers can be committed without a computer through the use of other technological devices.
Sources: Hollinger and Lanza-Kaduce (1988); Richardson (2008)

Digital crime
Why it was introduced: Researchers and practitioners, namely, forensics experts, recognized that crimes could be committed using digital technologies, regardless of whether a computer is used.
Limitations: The phrase potentially limits the types of offenses that might be included to those that require a level of computer sophistication that is not actually required for most computer offenses.
Sources: Gogolin (2010); Kanellis (2006); Taylor et al. (2014)

Electronic crime
Why it was introduced: Researchers drew on the language used to capture communication strategies (e.g., email) that paralleled computer use.
Limitations: As members of society have changed, the use of “electronic” has fallen out of favor, with many young people now avoiding electronic mail.
Sources: Etter (2001); Grabosky (2006)

Internet crime
Why it was introduced: Other phrases implied a level of expertise to commit the offenses, when many offenses were simply perpetrated “through” or “on” the Internet.
Limitations: The phrase potentially excludes those crimes that occur independent of the Internet but nonetheless are driven by cyber technology.
Sources: Jewkes and Yar (2010); Taylor and Quayle (2003); Wall (2013)

Network crime
Why it was introduced: Authors (particularly engineers) wanted to draw attention to the fact that crimes occur through the network connecting various technological devices.
Limitations: Many cybercrimes occur that have very little to do with the computer network, but more to do with the behavior of computer users.
Sources: Adeyemi et al. (2013); Wang (2012)

Technocrime
Why it was introduced: Authors were drawing attention to the connection between technology and crime, particularly within the context of the workplace.
Limitations: By drawing attention to technology, the term implies that the crime occurs because of technology when most cyber offenses are driven by human factors.
Sources: Friedrichs (2009); Leman-Langlois (2008); Steinmetz and Nobles (2017)

Virtual crime
Why it was introduced: Court cases have reviewed whether crime can be committed in the virtual world, including in game settings.
Limitations: While a handful of offenses might be committed in the virtual world, these remain the exception.
Sources: Brenner (2001b); Lastowka and Hunter (2004)

None of these other terms caught on the same way that cybercrime did. Still, cybercrime is a relatively new term. If developing a universal definition of a type of


crime were a race, one cannot help but conclude that other crime types have had a fairly generous “head start.” Crimes such as theft, burglary, arson, rape, murder, and other traditional offenses have existed for centuries. Even more recent concepts developed by criminologists have been afforded “head starts” that extend across decades. White-collar crime was introduced as a concept roughly 80 years ago (Sutherland 1940). Organized crime was introduced by the Chicago Crime Commission two decades before white-collar crime was first conceptualized. While younger than white-collar and organized crime, environmental crime is approaching its golden anniversary, having been introduced about five decades ago. Cybercrime, on the other hand, has not yet even entered its teenage years!

In terms of its infancy, a few key points are noteworthy. First, cybercrime has not yet been empirically assessed to the same degree as other crime types. Second, the main feature driving cyber offenses (e.g., cyber technology) changes much more rapidly than do those structures shaping white-collar misdeeds, environmental offenses, and organized crime. Third, it is important to stress that there is no universally accepted definition of these other offense types! Based on this alone, it is virtually impossible and perhaps even naïve to assume that a universal definition of cybercrime can be developed.

The Atypical Nature of Cybercrime

Debating how to define pornography and obscenity, Justice Potter Stewart once famously quipped, “I know it when I see it.” The same statement might apply to virtually all forms of criminal behavior except for cybercrime. In fact, it can be argued that, unlike most other crimes, cybercrime is not real to the senses. You can’t see it, hear it, smell it, feel it, or touch it. More specifically, cybercrime is, for the most part, invisible. When a robbery occurs, someone sees the offense. When an assault occurs, someone feels the offense. When an arson occurs, someone likely smells the smoke from the fire. When a public fight or argument occurs, someone hears the crime. To be sure, victims of cybercrime feel the experience of victimization; they just don’t necessarily feel it at that moment when the crime was committed. Suffice it to say that the invisibility of cybercrime makes it harder to define. No one says that they know cybercrime when they see it. As one author points out, in cybercrimes “the object of theft [are] electrons, bits, and bytes” (Goodman 2010, p. 316).

The Breadth of “Cyberspace”

Another distinguishing factor that makes it harder to define cybercrime is the creation of a new form of space – cyberspace. Never mind that cyberspace is just as hard to define as cybercrime. Cyberspace is essentially the digital world or the paths used by electrons and bits to transfer information from our keyboards to some other location. We have become so reliant on cyber technology


(and cyberspace) that nearly all of our behaviors are somehow connected to cyberspace. Individuals are able to interact with individuals they never would have met in the past. Consequently, this has dramatically expanded the number of possible offenders and victims. With such a large possible crime scene, many types of cybercrime have been committed (see Fig. 1). Though these crimes are distinct forms of crime, they share one common feature: they “occurred” in cyberspace.

Fig. 1 Varieties of cybercrime: cyber fraud, cyber theft, cyber trespassing, cyber pornography, cyber piracy, cyber bullying, cyber espionage, cyber terrorism, cyber vandalism, cyber stalking, and cyber white-collar crime

In addition to increasing the number of potential offenders, victims, and types of crimes, the nature of offending has also changed in cyberspace. Consider the following example from one author team:

Before the advent of computer networks, the ability to steal information or damage property was to some extent determined by physical limitations. A burglar could only break so many windows and burglarize so many homes in a week. During each intrusion, he could take away only what he could carry. While this conduct is by no means trivial, the amount of property he could steal or the amount of damage he could cause was restricted by physical limitations. In [cyberspace], these limitations no longer apply. (Charney and Alexander 1996, p. 940)

Undoubtedly, the ability to communicate through cyberspace has changed all of our behaviors, whether specifically related to computers or not. As a result, even if


offenders are not committing a cybercrime, but are committing a traditional crime, they are likely using some form of digital technology and communicating (either intentionally or unintentionally) in cyberspace. Cell phone records, charges from tolls on highways, traffic cameras at red lights, and locations of automobiles with GPS systems are tracked in cyberspace. As a result, it has been suggested that nearly all crimes have some type of connection to the digital world, with “digital evidence . . . present in crimes of almost every kind” (Kävrestad 2017, p. 9). While this digital trail is helpful in crime investigations, the nature of cyberspace presents problems for defining cybercrime. As David Wall tells us, “Particularly confusing is the tendency to regard almost any offense that involves a computer as a cybercrime” (Wall 2004, p. 20).

The Global Nature of Cybercrime

The global nature of cybercrime also makes it harder to define. Consider that few, if any, other types of crime begin in one country and end in another. Advances in cloud computing have expanded the ability of cyber offenders to cross national borders (Hooper et al. 2013). At the same time, cultural differences have been linked to different offense patterns as well as different enforcement patterns (Hu et al. 2013). It is certainly plausible that these differences can be attributed, at least in part, to differences in the way that various countries define cybercrime.

Cultural differences determine how crimes are defined. Because they occur within a specific jurisdiction, offenses such as drug crimes, prostitution, and domestic violence are defined within cultural boundaries. These boundaries allow criminologists to focus definitions within those specific cultures. For cybercrime, there is no such luxury. As a result of these differing definitions, one of the obvious challenges that arises in international responses is that different definitions across countries will determine whether law enforcement will be called upon. Even beyond the conceptualization issue, using the police in responding to cybercrime within an international framework is difficult, to say the least (Valiquet 2011). It is plausible to suggest that the response can be improved through international cooperation that promotes consistent definitions of cybercrime and prosecutions of offenders in cooperating countries (Brenner 2002). Focusing on opportunities for collaboration, a review of international responses to cybercrime by the Privacy and Cyber Crime Institute at Canada’s Ryerson University identified the following patterns:

• Countries distinguish in their policy and strategy documents between cybercrimes, which are the domain of law enforcement agencies, and cyber-attacks which are increasingly the domain of the military.
• European countries and the US allow for warrant-less access to electronic information in order to prevent both cyber-crime and cyber-attacks. Other countries do not acknowledge this possibility publicly.
• Agencies to combat cyber-crime or cyber-attacks are typically created as an organizational part of the existing law enforcement or military structure. (Levin and Ilinka 2013, p. 2)


Experts are not unanimous in their optimism about international cooperation. According to one author, “It may be that developed democracies will be able to formulate arrangements among themselves, but the gap between them and authoritarian regimes in terms of defining the threat seems too great” (Tabansky 2012, p. 129). As part of this collaboration, at least 50 countries have published documents or policies outlining their official strategy for responding to cyber offending (Von Solms and van Niekerk 2013). What this implies is that at least 50 countries have developed their own definitions of cybercrime.

The Scarcity of Criminological Research on Cybercrime

Compared to other crime subjects, fewer researchers study and write about cybercrime. Part of the absence of research on the topic stems from an overall resistance to studying new things in criminology and criminal justice. A preference seemingly exists for testing conventional theories, studying juvenile delinquency, and conducting research on topics such as drug offending and violent crime. Clearly, these are worthy topics, but the resistance to new topics, highlighted elsewhere (see Wright et al. 2008), is potentially at the expense of the criminal justice/criminology field.

Of course, significant advances in researching cybercrime have been made by a growing handful of criminologists (Holt 2010; Holt and Copes 2010; Holt and Kilger 2012; Holt et al. 2010, 2017; Marcum et al. 2011, 2012). Many of these studies, however, fit within the rubric of traditional criminology studies by testing conventional theories or limiting the focus to juvenile delinquency that happens to involve computers. One result is a dearth of criminological research on cybercrime. A second result is that because there is a limited amount of criminological research on the topic, there are fewer criminological studies examining how to appropriately define the behavior.

The Multidisciplinary Nature of Cybercrime

One of the most exciting aspects of cybercrime scholarship is that it requires a multidisciplinary approach to fully understand and appropriately define the behavior. One of the most challenging aspects of cybercrime scholarship is that it requires a multidisciplinary approach to fully understand and appropriately define the behavior. Multidisciplinary efforts in studying and defining cybercrime (and all other scientific endeavors, for that matter) are challenging for several reasons, including academic turf battles, academic socialization, academic isolation, academic language, and faculty incentives. Academic turf battles make it difficult to get faculty from across a college or university to work together on multidisciplinary efforts. These turf battles are nothing new but may be particularly problematic in efforts to conceptualize cybercrime. In particular, cybercrime potentially draws from many different disciplines
including computer science, computer engineering, psychology, philosophy, criminal justice, sociology, political science, information technology, and law, to name a few. What is particularly important to note is that, in most higher education institutions, these disciplines might come from as many as six different academic colleges with names such as the College of Social Sciences, the College of Engineering, the College of Sciences, the College of Business, the College of Law, and the College of Arts and Letters. It is hard enough to get faculty within a specific college to work together and agree on a specific definition. Once college boundaries are created, the turf battles become even more intense. (Elsewhere, I have likened academic politics to the Game of Thrones, with deans engaging in various battles in pursuit of the academic throne. Faculty in their respective colleges are warriors expected to help in battles that arise. If only these politics made a valuable difference! (see Payne 2016))

Academic socialization also inhibits multidisciplinary efforts to study and define cybercrime. Scholars go through a rather rigorous academic training process in which many learn a great deal about a very specific topic. Rather than encouraging multidisciplinary pursuits in graduate school, our curricula discourage students from doing anything not specifically related to their academic program. Furthermore, graduate programs tend to have prescribed curricula that are driven more by history than by societal needs. Criminal justice is no exception. As noted above, others have written about how difficult it is to get newer topics introduced in graduate programs in criminal justice (Wright et al. 2008). This is significant because if we are not training new graduates about cybercrime, it will be harder to develop a widespread base of scholars committed to studying and defining cybercrime from a multidisciplinary perspective.
Academic isolation also inhibits multidisciplinary pursuits related to cybercrime. Even if academic politics does not keep scholars from working with others, the physical design of most colleges and universities is such that faculty are discouraged from interacting with colleagues from outside disciplines. Research labs are also designed in an isolated manner, with labs often assigned to faculty members rather than programs or departments. What this does, then, is create artificial barriers that make it more difficult for researchers, professors, and other academics to work on multidisciplinary pursuits.

Academic language also discourages multidisciplinary efforts in cybercrime research. Imagine a criminologist, computer scientist, engineer, information scientist, and philosopher coming together to talk about cybercrime. Each of them would have received intense academic training within their discipline. They would each have different definitions of crime, different views about the causes of crime, different perspectives about what the term “cyber” means, and different views about how to use the academic discourse to address, prevent, and respond to cybercrime. In many ways, they are trained to speak different languages. In the end, these different languages would result in differing definitions of cybercrime.

Faculty incentives also discourage faculty members from working with colleagues from outside disciplines. These incentives exist in annual evaluations, promotion and tenure evaluations, and program rankings. In annual evaluations,
faculty members are often evaluated based on how much research they contributed to their academic discipline. This, in and of itself, could serve to dissuade junior faculty from working on multidisciplinary efforts. Faculty members are evaluated based on how many articles they contributed to their discipline’s journals, with journals from other disciplines receiving less internal prestige. In promotion and tenure evaluations, the same process unfolds with a notable exception: external faculty members from the promotion/tenure candidate’s field are asked to evaluate the candidate based on the candidate’s scholarly contributions to the field. So it’s not just the faculty members’ day-to-day colleagues discouraging multidisciplinary work, it’s also their disciplinary colleagues. In a similar way, when academic programs are ranked in US News and World Report, part of the ranking is based on how the program is perceived by disciplinary peers. This, too, discourages multidisciplinary efforts. It’s a wonder that any multidisciplinary work is even done on cybercrime!

Conceptualizing Cybercrime

Thus far, it has been established that cybercrime definitions matter and that it is hard, if not impossible, to develop a universal definition of the concept. Still, the challenges in developing these definitions (e.g., the history of the concept, the atypical nature of cybercrime, its global nature, the dearth of criminological research on the subject, and the multidisciplinary underpinnings of cybercrime) can be used to frame a discussion about how cybercrime can be appropriately defined. Common strategies for conceptualizing cybercrime include the following:

• Cybercrime as traditional crime
• Cybercrime as deviant behavior
• Cybercrime as a legal issue
• Cybercrime as a political issue
• Cybercrime as a white-collar crime
• Cybercrime as a social construction
• Cybercrime as a technological problem

Each of these is discussed below.

Cybercrime as Traditional Crime

Some commentators have considered cybercrime as simply traditional types of crime that are committed with new strategies. Debating whether these are new offenses or not, cyber criminologist Peter Grabosky (2001) asked whether cybercrime is “old wine in new bottles” or “new wine.” Consider theft as an example. In the 1980s, individuals might have stolen music directly from a music store selling cassette tapes. In the 2000s, individuals were able to steal music directly from the Internet.
The offense (theft) is essentially the same. What is different is the strategy used to steal the music. Other examples could be provided:

• Child pornography – Prior to the technological revolution, child pornography was distributed through photos, microfilm, and video. Today, it is distributed on the Internet.
• Prostitution – Prior to the development of the Internet, the vast majority of prostitution was done by streetwalkers. Today, Internet prostitution is widespread.
• Bullying – A few decades ago, children were bullied in person at school. Today, they are often bullied online.
• Stalking – Traditionally, stalking has been an offense in which an offender physically follows a victim around. With the creation of cyber technology, offenders can stalk victims without leaving their homes.
• Destruction of property – Individuals have used various strategies to destroy other people’s property over time. In the past, a disgruntled worker might vandalize their office space. Today, they might introduce viruses on their business’s computer network.
• Fraud – Postal fraud grew with the development of the postal services. Telemarketing fraud grew with the development of telephones. Internet fraud expanded with the creation of the Internet.

The list could go on. The main point is that these offenses are not necessarily all new types of behaviors. They are traditional crimes that can be committed using cyber technologies. One legal scholar concludes, “cybercrime is unique only to the extent that it is often a more efficient means by which to commit certain types of offenses” (O'Neill 2000, p. 239). However, as Brenner (2006) reminds us, “while most of the cybercrime we have seen to date is simply the commission of traditional crimes by new means, this will not be true of all cybercrime” (p. 384). Indeed, another cybercrime expert argues that cybercrime patterns “represent the emergence of a new and distinctive form of crime” (Yar 2005, p. 407).
Hacking, for example, is a new form of crime in which offenders use different techniques to commit an assortment of harms. Similarly, malicious software (i.e., malware) is used by offenders to commit a wide range of offenses. These crimes are not traditional crimes. Indeed, new laws have been created to criminalize these behaviors.

Cybercrime as Deviant Behavior

Some may prefer to define cybercrime as deviant behavior. Doing so results in a broader sociological perspective. The distinction between defining cybercrime as deviance rather than as criminal behavior is that the focus shifts to societal norms rather than legally proscribed rules. For instance, some cyber behaviors might be deviant but not illegal. Indeed, as one author team notes, many cyber offenses “are often not clearly illegal” (Verma et al. 2013, p. 124). As an illustration, in the USA,
“there is no all-encompassing law that governs the security of citizens’ sensitive information” (Collins et al. 2010, p. 796). But misuse of secure information would most certainly be defined as deviant. Also, given that many countries do not yet have laws against certain types of cyber behaviors, a deviance perspective more appropriately captures behaviors that are harmful but not necessarily illegal in those countries. Expanding on this perspective, Grabosky (2004) notes that some nations’ “laws are still not attuned to the digital age” (p. 155). What this means is that “the fine line between a criminal activity and anti-social behavior in the online world is not agreed upon universally” (Goodman 2010, p. 320). This poses a threat, according to some, because “criminals can . . . move to countries where legislation related to cybercrime is outdated or nonexisting” (Boes and Leukfeldt 2017, p. 187). A philosophical question arises: if an offender commits a cyber harm in a country that does not legislate against the behavior, has a crime been committed? Those following a strict legal definition might suggest that a crime has not been committed, while those defining crime from a deviance perspective likely would. While not answering this question directly, a statement from one group of researchers offers some insight: “There is, therefore, a considerable linguistic agency associated with the term cybercrime, particularly the use of the word ‘crime’ in relation to something which might not lie entirely within the boundaries of the criminal law” (Fafinski et al. 2010, p. 9).

Cybercrime as a Legal Issue

The term cybercrime “is not a legal term” (Fafinski et al. 2010, p. 9), but legal scholars tend to prefer definitions based in the law rather than sociological principles. From this perspective, legal scholars tend to define crime, in general, more narrowly than social scientists might. In fact, when computer crime studies were first introduced, a great deal of concern existed among legal scholars about which types of offenses should be categorized as computer offenses (Hollinger and Lanza-Kaduce 1988). Legal prescriptions of crime can be used to inform definitions of cybercrime. As an illustration, a common legal definition of crime suggests that criminal behavior involves “illegal acts committed in violation of the criminal law without defense or justification and sanctioned by the state as a felony or misdemeanor” (Tappan 1960, p. 10). From this legal perspective, then, one might define cybercrime as “illegal acts using technology that break the criminal law, with the acts sanctionable by the government as a felony or misdemeanor.” Seemingly embracing a legal definition of cybercrime, cyber law expert Susan Brenner writes that “cybercrime, like crime, consists of engaging in conduct that has been outlawed by a society because it threatens social order” (Brenner 2012, p. 6). Brenner (2004) summarizes a number of cybercrime laws that were developed specifically to address cyber offenses. These include hacking laws, malware laws, cyber-stalking laws, and unauthorized access offenses, to name a few. Elsewhere, Brenner (2001a) has
noted that a great deal of variation exists in how states legally define cybercrime. This ambiguity exists, she argues, in part because the complex dynamics found in many cyber offenses make them difficult for politicians to understand and subsequently legislate against.

Cybercrime as a Political Issue

Political scientists and policy makers tend to define crime from a political perspective rather than a legal or sociological perspective. Political scientists (and criminologists) have drawn attention to the way that politicians use narratives that paint a rather bleak picture of cybercrime. Focusing on presidential rhetoric about cybersecurity, researchers have identified how US presidents use fear-based language to shape the political narrative about cybersecurity (Hill and Marion 2016a). Consider the following quotes from US presidents offered by Hill and Marion (2016b) in a separate study:

• Sexual predators use the Internet to distribute child pornography and obscenity. They use the Internet to engage in sexually explicit conversations. They use the Internet to lure children out of the safety of their homes into harm’s way. Every day, millions of children log on to the Internet, and every day we learn more about the evil of the world that has crept into it. (George W. Bush 2002, as cited in Hill and Marion 2016b, p. 9)
• Al Qaida and other terrorist groups have spoken of their desire to unleash a cyberattack on our country, attacks that are harder to detect and harder to defend against. Indeed, in today’s world, acts of terror could come not only from a few extremists in suicide vests, but from a few key strokes [sic] on the computer, a weapon of mass disruption. (Barack Obama 2009, as cited in Hill and Marion 2016b, p. 9)

Hill and Marion (2016b) note that the comments made by both Bush and Obama are not based on evidence but are used to frame the topic in a way that justified presidential activities related to cybersecurity. In other words, the presidents defined cybercrime from a political orientation calling for specific responses so they could receive credit for addressing a problem they socially constructed in the first place.

Cybercrime as a White-Collar Crime

Another way to define cybercrime – at least some types of cybercrime – is as a white-collar crime. Generally speaking, white-collar crime is crime committed in the course of an occupation. Not all white-collar crimes are cybercrimes, just as not all cybercrimes are white-collar crimes. However, the term “white-collar cybercrime” has been used to describe those cybercrimes that are committed as part of the offender’s occupation (Payne 2018).


White-collar cybercrimes have been subclassified into legitimate white-collar crimes and contrepreneurial white-collar crimes. The distinction has to do with whether the offender is working in a legitimate business when committing the white-collar offense. In contrepreneurial offenses, the offender creates a fake or criminal business and uses that business to commit a crime (Friedrichs 2009). Here are a few examples provided by Payne (2018):

• Contrepreneurial: “[the offender] participated in a scheme to create and sell malware that could be used to spy on and steal personal information from a Google Android cell phone without the owner’s knowledge. [The offender] crafted a piece of malware ultimately named ‘Dendroid’ which, through the use of a binder, could hide itself within a Google App and then download onto a Google Android phone when the user of that phone downloaded the Google App from a place such as the Google Play Store.” (U.S. Department of Justice 2015, August 25)
• Legitimate: “[The company] maintained computer servers related to the dispensing machines at its facility in Niles. [The offender] worked at the facility as a contractor from November 2014 to February 2016, after which his access to Grainger’s servers was deactivated. [The offender] hacked into the servers on several occasions in July 2016, the indictment states.” (U.S. Department of Justice 2017, December 14)

Cybercrime as a Social Construction

The notion that crime is socially constructed can be traced to sociologists and criminologists who suggest that certain types of behaviors are constructed as social/criminal problems in order to further the interests of those creating the socially constructed phenomenon. Drugs, and the drug problem, for example, have been identified as a social construction (Goode 2015). In a similar way, some have contended that certain types of cybercrimes are socially constructed problems. Researchers have explored how politicians, in particular, have socially constructed cybercrime and cybersecurity issues. Focusing on how US presidents talk about cybercrime (discussed above), Hill and Marion (2016b, p. 11) conclude, “Not only are presidents engaging in the theater of addressing problems, but they are also engaging in the very social construction of those problems.” Reconciling low prosecution rates with dramatic news stories of wide-scale cyber offending, David Wall writes:

[one possibility is that the] ‘cybercrime problem’ has simply been blown up out of all proportion and that the media news gathering process has unintentionally fabricated an apparent crime wave out of a few novel and dramatic events. The worst case scenario is that we are in fact witnessing a deliberately calculated attempt to peddle fear, uncertainty and doubt by the media and cyber-security industry to further its interests, or by government to effect governance through fear of (cyber)crime! (Wall 2008a, p. 46)

Researchers have also drawn attention to the way that the term cyberterrorism has been constructed. It has been suggested by some that “the term itself is misleading”
(Gordon and Ford 2006, p. 14). Another author offered a similar conclusion: “cyberterrorism is a scare word that plays with the fear of two generally known unknowns – terrorism and technology” (Cottim 2010, p. 56). The social construction of cybercrime has resulted in a significant amount of misinformation reported in the media. As a result of this reporting, Wall (2008b) suggests that a number of myths about cybercrime have taken root. Those myths include the following:

• Hackers have become part of organized crime.
• Criminals are anonymous and cannot be tracked.
• Criminals go unpunished and get away with crime.
• We are on the brink of a cyberwar and will fall prey to cyberterrorists.
• Cybercrime is dramatic, futuristic, and dystopic.
• Cyberspace is pathologically unsafe and criminogenic.

Why might individuals, politicians, and business leaders “create” or socially construct cybercrime as a problem? Three overlapping reasons exist: power, control, and profit. Regarding power, by defining cybersecurity as an important issue, powerful politicians and governments are able to maintain or even increase their power over weaker governments and individuals. Regarding control, cybersecurity concerns have been used to expand the ability of those with power, particularly government interests, to use cyber intrusion strategies (such as cameras, digital evidence strategies, and so on) to control human behavior. Perhaps most importantly, regarding profit, cybersecurity is a billion-dollar industry. By defining (or constructing) a cybercrime problem, a need for businesses to address the problem has surfaced.

Cybercrime as a Technological Problem

Some, particularly those from STEM fields, might prefer to define cybercrime through a technological framework rather than a sociological or criminological lens. In doing so, these scholars draw more attention to the technological nature of the offenses, with some implying that the offenses occur because of the technology. For instance, it has been suggested that “cybercriminals are leveraging innovation at a pace to target many organizations that security vendors cannot possibly match” (Alazab et al. 2012, p. 210). An example of a technological definition of cybercrime is offered by Clough (2012), who defined cybercrime as “the use of digital technology in the commission or facilitation of crime” (p. 364). Focusing on the wireless nature of these crimes, another research team (three information technology professors) offered the following definition: “wireless cybercrime is an illegal activity committed by people with expertise in science and technology” (Yen and Lin 2012, p. 27). Note the reference to technology in each of these definitions. Unsurprisingly, those defining cybercrime through a technological lens are likely to offer technological solutions to reduce and control cyber offending.


Conclusion

Recall the example of the belt mentioned in the introduction. If cybercrime definitions are like belts, then one might logically conclude that there are different “sizes” of belts. Below are examples of short, medium, and long definitions (i.e., belts):

• One professional defined cybercrime as “a crime committed on a computer network, especially the Internet” (Hoar 2005, p. 4).
• Summarizing the Council of Europe’s Convention on Cybercrime, the Australian Institute of Criminology has defined cybercrime as “an umbrella term to refer to an array of criminal activity including offences against computer data and systems, computer-related offences, content offences and copyright offences” (Krone 2005, p. 1).
• David Wall (2007) offers the following definition: “Cybercrimes are criminal or harmful activities that are informational, global and networked and are to be distinguished from crimes that simply use computers. They are the product of networked technologies that have transformed the division of criminal labor to provide entirely new opportunities and new forms of crime which typically involve the acquisition or manipulation of information and its value across global networks for gain. They can be broken down into crimes that are related to the integrity of the system, crimes in which networked computers are used to assist the perpetration of crime, and crimes which relate to the content of computers” (p. 4).

None of these definitions is incorrect, or perfect for that matter. The shortest definition might oversimplify those behaviors best categorized as cybercrime. The second definition is general, connected to international experts, and flexible enough for expansion (the shorter definitions are, the more flexible they need to be!). The third definition is especially valuable in that it embraces cybercrime definitions in their various forms. In particular, this definition captures the many different possible definitions of cybercrime.
Cybercrime can be defined as traditional crime, a law violation, a deviant behavior, a political issue, a white-collar crime, a social construction, and a technological problem. Choosing the right definition for the right context is like finding the right belt for the right outfit. Some are better at it than others, but if the wrong belt is selected, or more importantly, if no belt is used when one should have been, everyone will notice!

Cross-References

▶ Computer Hacking and the Hacker Subculture
▶ Hacktivism: Conceptualization, Techniques, and Historical View
▶ Malicious Software Threats
▶ Sexting and Social Concerns


References

Alazab, M., Venkatraman, S., Watters, P., Alazab, M., & Alazab, A. (2012). Cybercrime: The case of obfuscated malware. In C. K. Georgiadis, H. Jahankhani, E. Pimenidis, R. Bashroush, & A. Al-Nemrat (Eds.), Global security, safety and sustainability & e-democracy. E-democracy 2011, ICGS3 2011. Heidelberg: Springer.
Boes, S., & Leukfeldt, E. R. (2017). Fighting cybercrime: A joint effort. In R. Clark & S. Hasim (Eds.), Cyber-physical security (pp. 185–203). New York: Springer International Publishing.
Brenner, S. (2001a). State cybercrime legislation in the United States. Richmond Journal of Law and Technology, 7. Available at: http://scholarship.richmond.edu/jolt/vol7/iss3/4.
Brenner, S. (2001b). Is there such a thing as ‘virtual crime’? California Criminal Law Review, 4. Available at: http://scholarship.law.berkeley.edu/bjcl/vol4/iss1/3.
Brenner, S. W. (2002). Transnational evidence gathering and local prosecution of international cybercrime. John Marshall Journal of Computer and Information Law, 20, 347–395.
Brenner, S. W. (2004). Cyber crime metrics: Old wine in new bottles? Virginia Journal of Law and Technology, 9(13), 1–52.
Brenner, S. W. (2006). Cybercrime jurisdiction. Crime, Law and Social Change, 46, 189–206.
Brenner, S. W. (2012). Cybercrime and the law: Challenges, issues, and outcomes. UPNE.
Charney, S., & Alexander, K. (1996). Computer crime. Emory Law Journal, 45, 931.
Clough, J. (2012). The Council of Europe Convention on Cybercrime: Defining ‘crime’ in a digital world. Criminal Law Forum, 23, 363–391.
Collins, J., Sainato, V., & Khey, D. (2010). Organizational data breaches, 2005–2010. International Journal of Cyber Criminology, 5(1), 794–810.
Cottim, A. (2010). Cybercrime, cyberterrorism and jurisdiction. European Journal of Legal Studies, 2, 55–79.
Diamond, B., & Bachmann, M. (2015). Out of the beta phase: Obstacles, challenges, and promising paths in the study of cyber criminology.
International Journal of Cyber Criminology, 9(1), 24–34.
Etter, B. (2001). Forensic challenge of e-crime. Marden: Australian Center for Policing Research.
Fafinski, S., Dutton, W., & Margetts, H. (2010). Mapping and measuring cybercrime. Oxford: Oxford Internet Institute.
Federal Bureau of Investigation. (n.d.). Internet crime schemes. Available at https://www.ic3.gov/crimeschemes.aspx.
Finklea, K., & Theohary, C. A. (2012). Cybercrime: Conceptual issues for congress and U.S. law enforcement. Washington, DC: Congressional Research Service.
Friedrichs, D. (2009). Trusted criminals (4th ed.). Belmont: Cengage.
Gogolin, G. (2010). The digital crime tsunami. Digital Investigation, 7(1–2), 3–8.
Goode, E. (2015). Drugs in American society (9th ed.). New York: Macmillan.
Goodman, M. (2010). International dimensions of cybercrime. In S. Ghosh & E. Turrini (Eds.), Cybercrimes: A multidisciplinary analysis (pp. 311–339). New York: Springer.
Gordon, S., & Ford, R. (2006). On the definition and classification of cybercrime. Journal of Computer Virology, 2, 13–20.
Grabosky, P. (2001). Virtual criminality: Old wine in new bottles? Social and Legal Studies, 10, 243–249.
Grabosky, P. (2004). The global dimension of cybercrime. Global Crime, 6, 146–157.
Grabosky, P. (2006). Electronic crime. New York: Prentice Hall.
Hill, J., & Marion, N. (2016a). Presidential rhetoric and cybercrime. Criminology, Criminal Justice, Law and Society, 17, 1–17.
Hill, J., & Marion, N. (2016b). Presidential rhetoric on cybercrime: Links to terrorism? Criminal Justice Studies, 29, 163–177.
Hoar, S. (2005). Trends in cybercrime. Criminal Justice, 20, 4–13.
Hollinger, R. C., & Lanza-Kaduce, L. (1988). The process of criminalization: The case of computer crime laws. Criminology, 26, 101–126.
Holt, T. J. (2010). Examining the role of technology in the formation of deviant subcultures. Social Science Computer Review, 28(4), 466–481.


Holt, T. J., & Copes, H. (2010). Transferring subcultural knowledge on-line: Practices and beliefs of persistent digital pirates. Deviant Behavior, 31(7), 625–654.
Holt, T. J., & Kilger, M. (2012). Examining willingness to attack critical infrastructure online and offline. Crime & Delinquency, 58(5), 798–822.
Holt, T. J., Bossler, A. M., & Fitzgerald, S. (2010). Examining state and local law enforcement perceptions of computer crime. In Crime on-line: Correlates, causes, and context (pp. 221–246).
Holt, T. J., Kilger, M., Chiang, L., & Yang, C. S. (2017). Exploring the correlates of individual willingness to engage in ideologically motivated cyberattacks. Deviant Behavior, 38(3), 356–373.
Hooper, C., Martini, B., & Choo, K. R. (2013). Cloud computing and its implications for cybercrime investigations in Australia. Computer Law & Security Review, 29(2), 152–163.
Hu, Y., Chen, X., & Bose, I. (2013). Cybercrime enforcement around the globe. Journal of Information Privacy & Security, 9, 34–52.
Jewkes, Y., & Yar, M. (2010). Handbook of internet crime. New York: Routledge.
Kanellis, P. (2006). Digital crime and forensic science in cyberspace. Hershey: Idea Group Publishing.
Karyda, M., & Mitrou, L. (2007). Internet forensics: Legal and technical issues. In Second international workshop on digital forensics and incident analysis (WDFIA 2007). https://doi.org/10.1109/WDFIA.2007.4299368.
Kävrestad, J. (2017). Guide to digital forensics: A concise and practical introduction. New York: Springer.
Kerbs, J., & Jolley, J. (2007). The joy of violence. American Journal of Criminal Justice, 32(1–2), 12–29.
Krone, T. (2005). Concepts and terms. High Tech Crime Brief. Australian Institute of Criminology.
Lastowka, F., & Hunter, D. (2004). The laws of the virtual worlds. California Law Review, 92, 1–73.
Leman-Langlois, S. (Ed.). (2008). Technocrime. New York: Willan.
Levin, A., & Ilinka, D. (2013). International comparisons of cyber crime.
Toronto: Ryerson University Privacy and Cyber Crime Institute.
Luppicini, R. (2014). Illuminating the dark side of the internet with actor-network theory. Global Media Journal, 7, 35–49.
Marcum, C. D., Higgins, G. E., & Tewksbury, R. (2011). Doing time for cyber crime: An examination of the correlates of sentence length in the United States. International Journal of Cyber Criminology, 5(2), 825.
Marcum, C. D., Higgins, G. E., & Tewksbury, R. (2012). Incarceration or community placement: Examining the sentences of cybercriminals. Criminal Justice Studies, 25(1), 33–40.
Moitra, S. (2004). Cybercrime. International Journal of Comparative and Applied Criminal Justice, 28, 105–123.
O'Neill, M. (2000). Old crimes in new bottles: Sanctioning cybercrime. George Mason Law Review, 9, 237–288.
Parker, D. (1976). Crime by computer. New York: Scribner.
Payne, B. K. (2016). Expanding the boundaries of criminal justice: Emphasizing the “S” in the criminal justice sciences through interdisciplinary efforts. Justice Quarterly, 33, 1–20.
Payne, B. (2018). White-collar cybercrime: White-collar crime, cybercrime, or both? Criminology, Criminal Justice, Law and Society, in press.
Ponemon. (2017). Ponemon Institute’s 2017 cost of data breach study. Available online at https://www-01.ibm.com/marketing/iwm/dre/signup?source=urx-15763&S_PKG=ov58441.
Richardson, R. (2008). 2008 CSI/FBI computer crime and security survey. Computer Security Issues and Trends, 8, 1–30.
Sabell, E., Manaf, A., & Ismail, Z. (2012). Development of Malaysian digital forensics investigator competency identification methods. In A. E. Hassanien, A.-B. M. Salem, R. Ramadan, & T.-h. Kim (Eds.), Advanced machine learning technologies and applications. Berlin: Springer.

1

Defining Cybercrime


Steinmetz, K. F., & Nobles, M. (2017). Technocrime and criminological theory. New York: Routledge.
Sutherland, E. (1940). White-collar criminality. American Sociological Review, 5(1), 1–12.
Tabansky, L. (2012). Cybercrime: A national security issue. Military and Strategic Affairs, 4, 117–128.
Tappan, P. W. (1960). Crime, justice and correction (Vol. 10). New York: McGraw-Hill.
Taylor, M., & Quayle, E. (2003). Child pornography: An internet crime. New York: Brunner-Routledge.
Taylor, R., Fritsch, E., & Liederbach, J. (2014). Digital crime and digital terrorism. New York: Pearson.
Toulmin, S. (1961). Foresight and understanding: An enquiry into the aims of science. New York: Harper and Row.
Valiquet, D. (2011). Cybercrime: Issues. Parliamentary Information and Research Services.
Verma, M., Hussain, S., & Kushwa, S. (2013). Cyber law. International Journal of Research Review in Engineering Science and Technology, 1, 123–130.
Von Solms, R., & Van Niekerk, J. (2013). From information security to cyber security. Computers and Security, 38, 97–102.
Wall, D. S. (2004). What are cybercrimes? Criminal Justice Matters, 58, 20–21.
Wall, D. S. (2007). Cybercrime: The transformation of crime in the information age. Oxford: Polity.
Wall, D. S. (2008a). Cybercrime and the culture of fear. Information Communication and Society, 11, 861–884.
Wall, D. S. (2008b). Cybercrime, media and insecurity: The shaping of public perceptions of cybercrime. International Review of Law Computers and Technology, 22(1–2), 45–63.
Wall, D. (2013). Policing identity crimes. Policing and Society, 23, 437–460.
Wang, Z. (2012). Method for providing terminals of IMS network firewall and firewall system. Washington, DC: U.S. Patent Application.
Wright, J. P., Beaver, K. M., DeLisi, M., Vaughn, M. G., Boisvert, D., & Vaske, J. (2008). Lombroso's legacy: The miseducation of criminologists. Journal of Criminal Justice Education, 19, 325–338.
Yar, M. (2005). The novelty of cybercrime. European Journal of Criminology, 2(4), 407–427.
Yen, T., Lin, I., & Chang, A. (2012). A study on digital forensics standard operation procedure for wireless cybercrime. International Journal of Computer Engineering Science, 2, 26–39.

2

Historical Evolutions of Cybercrime: From Computer Crime to Cybercrime

Kyung-Shick Choi, Claire S. Lee, and Eric R. Louderback

Contents

Introduction
Background
  Cybercrime Research in Criminology
  Defining Cybercrime
  Cybercrime Classification and Key Actors in the Cybercrime Ecosystem
  Trends in Cybercrime Research Publications
Cybercrime and Technological Innovations
  Definition of the Third Industrial Revolution and Cybercrimes Enabled by the Third Industrial Revolution
  Cybercrime Research During the Fourth Industrial Revolution
Issues and Ethical Considerations in Relation to Cybercrime and Cybersecurity Research
Conclusions and Future Research Directions
References

Abstract

This chapter aims to examine cybercrime vis-à-vis technological developments by identifying key research studies in the field and the crimes resulting from technological change. The technological processes and innovations of the industrial revolutions are discussed to illustrate their significant role in shaping and creating cybercrime research. In the age of the fourth industrial revolution, existing cybercrimes will coexist with newer types and versions of cybercrimes while combining with traditional crimes in hybrid forms. The chapter concludes with future challenges for cybercrime research.

Keywords

History of cybercrime · The Industrial Revolution · Cybercrime research in criminology

K.-S. Choi (*)
Boston University, Boston, MA, USA
e-mail: [email protected]

C. S. Lee
School of Criminology and Justice Studies, University of Massachusetts, Lowell, MA, USA
e-mail: [email protected]

E. R. Louderback
Division on Addiction, Cambridge Health Alliance, a Harvard Medical School Teaching Hospital, Medford, MA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_2

Introduction

In the twenty-first century, people live in two worlds: the physical world and the cyber world. The cyber world came into being through the development of technological innovations, such as Internet technologies. The Internet was originally designed for military and scientific purposes, and it was not widely used by non-experts until the early 1990s. The emergence of accessible cyberspace can be encapsulated in a series of significant changes in the Internet penetration rate and in the population of Internet users. Key elements that mark this remarkable rise include reduced Internet surfing fees, the development of Internet technology to expand bandwidth, and increased Internet speed and reliability (Curran 2016; Greenstein 2015).

According to the International Telecommunication Union, the number of individuals using the Internet has changed dramatically across the world in the past three decades. In 1990, only 0.049% of the global population used the Internet – only a fraction of a percent. By 2016, 45.8% of the world population was online (see Fig. 1; World Bank n.d.).

Take the United States as an example. In 1999, the United States had an overwhelming lead in the number of Internet users, with a worldwide Internet user share of over 42.8%. However, the US share declined by more than half to 20.8% by the end of 2003 as other countries adopted Internet technology. At the end of 2003, the worldwide Internet penetration rate had increased to 12.5%, up from 4.3% at the end of 1999. The individual Internet penetration rates for each country increased as well. Among developed countries, those with penetration rates higher than 50% at the end of 2003 were the United States (56.5%), Japan (70.3%), Canada (51.0%), Australia (54.3%), the Netherlands (78.8%), Sweden (79.6%), the United Kingdom (54.6%), and South Korea (76.5%). In addition, each country showed an increase in e-commerce sales per Internet user from 1999 to 2003.
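As a back-of-the-envelope illustration, the penetration rates cited above can be converted into absolute user counts. This is a hypothetical sketch: the world population totals below are rough outside estimates, not figures from this chapter.

```python
# Approximate world population totals (outside estimates, labeled assumptions)
POPULATION = {1990: 5.3e9, 2016: 7.4e9}
# Internet penetration rates (% of population) as cited in the text
PENETRATION = {1990: 0.049, 2016: 45.8}

def internet_users(year: int) -> float:
    """Implied number of Internet users for a given year."""
    return POPULATION[year] * PENETRATION[year] / 100

print(f"1990: ~{internet_users(1990) / 1e6:.1f} million users")
print(f"2016: ~{internet_users(2016) / 1e9:.2f} billion users")
```

On these assumptions, the same percentage change implies growth from roughly 2.6 million users in 1990 to about 3.4 billion in 2016 – a thousandfold increase in absolute terms.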
The worldwide per capita e-commerce sales rate increased to $5,100 per user at the end of 2003, up from $1,600 per user at the end of 1999. During this 5-year period (1999–2003), the overall trend revealed that the United States had an overwhelming lead in e-commerce sales per Internet user (Wei 2005). These figures show the rapid rise in Internet penetration and use in the early years of online technologies.

Fig. 1 Individuals using the Internet worldwide 1993–2016 (% of population). (Source: International Telecommunication Union/World Bank n.d.)

The trends of increasing Internet use have continued to the
present day. For example, Internet penetration rates by region have increased substantially over the past decades, and the world's average Internet penetration rate is now 56.8%. Internet access rates continue to vary by region: North America (89.4%), Europe (86.8%), Oceania (68.4%), Latin America (67.5%), Middle East (67.2%), Asia (51.8%), and Africa (37.3%) (Internet World Stats 2019).

Technological changes can result in the emergence of new types of crime. For example, the first Industrial Revolution brought factories and railroads along with train robberies, whereas the second Industrial Revolution brought electricity and cars along with auto thefts (Stearns 2018). As Internet access has become more widespread, crime in digital space has begun to adversely affect individuals and organizations in new ways. The third and fourth Industrial Revolutions are, respectively, linked to the introduction of computers and the Internet and to enhanced global connectivity. Table 1 shows a summary of the four Industrial Revolutions, their technological advancements, and the new types of crime they enabled. In particular, cybercrime studies examine the causes and outcomes of crime and deviance on the Internet, as well as the related legal issues, ethics, countermeasures, and control systems.

The fourth Industrial Revolution is currently being shaped and developed by the use of the Internet, which serves as a platform for interaction and transactions (Schwab 2016). Technologies brought about by the third and fourth Industrial Revolutions play a role in both enabling and inhibiting ongoing and emergent types of cybercrime (Brenner 2010, 2012). Despite the fact that total worldwide financial losses due to cybercrime likely exceed one billion US dollars annually, law enforcement and policymakers are exceedingly ineffective at preventing and arresting cybercriminals (Anderson et al. 2013). As such, it is crucial for scholars and government entities to develop novel ways of detecting, understanding, preventing, and prosecuting cybercriminals within nations and transnationally (Brenner 2008).

Table 1 The four industrial revolutions, technological advancements, and new types of crime

First industrial revolution (1760–1840)
Technological advancements: Factories; Railroads; Coal mining; Textiles
New types of crime: Train robbery

Second industrial revolution (1870–1914)
Technological advancements: Electricity; Automobiles; Light bulbs; Telephone; Radio; Petroleum industry; Federal Reserve Banks
New types of crime: OUI/DUI; Counterfeit goods; Car burglary; Telephone fraud; Utility theft; Radio interference; Bank robbery

Third industrial revolution (1969–2010)
Technological advancements: Microprocessors; Internet communication; Computer transistors; Nuclear energy; Biotechnology
New types of crime: Hacking; Cyber trespass; Cyber theft; Data tampering; Malware; Spyware

Fourth industrial revolution (2010 to present)
Technological advancements: Robotic automation; Nanotechnology; Internet of things; Artificial intelligence; Cryptocurrency
New types of crime: Ransomware for Bitcoin; "Deep Fake" fraud

Against this backdrop, this chapter examines the historical trajectories of cybercrime from 1969 to the present, focusing on cyber-dependent, computer-focused cybercrimes. (This discussion is limited to cyber-dependent, computer-focused crimes; it does not include interpersonal or "person-focused" cybercrimes such as online harassment, cyberbullying, and child pornography.) First, this chapter defines important terms in cybercrime research and provides an overview of the key actors in the cybercrime ecosystem. Second, it explores the third and fourth Industrial Revolutions and key changes in cybercrime during these historical periods. The chapter concludes with present challenges and suggestions for future research on cybercrime in the discipline of criminology to guide researchers and policymakers.

Background

Cybercrime Research in Criminology

Criminology is the study of deviance and crime, and it has developed to encompass the themes of crime characteristics, causes, and consequences, the mechanisms of victims and offenders, offenses, and the criminal justice system (ASC 2019). Cybercrime research (Cybercriminology is not widely acknowledged as a subdiscipline of criminology at this moment, and it is often understood as "cybercrime research." For this chapter, we use the term cybercrime research with the hope that "cybercriminology" can become a widely adopted term for this subfield in the scholarly community in the near future.), a subfield within criminology, examines new types of crime in cyberspace, including computer-focused crime (also known as cyber-dependent crime), which targets computers and electronic devices (e.g., hacking, malware, and spyware) and is the focus of this chapter. It also includes computer-facilitated crime (also known as cyber-enabled crime), which refers to conventional crime that is facilitated through electronic means (e.g., cyberfraud, cyberstalking, and cyberbullying) (e.g., Choi 2015; Furnell et al. 2015; Holt and Bossler 2015; Maimon and Louderback 2019; Yar 2005a).

With new technological and industrial changes accompanying the fourth Industrial Revolution, scientific research must consistently adapt to stay up-to-date on the ever-increasing types of cybercrime developed by individuals and groups of hackers. In doing so, we need new definitions, innovative ways to investigate such criminal and deviant activities, and novel mechanisms to create, understand, and interpret criminal profiles. Cybercrime research is founded upon a method of interdisciplinary examination that includes recognizing the causes of cybercrime – citing studies in criminology, psychology, sociology, computer science, cybersecurity, etc. – to understand the role of cybercrime within society and the criminal justice system. In particular, research in cybercrime investigates the causes and outcomes of crime and deviance on the Internet, as well as the related legal issues, ethics, countermeasures, and control systems.
This is a dynamic, applied field that allocates resources directly into legal practice, particularly by developing laws that shape criminal justice-related policy and its implementation (Choi and Lee 2018, p. 1). (There are two research strands within the field of cybercriminology. One is the application of general crime-related theories – for example, social control, self-control, routine activity and lifestyle, and delinquency theories – to cybercrime, whereas the other involves integrated theory testing or creating new theories about cybercrime. Cyber routine activity theory (Choi 2008, 2015) and space transition theory (Jaishankar 2008) are such examples. However, what is still missing are more interdisciplinary perspectives and theories. In particular, those linking the social sciences (e.g., criminology, psychology, sociology) with technical perspectives (e.g., computer science, cybersecurity) are lacking, though very important (Choi and Lee 2018, pp. 1–2).)

Defining Cybercrime

The term "cybercrime" traces back to William Gibson, who coined the term "cyberspace" in 1982 (Wall 2007a). Since then, the boundaries defining cybercrime have expanded. During the genesis of "cybercrime" as a field, the earliest generations of scholars and organizations preferred the term "computer crimes," in the narrower sense of a computer-based cybercrime, over "cybercrime" in the broader sense of the word. Some scholars propounded the view that computers are the primary promoters or inhibitors of such crimes, thus deeming the phenomenon that of "computer crimes." More specifically, the National White Collar Crime Center (2003) defines a computer crime as a violation of law involving a computer. "True" computer crimes, according to the Center's definition, target the content of computer operating systems, programs, or networks (hereafter referred to as "computer systems") and typically involve one or more of the following:

(a) Accessing computer systems without permission (unauthorized access)
(b) Damaging computer systems (sabotage)
(c) Acquiring information stored on computer systems without permission (theft of data)
(d) Acquiring services from computer systems without permission (theft of services) (National White Collar Crime Center 2003)

As technology has advanced in recent decades, computer crimes have grown to include many new types of crimes, and the term "cybercrime" has been adopted by scholars, policymakers, and the general public. A more recent definition of cybercrime that encompasses the wide range of cybercrimes occurring during the fourth Industrial Revolution is "any illegal behavior directed by means of electronic operations that target the security of computer systems and the data processed by them" (the United Nations' definition of cybercrime). It is also important to note that, in fact, a limited set of crimes are specifically defined in laws such as the US Computer Fraud and Abuse Act (18 U.S.C. 1030) and the UK Computer Misuse Act. Computer crimes include theft of computer services; unauthorized access to protected computers; software piracy and the alteration or theft of electronically stored information; extortion committed with the assistance of computers; obtaining unauthorized access to records from banks, credit card issuers, or consumer reporting agencies; and trafficking in stolen passwords and transmission of destructive viruses or commands (Choi 2015: 9–10; Dhillon and Moores 2001; Moore 2010).
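The four NW3C categories above can be read as a small taxonomy. The following is a minimal sketch; the offense-to-category mapping is our own illustration, not part of the Center's definition.

```python
from enum import Enum

class ComputerCrime(Enum):
    """The NW3C (2003) categories of 'true' computer crime, as listed above."""
    UNAUTHORIZED_ACCESS = "accessing computer systems without permission"
    SABOTAGE = "damaging computer systems"
    THEFT_OF_DATA = "acquiring stored information without permission"
    THEFT_OF_SERVICES = "acquiring services without permission"

# Hypothetical mapping of concrete offenses to the four categories
EXAMPLES = {
    "trafficking in stolen passwords": ComputerCrime.UNAUTHORIZED_ACCESS,
    "transmitting destructive viruses": ComputerCrime.SABOTAGE,
    "copying a customer database": ComputerCrime.THEFT_OF_DATA,
    "using compute time without authorization": ComputerCrime.THEFT_OF_SERVICES,
}

for offense, category in EXAMPLES.items():
    print(f"{offense} -> {category.name}")
```

Note that a single real-world incident (e.g., a ransomware attack) often spans several categories at once, which is one reason later definitions of cybercrime became broader.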

Cybercrime Classification and Key Actors in the Cybercrime Ecosystem

Beyond these definitions, one convention for classifying a cybercrime is to identify it as either a cyber-dependent or a cyber-enabled crime. On the one hand, cyber-dependent crimes (or "pure" cybercrimes) are offenses that can only be committed using a computer, computer networks, or other forms of information communications technology (ICT). These acts include the spread of viruses or other malware, hacking, and distributed denial-of-service (DDoS) attacks (McGuire and Dowling 2013). On the other hand, cyber-enabled crimes are extensions of existing crimes into cyberspace (e.g., cyberterrorism and cyber phishing scams) (McGuire and Dowling 2013, p. 75).

Survey evidence reveals that approximately 25% of respondents in the 2012 Crime Victimization Survey (CVS) suspected that their most recent online "crime" incidents were committed by an organized group of criminals rather than someone working alone (McGuire and Dowling 2013, p. 26). Cyber-dependent and cyber-enabled crimes are not, however, just about technical skills; they rely heavily on the behavior of the intended victim. Social engineering tactics are key to deceiving computer users about the purpose of a file or an email they have been sent (McGuire and Dowling 2013, p. 26). As such, cybercrime occurs within a larger ecosystem (see Fig. 2), in which major actors play different roles, including perpetrators (e.g., hackers), enablers (e.g., virus writers), guardians (e.g., law enforcement and system administrators), and victims (Arief et al. 2015; Kraemer-Mbula et al. 2013; Maimon and Louderback 2019).

Case study research has attempted to categorize particular types of cyber offenders, producing seven archetypes:

(a) Newbies, who have limited skills and experience and are reliant on tools developed by others (also known as "script kiddies")
(b) Cyberpunks, who deliberately attack and vandalize
(c) Internals, who are insiders with privileged access and who are often disgruntled employees
(d) Coders, who have high skill levels
(e) Old guard hackers, who have no criminal intent and high skill levels, and so would most likely equate to white hat hackers
(f) Professional criminals
(g) Cyberterrorists (McGuire and Dowling 2013, p. 24)

Computer crime is a subcategory of cybercrime, but computer crime is not necessarily synonymous with cybercrime (Choi 2008). In other words, cybercrime is a larger umbrella that encompasses computer crime, which merely requires more than a basic level of computer operating skill for offenders to successfully victimize other computer users. Specifically, manipulation of digital data should be considered a key process in the execution of a cyber-dependent crime.
Ultimately, offenders who commit either a cybercrime or computer crime are conducting their business in this virtual landscape – cyberspace – which is a realm different from the physical world and therefore subject to unique laws and jurisdictions (Choi 2015).

Fig. 2 Key actors in the ecosystem of cyber-dependent crimes. (Adapted from Kraemer-Mbula et al. 2013)
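The cyber-dependent versus cyber-enabled distinction described above can be expressed as a simple lookup. This is a hypothetical sketch: the offense lists are illustrative examples drawn from the text, not an exhaustive classification scheme.

```python
# Offenses only possible via computers/ICT ("pure" cybercrimes)
CYBER_DEPENDENT = {"malware", "hacking", "ddos attack"}
# Traditional offenses extended into cyberspace
CYBER_ENABLED = {"cyberterrorism", "phishing scam", "cyberfraud"}

def classify(offense: str) -> str:
    """Return the McGuire and Dowling-style class of an offense label."""
    key = offense.lower()
    if key in CYBER_DEPENDENT:
        return "cyber-dependent"
    if key in CYBER_ENABLED:
        return "cyber-enabled"
    return "unclassified"

print(classify("DDoS attack"))    # cyber-dependent
print(classify("phishing scam"))  # cyber-enabled
```

In practice the boundary is fuzzier than a lookup table suggests; as the text notes, both classes rely heavily on victim behavior and social engineering rather than on technical skill alone.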


Trends in Cybercrime Research Publications

Without a doubt, scholarly attention to cybercrime has increased dramatically in the past decade as compared to other types of crime (see reviews in Bossler 2017; D'Arcy and Herath 2011; Holt and Bossler 2014; Maimon and Louderback 2019). Drawing on a systematic literature search of scholarly databases, research articles and monographs on cybercrime published between 2000 and 2017 were obtained from Google Scholar and JSTOR (keyword: "cybercrime"). More than 500 research works in English alone have been published in the past two decades, with more studies appearing in recent years. While many types of cybercrime have been studied over the years, many gaps in the body of cybercrime research have been documented by scholars (Bossler 2017; Holt and Bossler 2014; Maimon and Louderback 2019).

Using a semantic network analysis (Doerfel 1998) of the cybercrime literature, we find that existing English-language research on cybercrime is predominantly tied to two keywords – "(computer-related) cybercrime" and "Internet" – that correspond with definitions of cybercrime outlined in the literature. Other important facets of cybercrime are how the crimes are organized and how offenders generate opportunities for criminal activity. Thus, "organize" (related to actual crimes, crime trends, and responses to crime), "age," and "crime" are linked to each other.
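The semantic network approach mentioned above can be approximated by counting how often keywords co-occur across publications; the pairs that co-occur most often form the strongest ties in the network. This is a minimal sketch with invented sample records, not the chapter's actual corpus.

```python
from collections import Counter
from itertools import combinations

# Invented keyword sets standing in for indexed cybercrime publications
records = [
    {"cybercrime", "internet", "hacking"},
    {"cybercrime", "internet", "organize"},
    {"cybercrime", "age", "crime"},
    {"internet", "crime", "organize"},
]

edges = Counter()
for keywords in records:
    # every co-occurring keyword pair becomes a weighted edge in the network
    for pair in combinations(sorted(keywords), 2):
        edges[pair] += 1

# the heaviest edges approximate the strongest semantic ties
for pair, weight in edges.most_common(2):
    print(pair, weight)
```

Real semantic network analysis adds further steps (stemming, stop-word removal, centrality measures), but the co-occurrence count is the core of the method.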

Cybercrime and Technological Innovations

Each Industrial Revolution is characterized by the most significant technologies of its time. The first Industrial Revolution was an era dominated by steam and mechanical manufacturing and was accompanied by the rise of mass production, whereas the third and fourth Industrial Revolutions have been characterized by information technology and cyber-physical structures and can be viewed as distinctive periods due to their speed, scope, and effect on the global economy. This is the age of worldwide connections, with the power to transform the entire cycle of production, management, and governance systems. In these systems, predictive algorithms are often able to forecast job creation and job losses across different sectors, from the service sector, transportation and logistics, and office and administrative support to manufacturing positions (Peters 2017).

With a burgeoning body of literature on cybercrime, it is important to explore cybercrime research in relation to recent technological advancements. This section turns our attention to the third and fourth Industrial Revolutions as they relate to cybercrime and cybersecurity.

Definition of the Third Industrial Revolution and Cybercrimes Enabled by the Third Industrial Revolution

Since the late 1960s and 1970s, the third Industrial Revolution has been characterized by the development of information and communications technology (ICT) as well as increasing industrialization and globalization. This period has produced studies on cybercrime and computer crime, with published research addressing hacking, identity theft, viruses, distributed denial-of-service (DDoS) attacks, malware, spyware, scareware, spear phishing, ransomware, and cyberterrorism, among others. The FBI's IC3 reports expanded victim awareness of the damages of cybercrime and urged victims to report their cases to the IC3 Center (FBI 2017). Scholarly awareness has also increased in terms of research on specific crime cases and broader issues. Nigerian scams, known as "4-1-9 (419) scams," are the second most common type of cybercrime complaint while also causing the highest average financial loss per victim. Other cybercrimes such as online identity theft (Holt and Turner 2012), online piracy (Marcum 2008), and online stolen data markets (Holt 2013a; Hutchings and Holt 2017; Holt et al. 2016a, b) have also been examined as categories.

Cyber-dependent crimes that originated in the third Industrial Revolution are evolving at present due to the new conditions of the fourth Industrial Revolution. While traditional cybercrimes such as online scams, hacking, and DDoS attacks do persist, newer variations of these cybercrimes have been developed over the past decades. The research during this period laid the foundation for more advanced academic inquiries, which are discussed in the next section.

Cybercrime Research During the Fourth Industrial Revolution

Definition of the Fourth Industrial Revolution

The fourth Industrial Revolution, a term coined by Klaus Schwab at the World Economic Forum, is presumed to have begun in the 2010s (Schwab 2016). It brings remarkable changes not only to our daily lives but also to the types of crime in our world. Schwab (2016) defines this era as one characterized by the development of new technologies that unite the biological, digital, and physical worlds, with implications across industries, economies, and disciplines. He further argues that the fourth Industrial Revolution creates a world in which virtual and physical systems of manufacturing globally cooperate with one another in an adaptable and integrated way (Schwab 2016). The new revolution "is not only about smart and connected machines and systems, but it is the combination of these advancements and their cooperation over the physical, digital and biological domains that make the fourth Industrial Revolution fundamentally unique from previous revolutions. In this revolution, emerging technologies and broad-based innovation are diffusing much faster and more widely than in previous revolutions, which continue to unfold in certain parts of the world" (Schwab 2016, p. 8).

These transformations in technology and society also come with vulnerabilities and risks, in particular related to cybercrime and cybersecurity. Potential and actual victims, ranging from individuals and companies to governments and institutions, as well as defensive forces such as law enforcement agencies (Brenner 2008), face a new type of threat in daily life. In turn, as in the earlier Industrial Revolutions, such technological changes result in new vulnerabilities and risks to victims.


Cybercrimes Enabled by the Fourth Industrial Revolution

In response to these widespread and pervasive technological trends and their resulting effects on cybercrime incidence and patterns, scholars have begun conducting research on new types of cybercrime, key players in the cybercrime ecosystem, and new contexts. Given the nascence of these technologies and crimes in the fourth Industrial Revolution, relatively few researchers have identified the origins, developments, and futures of such cybercrimes in sufficient detail. In this line of inquiry, researchers in the domains of computer science and engineering pay more attention to the fourth Industrial Revolution than researchers in the social sciences, in particular criminology. For example, the Internet of things, robotics, virtual reality, and cyber warfare create insider and outsider threats and security issues in relation to the fourth Industrial Revolution (e.g., Martellini et al. 2017). To begin addressing this gap, this section discusses the relationship between the fourth Industrial Revolution and the cybercrimes it enables. Some of these consist of traditional cybercrimes that have increased in frequency or complexity through new technologies belonging to the most recent Industrial Revolution.

Under the conditions of the fourth Industrial Revolution, there are key features that can lead to cybercrime and cybersecurity issues: increased connectivity, widespread automation, and artificial intelligence technologies. First, Internet/technological ecosystems, including the Internet of things (IoT) and cloud computing (Chaka and Marimuthu 2018), also known as cyber-physical systems (Schwab 2016), present new opportunities for cybercriminals to access multiple people's data and files simultaneously by gaining illicit access to cloud computing systems.
Moreover, the rise in "smart" homes and Internet of things (IoT) devices such as Alexa and Google Home, as well as conventional items such as lighting, garage door openers, and appliances, can allow hackers to remotely control and modify items in other people's homes and businesses (Karlov 2017).

Second, in connection with these newly established ecosystems, the connectivity of devices and technologies in fourth Industrial Revolution-facilitated ecosystems is becoming increasingly relevant to our daily life. Connectivity is important not only for the system but also, perhaps more so, for emerging types of cybercrime. For example, facial recognition devices can be hacked by highly skilled cybercriminals to gain unauthorized access to smartphones, tablets, and laptop computers. New technologies simultaneously enable the creation of new types of crimes and enhance the investigation of crimes through predictive and preventative crime pattern software based on data collection and spatial analytics (e.g., geographically weighted regression; Cowen et al. 2018).

For example, connecting smart devices into a global network has led to new ways of banking, such as Venmo and Zelle. New payment systems such as cryptocurrency (e.g., Bitcoin and Ethereum), blockchain, and QR codes might be thought to present new opportunities for criminals and corresponding challenges for victims and law enforcement (Brenner 2012). For example, there have been several recent high-profile incidents of ransomware (Luo and Liao 2009; Mansfield-Devine 2016), in which hackers demand payment in Bitcoin to prevent destruction of an individual's or a business' data and files. Since cryptocurrency is anonymous and confidential for both parties, this new method of transferring money worldwide can facilitate the development of transnational cybercrime via the payment of ransoms in exchange for the preservation of valuable data.

Third, the pace of artificial intelligence (AI) and automation has increased with the advent of driverless cars, pilotless drones, and automated retail systems such as ATMs, restaurant kiosks, and self-service checkouts. These developments involve the emergence of autonomous intelligent systems, sometimes taking the form of humanoid robots (Peters 2017). AI may be thought of as a fusion of technologies that blurs the lines between the physical, digital, and biological domains. The implications of the AI revolution for business, industry, and daily life remain to some extent in the realm of speculation. The most obvious issues in this regard are those that relate to the ways in which the nature of work and the job market are changing and will continue to change at an increasing pace (Schwab 2017; The Fourth Industrial Revolution and Education 2018).

AI, big data, the IoT, the cloud, augmented reality (AR)/augmented virtuality (AV), Bitcoin, blockchain, and other new types of technologies seemingly make our lives more convenient. These new technologies enable us to investigate, detect, and predict crimes, but at the same time they create new vulnerabilities. The social implications and consequences of technological developments designed to improve our standard of living ironically allow cybercrime to proliferate at an increasing rate.

Issues and Ethical Considerations in Relation to Cybercrime and Cybersecurity Research

The cybercrime and cybersecurity issues discussed in the previous section present ethical considerations that must be examined in the context of the fourth Industrial Revolution, in relation to cyber/digital systems, data, and surveillance.

First, critical ethical dimensions relating to cyber/digital systems, and in particular the IoT, are important to note. Kevin Ashton first proposed the concept of the IoT in the late 1990s (Gabbal 2015). The IoT refers to IT systems connected to all sub-systems, processes, internal and external objects, and supplier and customer networks that communicate and cooperate with each other and with humans (Howard 2015; Jan Smit 2016, p. 22). The term "things" in the IoT refers to any physical objects that have their own IP address and can connect to the network in order to send and receive data, the so-called smart things (Karlov 2017). According to some projections, by 2020, 30 billion devices, from aircraft to sewing needles, will be connected to the Internet (Howard 2015; Jan Smit 2016, p. 22). Likewise, as Mohamed and Køien argue, the "IoT has gradually permeated all aspects of modern human life, such as education, healthcare, and business, involving the storage of sensitive information about individuals and companies, financial data transactions, product development and marketing" (Mohamed and Køien 2015, p. 66).

Connected devices and machines are valuable targets for cyberattackers for several reasons (Cheng et al. 2012; Krudner 2013): (a) most IoT devices operate unattended by humans, so it is easy for an attacker to physically gain access to
them; (b) most IoT components communicate over wireless networks, where an attacker could obtain confidential information by eavesdropping; (c) most IoT components cannot support complex security measures due to low power and computing resource capabilities; and (d) cyber threats can be launched against any IoT assets and facilities, potentially causing damage or disabling system operation, endangering the general populace, or causing severe economic damage to owners and users (Mohamed and Køien 2015, p. 68).

Second, data-related ethical and critical issues should be addressed. AI and "big data" have led to data security concerns and breaches at both the consumer and institutional levels. Data security refers to protecting data from destructive forces and from the unintended actions of unauthorized users (Jan Smit 2016, p. 35). Given the rise in cybersecurity breaches leading to the disclosure of confidential information and data, such as those seen in the Cambridge Analytica/Facebook scandal (see Lee 2019) and in disclosures by WikiLeaks (Brevini 2017; Taylor 2017) and foreign operatives (Schnur and Wilson 2018), scholars have sought to better understand the legal and ethical implications of cybercrime (Sorell 2015; Zureich and Graebe 2015). A key challenge in developing objective and universal codes of ethics for cyberspace is the constant advancement of technology.

Given the ubiquity of data, personal data protection and privacy have become an increasingly pressing concern. With recent technological developments in cloud services and computing, more and more people and organizations store their data in these virtual systems. Data gathered from numerous IoT devices and cloud systems can thus give rise to data-related ethical issues in the fourth Industrial Revolution. Hacking in this context may be elevated to new and different levels, as data from these devices are increasingly connected to individuals, companies, and public institutions.
Another data-pertinent ethical issue is linked to system vulnerabilities. For example, Microsoft's two-factor authentication system has been shown to have critical issues that led to Skype and email hacking, demonstrating the weakness of "secure-sounding" technologies that actually offer limited protection.

Scholars of ethics have also raised concerns about employers conducting electronic surveillance and monitoring of employees both within and outside of the workplace (Chory et al. 2016; West and Bowman 2016). Despite being physically absent from the workplace during nonwork hours, many employees have been terminated from their positions due to their postings on social media or because others posted damaging information about them online (O'Rourke et al. 2018; Westrick 2016). In contrast to previous decades, widespread social media use coupled with a highly competitive labor market can result in employees and consumers committing cybercrime against co-workers and companies to gain promotions in the workplace and other advantages such as free products (Grégoire et al. 2015; Obeidat et al. 2017). Future studies should examine these issues related to privacy and ethics in cyberspace, as well as additional topics described in the next section on Conclusions and Future Research Directions.

In sum, in the fourth Industrial Revolution, cybercrimes and cybersecurity issues are much more prevalent and complex than before, which requires increased attention to security, risks, and ethics with regard to systems, data, and surveillance.

2 Historical Evolutions of Cybercrime: From Computer Crime to Cybercrime
K.-S. Choi et al.

Education and training can play important roles in raising public awareness among potential victims and promoting protection measures, as well as in creating a better cyber-physical ecosystem. These goals can be achieved through newly introduced technologies and a better understanding of their potential impacts on people's lives. In such a highly technological environment, it is important to establish more systematic ways of giving ordinary people usable information on these issues. Education programs should not only be adopted at the individual level but also extended to industry, local, state, and federal levels. Public-private partnerships at the national level are essential, but these programs and partnerships should also be expanded to international and global cooperation.

Conclusions and Future Research Directions

Despite its relatively short history, cybercrime research, a subarea of criminology devoted to the study of cybercrime, has built upon each wave of technological advancement. Few scholars have attempted to look at the past, present, and future of cybercrime research, although such work offers useful insights for the discipline and its policy applications (Choi and Lee 2018; Maimon and Louderback 2019; Bossler 2017; D'Arcy and Herath 2011; Holt and Bossler 2014; Holt et al. 2016a). This chapter examined cybercrime vis-à-vis technological developments by identifying key research studies in the field and the crime resulting from technological change. It paid specific attention to periodization and to the industrial/technological developments that have played a significant role in shaping cybercrime research. The later stage of the third Industrial Revolution created previously unimagined spaces for scholars to study cybercrime issues and their implications for society more broadly.

In the age of the fourth Industrial Revolution, from 2010 to the present day, the cybercrimes initiated by the third Industrial Revolution coexist with newer types and versions of cyber-dependent crimes prompted by recent industrial developments. New technologies are already being used for new types of crime, combining traditional crimes with new crimes to produce hybrid forms. In the coming-of-age of the fourth Industrial Revolution, such trends are more prevalent than before.

Future challenges for cybercrime research lie in the changing landscapes of the Internet, virtual technologies, and law enforcement. With new forms of crime on the rise, ordinary people are less likely to have the knowledge to determine whether or not a particular software program or new technology hosts a cybercrime in disguise (Arief and Adzmi 2015).
This lack of awareness increases their probability of becoming victims of the emerging forms of cybercrime. In turn, local, regional, federal, and global law enforcement agencies do not have comprehensive legal guidelines to create new categories of crime (Brenner 2010) and, more importantly, face difficulties in successfully investigating such crimes (Dupont 2017). The increasing complexity of cybercrime has begun to bridge the gaps between different disciplines studying criminal behavior. In line with this, some criminologists and computer scientists collaborate with each other to enhance their own
research studies with an interdisciplinary approach (Holt 2016, p. 8; Maimon et al. 2013, 2014). In the age of the fourth Industrial Revolution, such collaboration needs to become much more rigorous than ever before. Eventually, criminologists should be able to understand the mechanisms and techniques used in the cybercrime ecosystem so that they can effectively contribute to criminal investigations, legislative measures, and crime prevention programs (Casey 2011). Future research should also incorporate multimodal research techniques (Lagazio et al. 2014) that leverage qualitative, quantitative, and technological methods to better understand and explain cybercrime. Interdisciplinary as well as international collaboration may also open new avenues for tackling cybercrime issues in this new age of cybercrime research as a field of study. Government policymakers, private foundations, corporate entities, and academic institutions should increase the funding available to young and established scholars alike to foster successful research projects. Likewise, future research should be not only reactive but also preemptive in investigating, predicting, and analyzing the causes and consequences of cybercrime, using knowledge, technology, and transnational collaboration.

References

American Society of Criminology (ASC). (2019). ASC homepage. https://www.asc41.com/.
Anderson, R., Barton, C., Böhme, R., Clayton, R., Van Eeten, M. J., Levi, M., Moore, T., & Savage, S. (2013). Measuring the cost of cybercrime. In B. Schneier (Ed.), The economics of information security and privacy (pp. 265–300). Berlin/Heidelberg: Springer.
Arief, B., & Adzmi, M. A. B. (2015). Understanding cybercrime from its stakeholders' perspectives: Part 2 – Defenders and victims. IEEE Security and Privacy, 13(2), 84–88.
Arief, B., Adzmi, M. A. B., & Gross, T. (2015). Understanding cybercrime from its stakeholders' perspectives: Part 1 – Attackers. IEEE Security and Privacy, 13(1), 71–76.
Bossler, A. M. (2017). Need for debate on the implications of honeypot data for restrictive deterrence policies in cyberspace. Criminology and Public Policy, 16(3), 681–688.
Brenner, S. W. (2008). Fantasy crime: The role of criminal law in virtual worlds. Vanderbilt Journal of Entertainment and Technology Law, 11, 1–91.
Brenner, S. W. (2010). Cybercrime: Criminal threats from cyberspace. Santa Barbara: ABC-CLIO.
Brenner, S. W. (2012). Cybercrime and the law: Challenges, issues, and outcomes. Boston: UPNE.
Brevini, B. (2017). WikiLeaks: Between disclosure and whistle-blowing in digital times. Sociology Compass, 11(3), 124–157.
Casey, E. (2011). Digital evidence and computer crime: Forensic science, computers, and the Internet (3rd ed.). Waltham: Academic Press.
Chaka, J. G., & Marimuthu, M. (2018). Curtailing the threats to cloud computing in the fourth industrial revolution. In Z. Fields (Ed.), Handbook of research on information and cyber security in the fourth industrial revolution (pp. 112–141). Hershey: IGI Global.
Cheng, Y., Naslund, M., Selander, G., & Fogelstrom, E. (2012). Privacy in machine-to-machine communications: A state-of-the-art survey. In 2012 IEEE International Conference on Communication Systems (ICCS) (pp. 75–79). IEEE.
Choi, K. S. (2008). Computer crime victimization and integrated theory: An empirical assessment. International Journal of Cyber Criminology, 2, 308–333.
Choi, K. S. (2015). Cybercriminology and digital investigation. El Paso: LFB Scholarly Publishing.
Choi, K. S., & Lee, C. S. (2018). The present and future of cybercrime, cyberterrorism, and cybersecurity. International Journal of Cybersecurity Intelligence and Cybercrime, 1(1), 1–4.
Chory, R. M., Vela, L. E., & Avtgis, T. A. (2016). Organizational surveillance of computer-mediated workplace communication: Employee privacy concerns and responses. Employee Responsibilities and Rights Journal, 28(1), 23–43.
Cowen, C., Louderback, E. R., & Roy, S. S. (2018). The role of land use and walkability in predicting crime patterns: A spatiotemporal analysis of Miami-Dade County neighborhoods, 2007–2015. Security Journal, 1–23. https://doi.org/10.1057/s41284-018-00161-7.
Curran, J. (2016). The Internet of history: Rethinking the Internet's past. In Misunderstanding the Internet (pp. 48–84). Abingdon/New York: Routledge.
D'Arcy, J., & Herath, T. (2011). A review and analysis of deterrence theory in the IS security literature: Making sense of the disparate findings. European Journal of Information Systems, 20(6), 643–658.
Dhillon, G., & Moores, S. (2001). Computer crimes: Theorizing about the enemy within. Computers and Security, 20(8), 715–723.
Doerfel, M. L. (1998). What constitutes semantic network analysis? A comparison of research and methodologies. Connections, 21(2), 16–26.
Dupont, B. (2017). Bots, cops, and corporations: On the limits of enforcement and the promise of polycentric regulation as a way to control large-scale cybercrime. Crime, Law and Social Change, 67(1), 97–116.
Federal Bureau of Investigation (FBI). (2017). 2016 IC3 annual report. Washington, DC: Bureau of Justice Statistics. http://www.ic3.gov/media/annualreport/2016_IC3Report.pdf.
Furnell, S., Emm, D., & Papadaki, M. (2015). The challenge of measuring cyber-dependent crimes. Computer Fraud and Security, 10, 5–12.
Gabbal, A. (2015). Kevin Ashton describes "the Internet of Things". Smithsonian Magazine (January). Available at: http://www.smithsonianmag.com/innovation/kevin-ashton-describes-the-internet-of-things-180953749/.
Greenstein, S. (2015). How the Internet became commercial: Innovation, privatization, and the birth of a new network. Princeton: Princeton University Press.
Grégoire, Y., Salle, A., & Tripp, T. M. (2015). Managing social media crises with your customers: The good, the bad, and the ugly. Business Horizons, 58(2), 173–182.
Holt, T. J. (2013a). Exploring the social organisation and structure of stolen data markets. Global Crime, 14(2–3), 155–174.
Holt, T. J. (2013b). Examining the forces shaping cybercrime markets online. Social Science Computer Review, 31(2), 165–177.
Holt, T. J. (Ed.). (2016). Cybercrime through an interdisciplinary lens. Taylor & Francis.
Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35(1), 20–40.
Holt, T. J., & Bossler, A. M. (2015). Cybercrime in progress: Theory and prevention of technology-enabled offenses. New York: Routledge.
Holt, T. J., & Turner, M. G. (2012). Examining risks and protective factors of on-line identity theft. Deviant Behavior, 33(4), 308–323.
Holt, T. J., Smirnova, O., & Chua, Y. T. (2016a). Exploring and estimating the revenues and profits of participants in stolen data markets. Deviant Behavior, 37(4), 353–367.
Holt, T. J., Smirnova, O., & Hutchings, A. (2016b). Examining signals of trust in criminal markets online. Journal of Cybersecurity, 2(2), 137–145.
Howard, P. N. (2015). Sketching out the Internet of Things trendline. Washington, DC: The Brookings Institution. https://www.brookings.edu/blog/techtank/2015/06/09/sketching-out-the-internet-of-things-trendline/. Accessed May 14, 2019.
Hutchings, A., & Holt, T. J. (2017). The online stolen data market: Disruption and intervention approaches. Global Crime, 18(1), 11–30.
IDN. (2018). United Nations definition of cybercrime. https://idn-wi.com/united-nations-definition-cybercrime/. Accessed January 5, 2019.
Internet World Stats. (2019). Internet world penetration rates by geographic regions – March 2019. https://www.internetworldstats.com/stats.htm. Accessed May 10, 2019.
Jaishankar, K. (2008). Space transition theory of cyber crimes. In Crimes of the Internet (pp. 283–301).
Jan Smit, S. K. (2016). Industry 4.0. Directorate General for Internal Policies, European Parliament.
Karlov, A. A. (2017). Cybersecurity of Internet of Things: Risks and opportunities. In Proceedings of the XXVI International Symposium on Nuclear Electronics and Computing (NEC'2017), Becici, Budva, Montenegro, September 25–29, 2017.
Kraemer-Mbula, E., Tang, P., & Rush, H. (2013). The cybercrime ecosystem: Online innovation in the shadows? Technological Forecasting and Social Change, 80, 541–555.
Lagazio, M., Sherif, N., & Cushman, M. (2014). A multi-level approach to understanding the impact of cyber crime on the financial sector. Computers and Security, 45, 58–74.
Lee, C. S. (2019). Datafication, dataveillance, and the social credit system as China's new normal. Online Information Review. https://doi.org/10.1108/OIR-08-2018-0231.
Luo, X., & Liao, Q. (2009). Ransomware: A new cyber hijacking threat to enterprises. In Handbook of research on information security and assurance (pp. 1–6). IGI Global.
Maimon, D., & Louderback, E. R. (2019). Cyber-dependent crimes: An interdisciplinary review. Annual Review of Criminology, 2(1), 191–216.
Maimon, D., Kamerdze, A., Cukier, M., & Sobesto, B. (2013). Daily trends and origin of computer-focused crimes against a large university computer network. British Journal of Criminology, 53, 319–343.
Maimon, D., Alper, M., Sobesto, B., & Cukier, M. (2014). Restrictive deterrent effects of a warning banner in an attacked computer system. Criminology, 52, 33–59.
Mansfield-Devine, S. (2016). Ransomware: Taking businesses hostage. Network Security, 10, 8–17.
Martellini, M., Abaimov, S., Gaychen, S., & Wilson, C. (2017). Assessing cyberattacks against wireless networks of the next global Internet of Things revolution: Industry 4.0. In Information security of highly critical wireless networks (SpringerBriefs in Computer Science) (pp. 63–69). Cham: Springer.
McGuire, M., & Dowling, S. (2013). Cyber crime: A review of the evidence. Summary of key findings and implications. Home Office Research Report 75.
Mitch, D. (2018). The role of education and skill in the British industrial revolution. In The British Industrial Revolution (pp. 241–279). Routledge.
Mohamed, A., & Køien, G. M. (2015). Cyber security and the Internet of Things: Vulnerabilities, threats, intruders and attacks. Journal of Cyber Security, 4, 65–88.
Moore, R. (2010). Cybercrime: Investigating high-technology computer crime. New York: Routledge.
National White Collar Crime Center. (2003). Check and credit card fraud (WCC Issue Papers). Morgantown, WV: National White Collar Crime Center Administration and Research Office.
O'Rourke, A., Pyman, A., Teicher, J., & van Gramberg, B. (2018). Old wine in new bottles? Regulating employee social media use through termination of employment law: A comparative analysis. Common Law World Review, 47(4), 248–271.
Obeidat, Z. M. I., Xiao, S. H., Iyer, G. R., & Nicholson, M. (2017). Consumer revenge using the Internet and social media: An examination of the role of service failure types and cognitive appraisal processes. Psychology and Marketing, 34(4), 496–515.
Peters, M. A. (2017). Technological unemployment: Educating for the fourth industrial revolution. Educational Philosophy and Theory, 49(1), 1–6.
Schnur, Z., & Wilson, R. (2018). Cold war echoes: The Russian effort to interfere in the 2016 election. In International Conference on Cyber Warfare and Security (pp. 675–680). Academic Conferences International Limited.
Schwab, K. (2016). The fourth industrial revolution. New York: Crown Business.
Schwab, K. (2017). The fourth industrial revolution. Currency.
Sorell, T. (2015). Human rights and hacktivism: The cases of WikiLeaks and Anonymous. Journal of Human Rights Practice, 7(3), 391–410.
Stearns, P. N. (2018). The industrial revolution in world history. New York: Routledge.
Taylor, C. A. (Ed.). (2017). The ethics of WikiLeaks. New York: Greenhaven Publishing LLC.
Wall, D. S. (2007a). Cybercrime: The transformation of crime in the information age. Malden: Polity Press.
Wall, D. S. (2007b). Policing cybercrimes: Situating the public police in networks of security within cyberspace. Police Practice and Research, 8(2), 183–205.
Wei, J. (2005). Internet penetration analysis: The impact on global e-commerce. Journal of Global Competitiveness, 13(1–2), 9–24.
West, J. P., & Bowman, J. S. (2016). Electronic surveillance at work: An ethical analysis. Administration and Society, 48(5), 628–651.
Westrick, S. J. (2016). Nursing students' use of electronic and social media: Law, ethics, and e-professionalism. Nursing Education Perspectives, 37(1), 16–22.
World Bank. (n.d.). Individuals using the Internet (% of population). https://data.worldbank.org/indicator/it.net.user.zs?end=2017&start=1986&view=chart. Retrieved October 20, 2018.
Yar, M. (2005a). The novelty of 'cybercrime': An assessment in light of routine activities theory. European Journal of Criminology, 2(4), 407–427.
Yar, M. (2005b). Computer hacking: Just another case of juvenile delinquency? The Howard Journal of Criminal Justice, 44(4), 387–399.
Zureich, D., & Graebe, W. (2015). Cybersecurity: The continuing evolution of insurance and ethics. Defense Counsel Journal, 82(2), 192–198.

3

Technology Use, Abuse, and Public Perceptions of Cybercrime

Steven Furnell

Contents

Introduction ... 46
Technology Abuse and Cybercrime ... 47
Phishing: Technology Abuse Exemplified ... 52
Threats from All Sides: Exploiting the Internet of Everything ... 57
Public Perceptions and Expectations ... 61
Conclusions ... 64
References ... 65

Abstract

The extensive use of information technology systems and networks has delivered undoubted benefits to both individuals and organizations. Unfortunately, at the same time, it offers new opportunities for abuse and criminal activities. This chapter examines the nature of the problem, looking at the different guises that cybercrime can take, and the factors that are influencing its growth. Specific focus is then given to the threat of phishing, which has risen to become one of the most frequently encountered forms of technology abuse, in both personal and workplace contexts. The discussion then moves to consider the increasing breadth of threats that users are actually facing from the increasing array of devices and services that they use, with the rise of the Internet of Things being used to exemplify the issue. All of this leads to consideration of the effect that cybercrime has upon public perceptions, with discussion of how it affects usage and what it should mean in terms of protection. The chapter concludes with recognition of the inevitability of technology-related abuse but the equal recognition that there are actions that can be taken to safeguard against it.

Keywords

Abuse · Cybercrime · IoT · Malware · Perceptions · Phishing

S. Furnell (*)
Centre for Security, Communications and Network Research, University of Plymouth, Plymouth, UK
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_9

Introduction

It feels rather clichéd and unimaginative to open a discussion about technology by saying how much it has advanced and how much society depends upon it, but it is nonetheless fundamentally true. The use of technology has grown in all dimensions and continues to do so. Figures reported by the International Telecommunication Union (ITU) in 2018 clearly show both the significant penetration of technology and its ongoing growth (ITU 2018). Internet access is now available across almost the entirety of the globe, and figures from 2017 placed the total number of Internet users at 3.58 billion, against a world population of over 7.5 billion. Moreover, broadband access is becoming the norm, particularly via mobile devices, with mobile broadband subscriptions estimated to double from 3.4 billion in 2015 to 6.9 billion by 2021. Meanwhile, use of Internet-based services, such as social media, has also seen considerable growth. For example, the ITU report placed Facebook subscribers at over 2 billion in 2017, suggesting that more than half of Internet users are at least registered on the service.

With this volume of use in mind, consider the following quote from US Federal Trade Commissioner Orson Swindle, which actually dates back to the launch of the Organisation for Economic Co-operation and Development's security guidelines in 2002:

Today we are all linked together through powerful information systems and networks. We bank, conduct business, communicate with friends and family, pay bills, shop, do schoolwork, and listen to music through the marvels of information technology. Even more important, the critical infrastructures of our society rely upon the same information systems and networks . . . Along with the incredible benefits we enjoy through this technology, there are inherent vulnerabilities that must be recognized and addressed by all who use computers, modems, the Internet, and networks.
We are all too familiar with the horror stories about viruses, hackers, and worms . . . The more we depend upon interconnected information systems and networks, the greater our vulnerability – unless we act prudently. (FTC 2002)

The quote serves to highlight how implicit and embedded technology had already become in people’s daily lives. Indeed, one of the most striking points is that these are comments from over 15 years ago! How has technology usage changed since then? In one sense not at all – people still use the “marvels of information technology” to achieve all of these things. However, in another sense it has changed almost immeasurably – for example, people now do all of it from devices carried around in their pockets, and in many places enjoy ubiquitous Internet access. As a result, they
have come to rely upon the technology as a constant companion – keeping in touch, paying for things, finding their way around, answering questions, etc. People have become used to all of this being literally within reach and possible from their devices, with the Internet providing the enabling infrastructure to get it done.

Alongside all of this, the worldwide digital economy continues to grow, with Huawei predicting that it is set to double in less than a decade, from US $11.5 trillion in 2016 to an estimated US $23 trillion by 2025 (Huawei 2017). Quite clearly, technology represents one of the major opportunities for growth, in terms of both usage and resultant revenues.

Against this backdrop, the fact that the technology can expose us to threats and add to risk is sometimes forgotten, or at least set to the side. Yet this aspect was already present, even back in the days of Commissioner Swindle's quote, as illustrated by the latter half of the segment presented here. Even then it was not new: three decades ago, the Morris Worm had already shown the potential of an Internet-wide threat (Spafford 1989). In fact, in retrospect, it served to highlight something that users and organizations still have not come to grips with: the vulnerability of the Internet and the individual systems within it. The difference now is that while the Internet Worm (as it was also known, having been the first to affect the network on a large scale) was borne out of curiosity on the part of its author, today's systems are routinely and frequently attacked for malicious and criminal purposes.

This chapter examines the problems posed by technology abuse and cybercrime, as well as how the public perceive the issues and how this in turn affects their trust and engagement with the technology.
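Before moving on, the headline figures quoted in this section can be sanity-checked with back-of-the-envelope arithmetic; the sketch below uses only the chapter's own numbers (ITU 2018; Huawei 2017) and is purely illustrative.

```python
# Back-of-the-envelope check on the figures quoted in this section.
# All inputs are the chapter's own numbers (ITU 2018; Huawei 2017).

internet_users_2017 = 3.58e9   # total Internet users, 2017
world_population = 7.5e9       # approximate world population
facebook_users_2017 = 2.0e9    # Facebook subscribers, 2017

penetration = internet_users_2017 / world_population        # roughly 48%
facebook_share = facebook_users_2017 / internet_users_2017  # roughly 56%, i.e., "more than half"

# Huawei projection: US$11.5 trillion (2016) to US$23 trillion (2025),
# which implies a compound annual growth rate of about 8% per year.
implied_cagr = (23 / 11.5) ** (1 / (2025 - 2016)) - 1

print(f"{penetration:.0%} penetration, {facebook_share:.0%} Facebook share, {implied_cagr:.1%} CAGR")
```

The implied growth rate confirms that the projected doubling of the digital economy corresponds to sustained growth of around 8% per year over the 2016–2025 period.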
Having already considered the growing extent of technology usage, the discussion proceeds by considering the scope of what the related abuse can look like, with a particular focus upon those activities that are typically classed as criminal.

Focus is then given to a particular example in the form of phishing, which has risen to become the most commonly encountered form of cybercrime at the time of writing. However, while it exemplifies the problem of how technology can be abused and exploited, phishing is also a fairly well-recognized threat and largely relates to the use of long-established email services.

Unfortunately, technology threats are also following into newer contexts, and the discussion explores this by looking at what has already happened in the emerging Internet of Things. This is already highlighting the exploitation of technology that should arguably have been more protected from the outset, suggesting that those providing it could be doing more to learn the lessons of the past.

The final segment of discussion then moves to consider what all of this may mean for public perceptions, what they think about cybercrime, and what it means for their use of the technology concerned.

Technology Abuse and Cybercrime

For all the good that they can do, there are numerous ways in which technologies can be abused, and many of these can verge into the realms of criminal activity, giving rise to the potentially intriguing label of "cybercrime." But what is cybercrime and what does it now encompass? More particularly, how are its potential
victims likely to perceive it and form their perceptions around it? A probable route for them to find out is to look online, and a web search for “types of cybercrime” reveals a variety of answers. To illustrate the point, Table 1 summarizes the top three results returned. None are presented as the definitive list of cybercrimes to be concerned about – they are merely the first results from this particular search at a particular time – and although it is an unscientific approach, it mirrors what the public would get when trying to understand cybercrime for themselves. As such, while other chapters have also presented their own related discussion around definitions, the aim of revisiting the theme here is specifically to consider how the public may form perceptions via a less informal route. What does it tell us? The first observation is that the three lists are immediately quite different in content and coverage. Although some areas of commonality can be found between them, all of the lists ultimately label and group things in different ways, and each contains something that the others omit (or at least do not specifically call out with the same name). It is also clear that the role of technology varies quite significantly between the different cases. Another observation is that while all are clearly involving abuse of the technology, there may be questions around whether all of them actually represent crimes (e.g., potentially unwanted programs and online trading issues are candidates where things could fall short of a criminal level). Moreover, a fundamental point in any case is that for something to be a crime, there needs to be a law in place to make it a criminal offense. While all of the items listed in the table could represent crimes, there may be variations in practice depending upon national laws and where the activities take place. Table 1 Types of cybercrime, as listed by three online sources

Government of the Netherlands (a)
• Phishing: using fake email messages to get personal information from Internet users
• Misusing personal information (identity theft)
• Hacking: shutting down or misusing websites or computer networks
• Spreading hate and inciting terrorism
• Distributing child pornography
• Grooming: making sexual advances to minors

Panda Security (b)
• DDoS attacks
• Botnets
• Identity theft
• Cyberstalking
• Social engineering
• Potentially unwanted programs (PUPs)
• Phishing
• Prohibited/illegal content
• Online scams
• Exploit kits

Australian Cybercrime Online Reporting Network (c)
• Attacks on computer systems
• Cyberbullying
• Prohibited offensive and illegal content
• Online child sexual abuse material
• Identity theft
• Online trading issues
• Email spam and phishing
• Online scams or fraud

a www.government.nl/topics/cybercrime/forms-of-cybercrime
b www.pandasecurity.com/mediacenter/panda-security/types-of-cybercrime
c www.acorn.gov.au/learn-about-cybercrime

3 Technology Use, Abuse, and Public Perceptions of Cybercrime

Having made this distinction, it is worth briefly clarifying some further terminology, as there are certain terms that often get used interchangeably but are not in fact synonyms. Indeed, the chapter title itself references the concepts of technology abuse and cybercrime. While these are related, and may indeed overlap, they are not necessarily the same thing. There can be abuse of technology without it necessarily being a crime (e.g., people downloading pornographic content onto workplace systems will typically not be breaking the law but will almost certainly be contravening acceptable use policy). By contrast, there cannot be a cybercrime that does not involve some form of technology abuse (as all forms of cybercrime use or involve the technology for a negative purpose) – and so cybercrime is a subset of technology abuse.

When discussing these issues, it is also common to encounter mention of attacks and breaches. However, these are not synonymous either. An attack may or may not result in a security breach (depending on whether the attack is successful), and likewise a breach may result from things that are not attacks (e.g., some breaches may be accidental rather than deliberate). This gives us an overall relationship as depicted in Fig. 1. To illustrate how different activities would fit in, consider the following examples:

• Downloading pornography – this would represent abuse but not attack or cybercrime.
• Denial of service – this would be technology abuse and an attack, but may not be cybercrime (this would depend upon whether the issue has been criminalized in the location concerned).
• Misuse of legitimately assigned permissions – this could be abuse and a cybercrime, but would not constitute an attack.

Additionally, all of the above could also be breaches if they succeeded. Finally, if it is useful to have a term that encompasses all of them, then the one to use is “incident.”

Fig. 1 Relating abuse, attacks, breaches, and cybercrime

Of course, looking further into the terms, it is clear that none of them are referring to just one thing either. There are numerous things that can actually fall into each category. To examine just one of them, let’s look at how cybercrime can be usefully


decomposed a little further. In the UK, the Serious and Organised Crime Strategy defines two broad categories of cybercrime, which differ on the basis of whether the computer has a primary or secondary role in the proceedings (Home Office 2013):

• Cyber-dependent crimes. These are offenses that can only be committed by using a computer, computer networks, or other forms of ICT. These acts include the spread of viruses and other malicious software, hacking, and distributed denial of service (DDoS) attacks (i.e., the flooding of Internet servers to take down network infrastructure or websites). Cyber-dependent crimes are primarily acts directed against computers or network resources, although there may be secondary outcomes from the attacks, such as fraud.

• Cyber-enabled crimes. These are traditional crimes that are increased in scale or reach by the use of computers, computer networks, or other ICT. Unlike cyber-dependent crimes, they can still be committed without the use of ICT. Examples can include fraud (including phishing and other online scams), theft, and sexual offending against children.

These are rather close to the definitions of computer-focused and computer-assisted crimes that were proposed in my own cybercrime book some years earlier (Furnell 2001). As an alternative way of looking at it, one can consider computers as the target of crime, or as the tool to perpetrate it. Or, to take yet another possible classification, Wall (2007) identifies computer integrity crime (e.g., hacking and denial of service), computer-assisted crime (e.g., scams and thefts), and computer content crime (e.g., offensive communications). Having said all this, these categories will not necessarily align with how things are seen from the victim perspective, where the likely perception will be around “something cyber” having happened to them.
Meanwhile, another important observation is that cybercrime these days is often not a crime committed in isolation – it often feeds through to other forms of criminal activity. For example, various instances of malware and phishing have been linked to organized (as opposed to casual, ad hoc) criminal activity.

With the above in mind, it is relevant to consider what makes cybercrime different, and how its significance has changed over time. In this respect, a number of related factors can be considered:

• Motivation – the possible motives are essentially not that different to criminality in general, with cybercrimes routinely motivated by financial gain, espionage, revenge, and ideology (Furnell 2001). One aspect that has evolved over time is the move away from cybercriminals being personified by stereotypical teenage hackers operating on their own, with an increasing proportion of activities now falling more into the realms of organized criminality. Meanwhile, attacks motivated purely by mischief making, the challenge of beating the system, or boosting one’s own ego are potentially less prominent now, because there is a greater recognition of the activities as criminal acts that would attract a law enforcement response. Another trend over time has been a rise in attacks committed by state-sponsored actors, but this essentially moves matters from the context of cybercrime into an issue of national defense.


• Means – these are technology-based but still focused around a relatively contained set of core methods relating to system penetration, infection, and/or disruption. Some of these will be focused entirely upon attacking the technology, whereas others might incorporate a human aspect, bringing in additional methods such as social engineering. Many of the technology-focused attacks are assisted by automated tools, scripted exploits, and modifications to existing code (e.g., developing new malware variants from a base version), which in turn open the way for a wider group of potential attackers to enter the fray, leading to opportunistic attacks, copycat incidents, and an overall amplification of the problem in general. At the same time, there are also targeted attacks, which are still likely to exploit the same core methods, but in a more crafted manner to aim toward specific victims.

• Opportunity – this aspect is ever growing, thanks to the increased deployment of technology-based systems. In addition to routine use of technology within today’s organizations, a particular example over the last decade or so has been the significant rise of services such as online banking, shopping, and e-government. All of these result in large repositories of user data being maintained in locations that often prove to be vulnerable to attack and exposure, with some cases resulting in millions of account holder details being released as a result (Dunn 2017). Additionally, technology itself creates opportunity to disguise and protect the attackers, with the use of encryption, the dark web, and cryptocurrency all bringing greater challenges for law enforcement in terms of identifying offenders online and bringing them to justice.

• Impact – this is ever growing too, as the fundamental dependency upon systems and technology means that more can go wrong if operations are disrupted, data is lost or disclosed, or other undesirable outcomes occur.
• Geography – the ability to commit cybercrimes remotely and impact victims all around the world is a key aspect of cybercrime that makes it stand out from other crime types. The challenge this then poses to law enforcement in relation to jurisdictional matters can be substantial. Even operating across county or state borders (i.e., where victims are located in multiple different locations across the country, with the offender elsewhere) can be challenging for the traditional setup of local policing.

As such, the real change over time is what can now happen as a result and how widely it can occur. To give an illustration, the UK’s Cyber Security Breaches Survey 2018 reports a variety of outcomes and impacts that resulted from breaches and attacks. For example, in terms of outcomes, 22% reported temporary loss of access to files or networks, and 15% experienced corruption or damage of software or systems. Meanwhile, the most commonly reported of the resultant impacts were the need for new security measures (36%), adding to staff time to deal with the breach or inform others about it (32%), and preventing staff from carrying out their daily work (27%). Other less commonly reported impacts included reputational damage, customer complaints, and loss of revenue.

Having looked at the problem in broad terms, it is interesting to take a more detailed look at some specific incarnations. As such, the discussion now moves to


consider some particular examples of cybercrime and technology abuse that have proven the most problematic in recent years, starting with one that almost everyone will have had some encounter with.

Phishing: Technology Abuse Exemplified

Few examples illustrate the combined ease and scale of cybercrime as vividly as phishing. This is a form of cyber-enabled crime, which is typically linked to further problems such as data theft, identity theft, and wider fraud. To express it in a little more detail, the Anti-Phishing Working Group defines the problem as follows (APWG 2018):

Phishing is a criminal mechanism employing both social engineering and technical subterfuge to steal consumers’ personal identity data and financial account credentials. Social engineering schemes use spoofed e-mails purporting to be from legitimate businesses and agencies, designed to lead consumers to counterfeit Web sites that trick recipients into divulging financial data such as usernames and passwords. Technical subterfuge schemes plant crimeware onto PCs to steal credentials directly, often using systems to intercept consumers’ online account user names and passwords – and to corrupt local navigational infrastructures to misdirect consumers to counterfeit Web sites (or authentic Web sites through phisher-controlled proxies used to monitor and intercept consumers’ keystrokes).

In short, in its most visible guise, phishing means scammers trying to trick people into giving away credentials. However, while this is easy to say, it is often far harder to prevent. Indeed, while the phishing threat is not especially advanced, it is nonetheless very persistent! This is illustrated by Fig. 2, which shows the average volume of unique phishing sites and messages logged per month by the APWG at distinct points over the last decade and a half. One thing that is dramatically clear is that the threat is certainly not in decline: the monthly averages for both sites and messages in the first half of 2018 were more than double those observed 5 years earlier. Despite the heightened awareness, Fig. 2 presents a clear illustration that the problem is only getting worse over time. In spite of all the efforts to tackle the problem, shut down scam websites, filter messages, and raise awareness, the problem of phishing today is more acute than ever.

Indeed, recent evidence shows that phishing has vastly overtaken malware to become the most frequently reported form of attack or breach facing organizations. As an example, Fig. 3 shows the most frequently encountered problems as reported by the aforementioned Cyber Security Breaches Survey series, which has been running since 2016 (with the first run based upon just over 1,000 respondents and the later runs each involving just over 1,500). As can be seen, the reporting categories changed a little between 2016 and the later versions. The clear evidence is that once phishing-related issues start getting counted, they serve to dwarf the other categories. Referring back to the earlier discussion of terminology, it can be noted that the CSBS survey reports these as “attacks or breaches,” so one cannot be sure what proportion of those reported here


Fig. 2 Monthly averages of reported phishing sites and messages 2004–2018 (no. of instances per month; series: unique phishing sites and unique phishing e-mails, at 2004, 2008, 2013, and 2018 H1)

Fig. 3 Most commonly encountered attacks or breaches in the prior 12 months (% respondents, 2016–2018 surveys; categories: fraudulent emails or being directed to fraudulent websites; viruses, spyware, or malware; ransomware; others impersonating organisation in emails or online; denial-of-service attacks; access to computers, networks, or services without permission (i.e., hacking); unauthorised use of computers, networks, or servers by outsiders)


were merely detected attacks versus successful breaches that may (or may not) have then had an associated impact.

Why is phishing so prevalent? Because it has proven successful, and that success has attracted more players into the game. In terms of the success, the historical received wisdom is that 5% of recipients fall for it. In fact, a study by Google and the University of California San Diego showed that the figure can actually be much higher (Bursztein et al. 2014). Their research examined the credential submission rates for a series of distinct phishing pages and determined that the success rate ranged between 3% and 45%, with the average sitting at 13.7%. Therefore, if the premise is well-chosen and effective, some users can clearly be susceptible – certainly enough to justify the effort and sustain this form of threat.

Unfortunately, it’s not just an amplification of the same problem. Indeed, the raw numbers in Fig. 2 only tell part of the story, and they mask the way in which the underlying threat is evolving, particularly in terms of who and what it seeks to target. In the early days, the problem was essentially characterized by an indiscriminate bulk approach. In this context, messages are not targeted or tailored to the recipient, and success essentially relies upon large-scale mailings that reach enough recipients to ensure that some will be fooled into thinking the message looks relevant. While such forms of phishing certainly haven’t gone away, today also sees a far richer picture of approaches, including:

• Spear phishing: targeted at specific individuals or companies, and tailored accordingly, with scammers being more likely to have done some reconnaissance in order to frame the attack.

• Whaling: targeted toward high-value/senior individuals (e.g., higher management within an organization).

• Clone phishing: taking a legitimate email that contained an attachment or link, and then retaining the message content but replacing the attachment/link with a malicious version.
This approach often spoofs the original sender, claiming to be a resend or an update of the original message.

In addition, there are variants of phishing that operate on technology platforms other than email, including vishing and smishing, which relate to scams via voice and short message services, respectively. Meanwhile, social media-related scams are also on the rise (Libeau 2018), because they enable the scammers to leverage personal networks, increasing the chances of success because people receive the invitation to the scam from one of their friends/contacts. As such, the scammers stand to gain from people’s level of trust in their friends.

Many of the tricks that work today are the same ones that have worked for some time, using tried-and-tested social engineering triggers. They may be dressed in new guises or reach us via new routes, but they are often still the “too good to be true” offers, or things trying to catch us off-guard. For example, messages claiming that a bank account is locked, or offering a link to an invoice or tracking status for a product that wasn’t actually ordered, all encourage people to worry and then act in haste,


clicking a link and/or providing information they shouldn’t be sharing so easily. These in turn can leave people exposed to malware or identity theft.

Fig. 4 Examples of financially driven phishing messages

Figure 4 presents two illustrative examples of bulk phishing messages received during the writing of this chapter. Both impersonate the same UK bank, and so from that perspective it is interesting to make some comparisons between them. Both also play on the issue of increased security as the pretext for the scam, claiming that new authentication procedures are being introduced to protect users. Each message also attempts to make itself appear legitimate via use of NatWest color schemes, imagery, and business addresses. However, the first message takes things a bit further by evidently lifting certain text in the latter half of the message from genuine NatWest email correspondence, including the “Important Security Information.” This, like the various other reassuring words, is intended to increase the basis for trust and confidence from the recipients – if they are actually NatWest customers, then aspects of this will presumably look familiar and therefore perhaps feel genuine. However, anyone looking at the text within the security warning would see that it makes a claim that the message then fails to live up to (i.e., that part of the recipient’s postcode is presented at the top of the message as an added security measure), and so for more alert users, this may highlight the message as a scam. The second message dispenses with some of the extra details but retains the same pretext and requested action.

Both messages were sent as indiscriminate bulk mailings rather than specifically targeted toward NatWest customers. As such, many recipients would have been


easily able to spot and ignore the scam simply on the basis of not having a NatWest account in the first place. However, in a proportion of cases, (bad) luck would have ensured that the message arrived in the mailbox of actual NatWest online banking users. In these cases, it is relevant to see what signs would have been there to enable users to avoid being duped. The generic nature of the messaging (e.g., using “Dear Customer” rather than a personalized greeting involving the customer’s actual name) ought to be the first warning sign. The next would be the requested task itself – the “confirm your account details” approach is one of the more established (almost stereotypical) phishing pretexts, and so people would hopefully be aware enough to at least question the legitimacy. A third and fairly definitive indicator would be to look at where the links in the messages are actually taking them. As shown in Fig. 5, these links are clearly not to the online banking site that users ought to be expecting (i.e., in this case a page within the overall https://personal.natwest.com site). However, making this particular check requires two things from users: they need to be aware of how to preview the link before following it, and they need to know how to differentiate a legitimate-looking link from a bogus one. While the first part simply requires them to be told how to do it (e.g., hover their mouse pointer over the link), the latter aspect requires a little more understanding. For example, in both of the links in Fig. 5, users may still be deceived by the fact that they see NatWest being mentioned.
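The check described above can be expressed programmatically, and doing so makes the underlying principle explicit: what matters is whether the link’s host is the legitimate domain (or a subdomain of it), not whether the brand name appears somewhere in the URL. The following Python sketch illustrates this using only the standard library; the lookalike URLs are invented for the example, not taken from the messages discussed.

```python
from urllib.parse import urlparse

def is_legitimate_link(url: str, expected_domain: str = "natwest.com") -> bool:
    """Return True only if the link's host is the expected domain
    or a subdomain of it (e.g., personal.natwest.com)."""
    host = urlparse(url).hostname or ""
    host = host.lower().rstrip(".")
    return host == expected_domain or host.endswith("." + expected_domain)

# A genuine online banking page passes the check...
print(is_legitimate_link("https://personal.natwest.com/personal.html"))  # True

# ...but lookalike links that merely mention the brand do not:
# the brand appears in a subdomain of another site, or in the path.
print(is_legitimate_link("http://natwest.secure-login.example.net/"))    # False
print(is_legitimate_link("http://example.org/natwest/update-details"))   # False
```

This is essentially the judgment users are being asked to make by eye when they hover over a link, which is precisely why seeing “NatWest” somewhere in the address is not a reliable signal of legitimacy.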
In common with many other banks, NatWest’s standard guidance to customers (as presented in their online banking website) is that they “will never ask you for your full Online Banking PIN, full Online Banking password, activation codes or card reader codes.” As such, anyone following the links from these messages would likely have found themselves being asked to perform contradictory actions. However, this requires people to remember the guidance and be aware enough to have kept it at the forefront of their minds – which the pretext of the phishing message will be attempting to work against.

In terms of guarding against the threat, it is extremely hard to give definitive advice that works as a safeguard. People can be told to check the address of the sender, but in many circumstances, they will not know the address that they should expect to see. They can be advised not to click links within email messages, but with so many genuine messages having links within them, it is going to be a challenge to dissuade them. Added to this, even if they are prevented from clicking, some users would still fall for a request to copy and paste a malicious address into their browser or to reply directly with the desired details. Again, if the social engineering is done right, it may bypass all attempts at encouraging good judgment. As such, it is disproportionately easier to attack than to defend against.

Fig. 5 Where the online banking phishing links would take you


In the context of this chapter, phishing is interesting – and indeed worrying – as an example because it shows just how easily the technology can be abused. It requires relatively little skill to develop the scam – indeed, from past experience, 1st year undergraduate students can mock up viable messages (including plausible pretexts and convincing appearance) in less than 30 min, including an accompanying web page. As such, the barrier to entry is rather low, which goes some way to explaining why the problem persists.

Unfortunately, despite the long-standing nature of the threat, many users still remain unaware of how to spot the signs of scams and attacks, and so remain vulnerable to phishing and wider forms of social engineering. Providing the necessary awareness can be thought of as the human equivalent of software patching, in the sense that it aims to fill gaps in knowledge and understanding that attackers would otherwise seek to exploit. In this respect, further sustained attention is required in terms of public awareness-raising efforts, and more targeted activities within organizations to ensure that staff are aware of what they need to be protecting and how to do so (recognizing that lessons learned in the workplace can also help to support the same users in their personal use outside it).

Threats from All Sides: Exploiting the Internet of Everything

There is now an increased array of computing devices, from desktops and laptops to smartphones and tablets. Moreover, the consumer is confronted with an array of smart-this and iThat. As a consequence, a growing array of things that were previously standalone are now connected, online devices: smart TVs, smart speakers, and smart appliances (coffee machines, kettles, fridges, vacuum cleaners, etc.). There seems no end to the things that manufacturers can find some justification for putting online. This extends into the broader context of smart homes, with lighting, heating, and other features all essentially interconnected and then measured and reported via smart meters.

Of course, these developments have relevance in the cybercrime context. The bottom line is that, if they are not properly protected, the technologies leave people more exposed, with more routes for attackers to find us. Indeed, successive technologies and services have become popular, only for their use as a channel to target and exploit users to follow close behind. From desktops to mobile devices, from email to instant messaging, from online banking to social media, the attackers seem to follow wherever users choose to go. The surprising thing is that on each occasion users do not appear to be any better prepared for it to happen. There has been the transition from email-based threats (e.g., malware and phishing) to the same things reaching and affecting users via other online channels, such as instant messaging and social media services. Similarly, the threats have shadowed the use of new devices, with most of the problems that were originally faced on desktops and laptops now following onto smartphones and tablets as well. However, each time it has seemed to involve a new learning curve; the devices get designed and shipped without security appearing to be fully considered, while users don’t seem to carry the lessons from


one context over to the next. Unfortunately, the same looks set to happen with IoT and smart systems, where there is currently a massive growth of devices . . . and evidence to show that many of them are open to attack.

The volume of IoT devices already exceeds the population of the planet several times over. According to IHS Markit (2017), there were over 27 billion IoT devices in 2017, with the volume forecast to more than quadruple – to 125 billion – by 2030 (predicting an average annual growth of 12%). If such systems were vulnerable to attack, then they would clearly represent a massive target for exploitation by cybercriminals. The keyword in that statement is of course “if,” and so it is relevant to consider whether IoT devices to date have fallen into that category. Unfortunately, the answer is resoundingly yes. Various incidents have been reported, and among the more headline-grabbing have been those showing that items such as children’s toys and baby monitors are vulnerable to attack (Laughlin 2017).

At the time of writing, the most significant and widely known example of an IoT breach was the Mirai botnet, which was discovered in August 2016 and then used in the following months to launch a series of disruptive distributed denial-of-service attacks. The Mirai malware compromises vulnerable IoT devices and enlists them as bots into a wider network of compromised systems. The botnet was used to attack various targets, among the most notable of which was the DNS service provider Dyn, which was targeted and put out of action on 21 October 2016 (Williams 2018). As a result of DNS services being unavailable, various high-profile sites that depended upon it for name resolution were rendered inaccessible. This list included Airbnb, GitHub, Netflix, Reddit, and Twitter. As such, the impact of the Mirai activities was ultimately felt by millions of service users.
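The IHS Markit forecast quoted earlier (27 billion devices in 2017, 125 billion by 2030, at roughly 12% average annual growth) can be sanity-checked with a simple compound-growth calculation. The sketch below treats the 12% figure as a single flat rate, which is an approximation of the forecast’s stated average rather than its actual year-by-year projections.

```python
# Sanity-check the IHS Markit forecast: ~12% average annual growth
# applied to 27 billion devices in 2017, compounded through 2030.
start_year, end_year = 2017, 2030
start_devices = 27e9      # devices in 2017 (IHS Markit)
annual_growth = 0.12      # quoted average annual growth rate

projected = start_devices * (1 + annual_growth) ** (end_year - start_year)
print(f"Projected devices in {end_year}: {projected / 1e9:.0f} billion")
```

This yields roughly 118 billion, broadly consistent with the quadrupling to 125 billion that the forecast describes; the small gap presumably reflects varying year-by-year growth rates rather than a single flat average.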
However, while Mirai remained the most well-known example, a great deal continued to happen in the IoT malware landscape after this incident. According to figures from Kaspersky Lab, the number of malware samples for IoT devices rose from a relatively modest 3,219 in 2016 to 121,588 during the first half of 2018 (Kuzin et al. 2018). Kaspersky’s investigation placed honeypot devices on the network and then observed where the attacks came from and how they sought to compromise the devices. In the vast majority of cases, they worked by taking advantage of default passwords, enabling remote connection (and entry) to the devices via the Telnet and SSH protocols. However, in a minority of cases, other methods had been used, including exploitation of vulnerabilities, which (as the Kaspersky report observed) had the advantages of enabling infection to happen more quickly than password cracking and being more difficult for users to address (requiring a software update rather than simply changing the password). Mirai remained notably prominent among the malware observed in the Kaspersky study, with over 20% of the downloaded malware coming from that family, but equally it was far from alone, and the emergence of other IoT-focused malware families (including Gafgyt, Hajime, and Persirai) clearly shows the growing interest in this sector.

To date, it should be noted that most of the attacks have not been felt by consumers themselves. The intention behind Mirai and other incidents has been to use compromised devices to mount denial of service attacks against other parties. The consumer’s device would be used, but they would not be the victim. At the same


time, attacks such as the aforementioned hacking of toys and baby monitors show that IoT exploitation can quite clearly cause invasion of privacy, and as more devices appear and become further embedded in smart homes and lifestyles, the potential to affect and disrupt the individual becomes correspondingly greater.

So why does this level of vulnerability exist? In many cases, the prior attacks were possible because device owners had neglected to change the passwords from known defaults, which of course offers a ready-made opportunity for attackers. This in turn raises the question of why devices should be shipped in a manner that invites problems – an issue that has more recently been tackled by new legislation in California. Specifically, Senate Bill 327 on the information privacy of connected devices requires that, from the start of 2020, manufacturers of Internet-connected devices must:

equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure. (California Legislature 2018)

In layperson’s terms, one of the underlying specifics is that the default password must now be unique to each device manufactured. As the name of the bill suggests, its scope extends to all connected devices rather than specifically IoT devices, but it will clearly be beneficial in that context. Meanwhile, in the UK, the government (via the Department for Digital, Culture, Media and Sport and the National Cyber Security Centre) has issued a code of practice for the security of consumer IoT devices, with the aim of getting developers, manufacturers, and retailers of the devices to voluntarily adopt practices that will safeguard those using the devices (DCMS 2018). A total of 13 guidelines are proposed, covering various aspects of protection (including storage and communication of personal data, and the ability to keep the devices suitably updated). Notably, the first of the guidelines mirrors the US legislation, calling for the elimination of universal defaults for usernames and passwords.

Although these are clearly positive moves, it is interesting to note that the use of unique default passwords had already been standard practice for years with wireless access points (notably by agreement from within the industry rather than by law). This was driven by prior experience of the first generations of devices being widely exploited in the era when known password defaults had been used. As such, it is revealing to see that this practice was not carried forward by default when IoT devices started to emerge. As with many aspects of security, it appears that the same lessons need to be learned and relearned repeatedly for each new generation of device and service that are introduced.
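One way a manufacturer can meet the unique-default requirement without maintaining a per-device password database is to derive each default credential from the device’s serial number and a factory-held secret. The Python sketch below is purely illustrative of that general approach: the secret, serial format, and password length are all invented for the example, and it does not describe any particular vendor’s provisioning process.

```python
import hashlib
import hmac

def default_password(serial: str, factory_secret: bytes) -> str:
    """Derive a per-device default password from the device serial.
    Each serial yields a different but reproducible password, so no
    two devices ship with the same default, yet the factory can
    reprint a label without storing any passwords."""
    digest = hmac.new(factory_secret, serial.encode(), hashlib.sha256).hexdigest()
    return digest[:12]  # e.g., printed on the device label at manufacture

secret = b"example-factory-secret"  # hypothetical; held at the factory, not on devices
print(default_password("SN-000001", secret))
print(default_password("SN-000002", secret))  # differs from the first
```

Because the derivation is keyed with a secret that never leaves the factory, knowing one device’s default (or the scheme itself) does not reveal any other device’s default, which is exactly the property that shared universal defaults lacked.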
In this case the onus was on manufacturers to avoid the problem in the first place, but it was equally clear that users had also learned little, as most continued to use their device with the standard password (i.e., the same one that they would know thousands of other devices were also using, because it said so in the user instructions). However, is it really fair to blame the user here? In these cases more than others, they were buying consumer electronics devices and may quite reasonably have done so with the expectation that they

60

S. Furnell

would be safe to use from the outset. It's also worth thinking about what this means at a more basic level, with consumers now finding themselves becoming Internet users in a whole new context. We still haven't reached the point where everyone thinks sufficiently about security on their traditional computers. As such, it's likely to be a tall order to now expect them to think about it on devices that they've been used to using for years without concern, and which they don't see as being computers in the first place. Moreover, they won't want to be concerned about it. Some people aren't even looking to have smart devices in the first place; they are being sold to them, or upgraded to them by default. Indeed, as with mobile phones, it might soon be difficult to choose options that lack the smart functionality. Having been landed with something they didn't necessarily choose, the last thing that many consumers will want is additional concerns or responsibilities around securing it! As such, any protection and updating processes will need to be as automated and transparent as possible.

All of this leads to a broader question: is anybody really offline anymore? Businesses certainly aren't, and it is becoming a rarity for individuals, whose personal Internet footprint grows ever larger. While it is possible to find the occasional maverick who is determined not to sign up to anything, or who does so with false data to prevent their true details being shared, it becomes increasingly difficult to live normal daily life without being online. Indeed, people are expected to be online. Government services and banking are clear examples where people are now routinely referred to the website as the first port of call, while physical premises make way for online presence. In many respects, people have lost the choice; if they opt out, they lose out.
Added to this, consumers are likely to be largely unaware of what data is collected and with whom it might be shared. This, in turn, has implications. While some people are happily racing to adopt the latest gadgets and online features, it is relevant to recognize that some are now becoming Internet users by default, without necessarily having actively chosen to be. The problem – as already highlighted – is that manufacturers haven’t been taking the precautions, and so many devices have been left vulnerable. Consumers are therefore presented with the smart features as new capabilities, and they are indeed often useful to have. However, many buyers won’t naturally associate these benefits with also potentially exposing themselves to greater risk. Customers currently get labels on white goods to show their energy efficiency. They might soon need them to indicate the amount of data they’re sharing. While describing such an approach in detail is outside the scope of this discussion, one could envisage such ratings being derived on the basis of combining factors such as: • The type of information that users are expected to provide (e.g., from basic personal information, through contact details, to financial account data) • The extent to which data may be shared (e.g., to a single provider or a wider community) • The user’s level of control over the process (e.g., is it optional or a mandatory aspect of accessing a given service?)

3

Technology Use, Abuse, and Public Perceptions of Cybercrime

61

• Whether the device will also communicate/interact with others in the home network and, again, whether this is optional or essential to the functionality being provided

This would of course be a considerable change from the current situation, in which consumers are not directly guided to think about these aspects at all. At the same time, it would represent a significant contribution toward raising their awareness and, in turn, enabling a level of informed choice.
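To make the idea concrete, a data-sharing label of this kind could be derived by scoring each of the factors listed above and mapping the total onto a letter band, much like energy-efficiency ratings. The point values, factor names, and band boundaries below are entirely hypothetical, invented purely to show the shape of such a scheme:

```python
# Hypothetical scales for the four factors discussed above.
FACTOR_SCALES = {
    "info_type": {"basic": 1, "contact": 2, "financial": 3},
    "sharing": {"single_provider": 1, "wider_community": 3},
    "user_control": {"optional": 1, "mandatory": 3},
    "home_network": {"none": 0, "optional": 1, "essential": 2},
}

def data_sharing_rating(info_type, sharing, user_control, home_network):
    """Combine the factor scores and map the total to an A-E label."""
    total = (FACTOR_SCALES["info_type"][info_type]
             + FACTOR_SCALES["sharing"][sharing]
             + FACTOR_SCALES["user_control"][user_control]
             + FACTOR_SCALES["home_network"][home_network])
    for upper_bound, label in [(3, "A"), (5, "B"), (7, "C"), (9, "D")]:
        if total <= upper_bound:
            return label
    return "E"

# A device collecting only basic data, shared with a single provider,
# optionally, and not talking to other home devices rates best:
print(data_sharing_rating("basic", "single_provider", "optional", "none"))  # A
```

A consumer could then compare an "A"-rated device against an "E"-rated one at a glance, without needing to understand the underlying data flows.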

Public Perceptions and Expectations

All of this to some extent leads to the question of what the public think about cybercrime and, in particular, whether it has any resulting effect upon them or their use of technology. Perceptions can of course take various forms. Do individuals see cybercrime as a threat? Is it something that they consider to be their problem? Are they worried or frightened by it? Have they been a victim of it? The common, and perhaps predominant, feeling is likely to be essentially fear of the unknown. Many users will have heard of things like the dark web without having actually been there, and so perceptions are most likely to be based upon media reporting or word of mouth. Meanwhile, the more accurate perceptions are likely to be shaped by direct personal experience. Things that people have heard about are all very well, but they are happening to others, and moreover they are often reported as happening to organizations rather than individuals (even though the impact of incidents such as data breaches in an organization will ultimately mean that individuals can be affected as well, they don't generally get to hear these stories). If users have had an encounter of their own, then it is likely to focus their attention upon the reality of what it meant to them.

Before looking at the situation today, it is perhaps interesting to look back at a survey that was conducted two decades ago (Dowland et al. 2000) and reflect upon some of the key findings in relation to public awareness and perception of cybercrime in that era. It was a vastly different technology landscape; it was pre-broadband in homes and most small organizations, and mobile Internet access essentially didn't exist (other than getting your laptop online via a modem). As a result, the threat environment was also markedly different.
Hacking and malware were of course already recognized problems, and the arrival of the Internet had ushered in problems such as denial of service and website defacement, but issues such as phishing had yet to make an appearance. The reported results drew upon responses from 175 members of the public, virtually all of whom were established users of technology. Nonetheless, the findings almost immediately highlight differences in the way that this usage occurred, with 88% claiming to have Internet access at work but less than half (48%) having it at home (even so, this was significantly ahead of the UK average for domestic access, which then stood at just 14%). In terms of the resultant findings around cybercrime,
over 80% of the respondents considered it to be a problem, and "viruses" received the clearest recognition as a problem issue, with 88% rating them as a serious or very serious concern. Perceptions around hacking were somewhat more varied: 71% considered it to be unacceptable, and the dominant perceptions of what motivated hackers were things like curiosity, the thrill of it, and the challenge of beating the system (all of which scored between 80% and 95% recognition as motivations). Meanwhile, hacking as a malicious or financially driven activity was notably less recognized, with 68% and 62% recognition, respectively. While these were clearly still majority perceptions, the figures suggested that the dominant public view in that era took a rather more lenient stance on the issue (which was likely colored by media treatment of cases up to that time, in terms of both news coverage and dramatizations).

Jumping forward to the present day, findings published by the UK Home Office consider the issues around public perceptions, and misconceptions, of cybercrime (Home Office 2018). Their key finding was that a large proportion of the public (along with small- and medium-sized enterprises) underestimate the risk of cybercrime and also feel powerless in terms of protecting themselves against it. Specifically, the report identifies and then attempts to debunk three specific myths:

• Cybercrime isn't something that I need to be concerned about.
• Cybercrime isn't "real" crime.
• There is nothing more I can do to protect myself.

It is hoped that the prior discussion in this chapter has already indicated the fallacy of the first two points; the examples show that cybercrime clearly has the potential to reach the individual, and the impacts can certainly be real. The third myth is where individuals and organizations often fall down, and it is interesting to consider how such perceptions have been formed.
There is a significant chance that users are becoming desensitized by the number of high-profile breaches reported in the media. After all, they are likely to have seen many of the services they regularly use fall victim to hacking attacks. Facebook, Instagram, LinkedIn, Snapchat, Twitter . . . name a popular service, and there has typically been a data breach of some sort. With such big names being hit, yet clearly carrying on, there is a sense of attacks becoming normalized. Meanwhile, many users who heard about these breaches will have felt personally unaffected and may develop a sense that they are safe. In one respect this is clearly a positive – it is certainly not desirable for people to be driven away from the benefits that technology can offer them. On the other hand, it is important to avoid a culture of complacency, in which users feel that they are safe regardless of how they use things. There is already ample evidence of people and organizations leaving themselves more exposed due to poor password choices, careless data sharing, inadequate malware protection, and lack of backups (OTA 2018; Get Safe Online 2018). People don't need encouragement to settle for a false sense of security; it is already the easy path of no effort.


In parallel with this, another element of the “nothing more I can do” issue is that many people feel confused or uncertain about what they should be doing. Indeed, this issue can even affect users who are otherwise keen to adopt and use the technology. As an example, findings from the UK Safer Internet Centre have indicated that – despite being regular and enthusiastic online users – young people face challenges in terms of their online data sharing, with almost half indicating that they feel anxious or not in control as a result (UK Safer Internet Centre 2019). Past research has established that the public don’t necessarily expect security to work in the first place and therefore accept that breaches have a chance to occur and affect them. What they are interested in is not the (potentially false) promise that security is in place to stop attacks but rather an assurance that they will not be impacted, or will be safeguarded, if attacks succeed (Lacohee et al. 2006). While this research was conducted over a decade ago, it is unlikely that the public position has substantially changed, particularly in view of the ongoing and increased evidence of major providers and services having suffered breaches that exposed customer data. As such, confidence is at least as likely to be developed by clarity around the mitigations and restitution that would be in place after an incident, as it is by assertions that systems are protected from attack. Indeed, this view is supported by looking at more recent data from the European Union, with a survey of over 28,000 users across 28 member countries revealing that while almost three quarters of Internet users are concerned that their personal information could not be kept secure by websites, only one in ten respondents tended to have reduced or opted out of activities such as online shopping and banking (European Commission 2017). 
This suggests that while many are not confident in the security, they continue to use the services anyway with the assumption that other safeguards will ultimately protect them. Having said all this, it does not mean that individual efforts are not warranted. Indeed, if people (and indeed organizations) were to do some of the basics, the situation for cybercriminals would become much harder. Although some of the discussion has focused upon the details of cybercrime, in many cases it is not necessary to know the minutiae of the threats in order to know how to respond; it is simply necessary to accept that the threats exist, that they are significant, and that protection is required. Moreover, the key safeguards are often fairly standard, and many of them apply across several threats, as well as to supporting security beyond the specific scope of cybercrime. For example, one of the major routes aiding hackers and malware attacks is that many systems remain unpatched against known vulnerabilities (e.g., in operating systems, server, and application software). For example, findings from Fortinet (2017) suggested that 90% of organizations experienced exploits of vulnerabilities that were 3 or more years old. As such, a key safeguard is secure configuration, which relates to applying patches and ensuring that other appropriate settings are configured securely. By installing patches in a timely manner, attackers are denied the opportunity to exploit the holes. However, a common problem is that patches do not get applied as quickly as they should, and the window of opportunity for attackers is extended (Ensor 2017). With these issues occurring even at the organizational level, where there will often be dedicated staff
and expertise to help, it is clear that personal users run the risk of being more exposed when left to manage the situation for themselves. Indeed, returning to the European Union findings, it was established that while the biggest area of concern was finding malicious software on their devices (cited by 69% of respondents), only 45% claimed to have installed or changed antivirus software to protect themselves. While the survey did not explore the reasons more fully, this again illustrates a notable gap between the threat perception and the ability (or perhaps willingness) to act upon it. Meanwhile, another significant reason that some crimes can persist is that many users remain unaware of how to spot the signs of scams and attacks (Jones 2018). As a result, they remain vulnerable to threats such as phishing and wider forms of social engineering. In a sense, providing such awareness can be thought of as the human equivalent of software patching, because it aims to fill gaps in users' knowledge and understanding that attackers would otherwise seek to exploit. Attention is required in terms of both public awareness-raising efforts and more targeted activities within organizations to ensure that staff are aware of what they need to be protecting and how to do so. However, at present, society seems some way short of being a cyber-aware population. Since the public at large cannot be relied upon to be cybersecurity literate, this leaves a lot of ground for organizations to make up if they are to ensure awareness among their workforce; and unfortunately, the indications are that many fail to do so. Moreover, while this remains the case, there is likely to be an ongoing perception gap between what people think they understand about cybercrime threats and what they actually ought to be doing about them.
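The earlier point about unpatched systems, namely that delays in applying fixes extend the attackers' window of opportunity, can be expressed as a simple calculation. This is an illustrative sketch with invented dates, not a measurement drawn from any of the cited reports:

```python
from datetime import date
from typing import Optional

def exposure_window(patch_released: date,
                    patch_applied: Optional[date],
                    today: date) -> int:
    """Days during which a publicly known vulnerability stayed exploitable
    on a system: from the patch's release until it was applied (or until
    today, if it still hasn't been)."""
    end = patch_applied if patch_applied is not None else today
    return (end - patch_released).days

# Hypothetical example: a fix released on 1 March but only applied on
# 20 April leaves a 50-day window in which the known hole was exploitable.
window = exposure_window(date(2017, 3, 1), date(2017, 4, 20), date(2017, 6, 1))
```

On this view, Fortinet's finding that exploits of vulnerabilities 3 or more years old still succeed implies exposure windows measured in the thousands of days for the systems concerned.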

Conclusions

As the discussion has shown, technology abuse and cybercrime are a direct and ultimately inevitable result of technology usage. As technology use and the related dependency have increased, so the breadth and volume of problems have also grown. Comparing today with the past, the problem is worse and the exposure is greater, but technology adoption continues unabated – showing that the desire to use technology clearly and consistently outweighs the concerns. It is also relevant to realize that many breaches will continue to occur regardless of individual awareness and precautions around cybercrime. For example, the compromise of online services, and the consequent harvesting of users' personal account data, is entirely beyond the control of the individual users affected by it. However, they are the ones who then face the risk of their details being misused, and it is they who would bear the inconvenience and impacts of financial losses, identity thefts, and other undesirable outcomes that could occur as a consequence. This of course has the potential to reinforce users' perception of being helpless victims, as well as the belief that there is little point trying to do anything to improve their own personal cybersecurity practices because they may be let down by someone else's anyway. While the underlying points are valid, the conclusions derived from them are flawed. To use a fitting but nonetheless unpleasant analogy, the fact
that someone could become a victim of a road traffic accident, along with other passengers, while travelling on a bus (i.e., driven by someone else) does not have any bearing on the need for them to be individually safe and competent when driving their own vehicle.

As with the admittedly clichéd opening of the chapter, it is always tempting to conclude by looking to the future, trying to predict what will happen next and from where the new threats will come. However, while there will doubtless be new threats, they are not really needed to show that there is a problem. There is a certainty that the threats already known about will continue to grow in scale and that they will be faced from a greater range of sources. Malware, phishing, and other attacks are already an ever-present and routine backdrop to the use of IT, and these look set to continue. All the while, technologies such as mobile, cloud, and IoT are ever more pervasive and will continue to increase the routes for exposure and exploitation. Unless there are deliberate efforts to address it, technology use and dependency will continue to increase disproportionately to the level at which it is protected. However, if the issues are faced and acted upon responsibly, then it is clearly possible to enjoy the benefits of the technology and reduce the risk of encountering its negative aspects.

References

APWG. (2018). Phishing activity trends report, 1st Quarter 2018. Anti-Phishing Working Group, 31 July 2018. Retrieved from http://docs.apwg.org/reports/apwg_trends_report_q1_2018.pdf
Bursztein, E., Benko, B., Margolis, D., Pietraszek, T., Archer, A., Aquino, A., Pitsillidis, A., & Savage, S. (2014). Handcrafted fraud and extortion: Manual account hijacking in the wild. Proceedings of the 2014 Conference on Internet Measurement Conference (IMC'14), Vancouver, 5–7 November 2014, 347–358.
California Legislature. (2018). SB-327 Information privacy: Connected devices. Senate Bill No. 327, Chapter 886, 28 September 2018. Retrieved from https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB327
DCMS. (2018). Code of practice for consumer IoT security. Department for Digital, Culture, Media and Sport, 14 October 2018. Retrieved from https://www.gov.uk/government/publications/secure-by-design/code-of-practice-for-consumer-iot-security
Dowland, P. S., Furnell, S. M., Illingworth, H. M., & Reynolds, P. L. (2000). Computer crime and abuse: A survey of public attitudes and awareness. Computers & Security, 18(8), 715–726.
Dunn, J. E. (2017). 22 of the most infamous data breaches affecting the UK. Retrieved from http://www.techworld.com/security/uks-most-infamous-data-breaches-3604586/
Ensor, C. (2017). A brief history of Cyber Essentials. Retrieved from https://www.cyberessentials.ncsc.gov.uk/2017/11/27/a-brief-history-of-cyber-essentials
European Commission. (2017). Europeans' attitudes towards cyber security. Special Eurobarometer 464a, September 2017. Retrieved from https://data.europa.eu/euodp/data/dataset/S2171_87_4_464A_ENG
Fortinet. (2017). Threat landscape report Q2 2017. 15 August 2017. Retrieved from https://www.fortinet.com/blog/threat-research/dissecting-our-q2-threat-landscape-report.html
FTC. (2002). OECD launches guidelines for the security of information systems and networks: Towards a culture of security. Press Release, Federal Trade Commission, 23 August 2002.
Furnell, S. (2001). Cybercrime: Vandalizing the information society. London: Addison Wesley Professional.
Get Safe Online. (2018). Britain is a nation of digital over-sharers. Retrieved from https://www.getsafeonline.org/news/britain-is-a-nation-of-digital-over-sharers/
Home Office. (2013). Serious and organised crime strategy. Cm 8715, October 2013. ISBN 9780101871525.
Home Office. (2018). A call to action: The cyber aware perception gap. 1 March 2018. Retrieved from https://www.gov.uk/government/publications/cyber-aware-perception-gap-report
Huawei. (2017). Digital spillover: Measuring the true impact of the digital economy. Huawei Technologies Co., Ltd. Retrieved from https://www.huawei.com/minisite/gci/en/digital-spillover/index.html
IHS Markit. (2017). The internet of things: A movement, not a market. IHS Markit e-Book. Retrieved from https://cdn.ihs.com/www/pdf/IoT_ebook.pdf
ITU. (2018). The state of broadband: Broadband catalysing sustainable development, September 2018. Broadband Commission for Sustainable Development, International Telecommunication Union, Geneva, Switzerland. Retrieved from https://www.itu.int/dms_pub/itu-s/opb/pol/S-POL-BROADBAND.19-2018-PDF-E.pdf
Jones, R. (2018, January 22). Think you can spot scammers? Just 9% of Britons really can. The Guardian. Retrieved from https://www.theguardian.com/money/2018/jan/22/spot-scammers-home-office-fraud
Kuzin, M., Shmelev, Y., & Kuskov, V. (2018). New trends in the world of IoT threats. Retrieved from https://securelist.com/new-trends-in-the-world-of-iot-threats/87991/
Lacohee, H., Phippen, A. D., & Furnell, S. M. (2006). Risk and restitution: Assessing how users establish online trust. Computers & Security, 25(7), 486–493.
Laughlin, A. (2017). Safety alert: See how easy it is for almost anyone to hack your child's connected toys. Retrieved from https://www.which.co.uk/news/2017/11/safety-alert-see-how-easy-it-is-for-almost-anyone-to-hack-your-childs-connected-toys/
Libeau, F. (2018). Phishing via social media up 100 percent, now a preferred vector. Retrieved from https://www.informationsecuritybuzz.com/articles/phishing-via-social-media-100-percent-now-preferred-vector/
OTA. (2018). Cyber incident & breach trends report – Review and analysis of 2017 cyber incidents, trends and key issues to address. Online Trust Alliance, The Internet Society, 25 January 2018. Retrieved from https://otalliance.org/system/files/files/initiative/documents/ota_cyber_incident_trends_report_jan2018.pdf
Spafford, E. H. (1989). The internet worm program: An analysis. ACM SIGCOMM Computer Communication Review, 19(1), 17–57.
UK Safer Internet Centre. (2019). Our internet our choice – Understanding consent in a digital world. Retrieved from https://www.saferinternet.org.uk/safer-internet-day/safer-internet-day-2019/our-internet-our-choice-report
Wall, D. S. (2007). Cybercrime. Cambridge, UK: Polity Press.
Williams, C. (2018, October 21). IoT gadgets flooded DNS biz Dyn to take down big name websites. Retrieved from https://www.theregister.co.uk/2016/10/21/dyn_dns_ddos_explained/

4

Race, Social Media, and Deviance
Roderick Graham

Contents
Introduction ... 68
Racist Behavior ... 69
Deviant Anti-racist Behavior ... 70
The Affordances of Social Media: Anonymity ... 70
Deindividuation ... 71
Online Disinhibition Effect ... 72
The Affordances of Social Media: Connectivity ... 73
Deviant Racist Behavior: Symbolic Violence in the Form of Hate Speech ... 74
Defining and Describing Hate Speech ... 74
Hate Speech Research ... 75
Deviant Racist Behavior: Building Far-Right Extremist Groups ... 77
The Importance of Stormfront ... 78
Spreading Racist Discourse ... 78
Deviant Anti-racism ... 79
The Importance of Black Twitter ... 80
Deviant Anti-racism: The Production of Counterdiscourses ... 80
Deviant Anti-racism: Collective Action Through Social Media ... 82
Black Americans and Activism on Twitter ... 82
Antifa ... 83
Other Networked Anti-racist Movements ... 84
Conclusion and Directions for Future Research ... 85
Cross-References ... 86
References ... 86

R. Graham (*)
Old Dominion University, Norfolk, VA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_10

67

68

R. Graham

Abstract

This chapter is a review of the scholarly work on racist and deviant anti-racist behavior on social media platforms. Racists and anti-racists leverage the affordances of social media for deviant behavior. These affordances are anonymity and connectivity. Deviant behavior is explored along two dimensions. One dimension is the use of communication or rhetoric for deviant purposes. Racists use social media to visit symbolic violence, in the form of hate speech, on racial minorities. Meanwhile, anti-racists use social media rhetorically to develop counterdiscourses that explain social phenomena in non-racist ways. Racists use social media for political purposes to build extremist, far-right groups. Research suggests several ways that social media can be leveraged, but this chapter will focus on the propaganda component. Anti-racists, on the other hand, use social media to support social movements that agitate for racial justice. This chapter ends by identifying avenues for further research. Scholars need to expand their analyses outward from the platforms of Twitter and Facebook. Scholars must also expand their analyses outward from Black Twitter and the Black Lives Matter movement and explore the anti-racist activities of other racial minorities.

Keywords

Social media · Racism · Anti-racism · Twitter · Facebook · Black Lives Matter · Black Twitter · Standing Rock · Hate speech · Stormfront

Introduction

This chapter reviews the research on a set of phenomena found at the intersection of race, social media, and deviant behavior. Specifically, it provides a review of the scholarly work on racist and deviant anti-racist behavior on social media platforms. The understanding of deviance used in this chapter is one based on objective norms – behavior that violates the expectations of a given group and is negatively labeled and evaluated. The emphasis is placed on Western societies.

Racist behavior refers to actions or words that suggest that one race is inferior to another. These actions could be direct discriminatory behaviors, such as refusing to hire someone or levying harsher punishments on an offender because of their race. Racist behavior could also be the prejudicial sentiments and stereotypical thoughts one has toward a racial group. Over the past several decades, social scientists have chronicled a shift in the racial views held by Whites. Western societies condemn "old racisms" – the types of behaviors that assume biological differences between racial groups or suggest that a racial group does not deserve full participation in society. As a result, most social scientists have shifted their focus to "new racisms" – the types of behaviors that assume that cultural differences between racial groups explain current levels of racial inequality (Bonilla-Silva 2017; DiAngelo and Dyson 2018). Asserting that Black Americans need to be more diligent in school or that Asian Americans need

4

Race, Social Media, and Deviance

69

to be more assertive are examples of culture-based new racisms. These new racisms are not always seen as deviant and are often a part of wider discourse. Therefore, this chapter focuses primarily on the research surrounding old racism on social media platforms, as this strain of racism is more roundly condemned.

In contradistinction, anti-racist behavior refers to words and actions that support racial equality and diversity. Anti-racists, unlike racists, are not usually condemned. Notions of racial equality, diversity, and multiculturalism are supported by wide segments of society. However, there are instances when anti-racism extends beyond these normative understandings. This can occur, for example, when new mechanisms for reproducing racial inequality are suggested. The discussions in the media around microaggressions – the everyday insults experienced by racial minorities – illustrate this dynamic (see Mac Donald 2018 for criticism of microaggression research from popular media; see Sue 2010 for key research on the concept). Suggesting policy changes that threaten the racial status quo can also provoke opposition. While racial equality is supported in the abstract, Whites may reject policies such as affirmative action or changes to the criminal justice system.

Social media can be described as "a broad and growing portion of the Internet that is designed as a platform which allows users, and groups of users, to create and exchange content, often in an interactive or collaborative fashion" (Gainous and Wagner 2014, p. 2). Popular social media platforms include Facebook, YouTube, Twitter, Instagram, and Pinterest. Content production on these platforms includes blogging, posting audio or video, commenting on content, and leaving other pieces of data signifying sociality, such as liking or upvoting a piece of content. The use of social media is a common element of modern Western societies.
According to a 2018 Pew Research Center report, about 70% of Americans use social media (Smith and Anderson 2018). To restate, this chapter provides a review of the scholarly work on the use of social media platforms for racist behavior and deviant anti-racist behavior. Table 1 displays these behaviors, organized along a rhetorical versus political dimension and a racist versus anti-racist dimension. A brief description is given below.

Racist Behavior

There are two types of racist behavior that will be covered in this chapter. The first is hate speech as a form of cyberviolence (Wall 1998). Scholars have argued that hate speech produces degrees of psychological trauma in victims. Hate speech is a

Table 1 Deviant behaviors of racists and anti-racists

              Rhetorical                                   Political
  Racist      Cyberviolence in the form of hate speech     Building far-right extremist groups
  Anti-racist Developing counterdiscourses about race      Collective action for far-left activities
              and racial inequality

70

R. Graham

common element of many social media platforms, even as these platforms attempt to identify and remove such speech. The second type of racist behavior that will be discussed is the use of social media to build far-right extremist groups. Social media can be used to spread propaganda, socialize with like-minded people, and provide emotional support. This chapter will focus on the aspect that has garnered the most attention by scholars – the spread of racist discourse.

Deviant Anti-racist Behavior

There are two types of deviant anti-racist behavior discussed in this chapter. The first is the development of counterdiscourses. Until the end of the 1960s, racial minorities who advocated for racial equality with Whites were viewed as radical, and their ideas were excluded from mainstream media outlets. While this has abated greatly in the twenty-first century, racial minorities still feel the need to develop counterdiscourses (Foucault 1982; Fraser 1990). Counterdiscourses constructed by minority groups are systems of knowledge that challenge mainstream notions of race and racial inequality. The second type of deviant anti-racist behavior discussed in this chapter is collective action by groups on the far left politically agitating for changes in racial practices. Social media platforms have been instrumental in turning anti-racist narratives into political action. Notable instances include the protests in Ferguson, Missouri, in response to the deaths of several young black men, and the response to perceived environmental racism on the Standing Rock Reservation in North and South Dakota.

This chapter proceeds as follows. First, two core technological affordances of social media are discussed. Affordances are the enablers and constraints a given technology presents to a user (Davis and Chouinard 2016). These affordances make it possible for both racists and anti-racists to circumvent many of the measures of social control applied in the physical environment to maintain normative racial behavior. The two discussed are anonymity and connectivity. Next, the chapter focuses on racist behaviors – the use of hate speech on social media platforms and the use of social media to build extremist hate groups. This is followed by a focus on anti-racist behavior – the development of counterdiscourses on social media platforms and the use of those platforms for collective action.
This chapter concludes by identifying some of the gaps in the research and positing future research directions.

The Affordances of Social Media: Anonymity

Anonymity is seen as a major reason why racial dialogue is more prevalent on social media. Computer users can speak freely – in both racist and anti-racist directions – without fear of sanctions. It is important, however, to take a nuanced view of anonymity on social media. Unlike other spaces online, social media has fewer instances of pure anonymity. First, many platforms require users to link their real selves to their accounts (users may, of course, violate these terms). Second, many of the excesses

brought about by anonymous – or even false – accounts have been curtailed by platforms through learning algorithms. Thus, although pure anonymity is a rarity, something approximating anonymity is the norm. Individuals can still hide behind screen names and conceal various aspects of their identity. In the paragraphs that follow, two theories are introduced that scholars have used to explain the link between anonymity and deviant behavior. These theories, emanating from the field of social psychology, are deindividuation and disinhibition. Both theories argue that individuals operating in the digital environment are freed from societal mechanisms of social control.

Deindividuation

Deindividuation theory has its foundation in Gustave Le Bon’s classic work The Crowd: A Study of the Popular Mind (Le Bon 2001). Le Bon, a Frenchman living in the late nineteenth century, had seen waves of social unrest. He had experienced the Paris Commune in 1871 – a radical populist takeover of the French government – and other smaller protests during the 1890s. Le Bon’s work was an attempt to make sense of what he saw as the violence and irrationality exhibited in crowds. Two major concepts in The Crowd are submergence and contagion. Submergence refers to the process of becoming identified with the generalized norms and sentiments of a crowd. When this occurs, a person loses their sense of self. In this submerged state, a person is susceptible to contagion. A contagion is an idea or emotion that passes through a crowd. This idea or emotion may be irrational outside of a crowd context, but in a submerged state a person may not make use of their critical faculties. Thus, a suggestion from a random person in a crowd that storefront windows be broken or a bystander be attacked may be uncritically accepted by others. Vilanova et al. (2017) summarize the implications of submergence and contagion in this manner: “The crowd...constitutes a single collective being that is guided by a mental unity and a collective soul that makes individuals feel, think and act differently than they would independently” (p. 3). A modern strain of deindividuation theory applied directly to the digital environment is Social Identification and Deindividuation Effects (SIDE) theory (Postmes and Spears 2002). SIDE argues that anonymity in the digital environment reduces the ability to individualize oneself from others in a group setting (deindividuation). This happens because the uniqueness of the individual is obscured in digital environments, where one’s history and personal characteristics are not readily apparent.
This deindividuation increases the awareness of the similarities that all in the group share. SIDE does not propose that deindividuation inevitably leads to deviant behavior. In some cases, deindividuation may lead to conformity in online settings because an individual strongly identifies with the group and its norms. Thus, through deindividuation an individual can act in either racist or anti-racist ways, depending upon the social media platform they are interacting in and the group norms on that platform. Moreover, their actions can stretch beyond the racial norms instantiated in the physical environment, leading to the deviant behaviors presented

in Table 1. Deindividuation theory has been applied in the digital environment to explain software piracy (Hinduja 2008), trolling (Bishop 2013), cyberbullying (Lowry et al. 2016), developing group identity (Mikal et al. 2016), and social conformity online (Coppolino Perfumi et al. 2019).

Online Disinhibition Effect

A second theoretical perspective relevant to explaining the link between anonymity and deviant racial behavior is Suler’s online disinhibition effect (Casale et al. 2015; Suler 2004). According to Suler (2004), six factors lead to a change of behavior online. These are:

1. Dissociative Anonymity – behavior is compartmentalized, and an individual can dissociate themselves from their online behavior.
2. Invisibility – individuals cannot see each other, giving people the courage to go places and do things they otherwise would not do.
3. Asynchronicity – because people do not interact with each other in real time, the normative aspects of continuous communication are lost. People can send an email or post to a discussion board and not get immediate feedback.
4. Solipsistic Introjection – because of a lack of context cues, individuals may “feel that their mind has merged with the mind of the online companion” (Suler 2004: 323).
5. Dissociative Imagination – people may view their online experiences as not following the same rules and norms as their offline experiences.
6. Minimization of Status and Authority – in the physical environment, authority figures express their status and power in their dress, body language, and their environmental settings (e.g., an office space with a degree on the wall). This is missing in online environments.

These six factors work together to dissolve the barriers that block a person’s inner drives and desires. We can understand these barriers as mechanisms of social control, the removal of which frees a person to act in deviant ways. The disinhibition effect can foster two types of behaviors. Positive behaviors, or benign disinhibition, include reaching out to others to resolve interpersonal conflicts, exploring new avenues of one’s identity, and efforts to improve self-understanding.
Negative behaviors, or toxic disinhibition, include rude, racist, and misogynistic language, as well as the acting out of socially reprehensible behaviors, including consuming and sharing child pornography. Like deindividuation theory, Suler’s online disinhibition theory explains both racist and anti-racist deviant behaviors. Individuals with racist ideas can express or act upon these thoughts in the digital environment when disinhibited. Similarly, individuals who desire to address what they see as racial injustice, but fear negative sanctions from friends, family, and colleagues, may be able to act upon this desire on social media platforms.

Online disinhibition theory has been used in several pieces of research including online victimization (Agustina 2015) and the online dating behavior of men who have sex with men (Lemke and Weber 2017). The theory has proven exceptionally useful in a criminological context for exploring individual acts of cyberviolence, most notably cyberbullying (Barlett and Helmstetter 2018; Lai and Tsai 2016; Lowry et al. 2016; Wachs and Wright 2018).

The Affordances of Social Media: Connectivity

In the era before the Internet and mobile communication, individuals formed place-based relationships – relationships based upon traditional and historical precedents rooted in physical geography. A person was tied to their family, local friends, neighbors, and the authority figures in their immediate vicinity. There are numerous implications of place-based relationships. One that is relevant for the current chapter is that being rooted in place means a lesser ability to escape the socializing effects of one’s immediate environment. Deviant behavior is spotted sooner, and negative sanctions can be applied more effectively. Individuals can be controlled through gossip, ostracism, or more formal punishments in the form of fines and jail time. Scholars of new media technologies have argued that modern Western societies can be characterized as network societies (Castells 2007; Rainie and Wellman 2012). The term network in a sociological context refers to “a structural condition whereby distinct points (often called ‘nodes’) are related to one another by connections (often called ‘ties’) that are typically multiple, intersecting, and often redundant” (Barney 2010, p. 2). As with place-based relationships, the implications of living in a network society are wide ranging. However, for the purposes of this chapter, the critical implication is that a person’s relationships do not have to be rooted in physical space. A person can experience a wider range of values, beliefs, and norms, and they can escape some of the social control and socialization pressures of place. Social media platforms are technologies that facilitate networked relationships – making it possible to find more nodes, or people, and develop more ties, or connections. Indeed, the terms “social media site” and “social networking site” are often used interchangeably.
Four of the more popular social media platforms are Facebook, Twitter, YouTube, and Instagram, and they share a similar set of characteristics that can broadly be understood as the affordance of connectivity. These platforms allow users to create profiles (with varying degrees of anonymity) and connect with others on the platform. These connections include the reading of content posted by someone. For example, a Twitter user can search the platform for content surrounding an event such as the World Cup, using the words World Cup or a hashtag (a word symbolizing an idea, prefixed with the pound sign, #) such as #WorldCup2022. Users can also connect directly by “following” someone, such that content posted by that person, regardless of subject, is seen by the follower. A third way is through direct communication between users, where one user comments on the content posted by another, or they speak directly to each other through the platform’s direct messaging service.
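The hashtag mechanism just described is, at bottom, a simple indexing convention: any token beginning with # is treated as a searchable label. The following minimal Python sketch illustrates the idea; the pattern is an approximation for exposition, not any platform’s actual tokenizer, which applies additional rules (length limits, Unicode handling, no all-numeric tags).

```python
import re

# Illustrative pattern: "#" followed by letters, digits, or underscores.
HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(post: str) -> list[str]:
    """Return the hashtags found in a post, lowercased for matching."""
    return [tag.lower() for tag in HASHTAG_RE.findall(post)]

print(extract_hashtags("Incredible final today! #WorldCup2022 #football"))
# → ['worldcup2022', 'football']
```

Lowercasing reflects the fact that platforms index hashtags case-insensitively, so #WorldCup2022 and #worldcup2022 resolve to the same stream of content.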

Table 2 Communicative practices for social media*

                 Asynchronous                          Synchronous
  One-to-one     Text messaging                        Instant messenger
  One-to-many    Images, text, and audio on webpage    Live stream
  Many-to-many   Social networking                     Online chat

* Reproduced and modified from Jensen and Helles (2011)

Moreover, communication is easier, thus facilitating more connectivity. Jensen (2015) describes three modes of communication: “one to one” – or interpersonal communication, “one to many” – characterized by the broadcast technologies of radio and television, and “many to many” – evidenced on social media platforms where several people are commenting in a single thread. These modes can be further divided by time, into asynchronous and synchronous. Synchronous means that the communication is occurring in real time, while asynchronous refers to communication sent at one time and read by the recipient at another time. Variations in modes and times amount to a total of six communicative practices (Jensen and Helles 2011). Table 2 lists these practices and gives examples. The ability to connect in so many ways allows users to circumvent the traditional boundaries in the terrestrial or physical environment. Both racists and anti-racists can communicate ideas about race relatively free of social controls. They can use this connectivity to develop online communities that share resources (Baym 2015, pp. 81–110). They can also use this connectivity to organize for political purposes.

Deviant Racist Behavior: Symbolic Violence in the Form of Hate Speech

Defining and Describing Hate Speech

Wall defines cyberviolence as “the violent impact of the cyberactivities of another upon an individual or social grouping. Whilst such activities do not have to have a direct physical manifestation, the victim nevertheless feels the violence of the act and can bear long-term psychological scars as a consequence” (Wall 1998). Cyberviolence can take many forms, one of which is hate speech directed at racial minorities. Hate speech is characterized, according to Matsuda (1989), by three elements: (1) the speech implies or states that the target group is inferior (racial, ethnic, sexual), (2) the speech is aimed at a group that has been historically oppressed or disadvantaged, and (3) the speech is intended to be harmful. Hate speech is a common element online and on social media. A Pew Research Center report from 2017 stated that 8% of social media users had been harassed online based on their race (Duggan 2017). Most speech that claims one racial group in society is inferior to another, or that one group is not deserving of a given set of privileges because of their race, is at the least

a violation of informal norms in most contexts. In some contexts, this type of speech is subject to legal action. Examples of hate speech include:

• A white American in an online thread about immigration replying to a person (who they think is Hispanic), saying that they should go back to Mexico where they belong
• A white New Zealander suggesting in an online forum that the country would have been a “third-world country” had it not been colonized by the British
• The use of racial slurs by whites such as nigger, hori, sand monkey, muzrats, etc.

The notion of harm is central to discussions of hate speech. The idea that words can be censored is frowned upon in democracies. Therefore, a strong justification for censoring must be given. This justification comes from linking hate speech to identifiable harms. Gelber and McNamara (2016) provide a description of the types of harms associated with hate speech. The first type of harm is constitutive harm – damages that may occur directly to a person because they are at present experiencing hate speech. These damages are some form of psychological trauma – anxiety, fear, stress, depression, or low self-esteem. The second type of harm is consequential harm. Consequential harms are the indirect damages that occur because hate speech has become a part of a society’s discourse. In other words, hate speech changes the environment in which a person must live, which will then cause that person harm in the future. Consequential harm, according to Gelber and McNamara (2016), can take several forms:

persuading hearers to believe negative stereotypes that lead them to engage in other harmful conduct; shaping the preferences of hearers so that they come to be persuaded of negative stereotypes; conditioning the environment so that expressing negative stereotypes and carrying out further discrimination become (often unconsciously) normalized; and causing hearers to imitate the behaviour. (2016, p. 325)

Another characteristic of hate speech is the power differential between offender and victim. In theory, all individuals can claim that another person is inferior or undeserving because of their race. Australian indigenous peoples can make claims about the bravery of European settlers, or black Americans can argue that white Americans are morally corrupt. However, their ideas cannot be leveraged into systematic exclusion of Whites from jobs, land, or wealth. For this reason, scholars have focused on hate speech used by whites toward non-whites.

Hate Speech Research

There have been numerous studies of hate speech on social media platforms (Bliuc et al. 2018; Chetty and Alathur 2018; Farkas et al. 2018; Jakubowicz 2017; Miškolci et al. 2018). What follows is a selection of scholarly work that emphasizes the variety of research across social media platforms and across national contexts.

Awan (2016) explored the hate speech directed at Muslims on Facebook. He analyzed several pages, including the Britain First, English Brotherhood, and English Defence League pages, and the Ban Islam in Australia page. For example, a racist joke displayed on the English Brotherhood page reads: “What is the difference between an extremist muslim [sic] and a moderate muslim [sic]? A quick change of clothes.” This joke was accompanied by a Photoshopped illustration and is clearly meant to reduce all Muslim people to the stereotype of a terrorist. It also suggests that you cannot trust a Muslim even if they are wearing the attire of modern liberal societies.

Awan categorizes types of offenders based upon their tweeting behavior, with the most numerous being:

• “The reactive” – A person who, following a major incident, will begin an online campaign targeting a specific group or individual
• “The accessory” – A person who joins in with other people’s conversations via Twitter to target vulnerable people
• “The impersonator” – A person who uses a fake profile, account, and images to target individuals
• “The disseminator” – A person who has tweeted about and retweeted messages, pictures, and documents of online hate

Awan concludes that “from the evidence established within this study, Islamophobia on Facebook is much more prevalent than previously thought and is being used by groups and individuals to inflame religious and racial hate” (2016: 17).

Burnap and Williams (2015) used machine learning to develop a model for predicting the risk of hate speech spreading online. The researchers collected data from Twitter in the aftermath of the murder of Drummer Lee Rigby in London on May 22, 2013. Rigby was a white Briton who was murdered by two British men of Nigerian descent who had converted to Islam. Twitter is amenable to producing quick content to be shared publicly. As such, it is a prime site for studying social attitudes during “triggering” events such as the murder of Rigby. Their research took a big data approach, and 450,000 tweets were analyzed. Broad communication patterns could be identified with a statistical assurance not possible through qualitative work or smaller scale content analysis. One pattern is calls for collective action; the researchers document the use of phrases such as “get them out” and “should be hung.” A second is the linking of “othering” words that are not readily recognizable as racist yet are racist in their function. These included “send them home,” “get them out,” and “burn korans.”

Matamoros-Fernandez (2017) explored hate speech across platforms in her analysis of the Adam Goodes imbroglio.
In 2015, Goodes, an indigenous Australian Football League player, celebrated a goal by performing a war dance and pantomiming the throwing of a spear into the crowd. White Australians reacted in numerous ways: “Opponents used Twitter to ridicule Goodes...Facebook pages were created

solely to vilify him...and his Wikipedia page was vandalised, replacing pictures of him with images of chimpanzees” (2017, p. 6). Matamoros-Fernandez also linked the unique characteristics of a given platform to the unique ways hate is propagated. For example: When content is reposted in its original form but in a new context, it adopts another layer of meaning...YouTube videos embedded in a [Twitter] tweet were offensive in the new context. For instance, one user tweeted a YouTube video of a song called “Apeman” and accompanied it with a message for Goodes saying that this was his retirement song. (2017, p. 16)

Matamoros-Fernandez’s work is important because it shows how hate speech is cross-platform. The reality of social media usage is that users consume content across many platforms. Thus, research that focuses on one platform necessarily produces a narrow view of the phenomenon. Chaudhry and Gruzd (2019) analyzed comments on racial news stories on Facebook, specifically the Canadian Broadcasting Corporation’s Facebook page. The researchers collected racial comments from news stories that mentioned race, racism, or ethnicity from August 1 to 31, 2016. They found that of the approximately 8000 comments about race, 18% were some form of racial othering (i.e., constructing an in-group of white Canadians and an out-group of non-whites) and 18% were outright racial slurs. The researchers also coded anti-racist comments and found that the remaining comments, about 64%, were anti-racist. This adds another layer of complexity to our understanding of hate speech on social media platforms. These are spaces that are often unmoderated and open to both racists and anti-racists. Thus, while scholars may identify incidents of hate speech in any given space, it is likely that those incidents are also met with anti-racist responses.
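Studies such as these depend on classifying large volumes of short texts. Burnap and Williams, for instance, trained supervised models on human-labeled tweets. The sketch below is only a toy illustration of that general approach – a bag-of-words Naive Bayes classifier written with the standard library. The training examples and labels are invented for illustration; the actual study used far larger labeled datasets and richer features than single words.

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def train(self, labeled_docs):
        for text, label in labeled_docs:
            counts = self.word_counts.setdefault(label, Counter())
            for word in tokenize(text):
                counts[word] += 1
                self.vocab.add(word)
            self.doc_counts[label] += 1

    def predict(self, text: str) -> str:
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(counts.values())
            for word in tokenize(text):
                score += math.log(
                    (counts[word] + 1) / (total_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented, sanitized training examples for illustration only.
training = [
    ("send them home", "hostile"),
    ("get them out now", "hostile"),
    ("they do not belong here", "hostile"),
    ("thoughts with the victim and his family", "neutral"),
    ("a sad day for london", "neutral"),
    ("praying for peace in the city", "neutral"),
]

clf = NaiveBayes()
clf.train(training)
print(clf.predict("get them out"))  # → hostile
```

The value of the approach, as the Burnap and Williams study shows, is scale: once trained and validated, a model of this kind can score hundreds of thousands of posts, revealing broad communication patterns that manual coding cannot reach.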

Deviant Racist Behavior: Building Far-Right Extremist Groups

There has been a steady rise in the number of hate groups in Western societies. In the case of the United States, an article from the New York Times reads:

The number of hate groups in the United States rose for the fourth year in a row in 2018, pushed to a record high by a toxic combination of political polarization, anti-immigrant sentiment and technologies that help spread propaganda online, the Southern Poverty Law Center said Wednesday. The law center said the number of hate groups rose by 7 percent last year to 1,020, a 30 percent jump from 2014. That broadly echoes other worrying developments, including a 30 percent increase in the number of hate crimes reported to the F.B.I. from 2015 through 2017 and a surge of right-wing violence that the Anti-Defamation League said had killed at least 50 people in 2018. (Stack 2019)

Social media has made it possible for individuals to seek out, anonymously and across space and time, individuals who share deviant beliefs and attitudes about race.

The Importance of Stormfront

One of the more influential racist spaces online – and one of the most researched by scholars – is Stormfront (https://www.stormfront.org/). Stormfront has been critical in the development of what scholars know about far-right extremist groups online. The website, based in the United States, has been active since 1995. It is a meeting point for many far-right and extremist groups, including White Nationalism, the Christian Identity Movement, the Ku Klux Klan (KKK), and neo-Nazism. Although Stormfront is more accurately categorized as a website, and not a social media platform, it has the elements of spaces like Facebook and Twitter – including anonymity and the ability to communicate in many ways. Stormfront’s long history (in Internet terms) and its active user base have made it a prime site of study for scholars (Back 2002; De Koster and Houtman 2008; Meddaugh and Kay 2009; Vysotsky and McCarthy 2017). One of the more insightful studies of Stormfront is from Bowman-Grieve (2009), who explored themes within what she termed Stormfront’s community of practice. She discusses the importance of dating services on the site for networking and promoting an in-group ideology. She also points out the use of personal narratives or anecdotes relayed by users that “provide validation for other users as well as providing outlines or blueprints for ways to become involved” (2009, p. 998). The goal of most activities on Stormfront and similar spaces is to move individuals to political action. As such, forums and threads are dedicated to ways in which individuals can become politically active. Although a large amount of research on Stormfront was done in the early 2000s, there is nothing to suggest that the website is any less significant theoretically. As an example, Vysotsky and McCarthy (2017) apply Sykes and Matza’s (1957) neutralization theory to the discourse produced on Stormfront.
The researchers theorized that the discourse on the forum would attempt to neutralize the stigma associated with racism. They find that this neutralization largely occurs by othering non-whites: “By othering non-whites via denial of their status as members of the same species, the moral rights and values that a society maintains for all members are not necessarily applicable to non-white individuals anymore” (2017, p. 12). The research on Stormfront and other sites like it established foundational themes that show the ways racist groups use the digital environment to build extremist groups. These include propagandizing, socializing, linking, fundraising, and promotion of violence (Chris Hale 2012; Cohen-Almagor 2018). The paragraphs below will focus on the theme that has received the most attention by researchers – the spreading of racist discourse.

Spreading Racist Discourse

A key strategy in the propagation of racist ideas is to covertly spread racist information into mainstream spaces. One approach, as described by Daniels (2009), is to build a website that appears to be from an established authority. The website is then

used to spread racist ideas. These “cloaked websites” are possible because of the degree of anonymity afforded by the digital environment – the owner of the website, their credentials, and their motives are not readily available to the audience. Cloaking on social media takes a different form but still relies on the degree of anonymity afforded. Farkas et al. (2018) examined cloaked Facebook pages. In this research, the focus was on white supremacists creating disguised (cloaked) pages that appeared to be from radical Islamists in Denmark. Facebook’s architecture allowed anonymity, as page administrators on Facebook can remain anonymous. Moreover, comments can be deleted, and users who could potentially reveal the true origin of the administrators can be blocked. The purpose of the cloaked pages was to generate anti-Muslim discourse during a period of heightened tension between Danes and Muslim immigrants. According to the researchers, the cloaked pages had the desired effect: “comments range from expressing extreme hostility to anti-immigration and pro-nationalistic sentiments by attacking the Muslim community and refugees in general to showing support for the right-wing populist Danish People’s Party. The cloaked Facebook pages became sites of aggressive posting and reaction through comments, producing a spectacle of hostility” (2018, p. 1861). Another way to propagate racist ideas is through the merging of racist ideas with mainstream ideas. Klein (2012) explores this phenomenon, which he calls “information laundering,” across many spaces online, concluding that “vilifying accusations about a president’s [Barack Obama] religion, or claims about his nationality, do not emerge from scholarly, political, or public debates. Rather, they begin on the fringes, in white power websites, and only through the web where they have found a successful pathway to work racist subthemes into mainstream issues” (p. 443).
Graham (2016) terms the phenomenon “inter-ideological mingling” and shows how the strategic use of hashtags can link a racist idea to a mainstream idea. Graham lists several strategies. The first two are the major strategies for all inter-ideological mingling. “Joining” is when hashtags from different ideological discourses are brought together in one tweet, while “blending” is when hashtags from different ideological contexts are merged to explain racial phenomena. All instances of inter-ideological mingling, according to Graham, are forms of joining and blending. The remaining three are minor sub-strategies. These are “piggybacking,” where trending hashtags from mainstream discourses are added to a racist tweet; “backstaging,” where a mainstream tweet is composed with a link referring to extremist material outside of the Twitterverse; and “narrating,” where clever use of hashtags produces a racist version of an otherwise mainstream story.
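The “joining” strategy implies a straightforward detection heuristic: flag a post when its hashtags intersect both a list of extremist-discourse tags and a list of mainstream-discourse tags. The Python sketch below illustrates this heuristic; the tag lists are illustrative placeholders and are not drawn from the study, which identified ideological hashtags empirically from its data.

```python
# Hypothetical, sanitized tag lists for illustration only.
EXTREMIST_TAGS = {"whitegenocide"}
MAINSTREAM_TAGS = {"maga", "news", "election2016"}

def is_joining(post: str) -> bool:
    """Flag posts whose hashtags span both ideological discourses."""
    tags = {
        word[1:].lower()
        for word in post.split()
        if word.startswith("#") and len(word) > 1
    }
    return bool(tags & EXTREMIST_TAGS) and bool(tags & MAINSTREAM_TAGS)

print(is_joining("Wake up! #WhiteGenocide #News"))  # → True
print(is_joining("Big rally tonight #MAGA"))        # → False
```

In practice, researchers would first derive the two tag sets from the corpus itself (e.g., by clustering hashtag co-occurrence networks) rather than fixing them in advance, but the set-intersection logic captures the core of what “joining” means operationally.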

Deviant Anti-racism

Anti-racist actions are not generally deviant or criminal. Indeed, racial equality is valued in Western societies, and actions that condemn racist behavior are applauded. However, some actions of anti-racist groups can be labeled as deviant by whites and also by other people of color.

For one, racial minorities often produce their own narratives about social issues. This discourse often takes place on social media platforms within minority-dominated spaces, and it is usually seen as outside the lines of accepted understandings of race. As these narratives, or counterdiscourses, are in opposition to mainstream understandings of social phenomena, they elicit responses of social control. We can see this historically with the black power movement of the 1960s, which was opposed by both whites and African-Americans who were allied with the nonviolent wing of the Civil Rights movement. Current calls by Asian-Americans to reject the model minority myth, and embrace discourses of inequality and disadvantage, are not shared by older Asian-Americans (Tran and Curtin 2017). Second, calls for redistribution of resources in society along racial lines are often met with resistance. Many whites in society support change in the abstract but respond negatively when anti-racist groups agitate for changes in law and policy that would make this equality concrete. This opposition can be especially virulent when the policies are understood to be emanating from a far-left or radical political ideology.

The Importance of Black Twitter

If research on Stormfront is essential to understanding the scholarly work on deviant racist behavior, Black Twitter is essential to understanding the scholarly work on deviant anti-racist behavior (Florini 2014; Graham and Smith 2016). Black Twitter can be described as the collection of networked Twitter users and the content they produce. As one cannot tell the race of users on Twitter, the "blackness" of the space is indicated primarily by the content produced and the issues discussed. As a phenomenon, Black Twitter is relatively recent, dating back to the late 2000s and early 2010s. Black Twitter has been connected to several impactful anti-racist moments in the United States. It has been credited with developing and publicizing a narrative of injustice around the deaths of black men in the United States. It has also been instrumental in galvanizing on-the-ground protests in Ferguson, Missouri. Most recently, it has been a space for discussions around slavery reparations. Because of this, scholars have published several pieces on Black Twitter (Dates and Moody-Ramirez 2018; Florini 2014; Graham and Smith 2016; Lee 2017; Maragh 2018; Prasad 2016; Sharma 2013), and the scholarly work on anti-racism and social media is heavily influenced by this phenomenon.

Deviant Anti-racism: The Production of Counterdiscourses

In order to understand how anti-racist groups produce counterdiscourses, a starting point is the concept of the public sphere. Habermas (1999) argued that the public sphere developed in Europe in the eighteenth century as a mediator between the private concerns of business and family and the public concerns of policing, law, and government. This public sphere was unique to representative democracies. It is in the public sphere that societal problems are introduced to the public and solutions are proposed. As a modern example, the editors and writers at a national news television show or a major newspaper have great influence in determining what is seen as an issue to be deliberated and what are the legitimate set of solutions. Even in an era of media fragmentation and greater distrust of the media, news outlets with national reach and historical legitimacy still have an outsized impact on determining issues and potential solutions.

In a seminal article, Fraser (1990) argued that the public sphere did not address the concerns of marginalized or disadvantaged groups. The problems and solutions introduced by mainstream media outlets are predominantly the problems and solutions of the wealthy, of whites, heterosexuals, and men. Therefore, Fraser argues, "members of subordinated social groups – women, workers, peoples of color, and gays and lesbians – had repeatedly found it advantageous to constitute alternative publics" (pp. 122–23). These alternative publics, or counterpublics, are the spaces in which counterdiscourses are generated.

As mentioned above, the paradigmatic example of a counterpublic generating counterdiscourses is Black Twitter. Black Twitter also illustrates the dynamic in which groups that are ostensibly anti-racist can be considered deviant by some groups. The ideas that emanate from the Black Twitter space are often rejected by wider society. Hill (2018), in a study of Black Twitter and its commentary on the Ferguson riots, argues that Black Twitter "enables new and transgressive forms of organizing, pedagogy and, ultimately, resistance." Hill argues that in the aftermath of Ferguson, the discourse on Black Twitter explicitly rejected the politics of respectability. In other words, Black Twitter did not accept the normative standards of how one should discuss the death of Michael Brown.
Graham and Smith (2016) explored differences between the space created by Black Twitter – its network of users and the narratives they produce – and other publics on Twitter. Using tweets from January 2015, they found that there was little content overlap between the Black Twitter space and the spaces representing mainstream American discussions. At the time of analysis, the two major events were the protests in Ferguson, Missouri, in the wake of the shooting of a young black male (Michael Brown) and the shootings at the French newspaper Charlie Hebdo, where 12 people were killed and 11 wounded. Users of Black Twitter focused on Michael Brown and other black-oriented themes, ignoring Charlie Hebdo. Meanwhile, users in spaces representing the mainstream public sphere tweeted little about the Brown shooting.

There are other examples of the production of counterdiscourses. Kuo (2018) examined two spaces organized around Twitter hashtags: #Solidarityisforwhitewomen, a reaction to mainstream white feminism, and #NotYourAsianSideKick, a response both to white feminism and to stereotypes of Asian women. Kuo concludes that hashtags like #Solidarityisforwhitewomen and #NotYourAsianSideKick offer multiple ways for interested individuals to become involved in social issues. Petray and Collin (2017) explore the rhetoric surrounding the #whiteproverbs hashtag that trended in early 2014. The hashtag, originating in an Australian context, is meant to be humorous and is used to signify the everyday comments whites use to mask their privilege. The jokes are meant to be transgressive. Some examples are


“Do you speak Aboriginal? #WhiteProverbs” and “So what country are Islamics from? #whiteproverbs.” By pointing out these sayings, Twitter users help produce an alternative commentary about race in Australia through irony and humor.

Deviant Anti-racism: Collective Action Through Social Media

Tilly (2004) argued that there are three elements of social movements: campaigns, the social changes or policy prescriptions argued for by the movement; repertoires, the strategies and tactics used by the movement to achieve its goals; and public displays, in which the movement presents its worth, unity, numbers, and commitment to others. Carty (2015) applies Tilly’s concepts to new media technology and argues that “activists have always used the latest communication device to recruit, distribute information, and mobilize support, whether it be the pen, printing press, telegraph, radio, television, Internet, or high speed digital technologies” (2015, p. 7). In her analysis of recent movements such as the Arab Spring and Occupy Wall Street, Carty argues that understandings of what social movements are must be reimagined. Important here is her assertion that hierarchical, terrestrial organizations are no longer essential for building the foundation of successful social movements. Individuals can leverage social media platforms to organize protests and build social movements.

This leveraging includes the affordance of anonymity. Zeynep Tufekci (Tufekci 2014, 2017; Tufekci and Wilson 2012) has done extensive work arguing that a defining characteristic of modern protests is the inability of states to surveil dissidents and censor their content. Another affordance is connectivity. Manuel Castells, in Networks of Outrage and Hope (2012), argues that social movements are emotional movements – “if many individuals feel humiliated, exploited, ignored or misrepresented, they are ready to transform their anger into action, as soon as they overcome their fear” (p. 15). For Castells, the Internet is the communication technology that facilitates the sharing of emotion – outrage and then anger – among individuals within the group.
Moreover, as Carty also asserts, the Internet facilitates a unique form of social movement that is horizontal and multimodal. There may be no clear leaders, and individuals are connected across many platforms, offline and online. Carty and Castells help provide a theoretical understanding of the uniqueness of networked anti-racist movements. Significant for this chapter is the fact that when racial minorities and other groups agitate for social or political change, they are attempting to change the normative structures of society. To put it more plainly, the actions organized through interactions on social media platforms are deviant behaviors done deliberately to bring about social change.

Black Americans and Activism on Twitter

Black Americans, often through Twitter, have consistently leveraged social media platforms for racial justice. For example, Jackson and Foucault Welles (2016) explore the early initiators of racial justice who used the #Ferguson hashtag. In a rare comparative analysis, Bock and Figueroa (2018) compare Black Lives Matter and Blue Lives Matter Facebook pages. They conclude that:

The Black Lives Matter page, like the movement it has inspired, is a relatively open, loose, and inclusive network working on social justice issues. Supporters share some slogans and symbols and have a common interest in the civil rights of African Americans. Blue Matters culture, on the other hand, seems more intensely bound by codes that are religious, militaristic, and authoritarian. Supporters on the Blue Matters page espoused support for what they see as righteous violence and discipline, and turn to one another for mutual protection as soldiers do in combat. (2018, pp. 3113–3114)

The Black Lives Matter (BLM) movement is a well-known instance of deviant anti-racism. The movement, founded by Patrisse Cullors, Opal Tometi, and Alicia Garza, began in earnest with the death of Trayvon Martin in 2012 and reached a crescendo in 2014 with protests in Ferguson, Missouri. At the time of this writing, the BLM movement has developed a stable structure with chapters across the United States, but it is widely understood that the movement began organically on Twitter. The BLM movement made a claim that many in society initially rejected: that the shootings of Martin, Brown, and others were indicators of a web of racist practices directed at racial minorities. As Royal and Hill (2018) write, “BLM explicitly challenges state violence and White supremacy around the globe, in addition to examining the global and interdependent nature of systemic injustice and oppression. Even though police-involved shootings have drawn the most public attention, BLM asserts that state violence is both individual and collective, personal and structural” (2018, p. 144). This claim was beyond accepted understandings of racial inequalities and how to address them.

Antifa

Another group that has participated in deviant anti-racism through social media is Antifa, short for anti-Fascists or anti-Fascist action. Antifa has its origins in counter-protests against Nazi and Fascist parties in European countries in the 1920s and 1930s. Iterations of Antifa have counter-protested in the same physical space as neo-Nazis, White Nationalists, and other far-right groups. Antifa is more accurately described as a movement rather than a group. LaFree (2018) writes, “It is not a highly organized entity. It has not persisted over time. There is little evidence of a chain of command or a stable leadership structure. . . In this respect, the current form of antifa resembles other broad political phenomena like the anti-abortion or animal rights movements” (p. 249). Individuals who identify as feminist, socialist, or communist may not necessarily commit themselves to a formal Antifa group but nevertheless support the ideals of the movement and participate in protests.

Antifa’s actions, while grounded in the normative notion that white supremacy must be condemned, often exceed the boundaries of acceptable responses to white supremacy. One reason is that many believe Antifa groups violate norms of free speech. A number of high-profile protests by Antifa have taken place on college campuses, spaces that have traditionally valued free inquiry and open debate. In February 2017, on the campus of the University of California at Berkeley, controversial political commentator Milo Yiannopoulos’ scheduled talk was canceled amidst protests from Antifa and other student groups. Similarly, political scientist Charles Murray was unable to deliver a lecture at Middlebury College, Vermont, in March 2017, because of student protests.

A second reason is that the targets of Antifa are not viewed by wider society as worthy of racial condemnation. Several Antifa targets, such as Murray and University of Toronto clinical psychologist Jordan B. Peterson, have made controversial statements but have considerable support within mainstream society. Additionally, the election of Donald Trump, ostensibly a mainstream conservative, to the United States Presidency has led to a growth in Antifa (LaFree 2018). According to Beinart (2017), NYC Antifa’s Twitter followers quadrupled in the month after Trump’s election win.

A third reason is that several of Antifa’s protests and counter-protests have been violent. According to CNN’s coverage of the Yiannopoulos event, “Black-clad protesters wearing masks threw commercial-grade fireworks and rocks at police. Some even hurled Molotov cocktails that ignited fires. They also smashed windows of the student union center on the Berkeley campus where the Yiannopoulos event was to be held. At least six people were injured” (Park and Lah 2017). Antifa activists have also committed acts of cyberviolence by doxing targets (Muldowney 2017). Doxing is the act of acquiring someone’s personal information, often through hacking or social engineering, and making it public. This information can include addresses, family members’ names, email addresses, and more.

Other Networked Anti-racist Movements

The literature connecting social media platforms to social movements is vast (see Carty 2015; MacKinnon 2013). However, the range of anti-racist movement research is relatively small, with scholarly work clustering around the intertwining phenomena of Black Twitter and the Black Lives Matter movement. Even with the high profile of Antifa beginning in 2017, there was little published peer-reviewed research at the time of this writing. This dearth of anti-racist social media research is likely temporary. Instances of social agitation centered on racial injustice will likely increase as Asian, Muslim, Hispanic, and indigenous peoples identify social problems and articulate changes in policy to address them.

One event that has been explored by scholars is the 2016 Dakota Access Pipeline protests on the Standing Rock Sioux Reservation, which occupies parts of North and South Dakota in the United States. The protests were in response to the proposed building of an oil pipeline through Standing Rock territory that, according to members of the Standing Rock Sioux Tribe, violated their nation’s sovereignty. Some scholars have also argued that the pipeline is an example of environmental racism, in which non-white environments are not treated the same as environments occupied by whites (see Zimring 2015). The protests were signified by the hashtag #NoDAPL.


The activists leveraged the affordances of social media platforms to construct a counterdiscourse about the pipeline: the pipeline was a violation of treaties between sovereign nations and a breach of the cultural and spiritual integrity of Sioux lands. The activists used a “media-by-default” tactic, in which social media platforms were integral from the beginning of the movement (Martini 2018). Camps within the movement set up Twitter, Instagram, and Facebook accounts at the outset of their activities, before protests began. These efforts were relatively successful, as evidenced by the attention the protestors were able to garner nationally and internationally. Their work culminated in a #NoDAPL Day of Action, “during which three hundred solidarity events occurred in all fifty states, drawing tens of thousands of demonstrators. The #NoDAPL Day of Action also went global with dozens of cities, including London, Paris, Auckland, Kyoto, and Marrakesh” (2018, pp. 166–167). Moreover, “...the pressure put on government officials through social media and the outcry of constituents caused the U.S. Army Corps to postpone the completion of the pipeline pending further investigation” (p. 167).

Conclusion and Directions for Future Research

This chapter explored phenomena at the intersection of race, social media, and deviance. Specifically, it summarized key points in the literature surrounding racist and anti-racist behavior on social media platforms. One of the assumptions underpinning the chapter’s narrative is that both racist and anti-racist behaviors deviate from what is considered the norm with respect to race. For the former, the link between racist behavior and deviance is self-evident: Western societies have embraced notions of racial diversity and multiculturalism, and racist behavior is condemned vociferously. However, anti-racist behavior can also migrate beyond accepted boundaries. For both racists and anti-racists, social media platforms present a type of leverage. These groups can avoid some of the negative sanctions associated with their behavior and connect with others who share their ideas.

Given the nature of the phenomena, this research was necessarily drawn from several disciplines, including criminology, sociology, psychology, communications, and media studies. As such, the directions for future research proposed here are based upon what is known about the phenomena and are not restricted to advancing knowledge in any one discipline.

One direction is to explore phenomena on less popular social media platforms. Research has been dominated by Twitter and, to a lesser degree, Facebook. This is understandable: Twitter is an event-based application, and both racist and anti-racist sentiments become salient during social events, while Facebook is the most popular social media application in the Western world. However, given that racist behavior is condemned, it is likely that less well-known social media sites house interesting racial phenomena. For example, the social media platform Gab has been called “extremist friendly” and supported the anti-Semitic postings of Robert Bowers, the shooter who in 2018 killed 11 people and injured 7 at the Tree of Life synagogue in Pittsburgh. A similar argument can be made for Black Twitter and its political parallel, Black Lives Matter.

There is a vast amount of research on the varying ways that whites have exhibited racist behavior. However, the research on anti-racist behavior is quite narrow, with a disproportionate focus on the efforts of black Americans. Researchers need to explore the rhetorical and political anti-racist actions of other racial minorities.

A final avenue of exploration is understanding how racist groups develop group identities on social media. Research has demonstrated the ways this is done on Web 1.0 sites, especially Stormfront, but has been relatively silent on the mechanisms by which white supremacist groups attract adherents on Facebook, Twitter, and YouTube. There may be practical reasons for this. Those platforms have steadily diminished the ways in which users can navigate their spaces and remain anonymous. It may be that Facebook will not facilitate the kind of sustained interaction in which whites openly cultivate a group identity. A cross-platform approach, like the one used by Matamoros-Fernández (2017), may be necessary. Indeed, cross-platform work is needed across all aspects of the phenomena explored in this chapter; it is simply the reality of social media usage. Computer users do not produce or consume information on only one platform, and it is likely that collections of users follow each other across platforms, from Twitter to Instagram to Facebook.

In conclusion, the need to understand both racist and anti-racist behavior on social media will not diminish in the near future. Most reports from reputable research outlets suggest that hate speech, hate groups, and hate crimes are rising in Western societies, and there is no reason to suggest that social media is not integral to that growth.
On the other hand, for scholars and activists who are interested in anti-racism, it is essential to understand how specific platforms foster specific types of anti-racist behaviors (for an example, see Byrd et al. 2017). Social media platforms are integral to the development of alternative narratives about racial minorities, as well as being a tool used by minorities to agitate for political change.

Cross-References

▶ Hate Speech in Online Spaces

References

Agustina, J. R. (2015). Understanding cybervictimization: Digital architectures and the disinhibition effect. International Journal of Cyber Criminology, 9(1), 35–54.
Awan, I. (2016). Islamophobia on social media: A qualitative analysis of the Facebook’s Walls of Hate. International Journal of Cyber Criminology, 10(1), 1–20.
Back, L. (2002). Aryans reading Adorno: Cyber-culture and twenty-first century racism. Ethnic and Racial Studies, 25(4), 628–651. https://doi.org/10.1080/01419870220136664.


Barlett, C. P., & Helmstetter, K. M. (2018). Longitudinal relations between early online disinhibition and anonymity perceptions on later cyberbullying perpetration: A theoretical test on youth. Psychology of Popular Media Culture, 7(4), 561–571. https://doi.org/10.1037/ppm0000149.
Barney, D. D. (2010). The network society. Cambridge, UK: Polity.
Baym, N. K. (2015). Personal connections in the digital age (2nd ed.). Malden: Polity Press.
Beinart, P. (2017). The rise of the violent left: Antifa’s activists say they’re battling burgeoning authoritarianism on the American right. Are they fueling it instead? The Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/2017/09/the-rise-of-the-violent-left/534192/
Bishop, J. (2013). The effect of de-individuation of the Internet troller on criminal procedure implementation: An interview with a hater. International Journal of Cyber Criminology, 7(1), 28.
Bliuc, A.-M., Faulkner, N., Jakubowicz, A., & McGarty, C. (2018). Online networks of racial hate: A systematic review of 10 years of research on cyber-racism. Computers in Human Behavior, 87, 75–86. https://doi.org/10.1016/j.chb.2018.05.026.
Bock, M. A., & Figueroa, E. J. (2018). Faith and reason: An analysis of the homologies of Black and Blue Lives Facebook pages. New Media & Society, 20(9), 3097–3118. https://doi.org/10.1177/1461444817740822.
Bonilla-Silva, E. (2017). Racism without racists: Color-blind racism and the persistence of racial inequality in America (5th ed.). Lanham: Rowman & Littlefield.
Bowman-Grieve, L. (2009). Exploring “Stormfront”: A virtual community of the radical right. Studies in Conflict & Terrorism, 32(11), 989–1007. https://doi.org/10.1080/10576100903259951.
Burnap, P., & Williams, M. L. (2015). Cyber hate speech on Twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7(2), 223–242. https://doi.org/10.1002/poi3.85.
Byrd, W. C., Gilbert, K. L., & Richardson, J. B. (2017). The vitality of social media for establishing a research agenda on black lives and the movement. Ethnic and Racial Studies, 40(11), 1872–1881. https://doi.org/10.1080/01419870.2017.1334937.
Carty, V. (2015). Social movements and new technology. Boulder: Westview Press.
Casale, S., Fiovaranti, G., & Caplan, S. (2015). Online disinhibition: Precursors and outcomes. Journal of Media Psychology, 27(4), 170–177. https://doi.org/10.1027/1864-1105/a000136.
Castells, M. (2007). Communication, power and counter-power in the network society. International Journal of Communication, 1(1), 29.
Chaudhry, I., & Gruzd, A. (2019). Expressing and challenging racist discourse on Facebook: How social media weaken the “Spiral of Silence” theory. Policy & Internet. https://doi.org/10.1002/poi3.197.
Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108–118. https://doi.org/10.1016/j.avb.2018.05.003.
Cohen-Almagor, R. (2018). Taking North American white supremacist groups seriously: The scope and the challenge of hate speech on the Internet. International Journal for Crime, Justice and Social Democracy, 7(2), 38. https://doi.org/10.5204/ijcjsd.v7i2.517.
Coppolino Perfumi, S., Bagnoli, F., Caudek, C., & Guazzini, A. (2019). Deindividuation effects on normative and informational social influence within computer-mediated-communication. Computers in Human Behavior, 92, 230–237. https://doi.org/10.1016/j.chb.2018.11.017.
Daniels, J. (2009). Cyber racism: White supremacy online and the new attack on civil rights. Lanham: Rowman & Littlefield Publishers.
Dates, J. L., & Moody-Ramirez, M. (2018). Blackface to Black Twitter: Reflections on black humor, race, politics, & gender. New York: Peter Lang.
Davis, J. L., & Chouinard, J. B. (2016). Theorizing affordances: From request to refuse. Bulletin of Science, Technology & Society, 36(4), 241–248. https://doi.org/10.1177/0270467617714944.
De Koster, W., & Houtman, D. (2008). ‘Stormfront is like a second home to me’: On virtual community formation by right-wing extremists. Information, Communication & Society, 11(8), 1155–1176. https://doi.org/10.1080/13691180802266665.
Hale, W. C. (2012). Extremism on the World Wide Web: A research review. Criminal Justice Studies, 25(4), 343–356. https://doi.org/10.1080/1478601X.2012.704723.


DiAngelo, R., & Dyson, M. E. (2018). White fragility: Why it’s so hard for white people to talk about racism. Boston: Beacon Press.
Duggan, M. (2017). Online harassment 2017. Pew Research Center. Retrieved from https://www.pewinternet.org/2017/07/11/online-harassment-2017/
Farkas, J., Schou, J., & Neumayer, C. (2018). Cloaked Facebook pages: Exploring fake Islamist propaganda in social media. New Media & Society, 20(5), 1850–1867. https://doi.org/10.1177/1461444817707759.
Florini, S. (2014). Tweets, tweeps, and signifyin’: Communication and cultural performance on “Black Twitter”. Television & New Media, 15(3), 223–237. https://doi.org/10.1177/1527476413480247.
Foucault, M. (1982). The archaeology of knowledge. New York: Pantheon Books.
Fraser, N. (1990). Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, (25/26), 56. https://doi.org/10.2307/466240.
Gainous, J., & Wagner, K. M. (2014). Tweeting to power: The social media revolution in American politics. New York: Oxford University Press.
Gelber, K., & McNamara, L. (2016). Evidencing the harms of hate speech. Social Identities, 22(3), 324–341. https://doi.org/10.1080/13504630.2015.1128810.
Graham, R. (2016). Inter-ideological mingling: White extremist ideology entering the mainstream on Twitter. Sociological Spectrum, 36(1), 24–36. https://doi.org/10.1080/02732173.2015.1075927.
Graham, R., & Smith, ‘S. (2016). The content of our #characters: Black Twitter as counterpublic. Sociology of Race and Ethnicity, 2(4), 433–449. https://doi.org/10.1177/2332649216639067.
Habermas, J. (1999). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (10th printing). Cambridge, MA: MIT Press.
Hill, M. L. (2018). “Thank you, Black Twitter”: State violence, digital counterpublics, and pedagogies of resistance. Urban Education, 53(2), 286–302. https://doi.org/10.1177/0042085917747124.
Hinduja, S. (2008). Deindividuation and internet software piracy. Cyberpsychology & Behavior, 11(4), 391–398. https://doi.org/10.1089/cpb.2007.0048.
Jackson, S. J., & Foucault Welles, B. (2016). #Ferguson is everywhere: Initiators in emerging counterpublic networks. Information, Communication & Society, 19(3), 397–418. https://doi.org/10.1080/1369118X.2015.1106571.
Jakubowicz, A. H. (2017). Alt_Right white lite: Trolling, hate speech and cyber racism on social media. Cosmopolitan Civil Societies: An Interdisciplinary Journal, 9(3), 41–60. https://doi.org/10.5130/ccs.v9i3.5655.
Jensen, K. B. (2015). What’s social about social media? Social Media + Society, 1(1), 2056305115578874. https://doi.org/10.1177/2056305115578874.
Jensen, K. B., & Helles, R. (2011). The internet as a cultural forum: Implications for research. New Media & Society, 13(4), 517–533. https://doi.org/10.1177/1461444810373531.
Klein, A. (2012). Slipping racism into the mainstream: A theory of information laundering. Communication Theory, 22(4), 427–448. https://doi.org/10.1111/j.1468-2885.2012.01415.x.
Kuo, R. (2018). Racial justice activist hashtags: Counterpublics and discourse circulation. New Media & Society, 20(2), 495–514. https://doi.org/10.1177/1461444816663485.
LaFree, G. (2018). Is Antifa a terrorist group? Society, 55(3), 248–252. https://doi.org/10.1007/s12115-018-0246-x.
Lai, C.-Y., & Tsai, C.-H. (2016). Cyberbullying in the social networking sites: An online disinhibition effect perspective. In Proceedings of the 3rd multidisciplinary international social networks conference on social informatics 2016, data science 2016 – MISNC, SI, DS 2016 (pp. 1–6). https://doi.org/10.1145/2955129.2955138.
Le Bon, G. (2001). The crowd: A study of the popular mind. Mineola: Dover Publications.
Lee, L. (2017). Black Twitter: A response to bias in mainstream media. Social Sciences, 6(1), 26. https://doi.org/10.3390/socsci6010026.


Lemke, R., & Weber, M. (2017). That man behind the curtain: Investigating the sexual online dating behavior of men who have sex with men but hide their same-sex sexual attraction in offline surroundings. Journal of Homosexuality, 64(11), 1561–1582. https://doi.org/10.1080/00918369.2016.1249735.
Lowry, P. B., Zhang, J., Wang, C., & Siponen, M. (2016). Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research, 27(4), 962–986. https://doi.org/10.1287/isre.2016.0671.
Mac Donald, H. (2018). The diversity delusion: How race and gender pandering corrupt the university and undermine our culture.
MacKinnon, R. (2013). Consent of the networked: The worldwide struggle for Internet freedom (Paperback ed.). New York: Basic Books.
Maragh, R. S. (2018). Authenticity on “Black Twitter”: Reading racial performance and social networking. Television & New Media, 19(7), 591–609. https://doi.org/10.1177/1527476417738569.
Martini, M. (2018). Online distant witnessing and live-streaming activism: Emerging differences in the activation of networked publics. New Media & Society, 20(11), 4035–4055. https://doi.org/10.1177/1461444818766703.
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130.
Matsuda, M. J. (1989). Public response to racist speech: Considering the victim’s story. Michigan Law Review, 87(8), 2320. https://doi.org/10.2307/1289306.
Meddaugh, P. M., & Kay, J. (2009). Hate speech or “reasonable racism?” The other in Stormfront. Journal of Mass Media Ethics, 24(4), 251–268. https://doi.org/10.1080/08900520903320936.
Mikal, J. P., Rice, R. E., Kent, R. G., & Uchino, B. N. (2016). 100 million strong: A case study of group identification and deindividuation on Imgur.com. New Media & Society, 18(11), 2485–2506.
Miškolci, J., Kováčová, L., & Rigová, E. (2018). Countering hate speech on Facebook: The case of the Roma minority in Slovakia. Social Science Computer Review. https://doi.org/10.1177/0894439318791786.
Muldowney, D. (2017, November 2). Doxx racists: How Antifa uses cyber shaming to combat the alt-right. Pacific Standard. Retrieved from https://psmag.com/news/doxxing-the-alt-right-racists.
Park, M., & Lah, K. (2017, February 2). Berkeley protests of Yiannopoulos caused $100,000 in damage. CNN. Retrieved from https://www.cnn.com/2017/02/01/us/milo-yiannopoulos-berkeley/index.html
Petray, T. L., & Collin, R. (2017). Your privilege is trending: Confronting whiteness on social media. Social Media + Society, 3(2), 2056305117706783. https://doi.org/10.1177/2056305117706783.
Postmes, T., & Spears, R. (2002). Behavior online: Does anonymous computer communication reduce gender inequality? Personality and Social Psychology Bulletin, 28(8), 1073–1083.
Prasad, P. (2016). Beyond rights as recognition: Black Twitter and posthuman coalitional possibilities. Prose Studies, 38(1), 50–73. https://doi.org/10.1080/01440357.2016.1151763.
Rainie, H., & Wellman, B. (2012). Networked: The new social operating system. Retrieved from http://public.eblib.com/choice/publicfullrecord.aspx?p=3339439
Royal, C., & Hill, M. L. (2018). Fight the power: Making #BlackLivesMatter in urban education: Introduction to the special issue. Urban Education, 53(2), 143–144. https://doi.org/10.1177/0042085917747123.
Sharma, S. (2013). Black Twitter? Racial hashtags, networks and contagion. New Formations, 78(78), 46–64. https://doi.org/10.3898/NewF.78.02.2013.
Smith, A., & Anderson, M. (2018). Social media use in 2018. Washington, DC: Pew Research Center.

90

R. Graham

Stack, L. (2019, February 21). Over 1,000 hate groups are now active in United States, civil rights group says. The New York Times. Retrieved from https://www.nytimes.com/2019/02/20/us/hategroups-rise.html Sue, D. W. (2010). Microaggressions in everyday life: Race, gender, and sexual orientation. Hoboken: Wiley. Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3), 321–326. Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664. https://doi.org/10.2307/2089195. Tilly, C. (2004). Social movements, 1768–2004. Boulder: Paradigm Publishers. Tran, J., & Curtin, N. (2017). Not your model minority: Own-group activism among Asian Americans. Cultural Diversity and Ethnic Minority Psychology, 23(4), 499–507. https://doi.org/10.1037/cdp0000145. Tufekci, Z. (2014). The medium and the movement: Digital tools, social movement politics, and the end of the free rider problem: The end of the free rider problem. Policy & Internet, 6(2), 202–208. https://doi.org/10.1002/1944-2866.POI362. Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. New Haven: Yale University Press. Tufekci, Z., & Wilson, C. (2012). Social media and the decision to participate in political protest: Observations from Tahrir Square. Journal of Communication, 62(2), 363–379. https://doi.org/ 10.1111/j.1460-2466.2012.01629.x. Vilanova, F., Beria, F. M., Costa, Â. B., & Koller, S. H. (2017). Deindividuation: From Le Bon to the social identity model of deindividuation effects. Cogent Psychology, 4(1). https://doi.org/ 10.1080/23311908.2017.1308104. Vysotsky, S., & McCarthy, A. L. (2017). Normalizing cyberracism: A neutralization theory analysis. Journal of Crime and Justice, 40(4), 446–461. https://doi.org/10.1080/0735648X. 2015.1133314. Wachs, S., & Wright, M. (2018). 
Associations between bystanders and perpetrators of online hate: The moderating role of toxic online disinhibition. International Journal of Environmental Research and Public Health, 15(9), 2030. https://doi.org/10.3390/ijerph15092030. Wall, D. S. (1998). Catching cybercriminals: Policing the internet. International Review of Law, Computers & Technology, 12(2), 201–218. https://doi.org/10.1080/13600869855397. Zimring, C. A. (2015). Clean and white: A history of environmental racism in the United States. New York: New York University Press.

5
The Dark Web as a Platform for Crime: An Exploration of Illicit Drug, Firearm, CSAM, and Cybercrime Markets

Roberta Liggett, Jin R. Lee, Ariel L. Roddy, and Mikaela A. Wallin

Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
The Dark Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Drug Markets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
  The Shift from Offline to Online Drug Markets . . . . . . . . . . . . . . . . . . . . . . 94
  Purchasing and Receiving Drugs Online . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Firearm Markets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
  From the Underground to the Dark Web Firearm Markets . . . . . . . . . . . . . . . 98
  Purchasing Firearms from Dark Web Markets . . . . . . . . . . . . . . . . . . . . . . . 98
  Receiving Firearms from Dark Web Markets . . . . . . . . . . . . . . . . . . . . . . . . 99
Cybercrime Markets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
  Cybercrime Market Actors and Operations . . . . . . . . . . . . . . . . . . . . . . . . . 101
  Product Pricing and Cybercrime Revenue . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Child Sexual Abuse Markets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
  Extent of Child Sexual Abuse Material Online . . . . . . . . . . . . . . . . . . . . . . . 105
  Online Child Sexual Abuse Markets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
  Commercially Available Child Abuse Material . . . . . . . . . . . . . . . . . . . . . . . 106
  Policing and Investigating Dark Web CSAM Markets . . . . . . . . . . . . . . . . . . 108
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Cross-References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

Abstract

Widespread adoption of the Internet and mobile technologies has allowed for easy access to a variety of global services. Despite the many legitimate goods and services provided by online retailers, illicit criminal markets have also proliferated online. Illicit online markets capitalize on the anonymity and global nature of the Internet, creating challenges for law enforcement investigations. This chapter provides an overview of the Dark Web and the current literature on illicit online markets operating on the Dark Web related to drugs, firearms, cybercrime services, and child sexual exploitation. The overview focuses on Dark Web marketplaces, although several of these products are also bought, sold, and traded on publicly accessible sites. Special attention is dedicated to market forces and their processes.

Keywords

Cybercrime · Dark web · Drug markets · Firearms · Child exploitation

R. Liggett (*) · J. R. Lee · A. L. Roddy · M. A. Wallin
School of Criminal Justice, Michigan State University, East Lansing, MI, USA
e-mail: [email protected]; [email protected]; [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_17

Introduction

Widespread adoption of the Internet and mobile technologies has allowed for easy access to a variety of global services. Online consumerism and e-commerce, for instance, form a burgeoning industry that amounts to approximately $2.3 trillion USD in global sales (Clement 2019). An analysis of Internet consumer behavior conducted by the Pew Research Center (2016) found that eight in ten Americans buy goods and services online. Similar trends were exhibited in the United Kingdom, as Great Britain witnessed a record number of online sales in 2018 (Barnes 2018). It is argued that the growth of online consumerism stems from its convenience, the ability to review customer ratings before purchasing new items (Smith and Anderson 2016), and a perception that buying items online is cheaper (Wilson 2011).

Despite the many legitimate goods and services provided by online retailers, illicit criminal markets have also proliferated online. Illicit markets related to drugs, firearms, cybercrime services, and child sexual abuse material have capitalized on the anonymity and global nature of the Internet, creating challenges for law enforcement investigations. Most concerning for law enforcement is the Dark Web, a layer of the Internet that allows for anonymous access. The Dark Web has fostered the sale of illicit goods such as drugs, firearms, stolen data, and child exploitation material, among other products. For example, a coordinated law enforcement investigation of Dark Web marketplaces led to the arrest of 61 individuals living in 17 different countries who had used a variety of Dark Web sites to sell counterfeit goods and currency, cybercrime services, fraudulent documents, drugs, and firearms and to facilitate human trafficking (Federal Bureau of Investigation 2019).
The purpose of this chapter is to provide an overview of the Dark Web and the current literature on illicit online markets operating on the Dark Web related to drugs, firearms, cybercrime services, and child sexual exploitation. The overview focuses on Dark Web marketplaces, although several of these products are also bought, sold, and traded on publicly accessible sites. Special attention is dedicated to market forces and their processes.


The Dark Web

The Internet is a large, global network of connected computers and devices that allows users to access information stored on the World Wide Web. The World Wide Web is an information system that links documents together through hypertext links, which allow users to move from one document to another with ease (East 2017; Gehl 2018). Large search engines such as Google, Yahoo, and Bing use these links to index different documents, making it easier for users to explore and locate information (East 2017).

The Internet also exists in several layers of access. The surface web (or open web) comprises all websites that are easily accessible to the public because they are indexed by large search engines. However, some websites are not accessible to the public because they are not indexed by these search engines. This layer of the Internet is referred to as the deep web (East 2017; Gehl 2018; Symanovich 2019). In order to access deep web websites, one must directly type the URL – or web address – into a web browser like Chrome, Internet Explorer, or Safari. Other deep web websites are hidden from the general public because they are password protected or encrypted. The deep web thus holds content that is invisible to search engines (Symanovich 2019).

An example of the distinction between the open web and the deep web is online banking. An individual can use any highly frequented search engine to look up the online banking homepage of their bank of choice. However, the online banking system is password protected, forcing users to log in to access their personal account. While homepages may be indexed by various search engines, thus existing on the open web, the actual content of the banking account is invisible to the public and therefore part of the deep web. It is estimated that the deep web makes up the majority of the Internet (Gehl 2018; Symanovich 2019).
A computer is identified through an Internet Protocol (IP) address, allowing it to communicate with a network. The IP address acts similarly to a home address in its ability to identify and locate a device or computer accessing the Internet. IP addresses also link browsing history to user identities (Gehl 2018). In order to remain anonymous on the Internet, users can download special cryptographic (i.e., location-hiding) software that allows them to access the Internet and World Wide Web without exposing their computer's physical address and while keeping their browsing history private. The most popular such software is The Onion Router (Tor), an open-source program that routes users' traffic through a series of virtual servers rather than making a direct connection (East 2017). Onion routing ensures privacy by using multiple layers of encryption (like an onion): each device a packet passes through knows only the previous and next device, not the origin of the packet or where it will eventually terminate (East 2017). This prevents Internet service providers (ISPs) from recording users' browsing history or physical location, enabling relatively private communications on the Dark Web. In essence, the Dark Web can anonymize a user's physical location, the contents of their browsing, and their online communications with others.
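The layered-encryption idea just described can be illustrated with a toy sketch. This is not Tor's actual protocol (Tor negotiates real public-key and symmetric ciphers per circuit); the XOR "cipher," relay names, and keys below are invented purely to show how each relay peels exactly one layer and learns only the next hop, never the full path or (except the exit) the final content:

```python
import json
from base64 import b64encode, b64decode

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher (XOR is symmetric: applying it
    # twice with the same key restores the original bytes).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, route: list) -> bytes:
    """Wrap a message in one encryption layer per relay.

    route is a list of (relay_name, relay_key) pairs from entry to exit.
    The innermost layer (for the exit relay) carries the message itself.
    """
    blob = _xor(
        json.dumps({"next": None, "payload": b64encode(message).decode()}).encode(),
        route[-1][1],
    )
    # Wrap outward: each earlier relay's layer names the relay that follows it.
    for (name, key), (next_name, _) in zip(reversed(route[:-1]), reversed(route[1:])):
        inner = json.dumps({"next": next_name, "payload": b64encode(blob).decode()})
        blob = _xor(inner.encode(), key)
    return blob  # handed to the entry relay

def peel(blob: bytes, key: bytes):
    """A relay removes its own layer, learning only the next hop."""
    layer = json.loads(_xor(blob, key))
    return layer["next"], b64decode(layer["payload"])

# Each relay sees one hop; only the exit relay recovers the request.
route = [("entry", b"key-1"), ("middle", b"key-2"), ("exit", b"key-3")]
blob = build_onion(b"GET /hidden-service", route)
hop, blob = peel(blob, b"key-1")  # entry relay learns only "middle"
hop, blob = peel(blob, b"key-2")  # middle relay learns only "exit"
hop, msg = peel(blob, b"key-3")   # exit relay: hop is None, msg is the request
```

Because no single relay sees both the sender's address and the destination, observers at any one point in the path cannot link the two, which is the property the chapter attributes to Tor.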


Tor also permits website hosting: websites hosted on Tor can hide their server locations and are accessible only to users who have downloaded and use Tor software. These hidden websites, or "hidden services," comprise what is generally characterized as the Dark Web and account for only 1.5% of Tor web traffic (Ward 2014). While cryptographic software like Tor allows users to access the Internet anonymously, those users can still reach conventional websites on the open and deep web. The Dark Web, by contrast, is comprised of the hidden websites hosted on Tor that are completely invisible to users not running its software. Although the majority of Tor use is innocuous, some have taken advantage of this platform to sell a variety of illegal products under the guise of anonymity. The following sections outline four major illegal markets existing on the Dark Web: drugs, firearms, cybercrime goods and services, and child sexual abuse material.

Drug Markets

The manufacture and sale of illegal drugs in the United States remains the oldest and highest-grossing illegal industry, generating over $100 billion in revenue annually (Kilmer et al. 2014). However, online drug markets capture only a small percentage of total drug market revenue. For example, the Silk Road, the largest Dark Web marketplace involved in drug sales to date, was responsible for only $23 million in annual drug sales before it was dismantled in 2013 (Christin 2013).

Despite their seemingly small proportion of the total illicit drug market, there are several reasons why Dark Web drug markets are of mounting importance. First, the digital drug market is rapidly growing. Based on research conducted in 2012, the number of sellers listed on Dark Web marketplaces more than doubled from 220 to upward of 550 over the course of 10 months. In addition, the combined value of publicly available drug sales processed by Dark Web marketplaces increased approximately 38% over a 6-month period (Martin 2014). Second, the protection offered by encrypted websites poses a substantial problem for law enforcement interested in prosecuting drug retailers and consumers. According to current research, the encryption techniques employed by darknets (e.g., The Onion Router, or Tor) are considered unbreakable by police (Buxton and Bingham 2015). Finally, the use of online platforms for drug sales could give a wider network of vulnerable populations (e.g., children, adults in drug dependence recovery) access to dangerous and addictive substances (Barratt et al. 2016). It is imperative to better understand the underlying features of cryptomarkets, lest the use of this platform become endemic in the US drug economy.

The Shift from Offline to Online Drug Markets

Before the advent of online drug markets, drug sale and distribution involved in-person transactions and were largely limited by the expanse of a drug seller's
social and personal networks (Bancroft and Reid 2016). In addition, the exchange of goods for payment occurred in the same geographic location. Just as "street" suppliers sell almost exclusively to people within their personal or professional networks, the majority of drug buyers purchase illicit substances from people they know or have purchased drugs from before. Although convenient, the close physical proximity between sellers and customers left both parties vulnerable to violence and threats to personal safety. Barratt and colleagues (2016) conducted a survey comparing students' online and offline experiences buying drugs and found that users who bought drugs offline purchased them from individuals they knew directly or who were within one degree of removal from other personal contacts. When asked about the quality of the product they received, most students (56%) found the quality highly volatile, even when they purchased drugs from the same person (Barratt et al. 2016). In addition, even among individuals who purchased drugs from friends or people they knew directly, 14% of participants reported experiencing threats to their personal safety associated with this route of supply (Barratt et al. 2016). These hazards created a unique opportunity for online platforms to improve upon traditional means of drug dealing by reducing consumer vulnerability.

The movement of drug markets to online spaces is a recent phenomenon, beginning with the establishment of the Silk Road in 2011 (Christin 2013). Monikered the "eBay of illicit drugs" by several sites (Ormsby 2012; Pauli 2012), the Silk Road allowed buyers and sellers to facilitate the exchange of legal, controlled, and prohibited narcotics online.
Following its takedown by the Federal Bureau of Investigation (FBI) in 2013, a number of websites – though markedly smaller in size – emerged to fill the void left by the Silk Road, including Silk Road 2.0, the Cannabis Road, Agora, Pandora, and Evolution (Dolliver et al. 2018).

A variety of social forces facilitate the movement of the drug market online. Individuals utilizing cryptomarkets for drug purchasing and sales often report less violence, greater anonymity, and shorter supply chains as the main benefits of the new online market. Van Hout and Bingham (2014) used a case study approach to explore the experiences of ten Dark Web vendors. Participants emphasized several facets of the Dark Web that they favored over traditional street markets: Dark Web markets promoted "responsible" consumption of drugs, facilitated the distribution of high-quality products, and provided anonymizing features that increased feelings of security (Van Hout and Bingham 2014). Similarly, a study conducted by Barratt and colleagues (2016) found that participants were less likely to experience threats to personal safety or physical violence resulting from cryptomarket use compared with conventional drug distribution channels. Another social force moving drug sales online is anonymity. Encryption, discreet packaging, and addresses unlinked to consumers increase buyer protection from legal consequences (Van Hout and Bingham 2014). Furthermore, shorter supply chains (i.e., direct sale between producers and consumers without the interference of middlemen) result in less tampering with goods. Drugs are therefore less likely to be cut with other dangerous substances, resulting in purer, more consistent, and safer products (Aldridge and Décary-Hétu 2016).


Purchasing and Receiving Drugs Online

The movement from offline to online drug markets influences the ways drugs are sold and disseminated. Rather than strictly depending on a known network of consumers, sellers must appeal to a broad base of potential buyers. Drug retailers can now communicate with consumers over the Internet through discussion forums, online storefronts, and marketplaces, and provide prohibited substances without having to meet the consumer (Mikhaylov and Frank 2018). Thus, vendors are able to sell their goods to complete strangers without relying on middlemen to extend their limited networks.

The types of drugs sold in cryptomarkets differ from those in traditional street markets. Drug sellers arrested for offline drug sales are more likely to be charged with selling marijuana, cocaine, crack, PCP, or heroin, with few being charged for selling psychedelics (Caulkins and Reuter 2016; MacCoun and Reuter 1992). Using a web crawler, Aldridge and Décary-Hétu (2016) found that the drug types most commonly obtained through cryptomarkets were MDMA/ecstasy, cannabis, and LSD. However, recent research discovered that stimulants and hallucinogens composed over half of the 348 active drug items for sale on the Silk Road 2.0, an iteration of the first Silk Road marketplace (Dolliver 2015). While differences in these outcomes cannot definitively point to a shift in the contents of illicit drug markets, they hint at the possibility of specialization among cryptomarket hosts.

The method through which sellers and consumers exchange drugs online is markedly different from traditional street methods. Consumers who use Dark Web drug markets often purchase goods from individuals they do not know and can even purchase drugs directly from foreign countries (Martin 2014). Dark Web transactions are made with peer-to-peer currencies to facilitate completely anonymous sales (Bancroft and Reid 2016).
These trades are often mediated by bitcoin tumblers or escrow services, which hold funds until consumers confirm receipt of the goods they paid for, reducing the likelihood of fraud (Bradbury 2014). Since product quality and shipment discretion are largely unknown to the consumer, digital platforms disseminate this information through rating systems (Christin 2013). On most dark market websites, each seller profile displays the vendor's ranking among all sellers on the site, statistics on the number of successful transactions the seller has completed, the quality of past shipments, and a rating out of five stars. This system is meant to promote fair practices among suppliers and to signal to buyers which sellers are most trustworthy (Martin 2014). The rating system is a substantial departure from previous methods of drug sales, where drug quality was relatively inconsistent and information was disseminated by word of mouth or experienced personally (Bancroft and Reid 2016).

Goods purchased on the Dark Web are most often shipped through postal services and delivered to consumers, though not always directly to their front door. When providing shipping information to cryptomarket suppliers, it is common for
individuals to provide an address other than their place of residence (e.g., a vacant house, a place of business, a neighbor's residence, or a post box) to reduce the likelihood of prosecution if the package is intercepted (Martin 2014). Discussion forums hosted on the Dark Web offer detailed advice on avoiding the attention of postal and customs authorities. This advice includes vacuum sealing goods, using professional-looking, "business-style" printed envelopes (Christin 2013), and hiding illicit goods in legal items such as books or food containers (Aldridge and Askew 2017; Maddox et al. 2016; Van Hout and Bingham 2014). Buyers are advised to avoid ordering from countries with a reputation for exporting illicit drugs (e.g., the Netherlands or Colombia) and instead favor countries that routinely attract commercial traffic, such as the United States and Canada (Martin 2014).

Sellers can also distribute their products outside of postal services. Noncontact drug distribution in Russia generally involves an exchange in which the buyer and the seller agree on a place used as a hiding spot for drugs, as well as a time for pickup. Once the buyer has made the payment, they receive instructions on where to find the stashed drugs (Mikhaylov and Frank 2018).

Even when law enforcement agencies manage to detect a shipment of prohibited drugs, there remain significant obstacles to obtaining sufficient evidence for prosecution. Buyers using digital platforms are advised on discussion forums and seller question and answer (Q&A) pages to use pseudonyms and to have goods posted to addresses other than their place of residence (Martin 2014). Additionally, individuals who ship illicit goods take care to do so in amounts that trigger the lowest possible thresholds of punishment. If caught, the seller's criminal sentence is likely to be low and to underrepresent the extent of the crimes committed (Van Hout and Bingham 2014).
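The escrow-and-feedback arrangement described earlier can be sketched as a simple state machine. This is an illustrative model only, not any actual marketplace's code; the class names, state labels, and star-rating arithmetic below are assumptions made for the example:

```python
from dataclasses import dataclass, field

class Escrow:
    """Market-held funds: released to the seller only when the buyer
    confirms receipt; otherwise refundable (or arbitrated) on dispute."""

    def __init__(self, amount_btc: float):
        self.amount_btc = amount_btc
        self.state = "FUNDED"        # buyer has paid; the market holds the coins

    def confirm_receipt(self) -> float:
        if self.state != "FUNDED":
            raise ValueError(f"cannot release funds from state {self.state}")
        self.state = "RELEASED"      # coins go to the seller
        return self.amount_btc

    def dispute(self) -> float:
        if self.state != "FUNDED":
            raise ValueError(f"cannot refund from state {self.state}")
        self.state = "REFUNDED"      # coins return to the buyer
        return self.amount_btc

@dataclass
class SellerProfile:
    """Public reputation: completed-sale count and average star rating."""
    name: str
    stars: list = field(default_factory=list)

    def record_sale(self, rating: int) -> None:
        self.stars.append(rating)    # buyer leaves 1-5 stars on release

    @property
    def completed_sales(self) -> int:
        return len(self.stars)

    @property
    def average_rating(self) -> float:
        return sum(self.stars) / len(self.stars) if self.stars else 0.0

# One successful transaction: funds released, then feedback recorded.
vendor = SellerProfile("example_vendor")
deal = Escrow(amount_btc=0.015)
deal.confirm_receipt()               # buyer confirms delivery
vendor.record_sale(5)                # ...and leaves a five-star rating
```

The design point is that neither party must trust the other directly: the market holds the money until delivery is confirmed, and the accumulating sale count and star average substitute for the word-of-mouth reputation of street markets.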
Drug market research consistently illustrates the expanding nature of these Dark Web sites, with online drug purchases providing consumer incentives and protections not afforded in traditional markets. With buyer motivation largely shaped by these factors, the literature suggests that transactions will continue moving into virtual spaces. The power of legislation and regulation in shaping and producing underground illicit markets is therefore important to assess. The movement toward virtual drug marketplaces poses significant challenges to law enforcement in detecting, intercepting, and limiting these exchanges. Furthermore, the anonymity of the Dark Web leaves open the possibility that vulnerable populations – such as children – will obtain these illicit products. Investigation into these markets is paramount to public safety.

Firearm Markets

Anonymity, ease of access, and lack of regulation have also allowed for the growth of Dark Web firearm markets. While these markets suffer from a dearth of empirical investigation, they continue unfettered, allowing buyers to purchase anything from military-grade weapons to explosives. Though very little is known about the extent to which individuals purchase firearms on the Dark Web, anecdotal evidence
reveals the danger of unregulated Dark Web markets. In 2014, Alexander Mullings, a prisoner in the United Kingdom, used a smartphone to purchase eight weapons and hundreds of rounds of ammunition from a known and active Dark Web firearm supplier (Pleasance 2015). Nearly 2 years later, Ali David Sonboly killed 9 people, injured 21, and then killed himself using an unlicensed Glock 9 mm handgun and ammunition he had purchased from a Dark Web firearm vendor. The 18-year-old from Munich, Germany, used a PGP encryption program to arrange the purchase of his firearm and over 300 rounds of ammunition, paying with Bitcoin (Sengupta 2016). Despite the German government's tight gun control laws, the Dark Web provided a means of skirting local legislation. While the link to the Dark Web in these two cases was discovered by law enforcement, firearm markets operating on the Dark Web frequently go undetected and remain unknown to authorities.

Regardless of a country's legislative efforts to regulate the sale of firearms, the Dark Web provides a loophole for buyers and sellers to globally exchange unlicensed and untraceable weapons. Despite its continued use by individuals with ill intent, the Dark Web firearm market remains understudied. Understanding how Dark Web firearm markets operate and facilitate firearm purchases is vital to tackling the larger threat to public safety posed by gun violence, as firearm ownership is significantly related to increased homicide rates (Dahlberg et al. 2004; Hemenway 2011; Monuteaux et al. 2015; Siegel et al. 2013).

From the Underground to the Dark Web Firearm Markets

Within the United States, legislation (e.g., the Gun Control Act of 1968) has been implemented to regulate the sale, purchase, and possession of firearms. Although strict regulations and restrictions are implicated as a key factor in the creation of the underground and illicit gun market within the United States (Cook et al. 2007), firearms within the country are invariably regulated at the state level, easily accessible, and quickly obtainable. Given these gaps in legislation and implementation, firearms in the United States are less regulated than in the neighboring countries of Canada and Mexico. Research conducted by Cook et al. (2009) summarized firearm legislation in these respective locations, revealing that the United States is a primary supplier of illegal handguns and firearms to illicit underground markets. Specifically, the United States' lax firearm regulations, combined with the abundance of firearms in circulation, make the country a "relatively low-cost supplier to those who want to acquire illicit guns" (Cook et al. 2009, p. 268). Although data on suppliers of firearms on the Dark Web are limited, these findings suggest that legislative variations shape, and plausibly drive, buyers to illicit firearm markets.

Purchasing Firearms from Dark Web Markets

Though minimal empirical data on Dark Web firearm forums exist, two studies to date have explored Dark Web firearm market dynamics across forums and single
vendor shops. A RAND study by Paoli and colleagues (2017) took a single screen grab of 60 Dark Web firearm vendor forums to assess marketplace dynamics. Based on their estimates, there were 136 Dark Web firearm transactions per month, with prices notably higher than comparable "open web" or physical firearm shop pricing. The most common products sold on these Dark Web forums were pistols, rifles, and submachine guns (Paoli et al. 2017).

In the first empirical study of its kind, Copeland and colleagues (2019) conducted a qualitative assessment of six Dark Web single vendor shops selling firearms. Compared to forums, single vendor shops operated without third-party oversight, leaving buyers to evaluate their likelihood of falling victim to scams or law enforcement stings. On these shops, the process of purchasing firearms mimicked that of other e-commerce sites. The majority of vendors provided images and descriptions of each product and accepted various types of cryptocurrency as payment. The primary currency accepted was Bitcoin, with only two of the six vendors accepting alternative forms of payment. Consistent with Paoli and colleagues (2017), Copeland et al. (2019) revealed that 73% of the 105 weapons sold across these shops were semiautomatic handguns and revolvers, with the remaining 27% consisting of long guns, including both rifles and shotguns. Although the data included scrapes from single vendor shop platforms spanning 4 months, they did not provide information on the rate at which these items were purchased. In fact, shops frequently advertised new products but failed to update listings to reflect this promise.

Costs are markedly higher for firearms offered on the Dark Web than on open web platforms or in physical firearm shops (Copeland et al. 2019; Paoli et al. 2017).
Across sites, pricing for semiautomatic handguns varied from $531.80 to $1391.20, illustrating that individuals purchasing from Dark Web shops may be paying a significantly higher price. These findings are consistent with underground gun market scholarship, which reveals that illicit gun markets are more expensive than traditional firearm markets (Paoli et al. 2017). Notably, premiums on weapons exist largely because of the illegality that characterizes this market (Cook et al. 2007, 2015), with buyers paying more for the illegality and anonymity of the transaction.

Receiving Firearms from Dark Web Markets

Beyond navigating the Dark Web in general, purchasing a firearm there poses minimal barriers to prospective buyers. Similar to drugs, firearms place a unique burden on buyers to ensure they receive their product while avoiding detection. Yet, unlike drugs, firearms present an obstacle when it comes to concealment. Copeland and colleagues' (2019) study provided detailed accounts of shipping mechanisms designed to avoid detection. Shipping procedures varied, and the majority of shops offered a myriad of tips on avoiding "flags" with customs or border patrol. One site told buyers to correctly spell all information on return addresses, use local "Mom and Pop" shipping services, and ship to PO boxes as opposed to physical addresses (Copeland et al. 2019).

100

R. Liggett et al.

While buyers are advised to take particular precautions, some sellers went to great lengths to assure customers that they were trustworthy, legitimate, and known for avoiding detection through their shipping practices. To conceal firearms and weapons purchased from Dark Web vendors, the shop Black Market advertised that it would ship firearms in anything from computers to books (Copeland et al. 2019). Notably, not all vendors provided the same level of advice or shipping specifications. Given that firearms are often assembled and range in weight and size, sellers and consumers must navigate the practical difficulties of ensuring discretion and anonymity. The Dark Web firearms literature – while limited – generally suggests that these virtual spaces are predominantly populated by individuals who are unable to obtain firearms through legal means. Since individuals are free from identification on the Dark Web, they are able to evade restrictions that would otherwise prohibit their purchase or possession of firearms. Understanding how the Dark Web facilitates and expands the population of individuals able to possess firearms by illegal means is crucial for law enforcement efforts aimed at limiting and reducing the availability of firearms to illegal possessors.

Cybercrime Markets

The risks presented by computer hackers and cybercrime tools have soared in recent years due to their ability to steal sensitive data and compromise network systems (Brenner 2008; Chu et al. 2010; Denning 2011; Holt 2007; Holt and Lampke 2010; Wall 2007). Malicious software (e.g., viruses and Trojan horse programs), also known as malware, is particularly damaging, as it enables individuals to conduct more intricate and catastrophic attacks, often resulting in grave losses (Brenner 2008; Holt and Kilger 2008; Taylor et al. 2014). Although the amount of financial damage varies depending on the company and the severity of the attack, a 2018 report by Accenture and the Ponemon Institute estimated the average cost of malware attacks on a US company to be approximately $2.6 million USD annually (see Weinschenk 2019). It is important to note that not all cybercrime tools are used for illegal purposes, as individuals in the cybersecurity space often create, test, and implement tools to identify and exploit various defects in software (Gordon and Ma 2003; Holt and Kilger 2008; Holt et al. 2008; Taylor 1999). It is the application of these tools in unauthorized settings that constitutes illegal behavior (Holt 2013). Given the vast capabilities of these tools, online marketplaces that sell cybercrime products and services such as malware, stolen data, and hacking tools have emerged on both the open and Dark Web (Holt 2013; Smirnova and Holt 2017). Research on these online markets demonstrates that hackers both buy and sell tools to orchestrate attacks, as well as sell personal data (e.g., credit card and bank accounts, PINs, and customer information) acquired through data breaches (Chu et al. 2010; Franklin et al. 2007; Holt and Lampke 2010; Motoyama et al. 2011). These online cybercrime markets also offer spam and phishing-related services, including email lists for spamming (Chu et al. 2010; Franklin et al. 2007). In spite of this growing trend, there is a lack of knowledge around how Dark Web cybercrime markets operate, price services, negotiate deals, and distribute products and services (Dupont 2013, 2017). Understanding how online cybercrime markets operate (i.e., how products are produced, purchased, and distributed) is imperative in reducing the amount of harm caused by cybercriminals.

Cybercrime Market Actors and Operations

Cybercriminals involved in the illicit marketplace, including both buyers and sellers, converge in both open and Dark Web settings to engage in business transactions, where a large selection of services and products is exchanged. Products sold through illicit online markets generally fall within three broad categories: (1) stolen data from credit cards, credentials (such as login IDs and passwords), bank accounts, online payment accounts, and other personal identifying information; (2) cybercrime tools such as malware, hacking tools/packages, botnets, and phishing kits; and (3) cybercrime services such as cash-out services and consulting (Ablon 2018; Holt 2013; Leukfeldt et al. 2017). Online cybercrime markets are comprised of buyers and sellers with varying degrees of technological expertise (Ablon 2018). For instance, a small number of vendors are highly technical and create sophisticated hacking tools and malware which they sell online (Ablon 2018). These sophisticated members fill additional roles within the market as content experts and moderators (Ablon 2018). However, it seems that the majority of buyers and sellers on the Dark Web have little to no technical expertise. Buyers with limited technical skill and sophistication are able to procure stolen data, credentials, services, and pre-made tools in order to execute cyberattacks, illustrating that technological proficiency is no longer a required element of cybercrime offending (Ablon 2018). Numerous methods are used to acquire stolen data, including malware that compromises retailers’ customer data (Peretti 2009) and phishing attacks that use fraudulent emails and websites to harvest credentials (Holt and Bossler 2016; James 2005). Research suggests that Dark Web cybercrime markets operate within two primary environments: forums and shops (Li and Chen 2014; Smirnova and Holt 2017).
Media reports have indicated that Dark Web forums operate in ways similar to cybercrime markets on the open web (Li and Chen 2014). First, sellers post advertisements for their products (e.g., malware, stolen data, hacking services) on forums, indicating detailed properties of their products or services, along with pricing, payment, terms of service, and contact information (Franklin et al. 2007; Holt 2013; Holt and Lampke 2010). Once posted, interested buyers may respond to these advertisements if they wish to contact the seller. Some online markets use moderators, which can increase trust among buyers and sellers and ensure productive transactions (Holt et al. 2016). For example, moderators may ban members identified as rippers – those who scam others out of money – or verify sellers and their products to ensure legitimate transactions. Some sellers also offer multiple points of contact for negotiating prices and deals within their advertisements (Hutchings and Holt 2014).
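The advertisement structure described above (product details, pricing, payment options, terms of service, and contact points) is essentially a semi-structured record, and researchers scraping these forums typically code each listing into fields along these lines. The sketch below is purely illustrative; the field names and sample values are invented, not drawn from any actual listing or published codebook.

```python
# Hypothetical schema for coding a scraped market advertisement into
# structured fields for analysis. Field names and sample values are
# illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MarketListing:
    product: str                 # e.g., malware, stolen data, hacking service
    price_usd: float             # advertised price (often quoted in cryptocurrency)
    payment_methods: List[str]   # accepted currencies or escrow options
    terms_of_service: str        # refund/replacement rules stated by the seller
    contacts: List[str] = field(default_factory=list)  # off-forum contact points

# Coding a single (hypothetical) advertisement.
listing = MarketListing(
    product="credential dump (hypothetical)",
    price_usd=40.0,
    payment_methods=["Bitcoin"],
    terms_of_service="replacement within 24h if invalid",
    contacts=["forum private message"],
)
print(listing.product)  # credential dump (hypothetical)
```

Coding listings into a uniform schema like this is what allows researchers to compare pricing, payment, and contact conventions across forums.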


While products and services are often publicized through online advertisement threads (Fossi et al. 2008), transactions usually occur outside of the forum (Holt 2013). To complete transactions, sellers and buyers may adopt additional modes of communication that provide them with more privacy, such as private messaging apps or direct messaging features found on the forum. Money is also exchanged using multiple methods, with cryptocurrency being the most common. Since buyers and sellers could be working with different types of currency, the involved parties may hire an individual who can convert e-currency into usable currency for a fee. Sometimes, an escrow or “go-between” individual ensures that both product and money are properly exchanged between the involved parties (Department of Justice 2019). This method can build trust among buyers, who must navigate the risk of being scammed (Holt et al. 2016). One such forum, Wall Street Market, was recently taken down by a coordinated effort between multiple international law enforcement agencies (Department of Justice 2019). The Wall Street Market advertised multiple illegal products and services, including hacking tools and malware, and its disruption led to the arrest of three German nationals and one Brazilian citizen (Department of Justice 2019). As law enforcement closed in on Wall Street Market, the four arrested individuals – all administrators of the site – conducted an exit scam in which they stole nearly 11 million USD in virtual currency held in escrow and user accounts. Therefore, although escrow and moderators may increase trust among buyers and sellers, scams still occur within Dark Web transactions, with few avenues for recourse among buyers. Shops – the other environment in which cybercrime markets operate – are created by individual vendors who directly offer information and services to their buyers.
In a shop, a seller is able to post individual ads without being a member of a larger forum community. Shops operate independently of forums, allowing vendors to offer direct services to buyers (Smirnova and Holt 2017). Similar to the online drug market, after a transaction takes place, buyers are often allowed to leave positive or negative feedback on the vendor’s web page (Holt et al. 2015). The most common way to increase trust on Dark Web markets is through customer feedback (Zeller 2005). Online customer reviews have been shown to have positive effects on buyer trust (Pavlou and Dimoka 2006). Customer feedback reflects the quality of the product and the service provided by the seller (Holt 2013), and it influences the behavior of those engaging in cybercrime markets, as potential buyers are able to determine whether a vendor is reputable, legitimate, and safe. Behavior in these markets is thus shaped by social forces that attempt to maximize rewards and reduce risk for buyers and sellers (Holt and Lampke 2010). Similar to offline crimes, online offenders engage in various practices to avoid law enforcement detection. Offenders active in stolen data markets tend to target nations with fewer sanctions and arrest powers. That is, cybercriminals tend to target geographic areas with minimal extradition risk or use Tor-based services to anonymize their identity. This geographic targeting may explain why countries like the United States and major European Union countries are victimized at greater frequencies in both open and Dark Web cybercrime markets (Smirnova and Holt 2017). In fact, Franklin and colleagues (2007) found that the majority of stolen card data were from the United States and the United Kingdom, with smaller amounts coming from Canada, Brazil, Australia, France, and Germany. Similarly, Holt and Lampke (2010) found that the majority of credit and debit card numbers (e.g., dumps) came from the United States (31.5%), the European Union (27%), and Canada (22.9%). In a different study, Holt et al. (2016) found that Europe was the primary source of bank account information, credit and debit card numbers (e.g., dumps), Card Verification Values (CVVs), and fullz (i.e., personally identifiable information); the United States was the second most common source for dumps and fullz, Canada the second largest for bank accounts, and the United Kingdom the second largest category for CVV data (Holt et al. 2016). Greater international representation is available in cybercrime markets hosted on the open web, as the use of Tor may be challenging for those in countries with restrictive Internet access. In fact, Tor users appear to reside primarily in the United States, Russia, and various European Union and Middle Eastern countries (Tor 2019). These dynamics suggest that behaviors in online environments may be related to the social and economic conditions of actors’ physical locations (Castells 2002).
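The feedback-driven reputation dynamic described in this section can be illustrated with a toy score: the share of positive feedback a vendor has accumulated. This is only a sketch under invented assumptions; the vendor name, the neutral starting value, and the scoring rule are hypothetical, and real markets combine feedback with moderator vetting, escrow, and other trust signals.

```python
# Toy illustration of feedback-based vendor reputation. All names and
# values are hypothetical; this is not a model of any actual marketplace.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vendor:
    name: str
    feedback: List[int] = field(default_factory=list)  # +1 positive, -1 negative

    def leave_feedback(self, positive: bool) -> None:
        self.feedback.append(1 if positive else -1)

    def reputation(self) -> float:
        """Share of positive feedback; 0.5 (neutral) when no history exists."""
        if not self.feedback:
            return 0.5
        positives = sum(1 for f in self.feedback if f > 0)
        return positives / len(self.feedback)

vendor = Vendor("hypothetical_vendor")
for outcome in (True, True, True, False):
    vendor.leave_feedback(outcome)
print(vendor.reputation())  # 0.75
```

Buyers in the research reviewed here appear to treat such aggregate signals as a proxy for whether a vendor is reputable, legitimate, and safe, which is why ripper accusations and feedback manipulation carry so much weight in these markets.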

Product Pricing and Cybercrime Revenue

Pricing for cybercrime services varies depending on the product or service but often mimics pricing strategies seen in legal service industries. For example, stolen data may be priced based on desirability and quality, with higher-limit credit cards selling for higher fees (Holt and Lampke 2010). This allows criminals selling stolen data to potentially turn a profit of several thousand dollars (Holt et al. 2016). Online credentials are also sold on cybercrime markets, including username and password combinations that allow buyers to access various online accounts (Shulman 2010). As with other stolen data, web accounts on popular platforms, banking accounts, and other valuable accounts are sold at higher prices (Shulman 2010). However, as in other offline stolen goods markets, it is likely that data sellers receive only a small percentage of the true value of their product (Holt et al. 2016). Cybercrime services are also sold on illicit online markets. In the crime-as-a-service (CaaS) model, an individual creates malware and advertises its existence and capabilities in the underground market either directly or indirectly through an advertiser (Sood and Enbody 2013). In this way, less tech-savvy buyers are able to purchase preassembled malware in order to execute attacks. For instance, cybercriminals may sell access to botnets or botnet rentals, consulting services on how to set up botnets, email phishing schemes, and distributed denial of service (DDoS) attacks (Manky 2013; Sood and Enbody 2013). From there, buyers can purchase a number of installs, the amount of time a DDoS attack occurs, or the number of phishing emails sent. Similar to how stolen data are priced, cybercrime services are also commoditized based on the amount of time and expertise they consume (Symantec 2014). That is, as with hiring any other type of professional service, cybercrime services are charged based on how long they take to conduct and execute. Various cybercrime markets also offer their buyers bulletproof services to enhance their privacy and security from law enforcement detection – bulletproof services reduce the visibility and detectability of the actor so that even if the attack is identified, the offender’s identity remains hidden and obscured. Pricing is a significant force shaping cybercrime market behavior (Holt 2013). Services, tools, and data that are disproportionately priced can come under scrutiny, triggering a vetting process of the vendor and product (Holt 2013). In order to motivate buyers who are looking for good deals and quality products, vendors may use discounts and specials similar to other service-driven industries (Holt 2013). However, due to the hidden nature of final sale negotiations, it is difficult to accurately estimate the revenue and profit of these different cybercrime services.
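The time- and volume-based commoditization described above can be sketched as a simple rate model. The rates and discount below are invented purely for illustration; they are not actual market figures, and real negotiations are hidden from observers, as the section notes.

```python
# Hypothetical sketch of duration/volume-based pricing in a
# crime-as-a-service model. All rates are invented for illustration only.

def ddos_price(hours: float, hourly_rate: float = 25.0) -> float:
    """Price scales with how long the attack runs."""
    return hours * hourly_rate

def spam_price(emails: int, rate_per_thousand: float = 10.0) -> float:
    """Price scales with the volume of messages sent."""
    return emails / 1000 * rate_per_thousand

def with_discount(price: float, discount: float = 0.10) -> float:
    """Vendors may offer specials to attract buyers, as in legal retail."""
    return price * (1 - discount)

print(ddos_price(4))                  # 100.0
print(spam_price(50_000))             # 500.0
print(round(with_discount(100.0), 2)) # 90.0
```

The point of the sketch is structural: like legitimate professional services, these offerings are billed by duration or volume, which is why researchers can meaningfully compare advertised rates even when final negotiated prices stay hidden.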

Child Sexual Abuse Markets

The proliferation of the Internet and digital technology has allowed for easy engagement with sexual and erotic material online, facilitating an expansive online sexual market where material is shared, sold, and discussed (Blevins and Holt 2009; Castle and Lee 2008; Fisher and Barak 2000; Holt and Blevins 2010; Jones 2016; Skidmore et al. 2018). The online sexual market blurs several boundaries between legality, deviance, and illegality. Most concerning are networks and technologies that anonymize Internet access (e.g., Tor), as they have been implicated in the distribution and sale of exploitative material related to child pornography and online child sexual trafficking (Department of Justice 2016; Mitchell et al. 2011; Wortley and Smallbone 2006). The global nature of child exploitation crimes makes investigation and prosecution complex and challenging (UNODC 2015; Department of Justice 2016). In 2015, the Federal Bureau of Investigation partnered with several international law enforcement organizations during Operation Pacifier, an initiative to investigate and remove one of the largest child pornography websites on the Dark Web, known as “Playpen” (Federal Bureau of Investigation 2017). In total, 350 US citizens and 548 foreign nationals were arrested under a variety of charges related to the exploitation of children, illustrating how child pornography consumption, production, and distribution and the physical sexual abuse of children come together to form an exploitative global marketplace (Federal Bureau of Investigation 2017). The child sexual abuse marketplace facilitates real, long-term harm for children. Children experience serious psychological and emotional distress not only from the physical abuse and grooming involved in the creation of exploitative images but also from the knowledge that the image is continually circulated online – serving as a graphic record of their trauma (Department of Justice 2017).
A deeper understanding of child sexual abuse markets assists law enforcement officers and other stakeholders in policing and eradicating online child exploitation.


Extent of Child Sexual Abuse Material Online

Child sexual abuse material (CSAM) describes the visual depiction (e.g., images, videos, livestreamed abuse) of sexual acts such as intercourse, bestiality, masturbation, and sexual displays of genitalia involving children aged 17 years old and younger (Wolak et al. 2012). Although a true estimate of the amount of CSAM circulating online is unknown, multiple organizations collecting data on CSAM suggest the pervasive nature of this offense. Interpol, an international organization that coordinates worldwide police cooperation, currently houses a large database that contains over one million child pornography images for law enforcement use during investigations (Interpol 2018). In the United States, the National Center for Missing and Exploited Children (NCMEC) – a nonprofit organization created to assist law enforcement in the identification, investigation, and rescue of minor victims of sexual exploitation – has fielded nearly 43 million reports of child exploitation through its national CyberTipline to date (NCMEC 2019). The Department of Justice (2017) has noted that CSAM can be found on both the open and Dark Web, as well as on mobile devices. Research conducted by the University of Portsmouth found that 2% of Tor hidden services are dedicated to child sexual abuse (Ward 2014). However, approximately 80% of traffic to Tor hidden services was directed to these child abuse sites (Ward 2014). In other words, a large amount of Dark Web traffic is directed toward a small number of CSAM-specific websites (Ward 2014). In addition, Playpen, the large child pornography website operating on the Dark Web, was reported as having about 214,898 active users before being shut down by the Federal Bureau of Investigation in 2015 (Raymond 2015).

Online Child Sexual Abuse Markets

The Internet facilitates the formation of supportive online communities among individuals displaying deviant and criminal interests (Holt 2007; Holt et al. 2010). Similar to offline subcultural formation (Braithwaite 2000; Cohen 1955; Thomas 1975), individuals may feel alienated from conventional sexual norms and use the Internet to restore self-worth, build social connection, and validate their deviant interests (Durkin 1997; Durkin and Bryant 1999; Jenkins 2001; Holt et al. 2010; Quayle and Taylor 2002). Individuals who self-identify their sexual interest in children and child pornography can use the Internet to freely discuss their interests (Holt et al. 2010) or distribute CSAM online (Department of Justice 2016, 2017). In order to circumvent law enforcement activities, savvy individuals use anonymizing software or visit hidden websites on the Dark Web. The online CSAM market is unique among illicit markets in that the product is mostly offered for free. The majority of CSAM online is distributed through peer-to-peer (P2P) file sharing networks (Europol European Cybercrime Center 2013; INHOPE 2012; Wolak et al. 2012). P2P file sharing is a vast network that allows users to search for and download content (e.g., music, videos, and other files) from other network members’ computers for free. Individuals can join a P2P network by downloading specific software onto their personal computer, such as BitTorrent, eDonkey, and Gnutella. P2P networks allow users to quickly download and distribute files in a relatively anonymous manner, making them popular among CSAM users. Individuals consuming, producing, and distributing CSAM will often use the Dark Web to increase their anonymity and decrease law enforcement detection. Similarly, CSAM on the Dark Web is distributed for free among a subculture of users connected through forum sites hosted through an anonymous browser (e.g., Tor), with popular images displaying younger children engaging in more serious sexual acts (Department of Justice 2016; Europol European Cybercrime Center 2013). The way child sexual abuse material is generated and distributed online has changed substantially in the last decade. A survey of 3803 images and videos of CSAM found that 85.9% of graphic images depicting children under the age of 15 were created using webcam technology, often revealing children alone in their homes (Internet Watch Foundation 2015). Many images in circulation on the open web, Dark Web, and P2P networks consist of image stills taken from livestreamed abuse or, alternatively, are self-generated by the children themselves (Europol European Cybercrime Center 2013; Smith 2012). The integration of apps and social media has provided offenders with the ability to access, groom, and coerce children into producing sexual material, images of which are then siphoned into a greater CSAM network (Department of Justice 2016). The overlap between CSAM and sextortion – the threat to release intimate images unless more graphic images are produced – illustrates a coercive situation in which children are tricked into producing sexual images and then trapped in a cycle of abuse in which more are demanded (Liggett 2019; Wittes et al. 2016).

Commercially Available Child Abuse Material

A small amount of CSAM is commercially available through the Dark Web. Current estimates indicate that between 7.5% and 18% of CSAM is commercially sold online (Europol European Cybercrime Center 2013; INHOPE 2012). Commercially available abuse material is offered on publicly accessible websites as well as through the Dark Web. However, very little is understood about the market forces that shape online commercial CSAM or the differences between CSAM sold on the open web and the Dark Web. Although only a small amount of CSAM is commercially available, news agencies estimate that CSAM is a nearly three billion dollar (USD) industry (Pulido 2014). Research conducted on buyers and sellers of commercially exploitative material found that offenders fall within two categories: those who purchase or sell access to known child victims for physical or virtual sexual contact and those who purchase or sell CSAM images that they did not produce (Mitchell et al. 2011). According to this classification, it is apparent that online sexual exploitation involves the viewing, downloading, and dissemination of images, as well as the physical offline sexual harm to children committed during the production of content, demonstrating that profits are made at multiple points in the offense chain. Commercially available child pornography on the Dark Web generates revenue in a variety of ways. First, revenue can be obtained through direct pay-per-view methods whereby users pay to download CSAM or pay to view live sexual assaults against minors (Europol European Cybercrime Center 2013). The ability to pay and watch the abuse of children online is a hallmark of the child cybersex industry and reflects the overlap between multiple child exploitation crimes online. For instance, David Timothy Deakin, a 53-year-old American living in the Philippines, was arrested and charged with multiple counts of child exploitation for operating a CSAM website. Deakin would lure children into his apartment, and users would pay to watch or download a video stream of the sexual abuse of these victims (Mendoza 2017). The commercial value of CSAM often lies in the “newness” of the image, as many of the most popular images online have been in circulation for a number of years. Pricing for video clips may be as low as 10 dollars per download, while membership to CSAM sites may be priced at 50 dollars per month. New depictions of sexual abuse may generate up to 1000 dollars per download. Downloaded images or videos can be paid for using online payment services such as PayPal or cryptocurrencies such as Bitcoin (Smith 2014a). Second, revenue can be generated through third-party affiliate websites. According to Smith (2014b) of the Internet Watch Foundation, websites may advertise CSAM through sites hosting adult pornographic material and receive revenue when users click third-party advertisements. In other instances, users may be redirected to a third-party site with CSAM while trying to click on legitimate adult pornography links.
Some websites hosting CSAM have exploited technological advances to circumvent law enforcement monitoring and to trick online payment companies into believing that users are utilizing their services for the legitimate sale of adult pornography (see Smith 2014b for a technical report). Individuals may also advertise the physical sexual services of children on websites such as backpage.com or Craigslist, as part of the human sex-trafficking industry. Child sex trafficking describes the commercial industry in which children under the age of 17 are induced by force, fraud, or coercion to engage in sexual acts (Trafficking Victims Protection Act, 2000; UNODC 2018). Sex trafficking continues to be the most reported form of human trafficking worldwide and overwhelmingly targets women and girls (UNODC 2018). The emergence of webcam technology has also facilitated a rise in cybersex tourism, in which users pay to view sexual webcam shows featuring children (De Leon 2013). With the online migration of child exploitation through webcams and livestreaming, child pornography and trafficking have become entwined. For example, a Florida resident, David Paul Lynch, was sentenced to 330 years in prison for traveling abroad to engage in sexual activities with children as young as 6 years old, recording the encounters, and freely posting and sharing the CSAM online (Smith 2018). Abusers can therefore profit from the direct sale of children, as well as the digitally recorded aftermath.


The Internet allows traffickers to advertise the physical sexual abuse of minors as well as coerce and recruit minors into cybersex tourism (Smith et al. 2009; UNODC 2018). Offenders prey upon the vulnerabilities of young children and manipulate technology to coerce them into sexual exploitation (UNODC 2018). Many victims identified in CSAM are acquainted with those who produced the original images (Mitchell et al. 2011) or have self-produced the images (Smith 2014a). For instance, CSAM featuring young children is most likely generated within a family context, suggesting that parents and guardians are often the ones perpetrating this abuse (ECPAT 2018). In addition, increasing concern exists over the role of youth gangs in exploiting girls in their communities through social media platforms (Department of Justice 2016; Latonero et al. 2011). In sum, the commercial sale of child abuse material online spans multiple sexual offenses and envelops different offenders operating under various motivations.

Policing and Investigating Dark Web CSAM Markets

In order to successfully identify anonymous Dark Web users engaging with CSAM, federal law enforcement must rely on third-party reporting, international cooperation, and complex technical skills, making the illicit online sex market difficult to disrupt. The anonymous nature of Dark Web services adds further challenges for law enforcement hoping to stop distributors at the source. Investigating CSAM offenses is both psychologically and technically difficult for law enforcement. Law enforcement personnel assigned to Internet crimes against children units tend to experience vicarious trauma and are at risk for burnout, depression, and familial conflict (Craun et al. 2015). In addition, serious ethical and operational issues plague CSAM investigations. Recently, the Federal Bureau of Investigation was criticized for its operational decision to keep a large Dark Web CSAM website operating in order to gather information on users. Investigators kept the website running for weeks in order to collect enough information to charge hundreds of users, despite many believing that the FBI should have shut down the website immediately in order to prevent more images from being distributed (Raymond 2015). The Federal Bureau of Investigation has also used ethically ambiguous hacking strategies to identify anonymous users on the Dark Web. Such tactics have been highly criticized by lawyers and pose a dilemma for investigators who need to gather data on a hard-to-reach group of offenders, even if it means pushing the boundaries of the law (Conditt 2016; Nakashima 2016; Raymond 2015). Current research on the impact of P2P file sharing networks and commercial CSAM websites illustrates that only a small number of computers are responsible for a large proportion of circulated images (Smith 2014a; Wolak et al. 2012).
Partnerships among law enforcement, researchers, nonprofit organizations, and businesses can increase the information sharing that identifies these high-impact distributors, allowing for a more targeted removal of explicit content online. Technological advances in machine learning have also assisted in the removal and reporting of suspected CSAM on the open web (Reuters 2018). With these partnerships, social media platforms, law enforcement, and businesses can work together to prevent the spread, sale, and creation of child abuse material online.
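One concrete form such information sharing takes is hash matching: platforms compare digests of uploaded files against shared databases of previously identified material. The sketch below is a simplified illustration only; the hash set and file contents are placeholders, and production systems built around industry hash-sharing programs rely on curated databases and perceptual hashing that tolerates re-encoding, which a plain cryptographic digest does not.

```python
# Simplified sketch of hash-set matching against a shared database of
# known material. All file contents here are placeholders; real
# deployments use vetted, shared hash databases and perceptual hashes,
# not a bare SHA-256 comparison.
import hashlib

def digest(data: bytes) -> str:
    """Compute a hex digest of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# A platform would load this set from a vetted, shared hash database.
known_hashes = {digest(b"placeholder-known-file")}

def is_known(file_bytes: bytes) -> bool:
    """Flag an upload whose digest appears in the shared hash set."""
    return digest(file_bytes) in known_hashes

print(is_known(b"placeholder-known-file"))  # True
print(is_known(b"unrelated-upload"))        # False
```

The design choice matters for the policy argument above: because matching happens against a shared set, a small number of high-impact source files can be flagged across many platforms at once, supporting the targeted removal the research recommends.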

Conclusion

The growth of networks and technologies that anonymize Internet access has furthered the emergence of several online illegal marketplaces. This chapter outlines current research pertaining to Dark Web markets for drugs, firearms, cybercrime services, and child abuse material. This review shows that the changing landscape of online consumerism has permeated illegal industries. Specifically, the presence of anonymous networks and software has allowed illicit offline markets to thrive in online spaces, raising important questions regarding the globalization of criminal marketplaces and the steps law enforcement must take to disrupt them. The anonymous and ubiquitous nature of the Dark Web raises important concerns for both citizens and law enforcement. First, it is extremely difficult to identify key sellers and buyers online. Encryption and cryptographic features of the Dark Web, as well as other software such as Freenet and I2P, make criminal investigations more complex and resource-intensive. In fact, law enforcement officers often express that they require exceedingly sophisticated knowledge and specialized units to keep up with cybercrime issues (Burns et al. 2004; Harkin et al. 2018; Hinduja 2004; Holt and Bossler 2012; Holt et al. 2015, 2019; Huey 2002; Lee et al. 2019; Stambaugh et al. 2001). Although law enforcement agencies prioritize the investigation of CSAM online, they tend to dismiss other cybercrimes as less important, such as the sale of cybercrime services, stolen data, and cyberfraud (Dupont 2013). As a result, local law enforcement agencies are often understaffed and undertrained in the investigation of cybercrime issues, causing frustration for both law enforcement and cybercrime victims (Holt et al. 2019; Lee et al. 2019).
With online technology becoming more commonplace in society, cybercrime attacks that generate data loss, data manipulation, and unauthorized access to devices will increasingly require a reliable solution. Second, the integration of technology into the illegal sale of goods directly impacts private industry and business. For instance, the use of PayPal, Western Union, and other money services in the buying and selling of illegal goods means that business and corporations must take action to prevent cybercrimes. With large corporations such as Walmart, Target, and banks experiencing data breaches, important discussions related to responsibility link businesses with policing. As a result, it is increasingly difficult to know who has responsibility and control for the illegal activity of others and how business and law enforcement share the investigative duties. Third, the ease in accessibility of illicit products online may lead to increases in consumers. This is particularly troubling when considering the current opioid epidemic and mass shooting incidents seen in the United States. For instance, studies

110

R. Liggett et al.

have linked the prevalence of firearm ownership to heightened abuse of intimates (McFarlane et al. 1999) and threats with a firearm (Rothman et al. 2005). While Dark Web firearm markets vary in legality based on local legislation for both the buyer and seller, anecdotal and empirical evidence illustrates the dire need to assess and regulate these markets in an effort to increase public safety and reduce gun violence. When direct victims are involved (e.g., the child sex industry), it is unclear how the accessibility of abusive content translates into a growing market of victimization. In addition, the sharing of information in online communities allows users with less technical skill to engage in a variety of cybercrimes. Specifically, the sale of cybercrime services and products suggests that individual offenders do not need advanced technical knowledge to deface websites, steal data, and/or disrupt commerce. In an expanding technological world, cybercrime services can catastrophically fall into the wrong hands for the right price.

It is critical that scholarship continue to engage in research on illicit Dark Web markets. A deeper understanding of their main players, social forces, and the process of buying and selling illegal goods and services provides vital clues on how to intervene in and investigate these crimes. The online nature of these offenses has grave implications for local, federal, and international legislation. Future research should not only describe the nature of these criminal marketplaces but also frame research in an action-oriented manner that offers guidance on situational crime prevention, partnerships, and investigative strategy. In a world where technology is integrated into crime faster than legislation and policy can adapt, knowing more about these marketplaces can lead to greater preparedness and safety.

Cross-References

▶ Child Sexual Exploitation: Introduction to a Global Problem
▶ Counterfeit Products Online
▶ Cybercrime-as-a-Service Operations
▶ Data Breaches and Carding
▶ Identity Theft: Nature, Extent, and Global Response
▶ The Past, Present, and Future of Online Child Sexual Exploitation: Summarizing the Evolution of Production, Distribution, and Detection
▶ The Rise of Sex Trafficking Online

References

Ablon, L. (2018). Data thieves: The motivations of cyber threat actors and their use and monetization of stolen data. Santa Monica, CA: RAND Corporation.
Aldridge, J., & Askew, R. (2017). Delivery dilemmas: How drug cryptomarket users identify and seek to reduce their risk of detection by law enforcement. International Journal of Drug Policy, 41, 101–109. https://doi.org/10.1016/j.drugpo.2016.10.010
Aldridge, J., & Décary-Hétu, D. (2016). Hidden wholesale: The drug diffusing capacity of online drug cryptomarkets. International Journal of Drug Policy, 35, 7–15. https://doi.org/10.1016/j.drugpo.2016.04.020.

5

The Dark Web as a Platform for Crime: An Exploration of Illicit Drug. . .

111

Bancroft, A., & Reid, P. S. (2016). Concepts of illicit drug quality among darknet market users: Purity, embodied experience, craft and chemical knowledge. International Journal of Drug Policy, 35, 42–49. https://doi.org/10.1016/j.drugpo.2015.11.008.
Barnes, L. (2018, September 21). UK retail sees record online spending in 2018. PCR. Retrieved from https://www.pcr-online.biz/features/uk-retail-sees-record-online-spending-in-2018
Barratt, M. J., Lenton, S., Maddox, A., & Allen, M. (2016). ‘What if you live on top of a bakery and you like cakes?’ – Drug use and harm trajectories before, during and after the emergence of Silk Road. International Journal of Drug Policy, 35, 50–57. https://doi.org/10.1016/j.drugpo.2016.04.006.
Blevins, K. R., & Holt, T. J. (2009). Examining the virtual subculture of Johns. Journal of Contemporary Ethnography, 38, 619–648. https://doi.org/10.1177/0891241609342239.
Braithwaite, J. (2000). Shame and criminal justice. Canadian Journal of Criminology, 42(3), 281–298.
Bradbury, D. (2014). Unveiling the dark web. Network Security, 2014, 14–17. https://doi.org/10.1016/S1353-4858(14)70042-X.
Brenner, S. W. (2008). Cyberthreats: The emerging fault lines of the nation state. Oxford: Oxford University Press.
Burns, R. G., Whitworth, K. H., & Thompson, C. Y. (2004). Assessing law enforcement preparedness to address Internet fraud. Journal of Criminal Justice, 32(5), 477–493.
Buxton, J., & Bingham, T. (2015). The rise and challenge of dark net drug markets. Swansea: Global Drug Policy Observatory.
Castells, M. (2002). The internet galaxy: Reflections on the internet, business, and society. Oxford: Oxford University Press.
Castle, T., & Lee, J. (2008). Ordering sex in cyberspace: A content analysis of escort websites. International Journal of Cultural Studies, 11, 107–121. https://doi.org/10.1177/1367877907086395.
Caulkins, J. P., & Reuter, P. (2016). Dealing more effectively and humanely with illegal drugs. Crime and Justice, 46, 95–158. https://doi.org/10.1086/688458.
Christin, N. (2013). Traveling the Silk Road: A measurement analysis of a large anonymous online marketplace. In Proceedings of the 22nd international conference on World Wide Web (pp. 213–224).
Chu, B., Holt, T. J., & Ahn, G. J. (2010). Examining the creation, distribution, and function of malware on-line (pp. 1–183). Washington, DC: Department of Justice.
Clement, J. (2019). E-commerce worldwide – Statistics & facts. Statista. Retrieved from https://www.statista.com/topics/871/online-shopping/
Cohen, A. K. (1955). Delinquent boys: The culture of the gang. Glencoe: Free Press.
Conditt, J. (2016, August 23). FBI improved a Dark Web child pornography site, lawyer argues. Engadget. Retrieved from https://www.engadget.com/2016/08/23/fbi-improved-dark-web-childporn-site-lawyer/
Cook, P. J., Ludwig, J., Venkatesh, S., & Braga, A. A. (2007). Underground gun markets. The Economic Journal, 117. https://doi.org/10.1111/j.1468-0297.2007.02098.
Cook, P. J., Cukier, W., & Krause, K. (2009). The illicit firearms trade in North America. Criminology & Criminal Justice, 9, 265–286. https://doi.org/10.1177/1748895809336377.
Cook, P. J., Harris, R. J., Ludwig, J., & Pollack, H. A. (2015). Some sources of crime guns in Chicago: Dirty dealers, straw purchasers, and traffickers. The Journal of Criminal Law and Criminology, 104, 717.
Copeland, C., Wallin, M., & Holt, T. (2019). Assessing the practices and products of darkweb firearm vendors. Deviant Behavior, 40, 1–20.
Craun, S. W., Bourke, M. L., & Coulson, F. N. (2015). The impact of internet crimes against children work on relationships with families and friends: An exploratory study. Journal of Family Violence, 30(3), 393–402. https://doi.org/10.1007/s10896-015-9680-3.
Dahlberg, L., Ikeda, R., & Kresnow, M. (2004). Guns in the home and risk of a violent death in the home: Findings from a national study. American Journal of Epidemiology, 160, 929–936. https://doi.org/10.1093/aje/kwh309.
De Leon, S. (2013, July 17). Cyber-sex trafficking: A 21st century scourge. CNN News. Retrieved from https://www.cnn.com/2013/07/17/world/asia/philippines-cybersex-trafficking/index.html

Denning, D. E. (2011). Cyber conflict as an emergent social phenomenon. In Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 170–186). Hershey, PA: IGI Global.
Department of Justice. (2016). The national strategy for child exploitation prevention and interdiction. Retrieved from https://www.justice.gov/psc/file/842411/download
Department of Justice. (2017). Child pornography. Retrieved from https://www.justice.gov/criminal-ceos/child-pornography
Department of Justice. (2019, May 3). 3 Germans who allegedly operated Dark Web marketplace with over 1 million users face U.S. narcotics and money laundering charges. Department of Justice Press Releases. Retrieved from https://www.justice.gov/usao-cdca/pr/3-germans-who-allegedly-operated-dark-web-marketplace-over-1-million-users-face-us
Dolliver, D. S. (2015). Evaluating drug trafficking on the Tor Network: Silk Road 2, the sequel. International Journal of Drug Policy, 26, 1113–1123. https://doi.org/10.1016/j.drugpo.2015.01.008.
Dolliver, D. S., Ericson, S. P., & Love, K. L. (2018). A geographic analysis of drug trafficking patterns on the TOR network. Geographical Review, 108, 45–68. https://doi.org/10.1111/gere.12241.
Dupont, B. (2013). Cybersecurity futures: How can we regulate emergent risks? Technology Innovation Management Review, 3, 6–11.
Dupont, B. (2017). Bots, cops, and corporations: On the limits of enforcement and the promise of polycentric regulation as a way to control large-scale cybercrime. Crime, Law and Social Change, 67, 97–116.
Durkin, K. F. (1997). Misuse of the Internet by pedophiles: Implications for law enforcement and probation practice. Federal Probation, 14, 14–18.
Durkin, K. F., & Bryant, C. D. (1999). Propagandizing pederasty: A thematic analysis of the on-line exculpatory accounts of unrepentant pedophiles. Deviant Behavior: An Interdisciplinary Journal, 20, 103–127.
East, C. S. (2017). Demystifying the Dark Web. ITNOW, 59(1), 16–17. https://doi.org/10.1093/itnow/bwx007.
ECPAT. (2018). Trends in online child sexual abuse material. Bangkok. Retrieved from www.ecpat.org
Europol European Cybercrime Center. (2013). Commercial sexual exploitation of children online: A strategic assessment.
Federal Bureau of Investigation. (2017, May 5). ‘Playpen’ creator sentenced to 30 years: Dark Web ‘hidden service’ case spawned hundreds of child porn investigations. Federal Bureau of Investigation Press Release. Retrieved from https://www.fbi.gov/news/stories/playpen-creator-sentenced-to-30-years
Federal Bureau of Investigation. (2019, March 26). J-CODE announces 61 arrests in its second coordinated law enforcement operation targeting opioid trafficking on the darknet. Federal Bureau of Investigation Press Release. Retrieved from https://www.fbi.gov/news/pressrel/press-releases/j-code-announces-61-arrests-in-its-second-coordinated-law-enforcement-operation-targeting-opioid-trafficking-on-the-darknet
Fisher, W. A., & Barak, A. (2000). Online sex shops: Phenomenological, psychological, and ideological perspectives on internet sexuality (Vol. 3). Mary Ann Liebert. Retrieved from www.liebertpub.com.
Fossi, M., Johnson, E., & Turner, D. (2008). Symantec report on the underground economy. Symantec Security Response, 3(1), 77–82.
Franklin, J., Perrig, A., Paxson, V., & Savage, S. (2007, October). An inquiry into the nature and causes of the wealth of internet miscreants. In ACM conference on computer and communications security (pp. 375–388).
Gehl, R. W. (2018). Weaving the Dark Web: Legitimacy on freenet, Tor, and I2P. Cambridge, MA: MIT Press.
Gordon, S., & Ma, Q. (2003). Convergence of virus writers and hackers: Fact or fantasy? Cupertino: Symantec Security White paper.
Harkin, D., Whelan, C., & Chang, L. (2018). The challenges facing specialist police cyber-crime units: An empirical analysis. Police Practice and Research, 19(6), 519–536.


Hemenway, D. (2011). Risks and benefits of a gun in the home. American Journal of Lifestyle Medicine, 5, 502–511. https://doi.org/10.1177/1559827610396294.
Hinduja, S. (2004). Perceptions of local and state law enforcement concerning the role of computer crime investigative teams. Policing: An International Journal of Police Strategies and Management, 3, 341–357.
Holt, T. J. (2007). Subcultural evolution? Examining the influence of on- and off-line experiences on deviant subcultures. Deviant Behavior, 28, 171–198. https://doi.org/10.1080/01639620601131065.
Holt, T. J. (2013). Examining the forces shaping cybercrime markets online. Social Science Computer Review, 31(2), 165–177. https://doi.org/10.1177/0894439312452998.
Holt, T. J., & Bossler, A. M. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. London: Routledge.
Holt, T. J., & Kilger, M. (2008, April). Techcrafters and makecrafters: A comparison of two populations of hackers. In 2008 WOMBAT workshop on information security threats data collection and sharing (pp. 67–78). Amsterdam, Netherlands: IEEE.
Holt, T. J., & Lampke, E. (2010). Exploring stolen data markets online: Products and market forces. Criminal Justice Studies, 23, 33–50. https://doi.org/10.1080/14786011003634415.
Holt, T. J., & Blevins, K. R. (2010). Examining sex work from the client’s perspective: Assessing Johns using on-line data. Deviant Behavior, 28, 333–354. https://doi.org/10.1080/01639620701233282
Holt, T. J., & Bossler, A. M. (2012). Predictors of patrol officer interest in cybercrime training and investigation in selected United States police departments. Cyberpsychology, Behavior, and Social Networking, 15(9), 464–472.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2015). Policing cybercrime and cyberterror. Durham: Carolina Academic Press.
Holt, T. J., Soles, J., & Leslie, L. (2008, April). Characterizing malware writers and computer attackers in their own words. In 3rd international conference on information warfare and security (pp. 24–25).
Holt, T. J., Blevins, K. R., & Burkert, N. (2010). Considering the pedophile subculture online. Sexual Abuse: A Journal of Research and Treatment, 22(1), 3–24. https://doi.org/10.1177/1079063209344979.
Holt, T. J., Smirnova, O., Chua, Y. T., & Copes, H. (2015). Examining the risk reduction strategies of actors in online criminal markets. Global Crime, 16, 81–103. https://doi.org/10.1080/17440572.2015.1013211.
Holt, T. J., Smirnova, O., & Hutchings, A. (2016). Examining signals of trust in criminal markets online. Journal of Cybersecurity, 2(2), 137–145.
Holt, T. J., Lee, J. R., Liggett, R., Holt, K. M., & Bossler, A. (2019). Examining perceptions of online harassment among constables in England and Wales. International Journal of Cybersecurity Intelligence & Cybercrime, 2(1), 24–39.
Hutchings, A., & Holt, T. J. (2014). A crime script analysis of the online stolen data market. British Journal of Criminology, 55, 596–614. https://doi.org/10.1093/bjc/azu106.
Huey, L. J. (2002). Policing the abstract: Some observations on policing cyberspace. Canadian Journal of Criminology, 44, 243.
INHOPE. (2012). Annual report. European Union.
Internet Watch Foundation. (2015). Emerging patterns and trends report #1 online-produced sexual content. Retrieved from https://www.iwf.org.uk/sites/default/files/inline-files/Online-produced_sexual_content_report_100315.pdf
Interpol. (2018). Towards a global indicator on identified victims of child sexual exploitation material: Summary report. Retrieved from https://www.ecpat.org/wp-content/uploads/2018/03/TOWARDS-A-GLOBAL-INDICATOR-ON-UNIDENTIFIED-VICTIMS-IN-CHILD-SEXUAL-EXPLOITATION-MATERIAL-Summary-Report.pdf
James, L. (2005). Phishing exposed. Rockland: Syngress.
Jenkins, P. (2001). Beyond tolerance: Child pornography on the Internet. New York: New York University Press.
Jones, A. (2016). I get paid to have orgasms: Adult webcam models’ negotiation of pleasure and danger. Journal of Women in Culture and Society, 42(11), 227–256.


Kilmer, B., Everingham, S., Caulkins, J., Midgette, G., Pacula, R., Reuter, P., Burns, R., Han, B., & Lundberg, R. (2014). What America’s users spend on illegal drugs: 2000–2010. Santa Monica: RAND Corporation.
Latonero, M., Berhane, G., Hernandez, A., Mohebi, T., & Movius, L. (2011). Human trafficking online: The role of social networking sites and online classifieds. Retrieved from http://technologyandtrafficking.usc.edu
Lee, J. R., Holt, T. J., Burruss, G. W., & Bossler, A. M. (2019). Examining English and Welsh detectives’ views of online crime. International Criminal Justice Review, 1–20.
Leukfeldt, R., Kleemans, E., & Stol, W. (2017). The use of online crime markets by cybercriminal networks: A view from within. American Behavioral Scientist, 61, 1387–1402. https://doi.org/10.1177/0002764217734267.
Li, W., & Chen, H. (2014, September). Identifying top sellers in underground economy using deep learning-based sentiment analysis. In 2014 IEEE joint intelligence and security informatics conference (pp. 64–67). The Hague, Netherlands: IEEE.
Liggett, R. (2019). Exploring online sextortion offenses: Ruses, demands, and motivations. Sexual Assault Report.
MacCoun, R., & Reuter, P. (1992). Are the wages of sin $30 an hour? Economic aspects of street-level drug dealing. Crime & Delinquency, 38, 477–491. https://doi.org/10.1177/0011128792038004005.
Maddox, A., Barratt, M. J., Allen, M., & Lenton, S. (2016). Constructive activism in the dark web: Cryptomarkets and illicit drugs in the digital ‘demimonde.’ Information, Communication and Society, 19, 111–126. https://doi.org/10.1080/1369118X.2015.1093531
Manky, D. (2013). Cybercrime as a service: A very modern business. Computer Fraud and Security, 2013, 9–13. https://doi.org/10.1016/S1361-3723(13)70053-8.
Martin, J. (2014). Lost on the Silk Road: Online drug distribution and the ‘cryptomarket’. Criminology & Criminal Justice, 14, 351–367.
McFarlane, J. M., Campbell, J. C., Wilt, S., Sachs, C. J., Ulrich, Y., & Xu, X. (1999). Stalking and intimate partner femicide. Homicide Studies, 3, 300–316.
Mendoza, M. (2017, May 9). AP exclusive: Big child webcam sex bust reveals rising abuse. Associated Press. Retrieved from https://www.apnews.com/74b81f79e9024124a1cfe43a0ce9eec2
Mikhaylov, A., & Frank, R. (2018). Illicit payments for illicit goods: Noncontact drug distribution on Russian online drug marketplaces. Global Crime, 19, 146–170.
Mitchell, K. J., Jones, L. M., Finkelhor, D., & Wolak, J. (2011). Internet-facilitated commercial sexual exploitation of children: Findings from a nationally representative sample of law enforcement agencies in the United States. Sexual Abuse: A Journal of Research and Treatment, 23, 43–71.
Monuteaux, M. C., Lee, L. K., Hemenway, D., Mannix, R., & Fleegler, E. W. (2015). Firearm ownership and violent crime in the U.S. American Journal of Preventive Medicine, 49, 207–214.
Motoyama, M., McCoy, D., Levchenko, K., Savage, S., & Voelker, G. M. (2011, November). An analysis of underground forums. In Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference (pp. 71–80). Berlin, Germany: ACM.
Nakashima, E. (2016, January 21). This is how the government is catching people who use child porn sites. The Washington Post. Retrieved from https://www.washingtonpost.com/world/national-security/how-the-government-is-using-malware-to-ensnare-child-porn-users/2016/01/21/fb8ab5f8-bec0-11e5-83d4-42e3bceea902_story.html
National Center for Missing and Exploited Children. (2019). The issues: Child sexual abuse imagery. Retrieved from http://www.missingkids.com/theissues/sexualabuseimagery
Ormsby, E. (2012). The drug’s in the mail. The Age. http://www.theage.com.au/victoria/the-drugs-in-the-mail-20120426-1xnth.html. Accessed 1 Mar 2019.
Paoli, G. P., Aldridge, J., Ryan, N., & Warnes, R. (2017). Behind the curtain: The illicit trade of firearms, explosives and ammunition on the Dark Web. Retrieved 7 June 2018, from the RAND Corporation web site: http://www.rand.org/t/RR2091
Pauli, D. (2012). Aussie coppers bedevilled by online contraband networks. SC Magazine. http://www.scmagazine.com.au/Tools/Print.aspx?CIID=314984.0. Accessed 1 Mar 2019.


Pavlou, P. A., & Dimoka, A. (2006). The nature and role of feedback text comments in online marketplaces: Implications for trust building, price premiums, and seller differentiation. Information Systems Research, 17(4), 392–414.
Peretti, K. K. (2009). Data breaches: What the underground world of carding reveals. Santa Clara Computer & High Technology Law Journal, 25, 375.
Pleasance, C. (2015, February 26). Prisoner who used Parcelforce to import deadly 850-round-a-minute machine guns into the UK is jailed for life as judge slams lax jail security. Retrieved from https://www.dailymail.co.uk/news/article-2970878/Prisoner-used-Parcelforce-import-deadly-850-round-minute-machine-guns-UK-jailed-life-judge-slams-lax-jail-security.html
Pulido, M. L. (2014, January 23). Child pornography: Basic facts about a horrific crime. Huffington Post. Retrieved from https://www.huffpost.com/entry/child-pornography-basic-f_b_4094430
Quayle, E., & Taylor, M. (2002). Child pornography and the Internet: Perpetuating a cycle of abuse. Deviant Behavior, 23, 331–361.
Raymond, N. (2015, July 8). Two people in N.Y. charged in massive probe of child porn website. Reuters. Retrieved from https://www.reuters.com/article/us-usa-crime-childporn-idUSKCN0PI2CH20150708
Reuters. (2018). Facebook is using new software to identify and remove child nudity photos uploaded to the site. Retrieved from https://www.cnbc.com/2018/10/24/facebook-cracks-down-on-child-nudity-photos-using-machine-learning.html
Rothman, E. F., Hemenway, D., Miller, M., & Azrael, D. (2005). Batterers’ use of guns to threaten intimate partners. Journal of the American Medical Women’s Association, 60, 62–68.
Schneider, J. L. (2003). Hiding in plain sight: An exploration of the illegal(?) activities of a drugs newsgroup. The Howard Journal of Criminal Justice, 42(4), 374–389.
Sengupta, K. (2016). How teenager used the Dark Web to buy gun for Munich mass murder. Retrieved from https://www.standard.co.uk/news/world/how-teenager-used-the-dark-web-to-buy-gun-for-munich-mass-murder-a3330406.html
Shulman, A. (2010). The underground credentials market. Computer Fraud & Security, 2010, 5–8.
Siegel, M., Ross, C. S., & King, C. (2013). The relationship between gun ownership and firearm homicide rates in the United States, 1981–2010. American Journal of Public Health, 103, 2098–2105.
Skidmore, M., Garner, S., Desroches, C., & Saggu, N. (2018). The threat of exploitation in the adult sex market: A pilot study of online sex worker advertisements. Policing, 12, 210–218.
Smirnova, O., & Holt, T. J. (2017). Examining the geographic distribution of victim nations in stolen data markets. American Behavioral Scientist, 61(11), 1403–1426.
Smith, S. (2012). Study of self-generated sexually explicit images and videos featuring young people online. Retrieved from https://www.iwf.org.uk/sites/default/files/inline-files/IWF_study_self_generated_content_online_011112.pdf
Smith, S. (2014a). Briefing paper – preliminary analysis of new commercial CSAM website accepting payment by Bitcoin. Retrieved from https://www.iwf.org.uk/sites/default/files/inline-files/Preliminary_analysis_into_commercial_CSAM_distributor_accepting_bitcoin_sanitised_not_restricted_01014.pdf
Smith, S. (2014b). Rogue affiliates distributing CSAM using disguised websites. Retrieved from https://www.iwf.org.uk/sites/default/files/inline-files/Analysis_of_rogue_affiliates_commercial_public_0414.pdf
Smith, B. (2018, April 22). Florida man gets 330 years on child porn charges in ‘sex tourism’ trial. USA Today. Retrieved from https://www.usatoday.com/story/news/nation-now/2018/04/22/florida-man-gets-330-years-child-porn-charges-sex-tourism-trial/540295002/
Smith, A., & Anderson, M. (2016). Online shopping and e-commerce. Pew Research Center. Retrieved from http://www.pewinternet.org/2016/12/19/online-shopping-and-e-commerce/
Smith, L. A., Vardaman, S. H., & Snow, M. A. (2009). The national report on domestic minor sex trafficking. Retrieved from www.sharedhope.org


Sood, A. K., & Enbody, R. J. (2013). Crimeware-as-a-service – A survey of commoditized crimeware in the underground market. International Journal of Critical Infrastructure Protection, 6(1), 28–38.
Stambaugh, H., Beaupre, D. S., Icove, D. J., Baker, R., Cassady, W., & Williams, W. P. (2001). Electronic crime needs assessment for state and local law enforcement. Washington, DC: National Institute of Justice, NCJ 186276.
Symanovich, S. (2019). How to safely access the deep and dark webs. Norton. Retrieved from https://us.norton.com/internetsecurity-how-to-how-can-i-access-the-deep-web.html
Symantec. (2014). The future of mobile malware. Retrieved from http://www.symantec.com/connect/blogs/future-mobile-malware
Taylor, P. A. (1999). Hackers: Crime in the digital sublime. New York: Routledge.
Taylor, R. W., Fritsch, E. J., & Liederbach, J. (2014). Digital crime and digital terrorism. London, UK: Prentice Hall Press.
Thomas, W. I. (1975). Old world traits transplanted. Montclair: Patterson Smith.
Tor. (2019). About the Tor project. Retrieved from https://2019.www.torproject.org/index.html.en
UNODC. (2018). Global report on trafficking in persons. United Nations Office on Drugs and Crime. Retrieved from https://www.unodc.org/documents/data-and-analysis/glotip/2018/GLOTiP_2018_BOOK_web_small.pdf
Van Hout, M. C., & Bingham, T. (2014). Responsible vendors, intelligent consumers: Silk Road, the online revolution in drug trading. International Journal of Drug Policy, 25, 183–189. https://doi.org/10.1016/j.drugpo.2013.10.009
Victims of Trafficking and Violence Protection Act of 2000, 22 U.S.C. § 7101.
Wall, D. (2007). Cybercrime: The transformation of crime in the information age (Vol. 4). Cambridge, UK: Polity.
Ward, M. (2014). Tor’s most visited hidden sites host child abuse images. http://www.bbc.com/news/technology-30637010. Accessed 30 Aug 2016.
Weinschenk, C. (2019, March 6). Growing malware, malicious insider attacks contributing to $13M avg company cybersecurity costs. Retrieved 4 June 2019, from https://www.telecompetitor.com/growing-malware-malicious-insider-attacks-contributing-to-13m-avg-company-cybersecurity-costs/
Wilson, M. (2011). Accenture survey: Discounters continue to dominate back-to-school shopping. Chain Store Age. https://www.chainstoreage.com/article/accenture-survey-discounters-continue-dominate-back-school-shopping
Wittes, B. B., Poplin, C., Jurecic, Q., & Spera, C. (2016). Sextortion: Cybersecurity, teenagers, and remote sexual assault. Washington, DC: The Brookings Institution.
Wolak, J., Finkelhor, D., & Mitchell, K. J. (2012). Trends in arrests for child pornography possession: The Third National Juvenile Online Victimization Study. Retrieved from https://scholars.unh.edu/ccrc/46
Wortley, R. K., & Smallbone, S. (2006). Applying situational principles to sexual offenses against children. In R. K. Wortley & S. Smallbone (Eds.), Situational prevention of child abuse (Crime prevention studies, Vol. 19, pp. 7–36). Monsey: Criminal Justice Press.
Zeller, T. (2005). Black market in stolen credit card data thrives on Internet. New York Times. Retrieved from https://www.nytimes.com/2005/06/21/technology/black-market-in-stolen-credit-card-data-thrives-on-internet.html

6

Organized Crime and Cybercrime

Anita Lavorgna

Contents

Introduction . . . 118
Problematizing the “Cyber-Organized Crime” Narrative . . . 119
  The Problem with “Organized Crime” . . . 120
  The Problem with “Cyber” . . . 121
Organized Crime in Cyberspace: What Do We Know? . . . 122
  Organized Cybercriminals . . . 122
  “Traditional” Organized Crime Groups Operating Online . . . 125
Framing the Cyber-Organized Crime Narrative: cui prodest? . . . 127
Conclusion . . . 129
References . . . 131

Abstract

Over the past 15 years, it has been suggested that organized crime is or might be linked to cybercrime, and the term “cyber-organized crime” has come to be used by policymakers, law enforcement officials, media outlets, and academics to refer to a variety of criminal phenomena online. However, empirical evidence is still limited and inconclusive as to whether organized crime groups have moved their activities online or new criminal actors have created organized groups in cyberspace. This chapter clarifies important definitional issues on what actually qualifies as organized crime and cybercrime, presents state-of-the-art research on the relationships between organized crime and cybercrime, and critically discusses how the “cyber-organized crime” narrative has emerged and established itself in policymaking, media, and academic narratives. By the end of the chapter, it will be clear that organized crime has a presence in cyberspace, but the equalization of online criminal networks involved in serious cybercrimes and organized crime should not be taken for granted, especially bearing in mind the recommendations of organized crime scholars that we need something more than certain forms of organizational structure to have organized crime. Any criticism about pairing cybercrime and organized crime is not to deny the seriousness of cyber-related risks. Nonetheless, critical attention and rigor in the terminology employed are needed before crystallizing the use of the “cyber-organized crime” narrative, with important consequences in terms of resource allocation and action prioritization.

A. Lavorgna (*)
University of Southampton, Southampton, UK
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_14

Keywords

Organized crime · Criminal networks · Co-offending · Cybercrime · Cyber-organized crime · Trafficking · Criminal adaptability · National security · Serious crime

Introduction

Cyberspace has not only impacted criminal behaviors and activities but has also intervened in patterns of relationships between criminal actors, affecting the organizational life of crime. As cybercrime has become increasingly important in national and international security agendas, some characteristics of organized crime have been attributed to cybercriminality (see, among others, Williams 2001; Choo and Smith 2007; Grabosky 2007; McCusker 2011; McGuire 2012; Tropina 2013; Hutchings 2014; Leukfeldt 2015). In addition, organized crime is often presented by policymakers and cybersecurity companies as a major actor in cyberspace. However, while the organization of crime in cyberspace is an expanding research area, in many aspects it is still in its infancy. Empirical evidence is still scarce as to whether “old” organized crime groups have moved their activities online or new criminal actors have created organized groups in cyberspace. Nonetheless, existing evidence points to the presence in cyberspace of loose, flat, and fluid networks, generally without a common functional unit (Wall 2014, 2015).

This chapter provides an overview of the relationship between organized crime and cybercrime. First, it will briefly clarify some definitional issues about what actually qualifies as organized crime. It will then present state-of-the-art evidence on the presence of organized crime in cyberspace (considering both “new” organized cybercriminals and “old” organized crime groups). Finally, it will critically discuss the extent to which the “cyber-organized crime” narrative has made inroads. The findings presented support the idea that organized crime is active in cyberspace; nonetheless, its presence is probably far more limited than certain publications suggest.
Furthermore, the limited reliable data available do not allow an exact comparison to be drawn between the existing criminological conceptualization of organized crime and the newly emerging criminal networks operating online.

6

Organized Crime and Cybercrime

119

Problematizing the “Cyber-Organized Crime” Narrative

As outlined above, the association between cybercrime and organized crime has so far been based on limited empirical evidence, little research having been carried out on how different types of organized crime groups use cyberspace to carry out their traditional activities. Likewise, there has been scant research on whether and to what extent new illicit online activities have emerged as a result of the digital shift in organized crime (Carrapico and Lavorgna 2015; Leukfeldt et al. 2017b). Nonetheless, over the past 15 years, the term “cyber-organized crime” has come to be used by policymakers, law enforcement officials, and academics to refer to a variety of phenomena. The term has also attracted media attention. The problem is that the adjective “organized” is often used as an attribute of criminal activities perceived as serious or of criminal networks perceived as sophisticated, hence stretching the meaning of “organized crime” beyond its definitional limits (Lavorgna and Sergi 2016). When the organized crime narrative is employed in ambiguous and inconsistent ways, it matters not only from an academic point of view (because this ambiguity hampers our understanding of the criminal phenomenon) but also for concrete reasons. The reductionism implied by pairing “cyber” and organized crime confuses the public and policymakers alike and can also lead to the misdirection of police resources (Wall 2015). In fact, views on what organized crime is and what it is not differentially influence the practical work of law enforcement, as labelling something as organized crime generally triggers greater investigative powers (especially as regards intrusive surveillance measures) and tougher sentences in many countries (Levi 1998; Ashby 2016; Lavorgna 2018b) (see ▶ Chaps. 8, “Surveillance, Surveillance Studies, and Cyber Criminality” and ▶ 19, “Police and Extralegal Structures to Combat Cybercrime”).
This debate is not new in organized crime research, where scholars have already criticized the very loose and vague definitions of organized crime which set extremely low standards for the inclusion of different and diverse phenomena (Carrapico 2014). Historically, this dilation of the organized crime umbrella has been used to express social and political anxieties about the growth of (offline) illegal markets during the expansion of globalization, so as to bring a variety of threats affecting both the economic and social spheres under the label of “organized crime” (Paoli 2002; van Duyne and Vander Beken 2009; Carrapico 2014). Now, the chief anxieties are those informing the mythology of cybercrime (Wall 2008), reflecting public fear concerning cyber-related risk. Indeed, as explained by Wall (2008), a number of myths about cyberspace and cybercrime originate in fiction rather than in fact; they blur the line between what is real, what is not, and what might exist only in the realm of technical scientific possibilities, shaping expectations about online risk (see ▶ Chap. 3, “Technology Use, Abuse, and Public Perceptions of Cybercrime”).

120

A. Lavorgna

The Problem with “Organized Crime”

The underlying problem is that the notion of organized crime is deeply imbued with historical and cultural elements, covering a whole range of different types of crime and crime groups in different countries. For instance, while in Northwestern Europe the element of cross-border trade is emphasized, in the United States organized crime tends to refer to mafia-style local organizations or international criminal groups involved in illegal trades. The presence of a “political coalition” dimension is stressed in countries such as Australia and Japan, whereas Italian scholarship often reduces organized crime to the mafia. The European Union is mainly concerned with cross-border aspects of organized crime and the consequent need for consistent European-level actions to tackle it (Kleemans et al. 2010; Van Duyne 1995; Lavorgna 2018b). What these interpretations have in common is the fact that organized crime evokes the idea of a bigger interpersonal and social threat compared with non-organized crime: in other words, what individuals can do, organizations can do better (Levi 1998; van Duyne and van Dijk 2007; von Lampe 2008). Organized crime provides an organizational framework allowing offenders to cause greater harm, enabling them to have a different delinquent ambition and consequently a different scale of activity compared with individual offending (Harding 2007). Hence, organized crime is seen as more dangerous to national and international security and more deserving of attention and resources. This theorization has been crystallized in the legal framework of many countries. It gradually entered into a global threat discourse that paved the way for the 2000 UN Convention against Transnational Organized Crime (Palermo Convention) (van Duyne and Nelemans 2012), which is still the international reference frame. The Palermo Convention reflects the heterogeneity found in national legislation dealing with organized crime.
It presents a broad definition of “organized crime” and “criminal organization” (“a structured group of three or more persons, existing for a period of time and acting in concert with the aim of committing one or more serious crimes or offences established in accordance with this Convention, in order to obtain, directly or indirectly, a financial or other material benefit,” art. 2a) with a relatively high threshold, setting a minimum sentence requirement (“‘serious crime’ shall mean conduct constituting an offence punishable by a maximum deprivation of liberty of at least four years or a more serious penalty,” art. 2b). Over the years, organized crime researchers have further dissected and refined the notion of organized crime, identifying three basic dimensions. First, it can be interpreted as a set of activities characterized by a certain level of sophistication and continuity which are often maintained through violence or corruption (hence the focus is on organized crimes). In this perspective, organized crime is generally associated with the provision of illegal goods and services, although evidence shows that it can also involve other activities, particularly in predatory crimes such as fraud and robbery, and that its reach often extends to legal businesses. Second, organized crime can be interpreted as having an associational structure, indicating the presence of stable and structured links between offenders. A criminal organization is generally understood “as a large-scale collective primarily engaged in illegal
activities with a well-defined collective identity and subdivision of work among its members” (Paoli 2002, p. 52). Organized crime groups are thought to be functionally different from other types of offender groups, as organization within a criminal group creates economies of scale, reducing the time needed to search for co-offenders, integrating the functional elements in crimes, and offering reputational benefits (Levi 2014). Finally, organized crime denotes a systemic condition, where the focus is on the concentration of power in the form of an underworld government or alliance between criminals and political and economic elites (von Lampe 2008). The second dimension is particularly important in the context of this chapter, as it is the one that often creates confusion: organized crime is not simply crime that is somehow organized. Rather, organized crime groups are supposed to be functionally different from other types of criminal networks, as organized crime – among other things – creates efficiency in the pursuit of criminal activities and offers reputational benefits (Levi 2014).

The Problem with “Cyber”

When discussing the relationship between organized crime and cybercrime, another preliminary issue to be addressed is that “cybercrime” is too often used as a hodgepodge of different illegal behaviors occurring online. With the blurring of online and offline in our daily lives, it makes less and less sense to think of “cyber” as a unicum that can easily be kept apart from our (“real”) routines in the physical world (Wall 2001; Holt and Bossler 2014; Lavorgna 2019). When discussing the role of organized crime in cyberspace, we need to be careful to distinguish between very different sets of crimes, for two main reasons. First, the presence, attitude, and modus operandi of organized crime can differ significantly, depending on the specific online criminal activity under consideration. Second, many criminal activities listed as cybercrimes may neither meet the legal threshold to be considered organized crime in countries which link organized crime to the seriousness of a certain criminal activity nor be covered by anti-organized crime legislation (Joseph 2015; Leukfeldt et al. 2017a). It is worth remembering that what is considered a cybercrime can differ, depending on the jurisdiction. From an international point of view, the Budapest Convention on Cybercrime (adopted under the aegis of the Council of Europe in November 2001 and coming into force in July 2004) was the first international convention addressing criminal behavior online. The Budapest Convention covers a broad range of crimes and still represents the best international response to cybercrimes currently available. In particular, in its section concerning substantive criminal law, the Convention deals with four main types of criminal activities: offences against the confidentiality, integrity, and availability of computer data and systems (artt. 2–6), computer-related offences (artt. 7 and 8), content-related offences (art.
9), and offences related to infringement of copyright (art. 10). The Additional Protocol of the Convention urges states to criminalize acts of a racist and xenophobic nature committed through computer systems. Therefore, this piece of
legislation has the merit of covering a broad range of online criminal behavior, but it is not comprehensive. For instance, Internet-facilitated trafficking activities (think of a large-scale drug trafficking ring in the Deep Web) are not covered by the Budapest Convention and therefore should not be considered cybercrimes in the strict sense, even if nowadays they are generally considered as such. The Budapest Convention also promotes the adoption of appropriate legislation at a national level, improvement in mutual assistance and investigative techniques, and the fostering of international cooperation (Lavorgna 2019).

Organized Crime in Cyberspace: What Do We Know?

When we consider the role of organized crime in cybercrimes, it is important to keep two very different phenomena separate: (1) organized cybercriminals, who carry out cybercrimes as defined by the law, and (2) “traditional” organized crime groups operating online, who are also active offline and who use the Internet as a crime facilitator (Lavorgna 2015).

Organized Cybercriminals

At the time of writing this chapter, organized cybercriminals have received relatively more attention from researchers than traditional organized crime groups moving online (see, among others, Birk et al. 2007; Wall 2007, 2014; Koops 2011; Broadhurst et al. 2014; Lusthaus 2013; Holt et al. 2016). In particular, scholarly research has studied criminal groups carrying out new types of offenses targeting computer networks – such as malware attacks and hacking, with a focus on financial crime – or committed by means of a computer system or network, such as identity theft, fraud, copyright infringements, and child pornography. While this research topic is global in nature, unfortunately most empirical research has focused on actors based in countries which were part of the former Soviet Union, as many highly skilled offenders allegedly operate from within these countries (Kadlecová 2015; Leukfeldt et al. 2017b), which hinders the generalization of results. From existing research on sophisticated cybercrime networks (which might in effect be compared to organized crime groups), we can infer that organized cybercriminals often meet neither the standard legal definitions of organized crime (as contained, for instance, in the Palermo Convention) nor most academic definitions (Lavorgna 2016; Leukfeldt et al. 2017a), even where the networks operating online are highly harmful (Broadhurst et al. 2014; Hutchings 2014; Holt et al. 2016). In a recent empirical study focusing on phishing and malware attacks affecting the banking and financial sectors, Leukfeldt and colleagues (2017a) investigated specifically to what extent profit-driven cybercrime can be conceptualized as organized crime.
Looked at merely in terms of the composition and structure of the criminal networks, the groups observed met most legal definitions of organized crime, which tend to set very low standards (generally, the presence of a minimum of two or three persons working together over time), although they seldom met the more stringent academic
standards. However, looking at the type of criminal activities carried out, the cybercriminals could not easily be conceptualized as organized crime. Depending on the country where they were prosecuted, most cases analyzed did not meet the legal threshold of a minimum sentence requirement to be considered “serious enough” and therefore labelled as organized crime. Alternatively, the specific type of cybercrime considered was not (yet) recognized as one of the activities covered by anti-organized crime legislation. In addition, the cybercrimes analyzed in the study met only the broader academic definitions, which do not require the presence of a quid pluris (such as corruption, (threat of) violence, attempts to gain or maintain monopoly or control over a particular criminal market, and so on) in order to qualify as organized crime. They would not have met those definitions under which certain crimes might be complex and organized and yet still not count as organized crime if other core characteristics are missing. In addition, when the authors tested the cases analyzed against the conceptualization of organized crime as power/a system of governance (the idea that organized crime seeks a social function through control over production and distribution of a certain commodity in the underworld, protection services, or an alliance with political and economic elites), they found that the element of power was generally difficult to recognize in online networks. Some online criminal networks have a high degree of sophistication and try to regulate or control the production and distribution of products and services (for instance, in online forums, administrators and moderators can offer a certain degree of third-party enforcement over certain transactions and regulate access to criminal markets) (Holt 2013a; Yip et al. 2013).
However, contrary to organized crime in the physical world, there is no effective system of enforcement, no opposition against competitors, and no real control over distribution (Lusthaus 2013). Consider, for instance, the case of a so-called ripper (i.e., someone committing fraud in an online forum by selling invalid stolen credit cards or not delivering the promised service after receiving payment). In order to defend the reputation of a given cybercrime market, forum administrators usually ban rippers (which is a form of enforcement) (Motoyama et al. 2011; Holt 2013b; Yip et al. 2013). Rippers, however, can easily access the same marketplace again using another name or the nickname of someone having a good reputation on another forum (which renders the abovementioned form of enforcement completely ineffective). Hence, some key organized crime characteristics are missing online (Lavorgna 2019). There are other elements concerning organized cybercriminals which generally differ from what we would expect to find in most offline organized crime groups. A first distinguishing factor to consider is recruitment, where both online meeting places and offline social ties are used to enter into contact with potential co-offenders (Leukfeldt et al. 2017c, d). Online, members of cybercriminal networks can meet criminal peers just as easily as noncriminals. They can build relationships with them in a multitude of online convergence settings, but little is known about the involvement mechanisms in a specific cybercriminal group (Leukfeldt et al. 2017b). Due to the lack of research in the area, it is not possible to draw parallels with the recruitment processes of traditional organized crime groups, for instance, with regard to the creation of trust among co-offenders.
In addition, the type of criminal activity carried out also determines the convergence settings of choice and consequently the social opportunity structure which offenders can rely on. In fact, offenders constantly have to deal with a trade-off between concealing the criminal activity from law enforcement and reaching potential customers or criminal peers. Recent research on cybercriminal networks has focused on groups active in the Dark Web (see, among others, Martin 2014; Morselli et al. 2017; Dupont et al. 2017) (see ▶ Chap. 5, “The Dark Web as a Platform for Crime: An Exploration of Illicit Drug, Firearm, CSAM, and Cybercrime Markets”). However, many criminal activities are carried out on the surface web, because offenders prioritize the ease of reaching potential customers over the risk of being caught, considering the size of the environment that law enforcement has to control (Lavorgna 2014a, b; Holt et al. 2016; Hutchings et al. 2016). A second element that should be further investigated in order to better understand potential convergence points between online criminal networks and organized crime is the stability of the composition of a criminal group. It is relatively easy for organized cybercriminals to step in and out of criminal collaboration or to be part of different collaborations at the same time (Leukfeldt et al. 2017b, c, d). Of course, this does not exclude the presence of online criminal networks relying on prolonged interactions between members, for instance, on Internet forums. Where online networks enjoy a certain longevity and stability, they generally have a group of core members over an extended period of time. These members often know each other in the offline world and recruit external affiliates online when needed (both people with specific tech expertise and nonspecialists to be used, for instance, as money mules) (Leukfeldt et al. 2017a, b). Other important aspects concern the degree of professionalism of a criminal network.
Of course, the type of criminal activity at stake can determine huge differences in this regard: some crimes require specific technical skills, while others can be embarked upon by virtually anyone with minimal skills. Research on online market-based activities, for instance, suggests that certain illegal trades in cyberspace (such as some types of drug trafficking) are carried out by offenders without previous criminal experience (Lavorgna 2014a, b). In addition to trafficking carried out in a professional or semiprofessional way by loose groups of full-timers, for whom the trafficking activity is their main source of income, there are people legitimately involved in legal activities who have decided to increase their profit by amateurishly embracing the new criminal opportunities provided by cyberspace. Many actors involved in online financial crime also operate in an occasional or semiprofessional way, generally exploiting specific opportunities encountered through their legitimate employment (for instance, people employed in the ICT sector or in a bank) (Leukfeldt et al. 2017a). There are many connections between legal and illegal services offered online and between online and offline activities (Bijlenga and Kleemans 2017). Switching from legal to illegal businesses, and connections between the upper- and underworlds, are common in the criminal careers of certain organized crime groups offline (Kleemans and De Poot 2008). Further research should explore these aspects with regard to different types of cybercrimes. We should not forget that offline convergence settings also have a pivotal role for online criminals, as cybercrimes always have an offline and local dimension. After all,
cybercrime is not something detached from us which only takes place in the virtual world. Recent research has shown that in many sale frauds, the victims and the perpetrators are in the same country (Levi et al. 2017). Members of major global cybercriminal networks may originate from the same offline social clusters (Leukfeldt et al. 2017b). For instance, Lusthaus and Varese (2017) studied Romanian groups in which large numbers of individuals professionally geared toward online fraud were concentrated in particular neighborhoods. They were often also involved in offline activities, such as loan sharking and extortion, and had links with politicians and law enforcement authorities. These groups in particular displayed organized crime-like features. That said, such cyber-organized crime groups were formed under very specific local circumstances, and we should be careful not to universalize their features to the majority of criminal actors operating in cyberspace (Lavorgna 2019). For instance, previous research has shown that, contrary to most traditional organized crime groups, online networks tend to take full advantage of the Internet’s anonymity. Their online reputation is unrelated to their physical one, and consequently their connection with their territory of origin (while potentially relevant to the initial formation of the group) is less instrumental in the perpetration of their criminal activities (Lavorgna 2015). Overall, studies on the local dimensions of cybercrime remain scarce and mostly focused on specific geographical locations. We should therefore be careful in generalizing or drawing conclusions in the absence of further empirical research.

“Traditional” Organized Crime Groups Operating Online

An increasing number of policymakers, research reports, and newspapers have noted that organized crime has moved online (among many others, Home Office 2010; McGuire 2012; Europol 2017), but many of these claims are not supported by strong evidence (Lavorgna 2016; Leukfeldt et al. 2017a). Even if it makes sense to think that the criminal opportunities provided by the Internet are attractive for traditional organized crime groups (especially for those operating large-scale trafficking activities, as the borderless cyberspace can reduce distances to nothing, linking sellers and buyers operating at the antipodes), it is not self-evident whether these groups have the characteristics or abilities to exploit such opportunities (Brenner 2002; McCusker 2006; Lavorgna 2015). Unfortunately, there is still little research addressing the rate of criminal adaptability of traditional organized crime when it comes to use of the Internet, with anecdotal evidence giving mixed results. Of course, we can take it as a given that traditional organized crime groups also somehow use the Internet in the course of their routines and activities: VoIP services are not only economically convenient but also increase the possibilities of anonymous communication; online auctions, e-banking services, and online gambling provide easy ways to launder the illicit proceeds of crime; and so on (Morris 2004; Giacopassi and Pitts 2009; Lavorgna and Sergi 2014). While not all types of organized crime groups have significantly changed their modus operandi because of cyberspace, by relying on different socioeconomic opportunity structures, the various groups have indeed different capacities of adaptation to the new possibilities and challenges provided by the Internet (Lavorgna 2015).
Organized crime manifests itself in diverse ways, depending on the characteristics of the territory in which it operates. When it comes to online presence, groups tend to behave differently. The involvement of different types of organized crime groups in cyberspace has been explored in Italy by Lavorgna and Sergi (2014). This study suggests that mafia-type groups, operating in their traditional territories, are not significantly exploiting cyberspace. A plausible explanation is that the social opportunity structure they rely on does not accord well with Internet usage. Apparently, it still works sufficiently well that they do not feel the need to make relevant but risky changes. Historically, mafia-type groups have not only been players in criminal markets but have also used violence, intimidation, and a code of silence to prosper in the legitimate economy. They are motivated not only by profit but also by the will to impose their presence on productive activities in their territory of origin or expansion and in the political arena (Paoli 2005). For mafia groups, physical presence in a given territory matters. Regarding their presence in cyberspace, there is evidence of mafia-type groups being active in specific activities, such as online gambling, which is a profitable crime area that can also be exploited for money-laundering purposes. These groups are not only involved in offline clandestine betting, but they also tamper with slot machines which electronically transmit information to the relevant national authorities who manage online casinos for tax purposes. Some mafia-type groups are also active in online trafficking activities, although for the most part, they still seem fairly reluctant to move their business online. Overall, the existing empirical evidence suggests that for most mafia-type groups, cyberspace has not significantly changed the social opportunity structure on which they rely (Lavorgna 2015).
Other types of organized crime groups seem more prone to invest in online opportunities. For instance, business-like groups (those comparable to commercial businesses, generally focused on profitable trafficking activities) tend to be more active online (Lavorgna and Sergi 2014). Business-like groups can vary along a continuum, ranging from long-standing structured criminal networks to looser gangs which might lack clear leadership. These groups tend to use the same opportunity structure that facilitates legal economic activities, operating as opportunistic economic agents: they do not care what they trade, as long as they make a good profit (Naím 2006; Kleemans 2007). Hence, it is not surprising that – at least in theory – this type of criminal group can make the most of online opportunities, as the inherently transnational character of the Internet provides them with the possibility to exploit regulatory vulnerabilities in certain countries and to connect easily with distant criminal peers (Shelley 2003). We may conclude that the Internet certainly represents more than an enhanced communication tool for business-like groups. Cyberspace has opened up new profitable possibilities for criminal markets that were generally considered not very profitable, as the possibility of reaching an indefinite number of potential buyers makes almost any type of criminal market worthwhile for criminals, regardless of the limited monetary value of a single item. In certain cases, it has enabled interactions between offenders and customers at an earlier stage of the trafficking chain, allowing them to bypass international or local intermediaries, thus eliminating
a layer in the social organization of the group. Even more interestingly, cyberspace has also opened up the way for opportunistic local criminals with a potential global reach. If, in the physical world, structured criminal associations might be needed in order to commit, for example, large-scale cross-border crimes, in cyberspace some organizational layers can be dispensed with. Suffice it to say that both payments and product deliveries can be made from a safe distance, through online banking and automated postal services. Consequently, very loose criminal networks (and even individuals or pairs of co-offenders) may be as efficient as sophisticated organized criminal groups (Lavorgna 2014a, b). As we have seen, empirical research on the cyberspace expansion of traditional organized crime groups has so far been relatively limited and geographically bounded. As already suggested by Leukfeldt et al. (2017b), it is of the utmost importance to carry out comparative research on the different ways in which diverse types of organized crime groups use online criminal opportunities. Moreover, further research should also explore how the presence of organized crime in cyberspace changes depending on the local embeddedness of organized crime groups, as they are generally closely linked to the territories in which they operate. It is worth noting that even if the exploitation of cyberspace as a facilitator for traditional organized crime is still an object of debate, other types of offline criminal networks, especially gangs, have been more open in their use of the Internet. For instance, many gangs have a visible presence on social media, which is used both to demonstrate affiliation (for instance, by teasing competing gangs) and to promote a “gang lifestyle,” hence facilitating a wider dissemination of the gang’s values and identity (Morselli and Décary-Hétu 2013; Moule et al. 2014).
Overall, cyberspace does have a role in socializing processes, as it allows the creation of new convergence settings for gang affiliates to interact with outsiders, but there is no evidence that it plays a significant role in gang formation (Sela-Shayovitz 2012; Morselli and Décary-Hétu 2013). Indeed, gang formation is still mostly based at a local level, with members clustering around the same neighborhoods and schools (Sela-Shayovitz 2012; Moule et al. 2014). The involvement of gang members in online delinquency is similar to that of nonmembers (e.g., copyright infringement, gambling, drug sales) (Sela-Shayovitz 2012), even if more recent research suggests that this occurs at a higher rate (Pyrooz et al. 2015). Gangs, especially the most sophisticated ones, seem to be well aware of the dangers of being online. Similar to what may be observed with many organized crime groups, gangs know that law enforcement monitors their activities, and fear of discovery seems to inhibit their willingness to use the Internet for gang-related purposes (Moule et al. 2014).

Framing the Cyber-Organized Crime Narrative: cui prodest?

The previous sections have asserted that the pairing of organized crime and cybercrime is not straightforward. In the absence of further empirical evidence, terminological rigor requires caution in employing the “organized cybercriminals” label to describe criminal networks operating in cyberspace or in assuming that organized
crime groups are now just a click away. If this is the case, it is worth briefly investigating how and why the “cyber-organized crime” narrative has made inroads. Document analysis carried out on official outputs of European and international organizations and policymakers shows that cybercrime has been paired with organized crime in order to make the case for effective crime repression and specifically to justify the prioritization and expansion of intelligence and law enforcement activities in the domain of counter-organized crime efforts. The perceived degree of danger of serious forms of cybercrime is often established by linking the criminal activity to the presumed presence of highly organized criminal groups, whose structure and professionalization somehow explain the seriousness of said criminal activity. In the outputs analyzed, however, the terminology employed included a certain amount of ambiguity, with the threshold for organized crime being lowered to the greatest possible extent (Lavorgna 2016). In addition, it is worth noting that in the documents analyzed, the “cyber-organized crime” rhetoric was not based on strong empirical data or independent, peer-reviewed academic research. Rather, it was possible to observe a mechanism of cross-fertilization of sources and references, in which the outputs of policymakers and security companies referenced each other. What was considered merely a risk in earlier publications (i.e., the presence of organized crime online) mysteriously became a fact in later publications through a shift from modal verbs (may/can) to assertive verbs (is/have), without new data clearly substantiating this change in language. Lavorgna and Sergi (2016) expanded this analysis by carrying out a discourse analysis of policymaking documents in the United Kingdom, in order to assess whether and to what extent the cyber-organized crime narrative had been developing in the country.
The United Kingdom offers an interesting case study, as it is a country where the conceptualization of organized crime has evolved over the years in a process dominated by a juxtaposition of the concepts of “serious” and “organized,” according to which, if a crime is “serious enough,” it must be organized. This juxtaposition has slowly led to the characterization of organized crime as a national security issue (Sergi 2016). The hypothesis addressed by Lavorgna and Sergi was that cybercrime was becoming organized in the policy narrative because of its seriousness, which in turn justified its inclusion within the national security agenda. The study showed that, over the last few years, an inverted parallelism between the characterization of organized crime as a serious threat to national security and the developing characterization of cybercrime as serious crime too (and therefore organized “by default”) has been triggering the securitization of cybercrime in the country. This has important consequences in terms of policing powers, approaches, and resource allocation. For instance, when cybercrime is equated with organized crime, law enforcement will be called on to act with a high policing approach, typical of responses to national security threats in the United Kingdom. Specifically, law enforcement should act using the preferred tools of disruption and prevention, rather than common policing strategies (including victim support), which might at times be more appropriate. This high policing approach carries an enhanced focus on the perpetrators and the crime type, whereas issues such as victimization, social network relations, and social harm at a local level might struggle to come to the surface and be properly addressed.

6

Organized Crime and Cybercrime

129

Finally, this path of inquiry was furthered by Lavorgna (2018a), who investigated whether the advancement of the cyber-organized crime narrative could also be identified in the media discourse in the United Kingdom. The media is a fundamental actor in understanding policy developments, as the everyday interpretations of crime and public perceptions of control policies depend on what people learn about them through the media. The study demonstrated how the media often succumb to the temptation of using the adjective “organized” to emphasize the seriousness of certain cybercrimes in a way that is not substantiated by research and that leaves unchallenged the framing of a crime control discourse by primary definers (namely, high-level policymakers and cybersecurity companies or consultants) with vested interests (see ▶ Chap. 7, “Cybersecurity as an Industry: A Cyber Threat Intelligence Perspective”). These primary definers, in fact, have both hidden and not-so-hidden agendas in sensationalizing cyber-threats and effectively use media hyperboles to rally the support of society beyond their specific scope (in a way that, in line with van Duyne (2011), can be interpreted as “fear management” creating gullibility). In their search for the (newsworthy) production of information, the media seem to have lost the front line as primary definers, leaving the main voice in framing reality to others (Simon and Feeley 2013) and increasingly accepting the narrative of those “governing through crime” (Simon 2007). The analyses presented in this section again have limited geographical scope, and further analyses on the topic would be welcomed. 
Nonetheless, the studies suggest that behind the inroads of the cyber-organized crime narrative there may be agendas seeking to advance the securitization of organized crime (by treating cybercrime as part of an exponential growth of organized crime activities and therefore as a basis for the prioritization and expansion of transnational counter-organized crime efforts; see Carrapico 2014). The aim may be to produce consensus around increased resources, domestic powers, and international cooperation in policymaking, to sell cybersecurity products, or simply to make items more newsworthy, at the risk of increasing analytical ambiguities. After all, as already stressed by Cohen (1980), if it is plausible that a certain event will become a major threat (in our case, the significant presence of organized crime online, scaling up the perceived danger and sophistication of many forms of cybercrime), the control culture we are part of mobilizes in advance, anticipating potential events to justify increased repression or more pervasive security measures.

Conclusion

Addressing the relationship between cybercrime and organized crime forces us to rethink the conceptualization of organized crime in cyberspace and to ponder whether our current criminological paradigms can properly capture emerging trends in the criminal scenario. It is worth remembering that organized crime is criminalized separately in many countries because the danger it poses goes beyond the risks posed by individual offenders or occasional criminal cooperation and represents an actual or potential threat to the social order (Fijnaut and Paoli 2006).

130

A. Lavorgna

In cyberspace, this distinction becomes problematic, as individuals or loose associations can be as dangerous as organized crime (Lavorgna 2015). Being critical about the use of organized crime rhetoric in cyberspace is not to deny the presence of certain organized groups online or the seriousness of many forms of cybercrime. After all, regardless of the label attributed to the offenders involved, many cybercrimes cause harm to both concrete victims and society as a whole and adversely affect social control (Leukfeldt et al. 2017a). In many cases, however, defining the criminal networks we find in cyberspace as organized crime is a stretch and may be guided by analytical ambiguities, if not by vested interests. Existing empirical evidence suggests, in fact, that alongside some organized criminal groups, most criminal groups operating online are formed of loose and transient networks of relationships. This evidence, however, is still fairly limited and geographically bounded, with a strong European/Anglo-Saxon focus. All the actors involved in framing emerging security issues online should be careful and rigorous in the terminology they employ. Law enforcement agencies, cybersecurity companies, the media, and academics in particular need to be careful not to assume the organized crime narrative when addressing serious crime online. The risk, otherwise, is to overlook actors in the criminal process who could be equally efficient and harmful in carrying out their activities and to shift attention and resources without serious reflection on how to deal with emerging and evolving challenges in an effective and efficient way. As Leukfeldt and colleagues (2017a) have already suggested, it remains to be seen whether and to what extent it is worth labelling certain cybercrimes as organized crime in order to give law enforcement enhanced investigative powers.
An example of this is the use of “cyber-organized crime” rhetoric by senior UK policymakers to push the adoption of a new bill (the so-called Snooper’s Charter), which would have, among other things, required communication service providers to retain the Internet connection records of their users (including browsing activities) and would have expanded the powers of the UK intelligence community by allowing bulk interception of communications and data (Lavorgna 2018a). Alternatively, better solutions could allow proper investigation and prosecution of serious crimes online without the need to rely on anti-organized crime regulatory and investigatory frameworks, which are not necessarily better suited. It is already known that one of the challenges in fighting cybercrime is that modern policing institutions have been designed to deal effectively with low-volume, high-impact crime. Cybercrimes, on the other hand, are often high-volume and low-impact but might cause large aggregated losses (Wall 2007; Dupont 2017). Many police units are already overwhelmed by traditional crime and might not be willing to use their scarce resources to investigate a criminal incident they perceive as relatively minor (Burns et al. 2004; Grabosky 2016; Leppänen and Kankaanranta 2017; Lavorgna 2019). Hence, rather than pushing the pairing between cyber and organized crime, it might be more effective to recognize the peculiarities of crime in cyberspace and to better resource and train the majority of (local and nonspecialized) police units. It has been extensively reported that law enforcement officers continue to lack adequate personnel, training, and equipment to investigate cybercrimes, especially in local law enforcement agencies with small jurisdictions (which are the natural contact point for the general public) or those located in poorer countries (Burns et al. 2004; Davis 2012; Holt and Bossler 2015; Willits and Nowacki 2016; Lavorgna 2018a, 2019).

Of course, the criminal panorama presented in this chapter is not fixed but evolves in response to societal changes. The social organization of criminals evolves in response to the constantly changing issues and contexts they have to face. In the Internet era, these changes are particularly quick and radical. Further research is needed on the different criminal networks active online and their modus operandi, especially networks operating in (linguistic and criminal) contexts overlooked so far, at least to test the few empirical findings currently available. Furthermore, the empirical evidence we have now is destined to be surpassed in the near future, and some of the information presented in this chapter will likely need to be updated soon. In the meantime, it is important to maintain a critical attitude in assessing emerging narratives as they develop, especially in the context of increased securitization and the potential erosion of the delicate equilibria among privacy, security, and surveillance.

References

Ashby, M. P. J. (2016). Is metal theft committed by organized crime groups, and why does it matter? Criminology and Criminal Justice, 16(2), 141–157.
Bijlenga, N., & Kleemans, E. R. (2017). Criminals seeking ICT-expertise: An exploratory study of Dutch cases. European Journal on Criminal Policy and Research, 24(3), 253–268.
Birk, D., Gajek, S., Grobert, F., & Sadeghi, A. R. (2007). Phishing phishers. Observing and tracing organized cybercrime. In Second International Conference on Internet Monitoring and Protection (ICIMP). Silicon Valley, CA. https://doi.org/10.1109/ICIMP.2007.33.
Brenner, S. W. (2002). Organized cybercrime? How cyberspace may affect the structure of criminal relationships. North Carolina Journal of Law and Technology, 4(1), 1–50.
Broadhurst, R., Grabosky, P., Alazab, M., et al. (2014). Organizations and cybercrime. International Journal of Cyber Criminology, 8(1), 1–20.
Burns, R. G., Whitworth, K. H., & Thompson, C. Y. (2004). Assessing law enforcement preparedness to address Internet fraud. Journal of Criminal Justice, 32, 477–493.
Carrapico, H. (2014). Analysing the European Union’s responses to organized crime through different securitization lenses. European Security, 23(4), 601–661.
Carrapico, H., & Lavorgna, A. (2015). Editorial. Special issue: Space oddity? Exploring organised crime ventures in cyber space. The European Review of Organised Crime, 2(2), 1–5.
Choo, K. K. R., & Smith, R. G. (2007). Criminal exploitation of online systems by organised crime groups. Asian Journal of Criminology, 3(1), 37–59.
Cohen, S. (1980). Folk devils and moral panics (2nd ed.). Oxford: Martin Robertson.
Davis, J. T. (2012). Examining perceptions of local law enforcement in the fight against crimes with a cyber component. Policing: An International Journal of Police Strategies & Management, 35(2), 272–284.
Dupont, B. (2017). Bots, cops, and corporations: On the limits of enforcement and the promise of polycentric regulation as a way to control large-scale cybercrime. Crime, Law and Social Change, 67(1), 9–116.
Dupont, B., Côté, A. M., Boutin, J. I., & Fernandez, J. (2017). Darkode: Recruitment patterns and transactional features of ‘the most dangerous cybercrime forum in the world’. American Behavioral Scientist, 61, 1219. (Online first).
Europol. (2017). Internet facilitated organized crime (IOCTA). The Hague: European Police Office.


Fijnaut, C. J. C., & Paoli, L. (2006). Organised crime and its control policies. European Journal of Crime, Criminal Law and Criminal Justice, 14(3), 307–327.
Giacopassi, D., & Pitts, W. J. (2009). Internet gambling: The birth of a victimless crime? In F. Schmalleger & M. Pittaro (Eds.), Crimes of the Internet (pp. 417–437). Upper Saddle River: Prentice Hall.
Grabosky, P. (2007). The Internet, technology, and organized crime. Asian Criminology, 2, 145–161.
Grabosky, P. (2016). Cybercrime. Oxford: Oxford University Press.
Harding, C. (2007). Criminal enterprise. Individuals, organisations and criminal responsibility. Devon: Willan Publishing.
Holt, T. J. (2013a). Exploring the social organisation and structure of stolen data markets. Global Crime, 14(2–3), 155–174.
Holt, T. J. (2013b). Examining the forces shaping cybercrime markets online. Social Science Computer Review, 31(2), 165–177.
Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35(1), 20–40.
Holt, T. J., & Bossler, A. M. (2015). Cybercrime in progress. Theory and prevention of technology-enabled offences. New York: Routledge.
Holt, T. J., Smirnova, O., & Chua, Y. T. (2016). Data thieves in action. Examining the international market for stolen personal information. New York: Palgrave.
Home Office. (2010). Cyber crime strategy. London: The Stationery Office.
Hutchings, A. (2014). Crime from the keyboard: Organised cybercrime, co-offending, initiation and knowledge transmission. Crime, Law and Social Change, 62(1), 1–20.
Hutchings, A., Clayton, R., & Anderson, R. (2016). Taking down websites to prevent crime. eCrime Researchers Summit, 102–111.
Joseph, S. W. (2015). Dismantling the Internet mafia: RICO’s applicability to cyber crime. Rutgers Computer and Technology Law Journal, 41, 268–297.
Kadlecová, L. (2015). Russian-speaking cyber crime: Reasons behind its success. The European Review of Organised Crime, 2(2), 104–121.
Kleemans, E. R. (2007). Organized crime, transit crime, and racketeering. Crime & Justice, 35(1), 163–215.
Kleemans, E. R., & De Poot, C. (2008). Criminal careers in organized crime and social opportunity structure. European Journal of Criminology, 5(1), 69–98.
Kleemans, E. R., Soudjin, M. R. J., & Weenink, A. W. (2010). Situational crime prevention and cross-border crime. In K. Bullock, R. V. Clarke, & N. Tilley (Eds.), Situational prevention of organised crimes. Devon: Willan Publishing.
Koops, B. J. (2011). The Internet and its opportunities for cybercrime (Research paper series no. 09/2011). Tilburg Institute for Law, Technology, and Society (TILT), Tilburg Law School Legal Studies.
Lavorgna, A. (2014a). Internet-mediated drug trafficking: Towards a better understanding of new criminal dynamics. Trends in Organized Crime, 17(4), 250–270.
Lavorgna, A. (2014b). Wildlife trafficking in the Internet age: The changing structure of criminal opportunities. Crime Science, 3(5), 1–12.
Lavorgna, A. (2015). Organised crime goes online: Realities and challenges. Journal of Money Laundering Control, 18(2), 153–168.
Lavorgna, A. (2016). Exploring the cyber-organised crime narrative: The hunt for a new bogeyman? In P. C. van Duyne, M. Scheinost, G. A. Antonopoulos, J. Harvey, & K. von Lampe (Eds.), Narratives on organised crime in Europe. Criminals, corrupters and policy. Nijmegen: Wolf Legal Publishers.
Lavorgna, A. (2018a). Cyber-organised crime. A case of moral panic? Trends in Organized Crime. (Online first).
Lavorgna, A. (2018b). Analysis and prevention of organised crime. In R. Wortley, A. Sidebottom, N. Tilley, & G. Laycock (Eds.), The handbook of crime science. London: Routledge.
Lavorgna, A. (2019). Cybercrimes: Critical issues in a global context. London: Palgrave Macmillan.


Lavorgna, A., & Sergi, A. (2014). Types of organized crime in Italy. The multifaceted spectrum of Italian criminal associations and their different attitudes in the financial crisis and in the use of Internet technologies. International Journal of Law, Crime & Justice, 42(1), 16–32.
Lavorgna, A., & Sergi, A. (2016). Serious, therefore organised? A critique of the emerging “cyber-organised crime” rhetoric in the United Kingdom. International Journal of Cyber Criminology, 10(2), 170–187.
Leppänen, A., & Kankaanranta, T. (2017). Cybercrime investigation in Finland. Journal of Scandinavian Studies in Criminology and Crime Prevention, 18(2), 157–175.
Leukfeldt, R. (2015). Organised cybercrime and social opportunity structures: A proposal for future research directions. The European Review of Organised Crime, 2(2), 91–103.
Leukfeldt, R., Lavorgna, A., & Kleemans, E. (2017a). Organised cybercrime or cybercrime that is organised? An assessment of the conceptualisation of financial cybercrime as organised crime. European Journal on Criminal Policy and Research, 23(3), 287–300.
Leukfeldt, R., de Poot, C., Verhoeven, M., Kleemans, E., & Lavorgna, A. (2017b). Cybercriminal networks. In R. Leukfeldt (Ed.), The human factor in cybercrime and cybersecurity. The Hague: Eleven Publishing.
Leukfeldt, E. R., Kleemans, E. R., & Stol, W. P. (2017c). Cybercriminal networks, social ties and online forums: Social ties versus digital ties within phishing and malware networks. British Journal of Criminology, 3(1), 704–722.
Leukfeldt, E. R., Kleemans, E. R., & Stol, W. P. (2017d). A typology of cybercriminal networks: From low tech locals to high tech specialists. Crime, Law and Social Change, 67(1), 39–53.
Levi, M. (1998). Perspectives on “organized crime”: An overview. Howard Journal of Criminal Justice, 37, 335–345.
Levi, M. (2014). Thinking about organised crime. Structure and threat. The RUSI Journal, 159(1), 6–14.
Levi, M., Doig, A., Gundur, R., Wall, D., & Williams, M. (2017). Cyberfraud and the implications for effective risk-based responses: Themes from UK research. Crime, Law and Social Change, 67(1), 77–96.
Lusthaus, J. (2013). How organised is organised cybercrime? Global Crime, 14(1), 52–60.
Lusthaus, J., & Varese, F. (2017). Offline and local: The hidden face of cybercrime. Policing: A Journal of Policy and Practice. (Online first).
Martin, J. (2014). Drugs on the dark net. How cryptomarkets are transforming the global trade in illicit drugs. London: Palgrave.
McCusker, R. (2006). Transnational organized cyber crime: Distinguishing threat from reality. Crime, Law and Social Change, 46(4), 257–273.
McCusker, R. (2011). Organised cybercrime: Myth or reality, malignant or benign? In S. Manacorda (Ed.), Cybercriminality: Finding a balance between freedom and security. Conference proceedings. Selected papers and contributions from the international conference on “Cybercrime: Global phenomenon and its challenges”, Courmayeur Mont Blanc, Italy, 2–4 December 2011 (pp. 107–119). Milan: ISPAC.
McGuire, M. (2012). Organised crime in the digital age. London: John Grieve Centre for Policing and Security and BAE Systems Detica.
Morris, S. (2004). The future of netcrime now: Threats and challenges (Part 1. Home Office online report). London: Home Office Research, Development and Statistics Directorate.
Morselli, C., & Décary-Hétu, D. (2013). Crime facilitation purposes of social networking sites: A review and analysis of the “cyberbanging” phenomenon. Small Wars and Insurgencies, 24(1), 152–170.
Morselli, C., Décary-Hétu, D., Paquet-Clouston, M., & Aldridge, J. (2017). Conflict management in illicit drug cryptomarkets. International Criminal Justice Review, 27(4), 237–254.
Motoyama, M., McCoy, D., Levchenko, K., Savage, S., & Voelker, G. M. (2011). An analysis of underground forums. In 2011 Internet measurement conference (pp. 71–79). Berlin.
Moule, R. K., Jr., Pyrooz, D. C., & Decker, S. H. (2014). Internet adoption and online behaviour among American street gangs. British Journal of Criminology, 54, 1186–1206.
Naím, M. (2006). Illicit. How smugglers, traffickers, and copycats are hijacking the global economy. New York: Anchor Books.


Paoli, L. (2002). The paradoxes of organized crime. Crime, Law and Social Change, 37(1), 51–97.
Paoli, L. (2005). Italian organised crime: Mafia associations and criminal enterprises. In M. Galeotti (Ed.), Global crime today. The changing face of organized crime (pp. 19–31). Abingdon: Routledge.
Pyrooz, D. C., Decker, S. H., & Moule, R. K., Jr. (2015). Criminal and routine activities in online settings: Gangs, offenders, and the Internet. Justice Quarterly, 32(3), 471–499.
Sela-Shayovitz, R. (2012). Gangs and the web: Gang members’ online behavior. Journal of Contemporary Criminal Justice, 28(4), 389–405.
Sergi, A. (2016). National security vs criminal law. Perspectives, doubts and concerns on the criminalisation of organised crime in England and Wales. European Journal on Criminal Policy and Research, 22(4), 713–729.
Shelley, L. (2003). Organized crime, terrorism and cybercrime. In A. Bryden & P. Fluri (Eds.), Security sector reform: Institutions, society and good governance. Baden-Baden: Nomos Verlagsgesellschaft.
Simon, J. (2007). Governing through crime. How the war on crime transformed American democracy and created a culture of fear. Oxford: Oxford University Press.
Simon, J., & Feeley, M. (2013). Folk devils and moral panics: An appreciation from North America. In D. Downes, P. Rock, C. Chinkin, & C. Gearty (Eds.), Crime, social control and human rights. From moral panics to states of denial, essays in honour of Stanley Cohen. London: Willan.
Tropina, T. (2013). Organized crime in cyberspace. In S. Heinrich-Boell & R. Schönenberg (Eds.), Transnational organized crime: Analyses of a global challenge to democracy. Bielefeld: Transcript-Verlag.
van Duyne, P. C. (1995). The phantom and threat of organized crime. Crime, Law and Social Change, 24(4), 341–377.
van Duyne, P. C. (2011). (Transnational) organised crime, laundering and the congregation of the gullible. Tilburg University, 14 March. http://www.cross-border-crime.net/index.php?page=Free%20Downloads
van Duyne, P. C., & Nelemans, M. D. H. (2012). Transnational organized crime: Thinking in and out of Plato’s Cave. In A. Felia & S. Gilmour (Eds.), Routledge handbook of transnational organized crime (pp. 36–51). London: Routledge.
van Duyne, P. C., & van Dijk, M. (2007). Assessing organised crime: The sad state of an impossible art. In F. Bovenkerk & M. Levi (Eds.), The organized crime community. Essays in honor of Alan A. Block. New York: Springer.
van Duyne, P. C., & Vander Beken, T. (2009). The incantations of the EU organised crime policy making. Crime, Law and Social Change, 51(2), 61–281.
von Lampe, K. (2008). Organized crime in Europe: Conceptions and realities. Policing, 2(1), 7–17.
Wall, D. S. (2001). Cybercrimes and the Internet. In D. S. Wall (Ed.), Crime and the Internet. New York: Routledge.
Wall, D. S. (2007). Cybercrime: The transformation of crime in the information age. Cambridge: Polity.
Wall, D. S. (2008). Cybercrime, media and insecurity: The shaping of public perceptions of cybercrime. International Review of Law, Computers & Technology, 22(1–2), 45–63.
Wall, D. S. (2014). Internet mafias? The dis-organisation of crime and the Internet. In S. Caneppele & F. Calderoni (Eds.), Organised crime, corruption and crime prevention (pp. 227–239). London: Springer.
Wall, D. (2015). Dis-organised crime: Towards a distributed model of the organisation of cybercrime. The European Review of Organised Crime, 2(2), 71–90.
Williams, Ph. (2001). Organized crime and cybercrime: Synergies, trends, and responses. Arresting Transnational Crime. An Electronic Journal of the U.S. Department of State, 6(2).
Willits, D., & Nowacki, J. (2016). The use of specialized cybercrime policing units: An organizational analysis. Criminal Justice Studies, 29(2), 105–124.
Yip, M., Webber, C., & Shadbolt, N. (2013). Trust among cybercriminals? Carding forums, uncertainty and implications for policing. Policing and Society, 23(4), 516–539.

7

Cybersecurity as an Industry: A Cyber Threat Intelligence Perspective

Sagar Samtani, Maggie Abate, Victor Benjamin, and Weifeng Li

Contents

Introduction: Cybersecurity Significance and Cyber Threat Intelligence (CTI) Overview . . . . 136
Background of Cyber Threat Intelligence (CTI) Platforms . . . . 138
Common Cyber Threat Intelligence (CTI) Data Sources . . . . 138
Popular Cyber Threat Intelligence (CTI) Analytics . . . . 140
Operational Intelligence: Visualization, Report Generation, and Intelligence Dissemination . . . . 141
Review of Cyber Threat Intelligence (CTI) Platforms . . . . 142
General Observations: Industry Age, Locations, Revenue Streams, and Public Standing . . . . 142
Data Collection Efforts . . . . 143
CTI Analytics Efforts . . . . 145
Operational Intelligence Efforts . . . . 146
Existing Gaps Within CTI Platform Landscape and Potential Opportunities . . . . 147
Shift from Reactive to Proactive OSINT-Based CTI Platforms . . . . 147
Enhancement of Natural Language Processing (NLP) and Text Mining Capabilities . . . . 148
Enhancement of Data Mining Capabilities . . . . 149
Further Integration of Big Data and Cloud Computing Technologies . . . . 149
Opportunities and Strategies for Academia to Address Identified Gaps . . . . 150
Conclusion . . . . 152
References . . . . 153

S. Samtani
Department of Information Systems and Decision Sciences, Muma College of Business, University of South Florida, Tampa, FL, USA
e-mail: [email protected]

M. Abate
WellCare Health Plans Inc., Tampa, FL, USA
e-mail: [email protected]

V. Benjamin (*)
Department of Management Information Systems, W.P. Carey School of Business, Arizona State University, Phoenix, AZ, USA
e-mail: [email protected]

W. Li
Department of Management Information Systems, Terry College of Business, University of Georgia, Athens, GA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_8

Abstract

The rapid integration of information technology has been met with an alarming rate of cyber-attacks conducted by malicious hackers using sophisticated exploits. Many organizations have aimed to develop timely, relevant, and actionable cyber threat intelligence (CTI) about emerging threats and key threat actors to enable effective cybersecurity decisions. To streamline and create efficient and effective CTI capabilities, many major cybersecurity companies such as FireEye, Anomali, ThreatConnect, McAfee, CyLance, ZeroFox, and numerous others have aimed to develop CTI platforms, enabling an unprecedented ability to prioritize threats; pinpoint key threat actors; understand their tactics, techniques, and procedures (TTPs); deploy appropriate security controls; and, ultimately, improve overall cybersecurity hygiene. Given the significant benefits of such platforms, our objective for this chapter is to provide a systematic review of existing CTI platforms within industry today. Such a review can offer significant value to academics across multiple disciplines (e.g., sociology, computational linguistics, computer science, information systems, and information science) and to industry professionals across public and private sectors. Our systematic review of existing CTI platforms identified five directions for future work: (1) a shift from reactive to proactive OSINT-based CTI platforms, (2) enhancement of natural language processing (NLP) and text mining capabilities, (3) enhancement of data mining capabilities, (4) further integration of big data and cloud computing technologies, and (5) opportunities and strategies for academia to address identified gaps.

Keywords

Cyber threat intelligence · Platforms · Data sources · Data mining

Introduction: Cybersecurity Significance and Cyber Threat Intelligence (CTI) Overview

Computing technology has provided modern society with innumerable and unprecedented benefits. Many organizations across private and public sectors employ complex information systems (IS) to maintain critical infrastructure, execute financial transactions, maintain health records, and conduct many other day-to-day activities. Unfortunately, the rapid integration of IS has been met with an alarming rate of cyberattacks conducted by malicious hackers using sophisticated exploits. Cybersecurity experts have appraised the total cost of hacking activities against major entities such as Target, Home Depot, Marriott, Equifax, Uber, and Yahoo! at $450 billion annually (Graham 2017). To combat this major societal and global issue, many organizations have aimed to develop timely, relevant, and actionable intelligence about emerging threats and key threat actors to enable effective cybersecurity decisions. This process, referred to as cyber threat intelligence (CTI), has quickly emerged as a key aspect of cybersecurity.

CTI is fundamentally a data-driven process. Similar to other data analysis procedures, organizations first define their intelligence needs by examining the existing threat landscape, monitoring their cyber assets, and modelling possible attack vectors. This information guides data collection from Intrusion Detection and Prevention Systems (IDS/IPS) and from database, firewall, and server log files. Well-refined analytics such as malware analysis, event correlation, and forensics are utilized to derive the intelligence CTI professionals need to deploy appropriate security controls (e.g., two-factor authentication, malware signatures for antiviruses, form sanitization) and to develop more robust cyber defenses (Friedman 2015; Kime 2016; Shackleford 2016). Figure 1 illustrates the four phases of the general CTI lifecycle.

To streamline and create efficient and effective CTI capabilities, many major cybersecurity companies such as FireEye, Anomali, ThreatConnect, McAfee, CyLance, ZeroFox, and numerous others have aimed to develop CTI platforms.
Organizations can purchase such platforms to improve their overall cybersecurity posture: CTI platforms enable an unprecedented ability to prioritize threats; pinpoint key threat actors; understand their tactics, techniques, and procedures (TTPs); deploy appropriate security controls; and, ultimately, improve overall cybersecurity hygiene. Such benefits have led Anomali, the vendor of the internationally renowned CTI platform ThreatStream, to remark: "An effective Threat Intelligence Platform can enable analysts to determine patterns of malicious behavior learned from previous events to better address future attacks" (Anomali 2017).

Fig. 1 General cyber threat intelligence (CTI) lifecycle. The four phases shown in the figure are:

Phase 1: Intelligence Planning/Strategy. Description: Identify intelligence needs of the organization, critical assets, and their vulnerabilities. Approaches: threat trending, vulnerability assessments, asset discovery, diamond modelling.

Phase 2: Data Collection and Aggregation. Description: Identify and collect relevant data for threat analytics. Data sources: internal network data, external threat feeds, OSINT, human intelligence.

Phase 3: Threat Analytics. Description: Analyze collected data to develop relevant, timely, and actionable intelligence. Approaches: malware analysis, event correlation, visualizations.

Phase 4: Intelligence Usage and Dissemination. Description: Mitigate threats and disseminate intelligence. Approaches: manual and automated threat responses, intelligence communication standards (e.g., STIX).

138

S. Samtani et al.

Given the significant benefits of such platforms, our objective for this chapter is to provide a systematic review of existing CTI platforms within industry today. Such a review can offer significant value to academics across multiple disciplines (e.g., sociology, computational linguistics, computer science, information systems, information science, etc.) and to industry professionals across public and private sectors. For example, academics can use this review as a basis for pursuing novel, high-impact research inquiries that can advance future CTI platforms. Similarly, it can help provide grounding and justification for pursuing several federal funding opportunities that support novel CTI initiatives of significant interest to academia, government, and industry. Further, security practitioners can form a better understanding of how CTI platforms serve as useful aggregators of intelligence sourced from ISACs (Information Sharing and Analysis Centers), various data feeds (both paid and unpaid), network and system traffic, and more.

To achieve our objective, we organize this book chapter as follows: First, we provide a detailed background of CTI by summarizing the structure of CTI platforms, including common data sources, popular analytics, and operational intelligence (e.g., visualization, reporting capabilities). Second, we provide a systematic review of existing CTI industry platforms within the marketplace today. Third, we summarize existing gaps and offer suggestions on how these gaps can be addressed by academics and industry professionals. Finally, we summarize our overall findings and conclude this chapter.

Background of Cyber Threat Intelligence (CTI) Platforms

As indicated in the introduction, CTI is fundamentally a data analytics-oriented process. While numerous CTI platform vendors exist (detailed in section "Review of Cyber Threat Intelligence (CTI) Platforms"), CTI platforms generally have three major components: data collection, analytics, and operational intelligence. In the following subsections, we describe common CTI data sources, popular CTI analytics, and the common strategies for creating, organizing, and disseminating operational intelligence created from CTI analytics procedures.

Common Cyber Threat Intelligence (CTI) Data Sources

In general, five major categories of CTI data sources exist: (1) Open Source Intelligence (OSINT), (2) Internal Intelligence, (3) Human Intelligence (HUMINT), (4) Counter Intelligence, and (5) Finished Intelligence (FINTEL). Other data sources include Signal Intelligence (SIGINT), Imagery Intelligence (IMAGINT), Geospatial Intelligence (GEOINT), and Signature Intelligence. However, these are more common in military applications than in CTI; thus, they are omitted from our review. Table 1 briefly summarizes each common data source, whether it is internal or external to an organization, several examples of each data source, and the value each data source provides.


Table 1 Common cyber threat intelligence (CTI) data sources used for analytics

| CTI data source | Brief description | Internal or external? | Examples | Value |
| Open Source Intelligence (OSINT) | Data that can be collected from the Internet or from other CTI companies | External | Vulnerability/exploit feeds, social media, Darknet data, public statements, threat feeds | Provides a comprehensive view of an organization's external threat landscape |
| Internal Intelligence | Data collected from an organization's internal cyber assets | Internal | Network logs, database access events, Intrusion Detection Systems (IDS) logs, Intrusion Prevention Systems (IPS) logs | Provides information about activities internal to an organization |
| Human Intelligence (HUMINT) | Manual research and data collection | Both | Direct hacker interactions | Provides very precise and deep knowledge |
| Counter Intelligence | Providing false information to deceive hackers | Both | Honeypots, antihuman intelligence | Safely identifies tools and methods used by attackers |
| Finished Intelligence (FINTEL) | Finished intelligence ready for dissemination | Both | Commercial data feeds | Refines and analyzes intelligence |

Internal intelligence, the most common and traditional CTI data source, consists of network logs generated by Intrusion Detection Systems, Intrusion Prevention Systems, databases, servers, routers, switches, and other network devices on an organization's networks. Such data is relatively straightforward to collect (e.g., via dedicated packet sniffers, packet captures, vulnerability assessments, port scans, log aggregators, etc.) and provides significant information regarding an organization's operations. OSINT offers organizations an opportunity to look at the "outside world" to identify relevant threats. Common data sources include traditional social media sites (e.g., Facebook, Twitter, PasteBin, Shodan, etc.) and Darknet (i.e., online hacker community) sources such as hacker forums, Darknet Marketplaces (DNMs), Internet-Relay-Chat (IRC), carding shops, and others. Such data sources can provide an excellent overview of potential cyber threats present within relevant industries. HUMINT relies on manual research and data collection (e.g., direct hacker interactions, ethnographies, etc.) to gain very precise and deep knowledge about particular threats (e.g., advanced persistent threats). Counter Intelligence is based on providing false information through automated or manual approaches (e.g., honeypots, antihuman intelligence) to deceive hackers. Such methods offer a relatively safe way to identify the tools and methods used by attackers without directly engaging with them. Finally, FINTEL is finished intelligence ready for dissemination.
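As a simplified illustration of how internal intelligence is aggregated, the sketch below parses firewall-style log lines and counts denied connections per source IP. The log format, field names, and addresses are hypothetical; production collection would run through dedicated log aggregators or a SIEM:

```python
import re
from collections import Counter

# Hypothetical log format: "<timestamp> <ALLOW|DENY> src=<ip> dst=<ip> port=<n>"
LOG_LINE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<action>ALLOW|DENY)\s+"
    r"src=(?P<src>[\d.]+)\s+dst=[\d.]+\s+port=\d+$"
)

def count_denied_by_source(log_lines):
    """Aggregate DENY events per source IP from firewall-style log lines."""
    counts = Counter()
    for line in log_lines:
        match = LOG_LINE.match(line.strip())
        if match and match.group("action") == "DENY":
            counts[match.group("src")] += 1
    return counts

sample_log = [
    "2020-01-01T10:00:00Z DENY src=198.51.100.7 dst=10.0.0.5 port=22",
    "2020-01-01T10:00:01Z ALLOW src=203.0.113.9 dst=10.0.0.5 port=443",
    "2020-01-01T10:00:02Z DENY src=198.51.100.7 dst=10.0.0.6 port=23",
]
denied = count_denied_by_source(sample_log)
```

The same per-source counts could then feed downstream analytics such as event correlation or anomaly detection.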


Popular Cyber Threat Intelligence (CTI) Analytics

Each of the data sources summarized in the previous subsection can offer significant CTI value. However, each source requires significant analysis before that value can be effectively drawn from it and insights can be developed. Broadly speaking, seven major types of CTI analytics exist: (1) summary statistics, (2) event correlation, (3) reputation services, (4) malware analysis, (5) anomaly detection, (6) forensics, and (7) machine learning. Table 2 briefly describes each analytics procedure, provides salient examples, and summarizes the value of each technique. We note that the analytics summarized in Table 2 are neither exhaustive nor mutually exclusive. In practice, each of the listed analytical approaches can be used in isolation or in conjunction with the others to maximize the potential CTI value. Moreover, companies may have developed proprietary approaches to effectively analyze the CTI data source(s) of interest and relevance to them.

Table 2 Common cyber threat intelligence (CTI) analytics procedures

| Analytical approach | Description | Examples | Value |
| Summary statistics | High-level summary of collected data | Number of blocked IPs, locations of attacks, counts over time | Good overview for cybersecurity executives for decision-making purposes |
| Event correlation | Analyzes relationships between events | Identifying a machine sending malicious traffic by checking firewall logs | Integrates multiple sources of data together (usually internal network) |
| IP reputation services | Identifying the quality of an IP | IP "X" has a poor reputation | Identify which IP addresses to block |
| Malware analysis | Analyzing (statically and/or dynamically) malicious files on a network or a system | Decompiling ransomware | Bolsters technical cyber defenses against malicious files and binaries |
| Anomaly detection | Detecting abnormal activities and behaviors | Unusual user logins, spurious network activities | Detect malicious activities |
| Forensics | Identifying and preserving digital evidence | Examining RAM from a malicious or hacked system | Identifying how an attack occurred |
| Machine learning | Algorithms that can learn from and make predictions about/describe data | Classifying malware | Automating analysis and identifying patterns within data that are not possible by other methods |

Summary statistics provide CTI analysts with the ability to supply overviews and high-level summaries of vast quantities of data and/or carefully refined analytic results. Event correlation aims to fuse and integrate multiple data sources (usually internal network data sources) to analyze relationships between events. IP reputation services aim to identify the quality of an IP address based on the type of data and payloads it is generating. Such analysis is valuable when identifying which IPs to blacklist and block. Malware analysis is the process of systematically analyzing (through static and/or dynamic procedures) malicious files on a network or a system. Knowledge gleaned from these procedures can bolster cybersecurity controls against malicious files and binaries. Anomaly detection aims to detect abnormal activities and behaviors which deviate from a predefined set of normal activities (i.e., a baseline). Forensics aims to carefully examine the factors which led to a cyber breach by identifying and preserving digital evidence. Finally, machine learning offers a suite of algorithms that can learn from data and make predictions about and/or describe it. Such analysis is valuable for automation and for identifying trends and patterns within data that are not detectable by other methods. Taken together, each analytics procedure offers a valuable mechanism to systematically sift through terabytes of generated data and glean valuable cybersecurity insights.
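Baseline-driven anomaly detection can be illustrated with a simple z-score check: an observation is flagged when it deviates from the historical baseline by more than a chosen number of standard deviations. The data and threshold below are illustrative, not drawn from any vendor's platform:

```python
from statistics import mean, stdev

def is_anomalous(baseline_counts, observed, z_threshold=3.0):
    """Return True when `observed` deviates from the baseline mean by more
    than `z_threshold` standard deviations."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# Hypothetical daily login counts for one user account over a week.
baseline = [41, 39, 44, 40, 42, 38, 43]
spike_flagged = is_anomalous(baseline, 120)   # sudden burst of logins
normal_passed = is_anomalous(baseline, 45)    # within ordinary variation
```

Real platforms use far richer baselines (per-user, per-asset, seasonal), but the underlying deviate-from-normal logic is the same.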

Operational Intelligence: Visualization, Report Generation, and Intelligence Dissemination

A key aspect of any CTI program and platform is the proper implementation of operational intelligence capabilities. This often takes shape through three methods: visualization, report generation, and intelligence dissemination. Visualizations are visual representations of analyzed data. CTI analysts may have to deal with hundreds of thousands of log files, hacker data records, and similar artifacts. Generally speaking, visual cues can be more intuitive than raw data and pure textual information. Thus, visualizations can make threat data analytics and dissemination significantly easier. More importantly, they can enable better decision-making, provide value to strategic-level employees, and serve as an excellent reporting mechanism. Oftentimes, visualizations are the fundamental building blocks for the biweekly, quarterly, semiannual, and annual CTI reports offered by major organizations. Common visualizations within the CTI landscape include, but are not limited to, bar charts, line charts for trend analysis and temporal evolution, network science-based representations, radar charts, box-and-whisker plots, geospatial maps, heat maps, and many others.

Following the generation of these visualizations, reports, and analyzed data is the sharing of information with key stakeholders. Cyber threat information sharing is critical in the fight against today's sophisticated cyber adversaries and emerging threats. It relies on asking and answering a series of key questions, including (but not limited to) whom to tell (e.g., the incident response team, chief information security officer, staff, developers, clients, etc.), when to tell, what to tell, and how to tell. Ultimately, the sharing of this information can help organizations deploy automated and manual defenses (i.e., security controls) to help prevent and mitigate cyberattacks.
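Commercial dashboards are dynamic and interactive; purely as a minimal, illustrative stand-in (hypothetical counts, standard library only), even a crude text rendering shows how an aggregate statistic becomes a visual cue:

```python
def ascii_bar_chart(series, width=40):
    """Render a label -> count mapping as a simple text bar chart."""
    peak = max(series.values())
    lines = []
    for label, value in series.items():
        bar = "#" * max(1, round(value / peak * width))
        lines.append(f"{label:>10} | {bar} {value}")
    return "\n".join(lines)

# Hypothetical counts of blocked connection attempts per day.
blocked_per_day = {"Mon": 120, "Tue": 80, "Wed": 300}
chart = ascii_bar_chart(blocked_per_day)
```

A production dashboard would of course use a charting library and live data, but the mapping from aggregate to visual is the same.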
Automated defenses are those in which security controls are deployed automatically after an event has been identified. Examples of automated defenses include deploying a firewall rule based on abnormal activity on a port, flagging specific emails based on their features, and automatically blocking a user account


based on an abnormal activity. Such actions can significantly reduce an incident response team's overall workload. In contrast, manual defenses require forms of human intervention. These include manual deployment of third-party controls, high-risk mitigation strategies, human interfaces, interacting with hackers in the Darknet, addressing insider threats, combating social engineering threats, and many others.
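A minimal sketch of one such automated control follows. The rule format is iptables-style and the threshold is arbitrary for illustration; a real platform would integrate with the firewall's actual management API rather than emit rule strings:

```python
def auto_block_rules(failed_logins_by_ip, threshold=10):
    """Emit iptables-style DROP rules for IPs exceeding a failed-login threshold."""
    rules = []
    for ip, failures in sorted(failed_logins_by_ip.items()):
        if failures >= threshold:
            rules.append(f"-A INPUT -s {ip} -j DROP")
    return rules

# Hypothetical failed-login counts aggregated from authentication logs.
observed = {"198.51.100.7": 42, "203.0.113.9": 3}
rules = auto_block_rules(observed)
```

Here only the address with 42 failures crosses the threshold, so a single block rule is produced while ordinary users are left untouched.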

Review of Cyber Threat Intelligence (CTI) Platforms

In total, we reviewed 91 companies focused on developing CTI capabilities. Most of the reviewed CTI companies are US-based, though a small number of international firms were included. The CTI companies were identified through a Gartner report and through domain expertise regarding reputable vendors with advanced threat intelligence capabilities. Further, the review is focused on companies which offer paid platforms, rather than on those found through open source feeds and resources (such as those found at https://github.com/theragnarpatel/awesomethreat-intelligence). This helps focus the review on the commercial space. Additionally, we focused on reviewing companies which offered CTI platforms with the three major components described in the previous subsections: data collection, analytics, and operational intelligence. For each company, we aimed to identify what data sources its platform uses, when the company was founded, where it is headquartered, and other information. To conduct the review, we carefully examined each company's website, platform data sheets, fact sheets, and other publicly accessible resources. This helped ensure that we gathered a comprehensive set of information for each company's CTI platform without purchasing the actual platform itself (outside the scope of this project).

A summary of our review is provided in this section. Specifically, we first provide an overview of the entire industry (e.g., age of the industry, locations of many companies, revenue streams, their public/private status, etc.). We then systematically review platforms on the three aspects found in CTI platforms: data collection efforts, CTI analytics efforts, and operational intelligence efforts. Throughout the summary, we highlight specific company names as examples and illustrations. Interested readers who wish to obtain the full review in table format can email the authors.

General Observations: Industry Age, Locations, Revenue Streams, and Public Standing

In contrast to other industries such as retail, banking, and healthcare that have existed for over half a century, 59 of the 91 reviewed CTI companies were founded in the 2000s. Similar to many traditional and well-established technology companies (e.g., Facebook, Twitter, LinkedIn, Google, Apple, Adobe, etc.), many (34 of 91) CTI companies were founded in and are currently headquartered in the greater Silicon Valley area (e.g., San Francisco, San Jose, Palo Alto, Mountain View, etc.) and in other areas of California (e.g., Los Angeles, San Diego). However, in contrast to the traditional technology companies, the majority of the reviewed CTI companies (56 of 91) were founded and are currently headquartered outside of California. These include Impulse in Tampa, Florida; Threatq in Washington, DC; Forcepoint in Austin, Texas; and Insights in Israel. Such observations bode well for researchers aiming to produce CTI platforms using federal seed funding and other mechanisms (refer to subsection e of the next section) outside of the numerous venture capitalist (VC) funding opportunities commonly found within Californian borders.

At the time of this writing, 24 of the 91 companies are publicly traded, while the remainder (67 of 91) remain private. Table 3 summarizes the reviewed CTI companies based on their revenue bracket. The relative youth of the CTI industry has resulted in the majority of companies (53 of 91) falling within the $1–$10 million (22 of 91) and $10–$100 million (31 of 91) revenue brackets. Companies in the former bracket include Ziften, ZeroFox, and Lifars, while the latter includes LookingGlass, ForeScout, and AlienVault. Only one of these companies, NSFocus, is publicly traded; the remainder are private. In contrast, 14 of the 18 companies in the over-$1 billion revenue bracket (e.g., Checkpoint, Verint, Juniper Networks) are public. These 14 make up 73.68% of all publicly traded companies within the CTI industry. Overall, these results indicate that the CTI industry remains one that is rapidly emerging and growing (boding well for budding entrepreneurs in this space), not one which has reached saturation.

Table 3 Summary of CTI companies based on revenue brackets

| Revenue range | Number of companies | Publicly traded | Private | Example companies |
| $1–$10 million | 22 | 0 | 22 | Ziften, ZeroFox, Lifars |
| $10–$100 million | 31 | 1 | 30 | LookingGlass, ForeScout, AlienVault |
| $101–$999 million | 20 | 4 | 16 | CarbonBlack, FireEye, NSFocus |
| Over $1 billion | 18 | 14 | 4 | Checkpoint, Verint, Juniper Networks |

Data Collection Efforts

Carefully examining the data sources used by the reviewed CTI companies revealed that more than 90% of the companies rely either solely or primarily on internal network data (e.g., log files generated from servers, databases, IDS/IPS, security information and event management (SIEM) systems, and other networked devices). Oftentimes, the log files are collected with one or a combination of two strategies. The first is deploying sensors and log aggregators on client networks to gather data. This allows the CTI platforms provided by the CTI vendors to gather and analyze data closest to the client. One


such platform that employs this strategy is ThreatConnect, which offers services to place monitoring technologies on selected endpoints within an enterprise network to collect data. The second strategy relies on sensor networks deployed worldwide. Such a deployment enables the collection of vast amounts of data and provides a global perspective on potential cyberattack events. One company deploying worldwide sensor networks is FireEye.

While network data remains the prevailing data source, the Darknet is slowly emerging as a viable data source for selected CTI companies. The Darknet consists of a multitude of underground online communities inhabited by cybercriminals. Darknet communities span web forums, Internet-Relay-Chat (IRC), black markets, stolen data shops, and more. Participants often share or trade cybercriminal assets such as hacking tools, tutorials, and services. Discussion topics are often related to new attack techniques, emerging threats, potential targets, victims, and so on. The availability of these resources and information has enabled many lesser-skilled Internet miscreants to conduct advanced cybercriminal operations that may cause disruption and financial loss. The Darknet thus exacerbates existing cybersecurity issues and is a critical data source for academics and security practitioners. To the best of our knowledge and to the extent of our review, 13 of the 91 CTI companies explicitly mention their use of the Darknet as a CTI data source: Lifars, LookingGlass, Recorded Future, Digital Shadows, Skybox Security, Blueliv, Verint, Cyber4Sight Booz Allen Hamilton, SurfWatch Labs, Flashpoint, Insights, ZeroFox, and DarkOwl.

While the specific aspects of the Darknet collected and used for CTI analytics are not detailed in many companies' data and fact sheets, Table 4 summarizes each of these companies based on the date it was founded, its location, whether it is private or public, its revenue, its target audience, and whether the Darknet is the only CTI data source it uses.

Several key observations are drawn from our review of CTI companies using Darknet data sources. First, 11 of the 13 reviewed companies are private; only Verint and Cyber4Sight Booz Allen Hamilton are publicly traded. Among the 11 that are private, 8 were founded in the past decade (i.e., since 2009). Second, revenues for these companies range between $2.5 million and $5.48 billion. These observations suggest that many established companies have not yet forayed extensively into Darknet data sources. However, this appears to be the focus of 50% (11 of 22) of the companies within the $1–$10 million range. Third, about half of the companies using the Darknet aim to provide services to small- to medium-sized organizations. This indicates that medium- to large-sized organizations are often not the target audience of these CTI companies. Finally, carefully examining each company's data sheets indicates that, for 12 of the 13 companies, the Darknet is not the only data source employed for analytics purposes: internal network data is used to correlate and analyze events occurring on network devices and Darknet platforms. The only exception is DarkOwl, which focuses its value proposition on collecting a comprehensive set of Darknet data from which selected clients develop intelligence (whereas the other companies are more targeted in nature).


Table 4 Summary of selected CTI companies using Darknet as a data source

| CTI company using Darknet | Date founded | Location | Public or private? | Revenue | Target audience | Only data source Darknet? |
| Lifars (consulting firm) | 2013 | New York, NY | Private | $6.2 million | Small to medium organizations | No |
| LookingGlass | 2006 | Washington, DC | Private | $12 million | Small to medium organizations | No |
| Recorded Future | 2007 | San Francisco, CA | Private (Series E) | $5 million | Medium to large organizations | No |
| Digital Shadows | 2011 | San Francisco, CA | Private (Series C) | $5 million | Small to medium organizations | No |
| Skybox Security | 2002 | Silicon Valley | Private | $15 million | Medium to large organizations | No |
| Blueliv | 2009 | European Union (EU) | Private (Series A) | $11.3 million | Small to medium organizations | No |
| Verint | 1994 | New York, NY | Public | $1.135 billion | Large organizations | No |
| Cyber4Sight Booz Allen Hamilton | 1914 | McLean, VA | Public | $5.48 billion | Large organizations and government | No |
| SurfWatch Labs | 2013 | Washington, DC | Private (Series B) | $3 million | Small to medium organizations | No |
| Flashpoint | 2010 | New York, NY | Private (Series C) | $5.6 million | Medium to large organizations | No |
| Insights | 2015 | Israel | Private (Series D) | $2.5 million | Small to medium organizations | No |
| ZeroFox | 2013 | Baltimore, MD | Private (Series C) | $8 million | Small to large organizations | No |
| DarkOwl | 2015 | Denver, CO | Private | $6.1 million | Small to large organizations | Yes |

CTI Analytics Efforts

Our review revealed that all companies use one or a combination of the CTI analytics procedures detailed earlier (e.g., summary statistics, event correlation, malware analysis, IP reputation services, anomaly detection, forensics, etc.). Employing this breadth of analytics provides two key benefits. First, it enables companies to have a


rich toolbox of approaches that can effectively extract intelligence from the diverse data often collected. Second, providing a suite of analytics options allows these companies to offer multiple selections and pricing plans to organizations interested in purchasing such offerings. Such strategies enable CTI companies to maximize their revenues and marketability to organizations interested in adopting a CTI platform as part of their overall cybersecurity strategy. CTI companies providing multiple offerings include FireEye, McAfee (multiple malware analysis offerings), and Symantec.

Our review also revealed that artificial intelligence-based methods (e.g., data mining, machine learning, natural language processing) have rapidly permeated the cyber threat intelligence industry. While not as widespread as the traditional CTI analytics, these methodologies offer a promising mechanism to automate CTI analytics and discover patterns, relationships, and associations within large amounts of cybersecurity data which would otherwise be undetectable. Traditional CTI analytics that have seen significant improvements and benefits include malware analysis and anomaly detection. Oftentimes, these analytics are conducted at a "Big Data" scale (e.g., terabytes of data moving in real time). Companies at the forefront of AI-based CTI analytics include McAfee, CyLance, Cybersift, and Insights. However, carefully examining the data sheets from these companies suggests that many of the algorithms used for analytics are "out-of-the-box" (i.e., provided by standard data mining software packages such as scikit-learn, RapidMiner, or WEKA). One such example is using the standard k-means clustering algorithm to automatically group similar malware samples. Ultimately, such algorithms can enable more effective and efficient threat detection, mitigation, and incident response.
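To make the clustering example concrete, the sketch below implements a bare-bones k-means over hypothetical two-dimensional malware feature vectors (say, normalized file size and packed-section ratio); in practice one would call an out-of-the-box implementation such as scikit-learn's KMeans rather than hand-rolling the algorithm:

```python
def kmeans(points, centroids, iterations=10):
    """Minimal k-means: assign each point to its nearest centroid, then
    recompute centroids as cluster means. Returns (centroids, assignments)."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        assignments = []
        for p in points:
            distances = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            idx = distances.index(min(distances))
            clusters[idx].append(p)
            assignments.append(idx)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, assignments

# Hypothetical feature vectors for six malware samples (two loose families).
samples = [(0.1, 0.2), (0.15, 0.22), (0.12, 0.18),
           (0.9, 0.85), (0.88, 0.9), (0.95, 0.8)]
centroids, labels = kmeans(samples, centroids=[(0.0, 0.0), (1.0, 1.0)])
```

The fixed initial centroids keep the toy run deterministic; real workloads would use randomized initialization and many more features per sample.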

Operational Intelligence Efforts

Reviewing the operational intelligence aspect revealed two key discoveries. First, to our surprise, fewer than half of the companies (43 of 91) offered visualization services as part of their CTI platform services. Those companies providing such services relied heavily on dashboards that combined multiple types of visualizations (e.g., bar charts, line charts, network diagrams, etc.). Oftentimes, these dashboards are real time, dynamic, interactive, and carefully laid out such that key stakeholders receive actionable (i.e., relevant and timely) intelligence. Examples of companies providing visualization services include Splunk, Insights, Rapid7, Checkpoint, and others. Interestingly, nearly all of the companies collecting and analyzing Darknet data offer some visualization capabilities.

The second key insight drawn from our review is the provision of threat intelligence feeds. Oftentimes, companies have developed numerous technical mechanisms for storing threat analysis data. These include Structured Threat Information eXpression (STIX), Cyber Observable eXpression (CybOX), and/or Malware Attribute Enumeration and Characterization (MAEC). Each uses data formats such as eXtensible Markup Language (XML) and/or JavaScript Object Notation (JSON).


Moreover, each follows a prespecified schema and data dictionary definitions. Automatic data sharing technologies such as Trusted Automated eXchange of Indicator Information (TAXII) and other Application Programming Interfaces (APIs) serve as technical mechanisms for interested clients or consumers to automatically collect selected data from companies. Such feeds can provide actionable intelligence for other organizations within and across industries, as well as input for other organizations' threat analytics. Organizations can serve as providers of threat feeds, consumers, or a combination of both. Ultimately, this enables organizations to share data within and across industries at near real-time speeds. Selected considerations when choosing threat feeds include the following:

• Data sources used for analytics and operational intelligence
• Analytics employed to analyze selected data
• Cost of feed (if appropriate and relevant)
• Functionalities of feed (e.g., dynamic updating, etc.)
• Formatting of the data feed for ingestion into existing databases and warehouses
• Visualization capabilities to present collected and analyzed data
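To ground these formats, the sketch below hand-builds a minimal STIX 2.1-style indicator object as JSON using only the standard library. The field set here is a simplified illustration; production code would use a dedicated library such as the OASIS-maintained stix2 Python package, and the IP address is a documentation-reserved example:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern, description):
    """Build a minimal STIX 2.1-style indicator dictionary (simplified)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "valid_from": now,
        "pattern_type": "stix",
        "pattern": pattern,
        "description": description,
    }

indicator = make_indicator(
    pattern="[ipv4-addr:value = '198.51.100.7']",
    description="Source of repeated brute-force login attempts",
)
payload = json.dumps(indicator)  # JSON ready to publish over a TAXII channel
```

Because the object is plain JSON conforming to a prespecified schema, any consumer subscribed to the feed can ingest it automatically.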

Existing Gaps Within CTI Platform Landscape and Potential Opportunities

To date, staggering advances have been made in progressing CTI platforms. However, any nascent industry has its gaps, and identifying and progressively working to address those gaps can help develop novel insights and capabilities beyond those currently seen within the marketplace. Throughout our review, we identified four major gaps and areas of further expansion: (1) lack of proactive OSINT-based CTI platforms, (2) enhancement of natural language processing (NLP) and text mining capabilities, (3) enhancement of data mining capabilities, and (4) further integration of big data and cloud computing technologies. We note that issues such as customization, cost, vendor selection, and vendor support are not unique to the CTI industry but are common across various technological industries; as such, we omit these general drawbacks and considerations. Rather, we focus on the four abovementioned issues that are unique to the CTI industry. The final subsection summarizes some of the possible opportunities available to academia (with special focus on current federal funding opportunities) to solve some of these gaps and progressively improve the CTI platform landscape.

Shift from Reactive to Proactive OSINT-Based CTI Platforms

Despite its value, existing CTI practice has been criticized as reactive to known exploits, rather than proactive to new and emerging threats from the hackers themselves. To combat these concerns, CTI experts have suggested proactively


examining emerging exploits in the vast, international, and rapidly evolving online hacker community platforms (i.e., the "Darknet"). As indicated earlier, Darknet platforms include hacker forums, Darknet Marketplaces, Internet-Relay-Chat (IRC), and carding shops. Hackers have obtained exploits in forums to execute well-known breaches (e.g., Target in 2013). Innovative solutions for salient cybersecurity issues require interdisciplinary efforts cutting across private and public sectors. Recognizing these challenges, there is a need to develop advanced, proactive CTI capabilities by (1) identifying and automatically collecting a multimillion-record testbed of hacker community posts and (2) analyzing the rich textual nature of these posts to identify emerging threats, specifically malicious hacker exploits (malware). While there is a growing body of literature in these areas (Benjamin and Chen 2013; Benjamin et al. 2015, 2016, 2019; Benjamin 2016; Li 2017; Li and Chen 2014; Li et al. 2016a, b; Samtani et al. 2015, 2016, 2017; Samtani and Chen 2016; Grisham et al. 2017), significant work is required to effectively transition the proof-of-concept and proof-of-value methodologies demonstrated in these studies into practice.
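A toy illustration of this kind of proactive mining is tracking how often an exploit-related term appears in hacker community posts per month, so that a sudden rise flags an emerging threat. The posts and term below are invented, and real pipelines would use the large-scale collection and analysis methods described in the studies above:

```python
from collections import defaultdict

def monthly_term_counts(posts, term):
    """Count posts mentioning `term` per month; posts are (YYYY-MM, text) pairs."""
    counts = defaultdict(int)
    for month, text in posts:
        if term.lower() in text.lower():
            counts[month] += 1
    return dict(counts)

# Hypothetical hacker forum posts (month, text).
posts = [
    ("2019-01", "selling fresh cc dumps"),
    ("2019-02", "new SMB exploit dropped, works on unpatched hosts"),
    ("2019-03", "anyone have the SMB exploit source?"),
    ("2019-03", "SMB exploit tutorial, step by step"),
]
trend = monthly_term_counts(posts, "smb exploit")
```

A rising month-over-month count for a term like this would prompt analysts to investigate before the exploit reaches their networks.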

Enhancement of Natural Language Processing (NLP) and Text Mining Capabilities

Significant quantities of the data found within cybersecurity are text. Traditional internal network devices (e.g., Intrusion Detection Systems, Intrusion Prevention Systems, databases, workstations, routers, switches, gateways, etc.), emerging hacker community data sources (e.g., hacker forums, Internet-Relay-Chat, carding shops, and Darknet Marketplaces), and traditional Open Source Intelligence (OSINT) sources offered by major commercial entities (e.g., Facebook, Twitter, PasteBin, Shodan, etc.) are rich with information that can significantly aid organizations in developing comprehensive and holistic cyber defenses. To date, many cybersecurity companies, such as FireEye, Splunk, IBM, and Webroot, are looking beyond traditional structured data to mine novel insights out of these rich textual data sources. However, such practices have not yet been widely adopted across the entire CTI industry. A common paradigm which many companies and researchers can adopt is natural language processing (NLP) and text mining. Such methodologies can offer significant value in traditional CTI analytics, including but not limited to malware analysis, phishing (e.g., fake email and/or website detection), anomaly detection, and many others. Relevant techniques include semantic matching, coreference resolution, distant supervision, tagging, parsing, named entity recognition (NER), entity resolution, feature selection/reduction, ontology development, topic modelling (e.g., latent Dirichlet allocation, latent semantic analysis), and others. In recent years, there has been a shift to emerging deep learning-based NLP approaches.
These include neural information retrieval (neural IR), hacker language modelling, diachronic linguistics (i.e., mapping how language evolves over time to detect emerging threats), deep structured semantic modelling for short-text matching (which offers value for data fusion tasks), text-augmented social network analysis, and numerous others. Despite

7 Cybersecurity as an Industry: A Cyber Threat Intelligence Perspective

remarkable advances in the fundamental principles underlying each of the aforementioned methodologies, the unique characteristics of cybersecurity data (e.g., version names, rapidly evolving hacker terminology, significant multilingual content, computer-generated content, etc.) necessitate the development of novel, computationally oriented NLP and text processing approaches inspired by these domain-specific features. Consequently, such approaches can form the feature and value proposition of novel CTI platforms offered by nascent CTI companies entering the existing marketplace.
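As one concrete illustration of the diachronic idea mentioned above (tracking how hacker terminology shifts over time to flag emerging threats), the following minimal Python sketch compares term frequencies between an earlier and a later window of posts and surfaces rapidly rising terms. The sample posts, smoothing constant, and threshold are illustrative assumptions rather than a validated method.

```python
from collections import Counter

# Toy diachronic analysis: compare term frequencies between an earlier and a
# later window of posts to surface terminology that is gaining traction.
def tokenize(text):
    return [t.lower().strip(".,!?") for t in text.split()]

def emerging_terms(old_posts, new_posts, min_ratio=3.0, smoothing=1.0):
    old = Counter(t for p in old_posts for t in tokenize(p))
    new = Counter(t for p in new_posts for t in tokenize(p))
    scores = {}
    for term, n in new.items():
        # Additive smoothing handles terms never seen in the earlier window.
        ratio = (n + smoothing) / (old.get(term, 0) + smoothing)
        if ratio >= min_ratio:
            scores[term] = ratio
    return sorted(scores, key=scores.get, reverse=True)

old = ["selling rats and keyloggers", "keyloggers for sale"]
new = ["cryptolocker builder released", "cryptolocker source leaked",
       "need cryptolocker affiliates"]
print(emerging_terms(old, new))   # 'cryptolocker' is flagged as emerging
```

A production system would replace whitespace tokenization and raw frequency ratios with the domain-adapted NLP machinery discussed above (hacker language models, NER, topic models), since hacker jargon and multilingual content defeat naive tokenizers.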

Enhancement of Data Mining Capabilities

Data mining holds significant promise for advancing numerous analytics commonplace within CTI, including malware analysis, IP reputation services, phishing email detection, event correlation, anomaly detection, and others. Data mining can assist CTI efforts from two perspectives. First, it can help organizations and researchers identify patterns within datasets which are not readily apparent through other analytics approaches (e.g., summary statistics, manual inspection, basic malware analysis, etc.). Second, it can help CTI researchers and practitioners process large amounts of data effectively and efficiently. In a domain where data is generated at staggering rates from a variety of sources (e.g., internal network devices, OSINT, etc.), these benefits are critical to ensuring that an organization can effectively extract key insights from all collected data. Beyond enhancing the aforementioned traditional CTI analytics, data mining can open an array of new inquiries for cybersecurity. These include, but are not limited to, grouping similar types of network events together; clustering similar threat actors in hacker community platforms (e.g., hacker forums); classifying log files into predefined bins; detecting an adversary’s tactics, techniques, and procedures (TTPs); stream data mining to deliver real-time cyber threat intelligence data feeds; and many other analytics possibilities.
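To make the threat-actor clustering idea concrete, the sketch below groups actors by simple numeric activity features using a single-pass "leader" clustering heuristic. The feature choices (posts per week, attachments shared, tutorials authored) and the distance threshold are illustrative assumptions; production systems would use richer features and established algorithms (e.g., k-means, DBSCAN).

```python
import math

# Single-pass "leader" clustering sketch: each incoming actor joins the first
# cluster whose leader is within `threshold`, else starts a new cluster.
def leader_cluster(vectors, threshold):
    clusters = []                  # each cluster: (leader_vector, [member_names])
    for name, vec in vectors:
        for leader, members in clusters:
            if math.dist(vec, leader) <= threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]

# Hypothetical feature vectors: (posts/week, attachments shared, tutorials authored)
actors = [
    ("actor_a", (50.0, 12.0, 3.0)),   # prolific tool author
    ("actor_b", (48.0, 10.0, 4.0)),   # similar profile -> same cluster
    ("actor_c", (2.0, 0.0, 0.0)),     # low-activity lurker -> own cluster
]
print(leader_cluster(actors, threshold=5.0))
```

Even this crude grouping illustrates the analytic payoff: clusters of high-activity tool authors are natural candidates for the "key threat actor" prioritization CTI platforms aim to support.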

Further Integration of Big Data and Cloud Computing Technologies

Cybersecurity analytics, such as cyber threat intelligence (CTI) processes, is fundamentally a Big Data analytics problem. Promising CTI data sources include Open Source Intelligence (OSINT) such as Facebook, Twitter, PasteBin, hacker forums, IRC, Darknet Marketplaces, and carding shops. They may also include data from internal network devices such as IDS/IPS, routers, databases, firewalls, switches, servers, and SIEMs. These various data sources are aggregated to generate terabytes of heterogeneous (structured and unstructured, of varying quality) data, often at real-time speeds. Consequently, CTI shares the same five Vs commonly associated with traditional Big Data contexts (e.g., e-commerce, business intelligence, health informatics, etc.): volume, variety, velocity, veracity, and value. The similarities of CTI characteristics vis-à-vis these traditional, high-impact domains suggest that technologies such as Hadoop (MapReduce + Hadoop Distributed File System (HDFS)),


Apache Spark (GraphX, SparkSQL, Spark Streaming, and the Machine Learning Library (MLlib)), Hive, Sqoop, Mahout, and others can assist scholars and practitioners in efficiently collecting, processing, and presenting threat data, analytics, and key insights. For example, Hadoop can provide a highly scalable solution for systematically extracting features. Moreover, the rich set of analytical programs offered by technologies built upon Hadoop provides access to common feature reduction algorithms (e.g., principal components analysis (PCA), autoencoders, etc.) for extracting the most critical features from provided inputs for subsequent malware classification and clustering tasks. Storing feature lists in HDFS and then using MapReduce-based programs and technologies to distribute the feature extraction and selection process across a cluster of machines can achieve significant savings in time and cost. Apache Spark’s Spark Streaming functionality can allow researchers to analyze large amounts of real-time streaming data (commonplace in the CTI domain). Taken together, leveraging Big Data technologies in conjunction with fundamental CTI principles and approaches (e.g., honeypots and malware analysis) can push the frontier of novel CTI Big Data analytics. Similarly, fusing multiple naturally disparate data sources enables access to all attributes (commonly referred to as features or dimensions in the machine learning and data mining literature) in each fused source. This aggregation of attributes can significantly improve the performance of traditional data mining tasks such as classification, clustering, dimensionality reduction, and recommendation. Each is employed for significant cybersecurity analytics applications such as Big Data malware analysis, phishing detection, OSINT analysis (both hacker community and traditional social media sources), event correlation, and others.
Moreover, comprehensively collecting and aggregating a diverse set of features across multiple datasets increases a CTI researcher’s capability to develop new features for enhanced data mining algorithm performance and/or Key Performance Indicators (KPIs) (e.g., those that can assist organizations and researchers in systematically quantifying and prioritizing risk). Cloud computing services from major providers such as Amazon Web Services (AWS), Microsoft Azure, DigitalOcean, and others can provide effective mechanisms for organizations and researchers to deploy environments (e.g., on-demand networks across multiple geographic regions) that significantly streamline Big Data CTI collection and analytics for selected CTI platform capabilities.
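The MapReduce pattern described above can be illustrated without a cluster. The following stdlib-only Python sketch expresses feature extraction over malware-related posts as map and reduce phases; on Hadoop or Spark, the same two functions would be distributed across a cluster of machines. The sample posts are illustrative assumptions.

```python
from collections import Counter
from functools import reduce

# Word-count-style MapReduce sketch for distributing feature extraction.
def map_phase(post):
    # Emit partial (feature, count) pairs; here a "feature" is simply a
    # lowercased token, standing in for richer extracted attributes.
    return Counter(post.lower().split())

def reduce_phase(a, b):
    # Merge partial counts, as a combiner/reducer would across nodes.
    return a + b

posts = [
    "ransomware builder with AES payload",
    "AES ransomware affiliates wanted",
]
features = reduce(reduce_phase, map(map_phase, posts), Counter())
print(features["ransomware"])   # 2
print(features["aes"])          # 2
```

Because both phases are pure functions over independent inputs, the map work parallelizes trivially, which is precisely the property that lets HDFS/MapReduce deployments scale this pattern to terabytes of collected posts.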

Opportunities and Strategies for Academia to Address Identified Gaps

Addressing the gaps summarized in the previous subsections is a nontrivial task. Each requires well-constructed teams with a diverse set of expertise, interests, and experiences. Such teams can include perspectives drawn from multiple disciplines, including, but not limited to, technical fields such as computational linguistics, computer science, information systems, electrical and computer engineering, and information science, as well as social science-based fields such as cognitive science, communications, criminology, psychiatry, and psychology.


Each offers a unique perspective for tackling separate yet intertwined issues to help deliver unique CTI capabilities within novel CTI platforms. One of the most efficient mechanisms for quickly making an impact in the CTI landscape while simultaneously harnessing collective knowledge across disciplines is acquiring grant funding from world-renowned federal agencies. Acquiring such funding has numerous benefits: the ability to develop long-term, sustainable research programs around major CTI issues (as opposed to forming ad hoc teams); the ability to generate a strong reputation; the ability to recruit high-caliber research scientists and strong graduate students; and the ability to foster and facilitate industry/government and interdisciplinary academic collaborations. Among other options such as the Defense Advanced Research Projects Agency (DARPA) or the Intelligence Advanced Research Projects Activity, the National Science Foundation (NSF) regularly promotes solicitations relevant to CTI and, more broadly, cybersecurity. Common recent solicitations include Cybersecurity Innovation for Cyberinfrastructure (CICI), Secure and Trustworthy Cyberspace (SaTC), EArly-concept Grants for Exploratory Research (EAGER), Scholarship-for-Service (SFS), Community Computer and Information Science and Engineering (CISE) Research Infrastructure (CCRI), and Data Infrastructure Building Blocks (DIBBs). Table 5 presents a summary of selected NSF solicitations relevant to cybersecurity and details possible areas of CTI investigation for each. For the purposes of this chapter, we focus our review of promising solicitations on NSF only, as it is often viewed as the “gold standard” and the most prestigious and portable funding source for academics across a multitude of technical and nontechnical disciplines. Over the past decade, each program has funded multimillion-dollar, large-scale, multi-institutional projects.
Each program encourages multidisciplinary, high-impact cybersecurity research with significant intellectual merit that is publishable at premier and top-tier journals and conferences across a multitude of disciplines. Additionally, each program aims to fund projects which can offer significant broader impacts to practice. Examples include CCRI, DIBBs, and SFS. The first two can offer scholars enough seed funding to pursue and develop innovative CTI infrastructure such that professionals from academia, government, and industry can pursue high-impact opportunities not otherwise possible. SFS, on the other hand, can help address the significant shortage of trained cybersecurity professionals for eventual placement into federal, state, and local government positions by offering cutting-edge cybersecurity curricula and education opportunities. While each program offers significant promise in helping academics achieve and execute advanced CTI research, infrastructure, and capabilities, the program which has emerged as the premier source for cybersecurity funding is SaTC. Since its inception in 2012, the NSF SaTC program has supported research that addresses cybersecurity and privacy. Cutting across multiple CISE divisions and drawing upon numerous technical perspectives, SaTC project PIs have made remarkable advances in adversarial data mining, anomaly detection, real-time log analytics, and many other areas. SaTC offers several funding mechanisms, including CORE research, education (EDU), Transition to Practice (TTP), CISE Research Initiation Initiative (CRII), and CAREER. The latter two serve as prestigious early seed funding for junior faculty at the start of their academic careers (SaTC is among the program areas offered within CRII and


Table 5 Summary of selected National Science Foundation (NSF) cybersecurity-relevant funding opportunities with descriptions and possible areas of CTI investigation

NSF funding opportunity | Brief description of program | Possible areas of CTI investigation (selected)
Secure and Trustworthy Cyberspace (SaTC) | Supports fundamental research related to cybersecurity and privacy | Transitioning CTI analytics platforms to practice; identifying emerging threats and key threat actors via emerging machine learning, text mining, and deep learning analytics
Scholarship-for-Service (SFS) | Designed to enhance cybersecurity workforce development | Developing a workforce of government agents with significant CTI expertise (e.g., collection, analytics)
Community CISE Research Infrastructure (CCRI) | Provides resources to launch a computational research infrastructure | Designing a highly customized environment supporting advanced CTI data collection and analytics for multiple CISE research communities
Data Infrastructure Building Blocks (DIBBs) | Designing a computational research testbed to support transformative research opportunities | Developing a long-term storage source for maintaining and curating CTI data collection
EArly-concept Grants for Exploratory Research (EAGER) | Offers funding to potentially transformative yet high-risk projects | Data fusion of multiple CTI and traditional CTI platforms for advanced and holistic analytics
CISE Research Initiation Initiative (CRII) | Provides seed funding to early career junior faculty to initiate their research | Launching CTI research streams and teams to begin conducting selected CTI research
CAREER | Provides funding to junior faculty to promote a lifetime of research and teaching excellence | Integrative, long-term CTI research and education opportunities

CAREER). Among these, TTP presents a promising opportunity for teams of researchers with current SaTC funding to transition their research into the marketplace. Additional funding mechanisms to help transition research into practice and ensure the sustainability of the resulting technologies include the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs at NSF and other agencies such as the Department of Defense (DoD) and the Defense Advanced Research Projects Agency (DARPA).

Conclusion

Hackers’ regular exploitation of numerous information systems technologies costs the global economy nearly half a trillion dollars yearly. Cyber threat intelligence offers a promising mechanism for organizations to select appropriate cybersecurity controls (e.g., authentication, protocols, cryptography, etc.) to improve their overall


cybersecurity posture. CTI platforms are developed to streamline the CTI process: to prioritize threats; pinpoint key threat actors; understand their tactics, techniques, and procedures (TTPs); deploy appropriate security controls; and improve the overall cybersecurity hygiene and posture of organizations. These platforms draw upon (1) Open Source Intelligence (OSINT), (2) Internal Intelligence, (3) Human Intelligence (HUMINT), (4) Counter Intelligence, and (5) Finished Intelligence (FINTEL), coupled with (1) summary statistics, (2) event correlation, (3) reputation services, (4) malware analysis, (5) anomaly detection, (6) forensics, and (7) machine learning, to develop threat intelligence. Through visualization, report generation, and intelligence dissemination, the resulting threat intelligence can help organizations effectively deploy automated and manual defenses and ultimately improve their cybersecurity posture. Systematically reviewing dozens of CTI platforms revealed that the CTI industry remains rapidly emerging and growing and has not reached saturation. This bodes well for budding entrepreneurs interested in exploring this space. The review also identified five possible future directions CTI start-ups can explore: (1) a shift from reactive to proactive OSINT-based CTI platforms, (2) enhancement of natural language processing (NLP) and text mining capabilities, (3) enhancement of data mining capabilities, (4) further integration of big data and cloud computing technologies, and (5) opportunities and strategies for academia to address identified gaps. Numerous funding opportunities from highly visible sources (e.g., NSF CCRI, SaTC, CRII, CAREER, DIBBs, SFS, EAGER, SBIR/STTR, TTP, etc.)
can help scholars attain the requisite funds to assemble teams, conduct high-impact and relevant CTI research within these identified gaps, and transition their findings into the CTI industry for adoption by broader society. Ultimately, addressing one or more of these gaps can help ensure a safer and more secure society.

Acknowledgments This work was supported in part by NSF CRII CNS-1850362.

References

Anomali. (2017). ThreatStream 6.0 data sheet. https://anomali.cdn.rackfoundry.net/files/ThreatStream_6.0.pdf.
Benjamin, V. A. (2016). Securing cyberspace: Analyzing cybercriminal communities through web and text mining perspectives. Doctoral dissertation, University of Arizona.
Benjamin, V. A., & Chen, H. (2013). Machine learning for attack vector identification in malicious source code. In 2013 IEEE international conference on intelligence and security informatics (ISI) (pp. 21–23). IEEE.
Benjamin, V., Li, W., Holt, T., & Chen, H. (2015). Exploring threats and vulnerabilities in hacker web: Forums, IRC and carding shops. In 2015 IEEE international conference on intelligence and security informatics (ISI) (pp. 85–90). IEEE.
Benjamin, V., Zhang, B., Nunamaker, J. F., & Chen, H. (2016). Examining hacker participation length in cybercriminal internet-relay-chat communities. Journal of Management Information Systems, 33(2), 482–510.
Benjamin, V., Valacich, J. S., & Chen, H. (2019). DICE-E: A framework for conducting Darknet identification, collection, evaluation with ethics. MIS Quarterly, 43(1), 1–22.


Friedman, J. (2015). Definitive guide to cyber threat intelligence. CyberEdge Group, LLC. https://cryptome.org/2015/09/cti-guide.pdf.
Graham, L. (2017). Cybercrime costs the global economy $450 billion: CEO. Retrieved June 5, 2017, from https://www.cnbc.com/2017/02/07/cybercrime-costs-the-global-economy-450-billion-ceo.html.
Grisham, J., Samtani, S., Patton, M., & Chen, H. (2017). Identifying mobile malware and key threat actors in online hacker forums for proactive cyber threat intelligence. In 2017 IEEE international conference on intelligence and security informatics: Security and big data, ISI 2017 (pp. 13–18).
Kime, B. P. (2016). Threat intelligence: Planning and direction. SANS Institute. https://www.sans.org/reading-room/whitepapers/threats/threat-intelligence-planning-direction-36857. Accessed 5 June 2017.
Li, W. (2017). Towards secure and trustworthy cyberspace: Social media analytics on hacker communities. Doctoral dissertation, University of Arizona.
Li, W., & Chen, H. (2014). Identifying top sellers in underground economy using deep learning-based sentiment analysis. In 2014 IEEE joint intelligence and security informatics conference (pp. 64–67). IEEE.
Li, W., Chen, H., & Nunamaker, J. F. (2016a). Identifying and profiling key sellers in cyber carding community: AZSecure text mining system. Journal of Management Information Systems, 33(4), 1059–1086.
Li, W., Yin, J., & Chen, H. (2016b). Targeting key data breach services in underground supply chain. In IEEE international conference on intelligence and security informatics: Cybersecurity and big data, ISI 2016 (pp. 322–324).
Samtani, S., & Chen, H. (2016). Using social network analysis to identify key hackers for keylogging tools in hacker forums. In 2016 IEEE conference on intelligence and security informatics (ISI) (pp. 319–321). IEEE.
Samtani, S., Chinn, R., & Chen, H. (2015). Exploring hacker assets in underground forums. In 2015 IEEE international conference on intelligence and security informatics (ISI) (pp. 31–36). IEEE.
Samtani, S., Chinn, K., Larson, C., & Chen, H. (2016). AZSecure hacker assets portal: Cyber threat intelligence and malware analysis. In 2016 IEEE conference on intelligence and security informatics (ISI) (pp. 19–24). IEEE.
Samtani, S., Chinn, R., Chen, H., & Nunamaker, J. F. (2017). Exploring emerging hacker assets and key hackers for proactive cyber threat intelligence. Journal of Management Information Systems, 34(4), 1023–1053.
Shackleford, D. (2016). 2016 security analytics survey. SANS Institute. https://www.sans.org/reading-room/whitepapers/analyst/2016-security-analytics-survey-37467. Accessed 5 June 2017.

8 Surveillance, Surveillance Studies, and Cyber Criminality

Brian Nussbaum and Emmanuel Sebastian Udoh

Contents

Introduction . . . 156
Critical Surveillance Studies (CSS) . . . 156
Market Surveillance: Customers and Consumers as Object of Surveillance . . . 159
National Security Surveillance: Protective Surveillance by the State . . . 160
The Surveillance Studies Assemblage: Bringing Critical, Market, and Security Surveillance Together . . . 162
Surveillance Artifacts as the Subject of Crime . . . 163
Illicit Surveillance: Surveillance as a Mode of Cyber Crime . . . 166
Surveillance as Crime Control Measure . . . 169
Surveillance as Mechanism for Monitoring Technological Systems . . . 171
Concluding Thoughts . . . 172
References . . . 173

Abstract

This chapter examines the numerous connections between cybercrime and surveillance. The use of surveillance to detect and disrupt cybercrime, the theft and other illegal use of the artifacts of surveillance as a variety of cybercrime, and criminal types of surveillance are among the many ways in which the two relate. This chapter offers a holistic look at surveillance and cybercrime by examining both theoretical insights from literatures such as critical surveillance studies and the behavioral surveillance of consumers for marketing, as well as emerging practical issues in surveillance-related cybercrime, such as “sextortion” and the use of remote access trojans (RATs).

B. Nussbaum (*) · E. Sebastian Udoh
University at Albany, State University of New York, Albany, NY, USA
e-mail: [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_16



Keywords

Surveillance · Cybercrime · Surveillance Studies · Investigation

Introduction

Surveillance – the monitoring and close observation of behavior – is both an important metaphor and tool for thinking about many aspects of cybersecurity and cybersafety. Surveillance and cyber criminality have a multi-faceted and complex relationship. Surveillance can be a subject of crime – as when criminals steal data or other artifacts resulting from surveillance or hijack surveillance channels or mechanisms for illicit purposes. Surveillance can be a crime itself, as when cyber criminals install remote access trojans (RATs) to violate privacy or engage in sextortion. Surveillance can also serve as an important crime-fighting tool – both in cyberspace and in “meatspace” (traditional physical) environments – enabling authorities to monitor and intercede in criminal activity or to collect information and data that enable subsequent forensic investigation of such crimes. Whether that data is event logs from a computer crime or camera footage of an assault, surveillance is an important tool enabling authorities to engage in law enforcement. Finally, in a world of growing embedded computing, surveillance will increasingly be about monitoring the behavior of devices rather than just people, providing real-time pictures and monitoring of the behavior of broad and distributed cyber-physical systems. While this may not initially seem to have an obvious connection to cybercrime, denial of service and other attacks on these new Internet of Things systems will become an increasingly important area of cybercrime studies because of the increasing consequences of such attacks and their ability to disrupt services in the physical world.

Surveillance (close inspection of behavior) as . . . | Implication
Subject of crime | Artifacts of surveillance can be stolen or accessed illicitly
Illicit activity | Conducting certain kinds of surveillance is actually constrained by law (illegal) or rules (cheating)
Crime fighting measure | Many organizations and jurisdictions use surveillance – proactively (real time) or retrospectively (forensically) – to stop, alert for, or investigate criminal activity
Monitoring mechanism for technological systems | Surveillance of devices – as well as users/people – can be a mechanism for understanding system performance

Critical Surveillance Studies (CSS)

Etymologically, “surveillance” comes from two French words: (1) sur, which means “from above,” and (2) veiller, which means “to watch.” From these roots, therefore, surveillance means “watching over,” a form of discipline according to


French philosopher Michel Foucault in his landmark work Surveiller et punir, or Discipline and Punish (1975, 1995), in contrast to more recent developments such as sousveillance. In the context of espionage and counterintelligence, surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, directing, or protecting people (Lyon 2007a). Michel Foucault is widely acknowledged as the foundational thinker in surveillance studies (Lyon 1994, 6–7; Murakami Wood 2009, 235; Pryor 2006, 898; Gutting 1998, 708–713). In his work, Foucault sees discipline as a form of operational power relations and a technology of domination, which can be used to discipline, control, and normalize people. He specifically stated that factories, schools, barracks, and hospitals all resemble prisons (Foucault 1995, 228). Foucault therefore sees Jeremy Bentham’s panopticon as an ideal architectural symbol of modern surveillance societies and disciplinary power, consisting of an annular building divided into different cells and a huge tower with windows in the middle. In this architecture, prisoners, workers, pupils, and patients stay in the cells while a supervisor occupies the middle tower; this arrangement enables the supervisor to observe all occupants of the cells without being seen. This is Foucault’s idea of surveillance as “watching without being seen.” Surveillance studies is therefore a cross-disciplinary venture which aims to understand the rapidly increasing ways in which personal details (data or information) are collected, stored, transmitted, checked, and used as means of influencing and managing people and populations.
With the advances in information and communication technologies (ICT) and the social networking, collaborative production, distribution affordances, and data/information-sharing potentials of Web 2.0, surveillance now takes different forms and occurs on different platforms, e.g., observation from a distance by means of electronic equipment (e.g., closed-circuit television (CCTV) cameras), interception of electronically transmitted information (e.g., Internet traffic or phone calls), and simple methods (e.g., human intelligence agents and postal interception). This discourse addresses surveillance in the strict sense of electronic or Internet surveillance. The panopticon as a foundation of Internet surveillance studies divides scholars into two groups: those that espouse non-panoptic notions of Internet surveillance and those that espouse panoptic notions of Internet surveillance (Allmer 2011, 578). Proponents of non-panoptic notions of Internet surveillance either use a neutral concept founded on the assumption that surveillance has enabling as well as constraining effects, or a positive concept that identifies comical, playful, and amusing characteristics of online surveillance. Some prominent scholars in this tradition include David Lyon (1998, 2003a, b); Seumas Miller and John Weckert (2000); Anders Albrechtslund (2008); and Hille Koskela (2004, 2006). Proponents of the panoptic notions of Internet surveillance see online surveillance as negative, arguing that power, domination, coercion, control, discipline, and surveillance have increased in the Internet era. Some prominent scholars in this tradition include Greg Elmer (1997), Manuel Castells (2001), David Wall (2003, 2006), Joseph Turow (2005, 2006), Mark Andrejevic (2002, 2007a, b), and John Edward Campbell and Matt Carlson (2002).


Online social networking has been described as “participatory surveillance” (Albrechtslund 2008) or a “participatory panopticon” (Cascio 2006). Similarly, recognizing the social networking and crowdsourcing potentials of Web 2.0 technologies like Google and Facebook, Fuchs, Boersma, Albrechtslund, and Sandoval (2011) present Web 2.0 (the new Internet or social media platforms) as a two-edged sword, facilitating and enabling a participatory culture online on the one hand, and heightening the scale and consequences of electronic surveillance and privacy intrusion on the other. Specifically, the authors argue that surveillance is both an economic and a political issue: Web 2.0 platforms typically collect, store, and analyze personal data, thus enabling both commercial and state surveillance of consumers. Theoretically, critical Internet surveillance studies are founded on the conviction that surveillance has a political economy, with the surveillance economy divisible into the spheres of production, circulation, and consumption (Fuchs et al. 2011; Allmer 2011). In terms of research genre, Christian Fuchs draws a distinction between a cultural studies approach and a critical political economy approach to studying Web 2.0 surveillance (Fuchs 2011). Web 2.0 surveillance exerts power and domination by leveraging the affordances of the Internet, such as user-generated content and permanent dynamic communication flows, into a system of panoptic sorting, mass self-surveillance, and personal mass dataveillance (Fuchs 2011). With regard to surveillance’s political economy, David Lyon sees the nation state and the capitalist workplace as the main sites of surveillance on the Internet (Lyon 1998, 95, 2003a, 69, b, 163), arguing that surveillance technologies like the Internet reinforce asymmetrical power relations on an extensive and intensive level (Lyon 1998, 92).
Economic actors such as corporations carry out surveillance practices of all kinds in order to control certain economic behaviors, guarantee the production of surplus value, maximize profit, and coerce consumers to produce or buy specific commodities. In most cases, people do not know that they are being surveilled. Several scholars therefore regard economic online surveillance as negative and argue that it should be curtailed. Prominent among these scholars are Gandy (1993, 230–231), Castells (2001, 182–184), Parenti (2003, 207–212), Ogura (2006, 291–293), Lyon (1994, 159–225, 2001, 126–140, 2007a, 159–178, b, 368–377), Fuchs (2009, 115–117), and Marisol Sandoval (in Fuchs et al. 2011). However, some scholars also see positives in surveillance practices. For Anders Albrechtslund (2008; see also Albrechtslund’s contribution in this volume), being under surveillance also has positive aspects; he argues that online surveillance empowers users, constructs subjectivity, and is playful. Moreover, Internet surveillance as a social and participatory act involves mutuality and sharing (Albrechtslund 2008). Similarly, Koskela speaks of “the other side of surveillance,” which she says has resistant and liberating elements (2006, 175). As an example, Koskela argues in another work that webcams contribute to the “democratization” of surveillance (Koskela 2004, 204).

8 Surveillance, Surveillance Studies, and Cyber Criminality

Market Surveillance: Customers and Consumers as Objects of Surveillance

One of the places where surveillance is most commonplace and most successful in modern society is consumer surveillance – as companies attempt to market their goods, understand their customers, and manage their market position. From early behavioral studies of consumer behavior, through the growth of customer loyalty and other behavioral tracking programs, to the explosion of online surveillance, the consumer or customer is among the most surveilled entities of the modern era. While most of this monitoring, observation, analysis, and even manipulation began under concepts and names other than surveillance, there is an increasing consensus that such programs and efforts collectively constitute a surveillance regime (Lace 2005; Turow 2006). There is an emerging literature that equates modern marketing with surveillance (Pridmore and Zwick 2011; Pridmore and Lyon 2011), as well as a broader literature that generalizes to “consumer information gathering” (Karas 2002) as the basis for understanding market surveillance. In fact, other aspects of this “surveillance-industrial complex” (Ball and Snider 2013) are also now understood to constitute the monitoring of customers and consumers. These include customer loyalty programs (Zurawski 2011; Pridmore 2010), advertising (Turow 2006; Hackley 2002), web surveillance (Sandoval 2013), company attempts at building brands (Pridmore and Lyon 2011), customer retention (Alhawari 2015), and “customer knowledge management” (Rowley 2005). While these activities may differ in the extent to which they constitute surveillance, their collective impact creates an environment in which consumers are always under examination.
While today these efforts to track, understand, adjust the preferences of, and cater to consumers are increasingly understood as a version of surveillance, they largely started more prosaically and with short-term pragmatic ends. The tracking of consumer sentiment and behavior was pioneered in two places: first in retail industries, with the aim of improving sales by understanding consumer behavior, and second in marketing, with the aim of adjusting the perceptions or desires of consumers. These early efforts were often market, industry, or even company specific, in contrast with today’s more comprehensive online web advertising or online platform consumer knowledge efforts, which knit together disparate information on consumers to create customer profiles that cross all sorts of boundaries – geographic, industry, company, etc. That is not to suggest that earlier or narrower versions of customer and consumer analysis didn’t interact or cross boundaries at all. For example, attempts to understand aging or elderly consumers – one group whose behavior was deemed worthy of “surveillance” – emerged in many different ways and across many industries (Mortenson et al. 2015). The insurance industry pursued “customer knowledge management” to understand elderly consumers (Ranjbarfard 2016), as did other industries heavily dependent on older clients, such as the medical device industry
(Zippel and Bohnet-Joschko 2017) or the nursing home and elder care industry (Cheng et al. 2005). This cross-industry attempt to understand elderly customers will only grow with aging populations in many developed countries, but it has been going on for decades. Like most customer and consumer research, it too originated with behavioral analyses (Mason and Bearden 1978; Danziger et al. 1982; Martin 1976) focused on retail behavior, consumption patterns, and purchase decision-making. Only later did it begin to zero in on particular industries, geographic markets, product types, and the like. There are similar analyses in many industries, and of differing demographic or customer groups, often following the same pattern – beginning with early and relatively narrow behavioral studies of consumption patterns, and later moving into more holistic customer knowledge or consumer analyses. In addition to attempts to surveil customer and consumer behavior, the retail industry also pioneered many mechanisms and processes for surveilling physical space – its facilities and stores. Monitoring of the retail environment has gone on for as long as retail has existed, but even the study of these efforts is not new. By the 1980s, the industry and its practices were sufficiently evolved that it was possible to conduct large comparative studies of loss prevention strategies and programs in the retail environment (Baumer and Rosenbaum 1984). Different technologies of surveillance – like retail alarm systems – have also been studied to understand their practical and theoretical implications (Dawson 1993). The study of other types of retail surveillance, like the use of video cameras, began with initial questions about implementation and utility (Baumer and Rosenbaum 1984), but soon moved into more advanced studies of ethics (Kirkup and Carrigan 2000), transportability across sectors (Brodsky et al. 2002), and narrowed focus on the detection of particular criminal behavior (Fan et al. 2009).

National Security Surveillance: Protective Surveillance by the State

Starting in earnest in the 1970s, there was an explosion of writing about formerly secret intelligence agencies and programs, many of which were previously little-known outside the executive branch in the United States. In the wake of Cold War and Civil Rights era abuses in the 1950s and 1960s, agencies like the National Security Agency (NSA, formerly jokingly referred to as “No Such Agency”) suddenly became the subject of critical, and sometimes sympathetic, analysis. While much of this new literature was not about surveillance per se, it gave birth to a series of books that catalogued the institutions of US government surveillance. These books covered law enforcement (Cowan et al. 1974; Marx 1989; Ferguson 2017; Theoharis 1978, 2004), the military (Coakley 1996; Laurie and Cole 1997; Banks and Dycus 2016), and intelligence agencies (Bamford 1982, 2009). Collectively, this literature described a world of organizations that observed and monitored potential threats to national security, and the emerging regimes for holding such agencies accountable. This through line of research has continued until relatively recently,
including books that cover how such agencies and bureaucracies grew and changed over recent decades (Harris 2010) and what they look, and act, like in the post-9/11 era (Granick 2017). In addition to focusing on agencies or types of agencies, many writings about this world have focused on particular historical episodes. These range from singular events, such as the 2004 Olympics (Samatas 2007), to more common initiators, like particular programs (whether public, leaked, or discovered). For example, the PATRIOT Act, passed in the aftermath of the September 11 attacks, resulted in expansive literatures and debates about the surveillance capabilities of the US government and how such capabilities could keep pace with technological developments. Other surveillance programs – like NSA bulk data surveillance (Yoo 2014; Stoycheff 2016) and the President’s “Terrorist Surveillance Program” (Wong 2006; Avery 2007) – became public as the result of leaked documents or evidence. Such programmatically focused analyses – whether of program legality (Yoo 2014), technical specifications (Landau 2013), societal impact (Stoycheff 2016), or policy utility – are a common part of the literature of national security surveillance. Particular leaks or surveillance exposures – like those tied to Wikileaks or Edward Snowden – have generated research and analysis that constitutes an important component of the national security surveillance literature. There is no single literature that reflects such insights; rather, the implications of the Wikileaks releases are studied by those interested in public policy (Moore 2011), management (Roberts 2012), and international politics (Pieterse 2012; Springer et al. 2012), among others. The same is true for the literature that was inspired by, or tied to, the leaks of Edward Snowden.
Again, there is literature about the general revelations (Verble 2014), the technical mechanisms of surveillance (Landau 2013), the societal implications of such surveillance, resultant changes in government policy (Margulies 2014), and the security landscape (Van Hoboken 2013), as well as a more traditional literature situating such programs in the critical surveillance studies literature (Bakir 2015). The literature on national security surveillance is fundamentally a literature of public policy. As such, it develops with similarities to traditional policy literatures. For example, it often develops along national lines – i.e., with focuses on particular countries’ surveillance policies. One of the most extensive examples of a national surveillance literature comes, perhaps ironically, from Canada – a country with a strong reputation for liberal values and a fairly limited reputation for militarism. This literature begins with a similar accounting of which government agencies and institutions pursue or enable surveillance (Kinsman et al. 2000; Whitaker et al. 2012; Kealey 2017). However, there are few areas of society where the lens of surveillance hasn’t been applied. From particular surveillance implementations, like cameras monitoring streets (Walby 2005; Hier and Walby 2011), to the ways in which surveillance agencies shape Canadian political discourse (Kinsman et al. 2000), to how certain political activities are surveilled (Walby and Monaghan 2011), to particular locales like airports (Lyon 2006), the literature of Canadian security surveillance is broad and deep. Many other countries have similar literatures. Like so many public policy literatures, there does seem to be a bias toward developed
Western countries in the surveillance literature, and so North American and European countries (and a few others) appear overrepresented in this literature. Also, like other policy literatures, the conversation around security surveillance develops along disciplinary lines. For example, there is an extensive literature on police surveillance, much of it in criminal justice. While this literature sometimes overlaps with the literatures on surveillance by militaries (Andreas and Price 2001) and intelligence services, they speak to each other less than might be imagined. This literature too dates back to at least the 1970s (Cowan et al. 1974). Over the decades this literature grew (Marx 1989; Fijnaut and Marx 1995), internationalized (Zureik and Salter 2013), and became more comparative (Fijnaut and Marx 1995). It changed along with the broader surveillance literature after 2001 and the subsequent rise of the homeland security paradigm in the United States (Lyon 2003d; Bloss 2007; Monahan 2010). Finally, like other public policy areas – particularly controversial ones – there is a broad literature about legislation, legal interpretation, and court battles that have shaped the surveillance landscape. In the US context, this ranges from the implications for surveillance of founding documents like the Constitution (and constitutional law) (Dowley 2002), as well as particular legislation (Kerr 2002) or legal concepts (Kerr 2014), to broader legal analyses of surveillance and national security (Baker 2007; Balkin 2008). In almost all of these regards, the literature on national security surveillance has parallels to other public policy literatures. There are some differences and variations derived from the nature of secrecy in the field and debates over state power, but those aren’t dissimilar from other areas of public policy concerned with national security.

The Surveillance Studies Assemblage: Bringing Critical, Market, and Security Surveillance Together

The literatures in all three of these versions of surveillance – critical, market, and security – overlap in important ways. For example, scholars of security surveillance often extrapolate from existing technologies in consumer surveillance, or alternatively look at technologies that were largely government monopolies and how they might privatize and proliferate (Pell and Soghoian 2014). Scholars of critical surveillance studies often draw on historical episodes of security surveillance and combine them with the current state of the art in consumer or web surveillance to explore potential surveillance futures that look even bleaker than the present. Other scholars look at how various areas of surveillance – like consumer and security – might be converging in particular places and times, like airport environments in the post-9/11 world (Lyon 2003c). Arguably, the critical surveillance studies lens plays an outsize role – when compared with the other two – because of how intellectually fertile it has been. For example, the critical lens often provides intellectual framing for scholars working in the security or consumer areas of surveillance. Examples include studying consumer surveillance with a “Marxist critique” (Lauer 2008) or using critical metaphors like the “panopticon” to conceive of the role of surveillance
participants like advertising agencies (Hackley 2002). Critical surveillance narratives have also provided important framing for activist organizations in the surveillance and privacy space, enabling those organizations to knit their anti-surveillance efforts in both the consumer protection and national security realms into a coherent whole. Thus, all three areas have remained diverse, independent literatures and approaches, but each has also shaped scholarship in the other two. Even with that interdependence, there is a case to be made that the critical lens – with its focus on power dynamics – has shaped large portions of the literature in the other two, more “applied,” areas of surveillance studies.

Surveillance Artifacts as the Subject of Crime

The artifacts of modern surveillance, particularly online surveillance but also surveillance from Internet-connected devices like mobile phones and wearables, are incredibly valuable. The giant pools, lakes, and oceans of data collected from these activities, often without the surveillance target’s awareness, are so valuable that whole industries of aggregators and resellers of such surveillance information have emerged (FTC 2014). Thus, many of the most serious recent data breaches fundamentally involve the theft of valuable surveillance data. Some of this data, like credit card numbers stolen from a large retailer, is not especially indicative of behavior per se, and thus is typically stolen not for its surveillance value but for the instrumental purposes of resale or financial fraud. However, many of the prominent recent breaches, like the Equifax breach (Berghel 2017), exposed a much broader set of information about the individuals involved. The data stolen from Equifax was collected by the company, without the consent of those surveilled, with the intent of assessing the financial risk of extending lines of credit to these people. Loyalty programs at stores and other companies are fundamentally surveillance programs, in which consumer behavior is tracked and analyzed to improve sales, targeting, customer satisfaction, and other indices that companies want to maximize (Norton 2019). The data collected goes far beyond credit card or payment data and includes product choices, buying habits, location data, and various other incredibly in-depth data about loyalty club members. In some cases, like casino loyalty programs, the data collected is so comprehensive that calling it real-time surveillance of individuals would not be inaccurate. This data is incredibly valuable to hackers (Gerstner 2018).
Recent breaches that targeted the data from such surveillance programs include the theft of data from loyalty programs at Dunkin Donuts (Taylor 2018), Radisson Hotels (Schwartz 2018), and Panera Bread (Barrabi 2018). Perhaps the most indicative example of this kind of loyalty program breach as theft of surveillance artifacts is the recent breach of the Marriott Hotel chain’s Starwood loyalty program. According to the Federal Trade Commission, “. . .the hackers accessed people’s names, addresses, phone numbers, email addresses, passport numbers, dates of birth, gender, Starwood loyalty program account information, and reservation information” (Gressin 2018). That is not merely payment
information but actual behavioral information about members of this program. In fact, it is not clear whether the hackers actually got the payment information. “Marriott says the payment card numbers were encrypted, but it does not yet know if the hackers also stole the information needed to decrypt them” (Gressin 2018). In September 2018, Marriott brought in a security vendor to investigate whether it had suffered a breach. This was after one of its network monitoring systems – conducting surveillance of behavior on its network – noted an unusual command asking how many rows were in a particular database (Cimpanu 2019). The software flagged this request because it is indicative of behavior by human users rather than computers (Cimpanu 2019). The vendor then found a live surveillance mechanism in the network: a remote access trojan, or RAT (O’Flaherty 2019). It wasn’t until November that the investigation uncovered that the breach actually dated back to July 2014 (O’Flaherty 2019), meaning that anyone who had stayed at Starwood loyalty program hotels – including “W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Le Méridien Hotels & Resorts, and other hotel and timeshare properties” (Gressin 2018) – had potentially had their data exposed. After a forensic investigation, the company discovered that several compressed and encrypted files had been deleted from the network (presumably after having been exfiltrated) and that these files included passport information and guest details (O’Flaherty 2019). According to a statement from a company executive, the data stolen included “383 million guest records, 18.5 million encrypted passport numbers, 5.25 million unencrypted passport numbers (663,000 from the USA), 9.1 million encrypted payment card numbers, and 385,000 card numbers that were still valid at the time of the breach” (Cimpanu 2019).
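The anomaly that tipped off Marriott’s monitoring software – a row-count query of a kind application code would never issue – can be illustrated with a hedged sketch. The patterns and function below are invented for illustration; real database activity monitoring products use far richer behavioral baselines than a handful of regular expressions.

```python
import re

# Hypothetical sketch: application code issues a known, parameterized set
# of queries, so ad-hoc reconnaissance statements (e.g., asking how many
# rows a table holds) stand out as likely human, not machine, behavior.
RECON_PATTERNS = [
    re.compile(r"select\s+count\(\*\)\s+from", re.IGNORECASE),  # row-count probe
    re.compile(r"show\s+tables", re.IGNORECASE),                # schema discovery
    re.compile(r"information_schema", re.IGNORECASE),           # schema discovery
]

def looks_like_human_recon(query: str) -> bool:
    """Flag queries that resemble interactive reconnaissance."""
    return any(p.search(query) for p in RECON_PATTERNS)

alerts = [q for q in [
    "SELECT name, tier FROM loyalty_members WHERE id = ?",  # normal app query
    "SELECT COUNT(*) FROM guest_reservations",              # row-count probe
] if looks_like_human_recon(q)]
```

The design point is the one the chapter makes: the defender’s surveillance is itself behavioral, looking for actions that fit a human intruder rather than the application.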
The scale and scope of the data stolen from Marriott is truly staggering, but the case matters because surveillance data like this is actually more broadly useful to hackers than long lists of credit cards. The data “is so rich and specific it could be used for espionage, identity theft, reputation attacks and even home burglaries, security experts say” (AP 2018). That is, behavioral data is not single-use the way some payment information is; because it contains rich and detailed behavioral insights, it can be mined for the same kinds of insights that the hotel industry wants to get out of it. Who works in the same industries and attends the same conferences? What cities do employees of which companies travel to, how often, and for what reasons? Is company X decreasing travel to country Y? Colin Bastable of Lucy Security says hotels are “important government sources of local information for tracking foreigners: reservation systems and loyalty programs took the surveillance global. . .” (AP 2018). Bastable went on to say that intelligence agencies often target the hospitality and hotel industries “by fair means or foul” (AP 2018) and that it is not surprising that criminals might follow suit if they have access to the right attack tools. The Starwood loyalty program is not the only hotel, hospitality, or other loyalty program to be targeted; what makes it important is that it is an illustrative example of the value of the data collected in consumer surveillance activities, and of how that data can be the subject of criminal attack.

It is also worth noting that this breach of the Starwood loyalty program was almost an M.C. Escher-worthy loop of reflexive surveillance: surveillance data (customer behavior data) was stolen by hackers using a surveillance mechanism (a remote access trojan, or RAT), and the hackers were themselves discovered by software conducting surveillance (network monitoring) looking for suspicious behavior reminiscent of hackers – surveillance of consumers, surveillance of victims, surveillance of perpetrators. Bruce Schneier says “surveillance is the business model of the internet” (Pix and Schneier 2017), and so it is unsurprising that surveillance of online behavior is omnipresent and creates incredibly valuable data. The Cambridge Analytica controversy around the 2016 election was largely based on the political value of Facebook’s surveillance data on its users. A subsequent Facebook breach also exposed information of millions of users (St. John 2019). Internet surveillance also scales even more quickly than traditional surveillance data. For example, a recent breach at Exactis – a company many have never heard of – may have exposed 340 million people’s personal data, roughly the population of the United States (Al-Heeti 2018). Exactis, a company that helps target ads online, is indicative of the challenge of online surveillance: it scales so quickly that a company very few people have heard of can have a breach of records “with roughly 230 million on consumers and 110 million on business contacts” (Al-Heeti 2018). When the data collected by companies and governments has geographic information embedded in it in addition to behavioral information, it often becomes even more valuable (for targeting ads, for understanding behavior, etc.). Two examples of the power of location data – both problematic, but neither stolen per se – come in the form of the running app Strava and the mobile phone location data that is sold and resold by telecommunication providers.
Strava enabled runners to share their routes on a collective world map, drawing GPS locations from their wearables to map the routes. This was very valuable if you were new to town and wanted to find a good three-mile running loop. However, because the data was collected and disseminated in ways that weren’t always clear to users – or that users did not manage responsibly – a small scandal ensued when Strava running routes appeared to show the locations and outlines of secret military and intelligence bases around the world (Hern 2018; Newman 2018), a result of the GPS data collected and displayed by the wearables of diligent soldiers and intelligence officers pursuing their physical training and conditioning. Another scandal emerged recently when it became clear that a community of bounty hunters, skip tracers, and various other individuals were buying access to the mobile phone location data of numerous American citizens, in ways that appear either to have been illegal or to have escaped the understanding of lawmakers and regulators (Cox 2019; Hollister 2019; Feldman 2019). Again, adding location data to behavioral data is an important added value in many contexts. This geolocated data is incredibly powerful, and that makes it incredibly valuable. However, when surveillance data becomes more valuable in the traditional market, it usually becomes more valuable on the black market as well.
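The mechanism behind the Strava incident is simple aggregation: individual GPS points, harmless on their own, are binned into grid cells, and dense cells in otherwise empty terrain outline a facility. A minimal sketch with invented coordinates; the grid size and density threshold are arbitrary illustrations, not Strava’s actual heatmap parameters:

```python
from collections import Counter

def heatmap(points, cell=0.01):
    """Bin (lat, lon) points into grid cells roughly 1 km across."""
    return Counter((round(lat / cell), round(lon / cell)) for lat, lon in points)

# fifty invented workout trace points clustered in a remote area
points = [(34.1234 + i * 0.0001, 45.5678) for i in range(50)]
hot = heatmap(points)

# cells hit by many points stand out against their empty surroundings,
# which is how running loops trace the outline of a compound
dense_cells = [c for c, n in hot.items() if n >= 25]
```

Nothing in any single record identifies a base; the sensitivity emerges only in aggregate, which is why the risk was hard for individual users to anticipate.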

Again, the data that can be stolen or accessed illegally is not collected only by industry or technology companies. For example, the city of Boston has a history of challenges controlling the data collected by its automatic license plate reader (ALPR) devices. These devices – static, mobile, or mounted on squad cars – digitally read license plates at a distance and record the plates and their locations. This technology can be valuable for finding stolen cars, tracking down serious criminals, and otherwise locating important people or vehicles. However, it also creates masses of highly personal and potentially very sensitive data – it is easy to imagine license plate data exposing an affair, a medical condition (based on doctor’s office visits), or other very personal behavior. Presumably, important geolocated data like this would be treated with the intense security and confidentiality that the citizens of Boston could reasonably expect. Yet in 2013, the Boston Police Department accidentally turned over a large trove of ALPR data to the Boston Globe (Musgrave 2013). In 2015, a reporter in Boston contacted the city to tell them that with some simple online sleuthing – entering unique search terms in Google – he was able to track down an Internet-facing server filled with Boston ALPR data “including hundreds of thousands of motor vehicle records dating back to 2012” (Lipp 2015). Again, this is surveillance data as the subject of illegal or illicit access.
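Why raw ALPR logs are so sensitive can be shown in a few lines: even a bare list of (plate, location) sightings supports exactly the behavioral inference described here. The records and helper below are invented for illustration:

```python
from collections import Counter

# invented sightings: each entry is (plate, location) from an ALPR camera
sightings = [
    ("ABC123", "oncology_clinic"),
    ("ABC123", "oncology_clinic"),
    ("ABC123", "grocery_store"),
    ("XYZ789", "grocery_store"),
]

def visit_profile(plate, records):
    """Count how often a given plate was seen at each location."""
    return Counter(loc for p, loc in records if p == plate)

# repeated clinic sightings support precisely the medical inference
# about a driver that the chapter warns about
profile = visit_profile("ABC123", sightings)
```

The point is that no analysis more sophisticated than counting is needed; anyone who obtains the raw log, legally or not, gets the inferences for free.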

Illicit Surveillance: Surveillance as a Mode of Cyber Crime

While most surveillance – that conducted by states and market participants – is legal, there are illicit versions as well. Surveillance is a mode of crime when it is the result of illegal activity (violations of laws about trespassing, wiretapping, etc.) or used for illegal purposes (such as extortion or harassment). Many types of cybercrime constitute illegal surveillance. Whether monitoring someone’s account or computing behavior using malware, or recording through their webcam or microphone without permission, the surveillance activity can often constitute a crime. From “sextortion” based on stolen pictures or illicit surveillance, to illicit wiretapping, to cyberespionage, surveillance can also be a mode of cybercrime. Sextortion involves sexual extortion, blackmail, or exploitation, in which there is coercion through the threat of releasing sexually explicit images or information (De la Cerna 2012); criminals use fake identities to befriend victims online – on Facebook, Skype, or LinkedIn – before persuading them to perform sexual acts in front of their webcam (Mulin 2018), or coercing them to do so (Wittes 2016). In terms of motivation, sextortion is blackmail aimed at extorting sexual favors, money, or other benefits (ICMEC 2018, 1). Consequently, sextortion is sometimes also called “webcam blackmail” (Mulin 2018) and “remote sexual violence” (Wittes et al. 2016b, 2), and its main modalities include social media manipulation, computer hacking, account hacking, and webcam hacking (Wittes 2016). Recently, sextortionists have been working as organized gangs, tricking people into stripping off or performing sex acts online and then using the footage as blackmail (Mulin 2018). The term “sextortion” was coined by the United Kingdom’s National Police Chiefs’ Council in the course of its investigation of the emerging crime.

Sextortion is an international crime and often involves coercion and sexual exploitation of children (Wittes 2016). Sextortionists actively looking to take over victims’ computers are said to be “slaving” them (DCA 2015). Many sextortionists prefer to activate victims’ computer webcams or “digital peepholes” (Andrews et al. 2015), stealthily record them and their private spaces, such as the bedroom, and use the recordings for sextortion. This remote activation of webcams has ethical and legal implications but is a practice common not only among individual “RATers” or sextortionists (e.g., for extortion, information theft, and voyeurism) but also among governments (e.g., espionage) and private companies (e.g., information theft) (Rezaeirad et al. 2018; Farinholt et al. 2017; Andrews et al. 2015). Other sextortionists socially engineer victims under the pretext of helping to reset their passwords or verifying that they are persons and not bots, take over their email boxes, and set up automatic mail forwarding to other email addresses controlled by the perpetrator, who then has access to a treasure trove of the victims’ personal information, which is then used to sextort them (Kang 2018, 3–5). Sextortionists are also known to work through promised reciprocation, as in “You show me yours, and I’ll show you mine,” after bonding with the child through flattery and praise – often while pretending to be younger or female (Clark 2016, 43). Cases recorded so far show that women and girls are disproportionately impacted by cybersextortion (Villa 2016, 4; ICMEC, 2–3). On the whole, sextortion can be perpetrated through email-based scams – using stolen personally identifiable information, “watering holes,” or pornography links as bait – and through more targeted, personalized social engineering that uses ransomware to search for nude pictures and videos on the victim’s computer and uses the results to sextort the victim (Glassberg 2018).
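The scam variants of these email-based campaigns tend to share mechanical tells – a leaked password quoted in the subject line, a claimed webcam recording, and a cryptocurrency ransom demand on a short deadline. A hedged sketch of an indicator count follows; the patterns and scoring are illustrative, not a deployed mail filter:

```python
import re

def sextortion_score(subject: str, body: str) -> int:
    """Count indicators typical of the password-reuse sextortion scam."""
    indicators = [
        re.search(r"password", subject, re.I),                 # leaked credential up front
        re.search(r"bitcoin|btc|cryptocurrency", body, re.I),  # crypto ransom demand
        re.search(r"webcam|camera", body, re.I),               # claimed webcam recording
        re.search(r"24\s*hours|one day", body, re.I),          # short deadline
    ]
    return sum(1 for hit in indicators if hit)

# invented messages for illustration
scam = sextortion_score(
    "Your password is hunter2",
    "I recorded you through your webcam. Send $500 in Bitcoin within 24 hours.",
)
legit = sextortion_score("Team lunch Friday", "See you at noon.")
```

Because the scam emails are mass-produced from breached credential lists, such surface-level tells are common; targeted sextortion of the kind described earlier is far harder to detect mechanically.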
At the moment, there are no known anti-sextortion laws, though there are strident calls for such laws at the federal level (Wittes 2018; Wittes et al. 2016b). Sextortion has taken a new twist in recent years: a rising number of sextortion cases are now being perpetrated by scammers pretending to have videos of victims watching pornography (Murdock 2018; Glassberg 2018). Typically, the blackmail email has the victim’s real password in the subject line – probably stolen in a previous attack – and ends with a demand for ransom payment in cryptocurrency within one day (Murdock 2018). Surveillance as a mode of crime is also exemplified in activities involving the use of Remote Access Tools – specifically, Remote Access Trojans (RATs). A RAT is malicious code, a trojan, usually disguised as legitimate documents, photographs, videos, or songs to trick victims into downloading the malware onto their electronic devices. RATs are dangerous because they can be easily acquired, used, or disseminated by any user with basic technology skills. The Digital Citizens Alliance (DCA 2017) identifies four RAT uses or categories in the current literature: (1) Legitimate applications which are produced by known vendors but used for malicious ends; (2) Applications written by hackers that can be easily distributed and used by script kiddies or wannabe hackers for stealthy surveillance of victims; (3) Applications deliberately written as criminal tools by sophisticated
criminal organizations; and (4) applications written by nation-states – the most complex, secretive, and stealthy. Some "Ratters" – those who use RATs – use them for sextortion, others for harassment, others for personal gain, and still others for espionage. Some Ratters are known to use YouTube and other content sites as tools to spread their malware and as locales to celebrate their exploits or release the content stolen from others (DCA 2017, 1; 2015). There are thus many variations on the theme, and the problem is not a static one. One genre of RATing that has gained prominence in recent times is state espionage, "an epidemic of targeted malware" (Deibert et al. 2014, 'Executive Summary') that is increasingly used to target journalists, human rights advocates, activists, and others (Marquis-Boire and Hardy 2012). For instance, in 2012, CNN reported that "Computer spyware is the newest weapon in the Syrian conflict" (Brumfield 2012). In what many have termed "cyberespionage," supporters of Syria's regime used computer viruses to spy on opposition groups and funnel the harvested information to a server at a government-owned telecommunications company in Syria (Brumfield 2012). Similarly, at the height of the Bahrain crisis in 2012, Bahraini activists were targeted with emails enticing them to open a series of malicious attachments (Marquis-Boire 2012). This cyberespionage has dire implications for ethics, freedom of speech, human rights, and international law and relations, among other dimensions. It is obviously not illegal for many governments to conduct surveillance of activists in their own country, or even to delegate such authority to non-state actors like nationalist hacktivists. However, when – as Citizen Lab has documented (Deibert et al.
2014; Marquis-Boire 2012; Marquis-Boire and Hardy 2012) – such regimes or groups monitor activists in this way in other countries, they are often engaging in illegal surveillance under the second country's laws. So, in addition to potentially breaking espionage laws in those countries, they are often breaking laws on surveillance, computer crime, wiretapping, or electronic surveillance. This is classic illicit or illegal surveillance. There is an additional literature – beyond simple criminal RAT-related crimes – examining more sophisticated and complex malware used by states to conduct surveillance of both Internet usage and the behavior and activities of individuals. This malware ranges from commercial off-the-shelf (COTS) malware purchased by states from companies specializing in such surveillance tools (Marczak et al. 2018a) to hyper-complex malware presumably developed by intelligence services for long-term surveillance campaigns (Bencsáth et al. 2012). These sophisticated programs – Duqu, Flame, and Gauss, among others – have been termed "cousins of Stuxnet" (Bencsáth et al. 2012), a reference to the sophisticated cyberweapon that targeted the Iranian nuclear centrifuge program. Surveillance malware campaigns have targeted political dissidents in places like Tibet (Alexander et al. 2018) and Saudi Arabia (Marczak et al. 2018b), members of Amnesty International (Marczak et al. 2018b), journalists in North America and Europe (Deibert 2017), enemies of the regime (Galperin and Marquis-Boire 2012) – and of regime enemies like ISIS (Scott-Railton et al. 2014) – in Syria, and even anti-obesity campaigners in Mexico (Perlroth 2017). The impact of such surveillance targeting civil society on the Internet is broad and very serious (Galperin 2016).

8

Surveillance, Surveillance Studies, and Cyber Criminality

169

Surveillance as Crime Control Measure

Public-area surveillance belongs to the category of situational crime prevention and differs from other strategies in its crime-specific focus and its attention to the setting or place in which crime occurs (Welsh and Farrington 2009a). Such public spaces include public parks, pedestrianized streets in city centers, outdoor public parking areas, residential neighborhood streets, public transport interchanges, and areas outside public facilities such as sports arenas and subway stations. Surveillance as a crime control measure can serve several purposes: (i) crime prevention and deterrence, (ii) as an investigative tool, (iii) in emergency response situations, (iv) as an eyewitness in investigations and prosecutions, and (v) as a virtual guard or security system (La Vigne et al. 2011a, xi). Welsh and Farrington (2009a) argue that criminal opportunities and risks are influenced by environmental conditions in interaction with resident and offender characteristics. Changes in environmental design can in fact create physical barriers to criminal activity, as is often described in crime prevention through environmental design (CPTED). Furthermore, though public surveillance measures like street lighting, closed-circuit television (CCTV) cameras, and physical design changes to buildings and parks do not always constitute a physical barrier to crime, they can act as a catalyst for crime reduction by changing the perceptions, attitudes, and behavior of residents and potential offenders. On the whole, proponents of surveillance as a crime control measure point to rational choice theory, which holds that potential offenders make purposeful, rational decisions to commit crimes after weighing the potential costs and benefits of the crime in question (Cornish and Clarke 2003; La Vigne et al. 2011b, 4).
Proponents – such as rational choice theorists – therefore believe that if potential offenders know they are being watched, they will refrain from criminal activity (La Vigne et al. 2011b, 4). Specifically, in the context of surveillance as a crime control measure, increasing effectiveness would demand increasing the risk of being apprehended (Ratcliffe 2006). Many of the same dynamics that enable surveillance as a crime control model in the physical world are common in the world of cyber and cybercrime as well. Neal Katyal has laid out many cases in which design and architecture (the built world) can serve to limit or control crime (Katyal 2001a), describing both specific cases and general principles like "natural surveillance." Katyal later argued that digital design and architecture – what Lawrence Lessig calls "code" in his discussions of regulation (Lessig 2009) – can also be used for cybercrime control when both professionals and program users can view and edit the code (Katyal 2003). Other principles of design and architecture that Katyal describes also equate to what would be considered "surveillance" online, like "territoriality" described in terms of the logging of Internet Protocol addresses. Who is in a position to do such surveillance – the logging and monitoring? (That is, in Katyal's terms, whose territory is that data available on?) Often not law enforcement, who might do so in physical
space, but rather private actors. In fact, much of the surveillance done to prevent cybercrime is carried out by private companies and actors (Katyal 2001b). Katyal calls out Internet Service Providers (ISPs), credit card companies, and hardware and software makers as third parties key to "scanning, coding and norm enforcement" (Katyal 2001b). Indeed, much of the decline in key types of cyber bad behavior – like spam (Wall 2004; Krebs 2014) – and cybercrime – like malware delivery (Canali et al. 2013) – is driven not by legal authorities but by surveillance by telecommunications and Internet service providers, who are in a position to surveil Internet traffic and block access to malicious actors. Sometimes private sector corporations are deputized, requested, or coerced to do these same activities on behalf of law enforcement, as was common in the takedown of large-scale botnets like the GameOverZeus botnet (Microsoft 2014; Zeitlin 2015). David Wall explicitly frames the "governance of spam" and the control of spam email in exactly the terms of Katyal and situational crime prevention (Wall 2004). Wall has written extensively on cybercrime (2007) and the challenges of policing it (2013), including the challenges that police and law enforcement face in surveilling cybercrime when compared to traditional crime. While private firms are often in a position to surveil and monitor users and Internet traffic, law enforcement can only do so in very specific circumstances. In fact, even when law enforcement can conduct digital surveillance, a number of products and technologies make surveillance challenging. Denning and Baugh (1999) present numerous technologies – from encryption to remote storage to digital compression to steganography – that make the observation and surveillance of digital activities more challenging.
They also cited early versions of online currency – a space that has since exploded with cryptocurrencies – and online services like anonymous remailers as technologies that gave criminals (and non-criminals) the ability to hamper surveillance of criminal activity online (Denning and Baugh 1999). So, while there is often a sense that online surveillance by law enforcement is rampant (Huey and Rosenberg 2004), the reality is more complicated (especially when police and law enforcement are disentangled from other intelligence and security agencies, particularly signals intelligence or SIGINT agencies). There has been rapid growth in the use of "cyber threat intelligence" (Shackleford 2015) and various attempts at information sharing (Brown et al. 2015) to help organizations learn what to look for on their systems. A whole world of organizations and subunits – most notably Security Operations Centers, or SOCs (Sundaramurthy et al. 2014; Onwubiko 2015; Miloslavskaya 2016) – has grown up around organizations attempting to monitor and surveil their own networks in order both to prevent and to discover cybercrime and malicious cyber activity on their own infrastructure and networks. This includes a mix of surveillance of users (people) and surveillance of systems (networks), done by both humans (analysts and security professionals) and machines (hardware and software), and so this version of surveillance for the control of cybercrime blends into a separate, new, and rapidly expanding role of surveillance.
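The kind of internal network surveillance that SOC analysts perform – aggregating logs and watching for malicious user behavior – can be illustrated with a toy sketch. This is a hypothetical example, not any real SIEM product: the log schema, field names, and the failure threshold are all invented for illustration.

```python
from collections import Counter

# Toy SOC-style monitoring sketch: aggregate authentication events and
# flag accounts with an unusual number of failed logins. The event schema
# and threshold below are illustrative assumptions, not a real SIEM's.
FAILED_LOGIN_THRESHOLD = 5

def flag_suspicious_accounts(events, threshold=FAILED_LOGIN_THRESHOLD):
    """events: iterable of dicts like {'user': 'alice', 'outcome': 'failure'}."""
    failures = Counter(e["user"] for e in events if e["outcome"] == "failure")
    return sorted(user for user, n in failures.items() if n >= threshold)

# Example: one account shows repeated failures and is flagged for review.
log = ([{"user": "alice", "outcome": "failure"}] * 6
       + [{"user": "bob", "outcome": "success"}] * 3)
print(flag_suspicious_accounts(log))  # ['alice']
```

In a real SOC the aggregation would span many log sources (hosts, firewalls, applications), but the basic pattern – collect, correlate, threshold, alert – is the same surveillance of users and systems described above.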


Surveillance as Mechanism for Monitoring Technological Systems

This is obviously the newest and least fully developed of the connections between surveillance and cybercrime. It begins in the 1980s with the early versions of intrusion detection systems (IDS) and other network monitoring technologies. The earliest of these are described by Cliff Stoll in "Stalking the Wily Hacker" (Stoll 1988) and in more depth in The Cuckoo's Egg (Stoll 2005), in which he cobbles together a primitive intrusion detection system from printers, telephones, and pagers to monitor his network for the presence of a hacker; they are described more technically in the same era by Dorothy Denning in her classic piece "An Intrusion-Detection Model" (Denning 1987) and other works (Denning and Neumann 1985). While most IDS and network monitoring systems (including products like Data Loss Prevention) used a mix of signatures and/or behavioral indicators, what they had in common was that they monitored networks for known indicators of compromise – including the presence of known malicious files or code, certain types of behavior by users or network devices, and other already identified threat indicators. It is debatable how much the use of signatures would qualify as surveillance per se, but the monitoring of network users and devices for behavioral indicators of malicious activity would certainly seem to qualify. In fact, more recently, versions of network surveillance – like Network Security Monitoring (NSM) – have come to be seen as a more necessary and regular part of cybersecurity (Bejtlich 2004). Two things have changed since these early versions to make reliance on simple signature recognition less important or sufficient for detecting malicious network behavior.
The first is the adversarial use of various technologies – including polymorphic malware (You and Yim 2010) and fileless intrusions (Mansfield-Devine 2017) – that render signature recognition less effective, leaving behavioral recognition as the only remaining option. The second is the rapid expansion of networks and network complexity, including and especially Internet of Things (IoT) devices, which will need to be surveilled and monitored (Gandhi et al. 2018). These two changes mean that, increasingly, the only way in which today's and tomorrow's ever more complex networks are likely to be secured is through mass surveillance of network behavior (again, both user and device behavior), which is why Security Operations Centers (SOCs) for large organizations increasingly use Security Information and Event Management (SIEM) software to aggregate logs and attempt to get an operating picture of their network and the behaviors of its components (Madani et al. 2011). Again, this is the forefront of the connection between surveillance and cybercrime and cyber safety, but it is a component rapidly growing in importance. This move to tracking and surveilling devices is decidedly not only about computer networks and the IoT. Increasingly, surveillance will focus both on individuals (users) and on devices – on networks, but in the real world as well. While retailers and stores will track individuals through the aisles of their stores, they may do so using video (which tracks the actual shoppers), Bluetooth (which tracks their mobile devices), or an app (which again tracks the shopper's device rather than the person per se). That is, as devices become important components of people's
lives, residing with them and collecting data about them, they will become ever more important for surveillance. The US Supreme Court has suggested as much through rulings in cases like Riley v. California and Carpenter v. United States (Kerr 2018), both of which involved data collected on or about mobile phones that constitutes surveillance data about the individuals using those phones.
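The signature-versus-behavioral distinction discussed in this section can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration: the signature string, the connection-rate threshold, and the alert labels are invented, not drawn from any real IDS ruleset.

```python
# Minimal illustration of the two detection styles: matching known
# signatures versus flagging anomalous behavior. All values here are
# invented for the example.
KNOWN_BAD_SIGNATURES = {"EVIL_PAYLOAD_MARKER"}   # signature-based detection
CONNECTIONS_PER_MINUTE_LIMIT = 100               # behavioral detection

def signature_match(payload: str) -> bool:
    return any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

def behavioral_match(connections_per_minute: int) -> bool:
    return connections_per_minute > CONNECTIONS_PER_MINUTE_LIMIT

def inspect(payload: str, connections_per_minute: int) -> list:
    alerts = []
    if signature_match(payload):
        alerts.append("known-signature")
    if behavioral_match(connections_per_minute):
        alerts.append("anomalous-behavior")
    return alerts

# Polymorphic malware can mutate past the signature check but still trip
# the behavioral one:
print(inspect("mutated_payload_xyz", 500))  # ['anomalous-behavior']
```

The example makes the chapter's point concrete: once payloads mutate (polymorphism) or never touch disk (fileless intrusions), only the behavioral channel still fires, which is why modern monitoring leans on behavior rather than signatures alone.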

Concluding Thoughts

Ultimately, it is almost impossible to disentangle cybercrime from surveillance. Increasingly, cybercriminals will seek out the results of surveillance in order to steal them and will engage in illicit or illegal surveillance for many reasons; however, they are also likely to be caught by surveillance (both the online and offline versions of it) and to have their tools and maneuvers on networks recognized and flagged for security and law enforcement by technological surveillance. Even when criminals – cybercriminals or those who would merely use cyber means to enable traditional crime – engage in operational security measures and take steps or use tools to obscure their behavior, the increasing ubiquity of surveillance technologies and tools will foil some of their exploits. A fascinating example came in 2013, when a traditional crime – a bomb threat hoax – appeared to have been enabled by online technologies designed to defeat surveillance. A student at Harvard University attempted to use several technologies to disguise himself – including The Onion Router (Tor) and an anonymous email service called Guerrilla Mail – when he issued a bomb threat to his campus in an attempt to "disrupt final exams" (Brandom 2013). When authorities traced back the emails, it became clear that the user had accessed the email service through Tor, and thus would normally have been tough to track. However, law enforcement assumed that a Harvard student might have been involved in the incident, and thus accessed campus network logs to see whether anyone on campus had accessed Tor around the time the emails were sent; this quickly led them to the perpetrator, who promptly confessed to the crime (Brandom 2013). This case – because it is a traditional crime and not a particularly serious or deadly one – may seem like an odd one to hold up as an archetype for the impact of surveillance on crime in general and cybercrime in particular.
However, this cyber-enabled traditional crime is indicative of a major element of crime in the information age. Digital systems (and more and more systems are digital systems) are incredibly good at collecting surveillance data and at storing the results of those collections at lower and lower cost. Computer and network logs are an obvious version of this data, but many other kinds of data – from mobile phones, video cameras, websites, and more – are now stored digitally as well, and these large stores of data enable something that was key to cracking the Harvard bomb scare case: surveillance back in time. That is, the data is collected and stored without necessarily being examined at the time, but can later be examined and correlated to find out who accessed Tor (or crossed street X, or paid with a credit card at hotel Y), when it becomes relevant.
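This retrospective correlation – query stored records long after collection, by attribute and time window – can be sketched in a few lines. The log schema, field names, and sample records below are invented for illustration; they are not the actual Harvard network logs.

```python
from datetime import datetime

# "Surveillance backwards in time": filter already-stored records by
# action and time window, long after they were collected. The schema and
# sample data are hypothetical.
def who_did(records, action, start, end):
    return sorted({r["user"] for r in records
                   if r["action"] == action and start <= r["time"] <= end})

logs = [
    {"user": "student_a", "action": "tor_access",
     "time": datetime(2013, 12, 16, 8, 30)},
    {"user": "student_b", "action": "web_browse",
     "time": datetime(2013, 12, 16, 8, 31)},
]
print(who_did(logs, "tor_access",
              datetime(2013, 12, 16, 8, 0),
              datetime(2013, 12, 16, 9, 0)))  # ['student_a']
```

The point is that the query is written after the fact: nothing in the collection step anticipated this particular question, which is exactly what distinguishes retrospective mass surveillance from "follow that car" targeting.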


Bruce Schneier describes this phenomenon in this way:

Ubiquitous surveillance is fundamentally different... Ubiquitous surveillance isn't "follow that car," which we've all seen on cop shows. It's "follow every car," and when you can follow every car you can do different things. You can do surveillance backwards in time. Tell me where that car was last month. (Schneier undated)

This is the context in which the relationship between cybercrime and surveillance must be understood. Increasingly, the same technology that makes cybercrime possible and appealing to criminal actors will make cybercrime (and many traditional crimes) less effective and easier to get caught for. There will be, moving forward, an arms race between cybercriminals and cyber authorities: both those working on the prevention, investigation, and mitigation side in the private sector, and those working on the criminal investigative and prosecutorial side, who are largely (though not entirely) in the public sector. The criminals will continue innovating on their operational security and attack tools, both of which will attempt to evade surveillance measures of all stripes, while the technology itself will increasingly be shaped in an attempt to shrink the space in which bad actors and their tools (from RATs to banking trojans) can operate un-surveilled. And thus the paradox of the relationship between surveillance and cybercrime: at the same time that some surveillance, and its byproducts, make cybercrime more valuable, other surveillance, and its byproducts, will make cybercrime (and traditional crime too) harder to get away with and less likely to be successful. Surveillance, like each of us – according to Whitman – contains multitudes.

References

Albrecht, K. (2001). Supermarket cards: The tip of the retail surveillance iceberg. Denver University Law Review, 79, 534. Albrechtslund, A. (2008). Online social networking as participatory surveillance. http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2142/1949 Alexander, G., Brooks, M., Crete-Nishihata, M., Maynier, E., Scott-Railton, T., & Deibert, R. (2018). Familiar feeling: A malware campaign targeting the Tibetan diaspora resurfaces. Citizen Lab Research Report No. 111, University of Toronto, August 2018. Alhawari, S. (2015). An empirical study on customer retention and customer loyalty. International Journal of Information Systems and Change Management, 7(3), 183–202. Al-Heeti, A. (2018, June 28). Exactis said to have exposed 340 million records, more than Equifax breach. CNET. https://www.cnet.com/news/exactis-340-million-people-may-have-been-exposedin-bigger-breach-than-equifax/ Allmer, T. (2011). Critical surveillance studies in the information society. Cognition, Communication and Co-operation, 9(2), 566–592. Andreas, P., & Price, R. (2001). From war fighting to crime fighting: Transforming the American national security state. International Studies Review, 3(3), 31–52. Andrejevic, M. (2002). The work of being watched: Interactive media and the exploitation of self-disclosure. Critical Studies in Media Communication, 19(2), 230–248. Andrejevic, M. (2007a). Surveillance in the digital enclosure. The Communication Review, 10(4), 295–317.


Andrejevic, M. (2007b), iSpy: Surveillance and power in the interactive era. Lawrence: University Press of Kansas. Andrews, L., Holloway, M., & Massoglia, D. (2015). Digital peepholes: Remote activation of webcams: Technology, law and policy. The Institute for Science, law and technology. http://ckprivacy.org/files/2016/04/digital_peepholes_2015.pdf Armitage, R. (2002). To CCTV or not to CCTV? A review of current research into the effectiveness of CCTV systems in reducing crime (community safety practice briefing). Nacro, 1–8. Available https://epic.org/privacy/surveillance/spotlight/0505/nacro02.pdf Associated Press. (2018, December 1). Espionage, ID theft? Risks from stolen Marriott data myriad. https://www.voanews.com/a/espionage-id-theft-risks-from-stolen-marriott-data-myriad/4683203.html Avery, M. (2007). The constitutionality of warrantless electronic surveillance of suspected foreign threats to the National Security of the United States. University of Miami Law Review, 62, 541. Baker, J. E. (2007). In the common defense: National Security law for perilous times (pp. 105–110). Cambridge: Cambridge University Press. Bakir, V. (2015). “Veillant panoptic assemblage”: Mutual watching and resistance to mass surveillance after Snowden. Media and Communication, 3(3), 12–25. Balkin, J. M. (2008). The constitution in the national surveillance state. Minnesota Law Review, 93, 1. Ball, K., & Snider, L. (Eds.). (2013). The surveillance-industrial complex: A political economy of surveillance. Abingdon/New York: Routledge. Bamford, J. (1982). The puzzle palace: America’s National Security Agency and its special relationship with Britain’s GCHQ: James Bamford. London: Sidgwick & Jackson. Bamford, J. (2009) The Shadow Factory: The Ultra-Secret NSA from 9/11 to the Eavesdropping on America. Anchor Books. 2009. ISBN: 0307279391, 9780307279392. Banks, W., & Dycus, S. (2016). Soldiers on the home front: The domestic role of the American military. 
Cambridge: Harvard University Press. Barrabi, T. (2018, April 2). Panera bread data breach exposes customer records. Fox Business. https://www.foxbusiness.com/markets/panera-bread-data-breach-exposes-customer-records Baumer, T. L., & Rosenbaum, D. P. (1984). Combating retail theft: Programs and strategies. Boston: Butterworth. Bejtlich, R. (2004). The Tao of network security monitoring: Beyond intrusion detection. Boston, Massachusetts: Pearson Education. Bell, C. (2006). Surveillance strategies and populations at risk: Biopolitical governance in Canada’s national security policy. Security Dialogue, 37(2), 147–165. Bencsáth, B., Pék, G., Buttyán, L., & Felegyhazi, M. (2012). The cousins of stuxnet: Duqu, flame, and gauss. Future Internet, 4(4), 971–1003. Berghel, H. (2017). Equifax and the latest round of identity theft roulette. Computer, 50(12), 72–76. Bloss, W. (2007). Escalating US police surveillance after 9/11: An examination of causes and effects. Surveillance & Society, 4(3), 208–228. Brandom, R. (2013, December 18). FBI agents tracked Harvard bomb threats despite Tor. The Verge. https://www.theverge.com/2013/12/18/5224130/fbi-agents-tracked-harvard-bomb-threatsacross-tor Brodsky, T., Cohen, R., Cohen-Solal, E., Gutta, S., Lyons, D., Philomin, V., Trajkovic, M. (2002). Visual surveillance in retail stores and in the home. In Video-based surveillance systems computer vision and distributed processing (pp. 51–61). Brown, S., Gommers, J., Serrano, O. (2015). From cyber security information sharing to threat management. In Proceedings of the 2nd ACM Workshop on information sharing and collaborative security (WISCS ’15 Denver, Colorado, USA – October 12 – 12, 2015) (pp. 43–49) Brumfield, B. (2012, February 17). Computer spyware is newest weapon in Syrian conflict. CNN, Friday. https://www.cnn.com/2012/02/17/tech/web/computer-virus-syria/index.html Campbell, J. E., Carlson, M. (2002). Panopticon.com: Online Surveillance and the Commodification of Privacy. 
Journal of Broadcasting & Electronic Media, 46(4), 586–606. Canali, D., Balzarotti, D., & Francillon, A. (2013, May). The role of web hosting providers in detecting compromised websites. In Proceedings of the 22nd international conference on World Wide Web (pp. 177–188). Rio de Janeiro: ACM.


Cascio, J. (2006). The rise of the participatory panopticon. WorldChanging. At http://www. worldchanging.com/archives/002651.html. Accessed 30 Jan 2008. Castells, M. (2001). The internet galaxy: Reflections on the internet, business, and society. Oxford: Oxford University Press. CBS. (2018, December 11). Equifax data breach was “entirely preventable,” congressional report finds. CBS News. https://www.cbsnews.com/news/equifax-data-breach-was-entirely-prevent able-congressional-report-finds/ Cheng, B. W., Chang, C. L., & Liu, I. S. (2005). Establishing customer relationship management framework in nursing homes. Total Quality Management & Business Excellence, 16(5), 607–629. Cimpanu, C. (2019, March 8). Marriott CEO shares post-mortem on last year’s hack. ZDNet. https:// www.zdnet.com/article/marriott-ceo-shares-post-mortem-on-last-years-hack/ Clark, J. (2016, May). Growing threat: Sextortion. Cyber Misbehavior, United States Attorneys’ Bulletin, 64(3), 1–64. https://www.justice.gov/usao/file/851856/download Coakley, R. W. (1996) The role of federal military forces in domestic disorders, 1789–1878. Army Historical Series. Center of Military History, United States Army, Washington, D.C., 2011. Available: https://history.army.mil/html/books/030/30-13-1/CMH_Pub_30-13-1.pdf Coats, D. (2017). “Worldwide Threat Assessment of the US Intelligence Community”, Senate Select Committee on Intelligence Statement for the Record, May 11, 2017, Washington DC: Office of the Director of National Intelligence. Available: https://www.hsdl.org/?view&did= 801029 Cornish, D. B., & Clarke, R. V. (2003). Opportunities, precipitators and criminal decisions: A reply to Wortley’s critique of situational crime prevention. In M. J. Smith & D. B. Cornish (Eds.), Theory for practice in situational crime prevention crime prevention studies (Vol. 16, pp. 41–96). Monsey: Criminal Justice Press. Cowan, P., Egleson, N., & Hentoff, N. (1974). State secrets; police surveillance in America. 
New York: Holt, Rinehart and Winston. Cox, J. (2019, February 6). Hundreds of bounty hunters had access to AT&T, T-Mobile, and Sprint customer location data for years. Motherboard. https://motherboard.vice.com/en_us/article/ 43z3dn/hundreds-bounty-hunters-att-tmobile-sprint-customer-location-data-years Danziger, S., Van Der Gaag, J., Smolensky, E., & Taussig, M. K. (1982). The life-cycle hypothesis and the consumption behavior of the elderly. Journal of Post Keynesian Economics, 5(2), 208–227. Dawson, S. (1993). Consumer responses to electronic article surveillance alarms. Journal of Retailing, 69(3), 353. DCA. (2015). Selling “slaving” outing the principal enablers that profit from pushing malware and put your privacy at risk. https://www.digitalcitizensalliance.org/clientuploads/directory/ Reports/selling-slavery.pdf DCA. (2017). The gateway Trojan, digital citizens Alliance (Vol. 1, Version 1, pp. 1–45). https:// www.digitalcitizensalliance.org/clientuploads/directory/Reports/2017_7The_Gateway_Trojan.pdf De la Cerna, M. (2012). Sextortion. Cebu Daily News. Retrieved October 05, 2012. Deibert, R. (2017, December 6). Evidence that Ethiopia is Spying on journalists shows commercial spyware is out of control. Wired. https://www.wired.com/story/evidence-that-ethiopia-is-spy ing-on-journalists-shows-commercial-spyware-is-out-of-control/ Deibert R., Crete-Nishihata, M., Dalek, J., Hardy, S., Kleemola, K., McKune, S., Poetranto, I., Scott-Railton, J., Senft, A., Sonne, B., & Wiseman, G. (2014, November 11). Communities @risk: Targeted digital threats against civil society. Citizen Lab. https://tspace.library.utoronto. ca/bitstream/1807/80130/1/Deibert%20et%20al_2014_Communities%20%40%20Risk.pdf Denning, D. E. (1987). An intrusion-detection model. IEEE Transactions on Software Engineering, 13(2), 222–232. Denning, D. E., Baugh, W. E. (1999). Hiding crimes in cyberspace. Information, Communication & Society, 2(3), 251–276. Denning, D., & Neumann, P. G. (1985). 
Requirements and model for IDES – a real-time intrusion-detection expert system. Menlo Park: SRI International.


8

Surveillance, Surveillance Studies, and Cyber Criminality

B. Nussbaum and E. Sebastian Udoh


9

Technology as a Means of Rehabilitation: A Measurable Impact on Reducing Crime

Cynthia McDougall and Dominic A. S. Pearson

Contents

How Prisons Can Be Improved with the Introduction of Technology
Inside a Prison Introducing Digital Technology
Process Evaluation Research Methodology
Implementation Procedure
Impact on People
Kiosk Usage Measures
Observation of Usage
Prisoner Survey
Impact of Digital Technology on Prison Performance Measures
Outcome Measures
Results of the Impact Evaluation
Prisoner Reoffending Following Experience of Self-Service Technology
Comparison with the National Trend in Reduced Proved Reoffending
Future Areas for Study
In-Cell Telephones
Costs
Conclusions and Future Directions
References

Abstract

Although prisons aim to rehabilitate offenders and offer programs to change offending behavior, they rarely provide a culture that sustains such a change.

C. McDougall (*)
Department of Psychology, University of York, York, UK
e-mail: [email protected]

D. A. S. Pearson
Department of Psychology, University of Portsmouth, Portsmouth, UK
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_71

This chapter describes how digital technology can impact on the dependency culture that exists in prisons. Prisoners have little opportunity for taking personal responsibility within a prison regime, so prison does not prepare them for managing their lives after release. Technology offers the opportunity to normalize the prison environment more closely with that of the outside world, making processes more efficient and encouraging self-responsibility and self-improvement in prisoners. This chapter describes the introduction of self-service kiosks, similar to those found in the community in supermarkets, travel centers, doctors' surgeries, and job centers. The kiosks, accessed via biometric fingerprint identification, are located on wing landings and enable prisoners to complete tasks previously carried out by prison officers using resource-intensive paper-based systems. The functions on the kiosks include access to the prisoner's account balance, canteen shopping, menu ordering, visits booking, and applications for education. Processes of implementation are described, including staff and prisoner responses and frequency of use. In our recent research, rigorous evaluation of the impact of introducing the kiosks on performance measures in a number of mainly private prisons revealed a statistically significant reduction in adjudications for in-prison misbehavior, including violence. Reoffending was also significantly lower than in prisons without self-service technology. Potentials for using the kiosks to develop the educational and rehabilitative functions of prisons are discussed.

Keywords

Digital technology · Prisoner rehabilitation · Process evaluation · Digital prisons · Adjudications · Recidivism · Stepped wedge design

Much of this book is about cybercrime and cyberdeviance and how offenders exploit the vulnerability of victims, whether they be governments, commercial organizations, or individuals. For some offenders, technology is seen as a new and inviting tool to assist in the commission of even more skillful and lucrative types of crime. Many of these offenders will become part of the prison population and have no desire or intention to change or lead law-abiding lives in the future. This is often evident from their exploitative and deviant behavior in prison (McDougall et al. 2013). There are, however, prisoners who do want to change but do not always have the ability to do so. The criminogenic needs which continually lead them back into offending (Andrews and Bonta 2010) are often hard to overcome. These include alcohol and drug misuse, mental health problems, poor problem-solving, limited education, inability to read, and learning difficulties, many of which lead to dysfunctional and broken relationships and self-harm. This chapter examines how technology can be beneficial to those offenders who are seeking the opportunity to mend their lives, but cannot achieve this without external support. We present evidence in this chapter of how technology can be used to support change in these prisoners and to rehabilitate them.

It is reasonable to say that, when technology has been introduced into prisons, the principal purpose has usually been to make the prisons more efficient and to reduce costs. This statement is supported in the UK by the fact that private prisons have been among the first to introduce technology into their establishments (McDougall et al. 2017). In the USA (State of California), efficiencies have been achieved by reducing the state prison population and managing offenders in their local communities (Petersilia 2016). These efficiencies allow scope for new approaches to rehabilitation. In a groundbreaking speech, shortly after taking up post, the UK Justice Secretary announced his approach to running prisons (Gauke 2018a). This was based on “getting the basics right” in order to create “environments that are decent, clean and safe” and in which rehabilitation can flourish. He announced that there would be an expansion of in-cell telephones, currently operating under strict security in a small number of prisons. This would allow family telephone calls to take place quickly and privately (Gauke 2018c). He also proposed to install digital self-service kiosks “to speed up arranging visits and managing money” (Gauke 2018c). Wing-based kiosk facilities, with biometric fingerprint identification, have been in existence in some privately run prisons in the UK for some time, with expansion of the available functions, as required. The kiosks allow prisoners to complete tasks previously carried out using resource-intensive paper-based application systems that occupy most of a prison officer’s time. 
The range of functions on the kiosks includes prisoner self-checking of spending account balances; canteen shopping (from the prison shop); phone card top-up; meal ordering; access to the prisoner’s daily timetable; family visits booking; noticeboard messages; applications for healthcare appointments, clothing items, and employment change; requests to attend education classes or offending behavior programs; and a frequently asked questions service. All of these tasks were previously carried out manually by prison officers receiving requests and delivering the paper-based applications to the relevant departments. Support staff and specialist personnel then dealt with these applications or queries and returned them to the wing for distribution by hand to the prisoners. This chapter describes an evaluation of prisoner self-service kiosks and their impact on rehabilitation.

C. McDougall and D. A. S. Pearson

How Prisons Can Be Improved with the Introduction of Technology

A chief inspector of prisons in England and Wales is quoted as saying that prisons are in a “pre-Internet dark age” (Champion and Edgar 2013, p. iii), and this may be true of most prisons internationally (Molleman and van Os 2016). This view is confirmed by Reisdorf and Rikard (2018), based on a critique of current prison policies in the USA. According to Reisdorf and Rikard, no US prisons allow full access to the Internet in their establishments, with approximately ten states offering only limited access. They claim that many US prisoners have never used digital technologies or have been forced to discontinue use when incarcerated. Although it is acknowledged that digital skills are required in most occupations, many prisoners in the USA have never been taught how to use digital technologies. Reisdorf and Rikard (2018) argue for a revision of rehabilitation theory to incorporate digital inequality research. Criminologists and psychologists have for some time drawn attention to the disadvantage to prisoners internationally, created by the lack of digital facilities in prisons and hence limited acquisition of skills (Jewkes and Reisdorf 2016; Knight 2015). There has, however, so far been little published empirical evidence on the beneficial effect these skills may have on rehabilitation.

Wolff et al. (2012), in a study of how well prisoners were prepared for release, emphasized the importance of prisoners being supported in rejoining their communities, particularly those who had been serving long prison sentences. They proposed that the kinds of support prisoners needed were in education, financial assistance, job training, employment assistance, and community living skills, together with the self-management skills associated with their offense-related problems. Wolff et al. did not, however, include the acquisition of digital technology skills in their list. Indeed, few studies so far have presented evidence of the importance of digital technology in everyday life and the growing relationship between digital and social exclusion (Helsper and Eynon 2013). In particular, research has not so far recognized that many of the skills Wolff et al. (2012) identified as important to rehabilitation are now dependent on digital capabilities. Dame Sally Coates (2016), in her report reviewing education in UK prisons, strongly supported the teaching of digital skills to prisoners as a necessity in increasing job and educational prospects and reducing the likelihood that prisoners will reoffend.
As Coates observed, “nearly one-third of prisoners have self-identified on initial assessment as having a learning difficulty and/or disability, these prisoners will be doubly disadvantaged by being deprived of learning basic digital skills while in prison” (Coates 2016, p. iii). To the current authors’ knowledge, the value of teaching digital technology skills to prisoners and the potential impact on criminal behavior, as presented in this chapter and in McDougall et al. (2017), has not previously been evaluated.

In current society we are totally dependent on our mobile phones and the self-service kiosks that allow us to withdraw cash, pay for our purchases, and buy our travel tickets. These are the simplest and quickest ways to transact, and we get an immediate response to our requests. As Elias Aboujaoude observed: “We do everything online today and do it so seamlessly that it is easy to forget how relatively young the medium still is” (Aboujaoude 2011, p. 15). Despite this progress in society, little change has occurred in prisons. Today, in most of our prisons, if prisoners have a request, they have to either speak to a prison officer or write the request on a piece of paper. This piece of paper is then transferred to the relevant department, dealt with by hand, and returned to the prisoner with an answer. This transaction can take days, depending on the quantity of paperwork the officer is required to handle that day from a wing full of prisoners. Prisoners often equate the speed of response with the degree to which they are treated with respect (Hulley et al. 2012) and are known to accuse prison officers of deliberately losing applications from prisoners they do not like. This can become a major source of tension between officers and prisoners. In the case of ordering toiletries and food items from the prison shop, this could amount to more than 1,000 pieces of paper being handled by officers during 1 week on this one function. The cost of each list of purchases is calculated by hand in the prison shop and manually taken from the prisoner’s spending account. This process is inevitably subject to delay and human error, which again results in frequent frustrations between prisoners and staff, and is far removed from preparing the prisoners to take on responsibility for their lives on release. This is indeed “a pre-Internet dark age” (Champion and Edgar 2013, p. iii).

In the community, some of the most basic tasks will require released prisoners to use a self-service kiosk. For example, they may have to apply for a job using a kiosk at the employment office; their probation officer will keep in touch by text or mobile phone; and if they want to see a doctor, they may use a kiosk to register for the appointment. All of these skills now need to be taught in prison as part of basic living skills. Digital technology skills are no longer a luxury.

McDougall et al. (2017), in their evaluation of the impact of self-service kiosks on prisoners’ behavior in prison and afterward in the community, developed a Theory of Change to explain how, in addition to improving efficiency, the use of kiosks might impact on rehabilitation. Figure 1 shows how we would anticipate that the introduction of self-service kiosks might contribute to prisoner rehabilitation. First, it gives the prisoner an element of control over the basic necessities of prison life. This in itself addresses one of the main concerns in Agnew’s (2006) general strain theory (GST), which suggests that events and conditions that are physically or psychologically distressing increase the likelihood of criminal behavior. In a prison context, the stressors are often the perceived lack of fairness and lack of control commonly observed (Listwan et al. 2013). If these stressors can be alleviated, this should contribute to a reduction in the anger and frustration which often lead to violent behavior.
Fig. 1 Theory of Change for prisoners. (Based on the model presented in the Journal of Experimental Criminology, McDougall et al. 2017.) [The flow model links the following elements: prisoners make use of system and acquire digital skills; direct contact with offender supervisors; access to programmes/education/support; easier contact with family and friends; improved employment/housing prospects; improved mental health; self-responsibility; reduced dependency on prison officers; improved family relationships; improved attitudes to offending behaviour; reduced tension in prison; fewer disciplinary issues due to mental health; reduced dynamic risk of re-offending; reduced proportion of prisoners reoffending.]

At the start prisoners can take control of their own rehabilitation, making direct contact with an offender supervisor; initiate appointments to discuss release plans
or family issues, offense-related programs, and employment training; and seek assistance with finding future accommodation on release. Mann et al. (2013) have noted that many offenders are deterred from seeking help with rehabilitation when they have to make the request for help through a prison officer. This ability to make direct contact with change agents in seeking help removes this obstacle. Additionally, having access to education allows prisoners to address poor literacy skills, one of the frequent criminogenic needs identified by Bonta and Andrews (2017), and provides the pathway to rehabilitation exhorted in Coates’ (2016) review of prison education.

The regular use of self-service kiosks will give prisoners more confidence in their ability to conduct basic tasks using technology. They will thus develop self-responsibility and reduce dependence on prison officers. This is one of the model’s pathways to reducing disciplinary adjudications. Easier contact with family and friends will be facilitated by organizing and timetabling prison visits. Lösel et al. (2012) emphasized the importance of family contact in a longitudinal study of imprisoned fathers and their families. Positive resettlement, which included desistance from crime, was associated with high-quality family relationships, good communication, and high frequency of family contact while in prison. The improved opportunities presented through prisoner self-service are likely to benefit mental health, boosting self-confidence, reducing prison officer/prisoner tensions, and improving family relationships. These may lead to a more positive attitude to making purposeful release plans and reinforce motivation to desist from future offending. These benefits may ultimately lead to an improvement in behavior in the prison and hence fewer adjudications, which is a positive predictor of reduced reoffending (Cochran et al. 2014; French and Gendreau 2006; Heil et al. 2009).
At the same time, there is likely to be a positive impact on rehabilitation, through improved family relationships (Lösel et al. 2012), and an increase in successful completions of offending behavior programs (Bonta and Andrews 2017). Additionally, prison officers will be relieved of time-consuming paper-based tasks, so allowing time to be spent with prisoners on personal officer activities such as encouraging and supporting prisoners’ problem-solving and coping skills (Champion and Edgar 2013; McDougall and Pearson 2014). The Theory of Change was later partially tested in a quantitative evaluation of change in performance measures (McDougall et al. 2017).

Inside a Prison Introducing Digital Technology

The following is a previously unpublished description of the process of changing the culture of a prison by introducing wing-based prisoner self-service kiosks; the reaction from staff and prisoners; and the impact on the efficiency of the prison processes (McDougall and Pearson 2014).

9

Technology as a Means of Rehabilitation: A Measurable Impact on. . .

189

Based on documentation supplied by the kiosk provider and the views of senior managers, the prime purpose of installation of self-service technology was to achieve a number of business benefits. These included both hard benefits, such as savings in staff time, reductions in food waste, and usage of paper and printing required in the laborious paper-based processes, and soft benefits, such as increased prisoner responsibility, improved prisoner life skills, reduced frustration of prisoners, reduced stress on staff, and fewer assaults. Prisoner rehabilitation was also a hypothesized benefit, due to prisoners gaining experience of organizing and managing their own affairs, increased self-reliance and greater independence, and more involvement in purposeful activities.

Process Evaluation Research Methodology

A process evaluation was conducted in one prison, selected because it was due to receive self-service kiosks imminently (McDougall and Pearson 2014; Unpublished Process Evaluation Report). The prison was privately operated, housed over 1,000 prisoners, was categorized as a Category C trainer, and held adult male prisoners serving between 12 months and 4 years. The prison was relatively new. We visited the prison before, during, and after installation of kiosks to conduct a review of process and implementation (McDougall and Pearson 2014). The review adopted a comprehensive evaluation methodology (Bouffard et al. 2003), combining quantitative and qualitative techniques which provided confirmatory cross-methodological evidence. These approaches incorporated the views of managers, operational staff, and prisoners, including views on types of training; monitoring of the frequency and types of kiosk usage; and observation of usage via closed-circuit TV (CCTV) footage. In addition, we collected some case examples to illustrate individual user characteristics.

Implementation Procedure

Management and staff were consulted by the technology provider about necessary changes to processes and the impact of kiosks on back-office functions. Sensibly, adjustments were made to plans to fit in with the operation of the prison, including delaying the visits booking function to await completion of a new visits complex. The main premise behind the approach to staff training was that the prison must continue to operate while staff were being trained. Training of staff was therefore cascaded to cause as little disruption as possible. Awareness sessions were delivered, and approximately 30–40% of key staff in each function received direct training from the provider. Training was then cascaded from trained staff and supported by written materials; training guides and quick training guides were provided. It was recognized that the introduction of digital technology presented a “root and branch” change for many back-office functions, so the training could be challenging in some functions, particularly where finance was involved, leading to some apprehension. Two managers, one with self-service kiosk experience across a range of prisons and another locally based digital technology manager, were therefore designated to support the establishment and its staff in coping with installation and the use of the systems. These were designated as “troubleshooters” who were available to help with any aspect of the project, including support of training.

Training of prisoners was conducted by staff with the aid of written instructions. Trusted prisoners were identified on each wing to help those prisoners who were having difficulty in using the technology. Since the kiosks were in open sight, officers were encouraged to look out for any instances of abuse. A system of prisoner “listeners” (linked to The Samaritans – a UK voluntary organization in the community that offers support to those at risk of self-harm) and designated mentors already existed on each wing. Some mentors had experience of self-service kiosks in previous prisons, while others were trained by staff to use the kiosks so that they could support other prisoners in system usage. A “buddy” system was also in place for supporting older prisoners. All prisoners with reading difficulties were already identified on reception and encouraged to take part in a reading support scheme run by a charitable agency (the Shannon Trust) aimed at helping those with reading difficulties in prison. These helpers were tasked with seeking out prisoners with literacy needs to help them with kiosk use. Staff members were also encouraged to assist any prisoners having difficulty with kiosk use. Following installation of the self-service system, prisoner training on kiosks was planned to be added to the induction program.

Impact on People

Staff and Prisoner Opinions

We held focus groups with staff and prisoners 1 month before the self-service system was installed and 1 month after. The staff groups were divided between those officers who worked on the prison wings and those who had specialist jobs, e.g., working on reception of prisoners or supervising visits. The prisoner groups were divided between those prisoners on a main wing and prisoners from a more vulnerable group who were anticipated to have difficulty managing the self-service systems.

In the pre-installation focus groups, we consulted staff and prisoners on their general opinions of prisoner self-service. Staff anticipated that the self-service kiosks would cut down on paperwork, giving them more time to spend on personal officer tasks; would reduce frustrations between prisoners and staff caused by the lengthy application processes; and would encourage prisoner involvement in purposeful activities. Prisoners thought that with self-service it would be easier to manage their finances; they looked forward to meal selection; and they expected the kiosks to reduce tensions and frustrations over visits booking, finance, and canteen shopping. Some concerns were expressed that some prisoners might have difficulty in learning to use the kiosks.

Consultation 1 month after implementation raised some issues of concern, but both staff and prisoners referred to these as “teething troubles,” and both groups thought that things would get better. Full benefits had not yet been felt by staff as some dual systems were still in operation, i.e., the old system was running in parallel until the new system had been thoroughly tested. Staff, however, could recognize that prisoners were taking on more responsibility. Instead of presenting a problem to staff such as “why is my pay wrong?,” prisoners could now work out why their pay was wrong and ask for a correction. Staff also experienced time savings. The introduction of the spending account balance enquiry function had led to many fewer queries from prisoners. Prisoner reception was running smoothly. The canteen shop had already seen large savings in time: canteen staff no longer had to cost every purchase individually and deal with paper-based applications from over 1,000 prisoners per week. They remarked that purchases of fresh fruit had unexpectedly almost doubled since the introduction of kiosks, with pictures now presented alongside titles of products to assist in identification. Purchases of newspapers were now requested and paid for in advance by prisoners, saving the canteen 12 hours per week on this one task.

Prisoners in focus groups and in individual discussions, having received help with kiosks from staff and mentors, concluded that kiosks were easy to use. Prisoners’ concerns included breakdowns of kiosks, which happened frequently at first but had improved by the time of our visit. Prisoners accepted that kiosks were repaired when they reported breakdowns to the officers. Machines were also regularly running out of paper for receipts, which was addressed on one wing by prisoners taking responsibility themselves for ordering more. Use of kiosks led to some discussion of policy issues not related directly to prisoner self-service but which had been highlighted by the ability to check account balances frequently. Some prisoners thought the canteen shopping system was excellent. An unexpected side effect was prisoners ordering more from the canteen, causing less availability of their favorite choices.
Kiosks, however, had given confidence to those afraid to use computerized systems, and the kiosks were thought to be user friendly. These systems had made prisoners take control of their finances, and they felt they were gaining life skills.

Kiosk Usage Measures

Printed snapshots of actual kiosk usage were taken on two wings at two time points: immediately after implementation of the self-service kiosks and 1 month later. Use of kiosks increased over the two time points on both wings, with one main wing showing a statistically significant increase in the use of account balance enquiry, canteen shopping, phone top-up, frequently asked questions, and individual timetables.

Observation of Usage

CCTV footage was observed on two wings at the two time points at which kiosk usage was measured. Use of the kiosks was slow at the first time point, with long periods without activity, but at the second time point snapshots showed a “hive” of activity at the kiosks. Usage was orderly, and there were no signs of frustration or intrusion.


Prisoner Survey

A survey was conducted 1 month after the installation to gauge prisoner attitudes to the self-service system; this was included in McDougall et al. (2017). Out of a possible 1,389 prisoners, 743 (53%) responded to the questionnaire. Although only about half of the population responded, this is more than double what can typically be expected and was considered to be an exceptionally good response rate for a prisoner survey (Gojkovic et al. 2011). Below we divide the prisoner survey questions and responses by two relevant prison performance monitoring priorities (“safe, decent, and secure prisons” and “prisoner rehabilitation”).

Prisoner Survey Results: Safe, Decent, and Secure Prisons

Of the 743 respondents, 93% thought the kiosks were “easy” or “very easy” to use, even though very little formal training was offered, while 7% thought the kiosks were “difficult” or “very difficult” to use. When asked “Did you get enough training/help to use the kiosks?”, 10% thought the training/help was about right; 80% of respondents said they had no training/help or not much training/help; and 10% said they had “quite a bit” or “very much” training/help to use the kiosks.

Many of the staff, including the director of the prison where the survey was conducted, thought that the kiosks would give the prisoners more responsibility and control over their lives in prison. When prisoners were asked the question, “Have the kiosks given you more control over your life in prison?”, 55% thought that the kiosks had given them “more” or “much more” control. This was the highest affirmatory response in the survey, which suggests that the technology was having this impact. Meanwhile, 36% thought the kiosks had made no difference, and 8% thought kiosks had given them “less” or “much less” control over their lives.
When prisoners were asked if the self-service system had affected their relationships with prison officers, 32% thought relationships were “better” or “much better” after installation; 58% thought self-service had made no difference to relationships with officers; and 10% thought relationships were “worse” or “much worse.”

Prisoner Survey Results: Prisoner Rehabilitation

Following introduction of the kiosks, 37% thought that relationships with family and friends were “better” or “much better”; 53% thought that the kiosks had made no difference to relationships with family and friends; and 10% thought relationships were “worse” or “much worse.” When asked if using the kiosks would give them more confidence to deal with technology-enabled services in the outside world, 43% said that using the kiosks had given them “more” or “much more” confidence; 50% said they had made no difference to their confidence; and 7% said that kiosks had made them “less” or “much less” confident.

While the survey suggests that prisoners in general had a positive reaction to the self-service kiosks, there was, however, a small percentage who would like to go back to the old ways of doing things. This group either needs support to make better use of the kiosks or to be allowed to seek assistance from prison officers if that is needed. Attention should be given to the genuine physical difficulties that this small percentage of users seemed to have with biometric identification and poor eyesight. Ten percent thought relationships with officers were “worse” or “much worse,” which suggests that this group valued the individual support given by officers during the paper-based interactions. Weakened prisoner/staff relationships have from time to time been proposed as a potential negative outcome of introducing kiosks; however, this survey appeared to show that the prisoners who hold this view are in the minority. Nevertheless, the support from officers that this group of prisoners requires should be noted.

In general, we found prisoner self-service was perceived to be advantageous at a prison level by staff and by prisoners. We therefore thought it important to investigate whether the favorable impression made on individual staff and prisoners had also had a positive impact on in-prison behavior. The following study examined this hypothesis.

Impact of Digital Technology on Prison Performance Measures

A multi-prison study was conducted to test the impact across a range of prisons that had already introduced the prisoner self-service system (McDougall et al. 2017), and this research is summarized here. The prisons were mainly privately operated adult male prisons with populations ranging from 1,000 to 2,000 prisoners. UK prisons are categorized from A (high security) to D (open/lowest security). The prisons in this research sample were Category B and Category C trainer prisons (hence medium security categories) and local prisons (housing shorter-sentence prisoners and prisoners on remand). The prisons were located in England, Wales, and Scotland.

The research design was retrospective, as kiosks had already been installed in some prisons over a number of years. Since there are cultural and organizational differences between public and private sector prisons, it was not appropriate to use public prisons as a control group. The research therefore employed a “natural stepped-wedge” design (Hussey and Hughes 2007), which is one of the most rigorous evaluation methods that can be applied without introducing a randomized controlled trial. Most of the 13 prisons included in the study introduced self-service kiosks at different times across a 7-year period from 2007 to 2014, allowing the researchers to examine the level of monthly performance measures at each stage in each prison which had installed self-service, in comparison with the prisons that had not yet installed self-service. In Fig. 2 one can see that in 2007 no prison had yet installed self-service technology, but in 2008 three prisons had installed the technology. This created an “intervention group” of three prisons and a control group of ten (at this time point). As time went on, more prisons installed self-service, thus increasing the size of the intervention group and reducing the control group.
Fig. 2 A natural stepped-wedge design showing introduction of self-service technology over a period of 7 years, with the vertical axis showing prison sites (1–13) and the horizontal axis showing the time periods (2007–2014). (Taken from McDougall et al. 2017)

If the “intervention” was indeed having an effect, we would expect to see changes within each prison relating to the time when the kiosks were installed, and not at the other prisons. We therefore had a group comparison between intervention and control prisons at each stage. We were simultaneously able to examine each individual prison’s pre- and post-installation performance data, giving us a within-prison comparison. Hence, this minimized the possibility that the outcome was due to any change other than the installation of the self-service technology.

The proposed main statistical analysis of the study was longitudinal multilevel modelling (Singer and Willett 2003). This method allows assessment of the immediate impact on, and change over time in, selected outcome measures associated with implementation of an intervention. We were therefore able to estimate the impact of self-service kiosks both within and between prisons. Where this method of analysis was not suitable due to sample size, t-tests were used instead.
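The logic of a stepped-wedge analysis can be illustrated with a small simulation. The sketch below is not the study’s analysis: it uses invented data and a simplified two-way fixed-effects regression (site and month dummies plus a treatment indicator) rather than the full longitudinal multilevel model, but it shows how staggered installation dates allow an intervention effect to be recovered from within- and between-prison contrasts.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sites, n_months = 13, 96                 # 13 prisons, monthly data 2007-2014
install = rng.integers(12, 84, n_sites)    # hypothetical installation months
true_effect = -0.5                         # assumed drop in adjudication rate

site = np.repeat(np.arange(n_sites), n_months)
month = np.tile(np.arange(n_months), n_sites)
treated = (month >= install[site]).astype(float)  # 1 once kiosks are in place

# Outcome: prison-specific level + common time trend + kiosk effect + noise
y = (rng.normal(3.0, 0.5, n_sites)[site]
     + 0.005 * month
     + true_effect * treated
     + rng.normal(0, 0.2, site.size))

# Two-way fixed-effects OLS: the treatment indicator plus dummies for
# sites and months (a simplified stand-in for multilevel modelling)
X = np.column_stack([
    treated,
    (site[:, None] == np.arange(1, n_sites)).astype(float),
    (month[:, None] == np.arange(1, n_months)).astype(float),
    np.ones(site.size),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated kiosk effect: {beta[0]:.3f}")  # should be close to -0.5
```

Because each prison switches at a different time, the treatment indicator is not collinear with either the site or the month dummies, which is what allows the effect to be separated from prison-level differences and estate-wide trends.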

Outcome Measures

Outcome measures were identified to examine the impact on three key national prison performance monitoring priorities. Although the Theory of Change for prisoners (Fig. 1) identified a number of variables that might impact on prison behavior, from a statistical point of view we chose to select a primary measure for each main prison priority (Table 1). The data were drawn from performance measures already collected by HM Prison Service. Adjudications and offending behavior program completions were the internal performance measures, and these were followed by a post-release analysis of reoffending.


Table 1 Correspondence between performance outcome measures and prison priority areas

Priority area                        Performance measure
Safe, decent, and secure prisons     Adjudications
Prisoner rehabilitation              Offending behavior program completions
Reoffending after release            Number of proved reconvictions

Adjudications

It was anticipated in the Theory of Change that tensions between prisoners and staff, due to reliance on time-consuming paper-based application systems, would be reduced following the introduction of self-service kiosks. Adjudications are the main disciplinary procedures used in prisons by governors/directors to hear cases of breaches of prison rules and to impose sanctions. This “adjudications” outcome measure recorded monthly frequencies of misconducts with findings of guilt, regardless of severity, including violent offenses, calculated as a proportion of the current prisoner population.

Offending Behavior Program (OBP) Completions

OBP completions were selected as a measure of rehabilitation. These were restricted to accredited living skills or thinking skills programs, as these tend to be run in most UK prisons and are not offense-specific. OBP completions were measured as a proportion of OBP starts as a means of measuring prisoner willingness to change. This measure was selected as a proxy for commitment to rehabilitation. OBP completions are recognized in research as an interim indicator of reduction in reoffending (Lipsey et al. 2007).

Results of the Impact Evaluation

Adjudications

There was a statistically significant reduction in the level of adjudications following installation of self-service kiosks (mean difference = −0.49 (95% CI: −0.75, −0.24), t = −3.79, p < 0.001). The statistical model took account of the different time points across the 13 prisons at which kiosks were introduced in each prison, compared with the prisons not installing kiosks at those times. We can therefore be confident that this change in adjudications was associated with installation of the kiosks. The reduction in adjudications gradually returned to pre-installation levels over a period of 3 years. As some prisons had a smaller amount of either pre- or post-kiosk data, we conducted a sensitivity analysis on adjudications in five prisons which had the longest time periods of pre- and post-data. The analysis in this subsample appeared to confirm that the significant reduction was not limited to (or overrepresented by) those prisons with fewer pre- and post-installation adjudications time points. As there are numerous factors that contribute to tensions in the prison estate, it is not surprising that other events begin to impact on adjudications in the longer term.
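For readers unfamiliar with the form in which the statistic above is reported, the following sketch shows how a mean difference, its 95% confidence interval, and the t value are computed for paired pre/post measurements. The adjudication rates below are invented for illustration and are not the study’s data.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly adjudication rates (per 100 prisoners) for five
# prisons before and after kiosk installation -- illustrative values only
pre = np.array([2.1, 1.8, 2.5, 2.9, 2.2])
post = np.array([1.6, 1.4, 2.0, 2.3, 1.9])

diff = post - pre                           # negative values = reduction
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(len(diff))  # standard error of the mean diff
t_crit = stats.t.ppf(0.975, df=len(diff) - 1)
ci_low, ci_high = mean_diff - t_crit * se, mean_diff + t_crit * se
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"mean difference = {mean_diff:.2f} "
      f"(95% CI: {ci_low:.2f}, {ci_high:.2f}), t = {t_stat:.2f}, p = {p_value:.3f}")
```

A confidence interval that excludes zero, as in the study’s reported result, corresponds to a difference that is statistically significant at the matching alpha level.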


Research into duration of treatment effects over time has shown that the provision of aftercare may help maintenance of initial change following treatment (Prendergast et al. 2004) and that a deterioration of impact in the short term does not necessarily mean that the benefits will not be experienced over a longer-term period (Jolliffe et al. 2013). It was nevertheless impressive that the elimination of a paper-based application system was associated with a statistically significant reduction in adjudications over a 3-year time period. Although beyond the scope of McDougall et al.’s (2017) study, a comparison with changes in overall levels of adjudications in the wider prison estate over the same time period would be informative.

Offending Behavior Program Completions
A large amount of program completion information was made available by the central performance hub of Her Majesty's Prison and Probation Service (HMPPS) from April 2010. However, information on program "starts" was only available from April 2009, which limited the analysis of the impact of self-service installations that occurred up to 2009. It was therefore not possible to apply the full multilevel model analysis, and a simple parametric analysis was applied instead to investigate trends in the data.

The mean proportion of completers to starters pre-self-service kiosks was 88.25% (SD = 4.55), while post-self-service the mean was 93.67% (SD = 4.57). The difference in means was not statistically significant (t = 1.96, df = 4, p = 0.121). This is likely due to the small sample size; nevertheless, the average completion rate of 93.67% was nearing the ceiling of possible performance, which is a very satisfactory level. Although our confidence in the results is limited by the sample size, Fig. 3 shows that three of the five prisons with complete data showed a sizeable increase in completions after self-service installation, which is encouraging. A larger sample would have allowed a full analysis accounting for the impact of self-service, the different types of prison, and the system-wide changes over time affecting prisons with and without self-service kiosks.

The Theory of Change proposed that improvement of in-prison behavior could be reflected in a reduction in reoffending, particularly since reductions in adjudications have consistently been shown in the research literature to predict reduced recidivism. An analysis of reoffending associated with the installation of the technology was therefore conducted.
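The simple pre/post comparison of completion proportions amounts to a paired t-test on per-prison percentages. A sketch with invented per-prison values (the study reports only the aggregate means of 88.25% and 93.67%, not site-level figures):

```python
import math
from statistics import mean, stdev

def paired_t(pre: list, post: list) -> tuple:
    """Paired t statistic for pre/post percentages, with df = n - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Invented completion percentages for five prisons, pre- and post-kiosk.
pre = [85, 90, 88, 92, 86]
post = [95, 93, 90, 96, 94]
t, df = paired_t(pre, post)
print(df)  # 4
```

With df = 4, as in the study, the critical t value is large, which is why even a sizeable mean difference can fall short of significance.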

Prisoner Reoffending Following Experience of Self-Service Technology
A proved reoffense is defined as any offense committed in a 1-year follow-up period that leads to a court conviction or caution in that year (or within a further 6-month waiting period to allow the offense to be proved in court). Proved reoffending data for the self-service kiosk prisons were provided by Justice Statistics Analytical Services of HM Ministry of Justice. Prisoners released during a 6-month pre-kiosk and a 6-month post-kiosk phase were followed up for 1 year, and proved reoffending was recorded. Reoffending data were

9 Technology as a Means of Rehabilitation: A Measurable Impact on. . .

Fig. 3 Proportion of completers relative to starters before and after self-service installation. (Taken from McDougall et al. 2017)


(Figure 3 plots mean completion rates (%), on a scale of 80–100, for sites 5, 7, 10, 11, and 12, pre- and post-installation.)

available for 7 of the 13 prisons in the original sample. Missing data were due to some prisons not having sufficient prisoners released during the time period, while other prisons had installed kiosks later in the 7 years under study, leaving insufficient time for proved reoffending data to become available.

As a comparison group, a sample of public sector prisons recognized by the Ministry of Justice as a "family" group with similar characteristics to the seven prisons in the self-service sample was also examined over the same time periods and followed up for proved reoffending for 1 year, with an average taken for the prisons in each family group in each reoffending period. Data in the kiosk sample were all adjusted to take account of the predicted rate of reoffending pre- and post-self-service using the Offender Group Reconviction Scale (OGRS3; Howard et al. 2009). (OGRS is a Ministry of Justice predictor of reoffending based only on static risks – age, gender, and criminal history. It allows probation, prison, and youth justice staff to produce predictions for individual offenders even when the use of dynamic risk assessment tools (e.g., the Offender Assessment System (OASys) or Asset) is not possible.)

The actual rate was close to or better than that predicted by OGRS in six of the seven sites, and the adjusted rate was lower after kiosk installation than before (z = 2.03, p = 0.04, r = 0.54). This reduction in proved reoffending was statistically significant, and the effect size (r) shows that it represents a large change in outcomes between the pre- and post-kiosk cohorts.
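Two of the calculations above can be sketched directly. The proved-reoffending rule is a date-window test, and the reported effect size is consistent with the common conversion r = z/√N, where N is the total number of observations (here, 7 prisons each contributing a pre- and a post-kiosk rate gives N = 14 and r = 2.03/√14 ≈ 0.54). The helper names and the 365/548-day approximations of the calendar windows are our assumptions:

```python
import math
from datetime import date, timedelta

def is_proved_reoffense(release: date, offense: date, proved: date) -> bool:
    """The proved-reoffending rule: an offense within 1 year of release,
    proved (conviction or caution) within that year plus a further
    6-month waiting period. Calendar rules approximated with
    365- and 548-day windows."""
    return (release <= offense <= release + timedelta(days=365)
            and proved <= release + timedelta(days=548))

def effect_size_r(z: float, n_obs: int) -> float:
    """Convert a z statistic to an effect size via r = z / sqrt(N)."""
    return z / math.sqrt(n_obs)

# Offense 5 months after release, proved 14 months after release: counts.
print(is_proved_reoffense(date(2015, 1, 1), date(2015, 6, 1), date(2016, 3, 1)))  # True

# 7 prisons x 2 cohorts -> N = 14 observations.
print(round(effect_size_r(2.03, 14), 2))  # 0.54
```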

Comparison with the National Trend in Reduced Proved Reoffending
As we were aware that general prison reoffending had been decreasing over the 7-year time period of our study, Justice Statistics Analytical Services provided us with comparison group data from the same family group of prisons to those in our


Fig. 4 Summary of change in reoffending compared to the national trend. (Taken from McDougall et al. 2017)

(Figure 4 plots reoffending (%), on a scale of 51.0–57.0, for the Comparator and CMS groups across two cohort periods: baseline and post-installation (adjusted for OGRS).)

sample so that we could compare the trend in prisons in general against the prisons with self-service kiosks. We measured the self-service prison reoffending data and the control prison reoffending data over the same time periods, adjusted for OGRS scores (see Fig. 4). The difference between baseline and post-kiosk reoffending for the comparison prisons was 0.78%, and for the self-service prisons it was 5.36%, a difference of 4.58 percentage points.

These results show that the reduction in proved reoffending was almost 5 percentage points greater in the self-service kiosk prisons (CMS) than in a matched portion of the general prison population (Comparator), demonstrating that the impact on proved reoffending exceeded the national trend.
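The comparison with the national trend is a simple difference-in-differences on the OGRS-adjusted changes; a sketch (function name is ours):

```python
def diff_in_diffs(treated_change: float, comparison_change: float) -> float:
    """How much larger the treated group's change is than the
    comparison group's change over the same period."""
    return treated_change - comparison_change

# Reductions in OGRS-adjusted proved reoffending, baseline to post-kiosk,
# in percentage points (figures reported in the text).
print(round(diff_in_diffs(5.36, 0.78), 2))  # 4.58
```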

Future Areas for Study

In-Cell Telephones
At the time of the self-service kiosk evaluation studies, there was a small number of UK prisons with in-cell telephones offering restricted access to approved telephone numbers. Because of the small numbers, it was not possible to conduct a quantitative impact assessment on prison performance measures and reoffending. There were, however, opportunities to speak to staff and prisoners in focus groups and to interview a small number of prison visitors to gauge reactions to the availability of in-cell phones.

Staff reported undoubted benefits from in-cell telephony. Access to wing telephones was a constant cause of tension due to the heavy demand from prisoners to make calls in the limited time available. There was particular pressure at lockup time at 5:30 p.m. Most family contacts in the community who were working would not be home from work before then, so many prisoners were trying to make calls


at around the same time, while staff were trying to carry out prisoner lockup duties. Staff stated this was one of the most difficult times of the day for officers to manage good order on the wings.

When asked, prisoners said that in-cell phones were excellent: the phones allowed them to make calls at times suitable for the family and to speak to their children at bedtime. Some negative issues were raised, such as the high cost of calls and the limited cash prisoners were allowed to spend each week, but in general reactions were positive.

A small number of prison visitors were interviewed (n = 8). The visitors were also mainly positive about in-cell phones. They thought the phones had made a big difference: prisoners could phone at more convenient times and for longer, with less background noise and more privacy. Sample comments included: "It fits better with work schedules and is more relaxed"; "He has better contact with the children and is more of a father to them"; "It gives me peace of mind to know he's OK." One visitor said it made the prison sentence harder for her partner, as he now knows what he is missing. Another said, "We're closer than ever; he opens up more; it has made the last few years seem worth it. He's now part of the family." When asked what the impact of in-cell phone calls had been, one visitor said, "Perhaps I am visiting more."

Although we were unable to measure impact on performance measures, the comments of staff, prisoners, and visitors suggest that improvements in behavior may be measurable. All three groups spoke of a reduction in tension, which suggests a possible impact on the number of adjudications and assaults in the prison, and both prisoners and family members thought that family relationships had improved.
Additionally, the facility to telephone the Samaritans at no cost when feeling in need of support in times of depression may well have an impact on levels of self-harm. As with the self-service kiosks, there seemed to be agreement between staff and prisoners that the introduction of in-cell telephony was a good thing, helping to strengthen family ties and engendering a motivation to avoid reoffending.

Costs
We have not so far touched on the economic costs and benefits of introducing self-service technology, but it should be apparent that technology has the potential to speed up processes within a prison. Technology should therefore reduce the time officers spend on time-consuming paper-based tasks that could easily be undertaken by prisoners themselves. It is not anticipated that there will be savings in actual staff numbers, where minimum staffing levels have been benchmarked, although more staff time should become available for tasks associated with security, rehabilitation, and prisoner/personal officer interaction.

Changes in systems of menu ordering, and the elimination of many paper-based systems, may reduce food waste and paper usage. Reductions in tension and assaults across the prison may also improve staff sickness absence. Savings obtained from officer availability to perform other tasks can be


measured and added to the quantifiable costs saved by reduced adjudications. A major saving would be the benefits accrued from reducing reoffending in the community, which can also be calculated based on guidelines for estimating the economic and social costs of crime in England and Wales (Dubourg and Hamed 2005).

A thorough cost-benefit analysis is required to balance the anticipated savings against the capital costs of purchasing hardware and software, as well as the recurrent costs of relicensing the software. In McDougall et al. (2017), wing kiosks were used, which are the cheapest to operate as they can be used by large numbers of prisoners. Individual devices such as tablets or laptops are more expensive and require cabling and/or wireless access. However, these devices do have the potential to be used for more educational functions that require extended access to the device, and these could accrue savings in central education provision. In both cases capital outlay is involved; however, a rental model is available which does not require a large up-front investment.

The proposed cost-benefit analysis should also confirm that the hypothesized savings materialize and are not offset by additional unanticipated costs from increased prisoner use of more accessible facilities, such as the complaints system or direct contact with offender managers. Given the diversity among prisons in population size, security category, and type of prisoner, the levels of savings can vary widely, and the costs of installation are similarly influenced by the diversity of the establishments. Although it was not possible to conduct a thorough cost-benefit analysis on the data obtained in McDougall et al. (2017), our observations strongly suggest that the findings from such an analysis would be positive and encouraging.
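The structure of such an analysis — recurring savings netted against capital outlay and licence fees — can be sketched as follows. All figures are entirely hypothetical placeholders, not study results:

```python
def net_benefit(annual_savings: dict, capital_cost: int,
                annual_licence: int, years: int) -> int:
    """Net benefit over a horizon: recurring savings minus recurring
    licence fees and one-off capital outlay."""
    recurring = sum(annual_savings.values()) - annual_licence
    return recurring * years - capital_cost

# Invented per-prison figures (GBP per year, 5-year horizon).
savings = {
    "officer_time_freed": 40_000,
    "reduced_adjudications": 15_000,
    "reduced_reoffending": 60_000,
}
print(net_benefit(savings, capital_cost=250_000, annual_licence=20_000, years=5))
```

A real analysis would also discount future savings and model the unanticipated costs (e.g., increased complaints-system use) noted above.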

Conclusions and Future Directions
Technology in the community has both advantages and disadvantages (Aboujaoude and Starcevic 2015). While there are clear benefits to be obtained by facilitating communication and the acquisition of knowledge, there are disadvantages such as the potential influence of violent video games on aggression, the effect of online searches for health-related information on levels of health anxiety, and the promotion of suicide on the Internet. When implementing technology in new contexts, we should therefore ensure that we preserve the advantages and protect against the disadvantages.

In prisons we must acknowledge unique advantages and potential disadvantages of technology. Although the advantages in increasing efficiency and teaching prisoners digital skills are evident, there are genuine security concerns that prisoner access to technology may be misused. In a paper published jointly by the UK Prisoners' Education Trust and the Prison Reform Trust, it is acknowledged that access to technology and online resources can be a risk to security (Champion and Edgar 2013). However, the organizations propose that denying access to technology poses a greater risk: that of prisoners reoffending after release. They suggest that prisons need to concentrate on how the risks to security can be managed in order to allow the use of information and communications technology (ICT) in rehabilitation


and resettlement. Champion and Edgar (2013, p. 5) suggest the main security concerns are attempts to use USB ports to charge illicit mobile phones; creating hidden folders to store prohibited images; using social networks to taunt or intimidate victims or witnesses; and committing further crimes or planning an escape. They accept that these are genuine concerns which must be managed but make a strong argument for introducing controlled access to the Internet.

The suppliers of digital technology, for their part, recognize the criminal justice system's concerns about security risks and aim to provide equipment that addresses them. The self-service kiosks they provide do not have access to the Internet, but they can operate their functions safely and securely, giving prisoners more control over their lives in prison without Internet access. The suppliers claim also to be able to address the security issues if required. Digital self-service systems have existed in custodial environments for 12 years, and the suppliers state that in that time there have been no incidents other than minor security breaches as a result of the technology. The evidence provided in this chapter that the use of technology in prisons can have a beneficial impact on behavior in prison and after release should encourage the further development of safe and securely managed technology.

A further concern often expressed in criminal justice about the introduction of technology is that it will hinder or inhibit interactions between staff and prisoners. Many officers and prisoners would disagree with this view. Both groups have said that most of the tensions that occur between them are due to the existing error-prone paper-based application systems.
As one prisoner said in a focus group before the introduction of the technology, "If I can't get my visits or there is a mistake in my pay, the only thing I can do is shout at a prison officer, even though it is not his fault." It was recognized that these frictions were likely to be much reduced with the introduction of the prisoner self-service system.

What did concern both officers and prisoners was that the additional officer time freed up by the kiosks might be taken up with further administrative tasks. Prisoners hoped that the extra time would lead to more visibility of officers on the landings, as this made them feel safer. Some said they would like more social interaction with officers, stating: "Just occasionally asking how you are makes a big difference to the atmosphere on the wing." Similar sentiments were found in another study, which noted that prisoners appreciated daily civilities from officers, such as simply saying "good morning" (Tait 2008). Officers said that they would welcome updated training in personal officer skills, as many had not used these for some time, and they wanted to be informed of best practice.

Additionally, there appeared to be emerging opportunities for educational and rehabilitative initiatives surrounding the use of the kiosks. The functions on the kiosks often mirror the situations prisoners will face in the community on release, which presents an opportunity to use "real-life" situations to give training and advice. For example, some prisoners admitted to not managing their finances well on the kiosks and recognized a need to take control of this aspect of their lives. They had a limited weekly amount of money allowed from their spending account and found it impossible to stay within their means.


With the additional freedom to make telephone calls and to choose from a wider range of purchases in the prison shop, they were quickly running out of money. Financial management programs in prison might assist prisoners struggling with overspending, and the technology could provide a ready means of applying and monitoring prisoner learning. Similarly, some prisoners were having difficulty reading messages quickly and ordering meals and canteen items at the speed necessary to avoid screens timing out. Education staff observed that they were receiving many more applications for basic reading lessons now that reading had become of more practical importance in the daily life of the prisoners (McDougall and Pearson 2014).

The UK Justice Secretary has supported the introduction of an incentives system which would offer small rewards for becoming involved in a rehabilitative regime, including offending behavior programs, and in the newly introduced education and employment strategy (Gauke 2018b). The vision at the heart of this strategy is "to put offenders on a path to employment as soon as they set foot in prison" (Gauke 2018b, p. 3), and incentives will encourage this.

Research into the value of ICT facilities in prison classrooms suggests that the mere presence of ICT does not necessarily facilitate learning. Batchelder and Rachal (2000), in a rigorous experimental study, examined whether the use of ICT had more impact than traditional teaching methods and found no statistically significant difference. However, in the case of kiosks, where the IT is being used to assist prisoners in achieving their daily goals, there may be more personal incentive to learn.

Behavior monitoring in Category D prisons (the lowest security category) has become an important part of the England and Wales risk assessment process prior to parole release and release on temporary license, as an indicator of risk of future reoffending (National Offender Management Service 2015).
The self-service kiosks offer a valuable opportunity to monitor behavior and assess prisoner risk prior to release. Research has already demonstrated a correlation between the behavior of offenders in prison, both positive and negative, and their subsequent behavior in the community (McDougall et al. 2013). Indicators linked to reduced reoffending, including involvement in purposeful activities such as applications for education, work-related training, and offending behavior programs, and making appointments with an offender manager, could all be monitored from the kiosks. Evidence of these behaviors, together with a reduction in adjudications, could be seen as positive indicators of reduced risk. The early willingness demonstrated by prisoners to use the kiosks should be encouraged in order to maintain and extend the initial positive impact on prisoner behavior.

Currently, personal officers and psychologists are hard-pressed to interact with the number of offenders requiring assessment, support, and rehabilitation. Psychologists undertake psychometric assessments, which could be achieved by self-completion on the part of prisoners. Self-completion has already been explored by King (2016), who demonstrated that the Risk Need Perception Survey, informed by the Risk Need Responsivity model (see Bonta and Andrews 2017), was able to record self-perceived and evaluator-perceived criminogenic needs. King et al. (2017)


have also evaluated the use of these self-completed psychological assessments, using a randomized controlled trial (N = 212) to examine the difference between prisoners completing psychological assessments on tablet computers and those using paper-and-pencil methods. They found no significant difference in the psychometric risk responses between the two approaches. In terms of usability, completions using paper-and-pencil were faster, but there were more frequent item omissions. Those in the tablet condition rated their format much more favorably than paper-and-pencil participants, and across both groups there was a preference for using the tablet computers. Although not statistically significant, there was a small-to-medium improvement in attitude to correctional rehabilitation in the tablet computer group following completion of the questionnaires. This study supports the use of tablet computers in psychometric risk self-assessment.

Similarly, applications that administer therapy in clinical situations are in development (Gilbody 2015; Morris and Kaur Bans 2018). Many of these approaches have not yet been evaluated; however, positive results were found using touchscreen daily self-reporting by patients suffering from depression who were receiving group cognitive behavioral therapy in a clinical setting (Newnham et al. 2012). In a trial with 1,308 patients, it was found that collaborative dialogue between patients and therapists, and regular feedback via the technology, were effective in reducing symptoms in patients at risk of poor outcomes. The patients also recorded a positive reaction to using the technology. This appears to indicate potential for application to rehabilitative programs for offenders.

Although there have been some successes in incorporating technology into therapy, Gilbody (2015) found that results were less encouraging when the design was rigorous.
In a randomized controlled trial, he found that cognitive behavior therapy provided by two separate technology programs added no value to the service provided by primary general practitioner care alone, and the evaluation suggested that the dropout rate was higher with computer-based treatment (Gilbody 2015). There has so far been limited use of technology in rehabilitation programs for offenders, and hence limited evaluation (Morris and Kaur Bans 2018). This needs to be thoroughly explored, as there is potential to reach a larger number of prisoners than is currently possible.

In summary, the information presented in this chapter supports the view that securely and safely managed use of technology in prisons can be beneficial for staff, prisoners, and society, and can assist management in normalizing the prison culture to provide efficiencies and an environment conducive to effective rehabilitation. As well as being welcomed at a personal level by the majority of management, staff, and prisoners, improved behavior in prison and on release is illustrated by statistically significant positive changes in performance measures and reduced reoffending. There is potential for improved education, assessment, and rehabilitative programs within the managed security of the systems. Those prisoners who gain confidence and new skills should be encouraged, and those who struggle with the additional personal responsibilities require support. The introduction of technology will not work for all, but it presents an opportunity that should be welcomed.


References

Aboujaoude, E. (2011). Virtually you: The dangerous powers of the e-personality. New York: W. W. Norton.

Aboujaoude, E., & Starcevic, V. (2015). Mental health in the digital age: Grave dangers, great promise. New York: Oxford University Press.

Agnew, R. (2006). Pressured into crime: An overview of general strain theory. New York: Oxford University Press.

Andrews, D. A., & Bonta, J. (2010). The psychology of criminal conduct (5th ed.). New Providence: LexisNexis Matthew Bender.

Batchelder, J. S., & Rachal, J. R. (2000). Efficiency of a computer assisted instruction programme in a prison setting. Adult Education Quarterly, 50, 120.

Bonta, J., & Andrews, D. A. (2017). The psychology of criminal conduct (6th ed.). New York: Routledge. ISBN 9781138935778.

Bouffard, J. A., Taxman, F., & Silverman, R. (2003). Improving process evaluations of correctional programs by using a comprehensive evaluation methodology. Evaluation and Program Planning, 26(2), 149–161. https://doi.org/10.1016/S0149-7189(03)00010-7.

Champion, N., & Edgar, K. (2013). Through the gateway: How computers can transform rehabilitation. Prison Reform Trust and Prisoners' Education Trust. https://fbclientprisoners.s3.amazonaws.com/Documents/CQ%20through%20the%20gateway%20WEB1.pdf. Accessed 1 July 2017.

Coates, S. (2016). Unlocking potential: A review of education in prison. London: Ministry of Justice. https://www.gov.uk/government/publications/unlocking-potential-a-review-of-education-in-prison

Cochran, J. C., Mears, D. P., Bales, W. D., & Stewart, E. A. (2014). Does inmate behavior affect post-release offending? Investigating the misconduct-recidivism relationship among youth and adults. Justice Quarterly, 31(6), 1044–1073. https://doi.org/10.1080/07418825.2012.736526.

Dubourg, R., & Hamed, J. (2005). Estimates of the economic and social costs of crime in England and Wales: Costs of crime against individuals and households, 2003/04. Home Office Online Report 30/05. http://webarchive.nationalarchives.gov.uk/20100413151441/http:/www.homeoffice.gov.uk/rds/pdfs05/rdsolr3005.pdf

French, S. A., & Gendreau, P. (2006). Reducing prison misconducts: What works! Criminal Justice and Behavior, 33(2), 185–218. https://doi.org/10.1177/0093854805284406.

Gauke, D. (2018a). Prisons reform speech. London: Royal Society of Arts. https://www.gov.uk/government/speeches/prisons-reform-speech

Gauke, D. (2018b). Education and employment strategy. Presented to Parliament by the Justice Secretary. London: Ministry of Justice.

Gauke, D. (2018c). From sentencing to incentives – How prison can better protect the public from the effects of crime. London: Centre for Social Justice. https://www.gov.uk/government/speeches/justice-secretary-launches-fresh-crackdown-on-crime-in-prison-speech

Gilbody, S. (2015). Computerised cognitive behavioural therapy (cCBT) as treatment for depression in primary care (REEACT trial): Large-scale pragmatic randomized controlled trial. BMJ, 351. https://doi.org/10.1136/bmj.h5627.

Gojkovic, D., Meek, R., & Mills, A. (2011). Offender engagement with third sector organisations: A national prison-based survey (Third Sector Research Centre working paper 61). https://www.birmingham.ac.uk/generic/tsrc/documents/tsrc/working-papers/working-paper-61.pdf

Heil, P., Harrison, L., English, K., & Ahlmeyer, S. (2009). Is prison sexual offending indicative of community risk? Criminal Justice and Behavior, 36(9), 892–908. https://doi.org/10.1177/0093854809338989.

Helsper, E. J., & Eynon, R. (2013). Distinct skills pathways to digital engagement. European Journal of Communication, 28(6), 696–713.

Howard, P., Francis, B., Soothill, K., & Humphreys, L. (2009). OGRS 3: The revised Offender Group Reconviction Scale (Research summary 7/09). London: Ministry of Justice.

Hulley, S., Liebling, A., & Crewe, B. (2012). Respect in prisons: Prisoners' experiences of respect in public and private sector prisons. Criminology and Criminal Justice, 12(1), 3–23. https://doi.org/10.1177/1748895811423088.

Hussey, M. A., & Hughes, J. P. (2007). Design and analysis of stepped wedge cluster randomized trials. Contemporary Clinical Trials, 28(2), 182–191. https://doi.org/10.1016/j.cct.2006.05.007.

Jewkes, Y., & Reisdorf, B. C. (2016). A brave new world: The problems and opportunities presented by new media technologies in prisons. Criminology & Criminal Justice, 16(5), 534–551. https://doi.org/10.1177/174889581665495.

Joliffe, D., Farrington, D. P., & Howard, P. (2013). How long did it last? A 10-year reconviction follow-up study of high intensity training for young offenders. Journal of Experimental Criminology, 9(4), 515–531.

King, C. M. (2016). The prediction of criminal recidivism using self- and evaluator-appraised risk and needs. Doctoral dissertation. http://search.proquest.com/docview/1790102519. Accessed 1 July 2017.

King, C. M., Heilbrun, K., Kim, N. Y., McWilliams, K., Phillips, S., Barbera, J., & Fretz, R. (2017). Tablet computers and forensic and correctional psychological assessment: A randomized controlled study. Law and Human Behavior. https://doi.org/10.1037/lhb0000245.

Knight, V. (2015). Some observations on the digital landscape of prisons today. Prison Service Journal, 229, 3–9.

Lipsey, M., Landenberger, N. A., & Wilson, S. J. (2007). Effects of cognitive-behavioral programs for criminal offenders: A systematic review. The Campbell Collaboration Library. http://www.campbellcollaboration.org/lib/project/29/

Listwan, S. J., Sullivan, C. J., Agnew, R., Cullen, F. T., & Colvin, M. (2013). The pains of imprisonment revisited: The impact of strain on inmate recidivism. Justice Quarterly, 30(1), 144–168. https://doi.org/10.1080/07418825.2011.597772.

Lösel, F., Pugh, G., Markson, L., Souza, K. A., & Lansky, C. (2012). Risk and protective factors in the resettlement of imprisoned fathers with their families. Milton: Ormiston Children's and Families Trust. Accessed 1 July 2017 from Google Scholar.

Mann, R. E., Webster, S. D., Wakeling, H. C., & Keylock, H. (2013). Why do sexual offenders refuse treatment? Journal of Sexual Aggression, 19(2), 191–206. https://doi.org/10.1080/13552600.2012.703701.

McDougall, C., & Pearson, D. A. S. (2014). Process evaluation: The prisoner custodial management system (CMS). Unpublished process evaluation report.

McDougall, C., Pearson, D. A. S., Willoughby, H., & Bowles, R. A. B. (2013). Evaluation of the ADViSOR project: Cross-situational behaviour monitoring of high-risk offenders in prison and the community. Legal & Criminological Psychology, 18(2), 205–228.

McDougall, C., Pearson, D. A. S., Torgerson, D. J., & Garcia-Reyes, M. (2017). The effect of digital technology on prisoner behavior and reoffending: A natural stepped-wedge design. Journal of Experimental Criminology, 13(4), 455–482.

Molleman, T., & van Os, R. (2016). Technological disparity across prison services. https://www.europris.org/file/technological-disparity-across-prison-services/

Morris, J., & Kaur Bans, M. (2018). Developing digitally enabled interventions for prison and probation settings: A review. Journal of Forensic Practice, 20(2), 134–140. https://doi.org/10.1108/JFP-08-2017-0030.

National Offender Management Service. (2015). Prison service instruction: Enhanced behaviour monitoring. London: NOMS. https://www.justice.gov.uk/downloads/offenders/psipso/psi2015/pi-16-2015.pdf. Accessed Nov 2018.

Newnham, E. A., Doyle, E. L., Sng, A. A. H., Hooke, G. R., & Page, A. C. (2012). Improving clinical outcomes in care with touch-screen technology. Psychological Services, 9(2), 221–223.

Petersilia, J. (2016). Realigning corrections, California style. Annals of the American Academy of Political and Social Sciences, 664(1), 8–13. https://doi.org/10.1177/0002716215599932.

Prendergast, M. L., Hall, E. A., Wexler, H. K., Melnick, G., & Cao, Y. (2004). Amity prison-based therapeutic community: 5-year outcomes. The Prison Journal, 84(1), 36–60.

Reisdorf, B. C., & Rikard, R. V. (2018). Digital rehabilitation: A model of reentry into the digital age. American Behavioral Scientist, 1–18. https://doi.org/10.1177/0002764218773817.

Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press.

Tait, S. (2008). Care and the prison officer: Beyond 'care bears' and 'turn-keys'. Prison Service Journal, 180, 3–11.

Wolff, N., Shi, J., & Schumann, B. E. (2012). Reentry preparedness among soon-to-be-released inmates and the role of time served. Journal of Criminal Justice, 40(5), 379–385. https://doi.org/10.1016/j.jcrimjus.2012.06.008.

Datasets for Analysis of Cybercrime

10

C. Jordan Howell and George W. Burruss

Contents

Introduction
Official Data Sources
Proprietary Data
Open-Source Data
Conclusion
References

Abstract

In this chapter, we document various sources of cybercrime data to help guide future research endeavors. We focus most of our attention on datasets associated with hacking and, to a lesser degree, online fraud. Rather than merely cataloging sources, we also describe what research has accomplished with these data on specific crimes and discuss the strengths and limitations of their use. The data discussed throughout the chapter are gathered from a variety of sources including the FBI, the Cambridge Cybercrime Centre, Zone-H, various cybersecurity companies, and several other websites and platforms. These data allow researchers the opportunity to assess correlates of cybercrime engagement, victimization patterns, and macro-level trends. However, they share one major flaw: they do not allow for the assessment of causation. We conclude by suggesting that criminologists should prioritize longitudinal data collection that allows for causal assessment.

Keywords

Cybercrime · Datasets · Analysis

C. J. Howell · G. W. Burruss (*)
Department of Criminology, University of South Florida, Tampa, FL, USA
e-mail: [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_15


Introduction

Criminologists use official data to examine trends and correlates of criminal behavior, especially in developed countries where such data are routinely and systematically collected. The US Uniform Crime Reports (UCR) is an exemplar of official data collected at the local level and aggregated at the national level via the Federal Bureau of Investigation (FBI 2019a). Using this kind of data, criminologists examine changes in crime rates over time and across jurisdictions for both property and violent crimes, correlating change with structural factors like unemployment, demographics, and policing practices. Analysis of these data also allows testing of criminological theories, such as routine activities, institutional anomie, and general strain. However, official data suffer from the “dark figure” of crime, where much crime goes unreported by victims, unfounded by the police, or misclassified by the reporting agency. It should be noted that in the United States, the National Incident-Based Reporting System (NIBRS) has been developed to address many of the shortcomings of the UCR system (see FBI 2019b). These kinds of official datasets represent secondary data analysis, where records collected for one agency or researcher’s purpose can be used to answer myriad research questions not considered by the collecting agency.

To uncover the dark figure of crime, criminologists also use officially collected data that do not rely on victim reporting to the police. Annual victimization surveys, like the Crime Survey for England and Wales (see Office for National Statistics 2019), attempt to estimate crime through a nationally representative household survey (approximately 50,000 households). These data constitute self-reported criminal experiences by victims. In addition to capturing crime unreported to the police, victimization surveys provide individual information about victims that criminologists can use to find correlates and causes of crime.
The data can also be used to determine the level of underreporting in official police data. Victimization data are also considered secondary given that the data are collected systematically for one agency’s purposes. Individual researchers, however, often conduct their own victimization surveys, which would constitute primary data collection (e.g., Schafer et al. 2018).

For cybercrime, the same issues with official and victimization data for traditional crimes are applicable, but cybercrimes have more in common with white-collar crime than traditional crimes. For traditional crime, the more serious or harmful the offense, the more likely it will be reported to police. For example, most homicides are recorded in the UCR, and much information about the criminal events (victims, offenders, mode of death, and location) is available in the UCR’s Supplemental Homicide Reports. About 60% of US homicides are cleared by arrest or exceptional means (FBI 2019a), meaning we know much about the details of these offenses. For less serious crimes, like larceny-theft, only about 19% are cleared by arrest or exceptional means; thus, we know much less about the true amount of this crime that occurs year to year. For white-collar crime, many victims fail to report the crime to the police, unaware they have been swindled, or they are ashamed, having willingly given away money or something of value through deception. White-collar crimes are not reported in the UCR Part 1 offenses, and only a few are reported in the Part 2 offenses. The same reporting issues face general victimization surveys, as respondents are often unaware of their own victimization, and most general victimization surveys capture only a few fraud types (but see Huff et al. 2010 for an example of a white-collar-specific victimization survey).

Cybercrime victims are similar to fraud victims because they often fail to realize they have been wronged. Even self-reported victimization surveys may fail to record incidents of cybercrime given that victims are often naïve about computer-related offenses. For example, most victims of a business’s data breach are only made aware through the media’s reporting or because they are contacted long after the fact by the business. A victimization survey that asked specifically whether a hacker had access to personal account information, passwords, or banking details would yield many false negatives until the breach was made public. Even then, the victims may not be sure they were in fact victimized.

Nevertheless, cybercrime is unlike most white-collar or traditional crime in that networked computers generate vast amounts of data on usage, social networks, and online behavior, such as shopping, dating, and posting information. As criminologists study cybercrime in more depth, they are just now beginning to use this array of data sources to understand the motives of offenders, the lifestyles of victims, and the interaction of computing devices and human behavior.

In this chapter we focus on available datasets, divided into three sections: official, proprietary, and open-source data. Though we cover various sources of information in this chapter, our coverage is not exhaustive of the data available at the time of this writing. Also, our coverage is not comprehensive regarding the various types of cybercrime, given space limitations.
For example, we do not address cyberbullying or cyberstalking and the potential sources of data for these offenses available to researchers. As a result, we focus most of our discussion on datasets associated with hacking and, to a lesser degree, online fraud.

Official Data Sources

Official data sources, for the purpose of the current chapter, include data published by government agencies or other organizations working on behalf of a government agency. Official data sources can be further subdivided into police data and victimization data.

As previously mentioned, the UCR is the largest source of official police data in the United States. Local law enforcement agencies across the United States report crimes that occurred within their jurisdiction to the FBI. One major strength of the UCR data source is that it includes crime data for the majority of agencies within the United States. Researchers can use these data to gain insight into 21 different crime types. Unfortunately for cybercrime scholars, the UCR does not report cybercrime incidents. Therefore, the UCR cannot be used as a reliable source of official cybercrime data.


Fortunately, NIBRS was developed to improve upon the UCR. Similar to the UCR, NIBRS reports crimes that are known to the police. However, NIBRS includes more crime types and more information about each crime. Specifically, NIBRS gathers data on victims, known offenders, arrestees, and the relationship between offenders and victims. NIBRS can be used to study cybercrime in two ways. First, researchers can use NIBRS data to determine if a computer was used during the commission of a crime. Second, NIBRS reports detailed information for reported online fraud incidents, including hacking and wire fraud. Therefore, researchers can use these data to gain insight into online fraud cases that have been reported to the police. Although NIBRS provides detailed information for a wide range of crimes, a large majority of cybercrimes are not captured by NIBRS. In addition, as stated above, NIBRS only captures known offenses. Therefore, NIBRS underestimates crime generally and cybercrime specifically.

The most prominent repository for official counts of cybercrime incidents in the United States is the Federal Bureau of Investigation’s Internet Crime Complaint Center (IC3). The IC3 was started in 2000 to provide the American public a convenient way to report cyber victimization. Since its inception, over 1,400,000 complaints have been filed, totaling over $5.5 billion in documented losses (FBI 2017). However, due to underreporting by victims, this is likely only a fraction of the total number of incidents that have truly occurred (Button et al. 2014). Initially designed as a tool for law enforcement, the IC3 now allows researchers to download its data. Researchers can examine city, state, county, and country reports and sort the data by crime type, age, and transactional information.
While useful for understanding emerging trends (Pangaria and Shrivastava 2013), the data are limited in their capacity to correlate potential causal factors, such as structural, demographic, or policy changes. Nevertheless, as a secondary dataset, the IC3 offers potential explanatory power when cybercrime trends are combined with other sources of official data, such as those from the US Census Bureau. As noted above, reporting bias makes the IC3 data problematic, but the data are certainly an important step in understanding the distribution of American cybercrime victimization.

In addition to analyzing cases reported to the police, researchers can simply ask individuals if they have been victimized. The most well-known victimization survey in the United States is the National Crime Victimization Survey (NCVS), which is administered by the Bureau of Justice Statistics. Although not designed to study cybercrime specifically, the NCVS includes questions regarding online identity theft. In addition, the NCVS collects information about cyberbullying as part of its School Crime Supplement. Data from the NCVS can be used to garner insight into the prevalence of victimization, how the crime is discovered by the victim, the financial and nonfinancial burden caused by victimization, whether the crime was reported, and whether actions were taken to reduce future incidents. Although the NCVS has been successfully used to explain American victimization patterns and test criminological theory (Muniz 2019), it is limited in its ability to explain cyber victimization patterns because it gathers only a limited amount of information about a limited number of cybercrimes.
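As a sketch of the kind of secondary analysis described above, the snippet below combines hypothetical IC3-style complaint counts with census population figures to compute complaints per 100,000 residents. The counts, populations, and field names are invented for illustration and do not reflect the actual IC3 export format.

```python
# Hypothetical IC3-style complaint counts joined with census population
# figures to compute complaints per 100,000 residents. The state codes,
# counts, and populations are invented and do not match any real export.

ic3_complaints = {"CA": 49_000, "FL": 27_000, "TX": 25_000}
census_population = {"CA": 39_500_000, "FL": 21_500_000, "TX": 29_000_000}

def complaints_per_100k(complaints, population):
    """Join the two sources on state code and compute a per-capita rate."""
    rates = {}
    for state, count in complaints.items():
        pop = population.get(state)
        if pop:  # skip states missing from the population table
            rates[state] = round(count / pop * 100_000, 1)
    return rates

rates = complaints_per_100k(ic3_complaints, census_population)
print(rates)  # raw counts become comparable across states of very different sizes
```

Standardizing by population in this way is what allows the complaint counts to be correlated with other census-derived structural measures rather than simply reflecting state size.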


In addition to the NCVS, the National Computer Security Survey (NCSS), which is cosponsored by the Bureau of Justice Statistics and the National Cyber Security Division (NCSD) of the US Department of Homeland Security, can be used to gain insight into computer security incidents. The NCSS was designed to gather and produce reliable estimates of computer security incidents against American businesses. In 2005, the NCSS gathered data from 7,818 businesses. The data can be used to estimate monetary losses, system downtime resulting from cyber incidents, the types of offenders, whether the incident was reported, the types of systems infected, and the most commonly exploited vulnerabilities (Rantala 2008). Unfortunately, the most recent data were gathered in 2005, which limits the data source’s utility for understanding cybersecurity incidents in contemporary times.

Although the aforementioned data sources provide information about cybercrime in the United States, multiple other countries collect similar data. For example, Canada’s General Social Survey (GSS) was developed in 1985 to monitor the well-being of Canadians and guide policy. The GSS asks those selected to participate in the survey to answer a wide array of questions about victimization generally and various forms of cyber victimization more specifically. The data can be used to analyze victimization patterns for both cyber-dependent crimes (i.e., hacking, phishing, malware infection) (Reyns 2015) and cyber-enabled crimes (i.e., cyberbullying, cyberstalking) (Reyns et al. 2016). In addition, the Crime Survey for England and Wales asks people living in England and Wales about their experiences with crime in the past 12 months. In regard to cybercrime, the survey includes questions about online fraud victimization. Using these data sources, and similar data collected in other parts of the world, researchers can analyze longitudinal trends in cybercrime victimization.
Although the reliability of the data may vary nation to nation, these datasets can be analyzed independently or combined to advance our understanding of global trends.

Proprietary Data

In addition to official criminal justice data, businesses, for-profit data collection enterprises, and nongovernmental agencies also collect information about cybercrimes and victimizations. Cybersecurity has become a major global industry, ranging from selling antivirus software to consulting on risk management. These businesses collect an immense volume of data to support their cybersecurity ventures. Companies like McAfee, Norton, and Kaspersky employ antispam and virus protection software across millions of business and personal computing devices. This software in turn collects information about hacking attempts, DDoS attacks, malware installation, spam campaigns, and phishing attempts. Because much of this information is proprietary, these companies do not typically release the data to researchers, though there are notable exceptions explained below.

There are also ventures by enterprising individuals to collect open-source and self-report data that can be used for research. Though often problematic because the data collection was not devised by careful researchers, these kinds of data sources can prove to be useful. Finally, nongovernmental organizations, such as research consortiums, can engage in data collection across many sources to create data archives open to either the public or other researchers. In this section we discuss some of these kinds of proprietary data sources.

While much of the proprietary data generated by cybersecurity businesses is not made public, some companies do make aggregated data available for public consumption. An example of a publicly available threat map is that produced by Kaspersky Lab, a multinational cybersecurity company and antivirus provider. As of 2016, its software had over 400 million users. The company constantly scans for cybersecurity-related incidents and displays them in “real time” (Kaspersky Lab 2018). With over 400 million users, data gathered by Kaspersky Lab can be used to show country-level variation in victimization patterns and overall global trends. Currently, Kaspersky Lab collects data on the frequency of local infections, web threats, network attacks, vulnerabilities, spam, infected mail, and botnet activity.

Like Kaspersky Lab, Bitdefender is a cybersecurity company that offers data visualization in the form of an interactive map. With over 500 million users, this is another invaluable source of attack and victimization data. Bitdefender (2018) collects data on the total number of infections, attacks, and spam at the country level. Viewers are able to see where an attack originated and the target country. However, researchers should be cautious when examining the origin of a cyberattack from any data source because hackers often hide their location through looping, using one computer to access another (Lee et al. 1999).

McAfee is an American-owned cybersecurity software company. McAfee offers an interactive map similar to those discussed above. The map provides country-level attack and spam data.
In addition, McAfee provides malware data that can be analyzed at both the macro-level and the event level. Specifically, McAfee shows how much malware each country receives and offers event-specific information such as the type of malware, the associated risk, and the date and time of discovery (McAfee 2018).

In addition to companies that generate threat maps, other private cybersecurity businesses provide data on malicious online activity. Trend Micro is a multinational cybersecurity company that continuously monitors network activity to identify command-and-control (C&C) servers. A C&C server is a centralized computer that controls a botnet, a network of Internet-connected devices that have been compromised without their owners’ permission. These C&C servers can be used to create an army of infected computers often used in data breaches, DDoS attacks, and a number of other malicious activities. Trend Micro’s interactive map identifies the prevalence of C&C servers in a given location (Trend Micro 2018).

Arbor Networks is a software company dedicated to network monitoring and network security. The company’s products are used to combat DDoS attacks. Currently, Arbor Networks’ defense software is used by over 90% of all tier 1 Internet service providers globally (Arbor Networks 2018). In recent years, Arbor Networks collaborated with Google Ideas to create an interactive data visualization map that shows the distribution and historical trends of DDoS attacks.
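Threat-map data of the kind described above typically reduce to records of origin, target, and attack type. The sketch below aggregates a hypothetical feed export by reported origin country; the record layout is invented, and, as the looping caveat above warns, the reported origin reflects the last hop rather than the attacker's true location.

```python
# Aggregating a hypothetical threat-feed export by *reported* origin country.
# As noted above, attackers often loop through intermediate machines, so the
# reported origin reflects the last hop, not necessarily the attacker.
from collections import Counter

feed = [  # invented records; real feeds carry timestamps, IPs, and more
    {"origin": "US", "target": "DE", "kind": "spam"},
    {"origin": "US", "target": "FR", "kind": "malware"},
    {"origin": "RU", "target": "US", "kind": "ddos"},
]

def attacks_by_origin(records, kind=None):
    """Count records per reported origin, optionally filtered by attack type."""
    filtered = (r for r in records if kind is None or r["kind"] == kind)
    return Counter(r["origin"] for r in filtered)

print(attacks_by_origin(feed))          # Counter({'US': 2, 'RU': 1})
print(attacks_by_origin(feed, "ddos"))  # Counter({'RU': 1})
```

Counts like these are what the interactive maps display; the same aggregation logic applies whether the feed comes from Kaspersky Lab, Bitdefender, or McAfee exports.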


Project Honey Pot is an “antispam company” that identifies and archives malicious IP addresses for security and research purposes (Project Honey Pot 2018). Webmasters simply install the Project Honey Pot software on their website, and the software can parse out the spammers and spambots. Using data gathered from Project Honey Pot, researchers can examine malicious activity associated with specific IP addresses and determine which countries send and receive the highest frequency of spam.

Zone-H falls somewhere between a for-profit enterprise and an open-source dataset. It was created in 2002 by a private interest to archive defaced websites. Website defacement is a common form of hacking (Zone-H 2018), defined as the replacement of a website’s original content with one’s own content. Hackers use Zone-H to brag about their successful defacements. After Zone-H is notified of an attack, the archivists verify it and permanently document the incident in Zone-H’s archive. The information on Zone-H is publicly available through its website, but its current owners sell archival data for a negotiated fee. In 2017 alone, over 1 million website defacements were reported to Zone-H, but most studies only examine the content of the defaced websites. Although criminologists are becoming increasingly interested in website defacement generally, and the Zone-H archive specifically, the data are still underutilized. Zone-H collects myriad data including the offender’s name and motivation, attack location, system type, and attack domain. Additionally, Zone-H characterizes some website defacements as “special.” These special defacements are attacks on “important websites,” which are almost always government affiliated.

Like most datasets, Zone-H has limitations. The data are self-reported, which creates potential self-selection bias: those who choose to report their successful defacements may differ from those who do not choose to report.
For example, hackers who post defacements on Zone-H may be building their reputations among peers and may therefore be less experienced than elite hackers with established reputations. Conversely, newbie hackers who can only deface one or two sites may not bother bragging on Zone-H until their skill level increases. In other words, the defacements stored in Zone-H’s archive may not be representative of all website defacers.

The Zone-H data do offer valuable insights into website defacement. The data allow for examination at both the macro-level and the event level. When paired with other datasets, the data can be used to correlate macro-level trends in defacement frequency over time and across countries (e.g., Howell et al. 2019). Researchers can also examine hackers’ motivations. Zone-H requires hackers to self-select their motivation when reporting a defacement from a set of possible responses: political, religious, for fun, or other reasons. Although the motivation variable is interesting, the validity and reliability of this measure are suspect. Specifically, the measure is neither exhaustive nor mutually exclusive.

The Cambridge Cybercrime Centre, a multidisciplinary initiative, has built a robust cybercrime dataset covering three aspects of cybercrime: distributed denial of service (DDoS) attacks, web forum discussions, and spam. First, the Centre currently operates 100 sensors around the world that record incidents of DDoS attacks.


A DDoS attack is an explicit attempt by a hacker to prevent legitimate users from gaining access to online resources, such as retailers, banks, social media, and entertainment streaming services (Lau et al. 2000). DDoS attacks are becoming more common because even those lacking the skills needed to carry out the attack can simply hire a hacker. The Cambridge Cybercrime Centre data allow researchers to assess DDoS attacks and victimization patterns. Specifically, researchers can view the attackers’ and victims’ IP addresses, which indicate an attack’s location of origin and the target’s location. The dataset currently includes over four trillion packets. Stated simply, the term packet refers to the data being transmitted from one computer to another.

Second, the Cambridge Cybercrime Centre also scrapes data from Internet forums. The Centre has over 40 million posts that can be analyzed for trends, techniques, and even individual criminal trajectories. By analyzing forum data, researchers may describe what activities hackers engage in and how an individual hacker progresses over time. Macdonald et al. (2015), for example, used the language of hackers to identify potential threats against critical infrastructure. Moreover, Benjamin et al. (2015) developed a way to automate the process of identifying existing threats and vulnerabilities using machine learning technologies and information retrieval techniques. Taken together, available forum data can be used for research and the creation of prevention strategies. However, not all hackers discuss their techniques on forums.

Third, the Cambridge Cybercrime Centre gathers phishing and spam data from a variety of sources. Phishing is an increasingly common phenomenon defined as “a scalable act of deception whereby impersonation is used to obtain information from a target” (Lastdrager 2014).
Phishing occurs in three stages: (1) the victim receives an email from the offender, (2) the victim takes the action suggested in the message, and (3) the offender monetizes the stolen information (Hong 2012). Phishing costs Internet users billions of dollars a year (McGrath and Gupta 2008). Although much legal investigation has been conducted on spam and phishing, only a few studies have examined them using criminological theory (Kigerl 2012).

Although the Cambridge Cybercrime Centre collects one of the most robust cybercrime datasets available to date, it does not utilize random selection. The nonrandom nature of the data calls their generalizability into question. Stated differently, since not all cybercrimes have an equal chance of inclusion, inferences drawn from the data may not reflect the population.
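The forum-based threat identification cited above often begins with simple lexical triage before heavier machine learning is applied. The sketch below flags posts that mention terms on a watchlist; the watchlist and posts are invented for illustration, and production systems use far richer language models.

```python
# A crude sketch of keyword triage for scraped forum posts, loosely in the
# spirit of the threat-identification studies cited above. The watchlist
# terms and posts are invented; real systems use richer language models.

WATCHLIST = {"scada", "botnet", "0day", "ddos"}

def flag_posts(posts, watchlist=WATCHLIST):
    """Return (post, matched_terms) pairs for posts mentioning watchlist terms."""
    flagged = []
    for post in posts:
        tokens = {word.strip(",.!?") for word in post.lower().split()}
        hits = tokens & watchlist
        if hits:
            flagged.append((post, sorted(hits)))
    return flagged

posts = [
    "selling fresh botnet installs, DM me",
    "anyone have tips for python homework?",
    "looking for scada 0day, paying well",
]
for post, hits in flag_posts(posts):
    print(hits, "->", post)
```

Even this naive filter illustrates the selection problem noted in the text: it can only surface hackers who discuss their techniques openly on forums.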

Open-Source Data

The Malware Domain List is an example of an open-source repository, like Zone-H, that accepts reports on active malware around the world (MDL 2013). Anyone can report instances of detected malware to the site, including cybersecurity professionals, white or gray hat hackers, or even black hat hackers. Like other forms of self-reported data, the data collection is biased toward malware that is detected. Nevertheless, these data have been used to examine the global distribution of malware (Holt et al. 2018).

Another open source of data is web forums. Multiple studies have used web forums to better understand hacker behavior. For example, Holt et al. (2012) used forum data to explore the social network of a group of Russian hackers. By analyzing this unique source of open-source intelligence, Holt et al. (2012) determined that only a few skilled hackers are active in the forums in comparison to novice hackers. Additionally, Zhang et al. (2015) examined messages posted in hacker forums to create hacker typologies. These studies, among others, demonstrate that researchers can gather open-source intelligence from forums to study cybercrime.

In addition to forum data, researchers have utilized open-source intelligence from various social media platforms to garner insight into cybercrime perpetration. More specifically, Maimon et al. (2017) gathered social media data for a large number of hackers found in the Zone-H archive. Results from their study indicate that social media presence increases attack frequency. Additionally, Babko-Malaya et al. (2017) found that social media behavior varies based on hackers’ motivations. Specifically, skilled hackers tend to discuss more technical topics, whereas those motivated by ideology use social media as a recruitment tool.
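Social network analyses of forums, like the Holt et al. (2012) study above, typically start from who-replies-to-whom edges scraped from threads. A minimal sketch, with invented handles and edges, counts how often each handle is replied to as a rough centrality measure:

```python
# Minimal sketch of the forum social-network measures used in studies like
# those above: count how often each handle is replied to, a rough centrality
# measure. The handles and reply edges are invented for illustration.
from collections import Counter

replies = [  # (replier, replied_to) pairs scraped from hypothetical threads
    ("novice1", "guru"),
    ("novice2", "guru"),
    ("novice3", "guru"),
    ("guru", "novice1"),
]

def in_degree(edges):
    """Number of replies each handle receives across all threads."""
    return Counter(target for _, target in edges)

# A handful of skilled actors typically dominate the reply network.
print(in_degree(replies).most_common(1))  # [('guru', 3)]
```

The skewed degree distribution this produces mirrors the finding that only a few skilled hackers are active relative to the many novices who reply to them.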

Conclusion

In this chapter, we have documented various sources of cybercrime data to help guide future research endeavors. The list of analyzable datasets we provided is in no way exhaustive. Similarly, we did not provide a robust discussion of all forms of cybercrime; rather, we focused most of our attention on datasets associated with hacking and, to a lesser degree, online fraud. Rather than offering a mere catalog of sources, we also attempted to describe what research has accomplished with these data on specific crimes and to discuss the strengths and limitations of their use. The data discussed throughout the chapter are gathered from a variety of sources including the FBI, the Cambridge Cybercrime Centre, Zone-H, various cybersecurity companies, and several other websites and platforms. All these data allow researchers the opportunity to assess correlates of cybercrime engagement, victimization patterns, and macro-level trends. However, all these data share one major flaw: they do not allow for the assessment of causation.

Despite the methodological limitations of proprietary and self-reported data, studies conducted using these data serve as foundational work. Nevertheless, future research should begin to collect more reliable data from the field to assess specific forms of cybercrime. Specifically, researchers should prioritize the collection of longitudinal data to allow for causal modeling between validated theoretical constructs and cybercrime perpetration and victimization. Additionally, when possible, researchers should employ randomized experimental designs to eliminate the effects of confounding variables.

One potential source of data to be used in experiments comes from honeypot computers, which act as attractive targets that lure hackers in. When a honeypot is successfully deployed, network administrators can use it to divert hackers from sensitive information or to log the hackers’ activities to test the security and resilience of their systems. But researchers can also use information from attacked honeypots to determine the skill level and methods of hackers. In addition, researchers can vary aspects of the honeypot systems to determine which factors might act as a deterrent for hackers. For example, using honeypot computers, Maimon et al. (2014) found that warning banners deter system trespasser behavior. However, the warning banner is less effective when the system trespasser has full administrative privileges (Testa et al. 2017).

Although Maimon and his colleagues (Howell et al. 2017; Maimon et al. 2014; Wilson et al. 2015; Testa et al. 2017) have demonstrated that honeypot computers can be used to garner insight into system trespasser behavior, the data are limited in various ways. Specifically, as pointed out by Bossler (2017), honeypot data are collected at the event level, which may lead researchers to make inaccurate assumptions about the attacker. Additionally, the majority of cyberattacks are automated; therefore, honeypot data may include attacks generated by machines rather than individuals (Bossler 2017).

Experimental and quasi-experimental designs can help criminologists gain a fuller understanding of the causal mechanisms underlying cybercrime and cyber victimization (Maimon and Louderback 2018). Moreover, cybercrime scholars can test how traditional criminological theories operate in cyberspace. Unfortunately, this cannot be easily accomplished through secondary data analysis and convenience samples of college students. Criminologists must trade convenience for innovation and gather data on active offenders. Effective policy stems from good research, which can only be conducted with the use of valid and reliable data.
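One simple response to the automated-attack problem Bossler (2017) raises is to screen honeypot sessions by event timing, since scripted attacks issue commands faster than a human could type. The sketch below applies an arbitrary one-second threshold to invented session timestamps; it is an illustration of the idea, not a validated classifier.

```python
# Sketch of one response to the automated-attack problem noted above: flag
# honeypot sessions whose events arrive faster than a human could plausibly
# type. The 1-second threshold and timestamps are invented illustrations,
# not a validated classifier.

def likely_automated(timestamps, min_gap=1.0):
    """True when every gap between consecutive events is under min_gap seconds."""
    if len(timestamps) < 2:
        return False  # too little evidence to call it either way
    gaps = (b - a for a, b in zip(timestamps, timestamps[1:]))
    return all(gap < min_gap for gap in gaps)

script_session = [0.0, 0.05, 0.11, 0.16]  # events milliseconds apart
human_session = [0.0, 4.2, 9.8, 15.1]     # pauses while an intruder types

print(likely_automated(script_session))  # True
print(likely_automated(human_session))   # False
```

Filtering out sessions flagged this way would let an analysis focus on the human trespassers whose decision-making deterrence theories are actually about.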
The data generated by networked computing devices continues to generate massive amounts of data on a multitude of user activities that can be used to track and predict human behavior. This unprecedented tracking of our daily activities brings with it new privacy concerns until recently only imagined in George Orwell’s Nineteen Eighty-Four Certainly, researchers will need to be mindful of how the data are used to protect human subjects from unanticipated harm. The promise of measuring the interaction between humans and networked systems, however, should help us understand criminal victimization and offending that is not possible for noncybercrimes. Knowing what one does online, how often, and under what circumstances would help develop existing criminological theories but also may lead to new theories. Ultimately our understanding of crime is only as good as the data we have to test or create theories. To improve the current state of cybercrime data, researchers should take two important steps: (1) facilitate data consortiums where proprietary, official, and public data are made available for researchers; and (2) explore what data are and can be generated from network users; then make this available. For the first step, businesses and government must work together to create data consortiums where data are managed and uploaded for research use. Many proprietary datasets are kept classified or confidential because cybersecurity firms or government agencies feel the need to protect the names of clients or keep secret cybersecurity failures. This is

10

Datasets for Analysis of Cybercrime

217

certainly understandable. However, data can always be made safe for public use by de-identifying the cases or aggregating the information. Criminologists routinely use information about individual victims, prisoners, or agency activities without divulging any sensitive details. This can certainly be done for cybercrime data. It took an act of Congress in 1930 to create the UCR; hopefully, affected businesses and agencies can see the need for collaboration for their own benefit.

The second step requires interdisciplinary research between social scientists and computer and software engineers. As computing devices become more interconnected through software applications, social media, and the Internet of Things, we should be able to use this vast array of data to understand how humans interact with each other and through technology. We know that human perceptions can be manipulated through software algorithms that use online activity to predict everything from one’s shopping preferences to one’s political ideology. By understanding what data are being created, we might be able to find correlates of offending and victimization that go beyond survey questions. Social media use (amount, not content), for example, might be predictive of psychological issues (depression) or, in the aggregate, of geographic disparities (anomie).

In conclusion, the data criminologists can use to understand cybercrime are similar to those used for traditional crime. The main difference is that cybercrimes, like traditional fraud, remain underreported even within self-reported surveys. Nevertheless, we have discussed several promising sources in this chapter. We believe the data generated by computers and networks (including cellular) offer promise for helping criminologists explain cybercrime through existing theory or for developing new theory to explain the growing problem of cybercrime.
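As an illustration of the de-identification and aggregation safeguards discussed above, the sketch below replaces a direct identifier with a salted one-way hash and releases only category counts. All field names, the salt, and the records are hypothetical, invented for demonstration:

```python
import hashlib
from collections import Counter

def deidentify(records, salt="consortium-secret"):
    """Replace a direct identifier with a salted one-way hash so that the
    same victim can still be linked across incidents without exposing names."""
    safe = []
    for rec in records:
        rec = dict(rec)  # copy so the caller's data is not mutated
        name = rec.pop("victim_name")
        digest = hashlib.sha256((salt + name).encode()).hexdigest()
        rec["victim_id"] = digest[:12]  # truncated pseudonymous identifier
        safe.append(rec)
    return safe

def aggregate(records, key):
    """Release only counts per category (e.g., incidents per attack type)."""
    return Counter(rec[key] for rec in records)
```

A real consortium would also have to manage the salt as a secret, since anyone holding it could re-hash candidate names and re-identify cases; aggregation, the second safeguard, avoids that risk entirely by releasing no case-level records at all.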

References

Arbor Networks. (2018). Insight into the global threat landscape. Retrieved from https://www.netscout.com/report
Babko-Malaya, O., Cathey, R., Hinton, S., Maimon, D., & Gladkova, T. (2017). Detection of hacking behaviors and communication patterns on social media. In 2017 IEEE international conference on big data (Big Data) (pp. 4636–4641). Boston, MA: IEEE.
Benjamin, V., Li, W., Holt, T., & Chen, H. (2015). Exploring threats and vulnerabilities in hacker web: Forums, IRC and carding shops. In Intelligence and Security Informatics (ISI), 2015 IEEE international conference on (pp. 85–90). Baltimore, MD: IEEE.
Bitdefender. (2018). Cyberthreat real time map. Retrieved from https://threatmap.bitdefender.com
Bossler, A. M. (2017). Need for debate on the implications of honeypot data for restrictive deterrence policies in cyberspace. Criminology & Public Policy, 16(3), 681–688.
Button, M., Nicholls, C. M., Kerr, J., & Owen, R. (2014). Online frauds: Learning from victims why they fall for these scams. Australian & New Zealand Journal of Criminology, 47(3), 391–408.
Federal Bureau of Investigation. (2019a). Crime in the United States 2018. Retrieved from https://ucr.fbi.gov/crime-in-the-u.s/2018/crime-in-the-u.s.-2018/topic-pages/about-cius
Federal Bureau of Investigation. (2019b). National incident-based reporting system. Retrieved from https://www.fbi.gov/services/cjis/ucr/nibrs
Federal Bureau of Investigation Internet Crime Complaint Center (IC3). (2017). 2017 Internet crime report. Retrieved from https://pdf.ic3.gov/2017_IC3Report.pdf

218

C. J. Howell and G. W. Burruss

Holt, T. J., Strumsky, D., Smirnova, O., & Kilger, M. (2012). Examining the social networks of malware writers and hackers. International Journal of Cyber Criminology, 6(1).
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2018). Assessing the macro-level correlates of malware infections using a routine activities framework. International Journal of Offender Therapy and Comparative Criminology, 62(6), 1720–1741.
Hong, J. (2012). The state of phishing attacks. Communications of the ACM, 55(1), 74–81.
Howell, C. J., Maimon, D., Cochran, J. K., Jones, H. M., & Powers, R. A. (2017). System trespasser behavior after exposure to warning messages at a Chinese computer network: An examination. International Journal of Cyber Criminology, 11(1), 63–77.
Howell, C. J., Burruss, G. W., Maimon, D., & Sahani, S. (2019). Website defacement and routine activities: Considering the importance of hackers’ valuations of potential targets. Journal of Crime and Justice, 42, 536.
Huff, R., Desilets, C., & Kane, J. (2010). National public survey on white collar crime. Fairmont: National White Collar Crime Center.
Kaspersky Lab. (2018). Cyberthreat real-time map. Retrieved from https://cybermap.kaspersky.com
Kigerl, A. (2012). Routine activity theory and the determinants of high cybercrime countries. Social Science Computer Review, 30(4), 470–486.
Lastdrager, E. E. (2014). Achieving a consensual definition of phishing based on a systematic review of the literature. Crime Science, 3(1), 1–10.
Lau, F., Rubin, S. H., Smith, M. H., & Trajkovic, L. (2000). Distributed denial of service attacks. In Systems, man, and cybernetics, 2000 IEEE international conference on (Vol. 3, pp. 2275–2280). Nashville, TN: IEEE.
Lee, M., Pak, S., Lee, D., & Schapiro, A. (1999). Electronic commerce, hackers, and the search for legitimacy: A regulatory proposal. Berkeley Technology Law Journal, 14(2), 839.
Macdonald, M., Frank, R., Mei, J., & Monk, B. (2015). Identifying digital threats in a hacker web forum. In Proceedings of the 2015 IEEE/ACM international conference on advances in social networks analysis and mining 2015 (pp. 926–933). Paris, France: ACM.
Maimon, D., & Louderback, E. R. (2018). Cyber-dependent crimes: An interdisciplinary review. Annual Review of Criminology, 2, 191–216.
Maimon, D., Alper, M., Sobesto, B., & Cukier, M. (2014). Restrictive deterrent effects of a warning banner in an attacked computer system. Criminology, 52(1), 33–59.
Maimon, D., Fukuda, A., Hinton, S., Babko-Malaya, O., & Cathey, R. (2017). On the relevance of social media platforms in predicting the volume and patterns of web defacement attacks. In 2017 IEEE international conference on big data (Big Data) (pp. 4668–4673). Boston, MA: IEEE.
Malware Domain List (MDL). (2013). Malware domain list frequently asked questions. Retrieved from http://www.malwaredomainlist.com
McAfee. (2018). Global virus map. Retrieved from https://home.mcafee.com/virusinfo/global-virus-map
McGrath, D. K., & Gupta, M. (2008). Behind phishing: An examination of phisher modi operandi. LEET, 8, 4.
Muniz, C. N. (2019). Sexual assault and robbery disclosure: An examination of Black’s theory of the behavior of law (Doctoral dissertation, University of South Florida).
Pangaria, M., & Shrivastava, V. (2013). Need of ethical hacking in online world. International Journal of Science and Research (IJSR), ISSN: 2319-7064, 529–531.
Project Honey Pot. (2018). Project Honey Pot. Retrieved from https://www.projecthoneypot.org
Rantala, R. (2008). Cybercrime against businesses, 2005. Retrieved from http://www.bjs.gov/index.cfm?ty=pbdetail&iid=769
Reyns, B. W. (2015). A routine activity perspective on online victimisation: Results from the Canadian general social survey. Journal of Financial Crime, 22(4), 396–411.
Reyns, B. W., Henson, B., Fisher, B. S., Fox, K. A., & Nobles, M. R. (2016). A gendered lifestyle-routine activity approach to explaining stalking victimization in Canada. Journal of Interpersonal Violence, 31(9), 1719–1743.


Schafer, J. A., Lee, C., Burruss, G. W., & Giblin, M. J. (2018). College student perceptions of campus safety initiatives. Criminal Justice Policy Review, 29(4), 319–340.
Testa, A., Maimon, D., Sobesto, B., & Cukier, M. (2017). Illegal roaming and file manipulation on target computers: Assessing the effect of sanction threats on system trespassers’ online behaviors. Criminology & Public Policy, 16(3), 689–726.
Trend Micro. (2018). Global botnet map. Retrieved from https://botnet-cd.trendmicro.com
U.K. Office for National Statistics. (2019). Crime and justice. Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice
Wilson, T., Maimon, D., Sobesto, B., & Cukier, M. (2015). The effect of a surveillance banner in an attacked computer system: Additional evidence for the relevance of restrictive deterrence in cyberspace. Journal of Research in Crime and Delinquency, 52(6), 829–855.
Zhang, X., Tsang, A., Yue, W. T., & Chau, M. (2015). The classification of hackers by knowledge exchange behaviors. Information Systems Frontiers, 17(6), 1239–1251.
Zone-H. (2018). Unrestricted information. Retrieved from http://www.zone-h.org

Part II Legislative Frameworks and Law Enforcement Responses

11 The Legislative Framework of the European Union (EU) Convention on Cybercrime

José de Arimatéia da Cruz

Contents
Introduction ................................................... 224
Background to the Convention ................................... 224
The Legislative Structure of the Convention .................... 226
Additional Protocol on Xenophobia and Racism ................... 228
Benefits of the Convention ..................................... 230
Criticism of the Convention .................................... 232
Recommendations ................................................ 234
Conclusion ..................................................... 235
Cross-References ............................................... 236
References ..................................................... 236

Abstract

The Convention on Cybercrime of the Council of Europe (CETS No.185), known as the Budapest Convention, is the only binding international instrument on this issue. It serves as a guideline for any country developing comprehensive national legislation against cybercrime and as a framework for international cooperation between state parties to this treaty. The Budapest Convention is supplemented by a Protocol on Xenophobia and Racism committed through computer systems.

The views expressed in this essay are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, the Department of Defense, or the US Government.

J. de Arimatéia da Cruz (*)
College of Behavioral and Social Sciences, Georgia Southern University, Savannah, GA, USA
U.S. Army War College, Strategic Studies Institute, Carlisle, PA, USA
e-mail: [email protected]

© This is a U.S. Government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_5


Keywords

Budapest Convention · Council of Europe · Convention on Cybercrime

Introduction

The Convention on Cybercrime of the Council of Europe (CETS No.185), known as the Budapest Convention, is the only binding international instrument on this issue. It serves as a guideline for any country developing comprehensive national legislation against cybercrime and as a framework for international cooperation between state parties to this treaty. The Budapest Convention is supplemented by a Protocol on Xenophobia and Racism committed through computer systems. The Council of Europe will celebrate the eighteenth anniversary of the Budapest Convention on Cybercrime (hereafter the “Convention”) in 2019. The Convention was officially opened for signature in November 2001 and entered into force on July 1, 2004.

Background to the Convention

The Convention on Cybercrime of the Council of Europe (CETS No.185) was opened for signature in Budapest, Hungary, on November 23, 2001 (see Fig. 1). As the Internet evolved from a novelty into an integral part of governments’ day-to-day business, European nations became quite concerned about the adequacy of legislation criminalizing certain activities occurring over computer networks. As stated in the Convention’s preamble, its primary objective is to establish a “common criminal policy aimed at the protection of society against cybercrime” (CETS No.185). The Convention’s second goal, and perhaps the most important, is the harmonization of cybercrime laws across participating member states of the Convention. While such an objective is an ambitious endeavor, the Convention’s proponents recognized that information and communication

Fig. 1 Convention on Cybercrime, status as of October 18, 2019:
Title: Convention on Cybercrime
Reference: ETS No.185
Opening of the treaty: Budapest, 23/11/2001. Treaty open for signature by the member States and the non-member States which have participated in its elaboration and for accession by other non-member States.
Entry into force: 01/07/2004. Five ratifications, including at least three member States of the Council of Europe.


technologies (ICTs) and the Internet of Things (IoT) have become too important to be ignored as the world becomes more globalized and interconnected.

The Convention was drafted by the Council of Europe (CoE) in Strasbourg, France. In addition to the CoE member states, Canada, Japan, South Africa, and the United States participated in the negotiation of the Convention as observers (Vatis 2010). In November 1996, the CoE published a set of recommendations addressing the need for new laws criminalizing disruptive conduct committed through computer networks. The European Committee on Crime Problems (CDPC) recommended that the CoE set up an experts committee on cybercrime to address the inadequacies of computer-related and criminal procedural laws relating to cybercrime. In February 1997, the CoE Committee of Ministers, following the recommendations of the CDPC, established the Committee of Experts on Crime in Cyberspace. As Weber points out, “the Council of Europe established a Committee of Experts on Crime in Cyberspace (PC-CY) in 1997 to draft a binding convention facilitating international cooperation in the investigation and prosecution of computer crime” (Weber 2003). The Committee of Experts was charged with examining several provisions of the Convention draft as well as its binding legal authority. Member Parties to the Convention recognized that cybersecurity and cybercrime were important enough to warrant a binding international agreement. The Committee of Experts would work through the Convention’s complexities over a period of 4 years. At the end of four arduous years poring over the document, a final draft of the Convention was approved by the CDPC in June of 2001 and then adopted by the CoE’s Committee of Ministers on November 8, 2001 (Vatis 2010). The Convention is composed of 48 articles addressing both substantive and procedural aspects of cybercrime prevention among nation states and is further subdivided into chapters, titles, and subsections.
The Preamble of the Convention lays out the aim of the Council of Europe, which is to “achieve a greater unity between its members” (ETS 185, Preamble). Overall, the Convention’s main objective is to globalize legislation as a way to ease the difficulties facing cybercrime prosecution when the criminal resides in a foreign country (Redford 2017). That is a welcome approach to cyber prosecution given that some states have no legislation laying out penalties for cyber infractions, nor do they clearly define what constitutes cybercrime. The Convention’s cybercrime prevention approach allows greater member state cooperation in the prosecution of any crime in which a computer is a tool in the commission of the crime. According to the Convention’s Chapter IV – Final Provisions, Article 37 – Accession to the Convention, any additional states may be invited by the CoE’s Committee of Ministers to accede to the Convention after the Committee consults with and obtains the unanimous consent of “the Contracting States to the Convention” (ETS, 185):

After the entry into force of this Convention, the Committee of Ministers of the Council of Europe, after consulting with and obtaining the unanimous consent of the Contracting States to the Convention, may invite any State which is not a member of the Council and which has not participated in its elaboration to accede to this Convention. The decision shall be taken by the majority provided for in Article 20.d. of the Statute of the Council of Europe and by the unanimous vote of the representatives of the Contracting States entitled to sit on the Committee of Ministers (Article 37).


The 2013 decision to invite a non-member state to accede to the treaty is valid for 5 years from its adoption. The most recent nation to officially join the Convention was Peru, which deposited its instrument of ratification of the Budapest Convention on Cybercrime on January 30, 2019. The Peruvian Congress unanimously approved the act of accession to the Convention. With Peru’s accession, the Convention now has 64 state parties and 3 nations that have signed the Convention but have not followed through with the ratification process. The United States signed the treaty on November 23, 2001, and ratified it on September 29, 2006. The Convention entered into force with the United States on January 1, 2007 (Vatis 2010).

There are some important differences between the Convention’s approach to the prosecution of cybercrime and the United States’ approach. For example, Mark Redford pointed out that the US approach to cybercrime prevention is legislation-based, while the EU cybercrime approach is treaty-based (Redford 2017). The legislative approach means that the laws promulgated by the Congress of the United States are made for the needs of the United States without much collaboration or input from the international community at large regarding the issue. On the other hand, the Convention uses a treaty or convention approach. This treaty or convention approach is a limited collaboration system, which works through global voluntary participation. As Redford (2010) points out, “all the participating countries work in conformity to remedy cybercrime causation, with the intent to form laws that will enable them to overcome conflicts of law, choice of laws, and other international legal complexities.” Another important difference is the Convention’s broader definition of a computer system when compared to 18 USC §1030(e)(1).
According to the Convention’s Article 1(a), a computer system is “any device or a group of interconnected or related devices, one or more of which, pursuant to a program, performs automatic processing of data.” Congress, on the other hand, in 18 USC §1030(e)(1) defines the term computer as

an electronic, magnetic, optical, electrochemical, or other high-speed data processing device performing logical, arithmetic, or storage functions and includes any data storage facility or communications facility directly related to or operating in conjunction with such device, but such term does not include an automated typewriter or typesetter, a portable hand-held calculator, or other similar devices.

The Legislative Structure of the Convention

While a European cyber instrument, the Convention is open to non-member states by a unanimous decision of the parties. The Convention’s purpose is to achieve greater unity among its members by convincing nation-state members of the “need to pursue, as a matter of policy, a common criminal policy aimed at the protection of society against cybercrime, inter alia, by adopting appropriate legislation and fostering international cooperation” (ETS 185, Preamble). According to Amalie M. Weber, the Convention is “the first international treaty on crimes committed via the Internet and other computer networks, and is the product of


four years of work” (Weber 2003, p. 429). The Convention also provides a “comprehensive response to the challenges of cybercrime, addressing issues of substantive offences, procedural laws and international cooperation” (Clough 2012, p. 369).

Chapter 1, Article 1 “Use of Terms” provides definitions for the purpose of the Convention. While the title of the Convention is Convention on Cybercrime, Article 1 does not give readers a concise definition of what cybercrime is. Instead, the Convention focuses on defining what constitutes a computer system, computer data, service providers (ISPs), and traffic data (ETS 185, Article 1).

Chapter 2, “Measures to be taken at the national level,” is subdivided into sections, titles, and articles. Chapter 2, Section 1 – Substantive Criminal Law – addresses substantive criminal law and requires member states to enact a range of substantive criminal offences. Chapter 2 further established “a common canon of computer-based and computer-related crimes, requiring a common set of procedural powers, and loosely establishes a set of rules by which parties can assert jurisdiction” (Weber 2003, p. 430). Under Chapter 2, four broad categories of offences are highlighted: offences against the confidentiality, integrity, and availability of computer data and systems, that is, the so-called CIA Triad; computer-related offences; content-related offences; and offences related to infringements of copyright and related rights (ETS 185). Under the “offences against the confidentiality, integrity, and availability of computer data and systems,” there are five articles addressing what constitutes a criminal act in relation to the use of a computer as an accessory to a crime or the computer as an element of a crime. The activities constituting a criminal act under this heading are what we today would call hacking (Clough 2012, p. 172).
Chapter 2, Section 2 – Procedural Law – addresses the procedural law provisions agreed upon by Member State Parties to the Convention. This section is further subdivided into titles. Title 1 addresses common provisions; Title 2 advances preservation of stored computer data; Title 3 discusses production orders; Title 4 addresses search and seizure of stored computer data; and Title 5 deals with real-time collection of computer data. While the Convention calls for a comprehensive response to the challenges of cybercrime, addressing issues of substantive offences, procedural laws, and international cooperation, it recommends that “each Party shall adopt such legislative and other measures as may be necessary to establish the powers and procedures provided for this section [Article 14 Scope of procedural provisions] for the purpose of specific criminal investigations or proceedings” (ETS 185, Chapter 2, Section 2, Article 14).

Chapter 2, Section 3 – Jurisdiction – requires each member state to adopt legislation and other measures “as may be necessary to establish jurisdiction over any offense established in accordance with Articles 2 through 11” of the Convention regardless of where a computer crime has been committed: “in its territory; or on board a ship flying the flag of that Party; or on board an aircraft registered under the laws of that Party, or by any one of its nationals, if the offence is punishable under criminal law where it was committed or if the offence is committed outside the territorial jurisdiction of any State” (ETS 185, Chapter 2, Section 3, Article 22, a-d).

Chapter 3 – International Cooperation – addresses the need for greater cooperation among European Union members in regard to extradition and mutual


assistance. As stated in Chapter 3, Section 1, Article 23, “the Parties shall co-operate with each other. . .to the widest extent possible for the purposes of investigations or proceedings concerning criminal offences related to computer systems and data, or for the collection of evidence in electronic form of a criminal offence.” One of the goals of the Convention, in light of the proliferation of information communication technology (ICT), is the harmonization of cybercrime laws among European nations.

Chapter 4 of the Convention – Final Provisions – addresses the important issues of signature and entry into force of the Convention as well as accession to the Convention by non-member states. Article 36, on signature and entry into force, provides that “this Convention shall be open for signature by the member States of the Council of Europe and by non-member States which have participated in its elaboration.” Furthermore, Article 37 establishes that, “After the entry into force of this Convention, the Committee of Ministers of the Council of Europe, after consulting with and obtaining the unanimous consent of the Contracting States to the Convention, may invite any State which is not a member of the Council and which has not participated in its elaboration to accede to this Convention. The decision shall be taken by the majority provided for in Article 20.d. of the Statute of the Council of Europe and by unanimous vote of the representatives of the Contracting States entitled to sit on the Committee of Ministers.” In other words, any state may accede following a majority vote in the Committee of Ministers and a unanimous vote by the parties entitled to sit on the Committee of Ministers.

Additional Protocol on Xenophobia and Racism

According to the Preamble of the Convention on Cybercrime, its main objective is to “pursue a common criminal policy aimed at the protection of society against cybercrime, especially by adopting appropriate legislation and fostering international cooperation.” The Internet and the World Wide Web were originally developed so that university researchers could share their findings, thus promoting the advancement of knowledge among flagship universities from the East Coast to the West Coast. As the World Wide Web advanced, the Internet transformed itself into not only a platform for the exchange of scientific information but also a platform for old friends to reconnect, lovers to find one another, and consumers to buy anything they desire. Recognizing that the Internet could also be used for nefarious activities, on November 7, 2002, the Committee of Ministers adopted the Additional Protocol to the Convention on Cybercrime concerning the criminalization of acts of a racist and xenophobic nature committed through computer systems (ETS 189). According to the party members to the Convention, acts of a racist and xenophobic nature constitute a violation of human rights and a threat to the rule of law and democratic stability (ETS 189).

The Internet has also become an information operations force multiplier. Megan Penn, Joshua K. Miller, and Jan Schwarzenberg pointed out that violent extremist organizations have increased their online presence, using the Internet to establish an online brand, communicate with their members, and radicalize sympathizers (Penn et al. 2015). Terrorist and jihadist organizations have recognized the importance


of the Internet as part of their arsenal of weapons in the “theater of fear” as they attempt to win the hearts and minds of prospective sympathizers and recruits. The Internet has become one of the most important tools in al-Qaeda’s war against the infidels and their supporters. So-called jihadist websites have proliferated in the aftermath of the September 11 terrorist attacks against the United States. Not only has there been a proliferation of jihadist websites, but their quality, contents, and messages have also become more sophisticated and professional. The Quetta Shura Taliban maintains several dedicated websites, including one with an Arabic-language online magazine and daily electronic press releases on other Arabic-language jihadist forums. The As-Sahab Institute for Media Production is al-Qaeda Central’s media arm, distributing video, audio, and graphics products online through jihadist forums, blogs, and file-hosting websites (Theohary and Rollins 2011). Russia has used social media “to foster conspiracy theories, plant rumors, and spread fake news in Bulgaria, Denmark, Estonia, Finland, France, Georgia, Germany, Hungary, Italy, Latvia, Lithuania, Montenegro, the Netherlands, Norway, Serbia, Spain, Sweden, Ukraine, the United Kingdom, and the United States” (Jakubowski 2019).

Recognizing that the Internet is a major source of radicalization for potential recruits and a fertile ground for proselytizing lone wolves sympathetic to xenophobic and racist ideology, the Convention adopted the Additional Protocol on Xenophobia and Racism. According to the States of the Council of Europe and the other States Parties to the Convention on Cybercrime, “acts of a racist and xenophobic nature constitute a violation of human rights and a threat to the rule of law and democratic stability” (ETS 189).
For the purpose of criminalizing racist and xenophobic activities carried out using a computer, the Convention states that “racist and xenophobic material means any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence, against any individual or group of individuals, based on race, colour, descent or national or ethnic origin, as well as religion if used as a pretext for any of these factors” (ETS 189). Holt et al. (2020) point out that the Additional Protocol on Xenophobia and Racism “has tremendous value in addressing the development and radicalization of individuals through the internet, particularly white supremacist movements.” The signatories to the Additional Protocol on Xenophobia and Racism must pass laws criminalizing acts of a racist or xenophobic nature committed through computer networks. In other words, the provisions of the Protocol are of a mandatory character. Parties to the Convention, in order to satisfy their obligations, must not only enact appropriate legislation but also ensure that it is effectively enforced (ETS 189). According to the Additional Protocol on Xenophobia and Racism, Article 5,

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offenses under its domestic law, when committed intentionally and without right, the following conduct: insulting publicly, through a computer system, (i) persons for the reason that they belong to a group distinguished by race, colour, descent or national or ethnic origin, as well as religion, if used as a pretext for any of these factors; or (ii) a group of persons which is distinguished by any of these characteristics.


Benefits of the Convention

While the Convention has been criticized for its shortcomings and failures to keep up with rapid developments in cyber technology, it continues to morph as it attempts to adjust to the new technologies of the twenty-first century. According to Jonathan Clough, in his essay “The Council of Europe Convention on Cybercrime: Defining ‘Crime’ in a Digital World,” with the adoption of the Convention, harmonization “facilitates the exchange of information and knowledge between governments and industry, and is crucial for cooperation between law enforcement agencies” (Clough 2012, p. 365). Harmonization also allows Convention member states greater accessibility as well as information sharing for assistance in the successful apprehension, charging, and prosecution of cybercriminals regardless of their geographic location, given the nature of the crime in the age of the Internet of Things (da Cruz 2019). Chapter 3 of the Convention, in addition to requiring the exchange of information (Article 26), also requires member states in Article 35 to establish a 24/7 Network and to “designate a point of contact available on a twenty-four hour, seven-day-a-week basis, in order to ensure the provision of immediate assistance for the purpose of investigations or proceedings concerning criminal offences related to computer systems and data, or for the collection of evidence in electronic form of a criminal offence” (ETS 185).

The Council of Europe helps to protect societies worldwide from the threat of cybercrime through the Convention on Cybercrime and its Protocol on Xenophobia and Racism, the Cybercrime Convention Committee (T-CY), and the technical cooperation programs on cybercrime. The Convention’s success has been attributed to this dynamic triangulation, composed of the Budapest Convention, the Cybercrime Convention Committee (T-CY), and the technical cooperation programs, as illustrated by Fig. 2. After celebrating its seventeenth birthday, the

Fig. 2 Cooperation on Cybercrime: The Council of Europe Approach. (Source: https://www.coe.int/en/web/cybercrime/home)

11

The Legislative Framework of the European Union (EU) Convention on. . .

231

Convention is still as relevant today as it was when it was opened for signature in Budapest in November 2001. The Convention was established to supplement applicable multilateral and bilateral treaties or arrangements between the parties, including the provisions of:

• The European Convention on Extradition, opened for signature in Paris on December 13, 1957 (ETS No. 24)
• The European Convention on Mutual Assistance in Criminal Matters, opened for signature in Strasbourg on April 20, 1959 (ETS No. 30)
• The Additional Protocol to the European Convention on Mutual Assistance in Criminal Matters, opened for signature in Strasbourg on March 17, 1978 (ETS No. 99)

The Convention's approach to cooperation on cybercrime takes the form of a "dynamic triangle." The Convention is based on common standards for the criminalization of attacks by means of computers and on international police and judicial cooperation on cybercrime and e-evidence. Follow-up and assessments are done by the Cybercrime Convention Committee (T-CY), which assesses implementation of the Convention by the parties and keeps the Convention up to date. The Committee is currently working on solutions regarding law enforcement access to electronic evidence on cloud servers (GFCE 2019). T-CY members share information and experience, assess implementation of the Convention, and interpret the Convention through guidance notes. The T-CY is the most relevant intergovernmental body dealing with cybercrime. The Cybercrime Programme Office (C-PROC), located in Bucharest, Romania, was established by the Council of Europe; its primary objective is to support countries worldwide in strengthening criminal justice capabilities on cybercrime and electronic evidence. Since 2016, C-PROC has been involved in several projects, such as those in the Eastern Partnership region, which includes Armenia, Azerbaijan, Belarus, Georgia, Moldova, and Ukraine.
According to Cristina Schulman, Head of the Cybercrime Unit within the Economic Crime Division of the Directorate General of Human Rights and Legal Affairs of the Council of Europe in Strasbourg, France, there are several benefits for member states that are party to the Council of Europe Convention on Cybercrime. For example, the Convention is:

1. The only multilateral treaty dealing with cybercrime matters, already implemented in many countries while others are considering becoming a party
2. A guideline for drafting legislation on cybercrime
3. An important tool for law enforcement to investigate cybercrime
4. A means to ensure adequate protection of human rights and liberties according to the relevant international documents
5. A flexible mechanism to avoid conflicts with national legislation and proceedings (Schulman 2019)


Furthermore, the Convention on Cybercrime, according to Cristina Schulman, provides member states with the following benefits:

1. A coherent national approach to legislation on cybercrime
2. The harmonization of criminal law provisions on cybercrime with those of other countries
3. A legal and institutional basis for international law enforcement and judicial cooperation with other parties
4. A means of participation in the consultations of the parties
5. A platform facilitating public-private cooperation (Schulman 2019)

Criticism of the Convention

Although the Convention is the first of its kind and provides global standards and a framework for effective international cooperation, it has come under attack for its shortsightedness regarding the rapid technological developments in the fields of cybersecurity and cybercrime. Furthermore, when the Convention entered into force, it was opposed by many civil liberties groups and certain countries, including Russia and China, which feared that the new investigative authority that would be created in many ratifying states and the increase in law enforcement cooperation would erode privacy and other rights (Vatis 2010). In fact, Jonathan Clough points out:

The Convention has, and will continue to have, a significant impact on international, regional and national cybercrime initiatives, further enhancing efforts at harmonization. Nonetheless, developing the capacity to implement cybercrime laws effectively, particularly in developing countries, will present the greatest challenge, and one which will require true global cooperation (Clough 2012, p. 367).

Amalie M. Weber, in her essay "The Council of Europe's Convention on Cybercrime," highlights three core problems with the Convention. According to Weber (2003), the jurisdictional problem of cybercrime manifests itself in three ways: lack of criminal statutes, lack of procedural powers, and lack of enforceable mutual assistance provisions with foreign states. Clough (2012, p. 375) also criticizes the Convention on several procedural points. For example, he argues that the definition of a "computer" within the language of the Convention is too narrow, unlike the definition provided by the United States in its Computer Fraud and Abuse Act (CFAA) of 1986. Furthermore, the Convention has been unable to "keep pace with developments not foreseen or fully appreciated at the time of its drafting" (Clough 2012, p. 376). Clough specifically addresses the shortcomings of the Convention in its inability to predict future cyber developments such as the rise and utilization of botnets. Botnets are networks of bots, which are "highly adaptable worker bees that do their master's bidding over a broad 'net'—in the case of bots, scattered throughout the global Internet" (Dunham and Melnick 2009, p. 1). Botnets, as Dunham and Melnick
pointed out, "started out as something non-malicious, but eventually developed into a criminal tool of choice over a ten year period during the dawn of the World Wide Web" (Dunham and Melnick 2009, p. 41). Botnets are widely used to conduct Denial of Service (DoS) as well as Distributed Denial of Service (DDoS) attacks; they were used strategically during the Russia-Estonia conflict in 2007 and the conflict between Russia and the Republic of Georgia in 2008. Botnets have been recognized as a force multiplier during kinetic conflicts by nation states, rogue states, cybercriminals, transnational organized criminal organizations, and terrorist organizations. As Dunham and Melnick argued, "malicious botnets are the cyber parasites of the Internet, drawing their life and power from otherwise 'healthy hosts'" (Dunham and Melnick 2009, p. 127). Another major concern among party members is that the Convention does not address the particular issues that may be raised by cyberattacks that are not just criminal acts but may also constitute espionage or the use of force under the laws of war (Vatis 2010). The Convention also suffers from jurisdictional shortcomings of its own: it lacks criminal statutes, procedural powers, and enforceable mutual assistance with foreign states (Weber 2003). Particularly concerning is the fact that even when both the host and victim states have adequate criminal statutes and investigative powers, prosecution is frustrated by the lack of enforceable cooperation (Weber 2003). Some Parties to the Convention also lack the skilled technicians or cyber experts needed to address the complexities of a cyber investigation. Finally, while Article 22 of the Convention encourages international cooperation, it fails to provide any clear lines as to who will have jurisdiction to prosecute a computer crime (Podgor 2004).
The Convention has also been criticized on three other points. Clough (2012) faults the Convention for its inability to foresee the development of identity theft, "grooming," and cyberterrorism. One identity theft technique, credit card skimming, "allow[s] data to be copied from the magnetic strip present on many transaction cards, while so-called carding websites facilitate the trade in identity information by providing an online marketplace" (Clough 2012, p. 379). More concerning from a policing perspective, as well as for public anxiety online, is the process of "grooming," a technique in which "a child is befriended by a would-be abuser in an attempt to gain the child's confidence and trust, enabling them to get the child to acquiesce to abusive activity" (Clough 2012, p. 381). When the Convention entered into effect, terrorism was not a concern, nor was it an issue on the horizon as it relates to cybersecurity or Information and Communication Technology (ICT). The Convention's drafters did not foresee how digital technologies and networked computers would become so interconnected with, and interdependent within, a nation's critical infrastructure, from banking and travel to online dating and Supervisory Control and Data Acquisition (SCADA) systems. While the Convention did not directly address the issue, it relinquished that responsibility to each country's domestic laws. However, that is easier said than done. As is well established in matters related to cybercrime, many countries neither prohibit cybercrimes nor provide a clear definition of what constitutes cybercrime.
Arresting, charging, and prosecuting someone therefore becomes an issue. As Baron points out, "crimes concerning illegal access, interference and interception are being slowly implemented and are sparsely recognized" (Baron 2002, p. 270). A more recent development that the Convention could not have fathomed is the rise of cyber mercenaries and the question of how to prosecute them (Zilber 2018). Perhaps the most damaging criticism of the Convention is that it has no enforcement mechanism to compel Parties to comply with their obligations under the treaty. Parties to the Convention, according to Article 45 §1, "shall be kept informed regarding the interpretation and application of this Convention." Also, while the Convention has a dispute resolution mechanism in place should a conflict arise between party members, if one party refuses to submit to arbitration, the other party has no recourse under the Convention as to that dispute (Vatis 2010). Another major criticism is that when cooperation conflicts with a party's domestic law, that country can claim that providing assistance would prejudice its sovereignty (Vatis 2010). Under such circumstances, domestic law supersedes an international treaty that was supposed to bring greater harmonization to the criminalization of cybercrime. Further complicating the legitimacy of the Convention, beyond the fact that a country may refuse to cooperate or comply with a decision under the Convention, is that a party suspected of being responsible for a cyberattack would likely be able to refuse cooperation and still be in compliance with the letter of the Convention (Vatis 2010).
Even the Additional Protocol to the Convention on Cybercrime, concerning the criminalization of acts of a racist and xenophobic nature, suffers from the same challenges as the original Convention. It does not contain an enforcement mechanism to compel member states to enforce any of its provisions when they conflict with domestic laws. Moreover, even after a party has signed the Additional Protocol, there are several escape clauses if the party has reservations regarding the Protocol. For example, Article 12 (Reservations and declarations) states that "reservations and declarations made by a Party to a provision of the Convention shall be applicable also to this Protocol, unless that Party declares otherwise at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession." Article 15 §1 (Denunciation) offers another escape clause: "any Party may, at any time, denounce this Protocol by means of a notification addressed to the Secretary General of the Council of Europe."

Recommendations

It is now clearly recognized that the Internet has become an integral part of a nation's critical infrastructure. Nations around the world are doing everything within their technological capacity to stay one step ahead of criminal organizations using computers in the commission of crimes. It has become quite obvious that governments around the world do not have the capability or capacity to
combat transnational criminal organizations (TOCs) or prevent the proliferation of xenophobic and racist websites on their own. Technology and cyberspace are changing faster than countries can legislate internally and negotiate externally. Wheeler (2018) points out that "the challenge today is the rapid speed at which cyberspace morphs and evolves. It is changing faster than international summits can be convened, making obsolete any deal that takes longer than a week or two to negotiate." The Convention on Cybercrime is an important step toward the harmonization of cybercrime law. While there has been resistance to the Convention from some countries, especially Russia and China, it has been an important document promoting a multilateral agreement on cybercrime; the alternative is more chaos in cyberspace and continuing vulnerability for nation states. According to Vatis (2010), several steps can be taken to improve the Convention. One of the most important would be an amendment authorizing requesting parties that have been denied assistance without a legitimate, credible reason to engage in unilateral, cross-border investigative action (Vatis 2010). Further, the Convention's lack of an enforcement mechanism to ensure that Parties comply with their treaty obligations needs to be addressed, and the escape clauses in the Convention's Articles and Additional Protocol that allow Parties to denounce the Convention need to be shored up. The Convention could add a meaningful enforcement mechanism by which a nation that is denied assistance can seek redress (Vatis 2010), and it would add credibility to its objectives if party members required a nation that denies assistance to provide specific reasons for doing so in writing (Vatis 2010).
Currently, a Convention member need only state that it will not contribute to an investigation because doing so conflicts with its domestic law. The important question then becomes: does it really conflict with domestic law, or is that an easy way out to deny responsibility and circumvent an investigation?

Conclusion

As Wheeler pointed out, "technology and cyberspace are changing faster than countries can legislate internally and negotiate externally" (2018). While the Convention has merit, it is a living document that must adapt to the realities of a new technological world. Wheeler further explains that "in the absence of a binding global accord, the world will remain vulnerable to a motley mix of hackers, warriors, intelligence operatives, criminals, and angry teenagers – none of whom can be distinguished from behind the three proxy servers" (Wheeler 2018). Most importantly, as cyber technology evolves and morphs, "it is changing faster than international summits can be convened, making obsolete any deal that takes longer than a week or two to negotiate" (Wheeler 2018). Cronin states that "the Internet, an effective vehicle to spread civil society and democratic ideals, also provides a means to disseminate violent ideologies,
coordinate criminal behavior, share combat tactics, research powerful weapons, and undermine traditional tools of order" (2013). In fact, the use of the Internet and websites by transnational organized criminal organizations (TOCs), white supremacist groups, and hate groups has fundamentally altered how governmental agencies address cybercrime in the twenty-first century. In conclusion, this author agrees with Wheeler's assessment that without a global consensus on what constitutes cyberwar, the world will be left in an anarchic state governed by contradictory laws and norms and vulnerable to the possibility of a devastating war launched by a few anonymous keystrokes (Wheeler 2018). The Legislative Framework of the European Union (EU) Convention on Cybercrime provides a good place from which to begin addressing some of the vicissitudes of the cyber world by calling our attention to, and defining clearly and succinctly, the nature of those obstacles. We ignore these warnings at our own peril.

Cross-References ▶ Cybercrime and Legislation in an African Context ▶ Cybersecurity as an Industry: A Cyber Threat Intelligence Perspective ▶ Organized Crime and Cybercrime

References

Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. European Treaty Series – No. 189. Retrieved from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/189
Baron, R. M. A. (2002). Critique of the international cybercrime treaty. CommLaw Conspectus, 10, 263. Retrieved from https://scholarship.law.edu/commlaw/vol10/iss2/7
Clough, J. (2012). The Council of Europe Convention on Cybercrime: Defining 'crime' in a digital world. Criminal Law Forum, 23, 363–391. https://doi.org/10.1007/s10609-012-9183-3
Convention on Cybercrime. European Treaty Series – ETS 185, Budapest, 23.XI.2001. Retrieved from https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=0900001680081561
Cronin, A. K. (2013). How global communications are changing the character of war. The Whitehead Journal of Diplomacy and International Relations, 14(1), 25–39.
da Cruz, J. A. (2019). Intelligence sharing among agencies and internationally. In J. Vacca (Ed.), Online terrorist propaganda, recruitment, and radicalization. Taylor & Francis Group, CRC Press.
Dunham, K., & Melnick, J. (2009). Malicious bots: An inside look into the cyber-criminal underground of the internet. Boca Raton: CRC Press.
Explanatory Report to the Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. European Treaty Series – No. 189. Retrieved from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/189
GFCE. (October 18, 2019). The Budapest Convention on Cybercrime: A framework for capacity building. Retrieved from https://www.thegfce.com/news/news/2016/12/07/budapest-convention-on-cybercrime
Holt, T. J., Freilich, J. D., & Chermak, S. M. (2020). Legislation specifically targeting the use of the internet to recruit terrorists. In J. R. Vacca (Ed.), Online terrorist propaganda, recruitment, and radicalization. New York: CRC Press.
Jakubowski, G. (2019). What's not to like? Social media as information operations force multiplier. Joint Force Quarterly, 94, 8–17.
Panetta, L. (October 11, 2012). Remarks by Secretary Panetta on cybersecurity to the Business Executives for National Security, New York City. Retrieved from http://www.defense.gov/utility/printitem.aspx?print=http://www.defense.gov/transcripts/transcript.aspx?transcriptid=5136
Penn, M., Miller, J. K., & Schwarzenberg, J. (November 2015). Countering cyber extremism. Arthur D. Simons Center for Interagency Cooperation, InterAgency Paper No. 17W.
Podgor, E. S. (2004). Cybercrime: National, transnational, or international? Wayne Law Review, 50, 97.
Protocol on Xenophobia and Racism. European Treaty Series – ETS 189, 2003. Retrieved from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/189
Redford, M. (2017). U.S. and EU legislation on cybercrime. In 2011 European Intelligence and Security Informatics Conference (pp. 34–37).
Roulo, C. DOD must stay ahead of the cyber threat, Dempsey says. U.S. Department of Defense. Retrieved from http://archive.defense.gov/news/newsarticle.aspx?id=120379
Schulman, C. (February 5, 2019). The Council of Europe Convention on Cybercrime: Status quo and future challenges. Retrieved from http://www.oas.org/juridico/english/cyb_pry_coe.pdf
Theohary, C. A., & Rollins, J. (2011). Terrorist use of the internet: Information operations in cyberspace. Congressional Research Service (CRS Report R41674).
Vatis, M. A. (2010). The Council of Europe Convention on Cybercrime. In Proceedings of a workshop on deterring cyber attacks: Informing strategies and developing options for U.S. policy.
Weber, A. M. (2003). The Council of Europe's Convention on Cybercrime. Berkeley Technology Law Journal, 18(1), 425–446.
Wheeler, T. (September 12, 2018). In cyberwar, there are no rules: Why the world desperately needs digital Geneva Conventions. Foreign Policy. Retrieved from https://foreignpolicy.com/2018/09/12/in-cyberwar-there-are-no-rules-cybersecurity-war-defense
Zilber, N. (August 31, 2018). The rise of the cyber-mercenaries: What happens when private firms have cyberweapons as powerful as those owned by governments? Foreign Policy. Retrieved from https://foreignpolicy.com/2018/08/31/the-rise-of-the-cyber-mercenaries-israel-nso

Data Breaches and GDPR

12

Elif Kiesow Cortez

Contents
Introduction
GDPR: Why Is It Important?
Data Breach Notification Under the GDPR
  Article 33: Notification of Data Breach to the Supervisory Authority
  Article 34: Communication of Data Breach to the Data Subject
  Article 83: Administrative Fines for Data Breach
Case Examples on Data Breaches
Conclusion
Cross-References
References

Abstract

This chapter focuses on the requirements for data breach notification and communication under the EU General Data Protection Regulation (GDPR). The GDPR is intended to advance the European Commission's Digital Single Market Strategy, which focuses on enabling businesses and governments to benefit fully from digitalization so that the European market can thrive, while protecting the individual's fundamental right to privacy. The GDPR applies internationally; businesses all around the world may therefore be required to comply with its data breach obligations. In the current cyber threat landscape, the increased risk of data breaches as well as the extraterritorial applicability of the GDPR draw much attention to the GDPR and data breaches. This chapter briefly introduces the importance and relevance of the GDPR, its data breach notification and communication requirements, and its risk assessment methods, together with contemporary case examples of data breach incidents. The chapter provides an overview of the relevant provisions of the GDPR and points out examples that can serve as guidelines on data protection impact assessment approaches.

E. Kiesow Cortez (*)
The Hague University of Applied Sciences, The Hague, The Netherlands
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_39

Keywords

General data protection regulation · GDPR · Data breach notification · DPIA · Risk assessment · Privacy right · Data governance · Personal data breach

Introduction

The new European General Data Protection Regulation (GDPR), which was published in the Official Journal of the EU in May 2016 and is applicable as of 25 May 2018, introduces a new framework for reporting data breaches compared to the previous European Data Protection Directive of 1995. This chapter examines the new data breach notification and communication requirements under the GDPR. The GDPR is drafted in line with the EU Digital Single Market Strategy (European Commission 2015), which aims to create the conditions for digital networks and services to flourish while providing trustworthy infrastructures supported by sound rules and enforcement mechanisms. In order to gain EU citizens' trust in using digital services, service providers established in the EU, as well as those targeting EU residents, are now obliged to comply with strict data protection rules under the GDPR. The GDPR also has extraterritorial effects and serves as an international reference point in discussions on data protection laws. Therefore, the data breach notification and communication requirements under the GDPR are relevant not only for companies with an establishment in the EU but also for foreign companies that are subject to the GDPR as a result of providing goods and services to, or monitoring the behavior of, EU residents. The scope of this chapter is limited to the data breach notification and communication requirements under the GDPR. The data breach notification provisions under the Directive on security of network and information systems (NIS Directive, EU 2016/1148) and Commission Regulation 611/2013 on electronic communications are not within the scope of this chapter. This chapter first focuses on explaining the relevance and importance of the GDPR for the European and international data protection and privacy domain.
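Although the detailed provisions are presented later in the chapter, the overall decision structure of the GDPR's breach duties can be summarized in a short illustrative sketch. This is not legal advice and not an official implementation: the function and field names are invented for this example, and the thresholds simply mirror the general structure of Article 33 (notify the supervisory authority, where feasible within 72 hours of becoming aware, unless the breach is unlikely to result in a risk) and Article 34 (communicate to data subjects when a high risk is likely).

```python
from dataclasses import dataclass


@dataclass
class Breach:
    """Hypothetical record of a personal data breach (illustrative only)."""
    risk_to_individuals: str      # "none", "risk", or "high"
    hours_since_awareness: float  # time since the controller became aware


def notification_duties(breach: Breach) -> dict:
    """Sketch of the Article 33/34 decision structure under the GDPR.

    Art. 33: notify the supervisory authority without undue delay and,
    where feasible, within 72 hours, unless the breach is unlikely to
    result in a risk to individuals; a later notification must be
    accompanied by reasons for the delay.
    Art. 34: communicate the breach to affected data subjects when it
    is likely to result in a *high* risk to them.
    """
    notify_authority = breach.risk_to_individuals in ("risk", "high")
    explain_delay = notify_authority and breach.hours_since_awareness > 72
    notify_subjects = breach.risk_to_individuals == "high"
    return {
        "notify_supervisory_authority": notify_authority,
        "explain_delay": explain_delay,
        "communicate_to_data_subjects": notify_subjects,
    }


# Example: a high-risk breach discovered 10 hours ago.
duties = notification_duties(Breach("high", 10))
```

In practice, of course, the risk classification itself is the hard part; the chapter's later discussion of data protection impact assessments addresses how such risk levels are reasoned about.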
After highlighting the significance of the GDPR, the relevant GDPR articles and recitals on data breach notification and communication will be presented. In the final subchapter, the first data breach decisions under the GDPR will be briefly introduced, and a case example will be analyzed in detail: a data breach incident that led three separate data protection authorities to impose administrative fines in the same case.


GDPR: Why Is It Important?

Before the GDPR, data protection within the EU was legislated by the Data Protection Directive 95/46/EC of 1995. As this prior legislation took the form of a directive, it could only be enforced in the EU Member States by being implemented into national legislation, which meant every member state could end up with a different national data protection law. This fragmented application of data protection rules created problems both for citizens and for businesses willing to operate in, or target services to, the EU market. From the citizens' perspective, fragmentation meant that protection measures across member states did not protect the individual's right to the protection of personal data equally; from the companies' perspective, tracking compliance requirements across different jurisdictions was very costly (Albrecht 2016). With the objective of creating a non-fragmented, uniform implementation of EU-level data protection law, the GDPR was adopted in 2016 after four years of intense negotiations (Voigt and von dem Bussche 2017). The GDPR has been at the center of discussions regarding its extraterritorial applicability since its draft stage (Schwartz 2013). The draft regulation was reportedly seen as controversial because it required any company, including international companies, that processes data of European citizens to comply with the GDPR (Victor 2013). The Regulation's territorial scope is defined in Article 3 of the GDPR.
The article states that the GDPR applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or (b) the monitoring of their behavior as far as their behavior takes place within the Union. The GDPR has gained international attention for its consent requirements, which led to a worldwide update of privacy policies. In addition to its rules on how to acquire consent, the GDPR's data breach notification and communication requirements are also relevant for many entities across the globe. The extraterritorial applicability can be viewed as an attempt to prevent data controllers from circumventing EU regulation by relocating to, or contracting, third parties in non-EU countries (Kuner 2010). The need and motivation to draft the GDPR were reported in a press release by the European Commission as the intention to create legislation that protects the fundamental right to privacy of European citizens while remaining in line with the EU Digital Single Market Strategy. A European Commission survey from 2015 showed that 67% of responding EU citizens were concerned about having no control over the information they provide online, as they did not know how this information could be used (Eurobarometer 2015). The European Council emphasized that this mistrust led EU citizens to refrain from using online services to their full potential. It was then highlighted that the GDPR aims to provide individuals with more control over how to share their data "given the importance of creating the trust that will allow the digital economy to develop across
the internal market." Therefore, the EU's efforts to create and enforce the GDPR should also be perceived as an effort to gain individuals' trust in using online services, by giving more autonomy and control to the individual in deciding with whom to share personal data and by giving companies more incentives to better inform citizens of their privacy-respecting practices (Erdemoglu 2016). The goal of overcoming EU citizens' hesitation and mistrust of online services was also stated in the EU Digital Single Market Strategy (European Commission 2012a). The citizens' mistrust can be understood as a result of increased cybercriminal attacks as well as increased information on privacy threats, such as Snowden's revelations pointing at surveillance practices. The 2015 decision of the Court of Justice of the European Union in the Schrems case declared that US data protection laws do not provide an "adequate level of data protection" for European citizens (CJEU Schrems Judgment 2015). The case also fed discussions on security vs. privacy, where the collection of metadata for surveillance purposes can be seen as a privacy infringement under the European approach to the right to privacy. Recent research also shows how seemingly non-personal data variables, such as location data or metadata based on the use of phone services, can be used to extract personal data (Schneier 2015; Acquisti et al. 2016). According to the European Charter of Fundamental Rights, the "right to protection of personal data," in other words the privacy right, is a fundamental right. In practice, however, an individual's agreement to be subject to a company's privacy policies could also be viewed as a right based on a contract.
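The two limbs of the Article 3(2) test quoted earlier in this section can be rendered, purely for illustration, as a boolean check. The function name and inputs are invented for this sketch; in practice, each limb (for instance, whether an entity "offers goods or services" to data subjects in the Union) is a matter of legal interpretation rather than a mechanical flag.

```python
def gdpr_territorial_scope_applies(established_in_eu: bool,
                                   offers_goods_or_services_in_eu: bool,
                                   monitors_behavior_in_eu: bool) -> bool:
    """Illustrative sketch of the GDPR's territorial scope (Article 3).

    Article 3(1) covers controllers and processors established in the
    Union. Article 3(2) extends the Regulation to non-EU controllers and
    processors that (a) offer goods or services to data subjects in the
    Union, or (b) monitor data subjects' behavior within the Union.
    """
    if established_in_eu:
        return True  # Art. 3(1): establishment in the Union
    # Art. 3(2): either limb suffices for a non-EU entity
    return offers_goods_or_services_in_eu or monitors_behavior_in_eu


# Example: a non-EU web shop that targets customers in the Union falls
# under Art. 3(2)(a), regardless of where it is established.
applies = gdpr_territorial_scope_applies(False, True, False)
```

The disjunctive structure is what gives the Regulation its extraterritorial reach: a single qualifying activity directed at the Union is enough to bring a foreign entity within scope.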
The GDPR requirement that companies acquire the data subject's informed consent before processing the relevant personal data puts emphasis on contractual requirements; but given the fundamental-right character of the privacy right, EU data protection regulation addresses issues covered in both private law and public law. The debate over competent jurisdiction, as well as the extraterritorial application of EU data protection regulation, should be understood in this light (Kuner 2010). The extraterritoriality of the GDPR requires non-EU countries, too, to provide adequate protection of personal data and to respect the fundamental right to privacy. Countries whose laws are adequately aligned with the GDPR receive an "adequacy" decision by the Commission, stating that the relevant country's laws provide an appropriate level of protection of personal data. To date, the following countries have received an adequacy decision: Andorra, Argentina, the Faeroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, and Uruguay. Canada and the USA received partial adequacy decisions: in Canada, the decision applies only to private entities falling under the scope of the Canadian Personal Information Protection and Electronic Documents Act, and in the USA, the adequacy decision covers only companies that self-certified compliance with data protection laws under the Privacy Shield agreement. The Commission has reported active engagement on the progression of data protection laws with Japan, Korea, and India, and with countries in Latin America that have expressed interest in adequacy findings (European Commission 2017). Two recent pieces of data protection legislation were prepared with many similarities to the GDPR: the California Consumer Privacy Act, coming into force in January 2020 (Goldman 2019), and the Brazilian Data Protection Act, coming into force in February 2020 (Silva et al. 2019).

12

Data Breaches and GDPR

243

Regarding Brexit and the extraterritorial applicability of the GDPR, both the UK's Data Protection Authority, the Information Commissioner's Office (ICO), and the European Data Protection Board have issued guidelines (EDPB 2019) on how UK entities can stay compliant with the GDPR. The EDPB guidelines clarify that in the case of a no-deal Brexit, the UK will receive third-country status in the GDPR context; data transfers from EU member states to the UK will therefore need to comply with the international data transfer criteria. As the GDPR has been enforceable in the UK since 25 May 2018, the ICO's first advice for companies is to stay compliant with the GDPR (ICO Guidelines 2019). The UK's goal of remaining compliant with the GDPR, as well as the developments listed in non-EU countries, demonstrates the international relevance of the GDPR's data breach communication and notification requirements. In addition to government surveillance, another main threat to individuals' privacy is cybercrime, and the data breach notification and communication requirements under the GDPR take into account the probability of a firm being subject to a cyber attack. A SANS Institute survey of IT professionals reported phishing and ransomware as the top two cybersecurity threats: 40% of respondents selected phishing as the top threat, and 20% selected ransomware (SANS Institute 2017).
Phishing refers to cybercriminals sending bulk e-mails that appear to come from a trustworthy organization (such as a bank) in order to steal individuals' personal data, whereas ransomware is malware that encrypts the data on the user's computer, with the decryption key provided only if a ransom is paid to the cybercriminals (Kostopoulos 2017). If a phishing or ransomware attack is targeted at a company, an employee's misstep in clicking on a wrong link might lead to a data breach incident for the company. Some Member States, including Germany, had already introduced data breach notification rules for data controllers in their national legislation. The previous EU-level Data Protection Directive 95/46/EC of 1995 did not include such a requirement; the GDPR requirements for data breach notification and communication are therefore a new development (Houser and Voss 2018).

The GDPR addresses data breaches in two separate articles: Article 33 sets out the requirements for data breach notification to the supervisory authority, and Article 34 sets out the requirements for data breach communication to the data subject. The GDPR demands a risk-based approach to compliance, and in the data breach notification and communication obligations, respect for the fundamental right to protection of personal data is positioned at the core of the risk assessment. Articles 33 and 34 differentiate between cases in which a risk or a high risk is posed to the data subject's rights and freedoms, obliging data controllers either only to notify the supervisory authority of the breach (Art. 33 GDPR) or also to communicate the breach to the data subject (Art. 34 GDPR). As per Article 4(12) of the GDPR, a personal data breach is defined broadly and in an extensive manner (Freiherr and Zeiter 2016): any accidental or unlawful destruction, loss, alteration, or unauthorized disclosure of or access to personal data is classified as a data breach. The GDPR is the first EU data protection law obliging data controllers to perform a data protection impact assessment, which requires a risk-based approach (Gellert 2018). Companies face difficulties in separating the risk-based approach from the legal compliance approach, or in pinpointing the assessed risk as compliance risk (Gellert 2018). Recital 83 of the GDPR addresses the security of personal data: the data controller or processor should evaluate the risks associated with processing data and put in place measures to mitigate such risks. Recital 83 refers to encryption as an example of a measure that could mitigate data processing risk. The regulation provides no checklist of measures to follow, only an example of a potential measure, which is in line with the technology-neutral character of the GDPR. This technology neutrality can be seen as aimed at keeping the legislative act relevant despite technological developments. Hildebrandt and Tielemans note that technology neutrality serves to make legislation more sustainable by avoiding the need for lengthy law amendment procedures (Hildebrandt and Tielemans 2013). This subchapter introduced the EU perspective on the protection of individuals' personal data, some potential risks associated with personal data, and the risk-based approach of the GDPR. A more in-depth discussion of risk assessment is provided in the following subchapter in order to better illustrate the data breach notification and communication requirements under the GDPR.

Data Breach Notification Under the GDPR

The European Data Protection Board endorsed the Working Party 29 (WP 29) data breach notification guidelines, which state that a data security policy should aim to prevent breaches as much as possible but should also include a timely reaction plan. The GDPR covers data breach notification and communication under Articles 33 and 34, respectively, differentiating between cases in which a risk or a high risk is posed to the data subject's rights and freedoms; Article 83 of the GDPR indicates the relevant administrative fines for data breaches. Articles 33 and 34 differentiate these cases to oblige data controllers either only to notify the supervisory authority of the breach (Art. 33 GDPR) or to communicate the data breach also to the data subject (Art. 34 GDPR). For the GDPR to oblige communication of the personal data breach to the data subject, the potential risk to the data subject's rights and freedoms must be perceived as "high." The fact that the GDPR does not require data subjects to be alerted at lower levels of risk can be seen as in line with the European goal of overcoming EU citizens' hesitation and mistrust toward online services under the EU Digital Single Market Strategy (European Commission 2012b). Another important requirement the GDPR imposes on companies is to perform a data protection impact assessment, which is closely linked to possible risks regarding personal data breaches (Albrecht 2016); the data protection impact assessment is covered by the GDPR in Article 35.

The WP 29 Guidelines analyze the personal data breach definition under the GDPR. The definition in Article 4(12) defines a data breach as a breach of security leading to accidental or unlawful destruction, loss, alteration, or unauthorized disclosure of or access to personal data transmitted, stored, or otherwise processed. Destruction of the data means that the data no longer exists, while loss of personal data refers to instances where the data still exists but the controller has lost access to it. An example can be taken from section "GDPR: Why Is It Important?": in reference to ransomware, it was highlighted that an employee's reckless behavior might lead to the company database becoming encrypted by cybercriminals. If the company does not hold another copy of the database, then for the duration of the encryption the company would be experiencing a personal data breach in the form of loss of personal data (WP 29 Guidelines 2018). GDPR Recital 85 gives further information on the possible consequences of a personal data breach: ". . .physical, material or non-material damage to natural persons such as loss of control over their personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorized reversal of pseudonymization, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned."

Article 33: Notification of Data Breach to the Supervisory Authority

Article 33 of the GDPR indicates that "In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent . . ., unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the notification to the supervisory authority is not made within 72 hours, it shall be accompanied by reasons for the delay." The WP 29 Guidelines 2018 divide data breaches into three categories: (1) a confidentiality breach, involving an unauthorized or accidental disclosure of, or access to, personal data; (2) an integrity breach, involving an unauthorized or accidental alteration of personal data; and (3) an availability breach, involving an accidental or unauthorized loss of access to, or destruction of, personal data.

Discussion of Article 33 emphasizes the wording "having become aware of," since the moment of awareness is stated in the regulation as the start of the 72-hour time limit. This wording could invite debate about whether it creates a perverse incentive for companies not to invest in proper data breach detection systems, so as to avoid becoming aware of a breach in their systems and thereby evade the 72-hour time limit or the notification requirement altogether. However, Article 5(1)(f) of the GDPR requires that "personal data should be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures ('integrity and confidentiality')." In line with Article 5, it is therefore possible to state that Article 33 does not create opportunities for moral hazard, as not implementing data breach detection systems would not constitute "ensuring appropriate security of the personal data" (Ustaran 2018). Recital 87 likewise indicates that, to promptly inform the supervisory authority and the data subject, it should be ascertained whether all appropriate technological protection and organizational measures have been implemented to establish immediately whether and when a personal data breach has taken place. Regarding the notification procedure and time limits, GDPR Recital 88 states that when "setting detailed rules concerning the format and procedures for the notification of personal data breaches, due consideration should be given to the circumstances of that breach, including whether or not personal data had been protected by appropriate technical protection measures, effectively limiting the likelihood of identity fraud or other forms of misuse." The recital also emphasizes that such rules should take into consideration the legitimate interests of law-enforcement authorities where early disclosure could unnecessarily hamper the investigation of the circumstances of a personal data breach. Because a breach must be reported to the supervisory authority only in the case of a perceived risk, and within 72 hours, the controller must, immediately after being informed of or discovering a breach, assess the risk that could result from it as well as find means to contain the incident.
This is why the data protection impact assessment becomes very useful: companies can use an assessment conducted before the breach occurred to evaluate the severity of the breach's impact on data subjects. This also helps companies determine whether notification to the supervisory authority and, if necessary, communication to the individuals concerned is required (WP29 Guidelines 2018). Measures that can increase a company's reaction speed to a data breach incident include assigning a responsible person tasked with addressing incidents, establishing the existence of a breach and assessing the risk, having a prior risk assessment in which incident scenarios are grouped by the likelihood of risk to individuals, and documenting the breach and the steps taken after it in order to correctly notify the supervisory authority (WP29 Guidelines 2018).
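The branching obligations described above can be sketched in code. This is an illustrative simplification, not a compliance tool: classifying the risk level is the controller's qualitative judgment, and the enum and function names here are invented for the example. Only the Article 33/34 branching and the 72-hour clock starting at awareness are taken from the regulation.

```python
from datetime import datetime, timedelta
from enum import Enum

class Risk(Enum):
    NONE = 0  # breach unlikely to result in a risk to rights and freedoms
    RISK = 1  # a risk exists: notify the supervisory authority (Art. 33)
    HIGH = 2  # a high risk exists: also communicate to data subjects (Art. 34)

def breach_obligations(risk: Risk, became_aware: datetime):
    """Map an assessed risk level to the controller's GDPR duties.

    Returns (notify_authority, communicate_subjects, notification_deadline).
    The 72-hour clock starts when the controller becomes aware of the
    breach (Art. 33(1)); a later notification must explain the delay.
    """
    notify_authority = risk in (Risk.RISK, Risk.HIGH)
    communicate_subjects = risk is Risk.HIGH
    deadline = became_aware + timedelta(hours=72) if notify_authority else None
    return notify_authority, communicate_subjects, deadline

aware = datetime(2019, 7, 9, 14, 30)
notify, communicate, deadline = breach_obligations(Risk.HIGH, aware)
print(notify, communicate, deadline)  # True True 2019-07-12 14:30:00
```

Note that the sketch reproduces the asymmetry discussed in the text: every Article 34 case is also an Article 33 case, but not vice versa.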

Article 34: Communication of Data Breach to the Data Subject

While Article 33 focuses on a risk assessment approach through the data controller's evaluation of the data subject's rights and freedoms at stake, Article 34 requires that data subjects themselves be informed of a data breach when the breach would pose a high risk to their rights and freedoms. GDPR Article 34 states that "When the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall communicate the personal data breach to the data subject without undue delay." In order to incentivize companies to take appropriate measures to prevent a data breach, Article 34(3) lists exceptions to the obligation to communicate the breach to the data subject (Freiherr and Zeiter 2016). One of these exceptions applies where the data controller has rendered the personal data unintelligible to unauthorized access, for example through encryption. Another method of improving data security under the GDPR is "pseudonymization," introduced in Recitals 28 and 29 and referred to in several articles as a measure to improve data security (Kuner et al. 2019). Recital 28 refers to the application of pseudonymization as a measure that can reduce the risks to the data subjects concerned and help controllers and processors meet their data protection obligations. As the aim of pseudonymization is to further lower the risks to the rights and freedoms of individuals associated with data breaches, Recital 29 of the GDPR sets out how to create incentives for businesses to pseudonymize their datasets. To incentivize businesses to apply pseudonymization, the recital reads: ". . .when processing personal data, measures of pseudonymization should, whilst allowing general analysis, be possible within the same controller when that controller has taken technical and organizational measures necessary to ensure, for the processing concerned, that this Regulation is implemented, and that additional information for attributing the personal data to a specific data subject is kept separately." A tension between the risk-based approach and offering pseudonymization as a measure arises when companies wish to create a low-risk data processing environment.
If the GDPR encouraged complete anonymization of datasets, then defining the data as personal data (and hence applying the GDPR) would not be necessary, as the data would no longer be seen as belonging to an identified or identifiable individual. However, in addition to the data breach notification and communication requirements, the GDPR, as mentioned in section "Introduction," also aims at increasing individuals' control over the processing of their personal data, and therefore strongly protects data subject rights such as the right to access and the right to erasure. To grant individuals their right to access the personal data collected about them, the company must not completely anonymize the personal data it collects. This is why pseudonymization, rather than complete anonymization, is advised, creating an in-between category relative to easily identifiable personal data and anonymized data (Koops 2014). Receiving too many data breach incident notifications would be expected to alarm users about cyber threats and could eventually lead them to further mistrust digital services. The fact that the GDPR does not require data subjects to be alerted at lower levels of risk can be seen as in line with the European goal of overcoming EU citizens' mistrust toward online services under the EU Digital Single Market Strategy (European Commission 2012a). It can therefore be concluded that in each data breach incident where Article 34 is applicable, Article 33 is also applicable; however, a breach may not lead to a high-risk consequence, in which case Article 33 is applicable without Article 34.
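The distinction between pseudonymization and anonymization can be illustrated with a small sketch. Keyed hashing (HMAC) is one common pseudonymization technique, assumed here purely for illustration; the key constant and record fields are hypothetical. As long as the key is kept separately from the working dataset, as Recital 29 envisages, the controller can still link pseudonyms back to individuals (supporting access and erasure requests), whereas destroying the key would move the data toward anonymization.

```python
import hashlib
import hmac

# Hypothetical secret, kept separately from the working dataset
# ("additional information ... kept separately", Recital 29).
PSEUDONYM_KEY = b"stored-in-a-separate-key-vault"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).

    Unlike full anonymization, the controller holding the key can still
    attribute records to an individual, so general analysis and data
    subject rights remain possible.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "driver@example.com", "city": "Paris"}
pseudonymized = {**record, "email": pseudonymize(record["email"])}
# The same input always maps to the same pseudonym, allowing per-user
# analysis without exposing the raw identifier in the working dataset.
assert pseudonymize("driver@example.com") == pseudonymized["email"]
```

A plain unkeyed hash would be weaker here: an attacker could test candidate e-mail addresses against the hashes, which is why the separately stored key matters.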


The risk-management approach of the GDPR incentivizes companies to assess risk before a data breach occurs and places them in a decision-making role: deciding on the risk impact of an occurred breach and complying with Articles 33 and 34 accordingly. It is viewed as legislation that requires organizations to make the law work efficiently rather than simply following its prescribed articles (Quelle 2018). In line with the benefits of technology-neutral legislation in ICT fields (Koops 2006), the GDPR takes a step toward enhanced compliance requirements, which now also include companies making decisions on the risk posed by their actions. For example, the WP 29 Guidelines 2018 note that encryption software and methods should be in line with current technology. This means a company cannot argue that it perceived no risk from a breach in its systems: the risk would still exist, even if the data was encrypted, if that encryption could not be considered secure given the most current encryption methods. In assessing the risks, some key elements should be included in the evaluation: the type of the breach; the nature, sensitivity, and volume of the personal data; the ease of identification of individuals; and the severity of the potential consequences for the individuals. Expecting firms to perform risk analysis and risk disclosure is not a new concept within the field of financial regulation: in order to reduce information asymmetries (Akerlof 1970) between investors and managers, publicly listed firms are required to disclose their financial risk publicly, and the SEC currently also requires a company's cybersecurity risk to be disclosed publicly (SEC Statement 2018).
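As an illustration only, the evaluation elements listed above could be combined into a coarse severity score. The weights, the 0-3 rating scale, and the thresholds below are invented for the example; the WP 29 guidance calls for a qualitative, context-specific assessment, not a formula.

```python
# Hypothetical weights for the assessment elements named in the text.
FACTOR_WEIGHTS = {
    "breach_type": 1.0,           # confidentiality / integrity / availability
    "data_sensitivity": 2.0,      # nature, sensitivity, and volume of the data
    "identifiability": 1.5,       # ease of identifying individuals
    "consequence_severity": 2.0,  # severity of potential consequences
}

def severity_score(ratings: dict) -> float:
    """Weighted average of per-factor ratings, each on a 0-3 scale."""
    total_weight = sum(FACTOR_WEIGHTS.values())
    weighted = sum(FACTOR_WEIGHTS[f] * ratings[f] for f in FACTOR_WEIGHTS)
    return weighted / total_weight

def assessed_risk(score: float) -> str:
    # Invented thresholds mapping the score onto the GDPR risk levels.
    if score < 0.5:
        return "unlikely risk"  # document internally only
    if score < 2.0:
        return "risk"           # notify the supervisory authority (Art. 33)
    return "high risk"          # also communicate to data subjects (Art. 34)

ratings = {"breach_type": 2, "data_sensitivity": 3,
           "identifiability": 3, "consequence_severity": 2}
print(assessed_risk(severity_score(ratings)))  # high risk
```

The point of the sketch is structural: the same factor list feeds both the Article 33 and Article 34 decisions, with only the threshold differing.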
As mentioned in section "GDPR: Why Is It Important?," the data breach notification requirements under the GDPR attract international interest, and the last section will introduce the first international case examples on data breach notification and communication requirements.

Article 83: Administrative Fines for Data Breach

In short, the risk-based approach of the GDPR provides a clear distinction as to when Article 33 or Article 34 is triggered. According to Article 83(4) of the GDPR, non-compliance with the data breach notification and communication requirements listed in these articles can result in administrative fines of up to 10,000,000 EUR or, in the case of an undertaking, up to 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher. The UK Data Protection Authority announces in the personal data breach section of its website that this fine can be combined with its corrective powers under Article 58, in line with the Working Party 29 guidelines on administrative fines. Article 58(2)(e) states that the data protection supervisory authorities have the corrective power to order the controller to communicate a personal data breach to the data subject. If the corrective powers under Article 58(2) are not complied with, then as per Article 83(6) companies may be fined up to a much higher upper limit of 20,000,000 EUR or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher. This risk exists if companies do not comply with an order of the supervisory authority regarding personal data breaches (WP29 Guidelines 2017). The Working Party 29 Guidelines also note that failure to notify a breach could indicate a lack of appropriate security measures. In such an event, the supervisory authority may issue administrative fines both for failure to notify or communicate the breach under Articles 33 and 34 and, separately, for not having appropriate security measures under Article 32 (WP29 Guidelines 2018). Additional guidance on setting the level of administrative fines is given in Recital 148 of the GDPR, which asserts that fines should be issued taking into account the nature, gravity, and duration of the infringement; its intentional character; actions taken to mitigate the damage suffered; the degree of responsibility and any relevant previous infringements; the manner in which the infringement became known to the supervisory authority; compliance with measures ordered against the controller or processor; adherence to a code of conduct; and any other aggravating or mitigating factor.
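The fine ceilings in Articles 83(4) and 83(6) reduce to a simple "whichever is higher" computation. A sketch follows; the function name is invented, and the result is only the upper limit, since actual fines are set case by case in light of the Recital 148 factors.

```python
def breach_fine_cap_eur(annual_turnover_eur: float,
                        defied_supervisory_order: bool = False) -> float:
    """Upper limit of the administrative fine for breach-related infringements.

    Art. 83(4): infringements of Arts. 33/34 are capped at 10,000,000 EUR
    or 2% of total worldwide annual turnover, whichever is HIGHER.
    Art. 83(6): non-compliance with a supervisory authority's order raises
    the cap to 20,000,000 EUR or 4% of turnover, whichever is higher.
    """
    if defied_supervisory_order:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)

# For a firm with 3 billion EUR turnover, 2% (60 million EUR) exceeds the
# 10 million EUR floor, so the turnover-based cap applies.
print(breach_fine_cap_eur(3_000_000_000))  # 60000000.0
# For a smaller firm, the fixed floor dominates.
print(breach_fine_cap_eur(100_000_000))    # 10000000
```

Because the cap scales with turnover, the same infringement exposes a large multinational to a far higher ceiling than a small firm, which is the deterrence logic the text describes.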

Case Examples on Data Breaches

As the GDPR has been actively enforceable only for the last 12 months, there are not many case examples in which it was applicable. The most recent cases decided under the GDPR are briefly described here. In 2018, a 20,000 EUR fine was issued in Germany due to late notification of a data breach to the data protection authority and failure to communicate the breach to the data subjects (Hamburg Commissioner for Data Protection 2018).

In 2019, the Hungarian Data Protection Authority issued a 35,000 EUR fine to a political party. The party had experienced a breach in which the data of 6,000 individuals was accessed by a hacker, and the command used in the breach was made publicly available, enabling anyone with the command to access the dataset. The Hungarian Data Protection Authority emphasized in its decision that the party's information security practices were not adequate (i.e., the encryption of the database was not strong enough) (CMS 2019). Also in 2019, a 61,500 EUR fine was issued in Lithuania by the Lithuanian Data Protection Authority to a payment service company. The authority indicated in its decision that the amount of the fine should be set so as to serve as a deterrent against future incidents. Its investigation found that the personal data of the users of the payment service had been available on the internet for two days (July 9-10, 2018) and that the company did not have appropriate organizational measures in place to ensure the security of the personal data. The authority also took into account that the breach was not notified to the supervisory authority and that the payment service information was not kept sufficiently encrypted.


In the following section, the example of Uber, which recently experienced a personal data breach incident and faced administrative fines, is discussed. The Uber case is significant for demonstrating the decision-making process of data protection authorities on a large-scale data breach incident, as well as for representing the international applicability of data protection laws. It is important to note that the Uber case was decided in 2018, not under the GDPR; however, as three different national data protection authorities ruled on the case, their justifications are important for predicting future decisions under the GDPR. The case concerns the data breach incident experienced by Uber Technologies Inc. and was selected for its relevance in demonstrating the extraterritorial application of the GDPR, as discussed in section "GDPR: Why Is it Important?," and the data breach notification and communication requirements, as discussed in section "Data Breach Notification Under the GDPR." In 2018, three separate European national data protection supervisory authorities imposed administrative fines on the company totaling 1.4 million Euros (CNIL 2018). In sections "GDPR: Why Is it Important?" and "Data Breach Notification Under the GDPR," references were made to a company losing access to its database due to ransomware. The Uber case again involves a breach of a customer database and a ransom paid to cybercriminals, but the fact pattern is different: Uber paid a ransom to the cybercriminals who penetrated its systems in order to keep the data breach incident from becoming public. According to the official decision published on the website of the French Data Protection Authority, the events occurred as explained in the following paragraphs. On November 21, 2017, the company Uber Technologies Inc.
published on its website an article reporting that, by the end of 2016, two individuals outside the company had accessed data from 57 million users of Uber services around the world. This information was then repeated in many press articles, some of which reported that the company had paid the attackers the sum of 100,000 US dollars so that they would destroy the data in question and not reveal the existence of the incident. On 28 November 2017, Uber B.V. informed the Chair of the Article 29 Working Party on Data Protection (WP 29) of the circumstances of the data breach and of its willingness to cooperate with all competent authorities on the matter. The French Data Protection Authority (CNIL) required Uber B.V. to complete a questionnaire explaining the data breach in detail. The company reported to CNIL that outsiders had gained access to Uber's private workspace on GitHub, a third-party software development platform that Uber's software engineers used at the time of the incident to store code for collaboration and development. Uber's engineers logged on to GitHub using a username and password they configured themselves, in the format of a personal email address as username and an individual password. Uber B.V. pointed out that the platform was used by engineers and that there was no company procedure for revoking access when an engineer left the company. Secondly, the company reported that the attackers used these credentials to connect to the GitHub platform, where they found an access key written in a source code file. This access key was related to a service account providing access to the hosting platform where the personal data of users of Uber services was stored. Thirdly, the company explained that the attackers used this access key to access the Uber databases stored on those servers and copied a significant amount of personal data.

CNIL reports that, in the company's answers to the questionnaire, it was stated that the data breach had affected 57 million users worldwide, including 1.4 million on French territory; among these users were 1.2 million passengers and 163,000 drivers. The company said that the attackers had access to the following data: first name, last name, e-mail address, city or country of residence, mobile phone number, and user status (driver, passenger, or both). The official CNIL decision states that the company argued that only Uber B.V. could be considered the data controller and that Uber Technologies Inc. was acting merely as a subcontractor of Uber B.V. A subcontract had been concluded between the two companies, under which Uber Technologies Inc., as subcontractor, wrote data management guidelines, trained new group employees, signed contracts with third-party companies, and managed the consequences of the data breach. CNIL ruled that both Uber Technologies Inc. and Uber B.V. were in charge of determining the means and purposes of the data processing, and both organizations were therefore jointly qualified as data controllers. With references to the Weltimmo (CJEU Weltimmo Judgment 2015) and Costeja (CJEU Costeja Judgment 2014) judgments of the Court of Justice of the European Union, CNIL concluded that it had jurisdiction over the case, as Uber Technologies Inc. and Uber B.V.
had effective activity in France, carried out by means of a stable installation where the criterion of stability of the installation being examined with regard to the presence of human and technical means necessary for the provision of concrete services in question. CNIL fined Uber to an amount of 400,000 €. The Data Protection Authority of the United Kingdom, Information Commissioner’s Officer (ICO), also took a similar decision to CNIL and fined Uber to an amount of 385,000 £. The Dutch Data Protection Authority also fined Uber for this case an amount of 600,000 €. CNIL reasoned the decision to fine Uber based on the facts that in relevance to use of GitHub and not having procedures in place for stopping former engineers to have access to companies’ datasets; the company did not have appropriate measures to prevent a personal data breach whereas ICO and the Dutch Data Protection Authority reasoned their decisions based on the fact that Uber did not comply with the requirement of the data breach notification to the supervisory authority in due time. Uber was made aware of the data breach, at least by the attackers, and accepted to pay 100,000 $ in order to keep the data breach incident away from the public eye and did not contact ICO at any point to notify this breach. Instead, 1 year after Uber was clearly made aware of the breach, ICO followed up on the incident based on the issue finding a place in the news. As mentioned in section “GDPR: Why Is it Important?,” the previous data protection regime within the EU was at the level of a Directive, so the member states implemented this directive into their national legislation in a fragmented

252

E. Kiesow Cortez

manner. The relevant subchapter also referred to the fact that the Data Protection Directive did not have a special section dedicated to data breach notification and communication requirements. It is therefore important to note that, as the data breach affecting Uber took place in 2016, the data protection authorities took decisions in line with the previous directive, which was the applicable law, rather than the GDPR. A reference to this is observable in the following statement from the ICO's decision: "Paying the attackers and then keeping quiet about it afterwards was not, in our view, an appropriate response to the cyber attack. . . . Although there was no legal duty to report data breaches under the old legislation, Uber's poor data protection practices and subsequent decisions and conduct were likely to have compounded the distress of those affected" (ICO 2018). If the GDPR had been applicable to the Uber case, the company might not have been able to explain why it took 1 year to report this data breach, given that its negotiations with the cybercriminals prove that it was made aware of the breach earlier on. It would therefore be expected that Article 83 would guide the administrative fines issued by the data protection authorities. As the application of Article 83 might result in administrative fines of up to 10,000,000 EUR or, in the case of a company, up to 2% of the total worldwide annual turnover, Uber's combined fines of roughly 1.4 million Euros might not seem proportionate. This should also be read in light of the most recent GDPR-based administrative fine, issued to Google, of 50 million Euros (CNIL 2019).
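The Article 83 fine ceiling can be expressed as a simple calculation. The sketch below is illustrative only: the function name and the turnover figure are hypothetical, and it encodes the rule in Article 83(4) GDPR that, for an undertaking, the applicable maximum is the fixed amount or the turnover-based amount, whichever is higher.

```python
def article_83_4_ceiling(annual_turnover_eur: float) -> float:
    """Maximum administrative fine under GDPR Article 83(4):
    up to EUR 10 million or, for an undertaking, up to 2% of total
    worldwide annual turnover of the preceding financial year,
    whichever is higher. Turnover figures used here are illustrative."""
    return max(10_000_000, 0.02 * annual_turnover_eur)

# With a hypothetical turnover of EUR 10 billion, the ceiling is
# turnover-based (EUR 200 million) rather than the fixed amount.
print(article_83_4_ceiling(10_000_000_000))  # prints 200000000.0
```

This "whichever is higher" structure explains why a fine capped by the old Directive-era regimes can look small next to a GDPR-era ceiling for a large multinational.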

Conclusion

This chapter focused on the requirements for data breach notification and communication in light of the articles of the EU General Data Protection Regulation. Through the correct enforcement of the GDPR, the EU is aiming to have a legal framework that allows the use of digital tools to help the economy thrive while protecting individuals' fundamental right to privacy. The scope of this chapter was limited to data breach notification and communication requirements under the GDPR. The data breach notification provisions under the Directive on security of network and information systems (NIS Directive, EU 2016/1148) and Commission Regulation 611/2013 on electronic communications were not referred to within the scope of this chapter. As per Article 2(d), the GDPR is not applicable to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offences, or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security. Consequently, the Budapest Convention on Cybercrime (the Convention) is also left outside the scope of this chapter. However, it is important to note that the European Data Protection Supervisor (EDPS), in its Opinion 3/2019, referred to the information security aspects of the Convention. The EDPS highlighted that data security is essential for securing the secrecy of criminal investigations, and it is also essential

12

Data Breaches and GDPR

253

as per Articles 4 and 5 of the GDPR. The EDPS recommended including additional data protection and privacy safeguards in the Convention in order to guarantee an adequate level of security for the personal data produced and transferred. This chapter also emphasized the international applicability of the GDPR in its extraterritorial context. The goal of the UK to remain compliant with the GDPR and the examples of new legislation in non-EU countries demonstrate the international relevance of the GDPR data breach communication and notification requirements. This chapter first provided a detailed overview of the relevant GDPR provisions on personal data breaches. It explained that Article 33 requires the supervisory authority to be notified of a personal data breach only if the breach poses a risk to data subjects' rights and freedoms. Article 34 highlights that the data subjects themselves should be informed of a data breach in case the breach would pose a high risk to their rights and freedoms. When assessing the risks, some key elements should be included in the evaluation: the type of the breach; the nature, sensitivity, and volume of the personal data; the ease of identification of individuals; and the severity of the potential consequences for the individuals. It was also shown that the GDPR uses a risk-based approach and creates incentives for firms to conduct risk assessments in the form of data protection impact assessments before a personal data breach incident occurs. The administrative fines associated with data breach notification and communication requirements under the GDPR were also introduced.
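The Article 33/34 decision logic summarized above can be sketched as a small decision procedure. This is a hedged illustration, not a compliance tool: the `Risk` categories and function name are invented for the example, and a real assessment weighs the factors listed above (type of breach; nature, sensitivity, and volume of the data; ease of identification; severity of consequences).

```python
from enum import Enum

class Risk(Enum):
    """Illustrative three-way classification of the risk a breach
    poses to data subjects' rights and freedoms."""
    UNLIKELY = 0
    RISK = 1
    HIGH_RISK = 2

def notification_duties(risk: Risk) -> dict:
    """Sketch of the duties described above: Article 33 -> notify the
    supervisory authority when the breach poses a risk; Article 34 ->
    additionally communicate to the data subjects on a high risk."""
    return {
        "notify_supervisory_authority": risk in (Risk.RISK, Risk.HIGH_RISK),  # Art. 33
        "communicate_to_data_subjects": risk is Risk.HIGH_RISK,               # Art. 34
    }
```

The asymmetry is the point: every non-trivial risk triggers the authority notification, but data subjects are only contacted at the higher threshold.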
Finally, this chapter summarized some of the first data breach notification decisions under the GDPR and critically presented a case study on how data breach notification and communication requirements are applied in practice by the data protection supervisory authorities of France, the United Kingdom, and the Netherlands, in order to provide an example for understanding possible future applications of the data breach notification requirements under the GDPR. The high administrative fines associated with data breaches, as well as the international applicability of the GDPR, create opportunities for further research. It would be necessary to analyze how effectively data breaches can be fined. Observing the data on administrative fines and the scale of data breaches could provide better legal certainty and more predictability for companies. However, it could also be noted that more predictability of GDPR fines could encourage companies to engage in strategic noncompliance. A recent example from Facebook increased concerns about the effectiveness of administrative fines: Facebook disclosed in its financial reports that it had budgeted approximately 5 billion dollars that it expected to pay due to infringement of data protection laws (Fiegerman CNN 2019). To get a more accurate overview, this move by Facebook should be analyzed together with the international tendency of countries adopting more GDPR-like legislation. Further observation and research on countries' and companies' strategies regarding compliance with the GDPR, and on the enforcement of data protection authorities' decisions, would provide a more balanced view of the effectiveness of the GDPR.


Cross-References

▶ Data Breaches and Carding
▶ Defining Cybercrime
▶ Phishing and Financial Manipulation

References

Acquisti, A., Taylor, C., & Wagman, L. (2016). The economics of privacy. Journal of Economic Literature, 54(2), 442–492.
Akerlof, G. (1970). The market for lemons: Qualitative uncertainty and the market mechanism. Quarterly Journal of Economics, 84, 488–500.
Albrecht, J. P. (2016). How the GDPR will change the world. European Data Protection Law Review, 2, 287.
CMS Report on "Hungarian data authority investigates two cases of privacy breaches", 5 April 2019.
CNIL, French Data Protection Authority Report, "Uber: sanction de 400.000€ pour une atteinte à la sécurité des données des utilisateurs", 20 Décembre 2018.
CNIL, French Data Protection Authority Report, Délibération de la formation restreinte n° SAN – 2019–001 prononçant une sanction pécuniaire à l'encontre de la société Google LLC, 21 Janvier 2019.
Court of Justice of the European Union, Judgment of 13 May 2014 in Case C-131/12, Google Spain SL, Google Inc. v. Agencia Espanola de Proteccion de Datos (AEPD), Mario Costeja Gonzalez.
Court of Justice of the European Union, Judgment of 1 October 2015, Case C-230/14, Weltimmo s.r.o. v Nemzeti Adatvédelmi és Információszabadság Hatóság.
Court of Justice of the European Union, Judgment of 6 October 2015, Case C-362/14, Maximillian Schrems v. Data Protection Commissioner, joined party: Digital Rights Ireland Ltd.
Erdemoglu, E. (2016). A law and economics approach to the new EU privacy regulation: Analysing the European general data protection regulation. In Governance and security issues of the European Union (pp. 109–126). The Hague: TMC Asser Press.
European Commission (2012a), Press Release IP/12/46, 'Commission Proposes a Comprehensive Reform of Data Protection Rules to Increase Users' Control of Their Data and to Cut Costs for Businesses', 25 January 2012. Available at http://europa.eu/rapid/press-release_IP-12-46_en.htm?locale=en. Accessed 15 Oct 2015.
European Commission (2012b), Communication 'Proposal for a Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation)', COM (2012), 2012/0011 (COD), Brussels, 25 January 2012.
European Commission, Communication 'Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A Digital Single Market Strategy for Europe', COM (2015) 192 of 6 May 2015.
European Commission, Communication "Exchanging and Protecting Personal Data in a Globalised World", COM (2017), 2017/7, Brussels, 10 January 2017.
European Commission, Eurobarometer 431. (2015, June 24). Available at http://ec.europa.eu/public_opinion/archives/ebs/ebs_431_sum_en.pdf. Accessed 31 May 2016.
European Data Protection Board, Information Note on Data Transfers Under the GDPR in the event of a No-Deal Brexit, 12 February 2019.


European Data Protection Supervisor, Opinion 3/2019, Opinion regarding the participation in the negotiations in view of a Second Additional Protocol to the Budapest Cybercrime Convention, 2 April 2019. Available at https://edps.europa.eu/data-protection/our-work/publications/opinions/budapest-cybercrime-convention_en
Fiegerman, S. (2019, April 24). CNN Business, "Facebook expects FTC fine could be as much as $5 billion". Available at https://edition.cnn.com/2019/04/24/tech/facebook-q1-earnings/index.html
Freiherr, A. V. D. B., & Zeiter, A. (2016). Implementing the EU general data protection regulation: A business perspective. The European Data Protection Law Review, 2, 576.
Gellert, R. (2018). Understanding the notion of risk in the general data protection regulation. Computer Law & Security Review, 34(2), 279–288.
Goldman, E. (2019, June). An introduction to the California Consumer Privacy Act (CCPA). Santa Clara Univ. Legal Studies Research Paper. Available at SSRN https://ssrn.com/abstract=3211013 or https://doi.org/10.2139/ssrn.3211013
Hamburg Commissioner for Data Protection, Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit, 27. Tätigkeitsbericht Datenschutz des Hamburgischen Beauftragten für Datenschutz und Informationsfreiheit, 2018.
Hildebrandt, M., & Tielemans, L. (2013). Data protection by design and technology neutral law. Computer Law & Security Review, 29(5), 509–521.
Houser, K. A., & Voss, W. G. (2018). GDPR: The end of Google and Facebook or a new paradigm in data privacy? Richmond Journal of Law & Technology, 25, 1.
Information Commissioner's Office, Monetary Penalty Notice, 26 November 2018, Supervisory Powers of the Information Commissioner.
Information Commissioner's Office Guidelines on "Leaving the EU – Six Steps to Take", March 2019 v.2.2.
Koops, B. J. (2014). The trouble with European data protection law. International Data Privacy Law, 4(4), 250–261.
Koops, E. J., Koops, B. J., Lips, A. M. B., Prins, J. E. J., & Schellekens, M. H. M. (2006). Should ICT regulation be technology-neutral? IT & Law, (9), 77–108.
Kostopoulos, G. (2017). Cyberspace and cybersecurity. New York: Auerbach Publications.
Kuner, C. (2010). Data protection law and international jurisdiction on the internet (part 1). International Journal of Law and Information Technology, 18(2), 176–193.
Kuner, C., Bygrave, L., & Docksey, C. (2019). Draft commentaries on 10 GDPR articles (from commentary on the EU general data protection regulation). Oxford: Oxford University Press.
Quelle, C. (2018). Enhancing compliance under the general data protection regulation: The risky upshot of the accountability- and risk-based approach. European Journal of Risk Regulation, 9(3), 502–526.
SANS Institute Threat Landscape Survey. (2017). Users on the front line, SANS Institute whitepaper, SANS Institute Reading Room. Available at https://www.sans.org/reading-room/whitepapers/threats/2017-threat-landscape-survey-users-front-line-37910
Schneier, B. (2015). Data and Goliath: The hidden battles to collect your data and control your world. New York: WW Norton.
Schwartz, P. (2013). The EU-US privacy collision: A turn to institutions and procedures. Harvard Law Review, 126, 1.
Securities and Exchange Commission, 17 CFR Parts 229 and 249, [Release Nos. 33-10459; 34-82746] Commission Statement and Guidance on Public Company Cybersecurity Disclosures. Available at: https://www.sec.gov/rules/interp/2018/33-10459.pdf
Silva, J., Calegari, N., & Gomes, E. (2019, May). After Brazil's general data protection law: Authorization in decentralized web applications. In Companion proceedings of the 2019 World Wide Web conference (pp. 819–822). New York: ACM.
Ustaran, E., & Room, S. (2018). Security of personal data. In European data protection law and practice. Portsmouth: IAPP.


Victor, J. M. (2013). The EU general data protection regulation: Toward a property regime for protecting data privacy. Yale Law Journal, 123, 513.
Voigt, P., & Von dem Bussche, A. (2017). The EU general data protection regulation (GDPR): A practical guide (1st ed.). Cham: Springer International Publishing.
Working Party 29, 17/EN, Guidelines on the application and setting of administrative fines for the purposes of the Regulation 2016/679, Adopted 3 October 2017. Accessible at https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=611237
Working Party 29, 18/EN, Guidelines on Personal data breach notification under Regulation 2016/679, Adopted 3 October 2017, Revised and Adopted on 6 February 2018. Accessible at https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612052

Cybercrime Legislation in the United States

13

Adam M. Bossler

Contents

Introduction 258
Computer Hacking and Malicious Software 259
Online Fraud and Identity Theft 262
Digital Piracy and Intellectual Property Theft 265
Pornography 268
Child Sexual Exploitation Materials 269
Sexting and Revenge Pornography 272
Cyber Harassment, Cyberbullying, and Cyberstalking 273
Spam 275
Cyberterrorism 276
Conclusion 277
Cross-References 278
References 278

Abstract

As a result of both a significant increase in cybercrime and a growing concern about its economic and societal impact, the United States has enacted legislation aimed at curtailing a wide variety of cybercrimes at the state and federal levels. This chapter summarizes US legislation that has been modified or enacted over the past 30 years. The chapter focuses on cybercrimes that can be classified as computer hacking and malicious software; online fraud and identity theft; digital piracy and intellectual property theft; pornography; child sexual exploitation materials; sexting and revenge pornography; cyber harassment, cyberbullying, and cyberstalking; spam; and cyberterrorism. This chapter is neither meant to be exhaustive nor critical in nature. Instead, the chapter was written as a resource to provide an overview of the most relevant US legislation on cybercrime.

A. M. Bossler (*)
Department of Criminal Justice and Criminology, Georgia Southern University, Statesboro, GA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_3



Keywords

Cybercrime legislation · Hacking · Computer Fraud and Abuse Act · Identity theft · Digital piracy · Pornography · Child sexual exploitation · Sexting · Cyber harassment · Cyberterrorism

Introduction

The goal of this chapter is to summarize US legislation that criminalizes online behavior that is typically referred to as "cybercrime." There are many different definitions for cybercrime (see ▶ Chap. 1, "Defining Cybercrime"), but it is generally viewed as crimes "in which the perpetrator uses special knowledge of cyberspace" (Furnell 2002, p. 21) or simply as "using computer technology to commit crimes" (Brenner 2011, p. 16). The term "cybercrime" is an umbrella term encompassing a wide variety of online behavior, including acts of trespassing (e.g., computer intrusions and hacking), deception and theft (e.g., fraud and identity theft), pornography and obscenity (e.g., child sexual exploitation materials), and violence (e.g., cyberstalking) (Wall 2001). Cybercrime is viewed differently than cyberwarfare (see ▶ Chap. 66, "Cyberwarfare as Realized Conflict"), which uses technology to attempt to achieve military objectives (Brenner 2011). The objective of the chapter is quite broad, as it attempts to summarize US legislation aimed at addressing a wide variety of different forms of crime that are facilitated by technology in some fashion. Following Brenner (2011), this chapter focuses on cybercrimes in which computers are either the target of the criminal activity or used as an instrument to commit a crime. The chapter does not focus on crime types in which a computer is simply incidental to the crime because of its role in storing digital evidence; new legislation was generally not needed to prosecute offenses in these cases. Differences between the cyber and physical worlds, however, have led to challenges in prosecuting offenses in which computers were targeted. This required these crimes to be viewed as new types of crimes, which necessitated the adoption of new statutes in some cases.
Even when a computer was simply used as a means to commit a crime, existing legislation still needed to be amended to include language that encompassed newer types of technology (i.e., a computer) and intangible property. It should be noted that this chapter is meant to be neither exhaustive nor critical in nature. Limited space prevents fully discussing the history of legislation for each cybercrime type, critically analyzing the statutes, and assessing their effectiveness. Rather, the chapter is meant as a reference for the current state of cybercrime legislation. Although a limited number of US Supreme Court cases are mentioned, this chapter does not focus on court cases dealing with the interpretation of legislation or on previous court decisions related to digital evidence. In addition, the chapter does not focus on how civil courts can be used to impose injunctions and to file lawsuits. The chapter also does not cover extralegal approaches taken by corporations and nonprofit organizations to decrease cybercrime. Rather, this chapter focuses on the most common approach in the United States to addressing cybercrime, which is modifying or enacting federal and state legislation aimed at criminalizing specific online behaviors.


Computer Hacking and Malicious Software

The 1980s witnessed a significant increase in home computers, video games, and other forms of electronics (Holt and Bossler 2016). These advances led younger individuals to want to explore and play with these new technologies. At the same time, strong concerns about the "dangers" of hackers arose as a result of movie portrayals of hackers (i.e., War Games) and news media reports of hacker arrests (see ▶ Chap. 35, "Computer Hacking and the Hacker Subculture"). In order to protect classified defense information, financial institutions, and federal government computers from unauthorized access, the US Congress passed the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984 (Brenner 2011; Curtis 2012). As the potential role of computers in committing other forms of crime quickly became more apparent, Congress amended the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984 with the Computer Fraud and Abuse Act (CFAA) of 1986, codified as 18 U.S.C. § 1030, to broaden the scope of the types of computers protected under the statute and the types of behavior criminalized. The CFAA is the most important piece of federal legislation in the United States dealing with cybercrime in general and computer hacking specifically (Brenner 2011; Curtis 2012; Holt et al. 2018). The CFAA is used to prosecute attacks against "protected computers," defined as a computer "(a) exclusively for the use of a financial institution or the United States government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or (b) which is used in or affecting interstate or foreign commerce or communication" (18 U.S.C. § 1030 (e)(2)). Although originally passed in 1986, the CFAA has been revised several times over the years.
This quite broad definition, which was adopted in 1996, safeguards virtually any computer that is connected to the Internet, even if the Internet was not used for the crime. This increases the ability of federal prosecutors to use the statute for a wide variety of computer intrusion offenses, regardless of whether the offender and victim reside within the same state or are separated by international waters (Brenner 2011). The CFAA specifies seven applications of hacking that violate federal law. The first four sections refer directly to unauthorized access:

1. Obtaining national security information: Knowingly accessing a computer without authorization or by exceeding authorized access, obtaining information to injure the United States or to the advantage of a foreign nation, and willfully delivering that information to another person not entitled to receive it or withholding the information from the one who is entitled to it (18 U.S.C. § 1030 (a)(1)).
2. Accessing a computer and obtaining information: Intentionally accessing a computer without authorization or by exceeding authorization and obtaining information contained in a financial record from a financial institution or consumer reporting agency, from any US department or agency, or from any protected computer (18 U.S.C. § 1030 (a)(2)).


3. Trespassing in a government computer: Intentionally and without authorization accessing any nonpublic computer of a US department or agency, or a computer not exclusively used by the government where the conduct affects the government's use of the computer (18 U.S.C. § 1030 (a)(3)).
4. Accessing a computer to defraud and obtain value: Knowingly and with intent to defraud accessing a protected computer without authorization, or exceeding authorization, and obtaining anything of value (18 U.S.C. § 1030 (a)(4)).

The fifth statute, 18 U.S.C. § 1030 (a)(5), specifies that it is illegal to (1) "knowingly cause[s] the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer," (2) "intentionally access[es] a protected computer without authorization, and as a result of such conduct, recklessly causes damage," or (3) "intentionally access[es] a protected computer without authorization, and as a result of such conduct, causes damage and loss." The first part of the statute specifically criminalizes the use of code to knowingly cause damage to a protected computer. In essence, this makes the CFAA the primary federal statute for prosecuting offenses involving the use of malicious software. It should be made clear that the creation of malicious software itself is not illegal, as it is simply computer code (Brenner 2011). In the United States and in many other countries, it is the use of malicious software to access computers without the authorization of the computer or system owner, or the use of malware within that computer or system, that is illegal. Thus, the United States and other countries focus on prosecuting cases in which malicious software was used to access computers or systems without authorization under their computer hacking statutes. The sixth and seventh statutes criminalize the trafficking of passwords and extortion involving computers. 18 U.S.C.
§ 1030 (a)(6) makes it illegal to buy, sell, or trade passwords or other information that can be used to access, without authorization, any computer used for interstate or foreign commerce or used by the federal government. 18 U.S.C. § 1030 (a)(7) criminalizes the extortion of funds or anything of value from owners of protected computers to prevent damage or loss of information. Finally, it should be noted that § 1030 (b) makes it illegal to attempt or to conspire to commit any of the seven offenses discussed above. According to Brenner (2011), prosecutors are most likely to use §§ 1030(a)(4), 1030(a)(5), 1030(a)(6), and 1030(a)(7) because these statutes cover the more generic offenses, while the other sections focus on more specific types of cybercrime or computers, such as financial institution computers rather than all protected computers. The severity of the punishment associated with a conviction for these offenses is influenced by both the harm caused and the number of prior convictions. Trespassing acts designed to obtain national security information may receive minimum sentences of 10–20 years. The sentence for accessing a computer to obtain information of value may range from a minimum of 1 year in prison and/or a fine, up to 10 years if the offender is charged with multiple counts or committed the offense for commercial or private gain. Similarly, trespassing against government-owned computers may lead to a punishment of up to 1 year in prison and/or a fine, or up to 10 years if the offense was connected with another offense.


Offenses covered by (a)(4) have the greatest range of punishments, with penalties ranging from up to 5 years "if the object of the fraud and the thing obtained consists only of the use of the computer and the value of that use does not exceed $5,000 in any one-year period"; to a minimum of 10 years if the harm exceeded $5,000 or the offense affected "more than ten computers, affects medical data, causes physical injury to a person, poses a threat to public health or safety or affects the US government's administration of justice, defense, or national security"; to up to 20 years if the computer intrusion causes serious bodily injury; and up to life in prison if the computer hack knowingly or recklessly led to death (Brenner 2011; Holt et al. 2018). For (a)(5), the punishment can include a fine and a sentence from 2 years all the way to life, depending on whether the offense led to death. Offenders convicted under (a)(6) can receive a fine and a prison sentence of up to 5 years, depending on whether the violator gained financially or the data was valued at over $5,000. The convicted offender may receive a sentence of up to 10 years if found guilty of multiple counts or if the value of the data exceeded $5,000. Finally, violators of (a)(7) can be fined and/or imprisoned up to 10 years if the offender had prior convictions. Another relevant federal statute related to computer hacking in the United States is 18 U.S.C. § 2701 (Unlawful Access to Stored Communications). It was created to protect personal communication and information (not electronic bulletin boards) from computer hackers and corporate spies.
The statute specifically makes it illegal to either "intentionally access[es] without authorization a facility through which an electronic communication service is provided" or "intentionally exceed[s] authorization to access that facility; and thereby obtains, alters, or prevents authorized access to a wire or electronic communication while it is in electronic storage in such system." The punishments range from up to 1 year for a conviction on one count to up to 5 years for subsequent offenses. The punishments, however, were increased under the Homeland Security Act of 2002 if the offense was committed for "purposes of commercial advantage, malicious destruction or damage, or private commercial gain, or in furtherance of any criminal or tortious act in violation of the Constitution or laws of the United States or any State" (Brenner 2011; Holt et al. 2018). An offender may receive a fine and up to 5 years in prison for the first offense and up to 10 years if convicted of multiple offenses.
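Stepping back to the CFAA penalty tiers summarized earlier in this section, the tiered structure lends itself to a compact illustration. The sketch below is a deliberate oversimplification of the (a)(4)/(a)(5)-related tiers exactly as the text above describes them; the function and parameter names are invented for the example, and it is in no sense a statement of the law.

```python
def cfaa_sentence_ceiling(value_usd: float,
                          computers_affected: int = 1,
                          serious_bodily_injury: bool = False,
                          death: bool = False) -> str:
    """Toy encoding of the § 1030 (a)(4)/(a)(5) penalty tiers as
    summarized in the text above; an illustration, not legal advice."""
    if death:                       # hack knowingly/recklessly led to death
        return "up to life in prison"
    if serious_bodily_injury:       # intrusion caused serious bodily injury
        return "up to 20 years"
    if value_usd > 5000 or computers_affected > 10:
        return "minimum of 10 years"
    return "up to 5 years"          # value of use <= $5,000 in a one-year period
```

Ordering the checks from most to least severe mirrors how the tiers escalate with the harm caused.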
States also differ on whether they criminalized these behaviors under existing statutes relating to burglary and theft or whether new statutes were enacted to more clearly establish the unique characteristics of computer hacking (Brenner 2011).


Many states have also passed specific legislation dealing with malware, spyware, phishing, ransomware, and denial of service attacks. Like federal legislation, which does not refer specifically to "malicious software," these state statutes also generally make no reference to malicious software or viruses but instead refer to computer contaminants, which are "designed to modify, damage, destroy, record, or transmit information within a computer system or network without the permission of the owner" (NCSL 2019a). Whether the use of malware is considered a misdemeanor or felony at the state level depends on the harm caused, the access to sensitive data, and the value of the information (Brenner 2011; NCSL 2019a). Half of all states, as of 2016, have specific language in their statutes criminalizing denial of service attacks (NCSL 2019a). In addition, at least five states (California, Connecticut, Michigan, Texas, and Wyoming) specifically criminalize ransomware and computer extortion, although other states may simply use their existing computer trespass or malware statutes to prosecute these offenses (NCSL 2019a).

Online Fraud and Identity Theft

Various forms of online fraud, including romance scams (see ▶ Chap. 43, “Romance Fraud”), auction fraud (see ▶ Chap. 47, “Counterfeit Products Online”), and Nigerian scams (see ▶ Chaps. 40, “Social Engineering”, and ▶ 41, “Spam-Based Scams”), are some of the most prevalent and costly cybercrimes in the United States. Yet, there is no specific federal online fraud statute in the United States. Instead, traditional mail and wire fraud statutes are used to prosecute offenders who commit Internet-based fraud (Brenner 2011). The only substantial difference between the mail and wire fraud statutes is the means – traditional mail versus wire communication – by which the fraud is committed. For example, the Department of Justice (2019a) summarizes 18 U.S.C. § 1341 (Frauds and Swindles) as specifying two elements of mail fraud: “(1) having devised or intending to devise a scheme to defraud (or to perform specified fraudulent acts), and (2) use of the mail for the purpose of executing, or attempting to execute, the scheme (or specified fraudulent acts).” 18 U.S.C. § 1343, on the other hand, makes it illegal to commit similar behaviors via transmission by “wire, radio, or television communication in interstate or foreign commerce, any writings, signs, signals, pictures, or sounds for the purpose of executing such scheme or artifice.”

Historically, the wire fraud statute has been the more pertinent statute for prosecuting online fraud because the Internet is used in the commission of the crime (Brenner 2011). It should be noted, however, that many frauds still involve offenders and victims sending purchases and payments through traditional mail (Holt et al. 2018). As a result, the mail fraud statute can be used in charging as well. The penalties for wire and mail fraud are identical.
An individual can be penalized with a fine and/or imprisonment up to 20 years; the sentence can be increased up to 30 years if a financial institution was involved. In addition, conspirators of online fraud can also be charged under 18 U.S.C. § 371, which is the traditional conspiracy statute, with the penalty including a fine and imprisonment up to 5 years if the conspired offense was a felony.

13

Cybercrime Legislation in the United States

263

Another relevant fraud statute, 18 U.S.C. § 1029, makes it illegal to knowingly and with intent to defraud produce, use, or traffic in counterfeit or unauthorized access devices. According to the statute, “the term ‘access device’ means any card, plate, code, account number, electronic serial number, mobile identification number, personal identification number, or other telecommunications service, equipment, or instrument identifier, or other means of account access that can be used, alone or in conjunction with another access device, to obtain money, goods, services, or any other thing of value, or that can be used to initiate a transfer of funds (other than a transfer originated solely by paper instrument)” (§ 1029(e)(1)). Depending on the severity of the offense (e.g., number of access devices and economic costs), the penalty for a conviction can range up to 15 years; a previous conviction increases the penalty up to 20 years.

Online fraud and identity theft almost go hand in hand (see ▶ Chap. 46, “Identity Theft: Nature, Extent, and Global Response”). Criminals can use personally identifiable information (PII) to commit various forms of cybercrime, ranging from credit card fraud and fraudulently obtained government assistance to the creation of fraudulent passports that aid terrorism. PII consists of unique identifiers and can include a wide range of detailed information, including but not limited to names, birthdates, social security numbers, passport numbers, and driver’s license numbers. The United States treats the unauthorized possession of PII as illegal, unlike many other countries, where it is the unauthorized use of PII, rather than mere possession, that is illegal (Holt et al. 2018).

The passage of identity theft legislation has been a focus in the United States since the late 1990s. The most relevant statute is the Identity Theft and Assumption Deterrence Act of 1998 (18 U.S.C. § 1028), which made it a federal crime to possess, transfer, or use PII without authorization with the intent to commit a crime at the local, state, or federal level. An identification document is defined under the statute as “a document made or issued by or under the authority [.] with information concerning a particular individual, is of a type intended or commonly accepted for the purpose of identification of individuals” (§ 1028(d)). The statute has wide reach and protects many forms of PII, including but not limited to names, social security numbers, driver’s license or identification numbers, passport information, employer identification numbers, biometric data (such as fingerprints), unique electronic identification numbers, bank routing numbers, and the IP addresses of computer systems (Brenner 2011). The Act also made the Federal Trade Commission (FTC) the clearinghouse for consumer information related to identity theft crimes.

The specific acts covered under the Act are outlined in 18 U.S.C. § 1028(a) and include:

(a) “Knowingly and without authority produces an identification document, authentication feature, or a false identification document;” (a)(1)
(b) “Knowingly transfers an identification document, authentication feature, or a false identification document knowing that such document or feature was stolen or produced without lawful authority;” (a)(2)
(c) “Knowingly possesses with intent to use unlawfully or transfer unlawfully five or more identification documents (other than those issued lawfully for the use of the possessor), authentication features, or false identification documents;” (a)(3)


(d) “Knowingly possesses an identification document (other than one issued lawfully for the possessor), authentication feature, or a false identification document, with the intent such document or feature be used to defraud the United States;” (a)(4)
(e) “Knowingly produces, transfers, or possesses a document-making implement or authentication feature with the intent such document-making implement or authentication feature will be used in the production of a false identification document or another document-making implement or authentication feature which will be so used;” (a)(5)
(f) “Knowingly possesses an identification document or authentication feature that is or appears to be an identification document or authentication feature of the United States or a sponsoring entity of an event designated as a special event of national significance which is stolen or produced without lawful authority knowing that such document or feature was stolen or produced without such authority;” (a)(6)
(g) “Knowingly transfers, possesses, or uses, without lawful authority, a means of identification of another person with the intent to commit, or to aid or abet, or in connection with, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable State or local law;” (a)(7)
(h) “Knowingly traffics in false or actual authentication features for use in false identification documents, document-making implements, or means of identification.” (a)(8)

The punishments for the offenses covered under the Act vary significantly. Most offenses are punishable by a fine and up to 5 years in prison. If the offense, however, was associated with drug trafficking or a violent crime, the offender may receive a sentence of up to 20 years; if the offense was related to terrorism, the sentence can range up to 30 years.
Two other important pieces of federal legislation that provide consumers more rights to protect their identities are the Fair and Accurate Credit Transactions Act of 2003 (15 U.S.C. § 1681) and the Identity Theft Enforcement and Restitution Act of 2008 (18 U.S.C. § 3663(b)). The Fair and Accurate Credit Transactions Act provided consumers with extra protections and greater access to their credit histories. Businesses were required to remove credit card information, with the exception of the last four digits, from receipts. Consumers were allowed to receive one free credit report each year from the credit reporting agencies, purchase a credit score with pertinent information on how the score was calculated for “a reasonable fee,” and place fraud alerts in their credit files. The passage of the Identity Theft Enforcement and Restitution Act of 2008 was also important for several reasons, including that it allowed courts to grant restitution to victims of identity theft in the amount of the actual harm and the time spent remedying the situation (Holt et al. 2018).

Many states have also passed legislation criminalizing online fraud and identity theft. Some states prosecute these offenses under new statutes specifically pertaining to computer fraud, while others rely on their already existing


computer crime statutes. All states have laws on either identity theft or impersonation (NCSL 2019b). Twenty-nine states have specific restitution provisions for victims. Five states also allow for asset forfeiture. Finally, 11 states “have created identity theft passport programs to help victims from continuing identity theft” (NCSL 2019b).

An additional category of laws meant to protect consumers from identity theft, or at least to make them aware that their PII has been stolen or compromised in a data breach, are data breach notification laws. When data breaches involving PII occur, hundreds of thousands of individuals, if not millions, are potential victims through no fault of their own. In order for potential victims to take steps to decrease their chances of becoming victims of identity theft, such as changing passwords, checking credit histories, or putting fraud alerts on their credit reports, they must be aware that a data breach occurred. Thus, the purpose of data breach notification laws is to inform residents or consumers that a breach occurred and what PII has potentially been lost.

In the United States, there is no federal statute that covers all data breaches. Certain types of breaches are covered by specific Acts. For example, the Gramm-Leach-Bliley Act, also known as the Financial Services Modernization Act of 1999, requires banks and financial institutions to establish written information security plans and data breach response programs, which generally should include notifying customers when a breach occurs. These institutions are also responsible for ensuring that third-party vendors take appropriate steps to protect confidential data (FTC 2019). Data breach notification laws, however, have been passed in all 50 states and several territories (NCSL 2019c). These laws require private or governmental entities to notify residents when data breaches involving PII occur.
The first data breach notification law was passed in California in 2003 and was entitled the California Security Breach Notification Act. Most states followed suit and passed laws similar to that of California. These laws specify what types of entities must comply with the law, such as businesses, government entities, etc., what is defined as PII (e.g., names in combination with social security numbers, state identification numbers, etc.), what actually constitutes a breach, who specifically needs to be notified, the timing and method of the notification, and whether any exemptions exist, such as for encrypted data (NCSL 2019c). A strong focus has been placed on breached records dealing with financial and health information. In the end, most residents throughout the United States should be protected by these state data breach notification laws, although the level of protection and notification varies by state and territory.

Digital Piracy and Intellectual Property Theft

Creative original ideas become intellectual property when placed in a fixed medium (e.g., paper, canvas, tape, or disk) (Holt et al. 2018) (see ▶ Chap. 48, “Digital Piracy”). Intellectual property therefore encompasses a wide variety of works, including but not limited to art, literature, architectural designs, music, films, and software


code. The idea is then treated as property, whose owner controls how it will be shared, rented, leased, or sold to others. In order to protect intellectual property, individuals copyright, trademark, or patent it. Copyright protections are granted automatically at the time the work is placed in some type of fixed medium (Holt et al. 2018; Yar 2013). In the United States, however, creators must register their copyright with the federal government in order to receive all available protections and to be able to pursue criminal or civil actions against those who violate their rights. Thus, protecting intellectual property in the United States takes more forethought than in most countries (Holt et al. 2018).

Protecting copyrighted material has become significantly more challenging over the last several decades because of the dramatic advance of technology since the dawn of the Digital Age. The ability to connect to the Internet at any time has made it easier for individuals to access, copy, and reproduce digital work. In most cases, the owner never becomes aware that their intellectual property was used without authorization. As a result, most countries, including the United States, have either substantially modified their existing copyright laws or created entirely new ones to help intellectual property owners better protect their rights.

The evolution of technology, in fact, has always made it challenging to protect the rights of intellectual property owners. The Berne Convention for the Protection of Literary and Artistic Works (the Berne Convention) was agreed upon by a handful of countries (UK, France, Belgium, Germany, Italy, Spain, Switzerland, Haiti, Liberia, and Tunisia) in Berne, Switzerland, in 1886 to address concerns regarding international protections for intellectual property rights.
The signatories to the convention wanted to ensure that authors’ rights, as protected by copyright laws, were recognized in other countries (Holt et al. 2018; WIPO 2019). The Berne Convention aimed to protect intellectual property rights through three principles: (1) the principle of national treatment: works are afforded the same protections in all signatory nations regardless of the nation of origin; (2) the principle of automatic protection: works are automatically protected once the creative or intellectual work is fixed in a medium; and (3) the principle of independence of protection: protection is independent of any protection existing in the work’s originating nation (WIPO 2019).

In addition, the Berne Convention required certain minimum protection standards for all works, including the rights to translate; make adaptations and arrangements of the work; perform dramatic and musical works in public; recite literary works in public; communicate the performance of such works to the public; broadcast; use the work as the basis for an audiovisual work; and reproduce, distribute, perform in public, or communicate to the public that audiovisual work (WIPO 2019).

The Berne Convention also specified how long copyright protections last. The general rule is that copyright protections last for 50 years after the author’s death, although the length of protection varies depending on the type of media. In addition, the Berne Convention established the “Rule of the Shorter Term,” which states that intellectual property cannot be protected for a longer period of time internationally than it is in its country of origin (WIPO 2019).
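The Rule of the Shorter Term reduces, in its simplest reading, to taking the minimum of two protection periods. The following sketch is purely illustrative; the function name and the reduction of the rule to whole years are a simplification for exposition, not treaty language, and real comparisons of terms are made between expiry dates rather than bare year counts:

```python
def effective_term(origin_term_years: int, local_term_years: int) -> int:
    """Rule of the Shorter Term (simplified): a work is not protected
    internationally for longer than in its country of origin, so the
    effective term abroad is capped by the origin country's term."""
    return min(origin_term_years, local_term_years)

# A work whose origin country grants 50 years of protection cannot claim
# a 70-year term in a signatory nation applying the rule.
print(effective_term(50, 70))  # prints 50
```

Note that signatory nations may opt out of the rule, so the cap is permitted rather than mandatory.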


The Berne Convention was concluded in 1886, revised in 1896 and 1908, and completed in 1914. It has been revised and amended multiple times over the last century, and as of the time of the writing of this chapter, 177 of the world’s 195 countries are signatories to the Berne Convention. In addition, all countries who are members of the World Trade Organization (WTO) are bound by the principles of the Berne Convention, with the exclusion of the moral rights provision, regardless of whether they are parties to it (WIPO 2019).

Although the United States had legislation protecting intellectual property dating back to 1790, and had ratified intellectual property conventions throughout the twentieth century, the Berne Convention did not enter into force for the United States until 1989. The primary reason was that the United States wanted intellectual property owners to be required to register their creative and intellectual work (Holt et al. 2018; WIPO 2019).

Today, the Copyright Act of 1976 is the most important copyright law in the United States protecting intellectual property from unauthorized reproduction and distribution. The Act moved the power to prosecute intellectual property infringement cases from state to federal courts. It also created new criminal sanctions under Titles 17 and 18 of the US Code (Brenner 2011). The Copyright Act of 1976 made it a federal crime to willfully infringe an existing copyright for commercial advantage or private gain, or by reproducing or distributing one or more copies of copyrighted work with a value of more than $1000 over a 180-day period. Individuals are charged with a felony if they reproduce or distribute at least 10 copies of one or more copyrighted works with a total value of more than $2500 over a period of 180 days (Brenner 2011).
Over the past few decades, changes in technology have required the United States to pass additional legislation to better protect different forms of media, particularly digital media. For example, the Copyright Felony Act of 1992 extended copyright protection to software (Brenner 2011; Holt et al. 2018). The No Electronic Theft Act of 1997 recognized infringement of copyrighted material even when the person who received or expected to receive the copyrighted work did not profit through commercial or personal gain. This made it illegal to simply acquire pirated media through file sharing rather than paying for the intellectual property (Brenner 2011; Holt et al. 2018). These revisions also clearly made music piracy illegal by sanctioning the reproduction or distribution of one or more copies of phonorecords. These acts also increased the penalties for piracy to up to 5 years in prison and $250,000 in fines while also increasing statutory damages.

Finally, the Digital Millennium Copyright Act (DMCA), passed in 1998, revised the Copyright Act to further address concerns about online media piracy (Brenner 2011; Holt et al. 2018). It added § 1201 to the Copyright Act, making it illegal to circumvent any protective technologies placed on copyrighted materials, and § 1202, making it illegal to tamper with copyright management software or protections. The DMCA also contained Title II, the Online Copyright Infringement Liability Limitation Act, which shielded Internet Service Providers (ISPs) from liability if they blocked or removed infringing material upon receiving a complaint from a copyright holder or owner. Additionally, this title allowed copyright holders to subpoena


ISPs for the IP addresses, names, and home addresses of customers who had engaged in the distribution of copyrighted materials, allowing copyright holders to pursue civil and criminal charges. The DMCA also represented the US implementation of the WIPO Copyright Treaty, which revised the Berne Convention in 1996 and went into force in 2002 (Brenner 2011; Holt et al. 2018).

Pornography

It can be argued that while the Internet has revolutionized business, it has had its most radical effect on the pornography industry. Pornography can broadly be defined as the representation of sexual situations for the purposes of sexual stimulation (Holt et al. 2018; Lane 2000). Pornographic content can be represented in a wide variety of media, including but not limited to drawings, writings, photos, videos, games, and audio content. Technology has advanced to provide more immersive forms of pornography, such as virtual reality.

In the United States, the creation, distribution, possession, and viewing of pornography is legal as long as the participants in the work and the consumer are of legal age (i.e., 18). Some content is considered illegal regardless of the age of the consumer if the pornographic content depicts minors, sex between humans and animals, or actual (as opposed to performed) rape or physical harm (Curtis 2012; Holt et al. 2018; Quinn and Forsyth 2013). Pornographic content may also be considered deviant depending on the social norms and values of specific communities (Brenner 2011). The issue is whether the content is deemed obscene in a specific community or whether it is protected by the First Amendment.

The legal definition of obscenity in the United States has evolved through various cases decided by the Supreme Court. The Court’s decision in Miller v. California (1973) established the current definition of obscenity used by states and communities to determine whether something is obscene (Brenner 2011; Curtis 2012; Holt et al. 2018; US Department of Justice 2018). The Court stated that content may be considered obscene and not protected by the First Amendment right to free speech if the work meets all three of the following criteria:

1. Prurient interest: “Whether the average person, applying contemporary adult community standards, finds that the matter, taken as a whole, appeals to prurient interests (i.e., an erotic, lascivious, abnormal, unhealthy, degrading, shameful, or morbid interest in nudity, sex, or excretion)”
2. Patently offensive: “Whether the average person, applying contemporary adult community standards, finds that the matter depicts or describes sexual conduct in a patently offensive way (i.e., ultimate sexual acts, normal or perverted, actual or simulated, masturbation, excretory functions, lewd exhibition of the genitals, or sado-masochistic sexual abuse)”
3. SLAPS: “Whether a reasonable person finds that the matter, taken as a whole, lacks serious literary, artistic, political, or scientific value” (US DOJ 2018)
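The three prongs operate conjunctively: material is obscene under Miller only when every prong is satisfied, so serious literary, artistic, political, or scientific value defeats an obscenity finding on its own. A minimal sketch of that logic follows; the function and variable names are illustrative shorthand, not legal terms of art, and the sketch obviously cannot capture the judgment each prong actually requires:

```python
def is_obscene(prurient_interest: bool,
               patently_offensive: bool,
               lacks_serious_value: bool) -> bool:
    """Miller test (simplified): all three prongs must be met for material
    to be obscene and thus unprotected by the First Amendment."""
    return prurient_interest and patently_offensive and lacks_serious_value

# Patently offensive material that appeals to prurient interests but has
# serious artistic value fails the third prong and is not obscene.
print(is_obscene(True, True, False))  # prints False
```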


The United States also sets a lower standard for what constitutes obscenity for youth than for adults. Although the three-pronged Miller standard applies universally to both adults and minors, any form of communication that consists of nudity, sex, or excretion is considered harmful to minors (Curtis 2012; US DOJ 2018).

Within the context of what constitutes obscenity, the federal government has passed statutes regarding the creation, distribution, and possession of obscene content. Under 18 U.S.C. §§ 1460-1470, it is illegal to (1) possess obscene material with the intent to distribute it on federal property; (2) import or transport obscene materials across borders; (3) distribute or receive obscene material through a common carrier in interstate commerce, including postal mail, private carriers, or computer- and Internet-based services; (4) broadcast obscene, profane, or indecent language via television, radio, or cable and subscription television services; (5) knowingly produce, transport, or engage in the sale of obscene, lewd, or filthy material through interstate commerce; and (6) transfer obscene material to minors. The punishments associated with these offenses vary based on the severity of the offense. For example, broadcasting obscene content can lead to a fine and a 2-year prison sentence. Most of the other offenses may be punished with a fine and a prison sentence of up to 5 years. Transferring obscene materials to minors, however, can be punished with a fine and 10 years in prison (US DOJ 2018).

Because the Internet makes pornography easily accessible to anyone with an Internet connection, the United States has passed additional legislation to protect minors from obscenity and pornography. Under the Truth in Domain Names Act of 2003, the use of misleading domain names to lure unsuspecting individuals to websites containing sexually explicit and obscene content was criminalized (Brenner 2011).
This type of crime can be especially egregious if the individual intentionally creates domain names with slight misspellings of popular artists, characters, shows, or similar items aimed at children. A person found guilty of this crime can be fined and imprisoned for up to 4 years if the domain was created to attract children.

In addition, the Children’s Internet Protection Act (CIPA) and the Neighborhood Children’s Internet Protection Act (NCIPA) went into effect in 2001 and required the use of filtering and security protocols to block minors’ access to obscenity. CIPA applies to all K-12 schools, while NCIPA covers public libraries. These facilities must also implement a technology protection measure on each Internet-connected computer and adopt Internet safety policies addressing a range of cybercrimes (American Library Association 2019). Schools or libraries that do not properly implement filters may lose certain forms of federal funds and grants.

Child Sexual Exploitation Materials

Child pornography can be defined as depicting “the sexual or sexualized physical abuse of children under 16 years of age or who appear to be less than 16 that would offend a reasonable adult” (Krone 2004, p. 1). These depictions can include a wide variety of media but generally involve video and still photography. Pornography


involving children is viewed as different from legal pornography for several reasons (Brenner 2011; Curtis 2012; Holt et al. 2018). Primarily, minors cannot consent to engage in sexual acts or to have those acts photographed or filmed. Minors, especially younger ones, are incapable of understanding the consequences of their actions. In extreme cases, such as when infants and toddlers are involved, they may not even be able to verbally communicate their wishes. In addition, adults are normally financially compensated for their participation in pornography. Minors are generally not compensated and are forced to participate through either coercion or the violation of their trust. Because of the serious physical and psychological consequences associated with children performing sex acts with adults, agencies and scholars have started to use the terms “child sexual abuse material” or “child sexual exploitation (CSE) material” rather than child pornography (see ▶ Chaps. 56, “Child Sexual Exploitation: Introduction to a Global Problem,” and ▶ 57, “The Past, Present, and Future of Online Child Sexual Exploitation: Summarizing the Evolution of Production, Distribution, and Detection”).

The United States has multiple federal laws that criminalize child sexual exploitation and the creation and dissemination of CSE material. The first law criminalizing CSE materials was not enacted until 1977 with the Protection of Children Against Sexual Exploitation Act (18 U.S.C. §§ 2251-53). This Act made it illegal for anyone under the age of 16, later raised to 18 in 1986, to participate in the creation of sexually explicit materials. The reason the nation’s first law against CSE came so late was primarily that all pornography, including that of adults, was generally considered obscene and illegal throughout the country.
After the Miller decision (discussed in the section above), states and the federal government needed to define what type of pornography was obscene and harmful, with the primary focus being on the age of the participants (Brenner 2011; Curtis 2012; Holt et al. 2018).

The Child Pornography Prevention Act of 1996 (18 U.S.C. §§ 2251-60) had a greater impact by extending the existing CSE laws and creating a new definition of child pornography. With this Act, child pornography was defined in the criminal code under Title 18 as “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture of sexually explicit conduct” (Brenner 2011, p. 51). The law applies to any produced image that involves actual minors engaged in sexual activity, appears to involve a minor, and/or has been created, adapted, or modified so that it appears to show a minor engaged in sexual activity. This broad definition of what constitutes child pornography was necessary to give law enforcement and prosecutors the flexibility to prosecute cases in which images were created or modified using Photoshop or similar programs and sent electronically (Brenner 2011; Curtis 2012; Holt et al. 2018).

In addition to broadening the definition of child pornography, the Act also criminalized other activities associated with the creation of child sexual exploitation materials. First, it made it illegal for anyone to persuade, entice, induce, or transport minors in order to engage in sexual acts for the purpose of creating CSE materials. It also made it illegal to entice a minor to engage in sexual acts outside of the United States to create CSE material. In addition, the printing or


publishing of advertisements associated with the sexual exploitation of children was criminalized. Finally, the Act made it illegal to conspire or attempt to commit any of the above offenses (Brenner 2011; Curtis 2012; Holt et al. 2018).

The penalties for the creation and dissemination of child sexual exploitation materials are among the most severe in the US federal system. The sentence for committing any one of these offenses ranges from 15 to 30 years in prison and/or a fine. The offender may receive a sentence of 25–50 years if they have a prior child sexual exploitation charge on their record at either the state or federal level, and is eligible for a life sentence if they have two or more such charges. If a child died as a result of one of the offenses above, the offender is eligible for the death penalty (Brenner 2011).

Section 2252 of the Child Pornography Prevention Act of 1996 also criminalized acts associated with child sexual exploitation materials other than their creation: (1) mailing, transporting, or shipping child sexual exploitation materials by any means, whether physically or electronically; (2) receiving or distributing such materials; (3) selling or possessing the materials with intent to sell; or (4) possessing books, films, or any other materials that contain CSE images. As in other sections, it also made it illegal to conspire or attempt to commit any of the above acts. Committing, attempting, or conspiring to commit any of the first three acts is punishable by 5 to 20 years in prison. A person can receive a sentence of 15–40 years if they have a prior child sexual exploitation or abuse conviction at the state or national level.
Committing, attempting, or conspiring to commit the fourth act is punishable by a fine and/or a sentence of up to 10 years, unless the individual has a prior child sexual exploitation conviction, which increases the sentence to a minimum of 10 years and a maximum of 20 years (Brenner 2011; Holt et al. 2018).

The Prosecutorial Remedies and Other Tools to End the Exploitation of Children Today (PROTECT) Act of 2003 further redefined what images are considered CSE by criminalizing virtual child pornography. The legal definition of CSE material was extended to include “a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaged in sexually explicit conduct” (Brenner 2011, p. 57). This benefitted prosecutors by shifting the burden of proof to the defense to demonstrate that the images did not include actual victims, although the law has been found to be unconstitutional as it is overly broad and is not congruent with previous Supreme Court rulings on obscenity.

Finally, 18 U.S.C. § 2425 also makes it a federal crime for anyone to use any form of communication used for interstate commerce to knowingly “initiate the transmission of the name, address, telephone number, social security number, or electronic mail address of one who he or she knows to be under sixteen years of age with the intent to entice, encourage, offer, or solicit any person to engage in any sexual activity for which any person can be charged with a criminal offense” (Brenner 2011, p. 61–62).

In addition to the US federal government, all 50 states and the District of Columbia have criminalized the production, possession, and dissemination of CSE materials and the solicitation and exploitation of minors (Children’s Bureau 2019). The

272

A. M. Bossler

penalties for these offenses are felony-level in all states, although the range of prison sentences varies based on the severity of the offense and prior record. In addition, some states have passed mandatory reporting legislation requiring commercial film or photography processors and IT workers to report any CSE material they find through the course of their jobs (Children’s Bureau 2019). These individuals are not required to actively seek out these materials but simply to report them if uncovered during normal operations and procedures. Failing to report these materials may lead to a misdemeanor charge and/or a fine (Holt et al. 2018).

Sexting and Revenge Pornography

Sexting can be defined as “the practice of sending or posting sexually suggestive text messages and images, including nude or semi-nude photographs, via cellular telephones or over the Internet” (Sweeny 2011, p. 952) (see ▶ Chap. 51, “Sexting and Social Concerns”). Sexting can therefore be viewed as the creation and distribution of self-made pornography. Sexting is generally legal unless one of two conditions applies. First, it is illegal if it involves sexual harassment. Second, it is illegal if the distributor and/or receiver is a minor. There is no national law in the United States that specifically addresses sexting, although some states have adopted sexting statutes. Critics argue that sexting legislation was meant to protect minors from pornographic material and sexual exploitation but may instead stigmatize and criminalize normal sexual activity for youth in the modern technological age (O’Connor et al. 2017). Similarly, legislating what individuals are allowed to do with sexual images that they have received has been challenging in the United States as well. In many cases, individuals send sexually explicit images of themselves to others for their personal viewing but do not provide consent for those images to be disseminated to others. In some cases, the receivers of the sexual photos betray the trust of the original sender and disseminate the photos or videos in order to cause emotional harm to the original sender. This behavior became known as “revenge pornography” (see ▶ Chap. 53, “Revenge Pornography”). Calling it revenge pornography, however, understates the emotional and psychological harm done to victims. Some scholars therefore argue that “image-based sexual abuse” is a more appropriate term (see ▶ Chap. 52, “Image-Based Sexual Abuse: A Feminist Criminological Approach”). Currently, there is no federal law in the United States that criminalizes the nonconsensual disclosure of sexual images and content. 
Almost all states (46) and the District of Columbia, however, have passed legislation addressing revenge pornography or image-based sexual abuse. These statutes vary considerably: some states treat violations as misdemeanors, while others treat them as felonies. Some states address them as part of their cyber harassment provisions, while other states created stand-alone provisions (NCSL 2019d). In addition, half of all states have passed laws criminalizing extortion related to sexual images, photos, and videos, or what is called “sextortion” (NCSL 2019d).

13

Cybercrime Legislation in the United States

273

Cyber Harassment, Cyberbullying, and Cyberstalking

The terms cyberbullying, cyber harassment, and cyberstalking are challenging to define and differentiate as many scholars use the terms interchangeably (see ▶ Chap. 59, “Cyberstalking” and ▶ Chap. 58, “Risk and Protective Factors for Cyberbullying Perpetration and Victimization”). In some cases, only the ages of the offenders and victims differentiate how the terms are used. For example, cyberbullying generally refers to the behavior of youth. Cyberbullying shares many of the characteristics of traditional bullying and can be defined as “willful and repeated harm inflicted through the medium of electronic text” (Patchin and Hinduja 2006, p. 152). A few characteristics may distinguish cyberstalking from the other two behaviors. Both cyberstalking and online harassment involve the constant use of email, texts, or computer-mediated communications (CMCs), but cyberstalking quite often leads a victim to fear for their personal safety and to experience significant emotional distress (Bocij 2004). In the United States, there are no federal statutes criminalizing bullying, cyberbullying, or catfishing. Advocates have attempted to pass anti-cyberbullying legislation in response to stories in the media about teenagers committing suicide as a result of being cyberbullied. One of the most infamous cases regarding how the United States has not responded legislatively to cyberbullying is that of Megan Meier in 2006. Megan befriended a boy named Josh Evans, who was approximately her age, on the social networking site MySpace (Morphy 2008). They had frequent conversations, which led to Megan developing an emotional connection with him. His messages soon became mean and hurtful, however, including telling her the world would be a better place without her. She soon thereafter hanged herself. 
Investigators discovered that Josh Evans was actually a fictitious account created by Lori Drew, the mother of one of Megan’s former friends. Drew created the account to emotionally hurt Megan. Drew was charged with three felony counts of computer fraud and one conspiracy count under the Computer Fraud and Abuse Act (discussed above) because she violated the social media platform’s terms of service, which specified that users could not create fictitious accounts. The jury found Drew guilty of three misdemeanor counts of computer fraud. A judge later overturned the convictions on the basis that the CFAA was traditionally used to prosecute computer hacking and data theft offenses and that it was not created with the purpose of criminalizing catfishing. After the failed prosecution of Lori Drew, advocates, including the Meier family, called for the creation of federal laws to criminalize bullying and cyberbullying. As a result, the Megan Meier Cyberbullying Prevention Act (H.R. 1966) was introduced in the 111th Congress in 2009. This Act would have criminalized interstate or foreign communication made with “the intent to coerce, intimidate, harass, or cause substantial emotional distress to a person, using electronic means to support severe, repeated, and hostile behavior.” Violators of this law would have been punished by a fine and/or a sentence of up to 2 years. The House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security held hearings but took no formal action.


Although there is no federal law that bans cyberbullying, almost every state has legislation addressing cyberbullying and the roles of schools in preventing bullying and cyberbullying. Montana is the only state that does not require school districts to address bullying (Hinduja and Patchin 2018). Forty-eight states recognize the terms “cyberbullying” or “electronic harassment” in their state bullying legislation. Forty-four (44) states provide criminal sanctions for electronic harassment. Most states (45) require schools to punish bullying behaviors. Importantly, some states’ (17) statutes indicate that off-campus bullying can also be sanctioned (Hinduja and Patchin 2018). This provides an avenue for states to punish cyberbullying, which often happens off-campus. Although some opponents of these statutes argue that schools should not have jurisdiction over behavior that does not occur on their physical property, proponents argue that bullying victimization greatly affects students’ academic performance and attendance because of its impact on students’ mental health and emotional state. They therefore argue that it is essential for schools to be able to protect their students while off-campus as well (Holt et al. 2018). The United States also does not have a federal statute that directly bans cyberstalking. Various statutes can be used to prosecute individuals who send obscene, abusive, or harassing messages. Title 47 U.S.C § 223 (Obscene or Harassing Telephone Calls in the District of Columbia or in Interstate or Foreign Communications) prohibits the use of telecommunication devices to carry out several harassing behaviors (Brenner 2011). Individuals cannot use telecommunication devices to make, create, solicit, or initiate requests that are obscene or that involve child sexual exploitation materials with the intent to annoy, threaten, abuse, or harass others. 
It is also illegal to withhold one’s identity and use these devices to annoy, abuse, threaten, or harass someone. In addition, a person may not cause another person’s phone to ring continuously or call them repeatedly with the intent to harass. Any of these offenses can be punished by a fine and/or imprisonment for up to 2 years (Brenner 2011). More serious threats using interstate or foreign communication are addressed by other statutes. 18 U.S.C § 875 (Interstate Communications) criminalizes the use of communication devices via interstate or foreign communication to (1) demand a ransom for the release of a kidnapped person; (2) send a message with the intent to extort money; (3) threaten to injure a person; or (4) threaten to damage property. The statute can be used to prosecute an offender who lives in the same state as the victim if the Internet was used. The punishments for these offenses range from a fine and up to 2 years in prison for threats to damage property and extortion to up to 20 years in prison for demanding ransoms and threatening physical injury. 18 U.S.C. § 2261A (Stalking) prohibits the use of interactive computer services for any activities that cause a person to feel substantial emotional distress or place that person or their family in reasonable fear of death or serious bodily injury. This statute also criminalizes interstate travel with the intent to kill, injure, threaten, or harass another person and place them or their family in fear of death or serious bodily harm. The punishment for these offenses ranges from 5 years in prison if only a threat is made, up to 10 years if serious bodily injury occurred as a result of a weapon, 20 years if the victim was permanently disfigured or received a life-threatening injury, and up to a life sentence if the victim died in relation to the threat (Brenner 2011; Holt et al. 2018). Both of the statutes discussed above require a credible threat made against either a person or property. In United States v. Alkhabaz (1997), the court ruled that the communication needed to generate actual fear or concern for safety. In this case, Abraham Jacob Alkhabaz, also known as Jake Baker, wrote graphic stories describing fictional rapes, tortures, and murders and posted them on Usenet. In one of the stories, he described the raping and killing of a female character who shared the same name as one of his fellow classmates. The female classmate complained to the University of Michigan police department, which investigated the offense and brought in the FBI because of the interstate aspect of the online communication. Baker was arrested on six counts of communicating threats to kidnap and injure a person under § 875. His case was dismissed, however, by a judge who stated that there was insufficient evidence that Baker would act out his fantasies. The government appealed the decision, but the decision was upheld. Thus, §§ 875 and 2261A cannot be used for many acts of cyberstalking in the absence of a true threat (Brenner 2011; Holt et al. 2018). Most US states have statutes criminalizing cyberstalking, online harassment, or both. These statutes allow for the prosecution of cyberstalking or harassment in which the offender used electronic communications to stalk or engage in a pattern of threatening behaviors (WHOA 2019). Congruent with the ruling in United States v. Alkhabaz, these statutes require that a credible threat of harm to the victim be present. Forty states have statutes which criminalize the use of CMCs to annoy or harass the victim; these do not require a credible threat. Violations of these offenses are considered either misdemeanors or felonies depending on the seriousness of the offense.

Spam

A unique form of harassment is the receipt of email spam (see ▶ Chap. 41, “Spam-Based Scams”). Citizens and businesses are inundated with millions of unsolicited emails each year. To address this form of harassment, which can often lead to fraud, Congress passed the Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003 (15 U.S.C. §§ 7701–7713), which went into effect on January 1, 2004. This Act, known as the CAN-SPAM Act, does not actually ban spam. Businesses and marketers may send unsolicited email as long as they correctly identify themselves and do not use fraudulent headers. Unsolicited emails must be clearly identified as solicitations or advertisements and must provide recipients with the sender’s physical postal address. If the email contains sexually explicit material, the label must clearly indicate the sexual nature of the content. In addition, consumers must be given the ability to opt out of receiving further unsolicited commercial email. Thus, junk email is treated much like junk postal mail: it is legal as long as it is not fraudulent and consumers are allowed to opt out of it (Brenner 2011; Holt et al. 2018).


The penalties for violating the CAN-SPAM Act start at a fine and/or imprisonment of up to 1 year and increase to up to 3 years based on the severity of the offense, considering factors such as the number of falsified email or online user account registrations, the volume of spam sent, the amount of financial loss to the victim, the amount of profit originating from the spam, and the number of defendants involved. If the violator sent spam containing sexually explicit material without proper warning labels, they can be punished by a fine and/or imprisonment of up to 5 years. Violators may also be required to forfeit any property, including any form of technology, connected to the profit from or the commission of the offense (Brenner 2011). The CAN-SPAM Act of 2003 supersedes any state statute on spam, which means that it prevents states from enacting stronger protections for their residents. States can still address spamming by enforcing laws on fraud, false advertising, and similar crimes as long as their prosecutions do not violate First Amendment protections (Brenner 2011).

Cyberterrorism

Legislating hate speech in the United States is exceedingly challenging because of the country’s strong protection of First Amendment free speech rights (see ▶ Chap. 64, “Hate Speech in Online Spaces”). One of the few grounds on which speech can be regulated in the United States is the imminent danger test, which applies when the speaker attempts to incite dangerous or illegal activities (Brenner 2011; Curtis 2012; Holt et al. 2018). As a result, the United States has dealt with extremist language by focusing on threats to individuals and the nation. After the terrorist attacks of September 11, 2001, the CFAA (discussed previously in this chapter regarding hacking and malicious software) was expanded by the passage of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001, more commonly known as the USA PATRIOT Act. The PATRIOT Act extended the protections afforded to protected computers under the CFAA to any computer anywhere in the world that is involved with interstate or foreign commerce or communications with the United States. The PATRIOT Act also expanded what was meant by unauthorized access to a computer, network, or system to include access that “modifies or impairs access to medical data; causes physical injury to a person; poses a threat to public health or safety; or damages a computer used by a government entity in the administration of justice, national defense, or national security.” This statute made it much easier for federal law enforcement and prosecutors to protect the nation’s critical infrastructure and sensitive data repositories (Brenner 2011; Holt et al. 2018; US DOJ 2019b). Another important aspect of the PATRIOT Act was that it revised provisions of the Electronic Communications Privacy Act (ECPA) related to subpoenas of Internet service providers (ISPs) and cable companies. 
This made it easier for law enforcement agencies to surveil electronic communications by allowing them to obtain crucial information on subscribers and users, including but not limited to names, addresses, billing records, phone numbers called, duration of online sessions, types of services used, and information on communication devices. This type of information can provide law enforcement a clearer picture of who is doing what, when, where, and how. Specifically, law enforcement may trace the specific activities of a user to the exact websites and content with which the user interacted during any Internet session. It also altered the ECPA to define email stored on a third-party server for more than 180 days as abandoned. Thus, judicial review is not necessary for law enforcement to request the content of such emails, regardless of whether the emails were opened or not. Another important provision of the PATRIOT Act meant to improve national security was allowing Internet service providers to make emergency disclosures to law enforcement in cases of extreme threats to public safety (Brenner 2011; Holt et al. 2018; US DOJ 2019b). In the United States, the investigation and enforcement of cyberterrorism is generally handled at the federal level. Several states, however, also have cyberterrorism legislation. For example, Arkansas’s definition of terrorism includes acts that attempt to disable or destroy data, computers, or computer networks that are used by the government, industry, or contractors. In 2019, Arkansas also addressed the authority of the Governor to order the militia into service to address cybersecurity threats and vulnerabilities (NCSL 2019e). Georgia previously criminalized the use of computers to spread information supporting terrorist activities. Some state-level legislation focuses more on cybersecurity protection to prevent cyberattacks. For example, Indiana in 2019 passed legislation providing that wastewater treatment plants may not be issued a permit unless the application contains a cybersecurity plan (NCSL 2019e).

Conclusion

Cybercrime is a large umbrella term that covers a wide range of online criminal behavior involving the use of technology. As a result, cybercrime legislation varies as much as cybercrime itself. This chapter discussed legislation in the United States that addresses a wide variety of cybercrimes, including computer hacking, the use of malicious software, online fraud, intellectual property theft, child sexual exploitation, sexting, image-based sexual abuse, cyber harassment and cyberstalking, spam, and cyberterrorism. In some cases, the federal and state governments simply amended existing statutes to accommodate technological innovations in the commission of traditional crimes. In other cases, they created new statutes as they viewed certain cybercrime types as newer forms of crime needing separate legislation. These statutes treat most cybercrime offenses as serious felonies with significant penalties related to the harm done and the offender’s prior convictions. In many cases, the commission of cybercrimes is punished with sentences of up to 20 years. In the most serious cases, offenders can be sentenced to life in prison, such as in some child sexual exploitation cases or when a computer hack knowingly or recklessly leads to death. In the rarest cases, the death penalty is an option (e.g., if a child died as a result of child sexual exploitation).


Although the United States has passed a plethora of statutes at the federal and state levels to address cybercrime, it has not passed federal legislation criminalizing many forms of cybercrime that disproportionately affect women, such as image-based sexual abuse and cyberstalking. In addition, there is no federal data breach notification law. The United States also relies on traditional mail and wire fraud statutes to prosecute online fraud. As technology continues to evolve, it will be important to discuss the applicability of extant legislation, the need for modifying current laws, and the possible enactment of new laws for new types of crimes. As the field continues to move forward, it will be essential for scholars to examine this legislation through a more critical and empirical lens in order to provide a better and fuller understanding of whether cybercrime legislation is evolving at the same pace as technology.

Cross-References

▶ Child Sexual Exploitation: Introduction to a Global Problem
▶ Computer Hacking and the Hacker Subculture
▶ Counterfeit Products Online
▶ Cyberstalking
▶ Cyberwarfare as Realized Conflict
▶ Defining Cybercrime
▶ Digital Piracy
▶ Hate Speech in Online Spaces
▶ Identity Theft: Nature, Extent, and Global Response
▶ Image-Based Sexual Abuse: A Feminist Criminological Approach
▶ Revenge Pornography
▶ Risk and Protective Factors for Cyberbullying Perpetration and Victimization
▶ Romance Fraud
▶ Sexting and Social Concerns
▶ Social Engineering
▶ Spam-Based Scams
▶ The Past, Present, and Future of Online Child Sexual Exploitation: Summarizing the Evolution of Production, Distribution, and Detection

References

American Library Association. (2019). The Children’s Internet Protection Act (CIPA). Accessed online on 11/8/2019 at: http://www.ala.org/advocacy/advleg/federallegislation/cipa
Bocij, P. (2004). Cyberstalking: Harassment in the internet age and how to protect your family. Westport: Praeger Publishers.
Brenner, S. W. (2011). Defining cybercrime: A review of federal and state law. In R. D. Clifford (Ed.), Cybercrime: The investigation, prosecution, and defense of a computer-related crime (3rd ed., pp. 15–104). Raleigh: Carolina Academic Press.


Children’s Bureau. (2019). Mandatory reporters of child abuse and neglect. Accessed online on 11/8/2019 at: https://www.childwelfare.gov/pubPDFs/manda.pdf
Computer Fraud and Abuse Act of 1986, 18 USC § 1030.
Conspiracy to Commit Offense or to Defraud United States, 18 USC § 371.
Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003, 15 USC §§ 7701–7713.
Copyright Act of 1976, Pub L. No. 94–553, 90 Stat 2541 (Oct. 19, 1976), codified at various parts of Title 17 U.S. Code.
Curtis, G. (2012). The law of cybercrimes and their investigations. Boca Raton: CRC Press.
Department of Justice. (2010). Prosecuting computer crimes. Computer Crime and Intellectual Property Section, Criminal Division. Published by the Office of Legal Education, Executive Office for United States Attorneys. Accessed online on 9/18/2019 at: https://www.justice.gov/sites/default/files/criminal-ccips/legacy/2015/01/14/ccmanual.pdf
Fair and Accurate Credit Transactions Act of 2003, 15 USC § 1681.
Federal Trade Commission. (2019). Gramm-Leach-Bliley Act. Accessed online on 11/8/2019 at: https://www.ftc.gov/tips-advice/business-center/privacy-and-security/gramm-leach-bliley-act
Fraud and Related Activity in Connection with Access Devices, 18 USC § 1029.
Fraud and Swindles, 18 USC § 1341.
Fraud by Wire, Radio, or Television, 18 USC § 1343.
Furnell, S. (2002). Cybercrime: Vandalizing the information society. London: Addison-Wesley.
Hinduja, S., & Patchin, J. W. (2018). State bullying laws. Cyberbullying Research Center. Accessed 11/8/2019 at: https://cyberbullying.org/Bullying-and-Cyberbullying-Laws.pdf
Holt, T. J., & Bossler, A. M. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. London: Routledge.
Holt, T. J., Bossler, A. M., & Seigfried-Spellar, K. C. (2018). Cybercrime and digital forensics: An introduction (2nd ed.). London: Routledge.
Identity Theft and Assumption Deterrence Act of 1998, 18 USC § 1028. 
Identity Theft Enforcement and Restitution Act of 2008, 18 USC § 3663(b).
Interstate Communications, 18 USC § 875.
Krone, T. (2004). A typology of online child pornography offending. Trends & Issues in Crime and Criminal Justice, 279, 1–6.
Lane, F. S. (2000). Obscene profits: The entrepreneurs of pornography in the cyber age. New York: Routledge.
Morphy, E. (2008). The Computer Fraud Act: Bending a law to fit a notorious case. E-Commerce Times, December 9, 2008. Accessed online on 11/8/2019 at: https://www.ecommercetimes.com/story/65424.html
National Conference of State Legislatures (NCSL). (2019a). Computer crime statutes. Accessed online on 11/27/2019 at: http://www.ncsl.org/research/telecommunications-and-information-technology/computer-hacking-and-unauthorized-access-laws.aspx
National Conference of State Legislatures (NCSL). (2019b). Identity theft. Accessed online on 11/27/2019 at: http://www.ncsl.org/research/financial-services-and-commerce/identity-theft-state-statutes.aspx
National Conference of State Legislatures (NCSL). (2019c). Security breach notification laws. Accessed online on 11/27/2019 at: http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx
National Conference of State Legislatures (NCSL). (2019d). Fighting revenge porn and ‘Sextortion.’ Accessed online on 11/27/2019 at: http://www.ncsl.org/research/telecommunications-and-information-technology/fighting-revenge-porn-and-sextortion.aspx
National Conference of State Legislatures (NCSL). (2019e). Cybersecurity legislation 2019. Accessed online on 11/27/2019 at: http://www.ncsl.org/research/telecommunications-and-information-technology/cybersecurity-legislation-2019.aspx
O’Connor, K., Drouin, M., Yergens, N., & Newsham, G. (2017). Sexting legislation in the United States and abroad: A call for uniformity. International Journal of Cyber Criminology, 11, 218–245.


Obscene or Harassing Telephone Calls in the District of Columbia or in Interstate or Foreign Communications, 47 USC § 223.
Patchin, J. W., & Hinduja, S. (2006). Bullies move beyond the schoolyard: A preliminary look at cyberbullying. Youth Violence and Juvenile Justice, 4(2), 148–169.
Quinn, J. F., & Forsyth, C. J. (2013). Red light districts on blue screens: A typology for understanding the evolution of deviant communities on the internet. Deviant Behavior, 34, 579–585.
Stalking, 18 USC § 2261A.
Sweeny, J. (2011). Do sexting prosecutions violate teenagers’ constitutional rights? San Diego Law Review, 48, 951–991.
Unlawful Access to Stored Communications, 18 USC § 2701.
US Department of Justice. (2018). Citizen’s guide to U.S. federal law on obscenity. Accessed online on 11/8/2019 at: https://www.justice.gov/criminal-ceos/citizens-guide-us-federal-law-obscenity
US Department of Justice. (2019a). 18 USC Section 1341 – Elements of mail fraud. Accessed online on 11/27/2019 at: https://www.justice.gov/jm/criminal-resource-manual-940-18-usc-section-1341-elements-mail-fraud
US Department of Justice. (2019b). Highlights of the USA PATRIOT Act. Accessed online 11/8/2019 at: https://www.justice.gov/archive/ll/highlights.htm
Wall, D. S. (2001). Cybercrimes and the internet. In D. S. Wall (Ed.), Crime and the internet (pp. 1–17). New York: Routledge.
Working to Halt Online Abuse (WHOA). (2019). U.S. laws. Accessed 11/8/2019 at: https://www.haltabuse.org/resources/laws/index.shtml
World Intellectual Property Organization (WIPO). (2019). Berne Convention for the Protection of Literary and Artistic Works. Accessed 11/8/2019 at: https://www.wipo.int/treaties/en/ip/berne/
Yar, M. (2013). Cybercrime and society (2nd ed.). London: Sage.

Legislative Frameworks: The United Kingdom

14

Patrick Bishop

Contents

Introduction 282
Cyber-Dependent Crimes: Computer Misuse Act 1990 283
The Background to the Computer Misuse Act 1990 283
The Computer Misuse Act 1990: General Approach 284
Offences Provided by the Computer Misuse Act 1990 286
Cyber-Harassment: The Protection from Harassment Act 1997 292
Protection from Harassment Act 1997 293
The Offence of “Stalking” 293
Offensive or Malicious Online Communications 294
Communications Act 2003, s.127 294
Disclosing Private Sexual Photographs and Films with Intent to Cause Distress: Criminal Justice and Courts Act 2015, s.33 296
Online Sexual Grooming 298
Online Fraud 300
The Fraud Act 2006 300
Possession and Making or Supplying Articles for Use in Frauds 301
Conclusion 302
Cross-References 303
References 303

Abstract

This chapter discusses the cybercrime legislative framework within the United Kingdom, including computer misuse, cyber-harassment, offensive online communications, online sexual grooming, and online fraud. Following a discussion of the scale of cybercrime in the UK, this chapter provides an analysis of criminal offences which are designed to combat cybercrime or are capable of combatting

P. Bishop (*)
Cyber Threats Research Centre, Hilary Rodham Clinton School of Law, Swansea University, Swansea, UK
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_4


cybercrime. The prevalence of broad statutory language as a means of ensuring maximum flexibility in a fast-changing technological landscape is highlighted as a notable feature of the UK legislative framework.

Keywords

Cybercrime legislation in the UK · Computer misuse · Cyberharrassment · Illegal online communications · Online fraud

Introduction

To make a rather trite assertion, cybercrime in the United Kingdom is a significant problem; the latest results from the British Crime Survey for the year ending September 2018 reveal over a million incidents of computer misuse and 0.5 million incidents each of computer virus distribution and unauthorized access to personal information (Office for National Statistics 2019). Indeed, when cybercrimes were first included in the British Crime Survey in 2017, it was discovered that individuals are 10 times more likely to be a victim of fraud and computer misuse than a victim of theft, and 35 times more likely than of robbery (Office for National Statistics 2018).

In terms of the classification of cybercrime, it has become conventional to divide cyber criminality into two broad categories, namely, cyber-dependent crimes and cyber-enabled crimes (McGuire and Dowling 2013); the former are offences that can only be committed via the use of a computer (either networked or non-networked) or other forms of information and communication technology (ICT). The latter comprises crimes which pre-date the Internet or World Wide Web and may be committed either with or without the use of a computer or other form of ICT but are commonly, even predominantly, committed via such means; online fraud is perhaps the most obvious example. In relation to cyber-enabled crimes, a notable feature of the legislative landscape in the UK is the absence of any statutory provisions designed specifically to combat cybercrime; the preferred option seems to be the drafting of applicable laws in a sufficiently broad manner so that the relevant offence may be committed either online/with the aid of ICT or offline/without the aid of ICT. In terms of cyber-dependent crimes, the central legislative enactment, namely, the Computer Misuse Act 1990, is clearly bespoke in its approach and application, but the desire to promote flexibility remains an implicit but clear objective of the statutory drafting.
The structure of this chapter will broadly follow the dichotomy highlighted above by focusing on the following areas:

• Cyber-dependent crimes
  – The Computer Misuse Act 1990
• Cyber-enabled crimes
  – Cyber-harassment: the Protection from Harassment Act 1997 and Criminal Justice and Courts Act 2015, s.33
  – Offensive communications: Communications Act 2003, s.127
  – Online grooming: Sexual Offences Act 2003, s.15
  – Online fraud: Fraud Act 2006

14 Legislative Frameworks: The United Kingdom
At the outset, it should be noted that the list of key statutory provisions above is by no means exhaustive and merely represents a broad framework of the main legislative responses to criminal conduct committed online or via ICT. Perhaps the most significant omission is the legal control of pornography in the UK; numerous statutes combat pornography, both adult and child, and both online and offline. Indeed, anti-pornography law in the UK is so extensive as to warrant specific discussion beyond the scope of this chapter.

Cyber-Dependent Crimes: Computer Misuse Act 1990

The Background to the Computer Misuse Act 1990

The UK Law Commission published a Consultation Paper and a Report (both entitled "Computer Misuse") in 1989. Both publications highlighted the perceived need for bespoke computer misuse legislation. In conjunction with the views of the Law Commission, the case of R v Gold [1988] 1 AC 1063 added to the impetus for the creation of new criminal offences. The House of Lords (at the time, the highest appellate court in the UK) concluded that the offence of forgery (Forgery and Counterfeiting Act 1981, s.1) could not be utilized to combat an instance of what today would be considered unauthorized access to a computer system. The case concerned unauthorized access to an information network operated by British Telecom. The defendants had underhandedly acquired a username and password and accessed the system on several occasions. The database could be legitimately accessed on payment of a fee; as such, the most obvious criminal offence would have been obtaining services by deception (Theft Act 1978, s.1, now repealed). However, there was considerable uncertainty as to whether this offence could be committed when the "victim" was a machine, on the basis that only a person may be deceived (see the discussion of "Online Fraud" below). As such, a decision was taken to bring a prosecution under the Forgery and Counterfeiting Act 1981; s.1 of the Act provides the general offence of forgery:

A person is guilty of forgery if he makes a false instrument, with the intention that he or another shall use it to induce somebody to accept it as genuine, and by reason of so accepting it to do or not to do some act to his own or any other person's prejudice.

s.10(3) of the Act further provides that:

References to inducing somebody to accept a false instrument as genuine. . . include references to inducing a machine to respond to the instrument or copy as if it were a genuine instrument.

Thus the possibility or impossibility of deceiving a machine was not an issue. The defendants were convicted at trial, but their appeal was allowed by the Court of Appeal and confirmed by the House of Lords. The main issue before the court was whether the defendants had created a "false instrument" as required by the Act; s.8(1)(d) provides a broad definition of instrument: "any disc, tape, sound track, or other device on or in which information is recorded or stored by mechanical, electronic, or other means." The House of Lords concluded that the electronic signals comprised of identity codes used by the defendants could not be considered tangible in the same sense as a disc or tape. Lord Brandon was also influenced by the use of the words "recorded" and "stored," which clearly imply an element of permanency:

The words "recorded" and "stored" are words in common use which should be given their ordinary and natural meaning. In my opinion both words in their ordinary and natural meaning connote the preservation of the thing which is the subject matter of them for an appreciable time with the object of subsequent retrieval or recovery. (pp. 1072–1073)

Given that the electronic signals were present in the system for such a fleeting moment, they could not have been recorded or stored. Therefore no false instrument had been made, which, in turn, removed any possible criminal liability under the Act. As a general proposition, both the Court of Appeal and the House of Lords were unimpressed by the attempt to bring computer hacking within the ambit of the offence of forgery; Lord Lane CJ in the Court of Appeal noted:

The Procrustean attempt to force these facts into the language of an Act not designed to fit them produced grave difficulties for both judge and jury which we would not wish to see repeated. The appellants' conduct amounted in essence, as already stated, to dishonestly gaining access to the relevant. . . data bank by a trick. That is not a criminal offence. If it is thought desirable to make it so, that is a matter for the legislature rather than the courts. We express no view on the matter. (p. 1124)

The blasé attitude of the UK courts to an instance of unauthorized access may seem striking when viewed through a contemporary lens. However, prior to the digital revolution and the ubiquity of ICT in everyday life, the approach of the courts is perhaps understandable. Nevertheless, the decision was decried as a so-called "hacker's charter" and, together with the Law Commission report, eventually led to the passing of the Computer Misuse Act 1990 (CMA 1990). Since its original enactment, the CMA 1990 has been significantly amended by the Police and Justice Act 2006 and the Serious Crime Act 2015 but remains the cornerstone of cyber-dependent criminal law in the UK.

The Computer Misuse Act 1990: General Approach

As a general proposition, it may be argued that an activity which is lawful in the absence of a computer should not be considered illegal merely because a computer has been utilized. In this context, computer hacking could be viewed as the electronic equivalent of the tort (civil wrong) of trespass to land. Under English law, a person who gains unauthorized access to someone else's land (a trespasser) is not considered to have committed a crime unless or until they engage in further aggravating conduct. For example, a person who trespasses on land with the intention of stealing may be guilty of burglary. The CMA 1990 adopts a different approach: unauthorized access per se is considered illegal even if the person who has acquired access does not intend to damage the computer system, cause loss, or have any other form of dishonest/malicious intent. The Law Commission had previously recommended that simple unauthorized access should be sufficient to create criminal liability:

[B]ecause of the possibility that any attempted entrant may have had password access to important levels of authority, sometimes to a level which has enabled him to delete records of his activities from the system, any successful unauthorised access must be taken very seriously. Substantial costs are therefore incurred in (i) taking security steps against unauthorised entry and in the equally important precaution of monitoring attempts to enter; and (ii) investigating any case, however trivial, where unauthorised entry does in fact occur. (Law Commission 1989, para.1.29)

However, the Law Commission recommended that the offence should be considered relatively minor, with a maximum sentence of 3 months' imprisonment. The CMA 1990 provided a maximum sentence of 6 months' imprisonment, and reforms made to the CMA 1990 by the Police and Justice Act 2006 increased the maximum penalty to 2 years' imprisonment. The CMA 1990 does not provide a definition of the term "computer." The Law Commission considered that any attempt to define "computer" would be fraught with difficulty; any definition would be: "So complex, in an endeavour to be all embracing, that they are likely to produce extensive argument" (Law Commission 1989, para.3.39). This may be contrasted with the position in the USA, which provides the following definition:

[A]n electronic, magnetic, optical, electrochemical, or other high speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device. (Computer Fraud and Abuse Act 1984, 18 USC s.1030(e)(1))

The tortuous and all-embracing nature of the above definition perhaps supports the view of the Law Commission that any attempt at defining a computer is an extremely difficult task. While the CMA 1990 is clearly designed to deal with the misuse of computers as commonly understood in everyday language, and from reported prosecutions it seems the Act has only been used to deal with traditional cases of computer hacking and other instances of unauthorized access, many items make extensive use of micro-processing chips. Lloyd posits the example of a washing machine, whose circuitry will contain the programs necessary for the performance of designated tasks; it might therefore be argued that a person who makes unauthorized use of a washing machine might be guilty of an offence under the CMA 1990 (Lloyd 2008). While at the time Lloyd was writing, this hypothetical example might have seemed somewhat far-fetched, the development of the Internet of Things has undoubtedly created vulnerabilities for a whole host of electronic home devices, and thus the potentially wide-ranging scope of the CMA 1990 could be considered a strength on the basis that it allows the law to adapt to changes in technology without the need for legislative amendments.

Offences Provided by the Computer Misuse Act 1990

The CMA 1990 provides five specific offences, namely: unauthorized access (the so-called basic offence); unauthorized access with intent to commit further offences (the ulterior intent offence); unauthorized acts with an intent to impair; unauthorized acts causing, or creating risk of, serious damage; and making, supplying, or obtaining articles for use in an offence under the Act. Each offence will be considered in turn.

Unauthorized Access: The Basic Offence

The offence of unauthorized access contained in s.1 of the CMA 1990 is the basic "hacking" offence, which provides:

(1) A person is guilty of an offence if—
(a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer, or to enable any such access to be secured;
(b) the access he intends to secure, or to enable to be secured, is unauthorised; and
(c) he knows at the time when he causes the computer to perform the function that that is the case.

The elements of the offence highlighted above can be reduced to three questions: Has the defendant obtained or enabled access? Was the access unauthorized? Did the defendant know that the access was unauthorized? If these three questions are answered in the affirmative, then the defendant is guilty of the section 1 offence. However, to fully appreciate the scope of section 1, further discussion is necessary. The actus reus of the basic offence is simply causing a computer to perform any function, which is further defined by section 17(2):

A person secures access to any program or data held in a computer if by causing a computer to perform any function he—
(a) alters or erases the program or data;
(b) copies or moves it to any storage medium other than that in which it is held or to a different location in the storage medium in which it is held;
(c) uses it; or
(d) has it output from the computer in which it is held (whether by having it displayed or in any other manner)

At the most basic level, simply using a computer would satisfy the actus reus of the s.1 offence. Further guidance is provided by case law; notably, the question of whether access is secured is not a technical one which requires evidence from IT professionals. For example, in Ellis v DPP (no 1) [2001] EWHC Admin 362, testimony from university employees was sufficient evidence to establish that the defendant had accessed library computers without the necessary permission.


Perhaps the commonly held view of a computer hacker is someone who gains remote access to the target computer system via a telephonic or Internet connection; whether the s.1 offence may be committed in a single computer scenario was considered by the Court of Appeal in A-G’s Reference (No. 1 of 1991) [1992] 3 WLR 432. The defendant had been employed as a sales assistant by a wholesale locksmith. After leaving the company’s employment, the defendant returned to the premises in order to purchase some equipment. Details of sales transactions were entered into a computer terminal. The defendant was familiar with the operation of the computer system and when a terminal was left unattended, he entered a code. The effect of the code was to instruct the computer to give a 70% discount on the equipment in question. When these facts came to light, the defendant was charged with the basic offence under the CMA 1990. The trial judge dismissed the charge on the basis that s.1 required that one computer should be used to obtain access to a program or data held on another computer. The interpretation of the basic offence adopted by the trial judge caused considerable concern on the basis that a restriction requiring remote access from one computer to another computer could have severely limited the scope of the Act. The Attorney-General therefore sought the opinion of the Court of Appeal on the interpretation to be given to the words: “causes a computer to perform any function with intent to secure access to any program or data held in any computer” (s.1(1)(a)). It was argued that the final phrase, “held in any computer,” should be read as “held in any other computer.” The Lord Chief Justice rejected such an interpretation: To read those words in that way, in our judgment, would be to give them a meaning quite different from their plain and natural meaning. 
It is a trite observation, when considering the construction of statutes, that one does not imply or introduce words which are not there when the plain and natural meaning is clear. In our judgment there are no grounds whatsoever for implying, or importing the word “other” between “any” and “computer,” or excepting the computer which is actually used by the offender from the phrase “any computer” at the end of the subsection (1)(a). (p. 437)

Thus, a s.1 offence may be committed where a defendant accesses a single computer without authorization.

Unauthorized Access

In addition to establishing the actus reus of causing a computer to perform any function, it also has to be established that the access obtained was unauthorized and that the person knew this to be the case. The CMA 1990, s.17(5) provides guidance on when access is unauthorized:

(a) he is not himself entitled to control access of the kind in question to the program or data; and
(b) he does not have consent to access by him of the kind in question to the program or data from any person who is so entitled

The issue of unauthorized access has been the subject of two cases where the defendant was authorized to use the computer system per se but access was made for an unauthorized purpose. In Director of Public Prosecutions v Bignell [1998] 1 Cr App R 1, it was concluded that, notwithstanding that a computer was accessed for an unauthorized purpose, the defendant was not guilty of an offence under s.1 on the basis that he was granted general authorization to access the relevant computer system. In reaching such a conclusion, the court was influenced by the view of the Law Commission (1989, para.3.36) that in such a scenario, an employee should be subject to internal disciplinary action but not subject to criminal liability. The approach of the court was heavily criticized on the basis that an activity in relation to a computer which exceeds the explicit terms of permission should be treated as an unauthorized access to the computer system overall (Smith 1998). In the extradition case, R v Bow Street Magistrates Court Ex parte Allison (No.2) [2000] 2 AC 216, the House of Lords plugged the potential lacuna in the law created by Bignell by concluding that s.1 of the CMA 1990 was not concerned with authority to access kinds of data but with authority to access the actual data involved, on the basis that s.1 was designed to combat all forms of unauthorized access, whether by insiders or outsiders. In addition to demonstrating that the person lacked authority to access the computer, it also has to be shown that they had knowledge that the access was unauthorized. Where the defendant is external to the target organization, proving knowledge is generally unproblematic. Most computer systems will provide a gateway page warning the user that further attempts to gain access constitute a criminal offence. The mere existence of such a warning may be sufficient to establish that any further attempts at accessing the relevant site are made in the knowledge that access is unauthorized.
While s.1 does not require the circumvention of security measures (at the most basic level, a username and password) as a condition precedent of criminal liability, the absence of such measures is likely to be relevant to the requirement to demonstrate knowledge that the access was unauthorized, i.e., a defendant may argue that in the absence of a warning page and/or username and password requirement, they did not know that further access was unauthorized.

Unauthorized Access with Intent to Commit or Facilitate Commission of Further Offences: The Ulterior Intent Offence

As discussed above, the basic hacking offence provided by s.1 does not require any form of dishonest or malicious intent on the part of the defendant. However, in many instances, unauthorized access to a computer system is merely the first stage in a chain of criminality; for example, the hacking of a bank's computerized database may be the first stage of a significant fraud. If the fraud is actually perpetrated, then a defendant can obviously be prosecuted for the full offence. However, s.2 of the CMA 1990 operates in a manner which brings forward in time the moment at which a serious criminal offence is committed. Section 2(1) of the CMA 1990 provides:

A person is guilty of an offence under this section if he commits an offence under section 1 above ("the unauthorised access offence") with intent—
(a) to commit an offence to which this section applies; or
(b) to facilitate the commission of such an offence (whether by himself or by any other person);
and the offence he intends to commit or facilitate is referred to below in this section as the further offence.

The further offence is defined by s.2(2) as an offence where the sentence is fixed by law (in UK law, effectively the offence of murder, given the mandatory life sentence for those convicted of murder) or an offence for which a person with no criminal record may, upon conviction, be sentenced to a term of 5 years' imprisonment or more. Thus, s.2 operates in a two-stage manner: first, the prosecution is required to prove that the defendant committed the basic hacking offence contrary to s.1; and second, that the defendant did so with the intent to commit a further offence. In most instances, the manner in which a s.1 offence is committed provides clear evidence from which the necessary intent may be inferred. For example, where a person exploits weaknesses in an e-commerce website and in doing so obtains the credit card details of customers, in the absence of a plausible justification for such actions, a court may infer the necessary intent from the conduct of the defendant. Given that s.2 is considered an aggravated form of the s.1 offence, the court is granted additional sentencing powers; those guilty of a s.2 offence may be subject to a maximum sentence of 5 years' imprisonment.

Unauthorized Acts with Intent to Impair, or with Recklessness as to Impairing, Operation of Computer

In addition to unauthorized access to computer material, or hacking, one of the obvious manifestations of cyber-dependent crime is the distribution of computer viruses and other forms of malware; the CMA 1990 creates specific offences designed to combat the use of malware and its creation, supply, and possession. The former is provided by s.3 of the CMA, entitled unauthorized acts with intent to impair, or with recklessness as to impairing, the operation of a computer.
By virtue of s.3:

(1) A person is guilty of an offence if–
(a) he does any unauthorised act in relation to a computer;
(b) at the time when he does the act he knows that it is unauthorised; and
(c) either subsection (2) or subsection (3) below applies.
(2) This subsection applies if the person intends by doing the act–
(a) to impair the operation of any computer;
(b) to prevent or hinder access to any program or data held in any computer;
(c) to impair the operation of any such program or the reliability of any such data; or
(d) to enable any of the things mentioned in paragraphs (a) to (c) above to be done.
(3) This subsection applies if the person is reckless as to whether the act will do any of the things mentioned in paragraphs (a) to (d) of subsection (2) above.

As originally enacted, the CMA 1990 required that the accused person acted intentionally. Amendments made by the Police and Justice Act 2006 expanded the minimum mens rea requirement to include recklessness. The scope of s.3 follows a similar pattern to s.1: a person is guilty of the s.3 offence if he does any unauthorized act in relation to a computer, he knows that that act is unauthorized, and, by doing the act, he intends to impair the operation of a computer or is reckless as to whether his acts will impair the operation of a computer. The scope of s.3 is comprehensive; at a basic level, the section will be engaged where an unauthorized user distributes a virus to the target computer (s.3(2)(a)), deletes a program from a computer system (s.3(2)(b)), or modifies the computer in such a way that impairs the reliability of data held on that computer (s.3(2)(c)), for example, by gaining unauthorized access to an email account with the effect that recipients of emails emanating from that account cannot rely on the assumption that the email has been sent from the authorized user (Zezev and Yarimaka v Governor of HM Prison Brixton and another [2002] EWHC Admin 589). Further, a careful reading of the statutory language reveals a non-prescriptive and expansive approach; there is no requirement that the unauthorized act of the defendant actually resulted in any of the adverse effects listed in s.3(2)(a), (b), and (c), as long as the prosecution can prove that the defendant intended or was reckless as to whether any of those effects occurred. Further, the impairment need not be serious or permanent; thus, minor impairment which endures for a brief period of time still potentially engages s.3. The maximum sentence for a s.3 offence is 10 years' imprisonment.

Denial of Service Attacks and Section 3

Following the enactment of the CMA 1990, there was some concern that the s.3 offence was unable to address Denial of Service (DOS) attacks on the basis that the communication sent to the target website or email address will often fall within the class of transmission which the target website/email was set up to receive. Such an argument was centered on the concept of authorization; namely, where an email address or a request for further information link is publicized online, the owner of the system implicitly grants authorization to any user of the website to make a request for information, and thus the actions of a DOS perpetrator cannot be construed as unauthorized as required by s.3. However, such concerns have been allayed by the decision in DPP v Lennon [2006] EWHC 1201 (Admin), where Jack J concluded that although a party with an email address must give some consent to the receipt of emails and to any consequential addition of data to the computer system involved, such consent does not: ". . .cover emails which are not sent for the purpose of communication with the owner, but are sent for the purpose of interrupting the proper operation and use of his system" (para.10), thus confirming that s.3 is able to combat DOS attacks.

Unauthorized Acts Causing, or Creating Risk of, Serious Damage

In a contemporary context, terrorist use of the Internet and, more relevant for present purposes, cyber-terrorism, is a prominent feature of public discourse. Indeed, the threat posed by a possible large-scale cyber-attack on critical national infrastructure features prominently in the UK government's national security policy (National Security Capability Review, 2018). In response to the growing threat of cyber-terrorism, the Serious Crime Act 2015, s.41, amended the CMA 1990 by inserting s.3ZA, which creates the offence of unauthorized acts causing, or creating the risk of, serious damage, and provides:

(1) A person is guilty of an offence if—
(a) the person does any unauthorised act in relation to a computer;
(b) at the time of doing the act the person knows that it is unauthorised;
(c) the act causes, or creates a significant risk of, serious damage of a material kind; and
(d) the person intends by doing the act to cause serious damage of a material kind or is reckless as to whether such damage is caused.

Damage of a material kind is defined by s.3ZA(2) as damage to human welfare in any place; damage to the environment in any place; damage to the economy of any country; or damage to the national security of any country. Further guidance is provided by s.3ZA(3), which provides that an act causes damage to "human welfare" only if it causes: loss to human life; human illness or injury; disruption of a supply of money, food, water, energy, or fuel; disruption of a system of communication; disruption of facilities for transport; or disruption of services relating to health. The conduct element of the offence is comprised of the following components: a person must do an unauthorized act in relation to a computer (as defined in section 17(8) of the CMA 1990); and that unauthorized act must result, whether directly or indirectly (s.3ZA(4)), in serious damage to the economy, the environment, national security, or human welfare, or create a significant risk of such damage. Thus the offence has a clear inchoate element in that it is not necessary to prove that serious damage has occurred; a significant risk of such damage is sufficient. The mens rea of the offence comprises two distinct elements. The first is that, in relation to the unauthorized act, the defendant must know that the act he or she does is unauthorized (section 3ZA(1)(b) of the CMA). The second element relates to the consequences: the defendant must have intended to cause harm (that is, serious damage to the economy, environment, human welfare, or national security) or have been reckless as to whether such harm was caused. The rationale of s.3ZA is conceptually similar to s.2, i.e., to bring forward in time the point at which a serious criminal offence is committed. For example, a person who engages in a cyber-attack which actually leads to the loss of human life could be prosecuted for murder.
However, if this desired result is not achieved but the actions of the person create a significant risk of the loss of human life, then they may be prosecuted under s.3ZA. In terms of sentencing powers, where the unauthorized act results in, or creates a significant risk of, serious damage to human welfare or to national security, the maximum sentence is life imprisonment. Where the unauthorized act results in, or creates a significant risk of, serious damage to the economy or the environment, the maximum sentence is 14 years.

Misuse of Devices

Those who commit offences under the CMA 1990 will often, if not invariably, do so with the aid of software or devices, for example, a program designed to circumvent password protection measures installed on a website. Such devices may either be designed and created by the user or purchased from others. In order to attempt to combat the so-called market in hacker's tools, the Police and Justice Act 2006 amended the CMA 1990 so as to include s.3A, entitled "Misuse of Devices," which states:

(1) A person is guilty of an offence if he makes, adapts, supplies or offers to supply any article intending it to be used to commit, or to assist in the commission of, an offence under section 1 or 3.
(2) A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1 or 3.
(3) A person is guilty of an offence if he obtains any article
(a) intending to use it to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA, or
(b) with a view to its being supplied for use to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA.
(4) In this section "article" includes any program or data held in electronic form.

The making, adaptation, etc. of an article is only an offence if the person intends that it be used in the course of an offence. This is potentially a high threshold to be met in bringing a prosecution. The second offence, of supplying an article, provides a reduced mens rea of "believing" that the article is likely to be used in the commission of an offence. Paragraph 3 provides the offence of obtaining an article with a view to its being supplied for use in an offence or intending to use it to commit an offence. Thus, s.3A is focused on the entire supply chain, from the making/adapting of articles, through to their supply, and finally the possession of such articles, providing the necessary intent can be proven. In terms of the mental element of each variant of s.3A, the ease with which the prosecution is able to prove intent or belief is to some extent contingent on the nature of the device in question. Where the device/software is only capable of a criminal use, it is reasonable to assume that the prosecution's task will be considerably easier than would be the case with dual-use devices which can be used for either a legitimate or a criminal purpose. Further, the characteristics of the defendant might also be relevant in this regard; for example, a penetration tester is likely to use password circumvention software for legitimate purposes connected with their profession. The maximum penalty which may be imposed for a s.3A offence is 2 years' imprisonment.

Cyber-Harassment: The Protection from Harassment Act 1997 Cyber-harassment, like harassment more generally, is an offence which may be committed in a myriad of ways, including sending threatening or abusive emails or making such comments on social media sites, discussion boards, or via instant messaging. While harassment is not a new phenomenon, the Internet and the ubiquity of social media, together with the online disinhibition effect, have undoubtedly provided perpetrators with an additional means of engaging in harassing conduct.

14

Legislative Frameworks: The United Kingdom

293

Protection from Harassment Act 1997 The Protection from Harassment Act 1997 (PHA 1997) was enacted following public concern over the problem of "stalking" after a number of cases were publicized in which individuals became obsessed with ex-girlfriends/boyfriends or celebrities. The PHA 1997 created both civil remedies and criminal offences in respect of behavior which amounts to harassment. The PHA 1997 was enacted during the early stages of the Internet revolution and is not specifically focused on online harassment; nevertheless, the key mechanisms contained in the PHA 1997 are sufficiently flexible and broad in scope to combat such behavior. By virtue of section 1(1) of the PHA 1997, a person must not pursue a course of conduct which amounts to harassment of another and which he knows or ought to know amounts to harassment of the other. A person who contravenes s.1 could be liable for the offence of harassment (s.2) and/or subject to civil action (s.3). A number of definitional questions arise. First, for the purposes of section 1, what constitutes harassment? The term is undefined save to say that references to harassing a person include alarming the person or causing the person distress (s.7(2)). Second, the PHA 1997 requires a "course of conduct," which by virtue of section 7(3)(a) must include conduct on at least two occasions. Finally, a defendant, in order to be guilty of an offence under the Act, must know, or ought to have known, that the conduct in question amounted to harassment. The test to be applied in determining whether a person ought to have known is an objective one, based on whether a reasonable person in possession of the same information as the defendant would have thought that the conduct in question amounts to harassment. In addition to the basic offence, s.4 creates the aggravated offence of "putting people in fear of violence," which carries a maximum sentence on indictment of 5 years imprisonment.
If a defendant is convicted under the PHA 1997, the court may also impose a restraining order under s.5, prohibiting future conduct likely to be injurious to the victim. As with the aggravated offence under s.4, breach of a restraining order carries a maximum sentence of 5 years imprisonment. Following amendments to the PHA 1997 by the Domestic Violence, Crime and Victims Act 2004, a new s.5A was inserted which also permits a court to issue a restraining order where the defendant has been acquitted of an offence under the PHA 1997.

The Offence of "Stalking" Although passed in response to the problem, as originally enacted, the PHA 1997 did not specifically refer to stalking. Following a campaign by various anti-stalking groups, the Protection of Freedoms Act 2012 amended the PHA 1997 to include the new offence of stalking (s.2A) and a more serious offence of stalking involving fear of violence or serious alarm or distress (s.4A). In order to be guilty of the stalking offence, a person must pursue a course of conduct (as defined by s.1(1), i.e., two separate instances of harassing conduct) and that course of conduct must amount to stalking. In relation to the latter requirement, a person's course of conduct amounts

294

P. Bishop

to stalking if the harassing conduct involves acts or omissions one associates with stalking; a list of indicative conduct is provided by s.2A(3) and includes: following a person; contacting, or attempting to contact, a person by any means; publishing any statement or other material relating to the victim or purporting to originate from the victim; monitoring the use of the Internet, email, or any other form of electronic communication by a person; loitering in any place (whether public or private); interfering with any property in the possession of a person; or watching or spying on a person. As with the general offence of harassment, the stalking offence is not a cyber-specific provision; however, a number of the indicative acts listed above were clearly provided with cyber-stalking in mind. The s.2A offence is punishable by a maximum sentence of 51 weeks imprisonment.

Offensive or Malicious Online Communications UK criminal laws designed to combat inappropriate or objectionable communications have a long history dating back to the Post Office (Amendment) Act 1935, s.10(2)(a), i.e., the sending of a message by telephone which is of a grossly offensive or indecent, obscene, or menacing character. Since then, the law in this area has been constantly updated to take into account advancements in communication technology, but the key element of gross offensiveness has remained a constant feature of contemporary legislative measures. Cyber-harassment will often take the form of a perpetrator sending emails designed to cause anxiety or distress to the victim. If this is done on more than one occasion so as to represent a "course of conduct," then the PHA 1997 (above) may be utilized. However, the sending of such emails, even if done on one occasion, may constitute a criminal offence by virtue of s.127 of the Communications Act 2003.

Communications Act 2003, s.127 The Communications Act 2003, s.127, provides the offence of improper use of a public electronic communications network. The offence may be committed in one of two ways, namely: by sending a message that is grossly offensive or of an indecent, obscene, or menacing character; or, for the purposes of causing annoyance, inconvenience, or needless anxiety, sending a message which the defendant knows to be false. A number of definitional issues arise: the offence is limited to the sending of a message via a "public electronic communications network," defined by s.151 as "an electronic communications network provided wholly or mainly for the purpose of making electronic communications services available to members of the public." The word "public" draws a distinction between communications networks open to members of the public, such as the Internet, and internal communication systems, such as a company's internal intranet. The definition would clearly encompass any messages sent via email, instant messaging, social media posts, or
any posting on a webpage; for example, while s.127 uses the generic term “message,” it has been held that a tweet falls within the scope of the offence (Chambers v DPP [2012] EWHC 2157 (Admin), although on the facts, the defendant was not liable). The first variant of the offence contained in s.127, namely, sending a message which is grossly offensive or of an indecent, obscene, or menacing character (s.127(1)) is particularly broad in scope: a message which is grossly offensive or of a menacing character may take a number of forms whereas a message which is indecent or obscene would most obviously apply to pornographic material. While there is a relative dearth of case law considering the application of s.127, the leading case, DPP v Collins [2006] UKHL 40, has provided significant guidance on its interpretation, particularly, in relation to the concept of gross offensiveness. First, the rationale of s.127 is the protection of the public electronic communications network itself and not users of the network. Thus, a message may be illegal notwithstanding that the recipient of the message was not subjectively grossly offended and may even have welcomed the communication; for example, providing the threshold of gross offensiveness is met, a racist who sends a racist message to another racist is potentially liable. Indeed, even if the message is not received or read by the intended recipient then an offence may still be committed. In practice, prosecutions will almost invariably involve a scenario where the intended recipient has read the message, considers it to be objectionable, and has reported the communication to the police; without such a “victim,” the message is unlikely to be brought to the attention of the relevant authorities. 
Second, the court concluded that the test to be used for gross offensiveness is an objective one based on the "standards of an open and just multiracial society." Finally, the first variant of the offence does not explicitly include a requisite mental element on the part of the defendant. However, the court concluded that the offence is not one of strict liability and requires an intention to send a grossly offensive message or knowledge that the message might be grossly offensive. The second variant of the offence, sending a message which the defendant knows to be false for the purposes of causing annoyance, inconvenience, or needless anxiety (s.127(2)), is more limited in scope and requires the prosecution to prove that the defendant acted with the required purpose, i.e., to cause needless anxiety. Given the breadth of the s.127 offence, its application raises obvious free speech concerns. In terms of cybercrime generally, it is often asserted that the nature of cyberspace causes considerable enforcement challenges, not least the ability of cybercriminals to use technical means, e.g., the TOR browser, to significantly reduce the likelihood of detection (Bishop 2015). However, in this particular context, online speech is arguably more likely to lead to enforcement action than would be the case with offline speech: Two decades ago, ill-judged remarks made in the heat of the moment or poor taste jokes among friends were unlikely to be on the radar of law enforcers. Now that those comments can be made available to the world at large and remain recorded and searchable, such expression has greater potential to come to the attention of prosecutors and litigators. (Rowbottom 2012)

The counter assertion to the above is that online communication has the potential to reach a much wider audience than is the case when statements are made "among friends" and thus has the potential to cause comparatively greater levels of harm. In any event, the scope of the s.127 offence places particular significance on prosecutorial discretion as a means of avoiding undue interference with freedom of speech. To this end, the relevant guidance issued by the Crown Prosecution Service (the primary prosecuting agency in England and Wales) states that a prosecution should only be instigated where there is sufficient evidence that the communication crosses the high threshold necessary to protect freedom of expression; unwelcome freedom of expression, such as speech which is offensive, shocking, or disturbing, and/or uninhibited and ill thought-out contributions, should not lead to a prosecution unless the statement in question is beyond what is considered tolerable in society (CPS 2018).

Disclosing Private Sexual Photographs and Films with Intent to Cause Distress: Criminal Justice and Courts Act 2015, s.33 In recent times, a phenomenon colloquially referred to as "revenge porn" has attracted the attention of legislators and policy makers. Although no official legal definition of revenge porn exists, it typically comprises the sharing of private, sexual photographs and/or videos of another person without their consent and with the purpose of causing embarrassment and distress. Although not exclusively the case, such activity often takes place in the context of a failed relationship, hence the use of the word "revenge," where one party will distribute images obtained for personal perusal during the relationship after the relationship has ended. As with cyber-harassment and offensive communications considered above, social media represents a well-used platform for such conduct (Pegg 2018). The sending or posting of sexual images may constitute an offence under the Protection from Harassment Act 1997 providing a course of conduct (two or more instances of harassing conduct) is present. In addition, the disclosure of a single image may constitute an offence under the Communications Act 2003 providing the image in question is construed as "indecent or obscene." However, prior to the enactment of the Criminal Justice and Courts Act 2015, s.33, the disclosure of a single image which could not be considered indecent or obscene, and in which the person portrayed was an adult (so as not to contravene child pornography legislation), would not constitute a criminal offence. This potential lacuna in the law was remedied by s.33, which created the offence of disclosing private sexual photographs and films with intent to cause distress.
By virtue of s.33(1), the offence is committed where a person discloses a private sexual photograph/film if the disclosure is made without the consent of the individual who appears in the photograph or film and with the intention of causing that individual distress. However, it is not an offence under s.33 to disclose the photograph/film only to the individual that appears in the photograph/film (s.33(2)).

The definition of "disclose" provided by s.34 is broad in scope, namely: A person "discloses" something to a person if, by any means, he or she gives or shows it to the person or makes it available to the person. Further guidance which accompanies the Act states: Disclosure would ... include electronic disclosure of a photograph or film, for example, by posting it on a website or e-mailing it to someone. It would also include the disclosure of a physical document, for example, by giving a printed photograph to another person or displaying it in a place where other people would see it.

The definition of "private" and "sexual" is provided by s.35: a photograph/film is "private" if it shows something that is not of a kind ordinarily seen in public (s.35(2)). A photograph/film is "sexual" if it shows all or part of an individual's exposed genitals or pubic area; it shows something that a reasonable person would consider to be sexual because of its nature; or its content, taken as a whole, is such that a reasonable person would consider it to be sexual (s.35(3)). The use of the word "or" in s.35(3) indicates that only one of the three variants of a sexual photograph/film is required. The first variant is prescriptive in scope by including a reference to an individual's genitals or pubic area. The second and third variants include an element of objectivity by referring to a reasonable person but are considerably less prescriptive, presumably in an attempt to maximize flexibility; for example, an image which shows female breasts but not the person's pubic area is nevertheless likely to be considered sexual by a reasonable person. In order to attract liability, the photo/film has to be both private and sexual. Thus, a photograph of two people kissing might be construed as "sexual" by a reasonable person. However, such an image is unlikely to be construed as "private" on the basis that kissing is an activity that is often seen in public. The s.33 offence has a clear mens rea element, which requires the prosecution to prove that the person who disclosed intended to cause distress. It is submitted that in most instances, this requirement will be easily satisfied; the very fact that private sexual images are disclosed is obvious evidence from which the jury or court may infer the necessary intent. In any event, s.33 has not been the subject of appellate-level judicial analysis, so the exact scope of the offence is yet to be determined.
A number of defenses are available, namely: the defendant reasonably believed that the disclosure was necessary for the purposes of preventing, detecting, or investigating crime (s.33(3)); the disclosure was made in the course of, or with a view to, the publication of journalistic material, and the defendant reasonably believed that, in the particular circumstances, the publication of the journalistic material was, or would be, in the public interest (s.33(4)); or the defendant reasonably believed that the photograph or film had previously been disclosed for reward and he had no reason to believe that the previous disclosure for reward was made without the consent of the individual portrayed in the photograph/film (s.33(5)). The maximum sentence for a s.33 offence on conviction on indictment is 2 years imprisonment or a fine (or both) (s.33(9)(a)).


Online Sexual Grooming While there is no universally accepted definition of grooming, the term is generally used to describe the process by which a person engineers a relationship with a child in the hope of gaining the child's trust prior to some form of sexual contact. While sexual grooming is not a new phenomenon, the Internet, and in particular chat rooms/instant messaging and social media, has provided a new means via which a perpetrator can partake in sexual grooming. The anonymity associated with the Internet allows the perpetrator to disguise their identity, often by claiming to be of a similar age to the victim. The grooming process is usually comprised of several stages and can take several weeks/months or longer. Typically, the perpetrator may begin by engaging in innocent dialogue before gradually introducing sexual content into the conversation (Ost 2004). When one considers the "dark side" of the Internet, online sexual exploitation of children is often the first thing which springs to mind. In the UK, the distribution of child pornography has constituted a criminal offence since the enactment of the Protection of Children Act 1978, which has been variously amended to capture electronic images and those distributed and published online. However, prior to the Sexual Offences Act 2003, no specific offence of sexual grooming existed; a defendant could be guilty of some form of sexual assault or, providing that the grooming process involved more than merely preparatory acts, an attempted sexual offence (Criminal Attempts Act 1981, s.1(1)). However, in both instances, the response of the criminal law lacked a clear proactive element; even where an attempted crime was a possible charge, the perpetrator was required to travel so far down the path of criminality as to place the victim in grave danger.
Following calls for the creation of a new offence by a number of child protection organizations (including the Internet Taskforce on Child Protection), the Sexual Offences Act 2003, s.15, created the offence of meeting a child following sexual grooming; the offence is construed in the following way: 15(1) A person aged 18 or over (A) commits an offence if– (a) A has met or communicated with another person (B) on one or more occasions and subsequently— (i) A intentionally meets B, (ii) A travels with the intention of meeting B in any part of the world or arranges to meet B in any part of the world, or (iii) B travels with the intention of meeting A in any part of the world, (b) A intends to do anything to or in respect of B, during or after the meeting mentioned in paragraph (a)(i) to (iii) and in any part of the world, which if done will involve the commission by A of a relevant offence, (c) B is under 16, and (d) A does not reasonably believe that B is 16 or over.

Several constituent parts of the offence may be gleaned from the above statutory language:
• The perpetrator (A) must be aged 18 or over and the victim (B) must be under 16. On the basis of s.15(1)(d), a person may be absolved of liability if he has a reasonable belief that B is 16 or over.
• A must have communicated with B on one or more occasions (prior to the enactment of the Criminal Justice and Courts Act 2015, s.36, communication on at least two occasions was required), and:
• A intentionally meets B; A travels with the intention to meet B; or A arranges to meet B; or
• B travels with the intention of meeting A.
• A's intention in meeting, travelling to meet, or arranging to meet B is to do anything which, if done, would involve the commission of an offence.

The approach of s.15 is designed to trigger criminal liability at an early stage; it is clear that the meeting does not actually have to occur; the offence is also triggered where a person arranges to meet a child or travels to meet a child. It is also clear that the act of grooming per se, without arranging a meeting, does not trigger the offence. However, in cases where an adult (aged 18 or over) engages in sexual communication with a child (under 16), a specific offence is provided by s.15A. In practice, the most difficult aspect of the offence for the prosecution to prove is that the defendant has met, travelled to meet, or arranged a meeting with the intent to commit a "relevant offence." By virtue of s.15(2), relevant offence includes any offence under part one of the Sexual Offences Act 2003, including, inter alia, rape (s.1), sexual assault (s.3), engaging in sexual activity in the presence of a child (s.11), and causing a child to watch a sexual act (s.12). This raises the question of how the prosecution can prove that the defendant has the necessary intent. In the absence of a confession, useful evidence may include transcripts of chat room conversations and emails/text messages sent between the offender and the victim.
Similarly, when perpetrators have been arrested en route to a meeting with a child, possession of certain items, e.g., condoms, lubricating jelly, alcohol, pornography, etc., represents evidence from which the jury may infer the necessary intent (R v G [2010] EWCA Crim 1693). The requirement to prove the necessary intent can to some extent limit the proactive element of the offence. For example, in the absence of transcripts of communications which might reveal the necessary intent, the only option for the police may be to allow the defendant to travel to meet the child in the hope that incriminating items are found in his possession. As the police are unlikely to be aware beforehand that the defendant possesses such items, they might be inclined to allow the meeting to take place so that the defendant's actions may be used to infer the necessary intent. Such an approach clearly places the child at significant risk; even where the police are able to intervene before an actual sexual offence is committed, it is not difficult to imagine a situation where the child in question suffers considerable psychological trauma. Further, the above discussion presupposes that the police are aware that the grooming process has commenced and a meeting has been arranged. The grooming process often involves the building of trust between an adult and child, and/or the child may be unaware that they have been communicating with an adult; in many cases, therefore, the police will not receive advance notice of a potential grooming offence. As such, it is often the case that it only becomes apparent that a child has been the victim of grooming where the
grooming process has culminated in an attempted or actual sexual assault of the child. This problem has been succinctly summed up by an explanatory memorandum to the Republic of Ireland's Criminal Law (Child Grooming) Bill 2014 (enacted in 2017), which noted: "the concern arises that if a meeting or steps towards a meeting is required, it may be too late to avert the threat to the child in question, even though grooming has already occurred." A possible solution to this problem is the increased use of technology, including the utilization of algorithms which are able to detect grooming-orientated language used in social media discourse (Amparo et al. 2014). Should this result in an indication that a social media user regularly uses such language in online communications, this information may be cross-referenced with the profile data on the user held by a social media company; for example, an adult user may have a large number of non-familial child "friends" and/or a significant number of rejected friend requests from non-familial children. The resulting information from this process could be utilized to trigger an alert to the user's potential victims and be passed on to the relevant investigatory authorities; such an approach has been strongly advocated by the National Society for the Prevention of Cruelty to Children, the pre-eminent child protection charity in the UK (NSPCC 2018).
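The two-stage screening approach described above might be sketched as follows. This is a deliberately simplified illustration, not a deployed system: the keyword list, thresholds, and profile field names are purely hypothetical assumptions, and real detection research (e.g., Amparo et al. 2014) uses far more sophisticated linguistic models.

```python
# Hypothetical sketch: (1) flag accounts whose messages repeatedly use
# grooming-associated language, then (2) cross-reference with profile signals
# such as a large number of non-familial child contacts. All keywords,
# thresholds, and field names below are illustrative assumptions only.

GROOMING_KEYWORDS = {"our secret", "don't tell", "how old are you", "send a pic"}

def language_hits(messages):
    """Count the messages that contain at least one flagged phrase."""
    return sum(
        1 for m in messages
        if any(kw in m.lower() for kw in GROOMING_KEYWORDS)
    )

def should_alert(profile, messages, min_hits=3, min_child_contacts=10):
    """Raise an alert only when both the language and profile signals fire."""
    if language_hits(messages) < min_hits:
        return False
    adult = profile.get("age", 0) >= 18
    many_child_contacts = (
        profile.get("non_familial_child_contacts", 0) >= min_child_contacts
    )
    return adult and many_child_contacts

profile = {"age": 45, "non_familial_child_contacts": 120}
messages = [
    "hi, how old are you?",
    "this is our secret ok",
    "don't tell your mum",
    "hello",
]
print(should_alert(profile, messages))  # prints: True
```

Requiring both signals to fire before alerting reflects the point made above: language screening alone would generate false positives, whereas cross-referencing with profile data narrows the alerts passed to investigatory authorities.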

Online Fraud Clearly, fraud is not a new phenomenon, but as is the case with all cyber-enabled crime, the Internet and ICT have provided fraudsters with a new means of committing fraud, often in a manner which is far more efficient than traditional methods. The ingenuity of fraudsters has even created a fraud lexicon, with terms like "phishing" and "advance fee fraud" becoming relatively well-known outside of the legal and IT communities. Historically, the criminal concept of fraud was made up of eight different "deception"-based offences under various Theft Acts from 1968 to 1996. In an electronic context, the obvious problem with the concept of deception was the prevalent view that a machine cannot be deceived, on the basis that a machine has no mind and cannot believe a proposition to be true when it is in fact false (Law Commission Report 2002). In response to the inability of the law to combat fraud committed online or via electronic means, the Fraud Act 2006 was enacted.

The Fraud Act 2006 The Fraud Act 2006 replaced the numerous fraud offences which existed prior to 2006 with a single general offence of fraud, albeit an offence which may be committed in one of three ways, namely: fraud by false representation (s.2); fraud by failing to disclose information (s.3); and fraud by abuse of position (s.4). There are two basic requirements which must be met before any of the three variants of the fraud offence can be charged. First, the behavior of the defendant must be dishonest. Second, it must also be his intention to make a gain or cause a loss to another. In relation to the former, the Act does not provide a definition of dishonesty. However, dishonesty is a well-known concept in English law; the courts
apply an objective test based on the following question: was the defendant dishonest by the standards of an ordinary, reasonable individual (having the same knowledge as the defendant)? Whether or not the defendant subjectively believed that he was acting honestly is irrelevant for the purpose of the test (Ivey v Genting Casinos [2017] UKSC 67). In relation to the latter requirement, s.5(2) limits gain and loss to money or other property and further states that gain or loss may be temporary or permanent. In relation to online fraud, the most useful of the three variants of the offence is unquestionably fraud by false representation. Many online frauds involve an express or implied false representation; for example, an email which claims that a bank customer is required to log in to their account because of a security risk (phishing) or one which states that a large amount of money resides in a foreign bank account which can be released upon the payment of a fee (advance fee fraud) are clear examples of fraud by false representation.

Possession and Making or Supplying Articles for Use in Frauds In common with the Computer Misuse Act 1990, which criminalizes both the act of unauthorized access and the possession and supply of hacker's tools, the Fraud Act 2006 contains specific offences relating to the possession, making, or supplying of articles for use in frauds. In this context, the possession offence is extremely wide; s.6(1) simply provides that: a person is guilty of an offence if he has in his possession or under his control any article for use in the course of or in connection with any fraud. By virtue of s.8, "article" includes: "any program or data held in electronic form." The obvious application of the offence would include the possession of credit card cloning software, but the offence is not limited to software specifically designed or modified for committing fraud; for example, a simple word-processing document containing illegitimately obtained credit card details could potentially trigger s.6. A number of definitional questions arise; first, what is meant by "possession or under his control"? "Possession" is a concept which has proven difficult for the criminal courts to define. In the context of articles comprised of a computer program or electronic data, one question which has arisen in another context is: if files have been deleted from a computer's hard drive, is the defendant still in possession of the files? In relation to the offence of possessing an indecent image of a child (Criminal Justice Act 1988, s.160), it has been held that a defendant is still in "possession" of deleted images as long as he has the technical skill to retrieve the files subsequent to their deletion, for example, by the use of file recovery software (R v Porter [2006] EWCA Crim 560). Given that practically any article may be used in the commission of a fraud, e.g., pen and paper, laptop, printer, etc., much will turn on mens rea. However, the offence contained in s.6 has no apparent mens rea requirement.
Nevertheless, after persistent lobbying during the passage of the bill through parliament, the attorney-general and solicitor-general confirmed that the offence is not one of strict liability. Thus it seems the prosecution will have to prove that the defendant had a general intention that the article be used in a fraud. In cases involving articles where it is difficult to imagine a legitimate use, e.g., credit card cloning software, proving a general intent that the article is to be used in the commission of a fraud will be a
relatively straightforward task. However, establishing the necessary intent may be more difficult in the case of dual-use articles, i.e., articles which can be used for legitimate and illegitimate purposes. In such cases, much will turn on the surrounding context; for example, in R. v Nimley [2011] 1 Cr App R(S) 120, the appellant used his mobile telephone to record three movies at his local cinema and then uploaded the material to a website where the movies could be viewed by other web users. In addition to charges relating to the distribution of copyrighted material, the appellant was also convicted under s.6 for the possession of an article for use in connection with a fraud, i.e., the mobile telephone. In addition to the s.6 offence, which is targeted at the demand for articles which may be used in fraud, s.7 targets the supply of such articles by providing the following offence: 7(1) A person is guilty of an offence if he makes, adapts, supplies or offers to supply any article– (a) knowing that it is designed or adapted for use in the course of or in connection with fraud, or (b) intending it to be used to commit, or assist in the commission of, fraud.

Similar offences exist in specific contexts (Communications Act, s.126; Mobile Telephones (Re-programming) Act 2002, s.2; Computer Misuse Act 1990, s.3A) but s.7 creates a more general offence designed to apply more widely, for example, software manufacturers who design programs for use in fraudulent activity. In terms of the actus reus, there is some uncertainty: . . .it is unclear whether s.7 extends beyond those articles designed/adapted exclusively for crime, i.e. which have no other purpose than a criminal one. A pack of cards that has been marked might be used for a magic trick or to deceive someone in a fraud. (Ormerod 2007)

However, in most instances, it will be relatively straightforward for the prosecution to prove that an article is designed/adapted for use in a fraud; for example, it is difficult to imagine a non-criminal purpose for software capable of cloning credit cards. Unlike s.6, the offence has a clear mens rea element, namely, that the defendant knows that the article is designed for use in connection with fraud or intends that it will be used in fraud. The s.7 offence has some advantages over charges that might otherwise be utilized by the prosecuting authorities. For instance, the offence may be utilized against a lone actor, whereas conspiracy to commit fraud requires more than one individual. Further, the offence of aiding and abetting requires that the fraud-related offence has actually been perpetrated; s.7 permits a prosecution at an earlier stage.

Conclusion

In accordance with its obligations under the Council of Europe Convention on Cybercrime, the overall framework of legislation within the UK includes offences capable of combating the vast majority of illicit uses of the Internet and ICT both in
terms of cyber-dependent crimes and cyber-enabled crimes. An evident feature of many of the offences considered in this chapter is the use of broad statutory language in an implicit or explicit attempt to ensure that the criminal law is sufficiently flexible to be capable of utilization in an ever-changing technological environment. While an attempt at future-proofing statutory offences is to be lauded, breadth and flexibility in the drafting of the law inevitably undermine legal certainty to a greater or lesser extent. This is potentially problematic in the context of the criminal law, the rationale of which, inter alia, is to provide a clear and unambiguous signal to society as a whole as to which conduct is acceptable and which is not. Despite this flexibility, it is also apparent that the overall corpus of cyber-offences in the UK has developed in a somewhat ad hoc manner, with amendments to existing legislation in response to perceived lacunae, or wholesale reform where it has become apparent that existing legislation is outdated. Regardless of any attempts to future-proof cybercriminal law, it is perhaps inevitable that the tortoise of the law will always lag behind the hare of technology (Stokes 2012).

Cross-References

▶ Computer Hacking and the Hacker Subculture
▶ Cybercrime Legislation in the United States
▶ Cyberstalking
▶ Defining Cybercrime
▶ Hate Speech in Online Spaces
▶ Historical Evolutions of Cybercrime: From Computer Crime to Cybercrime
▶ Revenge Pornography
▶ The Past, Present, and Future of Online Child Sexual Exploitation: Summarizing the Evolution of Production, Distribution, and Detection

References

Amparo, E., Cano, M., & Alani, H. (2014). Detecting child grooming behaviour patterns on social media. In L. Aiello & D. McFarland (Eds.), Social informatics, 6th international conference, Barcelona. New York: Springer.
Bishop, P. (2015). Cyberterrorism, criminal law and punishment-based deterrence. In L. Jarvis, S. Macdonald, & T. Chen (Eds.), Terrorism online: Politics, law and technology. Abingdon: Routledge.
Crown Prosecution Service. (2018). Social media – Guidelines on prosecuting cases involving communications sent via social media. Retrieved from https://www.cps.gov.uk/legal-guidance/social-media-guidelines-prosecuting-cases-involving-communications-sent-social-media
HM Government. (2018). National security capability review. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/705347/6.4391_CO_National-Security-Review_web.pdf
Law Commission. (1989). Computer misuse. Report (No. 186) Cm819.
Law Commission. (2002). Fraud. Report (No. 276) Cm5560.
Lloyd, I. (2008). Information technology law (5th ed.). Oxford: Oxford University Press.

McGuire, M., & Dowling, S. (2013). Cyber crime: A review of the evidence (Research report 75). London: Home Office.
National Society for the Prevention of Cruelty to Children. (2018). Report: How safe are our children? Retrieved from https://learning.nspcc.org.uk/media/1067/how-safe-are-our-children2018.pdf
Office for National Statistics. (2018). Overview of fraud and computer misuse statistics for England and Wales. Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/articles/overviewoffraudandcomputermisusestatisticsforenglandandwales/2018-01-25
Office for National Statistics. (2019). Crime in England and Wales: Year ending September 2018. Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/crimeinenglandandwales/yearendingseptember2018#computer-misuse-offences-show-a-decrease-in-computer-viruses. Last accessed 24 Feb 2019.
Ormerod, D. (2007). The Fraud Act 2006 – Criminalising lying? Criminal Law Review, 193–219.
Ost, S. (2004). Getting to grips with sexual grooming? The new offence under the Sexual Offences Act 2003. Journal of Social Welfare and Family Law, 26(2), 147–159.
Pegg, S. (2018). A matter of privacy or abuse? Revenge porn in the law. Criminal Law Review, 512–530.
Rowbottom, J. (2012). To rant, vent and converse: Protecting low level digital speech. Cambridge Law Journal, 71(2), 355–383.
Smith, J. (1998). Police officers securing access to Police National Computer for non-police purposes. Criminal Law Review, 53–54.
Stokes, E. (2012). Nanotechnology and the products of inherited regulation. Journal of Law and Society, 39(1), 93–112.

Cybercrime in India: Laws, Regulations, and Enforcement Mechanisms

15

Sesha Kethineni

Contents

Introduction
Structure of the Central Government
Cyber Laws
  Information Technology Act (IT Act) (2000)
  Information Technology (Amendment) Act (ITAA) (2008)
  Indian Penal Code (IPC)
  Special and Local Laws
Cybercrime Statistics
  Gender and Cybercrimes
  Government Initiatives to Prevent Cybercrimes
Conclusion
Cross-References
References

Abstract

Cybercrime involves the use of computers or computer networks to commit criminal activity. Cybercrime is not unique to India. As the country has experienced rapid technological innovation over the last two decades, the use and misuse of the Internet have also spread across the country. As of 2016, India was globally ranked third for "malicious activity," with China and the United States taking the first and second places, respectively (Mallapur 2016). The number of Internet users in India reached over 330 million in 2017 and is projected to grow to about 512 million by 2022 (Statista 2019). With these growing trends, the country has witnessed an increase in cybercrimes such as phishing, introducing malicious codes, identity theft, bank fraud, transmission of sexually explicit materials, cyberstalking, and cyberbullying. The chapter provides a brief description of the Indian government structure, including the legislative and judicial branches, law enforcement, legislation dealing with cybercrimes, and the nature and extent of cybercrimes. The current debate about cybercrimes in the country is also presented.

Keywords

Cybercrime · Code of Criminal Procedure · India · Information Technology Act · Data/Computer Theft · Cyberterrorism · Cryptocurrency · Cyber security

S. Kethineni (*)
Department of Justice Studies, Prairie View A&M University, Prairie View, TX, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_7

Introduction

Globally, access to the Internet has grown exponentially, and India is no exception. With the increase in access to technology, including cell phones, computers, the Internet, and other electronic devices, countries around the world are grappling with the problem of regulating abuse of technology and related criminal activities. Until recently, India had relied on outdated laws such as the Information Technology Act (IT Act) 2000, the Indian Penal Code (IPC), and the Indian Copyright Act of 1957 to address everything from copyright and trademark infringement to a wide range of cybercrimes including identity theft, credit card fraud, computer hacking, cyberbullying, extortion, distribution of child pornography, and cyberterrorism. The Forensic & Integrity Services (2017) study reported that 26% of technology, media, and telecommunication (TMT) firms and 24% of financial services (FS) firms experienced cyberattacks in recent years, the highest percentages of any business sector. About 90% of the respondents in the study identified social media as a major threat factor, as employees post their work profiles on social media. Most alarmingly, about two-thirds of businesses were unable to detect cyberattacks quickly. While cybercriminals are quickly adapting to technological transformations, India is making progress, although slowly, in addressing the problem of cybercrimes through legislation, improving enforcement efforts, providing training to law enforcement agencies and the judiciary, and organizing awareness campaigns for a safe and secure cyberspace. This chapter describes the government structure, legislation on cybercrimes, crime statistics, types of cybercrimes, and government efforts to prevent cybercrimes.

Structure of the Central Government

India has 29 states and 7 Union Territories (directly under the Central/Union government). The Indian government operates at the central, state, and local levels. The central government, similar to the federal government in the United States, primarily handles national defense, foreign policy, taxation, public expenditure, and economic planning. The governments at the central and state levels consist of three branches – the executive, the legislative, and the judiciary. The vice president and the cabinet ministers assist the president. The president appoints the prime minister, and
most of the executive powers rest with the prime minister, similar to the British prime minister (Chakrabarty and Pandey 2008).

Executive Branch. At the central level, the president is elected by the members of an electoral college, who are also members of the Houses of Parliament and the Legislative Assemblies of the states and the Union Territories. As the head of the executive branch of the central government, the president has the power to approve any new legislation, pardon offenders, and convene the Parliament. The president also has the authority to appoint the governors of each state, the chief justice and other justices of the Supreme Court (the highest appellate court for the country), justices of the High Courts (the highest courts at the state level), and the attorney general, among others. At the state level, the executive branch consists of the governor, the chief minister (counterpart to the prime minister at the state level), and the council of ministers (Chakrabarty and Pandey 2008).

Legislative Branch. The Parliament is the legislative branch at the central government level, and the members can be either elected or appointed by the president of India. The Parliament has two houses – the Lok Sabha (Lower House) and the Rajya Sabha (Upper House). The Rajya Sabha consists of 250 members who are selected by the members of the legislative assemblies of the various states. The Lok Sabha has 552 members who, unlike the Rajya Sabha members, are elected by the people every 5 years (Chakrabarty and Pandey 2008).

Judicial Branch. Unlike the judicial system in the United States, which has a hierarchical system of courts at the federal and state levels, the Indian judiciary is a unitary system with the Supreme Court as the apex court and the High Courts in the states, followed by a number of District (county) courts and other subordinate courts in the states. Both the Supreme Court and the High Courts have original and appellate jurisdiction.
Original jurisdiction of the Supreme Court covers disputes involving one or more states, or between the states and the central government, issues related to the interpretation of the Constitution or rights of citizens. It also hears appeals from the High Courts (Alam et al. 2010; Chakrabarty and Pandey 2008). The High Courts, similar to the Supreme Court, have appellate jurisdiction. The subordinate courts – Sessions (or District) level courts – are under the administrative control of the High Court in each state. The District Court has appellate jurisdiction over civil and criminal matters in the district and appeals from the Court are heard in the High Court. The District Court is also known as the Session Court when it hears criminal cases such as murder, theft, gang robbery (dacoity), and rape. Other courts include judicial, metropolitan, and executive magistrates. The state government appoints executive magistrates. These magistrates have the power to pass injunctions, stop public nuisances, prevent unlawful assembly, issue a search warrant, and hold an inquest in cases of unnatural deaths (Radhakrishnan 2014). In addition to these courts, there are many quasi-judicial bodies (e.g., tribunals and appellate boards) established to handle specific issues such as the Intellectual Property Appellate Board, Central Administrative Tribunal (CAT), Telecom Disputes Settlement Appellate Tribunal (TDSAT), and the Cyber Regulations Appellate Tribunal (Alam et al. 2010; Chakrabarty and Pandey 2008; Society of Indian Law Firms 2011; S. S. Rana and Co. Advocates 2010–2017). The Indian legal jurisprudence is based on Common Law, and many of the laws passed during the colonial time are still in force. The Supreme Court is headed by
the chief justice and 30 associate justices. The three main pieces of legislation – the Code of Civil Procedure (1908), the Indian Penal Code (1860), and the Code of Criminal Procedure (1882) – have been amended to meet changes in society. The most recent amendment to the Indian Penal Code was carried out in 1995, the Code of Criminal Procedure was amended in 2005, and the Criminal Law (Amendment) Act was passed in 2013. Also, the Information Technology Act was passed in 2000 and amended in 2008 to address technology-related crimes (Alam et al. 2010; Discovered India n.d.).

Law Enforcement. In India, police is a state subject, which means state governments can formulate policies and procedures governing the police in their respective states. Likewise, Union Territories have their own police forces. Most senior-level police officers – director general of police (DGP), additional director general of police (ADGP), inspector general of police (IGP), deputy inspector general of police (DIGP), superintendent of police, and assistant superintendent of police – come from the Indian Police Service (IPS) and are "recruited, trained, and managed by the Central Government..." (Commonwealth Human Rights Initiative n.d., p. 8). There are also DGPs for training, criminal investigation, and technical services. The ADGP is in charge of economic offenses, anti-corruption, and intelligence and security (Shah 1999; Verma and Subramanian 2009). The additional superintendent of police or the deputy superintendent of police can come from either the Indian Police Service or the state police services. Inspectors, sub-inspectors, and constables come from the state police services. The director general of police (DGP) is the head of the police hierarchy, whereas the inspector is in charge of the police station. However, rural or smaller police stations are headed by a sub-inspector.
Head constables and constables make up the line officers. At the central government level, there are agencies such as the Central Bureau of Investigation (CBI), the Directorate of Coordination of Police Wireless (DCPW), the Intelligence Bureau (IB), the National Crime Records Bureau (NCRB), and the National Police Academy (NPA; Commonwealth Human Rights Initiative n.d.). The NCRB publishes annual crime statistics in Crime in India and has been designated by the Ministry of Home Affairs as the central agency to compile data on cybercrimes under the Cyber Crime Prevention against Women and Children Scheme. The NCRB documents offenses related to information technology and intellectual property under three laws – the Information Technology (IT) Act (2000), the Copyright Act (1957), and the Trade Mark Act (1999).

Cyber Laws

Information Technology Act (IT Act) (2000)

In 2000, India passed the Information Technology Act (IT Act), based on the Electronic Commerce Model Law adopted by the United Nations Commission on International Trade Law. The IT Act provides recognition to electronic commerce and allows companies/agencies to file electronic records with the government.


The Act is applicable throughout India and covers citizens as well as persons from other nations whose crimes fall under the Act. Due to the cross-border nature of these crimes, the IT 2000 Act also covers cybercrimes committed outside of India as long as the offense involves computers, computer systems, or networks operated within India. It provides definitions of terms such as computer network, digital signature, private key (i.e., the key of a key pair used for creating digital signatures), public key (i.e., the key of a key pair used to verify digital signatures), and breach of confidentiality and privacy, in addition to security procedures, offenses, the role of investigative authority, penalties, and adjudications (Ministry of Law, Justice and Company Affairs 2000). A number of offenses – tampering with computer documents; loss or damage to computer resources; hacking; publishing of obscene materials in electronic form; failure of companies to assist in decrypting information intercepted by the government; unauthorized access to a computer system; obtaining a license or Digital Signature Certificate (DSC) by misrepresentation; publishing a false/fraudulent DSC; and breach of confidentiality – are included in the 2000 IT Act (Dubbudu 2016). Police officers at the rank of deputy superintendent of police or above have investigatory powers. The Act empowers these officers to search public places without a warrant if they reasonably suspect a person has committed an offense under the Act.

Data Protection.
Under the IT 2000 Act, Chapter IX (Penalties and Adjudication), Section 43 defines civil (i.e., monetary) penalties for (1) unauthorized access to any computer, computer system, or computer network; (2) downloading, copying, or extracting any data or information; (3) introducing a virus into any computer, computer system, or computer network; (4) causing damage to any computer, computer system, or computer network; (5) disrupting any computer, computer system, or computer network; (6) denying access to any computer, computer system, or computer network; (7) providing assistance to anyone who is not authorized to access a computer system; and (8) tampering with or manipulating any computer, computer system, or computer network. Violators shall be liable for damages of up to Rs. 1 crore (US$140,000).

The Act created the Cyber Regulations Appellate Tribunal to adjudicate offenses under the Act. The presiding officer of the Tribunal shall be a judge of a High Court or a member of the Indian Legal Service of a certain rank and experience, and the procedures and powers of the Tribunal are similar to those of a civil court (Ministry of Law, Justice and Company Affairs 2000).

Also, the Act facilitated amendments to the Indian Penal Code (1860), the Indian Evidence Act (1872), the Bankers' Books Evidence Act (1891), and the Reserve Bank of India Act (1934) (Ministry of Law, Justice, and Company Affairs 2000). For example, several sections of the IPC dealing with false documents or false records – 192, 204, 463, 464, 468–470, 471, 474, 476 – have been amended to include electronic records and electronic documents to bring the IPC in line with the IT Act (Indian Institute of Banking & Finance 2017).
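A note on the rupee figures quoted in this chapter: Indian statutes express penalties in lakh (10^5) and crore (10^7) rupees, and the dollar equivalents depend on the assumed exchange rate. The sketch below is purely illustrative; the rate of 67 INR per USD is an assumption chosen to match the chapter's "1 lakh ≈ US$1,500" conversions, not an official figure.

```python
# Indian numbering units used in the IT Act's penalty provisions.
LAKH = 100_000       # 1 lakh  = 10^5 rupees
CRORE = 10_000_000   # 1 crore = 10^7 rupees = 100 lakh

def inr_to_usd(amount_inr: float, inr_per_usd: float = 67.0) -> float:
    """Convert rupees to US dollars at an assumed (illustrative) rate."""
    return amount_inr / inr_per_usd

# Penalty ceilings mentioned in the text:
print(round(inr_to_usd(1 * LAKH)))    # Section 66B fine cap, roughly US$1,500
print(round(inr_to_usd(10 * LAKH)))   # fine cap for child-related offenses, roughly US$15,000
print(round(inr_to_usd(1 * CRORE)))   # Section 43 damages cap, roughly US$150,000
```

At this rate, 1 crore works out to about US$149,000; the chapter's US$140,000 figure implies a rate closer to 71 INR per USD, a reminder that these dollar equivalents shift with the currency market.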

Information Technology (Amendment) Act (ITAA) (2008)

The 2000 IT Act had only 10 chapters with limited coverage of cybercrimes: it "fell short of the industry's requirements to meet global standards" (Nappinai 2010,
p. 1). Furthermore, the prosecution of offenses such as data theft, illegal access or removal of data, and virus attacks was difficult due to the lack of penal provisions in the IT Act. Criticisms were also raised about the lack of attention to cybercrimes against women and children, such as cyberstalking and pedophilia. Despite objections to the proposed draft amendments of 2005 and 2006, the IT Amendment Act (ITAA) of 2008 was passed. Nappinai (2010) argues that the ITAA was passed as a "kneejerk" reaction to the terrorist attack in November of 2008 in Mumbai (p. 2).

Despite criticisms, the ITAA (2008) made some important additions to cyberlaw. It defines "computer network" to include the "inter-connection of one or more computers or computer systems or communication device through..." satellite, microwave, wire or wireless, or any other communication tools (Ministry of Law and Justice 2009, p. 2). It replaced the word "digital" with "electronic," which included cell phones and other electronic devices, thereby broadening the definition of communication devices. It expanded the definition of "intermediary" to include all service providers (Nappinai 2010); the explicit inclusion of "cyber café" under the term "intermediary" clarifies its meaning. The Act also expanded the definition of "cybersecurity" to include any "unauthorized access, use, disclosure, disruption, modification or destruction" of any communication device, computer, computer resource, or equipment (Ministry of Law and Justice 2009).

Data Protection Under ITAA (2008). Data protection procedures were added to meet industry needs. According to Section 43A, a "body corporate" (e.g., a firm, sole proprietorship, or any association conducting commercial or professional activities) should maintain reasonable security measures to prevent unauthorized access, damage, or other illegal activities.
If they fail to maintain "reasonable security practice and measures," they could receive not only civil penalties but criminal sanctions as well (Ministry of Law and Justice 2009, p. 6). For example, under the amended act, if any person by "dishonest" or "fraudulent" means commits any of the acts under Section 43, he or she could receive imprisonment of up to 3 years or a fine of up to Rs. 5 lakhs (US$7,500) or both (Ministry of Law and Justice 2009, p. 9). Likewise, punishment for sending offensive messages may result in 3 years of imprisonment. Newer forms of cybercrimes – identity theft, cheating by impersonation, cyberterrorism, publishing or transmitting obscene materials, publishing or transmitting sexually explicit materials, or depicting children in sexually explicit acts – were included in the 2008 ITAA (Ministry of Law and Justice 2009).

Data/Computer Theft. Section 66B provides punishment for receiving stolen property (i.e., data theft). Anyone who dishonestly receives or retains a "stolen computer resource or communication device" shall receive punishment of up to 3 years as well as a fine of up to Rs. 1 lakh (US$1,500; Ministry of Law and Justice 2009, p. 10). The term "computer resource" includes computers, computer networks, computer systems, data, databases, and software (Nappinai 2010).

Confidentiality and Privacy. Section 66E addresses privacy. If anyone intentionally or knowingly captures, publishes, or transmits without permission an image of the private area (e.g., naked or partially covered genitals, buttocks, or female breast) of a person in electronic form, he or she is considered to have violated that person's privacy. The privacy and confidentiality provisions under Section 66E therefore prohibit revenge pornography. The punishment for such violation is imprisonment
for up to 3 years or a fine of up to Rs. 2 lakhs (US$3,000) or both. Section 72A expands the coverage: an intermediary or anyone providing services under a contract who discloses personal information with the intent to cause harm or wrongfully gain profit shall be liable and may receive punishment of up to 3 years of imprisonment or a fine or both (Ministry of Law and Justice 2009).

Child Pornography and Sexually Explicit Materials. Although the punishment for most of the offenses under the ITAA is imprisonment for 2–3 years, the sanction for electronically publishing or transmitting sexually explicit content (Section 67A) is 5 years and a fine. Likewise, depicting children in a sexually explicit manner, enticing or inducing children into having an online relationship with other children for transmitting images of sexually explicit acts, facilitating online child sexual abuse, or recording one's own abuse in any electronic form is punishable by imprisonment for up to 5 years and a fine of up to Rs. 10 lakhs (US$15,000). Any subsequent conviction may result in as much as 10 years of imprisonment and a fine of up to Rs. 10 lakhs (Ministry of Electronics and Information Technology 2009). Also, any intermediary who violates the provisions of the Act is liable and may receive punishment of up to 3 years in prison and a fine.

Cyberterrorism.
Cyberterrorism (Section 66F) is defined as any act that is intended to "threaten the unity, integrity, security or sovereignty" of the country or to create terror in the people by (1) denying people authorized access to a computer resource; (2) accessing a computer resource without authorization or after such authorization has expired; (3) introducing any virus or contaminant; (4) using any such means to cause, or in a manner likely to cause, death or injury to a person, or damage to property or disruption of supplies or services essential to the community; or (5) adversely affecting critical infrastructure (Ministry of Electronics and Information Technology 2009). The punishment for conspiring to commit or committing cyberterrorism may be imprisonment for life. Furthermore, the ITAA gave investigative powers to officers at the rank of inspector or above, expanding enforcement powers in investigating cybercrimes.

Government Powers and Interception of Data. Under Section 69 of the ITAA, the central government, a state government, or any officer authorized by the government can direct an appropriate agency "to intercept, monitor or decrypt ... any information generated, transmitted, received or stored in any computer source" (Ministry of Electronics and Information Technology 2009, p. 12). A subscriber or intermediary who refuses to assist such an authority shall receive punishment of up to 7 years of imprisonment and a fine. In addition, the section gives the government the power to issue orders to block any public information through computers if it finds it necessary to protect the sovereignty or integrity of the country, the defense and security of the country, or relations with friendly foreign nations, or to prevent incitement that could lead to the commission of a serious offense.

Indian Penal Code (IPC)

Several sections of the IPC were made applicable to cybercrimes through amendments to the IT Act 2000 and ITAA 2008 (see Table 1). They include offenses such as data
Table 1 Cases registered and persons arrested under IT Act and IPC, 2008–2017

                 IT Act                                 IPC
Year    Cases registered  Persons arrested    Cases registered  Persons arrested
2008         288                178                 176               195
2009         420                288                 276               263
2010         966                799                 356               394
2011       1,791              1,184                 422               446
2012       2,876              1,522                 601               549
2013       4,356              2,098               1,337             1,203
2014       7,201              4,246               2,272             1,224
2015       8,045              5,102               3,422             2,867
2016       8,613              5,964               3,518             1,785

Source: NCRB (2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016)
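The trend in Table 1 can be quantified directly from the figures above. A small illustrative script (the numbers are transcribed from the table; the computations themselves are not part of the NCRB report):

```python
# Cases registered under the IT Act, 2008-2016 (transcribed from Table 1, NCRB).
it_act_cases = {
    2008: 288, 2009: 420, 2010: 966, 2011: 1791, 2012: 2876,
    2013: 4356, 2014: 7201, 2015: 8045, 2016: 8613,
}

# Overall growth across the period covered by the table.
growth = it_act_cases[2016] / it_act_cases[2008]
print(f"IT Act cases grew roughly {growth:.0f}-fold between 2008 and 2016")

# Largest single-year percentage jump.
years = sorted(it_act_cases)
jumps = {y: it_act_cases[y] / it_act_cases[y - 1] for y in years[1:]}
peak_year = max(jumps, key=jumps.get)
print(f"Sharpest rise: {peak_year} ({(jumps[peak_year] - 1) * 100:.0f}% over the prior year)")
```

Registered IT Act cases thus rose roughly thirtyfold over the period, with the steepest year-on-year increase early in the series, consistent with the chapter's observation that cybercrime reporting accelerated as Internet access spread.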

theft (Sections 379–381), criminal breach of trust/fraud/credit or debit card fraud/online banking fraud (Sections 406, 408, and 409), cheating (Section 420), forgery (Sections 465, 468, 469, 471, and 477A), counterfeiting (Sections 489A–489E), and fabrication/destruction of electronic records for evidence (IPC Sections 193 and 204; Dubbudu 2016). It also included other offenses. The category of other offenses consists of sending threatening messages (Section 503) or defamatory messages by email (Section 499); creating a bogus website or committing cyber fraud (Section 420); web-jacking (Section 500); email abuse (Section 500); posting fake news on social media (Section 505); criminal intimidation by anonymous communication (Section 507); obscenity, obscene acts, and songs (Section 294); and the sale of obscene objects to a young person (Section 293; Indian Institute of Banking & Finance 2017). In addition to cyber-related IPC offenses, Special and Local Laws (SLL) were made applicable to cybercrimes.

Special and Local Laws

The SLL laws – the Copyright Act, 1957 and the Trade Mark Act, 1999 – cover cybercrimes. For example, the Copyright Act was the first copyright act passed after India's independence, and since then several amendments have been made. Under Section 2(o) of the Act, the definition of "literary work" covers computer programs and databases. Punishment for infringement of the Act may lead to fines or penal sanctions, depending on the magnitude of the infringement (My Legal Work Private Limited 2018). In addition to literary work, the Copyright Act covers dramatic, musical, and artistic work such as paintings, films, videos, photographs, and sound recordings.

Certain acts are exempt from copyright infringement. Unlike the United States, which uses "fair use" as an exception to the law, Indian and British laws use "fair dealing" as a criterion. The US law uses four factors – (1) the purpose and character of the use of the copyrighted work (i.e., commercial purpose or


for nonprofit educational use); (2) “the nature of copyright work”; (3) the amount of work used; and (4) “the potential market or value of the copyright work” (Rathod 2012, para 9). The term “fair dealing” covers literary, dramatic, musical, or artistic work used for research, private study, critique or review, judicial proceedings, performance at an amateur club or before a nonpaying audience, or reporting current events by means of a newspaper, magazine, periodical, cinematograph film, or photographs (Government of India n.d.; Rathod 2012). There are no specific courts to handle copyright cases; most are tried in regular courts. However, Copyright Boards have jurisdiction to adjudicate whether a work has been published, the term of copyright in another country, and the rates of royalties. In addition to the Boards, there are registered copyright societies such as the Society for Copyright Regulation of Indian Producers for Film and Television (SCRIPT) and the Indian Performing Rights Society Limited (IPRS). There are criminal penalties under copyright law (Government of India n.d.). For example, knowingly using an infringing copy of a computer program is a criminal offense (Section 63B), punishable by imprisonment ranging from 7 days to 3 years plus a fine (Nayak 2013; Copyright Office 1957). In many cases, the minimum punishment for copyright infringement is 6 months’ imprisonment and a minimum fine of Rs. 50,000 (US$750). India’s recognition as a global leader in software technology, as well as its thriving knowledge economy, “pushed questions about the control of knowledge and creativity— about ‘intellectual property’—into the foreground of economic policy debates” (Liang and Sundaram 2011, p. 339). India’s ambition to enter the global media markets also prompted revisions to the Indian Copyright Act of 1957.
The Act was amended in 1983, 1984, 1994, 1999, and most recently in 2012 (Pandey 2013). The 2012 Copyright (Amendment) Act brought the law into compliance with the Internet treaties. Its other salient features include extending “fair dealing” to digital work, introducing digital rights management (DRM), providing protection to copyrighted materials, clarifying rights in artistic works by including the storing of such work in any electronic medium or other means (e.g., computer, network, or visual recording), and strengthening enforcement and protection against Internet piracy by including liability for Internet service providers (Ministry of Law and Justice 2012; Pandey 2013; Scaria 2013). The Trade Mark Act deals with the registration of Internet domains: people must go through a specific registration process to obtain a domain name. Because India has no law against cybersquatting or otherwise protecting domain names, the Trade Mark Act is used to prosecute these cases. Cybersquatting involves “registering an Internet domain name that is likely to be wanted by another person, business, or organization with the hope that it can be sold to them for profit” (Singh and Associates 2012, para 1). The first domain-name case decided by the Indian courts was Yahoo!, Inc. v. Akash Arora and Another (1999), in which the defendants offered Internet services under a domain name nearly identical to the plaintiff’s. The court ruled that the two domains were nearly identical and intended to mislead Internet users into thinking they belonged to the same common source (Kaur and Aggarwal 2011; Singh and Associates 2012).


Cybercrime Statistics

As noted above, the NCRB reports cybercrimes in India in its annual Crime in India report. Cybercrimes fall under three broad categories of law: the IT Act, the IPC, and the SLL. Since 2002, when the NCRB started documenting cybercrimes, the number of cases registered under the IT Act has grown rapidly. In 2002, a total of 70 cases were registered under the IT Act, and 738 cases were registered under the IPC. The number of IT Act cases had increased to 8,613 by 2016. Likewise, IPC offenses (using the computer) increased to 3,518 by 2016 (see Table 1). Overall, there were 12,317 cybercrimes in 2016 (NCRB 2016). Table 2 shows the types of offenses and the number of cases registered under the IT Act, IPC, and SLL in 2016. The NCRB also compiles information on the motives of cybercriminals, the status of investigations, and court dispositions. Motives noted include illegal gain, revenge, insulting the modesty of women, extortion/blackmailing, and sexual exploitation. Other reported reasons are to cause disrepute, develop business, prank/satisfaction of gaining control, political motivation, disruption of public services, piracy, stealing information to commit espionage, selling or purchasing illegal drugs and other items, and inciting hate against the country. Still other motives have not been identified by the NCRB.
Table 2 Cybercrimes recorded under IT Act, IPC, and SLL, 2016

| Offenses | Cases |
| --- | --- |
| IT Act offenses | |
| Tampering with computer source documents | 78 |
| Computer-related offenses (Sec 66, 66B–E) | 6,818 |
| Other IT cybercrimes | 12 |
| Publication/transmission of an obscene/sexually explicit act in the electronic form | 957 |
| Breach of confidentiality/privacy & disclosure of information | 35 |
| Other cybercrimes | 713 |
| Total | 8,613 |
| IPC offenses | |
| Data theft | 86 |
| Criminal breach of trust/fraud | 56 |
| Cheating | 2,329 |
| Forgery | 81 |
| Counterfeiting | 10 |
| Fabrication of false evidence/destruction of electronic records | 6 |
| Other IPC offenses | 950 |
| Total | 3,518 |
| SLL offenses | |
| Copyright Act | 181 |
| Trademark Act | 2 |
| Other SLL offenses | 3 |

Source: NCRB (2016)

Of the specific reasons reported, the top five reasons for committing cybercrimes were illegal gain, revenge,


insulting the modesty of women, extortion/blackmailing, and sexual exploitation. The statewide breakdown of cybercrimes provides some insight into the reasons for committing these crimes. The state of Assam, located in the northeastern part of India, was at the top for revenge/political motives. Uttar Pradesh, located in north-central India, recorded the highest incidents of blackmail as well as the highest incidents of “hate crimes against a community” (Madanapalle 2017, para 10). The state “has an infestation of trolls as it also tops the list where the motive is ‘Prank/Satisfaction of Gaining Control’” (Madanapalle 2017, para 10). Police disposal of cybercrimes shows a total of 24,187 cases investigated in 2016, of which 11,870 were pending investigation from the previous year. In 2016 alone, a total of 9,213 (40.3%) cases were disposed of by the police, and 14,973 (61.9%) cases were pending. These statistics indicate a need for more investigating officers to complete investigations promptly. Table 3 shows the number of persons convicted, acquitted, or discharged.

Table 3 Disposal of cybercrimes by court, IT Act, IPC, and SLL

| Offenses | Persons convicted | Persons acquitted | Persons discharged |
| --- | --- | --- | --- |
| IT Act | | | |
| 1. Tampering with computer source documents | 1 | 15 | 0 |
| 2. Computer-related offenses | 172 | 370 | 14 |
| 3. Cyberterrorism | 0 | 0 | 0 |
| 4. Publication/transmission of obscene/sexually explicit content | 12 | 53 | 1 |
| 5. Breach of confidentiality/privacy & disclosure of information | 0 | 1 | 0 |
| 6. Others | 17 | 33 | 2 |
| Subtotal | 202 | 472 | 17 |
| IPC | | | |
| 1. Data theft | 0 | 6 | 0 |
| 2. Criminal breach of trust/fraud | 0 | 2 | 0 |
| 3. Cheating | 6 | 37 | 1 |
| 4. Forgery | 0 | 32 | 0 |
| 5. Counterfeiting | 0 | 1 | 0 |
| 6. Fabrication/destruction of electronic records for evidence | 0 | 0 | 0 |
| 7. Other | 15 | 48 | 0 |
| Subtotal | 21 | 126 | 1 |
| SLL | | | |
| 1. Copyright Act, 1957 | 31 | 96 | 0 |
| 2. Trademark Act, 1999 | 0 | 0 | 0 |
| 3. Other SLL offenses | 0 | 1 | 0 |
| Subtotal | 31 | 97 | 0 |
| Total | 254 | 695 | 18 |

Source: NCRB (2016)

Of the 691 persons tried by the courts/tribunals under the IT Act, 202 (29.2%) persons were


convicted, and the remainder of the cases resulted in either acquittal or dismissal. Of the 148 people tried under IPC offenses, only 21 (14.2%) were convicted. Under SLL offenses, 128 people were tried by the courts/tribunals, resulting in 31 convictions (24.2%). Overall, 26.2% of those tried were convicted. Again, the low conviction rate suggests the need for specially trained judges to handle cybercrimes.
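As a quick arithmetic check, the per-law conviction rates quoted above follow directly from the disposal counts in Table 3; a short Python sketch (counts transcribed from the table) reproduces them:

```python
# Disposal counts from Table 3 (NCRB 2016): persons
# (convicted, acquitted, discharged) under each body of law.
disposals = {
    "IT Act": (202, 472, 17),
    "IPC": (21, 126, 1),
    "SLL": (31, 97, 0),
}

for law, (convicted, acquitted, discharged) in disposals.items():
    tried = convicted + acquitted + discharged
    rate = round(100 * convicted / tried, 1)
    print(f"{law}: {convicted} of {tried} tried were convicted ({rate}%)")
```

Running it yields the 29.2% (IT Act), 14.2% (IPC), and 24.2% (SLL) figures cited in the text.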

Gender and Cybercrimes

Under the IT Act in 2016, a much smaller share of those arrested were women (n = 85; 1.4%) than men (n = 5,879; 98.6%). Of those arrested, charges were filed against 33 women and 3,381 men. For IPC offenses, 58 (3.2%) women and 1,727 (96.8%) men were arrested, and charges were filed against 43 women and 1,228 men. Only four (1.7%) women were arrested for SLL offenses, compared to 237 (98.3%) men, and charges were filed against all four women and 224 men. Except for cyberterrorism, for which seven men were arrested, women were represented in all categories of offenses under the IT Act. Likewise, women were arrested under most categories of IPC offenses, except for data theft and fabrication/destruction of electronic records to avoid evidence. Under SLL offenses, the four women who were arrested were charged under the Copyright Act (NCRB 2016). Cybercrimes Against Women. There is no official documentation of cybercrimes against women in India. Because crimes against women such as rape and domestic violence are on the rise, the online platform has become another avenue used by cybercriminals to intimidate women. Women have been victims of cyber-related offenses such as “trolling, threatening, stalking, voyeurism, body-shaming, defaming, surveillance, revenge porn and other forms of indecent representation. . .” (Kapadia 2018, para 1). Trolls spread inflammatory messages in an online community via blogs, chat rooms, and forums to emotionally abuse the victim. Trolls are individuals who create a false or inflammatory message about others that is “grossly offensive or has menacing character. . . for the purpose of causing annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred or ill will, persistently by making use of such computer resource or a communication device” (Mali 2015, p. 21).
Also, the number of women complaining of being blackmailed by their ex-husbands has been on the rise (Nanjappa 2015). The first reported case of cyberstalking in India involved a woman named Ritu Kohli. Her stalker, Manish Kathuria, a software engineer, followed Kohli on a chat website, used obscene language, and distributed her phone number to others. He later used her identity on a chat website. As a result, she received obscene telephone calls at night for three days. Kohli complained to the police, who traced Kathuria’s IP address and charged him under Section 509 of the IPC (insulting the modesty of women). Kathuria worked as a financial analyst at the firm where Mrs. Kohli’s husband was a senior executive, and he was fired from his job. This case brought the issue to the attention of legislators. However, it was not


until the ITAA of 2008 that the offense of cyberstalking was introduced in legislation (Balakrishnan n.d.). Her case pertained not only to cyberstalking but also included elements of identity theft, revenge, cyberbullying, trolling, and the “creation of fake avatar. . .” (Halder and Jaishankar 2017, p. 21). Trolling has become “a new phenomenon in India” (Halder and Jaishankar 2017, p. 62). Trolls divert the focus of the victim’s message – a text message, expression, or online discussion – to draw readers’ attention to “their own vicious thoughts about the victim” (p. 62). Online bullying, stalking, and grooming of women and children also lead to sexual offenses against them. Once sexually explicit material or obscene information is uploaded online, it is difficult for police to track down the perpetrators. Also, the information (whether visual or in written form) is hard to remove from the Internet and tends to reappear time and again. In Karan Girotra v. States and Another (2012), the petitioner filed an application requesting anticipatory bail (under Section 438 of the Code of Criminal Procedure, a person accused of a non-bailable offense can seek bail in anticipation of arrest) in a case filed under Sections 328 (causing hurt using poison) and 376 (punishment for rape) of the IPC and Section 66A of the ITAA. In this case, Ms. Shivani Saxena filed a complaint with the police stating that she had been married to Ishan, but the marriage failed because her husband could not consummate it, and they decided to divorce. She then met Mr. Karan Girotra on the Internet. He told her that he was in love with her and wanted to marry her. He invited her to his house, drugged her, and sexually assaulted her. She also alleged that the petitioner threatened to distribute nude and obscene pictures of her if she did not maintain a physical relationship with him. They became engaged, but he broke off the engagement.
The court noted that there had been a delay in Ms. Saxena’s filing of the complaint. It disregarded the sexual assault allegation, holding that Saxena had been in a consensual relationship and filed the complaint only when Girotra broke off the engagement. The case is indicative of the attitude of the Indian judiciary towards women (Balakrishnan n.d.). According to Halder and Jaishankar (2017), there is limited information concerning data mining on adult dating websites. In India, however, matrimonial websites seem to take the place of dating websites. Prospective brides and grooms upload their information to these websites, but the sites provide little privacy protection, and fraudsters use fake identities to make contact with women. To address the misuse of information and fraud, an “Advisory on the functioning of matrimonial websites in accordance with the Information Technology Act, 2000, Rules” was issued in 2016 (Ministry of Communications and Information Technology 2016, p. 1). The advisory applies to all matrimonial websites and mobile applications. These sites are considered intermediaries with respect to any electronic records they store and are therefore mandated to follow the rules set by the IT Act and other guidelines set by the government (Ministry of Communications and Information Technology 2016). Cybercrimes against children. India is experiencing a major increase in offenses against children. According to the Ministry of Women and Child Development (2007), every second child has been a victim of sexual abuse at some point in


time. The nongovernmental agency Child Rights and You (CRY 2016) reported a 500% increase in crimes against minors between 2006 and 2016. Following an extensive nationwide study by the Ministry of Women and Child Development (2007) documenting the nature and extent of child abuse of both genders, the Government of India passed comprehensive legislation, the Protection of Children from Sexual Offences Act (POCSO Act of 2012). The Act mandated the NCRB to document offenses against children as a separate category; before POCSO, the NCRB had combined crimes against children, except for the rape of a child, with other crimes. Major crimes against children include trafficking, kidnapping, rape, and infanticide (Dey 2015). To complicate the problem further, the Internet has become another avenue for criminals to target children. Cybercriminals “groom their victims to contribute to the victimization” (Halder and Jaishankar 2017, p. 73). Groomers also use their victims to harm others or as pawns to recruit other victims. Section 67B of the ITAA (2008) explicitly prohibits the electronic publication or transmission of any sexually explicit act or conduct involving children. Anyone who creates text or a digital image, downloads, browses, advertises, promotes, or distributes sexually explicit content involving children, entices children online for sexual purposes, facilitates child abuse online, or records their own abuse or that of others and shares it online shall receive, if convicted, up to 5 years’ imprisonment plus a fine. A subsequent conviction entails an enhanced sentence (Ministry of Law and Justice 2009). Cryptocurrency and cybercrimes. India has also seen a surge in cryptocurrency-related crimes, especially financial crimes.
With no proper regulations in place, cryptocurrency vendors such as BtcxIndia, Unocoin, Coinsecure, Zebpay, Koinex, and Bitcoin-India started offering cryptocurrency exchange and trading services (Jani 2018). Even though law enforcement has repeatedly warned the public about the investment risks associated with cryptocurrencies and reminded investors that they are not legal tender, the cryptocurrency crime rate continues to grow (Wood 2019). For example, cryptocurrency expert and author of two ebooks Amit Bhardwaj and three accomplices – Vivek Bharadwaj, Pankaj Adlakha, and Hemant Bhope – were arrested on charges of running a Ponzi scheme worth about Rs. 2,000 crores (US$300 million). Amit operated three companies – GB Miners, Gain Bitcoin, and GB2 – and lured investors by promising a high rate of return, which he failed to deliver (Bangera 2018). In another case, a former member of the Legislative Assembly (MLA) belonging to the Bharatiya Janata Party (one of the major political parties in India), Nalin Kotadiya, was accused of colluding with his nephew in the abduction of Shailesh Bhatt and extorting Rs. 9.95 crores (US$1.3 million) in bitcoins. Several other cryptocurrency-related crimes have been reported in recent years, ranging from money laundering and smuggling of drugs on the dark web to theft of bitcoins (see Table 4). The government was skeptical about cryptocurrency, and the Reserve Bank of India (RBI) reiterated its warnings concerning investing and trading in cryptocurrency. Despite these warnings, Unocoin’s founders allegedly circumvented the banking channels and set up an ATM allowing customers to deposit cash and buy


Table 4 Cryptocurrency-related crimes

| Accused | Country | Currency | Alleged crime | Status |
| --- | --- | --- | --- | --- |
| 1. Sathvik Viswanath, founder of Unocoin | India | Bitcoin | Illegally operating the first bitcoin ATM in the world | Arrested |
| 2. Amit Bhardwaj, Hemant Bhope, & Pankaj Adlakha | India | Bitcoin | Money laundering | Confiscation of property |
| 3. Nalin Kotadiya & Kirit Paladiya | India | Bitcoin | Kidnapping | Arrested |
| 4. Nikhil Tiwari | India | Bitcoin | Drug dealer busted for smuggling LSD from the dark web | Arrested |
| 5. Divyesh Darji | India | Bitcoin | Ponzi scheme | Arrested |
| Unknown | India | Bitcoin | Theft of bitcoins from Coinsecure (US$3 million) | Under investigation |
| 6. Indian police officer | India | Bitcoin | Kidnapping & abduction | Arrested |
| 7. Deepak | India | Bitcoin | Hacking & theft of bitcoin | Unknown |
| 8. Satyendra Kumar Singh, an investigator from Narcotic Bureau | India | Bitcoin | Theft of bitcoins that were frozen from a drug bust | Arrested |
| 9. Five people (including two college students) | India | Bitcoin | Trafficking of drugs (LSD) | In custody |

Source: Kethineni and Cao (2019)

cryptocurrencies such as bitcoin or Ethereum. To curb the cryptocurrency craze, the police arrested the developers of the ATM in Bangalore (known as the Silicon Valley of India). The developers were well known, described as among the “‘brightest’ tech pioneers and an ‘icon of the crypto industry’” (Huillet 2018, para 3). Currently, a legislative bill, the Banning of Cryptocurrencies and Regulation of Official Digital Currencies Bill 2019, is being reviewed by intergovernmental agencies. The Bill, if passed into law, would fully ban cryptocurrencies (Khatri 2019).

Government Initiatives to Prevent Cybercrimes

The government has developed several initiatives to address cybercrimes: the National Cyber Security Policy, the Computer Emergency Response Team (CERT-In), the Botnet Cleaning and Malware Analysis Centre, collaboration with industry partners, and Cyber Crime Prevention against Women and Children (CCPWC). The National Cyber Security Policy was formulated in 2013 to (1) protect cyberspace information and infrastructure; (2) take proactive measures to prevent and respond to cyber attacks; (3) minimize the damage resulting from cyber attacks through coordinated efforts; (4) create a secure cyber ecosystem that meets international security
standards; (5) strengthen existing regulatory mechanisms; (6) create a national center for the protection of critical information infrastructure; (7) increase the number of cybersecurity personnel; and (8) improve cybersecurity practices through industry collaboration (Pandey 2017). To address the rise in cybersecurity violations, CERT-In established the Botnet Cleaning and Malware Analysis Center (known as the Cyber Swachhta Kendra). The Center identifies botnet infections, prevents their spread to other systems, and notifies end-users and assists them in removing the malware. The Center offers security and protective tools such as USB Pratirodh, which identifies unauthorized use of USB devices; AppSamvid, which protects desktops from running unauthorized files; M-Kavach, which provides security on Android devices; and JSGuard, which prevents HTML and JavaScript attacks (Pandey 2017). The Indian government is also working in collaboration with industry partners. For example, the anti-virus company Quick Heal is assisting the government by providing a bot-removal tool free of cost, and the government is partnering with Cisco to create a threat intelligence-sharing program. These efforts are steps in the right direction; however, timely identification and elimination of these threats is vital to ensure cybersecurity at all levels. The CCPWC is an important initiative taken by the government to address various forms of cybercrimes against women and children. Although women make up only 30% of the 500 million Internet users in India, about 76% of women under age 30 have been victims of online harassment, and 1 in 10 in the same age group have been targets of “revenge porn and/or sextortion” (Center for Advanced Research in Digital Forensics and Cyber Security n.d., para 1). Even children have been targeted in the most heinous ways. For example, an online suicide game known as the Blue Whale circulated with instructions for children on how to commit suicide.
In 2017, the CCPWC documented 135 deaths attributed to the Blue Whale suicide challenge game and three deaths attributed to the Momo challenge game. The Indian government took note of the urgency of the situation and contacted Google, Facebook, WhatsApp, Microsoft, and Yahoo, asking them to immediately remove any links to such games (Nair 2017). The CCPWC also conducts research related to cybercrimes against women and children to protect them against cyberpornography, cyberbullying, online harassment, cyberstalking, matrimonial fraud, and banking fraud (Center for Advanced Research in Digital Forensics and Cyber Security n.d.). Three important aspects of the CCPWC initiative are the creation of an online cybercrime reporting platform, a national cybercrime forensic laboratory, and a capacity-building unit. The portal is currently available to victims, who can use it to report cybercrimes such as child pornography and any materials related to child sexual abuse, rape, or gang rape (Ministry of Home Affairs n.d.). The portal also has information about law enforcement and regulatory agencies at the national, state, and local levels. Forensic laboratories operate at the national and state levels, and a team of cybersecurity experts from a forensic unit assists law enforcement in the analysis of cybercrimes. Training is provided to law enforcement, judges, and prosecutors on cybercrimes, with a special focus on women and children. As part of the CCPWC, citizen awareness campaigns were initiated with “dos and don’ts”


about online activity. The emphasis is on educating parents, students, and teachers on how to recognize and prevent cyberbullying, how to report online abuse, how to use the Internet responsibly and safely, and how to “maintain a healthy screen time” (Center for Advanced Research in Digital Forensics and Cyber Security n.d., para 4). The Ministry has also developed a handbook on cybersecurity for young people (Dharmaraj 2019), and the government has disseminated computer policy and guidelines to all concerned departments.

Conclusion

India, with its amendment to the IT Act, has made progress in addressing cybercrimes. The introduction of several provisions in the ITAA related to data protection, the role of intermediaries, refining the definition of electronic records, and stringent monetary and criminal sanctions are important measures in the right direction. However, the ITAA and other laws are not equipped to address the changing needs of information technology. As a result, many companies are taking out cyber insurance policies to cover losses resulting from criminal acts, forensic costs, cyberextortion costs, and other liabilities (S. S. Rana and Co. Advocates 2018). The IT industry has become aware of reasonable security measures such as site certification, security initiatives, awareness training, and monitoring compliance with password protection, access control, and email policies. The Government of India has distributed the Information Technology Rules of 2011 (Reasonable security practices and procedures and sensitive personal data or information) to law enforcement, IT companies, and other related corporations. If there is an information security breach, the corporation or person involved must demonstrate that they implemented appropriate security control measures as per the Rules. According to the Indian Institute of Banking & Finance (2017), compliance issues will have “wide ramifications especially in the use of cloud computing. . .” (p. 7), where information is stored on a remote network server hosted on the Internet. As more and more organizations store and maintain data in the cloud, the question of who bears responsibility for maintaining privacy and security – the information owner, the information container, or the information custodian (Indian Institute of Banking and Finance 2017) – remains to be answered.
There are also enforcement problems: investigations of virtual communities create unique challenges for law enforcement because of the global nature of the Internet. Cybercriminals can hide in virtual space to harass, stalk, troll, cheat, defraud, or abuse their victims, contacting them via email, instant messaging (IM), social media, or chat rooms (Deo 2013). Concerns have also been raised about the government’s power to intercept, monitor, and block websites for national security, defense, and other reasons by invoking its authority under the Indian Telegraph Act (1885). A writ petition challenging this authority was filed in the Supreme Court of India by the People’s Union for Civil Liberties in 1991. The Court delivered its verdict in 1996, holding that the government has no jurisdiction to exercise this power unless there is a public


emergency or public safety demands such an intervention. However, the new ITAA (Section 69) gives the government greater power to listen to phone conversations, read messages sent via SMS and email, monitor websites, and order service providers to decrypt any communication when requested. Many view this provision as a draconian form of government intervention (Indian Institute of Banking and Finance 2017). Copyright infringement, especially in the area of cinematography (i.e., movies, TV shows, videos, songs, and recordings), is also becoming common. In 2017, the police arrested four people for allegedly stealing episodes from season 7 of “Game of Thrones” before their release and distributing them to pirate sites. The four men worked for Prime Focus Technologies, which served as an outside vendor for Star India (Spangler 2017). This is not an isolated incident; copyright infringement related to movies, videos, and other digital media is rampant. A study by Envisional and the Motion Picture Association (MPA) in 2009 reported that online piracy of movies is a major problem in India. Pirated movies are downloaded from five major torrent sites – Mininova, Torrentz, Thepiratebay, Isohunt, and Btjunkie – and Indians were “the largest or second largest group of people who visit” these sites (Scaria 2013, p. 650). In 2018, the MPA Asia-Pacific conducted forums in India to discuss the legal and regulatory issues related to digital licensing, the benefits of the digital economy, and the social and economic impact of piracy on the film industry (Motion Picture Association 2018a). India is paying close attention to piracy websites, as the Indian film industry contributes about US$33.3 billion to the country’s economy (Motion Picture Association 2018b, p. 11).
For example, in April 2018, the Delhi High Court ordered the blocking of eight major audiovisual piracy websites: bmovies, fmovies, rarbg, thepiratebay, torrentmovies, extratorrent, yts, and yify (Motion Picture Association 2018c, p. 11). It is hoped that these awareness initiatives will help curtail copyright infringement. There has also been a lack of coverage of many cybercrimes (Kumar and Pandey 2011), including the use of cryptocurrency in criminal activities. Only one piece of legislation, the IT Act of 2000 and its amendment, addresses cybercrimes in India. Although the ITAA contains extensive coverage of online sexual offenses against children, many feel it provides insufficient coverage of other cybercrimes, including the use of the Internet or darknet to conduct money laundering, data breaches in cloud sourcing, and cybersecurity threats in the power sector (Kumar et al. 2014). A plan to completely ban cryptocurrency is underway: in 2018, the RBI banned all forms of transactions involving cryptocurrencies, such as registering, trading, lending, accepting, selling, or purchasing them (Helms 2019), and as of May 2019, discussions are underway to fully ban cryptocurrency in India. Although India is quick to pass legislation, it lacks the necessary infrastructure, such as cybersecurity-trained law enforcement personnel and courts that employ judges with knowledge of cybersecurity (Kashetri 2016). It is time for India to invest resources in training enforcement personnel on current and emerging security issues so that they will be better prepared to meet current and future demands. Introducing cybercrime courses in schools and colleges would also increase awareness among young people.


Cross-References

▶ Cyberstalking
▶ Defining Cybercrime
▶ Digital Piracy
▶ Identity Theft: Nature, Extent, and Global Response
▶ Phishing and Financial Manipulation

15
Cybercrime in India: Laws, Regulations, and Enforcement Mechanisms
S. Kethineni

16
Legislative Frameworks Against Cybercrime: The Budapest Convention and Asia

Lennon Y. C. Chang

Contents

Introduction
The Development of the Internet in Asia
Cybercrime in Asia
The Emergence of Cybercrime Laws and Regulation in Asia
  The Budapest Convention and Northeast Asia
  The Budapest Convention and ASEAN
Conclusion
Cross-References
References

Abstract

Asia is one of the fastest-growing regions in the global e-commerce marketplace and has also been seen as the future of cybercrime. Cybercrimes are emerging in Asia, and the developing countries of the Association of Southeast Asian Nations (ASEAN) are becoming a hub for cybercriminals. To prevent Asia from becoming a cybercrime hub and a safe haven for cybercriminals, it is important that countries be equipped with comprehensive cybercrime laws aligned with international standards. This chapter reviews the development of the Internet in Asia (both Northeast and Southeast Asia), examines the existing legal measures adopted by these countries, and compares them with the Council of Europe's Convention on Cybercrime (Budapest Convention). The chapter finds that cybercrime laws in Northeast Asian countries are all aligned favorably with the Budapest Convention. While most ASEAN countries are favorably or moderately aligned with the Budapest Convention, more work needs to be done to support countries like Myanmar, Indonesia, and Cambodia in building cybercrime laws more closely aligned with it. The chapter suggests that action is needed to reduce the digital divide and raise cybersecurity awareness among ASEAN member countries, and that the Budapest Convention should be updated to keep pace with the development of new technologies and crimes such as hate speech and fake news.

Keywords

Cybercrime law · Asia · ASEAN · Legislative framework · Digital divide · Budapest Convention

An early version of this paper was published in the Marine Corps University Journal, Vol. 6, No. 2, Fall 2015. The views expressed in this essay are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, the Department of Defense, or the US government.
L. Y. C. Chang (*) School of Social Sciences, Monash University, Clayton, VIC, Australia e-mail: [email protected]
© The Author(s) 2020 T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_6

Introduction

Information and communication technologies (ICTs) are critical infrastructure for many aspects of every society. Thirty years after the World Wide Web was invented in 1989, the use of ICTs is still growing exponentially. The booming digital economy has made the Internet a new place for doing business and shopping. ICTs also provide criminals with new opportunities, and cybercrime has become a global concern. Cybercriminals are chasing not only money but also data. Criminologists often say that "where there is money, there is crime"; it is now equally true that "where there is data, there is crime," as cybercriminals "collect" all kinds of data online for diverse purposes, including monetary gain, revenge, and political ends.
Among Asia-Pacific countries, cybersecurity in the member countries of the Association of Southeast Asian Nations (ASEAN) is an emerging concern. According to a 2018 report published jointly by Google and Temasek Holdings, a Singapore Government-owned company, six Southeast Asian countries – Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam – had in total more than 350 million Internet users, 90 million more than in 2015 (Google-Temasek 2018). The number of Internet users across all ASEAN countries can be expected to keep increasing dramatically, boosted by growth in Internet users in Myanmar, Cambodia, and Laos, among others. Google-Temasek (2018) also predicted that the Internet economy in these six Southeast Asian countries, worth about US$72 billion, would reach more than US$240 billion in 2025.
Asia is also seen as the future of cybercrime. As Chang et al. (2018) argue, with "the globalization of cybercrime and the increasing penetration of digital technology in Asia, many see the 'Wild East' join, if not eclipse, the 'Wild West' as a source of criminality." It has been estimated that criminal activities on the Internet cost the world as much as US$600 billion in 2017.
One-third of that loss was in the Asia-Pacific region (Lewis 2018). The transnational character of cybercrime makes cybercrime investigation complicated, and one of the key elements for improving international cooperation is having harmonized laws against cybercrime. This chapter reviews the development of cybercrime law in the Northeast Asian countries of China (including Hong Kong), Japan, South Korea, and Taiwan and in the ten ASEAN member states, and compares these laws with the Council of Europe's Convention on Cybercrime (Budapest Convention). It also proposes some key directions for the future development of the legal framework against cybercrime in the region.

The Development of the Internet in Asia

Asia is the future growth area of the Internet. According to World Internet Statistics, the global number of Internet users has reached over 4.5 billion, and as of June 2019 more than half of them (55%, around 2.3 billion) were located in Asia. The number of Internet users in Asia has increased twentyfold compared with 2000, and there remains room for substantial further growth, as only 54% of Asia's population are currently Internet users (Miniwatts Marketing Group 2019). The Asia-Pacific region has also been recognized as "the fastest-growing region in the global e-commerce marketplace, accounting for the largest share of the world's business-to-consumer e-commerce market" (Asian Development Bank 2018).
However, not all Asian countries have a similar level of Internet participation. The digital divide remains, although there has been a dramatic increase in Internet users in developing countries such as Myanmar, Laos, and Cambodia. Meanwhile, the Internet penetration rate in most East Asian countries, including China, Japan, South Korea, and Taiwan, is above the average for Asia as a whole. According to World Internet Statistics (Miniwatts Marketing Group 2019), China has the most Internet users in Asia and the world, with more than 854 million. However, with a penetration rate of around 60%, there is still room for a significant increase in Chinese Internet users. In South Korea, Japan, and Taiwan, over 90% of the total population are Internet users.
The digital divide is large even within Southeast Asia, or the Association of Southeast Asian Nations (ASEAN). ASEAN was formed in 1967 to promote regional security and cooperation. It was originally formed by Indonesia, Malaysia, the Philippines, Singapore, and Thailand.
The number of member states increased over the years, and it now has ten: Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam. According to ASEAN Key Figures 2018, the total population of the ten ASEAN member countries reached 642.1 million in 2017, and around 48% of the ASEAN population are Internet subscribers (ASEAN 2018). Ninety percent of Internet users in ASEAN countries connect to the Internet primarily through their mobile phones. For countries like Myanmar, the mobile phone is the device most Internet users use to connect to the digital world; they have skipped the desktop period and moved directly into the mobile Internet era. Thanks to the Internet, many poorer and rural communities have skipped landline telephony for mobile telephony and skipped traditional bank-based banking for mobile banking.

[Fig. 1 Percentage of individuals using the Internet: ASEAN. (Source: Created by the author based on data from ITU. https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx)]

Figure 1 shows the Internet penetration rate in the ten ASEAN countries. All ten countries have seen significant growth in their Internet penetration rate over the past 10 years. The countries with a high Internet penetration rate are Brunei, Singapore, and Malaysia. Brunei overtook Singapore in 2016, with the highest penetration rate of approximately 95%. Singapore comes second, with a penetration rate of around 85%, and Malaysia third, with a penetration rate of around 80%. A second group, comprising the Philippines, Thailand, and Vietnam, have penetration rates between 50% and 60%. The Philippines has the fastest-growing Internet penetration rate among the ten countries, rising from around 6% in 2008 to 60% in 2017. The development of mobile technology, including the introduction of 3G and 4G services, contributed significantly to this increase: while fixed broadband has remained expensive and limited, with only 1.9 fixed (wired) broadband subscriptions per 100 inhabitants in 2017, 68.6% of inhabitants are mobile broadband subscribers (ITU 2018).
The third group comprises Cambodia, Indonesia, Myanmar, and Laos, countries whose penetration rates are lower than the average for the whole of ASEAN. Laos has the lowest penetration rate (25%). Both Myanmar and Cambodia have seen a sudden recent increase, from less than 1% in 2008 to above 30% in 2017. Again, mobile devices are the main element contributing to this growth. Myanmar, a country that has recently reopened to the world, was the fourth


fastest-growing mobile market in the third quarter of 2015, with an estimated 36 million mobile subscribers. ITU statistics show that the number has kept increasing, reaching around 48 million subscribers in 2017, 130 times the number in 2008 (367,388 mobile subscribers). The 2014 opening of the mobile telecommunications market brought three foreign telecommunication companies, Qatar's Ooredoo, Norway's Telenor, and Mytel (a consortium led by the Vietnamese mobile network operator Viettel), into the market to compete with the previous monopoly held by Myanmar Posts and Telecommunications (MPT). The competition caused the price of a SIM card to drop from US$2,000 in 2009 to K1,500 (approximately US$1.50) in 2014, an affordable price for the general public (Chang 2017; Ericsson 2015; Motlagh 2014; Trautwein 2015).
This gap between the most developed nations within ASEAN and the least, in terms of the development of cybercrime and security laws and the general security of the Internet services available within each country, has been termed the digital divide (Chang 2017; Broadhurst and Chang 2013). According to the OECD (2001, p. 5), the "digital divide" refers to "the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard both to their opportunities to access information and communication technologies (ICTs) and to their use of the internet for a wide variety of activities." Although the 2003 Singapore Declaration emphasized the importance of reducing the digital divide both within and among ASEAN member countries (ASEAN 2003), the gap is still huge: some ASEAN member countries have an Internet penetration rate of more than 90% (Brunei), while others remain below 30% (Laos).
The digital divide represents a significant challenge within ASEAN, as the development of a single cybercrime framework will be difficult given the developmental variance between ASEAN states. According to the 2017 Global Cybersecurity Index, the countries in the first group perform well on domestic legal measures on cybercrime and cybersecurity, while most countries in the third group perform poorly (ITU 2017). While ASEAN countries support collective action to fight cybercrime (discussed later in this chapter), the digital divide has impeded the ability of the member states to collaborate and take measures to combat cybercrime and build a secure ASEAN region.
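The growth multiples quoted for Myanmar can be sanity-checked with simple arithmetic. The short Python sketch below is purely illustrative; it uses only the approximate figures cited above (ITU statistics and press reports), not authoritative source data.

```python
# Rough arithmetic check of the Myanmar figures quoted in the text.
# All input numbers are the approximate values cited above, not
# authoritative data.

subscribers_2008 = 367_388      # mobile subscribers, 2008
subscribers_2017 = 48_000_000   # mobile subscribers, 2017 (approximate)

growth = subscribers_2017 / subscribers_2008
print(f"Subscriber growth 2008-2017: {growth:.1f}x")  # just over 130x

sim_price_2009 = 2000.0   # SIM card price, 2009 (US$)
sim_price_2014 = 1.5      # SIM card price, 2014 (US$)

price_drop = sim_price_2009 / sim_price_2014
print(f"SIM price fell by a factor of roughly {price_drop:.0f}")
```

Run as a script, this confirms that the cited subscriber counts correspond to roughly a 130-fold increase, and that the quoted SIM prices imply a drop of about three orders of magnitude.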

Cybercrime in Asia

The rapid growth of digital technologies in Asia makes the region not only a prime target for cybercriminals but also a springboard for cyberattacks. Cybercrime has long been a critical concern in East Asian countries, and emerging countries such as Indonesia, Malaysia, and Vietnam are becoming global hotspots for the launch of malware attacks (Subhan 2018; Hadjy 2019). In Vietnam, the Internet economy has been called "a dragon being unleashed," as it tripled in 3 years (Google-Temasek 2018). However, Vietnam has also been identified as having the potential to be a


mid-level cybercrime hub, given its strong hacking traditions and other technology pursuits (Stilgherrian 2019).
Broadhurst and Chang (2013) described the types of cybercrime occurring regularly in Asia, including the popularity of malware and botnets, online scams and frauds, and serious cyberattacks linked to the complicated political situation in the region. These cybercrimes have continued and increased. Take political cyberattacks, for example: it has been reported that Taiwanese Government websites come under cyberattack at least 20 million times per month (Lee 2018). Advanced persistent threat (APT) techniques were employed in these attacks; that is, hacking nowadays is better designed to target particular entities, countries, or regions. PLATINUM, malicious software discovered by Microsoft in 2016, is a typical APT that targets mainly ASEAN countries, especially Indonesia (Microsoft 2016). Cyberattacks are also used to demonstrate a political stance, and hacktivists regularly launch cyberattacks to express their anger at political events. Myanmar Government websites came under serious cyberattack in response to the forced displacement of 700,000 Rohingya Muslims from Rakhine state into Bangladesh, and Thailand Government websites came under attack by Myanmar hacktivists after two Burmese were charged with murdering two tourists (Chang 2017). During the 2019 Hong Kong protests against the anti-extradition bill, LIHKG, an online discussion forum used by the protestors, came under a severe distributed denial-of-service (DDoS) attack that attempted to bombard the forum with traffic and overload the server (see https://lihkg.com/thread/1525319/page/1. Last accessed: 1 September 2019).
Crimes on social media sites, as predicted in Broadhurst and Chang (2013), are becoming a serious concern in Asia. Online harassment, stalking, online scams and frauds, and child grooming are becoming prevalent in the social media space, and the amount of hate speech and fake news disseminated in the region is increasing and of serious concern. Myanmar has recently suffered from the spread of hate speech and fake news about the violence against the Rohingya, and Facebook was accused of allowing these rumors to spread; following public pressure, Facebook has had to hire more Burmese speakers to review posts in Burmese (McLaughlin 2018; Stecklow 2018). Evidence has shown that social media such as Twitter and Facebook have been used in Hong Kong to disseminate news that is manipulated and one-sided. Some of these attacks were sophisticated and are suspected to be organized and state-sponsored (Chang 2019). Twitter has suspended approximately 200,000 accounts for violating its platform manipulation policies; these accounts are believed to be part of "a significant state-backed information operation focused on the situation in Hong Kong, specifically the protest movement and their calls for political change" (Twitter Safety 2019). Taiwan has been ranked top among the 179 countries surveyed for exposure to information operations (including fake viewpoints or false information) by foreign governments and their agents (Lin and Wu 2019; Mechkova et al. 2019). Internet vigilantism, in which netizens use social media to realize their own "real justice" by facilitating crime investigations and even punishing suspected offenders, is also becoming popular in Asia (Chang and Poon 2017; Chang et al. 2018).

16

Legislative Frameworks Against Cybercrime: The Budapest Convention and Asia

333

The Emergence of Cybercrime Laws and Regulation in Asia

To prevent cybercrime, it is important that states are equipped with comprehensive cybercrime laws aligned with international standards. While some countries are proposing a new UN standard, the Council of Europe Convention on Cybercrime (Budapest Convention) is by far the most widely adopted international standard. The Budapest Convention is an agreement between signatory states to criminalize cybercrime (Council of Europe 2001). The agreement, forged between the Council of Europe and many other states, came into force in 2004 and had 64 signatory states as of August 2019. The convention has been signed by many Council of Europe states and many non-council members, as states do not have to be members of the Council of Europe to become party to it (Council of Europe 2001). Although a regionally drafted convention, it has been noted in United Nations Resolution 56/121 and ratified by 20 non-member states, which established its status as an international convention.

The Budapest Convention and Northeast Asia

Among the Northeast Asian countries, including China, South Korea, Japan, and Taiwan, Japan is the only country to have ratified the Budapest Convention. Taiwan was not able to become a signatory due to its political status, while China was not willing to become a signatory and is proposing a new UN cybercrime standard (Chang 2012).

China (and Hong Kong)

China's cybercrime laws are contained in the Criminal Law of the People's Republic of China 1979. Articles 285 and 286 were added in 2009 to regulate offenses against the confidentiality, integrity, and availability of computer data and systems, and Article 287 regulates the commission of financial crime using a computer. However, while Article 285 regulates illegal access to computer systems and misuse of devices, it applies only to crimes against computer systems containing information concerning state affairs, the construction of defense facilities, and sophisticated science and technology. Article 286 regulates data interference and system interference. Article 12 of the Cybersecurity Law of the People's Republic of China 2016 protects citizens by raising the security of network services. Similar aspects are seen in Article 21, which establishes a Multi-Level Protection System (MLPS) for cybersecurity. These provisions are specifically designed with both national security and citizen protection in mind, which aligns with the spirit of the convention.
In the special administrative region of Hong Kong, the Telecommunications Ordinance includes cybercrime offenses. Unauthorized computer access is criminalized under s. 27A, which aligns well with the convention and other unauthorized-access legislation. S. 161, which criminalized computer access with dishonest or criminal intent, was controversial and criticized as vague (Cheng 2018). The legislation was created to prevent certain types of cybercrime, including upskirt photography

334

L. Y. C. Chang

and the leaking of exam result papers to parents for financial gain, although wider concerns over the legislation's scope emerged (Cheng 2018; Lum and Lau 2019). The section was repealed in 2019 after considerable media concern in 2018.

South Korea

In South Korea, cybercrime is covered by the Criminal Act and the Act on Promotion of Information and Communications Network Utilization and Information Protection, etc. (the Information and Communications Network Act). Most of the cybercrime-related clauses were added in 1995 to accommodate the need to deal with emerging cybercrime. Causing damage to electromagnetic records used by a public office was added as an offense in Article 141, punishable by imprisonment for up to 7 years or by a fine not exceeding ten million won. Similarly, Article 366 regulates the destruction of and damage to electromagnetic records used by others. Articles 227-2 and 232-2 were added in the 1995 amendments to punish the falsification or alteration of public or private electromagnetic records. The prohibition of data and system interference can also be seen in Article 314, which covers interference with business by damaging or destroying computers and electromagnetic records or by entering false information into the data processor. Fraud by use of a computer was added later, in 2001. These amended articles and clauses align well with the Budapest Convention. The Information and Communications Network Act covers offenses such as unauthorized access, data and system interference, and the misuse of devices, for example, launching denial-of-service attacks and conveying or spreading malicious programs (see Articles 44, 48, and 71).

Japan

Japan is the only Northeast Asian country to have signed and ratified the Budapest Convention. The Penal Code and the Act on the Prohibition of Unauthorized Computer Access, which together regulate cybercrime, were both amended to align with the Budapest Convention. Article 161-2 of the Penal Code was added to prohibit the unauthorized creation of electronic or magnetic records with the intention of conducting improper administration.
Article 234-2 was also added, focusing on obstruction of business by damaging a computer. The Act on the Prohibition of Unauthorized Computer Access 1999 (amended in 2012 and 2013) criminalizes a variety of cyber actions, including unauthorized computer access (Article 3) and the misuse of another individual's access controls (Articles 4–7).

Taiwan

Although Taiwan is not an eligible signatory of the Budapest Convention due to its special political situation, its criminal code was amended in 2003 to regulate cybercrime consistently with the Budapest Convention through the addition of Chapter 36, Offenses Against Computer Security. There are six articles (Articles 358–363) in this chapter, covering illegal access, illegal interception, data interference, system interference, and misuse of devices. In the context of Article 358, intentional access to a computer using another's password, without right, or by circumventing

16

Legislative Frameworks Against Cybercrime: The Budapest Convention and Asia

335

protective measures, or by discovering or exploiting loopholes in another's computer system, is punishable by up to 3 years in prison and/or a fine of up to NT$100,000. Article 359 regulates the unauthorized acquisition, deletion, or alteration of the electromagnetic records of another's computer. System interference is regulated in Article 360, which protects the Internet from being paralyzed by distributed denial-of-service or equivalent attacks. Article 362 focuses on the offense of creating computer programs specifically for the perpetration of a crime. Illegal interception is regulated in the Communication Protection and Surveillance Act, which provides that illegal interception of another's communications can be punished by up to 5 years in jail. Although there is doubt over whether this Act applies only to illegal interception by government agencies, a broader interpretation is supported by Taiwan's Ministry of Justice, which asserts that the Act also applies to illegal interception by nongovernment organizations and individuals.

The Budapest Convention and ASEAN

The Association of Southeast Asian Nations (ASEAN) was established not only to accelerate the economic growth of the region but also to promote regional peace and stability. This is to be achieved through active collaboration and mutual assistance on matters of common interest in the economic, social, cultural, technical, scientific, and administrative fields (ASEAN 1967). In 2003, ASEAN began to meet and discuss cybersecurity and cybercrime issues. At the Third ASEAN Telecommunications and IT Ministers Meeting, ministers agreed in their Singapore Declaration to launch a "Virtual Forum of ASEAN Cybersecurity" and asked all member countries to establish national Computer Emergency Response Teams (CERTs). In the 2004 Joint Communique of the Fourth ASEAN Ministerial Meeting on Transnational Crime, cybercrime was recognized as a growing transnational crime that would affect the security of the whole of ASEAN, and the ministers urged member states to build effective collaboration against it. Since then, cybercrime- and cybersecurity-related issues have been addressed in several initiatives, such as the 2006 Statement on Cooperation in Fighting Cyber Attack and Terrorist Misuse of Cyberspace, the 2008 ASEAN Economic Community Blueprint, the ASEAN ICT Masterplan 2015, and the 2012 Statement on Cooperation in Ensuring Cyber Security (ASEAN 2003, 2004; Chang 2017). In November 2017, the ASEAN Declaration to Prevent and Combat Cybercrime was adopted by the heads of state/government of ASEAN member countries at the 31st ASEAN Summit held in Manila. In the declaration, the ASEAN countries acknowledged the importance of harmonizing laws related to cybercrime and electronic evidence and encouraged member states to explore the feasibility of acceding to existing regional and international instruments for combating cybercrime.
It also stressed the importance of enhancing international collaboration and of promoting cooperation among ASEAN member states on community education and awareness to prevent cybercrime (ASEAN 2017).


With regard to the harmonization of laws related to cybercrime, the EU-ASEAN Workshop on Cybercrime Legislation (Malaysia, 2008) provides some evidence of ASEAN's desire to develop an effective cybercrime strategy (ASEAN 2008). The workshop was based on content from the Council of Europe's Convention on Cybercrime (Budapest Convention), with a specific focus on regional cooperation, and many Council of Europe members were present during the discussions (ASEAN 2008). A Japan-ASEAN Cybercrime Dialogue, held in Bandar Seri Begawan, Brunei Darussalam, also confirmed the importance of the Budapest Convention to ASEAN member states. Although most ASEAN member states have developed cybercrime laws, the Philippines is the only ASEAN country to have ratified the Budapest Convention. Critics have suggested that ASEAN is not serious about cybercrime, given the large number of Internet users in Asia and the lack of a concrete cybersecurity framework (Chen 2017; Chang 2017). There has also been some suggestion that ASEAN intends to create its own cybercrime framework, specific to the region. There is some evidence that this is likely, as ASEAN states, including Singapore, have invested significantly in the ASEAN Cyber Capacity Program (Chen 2017). However, there is still a need to examine whether the development of cybercrime law (substantive law) aligns with the Budapest Convention. The following sections briefly introduce the current measures that ASEAN countries use to combat cybercrime. As Cambodia has no cybercrime law yet, it is not discussed and is listed as weak.

Brunei

Brunei's Computer Misuse Act was amended in 2007. While it contains sections that lie outside the scope of the Budapest Convention's spirit, it is structured in a similar way, with a section dedicated to outlining cyber offenses. Its sections are worded similarly to, and cover similar offenses as, the Budapest Convention, with provisions dedicated to unauthorized access (s. 3), unauthorized interception of a computer service (s. 6), and unauthorized disclosure of an access code (s. 8), among others. Under section 9, the punishment is enhanced if the offender knows, or ought reasonably to have known, that the offense was committed against a protected computer, that is, a computer holding data or programs involving national security, defense, international relations, law enforcement, or the protection of public safety.

Indonesia

Cybercrime in Indonesia is covered by the Electronic Information and Transactions Law 2008. The law was created largely to protect electronic transactions and computers operating in national security contexts. It was amended in 2016 without tangible changes and is moderately aligned with the Budapest Convention. Given that Indonesia's cyberlaw is an electronic transactions law rather than a cybercrime law, it is structured very differently from the Budapest Convention, with its opening sections largely dedicated to electronic transactions and records. However, Articles 27–37 contain a series of prohibited acts and are very similarly


structured to the Budapest Convention itself. For example, Article 30 prohibits unauthorized access to a computer or computer system, including interference with its operation and the use of hacking to achieve this, while Article 31 prohibits interception of, or interference with, a computer's electronic information or records. These articles appear to be inspired by the Budapest Convention, covering similar offenses in similar wording. In contrast, however, the legislation focuses strongly on the protection of electronic records rather than on cybercrime more directly. For example, Article 27 specifically prohibits knowingly distributing electronic records related to extortion, gambling, or defamation. Such articles could be drafted significantly more broadly so as to cover cybercrime more fully while remaining sensitive to issues related to electronic systems and records.

Laos

Laos's Law on Preventing and Combating Cybercrime 2015 is reasonably well aligned with the Budapest Convention, which has clearly been used to structure the law. Articles 9–18 list a number of cybercrime offenses, most of which are very similar to those in the Budapest Convention, including unauthorized computer access, interception of computer data, causing damage via online social media, disseminating pornography, and interfering with computer systems, among others. As these examples illustrate, Laos's cybercrime law goes a step further than the Budapest Convention by covering areas such as social media. Article 13, which covers social media, specifically legislates against using social media to cause "damage," defined as hate speech dissemination, misinformation, and the dissemination of information that damages the national interest. Legislation covering hate speech and misinformation is to be welcomed, even though neither is specifically covered in the Budapest Convention. That said, the sanction for disseminating information that damages the national interest appears to be an intentionally vague clause, and of concern is the possibility that it could be used to suppress political or religious expression in Laos. Another valuable part of Laos's cyberlaw is Articles 24–30, which set out government strategies to combat cybercrime. This is not directly covered by the Budapest Convention but adds significant detail on government plans to combat cybercrime. Similarly, Articles 31–32 cover the Laos Computer Emergency Response Team (Laos CERT), addressing the mutual assistance and international or regional cooperation imperatives of the Budapest Convention.

Malaysia

Malaysia's Computer Crimes Act 1997 (updated 2011) is only moderately aligned with the Budapest Convention, covering a number of the offenses listed.
However, many other offenses listed in the Budapest Convention are not covered by Malaysia's Computer Crimes Act. There is considerable emphasis in Malaysia's cyberlaw on criminalizing unauthorized access. Four separate offenses are listed: unauthorized access to computer material, unauthorized access to


computer content, unauthorized access with intent to commit a further offense, and wrongful communication of an access code or password. Variations of unauthorized access are the only major offenses covered in Malaysia's cyberlaw. Illegal interception of data, system interference, computer-related forgery, child pornography, and offenses related to copyright and intellectual property are not covered. The unauthorized-access offenses that are covered are rigorously drafted, which makes the lack of coverage of other cybercrime offenses all the more limiting.

The Philippines

The Philippines is the first ASEAN country to have ratified the Budapest Convention. Its Cybercrime Prevention Act 2012 is strongly influenced by the Budapest Convention and covers many of the offenses it lists. The law is split into several offense categories clearly influenced by the Budapest Convention, including misuse of devices and computer- and content-related offenses. These largely mirror the categories in the Budapest Convention, and for this reason the Philippine coverage of cyber offenses is fairly rigorous. Offenses not included in many ASEAN cyberlaws, including child pornography and explicit images, are covered in Philippine legislation, aligning it well with the Budapest Convention.

Singapore

Singapore's Computer Misuse and Cybersecurity Act 1993 is a good example of robust cybercrime legislation that aligns favorably with the Budapest Convention. The law contains an extensive list of cyber offenses, most of which appear to have been inspired by the Budapest Convention itself. Offenses including unauthorized access to computer material, unauthorized interference with computer functions, and disclosure of access codes and passwords are all prohibited, among others. The law also contains an additional offense, under unauthorized disclosure, criminalizing the disclosure of personal information in order to commit a cybercrime offense. The offense is specifically designed to combat the selling or passing on of personal information, obtained in an unauthorized way, to another party who may use that information to commit a cybercrime offense. This is a forward-thinking clause, as it allows law enforcement to prosecute those selling stolen credit card or medical information, obtained by hacking or other means, on the dark Web or similar sites.
The clause does not apply when the individual is unaware that the information will be used to commit a cybercrime offense. This reflects the safeguards in Article 15 of the Budapest Convention, which ensure that sanctions and offenses are proportionate, and Singapore's cyberlaw does a good job of clearly outlining these areas.

Thailand

Thailand's Computer Crimes Act 2007 is a reasonably robust cybercrime law. (A controversial Cybersecurity Act, which increases the government's power to control the Internet, was passed in February 2019 (Sattaburuth 2019). Due to the research


timeline, this paper does not include the new Cybersecurity Act.) The offenses listed in the legislation are largely inspired by the Budapest Convention, following a similar structure and covering similar offenses. Various types of unauthorized access and modification are covered. However, Thailand's cyberlaw also breaks new ground with an offense criminalizing the selling of cybercrime instructions (s. 13). This is an innovative clause, replicated neither in other ASEAN legislation nor in the Budapest Convention itself. Under section 16, importing data into a public computer is criminalized. This is not covered in the Budapest Convention and may be targeted at hacktivism. While hacktivist activities do present a concern, criminalizing protest and government resistance is a troubling development.

Vietnam

Vietnam's Law on Information Security 2015 is one element of Vietnam's wider body of electronic laws, which includes the Law on Information Technology 2006 and the Law on E-Transactions 2005, and it is complemented by the newly passed Law on Cybersecurity 2018. This analysis focuses on the Law on Information Security 2015, as it forms Vietnam's first dedicated cyberlaw. The law is structured somewhat differently from many ASEAN cyberlaws and from the Budapest Convention itself: instead of outlining offenses as the Budapest Convention does, it sets out principles and state policies on information security. However, Article 8 lists six offenses related to information security. These too seem largely inspired by the Budapest Convention but are applied to information security rather than to cybercrime more broadly. There is still no regulation of computer-related offenses such as computer-related forgery and fraud (Title 2 of the Budapest Convention). In addition, the offenses in this law include preventing network communication, disabling network security measures, and illegally disseminating information using system exploits, among others. These offenses are not included in the Budapest Convention itself, indicating that there might be a need to revisit and review whether the Budapest Convention should be updated. Overall, this is a robust information security law made in the spirit of the Budapest Convention, despite covering areas of information security the convention does not. The Law on Cybersecurity 2018, by contrast, focuses on developing cybersecurity for national security purposes only, not on increasing cybersecurity for individuals and businesses in Vietnam.
The law is designed to sanction those who commit cyber offenses against sections of state infrastructure considered essential to national security and to provide a means of increasing the cybersecurity of these areas, including banking, law enforcement, and the military (Articles 8–15). Information related to these areas is classified as a state secret under Article 10. Much of the law does very little to protect the citizens of Vietnam from cyberattack, focusing instead on protecting government structures. That said, the law does mandate that children be given the necessary protection in cyberspace, in line with the Budapest Convention, but it goes on to place


this responsibility at the feet of teachers, parents, and organizations rather than the government itself. This section therefore also falls short of providing children the protection the convention stipulates.

Myanmar

Myanmar's Electronic Transactions Law 2004 is one of the poorest examples of cyberlaw in ASEAN, largely because it was created not to combat cybercrime but to give the then military government sweeping powers that could be exercised at will. The law has not been repealed, remains in force, and bears almost no alignment with the Budapest Convention. Chapters IV and V are dedicated to the creation of oversight bodies designed to ensure that the law's later provisions are upheld. Chapter VI places significant restrictions on businesses and requires electronic keys to be used for transactions. Chapter VII is dedicated to penalties, all of which involve jail time. Under s. 34, offenses include hacking, altering electronic records, and communicating access codes or electronic keys to unauthorized persons. These offenses were drafted from the perspective of a government attempting to limit business communication and outside influence in Myanmar, not primarily to prevent cybercrime in the nation. For this reason, the law bears no connection to the Budapest Convention whatsoever, and it does not appear that the convention's structure or coverage was consulted in its drafting. However, the drafting of a new cyberlaw is currently underway: in mid-2018, a call for tender was issued for work on a new Myanmar cyberlaw, and the new cybercrime law is expected to align more favorably with the Budapest Convention.

Conclusion

This chapter has provided an overview of the development of the Internet in Asian countries, both the Northeast Asian countries (China, Japan, South Korea, and Taiwan) and the ASEAN countries. Using the Budapest Convention as a criterion, it reviewed the current measures these countries use to combat cybercrime. Thanks especially to the efforts ASEAN has put into reducing it, the digital divide in the region is becoming smaller. However, a large gap remains between the developed countries in Asia and the developing ASEAN countries, i.e., Myanmar, Laos, and Cambodia. That said, there has been significant progress in the development of cybercrime laws in the region. As shown in Table 1, member states including Singapore, Laos, Brunei, the Philippines, and Thailand have adopted cybercrime laws that align favorably with the Budapest Convention, while countries such as Myanmar and Cambodia still have inadequate cybercrime laws. Both of the latter are in the process of drafting new cybercrime laws, which can be expected to align more favorably with the Budapest Convention.


Table 1 The Budapest Convention and Asian countries (alignment to the convention)

Northeast Asia. Strong: South Korea, Japan, Taiwan. Moderate: China (including Hong Kong). Weak: none.

ASEAN. Strong: Singapore, Laos, Brunei, the Philippines, Thailand. Moderate: Malaysia, Vietnam, Indonesia. Weak: Myanmar, Cambodia.

Nonetheless, while these countries are still at an early stage of developing information and communication technology laws and regulations, they cannot escape cybercrime. Without proper knowledge of and education on ICTs, online safety, and cybersecurity, and without proper laws and capacity against cybercrime, these areas risk becoming safe havens from which cybercriminals can operate, both within these countries and against other countries. Apart from developing the legal framework, it is also important for ASEAN countries to put more resources into building cybersecurity awareness and cyber capacity. These are especially needed in countries such as Myanmar, Laos, and Cambodia, where general public access to the Internet is relatively new. Although cybersecurity awareness was emphasized in the ASEAN Master Plans 2015 and 2020, people in these countries still use the Internet without basic cybersecurity awareness. To be effective, cybersecurity awareness programs should take local culture and usage behaviors into consideration; simply applying or copying materials and programs from the developed world might not always be effective. Last but not least, it is important to acknowledge the need to update the Budapest Convention. The Internet has advanced significantly since the convention was drafted in 2001. With the development of social media and mobile technology, cybercrime is becoming more complicated, and new types of crime are emerging. Fake news, hate crime, and misinformation, facilitated by the popularity of social media, are causing serious harm to society, yet these issues are not included in the Budapest Convention. The darknet, the Internet of Things (IoT), and the development of blockchain and artificial intelligence might all influence the governance of cyberspace.
Therefore, there is a need for the Budapest Convention to be revised to cover these issues.

Cross-References

▶ Technology Use, Abuse, and Public Perceptions of Cybercrime
▶ The Legislative Framework of the European Union (EU) Convention on Cybercrime


References

ASEAN. (1967). The ASEAN Declaration (Bangkok Declaration). Retrieved November 20, 2018, from http://asean.org/the-asean-declaration-bangkok-declaration-bangkok-8-august-1967/
ASEAN. (2003). The Singapore Declaration: An action agenda. Singapore: ASEAN.
ASEAN. (2004). 2004 Joint communique of the fourth ASEAN ministerial meeting on transnational crime. Retrieved March 19, 2019, from https://asean.org/joint-communique-of-the-fourth-asean-ministerial-meeting-on-transnational-crime-ammtc-bangkok/
ASEAN. (2008). EU-ASEAN workshop on cybercrime legislation in the ASEAN member states. Retrieved February 12, 2019, from https://www.asean.org/uploads/archive/apris2/file_pdf/Press%20Releases/EU-ASEAN%20Workshop%20on%20Cybercrime%20Legislation%20in%20the%20ASEAN%20Member%20States.pdf
ASEAN. (2017). ASEAN Declaration to Prevent and Combat Cybercrime. Manila: ASEAN.
ASEAN. (2018). ASEAN key figures 2018. Retrieved February 2, 2019, from https://asean.org/?static_post=asean-key-figures-2018
Asian Development Bank. (2018). Embracing the e-commerce revolution in Asia and the Pacific. Retrieved May 14, 2019, from https://www.adb.org/sites/default/files/publication/430401/embracing-e-commerce-revolution.pdf
Broadhurst, R., & Chang, L. Y. C. (2013). Cybercrime in Asia: Trends and challenges. In B. Hebenton, S. Y. Shou, & J. Liu (Eds.), Asian handbook of criminology (pp. 49–64).
Chang, L. Y. C. (2012). Cybercrime in the Greater China Region: Regulatory responses and crime prevention across the Taiwan Strait. Cheltenham: Edward Elgar.
Chang, L. Y. C. (2017). Cybercrime and cyber security in ASEAN. In J. Liu, M. Travers, & L. Chang (Eds.), Comparative criminology in Asia (pp. 135–148). New York: Springer.
Chang, J. Y. T. (2019, August 21). Twitter and Facebook suspend ‘China-linked’ accounts for misinformation. South China Morning Post. Retrieved September 1, 2019, from https://www.scmp.com/video/world/3023645/twitter-and-facebook-suspend-china-linked-accounts-misinformation
Chang, L. Y. C., & Poon, R. (2017). Internet vigilantism: Attitudes and experiences of university students in Hong Kong. International Journal of Offender Therapy and Comparative Criminology, 61(6), 1912–1932.
Chang, L. Y. C., Zhong, Y. L., & Grabosky, P. (2018). Citizen co-production of cyber security: Self-help, vigilantes, and cybercrime. Regulation & Governance, 12(1), 101–114.
Chen, Q. (2017, August 2). Time for ASEAN to get serious about cybercrime. The Diplomat. Retrieved March 4, 2019, from https://thediplomat.com/tag/cyber-crime-in-asean/
Cheng, K. (2018, September 6). Hong Kong’s top court to define dishonest access to computer charge. Hong Kong Free Press. Retrieved August 24, 2019, from https://www.hongkongfp.com/2018/09/06/hong-kongs-top-court-define-dishonest-access-computer-charge/
Council of Europe. (2001). Convention on cybercrime. Retrieved November 17, 2019, from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185
Ericsson. (2015). Ericsson mobility report: On the pulse of the networked society. Retrieved February 9, 2019, from http://www.ericsson.com/res/docs/2015/mobility-report/ericsson-mobility-report-nov-2015.pdf
Google-Temasek. (2018). E-Conomy SEA 2018: Southeast Asia’s Internet economy hits an inflection point. Retrieved February 14, 2019, from https://www.thinkwithgoogle.com/intl/en-apac/tools-resources/research-studies/e-conomy-sea-2018-southeast-asias-internet-economy-hits-inflection-point/
Hadjy, P. (2019, March 13). As Internet adoption grows in Southeast Asia, SMEs must defend against sophisticated cyberattacks. South China Morning Post. Retrieved May 14, 2019, from https://www.scmp.com/tech/innovation/article/3001365/internet-adoption-grows-southeast-asia-smes-must-defend-against
ITU. (2017). Global Cybersecurity Index (GCI) 2017. Geneva: International Telecommunication Union.


ITU. (2018). ICTEYE: Key ICT data and statistics. Retrieved February 4, 2019, from https://www.itu.int/net4/itu-d/icteye/CountryProfile.aspx
Lee, S. F. (2018, April 5). Taiwanese government websites are under constant attack by the Chinese cyber army. Liberty Times. Retrieved September 10, 2019, from https://news.ltn.com.tw/news/focus/paper/1190027
Lewis, J. (2018). Economic impact of cybercrime: No slowing down. Santa Clara: McAfee.
Lin, R., & Wu, F. (2019, April 27). Taiwan’s online ‘opinion war’ arrived. CommonWealth. Retrieved August 20, 2019, from https://english.cw.com.tw/article/article.action?id=2375
Lum, A., & Lau, H. (2019, April 4). Hong Kong’s top court rules against one-size-fits-all charge for smartphone crimes. South China Morning Post. Retrieved August 24, 2019, from https://www.scmp.com/news/hong-kong/law-and-crime/article/3004587/hong-kongs-top-court-rules-against-one-size-fits-all
McLaughlin, T. (2018, June 7). How Facebook’s rise fueled chaos and confusion in Myanmar. Wired. Retrieved August 20, 2019, from https://www.wired.com/story/how-facebooks-rise-fueled-chaos-and-confusion-in-myanmar/
Mechkova, V., Pemstein, D., Seim, B., & Wilson, S. (2019). Digital Society Project dataset v1. Retrieved August 27, 2019, from http://digitalsocietyproject.org/data/
Microsoft. (2016). PLATINUM: Targeted attacks in South and Southeast Asia. Seattle: Microsoft.
Miniwatts Marketing Group. (2019). Internet World Stats. Retrieved August 25, 2019, from http://www.internetworldstats.com/stats.htm
Motlagh, J. (2014, September 30). When a SIM card goes from $2,000 to $1.50. Bloomberg. Retrieved February 9, 2019, from http://www.bloomberg.com/news/articles/2014-09-29/myanmar-opens-its-mobile-phone-market-cuing-carrier-frenzy
OECD. (2001). Understanding the digital divide. Paris: OECD.
Sattaburuth, A. (2019). Cybersecurity Bill passed. Bangkok Post. Retrieved March 8, 2019, from https://www.bangkokpost.com/news/security/1636694/cybersecurity-bill-passed
Stecklow, S. (2018, August 15). Why Facebook is losing the war on hate speech in Myanmar. Reuters. Retrieved August 20, 2019, from https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/
Stilgherrian. (2019, April 30). Vietnam ‘on the edge’ of becoming a mid-tier cybercrime hub. ZDNet. Retrieved April 14, 2019, from https://www.zdnet.com/article/vietnam-on-the-edge-of-becoming-a-mid-tier-cybercrime-hub/
Subhan, A. (2018, May 20). Southeast Asia’s cybersecurity an emerging concern. The ASEAN Post. Retrieved May 14, 2019, from https://theaseanpost.com/article/southeast-asias-cybersecurity-emerging-concern
Trautwein, C. (2015, March 25). Myanmar named fourth-fastest-growing mobile market in the world by Ericsson. Myanmar Times. Retrieved February 9, 2019, from http://www.mmtimes.com/index.php/business/technology/17727-myanmar-named-fourth-fastest-growing-mobile-market-in-the-world-by-ericsson.html
Twitter Safety. (2019, August 19). Information operations directed at Hong Kong. Twitter. Retrieved August 24, 2019, from https://blog.twitter.com/en_us/topics/company/2019/information_operations_directed_at_Hong_Kong.html

Cybercrime and Legislation in an African Context

17

Philip N. Ndubueze

Contents

Introduction
Cybercrimes: Definitional Issues
Patterns of Cybercrime
The State of Internet Infrastructure in Africa: Bridging the Digital Divide
Cyber Victimization: Vulnerabilities and Offenders' Practices in the African Context
Policing and Regulating the Internet
The State of Cybercrime Legislations in Africa
Impediments to the Effective Establishment and Enforcement of Cybercrime Legislations in Africa
Cybercrime Legislations: Toward a Global Harmonization
Policy Implications
Conclusion
Cross-References
References

Abstract

There has been exponential growth in cybercrime incidents and victimization on the African continent. This is perhaps the fall-out of the upsurge in Africa's Internet user population. Arguably, there is a correlation between the number of Internet users and the rates of cybercrime victimization in Africa. Despite growing concerns about the spate of crime and deviance in cyberspace, Africa as a continent has not been swift in responding to the burgeoning problem of crime and disorder online. Many African countries have yet to establish comprehensive legislation on cybercrime, while only a few have signed the African Union Convention on Cyber Security and Personal Data Protection. This chapter interrogates the problem of cybercrime and its legislation in the African context. It discusses the problem of the digital divide and efforts made to bridge it. The factors that make Africans vulnerable to cyber victimization, as well as cyber offenders' practices in Africa, are explored. Finally, the chapter examines the impediments to the effective establishment and enforcement of cybercrime legislations in Africa.

P. N. Ndubueze (*) Federal University, Dutse, Nigeria e-mail: [email protected]
© The Author(s) 2020 T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_74

Keywords

Africa · Cybercrime · Cyber victimization · Legislation · Policing

Introduction

Cybercrime is undoubtedly a growing problem in Africa. This is not unexpected, given the exponential growth in the number of Africans who access the Internet. As is typical of every technological invention, criminals can exploit the loopholes inherent in it to their own advantage. The unfolding digital revolution is therefore characterized by opportunities for criminal victimization on a scale that is unprecedented in human history. The global Internet usage population is growing significantly. According to Potogious et al. (2017), 85% of households in the European Union's 28 member states had Internet access. The Internet World Stats (2018) indicates a 10,199% growth in Internet users in Africa over the period 2000–2018. It is reasonable to assume that the growing number of Internet users worldwide will be followed by a corresponding rise in cybercrime victimization rates and a clamor for more regulation. Wentworth (2017) perhaps recognizes this in arguing that the increase in Internet use is followed by a massive increase in the number and range of people affected by decisions concerning it. Thus, cybercrime has featured prominently in national security agendas across the world (Wall and Williams 2013).
Cybercrime is assuming epidemic proportions in Africa. A 2011 Deloitte survey reportedly found that banks in Kenya, Rwanda, Uganda, Tanzania, and Zambia lost US $245 million to cyber fraud (Quarshie and Martin-Odoom 2012). Africa has become a "safe haven" for cybercriminals (see Cassim 2011; The United Nations Economic Commission for Africa 2014). Similarly, there were 19,531 reported cases of fraud at Deposit Money Banks in Nigeria in 2016, as against 10,743 in 2015 (Nigeria Inter-Bank Settlement System Report, as cited in The Nigeria Electronic Fraud Forum Annual Report 2016).
National governments across Africa have been enormously concerned about the spate of deviance and crime in cyberspace and have made efforts to clamp down on cybercriminals by establishing appropriate legislations. The International Telecommunications Union (2012) underscores the development of adequate legislation and a cybercrime-related framework as a critical component of a cybersecurity strategy. Marcum (2014), while recognizing that several legislations have been established to more effectively prosecute, punish, and deter cybercriminals and to check the proliferation of cybercrime, observes that such legislations are challenged by constitutional infringement issues. Grobler and Vuuren (2010)


opine that research has shown that no law on its own can effectively eliminate the problem of cybercrime. Nonetheless, Hakmeh (2017) acknowledges that even though cybercrime cannot be completely eradicated, governments can control it by creating resilient overall economies and institutions, as well as by strengthening deterrent capacity. She further emphasizes the role of legislation in that process.
This chapter interrogates the problem of cybercrime and its legislation in the African context. It discusses the problem of the digital divide and efforts made to bridge it. The factors that make Africans vulnerable to cyber victimization, as well as cyber offenders' practices in Africa, are explored. Finally, the chapter examines the impediments to the effective establishment and enforcement of cybercrime legislations in Africa.

Cybercrimes: Definitional Issues

The difficulty of arriving at a generally acceptable definition of cybercrime has been acknowledged in the literature (see Nhan and Bachmann 2015; Cassim 2011). Cybercrimes are offenses that pertain to and rely on the use of new telecommunication technologies for their commission (Leukfeldt and Yar 2016). They are illegal activities specifically anchored on the Internet and information technology (Antonescu and Birau 2015). Lindgren (2018) argues that cybercrime is often the result of old-school "real-world" crime transitioning to the online space of the Internet and digital media.
The above definitions of cybercrime apply to the African context for several reasons. First, the efforts of African governments to legislate on cybercrime are fundamentally provoked by the need to prevent and punish the abuse of the Internet and information and telecommunication technologies. Second, several traditional crimes are now being committed with remarkable ease on the Internet. One such crime is advance fee fraud (AFF). For example, results from a recent study by Nwokeoma et al. (2017) indicate that AFF predates the development of the Internet in Nigeria. Third, the legislations also regulate illegal use of digital media, which also seems to be on the increase.

Patterns of Cybercrime

There are different forms of cybercrime in Africa. However, most academic and media narratives of cybercrime have tended to focus on online fraud, thereby neglecting other forms of cybercrime that are increasingly widespread. This section discusses some of the patterns of cybercrime prevalent in twenty-first-century Africa.
Online advance fee fraud is perhaps one of the most prevalent cybercrimes in Africa. A recent study by Nwokeoma et al. (2017) that examined the processes of online advance fee fraud in Southeast Nigeria found that the crime of false pretense existed on the domestic scene from the 1970s, after the Nigerian civil war. The study argued that AFF became more internationalized and widespread with the commercialization of the Internet. Tade (2013) described the phenomenon of


yahoo plus, which he called "cyber spiritualism," as involving the accruing and use of mystical, spiritual, and supernatural powers by advance fee fraudsters (known as yahoo boys in Nigeria) to cast spells on their victims.
Cyberstalking is another form of cybercrime that is common in Africa. A survey among undergraduate students in a Nigerian university found that the majority of the students were aware of the problem of cyberstalking and its various forms (Ndubueze et al. 2017). Some people are unduly harassed and intimidated online, usually through text messages, phone calls, and social media. This may be done for the purpose of revenge or with a view to coercing a person into a relationship, and so on. Such behavior may cause the victim psychological harm.
Furthermore, digital piracy seems to be on the rise in Africa. Different factors may be responsible for its prevalence, and such factors may vary from country to country. For example, in a study that compared software piracy in South Africa and Zambia using social cognitive theory, Thatcher and Mathews (2012) found that the two countries had significantly different piracy rates (35% and 82%, respectively). They maintain that this suggests local contextual factors, such as culture and climate, may be responsible for the difference.
Moreover, frauds associated with automated teller machines (ATMs) are also prevalent. Fraudsters make unauthorized withdrawals from their victims' accounts through ATMs. In a study that examined the characteristics of ATM fraud in Southwest Nigeria, Tade and Adeniyi (2016) found that ATM fraudsters had close relationships with their victims. They identified card cloning, card swapping, and physical attacks at ATM galleries as the modus operandi of the fraudsters.
Despite the prevalence of ATM fraud in Africa, a recent survey in the Sunyani municipality of Ghana revealed that only 17.76% of respondents who use ATM services had little knowledge of ATM fraud in their municipality.
Revenge pornography is an emerging variant of cybercrime in Africa, even though very few studies have been conducted on the subject. However, revenge porn is not specifically captured in the legislations of some African countries. For example, Chisala-Tempelhoff and Kirya (2016) found that although Malawi and Uganda had anti-pornography and anti-obscenity provisions in their legislations, they did not have specific legislation on revenge pornography. The absence of clear-cut legislation on revenge pornography may delay the quest for justice for victims of this act.
Finally, the malware threat is also growing in Africa. The Infocyte Report (2017) argues that Africa has one of the highest mobile malware infection rates in the world. The report indicates the following IT infrastructure infection rates for some African countries: Libya (98%), Zimbabwe (92%), Algeria (84%), Cameroun (82%), Nigeria (82%), Ivory Coast (81%), Kenya (78%), Senegal (78%), Tunisia (74%), Morocco (66%), and Mauritius (57%).

The State of Internet Infrastructure in Africa: Bridging the Digital Divide

There have been concerns about how to bridge the digital divide in Africa. Rogers (2001) defines the digital divide as the gap that exists between individuals who are advantaged by the Internet and those who are disadvantaged by it. In the


early days of the Internet in Africa, Adeya (1996) argued that participation in the Information Age was a development opportunity that Africans must aggressively pursue. In the same vein, Graham (2011) posits that the digital divide is believed to be one of the most significant development issues confronting impoverished regions of the world. Sonaike (2004) believes that the Internet has the potential to narrow gaps between rich western nations and poor African nations, but contends that this can only happen with informed planning and appropriate development of telecommunications and Internet technology in Africa.
In 2003, it was argued that Africa lagged behind the other continents in Internet connectivity, with an estimated computer penetration of less than 3 per 1,000 people (Mutula 2003). Similarly, in 2004, Sonaike suggested that even though all 54 countries in Africa had access to the Internet, their global impact was low. He posited that efforts to widen connectivity were managed by American and European companies fundamentally for profit, and he decried the likelihood of this creating a new form of techno-dependence. Astier (2005) contends that the democratization of the Internet through wider access would help reconcile the societies of the globe, including many who were not aware of the existence of such networks. During the early days of the Internet in Africa, people accessed it through cybercafes, which were fundamentally the main access points open to the public (see Mutula 2003). Results from a study by Aikins (2019) indicate that challenges such as affordability and digital literacy influence the percentage of African households that do not access the Internet. Marshall and Clarkson (2008) observe the rapid growth in both capacity and availability of access to the global network [the Internet].
They underscore the transformation of the Internet from an infrastructure mainly used by academics and their benefactors to a widely shared resource that is becoming indispensable to humankind. Moreover, they observe the expansion of information sharing and communications systems such as the World Wide Web (WWW) and email, and how information technology has led criminals to place value on certain kinds of information. This implies that the increasing effort to bridge the digital gap in Africa is not without consequences. One such consequence is that African countries will begin to experience patterns of crime that were hitherto unknown to them. Na et al. (2018) assert that, the efforts of companies, governments, and international organizations toward better Internet diffusion notwithstanding, Internet users and fixed broadband subscribers differ significantly by country and income level. Therefore, African countries with higher income levels are more likely to have more Internet users. This is understandable if the cost factor is considered. Those with low income will naturally be concerned with fulfilling their physiological needs (i.e., the need for food, clothing, and shelter) and will bother less about accessing the Internet.
The Internet World Stats (2018) provides world Internet usage and population statistics as of June 30, 2018. The statistics indicate that there were 464,923,169 Internet users in Africa, with a 36.1% penetration rate, representing 11.0% of the population of Internet users across the world. These statistics suggest that the gap created by the digital divide is fast closing. They also mean that Africa is fast embracing the digital revolution sweeping across the globe.
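The internal consistency of the Internet World Stats figures just quoted can be checked with a little arithmetic. The sketch below is illustrative only: the 2000 baseline of roughly 4.5 million African Internet users is an assumption drawn from Internet World Stats' historical tables, not a figure stated in this chapter.

```python
# Back-of-the-envelope check of the Internet World Stats figures quoted above.
# The 2000 baseline (~4,514,400 African users) is an assumed figure taken from
# Internet World Stats' historical tables, not from this chapter.

africa_users_2018 = 464_923_169   # African Internet users, June 30, 2018
africa_users_2000 = 4_514_400     # assumed baseline for 2000
penetration = 0.361               # share of Africa's population online
world_share = 0.110               # Africa's share of world Internet users

growth_pct = (africa_users_2018 - africa_users_2000) / africa_users_2000 * 100
africa_population = africa_users_2018 / penetration
world_users = africa_users_2018 / world_share

print(f"Growth 2000-2018: {growth_pct:,.0f}%")             # ~10,199%
print(f"Implied African population: {africa_population:,.0f}")
print(f"Implied world Internet users: {world_users:,.0f}")
```

Run as-is, the arithmetic reproduces the chapter's ~10,199% growth figure and implies an African population of roughly 1.29 billion and a world Internet population of roughly 4.2 billion, both consistent with mid-2018 estimates.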


Cyber Victimization: Vulnerabilities and Offenders' Practices in the African Context

Several factors account for the prevalence of cybercrime on the African continent. This section of the chapter examines some cybercrime offenders' practices in the continent. However, it is pertinent to first explore the factors that predispose Internet users to cyber victimization. Criminals generally, and cybercriminals in the context of this discourse, feed on opportunity. Essentially, they search for loopholes in information and communication technology systems and exploit them to the fullest. While computer and Internet security companies have been ingenious in building barriers to cyberattacks, cybercriminals seem to keep reinventing their tricks and tactics. The following factors explain Africa's weak points and how cybercriminals exploit them.
(i) High Number of Domains/Very Weak Network and Information Security: The United Nations Economic Commission for Africa (2014) identified the high number of domains and very weak network and information security as key drivers of Africa's vulnerability to cybersecurity threats. The report observes that cybercriminals have since come to consider Africa a favorable climate for their criminal activities. It also posits that cybersecurity experts are of the view that about 80% of personal computers in Africa are infected with viruses and other malicious software. This is hardly surprising, as there are several "free" Internet networks in some public places. But many of such networks are not secure and may, in fact, be traps set by cybercriminals who want to access users' personal information with a view to using it to victimize them. For example, the craving for "free" Internet access and "free" software installation has led gullible Internet users into the booby-traps of cybercriminals. A user may unwittingly end up installing various kinds of malware or spyware and may eventually fall victim to identity theft.
(ii) Inadequate Protection of Computer Systems: Computer security experts have suggested that many computer systems in Africa are not properly protected and that this exposes them to cyberattack (Quarshie and Martin-Odoom 2012). This kind of scenario is worrisome, as it serves to expand the vulnerability window. The ordinary computer user does not care much about Internet security software. Even when users procure it, they may be reluctant to update it. Some have argued that auto-updating security software will consume too much of their network data. This problem is also related to the digital divide debate discussed earlier in this chapter. Income level is perhaps a factor at play here. Users, especially low-income ones, will normally want to maximize their data usage. They may consider browsing and e-mailing as activities that are worth their data, and updating antivirus software as less important. This attitude is compounded by low awareness of the kind of risk they expose their computer systems and data to by not taking appropriate measures to protect them. Low-income Internet users may feel that they are not famous or known and cannot be targeted by cybercriminals. This kind of mind-set perhaps accounts for why they are reluctant to take the necessary steps to protect their personal computers.


(iii) Digital Skill Set Gap: "Digital literacy" is still relatively low in Africa when compared to the developed continents of the world. Ndubueze (2016b) provides a typology of digital skills in Nigeria, viz.:
(i) The digital affable – persons with strong digital skill sets. This category of people can easily operate their personal computers, cell phones, and the Internet. Digital tools have become part of their daily activities.
(ii) The digital enfeeble – persons who are weak in the use of digital technology and tools. They are not familiar with the basic functions of their cell phones and iPads. They hardly use the Internet and carry smartphones not necessarily because they need them, but as a status symbol.
(iii) The digital dumb – persons who have no digital skill set at all. They are digitally docile and by all means try to avoid using digital tools. If compelled to use digital tools, they would normally do so with the assistance of a third party who is familiar with such tools.
Arguably, the last two categories of people are more vulnerable to cyber victimization, because they would normally rely on third parties for digital transactions such as bank automated teller machine (ATM) withdrawals and other online transactions. This may expose their passwords to criminal elements. However, by virtue of being heavily attached to digital tools and being Internet-active, people in the first category are also exposed to cyber victimization.
(iv) Absence of African Languages: Kshetri (2019) notes the inability of many African Internet users to use cybersecurity products developed in the English language. In the same vein, Grobler and Vuuren (2010) observe that many African computer users may have difficulty understanding error messages or warnings about cyber fraud not presented in their mother tongue. This, they argue, may make such persons vulnerable to cyber fraud.
Cyber fraudsters will typically exploit any loophole they find in information and communication technology (ICT) tools to victimize their targets. Thus, the language issue is a major problem.
(v) Trust Factor: Trust entails the belief that a person will not be opportunistic and take undue advantage of a situation (Ridings et al. 2002). Obayan (1995) underscores the role of trust in African families by recognizing the tendency of the Nigerian family to exclude "outsiders" from trust- and confidence-related conversations. The extended family system is well established in many African societies. The average African family may comprise a couple and their relatives. Such relatives, who may be domestic helps, are usually trusted. Sometimes they are used for certain online transactions, not because the principal lacks digital skills, but because they are trusted. Consequently, such relatives are expected to keep the details of such online transactions confidential. But that may not always be the case: sometimes these trusted relatives may be compromised through social engineering or other clandestine means by cybercriminals and reveal personal information about their principal, which the criminals eventually use to victimize them.


(vi) Greed Factor: Greed is a key factor in cybercrime victimization in the African context. It is perhaps one of the weaknesses most exploited by cybercriminals. Longe (as cited in Vanguard Newspaper 2017) has linked cyber criminality and cyber victimization to human frailties such as greed, gullibility, and an unbridled quest for riches. Many people in Africa dream of a better and brighter future. Many want to make it big in life; they want to amass wealth and live in opulence. But not many of such people within the "generation Y" age bracket realize that wealth does not come so easily. Cybercriminals in Africa, specifically the yahoo-yahoo boys (Internet advance fee fraudsters), exploit this insatiable quest for wealth to perpetrate many online advance fee fraud schemes on the continent. The yahoo-yahoo boys subculture seems to endure in Africa despite the efforts of law enforcement to clamp down on it. One of the factors that feed this scheme is human greed. Greedy people are perhaps the most gullible as far as advance fee fraud is concerned, and the fraudsters know this too well. This also explains why, despite the wide publicity given to this kind of scam in Africa, many people still fall victim to it.
(vii) Faith Factor: Heuser (2016) examines the debate on the prosperity gospel and its relevance for socioeconomic change in Africa. He argues that African Pentecostal theologizing has attracted public attention because of the Gospel of Prosperity it has made popular, with its controversial claim that worldly success and material well-being are signs of divine grace. There are many faith-based organizations in Africa.
Some of these organizations are prayer groups that encourage members to believe in financial "miracles." Thus, some members, as a demonstration of their belief in such teachings, fast and pray for some kind of extraordinary financial "miracle." Unfortunately, online advance fee fraudsters may exploit this scenario by sending bogus (advance fee fraud) mails to such targets, who often mistake the communications for evidence of answered prayers and ultimately fall victim.

Policing and Regulating the Internet

It has been argued that cybercrime first appeared on the radar of law enforcement more than 30 years ago (Smith 2014). According to Black (2001, p. 142), regulation refers to "a process involving the sustained and focused attempt to alter the behaviour of others according to identified purposes with the intention of producing a broadly identified outcome." Efforts to police the Internet have been traced to the era when cybercriminals first emerged (Leppanen et al. 2016). But regulating the Internet has not been an easy task, as efforts to regulate are confronted by several impediments. Brenner (2012) argues that the migration of threats such as conflict and war into cyberspace alters them, thereby rendering the application of traditional laws very problematic. She further maintains that it also undermines the effectiveness of the control mechanisms of sovereign states. She warns that the inability of states to respond effectively to cybercrime and cyber warfare would incentivize those who may want to perpetrate both acts. Many African countries have been slow in responding to the problem of cybercrime. A November 2016 report


of the African Union Commission (AUC) and Symantec indicates that, of the 54 countries of Africa, 30 lacked specific cybercrime and electronic evidence legislations (Kshetri 2019).
Policing the Internet is not an activity confined to law enforcement and other governmental agencies alone. There are various third-party initiatives to police the Internet across jurisdictions. Herrington and Aldrich (2013) suggest that Internet service providers (ISPs), major telecommunication companies, banks, and airlines are part of the intelligence and security apparatus of the twenty-first century. They further argue that the focus of intelligence gathering has now shifted toward the agglomeration of private and protected information gleaned from the Internet. Huey et al. (2012) believe that, given the distributed nature of the Internet, the security issues surrounding it would be better addressed through partnership with various sets of public and private actors. The need for partnership in cybercrime policing has long been recognized in Africa. For example, a study that examined the use and effectiveness of third-party policing (TPP) in the prevention and control of cybercrime, including online advance fee fraud, in Nigeria found that some third-party strategies were used by cybercafe managers in the prevention of cybercrime (Ndubueze et al. 2017).
Regulating the Internet has been a long-standing issue of public debate and public policy. Burton (1995) observed that the potential for Internet censorship and control had become a "hot topic." He noted that the debate was further provoked by the G7 summit conference of February 25–26, 1995, which acknowledged the need for international collaboration in resolving the issues associated with the development of the information superhighway.
Goggin and Griff (2001) observe that public discourse and policy-making on Internet content regulation have always revolved around inappropriate content, such as pornography, racial vilification, and hate speech, and how to restrict or prohibit its propagation. Wagner (2014) notes that debates on the appropriate regulation of Internet content are now common in liberal democracies, even though not much is known about the subject. McMurdie (2016) asserts that the scale, speed, and sophistication of global cybercrime cannot be effectively combated by law enforcement alone and advocates partnerships with industry and academia for intelligence sharing and capacity building. As pointed out elsewhere in this chapter, there is much that African law enforcement agencies can gain through partnership with other stakeholders in the fight against cybercrime. This is increasingly so in countries where law enforcement agencies are understaffed. Besides, there have also been efforts to partner with law enforcement in the global north in tackling cybercrime.
Furthermore, Airo-Farulla (2001) identifies five non-mutually exclusive strategies adopted by governments around the world in a bid to regulate the Internet:
(i) The application to the Internet of generic laws regulating the publication or possession of certain kinds of content, for example, child pornography or false advertising.
(ii) The establishment of Internet-specific legislation, as in jurisdictions such as Australia, South Korea, Germany, and Tunisia. He explains that such


legislation may state the obligations of Internet content hosts and Internet service providers and/or prescribe different obligations for those publishing or accessing Internet content.
(iii) The two aforementioned legislative restrictions on the content that can be hosted within a country may lead to a resort to offshore content hosts, which are usually beyond the reach of national jurisdiction. The third strategy is aimed at addressing this kind of scenario by providing only limited, filtered access to offshore-hosted Internet content. This strategy is common in Asia and the Middle East.
(iv) The fourth strategy, designed to encourage industry self-regulation, became necessary given the limited effectiveness of public regulation in controlling Internet content. The idea was to allow Internet content hosts and Internet service providers (ISPs) to devise their own strategies for dealing with "inappropriate" content.
(v) Finally, the fifth strategy encouraged Internet users to regulate themselves and their children by embracing educative use of the Internet and subscribing to Internet service providers that offer filtered services, or installing filters on their personal computers, if they so wished.
Suggesting that values such as trust, safety, security, freedom, innovation, equality, fairness, reciprocity, and decency are important regulatory goals that underpin the Internet, they deduce three basic regulatory goals as (i) to stimulate trust in the infrastructure, platforms, and services that make up the Internet by ensuring a secure, safe, and fair online environment; (ii) to ensure that fundamental rights and liberties are protected in an online environment; and (iii) to create an open and level playing field for economic actors that stimulates growth and innovation (Schermer and Wagemans 2010, p. 289). The efforts of African governments to regulate the Internet have elicited different reactions from the citizens. While some believe that regulation is the only way to avoid abuse, others believe that regulation may stifle freedom. For example, Adibe et al. (2017) are of the view that online press freedom index has worsened since the establishment of the Cybercrime Act in 2015 in Nigeria. Internet users as critical stakeholders in the information and communication technology (ICT) value chain have historically been concerned about Internet regulation. In an exploratory study that examined users’ perception of Internet regulation in the United States, McCabe and Lee (1997) found that all Internet users in the study perceived the need for Internet regulation and that those who used the Internet 6 or more hours per week were more likely to see the need for regulation. Ben-Jacob (2017, p. 253) in his examination of Internet ethics from

17

Cybercrime and Legislation in an African Context

355

users' and providers' perspectives, defines net neutrality as "the belief that ISPs and government should treat all data on the Internet the same, not discriminating or charging differently by user, content, site, platform, application, type of attached equipment, or mode." She argues that there are those who do not support net neutrality because of their belief that the Internet contains an enormous amount of information and people should be allowed to access whatever content they desire. Others, however, are worried about child pornography, online gambling, identity theft, and other electronic schemes and as such support regulation. This kind of debate is also raging among Internet users in Africa. Obviously, the intensity and impact of this debate will shape the tenor of future government policies and legislation on Internet regulation in African countries.

The State of Cybercrime Legislations in Africa

The Council of Europe/Project Cybercrime@Octopus (2016, p. 5) reviewed the specific criminal law provisions on cybercrime and electronic evidence in the 54 countries of Africa and suggests that as of April 2016:

• 11 states tended to have basic substantive and procedural law provisions in place (Botswana, Cameroon, Côte d'Ivoire, Ghana, Mauritania, Mauritius, Nigeria, Senegal, Tanzania, Uganda, and Zambia), although implementing regulations may be missing in one country or another. It notes that Chad reportedly adopted a law on cybercrime in July 2014 but that the text could not be assessed when the report was finalized.
• A further 12 states seemed to have substantive and procedural law provisions partially in place (Algeria, Benin, Gambia, Kenya, Madagascar, Morocco, Mozambique, Rwanda, South Africa, Sudan, Tunisia, and Zimbabwe).
• The majority of African states (30) did not have specific legal provisions on cybercrime and electronic evidence in force.
• Draft laws or amendments to existing legislation reportedly had been prepared in at least 15 states (Burkina Faso, Djibouti, Ethiopia, Guinea, Kenya, Lesotho, Mali, Morocco, Namibia, Niger, South Africa, Swaziland, Togo, Tunisia, and Zimbabwe). In some instances, bills had been presented to national parliaments; in others the possible outcome of the draft bill is not known.

Furthermore, the United Nations Conference on Trade and Development (2019) states that of the 54 countries on the African continent, 28 (52%) have cybercrime legislation, 11 (20%) have draft cybercrime legislation, and 15 (28%) do not have cybercrime legislation. There have been efforts to provide a framework for combating cybercrime at the regional level in Africa. For example, the member states of the African Union, guided by the Constitutive Act of the African Union adopted in 2000, adopted the African Union Convention on Cyber Security and Personal Data Protection on

356

P. N. Ndubueze

June 27, 2014 (African Union 2014). Nonetheless, only 11 countries had signed it as of February 5, 2019 (African Union 2019). The quest for regional cooperation in the fight against cybercrime will suffer setbacks because of the lack of explicit and specific legislation on cybercrime in some African countries. For example, it will be difficult to establish the requirement of double criminality enshrined in Article 28:2 of the African Union Convention on Cyber Security and Personal Data Protection for the purpose of extradition where one of the countries concerned in a transnational cybercrime case has no specific cybercrime legislation. The digital divide is fast closing. It is projected that a billion people in Africa will be able to access the Internet by 2022 (Ovumone as cited in Kshetri 2019). This kind of development will have profound implications for cybercrime rates on the continent. This further amplifies the need for African countries to be more proactive and imaginative in the fight against cybercrime. One way to achieve this is by establishing, and periodically reviewing, cybercrime legislation.

Impediments to the Effective Establishment and Enforcement of Cybercrime Legislations in Africa

Arguably, the efforts of various African countries to establish comprehensive and effective cybercrime legislation are fraught with several impediments. Some of these impediments are identified below:

(i) The Slow Pace of Processing Legislations: The process of establishing new legislation in some African countries can be tortuous and protracted. This is especially so in countries with a bicameral legislature, such as Nigeria, the most populous African country. Bills are usually presented and passed in both the upper and lower legislative houses, and public hearings are conducted on the bills. Where there are different versions of a bill, they are harmonized, passed by the legislative houses, and sent to the President for assent. These processes, though statutory and required to ensure that all relevant stakeholders make their input to the bill, can be time-consuming. This perhaps explains the delay in passing the first comprehensive cybercrime legislation in Nigeria, the Cybercrime (Prohibition, Prevention, etc.) Act, 2015. Several drafts of cybercrime-related bills had been before the Nigerian National Assembly prior to the passage of the Cybercrime Act in 2015.

(ii) Gaps in Law Enforcement Knowledge/Skills: Arguably, law enforcement has not been able to keep pace with the astounding growth of cybercrime. Cybercriminals are becoming highly sophisticated and are usually ahead of law enforcement. They are inventing novel ways of bypassing security barriers in networks and systems, but such ingenuity is not replicated by law enforcement, whose efforts to combat emerging variants of cybercrime have, for the most part, been reactive. Marcum (2014)


underscores this challenge when she asserts that the most challenging aspect of fighting cybercriminals is that they are often one step ahead of law enforcement in terms of knowledge and skills. In its Policy Brief, the United Nations Economic Commission for Africa (2014) decries the limited awareness of information and communication technology (ICT)-related security issues among stakeholders in Africa such as ICT regulators, law enforcement agencies, the judiciary, information technology professionals, and users. As mentioned earlier in this chapter, African countries are in a race to bridge the digital divide so as to catch up with the developed countries in ICT. Arguably, ICT is a relatively new terrain for many stakeholders in Africa, which implies that their familiarity with ICT-related security issues will be low. A sound knowledge of the issues around information and communication technology is a prerequisite for the effective, efficient, and imaginative establishment and enforcement of cybercrime legislation in Africa. For example, criminal justice professionals require above-average knowledge of ICT-related security to satisfactorily enforce cybercrime laws.

(iii) Low Tempo of Enforcement Activities: Enforcement of cybercrime legislation in many African countries seems relatively low. Several factors may account for this. First, the reporting rate of cybercrime is low compared to that of offline crimes. This may be because of the low level of awareness of cybercrime on the continent; many Internet users may fall victim to cybercrime without knowing it immediately. For example, a person may not know that his/her identity has been stolen online until the perpetrator uses it to commit a crime. Second, due to its technical nature, few law enforcement personnel understand some of the technical provisions in the extant legislation.
Third, the process of search and seizure of digital evidence requires forensic expertise, a skill that is not common among law enforcement personnel. This argument is accentuated by Kshetri (2019), who observes that Africa is riddled with a severe shortage of cybersecurity manpower and that there will be a shortage of 100,000 cybersecurity personnel in Africa by 2020. Furthermore, he identifies weak legislation and law enforcement as a concern in Africa, highlighting the permissiveness of regulatory regimes, which allows cybercrime to thrive.

(iv) Security Versus Privacy Debate: Sarikakis and Winter (2017, p. 2) identify four categories of legal doctrine that protect individuals from privacy violations: (i) freedom of personal autonomy, (ii) the right to control personal information, (iii) the right to control property, and (iv) the right to control and protect space. The dichotomy between security and privacy, especially with respect to the use of electronic surveillance, remains one of the most discussed challenges of policing cybercrime (Nhan and Bachmann 2015). The foregoing holds true for Africa as well. For example, in 2015, a draft bill titled the "Frivolous Petition Bill," which proposed 2 years' imprisonment or a fine of $10,000 (N3.6m) or both for anyone who posts an "abusive statement" via text message, Twitter, WhatsApp, or any other form of social media,


which had passed its second reading in the 8th Nigerian Senate, was withdrawn following public outcry against it (Punch Newspaper 2018). Similarly, in 2018, the reported directive of the incumbent Nigerian Defense Minister to security agencies to monitor the social media accounts of "notable" Nigerians so as to curb the propagation of hate speech was greeted with condemnation by many Nigerians (Punch Newspaper 2018).

(v) Jurisdictional Issues: Losavio et al. (2011) posit that jurisdiction refers to a province or nation's assertion of its right to regulate and punish conduct. They also observe that such a right is limited by physical boundaries. Rowe et al. (2004) note the complexity of contract law rules on international contracts with connections to more than one jurisdiction. It is well acknowledged that jurisdiction is difficult to establish in cyberspace, and this is one of the major challenges of enforcing cybercrime legislation. This is particularly so because African countries, like their counterparts on other continents, would normally encounter some difficulty in determining jurisdiction in cybercrime cases that cut across many countries of the world.

(vi) Extradition Issues: Legislation, including that pertaining to cybercrime, is not uniform across African countries. This may constitute a serious obstacle to the prosecution of transnational crimes such as cybercrime. Cotterrell (2015) underscores this dilemma when he recognizes the controversy surrounding extradition and extraterritorial law enforcement with regard to whether crime is understood the same way by all concerned states. Maillart (2018) also recognizes that the definition and scope of cybercrime vary significantly from state to state and notes that this may form the basis for refusal to grant a mutual legal assistance (MLA) request.
There is no doubt that the process of extradition may be tortuous and complicated if the offense for which extradition is sought does not meet the requirement of dual criminality (meaning the offense must be a crime in both concerned countries). This explains why there have been some efforts to address this challenge under the auspices of the African Union. For example, Article 28:2 of the African Union Convention on Cyber Security and Personal Data Protection (2014) provides that:

State parties that do not have agreement on mutual assistance in cybercrime shall undertake to encourage the signing of agreement on mutual legal assistance in conformity with the principle of double criminality, liability, while promoting the exchange of information as well as the efficient sharing of data between the organization of State Parties on a bilateral and multilateral basis.

(vii) Dearth of Cybercrime Research Centers: There is a dearth of cybercrime research centers in Africa compared to the Western world (see Ndubueze 2016a). Academics and practitioners may be required to make their input during public hearings on cybercrime bills. Lawmakers may require evidence-based, domesticated, and well-documented research on the cybercrime problem in order to formulate appropriate legislation that fits the


peculiarity of each nation-state. But where such research is not sufficiently and readily available, they may be somewhat handicapped. Thus there is a need for African governments to establish and fund cybercrime and cybersecurity research centers.

(viii) Low Regional Response: Overall, there seems to be a low response across the spectrum to the problem of cybercrime in Africa. Compared to developed regions of the world such as America, Australia, and Europe, there are relatively few conversations going on around the problem of cybercrime: few regional conferences, seminars, and debates on cybercrime and criminality. There is no doubt that such activities would ultimately create more awareness of the scope of the problem and underscore the need for more regional cooperation in the effort to combat it. For example, the African Union Convention on Cyber Security and Personal Data Protection was adopted by the 23rd Ordinary Session of the Assembly held in Malabo, Equatorial Guinea, on June 27, 2014; however, only 11 countries had signed it as of February 5, 2019 (African Union 2019). This low response will undoubtedly affect the level of cooperation among member states in the establishment and enforcement of cybercrime legislation in Africa.

(ix) Weak Voices in Global Internet Governance: The African Union Commission (2016) has decried the weak voice of Africa in global Internet governance. Wentworth (2017) notes that governing the Internet is not an easy task and that it encompasses diverse issues such as physical infrastructure, management of unique identifiers like domain names and Internet Protocol addresses, and technical standards of the Internet. Carr (2015) posits that the multi-stakeholder model of global Internet governance has become a dominant approach for addressing the complex set of interests, agendas, and implications of the growing dependence on the Internet.

Cybercrime Legislations: Toward a Global Harmonization

The advocacy for a global harmonization of cybercrime legislation by researchers, professionals, national governments, and regional and international organizations is ongoing and well documented in the literature. Bande (2018) calls for the harmonization of cybercrime legislation across the globe. He notes that because the perpetrators, victims, tools, and scenes of cybercrime are usually transnational, the detection of offenses, identification of perpetrators, gathering of evidence, and prosecution of suspects require the cooperation of authorities from many jurisdictions. He therefore believes that harmonizing cybercrime legislation is the first step toward effective international collaboration to combat cybercrime. Kigerl (2012) lends credence to this argument when he asserts that the best strategy for combating cybercrime is through international treaties and cooperation, maintaining that this has become necessary because of the global nature of the crime.


Policy Implications

The spate of crime and criminality on the Internet has profound implications for policy formulation on the African continent. Obviously, cybercrime is a transnational problem, and national governments are perhaps coming to terms with the fact that they cannot win the war against cybercriminals if they fight alone. There is therefore the need for more partnerships at all levels. At the regional level, the foregoing should be taken into due consideration during the formulation of national policies on cybercrime. Policies should enable, not impede, potential cross-border searches and seizures, extradition of cybercriminals, sharing of useful investigative leads, and so on. That way, the concerted fight against cybercriminals will be purposeful, well focused, and more productive. The following steps may assist in improving cybercrime legislation and policing in Africa:

(i) Periodic Review of Cybercrime Legislations: Cybercrime, being a high-tech crime, is evolving in nature. It will therefore be practically impossible to establish a set of legislation that effectively captures its future patterns. Thus, law- and policy-makers in Africa must continue to update their countries' cybercrime legislation to be able to satisfactorily tackle the growing sophistication of cybercriminality.

(ii) Need for Cybercrime Research Centers: For law- and policy-makers to be able to update cybercrime legislation, they need systematic and reliable data on emerging patterns of cybercrime. There is therefore a need for African governments to establish dedicated cybercrime research centers and to specially commission and fund cybercrime research. This way, they will be able to generate credible statistics and data to guide the review of cybercrime legislation in their respective countries.
(iii) Need for Specialized Training for Law Enforcement: African governments should invest more in the training and retraining of law enforcement personnel in cybercrime investigation and digital forensics. To effectively and efficiently police cybercrime, law enforcement personnel need above-average knowledge of cybercrime issues. Such knowledge, when rightly applied, will go a long way in strengthening the enforcement of the extant cybercrime legislation in African countries.

(iv) Need for Enforcement of Existing Legislations: Law enforcement needs to be more aggressive in enforcing the existing legislation on cybercrime. This will not only send warning signals to cybercriminals but will also deter potential cybercriminals who may fear experiencing the full wrath of the law. For legislation to fully achieve its objectives, it has to be enforced.

(v) Public Enlightenment Campaigns: Regulatory agencies need to increase the tempo of their public enlightenment campaigns on cybercrime and its legislation. These campaigns should also be propagated in the major local languages for wider coverage. This may assist in improving the


reporting of cybercrime and address the seeming apathy on the part of citizens toward reporting crime to the relevant law enforcement agencies.

(vi) Need for More Regional Cooperation: Cybercrime is a transnational crime and therefore requires international partnerships to tackle it. African countries need to collaborate with one another in the fight against cybercrime. Such collaboration will reduce the technicalities associated with extradition and make it a more or less seamless process. African countries that are yet to sign the African Union Convention on Cyber Security and Personal Data Protection should consider doing so.

Conclusion

The proliferation of broadband across the African continent and the corresponding geometric increase in the number of Internet users have profound implications for cybersecurity on the continent. The increase in the number of users will arguably result in a corresponding increase in the number of potential victims of cyberattacks. One way to control this kind of scenario is the formulation and enforcement of relevant and well-focused legislation. The well-intended efforts of national governments in Africa to establish and enforce laws to control the criminal use of the Internet are confronted by several challenges. Until such challenges, which have been discussed in this chapter, are tackled, the cybercrime pandemic may continue to ravage African countries. There is therefore an urgent need for well-focused and well-coordinated collaboration at the regional level to deal with the cybercrime problem. Given that cybercrime is by its nature evolving, such efforts must be considered by all stakeholders as a work in progress.

Cross-References

▶ Cybercrime in India: Laws, Regulations, and Enforcement Mechanisms
▶ Cybercrime Legislation in the United States
▶ Legislative Frameworks Against Cybercrime: The Budapest Convention and Asia
▶ Legislative Frameworks: The United Kingdom
▶ The Legislative Framework of the European Union (EU) Convention on Cybercrime

References

Adeya, N. (1996). Beyond borders: The Internet for Africa. Convergence, 2(2), 23–27.
African Union. (2014). African Union convention on cyber security and personal data protection. Retrieved 2 June 2019 from https://au.int/en/treaties/african-union-convention-cyber-security-and-personal-data-protection.


African Union. (2019). List of countries which have signed, ratified and acceded to African convention on cyber security and personal data protection. Retrieved from https://au.int/en/treaties/african-union-convention-cyber-security-and-personal-data-protection
Aikins, S. K. (2019). Determinants of digital divide in Africa and policy implications. International Journal of Public Administration in the Digital Age, 6(1), 64–79. https://doi.org/10.4018/IJPADA.2019010104.
Airo-Farulla, G. (2001). Regulating the Internet: Strategies for control. Media International Australia Incorporating Culture and Policy, 101, 5–7.
Antonescu, M., & Birau, R. (2015). Financial and non-financial implications of cybercrimes in emerging countries. Procedia Economics and Finance, 32, 618–621.
Astier, S. (2005). Ethical regulation of the Internet: The challenges of global governance. International Review of Administrative Sciences, 71(1), 133–150.
Bande, L. C. (2018). Legislating against cyber crime in Southern African Development Community: Balancing international standards with country-specific specifications. International Journal of Cyber Criminology, 12(1), 9–26. https://doi.org/10.5281/zenodo.1467632.
Ben-Jacob, M. G. (2017). Internet ethics for users and providers. Journal of Educational Technology Systems, 46(2), 252–258. https://doi.org/10.1177/0047239517697967.
Black, J. (2001). Decentering regulation: Understanding the role of regulation and self-regulation in a "post-regulatory" world. Current Legal Problems, 54(1), 103–146.
Brenner, S. W. (2012). Cybercrime and the law: Challenges, issues, and outcomes. Boston: Northeastern University Press.
Burton, P. F. (1995). Regulation and control of the Internet: Is it feasible? Is it necessary? Journal of Information Science, 21(6), 413–428.
Carr, M. (2015). Power plays in global Internet governance. Millennium: Journal of International Studies, 43(2), 640–659. https://doi.org/10.1177/0305829814562655.
Cassim, F. (2011).
Addressing the growing spectre of cybercrime in Africa: Evaluating measures adopted by South Africa and other regional role players. XLIV CILSA. Retrieved 20 Jan 2019 from https://core.ac.uk/download/pdf/79170924.pdf
Chisala-Tempelhoff, S., & Kirya, M. T. (2016). Gender, law and revenge porn in Sub-Saharan Africa: A review of Malawi and Uganda. Palgrave Communications. Retrieved 4 June 2019 from https://www.nature.com/articles/palcomms201669
Cotterrell, R. (2015). The concept of crime and transnational networks of community. In V. Mitsilegas, P. Alldridge, & L. Cheliots (Eds.), Globalisation, criminal law and criminal justice: Theoretical, comparative and transnational perspectives (pp. 7–23). Oxford: Hart Publishing.
Goggin, G., & Griff, C. (2001). Regulating for content on the Internet: Meeting cultural and social objectives for broadband. Media International Australia Incorporating Culture and Policy, 101, 19–31.
Graham, M. (2011). Time machines and virtual portals: The spatialities of the digital divide. Progress in Development Studies, 11(3), 211–227.
Grobler, M., & Vuuren, J. J. V. (2010). Broadband broadens scope for cyber crime in Africa. In Information Security for South Africa (ISSA) Conference. https://doi.org/10.1109/ISSA.2010.5588287.
Hakmeh, J. (2017). Cybercrime and the digital economy in the GCC countries. Retrieved 20 Jan 2019 from https://www.chathamhouse.org/publication/cybercrime-and-digital-economy-gcc-countries
Herrington, L., & Aldrich, R. (2013). The future of cyber-resilience in an age of global complexity. Politics, 33(4), 299–310. https://doi.org/10.1111/1467-9256.12035.
Heuser, A. (2016). Charting African prosperity gospel economies. HTS Theological Studies. Retrieved 3 June 2019 from www.scielo.org.za/pdf/hts/v72n4/103.pdf
Huey, L., Nhan, J., & Broll, R. (2012). 'Uppity civilians' and 'cyber-vigilantes': The role of the general public in policing cyber-crime. Criminology & Criminal Justice, 13(3), 81–97.
Infocyte Report. (2017).
The threat of malware in Africa. Retrieved 4 June 2019 from https://infocyte.com/wp-content/uploads/security_brief-malware_in_africa.pdf


International Telecommunication Union. (2012). Understanding cybercrime: Phenomenon, challenges and legal response. Retrieved 14 Jan 2019 from www.itu.int/ITU-D/cyb/cybersecurity/docs/Cybercrime%20legislation%20EV6.pdf
Kigerl, A. (2012). Routine activity theory and the determinants of high cybercrime countries. Social Science Computer Review, 30(4), 470–486. https://doi.org/10.1177/0894439311422689.
Kshetri, N. (2019). Cybercrime and cybersecurity in Africa. Journal of Global Information Technology Management, 22(2), 77–81. https://doi.org/10.1080/1097198X.2019.1603527.
Leppanen, A., Kiravuo, T., & Kajantie, S. (2016). Policing the cyber-physical space. Police Journal: Theory, Practice and Principles, 89(4), 290–310. https://doi.org/10.1177/0032258X16647420.
Leukfeldt, E. R., & Yar, M. (2016). Applying routine activity theory to cybercrime: A theoretical and empirical analysis. Deviant Behaviour, 37(3), 263–289. https://doi.org/10.1080/01639625.2015.1012409.
Lindgren, S. (2018). A ghost in the machine: Tracing the role of 'the digital' in discursive processes of cybervictimization. Discourse & Communication, 12(5), 517–534. https://doi.org/10.1177/1750481318766936.
Losavio, M. M., Shutt, J. E., & Keeling, D. W. (2011). The information polity: Social and legal frameworks for critical cyber infrastructure protection. In T. Saadwal & L. Jordan (Eds.), Cyber infrastructure protection (pp. 129–158). Carlisle: The Strategic Studies Institute.
Maillart, J.-B. (2018). The limits of subjective territorial jurisdiction in the context of cybercrime. ERA Forum. https://doi.org/10.1007/s12027-018-0527-2.
Marcum, C. D. (2014). Cyber crime. New York: Wolters Kluwer Law & Business.
Marshall, A. M., & Clarkson, A. C. (2008). Future crimes and detection methods in cyberspace. Measurement + Control, 41(8), 248–251.
McCabe, K. A., & Lee, M. D. (1997). Users' perception of Internet regulation: An exploratory study. Social Science Computer Review, 15(3), 237–241.
McMurdie, C. (2016).
The cybercrime landscape and our policing response. Journal of Cyber Policy, 1(1), 85–93. https://doi.org/10.1080/23738871.2016.1168607.
Mutula, S. M. (2003). Cyber café industry in Africa. Journal of Information Science, 29(6), 489–497.
Na, S. H., Hwang, J., & Kim, H. (2018). Digital content as a fast Internet diffusion factor: Focusing on the fixed broadband Internet. Information Development, 1–15. https://doi.org/10.1177/026666918811878.
Ndubueze, P. N. (2016a). Cyber criminology and the quest for social order in Nigerian cyberspace. The Nigerian Journal of Sociology and Anthropology, 14(1), 32–48.
Ndubueze, P. N. (2016b). Generation Y and online victimization in Nigeria: How vulnerable are younger Internet users? In K. Jaishankar (Ed.), Interpersonal criminology: Revisiting interpersonal crimes and victimization (pp. 203–214). Boca Raton: CRC Press, Taylor & Francis Group.
Ndubueze, P. N., Hussien, M. D., & Sarki, Z. M. (2017). Cyberstalking awareness among undergraduates in Federal University Dutse. Dutse Journal of Humanities and Social Sciences, 2(2), 244–272.
Nhan, J., & Bachmann, M. (2015). Developments in cyber criminology. In M. Maguire & D. Okada (Eds.), Critical issues in crime and justice: Thought, policy and practice (2nd ed., pp. 209–228). Los Angeles: Sage.
Nwokeoma, B. N., Ndubueze, P. N., & Igbo, E. U. M. (2017). Precursors of online advance fee fraud in South-East Nigeria. In P. N. Ndubueze (Ed.), Cyber criminology and technology-assisted crime control: A reader (pp. 195–218). Zaria: Ahmadu Bello University Press.
Obayan, A. O. I. (1995). Changing perspectives in the extended family system in Nigeria: Implications for family dynamics and counseling. Psychology Quarterly, 8(3), 253–257.
Punch Newspaper. (2018, February 3). More attacks on FG over social media monitoring. Retrieved 2 June 2019 from https://punchng.com/more-attacks-on-fg-over-social-media-monitoring/
Quarshie, H. O., & Martin-Odoom, A. (2012). Fighting cybercrime in Africa.
Computer Science and Engineering, 2(6), 98–100. https://doi.org/10.5923/j.computer.20120206.03.


Ridings, C. M., Gefen, D., & Arinze, B. (2002). Some antecedents and effects of trust in virtual communities. Journal of Strategic Information Systems, 11, 271–298.
Rogers, E. M. (2001). The digital divide. Convergence: The International Journal of Research into New Media Technologies, 7(4), 96–111.
Rowe, H., Raylor, W., & Brown, P. (2004). Staying legal: Self-regulation on the Internet. Business Information Review, 21(2), 117–124. https://doi.org/10.1177/0266382104044728.
Sarikakis, K., & Winter, L. (2017). Social media users' legal consciousness about privacy. Social Media + Society, 1–14. https://doi.org/10.1177/2056305117695325.
Schermer, B., & Wagemans, T. (2010). Freedom in the days of the Internet: Regulating, legislating and liberating the Internet while producing rights and unlocking potentials. European View, 9, 287–293. https://doi.org/10.1007/s12290-010-0149-8.
Smith, R. G. (2014). Transnational cybercrime and fraud. In P. Reichel & J. Albanese (Eds.), Handbook of transnational crime and justice (2nd ed., pp. 119–142). Los Angeles: Sage.
Sonaike, S. A. (2004). The Internet and the dilemma of Africa's development. International Communication Gazette, 66(1), 41–61.
Tade, O. (2013). A spiritual dimension to cybercrime in Nigeria: The 'Yahoo-Plus' phenomenon. Human Affairs, 23, 689–705. https://doi.org/10.2478/s13374-013-0158-9.
Tade, O., & Adeniyi, O. (2016). On the limit of trust: Characterising automated teller machine fraud in Southwest Nigeria. Journal of Financial Crime, 23(4), 1112–1125.
Thatcher, A., & Mathews, M. (2012). Comparing software piracy in South Africa and Zambia using social cognitive theory. African Journal of Business Ethics, 6(1), 1–12.
The Council of Europe/Project Cybercrime@Octopus. (2016). The state of cybercrime legislation in Africa: An overview. Retrieved 4 Jan 2019 from https://rm.coe.int/16806b8a7
The Internet World Stats. (2018). Internet usage statistics.
Retrieved 20 Jan 2018 from https://www.internetworldstats.com/stats.htm
The Nigeria Electronic Fraud Forum, Annual Report. (2016). A challenging payments ecosystem: The security challenge. Abuja: Central Bank of Nigeria.
The United Nations Economic Commission for Africa. (2014). Policy brief: Tackling the challenge of cyber security in Africa. Retrieved 20 Jan 2019 from https://www.uneca.org/sites/default/files/PublicationFiles/ntis_policy_brief_1.pdf
United Nations Conference on Trade and Development. (2019). Cybercrime legislation worldwide. Retrieved 4 Jan 2019 from https://unctad.org/en/Pages/DTL/STI_and_ICTs/ICT4D-Legislation/eCom-Cybercrime-Laws.aspx
Vanguard Newspaper. (2017, June 15). Greed cause of cybercrime. Retrieved 3 June 2019 from https://www.vanguardngr.com/2017/06/greed-cause-cyber-crime-don/
Wagner, B. (2014). The politics of Internet filtering: The United Kingdom and Germany in a comparative perspective. Politics, 34(1), 58–71.
Wall, D. S., & Williams, M. L. (2013). Policing cybercrime: Networked and social media technologies and the challenges of policing. Policing and Society, 23(4), 403–412.
Wentworth, S. (2017). Internet multi-stakeholder governance. Journal of Cyber Policy, 2(3), 318–322.

Cybercrime Initiatives South of the Border: A Complicated Endeavor

18

José de Arimatéia da Cruz and Norah Godbee

Contents
Introduction ... 366
Latin America and Cybercrime (In)security ... 367
Argentina ... 367
Brazil ... 369
Cuba ... 372
Mexico ... 373
Venezuela ... 374
Peru ... 376
Paraguay ... 377
Cybercrime Strategic Implications for Latin-American Nations ... 378
Recommendations for a Safer Cyberspace Environment and Crime Mitigation in Latin America ... 378
A Vision for Cybercrime Security Rather than (In)security in Latin America ... 380
Conclusion ... 381
Cross-References ... 381
References ... 382

An early version of this paper was published in the Marine Corps University Journal Vol. 6, No. 2, Fall 2015. The views expressed in this essay are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, the Department of Defense, or the US government.

J. de Arimatéia da Cruz (*)
College of Behavior and Social Science, Georgia Southern University, Savannah, GA, USA
U.S. Army War College, Strategic Studies Institute, Carlisle, PA, USA
e-mail: [email protected]

N. Godbee
Savannah, GA, USA

© This is a U.S. Government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_76



Abstract

Cybercrime is no longer just a societal problem. In the globalized world of the twenty-first century, and given the democratization of technology, individuals as well as nation-states can carry out their nefarious activities without fear of prosecution or detention. Cybercrime is becoming a major issue in Latin America. Several Latin American countries are examined here to illustrate their cyber issues and how they have created laws to mitigate cyber concerns. Cybercrime can never be eliminated, but nation-states can mitigate it and take actions to protect their critical infrastructure.

Keywords

Cybersecurity · Latin America · Cybercrime

Introduction

The importance of developing domestic and international cybersecurity initiatives to counter threats to the world’s communication technology infrastructure becomes more evident as Internet accessibility increases (White House 2018). The United States and Europe currently recognize the threat of cyberterrorism; Latin America, however, focuses more on cybercrime due to the “highest rates of real and perceived insecurity” (Willis et al. 2013) and the rapidly growing Internet population (OAS 2014). A comparison of these initiatives and responses to cybercrime in Argentina, Brazil, Cuba, Mexico, and Venezuela illustrates the usefulness of cooperative efforts that might be adapted into policy to improve responses to similar issues.

Increasingly, international, statewide, and independent actors work together to encourage digital capabilities and continue funding “to defend sovereignty and to project power” (Geers 2014). The Inter-American Committee Against Terrorism, composed of members from the Organization of American States (OAS), works to advance counter-cyber strategies and techniques. The Symantec Corporation, SecDev Foundation, and Igarapé Institute are examples of independent organizations that, like the Inter-American Committee and OAS, conduct studies of state efforts to collect information and provide advice to boost current cybercrime countermeasures.

One such countermeasure fights cybercrime and bolsters cyber capability through public awareness. The “Stop.Think.Connect” campaign informs the public about safe Internet practices and individual digital information security practices. Led by the Anti-Phishing Working Group and the National Cyber Security Alliance, Stop.Think.Connect is gaining traction in Brazil and other countries in the Americas. As the world becomes increasingly “swamped in malware” and populated by potential cybercriminals, individuals, criminals, and law enforcement professionals are developing substantially stronger Internet skills.
While law enforcement professionals and government officials are restricted to lawful activities, criminals and terrorists limitlessly traverse the Internet to pursue their illicit agendas. Deviants can more effectively target and strike at citizens with poor device security – unprotected mobile phones are a noted major issue – and more easily recruit members or carry out criminal acts via the Internet. State officials and international actors are progressively recognizing the danger of, and preparing to defend themselves against, cyber vulnerabilities and possible attacks. This progression can be seen in the responses and policy changes in the wake of Edward J. Snowden’s revelations and the attack on Sony, possibly sanctioned and, at the least, publicly supported by North Korea.

Latin America and Cybercrime (In)security

Argentina

The Republic of Argentina is a democratic society located in the Southern Cone of Latin America. Argentina, like Brazil and Chile, is a former bureaucratic-authoritarian regime in which the government was formerly controlled by the military and by civilian leaders appointed to key positions by the military in power. Argentina suffered recurring economic crises during most of the twentieth century but is now beginning to transform into a more liberal economic power, taking advantage of its rich natural resources, highly literate population, export-oriented economy, and diversified industry. Argentina’s population is roughly 44.7 million people, composed largely of Europeans (mostly of Spanish and Italian descent) and mestizos (mixed European and Amerindian ancestry) at 97.2%, Amerindians at 2.4%, and Africans at 0.4% (2010 est.) (CIA World Fact Book 2019). Argentina’s urban population is about 91.9% of the total population (2018), and 99.1% of the population is literate. This “Paris of the Americas,” as Argentina is commonly called, is highly urbanized and highly connected to the Internet. According to the Internet World Stats, Argentina as of December 2018 had 41,586,960 Internet users, a penetration rate of 92.2% (Internet World Stats 2019).

The Argentine government substantially improved cybersecurity during the past few years in response to growing domestic and international concerns. Argentina was one of the first Latin-American countries to implement a national cyber-response team: the National Office of Information Technology created the Argentine Computer Emergency Response Team in 1994, which later developed into the National Program for Critical Information Infrastructure and Cyber Security. Unfortunately, it is very difficult to effectively combat cybercrimes without standardized policy and a legal framework.
And, since the public typically endures the most malicious technology-based attacks, countering cybercrime begins with public knowledge of how to spot and report cyber incidents. The Argentine public awareness campaign, “Internet Sano,” provides this information, best practices, and possible risks inherent to using modern technology (OAS 2014). See Table 1 for other states that have established computer security incident response teams (CSIRTs) throughout Latin America.
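The penetration rates quoted throughout this chapter are simply the number of Internet users divided by the total population. A minimal sketch of that calculation, using the Argentine figures quoted above (the small gap from the quoted 92.2% reflects a slightly different population estimate than the CIA’s 44.7 million):

```python
def penetration_rate(users: int, population: int) -> float:
    """Internet penetration as a percentage of the total population."""
    return 100 * users / population

# Argentina, December 2018: 41,586,960 users (Internet World Stats),
# ~44.7 million people (CIA World Fact Book)
rate = penetration_rate(41_586_960, 44_700_000)
print(f"Argentina: {rate:.1f}%")  # ~93%, close to the quoted 92.2%
```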


Table 1 CSIRTs established throughout Latin America^a

Country: Name and/or reference of the CSIRT
Argentina: ICIC – National Program for Critical Information Infrastructure and Cyber Security (Programa Nacional de Infraestructuras Críticas de Información y Ciberseguridad)
Brazil: CERT.br – Computer Emergency Response Team-Brazil (Centro de Estudos, Resposta e Tratamento de Incidentes de Segurança no Brasil); CTIR Gov – Center of Security Incident Handling in Computer Networks of the Federal Public Administration (Centro de Tratamento de Incidentes de Segurança de Redes de Computadores da Administração Pública Federal)
Chile: CLCERT – Chilean Computer Emergency Response Team
Colombia: colCERT – Cybernetic Emergency Response Group of Colombia (Grupo de Respuesta a Emergencias Cibernéticas de Colombia)
Guatemala: CSIRT.Gt – Computer Security Incident Response Team-Guatemala (Centro de Respuestas a Incidentes de Seguridad Informática de Guatemala)
Mexico: UNAM-CERT – Computer Security Response Team of the National Autonomous University of Mexico (Equipo de Respuesta a Incidentes de Seguridad en Cómputo de la Universidad Nacional Autónoma de México)^b
Paraguay: CSIRTPy – Security Incident Response Team of Paraguay (Equipo de Respuesta a Incidentes de Seguridad de Paraguay)
Peru: peCERT – Emergency Coordinator of Telecommunication Networks of Peru (Coordinadora de Emergencias de Redes Teleinformáticas de Perú)
Uruguay: CERTUy – National Center of Information Security Incident Response (Centro Nacional de Respuesta a Incidentes en Seguridad Informática)
Venezuela: VenCERT – National System of Telecommunication Incident Management of the Bolivarian Republic of Venezuela (Sistema Nacional de Gestión de Incidentes Telemáticos de la República Bolivariana de Venezuela)

^a English designations differing from literal translation reflect currently accepted agency titles
^b The Mexican CSIRT is located at a university and addresses incidents nationally
(Source: Diniz and Muggah 2012, p. 15)

According to Carnegie Mellon University (2019), a computer security incident response team (CSIRT) with national responsibility (or “National CSIRT”) is a CSIRT designated by a country or economy to have specific responsibilities in cyber protection for that country or economy. A National CSIRT can be inside or outside of government, but it must be specifically recognized by the government as having responsibility in the country or economy (Carnegie Mellon University 2019). For example, the Brazilian Government Response Team for Computer Security Incidents (CTIR Gov) is part of the Information Security Department (DSI) of the Institutional Security Cabinet of the Presidency of the Republic (GSI/PR). CTIR Gov is a CSIRT with national responsibility for coordinating and implementing actions for the management of computer incidents (monitoring, treatment, and response to computer incidents) in government bodies and entities, and it has the following competencies: advise the DSI in the planning and supervision of actions to manage incidents in the national information security activity; advise the DSI in the elaboration of normative and methodological requirements related to actions destined for the management of incidents in the national information security activity;


operate as a CSIRT of national responsibility for cyber protection; coordinate and carry out actions for the management of computer incidents, regarding prevention, monitoring, treatment, and response to computational incidents of national responsibility; and coordinate the CSIRT network formed by government agencies and entities. Most CSIRTs have no actual power to enforce cybercrime law relative to coordinated incident response; they are simply advisory organizations. As the Department of Homeland Security Science and Technology Directorate states, “a Cyber Security Incident Response Team (CSIRT) is a group of experts that assesses, documents and responds to a cyber incident so that a network can not only recover quickly, but also avoid future incidents” (Department of Homeland Security 2019).

The Edward Snowden leaks prompted many Latin-American nations to build multilateral partnerships, such as the cooperative agreement between Argentina and Brazil to improve cyber defense capabilities, announced in September 2013 (Cruz 2015). Snowden is the former US National Security Agency (NSA) contractor, now in exile in Russia after fleeing the United States for Hong Kong in May 2013. Snowden, reputedly a master of computers prior to joining the NSA, enlisted in the US Army and trained as a Special Forces candidate. However, after breaking both of his legs in a training accident, he was discharged. Snowden also worked for the Central Intelligence Agency (CIA) on information technology (IT) security matters. In June 2013, Snowden released to the British Guardian newspaper a trove of top-secret information concerning an NSA surveillance program known as PRISM (for those interested in further reading about the PRISM program, see Greenwald 2014). Under the PRISM program, the NSA was collecting the telephone records of tens of millions of Americans as well as of world leaders, especially from Mexico, Brazil, Colombia, Argentina, and Chile.
Snowden has also stated that the NSA had more than 61,000 hacking operations worldwide. Snowden has been charged in the United States with theft of government property, unauthorized communication of national defense information, and willful communication of classified communications intelligence. If Snowden ever returns to the United States, he could face a maximum 10-year prison sentence for each charge.

Argentina takes a more proactive approach to digital data protection, which has been “recognized by the European Commission as the only Latin-American country with an adequate level of protection” (Cruz 2015). Argentina, however, does not model its cybercrime and cybersecurity policies after the Convention on Cybercrime. Internet privacy rights are treated similarly to those of other forms of media. Argentina’s “Personal Data Protection Law No. 25.326, passed in 2000 . . . exists to guarantee individuals’ rights of honor and privacy, and to give them access to their personal data” (Hostetler 2015). The Argentine government has yet to explicitly declare a stance regarding privacy concerns despite Snowden’s revelations and growing cybersecurity concerns.

Brazil

In his book Around the Cragged Hill: A Personal and Political Philosophy (Kennan 1993), the late George F. Kennan explains that a “monster country” is a country


endowed with an enormous territory and population. The characterization of Brazil as a “monster country” places Brazil in the same category as nations such as China, Britain, the United States, and Japan. A monster country is endowed with the following characteristics: continental territorial dimensions, a population of more than 212 million people, a tradition of economic development, and a diverse foreign trade policy. Brazil, the sleeping giant of South America, occupies half of the continent and is the fifth most populous country in the world, with an estimated population of about 205 million people. Eighty-four percent of the Brazilian population is heavily concentrated in urban centers, especially São Paulo and Rio de Janeiro. According to the Internet World Stats, Brazil had 149,057,635 Internet users in December 2018, a 70.2% penetration rate. Most Brazilians are also well connected to Facebook and WhatsApp: Brazil had 139,000,000 Facebook users in December 2017, a 65.4% penetration rate. Furthermore, it had 257,814,274 mobile cellular subscriptions in December 2015, a 122.0% penetration rate, per the International Telecommunication Union.

Approximately 22 million Brazilians were victims of cybercrimes in 2012, and that number continues to grow (Glickhouse 2013). This large number of cyber-victimizations occurs despite “advanced capabilities in cybersecurity and deterring cybercrime, with numerous state institutions and agencies playing active roles” (OAS 2014).
The Brazilian Information and Communication Technology Management Committee established its Cyber Security Incident Response Team in 1997, which was renamed CERT.br in 2005. CERT.br establishes and maintains supportive partnerships, conducts cybercrime trend analyses and training, builds awareness, and monitors networks to respond to cyber incidents. Private-public cooperation in Brazil embodies an ideal cyber defense situation because the public sector voluntarily shares cyber incident information, thereby enabling CERT.br to plan prevention strategies and responses appropriate to the most prominent types of cyberattacks and high-value targets in the absence of legal direction. While Internet service providers (ISPs) are mostly in the private sector’s hands, a synergy exists between the Brazilian government and the private sector to protect individual personal data (IPDs). This synergy between the government and the private sector is nonexistent in the United States, where ISPs refuse to cooperate with law enforcement agencies unless forced by law.

Atypically among Latin-American states, Brazil invests in military-based cyber defense capabilities to curb cybercrime. Created and operational since 2010, the Cyber Defense Center of the Brazilian Army currently coordinates the army’s cybersecurity actions. Eventually, the Center will also oversee the Brazilian Navy and Air Force to ensure federal and military network protection from foreign and domestic attacks. Brazil also provides an interesting case study for attempting to


quell the growing conflict among international and domestic organized criminal organizations, law enforcement, and the public with the Unidade de Polícia Pacificadora (UPP). The Pacification Police Unit (UPP) offers a community-based approach to policing via technology to quell social unrest, police corruption, and cyber and traditional crime incidents through accountability and oversight (Table 2).

Table 2 A sample of police units devoted to cybercrime in Latin America^a

Country: Name and/or reference of the unit
Argentina: Federal Computer Security Division of the Interior Superintendence, Argentine Federal Police (División de Seguridad Informática Federal de la Superintendencia del Interior – Policía Federal Argentina [PFA])
Bolivia: Cybercrime Division of the Special Force to Fight Crime, National Police (División Delitos Informáticos de la Fuerza Especial de Lucha contra el Crimen [FELCC] de la Policía Nacional)
Brazil: Cybercrime Enforcement Unit, Federal Police (Unidade de Repressão a Crimes Cibernéticos [URCC] da Polícia Federal)
Chile: Cyber Crime Investigation Brigade of the National Headquarters for Economic Crimes, Chilean Investigations Police (Brigada Investigadora de Ciber Crimen [BRICIB] de la Jefatura Nacional de Delitos Económicos – Policía de Investigaciones de Chile [PDI])
Colombia: Technology Research Group of the Criminal Investigation Division (Investigative Area against Economic Wealth) of the Criminal Investigation and Interpol Directorate of the National Police of Colombia (Grupo de Investigaciones Tecnológicas de la Subdirección de Investigación Criminal [Área Investigativa contra el Patrimonio Económico] de la Dirección de Investigación Criminal e Interpol [DIJIN] de la Policía Nacional de Colombia)
Dominican Republic: High-Technology Crimes Investigation Department of the Central Directorate of Criminal Investigations, National Police (Departamento de Investigaciones y Crímenes de Alta Tecnología [DICAT] de la Dirección Central de Investigaciones Criminales [DICRIM] de la Policía Nacional)
Honduras: Special Computer Crime Unit of the National Directorate of Special Investigation Services of the National Police (Unidad Especial de Delitos Informáticos de la Dirección Nacional de Servicios Especiales de Investigación [DNSEI] de la Policía Nacional)
Mexico: Cyber Police of the Intelligence Sector of the Federal Preventive Police and the Undersecretariat of Information Technology of the Federal Public Security Secretariat (Policía Cibernética del Sector de Inteligencia de la Policía Federal Preventiva [PFP] y de la Subsecretaría de Tecnologías de la Información de la Secretaría de Seguridad Pública Federal [SSP])
Peru: High-Technology Crime Research Division of the Criminal Investigation and Justice Support Directorate, National Police (División de Investigación de Delitos de Alta Tecnología [DIVINDAT] de la Dirección de Investigación Criminal y de Apoyo a la Justicia [DIRINCRI] de la Policía Nacional)
Uruguay: Computer Crime Department of the National Police (Departamento de Delitos Informáticos de la Policía Nacional)

Source: Diniz and Muggah (2012), pp. 16–17
^a English designations differing from literal translation reflect currently accepted agency titles


Police officers immerse themselves in the local community by patrolling on foot and giving out personal emails and phone numbers. By forging these personal connections with citizens, law enforcement hopes to bridge the gap between the state and public spheres; however, it has also increased the opportunity for contention between the two groups. Even with these attempts at combatting traditional and cybercrime within the state, Brazil still expresses concern over criminalizing cyber offenses. The lack of a cohesive corresponding legal framework addressing various offenses inhibits prosecuting those who commit recognized cybercrimes.

Cuba

While the island of Cuba sits just 91 miles from the United States, the cradle of information communication technology (ICT), it was only in 2013 that Wi-Fi arrived in Cuba. Since December 2018, Cuba has been offering its citizens full Internet access for mobile phones. While this is a small step toward the democratization of technology, it is a huge step for the citizens of Cuba. Cuba became the last nation in the Western Hemisphere to enable such service. Cuba has a population of 11.2 million people, made up of white (64.1%), mulatto or mixed (26.6%), and black (9.3%) residents. Seventy-seven percent of the population lives in urban areas. Literacy in Cuba, according to official government statistics, is 99.8%. According to the CIA Fact Book, the government continues to balance the loosening of its socialist economic system against a desire for firm political control by allowing private ownership of real estate and new vehicles, allowing private farmers to sell agricultural goods, and adopting a new foreign investment law (CIA 2019).

According to the Internet World Stats, Cuba had 5,642,595 Internet users as of December 2018, a 49.1% penetration rate. While Cuba has seen a proliferation of Internet and mobile phone users in recent years, the market is still highly restricted to those who can afford such devices. Cuba’s Internet penetration rate was reported to be 25.71% as of 2013, but most users could only access the “government-filtered” Intranet domain in place of the global Internet. According to the National Statistics Office, approximately 2.9% of the Cuban population has Internet access. However, according to outside experts, the estimate is more likely 5–10% due to black market sales of dial-up minutes (Rodriguez 2012).
Although some may determine that this small percentage of citizens with Internet access prevents cybercrime from being a major concern for Cuba, the United States’ intent to reestablish diplomatic ties and improve public communications accessibility must also be considered (White House 2014). Authorized and unauthorized channels already exist by which citizens bypass the state’s Internet blocks. There is a growing quasi-business for those who can construct antennas to access illegal dial-up connections, post content on foreign networks, or sell time on illegally shared accounts. Thus, allowing even a small degree of this flow of communication and information creates opportunities for cybercriminals to target newly introduced Internet users and the government’s institutions from both inside and outside the state. In other words, despite claims of government-controlled networks, states wrestle with this important and intrusive aspect of cybersecurity.


Cuba authorized home Internet access in 2017, and hundreds of public Wi-Fi connection points exist in Havana’s parks and plazas. However, this government-provided service gives Cuban citizens access only to state-run email accounts on their phones. Illegal connections have proliferated using smuggled or homemade antennas and pirated Wi-Fi signals. According to Kirk Semple and Hannah Berkeley Cohen of the New York Times, “legal home Internet connections still remain rare—only 67,000 homes had it by last December. . .and most legal access in offices has been restricted to certain government employees and professions” (Semple and Cohen 2019). With a population of 11.2 million people, Cuba now has, according to Cristina Abellan Matamoros, 1,400 Wi-Fi hotspots, mostly provided by the government and heavily controlled; about 80,000 homes now have Internet access, and 2.5 million Cubans have 3G service (Matamoros 2019). 3G cell phone connectivity was introduced to the island for the first time in December 2018. In addition to providing Internet access, the Cuban government now allows Cubans to import routers, register their equipment, and then create private Wi-Fi networks connected to signals from the state-controlled operator ETECSA (France24 2019). ETECSA is Cuba’s state telecom and the nation’s only Internet provider. While Cuba’s young population is excited to have Internet access, access is not cheap. According to Trading Economics, wages in Cuba increased to 777 CUP/month in 2018 from 767 CUP/month in 2017. Wages in Cuba averaged 567.18 CUP/month from 2008 until 2018, reaching an all-time high of 777 CUP/month in 2018 and a record low of 415 CUP/month in 2008 (Trading Economics 2019). Twenty-six Cuban pesos (CUP) equal one US dollar, which means that the average Cuban in 2019 takes home a state salary equating to around US$30 each month.
Thus, Cuba’s digitalization of society is extremely expensive for its citizens, who must disburse US$1 an hour for Internet access or US$7 for 600 megabytes, the lowest 3G rate (France24 2019). As Cuba opens its doors to tourism, Internet connectivity is becoming a necessity due to tourists’ demand to access their email or communicate with loved ones at home.
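The affordability gap described above can be made concrete with the figures quoted in this section: a 777 CUP average monthly wage, roughly 26 CUP to the US dollar, and access priced at US$1 per hour or US$7 per 600 megabytes. All figures come from the text; the back-of-the-envelope calculation itself is ours:

```python
AVG_WAGE_CUP = 777    # average monthly state wage, 2018 (Trading Economics 2019)
CUP_PER_USD = 26      # approximate exchange rate quoted above
USD_PER_HOUR = 1      # hourly Internet access rate (France24 2019)
USD_PER_600MB = 7     # cheapest 3G data package (France24 2019)

wage_usd = AVG_WAGE_CUP / CUP_PER_USD          # ~US$30 per month
hours_affordable = wage_usd / USD_PER_HOUR     # ~30 hours if the entire wage went to access
data_share = USD_PER_600MB / wage_usd          # 600 MB costs roughly a quarter of a monthly wage
print(f"Wage: ${wage_usd:.2f}/month; 600 MB of 3G data = {data_share:.0%} of it")
```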

Mexico

Since the North American Free Trade Agreement (NAFTA) entered into force in 1994, Mexico’s $2.5 trillion economy – the 11th largest in the world – has increasingly focused on manufacturing. The government’s emphasis on economic reforms aims to improve competitiveness and economic growth in the long term. According to the CIA Fact Book, Mexico has a population of 126 million (July 2018). Mexico’s urban population is 80.2% of the total population (CIA World Fact Book 2019), and literacy stands at 94.9%, according to government statistics. According to the Internet World Stats, Mexico had 88,000,000 Internet users as of June 30, 2019, a 66.5% Internet penetration rate. About 78,000,000 Mexicans also had access to Facebook as of December 31, 2018, a 59.7% penetration rate.

A major concern regarding Mexico’s cybercrime is the rising level of hacktivism throughout the world; Mexico has been ranked “as one of the world’s most vulnerable countries to cyberattacks” (Conan 2013). It saw an


estimated 40% and a staggering 113% increase in the number of cybercrime incidents in 2012 and 2013, respectively. This increase is largely attributed to the presidential campaign, expanding hacktivist activities, growing Internet penetration, and an upward trend of criminals implementing cyber technology. Simultaneously, limited collaboration in developing a national – much less international – cyber strategy enables unconventional actors to achieve substantial success. Cartels, long a concern for the Mexican government, embrace the Internet to recruit new members, complete transactions, and search for new targets to exploit. Likewise, the proliferation and anonymity of the Internet foster hacktivist recruitment for groups such as Anonymous and improve their ability to escape prosecution. Combined with perceived declines in social and economic conditions, hacktivism is likely to increase. Specifically, situations such as the retaliatory kidnapping of a hacker with the group Anonymous who threatened the Los Zetas cartel and their cohorts with cyber tactics will become more likely (Abreu 2012). In addition to the concerns associated with prosecuting conventional crime, ambiguous cyberspace jurisdictions make it difficult to arbitrate responses to individual involvement in such events. In essence, nation-states recognize the dichotomy of hacking – strategic hacking can undermine the campaign against criminal hacking – which adds yet another complex dimension to the current cybersecurity situation for Mexican policymakers and law enforcement officials regarding cyber and traditional criminals. Prioritization of cyber threats has yet to rise like other national security concerns resulting from the environment along the US-Mexico border, such as traditional cartel violence and corruption among Mexican law enforcement officials.
Formal attempts at cyber defense and proactive cyber-response efforts did not start in Mexico until 2012, with the creation of its national cyber incident response center, CERT-MX (Conan 2013). Despite this late start, Mexico has recently become more proactive in raising awareness of cybersecurity among its citizens and its public and private sectors. It has also been actively seeking to improve cybersecurity efforts by building cooperative relationships with international organizations and governments, including the CSIRTs of Colombia, the United States, Holland, and Japan. Mexico also seeks similar collaboration from such international organizations as the Forum of Incident Response and Security Teams and the OAS.

Venezuela

Venezuela is located in Northern South America, bordering the Caribbean Sea and the North Atlantic Ocean, between Colombia and Guyana. Venezuela's population is concentrated in the northern and western highlands along an eastern spur at the northern end of the Andes, an area that includes the capital of Caracas. Venezuela's population is 31,689,176 (July 2018). According to the Internet World Stats, 17,178,743 Venezuelans had Internet access in December 2018, a 52.4% penetration rate, per Conatel. The National Commission of Telecommunications (Conatel) is the agency of the Government of Venezuela that regulates, supervises, and controls telecommunications in Venezuela. Furthermore, according to Conatel, 30,300,058 Venezuelans had mobile cellular subscriptions in June 2012,

18

Cybercrime Initiatives South of the Border: A Complicated Endeavor

375

a 102.7% penetration rate. Facebook is also very popular in Venezuela. According to the Internet World Stats, 13,000,000 Venezuelans used Facebook as of December 2017, a 39.7% penetration rate. Almost half (about 44.1% in 2014) of Venezuelan citizens had access to the Internet, and the volume of cyber concerns is a burden on the limited capacity of the national response team, VenCERT. Venezuela claims to have an interest "in increas[ing] transparency" and encouraging cooperation between the public and private sectors while "ensur[ing] technological sovereignty" (OAS 2014). Venezuela's Special Law against Computer Crime, introduced in 2001, defines types of "crimes against information systems, economic property and patrimony, personal privacy, and communications" (EPIC 2006). The Interoperability Act of 2012 standardizes "an appropriate level of interoperability in information systems used by state agencies and entities" and sets consequences for inhibiting the system's operation (Law of Interoperability 2012). Combined, these two major Venezuelan laws comprise part of the legal framework that allows coordinated, informed, and effective responses to cybercriminal offenses (Table 3).

Table 3 Key legislation for cybercrime in selected Latin-American countries

Argentina: Cybercrime Law 26.388, 2008 (Ley 26.388 de Delitos Informáticos, 2008)
Bolivia: Cybercrime Law 1768, 1997 (Ley 1768 de Delitos Informáticos, 1997)
Chile: Cybercrime Law 19.223, 1993 (Ley 19.223 de Delitos Informáticos, 1993)
Colombia: Protection of Information and Data Law 1273, 2009, and Data Messages, Electronic Trade, and Digital Signatures Law 527, 1999 (Ley 1273 de la Protección de la Información y de los Datos, 2009, and Ley 527 de Mensajes de Datos, del Comercio Electrónico y de las Firmas Digitales, 1999)
Costa Rica: Cybercrime Law 8148, 2001 (Ley 8148 de Delitos Informáticos, 2001)
Dominican Republic: Cybercrime Law 53, 2007 (Ley 53 de Delitos Informáticos, 2007)
Ecuador: Electronic Trade, Signatures, and Data Messages Law 67, 2002 (Ley 67 de Comercio Electrónico, Firmas y Mensajes de Datos, 2002)
Guatemala: Penal code altered to include cybercrime (2000)
Mexico: Penal code altered to include, among others, cybercrime (1999)
Panama: Penal Code Articles 216, 222, 283, 362, and 364 and Documents and Electronic Signatures Law 51, Article 61, 2008 (Articles 216, 222, 283, 362, and 364 of the Penal Code and Article 61 of Ley 51 Documentos y Firmas Electrónica, 2008)
Paraguay: Law 1.160, 1997 (Ley 1.160 alters the penal code to include, among others, cybercrime, 1997)
Peru: Cybercrime Law 27.309, 2000 (Ley 27.309 de Delitos Informáticos, 2000)
Uruguay: Copyright Protection and Related Rights Law 17.616, 2003 (Ley 17.616 de Protección del Derecho de Autor y Derechos Conexos contains explicit provisions regarding digital intellectual property, 2003)
Venezuela: Special Law Against Cybercrime, Decree 48, 2001 (Decreto 48 Ley Especial Contra los Delitos Informáticos, 2001)

Source: Diniz and Muggah (2012), p. 13

376

J. de Arimatéia da Cruz and N. Godbee

In the aftermath of the Snowden affair and the revelation of the National Security Agency's espionage activities, the Venezuelan public scrutinized the extent to which this legislation allows the infringement of privacy in exchange for a stronger sense of national security. "In the Plan of the Nation 2013 to 2019, which seek[s] to deepen the socialist model that [the late] President [Hugo] Chávez began," Venezuela seeks tighter constraints on the public's use of and access to information on the Internet, social media, and communication (Benítez 2014). As with Cuba, the government is willing to forego individual privacy to increase national security. Much like Brazil, the Venezuelan plan for the nation also seeks to bolster military-based cyber defense. Venezuela's restrictive Internet usage guidelines have improved the government's ability to respond to cybercrime incidents. The government has likewise strengthened national cybersecurity efforts by hiring additional highly qualified personnel and informing the public about safe Internet practices. The "Information Security Begins with You" campaign began in 2009 as a way of "educating the staff of government institutions and organized communities" and encouraging Venezuelans to support cybercrime initiatives (OAS 2014). Venezuela also conducts regular capacity-building programs for cyber-related personnel through relevant coursework. While not party to any official cooperative efforts with other state actors, Venezuela has successfully responded to previous cyber incidents by collaborating with CSIRTs internal and external to the Southern Common Market trade organization. Furthermore, Venezuela has begun encouraging industry cooperation on technology information sharing and cyberattack reporting to intensify its cyber defense.

Peru

According to the CIA World Fact Book, Peru's economy reflects its varied topography – an arid lowland coastal region, the central high sierra of the Andes, and the dense rain forest of the Amazon. A wide range of important mineral resources, including silver and copper, is found in the mountainous regions, and coastal waters provide excellent fishing grounds. Peru's population according to the latest government statistics is 31.3 million (July 2018), with 77.9% of the total population living in urban areas. There are several barriers to Internet access in Peru, such as socioeconomic status and literacy rates. There is also very little competition among Internet service providers, so connections are slow and expensive compared to the rest of Latin America. Despite these barriers, Internet use in Peru is increasing. As of 2017, Peru's Internet penetration rate was reported to be 48.73%, up from 38.20% in 2012. To combat any cybercrimes that may arise from the increased use of the Internet, Peru has taken several proactive steps and created several agencies. To monitor and investigate cyber-related crimes, Peru created the Emergency Coordinator of Telecommunication (Coordinadora de Emergencias de Redes Teleinformáticas de Perú), also known as PeCERT. The agency coordinates with the larger government to ensure that all cyber issues are addressed quickly and effectively. Other parts of the government report to


and communicate with PeCERT to ensure that they are not vulnerable to attack or cyber infiltration. PeCERT also centralizes all reports of cybercrime incidents so they can be handled effectively within the agency (About PeCERT 2019). The agency also seeks to keep the public up to date on potential consumer cybersecurity threats, such as the insecurity of information provided to social media sites like Facebook (Foltýn 2019). Peru also has its own law enforcement agency dedicated to investigating cybercrimes, the High Technology Crime Investigation Division of the National Police's Criminal Investigation and Justice Support Directorate (División de Investigación de Delitos de Alta Tecnología [DIVINDAT] de la Dirección de Investigación Criminal y de Apoyo a la Justicia [DIRINCRI] de la Policía Nacional). This police force focuses on crimes such as card cloning and skimming, pharming, and phishing (Symantec Corporation 2019). In 2000, the Peruvian government passed legislation addressing cybercrimes such as those listed above. The law also prohibits crimes such as hacking into databases and software that pertain to national security (Law 27.309 2000).

Paraguay

Paraguay, a landlocked nation and the fifth largest soy producer in the world, has a commodity-based market economy, distinguished by a large informal sector but hampered by corruption, deficient infrastructure, and limited progress on economic reforms. It has a population of 7 million people, 61.6% of whom live in urbanized areas. According to the World Internet Stats, Paraguay had 6,177,748 Internet users in December 2018, an 88.5% penetration rate. Between 2012 and 2017, Internet penetration in Paraguay more than doubled, jumping from 29.34% to 61.08% (ITU 2018). Despite this jump in penetration, the number of households with access to the Internet is still relatively low: as of 2016, only 24% of households in Paraguay had Internet access (World Bank 2019). Despite low household access, Paraguay has taken several steps to fight any cybercrimes that may take place. The National Secretariat for Information and Communication Technologies (Secretaría Nacional de Tecnologías de la Información y Comunicación, SENATICs) oversees all information and communication issues in the country and is in charge of establishing and maintaining communication infrastructure and access (Symantec 2019). Under this department, Paraguay created the Computer Incident Response Team of Paraguay (Equipo de Respuesta a Incidentes de Seguridad de Paraguay), also known as CERT-py, to respond to any threats to the country's cyber infrastructure. CERT-py coordinates the response to such threats and informs the affected parties of the situation. Paraguay does not have a designated police force that handles cybercrime. However, it does have a designated unit within the Office of the National Prosecutor that handles the prosecution of cybercrimes in the country. The National and Judicial police forces aid this unit in the investigation of cybercrimes.
Paraguay does not have many laws addressing what cybercrime


is and is not. However, the few laws the country does have mainly address issues such as child pornography and data security.

Cybercrime Strategic Implications for Latin-American Nations

The Internet is becoming an integral part of the globalized international system of the twenty-first century and part of the "new wars. . . in which the difference between internal and external is blurred; they are both global and local and they are different both from classic inter-state wars and classic civil wars" (Kaldor 2012). In the globalized world of the twenty-first century, nation-states and violent non-state actors alike will make use of the power of technology to advance their activities without fear of retaliation, prosecution, or constraints from geographical boundaries. In Latin America, governments have become extremely concerned about the proliferation of the Internet as a force multiplier in the commission of crimes. For example, governments in Latin America are concerned with "criminal practices of individuals and crime networks connected to cyberspace with the intention of making illicit economic gains. Common examples range from e-banking scams to drug trafficking and child pornography" (Diniz and Muggah 2012). The prevalence of drug trafficking increases in relation to "the [Internet emerging] as a critical interface in the selling and purchasing of all manner of commodities, including both prescription and illicit narcotics. . . . Likewise, drug profits are often laundered through the Internet through the purchasing of goods and services and the transferring of cash" (Diniz and Muggah 2012). In this brave new world, a "new criminality" is emerging in cyberspace: "the Internet and related social media tools have not just empowered citizens to exercise their rights, but also enabled and extended the reach of gangs, cartels, and organized criminals" (Diniz and Muggah 2012).

Recommendations for a Safer Cyberspace Environment and Crime Mitigation in Latin America

In comparison to Europe and North America, Latin America's weak countermeasures and low risk of punishment combine with the technology's affordability to make the region "fertile ground" for cybercrime. More than half of Latin-American and Caribbean businesses reported cyberattacks during 2012. Late that year, PiceBOT, a particularly malicious piece of malware capable of gathering financial information, originated somewhere within Latin America. Pharming, deflecting user access from legitimate Web sites in order to gather sensitive information, costs Mexican banks approximately $93 million annually. Although "over 90 percent of countries that responded to the [Comprehensive Study on Cybercrime] questionnaire have begun to put in place specialized structures for the investigation of cybercrime and crimes involving electronic evidence," many Latin-American states are still not sufficiently equipped to effectively build up cyber defense capabilities (UN Office on Drugs and Crime 2013).


To make Latin America a safer cyberspace environment and mitigate some occurrences of cybercrime, the following actions ought to be taken.

First, Latin-American governments need to develop a clear legal framework outlining the various types of cybercrimes. Establishing a "comprehensive and consensus-based framework for legislating on cybercrime" will enable seamless investigation of cybercrimes and stronger prosecution of cybercriminals (Diniz and Muggah 2012). This standardization, in conjunction with consistent legal language across Latin America, would resolve the "patchwork of responses and loopholes open to exploitation" (Diniz and Muggah 2012). The importance of cybersecurity and cybercrime efforts becomes a state imperative with the growing frequency of high-scale incidents throughout the world and the general Latin-American population's increasing awareness of the implications of cybercrime, as identified by the Stop.Think.Connect movement. In fact, ordinary citizens typically bear the brunt of malicious Internet scams and hacking techniques. This is especially true in cities such as Rio de Janeiro and São Paulo, Brazil, that are "undergoing major technical and social transformations," necessitating a policy change to decrease the "distance—spatial, social, and psychological—between citizens and the state" (Willis et al. 2014).

Second, as a response to public cyber concerns, institutions of higher learning should provide tracks specifically for the study of cybersecurity and cybercrime. For those who cannot afford to take such courses, awareness campaigns can go a long way in promoting safe Internet practices. With "2 billion Internet users worldwide . . . underprotected from cybercrime," a more informed Internet population can help improve cybersecurity (Diniz and Muggah 2012).
Encouraging multilateralism and citizen empowerment through safe Internet use can only add to the success of a more structured and extensive international response system, legislation, and cyber-response units. In Mexico, concerned citizens use social media platforms, such as Twitter and Facebook, to compensate for the lack of media reports on cartel abuses. This ability to share information bypasses corrupt law enforcement and the cartels and allows neighborhoods to protect themselves from illicit activity.

Third, a multilateral agreement should be established to foster increased information flow and freedom of Internet use within Latin America. However, public outcry regarding privacy infringement by Latin-American governments illustrates the precarious balance between increasing state responses to cyber concerns and maintaining public empowerment and freedom. Despite states' best efforts, intense public scrutiny puts them at a disadvantage in combating cybercrime.

Fourth, Latin-American nations in partnership with the United States should establish cyber fusion centers. The cyber fusion centers' primary goal is "the overarching process of managing the flow of information and intelligence across all levels and sectors of government and the private sector" (Carter and Carter 2009). Fusion centers could warehouse and disseminate information and knowledge on the latest investigation and cyber forensic techniques. In this role, the centers would not only provide an overview of how computer network systems are affected once a cyberattack has been committed but also identify courses of action to mitigate damage.


Fifth, Latin-American nation-states and the United States must also work together on cyber attribution, whereby the United States' expertise can help Latin-American governments resolve cyber intrusion situations with neighboring states. Since attribution "has a territorial dimension, and therefore turns into a political problem" (Rid 2013), cyber intrusions remain one of the "black swans" in cyberspace (Taleb 2007). Latin-American nations claim they cannot afford to attribute a cyberattack that might escalate to physical war without solid evidence. Given the animosity between Latin-American nations, falsely attributing cyberattacks could have devastating consequences, especially as the political landscape becomes more complicated. This is especially true as more nation-states develop cyber programs with disruptive intents, as cybercriminals gain greater access to cyber systems, and as ideological actors – such as hackers or extremists – become more involved in cyber political activity.

Finally, a strong public-private cyber partnership in Latin America is necessary and recommended. As the Igarapé Institute and the SecDev Foundation stated, "In Latin America, private corporations and firms appear to be playing a comparatively marginal role in supporting cybersecurity initiatives and enhancing public safety in cyberspace. Not a single public-private partnership could be identified in the course of the preparation of this strategic paper in Latin America" (Diniz and Muggah 2012).

A Vision for Cybercrime Security Rather than (In)security in Latin America

Former Secretary of Defense Leon E. Panetta commented on the possibility of a "cyber Pearl Harbor, an attack that would cause physical destruction and the loss of life, in fact, it would paralyze and shock the nation" at the hands of a state or nonstate aggressor (Panetta 2012). He contends that many individuals "worry about hackers and cybercriminals who prowl the Internet, steal people's identities, steal sensitive business information, steal even national security secrets," but a concentrated cyberterrorist strike is of greater concern (Panetta 2012). Panetta cites the 2012 DDoS attacks on American financial institutions as well as the Shamoon virus that infected Saudi Arabia's Aramco oil company as proof that the Internet is the new battlefield. In spite of advances in cyber capabilities, "Potential aggressors are exploiting vulnerabilities in [American] security" (Panetta 2012). However, the sentiment that only large-scale cyberattacks represent a national security threat overshadows the extent of cybercrimes committed in Latin America and the United States, two of the most digitally active regions in the world. And, as geographical boundaries shrink as a result of "the Internet collaps[ing] space, as users around the world interact without regard for territory, engag[e] in cross-border exchanges and [elicit] state actions that blur the domestic-foreign divide," it will become more evident that "the state is no longer supreme authority over information," even though populations are in dire need of an overarching system to counteract cybercriminals (Mueller and Hans 2014). In recognition of this, countries in Latin America are developing cybercrime strategies that are influenced


by regional actors and international organizations, such as the OAS, the Igarapé Institute, and the SecDev Foundation. By establishing national CSIRTs and cyber legislation, states and their private sector allies are better equipped to respond to, investigate, and prosecute cybercriminals efficiently and quickly. With increased private and public sector information sharing through these formal national response units, the necessary coordination will become more feasible. By institutionalizing cyber fusion centers, states can be proactive in their fight against cybercriminals. Additionally, collaborative cybersecurity efforts could support accurate attribution and the development of a formal Cyber-Westphalia Treaty. This standardization would reduce the conflict created by inconsistent cyber boundaries and encourage national and international adherence to accepted conventions. To protect Internet users, awareness campaigns and education courses should be funded to improve personal cybersecurity. With a conservative estimate of approximately three billion people having access to the Internet, these measures are vital to the development of the "new frontier" of the twenty-first century.

Conclusion

It is time for Latin-American nations to encourage a broader conceptualization of cyber threats and intellectual engagement within academia to facilitate greater communication among social science disciplines traditionally ignored during cyber conversations. Because cyber threat[s] cannot be eliminated but only mitigated, governments in Latin America should include expertise from "various sub-disciplines of computer science as well as social science, political science, legal studies, and even history" to develop more effective strategies and responses (Rid 2013). Given the proliferation of computers as tools in the commission of crimes, nation-states must embrace the concept of a Cyber-Westphalia Treaty (Demchak and Dombrowski 2011). Given the absence of universally accepted and enforceable norms of behavior in cyberspace, a Cyber-Westphalia could bring some normalcy to the current situation of cyberattacks, cyberterrorism, and cyberespionage, "in which multiple actors continue to test their adversaries' technical capabilities, political resolve and thresholds" (Clapper 2015). A Cyber-Westphalia Treaty would "[laud] the benefits of order in the virtual space, based on the norms of sovereignty and power concentration in the hands of states, [which] have guided the actions of the international community in the last few years" (Cavelty 2015).

Cross-References

▶ Global Voices in Hacking (Multinational Views)
▶ Organized Financial Cybercrime: Criminal Cooperation, Logistic Bottlenecks, and Money Flows
▶ The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research


References

About PeCERT. (2019). https://www.pecert.gob.pe/index.php/acerca-de-nosotros/acerca-del-pecert
Abreu, J. (2012). Mexican drug cartels and cyberspace: Opportunity and threat. InfoSec Institute, 21 March 2012. Retrieved from http://resources.infosecinstitute.com/mexican-cartels
Benítez, S. (2014). Venezuela: Spying in Venezuela through social networks and emails. In Global Information Society Watch 2014: Communications surveillance in the digital age (pp. 270–275). Melville: Association for Progressive Communications and Humanist Institute for Cooperation with Developing Countries. Retrieved from http://www.giswatch.org/sites/default/files/spying_in_venezuela_through_social_networks_and_emails.pdf
Carnegie Mellon University. (2019). Retrieved from https://www.sei.cmu.edu/education-outreach/computer-security-incident-response-teams/national-csirts/
Carter, D. L., & Carter, J. G. (2009). The intelligence fusion process for state, local, and tribal law enforcement. Criminal Justice and Behavior, 36(12), 1323–1339.
Cavelty, M. D. (2015). The normalization of cyber-international relations. In O. Thranert & M. Zapfe (Eds.), Strategic Trends 2015: Key developments in global affairs (p. 94). Zurich: Center for Security Studies. Retrieved from http://www.css.ethz.ch/publications/pdfs/Strategic-Trends-2015.pdf
CIA World Fact Book (Argentina). (2019). Retrieved from https://www.cia.gov/library/publications/resources/the-world-factbook/attachments/summaries/AR-summary.pdf
CIA World Fact Book (Cuba). (2019). Retrieved from https://www.cia.gov/library/publications/resources/the-world-factbook/attachments/summaries/CU-summary.pdf
CIA World Fact Book (Mexico). (2019). Retrieved from https://www.cia.gov/library/publications/resources/the-world-factbook/attachments/summaries/MX-summary.pdf
Clapper, J. R. (2015). Opening statement to the Worldwide Threat Assessment hearing (remarks, Senate Armed Services Committee, 26 February 2015). Retrieved from http://www.dni.gov/index.php/newsroom/testimonies/209-congressional-testimonies-2015/1175-dni-clapper-opening-statement-on-the-worldwide-threat-assessment-before-the-senate-armed-services-committee
Conan, R. (2013). Defending Mexico's critical infrastructure against threats. Report Company, 22 July 2013. Retrieved from http://www.the-report.net/mexico-prw/600-defending-mexico-s-critical-infrastructure-against-threats
da Cruz, J. A., & Alvarez, T. (2015). Cybersecurity initiatives in the Americas. Marine Corps University Journal, 6(2), 45–68.
Demchak, C. C., & Dombrowski, P. (2011). Rise of a cybered Westphalian age. Strategic Studies Quarterly, 5(1), 32–61. Retrieved from http://www.au.af.mil/au/ssq/2011/spring/spring11.pdf
Department of Homeland Security Science & Technology. (2019). Retrieved from https://www.dhs.gov/science-and-technology/csd-csirt
Diniz, G., & Muggah, R. (2012). A fine balance: Mapping cyber (in)security in Latin America (Strategic Paper 2). Rio de Janeiro: Igarapé Institute.
Electronic Privacy Information Center (EPIC). (2006). EPIC–Privacy and Human Rights Report: Bolivarian Republic of Venezuela. New South Wales: World Legal Information Institute. Retrieved from http://www.worldlii.org/int/journals/EPICPrivHR/2006/PHR2006-Bolivari.html
Foltýn, T. (2019). Facebook exposed millions of user passwords to its employees. 22 March 2019. https://www.welivesecurity.com/la-es/2019/03/22/facebook-expuso-millones-de-contrasenas-deusuarios-a-sus-empleados/
France24. (2019). Cuban government cautiously expands Internet access. Retrieved from https://www.france24.com/en/20190729-cuba-technology-government-internet-access
Geers, K. (2014). Pandemonium: Nation states, national security, and the Internet (Tallinn Papers 1, Vol. 1, p. 12). Tallinn: NATO Cooperative Cyber Defence Centre of Excellence. Retrieved from https://ccdcoe.org/publications/TP_Vol1No1_Geers.pdf
Glickhouse, R. (2013). Explainer: Cybercrime in Latin America. Americas Society and Council of the Americas, 21 October 2013. Retrieved from http://www.as-coa.org/articles/explainer-cybercrime-latin-america


Greenwald, G. (2014). No place to hide: Edward Snowden, the NSA, and the US surveillance state. New York: Metropolitan Books.
Hostetler, B. (2015). International compendium of data privacy laws (p. 1). Washington, DC: BakerHostetler Privacy and Data Protection Team. Retrieved from http://www.bakerlaw.com/files/Uploads/Documents/Data%20Breach%20documents/International-Compendium-of-Data-Privacy-Laws.pdf
International Telecommunications Union. (2018). Percentage of individuals using the Internet. https://www.itu.int/en/ITU-D/Statistics/Documents/statistics/2018/Individuals_Internet_20002017_Dec2018.xls
Internet World Stats. (2019). Retrieved from https://www.internetworldstats.com/stats2.htm
Kaldor, M. (2012). New and old wars: Organized violence in a global era (3rd ed.). Redwood City: Stanford University Press.
Kennan, G. F. (1993). Around the cragged hill: A personal and political philosophy. New York: W. W. Norton.
Law 27.309 that incorporates cybercrime to the Penal Code of June 26, 2000. http://www.informatica-juridica.com/anexos/ley-27-309-que-incorpora-el-cibercrimen-al-codigo-penal-de-26-de-junio-de-2000-nbsp/
Ley de Interoperabilidad [Law of Interoperability], Gaceta Oficial N 39.945 [National Diary Number 39.945], Decreto N 9.051 [Decree number 9.051] (15 June 2012). Retrieved from http://interoperabilidad.gobiernoenlinea.gob.ve/index.php/conceptos/sobre-promociones/promociones/96-ley-de-interoperabilidad
Matamoros, C. A. (2019). Cubans now allowed to access the internet from their own homes, but at what price? Retrieved from https://www.euronews.com/2019/07/29/cubans-now-allowed-to-access-the-internet-from-their-own-homes-but-at-what-price
Mission, Vision and Strategic Axes. (2019). https://www.senatics.gov.py/institucion/mision-y-vision
Mueller, M. L., & Hans, K. (2014). Sovereignty, national security, and Internet governance: Proceedings of a workshop (p. 1). Internet Governance Project, Syracuse University, 12 December 2014. Retrieved from http://www.internetgovernance.org/wordpress/wp-content/uploads/Proceedings-publication.pdf
Organization of American States (OAS) and Symantec. (2014). Latin American + Caribbean cybersecurity trends (p. 11). Washington, DC: OAS Secretariat for Multidimensional Security/Symantec Corporation. Retrieved from http://www.symantec.com/content/en/us/enterprise/other_resources/b-cyber-security-trends-report-lamc.pdf
Panetta, L. E. (2012). Remarks by Secretary Panetta on cybersecurity to the Business Executives for National Security, New York City (speech), 11 October 2012. Retrieved from http://www.defense.gov/transcripts/trasncript.aspx?transcriptid=5136
Peru – Broadband and broadcasting market – Overview, statistics and forecasts. (2019). https://www.budde.com.au/Research/Peru-Fixed-Broadband-Market-Statistics-and-Analyses#sthash.NrnVfUkJ.dpuf
Rid, T. (2013). Cyber war will not take place. Oxford: Oxford University Press.
Rodriguez, A. (2012). In Cuba, mystery shrouds fate of Internet cable. Yahoo News, 21 May 2012. Retrieved from http://news.yahoo.com/cuba-mystery-shrouds-fate-internet-cable180553388%2D%2Dfinance.html
Semple, K., & Cohen, H. B. (2019). Cuba expands Internet access to private homes and businesses. Retrieved from https://www.nytimes.com/2019/07/29/world/americas/cuba-internet-technology.html
Symantec Corporation. (2019). Retrieved from http://securityresponse.symantec.com/norton/cybercrime/pharming.jsp. Card skimming is the act of stealing someone's credit or debit card information by using a device illegally attached to machines such as an ATM or POS terminal (point of sale terminal). For more on skimming, see the antivirus company Norton's explanation of the cybercrime concern at "What is identity theft?" Pharming is a scam where malicious software is installed on someone's computer that redirects them from


legitimate sites to fraudulent sites. For more on pharming, see the antivirus company Norton's explanation of the cybercrime concern at "Online Fraud: Pharming." Phishing is a type of scam where someone attempts to gain another person's personal information, such as social security or credit card numbers, through seemingly benign means.
Taleb, N. M. (2007). The black swan: The impact of the highly improbable. New York: Random House.
Trading Economics. (2019). Retrieved from https://tradingeconomics.com/cuba/wages
United Nations Office on Drugs and Crime. (2013). Comprehensive study on cybercrime: Draft–February 2013 (p. xxiii). New York: United Nations Office on Drugs and Crime. Retrieved from https://www.unodc.org/documents/organized-crime/UNODC_CCPCJ_EG.4_2013/CYBERCRIME_STUDY_210213.pdf
White House. (2018). The Comprehensive National Cybersecurity Initiative. Washington, DC: Executive Office of the President of the United States. Retrieved from https://www.whitehouse.gov/sites/default/files/cybersecurity.pdf. Cybersecurity initiatives include strategies and programs intended to protect public, private, and government technology networks. The multilayered approach to identifying and responding to cyberterrorism and cybercrime involves government policymakers and technology professionals as well as users. Although the exact meaning of "cyberterrorism" is obscure, US Federal Bureau of Investigation Special Agent Mark M. Pollitt coined the working definition of a "premeditated, politically motivated attack against information, computer systems, computer programs, and data which results in violence against noncombatant targets by subnational groups or clandestine agents." See Serge Krasavin. (2002). What is cyber-terrorism? Zurich: Computer Crime Research Center. http://www.crimeresearch.org/library/Cyber-terrorism.htm. Cybercrimes are categorized by the damage inflicted on computer and technology assets, fraud targeting individuals and organizations, and perpetration of human abuse and trafficking. See Interpol, "Cybercrime," http://www.interpol.int/Crime-areas/Cybercrime/Cybercrim
White House Office of the Press Secretary. (2014). Fact sheet: Charting a new course on Cuba, 17 December 2014. Retrieved from http://www.whitehouse.gov/the-press-office/2014/12/17/fact-sheet-charting-new-course-cuba
Willis, G. D., Muggah, R., Kosslyn, J., & Leusin, F. (2013). Smarter policing: Tracking the influence of new information technology in Rio de Janeiro (Strategic Note 10, p. 2). Rio de Janeiro: Igarapé Institute, November 2013. Retrieved from http://igarape.org.br/wp-content/uploads/2013/10/Smarter_Policing_ing.pdf
Willis, G. D., Muggah, R., Kosslyn, J., & Leusin, F. (2014). The changing face of technology use in pacified communities (Strategic Note 13, p. 2). Rio de Janeiro: Igarapé Institute, February 2014. Retrieved from http://igarape.org.br/wpcontent/uploads/2014/01/NE-13-Changing-face-oftechology-29jan.pdf
World Bank. (2019). Households w/ Internet access, %. https://tcdata360.worldbank.org/indicators/entrp.household.inet?country=BRA&indicator=3429&viz=line_chart&years=2012. 2016.

19 Police and Extralegal Structures to Combat Cybercrime

Thomas J. Holt

Contents
Introduction
Cybercrime Policing Typology
  Internet Users and User Groups
  Virtual Environment Security Managers
  Network Infrastructure Providers (ISPs)
  Corporate Security Organizations
  Nongovernmental, Nonpolice Organizations
  Governmental Nonpolice Organizations
  Public Police Organizations
Law Enforcement Challenges in Policing Cybercrime
Industry Mechanisms to Maintain Order Online
Assessing the Risk of Extralegal Efforts and Interventions
Discussion and Conclusions
Cross-References
References

Abstract

This chapter provides an overview of the various formal and informal organizations that handle the investigation and management of cybercrimes around the world. The role of hosting companies and private industry is discussed, as well as local, state, and federal law enforcement. The challenges these entities face in combatting cybercrime are considered in detail.

Keywords

Policing · Cybercrime · Industry · Federal law enforcement · Transnational crime

T. J. Holt (*)
College of Social Science, School of Criminal Justice, Michigan State University, East Lansing, MI, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_12


Introduction

The proliferation of the Internet, digital technology, and mobile computing devices has allowed human behaviors to transcend traditional boundaries of space and time, rendering instantaneous transnational communications and financial transactions possible (Holt and Bossler 2016; Newman and Clarke 2003; Wall 2001). These devices have also radically transformed crime, giving rise to new offenses that depend on computers, such as hacking and malicious software distribution (see ▶ Chaps. 37, “Global Voices in Hacking (Multinational Views),” and ▶ 38, “Malicious Software Threats”). Technologies also enhance or streamline real-world criminality, such as prostitution (see ▶ Chap. 55, “Prostitution and Sex Work in an Online Context”), fraud (see ▶ Chap. 41, “Spam-Based Scams”), stalking and harassment (see ▶ Chap. 59, “Cyberstalking”), and terrorism (see ▶ Chap. 64, “Hate Speech in Online Spaces”).

Though criminals appear to adopt technologies to suit their needs with great speed and efficiency, law enforcement agencies are far slower to respond and require perpetual training to understand the ways that offenses are evolving (Hinduja 2004; Holt et al. 2015a; Senjo 2004; Stambaugh et al. 2001). The bureaucratic structures within traditional law enforcement and government agencies also hinder their ability to collaborate or act in ways that go beyond their specific jurisdictional or legally defined duties (Brenner 2008; Cross 2015). Additionally, law enforcement agencies are often dependent upon the public and the private companies that own and maintain technological infrastructure in order to obtain evidence and investigate wrongdoing (Goodman 1997; PERF 2014; Stambaugh et al. 2001). In fact, the private sector has become an essential regulatory arm in the investigation and prevention of crime (Brenner 2008; Holt and Bossler 2016; Wall and Williams 2013).
The inherent limitations of the law enforcement response to technology misuse and abuse have led scholars, police administrators, and legislators to consistently call for changes to improve the criminal justice response to these offenses (Goodman 1997; NIJ 2008; Stambaugh et al. 2001). The extent to which these calls have been answered is limited, and the underlying problems may have become even more difficult to address as technology has grown more complex over the last decade. This chapter explores these issues and assesses the current state of policing online spaces generally. The range of individuals, organizations, and entities responsible for policing cybercrime is discussed first, followed by an examination of the challenges faced by law enforcement and industry bodies. The chapter also considers the risks that may arise when corporate entities attempt to regulate online spaces outside of conventional criminal justice procedures. It concludes with a series of implications for transforming the entire ecosystem involved in the fight against cybercrime.

Cybercrime Policing Typology

The complex landscape of technologies, service providers, and organizations that maintain the Internet, together with the population of citizens who use its services, presents a challenge for policing and law enforcement. As with traditional criminality, the local community and traditional policing agencies play significant roles in the notification of and response to criminal complaints. Industry and nongovernmental organizations, however, play more prominent roles in policing cybercrimes, as they may serve as the target of an offense or hold evidence that implicates an individual in one. To better understand the makeup of these entities, Wall (2007) developed a seven-category typology of the groups involved in policing online spaces and Internet governance: (1) Internet users and user groups; (2) virtual environment security managers; (3) network infrastructure providers (ISPs); (4) corporate security organizations; (5) nongovernmental, nonpolice organizations; (6) governmental nonpolice organizations; and (7) public police organizations. Each category is explored in detail below to understand its potential role in policing the Internet and the ways in which it intersects with the other groups.

Internet Users and User Groups

As with conflict resolution in the real world (Black 1998), the global population of Internet users comprises the largest possible force to regulate online experiences and maintain order (Innes 2004; Wall 2007, 2010). Online spaces, including social media, forums, and other web-based platforms, far outnumber law enforcement officers worldwide. As a result, Internet users are essential to the detection and reporting of behaviors that violate social norms or legal statutes (Wall 2010). Individual attempts to report spam, malicious accounts, malicious software, and other problems are all vital to police and regulatory bodies in documenting the scope of misuse and the potential harm that may occur.

Informal regulatory efforts operated in part by Internet service providers and place managers are also useful resources to help maintain order in online spaces. For instance, feedback and reputation systems on e-commerce platforms like Amazon and eBay give customers the opportunity to inform other consumers when vendors misrepresent or fail to deliver products (e.g., Hayne et al. 2015; Weiss et al. 2004). Similarly, consumers have banded together via forums and websites to create “virtual brand protection communities” that inform other consumers about the appearance of potential counterfeit products and disreputable sellers, as well as ways to identify legitimate products (Mavalanova and Benbunan-Fich 2010; Narcum and Coleman 2015).

There are also numerous examples of individual Internet users and groups who engage in acts of vigilantism to combat cybercrimes or the misuse of technology in general (Wall 2007). For instance, several websites, such as ScamBaiters and 419Eater, operate to disrupt the practices of email-based fraudsters (Wall 2007).
These sites operate independently from law enforcement agencies and encourage individuals to pose as potential victims, engage with email fraudsters, and waste the fraudsters’ time with silly requests, such as asking for pictures of themselves with shoes or fish on their heads. The site operators argue that scammers who waste time on bogus requests have less time to engage with genuine potential victims. In some cases, Internet users also form groups that respond to certain forms of crime that threaten or offend their interests and beliefs (Wall 2007). Similarly, hackers target a range of nefarious groups engaged in dangerous activities, from the distribution of child pornography (Wall 2007) to the social media profiles and sites of ISIS and other terror groups (Hern 2016). Many of these groups explicitly reveal their goals and purposes in their names, lending transparency to their efforts and making their missions clear (Wall 2007). Others have engaged in cyberattacks against government and religious organizations on the basis that those organizations have harmed the larger world through their practices (Holt et al. 2017).

Virtual Environment Security Managers

In addition to ordinary Internet users, there are individuals with elevated responsibilities who help manage the behavior of users in forums and private communities. Wall (2007) argued that they constitute virtual environment security managers, as they are informally charged with ensuring that participants in various online environments conform to community standards. Such managers exist in forums and other computer-mediated communication platforms and are typically designated as moderators or administrators by the site owner or operator. A moderator’s primary task is to enforce local norms on communication, such as minimizing spam and flaming (arguing publicly), to ensure participants have a pleasant experience in the community (Cho and Kwon 2015; Holt 2007). Forum moderators can use informal tools to ensure compliance, such as deleting comments or banning and blocking repeat offenders (Holt et al. 2015b). In the event a user’s behavior violates local laws, moderators may also report the activity to law enforcement and service providers for further formal investigation (Wall 2007). These roles are, however, much less common in the larger population of Internet users, making their reach relatively limited compared to that of other regulatory bodies.

Network Infrastructure Providers (ISPs)

Network infrastructure providers, commonly referred to as Internet service providers (ISPs), serve a pivotal role in the use and management of the Internet by providing Internet connectivity and hosting services for end users and corporate customers (Andress and Winterfeld 2013; Wall 2007). ISPs serve as the first point of connection for an Internet user and as a through point for traffic, which means they can act as regulatory agents for managing user behavior. Most ISPs present end users and clients with a contractual agreement prior to the use of their services, typically a set of terms and conditions in the form of a Fair Use Policy (Crawford 2003; Wall 2007). The policy lays out responsible use of the ISP’s services by end users, such as not engaging in illegal downloading or attempts to hack or compromise computers (Crawford 2003). In addition, it may indicate the legal statutes that the ISP may have to uphold in the event of misuse (Wall 2007).


A Fair Use Policy may also spell out the limited liability ISPs have in the event that personal data or sensitive information is acquired from end users while they utilize the service. For instance, open public Wi-Fi hotspots in cafes and restaurants often indicate to users that the network is not secure from compromise and that users are expected to take care when using various services (Maimon et al. 2017). Unfortunately, evidence suggests that the majority of end users do not read these agreements in full or comprehend the exact liabilities they take on when utilizing the service (see Holt and Bossler 2016 for a review).

Network service providers also serve an important evidentiary function in the investigation of various offenses, as records of user behavior can be subpoenaed by law enforcement. User activities on a network are logged by IP address and additional identifying system-level data, creating opportunities to track search terms, web histories, and use patterns (Holt and Bossler 2016). An ISP that operates web hosting services may also be required to turn over information related to the IP addresses that connect to a given site it hosts in order to facilitate investigations (Wall 2007). As a result, ISPs may have to interact with local, state, federal, and international law enforcement agencies depending on their location and cooperative agreements. Additionally, web hosting services may be at risk of civil and criminal penalties if they are found to have facilitated the distribution or hosting of illicit content, particularly child pornography (Brenner 2011). Thus, network service providers must take care to comply with the domestic laws relevant to their physical location, as well as some foreign legal statutes, depending on the nature of their operations (Wall 2010).

Corporate Security Organizations

While network infrastructure providers operate largely as gatekeepers for end user populations, corporate security organizations have a much broader set of roles with sometimes conflicting agendas. Corporate security organizations are composed primarily of IT staff who maintain internal infrastructure, such as customer databases and company-owned intellectual property, as well as external-facing services, including web content and functionality, e-commerce tools, communications, and other services (Wall 2007). Corporate security staff are often focused on maintaining the interests of the organization, as well as meeting or exceeding all standards for security and legal liability compliance. The size of the organization and its products will often dictate the level of security resources it implements (Andress and Winterfeld 2013; Wall 2007). For instance, small to medium businesses may outsource IT security concerns to third-party organizations that can provide services on demand (Harris and Patten 2014). Large businesses and organizations are more likely to have their own IT security staff and to focus not only on the threat of external attacks from hackers and cybercriminals but also on misuse by insiders, whether current or former employees, contractors, or consultants (Spitzner 2001; Wall 2007). Organizations that have their own IT security staff may also pursue civil and criminal charges against employees in the event that their behaviors violate either contractual agreements related to corporate secrets or criminal statutes governing the misuse of equipment or network resources (Holt and Bossler 2016).

In addition, corporate security organizations have a responsibility to regulate and protect the behavior of their direct customers and those who utilize their services. To that end, online payment service providers can stop payments from being received or processed in the event their services are being used for illicit activities like the purchase of child pornography (International Center for Missing and Exploited Children 2017). Similarly, social media companies like Instagram, Snapchat, and Twitter can temporarily or permanently block users whose behavior violates community standards or terms of service (Idris 2019). They are also responsible for investigating and responding to complaints from users regarding content they feel is inappropriate, though it is not always clear to the public how such complaints are dealt with or how decisions are made internally (Idris 2019). These same organizations will also engage with law enforcement, reactively in response to subpoenas for user records and proactively if serious illicit activities are observed. Since little information is available regarding the interactions between law enforcement and corporate security organizations, it is difficult to assess their efficacy in regulating online spaces (Wall 2001, 2007).

Nongovernmental, Nonpolice Organizations

The next layer of regulatory bodies involved in policing the Internet consists of nongovernmental, nonpolice organizations (Britz 2008; Wall 2007). These are public and private organizations with no legislative or constitutional mandate to engage in formal law enforcement practices such as the arrest or prosecution of offenders. In the absence of a legal remit, their roles largely consist of information sharing and collection to aid other agencies involved in policing actions and to assist victims (Wall 2007). For instance, the Internet Watch Foundation (IWF) is a charity operating in the UK with a focus on reducing the quantity of child pornography and obscene content online (Internet Watch Foundation 2017). It operates a tipline for complaints of child sexual exploitation content and provides any investigative leads produced to law enforcement and network service providers to help reduce the quantity of harmful content circulating through the Internet.

Governmental Nonpolice Organizations

Though there are only a small number of nongovernmental, nonpolice organizations involved in cybercrime regulation, far more governmental organizations serve in this capacity. While unable to arrest or prosecute wrongdoers, they have the power to establish and monitor industry standards, levy fines for noncompliance, and bring cases to law enforcement agencies if necessary (Wall 2007). The specific function of these organizations in regulating Internet user or service provider behavior varies from country to country (Wall 2007). Some nations, such as China, Vietnam, and Pakistan, use these organizations to filter and minimize citizens’ access to online content (Freedom House 2018). Other nations use government agencies to establish standards for technology use and implementation in order to secure systems and information. For instance, the US Federal Trade Commission plays a key role in enforcing standards for consumer data and can fine corporations for the misuse or mismanagement of information. Similarly, the US Department of Energy plays multiple regulatory roles related to energy production, nuclear waste management, and cybersecurity of the power grid generally. It also establishes industry standards for the implementation and security of smart devices that serve the power grid in order to minimize service interruptions.

Public Police Organizations

The organizations with the greatest constitutional power to enforce laws and maintain order online are public police organizations (Wall 2007). Individuals who serve as officers or agents of police organizations have the legal mandate to investigate offenses and are empowered to make arrests for violations of law. The structure of police organizations varies in part by place, with many Western nations utilizing a system of local, regional, and national or federal organizations with specific jurisdictions and investigative powers. In the USA, local police and sheriffs’ offices serve small territories, usually the physical boundaries of a city or county, and are the primary point of contact for citizens to make criminal complaints (Walker 1999). State agencies are the next level of investigative power, as they may investigate cases that cross jurisdictional boundaries and may arrest suspects so long as they are within the state’s borders. Federal agencies, such as the US Secret Service and the Federal Bureau of Investigation, can investigate cases that cross state or international boundaries (Walker 1999).

The distributed nature of investigative responsibilities makes it difficult to identify a unified response to cybercrime across all levels of police agencies. In fact, local agencies tend to have the smallest response to cybercrime despite being the first point of contact for most citizens (e.g., Stambaugh et al. 2001). This is due to the other response roles these agencies serve in the community, particularly with respect to drugs, violent crime, and other crime types identified by the community as a problem. As a result, local agencies with a dedicated cybercrime response unit tend to be those that serve large, urban populations (Willits and Nowacki 2016).
Rural agencies that serve smaller populations are less likely to have an in-house cybercrime unit but may consolidate resources with neighboring communities. In turn, they may be able to mount some response to cybercrime calls for service, even if the units are not staffed directly by their own officers (Willits and Nowacki 2016). State police agencies also make their resources available to other local agencies so as to increase the potential for case clearance and to support cases in which the victim and offender live in separate jurisdictions. The US Internet Crimes Against Children (ICAC) task forces are an excellent example of multi-jurisdictional resources used to investigate cybercrimes. The ICAC program operates a task force in at least one jurisdiction within every state in the USA to support child sexual exploitation investigations that have a local connection across city and county lines (Marcum et al. 2010). These task forces typically investigate a range of crimes involving children, from the downloading and sharing of child pornography to human trafficking affecting minors (Marcum et al. 2010).

Federal or national police constitute the highest level of law enforcement and handle cases involving serious financial or personal offenses that affect victims across the country or the world. Their specific response roles are typically defined by statute, though they may be revised over time (Walker 1999). For instance, the US Secret Service was the first federal law enforcement agency tasked with investigating computer hacking cases in the mid-1980s as a result of the Computer Fraud and Abuse Act (Hollinger and Lanza-Kadeuce 1988). At the time, the Secret Service was housed in the Department of the Treasury with a remit to investigate financial crimes; given that hacking cases regularly affect financial institutions, the agency was a logical fit. Federal response roles have expanded dramatically over the past few decades, as the FBI, the Department of Homeland Security, and Customs and Border Protection all now play different roles in the investigation of cybercrimes. Analogous agencies respond to cybercrime elsewhere, such as the Royal Canadian Mounted Police (RCMP) in Canada and the National Crime Agency (NCA) in the UK.

Law Enforcement Challenges in Policing Cybercrime

The layered nature of public and private entities with different remits makes policing cyberspace an extremely complex process with multiple points at which errors may occur. This is most evident in the current academic research literature regarding formal, state-sanctioned law enforcement agencies (e.g., Holt et al. 2015a; Smith et al. 2003). Perhaps the most significant challenge relates to the international nature of cybercriminality as a whole. Offenders can target a person, computer system, or network anywhere in the physical world without leaving the comfort of their home. As a result, traditional models of jurisdictional investigative responsibility are rendered moot by the Internet (Brenner 2008). This may account for the limited investigative role local law enforcement agencies take in computer hacking and identity theft cases (e.g., Holt et al. 2010, 2015b). Instead, local agencies are more likely to respond to person-based cybercrime cases such as child sexual exploitation and harassment (Brenner 2008; Holt et al. 2015a). In such cases, the victim and offender are more likely to reside in physical proximity to one another, and the emotional and physical harm these offenses cause may make more sense to the prosecutors and judges who would hear the case (Brenner 2008; Holt and Bossler 2016).

Resource allocations at the local level also directly affect the capability of police agencies to investigate cybercrime cases. As noted earlier, police agencies often prioritize local community crime concerns so as to provide direct, measurable outcomes that make residents feel safer (Walker 1999). Cybercrimes are often not visible to the community, relative to drug dealing, gang activity, and public order crimes, which are immediately evident to residents (Goodman 1997). Thus, investing resources in staffing and training cybercrime response officers may appear to be a misallocation of resources. Moreover, even if cybercrime cases are viewed as a serious problem by community residents, the costs associated with training officers and purchasing the computer hardware, software, and forensic data capture tools needed to investigate properly may be outside an agency’s budget (Ferraro and Casey 2005; Hadlington et al. 2018; Holt et al. 2015a).

The confluence of these problems may account for the current state of cybercrime responses at the local level, particularly in the USA. Recent research demonstrates that local police agencies, primarily in urban centers, are developing specialized units that reduce the need for line officers across the agency to respond to cybercrime calls for service (Willits and Nowacki 2016). Smaller, rural police agencies appear less likely to invest in such resources, creating a possible disparity in the views of line officers as to their responsibility to respond to cybercrime cases generally (Bossler and Holt 2012; Holt et al. 2015a; Stambaugh et al. 2001; Wall and Williams 2013). Research using small samples of line officers within the USA suggests that officers feel the community should take better steps to minimize its risk of cybercrime victimization, thereby relinquishing the police role in responding to these offenses (Bossler and Holt 2012). Such attitudes may not be easily changed and can create ripple effects in the community that produce long-term negative outcomes.
For instance, studies of online fraud victims’ experiences with police in Australia and the UK suggest they are highly dissatisfied with the police response when they report their experiences (e.g., Button and Cross 2017; Cross 2015). Such circumstances may reduce the likelihood of future reporting by victims and diminish their overall views of police legitimacy (Button and Cross 2017). In turn, this may lead police to underestimate the frequency and impact of cybercrime victimization.

Some line officers also suggest that cybercrimes should be the purview of federal police agencies, as they have greater resources and latitude to investigate these offenses (Hadlington et al. 2018; Holt et al. 2015a). Federal or national agencies are not a perfect solution for all offenses, however, as their investigative powers are limited by several factors. First, they may only be able to respond when an incident exceeds a specific monetary threshold of harm to victims (Brenner 2008). As a result, an email-based scam that leads victims to lose hundreds of dollars will likely not meet federal guidelines. An investigator would have to link the offender to multiple victims and demonstrate a conspiracy to defraud to make the case eligible for prosecution. Such efforts would likely not be viewed as a useful expenditure of resources, making these cases unlikely to be pursued when presented to federal agencies. Additionally, the lack of extradition arrangements between certain countries, such as the USA and China, Russia, or Ukraine, makes it difficult for cases to lead to an arrest and successful prosecution (Brenner 2011; Holt and Bossler 2016). While federal cases can be brought against individuals who cannot be physically extradited to the USA, they are infrequent, as they appear to act more as a deterrent to future action than as a truly punitive measure against the individuals targeted (Brenner 2008). Such cases also highlight the disparity in law enforcement capabilities and illustrate that perpetrators need only reside within certain countries in order to operate with impunity.

Industry Mechanisms to Maintain Order Online

The number of nongovernmental industry organizations with the ability to monitor and assess Internet user behavior has created a unique situation in online order maintenance. There appear to be two separate but related strands of action taking place to regulate the Internet: collaborative industry and law enforcement actions, and distinct extralegal activities intended to affect cybercriminality (Dupont 2017; Hutchings and Holt 2016). While joint actions that combine the resources of industry security organizations with the legal powers of law enforcement are justifiable, efforts taken by organizations outside of any legal or constitutional responsibility may present challenges to the rule of law and state-based power generally (Dupont 2017; Hutchings and Holt 2016).

There are multiple examples of industry groups collaborating with law enforcement, or acting within legal and regulatory guidelines for organizational practice. In fact, network service providers and corporate security agents may be the first to identify wrongdoing through their platforms and report it to law enforcement. For instance, the detection and removal of child sexual exploitation content from websites, social media platforms, and streaming services may be more easily accomplished by entities like Google, Microsoft, and Facebook, as they control and manage the content platforms. Their ability to scan for and detect such content means they can move with greater speed and latitude than law enforcement agencies. Indeed, some scholars argue that industry coalitions and working groups may be more effective at the investigation and mitigation of cybercriminality (Brenner 2008).
The utility of industry coalitions is perhaps best demonstrated by the efforts of the Financial Coalition Against Child Pornography (FCACP), which consists of ISPs, financial institutions, and nongovernmental, nonpolice agencies like the International Center for Missing and Exploited Children (International Center for Missing and Exploited Children 2017). The group organized in 2006 in an attempt to reduce the use of legitimate financial payment providers to send and receive payments related to the production and distribution of child sexual exploitation content (National Center for Missing and Exploited Children 2017). The group established new industry standards to screen for commercial exploitation content vendors actively using their platforms and to block future actors from utilizing their services (International Center for Missing and Exploited Children 2017). These efforts led to a 93% reduction in the complaints received regarding the sale of child sexual exploitation content through legitimate financial payment processors (Financial Coalition Against Child Pornography 2016). Similar efforts are evident in the response to digital piracy, which can best be identified and mitigated through records maintained by network service providers.

19  Police and Extralegal Structures to Combat Cybercrime

These organizations have the ability to monitor user behaviors and detect the use of specific programs and protocols that may be used in file transfer processes associated with the distribution of intellectual property (Nhan 2013). Additionally, revisions to the US Digital Millennium Copyright Act and other international statutes made service providers liable in the event they did not proactively attempt to mitigate piracy behaviors on their networks (Brenner 2008). In order to be compliant with the law, ISPs are now empowered to cooperate with various industry associations like the Motion Picture Association of America to notify users when they are thought to have engaged in piracy using their services (Nhan 2013). An ISP will typically send a cease and desist letter stating that an individual utilizing an IP address associated with its service appears to have downloaded intellectual property and that, if further infringement is observed, the account may be blocked or fined. As a result, cease and desist notices serve as an informal deterrent to piracy behavior without the direct involvement of law enforcement or prosecutors (Nhan 2013). There are a number of other efforts undertaken by industry and commercial organizations to both covertly and overtly influence the scope of cybercrime, some of which take place outside of cooperative relationships with law enforcement. For example, intellectual property producers like entertainment companies have taken steps to impact digital piracy networks through extralegal means. In some cases, companies have paid third-party groups to produce what appear to be copies of video games and media that are actually incomplete or filled with glitches that render the files unusable (e.g., TVTechnology 2007).
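The cease and desist workflow described above is, at its core, a record-matching exercise: a rights holder supplies an IP address and an observation time, and the ISP looks up which subscriber held that address at that moment before generating a warning letter. The sketch below is a minimal illustration; the record layout, field names, and letter wording are all invented for this example and do not reflect any actual ISP's systems.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical address-assignment record an ISP might retain.
@dataclass
class LeaseRecord:
    ip: str
    start: datetime
    end: datetime
    subscriber_id: str

def find_subscriber(leases, ip, observed_at):
    """Return the subscriber who held `ip` at the time named in a rights holder's notice."""
    for lease in leases:
        if lease.ip == ip and lease.start <= observed_at < lease.end:
            return lease.subscriber_id
    return None  # no match: the notice cannot be attributed to an account

def cease_and_desist(subscriber_id, work, observed_at):
    """Generate an illustrative warning letter for the matched account."""
    return (
        f"Notice to subscriber {subscriber_id}: an IP address assigned to your "
        f"account appears to have downloaded '{work}' on {observed_at:%Y-%m-%d}. "
        "If further infringement is observed, your service may be blocked."
    )

leases = [LeaseRecord("203.0.113.7", datetime(2019, 5, 1), datetime(2019, 5, 8), "CUST-0042")]
sub = find_subscriber(leases, "203.0.113.7", datetime(2019, 5, 3, 14, 0))
print(cease_and_desist(sub, "Example Film", datetime(2019, 5, 3, 14, 0)))
```

In practice this matching step is also where errors arise: dynamic address reassignment and shared connections mean a letter may reach a subscriber who was not the infringer, which is one reason such notices function as informal deterrents rather than legal findings.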
Network service providers and organizations also attempt to disrupt piracy distribution chains by cutting off access to key data files in peer-to-peer file sharing software or by throttling network connectivity associated with certain Internet protocols to complicate the process of downloading content (e.g., Torkington 2005). It is thought that these steps may complicate the process of offending and reduce the frequency of piracy, or deter individuals from attempting to pirate certain content (Holt and Copes 2010). In much the same way, commercial cybersecurity vendors now constitute a unique layer of enforcement and management that not only detects and mitigates cybercrime tools but also gathers intelligence on attacker methods and their targets (Dupont 2017). For example, antivirus vendors’ software is deployed across the world by home consumers as well as major enterprises and governments. These products can minimize the efficacy of attack efforts and provide real-time intelligence on threats to networks by place and attack sophistication (Hatmaker 2017). As a result, vendors frequently work with law enforcement and intelligence organizations to assess threats and mitigate them when possible (Holt and Bossler 2016). The knowledge and capabilities of cybersecurity vendors create a unique dynamic whereby industry sources know a great deal about geopolitical and financial crime methodologies and practices, though they have no specific national or legal allegiances. Their actions can also influence international relations by outing the efforts of nation-states pitted against one another. This was evident in the efforts of the cybersecurity firm CrowdStrike over the last decade. The company has an extremely solid reputation in the cybersecurity industry, as it has been hired to investigate


a range of attacks against high-profile government and industry targets (Leopold 2017). Many of its investigations stem from attacks originating from nation-states like Russia and China, and the firm frequently publishes information on its results so as to inform the broader field of cybersecurity (Leopold 2017). At the same time, these disclosures may also spur further attacks or cause attacker communities to conceal their efforts from others. For instance, CrowdStrike published a report in 2015 detailing efforts by Chinese hackers against various US targets to gain economic advantage and access to sensitive intellectual property. The attacks were somewhat sophisticated, and the report attempted to attribute them to subunits within the Chinese military. The company stated in the report that it was not making the information known to deliberately blame China or suggest malfeasance by the nation, but to provide information on these actions generally (Holt and Bossler 2016). Such a statement can easily be ignored by the popular press, and the disclosure may negatively impact relationships between countries generally.

Assessing the Risk of Extralegal Efforts and Interventions

Taken as a whole, there is a need to question how corporate security organizations will be incorporated into the broader effort to police the Internet without diminishing the role of legally mandated, government-operated enforcement agencies (Dupont 2017; Holt and Bossler 2016; Hutchings and Holt 2016). The benefits of engaging industry and nongovernmental organizations in cybercrime policing are manifold, but they create unique civil litigation risks and could potentially undermine the broader rule of law depending on their use (Dupont 2017; Holt 2018). Since corporate security organizations have no legal or constitutional remit to enforce national laws, any actions that they take must be clearly justifiable in the context of the law (Holt and Bossler 2016; Hutchings and Holt 2016; Sunshine and Tyler 2003; Tyler 2004). An excellent example of the risks presented by allowing corporate security organizations to take direct enforcement actions is a 2014 case in which Microsoft sued various malicious software operators and ISPs in civil court on the basis that their actions directly impacted the security of Microsoft users (Athow 2014). The company filed a civil lawsuit against two men, Naser Al Mutairi from Kuwait and Mohamed Benabdellah from Algeria, and a Domain Name Service provider called No-IP. The suit alleged that the men created keylogging software that affected computers using Microsoft software products, while No-IP provided hosting services for their malware and data theft operations (Athow 2014). The court allowed Microsoft to seize all the domains operated by No-IP and blocked any infected computer from connecting to the Internet. Some of the 1.8 million users of No-IP’s services were not infected, but were still blocked from the Internet. Additionally, No-IP did not notify their customers of the infections or the potential outages, which outraged their customer base (Munson 2014).
Additionally, the actions enabled Microsoft to seize sensitive customer information from No-IP, which could have violated customers’ privacy rights (Adhikari 2013). No-IP


actually argued that Microsoft overstepped its bounds, acted in a way that exceeded its responsibilities, and harmed their customers. The suit was eventually settled out of court, and Microsoft issued a formal apology to No-IP and its parent company (McMillan 2014). Incidents like this call into question what actually occurred and how effective corporations may be in acting as agents of public order management in such a public fashion (McMillan 2014; Dupont 2017). While Microsoft may argue it was within its legal rights, the absence of governmental police organizations and the resulting civil actions set an example for future actions that remove the perceived need for law enforcement agencies at any level of government to respond to cybercrimes (Dupont 2017; Holt 2018; Holt et al. 2015b; Hutchings and Holt 2016). The financial, data, and technological resources at the disposal of corporations like Microsoft and Google are far greater than those of any law enforcement agency. As a result, it may seem sensible to empower these organizations to act more swiftly, given the limited resources, bureaucratic structures, and extradition issues that may slow the law enforcement process. At the same time, excluding police agencies from disruption and takedown efforts may damage the perceived legitimacy of law enforcement (Dupont 2017; Holt 2018). Though such a situation may seem unlikely, there is already some evidence of an erosion in public confidence regarding federal responses to cybercrime in the USA. Over the last few years, US legislators have introduced bills that would allow companies to engage in retaliatory attacks against individuals or groups who attempt to steal intellectual property or sensitive information (Wolff 2017).
One of the bill’s supporters, Representative Tom Graves (Republican, Georgia), argued that government and law enforcement are unable to respond with the same efficiency as organizations that know they have been hacked (Wolff 2017). Graves stated, “this bill is about empowering individuals to defend themselves online, just as they have the legal authority to do during a physical assault” (Wolff 2017). Similarly, Stewart Baker, a former assistant secretary of homeland security who supported the legislation, stated: “the government is completely consumed just trying to take care of its own data and tracking its own attackers. It doesn’t have the resources to help firms and probably never will” (Wolff 2017). Though this sort of legislation has yet to pass, its public discussion demonstrates a serious diminishment in the perceived responsibility of law enforcement to deal with cybercrimes. In effect, it recognizes the limited power of state-sponsored police agencies to respond in a timely fashion.

Discussion and Conclusions

This chapter demonstrates the complex nature of policing online activities and the inherent challenges evident in technology use, monitoring, reporting, and enforcement. Such challenges are not a function of recent technological innovations but rather longstanding historical impediments evident since the earliest days of cybercrime (see Goodman 1997; Senjo 2004; Stambaugh et al. 2001). Despite various calls from


scholars, policymakers, and practitioners for change, there have been few innovative steps to make the different systems more cohesive or streamlined (Holt et al. 2015a; Wall 2010). Regardless of the underlying causes of this inertia, several clear and immediate recommendations can be made as to how to improve the state of enforcement in online spaces. One of the most immediate and obvious changes is an increased financial and constitutional remit for law enforcement agencies at all levels related to cybercrime investigation. First, clearly elaborating the role that local and federal law enforcement must take in the investigation of cybercrime through policy statements is essential to demonstrate to police administrators that they must take these offenses seriously (Stambaugh et al. 2001). England and Wales provide an important example in this respect, as the government has specifically noted in its national strategy to combat cybercrime that local agencies must increase their investigation and arrest rates related to cybercrime (HM Government 2016). Investing in equipment and training for local, state, and federal or national police forces will in turn provide them with the resources necessary to actually respond to calls for service. Second, there is a need to dramatically reform and improve the public-private sector partnerships that currently exist and to increase the transparency of their efforts and outcomes. As technology continues to evolve and user behaviors adapt to new devices and platforms, law enforcement efforts will have to adapt in tandem. Acknowledging and empowering the private sector to be both accountable and responsible for combating cybercrime and monitoring online spaces may play a pivotal role in ensuring collaboration.
For instance, social media companies such as Facebook and Twitter have taken a scattershot approach to banning content that enables certain forms of hate speech and terrorist ideologies on their platforms (Idris 2019). If regulatory or legislative accountability were introduced to ensure such content is banned, these platforms might be rendered safer for public use and demonstrate their commitment to the public good. Finally, empowering technology companies and corporate security organizations to produce tools and techniques that can automate aspects of the identification and containment of illicit activity online is vital to further mitigate cybercriminality. The development by Microsoft and other companies of automated tools that can detect and remove child sexual exploitation content serves as an excellent example of the ways the Internet can be partially secured by design. Similarly, Facebook and Google have begun to remove websites that feature images of people shared without their permission, such as the nude or sexual images some refer to as revenge porn (Lee 2015). The companies take the perspective that the individual content creator (the person featured in the photo) did not authorize the materials to be shared by others. As a result, the images are being shared in violation of intellectual property laws and can be removed from the Internet. This novel approach helps encourage victims to report their experience and feel a sense of control over the situation without necessarily having to engage law enforcement in the effort. Such solutions provide a direction for future efforts that may foster a safer and more secure Internet.
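Automated detection tools of the kind mentioned above generally work by comparing a fingerprint of each upload against a database of fingerprints derived from previously identified material. The sketch below is a deliberately simplified stand-in that uses an exact cryptographic hash; production systems such as Microsoft's PhotoDNA instead use perceptual hashes that tolerate resizing and re-encoding, and the blocklist here is invented for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match stand-in; real systems use perceptual hashing (e.g., PhotoDNA).
    return hashlib.sha256(data).hexdigest()

def scan_upload(data: bytes, blocklist: set) -> bool:
    """Return True if the upload matches a known-bad fingerprint and should be removed and reported."""
    return fingerprint(data) in blocklist

# Hypothetical blocklist seeded from previously identified material.
known_bad = {fingerprint(b"bytes of a previously identified illegal image")}
print(scan_upload(b"bytes of a previously identified illegal image", known_bad))  # True
print(scan_upload(b"an unrelated holiday photo", known_bad))  # False
```

An exact hash only matches byte-identical files, which is why real deployments rely on perceptual hashing: an abusive image that has merely been recompressed would evade the naive check above.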


Cross-References

▶ Cyberstalking
▶ Global Voices in Hacking (Multinational Views)
▶ Hate Speech in Online Spaces
▶ Malicious Software Threats
▶ Prostitution and Sex Work in an Online Context
▶ Spam-Based Scams

References

Adhikari, R. (2013). Microsoft’s ZeroAccess botnet takedown no ‘Mission Accomplished’. TechNewsWorld, December 9, 2013. [Online] Available at: http://www.technewsworld.com/story/79586.html
Andress, J., & Winterfeld, S. (2013). Cyber warfare: Techniques, tactics, and tools for security practitioners (2nd ed.). Waltham: Syngress.
Athow, D. (2014). Microsoft seizes 22 No-IP domains in malware crackdown. TechRadar, July 1, 2014. [Online] Available at: http://www.techradar.com/news/software/security-software/microsoft-siezes-22-no-ip-domains-in-malware-crackdown-1255625
Black, D. (1998). The social structure of right and wrong. New York: Emerald Publishing.
Bossler, A. M., & Holt, T. J. (2012). Patrol officers’ perceived role in responding to cybercrime. Policing: An International Journal of Police Strategies & Management, 35, 165–181.
Brenner, S. W. (2008). Cyberthreats: The emerging fault lines of the nation state. New York: Oxford University Press.
Brenner, S. W. (2011). Defining cybercrime: A review of federal and state law. In R. D. Clifford (Ed.), Cybercrime: The investigation, prosecution, and defense of a computer-related crime (3rd ed., pp. 15–104). Raleigh: Carolina Academic Press.
Britz, M. T. (2008). Terrorism and technology: Operationalizing cyberterrorism and identifying concepts. In T. J. Holt (Ed.), Crime on-line: Correlates, causes, and context (pp. 193–220). Raleigh: Carolina Academic Press.
Button, M., & Cross, C. (2017). Cyber frauds, scams, and their victims. London: Routledge.
Cho, D., & Kwon, K. H. (2015). The impacts of identity verification and disclosure of social cues on flaming in online user comments. Computers in Human Behavior, 51, 363–372.
Crawford, C. (2003). Cyberplace: Defining a right to internet access through public accommodation law. Temple Law Review, 76, 225.
Cross, C. (2015). No laughing matter: Blaming the victim of online fraud. International Review of Victimology, 21, 187–204.
Dupont, B. (2017). Bots, cops, and corporations: On the limits of enforcement and the promise of polycentric regulation as a way to control large-scale cybercrime. Crime, Law and Social Change, 67(1), 97–116.
Ferraro, M., & Casey, E. (2005). Investigating child exploitation and pornography: The internet, the law and forensic science. New York: Elsevier.
Financial Coalition Against Child Pornography. (2016). Internet Merchant Acquisition and Monitoring Sound Practices to Help Reduce the Proliferation of Commercial Child Pornography. [Online]
Freedom House. (2018). Freedom on the Net 2018. [Online] Available at: https://freedomhouse.org/report/freedom-net/freedom-net-2018
Goodman, M. D. (1997). Why the police don’t care about computer crime. Harvard Journal of Law and Technology, 10, 465–494.


Hadlington, L., Lumsden, K., Black, A., & Ferra, F. (2018). A qualitative exploration of police officers’ experiences, challenges, and perceptions of cybercrime. Policing: A Journal of Policy and Practice, pay090. https://doi.org/10.1093/police/pay090
Harris, M. A., & Patten, K. P. (2014). Mobile device security considerations for small- and medium-sized enterprise business mobility. Information Management & Computer Security, 22(1), 97–114.
Hatmaker, T. (2017). US Government bans Kaspersky software citing fears about Russian intelligence. TechCrunch, September 13, 2017. [Online] Available at: https://techcrunch.com/2017/09/13/kaspersky-executive-branch-ban-dhs-homeland-security/
Hayne, S. C., Wang, H., & Wang, L. (2015). Modeling reputation as a time series: Evaluating the risk of purchase decisions on eBay. Decision Sciences, 46(6), 1077–1107.
Hern, A. (2016). Islamic State Twitter accounts get a rainbow makeover from Anonymous hackers. The Guardian, June 17, 2016. [Online] Available at: https://www.theguardian.com/technology/2016/jun/17/islamic-statetwitter-accounts-rainbow-makeover-anonymous-hackers
Hinduja, S. (2004). Perceptions of local and state law enforcement concerning the role of computer crime investigative teams. Policing: An International Journal of Police Strategies and Management, 3, 341–357.
HM Government. (2016). National Cyber Security Strategy 2016–2021. [Online] Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/567242/national_cyber_security_strategy_2016.pdf
Hollinger, R. C., & Lanza-Kaduce, L. (1988). The process of criminalization: The case of computer crime laws. Criminology, 26(1), 101–126.
Holt, T. J. (2007). Subcultural evolution? Examining the influence of on- and off-line experiences on deviant subcultures. Deviant Behavior, 28(2), 171–198.
Holt, T. J. (2018). Regulating cybercrime through law enforcement and industry mechanisms. The ANNALS of the American Academy of Political and Social Science, 679(1), 140–157.
Holt, T. J., & Bossler, A. M. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. London: Routledge.
Holt, T. J., & Copes, H. (2010). Transferring subcultural knowledge on-line: Practices and beliefs of persistent digital pirates. Deviant Behavior, 31(7), 625–654.
Holt, T. J., Bossler, A. M., & Fitzgerald, S. (2010). Examining state and local law enforcement perceptions of computer crime. In T. J. Holt (Ed.), Crime on-line: Correlates, causes, and context (pp. 221–246). Raleigh: Carolina Academic Press.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2015a). Policing cybercrime and cyberterror. Raleigh: Carolina Academic Press.
Holt, T. J., Smirnova, O., Chua, Y. T., & Copes, H. (2015b). Examining the risk reduction strategies of actors in online criminal markets. Global Crime, 16(2), 81–103.
Holt, T. J., Freilich, J. D., & Chermak, S. M. (2017). Exploring the subculture of ideologically motivated cyber-attackers. Journal of Contemporary Criminal Justice, 33(3), 212–233.
Hutchings, A., & Holt, T. J. (2016). The online stolen data market: Disruption and intervention approaches. Global Crime, 18(1), 11–30.
Idris, I. K. (2019). More responsive journalism, not social media ban, is needed to fight disinformation in Indonesia. The Conversation, May 23, 2019. [Online] Available at: http://theconversation.com/more-responsive-journalism-not-social-media-ban-is-needed-to-fight-disinformation-in-indonesia-117604
Innes, M. (2004). Reinventing tradition? Reassurance, neighbourhood security and policing. Criminal Justice, 4(2), 151–171.
International Center for Missing and Exploited Children. (2017). Commercial child pornography: A brief snapshot of the Financial Coalition Against Child Pornography. [Online] Available at: http://www.icmec.org/wp-content/uploads/2016/09/FCACPTrends.pdf
IWF. (2017). Annual report. [Online] Available at: https://www.iwf.org.uk/sites/default/files/reports/2017-09/IWF%202015%20Annual%20Report%20Final%20for%20web.pdf


Lee, S. (2015). Pornhub joins fight against revenge porn. Newsweek, October 14, 2015. [Online] Available at: http://www.newsweek.com/pornhub-revenge-porn-help-victims-383160?utm_source=internal&utm_campaign=incontent&utm_medium=related1
Leopold, J. (2017). He solved the DNC hack. Now he’s telling his story for the first time. BuzzFeedNews, November 8, 2017. [Online] Available at: https://www.buzzfeed.com/jasonleopold/he-solved-the-dnc-hack-now-hes-telling-his-story-for-the?utm_term=.pbJLQlqayY#.xhzbzLd30D
Maimon, D., Becker, M., Patil, S., & Katz, J. (2017). Self-protective behaviors over public WiFi networks. In The LASER Workshop: Learning from Authoritative Security Experiment Results (LASER 2017) (pp. 69–76).
Marcum, C., Higgins, G. E., Freiburger, T. L., & Ricketts, M. L. (2010). Policing possession of child pornography online: Investigating the training and resources dedicated to the investigation of cyber crime. International Journal of Police Science & Management, 12, 516–525.
Mavlanova, T., & Benbunan-Fich, R. (2010). Counterfeit products on the internet: The role of seller-level and product-level information. International Journal of Electronic Commerce, 15(2), 79–104.
McMillan, R. (2014). How Microsoft appointed itself sheriff of the Internet. Slate. [Online] Available at: http://www.slate.com/articles/technology/technology/2014/10/no_ip_what_microsoft_lawsuits_have_done_for_security_and_software_companies.html
Munson, L. (2014). Microsoft and No-IP reach settlement over malware takedown. Naked Security by Sophos, July 11, 2014. [Online] Available at: https://nakedsecurity.sophos.com/2014/07/11/microsoft-and-no-ip-reach-settlement-over-malware-takedown/
Narcum, J. A., & Coleman, J. T. (2015). You can’t fool me! Or can you? Assimilation and contrast effects on consumers’ evaluations of product authenticity in the online environment. Journal of Asian Business Strategy, 5(9), 200.
National Center for Missing and Exploited Children. (2017). FAQs. [Online] Available at: www.missingkids.com/Missing/FAQ
National Institute of Justice. (2008). Electronic crime scene investigations: A guide for first responders (2nd ed.), NCJ 219941. Washington, DC: National Institute of Justice.
Newman, G., & Clarke, R. (2003). Superhighway robbery: Preventing e-commerce crime. Cullompton: Willan Press.
Nhan, J. (2013). The evolution of online piracy: Challenge and response. In T. J. Holt (Ed.), Crime on-line: Causes, correlates, and context (pp. 61–80). Raleigh: Carolina Academic Press.
PERF. (2014). The role of local law enforcement agencies in preventing and investigating cybercrime. Washington, DC: Police Executive Research Forum, Critical Issues in Policing Series.
Senjo, S. R. (2004). An analysis of computer-related crime: Comparing police officer perceptions with empirical data. Security Journal, 17, 55–71.
Smith, R. G., Grabosky, P., & Urbas, G. (2003). Cyber criminals on trial. Cambridge: Cambridge University Press.
Spitzner, L. (2001). The value of honeypots, part one: Definitions and values of honeypots. Symantec. [Online] Available at: https://www.symantec.com/connect/articles/value-honeypots-part-one-definitions-and-values-honeypots
Stambaugh, H., Beaupre, D. S., Icove, D. J., Baker, R., Cassady, W., & Williams, W. P. (2001). Electronic crime needs assessment for state and local law enforcement. Washington, DC: National Institute of Justice, NCJ 186276.
Sunshine, J., & Tyler, T. R. (2003). The role of procedural justice and legitimacy in shaping public support for policing. Law & Society Review, 37(3), 513–548.
Torkington, N. (2005). HBO attacking BitTorrent. Radar, October 4, 2005. [Online] Available at: http://radar.oreilly.com/2005/10/hbo-attacking-bittorrent.html
TVTechnology. (2007). Michael Moore’s ‘Sicko’ combats leaks on the Web. June 21, 2007. [Online] Available at: http://www.tvtechnology.com/business/0107/wwny-tv-selects-axonmultiviewer-and-video-logger/254884


Tyler, T. R. (2004). Enhancing police legitimacy. The Annals of the American Academy of Political and Social Science, 593(1), 84–99.
Walker, S. (1999). The police in America: An introduction. Boston: McGraw Hill College.
Wall, D. S. (2001). Cybercrimes and the internet. In D. S. Wall (Ed.), Crime and the internet (pp. 1–17). New York: Routledge.
Wall, D. (2007). Cybercrime: The transformation of crime in the information age (Vol. 4). Cambridge, UK: Polity.
Wall, D. (2010). The organization of cybercrime and organized cybercrime. In Current issues in IT security (pp. 51–66). Berlin: Duncker & Humblot.
Wall, D. S., & Williams, M. L. (2013). Policing cybercrime: Networked and social media technologies and the challenges for policing. Policing and Society, 23, 409–412.
Weiss, L. M., Capozzi, M. M., & Prusak, L. (2004). Learning from the internet giants. MIT Sloan Management Review, 45(4), 79.
Willits, D., & Nowacki, J. (2016). The use of specialized cybercrime policing units: An organizational analysis. Criminal Justice Studies, 29, 105–124.
Wolff, J. (2017). When companies get hacked, should they be allowed to hack back? The Atlantic, July 14, 2017. [Online] Available at: https://www.theatlantic.com/business/archive/2017/07/hacking-back-active-defense/533679/

Police Legitimacy in the Age of the Internet

20

Johnny Nhan and Neil Noakes

Contents

Introduction
A Brief History of the Police and the Community
Emergence of the Public Information Officer
Police and the World of Social Media
Symbolic Uses of Social Media
Police Narrative
“Humanizing” Police Officers
The Different Uses of Social Media on Different Platforms
Undermining Legitimacy Through Surveillance of Social Media
Hazards and Challenges of Social Media
The Hazards of Vigilante Justice
Officer Personal Social Media Usage
Doxing/Doxxing
Conclusion
Cross-References
Appendix
References

Abstract

J. Nhan (*)
Texas Christian University, Fort Worth, TX, USA
e-mail: [email protected]

N. Noakes
Fort Worth Police Department, Texas Christian University, Fort Worth, TX, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_66

In recent years, police departments around the country have used social media in a variety of ways, ranging from sharing important public safety information to producing dance videos. This chapter explores the growing use of social media by police, examining practical and symbolic reasons for its use using


interview data from police public information officers. Initial findings suggest social media is playing a growing and important role in police departments, particularly in building community relations. However, the findings also reveal potential legal and personal hazards when social media is used imprudently, especially by individual officers.

Keywords

Police social media · Police public information · Police officer doxing · Police viral videos

Introduction

In 2018, police departments around the country posted online videos of officers lip-syncing and dancing to Bruno Mars’ hit song “Uptown Funk” as part of a “lip-sync challenge.” Norfolk, Virginia Police Department’s rendition was viewed over 70 million times on Facebook (Holson 2018). In 2016, numerous police departments, ranging from the NYPD to the LAPD and internationally, produced videos of officers dancing the running-man dance in the “running man challenge.” These videos have gone “viral,” or become wildly popular within days, through sharing on the Internet, often via social media. While many veteran officers may cringe at these silly videos and believe they may erode or undermine the authoritative image of police, the use of social media by law enforcement agencies can serve as a powerful tool to potentially increase police legitimacy. Social media can also serve as a platform to impart law enforcement’s narrative and further community-policing agendas, which is particularly useful at a time when traditional news media has been criticized for bias and forwarding its own narratives. Additionally, online social media platforms, such as Facebook, Twitter, and Instagram, have added to law enforcement’s crime control repertoire by giving investigators additional information about a suspect and his or her network of family, friends, and acquaintances, among other information. Moreover, the existence of social media has empowered citizens to actively participate in investigations. Many police agencies have tapped into the power of crowdsourcing by directly eliciting information from social media members. Recently, in May 2019 in Fort Worth, Texas, 8-year-old Salem Sabatka was found safe after a brazen broad-daylight kidnapping while walking with her mother (Johnson 2019). A department spokesman specifically thanked their social media followers for helping to identify the vehicle and two citizens for finding the suspect’s car after searching the neighborhood.
This type of social media assistance has led to the emergence of online vigilantism, described by some criminologists as crowdsourcing criminology (see Trottier 2015; Nhan et al. 2017). Social media has not always been beneficial to law enforcement, however. Police viral videos more often capture controversial incidents, ranging from excessive use of force to unflattering confrontations with members of the public, often between white

20

Police Legitimacy in the Age of the Internet

405

officers and racial minorities, confrontations that strain police-citizen relations. In 2015, for instance, a viral cell phone video showed a South Carolina sheriff’s deputy flipping a high school student out of her desk and throwing her to the ground. The narrative of a white male officer using excessive force against a black female student quickly spread, resulting in the officer being fired. Similar incidents of police use of force have been captured on video and spread throughout social media, illustrating the hazards social media poses for the police. Moreover, police have been criticized on privacy grounds for policing social media and harvesting information on its users. This issue has been highlighted by police monitoring of groups during civil unrest, such as the riots in England in 2011, the Bahraini uprising of 2011, and, more recently, Black Lives Matter.

This chapter discusses police legitimacy in social media through an examination of the existing literature, supplemented by face-to-face interviews with (N = 10) police public information officers (PIOs) from six small- to large-sized municipal police departments in the Dallas-Fort Worth metropolitan area. In addition, input from two high-profile public information experts was included in this chapter. Subjects were selected through a convenience and snowball sample from departments known to be active on social media. Since one of the authors is currently a ranking police officer, access to subjects was relatively easy. First, the chapter discusses how police departments use social media today. Next, the effects of social media on police legitimacy are discussed, specifically the control of narratives, a “humanizing” effect on officers, and virtual community policing. Furthermore, the impact of police social media use on privacy is explored.
Finally, the chapter examines the hazards of social media use and the conflict it can generate in the contentious relationship between the police and the public.

A Brief History of the Police and the Community

The relationship between the police and the public has fundamentally changed since the 1930s. Prior to the 1930s, the relationship between police and citizens was described as “close and personal” (Kelling and Moore 1988). However, this closeness was cited by police critics as a main source of officer corruption, incompetence, and brutality. Officers during this period enforced community norms regardless of legality and furthered certain political agendas. Intense criticism, coupled with a growing professionalization movement in science and medicine that generated unprecedented prestige and pay for scientists and physicians, served as the impetus for police to do the same, ushering in the transition from the political era to the reform era of policing. Police professionalism was pushed by reformers such as August Vollmer, O.W. Wilson, and J. Edgar Hoover to replace the image and reputation of the incompetent and corrupt officer with one akin to that of doctors and scientists, which commanded high respectability and competence. Police restricted personal relationships with citizens through the physical separation of reactive patrols in vehicles and through impersonal officer demeanors described by Kelling and Moore (1988) as

406

J. Nhan and N. Noakes

“professionally remote.” Police legitimacy was derived from strict adherence to law enforcement. However, an unintended consequence of this impersonal and stoic demeanor was tension between the police and the community, particularly poor, African-American communities. This strained relationship was ultimately blamed for a series of police incidents, such as in the Watts neighborhood of Los Angeles, that sparked major riots and social upheaval, which in turn led to further reforms.

Another paradigm shift in law enforcement occurred in the 1980s as a response to these urban riots, as impersonal professionalism gave way to a service orientation under the community problem-solving era. Police legitimacy in the community era was based on neighborhood and community support, which was achieved through more intimate strategies and activities, such as a return to foot patrols, with an emphasis on solving problems and community input. While there are variants of community policing, the DOJ identifies three core components that can be operationalized: (1) community partnerships with stakeholders ranging from other agencies to private businesses and individuals, (2) organizational transformation to align management, structure, personnel, and information systems to support partnerships and problem-solving, and (3) proactive problem-solving. More philosophically, at the heart of community policing is empowering citizens. Most notable is the use of town meetings, where citizens can meet face-to-face with officers and communicate concerns and issues. Ideally, this open format allows for a dialogue with police, who in turn can direct resources to address citizen concerns. Such concerns may seem trivial to police, or may not directly involve a police function, but they can be important to neighborhood residents.
In their study of Santa Ana, California residents, Bridenball and Jesilow (2008) found that citizens were not as concerned with actual violent crime, such as neighborhood shootings, as with matters of disorder, such as loitering and graffiti. They assert that citizen perceptions of police can improve through officers actively engaging in contact with citizens to identify and address citizen-identified problems. For police officers, this means acknowledging and embracing the fact that more than half their time is spent on non-law enforcement duties, ranging from interacting with school kids and running drug education programs to escorting floats in a parade (Culbertson et al. 1993). However, not all police officers can effectively and, perhaps more importantly, willingly embrace and perform these service functions, which are often dismissed as not being “real police work” because they are not law enforcement or crime control related. Despite internal resistance among many line officers, police departments, increasingly wary of the consequences of strained public relationships and wanting to embrace the community policing model, began realizing the importance of public information and developing specialized public information officers.

Emergence of the Public Information Officer

In the wake of negative publicity generated by high-profile incidents of police misconduct and brutality in the 1960s and subsequent decades that triggered race riots, police departments in the USA have taken an active role in
managing their public image to restore public trust and confidence (Stateman 1997). In 1965, President Johnson established by executive order the President’s Commission on Law Enforcement and Administration of Justice, which issued its final report in 1967 (Katzenbach et al. 1967). A focus of the Johnson Commission, and of subsequent commissions in the following years, was criminal justice reform, particularly addressing and rebuilding police-community relations strained by racial tension, including cultivating more positive news media relations. Consequently, police departments began integrating public image management, through public information divisions and public information officers, as an integral part of their core operations.

By the 1970s, law enforcement realized the practical importance of managing public information. For instance, as most departments faced budgetary constraints from diminished federal, state, and local money, they were expected to rely more heavily on alternative financing that included private donations. This meant that officers needed to appeal to citizens directly for financial support and for support of tax increases that fund the police, something which required positive stories and favorable publicity (Cheatham and Erickson 1984). Furthermore, public relations became integral to the changing nature of policing and to undoing the impersonal, professionally distant officer model that had strained community relationships. In 1989, the importance of PIOs reached critical mass with the formation of the National Information Officers Association (NIOA), which began as a consolidation of smaller state and regional public information associations. The association offers training courses and information sessions that range from specialized issues to high-profile incidents.
For example, topics covered at the 2019 NIOA annual conference ranged from specialized subjects, such as the Nextdoor social media app and employee arrests, to high-profile national incidents, such as the Chicago Police Department’s handling of the racially charged and politically influenced Jussie Smollett case. Today, virtually every medium to large city police department has some sort of formalized PIO position. Public information and public information officers have also become a topic of academic study. A study of PIOs based on 2000 data shows that the primary function of over 90% of PIOs is to write press releases, make formal and informal contact with the media, and hold press conferences, with other functions ranging from in-house advertising to writing newsletters and producing videos (Motschall and Cao 2002). The same study showed that the PIO’s primary role as defined by the department, the structure and size of the agency, the budget, and the formalization of the position all affected the nature of police-media and police-citizen relations. Moreover, three quarters of respondents took a proactive approach, described as “get the good stories out first” and “keep ahead of the game by having good relationship with the media” (p. 173).

Police and the World of Social Media

The expansion of the PIO’s role into social media over the past decade and a half has profoundly affected the scope and activity of the position and its relationship with the media and the public. The proliferation of social media has meant new demands and expectations
for using these media to communicate information, especially during emergency events. For example, Hughes and Palen (2012) describe the change in the role of the police PIO as a shift from “gatekeeper,” one who manages and constrains the flow of information, to “translator,” one who receives information and converts it into another format for better understanding by other groups. This new role means that during emergency situations, PIOs must actively monitor social media sites for the flow of information and, more importantly, misinformation. One respondent to Hughes and Palen (2012) explained, “When we have a huge incident . . . one of the things I had my PIO back in the office do was monitor the social networking sites . . . We got suspect information. We had information. I would then send the information to the people on scene, to my PIOs on scene, so that we say, ‘Here is what’s being reported on Facebook,’ and those people could then address it there quickly. In some ways it gives us a chance to more quickly correct misinformation and get the right stuff out there” (p. 10). The scenario described by Hughes and Palen is not uncommon given the proliferation of social media use by police.

Instances of police activity in the world of social media have exploded in the last decade, paralleling the growth of social media use in general. A 2016 survey of law enforcement agencies by the Urban Institute and the International Association of Chiefs of Police (IACP) found that 95% of police departments surveyed had a social media presence, with 5% of those departments having started using social media prior to 2006, or more than 10 years earlier (Kim et al. 2017). This statistic should not be surprising given several high-profile uses of social media by police that have caused agencies around the world to take notice. In 2011, a series of riots spread across London and throughout England in protest of a police shooting.
As riots broke out across the country, police quickly blamed social media platforms, such as Twitter, for enabling protestors to organize, promote, and spread the riots to other parts of the country. Some politicians even called for the temporary suspension of Twitter and BlackBerry Messenger service (BBM) (Bright 2011). However, law enforcement agencies began realizing the power of social media as a law enforcement tool when these same platforms were used to track down and arrest rioters, and even to charge many with using social media to incite criminal activity (Dzieza 2011). Similarly, rioters took over the city of Vancouver for nearly 4 hours after the 2011 Stanley Cup finals, when the Boston Bruins beat the hometown Canucks, causing millions in damage. Police and public officials vowed to bring the rioters to justice, tapping into the thousands of pictures posted on social media to identify individuals directly involved. Over 200 arrests were made by police, who identified many individuals online with the help of online community members. In 2013, police once again tapped into the social media world when terrorists detonated two bombs near the finish line of the Boston Marathon. In the hunt for the suspects, federal, state, and local agencies solicited the help of social media communities by asking them to report leads and forward photos and videos. While bombing suspects Dzhokhar and Tamerlan Tsarnaev were ultimately found without direct help from Internet users, the incident demonstrated the potential uses of social media for police. These large events may have drawn significant attention to police
use of social media, but police had already been using social media as an invaluable tool in different capacities. In 2016, IACP member departments were surveyed on their use of social media (Kim et al. 2017). Social media use by law enforcement can be dichotomized into two categories: functional and symbolic. Functionally and pragmatically, the most popular use of social media, particularly Twitter, is quickly and efficiently disseminating information to the public. Traditionally, police have relied on news media outlets to release information. Prior to social media, it was widely held that police organizations and supervisors had to contend with and even manipulate the news media to create a legitimate reputation (see Chermak and Weiss 2005). Twitter instead allows departments to bypass the delays and narrative filters of traditional news. Moreover, Twitter use can have the secondary effect of promoting openness and responsiveness. In 2010, an analysis of tweets by a sample of police departments in major cities showed that nearly half (45.3%) of tweets contained information on crimes such as shootings, robberies, and accidents, with much smaller percentages devoted to department events and traffic information (Heverin and Zach 2010).

These findings are consistent with the accounts of the PIOs interviewed. One PIO from a small department nested in a dense metropolitan area lauded the efficiency of using Twitter to release information to the public, particularly for disseminating information to news outlets. He stated:

One of the ways I have found that I can take the stress off of dispatch and off of me is if we have a working event, I can tweet the event and tweet directly to the media and tell them where the staging area is going to be and that I’m en route and I can confirm and therefore I don’t get text messages or phone calls because the media is receiving the information they need right away . . . I can in one tweet eliminate phone calls, texts, and if I do get a phone call or text they say ‘hey can you shoot us a picture we can use?’ then I treat that as an open records. I shoot it, I give it to them and they use it and I get credit and I think that’s part of us working to keep a good relationship with the media.

Another PIO echoed this opinion and highlighted the elimination of the media as a gatekeeper. He stated:

Before we had social media we literally would send a press release and have to call the news stations and beg them to come out and cover something. Now, we’re in the best position to tell our own story. I don’t care if [these news channels] want to cover it because I’m pushing my stuff out regardless and a lot of times the John Q. Citizens who don’t see it on the [evening news or newspaper], they’re going to see it on the Facebook or they’re going to see it on Twitter, or YouTube. Now that doesn’t mean I have a bullhorn or the reach that a news cycle is going to have but at the end of the day we’re at least making an attempt to communicate directly and send our message out to the neighborhood level.

Another PIO in a mid-sized department echoed the viewpoint that the media was the primary audience for their tweets, stating, “If you’re a reporter in the Dallas-Fort Worth area, you’re following [all the] departments on Twitter because we know that’s where news is going to break.”


Police departments and other law enforcement agencies have also found social media useful for their crime control agendas. In 2015, a Texas police department made headlines when it facetiously advertised on Facebook a “free service” for drug dealers to report their competitors. To the department’s surprise and disbelief, people began responding to the ad (Taylor and Rufener 2015). In 2017, Canadian police were able to catch a drug dealer who responded to a social media site they created (Bowen 2018). Police have also adopted social media use as an integral part of more serious criminal investigations. Criminal investigators routinely search social media sites such as Facebook, Instagram, and Twitter for suspect information, incriminating evidence, and networks of friends, family, and associates. Facebook posts policies on its site that instruct law enforcement on information requests for accounts that are not publicly available. For instance, the company can release varying degrees of information based on legal requests, ranging from a subpoena for the basics, such as names, emails, and IP addresses, to more private information, such as private messages, photos/videos, and wall posts, pursuant to a search warrant (Fitzpatrick 2012). Upon a law enforcement information request, Facebook even retains information on a person for 90 days, even if it has been deleted by the user. Police have also turned to social media as part of their background investigations of potential hires. Departments commonly have applicants sign waivers that allow background investigators to access their social media accounts to dig for “digital dirt” (Johnson 2010). Consequently, candidates have been disqualified for social media post histories that range from threats of suicide to sexually explicit or racially charged posts.

Symbolic Uses of Social Media

Social media has a much more powerful secondary effect that many police departments are beginning to realize. As much as police officers may dislike and be embarrassed by police department shenanigans on social media, such as the lip-sync and dance challenges mentioned above, a social media presence can serve to reinforce police legitimacy in three ways: (1) allowing departments to project their own narrative of any situation, (2) “humanizing” officers, and (3) furthering community policing agendas.

Police Narrative

Historically, law enforcement agencies have had limited success and ability in projecting and disseminating their own narratives, especially during critical incidents. Police have relied mainly on traditional television and print media to broadcast information, messages, and perspectives. These news entities, often constrained by time and driven by commercial considerations, frequently do not share the police’s narrative of an incident. Officers often complain that information
filtered through traditional news outlets distorts the truth or, worse, provides a false narrative. For instance, the narrative of a white officer using excessive force or shooting an unarmed black male (whether or not the shooting was justified, as in the shooting of Michael Brown) often spreads quickly through the news cycle. Moreover, many argue that the news is reported with incomplete information, often at a time when investigators are still determining the facts and cannot comment. This void can be filled with misinformation that is often unflattering, such as accusations of racism.

It is no surprise, then, that police officers feel traditional news media treats them unfairly and have an antagonistic relationship with the media. According to a poll conducted by the Pew Research Center, the majority of police officers polled (81%) felt the media treated them unfairly, with 42% of those polled strongly agreeing with that statement (Gramlich and Parker 2017). According to the study, those who held strongly negative views of the media felt frustrated (65%) and angry (31%). Consequently, these officers were more likely to disconnect from the public and have a poor relationship with the community. This perception of being targeted by traditional media can reinforce the “us versus them” mentality associated with the police subculture (see Nhan 2014). Moreover, it can undermine departments’ community policing efforts (Williams et al. 2018).

The PIOs interviewed confirmed the sentiment that traditional media does not treat the police fairly. However, the reasons they gave for the perceived unfairness were more nuanced and not attributed simply to a media bias that negatively targets police officers. Instead, they explained that the frustration lies with the media’s selection of more sensational stories, whether good or bad. One PIO sergeant interviewed explained, “The media, they will only play sensational stuff. You’ll get no air time.
You’ll get no voice . . . that’s why [social media] is so important.” This sentiment was consistent with earlier studies. For example, a PIO interviewed by Hughes and Palen (2012) stated, “I can’t tell you how many times I’ve been on a scene and I’ve spoken with the media and I was misquoted. Sometimes it’s no big deal . . . other times it is a big deal. They misuse numbers and that sort of thing becomes very dangerous” (p. 5). Media unfairness also stems from the structure of news broadcasts, which requires heavy editing that truncates stories to fit time slots. A PIO lieutenant described a typical scenario with traditional broadcast news outlets, stating:

The media, we get a good or bad event, regardless of what it is, they’re going to play . . . I might be up there talking for 15–20 minutes and they’re going to have a minute and a half, maybe three minutes on a big story to play it. Add to that, you might get a 15 to 20 second soundbite. The longest I’ve seen them run a straight clip of anything we’ve said is 20 seconds.

The distrust between the police and the media has manifested in what is known as the “Ferguson effect,” in reference to the negative coverage of police during the social unrest triggered by the 2014 shooting of Michael Brown. Wall Street Journal writer Heather Mac Donald coined the term to describe the phenomenon of higher crime resulting from the accumulation of negative stories written about police, such as
stories about misconduct. Officers, wary of legal exposure and negative publicity, tend to de-police and avoid being active on duty, while the public tends to act with more hostility towards officers and to commit more crime as a form of revolt. Nix and Pickett (2017) found that most officers firmly believed in the Ferguson effect. These officers claimed to have personally noticed an increase in criminal activity after the Ferguson riots and blamed hostile public attitudes on negative media coverage, despite the lack of real data to support the existence of such an effect (see also Wolfe and Nix 2016).

However, social media can serve as a mitigating factor in countering misleading or negative news coverage. Ironically, several PIOs interviewed expressed that the Ferguson incident was the result of strained relationships that might have been prevented by a stronger relationship between the Ferguson Police Department and its citizens and a better response through social media. One PIO expressed, “You have to have a relationship to begin with far in advance of any controversy. The controversy may not just be in your city, it doesn’t matter what city it’s in, it still damages that relationship so having that robust relationship in advance really helps when these things happen.” The officers also felt a timelier and more transparent response in Ferguson might have mitigated the ensuing rioting. When asked about the tension between protecting an ongoing investigation and sharing information, the PIO supervisor responded:

There’s an important distinction between ‘we can’t tell you anything because we’re having a criminal investigation’ and ‘here’s everything we can share and try to be transparent as possible.’ There are intimate details about an investigation that would hurt everyone if you were to share them but that doesn’t mean that you can’t share anything.
One of the challenges [Ferguson] ran into is because they were so reluctant to share anything and the information they shared was very little, the media was clinging on to very enraged citizens and they’re very compelling in listening to those stories. Some of those stories that were put out were inaccurate but the police department in my view could have done a better job at correcting that and getting in front of it. This changed everything. Is this going to stop rioting? Is the mood going to change? There’s a lot of factors that could have.

Social media has allowed police officers to communicate directly with citizens through their own unfiltered narratives, thereby removing traditional media as the gatekeeper of information and providing access to anyone. Entire press releases can be placed online for full context. Moreover, departments can disseminate positive stories and activities that would have been passed over by traditional media outlets. The ability to control their narrative allows departments to tap into the public’s natural curiosity about police work and to satisfy that demand with positive narratives. One PIO explained:

The public loves to see the other side of law enforcement. The citizens are generally interested in police. If they know police are on Twitter, they’re going to follow. Why? Because police work is interesting . . . If we already know we’re going to be on the front end of followers compared to general city [Twitter] accounts, we’re already in an advantageous position to be able to push proactive messaging that reflects favorably on our organizations.


Social media gives police the ability to control and broadcast their own narrative by democratizing the dissemination of information that was once exclusive to traditional media outlets. One of the biggest legitimizing effects of this democratization is the “humanizing” effect, which recasts the image of officers as friendly and approachable rather than as cold and callous law enforcers.

“Humanizing” Police Officers

The reform era of policing, circa 1930–1980, focused on professionalism. Prior to this period, police were criticized for being political and corrupt, as reflected in the description of this time frame as the political era (Kelling and Moore 1988). Police professionalism, championed by reformers such as August Vollmer, O.W. Wilson, and FBI Director J. Edgar Hoover, concentrated on transforming the corrupt image of police into one of prestige and expertise through standardization, training, education, and professional demeanor. The ideal professional law enforcement officer would be physically sound, corruption-free, and technically proficient, without allowing any emotion to cloud his objective, expert judgement. This ideal was embodied in the 1950s by Jack Webb’s iconic fictional character, LAPD Detective Joe Friday, in the popular show Dragnet. Joe Friday’s most famous phrase, “Just the facts, ma’am,” captures the officer’s demeanor when dealing with emotionally charged persons by focusing strictly on crime control matters.

Police reform is most closely associated with the William Parker era at the LAPD. Parker was, at the time, credited with transforming a largely corrupt department into a model of professionalism. After initial praise and good publicity for Chief Parker, however, the department was marred by a pattern of police brutality and discrimination against minority communities, which led to major urban riots and upheavals, including the infamous Watts Riots in 1965. In similar fashion, another riot, sparked by the 1991 LAPD beating of black motorist Rodney King, would occur in 1992. In the wake of these riots, independent commissions and scholars pointed to strained police-community relations as a major factor in these social upheavals (Report of the Independent Commission 1991).
The LAPD had become an environment with a strong police subculture dominated by an antagonistic “us versus them” mentality toward the public, and its officers developed cynical attitudes towards the public described as a “siege mentality” (Manning 1999). Under the professional model of policing, officers had become nameless, faceless beings who drove around in police cars and interacted with the public only in a truncated, stoic fashion. In a word, inhuman.

Social media has allowed officers to display a side of themselves that shows real emotion and approachability, which can translate into mutual respect and trust with the community. A study of the Toronto Police Service shows the department has a deliberate and active strategy to use social media to humanize officers for better interactions with citizen groups (Meijer and Thaens 2013). According to one PIO who trains other agencies on public information, focusing on crime alone in social media is not the most effective way to build trust. He stated:


A lot of times I tell chiefs that are resistant to be on social media or they take the approach it’s all business, it’s going to be crime, crime, crime, crash, crash, crash. I say you’re missing out on so much because the majority of the time, when we, meaning the uniformed officers, come into contact with the general public a lot of times it’s because they’re on a traffic stop, maybe they didn’t have a seatbelt on, they were speeding, or they’re calling us because they’ve been victimized. That’s the only time they’re seeing us. With social media, if I can show the day in and day out, [the public will see] you’re doing great work and we need to celebrate that.

Another PIO echoed this viewpoint, stating:

Most people interact with the police when it’s some type of infraction or when they need help in some type of police matter so it’s very difficult for officers, particularly on egregious matters to be jovial. They’re there to take care of business if it’s something very serious. The nice thing about social media is it can give us that personal interaction and there’s no enforcement there. You go from Officer [Smith] or Chief [Johnson] to [Matt] and [David].

Another PIO from the same department gave an example:

I had a picture of an officer, a mom, with her child. She was dropping her child off to school and giving him a hug and I posted that picture for Mother’s Day, well that’s humanizing. That’s a mom. It shows the public we’re parents, too. This is a mom with her child and she’s also a police officer. That’s humanizing, and people can really connect with that.

The humanizing effect has won over many officers who were reluctant to embrace social media as an important part of policing and police legitimacy. Many officers who object to lip-sync and dance challenges by other officers in uniform, or feel such spectacles erode and undermine the professional image of the officer, often change their minds when presented with the results. Some PIOs interviewed told of such experiences. One PIO expressed, "Officers were hesitant, they didn't know what [the PIO team members] were doing, but they began seeing, 'wow, the public actually [responds positively to these social media videos],'" adding, "It builds legitimacy, it enhances trust." The humanizing effect can be especially useful during critical incidents, for instance, with sensitive issues involving alleged police misconduct or race. One PIO explained that the humanizing effect of social media can build empathy towards officers as relatable people, stating:

I want people to see our police officers not so much behind the badge in uniform, I want them to see us as regular people. We're dads, we're brothers, we're softball coaches, we coach little league, we go to church. We're just like you, we're just like any other member of the community. The best way to show that is to show the stories of heroism, to show the personal side of officers.

During a crisis, an existing relationship between police and the community can buy police organizations time to prevent conjectured or fabricated narratives from taking hold. One PIO explained that the accumulation of public goodwill from good relations established through social media can be drawn upon during times of crisis, stating:

20

Police Legitimacy in the Age of the Internet

415

I like to use the analogy of an emotional piggy bank. From our perspective, we know we’re going to have a controversial use of force. . .we’re going to put deposits in this. Our social media is one major contributor to this piggy bank. . .at some time or another. . .one example, we had a shooting incident. . .he was unarmed. . .it was a bad shooting all the way around. We took a huge withdrawal during that time but we had done so much community investment that line officers told us we’ve done such a great job on social media presence in humanizing our badge. We didn’t have protests and people waited for the facts to come out. They waited for the investigation to occur.

A key function of community policing is connecting with citizens through informal contacts to build trust and receive feedback and information about community concerns. Regularly held town hall meetings are a means by which this reciprocal information is shared in a dialogue that can better guide police resources, and social media allows for virtual town hall meetings. As a result, social media is serving as an impetus for digital governance. Facebook, in particular, can serve as an effective platform for virtual town hall meetings that further community policing agendas. One PIO explained, "People may not have time to come to a community meeting or a town hall meeting or interact with a citizens police academy or become involved in some sort of community watch group, but they do have the time to pull their smartphone out to see what's going on and stay connected through social media." Another PIO from a different agency underscored the direct benefit of social media to community policing, stating:

I think social media has enhanced community policing greatly. Social media allows us to tell our own story. Before social media, nobody knew anything about us. We may hold a community event here and there and may get a 100 people but what about the other thousands of people? When we want to tell it, how we want to tell it, and we don't rely on media anymore. It's definitely part of community policing.

However, not all communications are equal. A growing number of studies have examined the effects of social media on legitimacy. For example, Grimmelikhuijsen and Meijer (2015) found that Twitter had a small positive effect on public perceptions of police legitimacy based on improved transparency. The study concluded, however, that the public's lack of interaction on the Twitter platform, which was best suited for broadcasting, meant the effects were minimal overall. Different platforms may offer different functionality, requiring departments to use a multi-pronged, multi-platform approach to social media.

The Different Uses of Social Media on Different Platforms

At the time of this writing, police primarily use two social media platforms: Twitter and Facebook. To a lesser extent, some agencies also use Instagram and YouTube. Police agencies take different approaches to social media use, with varying effects on police legitimacy. Some departments use one social media platform in conjunction with traditional news media and email messages while others use multiple platforms simultaneously.


Twitter, with its short character limit and speed of delivery, is the ideal platform for police to deliver succinct notifications to the public and is widely used by agencies. A survey of large city police departments' use of Twitter shows that reporting a crime or incident is by far the most frequent use of the platform (45%), followed by department information (14%), event information (10%), and traffic information (8%) (Heverin and Zach 2010). Furthermore, the speed and universal delivery of Twitter were lauded by officers. One PIO explained, "What we like about Twitter is everybody gets the message the moment you tweet. With Facebook, you have to rely on algorithms." Departments that treat legal risk as a guiding principle for social media tend to gravitate towards Twitter. For example, one PIO from a small department that uses Twitter exclusively described his department's philosophy on social media:

We're very clinical and we stay within a pretty tight lane to reflect the culture of the community because the last thing I want to do is create work for myself and cause some embarrassment or speak outside the culture of our department which is extraordinarily professional and geared towards the best service.

This strict use, and Twitter's 140- and later 280-character limit, do not preclude the platform from being useful in connecting with the community and humanizing and legitimizing the police force. One PIO recalled when he first realized the potential of tweeting as a way to connect with the community through another agency's use of the platform. He stated, "I remember [the PIO in a neighboring city] did just a fantastic job because the way he approached Twitter. He used casual speak like the way we talk. Nothing read like a press release and he was putting a lot of photos out and stuff like that, so he was doing really good. . .It really caught my attention." Another PIO interviewed touted their department's "tweet-a-long," where officers live tweeted the happenings of the night while on duty. This allows a large segment of the community to gain insight into officers' duties and even accommodates segments of the community who cannot participate in a ride-along. One PIO explained, "Maybe [certain community members] have a background that wouldn't get them cleared to ride with an officer or maybe they don't have time or desire to want to sit in a car. But you can see from a driver seat this [is] what officers are dealing with." Despite the effectiveness of Twitter, the preferred platform for social relations is Facebook, where officers can post longer narratives supplemented with pictures and video. More importantly, Facebook further legitimizes police through shared dialogue and interactivity. One supervisory PIO explained, "Facebook is more for our feel-good type of stories or a call to action kind of story that resonates with our followers. Facebook is going to generate a lot more conversation through shares and two-way dialogue." Three general categories define the nature of communicating with the public through social media. Pushing refers to broadcasting information, such as tweeting information to the public without any active feedback or dialogue. This


one-way communication push strategy is the predominant strategy in law enforcement (Mossberger et al. 2013). Pull strategies build on push strategies with a public call to action that may elicit public input; these types of posts, for instance on Facebook, will result in a few comments. A networking strategy, by contrast, uses informal interactions and is more organic in nature, which promotes trust, manages conflicts, and potentially establishes long-term reciprocal relationships between departments and communities (Huang et al. 2017). The use of networking strategies usually results in more shares, likes, and interactive comments. Allowing for public feedback and open dialogue can bolster police legitimacy. Facebook is the preferred platform for a more informal and even playful tone with the public, which further humanizes officers. One PIO explained, "The public likes it when the PD responds to their comment. They think, 'wow, they read this message!' because they view us as big brother, big government, driving around in our cars, going to the donut shop, and write tickets. You hear that stereotype all the time. If we can engage in that two-way conversation, we're going to do that on Facebook." However, interactive engagements are rare, and most interactions can be described as one-way and "asymmetrical" (Waters and Williams 2011). Williams et al.'s (2018) content analysis of five US municipal police departments' Twitter and Facebook use shows that most departments use those platforms to push information rather than engage in meaningful dialogue. They found that over 20% of social media messages were announcements, followed by accident reports (16.1%), with only approximately 12% of messages involving interaction. The use of social media, however, is not completely beneficial to departments and individual officers. Policing and surveilling social media for evidence of crime raises privacy concerns that can potentially undermine police legitimacy.

Undermining Legitimacy Through Surveillance of Social Media

While social media is a powerful platform to aid police in conducting investigations and connecting with the community, it raises privacy concerns. Since the early 2010s, police have taken a greater role in actively monitoring social media communities and users. For example, in 2011, British police sought to monitor community tensions on social media as a precursor to potential riots. The Cardiff Online Social Media Observatory (COSMOS) engine was developed to autonomously monitor and mine big data streams for tension based on the perceptions, opinions, feelings, and actions of social media networks (Williams et al. 2013). Also in 2011, in the wake of the Vancouver Stanley Cup riot, police investigators mined social media for rioter identities. However, Trottier (2012) pointed out two issues. First, social media users began not only identifying but directly shaming individuals online. Second, and more importantly, many citizens were unwillingly enrolled in the policing process. Trottier points to a slippery slope in which surveillance increasingly breaches the private spheres of everyday life that social media users expect to be free from monitoring, describing the phenomenon as continual "surveillance creep."


This surveillance concern can undermine citizens' trust in the police and ultimately the legitimacy of a force that presents itself as community-oriented. This theoretical concern about using social media is coupled with a number of practical hazards and challenges for departments as well as individual officers.

Hazards and Challenges of Social Media

Using and policing social media can have further symbolic and practical unintended consequences. First, engaging citizens as partners can result in vigilante justice, which is especially harmful if the wrong person is identified. Second, departments must contend with individual officers' social media presence, which is often unregulated and consequential. Finally, police officers face real dangers in the social media world when their identities and personal information are exposed and distributed on the Internet, a practice known as "doxing."

The Hazards of Vigilante Justice

Police have increasingly solicited the community's assistance in criminal investigations. Social media platforms can serve as a fast and efficient way for police agencies to distribute information on crimes and suspects. Prior to the Internet, agencies relied on news channels and television shows, such as the popular America's Most Wanted, to distribute suspect information. Today's police organizations regularly post suspect pictures, surveillance videos, and other information on social media to a public that is more than willing to help. However, community members, often eager for justice, can take matters too far. There are predefined roles that police agencies want communities to play. Police typically want to distribute information to the public, who passively report new information back to the police, who then use it to further their investigation. Meijer and Thaens (2013) defined this as a "pull" strategy. The public, however, being eager for justice and wanting a larger role in investigations, often launch online investigations of their own, much to the chagrin of police. Huey et al. (2013), in examining an online vigilante group that baits and exposes online child predators, found that police typically do not want to make such groups active partners. This is in part due to legal reasons, but also because they do not feel the help is worth the risks. For instance, after the 2013 Boston Marathon bombing, Internet communities organically launched investigations to find the suspects. Reddit community members aided law enforcement in compiling and forwarding photos, videos, and other evidence to the FBI and other agencies. Members of its 70-million user base attempted to identify a suspect using information derived from the group's collective knowledge, such as the pressure cooker used in the attack. The community wrongly identified Brown University student Sunil Tripathi, who was missing at the time and was later discovered to have died before the bombing. Unfortunately, Tripathi's grieving family received hundreds of threatening anti-Islamic hate messages


and experienced unwarranted harassment as his name and face were distributed all over cyberspace (Shontell 2013).

Officer Personal Social Media Usage

Some off-duty police officers, frustrated by negative news media accounts, directly express, "like," or share links to Internet sources that run contrary to official department views and policies. Indiscretion in social media activity can lead to severe consequences, including termination (Goldsmith 2015). In 2015, for example, San Jose Police Officer Phil White was initially fired for tweeting "Threaten me or my family and I will use my God given and law appointed right and duty to kill you. #CopsLivesMatter" at the Black Lives Matter group. While Officer White was reinstated after arbitration, he was condemned by his department, union, city leaders, and social justice groups. In 2016, a Mount Vernon, N.Y., police officer was reprimanded and suspended for posting an inflammatory and racially insensitive Facebook post aimed at the Black Lives Matter movement. The post read, "It's fine to be anti-police, but be 100 (percent) about it. Don't call the police when your world is in disarray to help deal with the worst 10 minutes of your life. Figure it out yourself or better yet call Shaun King and Black Lives Matter for help" (Failla 2016). Although departments cannot fully control off-duty social media use by individual officers, they have acknowledged that police officers, whether on or off duty, are public figures, and have developed general recommended guidelines. However, restrictions on individual officers' use of social media often raise free speech concerns. For instance, a circuit court ruled that the Petersburg, Virginia, police department violated free speech protections when it prohibited negative comments posted online about the department (see Liverman v. City of Petersburg). The American Bar Association cites the problematic nature of online speech made by employees when they are not engaging in official job-duty speech (Hudson 2017).

Doxing/Doxxing

Getting into trouble at work is not the only social media concern for police officers today. Increasingly, police officers are victims of "doxing" (or "doxxing"), the unauthorized release of private information, such as home addresses, on the Internet. Individuals or groups may retaliate against an officer by posting the officer's private information, including phone numbers, addresses, and even names of family members, often after a controversial event. In 2011, NYPD Officer Anthony Bologna, who appeared in a viral video pepper spraying protesters, was doxed by hackers. Once he was identified by Internet users, hackers released his personal phone number, home address, relatives' names, the high school he attended, and even past lawsuits. The same fate was shared by Officer Darren Wilson, who was accused of


unjustifiably shooting Michael Brown in Ferguson, Missouri, in 2014. Wilson's marriage license, which listed his home address, was doxed in retaliation after a grand jury declined to indict him. This ultimately resulted in him leaving police work and living in seclusion. Entire departments have fallen victim to doxing. In 2016, more than 50 Cincinnati police officers, including Chief of Police Eliot Isaac, were doxed by the hacker group Anonymous in retaliation for a controversial shooting. The group released a public video addressed to the police, stating, "Thin Blue Line, your game is over. You lost. While we release your officers' information, we will hold no responsibility of the actions of those that see the information." The group added a warning to all departments, stating, "Well, we have a message to not only the Cincinnati Police Department but to every law enforcement officer. When you murder a human being when you have other choices of containing your suspect available, we will make your officers' information public record" (Chasmer 2016).

Conclusion

It has been shown that underneath the silly dances and lip-sync challenges that many police departments participate in, social media can be not only a powerful asset in assisting with crime control functions, such as investigations, but also an important symbolic tool that potentially increases police legitimacy and strengthens police-community relations. The usefulness of social media depends on each agency's vision of what it should be used for. For departments that view social media only as a way to push information out to the public, the symbolic effects may be limited. Departments that frequently engage in open dialogue with the community and balance information with more social posts, in a push and pull or networking model, can accumulate goodwill that "humanizes" the police and proves useful in times of crisis that could otherwise create strain with the community. Perhaps most important is the ability social media gives police organizations to promote and distribute their own narratives, bypassing traditional media's gatekeeping function. This control allows police to contextualize events without truncated videos and information. While traditional media still play an important role in the critical reporting of police incidents, the police can provide a perspective that is oftentimes more complete. Despite the benefits of controlling narratives, privacy concerns can potentially undermine police legitimacy. Data mining and surveillance tools such as COSMOS, developed to monitor different groups' and individuals' social media activities, can be concerning in their scope and the personal nature of the data captured, reminiscent of past surveillance of groups during Senator Joseph McCarthy's search for Communists and other "subversives." Today, that wide scope is often justified in the name of anti-terrorism.


Additional risks and hazards arise as police organizations and officers use social media. Police departments and officers may be subject to public hostility and criticism. Moreover, individual officers voicing their personal opinions on controversial matters on social media can find themselves in trouble within their departments as well as legally. However, limiting these opinions can raise free speech issues, which complicates matters for police organizations. Departments today are still developing social media policies that balance organizational risk and reputation with individual officers' rights to use social media. While there is limited research at the time of this writing on the effects of social media on police legitimacy, our limited sample of interviewed PIOs largely shows that social media is becoming an important part of policing. Obviously, a larger sample of PIOs and more sophisticated mixed methods would provide a more complete picture of police sentiments, but these interviews were meant to supplement the limited scope of this chapter. Nevertheless, this work serves as an initial step in further inquiry into police organizations, which are slow-moving institutions steeped in bureaucracy and strong cultural tradition, within the rapidly changing world of cyberspace and social media.

Cross-References

▶ Police and Extralegal Structures to Combat Cybercrime
▶ Race, Social Media, and Deviance
▶ Subcultural Theories of Crime
▶ The Rise of Online Vigilantism

Appendix

See Table 1.

Table 1 Agency use of social media

Usage                                              %
Public notification of safety concern              91
Community outreach                                 89
Public relations                                   86
Public notification of non-crime such as traffic   86
Soliciting crime tips                              76
Gauging public sentiment                           72
Intelligence for investigations                    70
Recruitment and application vetting                58
Communicating with government agencies             29
In-service training                                 6
Other                                               3


References

Bowen, N. (2018, July 24). Police use social media to catch drug dealer. The London Free Press. Retrieved from https://lfpress.com/news/local-news/police-use-social-media-to-catch-drugdealer/wcm/2e198bec-ffdc-4038-9921-2bc85f33f1d8
Bridenball, B., & Jesilow, P. (2008). What matters: The formation of attitudes toward the police. Police Quarterly, 11(2), 151–181. https://doi.org/10.1177/1098611107313942
Bright, P. (2011, August 8). How the London riots showed us two sides of social networking. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2011/08/the-two-sides-of-socialnetworking-on-display-in-the-london-riots/
Chasmer, J. (2016, February 22). Hacker group releases information about Cincinnati Police Department employees. Fox News. Retrieved from https://www.foxnews.com/us/hackergroup-releases-information-about-cincinnati-police-department-employees
Cheatham, R. T., & Erickson, K. V. (1984). The police officer's guide to better communication. Glenview: Scott, Foresman.
Chermak, S., & Weiss, A. (2005). Maintaining legitimacy using external communication strategies: An analysis of police-media relations. Journal of Criminal Justice, 33(5), 501–512. https://doi.org/10.1016/j.jcrimjus.2005.06.001
Culbertson, H. M., Jeffers, D. W., Stone, D. B., & Terrell, M. (1993). Police in America: Catching bad guys and doing much, much more. In J. Bryant (Ed.), Social, political, and economic contexts in public relations: Theory and cases (pp. 123–151). Hillsdale: Erlbaum.
Dzieza, J. (2011, August 12). London riots: Police use social media to track rioters. Daily Beast. Retrieved from https://www.thedailybeast.com/london-riots-police-use-social-media-to-trackrioters
Failla, Z. (2016, September 1). New Rochelle police officer under investigation for Black Lives Matter post. New Rochelle Daily Voice. Retrieved from https://newrochelle.dailyvoice.com/police-fire/new-rochelle-police-officer-under-investigation-for-black-lives-matter-post/679551/
Fitzpatrick, A. (2012). Here's how police get a suspect's Facebook information. Mashable. Retrieved from https://mashable.com/2012/12/18/police-facebook/#xraN.DoxuiqP
Goldsmith, A. (2015). Disgracebook policing: Social media and the rise of police indiscretion. Policing and Society, 25(3), 249–267. https://doi.org/10.1080/10439463.2013.864653
Gramlich, J., & Parker, K. (2017). Most officers say the media treat police unfairly. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2017/01/25/most-officers-saythe-media-treat-police-unfairly/
Grimmelikhuijsen, S. G., & Meijer, A. J. (2015). Does Twitter increase perceived police legitimacy? Public Administration Review, 75(4), 598–607. https://doi.org/10.1111/puar.12378
Heverin, T., & Zach, L. (2010). Twitter for city police department information sharing. Paper presented at the Association for Information Science and Technology, 22–27 October 2010, Pittsburgh.
Holson, L. M. (2018, July 23). Police officers lip-sync as part of public relations dance. The New York Times. Retrieved from https://www.nytimes.com/2018/07/23/us/police-lip-sync-challenge.html
Huang, Y., Wu, Q., Huang, X., & Bort, J. (2017). A multiplatform investigation of law enforcement agencies on social media. Information Polity, 22, 179–196. https://doi.org/10.3233/IP-170414
Hudson Jr., D. L. (2017). Public employees, private speech: 1st Amendment doesn't always protect government workers. American Bar Association Journal. Retrieved from http://www.abajournal.com/magazine/article/public_employees_private_speech?icn=most_read
Huey, L., Nhan, J., & Broll, R. (2013). "Uppity civilians" and "cyber-vigilantes": The role of the general public in policing cyber-crime. Criminology and Criminal Justice, 13(1), 81–97. https://doi.org/10.1177/1748895812448086


Hughes, A. L., & Palen, L. (2012). The evolving role of the Public Information Officer: An examination of social media in emergency management. Journal of Homeland Security and Emergency Management, 9(1), 1–20.
Johnson, K. (2010, November 11). Police recruits screened for digital dirt on Facebook, etc. USA Today. Retrieved from https://usatoday30.usatoday.com/tech/news/2010-11-121Afacebookcops12_ST_N.htm
Johnson, K. (2019, May 19). 8-year-old girl kidnapped in Fort Worth is found safe; police say suspect is in custody. Fort Worth Star-Telegram. Retrieved from https://www.star-telegram.com/news/local/crime/article230584799.html
Katzenbach, N., et al. (1967). The challenge of crime in a free society: A report by The President's Commission on Law Enforcement and Administration of Justice. Washington, DC: United States Government Printing Office. Retrieved from https://www.ncjrs.gov/pdffiles1/nij/42.pdf
Kelling, G. L., & Moore, M. H. (1988). The evolving strategy of policing. Perspectives on Policing, 4, 1–15.
Kim, K., Oglesby-Neal, A., & Mohr, E. (2017). 2016 law enforcement use of social media survey. International Association of Chiefs of Police. Retrieved from http://www.theiacp.org/Portals/0/documents/pdfs/2016-law-enforcement-use-of-social-media-survey.pdf
Manning, P. K. (1999). Police: Mandate, strategies, and appearances. In V. E. Kappeler (Ed.), Police and society: Touchstone readings (2nd ed., pp. 94–122). Long Grove: Waveland Press.
Meijer, A., & Thaens, M. (2013). Social media strategies: Understanding the differences between North American police departments. Government Information Quarterly, 30(4), 343–350. https://doi.org/10.1016/j.giq.2013.05.023
Mossberger, K., Wu, Y., & Crawford, J. (2013). Connecting citizens and local governments? Social media and interactivity in major U.S. cities. Government Information Quarterly, 30(4), 351–358. https://doi.org/10.1016/j.giq.2013.05.016
Motschall, M., & Cao, L. (2002). An analysis of the public relations role of the police public information officer. Police Quarterly, 5(2), 152–180.
Nhan, J. (2014). Police culture. In J. S. Albanese (Ed.), The encyclopedia of criminology and criminal justice (pp. 1–6). Hoboken: Wiley Blackwell. https://doi.org/10.1002/9781118517383.wbeccj371
Nhan, J., Huey, L., & Broll, R. (2017). Digilantism: An analysis of crowdsourcing and the Boston Marathon bombings. British Journal of Criminology, 57, 341–361. https://doi.org/10.1093/bjc/azv118
Nix, J., & Pickett, J. T. (2017). Third-person perceptions, hostile media effects, and policing: Developing a theoretical framework for assessing the Ferguson effect. Journal of Criminal Justice, 51, 24–33. https://doi.org/10.1016/j.jcrimjus.2017.05.016
Report of the Independent Commission on the Los Angeles Police Department. (1991). Retrieved from https://archive.org/details/ChristopherCommissionLAPD/page/n9
Shontell, A. (2013, July 26). What it's like when Reddit wrongly accuses your loved one of murder. Business Insider. Retrieved from https://www.businessinsider.com/reddit-falsely-accuses-suniltripathi-of-boston-bombing-2013-7
Stateman, A. (1997). LAPD blues: We're cops. We're not PR people. Public Relations Tactics, 4(1), 18.
Taylor, E., & Rufener, K. (2015, August 7). Police use Facebook to catch drug dealers. CBS News Memphis. Retrieved from https://wreg.com/2015/08/07/police-use-facebook-to-catch-drugdealers/
Trottier, D. (2012). Policing social media. Canadian Review of Sociology, 49(4), 411–425.
Trottier, D. (2015). Vigilantism and power users: Police and user-led investigations on social media. In D. Trottier & C. Fuchs (Eds.), Social Media, politics and the state: Protests, revolutions, riots,


crime and policing in the age of Facebook, Twitter and YouTube (pp. 209–226). New York: Routledge.
Waters, R. D., & Williams, J. M. (2011). Squawking, tweeting, cooing, and hooting: Analyzing the communication patterns of government agencies on Twitter. Journal of Public Affairs, 11(4), 353–336.
Williams, M. L., Edwards, A., Housley, W., Burnap, P., Rana, O., Avis, N., Morgan, J., & Sloan, L. (2013). Policing cyber-neighbourhoods: Tension monitoring and social media networks. Policing and Society, 23(4), 461–481.
Williams, C. B., Federorowicz, J., Kavanaugh, A., Mentzer, K., Thatcher, J. B., & Xu, J. (2018). Leveraging social media to achieve a community policing agenda. Government Information Quarterly, 35(2), 210–222. https://doi.org/10.1016/j.giq.2018.03.001
Wolfe, S. E., & Nix, J. (2016). The alleged "Ferguson effect" and police willingness to engage in community partnership. Law and Human Behavior, 40(1), 1–10. https://doi.org/10.1037/lhb0000164

Forensic Evidence and Cybercrime

21

Marcus Rogers

Contents
Introduction ..................................................... 426
Forensic Evidence ................................................ 427
Cybercrime and Digital Forensics ................................. 429
Categories of Forensic Evidence .................................. 431
Trace ............................................................ 431
Reconstructive ................................................... 432
Digital Forensic Phases .......................................... 433
Identification ................................................... 434
Analysis and Examination ......................................... 436
Report/Opinion ................................................... 438
Future Challenges ................................................ 440
Summary/Conclusion ............................................... 441
Cross-References ................................................. 442
References ....................................................... 442

Abstract

Abstract

This chapter introduces why forensic evidence is essential, what forensic evidence is, the categories of forensic evidence, and how forensic evidence is used to investigate cybercrimes. The general issues facing the forensic sciences that center primarily on dealing with evidence, such as admissibility, expert opinion, bias, and uncertainty, are discussed concerning cybercrime and cybercriminal investigations. Brief introductions to the Daubert, Joiner, and Kumho decisions are provided to put the issues regarding evidence, including digital evidence, into the proper context. The chapter introduces digital evidence, digital forensics, and the role it plays in the investigative process. It also discusses the phases of the generic digital forensics process model that includes (1) identification, (2) collection, (3) analysis and examination, and (4) opinion/report. Two broad categories of digital evidence – trace and reconstructive – are presented to limit the scope of the chapter further, as not all the traditional types apply to this domain. The chapter also looks at what the future holds for forensic evidence in the field of cybercrime and examines some of the current and future challenges that face digital evidence and digital forensics. These include but are not limited to the volume of potential digital evidence, the variety and location of digital evidence, encryption, and the need to adopt probabilistic approaches to quantify the uncertainty of conclusions and opinions derived from digital evidence. The chapter concludes by summarizing the importance that digital evidence has to all investigations and the challenges faced by this forensic science.

Keywords

Digital evidence · Digital forensics · Cybercrime · Cybercriminal investigations · Digital forensic evidence

M. Rogers (*)
Department of Computer and Information Technology, Purdue University, West Lafayette, IN, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_13

Introduction

The field of forensic sciences has been around for a relatively long time. Despite its longevity, it has recently undergone some significant changes due to the increased scrutiny placed on it by the public and the courts. The allure of scientists in white coats working "somewhat secretly" in labs, immune from the prying eyes of the public and of the courts, coming up with definitive proof regarding the guilt or innocence of an accused has passed, and rightly so.

The public's comfort with forensic science, driven by the popular media's obsession with all things forensic, has had both negative and positive consequences. The negative is the CSI effect: people believe that the media's portrayal of the "science" in these shows is accurate, which is far from the truth. The positive is that the public has begun asking important questions: why is forensic evidence not more standardized, why is it not used in more investigations, and what are its limitations (Rasmussen College n.d.)? These questions are crucial and helped inform the National Academy of Sciences (NAS) examination of the forensic sciences in 2009 (US Committee on Identifying the Needs of the Forensic Sciences Community of the National Research Council 2009). In the NAS report, all the forensic sciences (including digital forensics) were criticized for not being scientific enough. Additionally, the term forensic evidence was found to be somewhat vague and used inconsistently across the disciplines. Furthermore, the report highlighted the importance of evidence that is derived in a forensically sound manner. However, the report concluded that a consistent criterion for dealing with forensic evidence in each of the disciplines that fall under the broad umbrella of forensic sciences was sadly lacking (US Committee on Identifying the Needs of the Forensic Sciences Community of the National Research Council 2009). Despite the lack of consistency across the disciplines, evidence that is used to derive opinions and conclusions is at the foundation of all forensic science.

Forensic science relies on data/evidence and the relationship that it has to events that occurred (or are thought to have occurred). The relationship between the data and the event is also known as the "context" of the evidence (Rogers and Seigfried-Spellar 2014). Once the evidence is collected, a process of abductive reasoning is used to form an opinion of what occurred, who might be responsible (or not), when it occurred, how it occurred, and why it may have happened (Graves 2013; Rogers and Seigfried-Spellar 2014).

This chapter focuses on forensic evidence as it relates to cybercrimes. While the primary focus of this chapter is on evidence that is of importance during the investigation of a cybercrime (which is termed digital evidence), it will do so in the context of standards and requirements for all forensic sciences. This chapter introduces why forensic evidence is essential, what forensic evidence is, the categories of forensic evidence, and how forensic evidence is used to investigate cybercrimes. The general issues facing the forensic sciences that center primarily on dealing with evidence, such as admissibility, expert opinion, bias, and uncertainty, are discussed concerning cybercrime and cybercriminal investigations. A brief introduction to the Daubert, Joiner, and Kumho court decisions is provided to put issues regarding the admissibility of expert testimony into the proper context. The chapter introduces digital evidence, digital forensics, and the role it plays in the investigative process. It also discusses the phases of the generic digital forensics process model. Two broad categories of digital evidence – trace and reconstructive – are presented to limit the scope of the chapter further, as not all the traditional types apply to this domain.
The chapter also looks at what the future holds for forensic evidence in the field of cybercrime and examines some of the current and future challenges that face digital evidence and digital forensics. The chapter concludes by summarizing the importance that digital evidence has to all investigations.

Forensic Evidence

To fully understand how forensic evidence impacts cybercrime, we need to be sure that there is an agreed-upon formal understanding as to what forensic evidence means and, more critically, what digital evidence is. Forensic evidence can be defined as:

Forensic evidence is evidence obtained by scientific methods such as ballistics, blood test, and DNA test and used in court. Forensic evidence often helps to establish the guilt or innocence of possible suspects. (US Legal, Inc. n.d.)

Forensic evidence is derived from the legal concept of evidence: "Evidence in a broad sense refers to something that furnishes proof of a matter" (US Legal, Inc. n.d.). Whether that something is admissible in a legal proceeding to furnish as proof is up to the courts and the various rules of evidence that exist. In the United States, there are two primary sets of rules at the federal level: for criminal courts, the Federal Rules of Evidence (FRE), and for civil courts, the Federal Rules of Civil Procedure (FRCP). However, each state can have its own rules of evidence that may or may not be identical to either the FRCP or the FRE. Regardless, the same foundations for determining admissibility usually apply (FindLaw n.d.):

1. Relevance: Evidence is relevant when it has any tendency in reason to make the fact that it is offered to prove or disprove either more or less probable.
2. Materiality: Evidence is material if it is offered to prove a fact that is at issue in the case.
3. Competence: Evidence is competent if the proof that is being offered meets specific traditional requirements of reliability.

Germane to the notion of rules of evidence is the admissibility of expert witness testimony/opinion that is based on scientific or technical methods or techniques. In US criminal law, Federal Rule of Evidence (FRE) 702 covers the "proffering" of expert testimony. Unfortunately, the bar has been set low for who can be deemed an expert, as FRE 702 allows a judge to consider someone an expert based on "knowledge, skill, experience, training, or education" (Legal Information Institute n.d.), and what constitutes enough knowledge, skill, experience, training, or education to be an expert is left up to the judge to decide.

In 1993, the US Supreme Court decision in Daubert v. Merrell Dow Pharmaceuticals established the Daubert standard to assist the trier of fact (the judge) in deciding whether the testimony of a scientific expert witness is admissible (Merlino et al. 2007; Risinger et al. 2002). The Daubert decision established a five-part standard to assist judicial gatekeepers (judges) in determining whether an expert's opinion should be admissible. The five parts, or considerations, are testing, peer review, error rates, standards, and acceptability.

Testing: The theory, method, or technique must be tested or be testable. The goal here is to establish the reliability of the method.
If the method is reliable, then someone else following the same process should derive the same results (Christensen et al. 2014). This is often thought of as replicability or repeatability, which is the hallmark of the scientific method.

Peer review: The theory, method, or technique also needs to have been established as legitimate by being subjected to review by other experts in the field (Merlino et al. 2007). Peer review is similar to the general acceptance consideration of Frye; however, it establishes a specific criterion that must be used to measure this acceptance. Peer review most often occurs through publication in refereed journals or periodicals. Acceptance and presentations at conferences may count toward this as well.

Error rates: The trueness of the theory, method, or technique must be established, or at least there must be the potential to measure it (Christensen et al. 2014). Error rates here refer to false positives (Type I) and false negatives (Type II). These error rates can also be used to determine the accuracy of a technique or tool.

Standards: There must exist some type of standards or controls that the theory, method, or technique adheres to, or is governed by, and these standards must be maintained (Merlino et al. 2007). This also presupposes some sort of governing body that develops the standards and maintains them (Meyers and Rogers 2004).


Acceptability: This is a carry-over from Frye and requires that the theory, method, or technique be accepted in the scientific field (Merlino et al. 2007).

Daubert attempted to set a higher bar for admissibility of scientific expert witness testimony than had been established in 1923 under Frye v. United States. Daubert provides a checklist that can be used to determine how "acceptable" a scientific theory, method, or technique is (Merlino et al. 2007). Under Frye, the admissibility of the method used to derive the evidence or come to the results/conclusions rested on only one consideration: it must be generally accepted by experts in the field (Merlino et al. 2007). Despite its objective of introducing more rigor into the determination of acceptable scientific expert evidence, the Daubert standard articulates considerations that a judge can follow or choose to ignore; these are not rules. Some legal scholars argue that Daubert has in effect watered down Frye by constraining how judges determine acceptability (Heinzerling 2006). An additional criticism is that the standard provides no direction on whether all five of the considerations are equally weighted or whether some are more important than others (Merlino et al. 2007).

Two subsequent US Supreme Court decisions, General Electric Co. v. Joiner (1997) and Kumho Tire Co. v. Carmichael (1999), further clarified the Daubert considerations and the types of expert testimony covered by the standard. With Joiner, the courts established that the conclusions and the method used to arrive at those conclusions are not independent. Thus, the question of admissibility extends beyond whether the findings were appropriate to whether the method used was valid (Grudzinskas and Appelbaum 1998). In the Kumho decision, the courts extended the admissibility considerations of the Daubert standard for FRE 702 beyond just scientific knowledge to include technical and other specialized knowledge (Mangrum 1999; Risinger et al. 2002). The Kumho decision put digital forensic expert witness testimony clearly within the scope of Daubert, thus impacting how digital evidence must be handled and the methods/techniques that should be used to arrive at conclusions or opinions (Meyers and Rogers 2004).

Cybercrime and Digital Forensics

Cybercrime

A brief introduction to cybercrime (see ▶ Chap. 1, "Defining Cybercrime"), digital forensics, and digital evidence is warranted. Cybercrime is a bit of a catch-all term that defines a class of criminal activities as opposed to any one event. According to Kirwan and Power (2013), cybercrime is divided into two general categories:

1. Property crimes (e.g., copyright infringement, denial of service)
2. Crimes against a person (e.g., cyberbullying, sexual abuse of children)

These categories can be further subdivided into Internet-based crimes and Internet-specific crimes (Kirwan and Power 2013). Several law dictionaries define cybercrime as:


Crime that takes place through the use of computers, computer technology or the Internet. (TheLaw.com 2014)

The common denominator in any of the cybercrime definitions is the fact that technology is either used as part of the criminal activity (i.e., a tool) or is the target of the criminal activity (e.g., hacking, a data breach). Cybercrime in the truest sense does not include criminal behavior where technology is merely ancillary to the criminal activity (e.g., homicide, extortion), even though such cases may involve relevant digital evidence, since we are so wired and connected in our digital lives (Kirwan and Power 2013). Admittedly, digital evidence is essential to the investigation of these types of criminal activities as well, and the same investigative approaches are followed. Digital evidence is the same whether the case is a homicide or a hacking incident (Graves 2013).

Digital Forensics

Digital forensics is a broad term that covers the investigation of any activity where the evidence is potentially digital. There is a caveat: while we use the term digital, not all technology is digitally based. Some current technology is still analog based (e.g., audio recordings), and older technologies used before the change to digital-based media still need to be analyzed. It would probably be more accurate to say digital/electronic forensics, but precise terminology is not necessarily a strength of technology-related fields, although it is a requirement for the forensic sciences. The National Institute of Standards and Technology (NIST) formally defines digital forensics as:

The application of science to the identification, collection, examination, and analysis, of data while preserving the integrity of the information and maintaining a strict chain of custody for the data. (Kent et al. 2006, ES-1)

This definition avoids the issue of differentiating between digital and analog by using the term data. For this chapter, we will adopt this definition as well.
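The chain-of-custody requirement in the NIST definition is, at its core, an append-only record of who handled an item of evidence, what they did, and when. A minimal sketch of such a record in Python (the field names and class design are illustrative assumptions, not any agency's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    item_id: str   # hypothetical exhibit/image identifier
    handler: str
    action: str    # e.g., "seized", "imaged", "transferred"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class CustodyLog:
    """Append-only record of every handling of an evidence item."""
    def __init__(self):
        self._events = []

    def record(self, item_id, handler, action):
        # Events are only ever appended, never edited or removed.
        self._events.append(CustodyEvent(item_id, handler, action))

    def history(self, item_id):
        """Full handling history for one item, in chronological order."""
        return [e for e in self._events if e.item_id == item_id]
```

In practice such records are maintained by case-management systems (often with cryptographic integrity protection), but the principle is the same: an unbroken, documented chain from seizure to courtroom.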

Digital Evidence

Differentiating between data and digital evidence is necessary. The best way to think about these two concepts is to consider a funnel. At the wide end, we have all the data that is stored and available on a computer or computing device. The amount of data on a computer or computing device is staggering, with 8 TB HDDs being standard on a desktop computer and 1 TB SSDs not uncommon on laptops. At the narrow end, we have digital evidence, which is:

Information of probative value that is stored or transmitted in binary form. (SWGDE 2016)

In between, we have the process of deciding which data will assist in answering investigative questions. This process is referred to as the analysis and examination phase and is discussed in more detail later (DFRWS n.d.).


Categories of Forensic Evidence

While numerous types of evidence can be of importance to an investigation (e.g., fingerprints, blood, DNA, hair and fiber, ballistics), for this chapter we will group evidence into two broad categories. The primary reason for using broad categories stems from the fact that, unlike in the other forensic sciences, the types of potential evidence in digital forensics are of such a variety and magnitude (and vary with the kind of cybercriminal case being investigated) that it would be meaningless to discuss digital evidence without some boundaries. For this chapter, it is most logical to use the categories of (1) trace and (2) reconstructive (Kent et al. 2006; Lyle n.d.).

Trace

Trace evidence refers to small/minute particles that are transferred between entities during an interaction (criminal offense). The idea of a transfer is based on Locard's principle of exchange. According to this principle (Encyclopedia.com n.d.):

It is impossible for a criminal to act, especially considering the intensity of a crime, without leaving traces of this presence.

Trace evidence includes pattern evidence and associative evidence. Traditionally, pattern evidence consists of any markings produced when one object encounters another object (NIST n.d.). Typical examples include latent prints, bloodstains, shoe prints, tire marks, handwriting, and even bite marks. Trace evidence is the fodder of the numerous popular CSI shows on TV. Pattern evidence usually includes impression evidence. For digital forensic evidence, it does not make sense to include impression evidence, as this is a physical characteristic that does not translate well to the digital space. Associative evidence ties a suspect to the crime scene, the victim, or some other bit of evidence (Lyle n.d.). Traditional examples include hair and fiber, DNA under the nails or on clothes, and other body fluids. Here again, many kinds of digital evidence are analogous to the traditional associative evidence types.

The analog of trace evidence in the digital world includes data that is unique in some manner, and that uniqueness acts as a "fingerprint" of sorts indicating that a device or user either connected to or accessed another computing device. This could include unique pictures that have been converted to a hash value or pieces of code that are uniquely written or contain unique comments (Rogers and Seigfried-Spellar 2014). Trace evidence can also include log files that show users logging in either locally or remotely, and system or connectivity logs that record IP addresses or MAC addresses that are unique (see Table 1). Internet artifacts such as web cache, temporary Internet files, cookies, web browser histories, and searched terms are also considered to be trace evidence. The leftover remnants of social networking sites or locally cached texts, chats, and emails also fall into this category (Graves 2013).
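Mining log files for the unique identifiers mentioned above (IP addresses, for example) is a routine first step. The following Python sketch illustrates the idea; the log lines and addresses are invented for illustration, and real forensic tools parse each log format explicitly rather than relying on a simple pattern match:

```python
import re

# Hypothetical syslog-style authentication log; purely illustrative.
LOG = """\
Oct 12 03:14:07 host sshd[912]: Failed password for root from 203.0.113.7 port 53222 ssh2
Oct 12 03:14:11 host sshd[912]: Accepted password for alice from 198.51.100.23 port 53230 ssh2
Oct 12 03:15:02 host sshd[915]: Failed password for root from 203.0.113.7 port 53240 ssh2
"""

# Rough IPv4 pattern; does not validate octet ranges.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def remote_addresses(log_text):
    """Collect the unique IPv4 addresses appearing in log entries."""
    return sorted(set(IPV4.findall(log_text)))
```

An address recovered this way is only a lead: it still has to be tied to a subscriber, a device, and ultimately a person through further investigation.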


Table 1 Forensic evidence categories

Category         Types
Trace            Code, code comments, hash values, IP addresses, Internet artifacts, temporary files, log files, software, cache
Reconstructive   Log files, IP addresses, web artifacts, event logs, account information, social network artifacts, texts/chats, email, calendar entries, images, videos

Reconstructive

Reconstructive evidence assists law enforcement officers in better understanding what occurred at the crime scene and the events that may have led up to the criminal activity (Lyle n.d.). Traditional examples here can include bullet trajectory, broken glass, and blood splatter. Digital evidence is no different: there are similar pieces of evidence that are used to try to reconstruct the pattern of events and, in some cases, construct a timeline of events. The timeline is handy in legal proceedings in which a cause-effect relationship needs to be demonstrated. This timeline of event "A" occurring that either directly or indirectly caused event "B," which then resulted in "C," is often the foundation for many cases (Carrier and Spafford 2003). In cybercriminal cases, the establishment of accurate timelines and the ability to succinctly present visual representations of them have become so crucial that most vendors of digital forensic tools (including open source) offer these functions by default.

Reconstructive evidence can be an issue with digital evidence, since not all evidence or potential evidence is stored locally on the device, and the data may have a short life span (Graves 2013). With today's interconnectivity options (WIFI is everywhere), the popularity of the cloud, and the Internet of things (IoT), the computing device (whether it be a mobile phone, tablet, or actual PC) is often just the front end to an online portal where the communications are occurring or data is being stored (Kebande and Ray 2016). This is true for businesses as well, with many companies using cloud-based databases and email systems. When attempting to reconstruct the causal chain, it is vital that all the data that exist are taken into consideration. If these data exist on servers in foreign countries where no treaties are in place, then even knowing the location of the information is not very helpful. If there is no legal way of obtaining the potential evidence, then knowing where it exists is moot (Casey 2011). In some cases, there has been direct interference by foreign governments with investigations where the data may exist in their jurisdiction but the victim either resides in another country or the other country is the victim itself. The Russian meddling in Brexit and the US 2016 elections are prime examples (Kelley 2018).

While locating the evidence is vital for reconstructive purposes, so too is maintaining accurate time stamps on the data (Carrier and Spafford 2003). Without going into detail, for this discussion it is enough to say that all data have time stamps associated with them that capture the time the data were created, copied, modified, or accessed. This is true whether the data or files are a picture, a document, or a spreadsheet. These time stamps are referred to as metadata (data about data) and cannot usually be seen without the use of a tool (application). If the time stamps get altered (which can happen either on purpose or through an automated process, such as the file being scanned by antivirus software or copied across devices), this can negatively impact creating an accurate timeline (Carrier and Spafford 2003). Additionally, most third-party companies (e.g., Internet service providers (ISPs), social network sites (SNSs)) that maintain log files of activity that might be of interest do not retain these logs indefinitely (Casey 2011). If the records are not used for billing or troubleshooting purposes, they are usually deleted after a relatively short period (1–2 days in some cases). As was mentioned, it can take some time for investigators to identify where data (potential evidence) may be located.
These delays can result in the requested data being deleted or destroyed, despite any treaties that may be in existence (Casey 2011). Reconstructive evidence can overlap with trace evidence, since it is how the evidence is used during the investigative process that ultimately determines whether it is reconstructive or not. Some typical examples are log files, time stamps, geolocation information, user logins, web or Internet artifacts, IP addresses, and emails (see Table 1). It should be noted that Table 1 is not meant to be an exhaustive list, but only representative of the types of evidence in each category.

Digital Forensic Phases

While there are several models of digital forensics, the typical phases of a cybercriminal investigation include identification, collection/transportation, examination/analysis, and report/opinion (see Diagram 1) (DFRWS n.d.).

Diagram 1 Digital forensic phases

All these phases deal with data and digital evidence. It should be noted that while these phases are discussed linearly, in some situations it is necessary to go back a step, to collect more data, or to reexamine a device using a different context as the focus for determining what is of value or not. The next section examines each of these phases in more detail. It is important to point out that regardless of the category of digital evidence (trace, reconstructive), the steps are the same.
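The nominally linear model with permitted back-steps can be pictured as a simple state machine; the toy sketch below is illustrative only (the phase names follow this chapter, the functions are invented):

```python
# Phases of the generic process model, in nominal order.
PHASES = ["identification", "collection/transportation",
          "examination/analysis", "report/opinion"]

def next_phase(current):
    """Nominal forward movement through the model."""
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]

def step_back(current):
    """Examiners may return to an earlier phase, e.g., when examination
    reveals a new device that must first be collected."""
    i = PHASES.index(current)
    return PHASES[max(i - 1, 0)]
```

The point of the sketch is simply that the transitions run both ways: examination can send an investigator back to collection, and collection back to identification.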

Identification

In the identification phase, the goal is to identify data that may be evidence and the potential devices that may contain data pertinent to the investigation (see ▶ Chap. 3, "Technology Use, Abuse, and Public Perceptions of Cybercrime") (Kent et al. 2006). Identification sounds easier than it is. If we consider that data are, for the most part, latent (they cannot be seen with the naked eye; thus, we need to use technology to abstract and display them), we start getting a feel for how difficult the task can be. An investigator or technician at a crime scene cannot see the data flowing through the air as WIFI signals or the LTE communications that may be occurring with a mobile phone. Contrary to media portrayals, we cannot see the data that are on a thumb drive, on an external HDD, or even on the drive of a laptop without using technology to abstract and display them for us. Furthermore, the form factor for storage devices (due primarily to the move from magnetic-based technology to solid state) has shrunk to the point where 500 GB of data can be stored on a secure digital (SD) card the size of your fingernail. These devices can be hidden purposefully, if not overlooked accidentally.

The small form factor is not the only issue. With the Internet of Things (IoT), almost anything can be a computing device capable of connecting to the Internet and storing data. Devices such as the Amazon Echo and Google Home Assistant have been identified in cases as containing potential evidence in the form of recorded conversations between victims and the suspect(s) (CBC Radio 2018). Wearable devices such as the Apple Watch can contain geolocation data, texts, or even the heart rate of the wearer at the time of an incident. More recently, gaming systems such as the Sony PS4, Xbox, and Nintendo Switch have become an essential source of data and potential evidence.
These systems have internal storage, and most can have third-party storage added in the form of external USB drives or SD cards. The gaming systems also have in-game or network (e.g., Nintendo Network) messaging capability, either in the form of texts or synchronous communication by voice. Texting evidence is always of interest during cybercriminal investigations, but so too are recorded conversations.

With the popularity of IoT and cloud computing, much of the data that may be of potential interest to a cybercriminal investigation is now stored with third parties, some of which may be in a different country than where the incident occurred (Du et al. 2017; Kebande and Ray 2016). This presents some difficulties related to the jurisdictional authority even to request the data. Additionally, this type of data has a relatively short time to live (TTL), as the companies in some cases do not retain transactional log information for very long (Du et al. 2017).


Collection

Once a device that potentially contains data is identified, the investigator/technician must collect (acquire) the evidence in a manner consistent with evidence admissibility standards in the jurisdiction (Kent et al. 2006). As was introduced at the beginning of the chapter, in the United States this means the FRE and the FRCP for criminal and civil matters, respectively. The proper collection of evidence has historically been somewhat problematic for digital forensics due to the inherent characteristics of data. Data are mostly latent, can be easily destroyed or modified, and can exist in multiple locations at the same time. Changes in technology also impact proper collection, as new technologies may introduce new types of data that are unstructured or whose characteristics are not adequately understood by the vendors that create tools. The fact that the rapid change in technology impacts cybercriminal investigations is an issue that will be discussed in the analysis and examination section.

The standard or "rule of thumb" is to collect data with the minimum amount of changes or modifications to the "scene" (which, in digital forensics, is the device), and if any modifications or changes occur, these must be fully documented, and the impact on the validity and accuracy of the data must be noted (SWGDE 2014). The validity and provenance of the data source must also be maintained. The most common method used to collect the data is to create a forensic image of the device (more precisely, of the storage media in the device, such as the hard drive, solid-state drive, or SD card), if possible (NIST 2005). The difficulty of acquiring a forensic image varies by the type of device in question. With PCs (including Apple computers), the standard method is to use a write blocker. A write blocker prevents the technician or investigator from making any modifications (writes) to the device being imaged (SWGDE 2014).
Write blocking can be accomplished using a specialized piece of software (a software write blocker) or a piece of hardware (a hardware write blocker), which is a physical "bridge" connected between the device (storage media) being imaged and the equipment the technician or investigator is using to store the image (NIST 2005). Hardware write blockers are a more recent innovation and are less prone to failures than software write blockers; thus, hardware write blockers are the preferred method (NIST 2005).

Other devices such as mobile phones and tablets introduce the issue of not being able to use a write blocker with them (SWGDE 2013). This is because these devices need to be able to interact with the operating system and file system to function, and write blocking interferes with this (especially hardware write blockers). A hybrid method of using software to alter some of the system functions on the device, temporarily at least, can be used on some devices to minimize the changes being made to the machine while an image is being acquired (SWGDE 2013).

The forensic image can be a physical or a logical image. A physical image contains an exact copy of all the storage units on a storage device and is referred to as a bit-for-bit image (SWGDE 2016). A logical image is an exact copy of the data stored in a volume/partition (e.g., the C drive); it does not capture any information that may be in the system or administrative areas (SWGDE 2016). A physical image is the "gold standard," as the image is an "exact" replica of the storage device in question, and this can be proven using a mathematical process called hashing.


Hashing is a mathematical function that creates a fixed-length numerical value from variable-length input. The resulting value is sensitive to bitwise changes, and therefore it can be used to prove the integrity of a forensic image (SWGDE 2016). A good analogy here is that of a digital fingerprint. Once a hash total or value is calculated and recorded, a change to any data on the image would result in a completely different hash value (digital signature). Thus, hashing is used as a method to verify that the forensic image is an exact copy of the original and that from time 1 to time 2, nothing has been altered on the image (SWGDE 2016). The courts in most countries accept the comparison and finding of precisely the same hash values as proof of the claims of integrity and trueness (SWGDE 2016). Most forensic imaging software, both commercial and open source, automatically calculates and compares the hash values and provides that information so that the investigator can include it in their report (NIST 2005). The current hashing standards used by most software tools are MD5 and SHA-256.

Transportation

Once the data (potential evidence) are identified and adequately collected, care must be taken when transporting the data back to the lab or other location (National Institute of Justice 2008). Digital evidence has inherent properties that need to be taken into consideration when transporting the evidence from the scene to the lab. The data, and the devices that store the data, are susceptible to electric discharges (static), magnetism (RF signals), humidity, and temperature (National Institute of Justice 2008). These sensitivities require the use of antistatic bags, Faraday containers, and containers that control the relative humidity. If care is not taken, the data can be altered or permanently destroyed. Unlike in CSI episodes, there is no magic button a technician or investigator can use to recover data that no longer exist.
The storage of devices that contain potential evidence also needs to be carefully considered. Certain devices (e.g., mobile phones), if powered off, will lose data held in temporary/volatile memory. This could include the last dialed numbers or texts. These same devices, if left powered on and capable of receiving a signal, can be wiped remotely by a suspect or a confederate. Thus, they need to be stored in a manner that allows them to remain powered on and yet not receive a signal, at least until a forensic image is made of the device. This can be accomplished in several ways, such as using an RF-blocking container or Faraday bag, or by placing the device in airplane mode (SWGDE 2013). Laptops and tablets that contain a battery also need to be shielded from Wi-Fi, as merely disconnecting the device from a power cord does not necessarily power off the machine.

Analysis and Examination

Once the storage devices, images, and/or data are collected and transported to the lab (or functional equivalent), the task of determining what data may become evidence relevant to the investigation at hand begins. Similar to the collection/acquisition phase, digital evidence has peculiarities not found with other types of forensic

21

Forensic Evidence and Cybercrime

437

evidence (SWGDE 2014). As was previously mentioned, the data are latent and cannot be seen without the use of technology. Examination and analysis usually require specialized software that can read the various data types (structured and unstructured), capture metadata such as the date and time stamps of the different pieces of data and the geolocation information on pictures, and deal with the potentially large amount of data that must be reviewed. Cybercriminal investigations are heavily impacted by the problem of big data (Karie and Venter 2015). In some cases, the sheer amount of data would make it impractical if not impossible to examine every single piece of datum manually. In the case of a standard PC that could be found in someone’s home, the amount of data can exceed 4–6 terabytes (TB). There have been cases where a suspect had 20+ TB of storage with over 4 million pictures (see chapter “Technology Adoption and Cybercrimes”). The big data issue has two additional components that impact digital forensics: variety and velocity. As was stated, not all data are created equal. Some data are highly structured, such as databases, while other data are unstructured, such as document files (i.e., variety) (Geradts 2018). The more unstructured the data are, the harder they are to search and to find patterns in. There are also new data types appearing on the market due to the introduction of new or updated technologies. These new data types may not be known to the investigator or to the vendors of the specialized software being used (Geradts 2018). If a data type is not recognizable, it can easily be missed or overlooked. This could hurt the investigation, especially if the data contained information that was either inculpatory or exculpatory. With the increased speeds now available even to home consumers, data are flowing through cyberspace at an ever-increasing speed/velocity (e.g., 150 Mbps) (Tabona and Blyth 2016).
The faster the data are flowing, the more difficult it is to capture data in real time and to analyze them fast enough to be of any use (Tabona and Blyth 2016). Most of the technology used to “tap” data in transit cannot keep up with increases in velocity and quickly becomes overwhelmed, resulting in missed data or even the tool/technology shutting down. This is less than ideal for investigators, since they would have no idea what data were missed (Geradts 2018). The exact process used to filter all of the data down to evidence that is pertinent to the investigation differs based on the context of the inquiry at hand (Serfoji et al. 2015). However, at a minimum, investigators look at emails, documents, spreadsheets, web history, social media posts, geolocation information, pictures, movies, texts, and chats. Techniques such as timeline analysis, frequency analysis, and pattern matching are used to conduct a form of business analytics and intelligence on the data (Rogers and Seigfried-Spellar 2014; Serfoji et al. 2015). The big data problem has resulted in an increased need for automated tools based on natural language processing (NLP) and machine and deep learning (Geradts 2018). Timeline analysis is the method of using the time stamps of files (i.e., modified, accessed, copied) to create a picture of the activity that occurred on a system or device (SWGDE 2016). Modified refers to when the data in a file were altered (SWGDE 2016). Accessed refers to when a file was accessed by the user or a program (SWGDE 2016). Copied refers to when a file was copied to or created on a system or storage device (SWGDE 2016). The timeline is usually depicted as an interactive graph that allows the user to drill down to specific events and times (Rogers and Seigfried-Spellar 2014). A frequency analysis examines the number of times a file has been accessed or viewed, or the number of times that a website was visited or a specific Internet search term was used (Rogers and Seigfried-Spellar 2014). Standard software such as Microsoft Excel or R can be used to create the frequency tables. The results of the frequency analysis can provide insight into what terms, pictures, websites, etc. are essential to the user (Rogers and Seigfried-Spellar 2014). Pattern matching looks at how the data are related to each other in terms of linkages between people, places, and events (Pungila 2012). This can be useful when dealing with international investigations that span several individuals or organized criminal activities. The conventional method used to make the analysis and examination fit within a reasonable time frame is to look for the “low-hanging fruit.” This refers to the fact that with computing technology (which includes mobile phones and tablets), the file systems and applications have built-in defaults which govern which data are stored, where the data are stored, and how long the data are retained before being overwritten. These defaults allow investigators and the vendors of the tools to focus on these default locations first. These defaults should be considered the “floor” of the investigation, not the ceiling, in that these are the areas to be looked at first, but other locations and types of data may need to be examined based on what is found in these default areas. This is commonly referred to as “pulling threads” to see where to look next (Rogers et al. 2006). Examples of default areas include user profiles (home directory), Internet artifacts (e.g., web history, web cache, downloaded files), deleted files, email, chat/text, and recent documents (Rogers et al. 2006).
In many cases, a large amount of data/evidence relevant to a situation is found in these locations (Graves 2013).
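A minimal sketch of the timeline and frequency analyses described above might look as follows in Python. The paths and search terms are hypothetical, and production forensic suites parse timestamps out of the acquired image rather than touching the live file system (which would itself update access times).

```python
import os
from collections import Counter
from datetime import datetime, timezone

def file_timeline(root):
    """Build a simple timeline from file timestamps under root.
    Note: st_ctime is metadata-change time on Unix but creation time
    on Windows."""
    events = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            for label, ts in (("modified", st.st_mtime),
                              ("accessed", st.st_atime),
                              ("changed/created", st.st_ctime)):
                events.append((datetime.fromtimestamp(ts, tz=timezone.utc),
                               label, path))
    return sorted(events)  # chronological order

def frequency_table(items):
    """Count occurrences of search terms, URLs, file names, etc.,
    most frequent first."""
    return Counter(items).most_common()

# Hypothetical usage against a mounted image:
# for when, label, path in file_timeline("/evidence/mounted_image/Users"):
#     print(when.isoformat(), label, path)
```

For example, `frequency_table(["bitcoin", "tor", "bitcoin"])` ranks “bitcoin” first, the kind of signal the chapter notes can reveal what is essential to the user.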

Report/Opinion

The final phase of the process entails the investigator or expert answering a specific legal question or theory (e.g., Suspect X committed the criminal activity). The forensic sciences tend to follow an abductive reasoning approach to arriving at conclusions. Abductive reasoning deals with ill-defined problems and uses incomplete information to make a best “guess” at the correct answer – “inference to the best explanation” (Stanford University & Center for the Study of Language and Information 1997). It is no different with digital forensics. As was discussed previously, even if we were able to collect all possible sources of the data, there is not sufficient time to examine all the data that could be potential evidence. Digital forensics must now rely on automated tools to assist the investigator in focusing on the most likely sources of data that could become evidence, and even then, only a portion of this smaller set of data can be reviewed (Geradts 2018). The report is the articulation of the who, what, when, where, why, and how of the investigation, as well as any answers to the investigative questions (or the steps followed to arrive at a specific conclusion). The exact format of the report depends on the standard operating procedures (SOPs) of the agency or company that employs the author of the report. ASTM International, a standards development and approval organization like the International Organization for Standardization (ISO), has approved a standards document for reporting scientific opinions (ASTM E620-18: Standard Practice for Reporting Opinions of Scientific or Technical Experts) as well as a standard for digital forensics that includes information about reports (ASTM E2763-10: Standard Practice for Computer Forensics). ASTM E620-18 covers the requirements that conclusions be logically based on facts or evidence and that the author indicate which statements are supported facts and which are opinions (“ASTM E620-18 – Standard Practice for Reporting Opinions of Scientific or Technical Experts” n.d.). ASTM E2763-10 addresses techniques and practices including identifying possible evidence, evidence handling and management, imaging, analysis/examination, and documentation, as well as the report (“ASTM E2763 – 10 Standard Practice for Computer Forensics” n.d.). Similar to the other forensic sciences, the opinions and conclusions derived with digital forensics must pass legal/jurisdictional standards for the expert to offer scientific conclusions or opinions. Historically, this has been uneventful for digital forensics, as most judges and juries were intimidated by technology and technical evidence, and as such, the experts were rarely if ever challenged (Meyers and Rogers 2004). Fortunately, this has changed. The lack of challenges was not good for moving the science forward. It also led to the present situation where many investigators and experts depend exclusively on vendor tools, with little or no understanding of how the tools work (Meyers and Rogers 2004).
These tools are little more than black boxes, where the actual workings are secret – so secret, in fact, that there are not even any published error rates for these tools. A potential hurdle in the report phase centers on the certainty of the opinions or conclusions. Currently, digital forensics lacks any statistical approximation of whether the conclusions reached based on the digital evidence examined and analyzed are correct. The courts have grown somewhat accustomed to the idea of probabilities of error from other forensic sciences such as DNA (Brown 2015). Experts in other forensic sciences can qualify their opinions and conclusions using statistical techniques such as likelihood ratios or other probabilistic methods (Christensen et al. 2014). Digital forensics does not yet have even such rudimentary tools at its disposal. It is not unusual for two experts to examine and analyze the same data, identify the same evidence, yet arrive at entirely different conclusions. The courts are left trying to figure out who is correct and who is not. While this is exceptionally vexing for judges, juries are even more stymied by these “dueling experts.” The lack of formalized approaches to certainty and uncertainty in the forensic sciences in general was highlighted in the NAS report to Congress (Bell et al. 2018; US Committee on Identifying the Needs of the Forensic Sciences Community of the National Research Council 2009). The courts to date have been patient regarding this lack of formal rigor, but that patience is not infinite. Defense lawyers are also pushing this issue. It is only a matter of time before a challenge rises to a high enough court for this to become a watershed event – as it was for DNA back in 2003 in the United States. In 2003 there was a president’s initiative to improve the scientific rigor, tools, standards, and training related to DNA evidence processing (National Institute of Justice n.d.). The initiative arose from the US courts’ concern that DNA analysis was being misused and that conclusions were being erroneously reported or their certainty exaggerated, thus undermining the US judicial system and legal process.

Future Challenges

While this chapter has focused primarily on forensic evidence, specifically digital evidence that is of interest to a cybercrime investigation, it has also hinted at some of the current challenges facing the forensic sciences. Like all forensic sciences, digital forensics has problems as well. These challenges are a combination of those that are common to all the forensic sciences and those that are unique to digital forensics (Lillis et al. 2016; Meyers and Rogers 2004). The challenges tied explicitly to digital evidence that have been discussed thus far include:

• Volume of the data that may become evidence, and locating all the possible devices (containers) that could contain potential evidence
• Variety of potential evidence (e.g., structured and unstructured data), and changing technologies that introduce previously unseen types of data or radically change existing data types (e.g., flash cookies, social media)
• Velocity of the data, due to Internet service providers (ISPs) increasing the speed of their networks, and the reliance on vendor black-box tools to accurately and reliably abstract and represent the evidence

It is prudent that we also look at some additional challenges that may be on the horizon. Encryption has long been a potential technical issue for cybercriminal investigations and forensic evidence (Garfinkel 2010). If the data (some of which will hopefully be evidence) are encrypted, then this becomes a huge issue. Encryption can make it impossible to determine what data may be of interest, as it is impossible to read or abstract any information from encrypted data. The technician or investigator could see the data, but they would not be able to understand them. Digital forensics technicians would be able to collect the data, but it would be impossible to decipher them without decryption.
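Because nothing can be read from properly encrypted data, examiners often fall back on heuristics simply to flag content that is likely encrypted or compressed. One common heuristic is byte entropy; the sketch below is illustrative only and is not a technique prescribed by the chapter.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0-8.0). Values near 8 suggest
    encrypted or compressed content; English text is typically around 4-5."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Plain text scores low; random bytes (a stand-in for ciphertext) score
# near the 8.0 ceiling:
# byte_entropy(b"the quick brown fox jumps over the lazy dog")  # roughly 4-5
# byte_entropy(os.urandom(4096))                                # close to 8.0
```

A high-entropy region tells the examiner only that the data are unreadable, not what they contain, which is precisely the problem the chapter describes.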
The field has been concerned about the possibility of the widespread use of encryption, whether turned on by default on the device or operating system being sold, or adopted by the public through free or low-cost add-on encryption software. Until about 3 years ago (2015), the field did not see this occur to any significant extent, and encryption remained a largely theoretical threat. Now, however, most mobile phone manufacturers encrypt the storage media on their phones and the phone’s file system by default. The threat is here, and there have been some marquee cases in the news media where federal law enforcement in the United States has threatened to take a mobile phone manufacturer to court to compel them to decrypt a device or provide a “backdoor” method to law enforcement (Selyukh 2016). Bias is probably the Achilles heel of all the forensic sciences, including digital forensics (Brown 2015; Buerger et al. 2018; Karie and Venter 2015; Koppl 2005). The field of human factors has studied the impact that bias has on the forensic sciences, and the conclusions have been extremely troublesome – so much so that the American Academy of Forensic Sciences (AAFS) has devoted “special” sessions, publications, and thought papers to the issue. Bias directly impacts forensic evidence. The technician or investigator can choose to ignore specific evidence or downplay its importance or weight. This can then directly influence the conclusion. In digital forensics, where it is now impossible to review all the data, failing to focus on the proper devices or waiting too long to request data from third parties can result in thousands of pieces of evidence being overlooked. As was previously stated, the issue of bias or confirmation bias is not unique to digital forensics. What is unique is the fact that while other forensic sciences have attempted to use restricted context (the examiner does not know whether a suspect has been arrested or whether there even is a suspect, and other specifics of the case are not made available to the examiner), this is not practical with digital forensics. Without case specifics, the examiner has no idea what might be essential (Rogers and Seigfried-Spellar 2014). Digital forensics examiners require context to focus the examination, since it is now impractical if not impossible to examine all the data that could be potential evidence. Without context, it is like trying to find a needle in a haystack without knowing one is even looking for a needle – and the haystack is the size of the galaxy.
Additional future challenges are likely to focus on the courts requiring some statistical benchmark for determining how likely it is that a conclusion/opinion is correct (or, perhaps more importantly, incorrect). A requirement to employ some Bayesian probability estimate or likelihood ratio (Koppl 2005) would drastically change how cybercriminal cases are conducted. We must remember that the opinion given by the expert is based on the data that are identified, collected, transported, examined, and then analyzed. Each of these phases has differing levels of potential error and uncertainty. These will all need to be pooled together to arrive at some estimate of the total certainty.
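To make the likelihood-ratio idea concrete, the computation used in other forensic sciences can be sketched as follows. The probabilities are invented purely for illustration and are not drawn from the chapter.

```python
def likelihood_ratio(p_evidence_given_h1, p_evidence_given_h2):
    """LR = P(E | H1) / P(E | H2): how much more probable the evidence is
    under one hypothesis (H1) than under the competing hypothesis (H2)."""
    return p_evidence_given_h1 / p_evidence_given_h2

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * lr

# Invented numbers: suppose an artifact is 80% likely to appear if the
# suspect used the machine (H1), and 2% likely otherwise (H2).
lr = likelihood_ratio(0.80, 0.02)    # about 40-fold support for H1
odds = posterior_odds(1.0, lr)       # even prior odds become roughly 40:1
```

An analogous framework for digital evidence would have to propagate the error and uncertainty from each phase (identification through analysis) into the final ratio, which is exactly what the field currently lacks.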

Summary/Conclusion

This chapter discussed the importance that digital forensic evidence (data) has for the domain of cybercriminal investigations. The chapter introduced two broad categories or types of digital evidence, (1) trace and (2) reconstructive, as well as the primary phases in the process of dealing with digital evidence: (1) identification, (2) collection/acquisition, (3) transportation, (4) examination and analysis, and (5) opinion and report. The chapter did not shy away from outlining some of the issues and challenges that the forensic sciences have in general and digital forensics specifically. The problems inherent to all the forensic sciences – such as investigator bias, meeting the Daubert considerations, measures of certainty, and dealing with the somewhat unrealistic expectations of the courts and the public (Buerger et al. 2018) – will take a coordinated effort from all the forensic sciences to address. The challenges specific to digital evidence (e.g., the volume of data, variety of evidence, location of evidence, encryption, statistical methods for indicating certainty) will take a concerted effort within the digital forensic science discipline by the scientific/academic community, the practitioner community (both private sector and law enforcement), as well as the judiciary. Forensic science is a crucial part of the modern criminal investigative landscape. Advances in science and technology have led to numerous methods that are now commonplace in most law enforcement labs (e.g., latent fingerprint analysis, hair and fiber, DNA). Increasingly common now are digital forensic tools and technologies that allow investigators to examine and analyze evidence from mobile phones, tablets, PCs, and the various social media sites we all visit and post our daily activities to. As we become more dependent on technology, it is likely that all future criminal investigations will include some form of digital evidence (Chung et al. 2017; Kebande and Ray 2016). Thus, it is essential that investigators, prosecutors, defense attorneys, judges, and juries have, at the very least, a rudimentary understanding of how common web/Internet technologies work (e.g., mobile phones, tablets, PCs). Perhaps even more important for this audience is the development of a clearer understanding of how much weight to give both the evidence that is derived from data (digital evidence) and the conclusions/opinions expressed by the experts (Karie and Venter 2015).

Cross-References

▶ Defining Cybercrime
▶ Technology Use, Abuse, and Public Perceptions of Cybercrime

References

7 Ways the CSI Effect is Altering Our Courtrooms (For Better and For Worse) | Rasmussen College. (n.d.). Retrieved 6 Feb 2019, from https://www.rasmussen.edu/degrees/justice-studies/blog/ways-csi-effect-is-altering-our-courtrooms/
ASTM E2763 – 10 Standard Practice for Computer Forensics (Withdrawn 2019). (n.d.). Retrieved 11 Feb 2019, from https://www.astm.org/Standards/E2763.htm
ASTM E620-18 – Standard Practice for Reporting Opinions of Scientific or Technical Experts. (n.d.). Retrieved 11 Feb 2019, from https://webstore.ansi.org/Standards/ASTM/ASTME62018?gclid=Cj0KCQiA14TjBRD_ARIsAOCmO9Y6lSkmjGyJ8w9x8ub-oCPbmSq8CF53wmAftDvLxpqeZH0eZ_PvfDoaAkp7EALw_wcB
Bell, S., Sah, S., Albright, T. D., Gates, S. J., Denton, M. B., & Casadevall, A. (2018). A call for more science in forensic science. Proceedings of the National Academy of Sciences, 201712161. https://doi.org/10.1073/pnas.1712161115.


Brown, C. S. D. (2015). Investigating and prosecuting cybercrime: Forensic dependencies and barriers to justice. International Journal of Cyber Criminology, 9(1), 55–119. https://doi.org/10.5281/zenodo.22387.
Buerger, M., Levin, B. H., & Myers, R. (2018). Futures in forensic science.
Carrier, B., & Spafford, E. (2003). Getting physical with the digital investigation process. International Journal of Digital Evidence, 2(2), 1–20. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.76.757&rep=rep1&type=pdf.
Casey, E. (2011). Digital evidence and computer crime: Forensic science, computers, and the Internet. Academic Press. Retrieved from https://market.android.com/details?id=book-lUnMz_WDJ8AC
CBC Radio. (2018). “Alexa, who did it?” What happens when a judge in a murder trial wants data from a smart home speaker | CBC Radio. Retrieved 25 Nov 2018, from https://www.cbc.ca/radio/day6/episode-417-alexa-as-murder-witness-k-tel-s-legacy-brexit-and-gibraltar-havana-s-mystery-hater-and-more-1.4916536/alexa-who-did-it-what-happens-when-a-judge-in-a-murder-trial-wants-data-from-a-smart-home-speaker-1.4916556
Christensen, A. M., Crowder, C. M., Ousley, S. D., & Houck, M. M. (2014). Error and its meaning in forensic science. Journal of Forensic Sciences, 59(1), 123–126. https://doi.org/10.1111/1556-4029.12275.
Chung, H., Park, J., & Lee, S. (2017). Digital forensic approaches for Amazon Alexa ecosystem. Digital Investigation. https://doi.org/10.1016/j.diin.2017.06.010.
Definition of CYBERCRIME • Law Dictionary • TheLaw.com. (2014). Retrieved 6 Feb 2019, from https://dictionary.thelaw.com/cybercrime/
DFRWS. (n.d.). Digital forensic research conference: A road map for digital forensic research. Retrieved from http://dfrws.org/sites/default/files/session-files/a_road_map_for_digital_forensic_research.pdf
Du, X., Le-Khac, N.-A., & Scanlon, M. (2017). Evaluation of digital forensic process models with respect to digital forensics as a service. https://doi.org/10.1007/128.
Federal Rules of Evidence | LII/Legal Information Institute. (n.d.). Retrieved 17 Nov 2018, from https://www.law.cornell.edu/rules/fre
Forensic Evidence Law and Legal Definition | USLegal, Inc. (n.d.). Retrieved 14 Nov 2018, from https://definitions.uslegal.com/f/forensic-evidence/
Garfinkel, S. L. (2010). Digital forensics research: The next 10 years. Digital Investigation, 7, S64–S73. https://doi.org/10.1016/j.diin.2010.05.009.
Geradts, Z. (2018). Digital, big data and computational forensics. Forensic Sciences Research, 3(3), 179–182. https://doi.org/10.1080/20961790.2018.1500078.
Graves, M. W. (2013). Digital archaeology: The art and science of digital forensics. Addison-Wesley. Retrieved from https://market.android.com/details?id=book-BYNuAAAAQBAJ
Grudzinskas, A. J., Jr., & Appelbaum, K. L. (1998). General Electric Co. v. Joiner: Lighting up the post-Daubert landscape? The Journal of the American Academy of Psychiatry and the Law, 26(3), 497–503. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/9785292.
Heinzerling, L. (2006). Doubting Daubert. Journal of Law and Policy, (Spring 2006). Retrieved from http://www2.law.columbia.edu/fagan/courses/law_socialscience/documents/Spring_2006/Class%203-Developments%20in%20Federal%20Rules%20of%20Evidence/Heinzerling_Doubting_Daubert%5B1%5D.pdf
Karie, N. M., & Venter, H. S. (2015). Taxonomy of challenges for digital forensics. Journal of Forensic Sciences, 60(4), 885–893. https://doi.org/10.1111/1556-4029.12809.
Kebande, V. R., & Ray, I. (2016). A generic digital forensic investigation framework for Internet of Things (IoT). In Proceedings – 2016 IEEE 4th international conference on future internet of things and cloud, FiCloud 2016 (pp. 356–362). IEEE, Vienna. https://doi.org/10.1109/FiCloud.2016.57.
Kelley, L. (2018, July 12). Much like the U.S., the U.K. is investigating Russian meddling in its politics. NPR. Retrieved from https://www.npr.org/2018/07/12/628546565/much-like-the-u-s-the-u-k-is-investigating-russian-meddling-in-its-politics


Kent, K., Chevalier, S., Grance, T., & Dang, H. (2006). Special publication 800-86: Guide to integrating forensic techniques into incident response. Recommendations of the National Institute of Standards and Technology. Retrieved from https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-86.pdf
Kirwan, G., & Power, A. (2013). Cybercrime: The psychology of online offenders. Cambridge University Press. Retrieved from https://market.android.com/details?id=book-U35HVJyADlEC
Koppl, R. (2005). How to improve forensic science. European Journal of Law and Economics, 20(3), 255–286. https://doi.org/10.1007/s10657-005-4196-6.
Lillis, D., Becker, B., O’Sullivan, T., & Scanlon, M. (2016). Current challenges and future research areas for digital forensic investigation. https://doi.org/10.13140/RG.2.2.34898.76489.
Locard’s Exchange Principle | Encyclopedia.com. (n.d.). Retrieved 26 Nov 2018, from https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/locards-exchange-principle
Lyle, D. P. (n.d.). Types of evidence used in forensics. Retrieved 17 Nov 2018, from https://www.dummies.com/education/science/forensics/types-of-evidence-used-in-forensics/
Mangrum, R. C. (1999). Kuhmo Tire company: The expansion of the court’s role in screening every aspect of every expert’s testimony at every stage of the proceedings. Creighton Law Review, 33, 525. Retrieved from https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/creigh33&section=25.
Merlino, M., Springer, V., Kelly, J., & Hammond, D. (2007). Meeting the challenges of the Daubert trilogy: Refining and redefining the reliability of forensic evidence. Tulsa Law Review, 43, 2. Retrieved from http://digitalcommons.law.utulsa.edu/tlr.
Meyers, M., & Rogers, M. K. (2004). Computer forensics: The need for standardization and certification. International Journal of Digital Evidence, 3(2), 1–11.
National Institute of Justice (U.S.). (2008). Electronic crime scene investigation: A guide for first responders. U.S. Dept. of Justice, Office of Justice Programs, National Institute of Justice. Retrieved from https://market.android.com/details?id=book-3FC2ClX3eJcC
NIST. (2005). Hardware Write Blocker (HWB) assertions and test plan (version 1). National Institute of Standards & Technology. Retrieved from https://www.nist.gov/sites/default/files/documents/2017/05/09/hwb-atp-19.pdf
NIST. (n.d.). Pattern and impression evidence | NIST. Retrieved 17 Nov 2018, from https://www.nist.gov/oles/pattern-and-impression-evidence
President’s DNA Initiative | National Institute of Justice. (n.d.). Retrieved 23 Nov 2018, from https://nij.gov/topics/forensics/dna-initiative/pages/welcome.aspx
Pungila, C. (2012). Improved file-carving through data-parallel pattern matching for data forensics. In 2012 7th IEEE international symposium on applied computational intelligence and informatics (SACI) (pp. 197–202). IEEE, Vienna. https://doi.org/10.1109/SACI.2012.6250001.
Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). The Daubert/Kumho implications of observer effects in forensic science: Hidden problems of expectation and suggestion. Ssrn, 90(1). https://doi.org/10.2139/ssrn.301408.
Rogers, M. K., & Seigfried-Spellar, K. (2014). Using Internet artifacts to profile a child pornography suspect. Journal of Digital Forensics, Security and Law, 9(1), 1–10.
Rogers, M. K., Mislan, R., Goldman, J., Wedge, T., & Debrota, S. (2006). Computer forensics field triage process model. Security, 1, 27–40.
Selyukh, A. (2016). Encryption: Where are we a year after the San Bernardino shooting that started the Apple-FBI debate?: All tech considered: NPR. Retrieved 28 Nov 2018, from https://www.npr.org/sections/alltechconsidered/2016/12/03/504130977/a-year-after-san-bernardino-and-apple-fbi-where-are-we-on-encryption
Serfoji, R., Angelopoulou, O., & Jones, A. (2015). Extracting intelligence from digital forensic artefacts. 6(8), 3375–3379. https://doi.org/10.13040/IJPSR.0975-8232.6(8).3375-79.


Stanford University, & Center for the Study of Language and Information (U.S.). (1997). Stanford encyclopedia of philosophy. Stanford University. Retrieved from https://plato.stanford.edu/entries/phenomenology/
Summary of the Rules of Evidence – FindLaw. (n.d.). Retrieved 14 Nov 2018, from https://corporate.findlaw.com/litigation-disputes/summary-of-the-rules-of-evidence.html
SWGDE. (2013). Scientific working group on digital evidence: SWGDE best practices for mobile phone forensics. Retrieved from https://www.swgde.org/documents/CurrentDocuments/SWGDEBestPracticesforMobilePhoneForensics
SWGDE. (2014). SWGDE best practices for computer forensics (Vol. 1). Retrieved from https://www.swgde.org/documents/CurrentDocuments/SWGDEBestPracticesforComputerForensics
SWGDE. (2016). Scientific working group on digital evidence: SWGDE digital & multimedia evidence glossary. Retrieved from https://www.swgde.org/documents/CurrentDocuments/SWGDEDigitalandMultimediaEvidenceGlossary.
Tabona, O., & Blyth, A. (2016). A forensic cloud environment to address the big data challenge in digital forensics. In 2016 SAI computing conference (SAI) (pp. 579–584). IEEE, Vienna. https://doi.org/10.1109/SAI.2016.7556039.
U.S. Committee on Identifying the Needs of the Forensic Sciences Community of the National Research Council. (2009). Strengthening forensic science in the United States: A path forward. Washington, DC: National Academies Press. https://doi.org/10.1016/0379-0738(86)90074-5.

Part III Crime Theory

Deterrence in Cyberspace: An Interdisciplinary Review of the Empirical Literature

22

David Maimon

Contents

Introduction .............................................................................................. 450
Deterrence Theory: General Principles ................................................... 451
Criminological Literature ......................................................................... 453
Law Literature .......................................................................................... 456
Information Systems Literature ............................................................... 457
Political Science Literature ...................................................................... 459
Policy Implications and Directions for Future Research ........................ 460
Conclusions .............................................................................................. 462
Cross-References ...................................................................................... 463
References ................................................................................................ 463

Abstract

The popularity of the deterrence perspective across multiple scientific disciplines has sparked a lively debate regarding its relevance in influencing both offenders and targets in cyberspace. Unfortunately, due to the invisible borders between academic disciplines, most of the published literature on deterrence in cyberspace is confined within unique scientific disciplines. This chapter therefore provides an interdisciplinary review of the issue of deterrence in cyberspace. It begins with a short overview of the deterrence perspective, presenting the ongoing debates concerning the relevance of deterrence pillars in influencing cybercriminals' and cyberattackers' operations in cyberspace. It then reviews the existing scientific evidence assessing various aspects of deterrence in the context of several disciplines: criminology, law, information systems, and political science. This chapter ends with a few policy implications and proposed directions for future interdisciplinary academic research.

D. Maimon (*)
Georgia State University, College Park, MD, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_24


Keywords

Deterrence · Cybercrime · Empirical evidence

Introduction

The considerable literature on the topic of cyber-deterrence continues to grow. Indeed, the popularity of deterrence-based policies in fighting offline crime (Nagin 2013), maintaining diplomatic relationships between countries (Quackenbush 2011), and combating the spread of diseases (Milne et al. 2000) has cleared the ground for the migration of deterrence-based approaches to cyberspace. In turn, this has sparked a lively debate regarding the relevance of this approach in influencing both cyberattackers' (individuals and countries) malicious and non-malicious online behaviors (Taddeo 2018; Wilner 2019) and targets' online self-protective behaviors (Maimon et al. 2017).

Unfortunately, due to the invisible yet rigid boundaries erected between academic disciplines, most of the published literature on deterrence in cyberspace is confined to specific areas and subpopulations that are of limited interest across scientific fields. For example, while criminologists are interested in understanding how sanction threats and punishment influence cybercriminals' behaviors prior to, during the progression of (Maimon et al. 2019), and at the culmination of an online criminal event, information systems scholars are more interested in understanding the effectiveness of deterrence-based policies in addressing employees' computer misuse and increasing compliance with their employers' cybersecurity policies (D'Arcy and Herath 2011). Similarly, while law scholars are interested in understanding the necessity of designated substantive cybercrime laws for deterring illegal online activities in the general public and among convicted offenders (Mayer 2015), political scientists tend to focus their debate on the relevance of deterrence-based principles in governing cyber conflicts between nations (Taddeo 2018).
Drawing on the notion that cybercrime research should be of an interdisciplinary nature, generate a comprehensive understanding of relevant concepts in the context of several related fields, and support concrete scientific contributions in each relevant field of study, this chapter intends to provide an interdisciplinary review of the literature around the issue of deterrence in cyberspace. It begins with a short overview of the theoretical premises laid out by deterrence theoreticians and then presents the ongoing debates concerning the relevance of the theory for influencing cybercriminals. The next sections review the existing documented scientific efforts aimed at assessing the validity of different dimensions of deterrence theory, in the disciplinary contexts of criminology, law, information systems, and political science. These efforts focus on cyber-dependent crimes, i.e., illegal activities that can only be performed using a computer, computer networks, or other forms of information communication technology such as hacking and DDoS attacks (McGuire and Dowling 2013). Finally, the chapter’s conclusion proposes a few policy implications and recommends directions for future interdisciplinary academic research.


Deterrence Theory: General Principles

Deterrence theory has its roots in the writings of the eighteenth-century philosopher Cesare Beccaria (1963 [1764]), who proposed that humans are self-interested and rational decision-makers, driven in their actions by an economic "hedonistic calculus" whereby they seek to maximize pleasure and minimize pain. One key principle of the theory suggests that individuals are open to "deterrence" inasmuch as raising the costs of a behavior through sanctions lowers their willingness to pursue that course of action. Emphasizing the difference between specific and general deterrence, Beccaria explained that punishments for criminal behaviors aim both at preventing recidivism among convicted criminals (i.e., specific deterrence) and at keeping the general public from engaging in crime (i.e., general deterrence). Ultimately, the theory predicts that while forming expectations regarding the future outcomes of his or her behaviors, an individual's fear of certain, swift, and severe punishment can translate into avoiding criminal behavior altogether (Beccaria 1963 [1764]).

Explaining the delicate balance between severe yet proportional punishments, Beccaria suggested that punishments should be proportional to the harm inflicted by the criminal act and that more serious crimes should be followed by more serious punishments. Still, he stressed that it is the certainty of punishment, not its severity, that leaves a lasting, deterring impression on the minds of individuals. Accordingly, the certainty of punishment carries a more substantial deterrent effect than severe punishment, since the fear of even a severe punishment will fail to translate into deterrence if it is accompanied by the hope that one may escape that punishment.
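Beccaria's cost-benefit logic can be stated compactly. The sketch below renders the "hedonistic calculus" as a simple expected-utility comparison; the function name, parameters, and numbers are illustrative assumptions introduced here, not measures drawn from the deterrence literature.

```python
# A minimal sketch of the "hedonistic calculus" as an expected-utility
# comparison. All names and numbers are illustrative assumptions:
# p_certainty is the perceived probability of punishment, severity its
# perceived cost, and celerity_discount (0-1) downweights delayed sanctions.

def offending_is_rational(benefit: float,
                          p_certainty: float,
                          severity: float,
                          celerity_discount: float = 1.0) -> bool:
    """Return True if the expected gain from offending exceeds its expected cost."""
    expected_cost = p_certainty * severity * celerity_discount
    return benefit > expected_cost

# Beccaria's claim that certainty outweighs severity: doubling severity
# leaves the decision unchanged here, while a modest rise in perceived
# certainty tips it toward desistance.
print(offending_is_rational(benefit=20, p_certainty=0.05, severity=100))  # True
print(offending_is_rational(benefit=20, p_certainty=0.05, severity=200))  # True (20 > 10)
print(offending_is_rational(benefit=20, p_certainty=0.25, severity=100))  # False (20 < 25)
```

The last line also illustrates why hope of escape undermines severity: a delayed or uncertain sanction (celerity_discount below 1) shrinks the expected cost regardless of how harsh the nominal punishment is.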
As part of his recommendations to criminal justice systems, Beccaria advised authorities to publicize laws (in order to avoid the threat of tyranny) and to make these laws as clear and simple as possible in order to support deterrence efforts. Beccaria's ideas and theoretical principles were conveyed in a classic essay (On Crimes and Punishment), which condemned the punitive approaches taken by the Italian criminal justice system during the eighteenth century when dealing with culprits. Beccaria's essay, along with the work of Bentham (1789) in England, paved the way for a reformation of the early criminal justice systems in Europe and set the stage for the emergence of the criminological field of study.

In parallel to the expansion of deterrence-based policies among criminal justice agencies worldwide and their focus on preventing crime by individuals, the theoretical tenets of the deterrence perspective have also proved useful for guiding sovereign countries' political courses when dealing with rival international players (Jervis 1979). Specifically, Schelling (1980) suggested that a nation can commit itself to a deterrence strategy intended to prevent other nations from opportunistic aggression, by threatening some punishment against potential aggressors and promising rewards for positive treatment. Explaining the deterring equation further, Snyder argued that "deterrence is a function of the total cost-gain expectations of the party to be deterred, and these may be affected by factors other than the apparent capability and intention of the deterrer to apply punishment or confer rewards" (1961, p. 9). Credible threats by a deterring party are key in this sense for instilling
fear of consequences (Schelling 1966). Deterrence as a coercive national strategy has been discussed in the literature since World War II, yet it started to gain popularity during the 1950s and the Cold War era (Jervis 1979).

Since discussions of cyber-deterrence are relevant at both the individual and group (mainly the state) levels, relevant academic research has been published in multiple disciplines, including criminology, law, information systems, and political science. Since their studies are situated in the context of cyberspace, scholars from all of these fields reflect upon the relevance of deterrence and the applicability of the approach in preventing cyber-dependent crimes. On the one hand, several scholars believe that the implementation of deterrence-based strategies (e.g., sanctions and sanction threats) in cyberspace is prone to failure, since the inherently anonymous nature of this space complicates the task of attack attribution (Nye 2017) and increases online offenders' ability to escape penalties for their illegitimate online behaviors (Harknett 1996; Harknett et al. 2010; Denning and Baugh 2000). This claim is supported by the notion that potential offenders learn through trial and error that the certainty of being detected and punished for a criminal act is relatively low, and so they initiate illegitimate behaviors. Since the certainty of detection and punishment for cyber-dependent crimes is even lower than for non-cybercriminal events, owing to law enforcement's lack of preparedness to deal with cyber-dependent crimes (Dupont 2017), the enforcement of sanctions and sanction threats in a computing environment is predicted to play an insignificant role in preventing the occurrence of cyberattacks (Lupovici 2011). In contrast, other scholars contend that despite the complexities involved, attribution can still be achieved in cyberspace (Rid and Buchanan 2015; Tor 2017).
Still others suggest that it is unnecessary to identify specific individuals in order for deterrence to take effect in cyberspace (Goodman 2010). Accordingly, the introduction of situational deterrence cues in an attacked cyber environment could trigger a predictable avoidance response from an online offender and consequently attenuate the consequences of an online criminal event. For instance, since detection of a system trespassing event results in increased efforts by legitimate users to deny trespassers access to the attacked computer (Stoneburner et al. 2002), implementing surveillance measures in a computing environment may lead system trespassers to overestimate the risk of detection on the system, devote increased effort to avoiding detection and hiding their presence, and even reduce harmful activity on the system. Therefore, even though the deployment of deterrence-based measures in cyber environments will not necessarily prevent the occurrence of online crimes or result in official sanctions, it should increase offenders' efforts to avoid detection and restrict the scope of their illegitimate activity during the progression of an online criminal event.

To assess the arguments raised by adherents of these two camps, scholars within these four academic disciplines have tested the validity of deterrence-based arguments in influencing online criminals and their targets. The next sections review the specific theoretical adjustments scholars have made while using the deterrence perspective to guide academic research, as well as the empirical literature published within each relevant discipline.


Criminological Literature

In his original discussion of the effectiveness of punishment in preventing offenders' subsequent involvement in crime (i.e., recidivism), Beccaria proposed that certain, severe, and swift punishments would be more effective in deterring criminal behaviors (Beccaria 1963 [1764]). All in all, findings from extensive criminological research indicate that assigning more severe punishment (i.e., longer prison sentences) carries only a modest deterrent effect, whereas increasing the certainty of detection and punishment (e.g., by deploying more police presence in strategic locations) results in a consistent deterrent effect (Nagin 2013).

In addition to testing different aspects of classic deterrence in offline environments such as residential neighborhoods (Braga and Weisburd 2012) and schools (Maimon et al. 2012), as well as elaborating the difference between general and specific deterrence (Pratt et al. 2006), contemporary criminologists have examined further aspects of deterrence, including the impact of punishment avoidance on an individual's decision to initiate a criminal event (Stafford and Warr 1993), the distinction between objective and subjective sanctions (Paternoster 1987), the communication platforms that could be used to convey a coherent deterring message (Geerken and Gove 1974), and the difference between informal and formal sanctions in deterring individuals' involvement in crime (Anderson et al. 1977). One additional theoretical elaboration was proposed by Gibbs (1975), who differentiated between absolute and restrictive types of deterrence. Gibbs (1975) conceptualized absolute deterrence as an individual's total avoidance of criminal activity due to fear induced by some perceived risk of punishment. Restrictive deterrence, on the other hand, is defined as the (partial) curtailment of a certain type of criminal activity in order to reduce the risk of punishment.
Although deterrence-based research has dominated the criminological discipline for the last five decades, driving numerous investigations of the relationships between key theoretical constructs of deterrence and a wide range of offline crimes (Nagin 1998, 2013), empirical investigations of deterrence-based questions in cyberspace only started to emerge during the late 1990s. Skinner and Fream (1997) investigated the relationships between undergraduate students' perceptions of punishment severity and certainty and their engagement in cybercrime (specifically digital piracy, guessing passwords, manipulating files without permission, system trespassing, and writing malware). Their findings suggest that students' perceptions of punishment severity were significantly correlated only with system trespassing. Similarly, Morris and Blackburn (2009) analyzed data collected from a different sample of undergraduate students. These scholars reported that a measure tapping students' assessment of the chances of getting caught and their perceptions of severe punishment was significantly associated with password guessing, attempted hacking, and file manipulation. However, the theoretical framework that guided these two studies was that of social learning theory (Akers 2017). Moreover, although these studies offer preliminary investigations of the relationship between an individual's perception of punishment severity and certainty and his or her
involvement in various online crimes, they still leave something to be desired due to the questionable operationalization of key deterrence constructs (especially in Morris and Blackburn 2009) and their low alpha scores, the student-based samples they rely upon, and the cross-sectional nature of the data. More recently, Holt et al. (2017) reported a significant association between students' perception of law enforcement's likelihood to quickly recognize a cybercrime event and their willingness to engage in an ideologically motivated cyberattack against a foreign country. However, this research suffers from problems similar to those observed in Skinner and Fream (1997) and Morris and Blackburn (2009). In fact, Holt and associates' reliance on students' responses to vignettes for constructing their dependent variables is problematic in the context of deterrence, since the subjects' lack of realistic understanding of the true costs (and benefits) of hacking casts a shadow on the validity of the constructs they create.

Taking a somewhat different approach toward investigating the relationship between law enforcement reports of detection of online crime events and cybercrime, Guitton (2012) collected data on the number of attacks reported against businesses in France, Germany, and the UK between the years 2003 and 2010 and then correlated the data with several proxies for law enforcement's successful operations in cyberspace. Findings from his analyses suggest that the rate of newspaper articles reporting cybercrime incidents with a lack of attribution is positively related to the number of cyberattacks reported against businesses in each of the observed countries. Given the serious methodological difficulties embedded in Guitton's approach to data collection, any conclusion drawn regarding the effectiveness of attribution should be taken cautiously.
Although early criminological research mainly employed survey designs and student-based samples to explore the relationships between deterrence-based constructs and cybercrime, several studies have since used experimental research designs to investigate whether different aspects of deterrence influence the progression of cyber-dependent crimes (Maimon et al. 2014; Wilson et al. 2015; Testa et al. 2017; Maimon et al. 2019). These studies adopted Gibbs' (1975) conceptualization of restrictive deterrence to guide their efforts in assessing the effectiveness of deterrence-based interventions in shaping the progression of system trespassing events.

Maimon et al. (2014), for example, tested the effect of a warning banner in an attacked computer system on the progression, frequency, and duration of system trespassing events. Deploying a large set of target computers built for the sole purpose of being attacked (i.e., honeypots) on the Internet infrastructure of a large US university, these scholars revealed that although a warning banner did not lead to the immediate termination of trespassing incidents or reduce their frequency, it did result in a shorter average duration of system trespassing incidents. Interestingly, the effect of a warning message on the duration of repeated trespassing incidents was attenuated on computers with a large bandwidth capacity. Stockman et al. (2015) offered further support for these findings. Testa et al. (2017) explored the effect of a warning banner in mitigating hackers' levels of activity (i.e., roaming the attacked system and manipulating file permissions) in an attacked computer system while considering the level of administrative privileges held by the system trespasser on the attacked computer. Analyzing
data collected by Maimon et al. (2014) in their second experiment, Testa and associates (2017) reported that the presence of a warning banner on an attacked computer system had no statistically significant effect on the probability of either navigation or file permission change commands being entered on the system. However, when testing the effect of the warning banner on computers attacked by system trespassers with nonadministrative privileges, the authors reported that a warning banner substantially reduced the use of both navigation and file permission change commands, compared to the no-warning computers. More recently, Maimon et al. (2019) analyzed data collected in a randomized trial deployed in China and reported that intruders are less likely to use "clean tracks" commands when they have been detected by a legitimate user of the attacked computing environment but no subsequent sanction threat is presented.

In addition to investigating the effectiveness of sanction threats in deterring the progression of hacking incidents, several scholars have investigated the effect of surveillance and detection signs in restricting the scope of hackers' illegitimate behaviors while taking over a system. Wilson and associates (2015), for example, assessed the effect of a surveillance banner on the probability of commands being entered in the attacked computer system. They found that the presence of a surveillance banner in the attacked computer systems reduced the probability of commands being typed in the system during longer initial system trespassing incidents. Further, they reported that the probability of commands being typed during subsequent system trespassing incidents (on the same target computer) was conditioned by the presence of a surveillance banner and by whether commands had been entered during previous trespassing incidents. Using the same data, Maimon et al.
(2019) investigated whether the level of ambiguity regarding the presence of surveillance in an attacked computer system influences system trespassers' likelihood to clean their tracks during the progression of an event. Their findings indicate that the presence of unambiguous signs of surveillance (i.e., the presence of both a surveillance banner and a surveillance program in the attacked system) increases the probability of clean tracks commands being entered on the system.

Despite the growing use of honeypots for understanding system trespassers' behaviors during the progression of a criminal event among criminologists (Maimon et al. 2014; Wilson et al. 2015) and computer scientists (Farinholt et al. 2017; Rezaeirad et al. 2018), these tools present some methodological challenges to scholars (Holt 2017). For starters, while these simulated environments are indistinguishable from standard legitimate devices for less sophisticated hackers, fingerprinting techniques can be used by more skilled hackers to distinguish between regular online environments and honeypots (Mohammadzadeh et al. 2013). In addition, honeypots are able to measure explicit actions but are unable to measure the fundamental attitudes, beliefs, and capabilities of the intruders who interact with them. Finally, honeypots are unable to detect communications such as warnings and recommendations between hackers that may alter behavior within a honeypot (Holt 2017). Still, the usefulness of honeypots in understanding system trespassers' responses to various computer configurations during the progression of a criminal event is unique, and these findings should guide the design of more secure computing environments.
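The warning-banner manipulation used across these experiments can be illustrated with a short sketch. The listener below is a toy, low-interaction stand-in for the instrumented honeypots described above: it optionally presents a warning banner to a connecting "intruder" and logs the session. The banner text, logged fields, and port handling are illustrative assumptions, not details of the cited studies.

```python
# Toy low-interaction honeypot listener illustrating the warning-banner
# manipulation. Everything here (banner text, logged fields) is an
# illustrative assumption, not a detail of the cited experiments.
import queue
import socket
import threading
import time

WARNING_BANNER = (b"WARNING: Unauthorized access to this system is "
                  b"prohibited and may be monitored and prosecuted.\r\n")

def serve_once(show_banner: bool, log: list, port_q: "queue.Queue[int]") -> None:
    """Accept a single connection, optionally send the banner, log the session."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))           # OS-assigned ephemeral port
        srv.listen(1)
        port_q.put(srv.getsockname()[1])     # tell the "intruder" where to connect
        conn, addr = srv.accept()
        started = time.time()
        with conn:
            if show_banner:                  # the experimental manipulation
                conn.sendall(WARNING_BANNER)
            first_command = conn.recv(1024)  # first command typed by the intruder
            log.append({"peer": addr[0],
                        "command": first_command.decode(errors="replace").strip(),
                        "duration_s": round(time.time() - started, 3)})

# Usage: run one honeypot session and connect as a simulated intruder.
log, port_q = [], queue.Queue()
t = threading.Thread(target=serve_once, args=(True, log, port_q))
t.start()
with socket.create_connection(("127.0.0.1", port_q.get(timeout=5))) as intruder:
    banner = intruder.recv(1024)             # intruder sees the warning banner
    intruder.sendall(b"ls /\r\n")            # simulated trespasser command
t.join()
print(log[0]["command"])                     # ls /
```

Comparing session duration and command counts between banner and no-banner conditions (show_banner True vs. False) mirrors, in miniature, the restrictive-deterrence outcome measures used in the studies above.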


Law Literature

Substantive criminal laws set behavioral standards for individuals in society, detail legal rules that forbid specific types of behaviors, and elaborate the potential legal sanctions imposed for deviating from these laws. Since cybercrime has become a serious threat to individuals, organizations, and governments all around the world, many countries have realized the necessity of establishing an arsenal of well-defined cybercrime laws that can guide law enforcement agencies' efforts to pursue online offenders (Brenner 2001) and thus have enacted laws that prohibit specific types of behaviors with computers and computer networks. For example, in the USA, the Computer Fraud and Abuse Act 2008 prohibits accessing a computer without authorization or using a computer to defraud and extort. Similarly, in the UK, the Computer Misuse Act 1990 makes it illegal to gain improper access to a computer or to commit theft and extortion using computers. China, for its part, has amended the relevant provision of its criminal code twice (Amendment VII to the Criminal Law of the People's Republic of China, 2009) to prevent illegal online activities. Importantly, in addition to cybercrime-specific laws, many countries also rely on laws designed to prevent terrestrial crimes when targeting certain types of cybercriminals. The underlying premise behind the enactment and enforcement of official cybercrime laws draws on the assumption that rational humans will be deterred from engaging in illegitimate online activities once threatened by harsh, immediate, and certain punishments for initiating these behaviors.
However, although extensive research has explored the effectiveness of familiarity with laws and the administration of criminal justice procedures across different junctions of the criminal justice system in deterring the onset of individuals' criminal careers and recidivism (Paternoster 2010), we know little about the effectiveness of cybercrime laws in preventing cybercrime incidents. Still, recent evidence on the effectiveness of US cybercrime laws in preventing online crimes is starting to emerge.

Mayer (2015), for example, analyzed data from hundreds of civil and criminal pleadings that were processed in the USA between the years 2005 and 2012 and proposed that the Computer Fraud and Abuse Act (CFAA), which aims to prevent unauthorized access to computers and password trafficking, cannot deter cybercriminals. Since the potential benefits from initiating a cybercrime incident during those years outpaced the potential punishment enforced in both criminal and civil cases, the deterrence benefits of this law are negligible. Similarly, in a series of papers, Kigerl (2009, 2015, 2016, 2018) explored the potential impacts of the Controlling the Assault of Non-Solicited Pornography and Marketing (CAN SPAM) Act of 2003 on different aspects of email spamming; the CAN SPAM Act regulates the way unsolicited commercial emails are sent to email users and the content that these email messages deliver. Analyzing data collected using multiple "honey-net" email addresses posted online for spammers to find and send spam emails to, Kigerl reported mixed findings regarding the deterrent effect of this act. For example, while the CAN SPAM Act had no effect on the amount of spam sent to targets (Kigerl 2009, 2016) or on the probability that
spammers would embed their physical address in the spam email, the enactment of this act is positively associated with spammers adding a verbal description of the email in the email's subject line. Integrating the honey-net data with data collected from news articles published on the topic of the CAN SPAM Act, Kigerl (2016) reported that the number of ongoing CAN SPAM trials reported in popular news outlets is associated with a reduction in the amount of spam email sent, particularly in the USA (Kigerl 2018). In contrast, the volume of news articles reporting spammers' detention appears to be positively associated with the volume of spam (Kigerl 2016), as well as with violations of email header forgery laws (Kigerl 2015).

Finally, Hui et al. (2017) investigated the potential deterrent effect of the 2001 Convention on Cybercrime (COC), an international treaty on crimes committed over the Internet that seeks to improve international cooperation between nations in cybercrime investigations and to harmonize national cybercrime laws, in terms of reducing the volume of DDoS attacks against enforcing countries. Analyzing data on DDoS attacks reported in 106 countries during 177 days between 2004 and 2008, these authors reported that enforcing the COC decreased DDoS attacks by at least 11.8%. Moreover, Hui and associates (2017) observed that enforcement of the COC resulted in an increase in the number of DDoS attacks against non-enforcing countries.

Although key for generating an initial understanding of the effect of cybercrime laws in deterring online criminal activities, the problematic nature of the samples and data employed throughout the studies reported in this section should be considered carefully. First, scholars' reliance on secondary data and unfamiliarity with the full extent of the methodology behind the original data collection may raise questions regarding the validity of some of the constructs composed by the scholars.
Moreover, drawing on news websites and popular media to construct key independent measures may introduce some selection bias, since only some cases end up in the media. Finally, failure to control for internal processes that occur at the organized-crime group level, and that may influence the volume of online crime (Krebs 2014), calls into question the findings reported in these papers.
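The kind of compliance signals Kigerl coded in collected spam can be illustrated with a small feature extractor. The heuristics below (a US ZIP-code pattern as a proxy for a physical postal address, an "ADV"-style subject label, and an opt-out keyword) are simplified assumptions for illustration, not the operationalizations used in the original studies.

```python
# Illustrative extractor for CAN SPAM-style compliance signals in a raw
# email. The regex and keyword heuristics are simplified assumptions, not
# the coding scheme used in Kigerl's papers.
import re
from email.parser import Parser

def can_spam_signals(raw_email: str) -> dict:
    msg = Parser().parsestr(raw_email)
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    subject = msg.get("Subject", "")
    return {
        # Proxy for a physical postal address: a US ZIP code in the body.
        "has_postal_address": bool(re.search(r"\b\d{5}(-\d{4})?\b", body)),
        # Subject line labels the message as commercial.
        "labeled_subject": subject.upper().startswith(("ADV", "[AD]")),
        # Unsubscribe mechanism mentioned anywhere in the body.
        "has_opt_out": "unsubscribe" in body.lower(),
    }

sample = """From: deals@example.com
Subject: ADV: limited offer
To: victim@example.org

Buy now! Reply to opt out or click unsubscribe.
Acme Corp, 123 Main St, Springfield, IL 62701
"""
print(can_spam_signals(sample))
# {'has_postal_address': True, 'labeled_subject': True, 'has_opt_out': True}
```

Counting such signals across a honey-net corpus before and after an enforcement event is, in essence, the measurement strategy behind the mixed findings reported above.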

Information Systems Literature

Consistent with the criminological literature, which focuses on understanding different aspects of punishment in preventing online crime, empirical attention has been devoted within the information systems field to exploring different aspects of sanctions in preventing cyber-dependent crimes. However, while criminologists focus on online offenders, information systems scholars mostly aim to understand computer misuse by employees in organizations, as well as employees' compliance with organizational security policies (Cram et al. 2017). In general, findings regarding the effectiveness of sanctions in reducing employees' computer misuse and violation of information security policies are mixed (D'Arcy and Herath 2011). For example, although some research reports that punishment severity decreases
intentions to violate information security policies, technology misuse, and computer abuse (D'Arcy et al. 2009; Cheng et al. 2013), other studies find this effect in the USA only (Hovav and D'Arcy 2012), while still others do not observe this relationship at all (Hu et al. 2011). Moreover, the effect of sanction certainty in reducing intentions to misuse information systems was significant for specific populations only (D'Arcy et al. 2009; Hovav and D'Arcy 2012). Finally, several studies find an insignificant effect of sanction celerity on individuals' violation of information security policies (Hu et al. 2011). Still, Barlow et al. (2013) observed that clearly communicating sanctions is key for reducing intentions to violate information security policies among employees.

Since several scholars believe that non-compliance with and violations of security policies are behaviors distinct from compliance behaviors (Guo 2013), extensive research has also assessed the role of deterrence in encouraging employees' compliance with organizational cybersecurity policies. In fact, Sommestad et al.'s (2014) systematic review of the key variables that influence information security compliance behaviors suggested that deterrence-based sanctions are stronger predictors of policy compliance than of non-compliance. Herath and Rao (2009a), for example, reported that sanction certainty increases employees' intention to comply with information security policy. Operationalizing subjects' assessments of detection probability as a proxy for sanction certainty, Li and associates (2010) further confirmed this finding. However, both studies failed to observe a significant relationship between perception of punishment severity and compliance with organizational security policies. Chen and associates (2012) reported that in addition to the effect of punishment certainty, employees' high perceived certainty of rewards increased their intentions to comply with information security policy.
In addition to exploring the effects of deterrence-based strategies on employees' computer abuse and compliance/non-compliance with security policies, extensive IS research has investigated ways in which rewards and punishments could influence employees' decisions to engage in self-protective behaviors (Herath and Rao 2009b; Johnston and Warkentin 2010; Siponen et al. 2010). This line of research draws on protection motivation theory (PMT) (Rogers 1975, 1983), which suggests that individuals are more likely to protect themselves from potential risks after receiving fear-arousing recommendations. Specifically, two processes must occur for a person to engage in an adaptive protective response. First, in the threat-appraisal process, the threat and the fear it generates, which inspire protection motivation, must weigh more heavily than the maladaptive rewards earned by not engaging in protective behavior. Second, in the coping-appraisal process, a person's response efficacy and self-efficacy must outweigh the response costs of engaging in the protective behavior (Rogers 1975).

Consistent with the samples used to investigate deterrence-based premises in the information systems field, most of the empirical research employing PMT draws on data collected from samples of organizational employees. Herath and Rao (2009b) reported that employees make inaccurate predictions about the probability of experiencing a security breach in their organizations, which in turn results in non-compliance with organizational security policies. In contrast, employees'
accurate assessments of their organizational vulnerability to information security threats were found to have a significant effect on their intentions to comply with security policies (Siponen et al. 2010). Moreover, Workman et al. (2008) reported that employees’ subjective assessments of risk severity as a result of a breach of their confidential information, as well as their perceived vulnerability to cyber-dependent crime, were negatively associated with failure to apply security solutions. Finally, focusing on the relationships between fear appeals and the enactment of computer security behaviors, Johnston and Warkentin (2010) reported that while there is an overall positive effect of fear appeals on the use of computer security behaviors, this effect varies in magnitude across users based on individuals’ personality traits (e.g., level of self-efficacy), cognitive processes (e.g., perceived threat severity), and social influence (see also Boss et al. 2015; Siponen et al. 2010). Still, no prior research has explored whether employees’ compliance with organizational security policies reduces the organization’s risk of cyber-dependent crime victimization.

Similar to the issues embedded in survey-based criminological research on online offenders, the bulk of the information systems scholarship on deterrence suffers from various methodological issues. Specifically, the focus on employees’ intentions instead of actual illegitimate activities within their organizational networks (Li et al. 2010), questionable operationalizations of key deterrence constructs (Siponen et al. 2010), and the cross-sectional nature of the data collected and reported in most of these studies raise questions regarding the observed empirical patterns.

Political Science Literature

In contrast to criminologists’, information systems scientists’, and law scholars’ interest in exploring how different aspects of deterrence determine individual involvement in online crime, political scientists’ discussions of cyber-deterrence focus on countries’ efforts to prevent and dissuade rival nations’ attempts to launch cyberattacks. Specifically, Libicki (2009) suggested that the goal of cyber-deterrence is to attenuate the risk of cyberattacks to an acceptable level at an acceptable cost, with a defending state aiming to mitigate potential offensive actions by threatening retaliation. Several scholars identify the means through which nations’ deterring postures in cyberspace could be achieved. Iasiello (2014), for example, differentiated between deterrence by punishment and deterrence by denial (see also Nye (2017) and Lupovici (2011)). Specifically, while deterrence by punishment is focused on conveying to potential attackers that significant sanctions will be imposed in retaliation for any cyberattack, deterrence by denial aims to convey to potential attackers that their aggressive efforts in cyberspace will be futile. Importantly, Iasiello (2014) argued that the key factors required for supporting both means of deterrence are (1) effective communication of the deterring messages, (2) the ability to properly signal intentions to receivers, (3) the ability to successfully attribute attacks to an aggressor, and (4) proportional retaliation for different cyberattacks. Nye (2017) identified two additional means of deterrence: entanglement and

460

D. Maimon

norms. Entanglement refers to the presence of interdependencies which make the consequences of an attack serious to both the attacker and the target. Similarly, normative considerations refer to the potential reputational costs that may follow a cyberattack and which may damage an actor’s soft power beyond the value gained from the attack. Finally, Tor (2017) discussed the relevance of cumulative deterrence against cyberattacks. Drawing on the rationale advanced by Gibbs (1975) in his discussion of restrictive deterrence, cumulative deterrence refers to repeated attacks on a rival in response to specific behaviors, over a long period of time, and in some cases disproportionately to the attacker’s aggressive behaviors (Tor 2017).

Given the considerable theoretical attention devoted in the political science discipline to the different means of cyber-deterrence, one might expect a similar level of scientific exploration around the effectiveness of various deterrence approaches in preventing and mitigating nations’ aggression in cyberspace. However, a recent systematic review by Gorwa and Smeets (2019) suggested that such empirical works are still missing in this field. Indeed, only two studies (Kostyuk and Zhukov 2019; Valeriano and Maness 2014) have examined the dynamics of cyber conflict between rival nations while employing quantitative research designs. Specifically, Valeriano and Maness (2014) analyzed data from 110 cyber incidents and 45 cyber disputes and found that when cyber operations and incidents occur, they tend to carry a minimal impact and low severity due to the dynamic of cyber restraint. In contrast, Kostyuk and Zhukov’s (2019) analyses of cyberattack data collected during the conflicts in Ukraine (between 2013 and 2016) and Syria (between 2011 and 2016) revealed that cyberattacks are not an effective vehicle of coercion during war. 
Unfortunately, neither of these papers examines the theoretical aspects of cyber-deterrence and their effectiveness in preventing and dissuading cyberattacks.

Policy Implications and Directions for Future Research

The Comprehensive National Cybersecurity Initiative (CNCI) has evolved to become one of the central elements of US national cybersecurity strategy (www.whitehouse.gov). One key activity in the CNCI highlights the development of deterrence-based strategies designed to prevent and mitigate the consequences of cyberattacks against US organizations and individuals. However, despite the emphasis placed on cyber-deterrence, this review reveals mixed evidence regarding the effectiveness of different aspects of deterrence-based strategies in preventing the occurrence of malicious cyber activities. Specifically, studies published in both criminological and information systems journals suggest that the effect of sanction severity in preventing online crime is inconsistent (Skinner and Fream 1997; Morris and Blackburn 2009; D’Arcy et al. 2009; Hovav and D’Arcy 2012) and that the effect of punishment certainty is only significant among specific populations of online offenders (Morris and Blackburn 2009; D’Arcy et al. 2009; Hovav and D’Arcy 2012). In contrast, the detection and attribution of online crime (Guitton 2012; Maimon et al. 2019), along with clear communication of sanctions


(Barlow et al. 2013), are consistently found to reduce online criminals’ willingness to launch cyberattacks and to encourage the adoption of avoidance strategies. Moreover, the effectiveness of various deterrence-based methods in restricting the scope and disrupting the progression of online criminal events has been reported in several criminological studies (Maimon et al. 2014; Wilson et al. 2015; Testa et al. 2017). Finally, while previous information systems and law research tends to test the effect of general deterrence (although not explicitly) on online crime (Kigerl 2016; Mayer 2015; Hui et al. 2017), no prior research has tested the effect of punishment on online offenders’ recidivism.

Therefore, in addition to ongoing policy efforts which aim to prevent online crime by adopting deterrence-based policies, practitioners and cybersecurity experts should consider adopting deterrence-based approaches for mitigating the consequences of online criminal events (Maimon and Louderback 2019; Willison et al. 2018). This approach should also guide cybersecurity experts’ efforts to develop new technical tools which may support the mitigation and discovery of cybercrime incidents. Indeed, these experts have devoted considerable attention in the last 20 years to developing tools that are designed to detect computer and network vulnerabilities and to prevent cybercrimes from developing (Waldrop 2016). Although such tools are designed to identify vulnerabilities and prevent their exploitation by malicious actors, none of them allow rapid detection of these incidents or effective mitigation of the consequences of an attack. Moreover, the effectiveness of these tools in preventing online crime is questionable. Therefore, configuring new tools while drawing on the restrictive deterrence approach may prove useful in restricting the scope of online offenders’ actions during the progression of cybercrime events (Gibbs 1975). 
Future research should further explore whether cyber-deterrence prevents and disrupts the progression of cybercrimes. Such research should explore the influence of both absolute and restrictive cyber-deterrence. As elsewhere, one of the major hurdles in this area could be the absence of universally accepted measurement metrics, which would provide guardians and scholars with practical techniques for assessing the effectiveness of deterrence-based efforts, security policies, and tools in preventing cybercrime (Torres et al. 2006). Indeed, the most common approach to the implementation of preventive practices in online environments draws on guardians’ personal experience in the field, as well as their personal world views when making security-related decisions that may influence offenders and targets (Siponen and Willison 2009). Such an approach does not require rigorous empirical evaluations of security tools and policies to support the decision-making by these professionals. In fact, Blakely (2002) suggested that this approach has failed to prevent individuals and organizations from becoming the targets and victims of cybercrime. Therefore, Blakely proposed the adoption of an approach that monetizes guardianship efforts and quantifies the effectiveness of security tools and policies in achieving their stated goals. Scholars within each of the scientific disciplines reviewed in this paper should consider conducting empirical research that will push the envelope in the context of all four disciplines simultaneously. Criminologists and information systems scholars should seek to collect data on online crime directly from the field and create better


operationalizations of deterrence-based concepts. Given the cynicism that has developed in the criminological field about the collection and analysis of data, datasets that are used by cybercriminologists in their publications should be made publicly available. Future criminological research should further explore how different configurations of online environments shape both online offenders’ and targets’ involvement in cyber-dependent crimes (Lessig 2009). Encouraged by findings reported in the criminological literature indicating that environmental design could reduce the volume of robbery (Jeffrey et al. 1987), vandalism (Sloan-Howitt and Kelling 1990), and shoplifting (Farrington and Burrows 1993) incidents, guardians’ familiarity with computer and online configurations that result in lower rates of and less damage from cyber-dependent crimes could guide the design of safer online environments.

Law scholars should attempt to produce empirical assessments of the effectiveness of computer crime laws in deterring crime, both in the USA and elsewhere around the globe. Particular attention should be given to investigating the effectiveness of official punishment in reducing recidivism among convicted online criminals. Similarly, political scientists should seek to produce more empirical research around the effectiveness of cyberwarfare in preventing and mitigating nations’ aggression in cyberspace (Gorwa and Smeets 2019). Furthermore, future law research should continue to assess the effect of international collaborations on cybercrime laws and their enforcement in reducing different types of online crime around the world. Finally, future research should seek to evaluate the most effective ways to successfully implement deterrence-based policies and law enforcement operations in online environments, as well as to assess the effectiveness of these approaches in preventing and mitigating the consequences of cybercrime. 
Such evaluations should include the development of cybercrime metrics that are clear, objective, repeatable, and simple (Atzeni and Lioy 2006).

Conclusions

Governmental agencies and private corporations around the globe employ a wide range of cyber laws, technical tools, and security policies in an effort to reduce their probability of becoming victims of cybercrime. Many of these laws, security tools, and organizational procedures utilize deterrence-based strategies, which aim to prevent the occurrence of online crimes by threatening potential offenders with sanctions. Unfortunately, despite the prevalence of cyber-deterrence policies and strategies, the effectiveness of deterrence-based strategies in preventing and mitigating the occurrence of online crime is still relatively unknown within the criminological, law, information systems, and political science fields. Therefore, interested scholars within these academic disciplines should seek to produce rigorous evidence regarding the effectiveness of deterrence-based policies, sanctions, and threats in preventing and mitigating cybercrime events. Specifically, while the available evidence regarding the effectiveness of cyber-deterrence in preventing cybercrime tends to draw mainly on survey-based research, efforts should be made to conduct scientific research that


investigates the effectiveness of swift, severe, and certain sanctions for online crime in the wild, through the implementation of experimental research designs. This should be done while paying close attention to the immense disconnect that exists between the various academic disciplines around the topic of cyber-deterrence. Active efforts should seek to bridge this disconnect, in order to allow a more comprehensive and thorough understanding of different aspects of deterrence in cyberspace.

Cross-References

▶ Cybercrime Legislation in the United States
▶ Cybersecurity as an Industry: A Cyber Threat Intelligence Perspective
▶ Datasets for Analysis of Cybercrime
▶ Police and Extralegal Structures to Combat Cybercrime
▶ Surveillance, Surveillance Studies, and Cyber Criminality

References

Akers, R. (2017). Social learning and social structure: A general theory of crime and deviance. New York: Routledge.
Anderson, L. S., Chiricos, T. G., & Waldo, G. P. (1977). Formal and informal sanctions: A comparison of deterrent effects. Social Problems, 25(1), 103–114.
Atzeni, A., & Lioy, A. (2006). Why to adopt a security metric? A brief survey. In Quality of protection (pp. 1–12). Boston: Springer.
Barlow, J. B., Warkentin, M., Ormond, D., & Dennis, A. R. (2013). Don’t make excuses! Discouraging neutralization to reduce IT policy violation. Computers and Security, 39, 145–159.
Beccaria, C. (1963). On crimes and punishments (H. Paolucci, Trans.). Indianapolis: Bobbs-Merrill. (Original work published 1764).
Bentham, J. (1789). The principles of morals and legislation. Amherst: Prometheus Books.
Blakely, B. (2002). Consultants can offer remedies to lax SME security. TechRepublic, 6 February 2002, http://techrepublic.com.com/5100-6329-1031090.html.
Boss, S., Galletta, D., Lowry, P. B., Moody, G. D., & Polak, P. (2015). What do systems users have to fear? Using fear appeals to engender threats and fear that motivate protective security behaviors. MIS Quarterly, 39(4), 837–864.
Braga, A. A., & Weisburd, D. L. (2012). The effects of focused deterrence strategies on crime: A systematic review and meta-analysis of the empirical evidence. Journal of Research in Crime and Delinquency, 49(3), 323–358.
Brenner, S. (2001). Cybercrime investigation and prosecution: The role of penal and procedural law. Murdoch University Electronic Journal of Law, 8(2), 2–42.
Chen, Y., Ramamurthy, K., & Wen, K. W. (2012). Organizations’ information security policy compliance: Stick or carrot approach? Journal of Management Information Systems, 29(3), 157–188.
Cheng, L., Li, Y., Li, W., Holm, E., & Zhai, Q. (2013). Understanding the violation of IS security policy in organizations: An integrated model based on social control and deterrence theory. Computers and Security, 39, 447–459.
Cram, W. A., Proudfoot, J. G., & D’Arcy, J. (2017). Organizational information security policies: A review and research framework. European Journal of Information Systems, 26(6), 605–641.


D’Arcy, J., & Herath, T. (2011). A review and analysis of deterrence theory in the IS security literature: Making sense of the disparate findings. European Journal of Information Systems, 20, 643–658.
D’Arcy, J., Hovav, A., & Galletta, D. (2009). User awareness of security countermeasures and its impact on information systems misuse: A deterrence approach. Information Systems Research, 20, 79–98.
Denning, D., & Baugh, W. (2000). Hiding crimes in cyberspace. In D. Thomas & D. Loader (Eds.), Cybercrime: Law enforcement, security and surveillance in the information age (pp. 105–132). London: Routledge.
Dupont, B. (2017). Bots, cops, and corporations: On the limits of enforcement and the promise of polycentric regulation as a way to control large-scale cybercrime. Crime, Law, and Social Change, 67, 97–116.
Farinholt, B., Rezaeirad, M., Pearce, P., Dharmdasani, H., Yin, H., Le Blond, S., McCoy, D., & Levchenko, K. (2017). To catch a ratter: Monitoring the behavior of amateur DarkComet RAT operators in the wild. In 2017 IEEE Symposium on Security and Privacy (SP) (pp. 770–787).
Farrington, D. P., & Burrows, J. N. (1993). Did shoplifting really decrease? The British Journal of Criminology, 33, 57–69.
Geerken, M. R., & Gove, W. R. (1974). Deterrence: Some theoretical considerations. Law and Society Review, 9, 497.
Gibbs, J. (1975). Crime, punishment, and deterrence. New York: Elsevier Scientific Publishing Company.
Goodman, W. (2010). Cyber-deterrence: Tougher in theory than in practice? Strategic Studies Quarterly, Fall, 102–135.
Gorwa, R., & Smeets, M. (2019). Cyber conflict in political science: A review of methods and literature. SocArXiv, July 25. https://doi.org/10.31235/osf.io/fc6sg
Guitton, C. (2012). Criminals and cyber attacks: The missing link between attribution and deterrence. International Journal of Cyber Criminology, 6(2), 1030.
Guo, K. H. (2013). Security-related behavior in using information systems in the workplace: A review and synthesis. Computers and Security, 32, 242–251.
Harknett, R. (1996). Information warfare and deterrence. Parameters, 26, 93–107.
Harknett, R., Callaghan, J., & Kauffman, R. (2010). Leaving deterrence behind: War-fighting and national cybersecurity. Journal of Homeland Security and Emergency Management, 7(1), 1–24.
Herath, T., & Rao, H. R. (2009a). Encouraging information security behaviors in organizations: Role of penalties, pressures and perceived effectiveness. Decision Support Systems, 47(2), 154–165.
Herath, T., & Rao, H. R. (2009b). Protection motivation and deterrence: A framework for security policy compliance in organisations. European Journal of Information Systems, 18, 106–125.
Holt, T. J. (2017). On the value of honeypots to produce policy recommendations. Criminology and Public Policy, 16(3), 739–747.
Holt, T. J., Kilger, M., Chiang, L., & Yang, C. (2017). Exploring the correlates of individual willingness to engage in ideologically motivated cyberattacks. Deviant Behavior, 38, 356–373.
Hovav, A., & D’Arcy, J. (2012). Applying an extended model of deterrence across cultures: An investigation of information systems misuse in the US and South Korea. Information and Management, 49, 99–110.
Hu, Q., Xu, Z., Dinev, T., & Ling, H. (2011). Does deterrence work in reducing information security policy abuse by employees? Communications of the ACM, 54, 54–60.
Hui, K. L., Kim, S. H., & Wang, Q. H. (2017). Cybercrime deterrence and international legislation: Evidence from distributed denial of service attacks. MIS Quarterly, 41(2), 497.
Iasiello, E. (2014). Is cyber-deterrence an illusory course of action? Journal of Strategic Security, 7(1), 54–67.
Jeffrey, C. R., Hunter, R. D., & Griswold, J. (1987). Crime prevention and computer analysis of convenience store robberies in Tallahassee. Florida Police Journal, 34, 65–69.
Jervis, R. (1979). Deterrence theory revisited. World Politics, 31(2), 289–324.


Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: An empirical study. MIS Quarterly, 34, 549–566.
Kigerl, A. C. (2009). CAN SPAM Act: An empirical analysis. International Journal of Cyber Criminology, 3(2), 566.
Kigerl, A. C. (2015). Evaluation of the CAN SPAM Act: Testing deterrence and other influences of e-mail spammer legal compliance over time. Social Science Computer Review, 33(4), 440–458.
Kigerl, A. C. (2016). Deterring spammers: Impact assessment of the CAN SPAM Act on email spam rates. Criminal Justice Policy Review, 27(8), 791–811.
Kigerl, A. C. (2018). Email spam origins: Does the CAN SPAM Act shift spam beyond United States jurisdiction? Trends in Organized Crime, 21(1), 62–78.
Kostyuk, N., & Zhukov, Y. M. (2019). Invisible digital front: Can cyberattacks shape battlefield events? Journal of Conflict Resolution, 63(2), 317–347.
Krebs, B. (2014). Spam nation: The inside story of organized cybercrime – from global epidemic to your front door. Naperville: Sourcebooks.
Lessig, L. (2009). Code 2.0. Seattle: Amazon CreateSpace Publishing.
Li, H., Zhang, J., & Sarathy, R. (2010). Understanding compliance with internet use policy from the perspective of rational choice theory. Decision Support Systems, 48(4), 635–645.
Libicki, M. C. (2009). Cyberdeterrence and cyberwar. Santa Monica: RAND Corporation.
Lupovici, A. (2011). Cyber warfare and deterrence: Trends and challenges in research. Military and Strategic Affairs, 3(3), 49–62.
Maimon, D., & Louderback, E. R. (2019). Cyber-dependent crimes: An interdisciplinary review. Annual Review of Criminology, 1–26.
Maimon, D., Antonaccio, O., & French, M. T. (2012). Severe sanctions, easy choice? Investigating the role of school sanctions in preventing adolescent violent offending. Criminology, 50(2), 495–524.
Maimon, D., Alper, M., Sobesto, B., & Cukier, M. (2014). Restrictive deterrent effects of a warning banner in an attacked computer system. Criminology, 52, 33–59.
Maimon, D., Becker, M., Patil, S., & Katz, J. (2017). Self-protective behaviors over public WiFi networks. In The LASER Workshop: Learning from Authoritative Security Experiment Results (LASER 2017) (pp. 69–76). USENIX Association.
Maimon, D., Testa, A., Sobesto, B., Cukier, M., & Ren, W. (2019). Predictably deterrable? The case of system trespassers. In International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage (pp. 317–330). Cham: Springer.
Mayer, J. (2015). Cybercrime litigation. University of Pennsylvania Law Review, 164, 1453.
McGuire, M., & Dowling, S. (2013). Cyber-crime: A review of the evidence – summary of key findings and implications (Home Office Research Report 75). London: Home Office. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/246749/horr75-summary.pdf
Milne, S., Sheeran, P., & Orbell, S. (2000). Prediction and intervention in health-related behavior: A meta-analytic review of protection motivation theory. Journal of Applied Social Psychology, 30(1), 106–143.
Mohammadzadeh, H., Mansoori, M., & Welch, I. (2013). Evaluation of fingerprinting techniques and a windows-based dynamic honeypot. In Proceedings of the Eleventh Australasian Information Security Conference – Volume 138 (pp. 59–66). Australian Computer Society.
Morris, R. G., & Blackburn, A. G. (2009). Cracking the code: An empirical exploration of social learning theory and computer crime. Journal of Crime and Justice, 32(1), 1–34.
Nagin, D. S. (1998). Criminal deterrence research at the outset of the twenty-first century. Crime and Justice, 23, 1–42.
Nagin, D. S. (2013). Deterrence: A review of the evidence by a criminologist for economists. Annual Review of Economics, 5(1), 83–105.
Nye, J. S., Jr. (2017). Deterrence and dissuasion in cyberspace. International Security, 41(3), 44–71.
Paternoster, R. (1987). The deterrent effect of the perceived certainty and severity of punishment: A review of the evidence and issues. Justice Quarterly, 4(2), 173–217.


Paternoster, R. (2010). How much do we really know about criminal deterrence? Journal of Criminal Law and Criminology, 100, 765.
Pratt, T. C., Cullen, F. T., Blevins, K. R., Daigle, L. E., & Madensen, T. D. (2006). The empirical status of deterrence theory: A meta-analysis. Taking Stock: The Status of Criminological Theory, 15, 367–396.
Quackenbush, S. L. (2011). Deterrence theory: Where do we stand? Review of International Studies, 37(2), 741–762.
Rezaeirad, M., Farinholt, B., Dharmdasani, H., Pearce, P., Levchenko, K., & McCoy, D. (2018). Schrödinger’s RAT: Profiling the stakeholders in the remote access trojan ecosystem. In 27th USENIX Security Symposium (USENIX Security 18) (pp. 1043–1060).
Rid, T., & Buchanan, B. (2015). Attributing cyberattacks. Journal of Strategic Studies, 38(1–2), 4–37.
Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. The Journal of Psychology, 91, 93–114.
Rogers, R. W. (1983). Cognitive and physiological processes in fear appeals and attitude change: A revised theory of protection motivation. In Social psychophysiology: A sourcebook (pp. 153–176). New York: Guilford Press.
Schelling, T. C. (1966). Arms and influence. New Haven: Yale University Press.
Schelling, T. C. (1980). The strategy of conflict. Cambridge, MA: Harvard University Press. (Original work published 1960).
Siponen, M., & Willison, R. (2009). Information security management standards: Problems and solutions. Information and Management, 46(5), 267–270.
Siponen, M., Pahnila, S., & Mahmood, M. A. (2010). Compliance with information security policies: An empirical investigation. Computer, 43, 64–71.
Skinner, W. F., & Fream, A. M. (1997). A social learning theory analysis of computer crime among college students. Journal of Research in Crime and Delinquency, 34, 495–518.
Sloan-Howitt, M., & Kelling, G. L. (1990). Subway graffiti in New York City: “Gettin’ up” vs. “meanin’ it and cleanin’ it”. Security Journal, 1, 131–136.
Snyder, G. H. (1961). Deterrence and defense. Princeton: Princeton University Press.
Sommestad, T., Hallberg, J., Lundholm, K., & Bengtsson, J. (2014). Variables influencing information security policy compliance: A systematic review of quantitative studies. Information Management and Computer Security, 22(1), 42–75.
Stafford, M. C., & Warr, M. (1993). A reconceptualization of general and specific deterrence. Journal of Research in Crime and Delinquency, 30(2), 123–135.
Stockman, M., Heile, R., & Rein, A. (2015). An open-source honeynet system to study system banner message effects on hackers. In Proceedings of the 4th Annual ACM Conference on Research in Information Technology (pp. 19–22).
Stoneburner, G., Goguen, A., & Feringa, A. (2002). Risk management guide for information technology systems. NIST Special Publication 800-30.
Taddeo, M. (2018). The limits of deterrence theory in cyberspace. Philosophy and Technology, 31(3), 339–355.
Testa, A., Maimon, D., Sobesto, B., & Cukier, M. (2017). Illegal roaming and file manipulation on target computers: Assessing the effect of sanction threats on system trespassers’ online behaviors. Criminology and Public Policy, 16, 687–724.
Tor, U. (2017). Cumulative deterrence as a new paradigm for cyber-deterrence. Journal of Strategic Studies, 40(1–2), 92–117.
Torres, J. M., Sarriegi, J. M., Santos, J., & Serrano, N. (2006, August). Managing information systems security: Critical success factors and indicators to measure effectiveness. In International Conference on Information Security (pp. 530–545). Berlin: Springer.
Valeriano, B., & Maness, R. C. (2014). The dynamics of cyber conflict between rival antagonists, 2001–11. Journal of Peace Research, 51(3), 347–360.
Waldrop, M. M. (2016). How to hack the hackers: The human side of cybercrime. Nature News, 533(7602), 164.


Willison, R., Lowry, P. B., & Paternoster, R. (2018). A tale of two deterrents: Considering the role of absolute and restrictive deterrence to inspire new directions in behavioral and organizational security research. Journal of the Association for Information Systems, 19(12), 1187–1216.
Wilner, A. S. (2019). US cyber-deterrence: Practice guiding theory. Journal of Strategic Studies, 1–36.
Wilson, T., Maimon, D., Sobesto, B., & Cukier, M. (2015). The effect of a surveillance banner in an attacked computer system: Additional evidence for the relevance of restrictive deterrence in cyberspace. Journal of Research in Crime and Delinquency, 52, 829–855.
Workman, M., Bommer, W. H., & Straub, D. (2008). Security lapses and the omission of information security measures: A threat control model and empirical test. Computers in Human Behavior, 24, 2799–2816.

Routine Activities

23

Billy Henson

Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Routine Activity Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Lifestyle-Exposure Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Lifestyle-Routine Activity Theory . . . . . . . . . . . . . . . . . . . . . . . 473
Empirical Evidence for Routine Activity Theories . . . . . . . . . . . 474
Routine Activities and Lifestyles in Cyberspace . . . . . . . . . . . . . . 476
Applying Routine Activities Theories to Cybercrime . . . . . . . . . 477
The Criminal Event in a Virtual World . . . . . . . . . . . . . . . . . . . . . 478
Digitizing the Components of Routine Activity Theories . . . . . . 480
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Cross-References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487

Abstract

The growth of technology, and specifically the Internet, has had a profound effect on what was considered routine behavior and actions. Nowhere has this been more evident than with the progression of crime and victimization. As a result of the development of technology and the Internet, traditional crimes have begun to evolve, while new forms of crime have also been born. In an effort to explain cybercrime, researchers have begun adapting and examining criminology theories in this new virtual context. To that end, one of the most popular categories of criminological theory – routine activity theories – has been utilized as a primary means of analyzing online victimization. While the applicability of these types of theories to the nonphysical world of the Internet has received a fair amount of debate, almost all major forms of cybercrime have been studied via a routine activities framework. Though results have been mixed, there appears to be a fair amount of evidence indicating the potential for the adaptation of these theories to explain and predict cybercrime.

B. Henson (*)
Department of Criminology and Criminal Justice, Mount St. Joseph University, Cincinnati, OH, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_23

Keywords

Routine activities · Lifestyle · Cybercrime · Exposure · Guardianship

Introduction

Much of criminology theory is an effort to explain why individuals commit crime (or why they do not, in the case of control theories). As such, the central focus is typically on the offender, in an attempt to answer the question, "what motivated them to commit that crime?" However, there is a subfield of criminology theory that centers more on the criminal event itself, rather than just the offender. These theories often assume offender motivation as a constant and, instead, focus on the opportunity for crime. These opportunity theories are based on the assumption that there are numerous opportunities for crime at any given time. All it takes for a crime to occur is for a motivated offender to encounter the right situation. Of course, some opportunities are better than others, as the risks and rewards of each vary, but it is generally assumed that offenders will compare the benefits and consequences of a given opportunity before committing a crime (see "Rational Choice/Deterrence"). One of the most widely examined and cited categories of opportunity theories is routine activity theories. Most commonly associated with the work of Cohen and Felson (1979) and Hindelang et al. (1978), routine activity theories have been applied at both the macro and individual levels, across numerous studies, in an attempt to explain various patterns of crime and victimization. Given that the crux of these theories involves the various components of the criminal event intersecting in the same physical space at the same time, they have traditionally been viewed as a way to explain physical victimization. Still, in the last decade, many researchers have begun to apply their principles to other forms of crime and victimization – specifically, cybercrime. Due to the nonphysical nature of the Internet, however, some concessions and adaptations have had to be made. Nonetheless, routine activity theories have become commonly applied in the field of cybercrime.

Routine Activity Theory

During the mid-twentieth century, the USA began to experience increases in a number of reported crimes, including robbery and burglary. In an attempt to explain that increase in crime, Lawrence Cohen and Marcus Felson began to examine macro-level social trends. One of the central observations they made was that people were spending more time outside the home (Cohen and Felson 1979). More individuals


were going to college, women were entering the workforce at much higher rates, and there were a number of conveniences which gave people the opportunity to leave home more often. They concluded that because individuals’ routine activities took them into public more often, the opportunities for victimization increased, which led to higher crime rates (Cohen and Felson 1979). Their routine activity theory (RAT) posited that crime is a function of opportunity, meaning when a motivated offender is in the same place at the same time as an opportunity, crime will most likely occur. Cohen and Felson (1979) describe opportunity as the presence of a suitable target (i.e., victim) and a lack of capable guardianship. In this sense, a suitable target may be an individual or an object. For example, an intoxicated person at a bar may be a suitable target for theft because their inebriation may make it easier to take advantage of them. Or, a house in a rural area may be a suitable target for burglary because it is geographically isolated. Likewise, a capable guardian may also be a person or some other form of security. Imagine that drunken bar patron has a sober friend, who may be watching out for them, or that isolated house may have a security system or dog to protect it. In either case, though the target may be suitable, the presence of guardianship may make the situation too risky for offenders. As shown in Fig. 1, RAT suggests that a crime will occur when a motivated offender intersects in the same place at the same time with a suitable target and a lack of capable guardianship. Further, the intersection in time and space occurs most often when the routine activities of a potential victim overlap, in some way, with those of a motivated offender (Brantingham and Brantingham 2004). Routine activities include any commonly repeated behaviors or actions. Think about what you do on a regular basis. How often do you take the same route to class or work? 
Which days of the week do you typically go out at night for social activities? Do you tend to go to the same grocery or department stores repeatedly? Answers to questions like these help illustrate your routine activities. Those places visited most often (e.g., home, work, grocery store) are commonly referred to as activity nodes, and the routes between them are known as paths (Brantingham and Brantingham 1993).

Fig. 1 Components of routine activity theory: crime occurs where a motivated offender, a suitable target, and a lack of capable guardianship intersect

Fig. 2 Overlap of victim and offender routine nodes and paths: each party's nodes (e.g., home, work, bar) and the paths between them, with their overlap falling in a high-crime area

As displayed in Fig. 2, crime typically
occurs most often in places where the nodes and paths of offenders and victims overlap (Brantingham and Brantingham 2004). It is at these locations that opportunity for many crimes is at its highest. To be clear, RAT does not just explain personal crime. In fact, many early tests and discussions of the theory focused on property crime. For example, as originally proffered by Cohen and Felson (1979), when individuals' routine activities take them outside their home more often, the suitability of their home as a potential target for burglary increases. To illustrate, think about those who live around you. You and your neighbors share a common node with the location of your homes. They likely have some understanding of your schedule, as they see you come and go each day. As a result, they may know when your home is least guarded or, stated more deviously, when there is a better opportunity to burglarize your home. Because your routine activities take you outside your home, you become a better target for motivated offenders. Perhaps this is why the majority of burglaries occur during the day, with a large number committed by someone who lives nearby (Weisel 2002). Further, although RAT is largely examined as a way to predict individual-level crime, it was originally developed to explain macro-level crime. As can be seen in Cohen and Felson's (1979) original discussion, the theory's creators were attempting to explain national-level crime rates by focusing on broad social trends. To that end, the initial focus was not on why any one individual was victimized but, instead, why crime rates across the USA were increasing. Some more recent studies have continued with a macro-level approach to RAT (Tseloni et al. 2004), and others have adopted a multilevel approach (Wilcox Rountree and Land 1996); however, over the last few decades, RAT has been applied most predominantly to explain individual-level victimization.
It is that type of flexibility that has made the theory so popular and widely examined.

Lifestyle-Exposure Theory

At roughly the same time that Cohen and Felson were developing their RAT, Michael Hindelang, Michael Gottfredson, and James Garofalo were developing their own, similar theory – lifestyle-exposure theory (LET). According to Hindelang et al.'s (1978) LET, victimization is a function of one's level of exposure to

potentially risky situations. In other words, the more exposed an individual is to motivated offenders, the more prevalent the opportunity for them to experience victimization. Further, one's level of exposure is directly influenced by a number of personal and social factors (Hindelang et al. 1978).

Fig. 3 A lifestyle-exposure model of victimization. (Adapted from Hindelang et al. 1978)

As depicted in Fig. 3, the LET model begins with demographic characteristics (e.g., gender, race, socioeconomic status, etc.). An individual's demographic characteristics directly influence both his or her role expectations and structural constraints. For example, as one ages, there are different social expectations placed on them. Adults have very different social expectations than children, such as the need to have a job. Likewise, they also have very different structural constraints, such as the ability to legally patronize a bar. It is both the presence of expectations/constraints and the approach that one takes to adapt to them that influences an individual's lifestyle. As described by Hindelang and his colleagues (1978), a person's lifestyle directs both their daily routine activities and the types of individuals with whom he or she associates. For example, if you are an adult, society generally expects you to earn money to support yourself. To that end, perhaps you work a 9 to 5 job. That job will most likely regulate your routine daily activities (i.e., where you go, when you go there, how you get there), as well as who you associate with the most (i.e., co-workers, supervisors, family). Both your daily routine activities and associations will then influence your level of exposure to potential victimization by placing you in situations where there may or may not be the potential for victimization. The more exposed an individual is to potential offenders, the more likely he or she will experience victimization.

Lifestyle-Routine Activity Theory

In examining both RAT and LET, it can easily be seen that there is substantial overlap between the theories. As a result, the two theories quickly began to be examined in unison, with the unified theory referred to as lifestyle-routine activities theory (L-RAT) (Cohen et al. 1980, 1981; Miethe and Meier 1990). Over the last few decades, L-RAT has become one of the central theories in explaining victimization and is traditionally examined in terms of its four main components (Fisher et al. 2002, 2010; Nobles et al. 2014).

Exposure to Motivated Offenders. The first component of L-RAT is exposure to motivated offenders, and it denotes that the more exposed someone is, in general, the more likely they will come into contact with potential offenders. In addition,


the more likely a person is to come into contact with an offender, the more likely they will be victimized. For example, if an individual often goes out at night to jog or exercise, they are more likely to encounter an offender than an individual who stays home all night. As a result, the person who goes out more will have a higher level of exposure to motivated offenders than the person who stays home.

Proximity to Potential Offenders. The second component of L-RAT is proximity to potential offenders, and it is premised on the notion that the closer contact an individual has with offenders, the more likely they will be victimized. Basically, being in the same place at the same time as a potential offender increases the chances of victimization. Further, the more offenders in close proximity, the higher the chances of being victimized. For example, you are much more likely to be robbed while jogging in a park at night than while sitting in class. While jogging, there is a higher likelihood of coming into contact with offenders than in class. As such, the chances of being robbed are higher at the park.

Guardianship. The third component, guardianship, refers to the type and level of protection a potential target may have. Guardianship can come in numerous forms. For example, guardianship could be the presence of someone who may serve as a guardian or protector, such as a police officer who patrols the park at night to ensure park patrons are safe. Or, it could be a security object or device, such as pepper spray, that could be used to help protect oneself from potential robbers. The more capable the type and level of guardianship, the less likely an individual will be victimized.

Target Attractiveness. The final component, target attractiveness, is based on the concept of how suitable an offender finds a potential target for victimization. For an offender, target attractiveness is often determined through the process of weighing the risks and rewards of a given situation.
The lower the risks and the higher the rewards, the more attractive a target will be. Simply put, some individuals make more desirable targets. For example, if a perpetrator is looking to commit robbery, he may find a person out alone at night an easier target than a person who is out during the day with a large group of people. The more attractive a target, the more likely he or she will be victimized.

Lifestyles and Routine Activities. The connecting fibers interlacing through each of the L-RAT components are lifestyles and routine activities. An individual's lifestyle, and the routine activities that result from it, is central to their likelihood of experiencing victimization. A person's exposure and proximity to offenders, level of capable guardianship, and suitability as a potential target are each directly influenced by their actions and associations. Of course, that is not to say the victim should be blamed for being victimized. However, it should be noted that the decisions and actions individuals make have the potential to tip the victimization scale in one direction or another.
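The interplay of the four components can be caricatured in a few lines of code. The sketch below is purely illustrative: the function, the equal weighting, and the 0-1 ratings are invented for this example and are not an estimated model from the victimization literature.

```python
# Purely illustrative sketch of how the four L-RAT components interact.
# The function, weights, and 0-1 ratings are invented for this example.

def opportunity_score(exposure: float, proximity: float,
                      guardianship: float, attractiveness: float) -> float:
    """Combine 0-1 component ratings into a 0-1 'opportunity' score."""
    for value in (exposure, proximity, guardianship, attractiveness):
        if not 0.0 <= value <= 1.0:
            raise ValueError("component ratings must lie in [0, 1]")
    # Exposure, proximity, and attractiveness raise opportunity;
    # capable guardianship discounts whatever risk remains.
    risk = (exposure + proximity + attractiveness) / 3.0
    return risk * (1.0 - guardianship)

# The chapter's running example: jogging alone in a park at night versus
# sitting in a daytime class with a group.
night_park = opportunity_score(exposure=0.9, proximity=0.8,
                               guardianship=0.1, attractiveness=0.7)
day_class = opportunity_score(exposure=0.3, proximity=0.2,
                              guardianship=0.8, attractiveness=0.3)
assert night_park > day_class
```

The multiplicative discount reflects the theory's claim that even an otherwise suitable target becomes unattractive when capable guardianship is present; any actual weighting would have to be estimated from data.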

Empirical Evidence for Routine Activity Theories

As indicated previously, routine activity theories have received a large amount of attention from researchers. Numerous studies have attempted to explain crime and victimization through the routine activities lens. In fact, this subset of theories is easily one of the most widely examined in the modern era of criminological research,


just behind the control theories. While there are a fair number of studies that have reported only minimal or nonsignificant findings in support of the link between routine activities and victimization, there is a plethora of other studies indicating the link is strong and important. Overall, however, the research indicates moderate support for the theories.

Lifestyle-Routine Activities Theories and Property Victimization

As noted, numerous researchers have utilized the framework of routine activities theories to examine victimization. In keeping with Cohen and Felson's original approach, many of those studies have focused on property crime. For example, Terance Miethe and Robert Meier (1990) focused on routine activities theories with their study of burglary and theft in Britain. With data from the British Crime Survey, Miethe and Meier examined measures representing each of the four components of L-RAT from a multilevel approach, including both individual- and community-level measures. They noted that the strongest predictors of burglary and theft were living in an inner-city area, the perceived level of safety of a neighborhood at night, and the crime rate (proximity). Further, they also indicated that leaving the home unoccupied more frequently and the number of activities outside the home at night (exposure) were strong predictors of burglary and theft, respectively. In another multilevel study, of burglary victimization among a large sample of residents in Seattle, Terance Miethe and David McDowall (1993) examined the impact of both individual-level lifestyles and neighborhood-level characteristics on victimization. They reported that the risk of burglary victimization was higher for individuals who had higher incomes, frequently spent time outside their home, and had more expensive items in their homes (attractiveness). Further, economically challenged neighborhoods also had a higher likelihood of burglary (exposure and proximity). Finally, they also reported that individuals who lived alone, as well as those who employed fewer safety precautions (guardianship), were more likely to experience burglary in middle-class and wealthy neighborhoods, though the effect was not as significant for individuals who lived in economically challenged neighborhoods.
Finally, in one of the most methodologically detailed empirical tests of routine activities theories, Elizabeth Mustaine and Richard Tewksbury (1998) examined larceny victimization among college students. In an attempt to improve upon previous routine activity theories research, Mustaine and Tewksbury chose to utilize more highly detailed measures of lifestyles and routine activities. With this approach, the researchers found that college students' minor larceny experience was directly influenced by their participation in illegal activities (proximity), the places they went and activities they participated in (exposure), and their self-protective behaviors (guardianship).

Lifestyle-Routine Activities Theories and Personal Victimization

The routine activities approach has also been taken to examine personal crime. For example, in their study, Bonnie Fisher et al. (2002) examined stalking victimization among a national sample of female college students utilizing the L-RAT framework. They reported that 13.1% of the sample had experienced stalking victimization since the previous academic year began. Further,


they noted that exposure to certain situations, a lack of capable guardianship, and proximity to motivated offenders were all linked to experiencing stalking victimization. For example, they reported that women who were more often in places with alcohol (exposure) and women who lived alone (lack of capable guardianship) were significantly more likely to experience stalking. Further, Cortney Franklin et al. (2012) examined the effects of both routine activities and self-control on victimization using a large sample of female university students. In regard to their focus on routine activities, the authors reported that respondents who admitted to participation in drug sale behavior (proximity) were more likely to experience personal and sexual assault victimization. In addition, respondents who lived in off-campus housing and those who spent more time partying (exposure) were more likely to experience personal and sexual victimization, respectively, net of controls. Finally, using a national sample of data from the National Crime Victimization Survey, Jackson Bunch et al. (2015) analyzed the mediating role of routine activities on demographics as predictors of violent and theft victimization. Specifically, they focused on whether respondents went out at night and whether they went shopping almost every day (exposure). The authors found that the routine activities measures partially mediated the effects of the demographic characteristics, therefore providing validation for their role in predicting victimization.

Importance of the Core Components of Routine Activities Theories

While each of the core elements of routine activities theories has been supported to some extent within the research, not all appear equal in their importance. As illustrated by the examples provided previously, exposure and proximity tend to be the strongest predictors of victimization, with target attractiveness and guardianship often being less significant (Franklin et al. 2012; Miethe and Meier 1990). This variation could be due, at least in part, to the vagueness of both target attractiveness and guardianship. For instance, while certain characteristics may make someone a more attractive target, the level of attractiveness may vary from offender to offender. Further, in the case of guardianship, it is often difficult to determine whether the guardianship efforts were present before or after the victimization, especially with cross-sectional data. This can result in insignificant or confusing findings. For example, in Miethe and Meier's (1990) study discussed previously, it was found that respondents with higher physical guardianship (e.g., owned a gun, burglar alarm) were more likely to be victimized. While this is most likely due to a time-order issue, it is not possible to tell with the data used.
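The time-order problem can be made concrete with a toy simulation. In the sketch below, every rate is invented, and the alarm has no deterrent effect at all; yet because victims disproportionately adopt guardianship after the fact, a cross-sectional snapshot shows guardianship positively associated with past victimization.

```python
# Toy simulation (all rates invented) of the time-order problem with
# cross-sectional guardianship measures: an alarm has NO deterrent effect
# here, yet because victims disproportionately install alarms after a
# burglary, alarm owners show a higher past-victimization rate.
import random

random.seed(42)

people = []
for _ in range(10_000):
    burgled = random.random() < 0.10          # 10% burgled during the year
    # Victims are assumed six times likelier to install an alarm afterward.
    alarm = random.random() < (0.60 if burgled else 0.10)
    people.append((burgled, alarm))

def victimization_rate(group):
    return sum(burgled for burgled, _ in group) / len(group)

with_alarm = [p for p in people if p[1]]
without_alarm = [p for p in people if not p[1]]

# Measured at year's end, guardianship appears to *raise* victimization.
assert victimization_rate(with_alarm) > victimization_rate(without_alarm)
```

Only longitudinal data, recording whether the alarm preceded or followed the burglary, would let a researcher separate the protective effect of guardianship from this reverse-causation artifact.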

Routine Activities and Lifestyles in Cyberspace

With the growth of the popularity and capabilities of the Internet, a true technological revolution has begun. At the time this chapter was written, Google was older than most college freshmen, and Facebook was almost old enough to have a driver's license. We have seen the birth of generations that will never know life without


the Internet. Almost every aspect of daily life has, in some way, been altered by technology. It should come as no surprise that a similar transition is underway with regard to crime and victimization. The rapid development of the Internet and technology has led to both the metamorphosis of many traditional forms and the birth of new forms of crime and victimization. Due to the sheer speed with which technological deviance has evolved, criminologists are playing catch-up in their attempts to explain the causes and effects of behavior in the realm of cyberspace. With that in mind, much debate has surfaced regarding the ability to use traditional criminology theories to explain new-age behaviors. Some traditional criminological theories seem more easily adaptable to online application. For instance, Gottfredson and Hirschi's (1990) General Theory of Crime (a.k.a. self-control theory) can be applied to cybercrime offenders because self-control is universal (see ▶ Chap. 28, "The General Theory of Crime"). Whether online or in the physical world, people have varying levels of self-control. Of course, researchers may choose to examine it using traditional measures, such as Grasmick et al.'s (1993) 24-item scale, or they may attempt to examine it using measures of online behaviors (e.g., amount of online shopping, number of hours online, etc.). Regardless, self-control is self-control. However, attempting to adapt other traditional criminological theories may have some problematic consequences. Routine activities theories, and especially Cohen and Felson's (1979) RAT, are one such example. As one of the main underpinnings of RAT is that individuals must come together in the same place at the same time, adapting it to cyberspace, where there are no physical locations and time is not viewed in the traditional manner, can be somewhat problematic. Nonetheless, numerous researchers have attempted to do just that (Holt and Bossler 2013; Navarro and Jasinski 2012; Pratt et al. 2010; Reyns 2013; Reyns and Henson 2016; Reyns et al. 2011).

Applying Routine Activities Theories to Cybercrime

When LET and RAT were originally proposed, the Internet was still mainly theoretical. While a few scientists and researchers were hopeful of its future capabilities, no one could have imagined what cyberspace, as we know it today, would become. With that in mind, criminologists had no reason to believe that crime would exist anywhere beyond the physical world. Fast-forward a couple of decades, and we now find ourselves earnestly searching for a means to explain the cause and prevalence of cybercrime (see ▶ Chap. 1, "Defining Cybercrime"). No longer are physical encounters a necessary condition for crime and victimization to occur. In cyberspace, potential victims and offenders can interface without any interaction in the same physical space. This disconnect has led some to speculate as to the usefulness of traditional criminology theories in explaining victimization that occurs online. Still, others have simply chosen to overlook the theoretical premise of an intersection in time and physical space and have attempted to apply theories to cybercrimes without explaining why they should work (e.g., Marcum 2009; Marcum et al. 2010; Pratt et al. 2010).


These varying approaches have sparked a fair amount of debate among researchers regarding the applicability of routine activity theories in cyberspace. For example, Peter Grabosky (2001) maintains that:

    One of the basic tenets of criminology holds that crime can be explained by three factors: motivation, opportunity, and the absence of a capable guardian. This explanation can apply to an individual incident as well as to long-term trends. Derived initially to explain conventional "street" crime, it is equally applicable to crime in cyberspace. (p. 248)

With his statement, Grabosky is indicating that although criminology theory was developed as a means of explaining physical crime, there exists the possibility that it could also be used to explain virtual crime. In opposition, however, Majid Yar (2005) has contended that the routine activity theories may not be as suitable for explaining cybercrime, noting that:

    . . . the routine activity theory holds that the 'organization of time and space is central' for criminological explanation (Felson 1998: 148), yet the cyber-spatial environment is chronically spatio-temporally disorganized. The inability to transpose RAT's postulation of 'convergence in time and space' into cyberspace thereby renders problematic its straightforward explanatory application to the genesis of cybercrimes. (p. 424)

Yar's argument is that the time-space intersection is a vital component of routine activity theories and that, since this intersection is not possible in cyberspace, the theories cannot be used to explain online victimization. Similar arguments have continued to surface on both sides of the debate.

The Criminal Event in a Virtual World

So, if routine activity theories necessitate that victims and offenders intersect in time and space, how can they be used to explain online victimization? One approach that has been suggested by researchers is a simple revamping of some of the more basic premises of these theories. Thanks to the work of researchers like John Eck and Ronald Clarke, however, this revamping did not have to occur in a vacuum, as some foundational groundwork had already been laid. As a component of their work proffering a classification system for common police problems, Eck and Clarke (2003) describe the existence of systems problems. These are criminal events in which the victim and offender may not interact in the same physical space or at the same time. For example, with mail bombing, the offender and victim are not in the same physical location. Further, the "offending," or mailing of the bomb, does not occur at the same time as the "victimization," or the reception of the bomb. This is similarly the case if someone leaves a threatening voicemail. According to Eck and Clarke, while these types of crime may not involve an overlap in time and space, they do share a common system (e.g., the mail or phone system). By analyzing such crimes from a systems perspective, we can essentially take a two-dimensional theory and examine it from a three-dimensional perspective.

Fig. 4 Traditional and shared network adapted criminal event triangle: the traditional triangle (left) converges offender and target at a place; the adapted triangle (right) shows shared network crime, with offender and target linked only through the shared network

Perhaps this can best be explained through illustration. As illustrated in Fig. 4, the image on the left, representing the traditional criminal event triangle as championed by John Eck, describes criminal opportunity as the convergence of a motivated offender and a suitable target in the same place. This description is consistent with RAT, which also indicates that criminal opportunity arises when a potential offender and target are in the same place at the same time. However, as described by Eck and Clarke (2003), in the case of shared network crimes, the offender and target may not be in the same place at the same time, instead only sharing a network. As displayed on the right side of the figure, the triangle essentially becomes pulled apart, with the only overlapping component being the shared network. It can also be seen that two separate points in time are indicated, one for when the offender performs the event and one for when the victim experiences the event. This shared network perspective is very useful in explaining how routine activity theories may be applicable to cybercrime (Reyns et al. 2011). Specifically, the time/space divergence between offenders and victims can be reconciled to explain opportunities for cybercrime. Stated more simply, while cybercrime offenders and victims do not converge in physical space or at the same time, they do interact within a shared network (i.e., the Internet). Further, this interaction may occur at shared virtual nodes, such as social networking platforms, popular websites, or chatrooms/forums. With this approach, routine activity theories are not limited to only explaining physical forms of crime.


Digitizing the Components of Routine Activity Theories

As discussed previously, routine activity theories often describe opportunity as the presence of a suitable or attractive target with a lack of capable guardianship. Further, if a motivated offender encounters such an opportunity, then a crime will most likely occur. In addition, there are a number of factors which may directly influence an individual's likelihood of encountering potential offenders, including his or her lifestyle and associations. In cyberspace, however, encounters are virtual. As such, both the opportunities for crime and the factors related to them are digitized.

Routine Activity Theory and Lifestyle-Exposure Theory

When discussing cybercrime, the offender is typically a person. While there are bots and other programs that could be used to hack accounts or send harassing messages, those programs are designed and created by people. As with traditional crime, however, suitable targets could include several things. Often, targets are people, as is the case with cybercrimes such as cyberstalking, cyberbullying, and online harassment. However, they may also be websites, businesses, or specific computer programs, as is the case with crimes like hacking, cybertheft, and, in some cases, identity theft. Likewise, capable guardianship may take several forms. It may be a parent who monitors their child's social networking accounts or a cybersecurity officer who is charged with determining any vulnerabilities a company's online systems may have. Or, it could be a program or device such as a virus scanner or a firewall, designed to secure computers and networks from dangerous programs or hackers. In any case, some online guardians are more capable than others. The main crux of routine activities theories lies in the concepts of lifestyle and routine activities. Both were previously highlighted as being central to Cohen and Felson's (1979) RAT and Hindelang et al.'s (1978) LET. Simply, our daily lives are guided by the lifestyles we choose to adopt, as well as the routine activities that result from those lifestyles. With the growth of technology, people's lifestyles and routine activities have expanded to include virtual activities and associations in cyberspace. In some cases, individuals' online activities and associations are reflections of their offline lives. In this sense, the Internet is just a tool for them. They use it to communicate with friends and family, shop, seek out entertainment, etc. On the other hand, some see the Internet as a means to reinvent themselves.
They have an online persona, carefully curated to present an idyllic version of who they are. Still, there are others who will fall somewhere between the two. In any case, most individuals, these days, have an online lifestyle. There are online socialites who are deeply involved in social media, online gamers who spend hours on forums or watching walkthrough videos to expand their gaming knowledge, and imaginarians who use the Internet to read and share their fanfiction, to name a few. For every type of online lifestyle, there are common online routine activities. Take a moment and think about the things you do online. Which websites do you most frequently visit? Who do you most commonly interact with online? What do you spend most of your online time doing? The answers to these questions help define your online routine activities.
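Researchers who test these ideas typically convert answers to questions like those above into numeric lifestyle measures. The sketch below is a hedged illustration of one common approach, averaging standardized survey items into a single index; every item name, sample mean, and standard deviation is invented for the example.

```python
# Hypothetical operationalization of 'online exposure': z-score a few
# survey items against (invented) sample means and standard deviations,
# then average them into one index, as lifestyle studies often do.
# Every item name and number below is made up for illustration.

ITEMS = {
    # item: (hypothetical sample mean, sample standard deviation)
    "hours_online_per_day":   (3.5, 2.0),
    "photos_posted_per_week": (4.0, 5.0),
    "public_accounts":        (1.2, 1.0),
}

def exposure_index(answers: dict) -> float:
    """Mean of z-scored items: positive values mark respondents whose
    online activity sits above the (hypothetical) sample average."""
    z_scores = [(answers[item] - mean) / sd
                for item, (mean, sd) in ITEMS.items()]
    return sum(z_scores) / len(z_scores)

heavy_user = exposure_index({"hours_online_per_day": 6.0,
                             "photos_posted_per_week": 12.0,
                             "public_accounts": 3.0})
light_user = exposure_index({"hours_online_per_day": 1.0,
                             "photos_posted_per_week": 0.0,
                             "public_accounts": 0.0})
assert heavy_user > 0 > light_user
```

Standardizing before averaging keeps an item measured in hours from swamping one measured in counts; published studies would instead estimate item weights, but the grouping logic is the same.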

23

Routine Activities

481

Cyberlifestyle-Routine Activities Theory

In addition to the basic concepts of routine activity theories, some effort has also been made to digitize the more complex components of such theories. With the development of L-RAT, the traditional aspects of RAT and LET were merged into four new concepts: exposure to motivated offenders, proximity to motivated offenders, target attractiveness, and guardianship. According to the theory, the combination of these four concepts produces the opportunity for crime to occur. In order to examine the applicability of L-RAT in explaining cybercrime, these base concepts must be defined in terms of online behaviors.

Exposure to motivated offenders. Exposure to offenders, in the traditional sense, is typically assessed by examining phenomena such as the amount of time spent outside the home during the day and night, the patronization of risky facilities (e.g., bars, clubs, etc.), or alcohol and drug consumption (Fisher et al. 1998, 2002; Miethe and Meier 1994). Online, however, such geographic descriptors and/or physical behaviors do not apply. Instead, it is necessary to think in terms of digital actions. As such, online exposure to motivated offenders may include actions such as the amount of time spent online, the number of videos/pictures posted online, or the amount of personal information shared. Essentially, the larger an individual’s digital footprint, the more exposed he or she may be to potential offenders.

Proximity to motivated offenders. As with exposure, proximity to offenders is most often described in terms of how much physical contact potential targets have with offenders. This could include how close someone actually gets to an offender or the number of potential offenders with whom an individual comes into contact (Fisher et al. 2002; Reyns et al. 2011). In cyberspace, there is no physical contact. Instead, proximity is determined by examining the number and types of virtual interactions potential victims may have with offenders. With that in mind, measures of online proximity to motivated offenders may include factors such as whether an individual accepts friend or follow requests on social networking platforms from strangers, the number of friends an individual may have on those platforms, or the frequency with which individuals visit potentially dangerous websites (e.g., pornography or illegal video-sharing websites). The more often individuals put themselves in potentially dangerous situations, the more likely they are to be seen by offenders.

Target attractiveness. The concept of target attractiveness is often a confluence of many factors. It can be influenced by the ease with which someone or something may be approached by a potential offender, the meaning the person or object holds for an offender, and/or the amount of information available about the person or object. Traditionally, target attractiveness has included things such as the size or value of an object, the vulnerability of a certain type of individual (e.g., children, the elderly), or the perceived relationship between the target and offender (Clarke 1999; Fisher et al. 2002). In that same vein, online target attractiveness is often influenced by factors such as perceived vulnerability and available information. It may be measured by examining the amount or type of personal information an individual posts online (e.g., relationship status, contact information, photos/videos, etc.). Further, the absence of

482

B. Henson

self-protection measures (e.g., privacy settings) may also influence the level of online target attractiveness. Simply, the more information that can be accessed about an individual online, the more attractive a target he or she becomes.

Capable guardianship. Discussed at several points throughout this chapter, guardianship is a central component of routine activity theories. As noted, many analyses focus on two dimensions of guardianship – social and physical guardianship. In applying routine activity theories to traditional forms of crime and victimization, researchers may measure social guardianship as the presence of roommates, friends, or significant others as protectors. In addition, physical guardianship may be measured by examining the presence of items like door locks, alarm systems, or pepper spray. A similar approach may be taken when attempting to apply routine activity theories to cybercrime. For example, online social guardianship may be examined by determining whether individuals have parents or guardians who monitor their Internet activity or by measuring the level of online deviance an individual’s peers may display (Holt and Bossler 2009; Reyns et al. 2011). Further, online physical guardianship could include the presence of firewalls, security programs, and/or privacy settings (Choi 2008; Holt and Bossler 2009). The presence and quality of either or both forms of guardianship could have a substantial impact on the opportunity for online victimization.

Cyberlifestyle-Routine Activities Theories and Property Victimization

Over the course of the last decade, a large number of cybercrime studies have examined the role of routine activities and lifestyles in predicting victimization. As with examinations of traditional forms of offline crime, routine activity theories have been utilized in an attempt to explain a wide variety of cybercrimes, including both property and personal crimes. Evidence supporting the applicability of routine activity theories to account for cybercrime and victimization has been somewhat mixed. However, more studies appear to have found support for these theories than not. In some cases, the ability of routine activity theories to predict cybercrime varies based on the type of crime being examined.

Without a doubt, one of the most widely discussed forms of online victimization is hacking (see ▶ Chap. 35, “Computer Hacking and the Hacker Subculture”). It has been the subject of numerous movies, television shows, books, and even video games. Hacking is generally defined as electronically accessing a virtual system, computer, or other electronic device without the owner’s permission (Holt 2007; van Wilsem 2013; Weulen Kranenbarg et al. 2019). Given the nature of hacking, and the fact that many victims are not even aware they’ve been victimized, statistics regarding the prevalence of hacking victimization should be interpreted carefully. With that said, however, researchers have noted that approximately 3–8% of individuals surveyed reported experiencing hacking (Reyns 2015; van Wilsem 2013). Though not as frequent as other forms of cybercrime, hacking victimization has been the subject of a number of studies examining its relationship with routine activity theories. In one such study, Bradford Reyns applied the


L-RAT perspective to explain three forms of cybercrime, one of which was hacking. Utilizing data from the Canadian General Social Survey, Reyns (2015) focused on the link between three of the main components of L-RAT (i.e., exposure, target suitability, and guardianship) and hacking. As with many of the previous studies discussed, Reyns found moderate, but mixed, support for the applicability of routine activity theories in explaining hacking victimization. The most significant factor reported was online target suitability, with posting personal information online driving the increased risk for hacking. He also noted a modest relationship between online exposure and hacking victimization. Finally, the weakest relationship described was between online guardianship and hacking, with one form of guardianship, changing passwords, actually resulting in a higher risk for victimization. Though it may seem contradictory, this is not an uncommon finding with measures of guardianship, as it is difficult to determine whether the guardianship actions were initiated before or after the victimization was experienced.

There are a number of crimes often associated with hacking, such as phishing, malware dissemination, and identity theft (see ▶ Chaps. 38, “Malicious Software Threats,” ▶ 42, “Phishing and Financial Manipulation,” and ▶ 46, “Identity Theft: Nature, Extent, and Global Response”). In a general sense, each of these types of cybercrime entails stealing, whether it is data, passwords, or identifying information. Although there is a certain amount of overlap between them, these three forms of cybercrime are ultimately distinct. Phishing is the act of contacting someone online or via text while posing as a legitimate organization in order to obtain potentially sensitive information, such as passwords, banking or credit card information, and personal information (e.g., social security number) (KnowBe4 2018). Malware is a form of malicious software (e.g., viruses, worms, Trojan horses) used to damage computer or online systems and steal information (Holt and Bossler 2013). Identity theft involves taking and/or disseminating an individual’s personal information without his or her permission (Reyns and Henson 2016). This may include using the information for financial gain or to pose as the individual. Though not as often as other types of cybercrime, each of these forms of online victimization has received some attention from researchers. Studies examining the ability of routine activity theories to explain these three types of cybercrime have produced somewhat mixed results.

For example, in a previously discussed study, Bradford Reyns used the L-RAT framework to examine three forms of cybercrime, one being phishing. Analyzing data from the Canadian General Social Survey, Reyns (2015) focused on the role of three of the main components of L-RAT (i.e., exposure, target suitability, and guardianship) in explaining phishing victimization. In his study, Reyns found strong support for the link between each of the components of L-RAT and phishing, with almost every measure appearing to be a significant predictor of victimization. Further, Adam Bossler and Thomas Holt (2009) tested the link between routine activity theories and malware victimization by examining the roles of online lifestyles, routine activities, and guardianship. Their analysis produced relatively weak support for the ability of routine activity theories to predict malware victimization. As noted in their study, most of the measures of online routine activities and physical guardianship had no significant


impact on data loss from malware victimization. However, Bossler and Holt did report significant relationships between data loss due to malware infection and their measures of online deviant lifestyles, social guardianship, and some of the measures of online activities. Finally, Bradford Reyns and Billy Henson adopted the L-RAT framework in their analysis of identity theft. With data from the Canadian General Social Survey, Reyns and Henson (2016) examined the link between proxy measures of the four main components of L-RAT and identity theft victimization, reporting weak relationships overall. Of the significant relationships found, their measures of online proximity to motivated offenders and online target suitability had the most impact on the likelihood of identity theft.

Cyberlifestyle-Routine Activities Theories and Personal Victimization

Given the prevalence of online communication, it should come as no surprise that one of the more common forms of personal cybercrime is online harassment. Various studies have reported that approximately 10–20% of respondents surveyed have experienced some form of online harassment (Finn 2004; Jones et al. 2012; Reyns et al. 2012). As is the case with many forms of cybercrime, online harassment has been described in a number of ways. Most commonly, however, it is defined as threatening, annoying, or otherwise hurtful communication via an online or electronic medium that may cause some type of distress for the victim (Bossler et al. 2012; Finn 2004). In many cases, online harassment is associated with cyberstalking or cyberbullying. Likewise, it may also include unwanted sexual advances or solicitation, akin to offline sexual harassment.

A number of researchers have attempted to examine online harassment within a routine activities framework. For example, focusing on a sample of students from a southeastern university in the USA, Thomas Holt and Adam Bossler (2009) performed one of the first studies to analyze the role of the L-RAT components in predicting online harassment. As the authors indicate, they found moderate support for the application of the theory. First, in keeping with the traditional L-RAT model, Holt and Bossler reported that gender appeared to be a significant contributing factor in explaining online harassment, with women being more likely to experience it. As highlighted by Hindelang and his colleagues (1978), demographic characteristics play a key role in determining an individual’s lifestyle, associations, and proclivity for experiencing victimization. Second, they found that respondents who displayed a deviant online lifestyle and/or had friends who did were more likely to experience victimization. These behaviors closely coincide with the concepts of proximity and exposure to offenders and also indicate lower levels of social guardianship, all of which may influence victimization. Finally, Holt and Bossler also noted that most of their measures of routine computer use and physical guardianship had little to no significant impact on online harassment. Although this contradicts some of the basic tenets of routine activity theories, it is evidence that the theory may require additional fine-tuning to fit a virtual environment.

As we’ve moved into the technological age, one of the most widely discussed forms of online victimization has become cyberbullying (see ▶ Chap. 58, “Risk and Protective Factors for Cyberbullying Perpetration and Victimization”). It is most often described as any form of verbal aggression (e.g., threats of violence, spreading


negative rumors, creating and/or disseminating negative or false information) occurring online or via electronic devices (U.S. Department of Health and Human Services 2018). Cyberbullying is often distinguished from online harassment as being a repeated behavior that most often affects young people. Various studies indicate that between 10% and 15% of juveniles report having been a victim of cyberbullying (Beran et al. 2015; Hawkins et al. 2001). It has received continued attention from schools, politicians, and parents alike. This attention has persisted largely because of the potentially negative impact cyberbullying may have on children and young adults (e.g., anxiety, depression, suicidal tendencies) (see “Suicidal Ideation and Online Platforms”). Consequently, cyberbullying has become a key topic of research for many scholars. In one such study, Jordan Navarro and Jana Jasinski utilized national-level data from the Pew Internet and American Life Project to assess the role of routine activities in experiencing cyberbullying among a sample of teens. For their analysis, Navarro and Jasinski (2012) focused on three specific aspects of routine activity theories – availability (exposure), suitability, and guardianship. Each component was tested using proxy measures that focused on respondents’ online activities. Their study produced relatively strong support for the application of routine activity theories in explaining cyberbullying. Almost all of their measures were significantly related to victimization. Specifically, Navarro and Jasinski noted that the measures approximating online target suitability were the strongest predictors, while the measures of guardianship were the weakest.

One of the most multifaceted forms of cybercrime is cyberstalking (see ▶ Chap. 59, “Cyberstalking”). Its complexity derives from the variation in both its meaning and nature. While it is typically described as a form of repeated pursuit behavior, a number of actions may fall under the heading of cyberstalking, including repeated unwanted contact, harassment, threatening behavior, and/or unwanted sexual advances perpetrated online or through electronic devices (Nobles et al. 2014; Reyns et al. 2011). Further, some see cyberstalking as merely an extension of traditional stalking, while others see it as a unique form of victimization (Nobles et al. 2014). Given the variation in its definition and measurement, it should come as no surprise that the estimated prevalence of cyberstalking also varies from study to study. However, some studies have estimated that between one-fourth and one-third of individuals have experienced cyberstalking (Baum et al. 2009; Reyns et al. 2011). As with other forms of cybercrime, numerous researchers have sought to examine cyberstalking victimization via a routine activities framework. In one of the first studies to do so, Bradford Reyns et al. (2011) attempted to explain cyberstalking victimization utilizing the concepts of L-RAT. For their study, Reyns and his colleagues examined a number of measures which approximated the four components of L-RAT, as well as lifestyle factors, in an online environment. Further, they also examined four different types of cyberstalking behaviors (i.e., repeated and unwanted online contact, repeated online harassment, repeated and unwanted online sexual advances, and repeated online threats of violence). They reported relatively strong, though mixed, support for the application of L-RAT to explain cyberstalking victimization. Their measure of online deviant lifestyle appeared to be the most significant factor in explaining cyberstalking, with those respondents who reported a more


deviant online lifestyle being significantly more likely to experience the various cyberstalking behaviors. In addition, they reported a moderate relationship between their measures of online target attractiveness and guardianship and cyberstalking. Finally, online exposure and proximity had the weakest relationships with cyberstalking victimization. Although their results were somewhat mixed, Reyns and his colleagues did find a fair amount of evidence supporting the online applicability of routine activity theories.

Macro-Level Cyber-Routine Activities Theories

As indicated previously, Cohen and Felson (1979) originally envisioned their routine activity theory as an approach to explain macro-level social trends. While the majority of RAT research in the last several decades has focused on individual-level analyses, the macro-level approach has not entirely fallen by the wayside. In fact, there has been something of a resurgence of this approach in many areas of research, including cybercrime research (Higgins et al. 2008; Holt et al. 2018; Williams 2016). For example, Thomas Holt et al. (2018) applied the RAT framework in their examination of macro-level correlates of malware infections. Utilizing international data on malware infections, they focused on identifying the factors that may lead some countries to experience more malware infections than others. Holt and his colleagues reported that nations with greater technological infrastructure and online freedom from political oversight (target attractiveness) and less financial impact from organized crime (guardianship) were more likely to experience higher rates of malware infection. With a growing number of cybercrime studies utilizing RAT to predict and/or explain macro-level phenomena, the cybercrime literature will undoubtedly continue to evolve.

Conclusion

The birth and growth of the Internet have forever changed the human landscape. With it has come a unique environment, separate from the physical world to which we’ve grown so accustomed. This new cyberworld, though evolutionary and paradigm-shifting, has unfortunately brought with it many of the long-standing and deplorable issues of the physical world, such as crime and victimization. In earnestly struggling to explain and predict cybercrime, researchers have begun to mine the theoretical bag of tricks relied upon to explain offline forms of crime. In doing so, it seems that opportunity theories, and more specifically routine activity theories, have become central in this undertaking. To date, numerous research studies have attempted to apply routine activity frameworks to cybercrime victimization. As noted throughout this chapter, this undertaking has produced mixed results. With that said, however, the accumulation of these studies does seem to indicate the potential for routine activity theories to evolve.

Moving forward, there are a number of approaches which should be considered. First, continued development of the measures representing the components of routine activities theories is necessary. Currently, the majority of cybercrime studies examining routine activities theories use proxy measures to represent the


various theoretical elements. This is due, in large part, to the reliance on secondary data that may not have been collected with the intent of testing routine activities theories. Cybercrime researchers should continue to develop datasets and measures that specifically focus on the components of routine activities theories. Second, and relatedly, while cross-sectional data is the easiest to collect and the most widely available, future researchers who analyze cybercrime through a routine activities lens should attempt to acquire longitudinal data. This would allow many of the current time-order limitations (i.e., the guardianship anomalies) to be more readily addressed. Third, while there has been impressive growth thus far, macro-level routine activities analyses of cybercrime need to continue, so that the literature can evolve by applying the theories at multiple levels. Finally, continued research on and adaptation of the basic tenets of routine activity theories is necessary. While the research has shown much promise in applying these traditionally physical-world theories to the realm of cyberspace, the transition has yet to fully coalesce. With continued adaptation, we may soon see one of the first true cybercrime theories.

Cross-References

▶ Computer Hacking and the Hacker Subculture
▶ Cyberstalking
▶ Historical Evolutions of Cybercrime: From Computer Crime to Cybercrime
▶ Identity Theft: Nature, Extent, and Global Response
▶ Intimate Partner Violence and the Internet: Perspectives
▶ Malicious Software Threats
▶ Phishing and Financial Manipulation
▶ Risk and Protective Factors for Cyberbullying Perpetration and Victimization

References

Baum, K., Catalano, S., Rand, M., & Rose, K. (2009). Stalking victimization in the United States. Washington, DC: US Department of Justice.
Beran, T., Mishna, F., McInroy, L. B., & Shariff, S. (2015). Children’s experiences of cyberbullying: A Canadian national study. Children & Schools, 37, 207–214.
Bossler, A. M., & Holt, T. J. (2009). On-line activities, guardianship, and malware infection: An examination of routine activities theory. International Journal of Cyber Criminology, 3, 400–420.
Bossler, A. M., Holt, T. J., & May, D. C. (2012). Predicting online harassment victimization among a juvenile population. Youth & Society, 44, 500–523.
Brantingham, P. L., & Brantingham, P. J. (1993). Nodes, paths and edges: Considerations on the complexity of crime and the physical environment. Journal of Environmental Psychology, 13, 3–28.
Brantingham, P. L., & Brantingham, P. J. (2004). Computer simulation as a tool for environmental criminologists. Security Journal, 17, 21–30.


Bunch, J., Clay-Warner, J., & Lei, M.-K. (2015). Demographic characteristics and victimization risk: Testing the mediating effects of routine activities. Crime & Delinquency, 61, 1181–1205.
Choi, K. (2008). Computer crime victimization and integrated theory: An empirical assessment. International Journal of Cyber Criminology, 2, 308–333.
Clarke, R. V. (1999). Hot products: Understanding, anticipating and reducing demand for stolen goods. London: Home Office.
Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44, 588–608.
Cohen, L. E., Felson, M., & Land, K. C. (1980). Property crime rates in the United States: A macrodynamic analysis, 1947–1977 with ex ante forecasts for the mid-1980s. American Journal of Sociology, 86, 90–118.
Cohen, L. E., Kluegel, J. R., & Land, K. C. (1981). Social inequality and predatory criminal victimization: An exposition and test of a formal theory. American Sociological Review, 46, 505–524.
Eck, J. E., & Clarke, R. V. (2003). Classifying common police problems: A routine activity approach. In Crime prevention studies (Vol. 16, pp. 7–39). Monsey: Criminal Justice Press.
Felson, M. (1998). Crime and everyday life (2nd ed.). Thousand Oaks, CA: Sage.
Finn, J. (2004). A survey of online harassment at a university campus. Journal of Interpersonal Violence, 19, 468–483.
Fisher, B. S., Sloan, J. J., Cullen, F. T., & Lu, C. (1998). Crime in the ivory tower: Level and sources of student victimization. Criminology, 36, 671–710.
Fisher, B. S., Cullen, F. T., & Turner, M. G. (2002). Being pursued: Stalking victimization in a national study of college women. Criminology & Public Policy, 1, 257–308.
Fisher, B. S., Daigle, L. E., & Cullen, F. T. (2010). What distinguishes single from recurrent sexual victims? The role of lifestyle-routine activities and first-incident characteristics. Justice Quarterly, 27, 102–129.
Franklin, C. A., Franklin, T. W., Nobles, M. R., & Kercher, G. A. (2012). Assessing the effect of routine activity theory and self-control on property, personal, and sexual assault victimization. Criminal Justice and Behavior, 39, 1296–1315.
Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Stanford: Stanford University Press.
Grabosky, P. N. (2001). Virtual criminology: Old wine in new bottles? Social and Legal Studies, 10, 243–249.
Grasmick, H. G., Tittle, C. R., Bursik, R. J., & Arneklev, B. J. (1993). Testing the core empirical implications of Gottfredson and Hirschi’s general theory of crime. Journal of Research in Crime and Delinquency, 30, 5–29.
Hawkins, D. L., Pepler, D., & Craig, W. M. (2001). Peer interventions in playground bullying. Social Development, 10, 512–527.
Higgins, G. E., Hughes, T. T., Ricketts, M. L., & Wolfe, S. E. (2008). Identity theft complaints: Exploring the state-level correlates. Journal of Financial Crime, 15, 295–307.
Hindelang, M. J., Gottfredson, M. R., & Garofalo, J. (1978). Victims of personal crime: An empirical foundation for a theory of personal victimization. Cambridge, MA: Ballinger Publishing Company.
Holt, T. J. (2007). Subcultural evolution? Examining the influence of on- and off-line experiences on deviant subcultures. Deviant Behavior, 28, 171–198.
Holt, T. J., & Bossler, A. M. (2009). Examining the applicability of lifestyle–routine activities theory for cybercrime victimization. Deviant Behavior, 30, 1–25.
Holt, T. J., & Bossler, A. M. (2013). Examining the relationship between routine activities and malware infection indicators. Journal of Contemporary Criminal Justice, 29, 420–436.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2018). Assessing the macro-level correlates of malware infections using a routine activities framework. International Journal of Offender Therapy and Comparative Criminology, 62, 1720–1741.
Jones, L. M., Mitchell, K. J., & Finkelhor, D. (2012). Trends in youth internet victimization: Findings from three youth Internet safety surveys, 2000–2010. Journal of Adolescent Health, 50, 179–186.
KnowBe4. (2018). What is phishing? Retrieved from http://www.phishing.org/what-is-phishing


Marcum, C. D. (2009). Adolescent online victimization: A test of routine activities theory. El Paso: LFB Scholarly Publishing.
Marcum, C. D., Higgins, G. E., & Ricketts, M. L. (2010). Assessing sex experiences of online victimization: An examination of adolescent online behaviors using routine activity theory. Criminal Justice Review, 35, 412–437.
Miethe, T. D., & McDowall, D. (1993). Contextual effects in models of criminal victimization. Social Forces, 71, 741–759.
Miethe, T. D., & Meier, R. F. (1990). Opportunity, choice, and criminal victimization: A test of a theoretical model. Journal of Research in Crime and Delinquency, 27, 243–266.
Miethe, T. D., & Meier, R. F. (1994). Crime and its social context: Toward an integrated theory of offenders, victims, and situations. New York: SUNY Press.
Mustaine, E. E., & Tewksbury, R. (1998). Predicting risks of larceny theft victimization: A routine activity analysis using refined lifestyles measures. Criminology, 36, 829–858.
Navarro, J. N., & Jasinski, J. L. (2012). Going cyber: Using routine activities theory to predict cyberbullying experiences. Sociological Spectrum, 32, 81–94.
Nobles, M. R., Reyns, B. W., Fox, K. A., & Fisher, B. S. (2014). Protection against pursuit: A conceptual and empirical comparison of cyberstalking and stalking victimization among a national sample. Justice Quarterly, 31, 986–1014.
Pratt, T. C., Holtfreter, K., & Reisig, M. D. (2010). Routine online activity and Internet fraud targeting: Extending the generality of routine activity theory. Journal of Research in Crime and Delinquency, 47, 267–296.
Reyns, B. W. (2013). Online routines and identity theft victimization: Further expanding routine activity theory beyond direct-contact offenses. Journal of Research in Crime & Delinquency, 50, 216–238.
Reyns, B. W. (2015). A routine activity perspective on online victimization: Results from the Canadian General Social Survey. Journal of Financial Crime, 22, 396–411.
Reyns, B. W., & Henson, B. (2016). The thief with a thousand faces and the victim with none: Identifying determinants for online identity theft victimization with routine activity theory. International Journal of Offender Therapy and Comparative Criminology, 60, 1119–1139.
Reyns, B. W., Henson, B., & Fisher, B. S. (2011). Being pursued online: Applying cyberlifestyle-routine activities theory to cyberstalking victimization. Criminal Justice and Behavior, 38, 1149–1169.
Reyns, B. W., Henson, B., & Fisher, B. S. (2012). Stalking in the twilight zone: Extent of cyberstalking victimization and offending among college students. Deviant Behavior, 33, 1–25.
Tseloni, A., Wittebrood, K., Farrell, G., & Pease, K. (2004). Burglary victimization in England and Wales, the United States and the Netherlands: A cross-national comparative test of routine activities and lifestyle theories. The British Journal of Criminology, 44, 66–91.
U.S. Department of Health and Human Services. (2018). What is cyberbullying? Retrieved from https://www.stopbullying.gov/cyberbullying/what-is-it/index.html
van Wilsem, J. (2013). Hacking and harassment – Do they have something in common? Comparing risk factors for online victimisation. Journal of Contemporary Criminal Justice, 29, 437–453.
Weisel, D. (2002). Burglary of single-family houses (Problem-oriented guides for police series, no. 18). Washington, DC: Department of Justice, Office of Community Oriented Policing Services.
Weulen Kranenbarg, M., Holt, T. J., & Van Gelder, J.-L. (2019). Offending and victimization in the digital age: Comparing correlates of cybercrime and traditional offending-only, victimization-only and the victimization-offending overlap. Deviant Behavior, 40, 40–55.
Wilcox Rountree, P., & Land, K. C. (1996). Burglary victimization, perceptions of crime risk, and routine activities: A multilevel analysis across Seattle neighborhoods and census tracts. Journal of Research in Crime and Delinquency, 33, 147–180.
Williams, M. L. (2016). Guardians upon high: An application of routine activities theory to online identity theft in Europe at the country and individual level. British Journal of Criminology, 56, 21–48.
Yar, M. (2005). The novelty of ‘cybercrime’: An assessment in light of routine activity theory. European Journal of Criminology, 2, 407–427.

24

Environmental Criminology and Cybercrime: Shifting Focus from the Wine to the Bottles

Fernando Miró-Llinares and Asier Moneva

Contents

New Bottles for Old Approaches: An Introduction
Places in Cyberspace and Cybercrime Patterns: Overcoming the Geographical Gap
Environmental Criminology as a Theoretical Framework for the Situational Analysis of Cybercrime
The Normality of Crime, Cyberspace, and the New Everyday Life
Focusing on Prevention (and Going Micro)
The Neglect of Offenders in Cybercrime Analysis: Problem or Opportunity?
Cybercrime and Crime Controls: Beyond Self-Protection
Conclusions
Cross-References
References

Abstract

This chapter addresses the ability of the criminological approaches that comprise Environmental Criminology to constitute an adequate theoretical framework to analyze and understand the situational aspects of crimes committed through cyberspace and to define the most appropriate prevention strategies. The chapter begins by examining how these approaches have been applied. Subsequently, the reasons why the environmental approach can offer much more in this area if some apparent obstacles are overcome are presented. Finally, a method of applying these midrange theoretical frameworks to different cybercrimes is proposed. Relying on multiple empirical studies, it is stated that the essential premise of the environmental approach is also observed in cybercrime: the existence of situational patterns. These patterns are derived from the different ways in which offenders and targets, in the absence of guardians, converge in cyber places: digital interaction environments that shape the situational opportunities in which people interact. The chapter ends by summarizing the application possibilities of approaches such as Crime Pattern Theory and Situational Crime Prevention in connection with Routine Activity Theory and Rational Choice Theory. It is proposed that many of the geographical applications derived from these approaches and some of their basic theoretical premises need to be adapted, while seeking to enhance their strengths and mitigate the effects of their weaknesses.

F. Miró-Llinares · A. Moneva (*)
CRIMINA Research Center for the Study and Prevention of Crime, Miguel Hernandez University, Elche, Spain
e-mail: [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_30

Keywords

Environmental Criminology · Crime Science · Criminological theory · Prevention · Opportunity · Geographical gap · Cyber place · Crime event · Crime patterns

New Bottles for Old Approaches: An Introduction

Environmental Theories (also known as Theories of Crime, Opportunity Theories, or Crime Science), which include Routine Activity Theory (RAT; Cohen and Felson 1979; see ▶ Chap. 23, “Routine Activities”), Rational Choice Theory (RCT) with its preventive corollary Situational Crime Prevention (SCP; Cornish and Clarke 1986), and Crime Pattern Theory (CPT; Brantingham and Brantingham 1981), are midrange explanatory approaches to the relationship between the environment and criminal behavior. In comparison to Theories of Criminality, Theories of Crime share an interest in crime as an event and shift attention from the offender to other elements, such as the potential victim and the geographical location in which the crime may occur. This is done with the eminently practical intention of producing implementable prevention strategies. From this approach come both their weaknesses as explanatory frameworks, due to their neglect of the offender and of various essential aspects of offenders’ actions (Cullen and Kulig 2018), and their strengths, namely the precision of their analysis and their applicability to crime prevention (Wortley and Townsley 2016). Environmental approaches have been present almost since the birth of academic interest in cybercrime. Some of these theoretical frameworks began to be used for cybercrime analysis after Grabosky (2001), in the now classic Virtual Criminality: Old Wine in New Bottles, warned that cyberspace might not change criminal motivations but could significantly affect criminal opportunities and the capacity of guardians.
Special attention was paid to RAT, which has been applied analytically to cybercrime with various aims, such as to rethink the challenges faced by criminology concerning future crimes (Pease 2001), to analyze whether its concepts should be adapted to the emergence of cyberspace (Yar 2005; Leukfeldt and Yar 2016), to identify the temporal patterns that describe large-scale cyberattacks (Maimon et al. 2013), or to estimate crime trends related to the appearance of new opportunities (Caneppele and Aebi 2017). But above all, RAT served as a conceptual framework
for studying a wide variety of criminal behaviors, ranging from economic cybercrimes such as malware (Holt and Bossler 2013), identity theft (Reyns and Henson 2016), or phishing (Leukfeldt 2014) to social cybercrimes such as online harassment (Miró-Llinares 2015), cyberbullying (Navarro and Jasinski 2013), or sexting (Wolfe et al. 2016). The SCP approach was also used to develop preventive strategies for economic crime carried out via the Internet (Newman and Clarke 2003) and cyberstalking (Reyns 2010), as well as for the examination of DDoS operators (Hutchings and Clayton 2016), the reduction of information security vulnerabilities (Hinduja and Kooi 2013), or the analysis of online stolen data markets using crime scripts (Hutchings and Holt 2014). It is obvious, however, that the application of Environmental Theories to crime committed through the Internet is still in its early stages and, therefore, insufficient. On the one hand, when the RAT and RCT approaches have been employed to analyze cybercrime, they have been applied as if they were separate explanatory theories, when it is known that the explanatory and applied potential of Environmental Criminology comes from the enormous synergies among all three approaches (Clarke 2010). On the other hand, CPT, despite being a successful approach used in practice especially for urban crime, has barely been used for the analysis of cybercrime (Miró-Llinares and Johnson 2018). Instead, it has been relegated to the position of a conceptual framework within so-called Computational Criminology (Brantingham 2011), a branch focused more on addressing purely methodological aspects of data science than on analyzing the context that facilitates cybercrime (e.g., Birks et al. 2012). Finally, the main applications of Environmental Criminology continue to focus on space in the traditional geographical sense, oblivious to digital spaces. And it is logical that this should be the case.
Since the birth of the Criminology of Place (Sherman et al. 1989), the geography of crime has been studied extensively, and it has yielded many applications for crime prevention in physical space, including hot spot policing (Weisburd and Green 1995), SCP (Clarke 1992), geographic profiling (Rossmo 1999), and Crime Prevention Through Environmental Design (CPTED; Cozens et al. 2005). From a theoretical perspective, cyberspace seems excluded from the potential application of these measures, which were designed specifically for geographical spaces. This could be called the “geographical gap”: the apparent difficulty of applying crime-and-place techniques to cybercrime analysis in a non-geographical domain. But the fact that specific practices are not suited to cyberspace does not mean that the whole approach cannot be applied to, and generate new applications for, this environment. As we will try to show, the environmental approach has much to offer when applied to the study of cybercrime. However, it must first be understood (1) that for these approaches the key organizing principle of crime is not the geographic location but the crime event itself (Clarke 2018) and (2) that although their ecological premises apparently presupposed the concurrence of people and things in geographical places, they originated from a spatiotemporal convergence between people, and between people and things, which, thanks to the Internet, can also happen in cyberspace, albeit differently from how it occurs in physical space (Miró-Llinares and Johnson
2018). In fact, several authors have built interesting approaches to the concept of cyberspace as a comparable space of convergence, albeit with modifications with respect to convergence in physical space (Miró-Llinares 2011; Yar 2005). There has been a recent proposal to conceive of this intercommunication space as a cyber place that can accommodate the main statements of RAT and the convergence between people, and between people and things (Miró-Llinares and Johnson 2018). This would enable the application of a large part of the premises of CPT and the Criminology of Place to crimes perpetrated in cyberspace. This is the direction that this chapter also follows, as it seeks to analyze the extent to which the application of the Environmental Criminology approach to crime committed in cyberspace is feasible and how the theory should be adapted for that purpose. This analysis is founded on the belief that a better understanding of the applicability of Environmental Theories to cybercrime would be especially useful in an area that is particularly in need of effectively implemented preventive approaches. However, it also adopts a realistic vision of the possibilities of these strategies, based both on a review of the strengths and weaknesses of the approach itself and on the acceptance that many of its practical contributions were conceived for and from the geographical world and will not admit functional adaptation to crime perpetrated in cyberspace. We believe, however, that many other contributions can be adapted to cyberspace, as can the essential premise that the social situations in which people find themselves decisively influence (1) their decisions regarding offending and (2) their likelihood of becoming the target of a crime. In addition, the practical consequences derived from this adaptation fit perfectly with the need, given the characteristics of cyberspace, for prevention strategies focused on the environment and the target of cybercrime.

Places in Cyberspace and Cybercrime Patterns: Overcoming the Geographical Gap

It seems counterintuitive to consider the application of some of the essential developments of Environmental Criminology, such as hot spots or the crime mapping technique, to the field of cybercrime. It seems less so if such tools are limited to a macro- or meso-level analysis. Perhaps for this reason, when such terms are used in relation to crimes committed through the Internet, it is common to think of analyses such as “which countries carry out (or receive) more cyberattacks” and of geographical analyses of their regional distribution (Maimon et al. 2015), or “what are the correlates of a specific cyber threat for a given area” and how they concentrate at the polygon level (Khey and Sainato 2013). This is because we continue to use the concept of place in its purely geographical sense, identifying the place of cybercrime as that from which the attack is perpetrated or that which is affected by it. And the interest of this kind of analysis can be considerable, as academia has shown, since there is also an irregular geographical distribution of different cybercrimes at the macro level, reflecting the uneven distribution across countries of factors such as the degree of Internet implementation, the number of computer systems
to which the population has access, and the value of the information they contain, among others. In this sense, although research in this area is still very limited, macroanalyses based on RAT have shown that the wealthiest countries with a higher proportion of Internet users report greater activity from incidents such as spam or phishing (Kigerl 2012). Similar studies also show that less developed countries report higher piracy rates than more developed countries, despite registering fewer incidents (Kigerl 2013). But does the geographical place where a cyberattack is conducted or received perform the role that Environmental Criminology attributed to the concept of place within crime, the role that gives rise to so many crime patterns? To affirm this would be as careless as reducing crime pattern analysis to the first studies that compared the geographic distribution of crimes at the macro level using aggregate data (e.g., Guerry 1833; Quetelet 1842). If one carries out an in-depth analysis of the meaning given to place by Environmental Theories and, particularly, by CPT, the answer is clearly negative, at least in the sense of “not completely or not on its own.” CPT is an essential contribution within Environmental Criminology, as it constitutes the greatest effort to integrate the Geometry of Crime with the other approaches that make up the environmental perspective. With this theory, Brantingham and Brantingham (1981) elaborate a spatial model of crime explanation that takes into consideration many previous contributions from the ecology of crime as well as the sociological theories that different criminologists have provided regarding the social, urban, and, therefore, geographical distribution of crime (e.g., Harries 1976; Shaw and McKay 1942).
However, it also incorporates the idea of opportunity as a fundamental explanatory framework, stating that the spatial distribution of crime is also influenced by the distribution of opportunities, by the urban structure, and by the mobility of people. Obviously, it is easy to identify geometry with geography when all the rules, both macro and micro, expounded and later developed by the authors regarding the relationship between crime and place were applied to crimes perpetrated in what was the only, or at least the main, existing space of personal intercommunication: the physical space. But geography and geometry are not the same. The truth is that CPT is not an exclusively geographical theory; rather, its full meaning only becomes apparent, as we shall see, once the idea of place as a geographical environment is surpassed and the notion of a convergence space is adopted. The theoretical construction of the explanatory relationship between the crime and the place where it occurs rests on the idea that people motivated to commit a crime, for different reasons explained by multiple etiologies, perpetrate the act in a certain place, at a specific moment, and against a particular victim, through a more or less elaborate decision-making process in which the environment plays a fundamental role (Brantingham and Brantingham 1981). The environment emits signals, or clues, about its characteristics and about the distribution of other elements (e.g., targets, guardians) that will influence the success of criminal activity from the perspective of the offender. From this premise the authors derive the importance of different elements, such as activity spaces, those places in which people objectively carry out their daily lives while traveling the routes connecting those places, or activity nodes, where they spend more time and which have different
characteristics according to their functionality; places that generate or attract crime (i.e., crime generators and crime attractors), which enhance criminal opportunities either by concentrating a large number of people and, therefore, increasing the number of effective convergences (e.g., a parking lot full of people where a concert is held) or by attracting criminals by harboring especially attractive criminal opportunities in terms of cost-benefit (e.g., a jewelry store that contains a large number of hot products); the journey to crime, which refers to the trip made by an offender to the place where he commits the crime and which is conditioned by the opportunities available and the effort he must make; or crime templates, mechanisms that automate the offender’s decision-making, that are relatively stable over time, that serve to select the place where the crime is committed, and that vary according to each criminal behavior. None of these suppositions is necessarily geographical, but rather spatial, since all they demand is the existence of different possibilities of convergence between people, and between people and things. What does have geographical significance are the applied consequences that have later been the object of empirical demonstration, such as the relationship between the proximity of the crime scene and the offender’s residence or other especially relevant activity nodes (i.e., anchor points), which is governed by the principle of distance decay (Capone and Nichols 1976); crime mapping, the use of maps for the geographic analysis of the patterns that crimes describe (Harries 1999) through techniques such as geographic profiling, which enables the determination of the area where the residence of a serial offender will most likely be found (Rossmo 1999); or hot spots, anomalous concentrations of crime in specific places and moments (Sherman et al. 1989).
All these propositions are specifically geographical, because the premises are applied to that domain and have been empirically verified there. But if we change the geographic scope to which they apply and think of cyberspace as a place of convergence, it would be possible to think of different places, of different spaces and nodes of activity linked by virtual routes, of cyberspaces that favor convergence between people or that attract criminals, and of digital microenvironments where cybercrime is concentrated in certain time intervals (Miró-Llinares et al. 2018). It is true, however, that the intrinsic characteristics of cyberspace (i.e., the contraction of time and space) are essentially different from those of physical space (Miró-Llinares 2011). And since these characteristics preclude thinking of a traditional form of convergence, Yar (2005) questions the applicability of RAT in cyberspace by alluding to two major reasons related to the concepts of proximity and temporality. In relation to the former, this author argues that virtual spaces are volatile as opposed to physical spaces and that there are no distances between these spaces. Regarding the latter, Yar intuits that the different way of conceiving time in cyberspace endows it with a high degree of entropy that translates into space-time disorder. Overall, this spatiotemporal divergence makes convergence difficult and suggests that the laws on which RAT is built cannot be extrapolated to cyberspace (Yar 2005). Despite these criticisms, some authors have maintained their support for the applicability of RAT to cyberspace by arguing that spatiotemporal convergence is still possible but simply occurs in a different way (Miró-Llinares 2011; Reyns et al. 2011). In this
regard, Reyns et al. (2011) explain that cyberspace is composed of a network of devices that enable virtual convergence, even if it is asynchronous. Thus, the fact that time and space are different in cyberspace does not mean that convergence is necessarily more improbable, but rather the opposite. The geographical constraints that limit actions in physical space do not exist in cyberspace, so the ability to perform behaviors that such effort would otherwise have burdened remains intact. In fact, the possibilities of convergence may even expand in cyberspace, owing to the possibility of temporally fixing certain actions so that they produce almost unlimited effects and cause potential victims to interact with them at times other than those in which the offender is actually in that space (Miró-Llinares 2011). As has already been stated, the place of cybercrime is not only the geographical site from which an act emanates or where the attack produces its effects but rather the digital place where, in a specific space and time, an offender and a target converge in the absence of a guardian, constituting “discrete nodes or areas of activity on the Internet where one is not physically located but can nevertheless act” (Miró-Llinares and Johnson 2018, p. 893). It is these cyber places that enable the convergence between offenders, victims, and guardians in very different ways, according to (1) the way in which users can interact within the space, either through store-and-forward or streaming contact, with perennial or expiring contents; (2) the natural surveillance that the place allows, depending on whether it is open to the public or its access is restricted, the traffic level of people and information, and the self-protection resources it offers; and (3) the type of activity (e.g., leisure, consumption, work) that users predominantly carry out in the place (Miró-Llinares and Johnson 2018).
Just as in physical space a thief must coincide in space and time with the object he wants to steal, the phishing victim must open the email that asks for his personal data. In the same way that two teenagers insult each other during break time, a user interacts with a violent message on Facebook. In addition, just as there are infrastructural differences between the neighborhoods or streets where thefts occur, each digital space has different characteristics that allow one form of interaction or another. Thus, while forums allow communication by sending and receiving messages with a certain time lapse, platforms such as Periscope allow events to be streamed live. Similarly, direct messages on Twitter provide a privacy that posting a tweet on the timeline does not. And these are the places where patterns are going to be produced, at both the macro and micro levels, as the academic literature has shown when analyzing the patterns of cybercrime, even outside the theoretical framework of Environmental Criminology. Thus, regarding the analysis of concentrations at the macro level, criminological research has shown that some Facebook accounts are used for phishing purposes (Vishwanath 2014), that certain events in physical space generate widespread reactions of online hate speech on Twitter (Burnap and Williams 2015), that there is also considerable gang activity on both Twitter and Facebook (Décary-Hétu and Morselli 2011), that certain sexual predators can use platforms such as Myspace as hunting grounds (Guo 2008), that YouTube is used as a loudspeaker for the dissemination of violent content and jihadist propaganda (Klausen et al. 2012), that drug trafficking in cryptomarkets such as Silk Road
yields increasing profits (Aldridge and Décary-Hétu 2014), that it is common to receive fraudulent messages through email (Cross 2015), that certain forums are used as platforms to advertise the illegal sale and purchase of personal data (Holt et al. 2016), or that cybercriminal networks use forums to establish relationships with new accomplices and facilitators (Leukfeldt et al. 2016). Regarding the existence of patterns at the micro level, and especially from a computational perspective, studies on cybercrime show that it is possible to detect incidents of cyberbullying through the analysis of the metadata associated with Instagram publications (Hosseinmardi et al. 2015); that it has been possible to identify spam in email by analyzing clusters of the messages’ characteristics (Wei et al. 2008); that violent communication expressed on Twitter after a terrorist attack is concentrated in certain time slots (Miró-Llinares and Rodríguez-Sala 2016) and in micro spaces with specific characteristics (Miró-Llinares et al. 2018) and, in addition, certain elements of interaction related to user accounts allow us to distinguish human users from bots in this social network (Ferrara et al. 2016); that sentiment analysis in messages can be used as a predictor of cyberattacks (Shu et al. 2018); or that it is possible to identify potentially offensive videos on YouTube by analyzing their tags (Agarwal et al. 2017). In addition, many of the studies conducted from the RAT perspective implicitly show potential cyber place patterns. For example, Marcum et al. (2010) found that prolonged exposure linked to increased use of chat rooms was a significant predictor of harassment victimization in older students. 
In a similar vein, Näsi and collaborators (2017) found that greater social network use was related to a greater probability of suffering this type of cybervictimization but that the natural surveillance exerted by the number of friends on Facebook did not seem to influence such dynamics. Also, Choi and Lee (2017) found a relationship between performing certain risky activities on social networks and the probability of being victimized, mainly related to the publication of habits, opinions, and personal information. On the other hand, Reyns (2013) found that carrying out certain online activities, such as banking, shopping, messaging, and downloading, was related to a higher probability of suffering identity theft. Similarly, an integrated Self-Control–RAT study on a representative sample in the Netherlands showed that some activities, such as downloading or using dating sites, favor malware infection victimization (Holt et al. 2018). In line with previous studies indicating that online shopping is linked to consumer fraud (Pratt et al. 2010; Van Wilsem 2013), Junger et al. (2017) found that those users who used the Internet as a platform for the sale of products were also those most likely to be defrauded when they were buyers.

Environmental Criminology as a Theoretical Framework for the Situational Analysis of Cybercrime

The studies referenced above not only demonstrate the essential premise of Environmental Criminology regarding the nonrandom distribution of crime events (Brantingham and Brantingham 1981) but also highlight the existence of cybercrime
concentrations in specific moments and digital spaces as a result of the different way criminal opportunities manifest themselves in cyber places. In other words, there seems to be enough evidence to support the claim that cybercrimes do pattern according to the different situational environments of communication in which offenders, targets, and guardians interact. These patterns can be both macro and micro. This means we have overcome the main obstacle to asserting that the environmental approach can constitute an adequate framework both for analyzing the relationship between situational factors and cybercrime and for grounding adequate preventive strategies. Overcoming it, however, entails substantial adaptations derived from how the diverse communicative architecture of cyberspace functions. It is time to develop more thoroughly the applicability of the environmental approach to the analysis of cybercrime. To do so, we will take as a point of reference the analyses that authors such as Cullen and Kulig (2018) and Bottoms (2012) have provided regarding its strengths and weaknesses, to which we will add the specificities that may arise from the new situational environment to which it is applied. After all, and as these authors have rightly identified, the environmental approach has already brought important analytical and practical advances to our discipline. And although it also has limitations, these are typical of a midrange perspective and have never been denied by its proponents. However, they must be understood in order to define the explanatory potential of the environmental approach with respect to cybercrime.

The Normality of Crime, Cyberspace, and the New Everyday Life

One of the greatest achievements of Environmental Criminology has been to emphasize the normality of crime and to focus its research on the everyday aspects that surround delinquency. Cullen and Kulig (2018) refer to this feature and underline two strengths of the approach: on the one hand, it focuses on ordinary people, and on the other, it goes beyond the roots of crime. The people who commit crimes and the victims who suffer them are ordinary people who, at a given moment, find opportunities to converge in the absence of guardians. This convergence occurs regardless of the individual nature of each subject; that is, the personal criminogenic characteristics that the academic literature attributes to offenders or victims have little influence on the appearance of opportunity. It does not matter whether an offender has low self-control or a victim has a certain propensity to ingest alcohol: unless the two actors converge spatiotemporally, no crime will be committed. This has not changed, but everyday life has, and it no longer unfolds only in physical space but also in cyberspace. Until a few decades ago, the only way for offenders and suitable targets to converge was through physical contact in the “meatspace” (Pease 2001). Nowadays, the development of IT has amplified what telephony made clear long ago: that it is possible to contact others without having to physically coincide. We no longer converge with other people and goods only on the way to and from work or places of leisure, but also when we open our email in the morning, when we download attachments at work, when we make purchases or carry out online banking transactions, or when we
interact with other people by mobile phone on the different social networks and instant messaging platforms that we use when connected to the Internet (Felson 2012). Of course, this form of digital convergence differs from physical convergence in two senses: (1) the spatial sense, because it is unrelated to distances, and (2) the temporal sense, because the actions we perform on the Internet can be fixed in time and produce their effects at another moment, resulting in an asynchronous convergence with a potential receiver (Miró-Llinares 2011). The normality of crime highlights that the most effective short-term prevention mechanisms are those that affect the contexts of immediate convergence and take into account the way in which the different minimum elements of crime converge. In fact, in reference to SCP, Clarke (1997) said that “the implementation must be specific in nature, and cater precisely to addressing particular types of crime” (pp. 4–5). When applying the environmental approach to cybercrime, it is therefore necessary to pay attention to the manner of convergence that enables the occurrence of each crime type and to the routine activities carried out online and offline. In relation to cyber-dependent crimes, where convergence is digital, we must consider the routine activities of offenders and victims in the physical space in which they act (Maimon et al. 2013), but we must pay special attention to the digital situational environment in which convergence occurs in order to understand why it occurs (Miró-Llinares and Johnson 2018). For example, while it is important to have updated antivirus software on a computer, it is even more important that it be active when browsing download websites where there is a considerable threat of malware infection.
On the other hand, if the crime originates from a dual convergence (i.e., combining the physical and the digital), as in some cyber-enabled crimes, the analysis of the environment should cover both dimensions of everyday life. For instance, cyberbullying is often related to traditional bullying dynamics or to the personal relationships of minors at school. The school environment will thus affect what happens later in cyberspace, but it is also necessary to pay attention to the space of digital convergence, in this case the social networks on which minors interact and harassment occurs. The mobile phone already intertwines routine activities in physical space and cyberspace, and the Internet of Things will deepen this interdependence.

24 Environmental Criminology and Cybercrime: Shifting Focus from the. . .

Focusing on Prevention (and Going Micro)

Another strength of the environmental approach is its applied and preventive nature. In fact, it is claimed that Environmental Criminology has, through a range of prevention techniques, contributed enormously to overturning the Nothing Works paradigm that predominated in mainstream criminology during the 1970s (Cullen and Kulig 2018; Medina-Ariza 2011). Among the most salient examples are the establishment of preventive strategies through the identification of situational contexts that promote or reduce the risk of victimization (Cornish and Clarke 1986); the incorporation of deterrence and surveillance systems for the control of crime, such as the installation of CCTV systems (Welsh and Farrington 2009) or hot spot policing strategies (Braga 2005); and the design of physical elements to reduce criminal opportunities, such as CPTED (Jeffery 1977) or Design Against Crime (DAC; Ekblom 1997) strategies.

Some of these techniques can be directly extrapolated to cybercrime prevention, as shown by the research that applies the analytical framework of RAT to cyberspace in order to identify risk factors for victimization (e.g., Holt and Bossler 2008; Miró-Llinares 2015; see also ▶ Chap. 23, "Routine Activities"). While those techniques focusing on intrinsically geographical aspects seem less applicable, as we have pointed out, reducing them to their situational and opportunity essence and adapting them to the new environment enables their preventive use. This is the case with measures such as CCTV or hot spot policing, which are based on the reinforcement of deterrence strategies and surveillance systems. Despite their geographical nature, the basic principles on which they are founded clearly transcend the physical and extend their relevance to the field of cybercrime prevention: their aim is to increase the costs perceived by (cyber)offenders in terms of effort and risk, while reducing potential benefits and provocations and eliminating excuses (Cornish and Clarke 2003; Newman and Clarke 2003). Although some investigations have already shown, for example, the deterrent effect of warning banners on unauthorized access to computer systems through honeypot structures (Maimon et al. 2014), research in this field is still incipient. In this sense, there has been no in-depth investigation of the control and dissuasion effect of administrators or moderators with regard to the management of cyber places such as forums or chats (Reyns 2010), or of the preventive impact of the regulatory functions performed by service providers.
On the other hand, we know that the strategies used to manage places go beyond the inclusion of super controllers and can also include environmental design. Although both CPTED and DAC have always focused on the modification of urban spaces or corporeal objects, digital environments are also susceptible to modification in order to condition cybercrime. For example, it is possible to limit the frequency with which a user can broadcast messages in a given period of time, to implement CAPTCHA systems that restrict access to certain websites to humans rather than bots, or to configure a website to automatically filter certain content according to its potential harmfulness. As for hot spot intervention strategies, both cybercrime police units and service providers already use various software tools to identify clustered patterns for different types of cybercrime (Wall 2007). Yet, if we also want to improve understanding of the dynamics of victimization in cyberspace, it is necessary to relate the design and application of these tools to the environmental theoretical framework by studying the situational elements of cyber places. The starting hypothesis for this reasoning is that just as there are geographical places which, due to their characteristics, become attractors or generators of crime (Brantingham and Brantingham 1995), digital spaces can provide the same conditions depending on their configuration. On the Internet, cyber places will be crime attractors insofar as the targets they contain have been previously introduced and have considerable value, and it is possible to converge with them in the absence of guardians (Miró-Llinares 2011). Cyber places will be crime generators depending on the interaction possibilities they offer, which are defined by the level of transit of people and information at specific times. And this happens at both the macro and micro levels.

In this sense, and in line with the tendency in the Criminology of Place to analyze increasingly micro units in order to avoid measurement errors (Weisburd et al. 2009), it is necessary to analyze and decompose each problem in its environment with the same levels of specificity. This is because the same specificity that characterizes the geographical distribution of crime can be observed in the different forms of cybercrime, which are similarly concentrated in specific spaces and times, creating spatiotemporal hot spots of cybercrime (Miró-Llinares and Johnson 2018).

The implementation of SCP measures, particularly those based on the identification of hot spots and subsequent interventions, has always faced criticism regarding the displacement of crime (e.g., Gabor 1981). The main objective of this type of intervention has been to effectively reduce crime in high-crime places while causing zero or little crime displacement (Bowers and Johnson 2016). Scientific evidence has consistently shown that displacement affects only a small part of the total volume of crime and that, in any case, crime is reduced in absolute terms (Guerette and Bowers 2009). In addition, it has been observed that the preventive effects of SCP propagate to places close to their implementation, a phenomenon known as diffusion of benefits (Clarke and Weisburd 1994). This is not to deny that crime can be displaced, but to affirm that this circumstance does not invalidate the preventive benefits obtained from the implementation of SCP measures. It is logical to think that cybercrime also moves, adapting to the different preventive measures that are implemented (Miró-Llinares 2012). Although, as some authors have pointed out (Hartel et al.
2010; Newman and Clarke 2003), no studies have been found that formally evaluate the effect of SCP implementation on cybercrime displacement, others suggest plausible forms of displacement. For example, in the context of online black markets, Hutchings and Holt (2017) suggest that when an offender is threatened by an attempt to disrupt their communications, they can replace the use of Internet Relay Chats with forums. This circumstance constitutes a form of spatial displacement, since the offender exchanges one cyber place of action for another. In addition to spatial displacement, other forms can occur: temporal, when the crime is committed at another time; tactical, when the commission method changes; target, when the objective of the crime is different; functional, when a different crime is committed; and perpetrator displacement, when the crime is committed by a different offender (Barr and Pease 1990).
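The idea that cybercrime concentrates in spatiotemporal hot spots lends itself to a simple computational illustration. The sketch below is purely illustrative: the incident log, cyber place names, and hours are invented, and real analyses (such as those cited above) rely on far richer data and formal tests of concentration. It ranks place-hour cells by their share of all incidents, a crude analogue of hot spot identification:

```python
from collections import Counter

# Hypothetical incident log: (cyber_place, hour_of_day) for each observed event.
incidents = [
    ("forum-A", 22), ("forum-A", 22), ("forum-A", 23),
    ("market-B", 22), ("chat-C", 10), ("forum-A", 22),
]

# Count events per place-hour cell and rank cells by their share of all
# incidents: the most loaded cells are candidate spatiotemporal hot spots.
counts = Counter(incidents)
total = sum(counts.values())
hot_spots = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

for (place, hour), n in hot_spots:
    print(f"{place} @ {hour:02d}h: {n} incidents ({n / total:.0%})")
```

In this toy log, half of all incidents fall in a single place-hour cell ("forum-A" at 22h), the kind of concentration that would make that cell a candidate for intervention; comparing such counts before and after a measure would also be one way to look for displacement to other cells.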

The Neglect of Offenders in Cybercrime Analysis: Problem or Opportunity?

One of the pioneers of Environmental Criminology, Ray Jeffery (1977), coined the term to defend the need for a school of thought that shifted the focus of attention from the individual offender to the environment. Jeffery aimed to go further than the ecological approaches of Social Disorganization Theory, in which emphasis was placed not on the area where the crime occurred but on the offender who committed the crime in a given area (Shaw and McKay 1942). In this way, he placed the ecological (geographic) pattern of crime, and not the offender, at the center of the analysis. Others subsequently followed this scheme: RAT, which started from a static concept of offender motivation and placed the dynamism in targets and guardians; CPT, which gave full prominence to the role of the place; and RCT, which seemed to be built on the figure of the offender but in the end still focused its attention on the situational environment.

The discipline's general lack of interest in the elements that surround the offender (e.g., motivation, punishment, rehabilitation) in favor of the environment has been called the neglect of offenders. Some authors have pointed out that such neglect may harm the development of the approach or, at least, limit it considerably (Bottoms 2012; Cullen and Kulig 2018). It is not that criminal motivation is irrelevant to the configuration of the crime event, but rather that, analytically, it is preferable to divert attention from this element and focus on those for which the implementation of preventive strategies is more plausible. This may be particularly appropriate in the case of crime events in cyberspace, where criminal motivations are the most static of all elements of crime, so diverting focus from the offender to other elements may be the best way to approach both the analysis of the issue and the implementation of prevention strategies. To a certain extent, this is what happens especially, but not only, with cyber-dependent crimes. When this typology of cybercrime is studied, there is a tendency to assume the economic motivation of the offender, while what remains unknown are those elements that, in effect, end up configuring the crime itself, such as cybercrime enablers (Broadhurst et al.
2014), vulnerabilities defined by the daily activities of the victim (Bossler and Holt 2009), or the lack of surveillance in certain digital environments (Maimon et al. 2014).

Peter Grabosky (2001) was the first to highlight this situation in relation to cybercrime. When analyzing the similarities and differences between cybercriminality and physical crime, he affirmed that criminals' motivations would remain the same but that criminal opportunities would change. In this way, he emphasized the need to focus the analysis on guardians and, in particular, on victims, stating: "In cyberspace today, as on terrestrial space two millennia ago, the first line of defence will be self-defence" (p. 248). We believe that the criminological analysis of the motivations of cybercriminals is still necessary and that very interesting advances have been made from various theoretical approaches (Bossler and Holt 2016). But we also believe that, regarding the cybercrime event, it is difficult to obtain real information about offenders and their motivations; that the variability of potential targets is generally enormous; that the most stable element is the criminal motivation; and that, moreover, the technological and situational context seems highly determinant (especially in cyber-dependent crimes). Thus, the environmental approach is particularly suitable, as it is focused not on "why someone commits a crime" but rather on "why a cyberattack has affected one system and not another," "how an offender has managed to access a system," or "at what time there are more cyberattacks."

This does not mean that applying an environmental approach to cybercrime necessarily implies ignoring the offender, but rather understanding that, within this analytical framework, their motivations are relevant when related to the criminal opportunity structure. Some authors have already conducted research in this regard and have used crime scripts to analyze interviews with cyber offenders, with the ultimate goal of proposing evidence-based SCP strategies (Hutchings and Holt 2017). But it is also possible to apply other frameworks of Environmental Criminology to advance the study of the cyber offender. For example, RAT and CPT could be applied to determine which cyber places cyber offenders visit and when; social network analysis techniques could be used to identify with whom they relate and how; and studies of (near) repeat victimization could be conducted to understand which targets they prefer to choose and why.
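As a toy illustration of the social network analysis techniques mentioned above, degree centrality, the simplest such measure, can be computed directly from a list of observed ties. All account names and ties below are invented for demonstration; a real study would build such a list from forum or chat data:

```python
from collections import defaultdict

# Hypothetical co-interaction ties observed between accounts on an offender forum.
ties = [
    ("vendor1", "buyer3"), ("vendor1", "buyer7"),
    ("vendor1", "vendor2"), ("vendor2", "buyer3"),
]

# Degree centrality: how many ties each actor has, i.e., with whom and how
# intensely each account relates within the observed network.
degree = defaultdict(int)
for a, b in ties:
    degree[a] += 1
    degree[b] += 1

most_central = max(degree, key=degree.get)
print(most_central, degree[most_central])
```

In this invented network, "vendor1" holds the most ties, the kind of structural signal that could flag an account for closer qualitative study of its role and relations.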

Cybercrime and Crime Controls: Beyond Self-Protection

We tend, almost intuitively, to associate the function of crime control with the police. In fact, most experiments that have evaluated the effectiveness of crime control have done so based on a concept of formal control (Braga 2005; Weisburd et al. 2010). However, Environmental Theories have always placed the emphasis on social controls as the main elements of crime control. When Felson developed RAT and referred to the capable guardian, he did not necessarily mean the police or the justice system, but ordinary people whose mere presence can discourage the occurrence of a crime (Cohen and Felson 1979). In the same way, the concept of the handler does not refer to a probation officer who watches over a potential offender, but to anyone whose relationship with the offender is close enough to exercise control over them (Felson 1986). And although Eck (1994) introduced place managers in the context of his work on illicit drug markets, he did not do so with formal controls in mind, but rather with any person responsible for taking care of specific places (e.g., janitors, bus drivers, waiters). When guardianship is analyzed in cyberspace, it is generally assumed that the surveillance and protection provided by technical self-protection systems (e.g., antivirus software, firewalls) are analogous to the concept of the capable guardian. Aside from the question of whether these measures should be integrated into the idea of the capable guardian or of target suitability, the indisputable fact is that to assess the effect of guardianship on cybercrime we must go further and return to (1) the notion of social control and (2) specific control figures such as managers and handlers.
The social control exercised by a parent through parental control tools installed on their child's computer is very different from that practiced by a high school teacher who detects a case of cyberbullying in their class, and this, in turn, is different from the control performed by one workmate over another on a social network when the latter publishes a scam message. However, all of them fall within the category of social controls. In addition, we must analyze which elements turn a control into an effective prevention system. In this sense, Vakhitova and Reynald (2014) have suggested that for a person to actively perform the role of guardian in cyberspace, they must first have sufficient contextual awareness of their environment. The authors add that place managers perform the important function of facilitating the intervention of these potential guardians by increasing their contextual awareness.

Environmental Criminology has always placed emphasis on the specificity of situations, and it is therefore inconsistent to encompass all informal controls in a homogeneous category. The main consequence of this is the difficulty of evaluating informal controls, which leads to an almost complete lack of knowledge about their effects on crime. Ultimately, it raises a debate about the usefulness of the concept (Cullen and Kulig 2018). If informal social control has as much relevance in the environmental approach as is presumed, it is necessary to break down the concept in such a way that we can specify its constituent elements and, later, develop a taxonomy of controls that delimits each typology in each context. Thus, in the process of elaborating explanatory models of cybervictimization, Bossler and Holt (2009; Holt and Bossler 2013) developed a classification of guardianship that distinguishes three categories: social guardianship (i.e., peers), physical guardianship (i.e., antivirus), and personal guardianship (i.e., skills). This is the only way to design adequate methodologies that measure the phenomenon in a manner that enables understanding of its preventive scope in cyberspace. In fact, there are constructs of similar complexity that have a defined methodological standard. For example, Sampson's concept of collective efficacy, defined as "social cohesion among neighbors combined with their willingness to intervene on behalf of the common good" (Sampson et al. 1997, p. 918), has demonstrated its operationalization and methodological consistency as an indicator of informal social control over violence.
It is essential to carry out a similar methodological exercise that adapts the elements of informal social control of crime proposed by Environmental Theories to the virtual environment, for example, by developing a concept of digital community constituted by interpersonal interactions on a social network.
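To make the kind of operationalization discussed above concrete, a collective-efficacy-style index for a digital community could follow the logic of Sampson et al. (1997): aggregate member responses on cohesion and willingness-to-intervene items into a community-level score. The sketch below is purely illustrative; the item names, Likert responses, and aggregation by simple mean are assumptions for demonstration, not a validated instrument:

```python
# Hypothetical survey responses (1-5 Likert scale) from members of one online
# community. Items mirror collective efficacy: cohesion + willingness to intervene.
responses = {
    "member1": {"cohesion": 4, "intervene": 5},
    "member2": {"cohesion": 3, "intervene": 4},
    "member3": {"cohesion": 5, "intervene": 4},
}

# Community-level index: mean of all item scores across respondents.
scores = [v for items in responses.values() for v in items.values()]
digital_collective_efficacy = sum(scores) / len(scores)
print(round(digital_collective_efficacy, 2))
```

Comparing such an index across digital communities, however it is ultimately operationalized, is what would allow its preventive scope to be tested empirically.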

Conclusions

After more than a decade of reflection on the applicability of traditional criminological theories to crime committed through the Internet, it is rightly stated that the analysis and explanation of each cybercrime require consideration of a variety of theoretical frameworks (Bossler and Holt 2016). Some of the environmental approaches, such as RAT or RCT, are among those that the academic literature has considered for the task of understanding some of the explanatory aspects of cybercrime, in this case those related to the environment where it is perpetrated. In this chapter we have tried to further the debate.

Firstly, it has been argued that all the approaches that comprise Crime Science are applicable to the new space of personal intercommunication that is cyberspace. This includes CPT and the applications of the "crime and place" approach, if it is understood that the "place" where offenders and targets converge in the absence of guardians will generally be a digital place, and provided that the implications of this are properly developed. Environmental Criminology was conceived for the geographical-physical world because it arose from the need to refocus prevention from the subject to the place and, at that time, the only place was physical. But, as the most important theorists of this perspective have shown, the key to the approach was never the geographical place but the crime event (Clarke 2010; Felson and Eckert 2016), in other words, the spatiotemporal convergence of the minimum elements of crime, which can also be found on the Internet: an environment that also configures the daily actions of people and that is structured in different spaces where they interact.

Secondly, it was argued that the greatest explanatory potential of Environmental Theories is obtained from the symbiosis between them, and not so much from the use of each of them as separate pieces. RAT acquires much greater explanatory potential for each cybercrime if it is linked to the diverse places where, in different ways, people converge on the Internet. The questions that CPT tries to solve, such as which cyber places are the most relevant in each crime event, what the characteristics of the cyber places that contain more crime are, or at which moments the risk of cybervictimization is higher, cannot be answered without considering the daily routines of offenders and victims. RCT and its preventive corollary, SCP, as well as CPTED, will acquire their full applicative potential in relation to cybercrime if we take into consideration natural surveillance in cyber places and other features of CPT, and if we understand the role of super controllers in cybercrime prevention.

Finally, this chapter shows that Environmental Criminology can be more than just another theoretical approach for cybercrime analysis, and that it is complementary to others. If it adapts to the new environment and maintains the essence of the approach, it can constitute an appropriate situational explanatory framework from which to design the best preventive strategies to avoid cybercrime and to reduce its harmful effects.
After all, this has always been the strength of the environmental approach: shifting analysis from what is most difficult to intervene in and modify (i.e., individual motivation) to what is easiest (i.e., opportunity and the environment). The demands of cybercrime prevention fit perfectly with this analytical and preventive philosophy. If, as one of the pioneers of the subject said when referring to crime committed in physical space, crime prevention requires focusing on the environment (Jeffery 1977), then environmental approaches constitute social approaches to understanding the cybercrime event that, by focusing on the different digital environments in which these events occur, will be highly effective in the future.

Cross-References

▶ Routine Activities

Acknowledgments

We thank the editors of this fantastic handbook, and especially Prof. Tom Holt from Michigan State University, for their confidence in us to write this chapter on Environmental Theories. We would also like to thank Prof. Marcus Felson of Texas State University for his insights in several discussions that have served to consolidate the research presented here. Finally, we would like to thank Prof. Steven Kemp of the University of Girona for his comments, which have greatly improved the translation of this work.


Funding

This research has been funded by the Spanish Ministry of Economy, Industry, and Competitiveness under the project Criminology, empirical evidence, and criminal policy: on incorporating scientific evidence to decision-making regarding criminalization of conducts (Reference DER2017-86204-R). This research has also been funded by the Spanish Ministry of Education, Culture and Sports under the University Faculty Training (FPU) Grant (Reference FPU16/01671).

References

Agarwal, N., Gupta, R., Singh, S. K., & Saxena, V. (2017). Metadata based multi-labelling of YouTube videos. In 7th international conference on cloud computing, data science & engineering-confluence (pp. 586–590). Noida: IEEE. https://doi.org/10.1109/CONFLUENCE.2017.7943219.

Aldridge, J., & Décary-Hétu, D. (2014). Not an 'Ebay for Drugs': The Cryptomarket 'Silk Road' as a paradigm shifting criminal innovation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2436643.

Barr, R., & Pease, K. (1990). Crime placement, displacement, and deflection. Crime and Justice, 12, 277–318. https://doi.org/10.1086/449167.

Birks, D., Townsley, M., & Stewart, A. (2012). Generative explanations of crime: Using simulation to test criminological theory. Criminology, 50(1), 221–254. https://doi.org/10.1111/j.1745-9125.2011.00258.x.

Bossler, A. M., & Holt, T. J. (2009). On-line activities, guardianship, and malware infection: An examination of routine activities theory. International Journal of Cyber Criminology, 3(1), 400–420.

Bossler, A., & Holt, T. J. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. New York: Routledge.

Bottoms, A. (2012). Developing socio-spatial criminology. In M. Maguire, R. Morgan, & R. Reiner (Eds.), The Oxford handbook of criminology (pp. 450–488). Oxford: Oxford University Press. https://doi.org/10.1093/he/9780199590278.003.0016.

Bowers, K., & Johnson, S. D. (2016). Situational prevention. In D. Weisburd, D. P. Farrington, & C. Gill (Eds.), What works in crime prevention and rehabilitation: Lessons from systematic reviews (pp. 111–136). New York: Springer.

Braga, A. A. (2005). Hot spots policing and crime prevention: A systematic review of randomized controlled trials. Journal of Experimental Criminology, 1(3), 317–342. https://doi.org/10.1007/s11292-005-8133-z.

Brantingham, P. L. (2011). Computational criminology. In 2011 European intelligence and security informatics conference (EISIC) (pp. 3–3). Athens, Greece: IEEE.

Brantingham, P. L., & Brantingham, P. J. (1981). Notes on the geometry of crime. In P. J. Brantingham & P. L. Brantingham (Eds.), Environmental criminology (pp. 27–53). Beverly Hills: SAGE.

Brantingham, P., & Brantingham, P. (1995). Criminality of place. European Journal on Criminal Policy and Research, 3(3), 5–26. https://doi.org/10.1007/BF02242925.

Broadhurst, R., Grabosky, P., Alazab, M., Bouhours, B., & Chon, S. (2014). An analysis of the nature of groups engaged in cyber crime. International Journal of Cyber Criminology, 8(1), 1–20.

Burnap, P., & Williams, M. L. (2015). Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making. Policy and Internet, 7(2), 223–242. https://doi.org/10.1002/poi3.85.

Caneppele, S., & Aebi, M. F. (2017). Crime drop or police recording flop? On the relationship between the decrease of offline crime and the increase of online and hybrid crimes. Policing: A Journal of Policy and Practice. https://doi.org/10.1093/police/pax055.


Capone, D., & Nichols, W. J. (1976). Urban structure and criminal mobility. American Behavioral Scientist, 20(2), 199–213. https://doi.org/10.1177/000276427602000203.

Choi, K. S., & Lee, J. R. (2017). Theoretical analysis of cyber-interpersonal violence victimization and offending using cyber-routine activities theory. Computers in Human Behavior, 73, 394–402. https://doi.org/10.1016/j.chb.2017.03.061.

Clarke, R. V. (1992). Situational crime prevention: Successful case studies. New York: Harrow and Heston Publishers.

Clarke, R. V. (1997). Situational crime prevention: Successful case studies (2nd ed.). Guilderland: Harrow and Heston Publishers.

Clarke, R. V. (2010). Crime science. In E. McLaughlin & T. Newburn (Eds.), The SAGE handbook of criminological theory (pp. 271–283). London: SAGE. https://doi.org/10.4135/9781446200926.n15.

Clarke, R. V. (2018). Book review [Review of the book Place matters: Criminology for the twenty-first century, by Weisburd, D., Eck, J. E., Braga, A. A., & Cave, B.]. Journal of Criminal Justice Education, 29(1), 157–159.

Clarke, R. V., & Weisburd, D. (1994). Diffusion of crime control benefits: Observations on the reverse of displacement. Crime Prevention Studies, 2, 165–184.

Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44(4), 588–608.

Cornish, D. B., & Clarke, R. V. (1986). The reasoning criminal. New York: Springer.

Cornish, D. B., & Clarke, R. V. (2003). Opportunities, precipitators and criminal decisions: A reply to Wortley's critique of situational crime prevention. Crime Prevention Studies, 16, 41–96.

Cozens, P. M., Saville, G., & Hillier, D. (2005). Crime prevention through environmental design (CPTED): A review and modern bibliography. Property Management, 23(5), 328–356. https://doi.org/10.1108/02637470510631483.

Cross, C. (2015). No laughing matter: Blaming the victim of online fraud. International Review of Victimology, 21(2), 187–204. https://doi.org/10.1177/0269758015571471.

Cullen, F. T., & Kulig, T. C. (2018). Evaluating theories of environmental criminology: Strengths and weaknesses. In G. J. N. Bruinsma & S. D. Johnson (Eds.), The Oxford handbook of environmental criminology (pp. 160–176). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190279707.013.7.

Décary-Hétu, D., & Morselli, C. (2011). Gang presence in social network sites. International Journal of Cyber Criminology, 5(2), 876–890.

Eck, J. (1994). Drug markets and drug places: A case-control study of the spatial structure of illicit drug dealing. Doctoral dissertation, University of Maryland.

Ekblom, P. (1997). Gearing up against crime: A dynamic framework to help designers keep up with the adaptive criminal in a changing world. International Journal of Risk Security and Crime Prevention, 2, 249–266.

Felson, M. (1986). Linking criminal choices, routine activities, informal control, and criminal outcomes. In D. Cornish & R. Clarke (Eds.), The reasoning criminal (pp. 119–128). Secaucus: Springer.

Felson, M. (2012). Prólogo. In F. Miró-Llinares (Ed.), El Cibercrimen. Fenomenología y criminología de la delincuencia en el ciberespacio [Foreword] (pp. 13–16). Madrid: Marcial Pons.

Felson, M., & Eckert, M. (2016). Crime and everyday life (5th ed.). Los Angeles: SAGE.

Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. https://doi.org/10.1145/2818717.

Gabor, T. (1981). The crime displacement hypothesis: An empirical examination. Crime & Delinquency, 27(3), 390–404. https://doi.org/10.1177/001112878102700306.

Grabosky, P. N. (2001). Virtual criminality: Old wine in new bottles? Social & Legal Studies, 10(2), 243–249. https://doi.org/10.1177/a017405.

Guerette, R. T., & Bowers, K. J. (2009). Assessing the extent of crime displacement and diffusion of benefits: A review of situational crime prevention evaluations. Criminology, 47(4), 1331–1368. https://doi.org/10.1111/j.1745-9125.2009.00177.x.

Guerry, A. M. (1833). Essai sur la statistique morale de la France. Paris: Crochard.

Guo, R. M. (2008). Stranger danger and the online social network. Berkeley Technology Law Journal, 23(1), 617–644. https://doi.org/10.15779/Z38J69J.


Harries, K. D. (1976). Cities and crime: A geographic model. Criminology, 14, 369–386. https://doi.org/10.1111/j.1745-9125.1976.tb00029.x.

Harries, K. (1999). Mapping crime: Principles and practice. Washington, DC: National Institute of Justice.

Hartel, P. H., Junger, M., & Wieringa, R. J. (2010). Cyber-crime science = crime science + information security. University of Twente. Retrieved from https://research.utwente.nl/files/5095739/0_19_CCS.pdf

Hinduja, S., & Kooi, B. (2013). Curtailing cyber and information security vulnerabilities through situational crime prevention. Security Journal, 26(4), 383–402. https://doi.org/10.1057/sj.2013.25.

Holt, T. J., & Bossler, A. M. (2008). Examining the applicability of lifestyle-routine activities theory for cybercrime victimization. Deviant Behavior, 30(1), 1–25. https://doi.org/10.1080/01639620701876577.

Holt, T. J., & Bossler, A. M. (2013). Examining the relationship between routine activities and malware infection indicators. Journal of Contemporary Criminal Justice, 29(4), 420–436. https://doi.org/10.1177/1043986213507401.

Holt, T. J., Smirnova, O., & Hutchings, A. (2016). Examining signals of trust in criminal markets online. Journal of Cybersecurity, 2(2), 137–145. https://doi.org/10.1093/cybsec/tyw007.

Holt, T. J., van Wilsem, J., van de Weijer, S., & Leukfeldt, R. (2018). Testing an integrated self-control and routine activities framework to examine malware infection victimization. Social Science Computer Review. https://doi.org/10.1177/0894439318805067.

Hosseinmardi, H., Mattson, S. A., Rafiq, R. I., Han, R., Lv, Q., & Mishra, S. (2015). Detection of cyberbullying incidents on the Instagram social network. arXiv preprint arXiv:1503.03909.

Hutchings, A., & Clayton, R. (2016). Exploring the provision of online booter services. Deviant Behavior, 37(10), 1163–1178. https://doi.org/10.1080/01639625.2016.1169829.

Hutchings, A., & Holt, T. J. (2014). A crime script analysis of the online stolen data market. British Journal of Criminology, 55(3), 596–614. https://doi.org/10.1093/bjc/azu106.

Hutchings, A., & Holt, T. J. (2017). The online stolen data market: Disruption and intervention approaches. Global Crime, 18(1), 11–30. https://doi.org/10.1080/17440572.2016.1197123.

Jeffery, C. R. (1977). Crime prevention through environmental design. Beverly Hills: SAGE.

Junger, M., Montoya, L., Hartel, P., & Heydari, M. (2017). Towards the normalization of cybercrime victimization: A routine activities analysis of cybercrime in Europe. In 2017 international conference on cyber situational awareness, data analytics and assessment (cyber SA) (pp. 1–8). London, United Kingdom: IEEE. https://doi.org/10.1109/CyberSA.2017.8073391.

Khey, D. N., & Sainato, V. A. (2013). Examining the correlates and spatial distribution of organizational data breaches in the United States. Security Journal, 26(4), 367–382. https://doi.org/10.1057/sj.2013.24.

Kigerl, A. (2012). Routine activity theory and the determinants of high cybercrime countries. Social Science Computer Review, 30(4), 470–486. https://doi.org/10.1177/0894439311422689.

Kigerl, A. C. (2013). Infringing nations: Predicting software piracy rates, bittorrent tracker hosting, and p2p file sharing client downloads between countries. International Journal of Cyber Criminology, 7(1), 62–80.

Klausen, J., Barbieri, E. T., Reichlin-Melnick, A., & Zelin, A. Y. (2012). The YouTube Jihadists: A social network analysis of Al-Muhajiroun's propaganda campaign. Perspectives on Terrorism, 6(1), 36–53.

Leukfeldt, E. R. (2014). Phishing for suitable targets in the Netherlands: Routine activity theory and phishing victimization. Cyberpsychology, Behavior and Social Networking, 17(8), 551–555. https://doi.org/10.1089/cyber.2014.0008.

Leukfeldt, E. R., & Yar, M. (2016). Applying routine activity theory to cybercrime: A theoretical and empirical analysis. Deviant Behavior, 37(3), 263–280. https://doi.org/10.1080/01639625.2015.1012409.

Leukfeldt, E. R., Kleemans, E. R., & Stol, W. P. (2016). Cybercriminal networks, social ties and online forums: Social ties versus digital ties within phishing and malware networks. British Journal of Criminology, 57(3), 704–722. https://doi.org/10.1093/bjc/azw009.

Maimon, D., Kamerdze, A., Cukier, M., & Sobesto, B. (2013). Daily trends and origin of computer-focused crimes against a large university computer network: An application of the routine-

510

F. Miró-Llinares and A. Moneva


25 Subcultural Theories of Crime

Thomas J. Holt

Contents

Introduction ................................................................ 514
Subcultural Theories and Research in Physical and Cyberspace ................ 515
Emergent Subcultural Frameworks ............................................. 516
Technological Mediation of Subcultural Experiences .......................... 517
Subcultures, Technology, and Crime .......................................... 519
Discussion and Conclusions .................................................. 522
Cross-References ............................................................ 523
References .................................................................. 523

Abstract

This chapter will consider cybercrime offending through a subcultural perspective, which situates crime and delinquency as a function of social factors that shape the individual acceptance of values and beliefs that support action. Subcultural theories were developed throughout the mid-1900s and are still used in modern theoretical research as a means to understand a range of deviant and criminal behaviors. The historical evolution of subcultural theories will be discussed, along with contemporary examples of subcultures that operate in on- and off-line contexts to influence individual offending behavior.

Keywords

Subcultural theory · Cybercrime · Computer hacking · Prostitution · Extremism

T. J. Holt (*)
College of Social Science, School of Criminal Justice, Michigan State University, East Lansing, MI, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_19


Introduction

The development of the Internet and computer technologies has radically transformed society, particularly with regard to interpersonal communication. As noted throughout the various chapters of this book, the adoption of these technologies was slow at first, but exploded globally by the mid-1990s. The rise of the World Wide Web and HTML enabled individuals to efficiently share photos, video, and text with one another, while forums and instant messaging provided unique avenues for indirect, asynchronous contact as well as direct, immediate interaction. The social connectivity fostered by the spread of technology cannot be overstated, whether for legitimate or illicit activities. Individuals can now find others who share an interest in a particular product or idea, no matter how outlandish or unusual (Holt and Bossler 2016). Additionally, the global nature of the Internet fosters participation in communities regardless of an individual's actual position in physical space, rendering borders and language barriers meaningless (DiMarco and DiMarco 2003; Quinn and Forsyth 2013).

The rise of fragmented special interest communities has enabled the formation of subcultures, or groups with a shared set of values and beliefs that function either in opposition to or as a response to the broader culture (Brake 1980; Herbert 1998; Miller 1958). Some are focused on lifestyle choices such as living drug-, alcohol-, and sex-free (Williams and Copes 2005). Others are engaged in deviant activities that may occur on- or off-line, such as anorexia and eating disorders (Adler and Adler 2005) or seeking unprotected sex with HIV-positive partners (Moskowitz and Roloff 2007; Tewksbury 2006).
There are also myriad subcultures focused on criminal acts, whether off-line acts such as bestiality (Maratea 2011) or sex work (Blevins and Holt 2009; Milrod and Monto 2012), or online offenses including digital piracy (Cooper and Harrison 2001; Holt and Copes 2010) and computer hacking (Holt 2007; Jordan and Taylor 1998; Steinmetz 2015).

The rise of online subcultures, especially those focused on deviance or crime, is largely a function of the unique nature of the Internet. The faceless, anonymous nature of online spaces allows individuals to discuss topics that they may be embarrassed to talk about in person, for fear of either social rejection or legal risks (DiMarco and DiMarco 2003; Holt 2007; Quinn and Forsyth 2013). Additionally, social media and online communications platforms allow individuals to learn how to carry out particular behaviors or to justify their actions, particularly deviant ones (Blevins and Holt 2009; DiMarco and DiMarco 2003; Quinn and Forsyth 2013). As a result, online subcultures related to crime and deviance may enable offenders to become more effective and knowledgeable about the process of offending, regardless of whether the offense occurs on- or off-line (e.g., Blevins and Holt 2009).

Criminological inquiry regarding the formation and influence of subcultures on behavior was born in part from gang research in the 1940s and 1950s (e.g., Miller 1958). Scholars have adapted these models to online environments in the last few decades, though the breadth and scope of this literature can be complex (see Holt et al. 2010). Additionally, the theoretical underpinnings of these frameworks can be


contradictory, complicating our understanding of their meaning. This chapter attempts to clarify the theoretical frameworks used and their conceptions of human action. The intersecting nature of online subcultures that operate regardless of whether the offense occurs on- or off-line will also be explored to better understand their common characteristics. This chapter concludes with a discussion of future directions for scholarship.

Subcultural Theories and Research in Physical and Cyberspace

Criminological and sociological scholarship on subcultures is born from the positivist theory tradition, identifying the etiology of crime in factors internal or external to the individual (Bernard et al. 2010). Subcultures consist of individuals who socially coalesce around a specific interest or activity, with shared values, norms, and beliefs that set them apart from the dominant culture (Maurer 1981; Quinn and Forsyth 2013). Subcultural participants ascribe meaning and value to their actions, which may reflect either their perceived rejection of the dominant culture (Miller 1958; Young 1997) or an interest that is not valued by society (Wolfgang and Ferracuti 1967). Individuals involved in a subculture share codes of conduct, behavioral norms, and specialized knowledge of group processes, which provide members with ways to gauge their ties to the subculture itself (Brake 1980; Maurer 1981). Subcultures frequently utilize specialized language and outward symbols of membership that communicate one's connection to the culture and one's status within it (Hamm 2002; Holt et al. 2010; Maurer 1981).

There is no single theory of subcultural formation, as scholars have developed various models based on their assumptions about individual decision-making and adherence to normative values. Some theorists argue that subcultures form as a result of opposition to the traditional values of the dominant culture (Cohen 1955). Cohen (1955) developed a theory arguing that gangs form in lower socioeconomic communities as a way for individuals to find achievable goals based on values they identify, as they are unable to achieve those of the dominant culture. Similarly, Miller (1958) proposed that delinquent gangs form in lower class communities as a function of the values held by the community as a whole. Specifically, poor communities espouse a set of values that exist in opposition to those of the dominant culture.
Residents exposed to these values accept them, which in turn leads them to emphasize and value the ability to be tough, get into and out of trouble, be street smart, and seek out excitement (Miller 1958). The formation of gangs is a natural response to the lack of male role models and to conditions within this environment that encourage youths to demonstrate their adherence to these values in order to gain status (Miller 1958).

Other researchers take a more nuanced approach, recognizing that individuals may have greater agency to participate within a subculture while not accepting all of its normative values. Matza's (1968) drift theory was one of the first to recognize that socialization is not perfect, nor is there a single monolithic delinquent subculture influencing the behavior of participants at all times. Instead, there may be a subculture of delinquency that encourages the use of delinquent behavior on the basis of the


youth's perception that they are freed from the moral bond of society (Matza 1968). Youth may be expected to conform to social norms until they recognize this loosening from social convention and feel that delinquent behavior may be justifiable or their responsibility diminished. This is due in part to their associations with delinquent peers who provide justifications for action, as well as to perceptions that the juvenile justice system is actually unjust and has no authority over their actions. These circumstances enable youths to drift between conforming and offending behaviors as they see fit across various situations (Matza 1968).

Similar arguments about imperfect socialization have been made, as with Elijah Anderson's (1999) theory of a code of the streets. His work focused on accounting for the high rates of violence observed in urban communities over time. Anderson (1999) argued that residents in urban environments feel a pervasive sense of alienation from law enforcement and other formal systems of conflict resolution, as well as from suburban, middle class values. Instead, residents survive by way of a culture that espouses the use of violence when threatened or slighted by others. Individuals have few ways to achieve traditional symbols of social status, making one's reputation and how one is perceived paramount. The perceived willingness to respond with violence in public settings may reduce the potential that an individual will be challenged and helps to maintain their reputation. Though this code of conduct is presented to all, primarily during adolescence at school and at play, individuals differentially internalize its values based on their familial value systems (Anderson 1999). Thus, the risk of violent assault and victimization is thought to be a result of individuals' situational adherence to the code of the streets (Anderson 1999).
Similarly, Wolfgang and Ferracuti (1967) argued that a unique subculture of violence exists within Southern US states as reflected in historical crime statistics across the USA. Prolific use of violence in Southern states over time, either in the context of slave ownership or honor codes, created a cultural climate that encouraged the use of violence. The authors argued these conditions created variations in rates of assault and homicides at the individual level that would produce generally higher aggregate rates of violence compared to the rest of the country.

Emergent Subcultural Frameworks

A recent, relatively under-examined subcultural framework recognizing human agency was developed by Herbert (1998) as a means to assess individual action in the context of participation in a subculture. He recognized that behavior is dynamic and may occur for any number of reasons, whether due to adherence to subcultural values or the simple decision to act. Thus, he proposed to examine the "normative orders" of a subculture by assessing participants' positive or negative values toward actions (Herbert 1998: 347). This information highlights informal rules espoused by subcultural participants and enables researchers to identify conflicting or contradictory values that may complicate the decision to act in a subcultural context.

In addition, there is an ongoing attempt to revitalize Matza's (1968) subcultural drift theory to account for cybercrimes (Goldsmith and Brewer 2015; Holt et al.


2018). Using tenets of drift, Goldsmith and Brewer (2015) argued that technology creates unique opportunities to move in and out of deviant communities in virtual and real spaces. In fact, the Internet creates a platform where individuals can feel anonymous and freed from a sense of responsibility, in much the same way as Matza's original conception of drift (Goldsmith and Brewer 2015). Participating in online communities also provides individuals with connections to peers with unique perspectives and values that differ from traditional social norms. Goldsmith and Brewer (2015) also recognize that simply having the ability to access content via websites and social networks provides individuals with information that could be modeled for the purposes of offending. As a consequence, the lack of moral and social controls felt in online spaces leads individuals to feel they can engage in deviant or conforming behavior at any time.

The authors originally stated that they were not interested in assessing the utility of digital drift for more common forms of cybercrime, such as hacking, piracy, or viewing pornography (Goldsmith and Brewer 2015). Instead, they focused on lone wolf terror incidents and exchanges of child pornography in their review of drift in action. A subsequent revision by Holt et al. (2018) expanded the discussion of drift to all forms of cybercrime and recognized the potential impact of police and courts in affecting the loosening of moral and social conventions. Specifically, Holt and colleagues argue that the sense of injustice produced in juveniles by police and the juvenile justice system can also be observed in responses to cybercrime. The authors argue that cybercriminality is inconsistently policed at the local and federal levels due to limited resources and the difficulties inherent in investigating cases that may cross jurisdictional boundaries (e.g., Holt et al. 2018).
Those cases that are successfully prosecuted tend to be used as examples to deter active offenders, particularly through the application of overly punitive correctional and post-release sanctions (Holt et al. 2018). As a consequence, the current response of the criminal justice system to cybercrime may disproportionately increase individuals' sense that police and other system agents have no legitimacy. In turn, these conditions may increase the potential for individuals to feel a sense of drift and engage in various forms of cybercrime.

As a theory, digital drift appears to have utility in accounting for cybercrime by refining concepts originally proposed by Matza and recognizing the novel ways that technology influences subcultural formation. There is, however, almost no empirical research testing these concepts, calling into question how its key hypotheses may be operationalized or measured (see Brewer et al. 2018). Research using longitudinal data is needed in order to better assess the ways that individuals are exposed to sources of deviant information online and how their social networks may evolve over time as they increasingly gain and accept deviant peers on- and off-line.

Technological Mediation of Subcultural Experiences

The development of modern communications technology has not supplanted the role of personal experience in the acceptance of, and enculturation into, subcultural norms and values. The need for off-line connections and interactions


still influences behavior, while computer-mediated communications may simply enhance subcultural experience and provide alternative avenues for the expression of norms and values. This is evident in the ways that gang members appear to utilize social media platforms relative to their off-line experiences. Examinations of gang membership and socialization toward deviant and delinquent values have largely been approached from the perspective that youth form value systems either in opposition to or as a response to the dominant culture (e.g., Cohen 1955; Decker et al. 1998; Miller 1958, 2001). More recent research examining gang members' behaviors in online spaces has found that they use the Internet to download media, enable drug sales, and utilize social media platforms in support of off-line crimes at greater rates than non-gang and former gang members (Moule et al. 2014). Similarly, studies of street gang posting behaviors on social media sites suggest these platforms are largely a tool for individual expression and the promotion of violence toward others (Womer and Bunker 2010), and possibly a recruitment tool depending on the gang or organization (Décary-Hétu and Morselli 2011; Morselli and Décary-Hétu 2010; Womer and Bunker 2010).

Similarly, several forms of sexual deviance are dependent on off-line experiences involving one or more partners, though they may be facilitated by the Internet. For instance, individuals may use dating and personal ad websites to enable the act of bugchasing, where HIV-negative individuals seek sexual encounters with HIV-positive partners so as to potentially become infected (Grov 2004; Tewksbury 2006). Similarly, participants in the BDSM community utilize specialized social media platforms to identify potential partners in their local area (Denney and Tewksbury 2013).
Such encounters were possible before the Internet, but technology has simplified and streamlined the process of creating connections that can facilitate sexual relationships (see also Sexual Subcultures and Online Spaces).

Extremist groups and ideological movements can also utilize the Internet to connect with others and express their beliefs in environments that are separate from their real-world identities. Many of the phrases and expressions used by participants in online spaces to reflect their subcultural identity are similar to those used in off-line environments. For example, far-right group posts in forums and social media may incorporate phrases such as HH or 88 as abbreviations for Heil Hitler (Simi and Futrell 2015). Online communities are also designed to facilitate connections with individuals beyond their existing real-world social circles (see ▶ Chaps. 64, "Hate Speech in Online Spaces," and ▶ 65, "The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research"; Holt et al. 2019; Weimann 2005). In fact, many forums operated by extremist groups are designed to connect participants to others in their local areas so as to increase the size and clout of their groups (Hankes 2015; Weimann 2005). This was perhaps exemplified by Dylann Roof, who shot and killed nine African American congregants at a church in Charleston, South Carolina, in 2015 (Glenza 2015; Hamm and Spaaij 2017). Roof posted in multiple far-right group forums and used his accounts to express his beliefs and attempt to connect with others living in South Carolina for in-person meetings (Holt et al. 2018).

Even if participation in online communities does not increase the overall size and scope of extremist groups off-line, such communities enable individuals to accept ideological


belief systems that emphasize the need for violence or bigoted behavior (Gerstenfeld et al. 2003; Hankes 2015). For instance, the terror organization ISIS was able to utilize social media platforms like Twitter and Skype as tools to radicalize individuals in online spaces so that they would eventually travel from their home countries to join the fight in Iraq and the broader Middle East (Berger and Morgan 2015; Steward and Maremont 2016).

The use of the Internet also enables the promotion of violent ideas and messaging in ways that may be more acceptable to readers who are supportive of a group's ideological beliefs. As an example, white nationalists have produced a number of simple video games that can be played online and promote their beliefs in overt ways under the guise of entertainment. One of the more popular of these games, called Ethnic Cleansing, was designed as a first-person shooter in which the player assumes the guise of a skinhead in order to kill minorities in a variety of real-world settings (Holt 2012). A number of white nationalist groups also operated small music labels that released recordings of various bands whose lyrics promote hate and violence against various groups (Britz 2010; Jipson 2007). In much the same way, the group Al Qaeda in the Arabian Peninsula published an online English-language lifestyle magazine called Inspire, designed to promote jihadist beliefs in a socially acceptable way (Watson 2013). Readers could gain an unvarnished view of the reasons individuals may have for engaging in violence against western targets through a lens of culture and religion. Additionally, the magazine provided readers with tactical guidelines for making and using weapons against various targets. The information published in Inspire is thought to have enabled the Tsarnaev brothers to make the homemade explosive devices that were used to attack participants in the 2013 Boston Marathon (Cooper et al. 2013).
These examples demonstrate the inherent value of the Internet in augmenting subcultural experiences and enculturation.

Subcultures, Technology, and Crime

Criminologists have yet to resolve the underlying competition between these perspectives, instead choosing to focus more substantively on the values and belief structures of deviant and criminal subcultures. Additionally, scholars began to examine the interplay between online and off-line subcultural experiences in the 2000s (e.g., Holt 2007; Taylor et al. 2001; Williams and Copes 2005). These studies have not addressed whether on- or off-line environments exert a greater influence on offending behavior, or the extent to which subcultural experiences differ as a result of where individuals are socialized to a subculture’s values (Holt 2007). In fact, only a small portion of subcultural research utilizes data developed from both on- and off-line sources to examine offending (e.g., Holt 2007; Holt and Copes 2010; Jordan and Taylor 1998; Steinmetz 2015; Taylor 1999). The larger body of work examining the role of technology in facilitating subcultural formation is derived from online data sources, including studies of computer hackers (Meyer 1989), digital pirates (Cooper and Harrison 2001; Steinmetz and Tunnell 2013), pedophiles (Durkin and Bryant


1999; Holt et al. 2010; Jenkins 2001; Quayle and Taylor 2002), prostitution (Blevins and Holt 2009; Milrod and Monto 2012; Sanders 2004), and sexual deviance generally (Denney and Tewksbury 2013; Grov 2004; Maratea 2011; Roberts and Hunt 2012; Tewksbury 2006).

Regardless of the method of data collection, several common characteristics are observed across deviant and criminal subcultures operating online (e.g., Holt and Bossler 2016; Leukfeldt 2015). Though these subcultures differ on the basis of the nature of their offense, and the extent to which the act occurs on- or off-line, they all emphasize ways to determine social position within the subculture, as well as techniques to minimize the likelihood of detection and arrest.

In fact, one of the most researched subcultures engaged in cybercrime is the computer hacker subculture (see ▶ Chaps. 35, “Computer Hacking and the Hacker Subculture,” and ▶ 37, “Global Voices in Hacking (Multinational Views)”). Criminological inquiry on computer hacking dates back to the mid-1980s, when this was still a novel and uncommon form of crime (e.g., Hollinger and Lanza-Kaduce 1988; Meyer 1989). Studies of hacking have focused on the ways that individuals value computer technology and judge the skills and abilities of others (Holt 2007; Jordan and Taylor 1998; Steinmetz 2015). In that respect, hackers value technology above all else and prize the ability to manipulate any sort of computer hardware or software in ways it was not originally designed to be used. Individuals who can demonstrate their skill and mastery of computers, telephony, and mobile devices are given status by others within the subculture and recognized for their abilities (Holt 2007; Jordan and Taylor 1998; Meyer 1989; Steinmetz 2015; Taylor et al. 2001).
In fact, individuals are often given greater social status if they can illustrate their knowledge in practice, whether by hacking devices or by providing tutorials and written explanations that show how a program or concept works (Holt 2007; Meyer 1989; Steinmetz 2015).

Similar values are evident in examinations of the subculture of digital piracy, whose participants share various forms of media and intellectual property without paying the original creator (see ▶ Digital Piracy). Though piracy is possible without the Internet, such as buying bootleg DVDs or copying CDs without payment to the content creator, it is almost impossible to separate the subculture of piracy from the Internet and computers generally (see Steinmetz and Tunnell 2013). Individuals who persistently engage in downloading and uploading pirated content typically judge others on the size of their content libraries and their ability to acquire and share rare or unusual media (Cooper and Harrison 2001; Downing 2011). In fact, the individuals or groups who make the best-quality, first-run films or television shows available online for others to download typically gain the respect of others within the subculture (Brown and Holt 2018).

Both the hacker and piracy subcultures also define participants on the basis of their abilities and roles in online communities. Individuals involved in the hacker subculture who use tools to hack but are unable to fully understand how those tools function or achieve specific outcomes are referred to as script kiddies (Holt et al. 2010; Taylor 1999). The term script kiddie is usually used as a pejorative, while individuals who are skilled and highly proficient with technology may be referred to as hackers, white hats,


or black hats depending on their ethical outlook (Furnell 2002; Holt et al. 2010). Participants in piracy communities who only download content without sharing files are usually referred to as leeches, as they only absorb useful materials from the broader community (Cooper and Harrison 2001; Holt and Copes 2010). Those who frequently share new content may be referred to as citizens, reflecting their engagement with the broader community (Cooper and Harrison 2001; Steinmetz and Tunnell 2013). Thus, these online subcultures provide a text-based metric that demonstrates an actor’s hierarchical position within the group (see Holt et al. 2010).

A relatively similar dynamic can be observed among the participants in the online subculture of prostitution services, as they engage in detailed discussions of their experiences with paid sex workers in the real world (see ▶ Prostitution and Sex Work in an Online Context; Blevins and Holt 2009; Cunningham and Kendall 2010; Milrod and Monto 2012; Sanders 2004). Though the posters do not utilize a similar argot to demonstrate social position, forum participants encourage one another to share reviews of sex workers in ways that treat women more as commodities than as people (Blevins and Holt 2009). Reviews typically identify the location where sex workers were solicited, their appearance and demeanor, as well as the negotiation process and the sex acts themselves (Cunningham and Kendall 2010; Holt and Blevins 2007; Milrod and Monto 2012). As a result, the frequency and quality of posts participants make serve as a way to gauge an individual’s expertise with prostitution and denote status within the subculture (Blevins and Holt 2009; Sharp and Earle 2003).

All three of these subcultures also recognize the risks they face from formal and informal agents of social control. Computer hacking and digital piracy have been criminalized and may carry serious prison sentences at the federal level (Brenner 2009).
As a result, participants in these subcultures take steps to segment their online activities from their real identities (Holt and Copes 2010; Jordan and Taylor 1998; Taylor et al. 2001; Thomas 2002). All of these communities utilize screen names, or handles, to take credit for their actions while shielding their real identities from criminal liability. Participants in these subcultures also share information through forums and websites that enable individuals to anonymize and obfuscate their actions from both police agencies and third-party security firms (Blevins and Holt 2009; Holt and Copes 2010; Holt 2007; Jordan and Taylor 1998).

Online subcultures that focus on offenses occurring in the real world also serve a vital role in providing offenders with information that could improve their likelihood of offending without being detected. This is evident among sexual deviance communities, as with bestiality, the practice of engaging in sexual acts with animals (Maratea 2011). Individuals interested in bestiality can discuss their methods for completing various acts or the inherent physical risks associated with attempting to have sex with certain animals (Maratea 2011). There are also online communities where people who are sexually attracted to children can discuss their interests with others, as well as ways to minimize the risk of detection when observing children in public settings (e.g., Holt et al. 2010). The customers of prostitutes also regularly use online platforms not only to review and solicit sex workers but also to discuss ways to detect undercover police operations in physical space and inform others of stings in progress (Holt et al. 2014).


Discussion and Conclusions

Taken as a whole, it is clear that the Internet serves a vital role in the formation and maintenance of deviant and criminal subcultures, regardless of whether they involve offenses that occur in real or virtual settings (Hamm and Spaaij 2017; Holt 2007; Jordan and Taylor 1998; Simi and Futrell 2006). Social media, instant messaging platforms, and web forums provide an anonymous platform for individuals to share their common interests, perspectives, and beliefs about activities (DiMarco and DiMarco 2003; Quinn and Forsyth 2013). Participation in these communities may foster enculturation into a deviant subculture through the acceptance of justifications for criminal activities and methods of offending. Though computer-mediated communications platforms are mostly driven by text, subcultural participants can utilize different strategies to identify an individual’s status within the community based on adherence to shared values. The hacker, piracy, and prostitution subcultures all emphasize the importance of expressions of knowledge and expertise in their specific form of offending (Blevins and Holt 2009; Holt and Copes 2010; Jordan and Taylor 1998; Milrod and Monto 2012; Taylor et al. 2001). In addition, participants can gain insights into the practices of law enforcement and social control agents who may attempt to sanction offenders. Thus, online subcultures may prove essential in communicating risk reduction and displacement strategies that minimize the likelihood of detection (e.g., Holt et al. 2014).

The insights derived from the current research literature are valuable, though myriad questions must be addressed by scholars to improve our understanding of subcultural formation and practice. First, the current body of research is based on samples of posts from forums and social media sources as a means to understand communicated values between subcultural participants (Holt et al. 2016; Quinn and Forsyth 2013).
These studies demonstrate the values and beliefs of various subcultures and potential shifts in those values over time (Holt and Bossler 2016). As online communications platforms evolve, and their user bases splinter due to differences in use patterns, it is unclear how subcultural expression may shift based on a platform’s features. The way individuals share information on image-based platforms like Instagram and Snapchat differs from that on Facebook and Twitter, which combine long- and short-form text with multimedia use. In addition, researchers frequently utilize data from only virtual or only real-world sources, limiting their ability to identify variations in subcultural experiences on the basis of participants’ experiences in both environments (for exceptions, see Holt 2007; Jordan and Taylor 1998). Thus, future scholarship is needed that incorporates data from various online platforms as well as interviews and participant observation to situate the lived experience of deviant and criminal subcultures in total.

Second, research is needed that considers the ways that new media and technologies may be incorporated into existing deviant subcultures or produce new ones that did not previously exist. As noted in discussions of revenge porn, access to high-speed Internet connectivity, high-definition video cameras in phones, and social media platforms like Snapchat enables individuals to share sexual images of themselves with others, a practice colloquially known as sexting (e.g., Powell and Henry 2017). Though


these images are meant to be kept private between the sender and recipient, some recipients share this content with others without the sender’s permission. In some cases, these images wind up on file sharing sites, social media, and pornography sites, which has led to new fetishes and pornography categories focused on viewing amateur content that was never intended to be shared. Researchers must be diligent in documenting the formation and values of new subcultures on- and off-line to better understand the impact of technology on offender behavior as a whole (Holt and Bossler 2016).

Lastly, there is a need for researchers to continue to refine and operationalize subcultural theories to improve the state of the literature. As stated earlier, most contemporary research examines the norms and values of subcultures without considering the underlying factors that impel their formation generally. Early criminological investigations by Cohen (1955) and Miller (1958) recognized that lower-class community characteristics create an environment that may produce delinquent subcultures. Contemporary criminological scholarship recognizes that the features of online communications platforms may make it possible for individuals to connect with others who share their interests, though it does not necessarily suggest the Internet is a criminogenic environment as a whole. Such an argument could be made, though few have stated the idea in print. Similarly, the tension between positivist arguments that individuals are perfectly socialized into a subculture and more choice-based frameworks that emphasize the role of agency has yet to be resolved in contemporary criminological inquiry. Such research is needed not only to improve our understanding of deviant subcultures operating on- and off-line but also to advance the general state of criminological theory.

Cross-References

▶ Child Sexual Exploitation: Introduction to a Global Problem
▶ Computer Hacking and the Hacker Subculture
▶ Deviant Instruction: The Applicability of Social Learning Theory to Understanding Cybercrime
▶ Digital Piracy
▶ Global Voices in Hacking (Multinational Views)
▶ Prostitution and Sex Work in an Online Context

References

Adler, P. A., & Adler, P. (2005). Self-injurers as loners: The social organization of solitary deviance. Deviant Behavior, 26(4), 345–378.
Anderson, E. (1999). Code of the street. New York: Norton.
Berger, J. M., & Morgan, J. (2015). The ISIS Twitter census: Defining and describing the population of ISIS supporters on Twitter. The Brookings Project on US Relations with the Islamic World, 3(20), 4–1.


Bernard, T. J., Snipes, J. B., & Gerould, A. L. (2010). Vold’s theoretical criminology (pp. 179–189). New York: Oxford University Press.
Blevins, K., & Holt, T. J. (2009). Examining the virtual subculture of johns. Journal of Contemporary Ethnography, 38(5), 619–648.
Brake, M. (1980). The sociology of youth cultures and youth subcultures. London: Routledge and Kegan Paul.
Brenner, S. W. (2009). Cyberthreats: The emerging fault lines of the nation state. New York: Oxford University Press.
Brewer, R., Cale, J., Goldsmith, A., & Holt, T. (2018). Young people, the Internet, and emerging pathways into criminality: A study of Australian adolescents. International Journal of Cyber Criminology, 12, 115–132.
Britz, M. T. (2010). Terrorism and technology: Operationalizing cyberterrorism and identifying concepts. In T. J. Holt (Ed.), Crime on-line: Correlates, causes, and context (pp. 193–220). Raleigh: Carolina Academic Press.
Brown, S. C., & Holt, T. J. (2018). Digital piracy: A global, multidisciplinary account. Routledge.
Cohen, A. (1955). Delinquent boys. New York: The Free Press.
Cooper, J., & Harrison, D. M. (2001). The social organization of audio piracy on the internet. Media, Culture, and Society, 23, 71–89.
Cooper, M., Schmidt, M. S., & Schmitt, E. (2013). Boston suspects are seen as self-taught and fueled by the web. The New York Times, 23 Apr 2013 [online]. Available at: http://www.nytimes.com/2013/04/24/us/boston-marathon-bombing-developments.html?pagewanted=all&_r=0
Cunningham, S., & Kendall, T. (2010). Sex for sale: Online commerce in the world’s oldest profession. In T. J. Holt (Ed.), Crime on-line: Correlates, causes, and context (pp. 40–75). Raleigh: Carolina Academic Press.
Décary-Hétu, D., & Morselli, C. (2011). Gang presence in social networking sites. International Journal of Cyber Criminology, 5(2), 878–890.
Decker, S. H., Bynum, T., & Weisel, D. (1998). A tale of two cities: Gangs as organized crime groups. Justice Quarterly, 15(3), 395–425.
Denney, A. S., & Tewksbury, R. (2013). Characteristics of successful personal ads in a BDSM online community. Deviant Behavior, 34(2), 153–168.
DiMarco, A. D., & DiMarco, H. (2003). Investigating cybersociety: A consideration of the ethical and practical issues surrounding online research in chat rooms. In Y. Jewkes (Ed.), Dot.cons: Crime, deviance and identity on the internet. Portland: Willan Publishing.
Downing, S. (2011). Retro gaming subculture and the social construction of a piracy ethic. International Journal of Cyber Criminology, 5(1).
Durkin, K. F., & Bryant, C. D. (1999). Propagandizing pederasty: A thematic analysis of the online exculpatory accounts of unrepentant pedophiles. Deviant Behavior, 20, 103–127.
Furnell, S. (2002). Cybercrime: Vandalizing the information society. London: Addison-Wesley.
Gerstenfeld, P. B., Grant, D. R., & Chiang, C.-P. (2003). Hate online: A content analysis of extremist internet sites. Analyses of Social Issues and Public Policy, 3(1), 29–44.
Glenza, J. (2015). Dylann Roof: The cold stare of a killer with a history of drug abuse and racism. The Guardian, 20 June 2015.
Goldsmith, A., & Brewer, R. (2015). Digital drift and the criminal interaction order. Theoretical Criminology, 19(1), 112–130.
Grov, C. (2004). “Make me your death slave”: Men who have sex with men and use the internet to intentionally spread HIV. Deviant Behavior, 25(4), 329–349.
Hamm, M. (2002). In bad company: America’s terrorist underground. Boston: Northeastern University Press.
Hamm, M. S., & Spaaij, R. (2017). The age of lone wolf terrorism. New York: Columbia University Press.
Hankes, K. (2015). Black hole. Southern Poverty Law Center Intelligence Report, 9 Mar 2015 [online]. Available at: https://www.splcenter.org/fighting-hate/intelligence-report/2015/black-hole
Herbert, S. (1998). Police subculture reconsidered. Criminology, 36, 343–369.


Hollinger, R. C., & Lanza-Kaduce, L. (1988). The process of criminalization: The case of computer crime laws. Criminology, 26(1), 101–126.
Holt, T. J. (2007). Subcultural evolution? Examining the influence of on- and off-line experiences on deviant subcultures. Deviant Behavior, 28, 171–198.
Holt, T. J. (2012). Exploring the intersections of technology, crime and terror. Terrorism and Political Violence, 24, 337–354.
Holt, T. J., & Blevins, K. R. (2007). Examining sex work from the client’s perspective: Assessing johns using online data. Deviant Behavior, 28, 333–354.
Holt, T. J., & Bossler, A. M. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. Crime Science Series. London: Routledge.
Holt, T. J., & Copes, H. (2010). Transferring subcultural knowledge online: Practices and beliefs of persistent digital pirates. Deviant Behavior, 31, 625–654.
Holt, T. J., Blevins, K. R., & Burkert, N. (2010). Considering the pedophile subculture on-line. Sexual Abuse: Journal of Research and Treatment, 22, 3–24.
Holt, T. J., Blevins, K. R., & Kuhns, J. B. (2014). Examining diffusion and arrest practices among johns. Crime and Delinquency, 60, 261–283.
Holt, T. J., Freilich, J. D., & Chermak, S. M. (2016). Internet-based radicalization as enculturation to violent deviant subcultures. Deviant Behavior, 47, 1–15.
Holt, T. J., Brewer, R., & Goldsmith, A. (2018). Digital drift and the “sense of injustice”: Counterproductive policing of youth cybercrime. Deviant Behavior, 40(9), 1144–1156.
Holt, T. J., Freilich, J. D., Chermak, S. M., Mills, C., & Silva, J. (2019). Loners, colleagues, or peers? Assessing the social organization of radicalization. American Journal of Criminal Justice, 44(1), 83–105.
Jenkins, P. (2001). Beyond tolerance: Child pornography on the internet. New York: New York University Press.
Jipson, A. (2007). Influence of hate rock. Popular Music and Society, 30, 449–451.
Jordan, T., & Taylor, P. (1998). A sociology of hackers. The Sociological Review, 46, 757–780.
Leukfeldt, E. R. (2015). Organised cybercrime and social opportunity structures: A proposal for future research directions. The European Review of Organised Crime, 2(2), 91–103.
Maratea, R. J. (2011). Screwing the pooch: Legitimizing accounts in a zoophilia on-line community. Deviant Behavior, 32(10), 918–943.
Matza, D. (1968). Becoming delinquent. Englewood Cliffs: Prentice Hall.
Maurer, D. W. (1981). Language of the underworld. Louisville: University of Kentucky Press.
Meyer, G. R. (1989). The social organization of the computer underground. Master’s thesis, Northern Illinois University.
Miller, W. B. (1958). Lower class culture as a generating milieu of gang delinquency. Journal of Social Issues, 14(3), 5–19.
Miller, J. (2001). One of the guys: Girls, gangs, and gender. Oxford: Oxford University Press.
Milrod, C., & Monto, M. A. (2012). The hobbyist and the girlfriend experience: Behaviors and preferences of male customers of internet sexual service providers. Deviant Behavior, 33(10), 792–810.
Morselli, C., & Décary-Hétu, D. (2010). Crime facilitation purposes of social networking sites: A review and analysis of the “cyberbanging” phenomenon. Sécurité publique Canada.
Moskowitz, D. A., & Roloff, M. A. (2007). The existence of a bug chasing subculture. Culture, Health & Sexuality, 9(4), 347–357.
Moule, R. K., Pyrooz, D. C., & Decker, S. H. (2014). Internet adoption and online behaviour among American street gangs: Integrating gangs and organizational theory. British Journal of Criminology, 54(6), 1186–1206.
Powell, A., & Henry, N. (2017). Sexual violence in a digital age. New York: Springer.
Quayle, E., & Taylor, M. (2002). Child pornography and the internet: Perpetuating a cycle of abuse. Deviant Behavior, 23, 331–361.
Quinn, J. F., & Forsyth, C. J. (2013). Red light districts on blue screens: A typology for understanding the evolution of deviant communities on the internet. Deviant Behavior, 34, 579–585.


Roberts, J. W., & Hunt, S. A. (2012). Social control in a sexually deviant cybercommunity: A cappers’ code of conduct. Deviant Behavior, 33, 757–773.
Sanders, T. (2004). Sex work: A risky business. London: Willan.
Sharp, K., & Earle, S. (2003). Cyberpunters and cyberwhores: Prostitution on the internet. In Y. Jewkes (Ed.), Dot.cons: Crime, deviance and identity on the internet (pp. 36–52). Portland: Willan Publishing.
Simi, P., & Futrell, R. (2006). White power cyberculture: Building a movement. The Public Eye Magazine, Summer (pp. 69–72). New York: Thunder’s Mouth Press.
Simi, P., & Futrell, R. (2015). American swastika: Inside the white power movement’s hidden spaces of hate. Lanham: Rowman & Littlefield.
Steinmetz, K. F. (2015). Craft(y)ness: An ethnographic study of hacking. The British Journal of Criminology, 55(1), 125–145.
Steinmetz, K. F., & Tunnell, K. D. (2013). Under the pixelated jolly roger: A study of on-line pirates. Deviant Behavior, 34(1), 53–67.
Stewart, S., & Maremont, M. (2016). Twitter and Islamic State deadlock on social media battlefield. The Wall Street Journal, 13.
Taylor, P. (1999). Hackers: Crime in the digital sublime. London: Routledge.
Taylor, M., Quayle, E., & Holland, G. (2001). Child pornography, the internet and offending. Isuma, 2, 9–100.
Tewksbury, R. (2006). “Click here for HIV”: An analysis of internet-based bug chasers and bug givers. Deviant Behavior, 27(4), 379–395.
Thomas, D. (2002). Hacker culture. Minneapolis: University of Minnesota Press.
Watson, L. (2013). Al Qaeda releases guide on how to torch cars and make bombs as it names 11 public figures it wants “dead or alive” in latest edition of its glossy magazine. Daily Mail.
Weimann, G. (2005). How modern terrorism uses the internet. The Journal of International Security Affairs, 8, 1–12.
Williams, J. P., & Copes, H. (2005). “How edge are you?” Constructing authentic identities and subcultural boundaries in a straightedge internet forum. Symbolic Interaction, 28(1), 67–89.
Wolfgang, M. E., & Ferracuti, F. (1967). The subculture of violence: Toward an integrated theory in criminology. London: Tavistock Publications.
Womer, S., & Bunker, R. J. (2010). Sureños gangs and Mexican cartel use of social networking sites. Small Wars and Insurgencies, 21, 81–94.

26 Deviant Instruction: The Applicability of Social Learning Theory to Understanding Cybercrime

Jordana N. Navarro and Catherine D. Marcum

Contents

Introduction
History of Social Learning Theory
  Historical Contributions to Social Learning Theory
  Modern Criminological Social Learning Model
Application of Social Learning Theory to Cybercrime
  Cybercrime Against Property
  Interpersonal Cybercrime
  Summarizing the Applicability of Social Learning Theory
The Influence of Social Learning Theory on Cyberlaw and Cyberpolicy
  Cyberlaws and Social Learning Theory: An Examination of Digital Piracy Laws
  Cyberpolicies and Social Learning Theory
Conclusion
Cross-References
References

Abstract

Although cybercrime is a new phenomenon compared to drug crime or property crime, it has existed long enough for exploration and evaluation with multiple criminological theories. This exploration and evaluation throughout the years has led to some theories gaining prominence within cybercrime research,

J. N. Navarro
Department of Criminal Justice, The Citadel, Charleston, SC, USA
e-mail: [email protected]

C. D. Marcum (*)
Department of Government and Justice Studies, Appalachian State University, Boone, NC, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_18


especially social learning theory. The purpose of this chapter is to show the importance of social learning theory as an applicable framework for understanding the origins of cybercrime, as well as to highlight how it has affected policy and programming in the field. In order to carry out these two goals, this text consists of three sections. The first section presents the history of social learning theory. The second section presents current applications of the theory, paying special attention to its use in the investigation of predictors of cybercrime. Lastly, the third section examines the policy implications of these findings.

Keywords

Social learning theory · Internet · Cybercrime

Introduction

Although cybercrime is a new phenomenon compared to drug crime or property crime, it has existed long enough for exploration and evaluation with multiple criminological theories. This exploration and evaluation over the years has led to some theories gaining prominence within cybercrime research, especially social learning theory. The purpose of this chapter is to show the importance of social learning theory as an applicable framework for understanding the origins of cybercrime, as well as to highlight how it has affected policy and programming in the field. In order to carry out these two goals, this text consists of three sections. The first section presents the history of social learning theory. The second section presents current applications of the theory, paying special attention to its use in the investigation of predictors of cybercrime. Lastly, the third section examines the policy implications of these findings.

History of Social Learning Theory

Before the exploration of social learning as a predictor of criminality, earlier theories had posed other potential factors as explanations of offending behavior. di Beccaria (1872) asserted that offenders made rational choices of their own free will to commit crime, while Lombroso (1897, as cited in 2006) believed criminal disposition was an inherited trait. Merton (1938) proposed that adaptations to experiencing anomie (the disjuncture between culturally approved goals and the legitimate means to achieve them) explain criminality. Shaw and McKay (1942) suggested that a lack of collective efficacy and a disorganized environment foster criminality. Social learning theory gave a different interpretation of what influences offending behavior. Below is a discussion of the chronological development of social learning theory, beginning with its early roots and continuing to how it is applied today.


Historical Contributions to Social Learning Theory

Contemporary differential association-social learning theory answers a specific question: why do people commit crime? This framework is a general theory of crime, which means it applies to everything from crime perpetrated by a single offender to crime overall. The earliest contribution to criminological versions of social learning theory came from Edwin Sutherland. Sutherland (1939, 1947) argued that criminal behavior should be understood as a process that takes place either in the moment (i.e., situational) or across the history of the individual (i.e., objective). The following nine propositions encapsulate Sutherland’s (1939, 1947) crime-committing process:

1. Criminal behavior is learned.
2. Criminal behavior is learned in interaction with other persons in a process of communication.
3. The principal part of the learning of criminal behavior occurs within intimate personal groups.
4. When criminal behavior is learned, the learning includes (a) techniques of committing the crime, which are sometimes very complicated, sometimes very simple; and (b) the specific direction of motives, drives, rationalizations, and attitudes.
5. The specific direction of motives and drives is learned from definitions of the legal codes as favorable and unfavorable.
6. A person becomes delinquent because of an excess of definitions favorable to violation of law over definitions unfavorable to violation of law.
7. Differential associations may vary in frequency, duration, priority, and intensity.
8. The process of learning criminal behavior by association with criminal and anti-criminal patterns involves all of the mechanisms that are involved in any other learning.
9. While criminal behavior is an expression of general needs and values, it is not explained by those general needs and values, since noncriminal behavior is an expression of the same needs and values.

Sutherland (1939, 1947) asserted that the acquisition of criminal knowledge begins with communication with others. In order to successfully learn criminal behavior, the individual must have a safe environment with willing teacher(s) (e.g., close friends, relatives, family, or peer acquaintances) from whom to get the knowledge necessary for offending. The environment and the teachers do not just provide the technical skills to perform offenses; they also provide the psychological ability to perform criminal acts without allowing the conscience to overrule them. Although the theory was widely applicable, Sutherland believed that it had scope limits, meaning it could not explain all offending all the time. It is important to note that social learning as we know it today was influenced early on by B.F. Skinner and Ivan Pavlov. Their work found that individuals make decisions, and continue to make the same decisions, based on the availability of rewards. Skinner's (1953) work with rats used operant conditioning, showing that behavior could be controlled through manipulation of the environment. A key concept in operant conditioning is reinforcement, referring to consequences that increase the likelihood an individual will behave in a certain manner. Positive reinforcement involves the presentation of a reward following a valued behavior, while negative reinforcement involves the removal or avoidance of an unpleasant stimulus following a behavior.

530

J. N. Navarro and C. D. Marcum

Pavlov (1927) also added to the development of social learning theory through his work on classical conditioning with dogs. He argued that behavior would continue when it was rewarded and that a stimulus could trigger behavior. After repeated pairings, Pavlov's dogs would consistently salivate at a neutral stimulus that had been paired with the presentation of meat. This process created a passive learning experience because the dog knew what to expect from specific conditions. Bandura (1964, 1969, 1976) presented a version of learning theory that went beyond those of Skinner and Pavlov, suggesting that learning involved more than conditioning. Bandura believed that learning occurred through observation and analysis of situations; in other words, an individual planned action based on that process. He believed that individuals learned by using role models: when role models performed a behavior, others would observe it and decide to perform the behavior themselves, even in the absence of direct reinforcement. Put another way, observation and experience interact to produce imitation and instigation of behavior, including criminal behavior.

Modern Criminological Social Learning Model

Akers's (1998) version of social learning theory was one of the most important advances in criminology. Burgess and Akers (1966) and Akers (1998) combined concepts from previous theories and research to provide several testable propositions. Their early work gave birth to criminology's version of social learning, referred to originally as differential association-reinforcement theory but later shortened to social learning theory. They asserted that Sutherland did not fully explain the social learning components of offending, reformulating his theory by restating all the propositions of differential association in the context of operant conditioning (Akers 1977, 1985, 1998). This revised version of the theory was not a competitor to Sutherland's (1939, 1947) theory; instead, Akers (1985) argued that social learning theory subsumed differential association and was broader because of the addition of differential reinforcement and imitation. These added concepts allowed for the consideration of behavior acquisition, continuation, and cessation. The central propositions from Burgess and Akers (1966) were as follows:

1. Criminal behavior is learned according to the principles of operant conditioning.
2. Criminal behavior is learned both in non-social situations that are reinforcing or discriminative and through social interaction in which the behavior of other persons is reinforcing or discriminative for criminal behavior.
3. The principal part of learning criminal behavior occurs in those groups which comprise the individual's major source of reinforcements.
4. The learning of criminal behavior, including specific techniques, attitudes, and avoidance procedures, is a function of the effective and available reinforcers and the existing reinforcement contingencies.
5. The specific class of behaviors which are learned and their frequency of occurrence are a function of the reinforcers which are effective and available, and the rules or norms by which these reinforcers are applied.


6. Criminal behavior is a function of norms that are discriminative for criminal behavior, the learning of which takes place when such behavior is more highly reinforced than noncriminal behavior.
7. The strength of criminal behavior is a direct function of the amount, frequency, and probability of its reinforcement.

With the addition of operant behavior, Burgess and Akers (1966) could explain why some individuals continued their behavior (i.e., through rewards) or ceased their behavior (i.e., through punishment). In a more contemporary version of the theory, Akers (1998) argued that "learning" was not solely related to new behaviors. He argued that social learning theory enabled the explanation of new individual behaviors as well as previously learned behaviors. In other words, social learning theory is a framework for understanding the motives that drive individuals to offend or to resist crime. Akers (1998) said the following:

The basic assumption of social learning theory is that the same learning process in a context of social structure, interaction, and situation, produces both conforming and deviant behavior. The difference lies in the direction . . . [of] balance of influences on behavior. The probability that persons will engage in criminal and deviant behavior is increased and the probability of their conforming to the norm is decreased when they differentially associate with others who commit criminal behavior and espouse definitions favorable to it, are relatively more exposed in-person or symbolically to salient criminal/deviant models, define it as desirable or justified in a situation discriminative for the behavior, and have received in the past and anticipate in the current or future situation relatively greater reward than punishment for the behavior. (Akers 1998, p. 50)

Akers’s (1998, 2008) revised version of the theory relies on four main concepts: differential association, definitions, differential reinforcement, and imitation. Differential association is the interaction with others in an intimate peer group who engage and support certain types of behavior. Akers and Seller (2009) also asserted that influences of other persons and entities (i.e., family, neighbors, religious institutions, school teachers) can be powerful in the socialization process. Definitions are the attitudes, values, orientations, and rationalizations that allow an individual to determine if behavior is morally right or wrong (Akers 2008). The anticipation of rewards and punishments (social) for specific acts refers to differential reinforcement. The applicability and visibility of key components of social learning theory are often clear in movies, even if not recognizable to laypersons; for example, readers should consider the 1990s cult classic Hackers (1995). In that film, there are several key characters: Emmanuel Goldstein (elite hacker), Kate Libby (elite hacker), Dade Murphy (elite hacker), and Joey Pardella (novice hacker). Throughout the film, Joey shows strong ties with his elite hacker friends (i.e., Emmanuel, Kate, and (later) Dade). Joey is constantly asking their advice on hacks and how to become “an elite hacker.” These interactions show the first concept in social learning theory (i.e., differential association). As Joey learns from his peers, he starts hacking on his own


to gain status within the group, which shows that he has internalized attitudes favorable toward deviance (i.e., definitions). At one point in the movie, Joey successfully hacks a bank across state lines from his home and brags about it to his friends. His friends sharply criticize him and question his intelligence (i.e., differential reinforcement, social punishment). Joey is visibly upset and laments that his friends are not helping him enough. His friends discuss the resources critical to successful hacking and tell Joey that he needs to do better to be "elite." Later in the film, Joey carries out a different hack and gathers proof of the exploit to show he is an elite hacker (i.e., imitation). This depiction of social learning theory in cybercrime perpetration is not a unique Hollywood moment. As the following passages show, social learning theory is a powerful framework for understanding the origins of various forms of cybercrime.

Application of Social Learning Theory to Cybercrime

Given the nature of many cybercrime offenses, it makes intuitive sense that social learning theory applies to these crimes. For example, in highly sophisticated forms of cybercrime (e.g., hacking, malware/virus distribution), offenders are unlikely to have the knowledge necessary to ensure success without associating with seasoned offenders. Even in "low-tech" forms of cybercrime (e.g., cyberbullying, cyberstalking), offenders still need to learn tactics and methods for avoiding detection before starting. Thus, social learning theory is a powerful perspective for understanding cybercrime perpetration. To explore the utility of this perspective across the spectrum of cybercrime, the following passages discuss the framework within specific areas. To organize the findings, we have separated the information into two broad sections: cybercrimes (mainly) against property (e.g., digital piracy and hacking) and cybercrimes (mainly) against people (e.g., cyberbullying, cybersexual abuse, and cyberstalking).

Cybercrime Against Property

Digital Piracy. Broadly speaking, digital piracy is the acquiring, copying, and/or dissemination of work protected by copyright without attribution and/or payment (Higgins et al. 2007). Thus, digital piracy is tantamount to theft via cyberspace and includes actions such as stealing artwork, media productions (e.g., movies, music), software, and written work (e.g., plagiarism) (Higgins et al. 2007). The methods by which individuals engage in piracy have evolved with technology (Nhan 2016). For instance, before the technological boom of the 2000s, individuals pirated films and music on cassette and VHS tapes (Nhan 2016). Perpetrators then moved to compact discs before turning to large-scale peer-to-peer file-sharing networks like Napster (Nhan 2016). Although sites like Napster still exist, the technology has advanced to rely on the BitTorrent protocol rather than Napster-style peer-to-peer services (Nhan 2016).


Not only have these advancements changed the way piracy occurs, but piracy has also become easier to commit (Holt and Bossler 2014). Unlike other technical cybercrimes (e.g., hacking), pirates need no advanced knowledge of computers or networks. However, potential offenders still must learn the basics of how to pirate (e.g., which sites to use, how to get files) and how to avoid detection. Even before addressing the existing research, the applicability of social learning theory to pirating is striking. It should not be surprising, then, that research has supported the use of social learning theory in framing pirating behavior. In contrast to other areas of research where social learning theory has received sparse attention (e.g., cyber dating abuse, cyber child sexual abuse), several studies have evaluated this perspective and engagement in digital piracy with notable results (Burruss et al. 2012; Higgins and Makin 2004; Higgins 2006; Higgins et al. 2007; Hinduja and Ingram 2008; Holt et al. 2010; Morris and Higgins 2009; Wolfe and Higgins 2009). In one early example, Higgins and Makin (2004) found that college students who held favorable attitudes and moral beliefs in support of software piracy were at greater risk of engaging in the behavior. Moreover, college students who associated with software pirates were more at risk of engaging in the behavior themselves (Higgins and Makin 2004). The increased risk connected to association with deviant peers (i.e., differential association) and holding favorable attitudes toward pirating (i.e., definitions) was not unique to this early study; later studies have supported the importance of these risk factors (Hinduja and Ingram 2008; Higgins et al. 2007; Morris and Higgins 2009). As scholars have explored the connection between social learning theory and engagement in digital piracy through the years, other risk factors have gained importance.
For example, research shows that prior engagement in pirating is a risk factor for associating with deviant peers (Wolfe and Higgins 2009), as well as for later cyberdeviance (Higgins et al. 2007). In terms of broadening the idea of differential association, a study by Hinduja and Ingram (2008) found that youth who interacted with pirating peers, whether offline or online, were more at risk of pirating music. Moreover, youth who learned about pirating from online media were also more likely to perpetrate the behavior (Hinduja and Ingram 2008). Findings from a later study by Miller and Morris (2014) supported the notion that differential association can assume various forms in understanding piracy. Even though differential reinforcement and imitation are absent from much of the existing work on digital piracy, research on the piracy subculture underscores the importance of these concepts for understanding this cyberoffense (Holt and Bossler 2014). Skinner and Fream (1997) found differential reinforcement had a negative effect on unauthorized access of online materials, a finding later supported by Holt et al. (2010) in a study of college students' online behavior. Both Skinner and Fream and Holt et al. found imitation to be essential in the social learning process, indicating its importance when examining cyberdeviance. As in the hacking subculture (discussed next), pirates rise in social standing among their peers according to the amount of content they acquire and share with others (Steinmetz and Tunnell 2013). This norm stems from the staunch belief that content should be freely available for all to enjoy rather than a costly commodity (Steinmetz and Tunnell 2013). Considering this, the social pressure to associate with other pirates


and take part in the community by sharing stolen material is crucial for continued access to content (Steinmetz and Tunnell 2013). Thus, these beliefs and norms encourage behavior that – if done well – will result in the acquisition of social capital, which in turn encourages later cybertheft (i.e., positive reinforcement) (Steinmetz and Tunnell 2013). Finally, given the difficulty of apprehending and punishing digital pirates, imitation is likely to occur absent significant deterrence.

Hacking. The term "hacking" is traced back to the 1950s at the Massachusetts Institute of Technology (Marcum 2019). During that time, students involved in a model railroad club spent their time manipulating, hence "hacking," electronic trains. Although the term has since assumed a vastly different meaning, often carrying a negative connotation among laypersons, at that time hacking was a benign activity done simply for fun. Yet, as time passed and technology advanced, "hackers" turned their attention toward mastering computers and networks. Hacking has now become synonymous with cybertrespassing, in which an individual gains unauthorized access to and/or control over a form of technology (Holt 2016). Understanding how hackers organize within this subculture further underscores the applicability of social learning theory to this cybercrime. Unlike offense types where perpetrators work alone (e.g., cyberstalking), hackers vary in organizational form. For instance, some hackers are loners and do not associate or hack with others (Holt 2005). Some hackers organize according to a colleagues framework, meaning that they associate with each other but stop short of offending together (Holt 2009). In contrast, hackers who associate and offend together are organized according to a peers framework. Finally, hackers who work within a team associate and offend together as well as divide duties across members (Holt 2009).
These forms of organization are important to address from a social learning theory perspective because all of them (aside from loners) involve individuals associating with like-minded deviant peers (i.e., differential association, definitions). Thus, by the very nature of how hackers organize, social learning theory is an applicable approach to understanding their behavior. Beyond organization, insight into the ways hackers self-identify further underscores the importance of social learning theory to understanding this cybercrime. Within the hacking subculture, individuals gain respect and status by completing successful exploits that are grounded in their own knowledge and skills (Kinkade et al. 2016). Hackers who rely on scripts and tools created by others to carry out their exploits (i.e., script kiddies) are perceived as inferior to elite hackers who are self-reliant (Kinkade et al. 2016). Individuals in the former group do not carry significant standing within the hacker community, while individuals in the latter group receive substantial deference from others (Kinkade et al. 2016). Considering this, there is palpable social pressure for individuals to engage in exceedingly difficult hacks to increase their standing within the subculture. Individuals who falter in their exploits face ostracism (i.e., differential reinforcement, social punishment), while successful others rise in the social strata (i.e., differential reinforcement, positive reinforcement). A review of the hacking literature supports the earlier statements: association with deviant peers and internalization of deviant definitions are salient risk factors for perpetrating hacking (Holt and Bossler 2014). Indeed, in a thorough review of


cybercrime scholarship, Holt and Bossler noted that association with deviant peers is not unusual among hackers. Association with deviant peers provides hackers the opportunity to learn from others, to show off exploits and thereby earn social capital, and to rationalize their actions with like-minded peers (Holt and Bossler 2014). While there is limited research evaluating the full social learning theory model (i.e., differential association, definitions, differential reinforcement, and imitation), numerous studies have supported the critical importance of differential association as a risk factor for perpetrating various cybercrimes, including hacking (Fox et al. 2016; Higgins 2006; Higgins and Makin 2004; Marcum et al. 2014b; Navarro et al. 2014, 2016; Skinner and Fream 1997). In an early exploration of social learning theory and hacking, Skinner and Fream (1997) found that associating with deviant peers and holding attitudes favorable toward deviance were risk factors for engaging in hacking. Interestingly, they also found that trusted adults could indirectly embolden potential hackers if they were accepting of the behavior. In terms of differential reinforcement (i.e., belief in severe punishment), the results were inconsistent and had bearing only on a minor form of hacking. Finally, the scholars noted that imitation occurred through many conventional (e.g., friends, parents, teachers) and unconventional routes (e.g., bulletin boards) (Skinner and Fream 1997). Contemporary explorations of social learning theory and hacking align with these earlier findings. For example, in a study of the hacking behaviors of juveniles, Marcum and colleagues (2014) found significant support for the relationship between deviant peer association and perpetrating hacking. More specifically, youth who associated with deviant peers were more likely to hack into another individual's e-mail and/or social media account.
Moreover, by associating with deviant peers, youth were also more likely to hack websites (Marcum et al. 2014a, b). Social learning theory is thus not only applicable to understanding cybercrime against property; it also gives insight into why individuals engage in interpersonal cybercrime.

Interpersonal Cybercrime

Cyberbullying. In contrast to the prior section, cyberbullying is not about theft or intrusion. Instead, perpetrators of this offense seek to inflict emotional and psychological harm on another. While much of the research focuses on adolescents, studies have documented cyberbullying among college students (Kowalski et al. 2016) and within the workforce (Coyne et al. 2017). Although there is no universally accepted description of this behavior, Tokunaga (2010, p. 278) defines cyberbullying as follows:

Cyberbullying is any behavior performed through electronic or digital media by individuals or groups that repeatedly communicates hostile or aggressive messages intended to inflict harm or discomfort on others.

Part of the difficulty in defining cyberbullying, in contrast to prior cybercrimes, is that this offense can assume many forms. For example, some scholars require


repetition for actions to be labeled "cyberbullying" (Patchin and Hinduja 2006), while others have not included this feature in the definitions guiding their studies (Juvonen and Gross 2008). Some cyberbullying scholars use broad definitions of offensive behaviors (Juvonen and Gross 2008), while others focus on specific actions (Allen 2012). These variations across studies have resulted in wide ranges in offending and victimization rates (Tokunaga 2010). Criminologists have sought to understand cyberbullying from various theoretical perspectives, such as Agnew's (1992) general strain theory and Cohen and Felson's (1979) routine activities theory. Even though there has been minimal exploration using social learning theory, there is reason to believe this perspective is extremely applicable to various forms of interpersonal cybercrime and, particularly, cyberbullying. To recognize the applicability of this perspective, one simply needs to consider the situational characteristics that are salient to understanding cyberbullying. One of the most pronounced features of cyberbullying is the period of life in which this offense frequently happens: early adolescence. It is during this time that youth undergo intense socialization via the family and the school system as they prepare for adulthood. It is also during this time that peers assume greater influence as socialization agents. Therefore, associating with deviant peers (i.e., differential association) while one is still forming one's own thoughts about correct versus harmful behavior (i.e., definitions) can have a profound impact on pathways to deviance. Moreover, during this period, one's reputation among peers is an extremely important form of social capital. Youth may perform acts of cyberbullying to assert their status or to recover from being bullied themselves (i.e., differential reinforcement) (Compton et al. 2014).
Unfortunately, these factors only encourage the behavior among others (i.e., imitation). To the best of our knowledge, only one study has applied social learning theory to cyberbullying, and that investigation yielded notable results. In a study of cyberbullying among youth, Li and colleagues (2016) found that association with deviant peers and favorable beliefs toward cyberdeviance increased the odds of perpetrating cyberbullying. Although there is still limited follow-up investigation, these results and the reasoning above support further research into the applicability of social learning theory in framing cyberbullying perpetration. Another offense worth exploring in terms of social learning theory, one that may connect to cyberbullying but is also a stand-alone problem, is cybersexual abuse.

Cybersexual Abuse. Cybersexual abuse, like cyberbullying, includes many different behaviors involving a wide range of victims. For example, one type of cybersexual abuse is online child pornography, which involves the acquiring, production, and/or distribution of explicit content of children, whether the depictions involve actual children or digital creations indistinguishable from real life (Lamphere and Pikciunas 2016). Even though the context can be dramatically different, sexting among minors is also technically child pornography and is therefore also a form of cybersexual abuse. A second type of cybersexual abuse is online sexual exploitation, which may involve forcing survivors to perform sexual activities viewable in cyberspace for the perpetrator's benefit. A third type of


cybersexual abuse is referred to as cybersextortion and involves perpetrators using sexually explicit content as blackmail to coerce survivors into performing undesirable actions (e.g., providing more explicit content, sending money). A final type of cybersexual abuse is revenge porn, which refers to the sharing of explicit content without the consent of all individuals depicted (Lamphere and Pikciunas 2016). As in other areas of cybercriminology, scholars have used various theoretical perspectives to understand cybersexual abuse, including the general theory of crime (Gottfredson and Hirschi 1990) and routine activities theory (Cohen and Felson 1979). Although there is sparse investigation into the applicability of social learning theory to cybersexual abuse, given the nature of some of these offenses (particularly those involving children), there is good reason to believe the perspective applies. For instance, research shows that many sexual offenders were abused during childhood themselves (Babchishin et al. 2011), which leads one to consider whether their victimization set them on a pathway to offending in adulthood. It is important to consider whether these offenders' childhood experiences with deviant individuals (i.e., differential association) had lasting negative effects on their own beliefs about proper sexual relations (i.e., definitions) that, upon reinforcement, could help frame their imitation in adulthood. While research has supported some of these statements, particularly the role of deviant association in shaping beliefs (e.g., Burton et al. 2002; Felson and Lane 2009), the investigations are not exclusive to perpetrators of cybersexual abuse. Although there is limited exploration into whether social learning theory frames cybersexual abuse, innovative research has evaluated whether this perspective applies to sexting behavior.
For example, in one study of South Korean youth, Lee and colleagues (2016) found that youth who experienced peer pressure about sexting and held attitudes favorable toward the behavior were more likely to engage in the offense. Moreover, youth with a history of delinquency were also more at risk of engaging (Lee et al. 2016). Although not an expansive test of social learning theory, the significance of associating with deviant peers for sexting behavior also appeared in a similar study by Marcum and colleagues (2014), who found that associating with deviant peers increased the odds of engaging in sexting. Although there is a dearth of investigation into the applicability of social learning theory to other forms of cybersexual abuse, the work outlined here shows the area is ripe for research.

Cyberstalking. Cyberstalking, like offline stalking, refers to an individual repeatedly engaging in a course of conduct that is unwanted and threatening to the other party involved. Unfortunately, with the advancement of the Internet and technology, there is concern that a new type of stalker has appeared: one who can watch victims without detection. While this debate is still unresolved, scholars have tried to understand the mind-set of stalkers in general. Although not widely evaluated, social learning theory has been applied to frame cyberstalking behavior with notable success. In one recent example, Marcum and colleagues (2016) evaluated the role of deviant peer association in cyberstalking perpetration. Unlike many other studies evaluating deviant peer association and engagement in cyberdeviance, Marcum et al. found this factor did not significantly increase the risk of engaging in


cyberstalking. However, these results were inconsistent with a similar study in which deviant peer association did increase the odds of engaging in cyberstalking (Navarro et al. 2016). Particularly interesting in that study was that the presence of an Internet addiction also increased the odds of engaging in cyberstalking (Navarro et al. 2016), which may represent the presence of differential reinforcement. In other words, if a cyberstalker is addicted to the Internet, it makes intuitive sense that using the Internet would act as a reinforcer regardless of the reason for the use. While firm conclusions about the applicability of social learning theory to cyberstalking perpetration are impossible at present, these studies show this line of research is worth exploring.

Summarizing the Applicability of Social Learning Theory

The prior sections support two broad conclusions. First, social learning theory is applicable across various forms of cybercrime, from property cybercrime to interpersonal cybercrime. Although more work is necessary, scholars have shown that this perspective can explain cyberdeviance just as it explains offline deviance. Second, the importance of differential association and definitions is pronounced across studies, but more work is necessary on the other key components of social learning theory: differential reinforcement and imitation. While these investigations could manifest as stand-alone explorations of social learning theory and various forms of cybercrime, we would be remiss if we did not note that many studies have shown this perspective is particularly suited to multi-theoretical investigations. In other words, social learning theory is not only a powerful perspective on its own but is also well suited to investigations alongside control balance theory (e.g., Fox et al. 2016), the general theory of crime (e.g., Higgins et al. 2007; Higgins and Makin 2004; Li et al. 2016; Marcum et al. 2014a, b), and social control theory (e.g., Lee et al. 2016), to name a few. In fact, several studies have shown that social learning theory factors underlie relationships identified by other perspectives, such as that between low self-control and engagement in cyberdeviance (Higgins et al. 2007; Li et al. 2016). Thus, not only is more work necessary in applying social learning theory to various offenses, but this research should also use the perspective in conjunction with other powerful theoretical frameworks.

The Influence of Social Learning Theory on Cyberlaw and Cyberpolicy

As scholars learn more about criminal behavior, both offline and online, this information not only shapes later lines of research but also influences laws and policies designed to deter deviancy. Indeed, the strong relationship of differential association and favorable definitions toward deviance to the risk of engaging in cyberdeviance is most evident in laws and policies targeting digital piracy, which are emphasized below.

26

Deviant Instruction: The Applicability of Social Learning Theory to. . .

539

Cyberlaws and Social Learning Theory: An Examination of Digital Piracy Laws

Overview of Cyberlaws

Cyberlaws surrounding digital piracy, much like those addressing other cybercrimes, developed in a reactive way, depending on how perpetrators were exploiting technology at a given moment in time. For example, one of the earliest laws meant to address computer abuse in general was the Computer Fraud and Abuse Act of 1986 (CFAA). This law criminalized a broad range of offenses, including digital piracy, by punishing behaviors where the perpetrator “knowingly and with the intent to defraud, accesses a protected computer without authorization, or exceeds authorized access. . .and obtains anything of value” (18 U.S. Code §1030). While this law was broad at its inception and allowed for the policing of various offenses, it emphasized criminal activity where commerce could be or was impacted (18 U.S. Code §1030) and did not focus solely on digital piracy. Yet this first step was a clear signal to potential perpetrators that abusing computers to affect commerce carried significant penalties – particularly if there were multiple convictions under the same cyberlaw (18 U.S. Code §1030).

A further move to delineate the bounds of proper versus inappropriate behavior online, particularly regarding digital piracy, came through the No Electronic Theft Act of 1997 (NET). Up until this point, prior efforts had emphasized offenses that generated a profit rather than cybercrimes where there was no monetary gain. With the passage of the NET Act of 1997, however, copyright infringement via the Internet was outlawed regardless of whether a profit was involved (House Bill 2265 1997). The NET Act of 1997 recognized not only that digital piracy was multifaceted in motivation (e.g., nonprofit driven and profit driven), which speaks to perpetrators’ definitions toward deviance, but also that association with deviant peers contributes to the larger problem.
Finally, the most recent effort, although likely not the last, to address digital piracy is the 1998 Digital Millennium Copyright Act (DMCA) (House Bill 2281 1998). Again, as technology continued to evolve, perpetrators continued to discover innovative methods of pirating copyrighted works that were unaccounted for in existing legislation. One such method, circumventing protections attached to copyrighted work, was criminalized by the DMCA (House Bill 2281 1998). As in the NET Act of 1997, the DMCA addressed individual behavior, vis-à-vis punishments for sole possession, as well as behavior that contributed to the larger problem (e.g., creating and distributing circumvention software to others) (House Bill 2281 1998).

Cyberlaws and Social Learning Theory

While the aforementioned cyberlaws emphasize a deterrence theory approach to policing cybercrime, the importance of social learning theory is also readily visible in these statutes. For example, recognizing that differential association is salient in understanding engagement in cybercrime, these laws punish not only isolated individual actions but also behaviors that contribute to the deviancy of others. This is readily seen in the CFAA, where “knowingly transmitting information” and password trafficking, presumably to other deviant peers, are severely punished (18 U.S. Code §1030). Likewise, the impetus of the NET Act was to address pirates circumventing existing law by claiming their actions were not financially motivated (House Bill 2265 1997). Thus, to disrupt peer networks that were flagrantly violating copyright protections, legislators made clear that piracy was illegal regardless of whether perpetrators financially profited (House Bill 2265 1997).

The second concept of social learning theory, definitions, is present in the mere outlawing of these actions through federal and state statutes. Put another way, cyberlaws represent a formal declaration by a society’s constituents that a particular behavior is unethical, immoral, and illegal. These declarations then guide and shape individuals’ beliefs about socially acceptable behavior, which in turn influence their actions within their peer networks and the larger society. Therefore, by outlawing certain behaviors in cyberspace, cyberlaws act to challenge definitions favorable toward deviance both at an individual level and within groups.

The third concept of social learning theory, differential reinforcement, is also clearly seen in the punishments for engaging in deviance, which again address isolated individual behavior as well as actions that contribute to deviant peer groups. For example, in the NET Act, punishment quickly escalates for offenders who distribute multiple copies of protected work, which implies multiple interactions with like-minded peers (House Bill 2265 1997). Likewise, under the CFAA, offenders face severe penalties for offenses that involve conspiracy or the furtherance of other criminal activity (18 U.S. Code §1030). In other words, punishments address direct actions by individuals as well as behaviors that contribute to the deviancy of others.
For example, the founders of The Pirate Bay were incarcerated as a result of their facilitation of copyright infringement (Associated Press 2014). Considering these provisions, it is clear that while one objective of cyberlaws is to deter deviancy through the threat of punishment, they also have a broader mission: to prevent piracy by disrupting the process by which individuals learn to pirate.

Cyberpolicies and Social Learning Theory

In order to combat digital piracy, various efforts have attempted to challenge harmful definitions that favor deviance. One such effort, which continues to this day at both the federal and state levels, emphasizes the human costs associated with piracy. It involves stressing the illegality of the action (i.e., challenging definitions) as well as reiterating the sheer amount of employment and wages lost because of widespread pirating (Cardwell 2006). For example, in 2007, New York launched a “piracy is not a victimless crime” public awareness campaign that included stakeholders from across the country (Cardwell 2006). That campaign involved displaying messages about these topics ahead of movies and television shows (Goldstein 2007), which were, again, designed to challenge favorable attitudes toward piracy documented through research (e.g., by emphasizing that piracy is not “victimless” and that real people lose their jobs and wages as a result). By challenging these attitudes, the ultimate goal was to change individual thinking, which would then reverberate through peer networks.


Aside from raising awareness among laypersons, interested stakeholders also sought to combat piracy by directly pursuing distributors of protected content. The Copyright Alert System (CAS), also known as the “six strikes” program, was an agreement between multiple Internet service providers (ISPs) and entertainment companies (Electronic Frontier Foundation 2017). The companies involved with CAS monitored their customers’ peer-to-peer network traffic, identifying potential copyright infringement. A series of graduated penalties for infringement included warning notices, slowed connection speeds, and temporary restriction of web access. With this penalty system, members of the CAS hoped to reduce pirating behaviors by directly punishing offenders, as well as through a process of social learning among the general public. Initial warning notices provided educational material on piracy, its effects on consumers and producers, and the associated criminal penalties. The hope was that recipients of these notices would share this information with peers and in turn reduce the amount of pirating behavior.

CAS launched in 2013 and immediately experienced backlash (Electronic Frontier Foundation 2017). Public input was not requested before the agreement was created, and users felt that their interests and privacy were not protected. There were even accusations of bias in the formatting of the educational warning notices sent to infringers. In addition, the actual usefulness of CAS as a deterrent was criticized, as many repeat offenders only received threats of losing ISP service. CAS disbanded in January 2017, though its members asserted a continued commitment to cooperative efforts to address digital piracy. Entertainment agencies have since pressed for increased enforcement of the Digital Millennium Copyright Act against repeat infringers (Electronic Frontier Foundation 2017).
Not long after CAS dissolved, a group of 30 entertainment companies formed an international coalition to continue the fight against piracy by conducting research, working with law enforcement, and filing civil lawsuits. The Alliance for Creativity and Entertainment (ACE) includes major companies such as Netflix, Disney, Amazon, Warner Brothers, and HBO. The banding together of these corporations around one common goal is important, as it demonstrates the group’s reach and its potential to attract additional members. Since the creation of ACE, civil lawsuits have been filed against services such as Tickbox, Setvnow, and Dragon Box for providing illegal access to copyrighted materials (Alliance for Creativity and Entertainment 2018).

Social learning techniques have also been applied to combat cyberbullying. In an environment where individuals feel less restricted in their expression of words, behaviors, and emotions, some online users (especially younger generations) feel empowered to participate in bullying behaviors. Olweus (1993) asserted that education was the key to bullying prevention and promoted the “whole-school” method, which entails educating and training all individuals who interact with potential participants in bullying (e.g., students, educators, staff, and parents). In addition, research has indicated that it is not only the adults who interact with young people who should be trained to manage bullying situations: there is benefit to training students of the same age, as young people often identify better with peers than with older adults (Englander 2012).

An example of a successful training program is offered by the Massachusetts Aggression Reduction Center (MARC) (Englander et al. 2015). Faculty are trained at their school sites through in-service sessions, the train-the-trainer model, and interactive training on how students and adults perceive bullying. Over 80% of the faculty felt that the training was worthwhile and reported positive outcomes with regard to bullying and cyberbullying education, addressing conflicts, and recognition of warning signs. After the training workshop, more respondents indicated a willingness to discuss bullying with all parties involved in the training. There was also better comprehension of online privacy and loss of control over content, and a greater appreciation of the effects of online bullying.

Conclusion

Several criminological theories have been utilized to explain various forms of cybercrime, producing multiple studies that have provided valid predictors of offending and victimization. As evidenced above, social learning theory can be considered one of the more useful theories for explaining cybercrime. It has provided sound predictors of the commission of cybercrime against digital property, as well as personal crimes such as cyberstalking and cyberbullying. Findings from these studies, along with consistent media coverage, have enlightened the public to the true dangers of cybercrime and its direct and indirect effects. The continued empirical support for social learning theory across all forms of criminality demonstrates its usefulness for explaining new methods of online offending as they emerge.

However, there is a clear gap in the literature regarding online and offline peer relationship dynamics. Do offline peers have more influence on cyberdeviance than peers met online? Are the behaviors of peers who are “friends” both online and offline more influential than those in a relationship that has only one facet? Furthermore, the role of imitation needs further exploration, especially with regard to seeking peer approval online and offline.

Empirical research based on social learning theory, and the push for change it has supported, has fueled the passage of laws to address cybercrime criminally and civilly. Large corporations are banding together to pool resources to combat copyright infringement. Private entities are providing programming to educate teachers, parents, and students on the frequency and tactics of personal cybercrimes, as well as their potential long-term repercussions.
It is our belief that social learning theory will continue to prove useful – both to criminologists seeking to understand the origins of crime and to others seeking to combat these offenses in both the private and public arenas – for many years to come.

Cross-References

▶ Applying the Techniques of Neutralization to the Study of Cybercrime
▶ Critical Criminology and Cybercrime
▶ Deterrence in Cyberspace: An Interdisciplinary Review of the Empirical Literature


▶ Environmental Criminology and Cybercrime: Shifting Focus from the Wine to the Bottles
▶ Feminist Theories in Criminology and the Application to Cybercrimes
▶ General Strain Theory and Cybercrime
▶ Routine Activities
▶ Subcultural Theories of Crime
▶ The General Theory of Crime

References

18 U.S. Code § 1030. Retrieved from https://www.law.cornell.edu/uscode/text/18/1030
Agnew, R. (1992). Foundation for a general strain theory of crime and delinquency. Criminology, 30(1), 47–88.
Akers, R. L. (1977). Deviant behavior: A social learning approach. Belmont: Wadsworth.
Akers, R. L. (1985). Deviant behavior: A social learning approach. Belmont: Wadsworth.
Akers, R. L. (1998). Social learning and social structure: A general theory of crime and deviance. Boston: Northeastern University Press.
Akers, R. L. (2008). Self-control, social learning, and positivistic theory of crime. In E. Goode (Ed.), Out of control: Assessing the general theory of crime (pp. 1–55). Stanford: Stanford University Press.
Akers, R. L., & Sellers, C. S. (2009). Criminological theories: Introduction, evaluation, and application (5th ed.). Los Angeles: Roxbury Publishing.
Allen, K. P. (2012). Off the radar and ubiquitous: Text messaging and its relationship to ‘drama’ and cyberbullying in an affluent, academically rigorous US high school. Journal of Youth Studies, 15(1), 99–117.
Alliance for Creativity and Entertainment. (2018). Alliance for creativity and entertainment: News. Retrieved from https://www.alliance4creativity.com/news/
Associated Press. (2014, November 4). All three Pirate Bay founders now in jail. Billboard. Retrieved from https://www.billboard.com/articles/business/6304466/pirate-bay-founders-jailhans-fredrik-lennart-neij
Babchishin, K. M., Karl Hanson, R., & Hermann, C. A. (2011). The characteristics of online sex offenders: A meta-analysis. Sexual Abuse, 23(1), 92–123.
Bandura, A. (1964). Social learning and personality development. New York: Holt, Rinehart and Winston.
Bandura, A. (1969). Principles of behavior modification. New York: Holt, Rinehart and Winston.
Bandura, A. (1976). Social learning theory. Englewood Cliffs: Prentice-Hall.
Burgess, R., & Akers, R. (1966). A differential association-reinforcement theory of criminal behavior. Social Problems, 14(2), 128–147.
Burton, D. L., Miller, D. L., & Shill, C. T. (2002). A social learning theory comparison of the sexual victimization of adolescent sexual offenders and nonsexual offending male delinquents. Child Abuse & Neglect, 26(9), 893–907.
Cardwell, D. (2006, October 24). Mayor vows to combat video piracy. The New York Times. Retrieved from https://www.nytimes.com/2006/10/24/nyregion/mayor-vows-to-combat-videopiracy.html
Cohen, L., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44, 588–608.
Compton, L., Campbell, M. A., & Mergler, A. (2014). Teacher, parent and student perceptions of the motives of cyberbullies. Social Psychology of Education, 17(3), 383–400.
Coyne, I., Farley, S., Axtell, C., Sprigg, C., Best, L., & Kwok, O. (2017). Understanding the relationship between experiencing workplace cyberbullying, employee mental strain and job satisfaction: A disempowerment approach. The International Journal of Human Resource Management, 28(7), 945–972.

544

J. N. Navarro and C. D. Marcum

di Beccaria, C. B. (1872). An essay on crimes and punishments. By the Marquis Beccaria of Milan. With a commentary by M. de Voltaire (A new edition corrected). Albany: W.C. Little & Co.
Electronic Frontier Foundation. (2017, February 6). It’s the end of the Copyright Alert System (as we know it). Retrieved from https://www.eff.org/deeplinks/2017/02/its-end-copyright-alert-systemwe-know-it
Englander, E. (2012). Bullying and cyberbullying in teens: Clinical factors. The American Academy of Child and Adolescent Psychiatry. Retrieved from https://aacap.confex/com/aacap/2012/webprogram/Session8538.html
Englander, E., Parti, K., & McCoy, M. (2015). Evaluation of a university-based bullying and cyberbullying prevention program. Journal of Modern Education Review, 5(10), 937–950.
Felson, R. B., & Lane, K. J. (2009). Social learning, sexual and physical abuse, and adult crime. Aggressive Behavior: Official Journal of the International Society for Research on Aggression, 35(6), 489–501.
Fox, K. A., Nobles, M. R., & Fisher, B. S. (2016). A multi-theoretical framework to assess gendered stalking victimization: The utility of self-control, social learning, and control balance theories. Justice Quarterly, 33(2), 319–347.
Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Palo Alto: Stanford University Press.
Higgins, G. E. (2006). Gender differences in software piracy: The mediating roles of self-control theory and social learning theory. Journal of Economic Crime Management, 4(1), 1–30.
Higgins, G. E., & Makin, D. A. (2004). Does social learning theory condition the effects of low self-control on college students’ software piracy. Journal of Economic Crime Management, 2(2), 1–22.
Higgins, G. E., & Marcum, C. D. (2016). Theories of crime. New York: Willan Publishing.
Higgins, G. E., Fell, B. D., & Wilson, A. L. (2007). Low self-control and social learning in understanding students’ intentions to pirate movies in the United States. Social Science Computer Review, 25(3), 339–357.
Hinduja, S., & Ingram, J. R. (2008). Self-control and ethical beliefs on the social learning of intellectual property theft. Western Criminology Review, 9, 52–72.
Holt, T. J. (2005). Hacks, cracks, and crime: An examination of the subculture and social organization of computer hackers (Unpublished doctoral dissertation). St. Louis: University of Missouri. Retrieved from https://irl.umsl.edu/dissertation/61
Holt, T. J. (2016). Crime online: Correlates, causes, and context. In T. J. Holt (Ed.), Crime online: Correlates, causes, and context. Durham: Carolina Academic Press.
Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35, 20–40.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2010). Social learning and cyberdeviance: Examining the importance of a full social learning model in the virtual world. Journal of Crime and Justice, 33(2), 31–61.
Juvonen, J., & Gross, E. F. (2008). Extending the school grounds? – Bullying experiences in cyberspace. Journal of School Health, 78(9), 496–505.
Kinkade, P. T., Bachmann, M., & Smith-Bachmann, B. (2016). Hacker Woodstock: Observations on an offline cyber culture at the Chaos Communication Camp 2011. In T. J. Holt (Ed.), Crime online: Correlates, causes, and context. Durham: Carolina Academic Press.
Kowalski, R. M., Morgan, C. A., Drake-Lavelle, K., & Allison, B. (2016). Cyberbullying among college students with disabilities. Computers in Human Behavior, 57, 416–427.
Lamphere, R. D., & Pikciunas, K. T. (2016). Sexting, sextortion, and other Internet sexual offenses. In J. N. Navarro, S. L. Clevenger, & C. D. Marcum (Eds.), The intersection between intimate partner abuse, technology, and cybercrime: Examining the virtual enemy (pp. 141–165). Durham: Carolina Academic Press.
Lee, C. H., Moak, S., & Walker, J. T. (2016). Effects of self-control, social control, and social learning on sexting behavior among South Korean youths. Youth & Society, 48(2), 242–264.


Li, C., Holt, T. J., Bossler, A. M., & May, D. C. (2016). Examining the mediating effects of social learning on the low self-control–cyberbullying relationship in a youth sample. Deviant Behavior, 37(2), 126–138.
Lombroso, C. (2006). Criminal man: Edition 5. In Criminal man (pp. 299–356) (M. Gibson & N. H. Rafter, Eds. & Trans.). Durham: Duke University Press.
Marcum, C. D. (2019). Cyber crime (2nd ed.). New York: Wolters Kluwer Law & Business.
Marcum, C. D., Higgins, G. E., & Ricketts, M. L. (2014a). Sexting behaviors among adolescents in rural North Carolina: A theoretical examination of low self-control and deviant peer association. International Journal of Cyber Criminology, 8(2), 68–78.
Marcum, C. D., Higgins, G. E., Ricketts, M. L., & Wolfe, S. E. (2014b). Hacking in high school: Cybercrime perpetration by juveniles. Deviant Behavior, 35(7), 581–591.
Marcum, C. D., Higgins, G. E., & Poff, B. (2016). Exploratory investigation on theoretical predictors of the electronic leash. Computers in Human Behavior, 61, 213–218.
Merton, R. (1938). Social structure and anomie. American Sociological Review, 3(5), 672–682.
Miller, B., & Morris, R. G. (2014). Virtual peer effects in social learning theory. Crime and Delinquency, 62(12), 1543–1569.
Morris, R. G., & Higgins, G. E. (2009). Neutralizing potential and self-reported digital piracy: A multi-theoretical exploration among college undergraduates. Criminal Justice Review, 34, 173–195.
Navarro, J. N., Marcum, C. D., Higgins, G. E., & Ricketts, M. L. (2014). Addicted to pillaging in cyberspace: Investigating the role of internet addiction in digital piracy. Computers in Human Behavior, 37, 101–106.
Navarro, J. N., Marcum, C. D., Higgins, G. E., & Ricketts, M. L. (2016). Addicted to the thrill of the virtual hunt: Examining the effects of Internet addiction on the cyberstalking behaviors of juveniles. Deviant Behavior, 37(8), 893–903.
Nhan, J. (2016). The evolution of online piracy: Challenge and response. In T. J. Holt (Ed.), Crime online: Correlates, causes, and context. Durham: Carolina Academic Press.
No Electronic Theft Act of 1997, H.R. 2265, 105th Cong. (1997).
Olweus, D. (1993). Bullying at school: What we know and what we can do. Oxford: Blackwell.
Patchin, J. W., & Hinduja, S. (2006). Bullies move beyond the schoolyard: A preliminary look at cyberbullying. Youth Violence and Juvenile Justice, 4, 148–169.
Pavlov, I. (1927). Conditioned reflexes. Oxford: Oxford University Press.
Shaw, C. R., & McKay, H. D. (1942). Juvenile delinquency and urban areas: A study of rates of delinquents in relation to differential characteristics of local communities in American cities. Chicago: University of Chicago Press.
Skinner, B. F. (1953). Science and human behavior. New York: Simon and Schuster.
Skinner, W. F., & Fream, A. M. (1997). A social learning theory analysis of computer crime among college students. Journal of Research in Crime and Delinquency, 34(4), 495–518.
Steinmetz, K. F., & Tunnell, K. D. (2013). Under the pixelated jolly roger: A study of on-line pirates. Deviant Behavior, 34(1), 53–67.
Sutherland, E. (1939). Principles of criminology (3rd ed.). Philadelphia: Lippincott.
Sutherland, E. (1947). Principles of criminology (4th ed.). Philadelphia: Lippincott.
Tokunaga, R. S. (2010). Following you home from school: A critical review and synthesis of research on cyberbullying victimization. Computers in Human Behavior, 26(3), 277–287.
Wolfe, S., & Higgins, G. (2009). Explaining deviant peer associations: An examination of low self-control, ethical predispositions, definitions and digital piracy. Western Criminology Review, 10(1), 43–55.

Applying the Techniques of Neutralization to the Study of Cybercrime

27

Russell Brewer, Sarah Fox, and Caitlan Miller

Contents

Introduction .......................................................................... 548
Assessing the Techniques of Neutralization as a Singular Construct .................... 550
Assessing the Techniques of Neutralization as Distinct Constructs ..................... 551
  Denial of Responsibility ............................................................ 551
  Denial of Injury .................................................................... 553
  Denial of Victim .................................................................... 555
  Appeal to Higher Loyalties .......................................................... 556
  Condemning the Condemners ........................................................... 557
  “Other” Supplementary Techniques of Neutralization .................................. 559
Conclusions and Reflections ........................................................... 560
Cross-References ...................................................................... 562
References ............................................................................ 563

Abstract

Cybercrime scholars have used a wide range of criminological theories to understand crime and deviance within digital contexts. Among the most frequently cited theoretical frameworks used in this space has been the techniques of neutralization, first proposed by Gresham Sykes and David Matza. This body of work has demonstrated the myriad ways that individual cyber-delinquents have applied the techniques of neutralization as a justification for their deviance. A thorough review of this research reveals decidedly mixed support for neutralization theory. This chapter provides an in-depth review of these studies and seeks to account for this mixed result. This is done by chronicling the methodological underpinnings of this work, and in doing so highlights the challenges facing this literature with respect to the conceptualization and measurement of Sykes and Matza’s theory in the cyber realm. This is accomplished in two parts. First, we review the body of literature that analyzes the techniques of neutralization as a single combined construct (i.e., items are combined to produce a single measure of neutralization), and flag some of the advantages, but also pitfalls, of this approach. Second, we review the treatment of individual techniques of neutralization as distinct constructs within the literature (i.e., a technique is measured and analyzed separate to others) and detail some of the common methodological hurdles encountered by researchers. The chapter concludes by elaborating on persistent gaps or challenges posed in making such assessments and proposes a path forward for future cybercrime research incorporating this framework.

Keywords

Techniques of neutralization · Sykes and Matza

R. Brewer (*) · S. Fox · C. Miller
University of Adelaide, Adelaide, SA, Australia
e-mail: [email protected]; [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_22

Introduction

Cybercrime scholars have used a wide range of criminological theories to understand crime and deviance within digital contexts. Among the most frequently cited theoretical frameworks used in this space has been the techniques of neutralization, first proposed by Gresham Sykes and David Matza (1957). In this seminal article, Sykes and Matza set out to build upon the extant literature espousing subcultural theories of delinquency, which embrace the notion that delinquent individuals reject law-abiding norms and values and instead adopt an alternative set of norms under which delinquency may be permissible. Importantly, Sykes and Matza were critical of this work as being oversimplified, suggesting instead that delinquent transgressions may not be a product of being “thoroughly socialized into an alternate way of life.” Rather, they argued that “the individual can avoid moral culpability for his [sic] criminal action and thus avoid the negative sanctions of society if he [sic] can prove that criminal intent was lacking” (p. 666). Moreover, they contend that “much delinquency is based on what is essentially an unrecognized extension of defences to crimes in the form of justifications for deviance that are seen as valid by the delinquent but not by the legal system or society at large” (p. 666).

In other words, Sykes and Matza propose that a delinquent is able to justify their actions in a way that allows them to temporarily “drift” away from mainstream, law-abiding norms and values and engage in delinquent behavior without necessarily rejecting such values entirely. In later work, Matza (1964) provides further specification of this process, arguing that “drift makes delinquency possible or permissible by temporarily removing the restraints that ordinarily control members of society” (Matza 1964, p. 181).
More recently, scholars have explored the applicability of this theoretical work in digital settings, proposing a revised concept of “digital drift” to account for the unique role that technology can also play in loosening said norms and values, thereby permitting individuals to engage and disengage from delinquency within online contexts (see further, Brewer et al. 2018; Goldsmith and Brewer 2015; Holt et al. 2019).


Matza and colleagues (Matza 1964; Sykes and Matza 1957) suggest that, at its heart, said drift is made possible through an individual’s use of a set of rationalizations or justifications termed the “techniques of neutralization,” which provide the individual the ability to distance themselves from their actions and the moral restraints of society (Matza 1964). These rationalizations are fivefold.

The first is the denial of responsibility, whereby the individual views their own delinquent actions as not being their own fault, but rather accidental or “due to forces outside of [their] control” (Sykes and Matza 1957, p. 667). The second justification is a denial of injury, where a delinquent may consider their actions to be victimless and therefore question whether “anyone has clearly been hurt by his [sic] deviance” (p. 667). A third justification is the denial of the victim, whereby a delinquent views the victim of their actions to have either “had it coming to them” or been something that the victim “was asking for.” In this sense, the injury sustained was “not really an injury, rather...a form of rightful retaliation or punishment” (p. 668). A fourth justification is the condemnation of the condemners, whereby the offender shifts the blame from their “own deviant acts to the motives and behaviors of those who disapprove of [their] violations” (e.g., authorities, parents, teachers, etc.) (p. 668). The fifth and final justification proposed by Sykes and Matza (1957) is the appeal to higher loyalties, wherein the delinquent places the needs of the smaller group to which they belong (e.g., peer group, family, etc.) above and beyond “the demands of the larger society” (p. 669).

We note that since the original 1957 publication, various scholars have sought to supplement these original techniques of neutralization and develop an extensive, and in some cases context-specific, list of a priori neutralizations.
R. Brewer et al.

Some of the more common and oft-cited supplementary neutralizations include the metaphor of ledgers (Klockars 1974), the defense of necessity (Minor 1981), the denial of negative intent, the claim of relative acceptability (Henry 1990), the claim of entitlement, the claim of normalcy (Coleman 1985), and the justification by comparison and postponement (Cromwell and Thurman 2003), among others. While these concepts have appeared across some subsequent research studies, their inclusion remains relatively scattershot (especially in the cybercrime literature), achieving neither uniform nor widespread acceptance relative to the original five techniques proposed by Sykes and Matza (1957). As such, this chapter focuses primarily upon the original and widely accepted techniques, with a necessarily more limited discussion of these supplementary forms where appropriate.

As alluded to above, the framework proposed by Sykes and Matza has, since its publication, been celebrated by criminologists and applied widely across various contexts. In particular, this literature has demonstrated the myriad ways that individual cyber-delinquents have applied the techniques of neutralization as an explanation for their transgressions. Such research has found evidence supporting a relationship between neutralization and various forms of cybercrime and cyberdelinquency, including but not limited to piracy (e.g., Hinduja 2007; Holt and Copes 2010; Ingram and Hinduja 2008), soliciting child exploitation material (CEM) (e.g., Durkin and Bryant 1999; O'Halloran and Quayle 2010), hacking (e.g., Chua and Holt 2016; Goode and Cruise 2006; Harrington 1996; Hutchings 2013; Hutchings and Clayton 2016), sexting (e.g., Renfrow and Rollo 2014), bullying (e.g., Zhang et al. 2016; Lowry et al. 2016; Zhang and Leidner 2018), cyber-hate and cyber-bullying (e.g., Hwang et al. 2016; Vysotsky and McCarthy 2017), and zoophilia (e.g., Maratea 2011). A thorough review of this research reveals decidedly mixed support for neutralization theory within the cybercrime literature.

This chapter provides an in-depth review of these studies and seeks to account for these mixed results. It does so by chronicling the methodological underpinnings of this work, and in so doing highlights the challenges facing this literature with respect to the conceptualization and measurement of Sykes and Matza's theory in the cyber realm. This is accomplished in two parts. First, we review the body of literature that analyzes the techniques of neutralization as a single, combined construct (i.e., various items are combined to produce a single measure of neutralization), and flag some of the advantages, but also pitfalls, of this approach. Second, we review the treatment of individual techniques of neutralization as distinct constructs within the literature (i.e., a technique is measured and analyzed separately from the others), and detail some of the common methodological hurdles encountered by researchers. The chapter concludes by elaborating on persistent gaps and challenges posed in making such assessments and proposes a path forward for future cybercrime research incorporating this framework.

Assessing the Techniques of Neutralization as a Singular Construct

There is an abundance of studies that have sought to assess the techniques of neutralization as a singular construct. This body of work tends to be quantitative in nature and demonstrates statistically significant relationships between a combined neutralization construct and increased involvement in various forms of cybercrime (whether measured through retrospective accounts, future intentions to commit, or scenario-based exercises), including hacking (Morris 2011), cyber-hate and cyber-bullying (Hwang et al. 2016; Lowry et al. 2016; Zhang et al. 2016; Zhang and Leidner 2018), and piracy (Brunton-Smith and McCarthy 2016; Higgins et al. 2008; Kos Koklic et al. 2016; Morris and Higgins 2009; Thongmak 2017; Vida et al. 2012; Yu 2012, 2013). This is not to say, however, that these studies, considered together, contribute to a homogenous finding or a broader narrative supportive of Sykes and Matza's concepts. Rather, there is considerable variation with respect to the relative strength of the relationships reported, from being flagged as the most important variable within a given multivariate analysis (e.g., Hwang et al. 2016; Morris and Higgins 2009; Thongmak 2017; Yu 2012) to being considered less central but nevertheless significant (e.g., Brunton-Smith and McCarthy 2016; Morris 2011). While such variation may be partly explained by the competing (and potentially unique) independent and dependent variables contained within the models presented across these studies, there are also several significant conceptual differences that make meaningful comparisons regarding such combined
“neutralization” constructs problematic. First, several studies (e.g., Brunton-Smith and McCarthy 2016; Morris 2011; Zhang et al. 2016; Zhang and Leidner 2018) have elected to operationalize their neutralization measure in a way that does not directly capture the five techniques of neutralization (in some cases doing so with a reduced number of items). As such, one could argue that the scales derived therefrom do not offer the same degree of depth and are therefore conceptually incompatible. Where there is overlap (including those studies that incorporate all five techniques), there is also a lack of uniformity with respect to the mix or breadth of items used within each study to measure each technique. For example, some may use a single item, where others use three or more. Elsewhere, some include items that appear to be rooted in the broader neutralization literature, whereas others construct new items that do not seem to have any such affiliation. The degree of such divergence presents significant epistemological challenges that make drawing definitive conclusions about this work problematic. These implications are fleshed out with more specificity below through a detailed discussion of discrete neutralization constructs.
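As a purely illustrative aside, the "singular construct" approach described above can be sketched in a few lines: Likert-type item responses are collapsed into one composite neutralization score per respondent, with internal consistency checked via Cronbach's alpha. All item contents, responses, and the five-item scale composition below are invented for demonstration and correspond to none of the instruments reviewed in this chapter.

```python
# Hypothetical composite-scale construction for a combined "neutralization"
# measure. Each row is one respondent's agreement (1-5 Likert) with five
# items, e.g., one item per original technique.
from statistics import mean, variance

responses = [
    [4, 5, 4, 3, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 4, 5],
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
]

def composite_scores(rows):
    """Collapse the items into one score per respondent (the 'singular
    construct' approach critiqued in the text)."""
    return [mean(row) for row in rows]

def cronbach_alpha(rows):
    """Internal-consistency (reliability) estimate for the combined scale,
    using sample variances: alpha = k/(k-1) * (1 - sum(item vars)/total var)."""
    k = len(rows[0])
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = composite_scores(responses)
alpha = cronbach_alpha(responses)
```

The critique above follows directly from this procedure: two studies can both report "a neutralization scale" built this way while sampling entirely different items, so their composite scores remain incommensurable even when each scale's reliability appears high.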

Assessing the Techniques of Neutralization as Distinct Constructs

Numerous studies have gone beyond measuring neutralization as a single construct to examine each of the techniques of neutralization as discrete constructs. That is, this work has analyzed each specific technique separately from the others. This section explores the body of work relating to each technique, and elaborates upon and critically examines some of the stark differences that have emerged both between and among these discrete constructs.

Denial of Responsibility

As the first of Sykes and Matza's (1957) original techniques, denial of responsibility enables offenders to shift the blame for their harmful online behavior by claiming that their acts were the result of forces beyond their control. As such, disapproval, whether from the individual themselves or from others, does not act as a restraint on deviant actions. The use of this technique to justify participation in cybercrime has been examined by a growing body of cybercrime research. Nevertheless, the findings from this research appear to be mixed. Studies have demonstrated that cyber offenders deny responsibility to rationalize their illicit behaviors online. This is most common in digital piracy (Brown 2016; Harris and Dumas 2009; Ingram and Hinduja 2008; MacNeill 2017; Ulsperger et al. 2010), but is also applicable to CEM use and solicitation (Kettleborough and Merdian 2017), cyber-hate (Vysotsky and McCarthy 2017), and hacking (Chua and Holt 2016; Hutchings and Clayton 2016). Harrington's (1996) study found strong support for the relationship between denial of responsibility and perpetration of cyber-dependent crimes in the workplace such as viruses, hacking, fraud, software
piracy, and corporate sabotage. Other studies found some, albeit limited, support, whereby cyber offenders denied responsibility for some reasons but not others (Bryan 2014; Chua and Holt 2016; Holt and Copes 2010). For example, Bryan's (2014) survey into digital piracy found that digital pirates were unlikely to justify their behaviors by reference to copyright laws, instead opting to blame the cost of the product. Similarly, Holt and Copes' (2010) qualitative study into digital piracy subcultures found that younger participants were the only cohort to blame unclear copyright laws for their piracy. Alternatively, Steinmetz and Tunnell (2013) found that pirates denied responsibility as a way to avoid legal consequences rather than as a moral justification for their behavior. In contrast, mounting evidence suggests that denial of responsibility may not be useful for explaining cyber offending. For example, Siponen and colleagues (2012) found that denial of responsibility was not a significant predictor of the intention to pirate software. Furthermore, Goode and Cruise's (2006) mixed-methods study into software cracking found that crackers took full responsibility for their offending. Results from Hutchings' (2013) qualitative study into hacking indicated that denial of harm, appeals to a common good, and condemnation of the government were used as justifications for offending; however, denial of responsibility was not. Content analyses of sexually deviant web forums also found a lack of support, with only rare instances of this technique identified (Durkin and Bryant 1999; O'Halloran and Quayle 2010). Interestingly, one study into digital piracy found that participants who denied responsibility were less willing to engage in music and movie piracy (Smallridge and Roberts 2013). It is clear that this scholarship is divided about how this technique applies to cybercrime.
An examination of quantitative studies demonstrates that definitions of denial of responsibility vary considerably, which likely contributes to the inconsistent, and sometimes conflicting, results. In order for perpetrators to deny responsibility, an alternative reason or excuse for their behavior is required. Often these reasons are diverse and vary depending on the context of the cybercrime. For example, there appears to be some agreement that pirates deny responsibility by placing the blame on their financial state, the cost of the legal product, the online availability of the legal or illegal product, and the lack of awareness or clarity of copyright infringement laws (see Bryan 2014; Hinduja 2007; Ingram and Hinduja 2008; MacNeill 2017). However, few studies actually seek to incorporate and measure this wide variety of rationalizations. As far as we know, just two studies have comprehensively examined denial of responsibility. In his study of online pirates, Hinduja (2007) used 11 items to measure denial of responsibility relating to finances, online availability, lifestyle, and time constraints, and found that this technique was not significantly associated with software piracy. Meanwhile, Harrington's (1996) study into workplace computer crime used a denial of responsibility scale consisting of 28 different items. However, this scale captured general denial of responsibility in the workplace rather than justifications directly relating to computer crime. Elsewhere, the limited number of items means that the measures are often too narrow (see Siponen et al. 2012), making it difficult to determine the applicability of this technique.

Elsewhere, it is believed that this technique may not be compatible with certain types of cyber offending. This is especially the case with cyber-sexual offending (Durkin and Bryant 1999) but has also been suggested of certain types of hacking (Hutchings and Clayton 2016). In explaining the lack of empirical support for the denial of responsibility, O'Halloran and Quayle (2010) state that "an excuse is a type of account in which the individual admits that behavior is wrong but denies being fully responsible for it. Because the paedophiles on this website seek to assert that sexual contact with children is not wrong, excuses are counterproductive to their objectives and so are not employed" (p. 81). Similar sentiments are expressed by others in their studies of hacking, sexting, and zoophilia communities online (Hutchings and Clayton 2016; Maratea 2011; Renfrow and Rollo 2014). The qualitative nature of these studies suggests that this type of methodology is useful for determining which techniques are applicable to different forms of cybercrime. However, it is worth noting that some studies have purposefully omitted this technique from their measurement frameworks (e.g., D'Ovidio et al. 2009). It is unclear whether this specific technique was perceived as not applicable or whether there were alternative explanations for this omission.

Denial of Injury

Denial of injury refers to the process by which individuals justify their behavior as morally acceptable because of the perception that it does not directly cause harm (Sykes and Matza 1957). According to their original conception, Sykes and Matza (1957) state that in denying injury to others, individuals reject existing social controls while simultaneously protecting their self-image. It is this combination that enables individuals to drift into delinquency. In cybercrime research, this is particularly salient, as the anonymity and distance the internet affords make it easier for the offender to suggest that no harm has occurred. Indeed, the findings of existing research reviewed below demonstrate support for the denial of injury, with few exceptions. Promising results show that offenders deny that harm occurs from their participation in various types of cybercrime. This is especially applicable to piracy, as denial of injury was found to be a common way for music and movie pirates to neutralize their actions (see Brown 2016; Harris and Dumas 2009; Hinduja 2007; Holt and Copes 2010; Holt and Morris 2009; Ingram and Hinduja 2008; MacNeill 2017; Moore and McMullan 2009; Steinmetz and Tunnell 2013; Ulsperger et al. 2010; Vandiver et al. 2012). Cyber-sexual offenders, including those accessing CEM, also justified their behaviors by stating that their online and offline sexual interactions do not cause children or animals any physical or psychological harm (Durkin and Bryant 1999; D'Ovidio et al. 2009; Kettleborough and Merdian 2017; Maratea 2011; O'Halloran and Quayle 2010). Meanwhile, research into hackers and fraudsters also supports the relationship between denial of injury and their cyber offending (Chua and Holt 2016; Hutchings and Clayton 2016). In one of these studies, the hackers proclaimed that no harm was experienced by individual victims
because of reimbursement from banking institutions (Hutchings and Clayton 2016). From their content analysis of a hate forum, Vysotsky and McCarthy (2017) also concluded that white supremacists deny injury by dehumanizing people who are considered "non-white." However, this conclusion is made in passing, as the research focused more on the framing of narratives than on the application of neutralization techniques. A small body of research suggests that some offenders feel uncertain about the injury that occurs from cybercrime. Bryan's (2014) quantitative study into music piracy reported that over half of the participants agreed that the industry could afford a loss but did not agree that piracy is harmless, ultimately leaning toward agreement with this technique. Similarly, Goode and Cruise's (2006) study into software cracking concluded that most crackers had conflicting opinions about the injurious nature of their offending. Lastly, Siponen and colleagues' (2012) research into software piracy found no empirical support for this technique. Some variation in these findings might be contingent on the way that cyber offenders view the potential victims being injured. This is most evident in quantitative research with college samples, which found that pirates were more likely to deny injury to large corporations such as record labels and software companies (i.e., Bryan 2014; Ingram and Hinduja 2008), but were also likely to recognize harm to the individual authors of those products (Bryan 2014). Therefore, ambiguity around which victim is being injured in measures such as "no one is really hurt" (i.e., Goode and Cruise 2006; Siponen et al. 2012), together with the omission of harm to certain victims, is likely to produce varying results. For example, Vandiver et al. (2012) focused on the denial of injury to the music industry but did not consider the injury to the artist, and found stronger support for the relationship between this technique and piracy.
By contrast, studies that considered more than one victim (i.e., Bryan 2014; Hinduja 2007) concluded that support for this technique was weaker. This demonstrates the importance of capturing the spectrum of harm that corresponds with the variety of real or imagined victims. Likewise, there has been some disagreement about how to distinguish denial of injury from the other techniques of neutralization due to conceptual overlaps. The most common overlap is between denial of injury and denial of victim. As a result, some quantitative studies did not try to differentiate denial of injury and denial of victim, instead opting for items intended to measure both at the same time (Brown 2016; Bryan 2014; Ingram and Hinduja 2008). Other techniques were also found to overlap with the denial of injury. For example, studies have suggested that piracy is justified due to the belief that it benefits the industry, artist, or company (Hinduja 2007; Holt and Copes 2010; Moore and McMullan 2009; Vandiver et al. 2012; Yu 2012). While some studies categorize this as a denial of injury (Hinduja 2007), others suggest it represents an appeal to higher loyalties (Vandiver et al. 2012). Similarly, research into child sex offenders online found that offenders proclaimed that children benefit from sexual acts with adults (Kettleborough and Merdian 2017; O'Halloran and Quayle 2010). While O'Halloran and Quayle (2010) acknowledged that this is inextricably linked with a denial of injury, this response was categorized as a claim of benefit (one of the newer and lesser-used techniques of neutralization). These overlaps complicate the ability to determine whether this technique can adequately explain participation in cybercrime.

Denial of Victim

An empirical body of work has explored the application of Sykes and Matza's denial of victim to cybercrime. Denial of victim occurs when an act is perceived as a rightful retaliation or punishment for wrongdoing, transforming the delinquent into an avenger. In denying a victim, the delinquent can determine who is an appropriate and deserving target (Sykes and Matza 1957). The use of this technique also occurs when the victim may be "physically absent, unknown or a vague abstraction" (p. 668). There has been considerable support for this technique within the cybercrime scholarship, so much so that denial of victim is believed to be one of the neutralization techniques most applicable to cybercrime. Nevertheless, there have been inconsistencies with respect to the ways that this technique has been conceptualized and measured in the literature. Many of the studies in support of this technique relate to piracy. Harris and Dumas' (2009) qualitative studies of online consumer misbehavior found that denial of victim was the most commonly used technique to neutralize illegal file sharing on peer-to-peer networks. Additionally, they found that this technique was often used prior to engagement in this illegal online activity. While other studies have also demonstrated that the denial of victim is applicable to piracy, the level of support varies with the type of piracy and the measurement framework employed (e.g., Holt and Copes 2010; MacNeill 2017; Steinmetz and Tunnell 2013; Ulsperger et al. 2010). For example, Moore and McMullan (2009) explored each technique qualitatively and found that denial of victim was the second most commonly used to justify music piracy, whereas Ingram and Hinduja's (2008) survey into music piracy combined the denial of victim with the denial of injury, yet also found results that supported this neutralization technique.
Similarly, Bryan (2014) also combined the two techniques but found a greater mix of results, concluding that there was some support for the use of denial of victim and injury in piracy. Research studies into hacking also found some support for the denial of victim. Goode and Cruise (2006) established that some participants were willing to crack software produced by large companies due to the perception that they were a deserving victim. However, those who believed this were less likely to pirate from smaller companies that would be adversely affected by the loss of revenue (Goode and Cruise 2006). Conversely, others completely disagreed and chose to pirate from small companies rather than large companies, believing that they were giving small companies recognition and fearing retribution from the larger companies (Goode and Cruise 2006). One study found no statistical support for the use of this neutralization technique: Siponen et al.'s (2012) study of software piracy found that the denial of victim did not significantly influence an individual's intention to commit software piracy. Instead, they found that other techniques of neutralization, such as condemning the condemners and appealing to higher loyalties, as well as deterrence measures and moral beliefs, were more significant. It is important to note that this technique was omitted from other empirical studies into hacking (e.g., Chua and Holt 2016). While these results indicate an overwhelmingly positive association between denial of victim and cybercrime, they must be interpreted with caution. Despite diversity in methods, consisting of surveys, hypothetical scenarios, and interviews,
many of these studies are cross-sectional, draw on college or university samples, and focus on piracy (i.e., Siponen et al. 2012; Harris and Dumas 2009; Bryan 2014; Moore and McMullan 2009). Furthermore, the same conceptualization and operationalization issues observed for the previous techniques also arise for the denial of victim. This is clear in piracy studies that measure the denial of victim by: high prices and exploitation by multinational firms (Harris and Dumas 2009; Holt and Copes 2010; Ulsperger et al. 2010), the downloading of music from artists that were not going to sell their work (Moore and McMullan 2009), poor quality of a product (Holt and Copes 2010; Ulsperger et al. 2010), and the idea that intellectual property cannot be stolen like tangible, material property (Steinmetz and Tunnell 2013). Who is regarded as the "victim" is also inconsistent across the empirical body of research. While both Siponen et al. (2012) and Ingram and Hinduja (2008) refer to the victim as a large corporation, others tend to group all affected parties into one (Steinmetz and Tunnell 2013; Ulsperger et al. 2010). For the reasons discussed for the other techniques, this is problematic and makes it difficult to conclude that denial of victim is truly a neutralization used for prior or future participation in cybercrime.

Appeal to Higher Loyalties

Sykes and Matza's (1957) appeal to higher loyalties is a neutralization technique whereby the offender sacrifices the demands of the larger society for the needs of a smaller, alternative social group such as familial or peer groups. As such, delinquents who appeal to higher loyalties may find themselves in situations that can only be resolved by breaking the law. Individuals who use this justification may not reject standard social norms outright but may instead give priority to their own set of norms. Compared to the previous techniques, there have been fewer cybercrime studies exploring the applicability of this neutralization technique. Nevertheless, the findings conflict to an extent comparable to those for denial of responsibility. Some studies exploring the applicability of appeals to higher loyalties to piracy have supported its use. For example, Siponen et al. (2012) found that appealing to higher loyalties had a stronger link to an increased intention to pirate software than any of the other tested techniques. Ingram and Hinduja's (2008) study of music piracy demonstrated similar results, finding that an appeal to higher loyalties was associated with moderate to high levels of participation in piracy. Similarly, Smallridge and Roberts' (2013) quantitative inquiry into digital piracy found that an appeal to higher loyalties was positively associated with willingness to engage in music, movie, and software piracy, but not with video game piracy. In their study of music piracy among college students, Vandiver et al. (2012) found that the justifications used by participants were consistent with appealing to higher loyalties. Outside of piracy, studies on hacking have also demonstrated support for the use of this technique to neutralize offender behavior (Chua and Holt 2016; Hutchings 2013; Hutchings and Clayton 2016).

However, a handful of quantitative studies have found no support for the use of this neutralization technique. Goode and Cruise (2006) found no empirical support for it in their study of software cracking, instead concluding that other techniques such as denial of injury and denial of victim were more commonly used. Furthermore, Hinduja's (2007) study also found no significant statistical relationship between an appeal to higher loyalties and engagement in online software piracy. The varying levels of support for this technique may stem from a number of inconsistencies across this literature. One key issue is that there is no consistent conceptualization of who or what these higher loyalties are. Some studies define higher loyalties as the subcultural group to which the delinquent belongs, such as hackers or pirates (Steinmetz and Tunnell 2013); others as a social group such as close friends and/or family (Ingram and Hinduja 2008); and others still as a specific cause perceived to have social benefit, such as changing copyright laws (Steinmetz and Tunnell 2013). While many studies in this area tend to rely on college samples, the ways in which this technique is conceptualized and measured vary. For example, Chua and Holt (2016) refer to hackers broadly "helping society" and demonstrating loyalty to other hackers, whereas Hutchings and Clayton (2016) refer to a perceived increase in network security. A number of other qualitative studies have also examined this neutralization technique and conceptualized it in a variety of ways, including opposition to copyright laws (MacNeill 2017; Steinmetz and Tunnell 2013), acting for the common good (Hutchings 2013), making the internet a safer place (Hutchings 2013), discovering new artists (Harris and Dumas 2009), and an individual's right and freedom to access all cultural and entertainment products (Harris and Dumas 2009).

Condemning the Condemners

Numerous cybercrime studies have assessed the specific technique of condemning the condemners, which Sykes and Matza (1957) argue enables an offender to rationalize their delinquent activities as a response to those who might condemn them. They contend that this focus on the "conforming world" is particularly pertinent "when it hardens into a bitter cynicism directed against those assigned the task of enforcing or expressing the norms of the dominant society." By extension, they also suggest that the "rewards of conformity" (i.e., material success) become a "matter of pull or luck, thus decreasing still further the stature of those who stand on the side of the law-abiding" (p. 668). Central to this viewpoint is the deflection of negative sanctions attached to the violation of societal norms. As was the case for the preceding techniques, the body of cybercrime literature examining this technique has produced mixed results. There is some support in the literature to suggest a relationship between condemnation of condemners and cybercrime. By way of example, several studies using survey data demonstrate statistical relationships between this technique and cybercrime, notably music and software piracy. Ingram and Hinduja's (2008) and
Bryan's (2014) studies of prior piracy noted very weak associations with participant acceptance of this technique. Elsewhere, scholars noted a stronger relationship, albeit measuring future intention to pirate rather than prior offending (Marcum et al. 2011; Siponen et al. 2012). Various qualitative studies have provided further empirical support for such relationships. For example, numerous studies, again focusing on piracy, demonstrate a substantial acceptance of this technique both among those directly justifying retrospective accounts of their own piracy (Ulsperger et al. 2010) and within broader discussions of these activities in online communities (MacNeill 2017; Steinmetz and Tunnell 2013). Similar assertions reflecting this technique have also been found to be widespread in other deviant online communities, including those soliciting CEM (D'Ovidio et al. 2009; Durkin and Bryant 1999; O'Halloran and Quayle 2010) and promoting zoophilia (Maratea 2011). It is important to note, however, that other studies have failed to find an empirical link between the use of this technique and cybercrime, again predominantly in the case of piracy. Hinduja's (2007) oft-cited article found general support for neutralizations as a whole (i.e., considered together as a single construct), but specifically found no effect for the distinct neutralization technique of condemnation of condemners. These results were similar to those found by Goode and Cruise (2006), who also observed no significant relationship between this technique and digital piracy. Finally, and as was the case for denial of responsibility, one recent study by Smallridge and Roberts (2013) revealed a counter-narrative, reporting an inverse relationship between the use of this technique and digital piracy (i.e., those accepting this technique engaged in less piracy).
The variation in results reported above is unsurprising, particularly given the vastly different ways this discrete technique has been conceptualized and measured across the included studies. While each study indeed takes cues from Sykes and Matza's (1957) conceptual framework, the operationalizations vary considerably from study to study and are in most cases context-specific (i.e., tied to a particular form of offending). The various quantitative studies discussed in this section typically rely upon college samples and operationalize this neutralization technique through relatively few, narrowly defined items, typically between one and three, though occasionally as many as five items that permit additional breadth (e.g., Hinduja 2007). This work is therefore not conceptually consistent, measuring this construct through reference to one or more of the following: the unfair, hypocritical, or unethical behavior of corporations (or the music/software industry) (i.e., Bryan 2014; Goode and Cruise 2006; Hinduja 2007; Ingram and Hinduja 2008; Siponen et al. 2012), the unjust and unfair nature of laws (i.e., Hinduja 2007; Siponen et al. 2012), overly restrictive methods imposed by government (law enforcement) or industry to stop piracy (i.e., Bryan 2014; Hinduja 2007; Marcum et al. 2011; Siponen et al. 2012), the ineffectiveness of government or industry in policing the offense (i.e., Hinduja 2007; Smallridge and Roberts 2013), and a sense of entitlement as a consequence of prior victimization (i.e., Hinduja 2007; Smallridge and Roberts 2013). The relative lack of consistency in conceptualization and measurement continues across the body of qualitative work. There is no doubt that this work provides additional depth of understanding of this neutralization technique by leveraging

27

Applying the Techniques of Neutralization to the Study of Cybercrime

559

data beyond surveys conducted with college samples to include interviews with pirates (Holt and Copes 2010; Moore and McMullan 2009), web forums (Durkin and Bryant 1999; Maratea 2011; O’Halloran and Quayle 2010; Steinmetz and Tunnell 2013), Facebook comments (MacNeill 2017), and websites (D’Ovidio et al. 2009). Through content analysis, this literature has leveraged preexisting content to identify themes appearing consistent with those original specifications proposed by Sykes and Matza. In practice, however, operationalization was again context specific, with this neutralization technique being characterized in text in various disparate ways: as critical assertions made about industry insensitivity (Moore and McMullan 2009), overreaction and over-policing (Holt and Copes 2010; Ulsperger et al. 2010), critical posts directed toward other users on forums who challenge prevailing norms or values within a given online community (e.g., warez, hacking, or CEM forums) (Steinmetz and Tunnell 2013), as well as references made to the various “others” who might seek to discredit or oppose said norms and values (e.g., law enforcement, parents, teachers, social workers, among others) (D’Ovidio et al. 2009; Durkin and Bryant 1999; MacNeill 2017; Maratea 2011; O’Halloran and Quayle 2010). The lack of cogent methodological approaches to studying this neutralization technique has therefore made drawing conclusions about this literature a challenging proposition.

“Other” Supplementary Techniques of Neutralization

As flagged above, various cybercrime scholars have sought to supplement Sykes and Matza’s (1957) five original techniques of neutralization through adoption of various other neutralizations. However, discrete reference to such supplementary neutralizations within the cybercrime literature is scattershot at best, achieving neither uniform nor widespread acceptance relative to the original five techniques proposed by Sykes and Matza (1957). More specifically, cybercrime scholars have empirically accounted for a diverse assortment of supplementary techniques, including the metaphor of the ledger (Hinduja 2007; Smallridge and Roberts 2013; Siponen et al. 2012), the claim of normalcy (Harris and Dumas 2009; Hinduja 2007; Moore and McMullan 2009; Renfrow and Rollo 2014; Smallridge and Roberts 2013), the denial of negative intent (Hinduja 2007), the claim of acceptability (Harris and Dumas 2009; Hinduja 2007), the defense of necessity (Smallridge and Roberts 2013; Siponen et al. 2012), the claim of benefit (Durkin and Bryant 1999; Maratea 2011; O’Halloran and Quayle 2010; Renfrow and Rollo 2014), the claim of entitlement (Smallridge and Roberts 2013), sampling before buying (Smallridge and Roberts 2013), appeals to enlightenment (Maratea 2011), digital rights management defiance (Smallridge and Roberts 2013), neutralization by comparison (Maratea 2011), justification by comparison (Harris and Dumas 2009; Maratea 2011; Renfrow and Rollo 2014), and the claim of cultural diffusion (Maratea 2011). The proliferation of techniques used across this literature may, in some ways, be data-driven and explained through inductive analytical approaches commonly used in qualitative research (i.e., identifying and assigning codes from preexisting data). However, quantitative studies are also far-reaching with respect to the supplementary techniques selected for inclusion.

560

R. Brewer et al.

The relatively few studies that adopted overlapping supplementary techniques used measures that varied considerably in terms of their conceptualization and operationalization. Both quantitative and qualitative studies were particularly prone to such methodological discrepancy. In some cases, scholars argued certain items could serve as measures for multiple techniques (e.g., Morris and Higgins (2009) used four items to measure both the appeal to higher loyalties and the defense of necessity). Elsewhere, and more commonly, scholars measured these concepts discretely, but did so without a consistent approach. In some cases there was limited overlap (i.e., some quantitative studies adopted specific items from earlier work, or qualitative studies used related coding frameworks), but oftentimes these items were either adapted or ignored entirely in favor of new items constructed for that specific study. This lack of consistency may again account for the incongruous results reported across this literature with respect to the relative significance of each of these supplementary techniques.

Conclusions and Reflections

The review conducted in this chapter has provided insight into the treatment of Sykes and Matza’s (1957) core techniques of neutralization across the cybercrime literature. It illustrates that the extant empirical literature is not particularly well established and is theoretically underdeveloped – a similar critique to that leveled by Maruna and Copes (2005) well over a decade ago. These studies are mostly quantitative surveys of digital piracy that utilize college-based samples. While some studies do break the mold to consider alternate data sources, samples, cybercrimes, and qualitative methods, for the most part the breadth of inquiry is narrowly constrained. Within this constrained narrative, however, there are several trends of note. Studies that analyzed the techniques of neutralization as a singular construct (i.e., a scale) typically showed significant relationships between said construct and involvement in cybercrime (be it retrospective accounts of engagement or potential future engagement). However, the use of such scales is limiting insofar as it has the potential to obscure, or miss entirely, the nuances of the individual techniques comprising Sykes and Matza’s conceptual work. When the techniques were considered separately as discrete constructs, the emerging narrative was not as clear-cut. As reported above, the available evidence suggests mostly broad support across studies distinctly measuring the techniques of denial of injury and denial of the victim. By contrast, this literature provides only mixed support for the techniques of denial of responsibility, condemnation of condemners, and appeal to higher loyalties. One study even goes so far as to provide evidence contrary to Sykes and Matza’s theoretical framework – demonstrating an inverse relationship between engagement in digital piracy and the techniques of denial of responsibility and condemnation of condemners (Smallridge and Roberts 2013).
It is important, however, to exercise caution when interpreting the findings from this body of work. As noted in this discussion, the extant body of


literature is confronted by a myriad of limitations that make meaningful interpretation of results problematic. Most notably, the studies considered in this review do not provide consistent conceptual definitions of neutralization. Several quantitative studies include and analyze some techniques of neutralization to the exclusion of others (sometimes without justification as to why). Where specific techniques of neutralization are examined, the conceptualization and operationalization of these measures are also inconsistent and for the most part constructed independently. The selection of items (and the conceptual basis on which they are included) varies wildly from study to study with respect to depth and breadth. The measures included in most studies appear, at the very least, to loosely preserve the theory in its original form (a trend also common in offline neutralization studies). However, there appears to be a lack of strategic research driving conceptual and methodological development across common subject areas (e.g., the digital piracy literature). It is also important to flag that the quantitative items used to measure the techniques of neutralization substantially overlap with measures of other cognitive-based theories. This is most notable in cybercrime research that seeks to measure the “definitions” component of social learning (see Holt et al. 2010). Conceptually, Social Learning Theory’s “definitions” are similar to the techniques of neutralization because both are concerned with an individual’s perception of deviant behavior and its role in facilitating criminal activity (Maruna and Copes 2005; Holt et al. 2010). However, techniques of neutralization depart from Social Learning Theory by focusing on how these perceptions are used by an individual to neutralize or justify engagement in those behaviors (Sykes and Matza 1957).
Social Learning Theory, by contrast, is more concerned with the transmission of those perceptions for the purpose of legitimizing and encouraging crime (Akers 2009). Nevertheless, cybercrime research has used items such as “it is ok for me to pirate media because the creators are really not going to lose any money” (Burruss et al. 2012, p. 1177) to measure both a “definition” associated with social learning and a technique of neutralization (see Morris 2011). Due to this potential for overlap, the findings from existing cybercrime research must be interpreted carefully to ensure that studies are indeed capturing the process of neutralization. Moreover, future researchers in this space need to be cognizant of this problem and adopt methodologies that clearly differentiate the techniques of neutralization in order to reduce theoretical misspecification. This should involve distinct conceptualization and clearly defined and delineated measures. Qualitative work in this area should also be interpreted cautiously, and not necessarily be regarded as strong evidence in support of neutralization theory. While these studies do provide valuable insights into the potential motivations and mindsets of offenders, they tend to be inductive in nature, illustrating how the techniques of neutralization have been used by offenders rather than explicitly setting out to evaluate Sykes and Matza’s theory. As this body of qualitative research does not include a comparison group (i.e., non-offenders), it is not possible to test the central neutralization premise – that delinquents use the techniques of neutralization to enable them to temporarily drift toward deviant behavior. There are also several other methodological limitations that constrain the findings from this work, as well as their broader implications for cyber-criminology. The data


used in these empirical studies are, for the most part, quantitative and collected through self-report surveys of American college samples during the mid-to-late 2000s. Considered together, this has implications for the generalizability of this work, both in terms of its applicability to other forms of cybercrime and to non-American or non-college-based populations (who constitute the overwhelming majority of internet users). Another limitation of the quantitative literature is that studies are, with few exceptions, comprised of cross-sectional research designs. It has been pointed out elsewhere (see Maruna and Copes 2005) that such research designs (which, coincidentally, are also commonplace across offline studies of neutralization) in actuality assess whether offenders score higher on neutralization than non-offenders. Conceptually, such designs are problematic, as Sykes and Matza’s (1957) original theory was predicated on the notion that offenders and non-offenders are both similarly committed to conventional values – and that offenders were able to offend because they were able to successfully neutralize. One would therefore expect to find a relationship between scores on neutralization scales and delinquency. In the absence of longitudinal designs, it is not possible to determine whether various neutralizations actually precede offending or simply reflect after-the-fact justifications (Maruna and Copes 2005). The conceptual and methodological limitations associated with the assessment of neutralization across the extant cybercrime literature therefore highlight a need for additional research and theoretical development. Such research should seek to account for distinct cognitive processes that shape the use of neutralizations and potentially drive offending.
It has been argued elsewhere (see Maruna and Copes 2005) that this could be accomplished through the use of a nomothetic-idiographic strategy – making possible the development of robust quantitative scales derived from qualitative data. They suggest that qualitative methods are ideally situated to extract the nuance required for theory building through the systematic coding of data (verbal material or corpus) for “manifest (rather than latent) content or style, rendering the data comparable across individual cases or between groups” (Maruna and Copes 2005, p. 269). This enhanced methodological approach has shown promising results in other contexts (e.g., Copes 2003; Maruna 2001, 2004) and is well positioned to address many of the critiques levied in this chapter – combining the richness and theoretical validity of qualitative designs with the rigorous theory-testing capabilities of quantitative techniques (Maruna and Copes 2005).

Cross-References

▶ Computer Hacking and the Hacker Subculture
▶ Deviant Instruction: The Applicability of Social Learning Theory to Understanding Cybercrime
▶ Digital Piracy
▶ Sexual Subcultures and Online Spaces
▶ Subcultural Theories of Crime


References

Akers, R. L. (2009). The social learning theory of criminal and deviant behavior. In Social learning and social structure: A general theory of crime and deviance. New Brunswick: Transaction Publishers.
Brewer, R., Cale, J., Goldsmith, A., & Holt, T. (2018). Young people, the internet, and emerging pathways into criminality: A study of Australian adolescents. International Journal of Cyber Criminology, 12(1), 115–132.
Brown, S. C. (2016). Where do beliefs about music piracy come from and how are they shared? International Journal of Cyber Criminology, 10(1), 21–39.
Brunton-Smith, I., & McCarthy, D. J. (2016). Explaining young people’s involvement in online piracy: An empirical assessment using the offending crime and justice survey in England and Wales. Victims & Offenders, 11(4), 509–533.
Bryan, A. (2014). Digital piracy: Neutralising piracy on the digital waves. Plymouth Law and Criminal Justice Review, 6(1), 214–235.
Burruss, G. W., Bossler, A. M., & Holt, T. J. (2012). Assessing the mediation of a fuller social learning model on low self control’s influence on software piracy. Crime & Delinquency, 59(8), 1157–1184.
Chua, Y. T., & Holt, T. J. (2016). A cross-national examination of the techniques of neutralization to account for hacking behaviors. Victims & Offenders, 11(4), 534–555.
Coleman, J. W. (1985). The criminal elite: The sociology of white collar crime. New York: St. Martin’s Press.
Copes, H. (2003). Societal attachments, offending frequency, and techniques of neutralization. Deviant Behavior, 24(2), 101–127.
Cromwell, P., & Thurman, Q. (2003). The devil made me do it: Use of neutralizations by shoplifters. Deviant Behavior, 24(6), 535–550.
D’Ovidio, R., Mitman, T., El-Burkia, I. J., & Shumar, W. (2009). Adult-child sex advocacy websites as social learning environments: A content analysis. International Journal of Cyber Criminology, 3(1), 421–440.
Durkin, K. F., & Bryant, C. D. (1999). Propagandizing pederasty: A thematic analysis of the on-line exculpatory accounts of unrepentant pedophiles. Deviant Behavior, 20(2), 103–127.
Goldsmith, A., & Brewer, R. (2015). Digital drift and the criminal interaction order. Theoretical Criminology, 19(1), 112–130.
Goode, S., & Cruise, S. (2006). What motivates software crackers? Journal of Business Ethics, 65(2), 173–201.
Harrington, S. J. (1996). The effect of codes of ethics and personal denial of responsibility on computer abuse judgments and intentions. MIS Quarterly, 20(3), 257–278.
Harris, L. C., & Dumas, A. (2009). Online consumer misbehaviour: An application of neutralization theory. Marketing Theory, 9(4), 379–402.
Henry, S. (1990). Degrees of deviance: Student accounts of their deviant behaviour. Salem: Sheffield Publishing.
Higgins, G. E., Wolfe, S. E., & Marcum, C. D. (2008). Music piracy and neutralization: A preliminary trajectory analysis from short-term longitudinal data. International Journal of Cyber Criminology, 2(2), 324–336.
Hinduja, S. (2007). Neutralization theory and online software piracy. Ethics and Information Technology, 9(1), 187–204.
Holt, T. J., & Copes, H. (2010). Transferring subcultural knowledge on-line: Practices and beliefs of persistent digital pirates. Deviant Behavior, 31(7), 625–654.
Holt, T. J., & Morris, R. G. (2009). An exploration of the relationship between MP3 player ownership and digital piracy. Criminal Justice Studies, 22(4), 381–392.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2010). Social learning and cyber-deviance: Examining the importance of a full social learning model in the virtual world. Journal of Crime and Justice, 33(2), 31–61.


Holt, T. J., Brewer, R., & Goldsmith, A. (2019). Digital drift and the “sense of injustice”: Counterproductive policing of youth cybercrime. Deviant Behavior, 40(9), 1144–1156.
Hutchings, A. (2013). Hacking and fraud: Qualitative analysis of online offending and victimization. In Global criminology: Crime and victimization in a globalized era (pp. 93–114). Boca Raton: CRC Press.
Hutchings, A., & Clayton, R. (2016). Exploring the provision of online Booter services. Deviant Behavior, 37(10), 1163–1178.
Hwang, J., Lee, H., Kim, K., Zo, H., & Ciganek, A. P. (2016). Cyber neutralisation and flaming. Behaviour & Information Technology, 35(3), 210–224.
Ingram, J. R., & Hinduja, S. (2008). Neutralizing music piracy: An empirical examination. Deviant Behavior, 29(4), 334–366.
Kettleborough, D. G., & Merdian, H. L. (2017). Gateway to offending behaviour: Permission-giving thoughts of online users of child sexual exploitation material. Journal of Sexual Aggression, 23(1), 19–32.
Klockars, C. B. (1974). The professional fence. New York: Free Press.
Kos Koklic, M., Kukar-Kinney, M., & Vida, I. (2016). Three-level mechanism of consumer digital piracy: Development and cross-cultural validation. Journal of Business Ethics, 134(1), 15–27.
Lowry, P. B., Zhang, J., Wang, C., & Siponen, M. (2016). Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research, 27(4), 962–986.
MacNeill, K. (2017). Torrenting game of thrones: So wrong and yet so right. Convergence, 23(5), 545–562.
Maratea, R. J. (2011). Screwing the pooch: Legitimizing accounts in a zoophilia on-line community. Deviant Behavior, 32(10), 918–943.
Marcum, C. D., Higgins, G. E., Wolfe, S. E., & Ricketts, M. L. (2011). Examining the intersection of self-control, peer association and neutralization in explaining digital piracy. Western Criminology Review, 12(3), 60–74.
Maruna, S. (2001). Making good: How ex-convicts reform and rebuild their lives. Retrieved from https://www.apa.org/pubs/books/431645A
Maruna, S. (2004). Desistance from crime and explanatory style: A new direction in the psychology of reform. Journal of Contemporary Criminal Justice, 20(2), 184–200.
Maruna, S., & Copes, H. (2005). What have we learned from five decades of neutralization research? Crime and Justice, 32, 221–320.
Matza, D. (1964). Delinquency and drift. New York: Wiley.
Minor, W. W. (1981). Techniques of neutralization: A reconceptualization and empirical examination. Journal of Research in Crime and Delinquency, 18(2), 295–318.
Moore, R., & McMullan, E. C. (2009). Neutralizations and rationalizations of digital piracy: A qualitative analysis of university students. International Journal of Cyber Criminology, 3(1), 441–451.
Morris, R. G. (2011). Computer hacking and the techniques of neutralization: An empirical assessment. In T. J. Holt & B. H. Schell (Eds.), Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 1–17). Hershey: Information Science Reference.
Morris, R. G., & Higgins, G. E. (2009). Neutralizing potential and self-reported digital piracy: A multitheoretical exploration among college undergraduates. Criminal Justice Review, 34(2), 173–195.
O’Halloran, E., & Quayle, E. (2010). A content analysis of a “boy love” support forum: Revisiting Durkin and Bryant. Journal of Sexual Aggression, 16(1), 71–85.
Renfrow, D. G., & Rollo, E. A. (2014). Sexting on campus: Minimizing perceived risks and neutralizing behaviors. Deviant Behavior, 35(11), 903–920.
Siponen, M., Vance, A., & Willison, R. (2012). New insights into the problem of software piracy: The effects of neutralization, shame, and moral beliefs. Information & Management, 49(7), 334–341.


Smallridge, J. L., & Roberts, J. R. (2013). Crime specific neutralizations: An empirical examination of four types of digital piracy. International Journal of Cyber Criminology, 7(2), 125–140.
Steinmetz, K. F., & Tunnell, K. D. (2013). Under the pixelated Jolly Roger: A study of on-line pirates. Deviant Behavior, 34(1), 53–67.
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664–670.
Thongmak, M. (2017). Ethics, neutralization and digital piracy. International Journal of Electronic Commerce Studies, 8(1), 1–24.
Ulsperger, J. S., Hodges, S. H., & Paul, J. (2010). Pirates on the plank: Neutralization theory and the criminal downloading of music among generation Y in the era of late modernity. Journal of Criminal Justice and Popular Culture, 17(1), 124–151.
Vandiver, D. M., Bowman, S., & Vega, A. (2012). Music piracy among college students: An examination of low self-control, techniques of neutralization, and rational choice. The Southwest Journal of Criminal Justice, 8(2), 76–95.
Vida, I., Kos Koklič, M., Kukar-Kinney, M., & Penz, E. (2012). Predicting consumer digital piracy behavior: The role of rationalization and perceived consequences. Journal of Research in Interactive Marketing, 6(4), 298–313.
Vysotsky, S., & McCarthy, A. L. (2017). Normalizing cyberracism: A neutralization theory analysis. Journal of Crime and Justice, 40(4), 446–461.
Yu, S. (2012). College students’ justification for digital piracy: A mixed methods study. Journal of Mixed Methods Research, 6(4), 364–378.
Yu, S. (2013). Digital piracy justification: Asian students versus American students. International Criminal Justice Review, 23(2), 185–196.
Zhang, S., & Leidner, D. (2018). From improper to acceptable: How perpetrators neutralize workplace bullying behaviors in the cyber world. Information & Management, 55(7), 850–865.
Zhang, S., Yu, L., Wakefield, R. L., & Leidner, D. (2016). Friend or foe: Cyberbullying in social network sites. The Database for Advances in Information Systems, 47(1), 51–71.

The General Theory of Crime

28

George E. Higgins and Jason Nicholson

Contents

Self-Control Theory, Cybercrime, and Cyber Victimization ..... 568
Introduction ..... 568
Self-Control Theory ..... 568
General Review of the Self-Control Theory Literature ..... 570
Hirschi’s (2004) Reconceptualization of Self-Control Theory ..... 572
Self-Control Theory and Cybercrime ..... 573
Summary ..... 577
Cross-References ..... 579
References ..... 579

Abstract

Self-control theory is a major advance in criminology. The central piece of the theory – self-control – has been shown to have a link with multiple behaviors. Here, we show self-control has a link with cyber behaviors.

Keywords

Self-control · Cyber crime · Cyber criminology

G. E. Higgins (*)
Department of Criminal Justice, University of Louisville, Louisville, KY, USA
e-mail: [email protected]

J. Nicholson
Department of Criminology, University of West Georgia, Carrollton, GA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_20



Self-Control Theory, Cybercrime, and Cyber Victimization

Introduction

Crime, deviance, and victimization have been topics of inquiry for centuries. Criminologists have consistently worked to understand why individuals perform these behaviors or how individuals become victims of them. This work has produced a substantial body of theorizing. In the 1980s, Gottfredson and Hirschi worked to develop a singular cause of criminal behavior. Their work was a response to the growing reliance on expensive longitudinal and developmental research. Gottfredson and Hirschi’s view was that a singular cause of criminal behavior could be captured using cross-sectional data, without requiring expensive longitudinal data. These efforts resulted in the General Theory of Crime, now known as self-control theory, which advanced the traditional self-control literature in criminology (Gottfredson and Hirschi 1990). The provocative nature of a singular cause of criminal behavior has attracted a substantial amount of research attention. Based on this attention, many have regarded the theory as a leading crime theory (Agnew 1995). The research has led to at least one meta-analysis (see Pratt and Cullen 2000) and one major review of the extant literature. This chapter provides a review of what is presently known about self-control theory in the context of cybercrime and cyber victimization. The chapter begins with a presentation of self-control theory, summarizes the empirical research on its most important basic hypotheses, reviews the literature on the connection between self-control and cybercrime and cyber victimization, and concludes with a summary of the key findings and future research directions. (The review of the literature is not meant to be exhaustive, but illustrative of the major works that have examined key hypotheses from the theory.)
The studies reviewed are organized around the specific hypotheses that they test. Before these studies are reviewed, self-control theory is presented.

Self-Control Theory

Put simply, Gottfredson and Hirschi’s (1990) self-control theory assumes that individuals are rational decision-makers. In their view, individuals make decisions using a form of bounded rationality – drawing on the information presently known to them about a behavior, while unknown information is not taken into account. This has the effect of limiting the information an individual can use when making a decision. In other words, as the individual weighs the potential pleasure of an act against its potential pain, the limited information available may distort the individual’s perception. This distorted perception can result in choosing to perform a behavior because the limited information makes it seem that the act will be pleasurable rather than painful.


Crime is a behavior an individual may perform as a result of distorted rational choice. For Gottfredson and Hirschi (1990), the legal definition of crime is not sufficient to understand the behavior. They argue the legal definition serves only the state’s business, not the social sciences, because it takes neither the person nor the decision into account. This view leads Gottfredson and Hirschi (1990) to redefine crime as a potentially pleasurable act of force or fraud that the individual may pursue to satisfy their own interests (Gottfredson and Hirschi 1990). To help specify how crime can provide pleasure, and to consider the person in their definition, Gottfredson and Hirschi (1990) argued that crime has specific characteristics: crimes are generally short-lived, immediately gratifying, simple, easy, and exciting behaviors. These characteristics allow Gottfredson and Hirschi (1990) to link crime to the person. Gottfredson and Hirschi (1990) argued the characteristics of crime are attractive to those individuals who have deficits in self-control. The attraction occurs because individuals with low self-control have characteristics similar to those of the behavior itself. For instance, those with low self-control are risk-taking and impulsive, lack empathy, prefer simple and easy tasks, and prefer physical tasks. Because of the attraction these characteristics create, the individual does not need any specific form of motivation to commit crime. Rather, they need only an evaluation that the criminal behavior will provide them more pleasure than pain. The lack of knowledge about a behavior is not the only thing that distorts the view of its pleasure and pain; the individual’s level of self-control may also distort this perception. Gottfredson and Hirschi (1990) state:

. . . the dimensions [characteristics] of self-control are, in our view, factors affecting calculation of the consequences of one’s acts. The impulsive or shortsighted person fails to consider the negative or painful consequences of his acts; the insensitive person has fewer negative consequences to consider; the less intelligent person also has fewer negative consequences to consider (has less to lose). (p. 95)

Here it is clear that those with low self-control are likely to misestimate the pleasure and pain of a criminal act because their information is distorted, inhibiting their ability to accurately calculate the long-term consequences of their behavior. At this point, Gottfredson and Hirschi (1990) position low self-control as the singular individual propensity for criminal behavior. Gottfredson and Hirschi (1990) are not silent on how an individual acquires low self-control. They argue that poor early socialization is the likely culprit. Their view of early socialization begins with parents and their ability to consistently perform the parental management tasks. Successful parental management involves having an emotional bond or attachment to the child, monitoring the child’s behavior, evaluating the behavior to determine whether it is criminal or deviant, and applying some form of non-corporal punishment if it is. Gottfredson and Hirschi (1990) argued that parents have a brief window in which to perform these parental management tasks; past the age of 8, an individual’s level of self-control is considered to be relatively stable. The poor or inconsistent application of the parenting

570

G. E. Higgins and J. Nicholson

tasks is likely to instill lower levels of self-control that will increase the likelihood that their child will seize an opportunity for crime or deviant behavior (Another version of Gottfredson and Hirschi’s theory includes a measure of opportunity. That is, the link between self-control and crime is moderated by opportunity. Researchers typically examine the direct link between self-control and crime (see Gibbs and Giever 1995; Gibbs et al. 2003 for an explanation of this point of view). The rationale that they generally use is that opportunities for the types of crimes that are being examined are ever present; therefore, opportunity for crime is ubiquitous.). The parental management component of Gottfredson and Hirschi’s (1990) selfcontrol theory also has implications for understanding gender differences in crime. Gottfredson and Hirschi’s (1990) central view on gender and crime is that females are less likely to commit crime because they possess higher level of self-control than males. Parents tend to monitor the behavior of females closer than they monitor the behavior of males. Parents, thus, are able to apply the parental management tasks more consistently to their female children than their male children. This creates variation in the levels of self-control, thus resulting in variation in criminal behavior across the genders with the consequentially higher self-control in females leading to reduced delinquency. A further consequence of low self-control is its influence over the formation of peer groups. Gottfredson and Hirschi (1990) are clear with their understanding that peers are an important covariate for criminal and deviant activity. Their view on the formation of peer groups is important for self-control theory. For Gottfredson and Hirschi (1990), peer group selection and participation is determined by one’s level of self-control. 
This will result in groups whose members have low levels of self-control, and because the group has low self-control, it is likely to perform criminal and delinquent behavior as a group. Peer groups with low levels of self-control are likely to be facilitators of crime. Crime facilitation, for Gottfredson and Hirschi (1990), is important because the group makes crime easier to perform.

Although Gottfredson and Hirschi's (1990) theory focused mostly on low self-control and criminal offending, they also recognized the relationship between low self-control and victimization. The likelihood of victimization is raised through the same characteristics that make crime attractive to those with low self-control (i.e., risk-taking, impulsivity, lack of empathy, and preferences for simple, easy, and physical tasks). The inability to properly weigh the consequences of behavior leads not only to increased criminality but also to victimization. Those with low self-control are more likely to expose themselves to dangerous situations, associate with deviant peers, and behave in unsafe ways, raising their likelihood of victimization.

28 The General Theory of Crime

General Review of the Self-Control Theory Literature

Gottfredson and Hirschi's (1990) self-control theory as related to deviant behavior, and its main arguments regarding parental management, stability, peer groups, gender differences, and victimization, have been widely analyzed empirically. The vast majority of studies of self-control theory have shown empirical support for Gottfredson and Hirschi's views of risky and impulsive tendencies, poor empathy, and preferences for simple, easy, and physical tasks as predictors of criminal activity. Pratt and Cullen (2000) published a meta-analysis of more than 20 studies showing that self-control had a moderate link with crime and deviance, supporting the major hypothesis of Gottfredson and Hirschi's theory. Since the publication of this meta-analysis, researchers have shown that low self-control is linked with several different types of behaviors (for extensive reviews of these studies, see Pratt et al. 2004).

Researchers have tested Gottfredson and Hirschi's (1990) argument that an individual's level of self-control is contingent on the consistency of their exposure to parental management, with consistent parental management predicting higher levels of self-control and lower levels of crime. Gibbs et al. (1998) used retrospective measures of parenting practices that asked college students to recall how their parents treated them in the ninth grade. Parenting practices were correlated with self-control, and self-control was linked with delinquency, supporting Gottfredson and Hirschi. Other researchers used a format similar to Gibbs et al. (1998) to test the model and found support for this premise (Gibbs et al. 2003). Some researchers have shown that parenting practices are linked with self-control using current, rather than retrospective, measures of parental management and self-control (Hay 2001; Hope et al. 2003; Lynskey et al. 2000; Pratt et al. 2004; Unnever et al. 2003; Vazsonyi and Belliston 2007).

The contention that parental management tasks must be performed early in life, as self-control levels are relatively stable after age 8, has also been empirically analyzed. Turner and Piquero (2002) used a national probability sample that contained behavioral and personality (i.e., attitudinal) measures of self-control to test this hypothesis.
Their results were mixed and contingent on the type of measure used, with behavioral measures more stable than personality measures. From these data, Turner and Piquero (2002) argued that self-control remained relatively stable. Researchers examining this proposition with different measures have not found support for Gottfredson and Hirschi's (1990) claim (Mitchell and MacKenzie 2006). Research results are thus mixed regarding the stability of self-control after age 8.

Gottfredson and Hirschi's (1990) contention that self-control varies across genders due to differences in parental management, which uniquely influences delinquency, has been supported empirically. Higgins (2004) used responses on parental management, self-control, and deviance from a nonrandom sample of college students to show that all of these measures have different distributions for males and females. Specifically, females scored higher on parental management and self-control but lower on deviance than males. Higgins (2004) also showed that the model (parental management ➔ self-control ➔ deviance) held for both males and females. This indicates that Gottfredson and Hirschi (1990) are correct that gender differences exist in parenting practices that have important implications for self-control.

A number of researchers have analyzed the issues regarding self-control and peer groups (Baron 2003; Evans et al. 1997; Gibson and Wright 2001). Chapple (2005) used national probability longitudinal data to show that individuals with low self-control are likely to have delinquent associations, supporting Gottfredson and Hirschi's (1990) view. However, Chapple's (2005) most notable finding is that those with low self-control experience peer rejection. This supports Gottfredson and Hirschi's (1990) view that those with low self-control are likely to make poor friends. Meldrum et al. (2009) used social network data to show that direct measures of peer association obtained from peers themselves have a smaller effect on self-control than other forms of measurement. Consistent with others in the literature, Meldrum et al. (2009) contextualize these results in social learning theory rather than in Gottfredson and Hirschi's (1990) views. Thus, in the empirical literature, researchers are able to find support for Gottfredson and Hirschi's (1990) contentions about the connection between self-control and peer associations.

The influence of low self-control on victimization is also empirically supported. The components of the general theory of crime have been found to influence victimization in similar ways as crime perpetration. A meta-analysis of 66 studies by Pratt et al. (2014) indicated that self-control is a modest yet consistent predictor of various forms of victimization. Previous research has shown low self-control to directly influence numerous forms of victimization, including violent, property, fraud, homicide, and cybercrime victimization (Holtfreter et al. 2008; Piquero et al. 2005; Schreck et al. 2006), while also indirectly influencing victimization through routine activities concepts. Routine activities theory argues that crimes are likely to occur when three situational factors converge: motivated offenders, suitable targets, and a lack of capable guardians (Cohen and Felson 1979). Those with low self-control are more likely to enter risky places or situations and to associate with deviant peers. The heightened likelihood for those with low self-control to associate with deviant peers (Higgins et al. 2006) places these individuals around potential motivated offenders.
Those with low self-control can therefore present themselves as suitable targets to deviant peers in the absence of effective guardianship, without altering their normal routines. Empirical data thus demonstrate that the combination of the individual-level aspects of self-control and the situational-level attributes of routine activities theory shows promise for a greater understanding of victimization.

Hirschi's (2004) Reconceptualization of Self-Control Theory

The development of self-control as a personality construct, measured through personality instruments, was a problem for Hirschi, and it led Hirschi (2004) to provide an alternative perspective on self-control. Hirschi's (2004) reconceptualization removed the focus on the characteristics of low self-control and on self-control as a personality trait. In Hirschi's (2004) view, the personality use of self-control (1) is a search for the motives of crime and delinquency that runs counter to the original theory; (2) shows little value in the explanation of crime; (3) does not explain how self-control operates but intimates that individuals become criminal because they are who they are; and (4) produces a measure that does not treat more self-control as better than less. Hirschi (2004) sees self-control not as a personality trait or predisposition for crime but as the tendency to consider the full range of potential costs (i.e., inhibitions) of a particular act. This means the individual carries these inhibitions with them wherever they go. The result is to shift the emphasis from the long-term costs of an act to any cost of an act, and to place the emphasis on higher levels of self-control. Individuals are thus consistently reviewing their inhibitions, consciously or unconsciously, when encountering any situation. Crime and delinquent acts, then, are possible due to the absence of an enduring tendency to avoid them (i.e., the inability to see the full range of inhibitions). Hirschi (2004) argued that the main inhibitions someone carries with them come from the bonds of social bonding theory (i.e., attachment, commitment, involvement, and belief). In other words, those with stronger bonds have higher levels of self-control, and these individuals do not wish to transgress because they understand the consequences to their bonds that would arise from the transgression. These consequences serve as the inhibitions to criminal behavior.

To test this view, Hirschi (2004) used data from the Richmond Youth Survey. To capture the new conceptualization, he used nine items that capture a variety of social bonds (i.e., attachment, commitment, and belief). (Hirschi (2004) noted that his measure does not include involvement, but suggested that involvement could be used in this and other studies.) He showed that his conceptualization of self-control has a negative link with delinquency. This supports the reconceptualization of self-control: individuals add up the negative costs of an act and behave accordingly. An important issue with this study was Hirschi's (2004) measures: his use of nine items that reflect social bonds is consistent with his view that self-control and social control are one and the same.
Piquero and Bouffard (2007) used data from college students to examine the reconceptualization of self-control. They interpreted Hirschi (2004) as coming more from the rational choice tradition than from the social bonding tradition. Their approach to operationalizing self-control was to ask students to provide a list of seven "bad things" and the percent likelihood of each of these "bad things" occurring. The products of these responses were added together, with higher scores on the measure indicating more inhibitions. Piquero and Bouffard (2007) also included the Grasmick et al. (1993) measure. In comparison, the "bad things" measure of self-control had a stronger link with drunk driving and sexual aggression than the Grasmick et al. scale.
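The arithmetic of such an inhibition score can be sketched as follows. This is a hypothetical simplification, not the exact coding of Piquero and Bouffard (2007): it assumes each listed "bad thing" carries a subjective importance rating (0–100) and a percent likelihood of occurring (0–100), whose products are summed.

```python
# Hypothetical sketch of an inhibition score in the spirit of the
# "bad things" measure. The exact coding in the original study differs;
# here each consequence is assumed to have an importance rating (0-100)
# and a percent likelihood (0-100), and their products are summed.

def inhibition_score(consequences):
    """Sum of importance x likelihood over the listed 'bad things'.

    `consequences` is a list of (importance, likelihood_pct) pairs.
    Higher scores indicate stronger inhibitions, i.e., higher
    self-control under Hirschi's (2004) reconceptualization.
    """
    return sum(importance * likelihood / 100.0
               for importance, likelihood in consequences)

# A respondent who foresees severe, likely consequences scores higher
# (more inhibited) than one who foresees mild, unlikely consequences.
cautious = [(90, 80), (70, 60), (100, 50)]   # hypothetical ratings
careless = [(20, 10), (10, 5), (30, 20)]

print(inhibition_score(cautious) > inhibition_score(careless))  # True
```

Note that on this measure more self-control means a higher score, so its expected association with offending is negative.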

Self-Control Theory and Cybercrime

Computer crime consists of multiple forms of behavior and may have deleterious effects. Two pieces of computer crime are relevant here. The first piece is that the computer may be the object of the crime: on one hand, the computer may be physically stolen or damaged; on the other, it may be stolen or damaged through digital means. In either form, the computer is the object of the crime.


The second piece is that the computer may be used as an instrument to perform criminal activity. For instance, someone may use a computer to digitally bully or stalk another person, or to hack into someone else's system to steal digital media (i.e., recordings, applications, information, games, and video). Overall, the digital use of the computer to perform criminal acts is often termed Internet crime, digital crime, or cybercrime.

Cybercrime is the perpetration of criminal activity using a computer of some form. The computer need not be an actual computer; it may be any electronic device. For instance, devices such as mobile phones, gaming devices, some video devices, and music devices (e.g., iPods) may be used to perpetrate cybercrimes. At their core, these devices contain memory of some sort to retain information, an operating system, and some form of networking that allows the user to send and receive information not endogenous to the device. In other words, the ability to use these devices to reach the Internet or others on private networks provides the user an ever-present opportunity to perform criminal activity if they choose to do so.

As documented earlier, Gottfredson and Hirschi's (1990) version of self-control theory has been shown to be linked with a number of behaviors. These behaviors include crime in the physical world as well as in the cyber environment, which show some similarities when studied through the self-control perspective. For some forms of cybercrime, self-control theory has been able to provide some level of understanding, with the theoretical connections between low self-control and various forms of cybercrime very much present.
The following paragraphs detail the relationship between the components of low self-control outlined by Gottfredson and Hirschi (1990) (risk-taking, impulsivity, lack of empathy, and preferences for simple, easy, and physical tasks) and specific cybercrimes.

One form of cybercrime that self-control theory has helped us understand is digital piracy. Digital piracy is the theft of digital media (Higgins 2005). The media stolen may range from music to videos to games, though this is not an exhaustive list. Digital piracy provides the benefits sought by those with low self-control, showing the connection between the behavior and the theoretical perspective. Those with low self-control are not likely to have the patience to wait for a legitimate purchasing opportunity or to respect the copyright laws associated with digital media. They would also be likely to believe that no harm has come from their pirating behavior. Digital piracy is also easy and simple to perform, similar to theft in the physical world (i.e., shoplifting), and may provide the thrill of deviance that those with low self-control desire.

To date, a growing body of research examines digital piracy with self-control theory. The vast majority of this literature has considered the role of self-control in the context of music piracy (Higgins et al. 2012; Hinduja and Ingram 2008), movie piracy (Higgins et al. 2006), and software piracy (Higgins 2005). These studies demonstrated links between digital piracy and measures representing an inability to resist temptation, a failure to recognize the consequences of actions, and a failure to empathize with copyright holders. Results also show a preference for the simplicity and ease of tasks such as digital piracy. Further, researchers have shown the connection between low self-control and digital piracy among both juveniles (Baek et al. 2016) and college students (Higgins 2005). This research also contains measures of deviant peer association. For instance, Higgins and Makin (2004a) used a sample of college students to show that peer association conditioned the link between low self-control and pirating (i.e., illegally downloading) software. They interpreted their findings to mean that pirating software generally occurs individually, but belonging to a peer group that pirates software makes the behavior easier to perform. Some interpret these findings as support for social learning theory, but Higgins and Makin (2004b) make the case that associating with pirating peer groups makes performing the behavior easier, a view consistent with Gottfredson and Hirschi's (1990) theory.

Another form of cybercrime linked with self-control theory is hacking. Hacking is the unauthorized obtaining or use of someone else's computer system or, at a minimum, their passwords. The components of low self-control outlined by Gottfredson and Hirschi (1990) are easily recognized in hacking behavior. Those who act impulsively or insensitively will fail to recognize the violations of trust and privacy associated with hacking. They will also overlook the efforts implemented to prevent hacking and the possibility of legal consequences if caught. Researchers have shown that low self-control is linked with lower-level forms of hacking (i.e., using someone else's password) (Bossler and Burruss 2011; Marcum et al. 2014). Marcum et al. (2014) demonstrated that low self-control, represented through a nine-item scale encompassing several components of Gottfredson and Hirschi's (1990) theory, was associated with logging into another's social media account and accessing a website as an unauthorized user, both minor forms of hacking.
The research literature is silent on serious forms of hacking, leaving a gap in our understanding of more technically difficult forms. Holt and Bossler (2014) proffered that more serious forms of hacking may not have a connection with self-control, but it is possible that more serious forms of hacking are seen as easy and simple by the perpetrator, continuing the connection between self-control and hacking. While this is a logical suggestion, more research is necessary on the connection between hacking and self-control. One additional notable result is the role that deviant peer association plays in the hacking literature. In the studies that examine self-control and hacking, deviant peer association consistently has an effect on hacking behavior (Bossler and Burruss 2011; Marcum et al. 2014). As with the results in digital piracy, these results are interpreted in the context of learning theory, and not in the context of deviant peers making hacking behavior easier as in Gottfredson and Hirschi's (1990) view.

Cyberbullying is another form of cybercrime that may be linked with self-control. Cyberbullying is intentional intimidation, harassment, or rumor spreading via the Internet. The theoretical connection between low self-control and bullying behaviors has been found in both the virtual world and the off-line physical setting (Kulig et al. 2017; Li et al. 2016). Both physical and virtual bullying behaviors are characterized by a lack of empathy toward victims and impulsive, self-serving behavior. Regarding specific studies of cyberbullying, Lianos and McGrath (2018) demonstrated a significant and positive relationship between the Grasmick et al. (1993) scale, a measure consisting of the six components of low self-control identified by Gottfredson and Hirschi (1990), and cyberbullying perpetration. Holt et al. (2012), however, showed that cyberbullies may not have self-control deficits. Interestingly, Holt et al. (2012) argued that deviant peer association mediates the influence of low self-control on cyberbullying, reducing the impact of low self-control on cyberbullying. This is logical when considering deviant peer association as a social learning theory phenomenon.

Another form of cybercrime that needs to be understood is cyberstalking. This behavior is thought to be in line with low self-control because cyberstalking, similar to real-world stalking, is a shortsighted behavior that satisfies the immediate desire to know about someone else's whereabouts, activities, and personal associations. Those with low self-control would also be unlikely to weigh the long-term consequences of their stalking behavior if caught, including damage to the personal relationship and trauma experienced by the victim, against the immediate benefits of stalking. To date, researchers have shown that low self-control is linked with cyberstalking (Holt and Bossler 2009; Reyns et al. 2012). In addition, Marcum et al. (2016) used data from college students to show that low self-control is linked with cyberstalking behavior, specifically in the context of a romantic relationship. Marcum, Nicholson, and Higgins (2017) used data from a different sample of college students to show that low self-control, measured through both the Grasmick et al. (1993) scale and Hirschi's (2004) conceptualization, is linked with cyberstalking. Marcum et al. (2018) further studied the perceived consequences of stalking behavior through a self-control perspective.
Results indicated that those with low self-control were less inclined to believe that cyberstalking behaviors would damage a personal relationship, such as a partner seeking revenge for the stalking behavior, becoming angry with the cyberstalker, or dissolving the relationship. In most of these studies, deviant peer association is also linked with cyberstalking behavior. Marcum et al. (2016, 2018) interpreted the deviant peer association results in the context of social learning theory; in terms of Gottfredson and Hirschi's (1990) theory, those with low self-control gravitate toward others with low self-control.

Some research has examined the role of gender within various forms of cybercrime. Low self-control can potentially mediate the gender gap for different forms of cybercrime. Higgins et al. (2006) found that gender differences do exist in software piracy from the Internet and that low self-control reduces these differences. In other words, low self-control played a more important role in the pirating decision-making of males than of females, supporting Gottfredson and Hirschi's (1990) contention that males have lower levels of self-control, leading to increased delinquency. However, Donner (2016) found men more likely to engage in online offending, with the gender gap persisting when controlling for self-control variables. Other studies have examined whether gender remains significant in models that contain low self-control. Higgins (2004) found that low self-control was significantly associated with digital piracy, while gender as a control measure was not significantly linked to piracy.


The abovementioned literature detailed low self-control's influence on the perpetration of various cybercrimes. Also important is the connection between low self-control and various forms of cybercrime victimization. As mentioned previously, Gottfredson and Hirschi's (1990) theory was primarily a theory of deviant behavior, but they recognized the correlation between its main components (risk-taking, impulsivity, lack of empathy, and preferences for simple, easy, and physical tasks) and the likelihood of victimization. The link between low self-control and cyber victimization has been shown in previous empirical research, similar to traditional forms of victimization. The meta-analysis by Pratt et al. (2014) found that the effect of self-control on noncontact forms of victimization, such as cyber victimization, was significantly stronger than on traditional forms of victimization. Bossler and Holt (2010) found that low self-control was associated with various types of victimization in the cyber realm. Ngo and Paternoster (2011) found that those with low self-control were significantly more likely to experience online harassment. Wilsem (2013) found that low self-control was a risk factor for experiencing both hacking victimization and harassment online.

Previous studies have also noted that the link between low self-control and routine activities concepts occurs in cyberspace as well. Low self-control's influence over the convergence of a motivated offender, a suitable target, and a lack of capable guardianship online has contributed to our understanding of cyber victimization. Higgins et al. (2006) found that those with low self-control are more likely to associate with peers who commit cybercrime. Holt and Bossler (2009) found that this raises proximity to potentially motivated offenders and lowers guardianship in online settings, since deviant peers will often victimize friends in the virtual realm as they are the most suitable targets.
To date, almost all of the research using self-control and cybercrime has employed Gottfredson and Hirschi's (1990) version of the theory and an attitudinal or personality measurement of the theoretical construct. Far less research has examined Hirschi's (2004) reconceptualization of self-control theory in the context of cybercrime. In the context of Hirschi's (2004) reconceptualization, Higgins et al. (2008) used data from college students to examine the alternative view of self-control in the context of digital piracy (i.e., the illegal downloading of digital media, including music, movies, and software). The results indicated that the alternative conceptualization of self-control has merit, established here by examining it alongside other conceptualizations and measurements of Gottfredson and Hirschi's (1990) theory. This study provides an early contribution to the literature, and it offers one method of measuring Hirschi's (2004) reconceptualization that others may follow to better understand the role of this version of the theory in the context of cybercrime.

Summary

The purpose of this chapter was to present self-control theory, review the studies that have examined the theory, outline its recent theoretical development, summarize the key findings, and identify future research directions. In its most basic form, self-control theory holds that poor or ineffective parental management leads to lower levels of self-control, and that levels of self-control influence the inclination toward criminal activity. Those with deficits in self-control are more likely to be attracted to crime than those with higher levels of self-control.

In general, the research supports Gottfredson and Hirschi's (1990) theory. A growing body of studies examines only the self-control component of the theory. Specifically, the research shows links between self-control and both general measures of crime and deviance and specific forms of crime and deviance (Pratt and Cullen 2000). The specific forms of crime and deviance have been expanded to cybercrimes, including digital piracy, hacking, cyberstalking, and cyberbullying. The research demonstrates the reach of self-control theory. In addition, it indicates the continued need for further research: in the cybercrime literature, a number of behaviors have largely gone unstudied in the context of self-control theory.

Peer association is another area of importance for understanding Gottfredson and Hirschi's (1990) theory. Researchers have shown support for Gottfredson and Hirschi's (1990) view that peers will reject those with low self-control, because those with low self-control are likely to make poor friends. In both the general crime and the cybercrime literature, deviant peer association has had a consistent link with these behaviors. The vast majority of the research interpreted these results in the context of social learning theory. This has led some to suggest self-control theory may be incorrect and others (Holt and Bossler 2014) to conclude that integrating the theories may not be appropriate. At this point, two prevailing views must be considered.
First, Akers (1998) has argued that the vast majority of learning takes place through differential association, and social learning theory has a component of behavioral control that is consistent with Gottfredson and Hirschi's (1990) theory. The results, then, may need to be considered a smaller version of social learning theory, but this does not satisfy Holt and Bossler's (2014) call for fuller versions of social learning theory to be empirically examined. The other view of these results is Gottfredson and Hirschi's (1990) position: deviant peer association may make perpetrating the behavior easier, rather than indicating that the individual is learning the behavior. For instance, the individual and the group both lack self-control; thus, deviant peers, who are themselves likely to have low self-control, facilitate the perpetration of cybercrimes. To clarify this position, empirical research using questions about the ease and facilitation that peers may provide in performing cybercrimes is necessary. In short, the items that capture deviant peer association need to go beyond the number of peers who may perpetrate the behavior.

The cybercrime literature is limited in the context of Hirschi's (2004) reconceptualization of self-control theory, which is pregnant with research possibilities. Given that self-control theory is a control theory, and control theories address the question "why don't people commit crime?", the measure of self-control should reflect this conceptualization. It should also be noted that the reconceptualization places an emphasis on self-control as an individual propensity for criminal behavior rather than a personality trait that happens to be correlated with criminal behavior. This suggests that the areas outlined above are ripe for development and reassessment. For instance, does the new version of self-control remain stable over time? How different is self-control for males and females? How different is it across races and ethnicities? What are the implications of the new version of self-control for peer association? What behaviors does it relate to?

Few studies examine the link between higher levels of self-control and cybercrime behavior. When doing so, it will be instructive to gain some perspective on the interpretation of a higher self-control measure. To clarify, a measure of higher self-control will have a negative link with forms of cybercrime. This is the opposite of the typical measure of low self-control, the Grasmick et al. (1993) scale, where higher scores indicate lower levels of self-control and thus yield a positive link with offending. Care is important in these interpretations. These are a few questions that are relevant given the recent development of self-control theory. The replication of present findings is important as well; it would speak to the fundamental debates that Gottfredson and Hirschi (1990) are attempting to address.
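The point about scale direction can be illustrated with a small sketch. The data and scale bounds below are hypothetical (the actual Grasmick et al. (1993) scale's item count and scoring are simplified away); the sketch shows only that reverse-coding a low self-control scale flips the sign of its association with offending without changing its magnitude.

```python
# Hypothetical illustration of scale direction. Assume a Grasmick-style
# score where HIGHER values mean LOWER self-control, and an offending
# count. Reverse-coding the scale (so higher = more self-control) flips
# the sign of the correlation but not its strength.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

low_self_control = [10, 14, 18, 22, 30]   # higher = less self-control
offending =        [ 1,  2,  2,  4,  6]   # hypothetical offense counts

# Reverse-code: higher now means MORE self-control (assumed bounds 9-36).
scale_max, scale_min = 36, 9
high_self_control = [scale_max + scale_min - s for s in low_self_control]

r_low = pearson(low_self_control, offending)    # positive link
r_high = pearson(high_self_control, offending)  # same magnitude, negative

print(round(r_low, 3), round(r_high, 3))
```

Because reverse-coding is a linear transformation with slope -1, the two coefficients are exact negatives of each other; only the interpretation of "high scores" changes.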

Cross-References

▶ Cyberstalking
▶ Dating and Sexual Relationships in the Age of the Internet
▶ Digital Piracy
▶ Identity Theft: Nature, Extent, and Global Response
▶ Risk and Protective Factors for Cyberbullying Perpetration and Victimization

References

Agnew, R. (1995). Testing the leading crime theories: An alternative strategy focusing on motivational process. Journal of Research in Crime and Delinquency, 32, 363.
Akers, R. L. (1998). Social learning and social structure: A general theory of crime and deviance. Boston, MA: Northeastern University Press.
Baek, H., Losavio, M. M., & Higgins, G. E. (2016). The impact of low self-control on online harassment: Interaction with opportunity. Journal of Digital Forensics, Security, and Law, 11, 27–42.
Baron, S. W. (2003). Self-control, social consequences and criminal behavior: Street youth and the general theory of crime. Journal of Research in Crime and Delinquency, 40, 403–425.
Bossler, A. M., & Burruss, G. W. (2011). The general theory of crime and computer hacking: Low self-control hackers? In T. J. Holt & B. H. Schell (Eds.), Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 38–67). Hershey, PA: IGI Global.
Bossler, A. M., & Holt, T. J. (2010). The effect of self-control on victimization in the cyberworld. Journal of Criminal Justice, 38(3), 227–236.
Chapple, C. L. (2005). Self-control, peer relations, and delinquency. Justice Quarterly, 22, 89–106.
Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44, 588–608.
Donner, C. M. (2016). The gender gap and cybercrime: An examination of college students’ online offending. Victims & Offenders, 11(4), 556–577.

580

G. E. Higgins and J. Nicholson

Evans, D. T., Cullen, F. T., Burton, V. S., Dunaway, R. G., & Benson, M. L. (1997). The social consequences of self-control: Testing the general theory of crime. Criminology, 35, 475–501.
Gibbs, J. J., & Giever, D. (1995). Self-control and its manifestations among university students: An empirical test of Gottfredson and Hirschi's general theory. Justice Quarterly, 12, 231–256.
Gibbs, J. J., Giever, D. M., & Martin, J. (1998). Parental management and self-control: An empirical test of Gottfredson and Hirschi’s general theory. Journal of Research in Crime and Delinquency, 35, 42–72.
Gibbs, J. J., Giever, D. M., & Higgins, G. E. (2003). A test of Gottfredson and Hirschi’s general theory using structural equation modeling. Criminal Justice and Behavior, 30, 441–458.
Gibson, C., & Wright, J. (2001). Low self-control and coworker delinquency: A research note. Journal of Criminal Justice, 29, 483–492.
Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Stanford, CA: Stanford University Press.
Grasmick, H. G., Tittle, C. R., Bursik, R. J., & Arneklev, B. J. (1993). Testing the core empirical implications of Gottfredson and Hirschi’s general theory of crime. Journal of Research in Crime and Delinquency, 30, 5–29.
Hay, C. (2001). Parenting, self-control, and delinquency: A test of self-control theory. Criminology, 39, 707–736.
Higgins, G. E. (2004). Gender and self-control theory: Are there differences in the measures and the theory’s causal model? Criminal Justice Studies, 17, 33–55.
Higgins, G. E., & Makin, D. A. (2004a). Does social learning theory condition the effects of low self-control on college students’ software piracy? Journal of Economic Crime Management, 2, 1–22. http://www.jecm.org.
Higgins, G. E., & Makin, D. A. (2004b). Does associating with deviant peers condition the correlation that low self-control has on college students’ software piracy? Psychological Reports, 95, 921–931.
Higgins, G. E. (2005). Can low self-control help with the understanding of the software piracy problem? Deviant Behavior, 26, 1–24.
Higgins, G. E., Fell, B. D., & Wilson, A. L. (2006). Digital piracy: Assessing the contributions of an integrated self-control theory and social learning theory. Criminal Justice Studies: A Critical Journal of Crime, Law, and Society, 19, 3–22.
Higgins, G. E., Wolfe, S. E., & Marcum, C. D. (2008). Digital piracy: An examination of multiple conceptualizations and operationalizations of self-control. Deviant Behavior: An Interdisciplinary Journal, 29, 440–460.
Higgins, G. E., Marcum, C., Freiburger, T., & Ricketts, M. (2012). Examining the role of peer influence and self-control on downloading behavior. Deviant Behavior: An Interdisciplinary Journal, 33, 412–423.
Hinduja, S., & Ingram, J. (2008). Self-control and ethical beliefs on the social learning of intellectual property theft. Western Criminology Review, 9(2), 52–72.
Hirschi, T. (2004). Self-control and crime. In R. F. Baumeister & K. D. Vohs (Eds.), Handbook of self-regulation: Research, theory, and applications. New York: Guilford Press.
Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35(1), 20–40.
Holtfreter, K., Reisig, M. D., & Pratt, T. C. (2008). Low self-control, routine activities, and fraud victimization. Criminology, 46(1), 189–220.
Hope, T., Grasmick, H. G., & Pointon, L. J. (2003). The family in Gottfredson and Hirschi’s general theory of crime: Structure, parenting, and self-control. Sociological Focus, 36, 291–311.
Kulig, T. C., Pratt, T. C., Cullen, F. T., Chouhy, C., & Unnever, J. D. (2017). Explaining bullying victimization: Assessing the generality of the low self-control/risky lifestyle model. Victims & Offenders, 12(6), 891–912.
Li, C. K., Holt, T. J., Bossler, A. M., & May, D. C. (2016). Examining the mediating effects of social learning on the low self-control—cyberbullying relationship in a youth sample. Deviant Behavior, 37(2), 126–138.
Lianos, H., & McGrath, A. (2018). Can the general theory of crime and general strain theory explain cyberbullying perpetration? Crime & Delinquency, 64(5), 674–700.


Lynskey, D. P., Winfree, L. T., Esbensen, F. A., & Clason, D. L. (2000). Linking gender, minority group status, and family matters to self-control theory: A multivariate analysis of key self-control concepts in a youth gang context. Juvenile and Family Court Journal, 51, 1–19 (Summer issue).
Marcum, C. D., Higgins, G. E., Ricketts, M. L., & Wolfe, S. (2014). Hacking in high school: Cybercrime perpetration by juveniles. Deviant Behavior: An Interdisciplinary Journal, 35, 581–559.
Marcum, C. D., Poff, B., & Higgins, G. E. (2016). Exploratory investigation on theoretical predictors of the electronic leash. Computers in Human Behavior, 61, 213–218.
Marcum, C. D., Nicholson, J., & Higgins, G. E. (2017). I’m watching you: Cyberstalking behaviors of university students in romantic relationships. American Journal of Criminal Justice, 42, 373–388.
Marcum, C. D., Higgins, G. E., & Nicholson, J. (2018). Crossing boundaries online in romantic relationships: An exploratory study of the perceptions of impact on partners by cyberstalking offenders. Deviant Behavior, 39(6), 716–731.
Meldrum, R. C., Young, J., & Weerman, F. M. (2009). Reconsidering the effect of self-control and delinquent peers: Implications of measurement for theoretical significance. Journal of Research in Crime and Delinquency, 46(3), 353–376.
Mitchell, O., & MacKenzie, D. L. (2006). The stability and resiliency of self-control in a sample of incarcerated offenders. Crime and Delinquency, 52, 432–449.
Ngo, F. T., & Paternoster, R. (2011). Cybercrime victimization: An examination of individual and situational level factors. International Journal of Cyber Criminology, 5(1), 773–793.
Piquero, A. R., & Bouffard, J. (2007). Something old, something new: A preliminary investigation of Hirschi’s redefined self-control. Justice Quarterly, 24, 1–27.
Piquero, A., Gibson, C. L., & Tibbetts, S. G. (2002). Does self-control account for the relationship between binge drinking and alcohol-related behaviours? Criminal Behaviour and Mental Health, 12, 135–154.
Piquero, A. R., MacDonald, J., Dobrin, A., Daigle, L. E., & Cullen, F. T. (2005). Self-control, violent offending, and homicide victimization: Assessing the general theory of crime. Journal of Quantitative Criminology, 21(1), 55–71.
Pratt, T. C., & Cullen, F. (2000). The empirical status of Gottfredson and Hirschi's general theory of crime: A meta-analysis. Criminology, 38, 931–964.
Pratt, T., Turner, M. G., & Piquero, A. (2004). Parental socialization and community context: A longitudinal analysis of the structural sources of low self-control. Journal of Research in Crime and Delinquency, 41, 219–243.
Pratt, T. C., Turanovic, J. J., Fox, K. A., & Wright, K. A. (2014). Self-control and victimization: A meta-analysis. Criminology, 52(1), 87–116.
Reyns, B. W., Henson, B., & Fisher, B. S. (2012). Stalking in the twilight zone: Estimating the extent of cyberstalking victimization and offending among college students. Deviant Behavior, 33, 1–25.
Schreck, C. J., Stewart, E. A., & Fisher, B. S. (2006). Self-control, victimization, and their influence on risky lifestyles: A longitudinal analysis using panel data. Journal of Quantitative Criminology, 22(4), 319–340.
Unnever, J. D., Cullen, F. T., & Pratt, T. C. (2003). Parental management, ADHD, and delinquent involvement: Reassessing Gottfredson and Hirschi’s general theory. Justice Quarterly, 20, 471–501.
Vazsonyi, A. T., & Belliston, L. M. (2007). The family➔low self-control➔deviance: A cross-cultural and cross-national test of self-control theory. Criminal Justice and Behavior, 34, 505–530.
Wilsem, J. V. (2013). Hacking and harassment—Do they have something in common? Comparing risk factors for online victimization. Journal of Contemporary Criminal Justice, 29(4), 437–453.

General Strain Theory and Cybercrime

29

Carter Hay and Katherine Ray

Contents

Introduction ... 584
An Overview of GST and Its Empirical Support ... 585
Using GST to Explain Involvement in Cyberoffending ... 587
  Theoretical Arguments for Using GST to Explain Cyberoffending ... 587
  Summary of Theoretical Arguments ... 591
  Empirical Evidence ... 591
Using GST to Explain the Consequences of Cybervictimization ... 593
  Theoretical Arguments ... 593
  Empirical Evidence ... 594
Conclusion: Addressing Gaps in Knowledge on GST and Cybercrime ... 595
References ... 597

Abstract

An important priority in cybercrime research is to use theory to better understand and organize information on cybercrime offending. Recent efforts in this area have informatively applied theories such as social learning, self-control, and routine activities, but there is room to apply additional theories. Agnew’s general strain theory (GST) provides an appealing possibility for this by emphasizing key causal variables that are neglected in other theories. Specifically, it hypothesizes that strainful social relationships and events give rise to negative emotional states that, in turn, are catalysts for aggressive and criminal behavior. This chapter describes how these arguments can be applied to cybercrime to better understand the causes of cyberoffending and the consequences of cybervictimization. For both areas, the relevant theoretical arguments and empirical evidence are described. The chapter concludes with a discussion of key priorities for future research.

C. Hay (*) · K. Ray
Florida State University, Tallahassee, FL, USA
e-mail: [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_21


Keywords

General strain theory · Negative emotions · Anger · Cyberoffending · Cyberbullying · Cyber dating abuse · Cyberhate · Cyberterrorism

Introduction

Recent decades have seen the expansion of computer technology and the Internet into virtually all aspects of daily life. Behavioral science research now increasingly explores important implications of this development for crime in particular. Specifically, many crimes now fit under the broad umbrella of cybercrime because they are enabled or facilitated by the use of cyberspace or computer technology. Most such acts fit into a set of commonly observed categories, including hacking into a computer system or network without owner permission (cyber-trespassing), using computer technology to illegally access valuable materials (cyber-deception/cybertheft), committing acts related to cyberporn and obscenity (cyberpornography), and using computer technology to harass, bully, stalk, or attack other individuals or entities (cyberviolence) (Wall 2001). In each of these areas, cyber technology has created new and varied opportunities for “traditional” crime or entirely novel forms of crime that can be committed on a large scale, often with extraordinary potential harm. With these developments, there is little surprise that cybercrime is now recognized as a significant threat to the well-being of citizens and to national security in technologically advanced societies (Meško 2018).

This importance of cybercrime gives priority to theoretical scholarship on its causes and consequences. Just as with any area of research, theory can advance cybercrime research by offering novel hypotheses and by helping to organize key facts that emerge from empirical work. And indeed, theoretical efforts of this kind are emerging in cybercrime scholarship to gain insights on why cybercrime offending varies over time or across individuals and geographic units (e.g., Higgins et al. 2012; Holt et al. 2010; Holtfreter et al. 2008).
These efforts have most commonly focused on theories involving routine activities, social learning, and self-control, all of which appear useful for explaining certain aspects of the cybercrime phenomenon. However, as Holt and Bossler (2014) have emphasized, extending cybercrime research into new areas of theoretical inquiry is important, in part because alternative theories may address aspects of cybercrime that have been neglected to this point – “the field needs to examine the breadth of existing and recent criminological theory in order to expand our knowledge of cybercrimes” (p. 33). The present chapter develops the idea that Agnew’s (1992) general strain theory (GST) of crime and deviance provides a promising explanation for certain aspects of the cybercrime phenomenon. As discussed below, key features of this theory make it an appealing option for studying cybercrime, especially forms of it that appear to have their origins in strainful grievances, relationships, and events that trigger negative emotions and retaliatory “corrective” responses from strained individuals. In exploring that application, the chapter begins by discussing the basics of GST,


including its intellectual background, its key hypotheses, and the related empirical evidence. We then describe how GST can be applied to explaining both involvement in cyberoffending and the harmful consequences of cybercrime victimization. For both areas, we describe the GST-derived theoretical arguments that can be made and the empirical evidence on the accuracy of those arguments. We conclude by discussing key gaps in knowledge that still exist.

An Overview of GST and Its Empirical Support

Agnew’s GST fits within the broader strain theory perspective that has historically emphasized economic and social status strain as a key source of criminality (Cloward and Ohlin 1960; Cohen 1955). Central to this broad perspective is the idea that individuals are pressured into crime and delinquency by strainful, undesirable, and unwanted circumstances. As Agnew (1992) has highlighted, this perspective serves an important niche in criminological theory because other theories neglect the role of negative and unwanted circumstances. Control and routine activities theories (e.g., Hirschi 1969; Gottfredson and Hirschi 1990), for example, emphasize the ways in which positive (i.e., beneficial) forms of bonding, social control, and supervision reduce crime and opportunities for crime. Social learning and subcultural perspectives similarly focus on positive associations – associations that are sought out – but do so with attention to how those associations increase crime when they involve ties to criminal and deviant others. Across these perspectives, there is little attention to the significance of truly aversive and noxious circumstances that people wish to avoid. These circumstances remain largely within the domain of strain theories.

Within this broad strain theory perspective, Agnew’s GST is distinguished by its argument that strain comes from many sources, not just from poverty or low socioeconomic status. Specifically, GST sees strain as resulting from any relationship or event in which the individual is not treated as he or she would like or experiences undesired outcomes. This includes any relationship or event in which (1) the individual fails to achieve positively valued goals, (2) there is the threat of or removal of positively valued stimuli, or (3) there is the threat of or presentation of negatively valued stimuli (Agnew 1992, p. 50).
These ideal types of strain increase the likelihood that individuals will experience negative emotions, including anger, anxiety, fear, or depression. Agnew highlighted anger as especially important – it increases crime by increasing the level of perceived injury, creating a desire for retaliation, energizing the individual for corrective action, and lowering inhibitions (Agnew 1992, pp. 59–60). Agnew emphasized, however, that strain will not always result in crime. Indeed, in many instances, strain may evoke corrective actions that are noncriminal and prosocial; for example, an individual may cognitively reinterpret a strainful event to minimize its harm. Similarly, a strained individual may respond to unwanted circumstances with prosocial rather than antisocial lines of action (Agnew 1992, p. 66). Agnew emphasized an ordered process by which this should occur: strain should


lead to criminal or aggressive responses mainly when the strain was experienced in conjunction with circumstances that promoted the criminal adaptations. Agnew identified a number of variables that influence the use of criminal adaptations, including such things as individual coping resources (e.g., high self-control) and conventional social support (e.g., attachment to parents), both of which should diminish the effects of strain on crime. Alternatively, a high predisposition for crime and aggression – as perhaps indicated by prior criminal involvement or by involvement in criminal peer groups – should amplify the effects of strain on crime. These arguments have received significant empirical scrutiny, with many tests seeking to explain juvenile crime in particular. Three important generalizations can be drawn regarding the theory’s accuracy. First, most studies reveal significant positive relationships between strain and crime, even when controlling for key sources of spuriousness, including important causes of crime identified by other theories. In early tests, strain was operationalized in terms of a wide array of negative life events (e.g., parental divorce or death of a loved one), negative relationships with others (including teachers, parents, and peers), and exposure to aversive social environments. More recent tests often have examined a subset of strains that Agnew (2001) later theorized as being especially consequential for crime. These included such things as physical or criminal victimization, bullying from peers, parental rejection, and experiences with discrimination. But across virtually all studies, significant strain-crime relationships have been confirmed (e.g., Agnew et al. 2002; Agnew and White 1992; Broidy 2001; Craig et al. 2017; Hay and Evans 2006; Paternoster and Mazerolle 1994).
Indeed, strain seems to increase many noncriminal behavioral outcomes, including injurious acts against oneself (Hay and Meldrum 2010) and participation in racial insurgency (Broidy and Santoro 2018) and political violence (Pauwels and De Waele 2014). Thus, in line with GST’s predictions, strainful social circumstances appear to be a major driver of behavior. The second and third generalizations drawn from GST research relate to its proposed mediating and moderating processes. Regarding the former, studies commonly have found that the effects of strain on crime are at least partially mediated by negative emotional states, especially anger (Brezina 1998; Moon and Jang 2014; Piquero and Sealock 2000; Rebellon et al. 2012). Agnew (1985), for example, found that roughly half of strain’s effects on serious delinquency were mediated by a measure of trait anger. This is consistent with the idea that strained individuals experience heightened and enduring levels of anger. Rebellon and his colleagues (2012) found a more pronounced pattern – with nearly 100% of strain effects mediated by anger – when examining a measure of situational anger reported by subjects in reference to the specific sources of strain under scrutiny. Thus, on balance, the argument that strain affects behavior by promoting negative emotions has been supported. On moderating processes, however, the results are decidedly more mixed. Many of the conditioning factors identified by Agnew – including self-efficacy, delinquent peers, moral beliefs, and the extent of social support – have not consistently moderated the effects of strain on crime (Agnew and White 1992; Capowich et al. 2001; Hoffmann and Cerbone 1999; Hoffmann and Miller 1998; Mazerolle and Piquero 1997; Paternoster and Mazerolle 1994). Thus, it may be that effects of strain are more uniform across different people and circumstances than the theory predicts. That said, there are exceptions to that pattern (e.g., Baron 2004;


Mazerolle and Maahs 2000). At least one potentially promising line of research in this area highlights the moderating role played by personality variables, including those related to concepts such as low self-control (Agnew et al. 2002; Hay and Evans 2006). Agnew et al. (2002), for example, found that abusive peer relations increased crime only for those with personalities marked by high negative emotionality and weak internal constraints. Taken together, this body of research points to the usefulness of GST for explaining variations in crime. Its key independent variables (sources of strain) consistently predict involvement in crime, and these effects often operate through the theory’s specified mediating variables (negative emotions). Moreover, those effects sometimes are conditional on key moderating factors specified by the theory.

Using GST to Explain Involvement in Cyberoffending

Research on GST reveals its relevance for criminal involvement in general, but is it useful for understanding cyberoffending in particular? Two factors suggest that it is. First, as we address in greater detail below, aversive and noxious experiences – those that elicit strain – appear to be quite relevant to certain forms of cybercrime, especially cyberviolence. Thus, involvement in cybercrime is likely not just a function of weakened social control and exposure to deviant subcultures and cybercrime opportunities (as other theories contend) – cyberoffending also may be affected by bad circumstances that leave individuals feeling strained, angry, and motivated for corrective action. In this sense, GST complements other prominent theories by providing a set of causal variables that would not be otherwise considered. A second appealing feature of GST is that it goes beyond simply adding a new set of independent variables – it also provides testable predictions involving mediating and moderating processes, and this approach is integrative in nature; especially in its consideration of moderated effects, GST draws from other theories (including control, learning, and routine activities theories) to include a wide range of variables relevant to cyberoffending. In this sense, GST offers a comprehensive framework for incorporating a wide array of causal variables. For these reasons, applications of GST to cybercrime are potentially quite fruitful, but one challenge involves the sheer diversity of cybercrimes (and the resulting diversity in relevant etiological factors). Crimes as different as phishing (tricking individuals into disclosing personal information, such as credit card numbers) and the digital exchange of child pornography may be difficult to explain with just a single theory, including GST. Thus, as discussed below, there is good reason to believe that GST’s causal sequence will be more relevant to some cyberoffenses than to others.

Theoretical Arguments for Using GST to Explain Cyberoffending

Agnew (1992, 2001) has presented GST as a theory that is “general” with respect to the wide array of criminal, antisocial, and harmful behaviors it should be able to


explain. Thus, by all indications, its hypothesized causal process can be used to explain cyberoffending in the same way that it explains property, violent, or substance use offending. The straightforward expectation is that variation across individuals in cyberoffending will be affected by exposure to strainful circumstances and relationships. To establish an independent contribution of GST variables, this pattern should emerge even when controlling for other factors, including low self-control, weak social bonds, routine activities, and subcultural affiliations. Next, these effects should be mediated in part by heightened negative emotions – strain should increase such things as anger and anxiety, and these emotions should increase cyberoffending. The importance of this mediated process cannot be overemphasized – numerous theories (including especially social control or rational choice perspectives) predict that disrupted relationships with conventional people or institutions lead to heightened offending (including cyberoffending), but GST’s emphasis on negative emotions offers a specific view for why this should happen. And last, the effects of strain and negative emotions on cyberoffending should be conditional on other factors, including individual qualities and features of the social environment that reflect constraints to criminal coping and predispositions for crime. In the non-cyber GST literature, these constructs have been applied broadly rather than with a direct connection to cyberoffending. For example, high self-esteem or self-efficacy often is presented as a constraint to delinquent coping. Although this may be true, GST-inspired cyberoffending research will benefit from conceptualizing constraints in ways that are more relevant to cyberoffending.
Prominent constraints to cyberoffending may involve lacking mastery of computer technology or easy access to the Internet, with these restrictions potentially coming from parents, schools, employers, or insufficient financial resources. Similarly, “predisposition for crime” can be applied more specifically to cyberoffending by emphasizing prior cyberoffending and enmeshment in online subcultures that promote cyberoffending in particular. Although this straightforward application of GST in which strain affects cyberoffending largely as it affects other forms of offending can be pursued, the wisest application will use the theory in ways that are directly relevant to cyberoffending in particular. This is best accomplished with attention to which cyberoffenses are examined because GST may be especially relevant for some cyberoffenses and perhaps irrelevant to others. Central to this is GST’s unique focus on the mediating role of negative emotions – the cybercrimes most affected by GST processes should be those in which negative emotions are significantly implicated. Such cybercrimes are carried out with malicious and hostile intentions – these are expressive crimes in which feelings of anger and frustration make enacting harm a valued end in and of itself. Many cybercrimes will not fit this description because they are motivated instead by instrumental economically oriented concerns. This appears true for many forms of online fraud, including phishing scams and digital piracy (Holt and Bossler 2014), which arise from utilitarian goals of gaining money or avoiding paying for something. Although GST may not be wholly irrelevant to these crimes – their motives may be rooted in economic strains traditionally emphasized by strain theories (Merton 1938; Wright et al. 2006) – these economically oriented cybercrimes do not seem directly linked to GST’s emphasis on negative emotions (Hinduja 2012; Morris and Higgins 2009).


With this in mind, the most fruitful applications of GST to cyberoffending will be for offenses that are in line with GST’s emphasis on anger, jealousy, hatred, and a desire for retaliation or revenge. These emotions may inspire intentional acts of harm against someone else, and indeed, major forms of cyberviolence in particular seem to fit this pattern quite well. For illustrative purposes, we describe GST’s relevance to three forms of cyberviolence: cyberbullying, cyber dating abuse, and ideologically motivated cyberhate and cyberterrorism. We describe below the theoretical case for GST’s relevance to these cyberoffenses along with the relevant empirical evidence.

Cyberbullying. In the case of cyberbullying, the connection to GST is quite straightforward and has been commonly made (e.g., Lianos and McGrath 2018; Patchin and Hinduja 2011). Cyberbullying involves “willful and repeated harm inflicted through the use of computers, cell phones, and other electronic devices” (Patchin and Hinduja 2011, p. 728). Central to this is the bully’s “willful” attempt to harm the victim. The victim’s harm is neither unintended nor incidental; instead, it is the explicit purpose for the act. Varjas et al. (2010, p. 270) emphasized this motive in noting that in prior qualitative research, cyberbullies often describe how “they gained satisfaction or pleasure from hurting their victims.” Interviews conducted by Varjas et al. (2010) extended these patterns, finding that common motives for cyberbullying included revenge (targeting a person who had wronged them), redirection (“tak[ing] their feelings out on someone else,” p. 271), and jealousy of others. Thus, cyberbullying may result from a causal sequence marked by strainful social interactions that trigger negative emotions.

Cyber dating abuse. A similar pattern can be seen with cyber dating abuse. Defined as “the control, harassment, stalking, and abuse of one’s dating partner via technology and social media” (Zweig et al. 2014, p. 1306), cyber dating abuse is, like cyberbullying, an intentional act of “willful and repeated harm” (Hinduja and Patchin 2009, p. 5). Prior research reveals a variety of motives for cyber dating abuse that are relevant to GST, including a desire for revenge or retaliation (Hinduja and Patchin 2009) and feelings of jealousy, insecurity, or rejection (Bennett et al. 2011; Kellerman et al. 2013; Pronk and Zimmer-Gembeck 2010). Just as with cyberbullying, jealousy is important – it predicts relationship conflict (Christofides et al. 2009) and promotes hostile or abusive behaviors that include stalking, harassment, and releasing personal information without a partner’s consent (Miller et al. 2014). In the cyber context, the abuser’s jealousy often arises from concerns about infidelity related to the partner’s social media activity (Miller et al. 2014). Anger is central to this process as well – the sharing of explicit photos often is intended to humiliate, harass, or punish the victim (Cohen-Almagor 2015; Gerson and Rappaport 2011; Halder and Jaishankar 2013), and this may sometimes include aggression like “writing hurtful things about a partner on their wall” to publicly humiliate them (Zweig et al. 2013, p. 41).

One nuanced pattern within cyber dating abuse merits elaboration. Central to GST is the notion that aggression may amount to “problem-solving behavior” because it helps the individual cope with strain (Brezina 1996, p. 41). As such, aggression leads offenders to feel better afterwards because it reinforces beliefs that “harmdoers get what is coming to them” (Brezina 1996, p. 43) and that

590

C. Hay and K. Ray

their “dignity is not something to be trifled with” (Elmer 1991, p. 135). Some research suggests that this type of process is in play with cyber dating abuse, with cyber abusers labeling their behavior as “venting” in ways that suggest a cathartic intent – the offender experiences anger and indignation and then engages in aggression as a means of coping (Zweig et al. 2013). Lyndon et al. (2011) support this notion, finding that “venting” behavior, in which subjects used Facebook to lash out at ex-partners, was a significant predictor of cyberstalking.

Cyberhate and cyberterrorism. GST’s causal arguments also are relevant to the newly emerging literature on ideologically motivated cyberhate and cyberterrorism. “Cyberhate” involves the use of the Internet or computer technology to spread xenophobic, racist, or misogynistic ideas (Back 2002; Perry and Olsson 2009). As Holt (2012) notes, this activity often revolves around prominent websites, with Stormfront.org serving as one prominent example. This white nationalist website has been popular since the 1990s, at times having nearly 300,000 registered users (Southern Poverty Law Center 2014) on its message boards and social media sites that promote white supremacy, Holocaust denial, and anti-immigrant sentiment. In many nations (including the United States), these activities represent protected free speech, meaning that cyberhate may often be noncriminal. That said, in jurisdictions that prohibit hate speech (including England and Wales), cyber forms of hate crime have received increasing attention from law enforcement (Newsweek 2017). Beyond this, one concern is that cyberhate may encourage hate crimes in physical space. Muller and Schwarz (2018) found, for example, that German municipalities with an overrepresentation of anti-refugee sentiment on Facebook experienced more violent crimes against refugees. Similarly, the Southern Poverty Law Center (2014) documented extensive cyberhate activity among physical hate crime perpetrators.

Forms of cyberterror also are now being clearly documented. Cyberterror has been defined by Pollitt (1998, p. 9) as “the premeditated, politically motivated attack against information, computer systems, and data which result in violence [or harm] against noncombatant targets by subnational groups.” Like physical terrorism, cyberterror is designed to spread fear to a larger audience (beyond the immediate victims of an attack), with the ultimate goal of advancing a specific political, religious, or ideological cause. State-sponsored cyberterrorism is a concern, but cyberterror also comes from smaller groups of individuals that share a connection to Internet-based communities of extremists (Holt et al. 2017).

Although GST rarely has been applied to hate crime or terrorism, its connection to these crimes – and to cyber forms of them in particular – is clear. Specifically, the circumstances of these crimes are consistent with an etiology in which a strainful grievance that is perceived (even if unreasonably) as unfair gives rise to negative emotions like anger, fear, jealousy, and resentment. Those emotions, in turn, encourage corrective retaliatory action. Walters (2011) discusses this in reference to physical hate crimes, but its relevance to cyberhate appears to be just as clear. Qualitative and ethnographic studies reveal that hate crimes often are committed by those of low socioeconomic standing who see their economic instability as the fault of minorities perceived as “invaders of indigenous . . . group territory” (p. 316; see also Ray and Smith 2002). When the majority group’s adversity is blamed on

29

General Strain Theory and Cybercrime

591

others, this triggers feelings of “unfairness, anger, frustration, and even a sense of victimisation” that justify hate crimes (p. 317). One point bears emphasizing, however: although strains regarding socioeconomic instability may be important, hate crimes can originate from other “strains.” For example, some hate crime offenders are agitated by movement toward such things as gender equality and cultural and legal inclusiveness for LGBT individuals (Perry and Olsson 2009).

Agnew (2010) sees similar processes playing out with respect to terrorism. In reviewing case studies on major terrorist organizations (including the Irish Republican Army, Hezbollah, and Al-Qaeda), Agnew finds that the causes they promote are linked to perceptions of high-magnitude strains, including “death and rape, major threats to livelihood, dispossession, large scale imprisonment or detention, and/or attempts to eradicate ethnic identity” (p. 137), all of which produce negative emotions such as anger, humiliation, and the desire for vengeance. Similar arguments are made by Maier-Katkin et al. (2009) in their discussion of “societal strain” and “angry aggression” as causes of genocide and crimes against humanity. Taken together, this suggests that terrorism in physical space follows a logic consistent with GST. This may be true for cyberterror as well.

Summary of Theoretical Arguments

The wide-ranging theoretical arguments presented above point to the compatibility between Agnew’s GST and cyberoffending. More importantly, they point to clear testable hypotheses that can guide cyberoffending research and help organize the results of empirical evidence. We see three specific hypotheses in particular.

First, there should be a positive association between strainful relationships and events and involvement in cyberviolence, including cyberbullying, cyber dating abuse, cyberhate, and cyberterrorism. Second, these relationships should be mediated by negative emotions, including anger, frustration, jealousy, insecurity, hatred, and a desire for revenge or retaliation; thus, the experience of these emotions should be positively associated with cyberviolence. Third, any effects of strain or negative emotions on cyberviolence should be amplified by factors that make cyberviolence an easy, convenient, or feasible coping response. For example, such things as mastery of computer technology, easy and frequent access to the Internet, and heavy involvement in cyber subcultures should amplify the effects of strain and negative emotions on involvement in cyberviolence.

Empirical Evidence

There is an emerging body of empirical evidence that bears on these hypotheses. Some studies have been presented as direct tests of GST, while others have considered variables that broadly relate to GST (even though their inclusion was not directly inspired by the theory). From these different types of tests, three generalizations can be made regarding the links between strain, negative emotions, and involvement in cyberoffending.


The first conclusion involves the consistent positive relationship between indicators of strainful relationships or events and involvement in cyberoffending. This relationship has been most clearly established for involvement in cyberbullying (Jang et al. 2014; Lianos and McGrath 2018; Paez 2018; Patchin and Hinduja 2011). Using the 2009–2010 Health Behavior in School-Aged Children Survey, Paez (2018) found that strains related to family dissatisfaction, low peer acceptance, and negative views about school increased participation in cyber and traditional bullying. Similar findings emerged in Patchin and Hinduja’s (2011) study of roughly 2,000 middle school students. They used a composite measure of strain that included items for such things as being treated unfairly by others, being a victim of crime, and experiencing money problems or a breakup with a boyfriend or girlfriend. Strain significantly predicted both cyberbullying and traditional bullying, with its effects generally stronger and more consistently significant than the effects observed for demographic variables such as gender, race, and age.

The strain-cyberoffending relationship also is supported in research on cyber dating abuse. Sirianni (2015) examined strains rooted in the dating relationship (including such things as infidelity, dissolution of the relationship, or unreturned calls), finding that these strains increased the likelihood of cyber-revenge porn perpetration. Moreover, the magnitude of the strain was consequential – those experiencing higher-magnitude strain sought to inflict harm on their partner (i.e., revenge, punishment, humiliation), while those experiencing lower-magnitude strain reported an interest in alleviating negative feelings about themselves.

For the outcomes of cyberhate and cyberterrorism, direct tests of GST or quantitative examinations of specific sources of strain are essentially nonexistent. Thus, in this area, firm conclusions cannot be drawn beyond the qualitative and case study evidence indicating that grievances and hardships that are reasonably viewed as strain are catalysts for cyberhate and cyberterrorism (Agnew 2010; Forester and Morrison 1994; Ray and Smith 2002).

A second reliable conclusion in this area relates to the mediating process hypothesized by GST, with studies commonly finding that effects of strain on cyberviolence are partially mediated by negative emotions (Ak et al. 2015; Lianos and McGrath 2018; Smischney 2016). Using a survey of Internet-active young adults, Lianos and McGrath (2018) found that anger partially mediated the effects of a composite strain measure (cybervictimization, lack of social support, academic strain, and financial strain) on cyberbullying; roughly 20% of the total effect of strain on cyberbullying was explained by its effect on anger. Negative emotions such as depression, anxiety, and stress also have been examined, with Smischney (2016) finding that these emotions mediate the relationship between cybervictimization and both cyber and traditional bullying perpetration. Similar mediating patterns, although limited in nature, have been found in the cyber dating abuse literature: individuals who felt “more angered” by a provocation (e.g., infidelity or a missed call) were more likely to share revenge porn to damage their partner (Sirianni 2015).

A third empirical conclusion relates to moderating processes. Few studies have examined this issue, but evidence of moderation has emerged, although not always in the expected manner. For example, in a study of Hong Kong college students (Leung et al. 2018), the harmful effects of cyberbullying victimization (strain) on cyberbullying perpetration were amplified among those reporting strong friendship “security” by agreeing with statements such as “I can reconcile with my friend after a


fight.” This finding is counterintuitive in suggesting that better friendships may exacerbate the consequences of strain. The authors interpreted this finding as suggesting that some degree of aggression is tolerated (perhaps encouraged) in highly secure relationships that are not easily damaged. More intuitively, the same study observed that the effects of strain on cyberbullying were diminished among those reporting close relationships with peers (e.g., “I often think about my friend”). Importantly, however, both instances of moderation were observed among females but not males. An additional study by Wright and Li (2013) found that peer rejection amplifies the relationship between cybervictimization and later cyberaggression. This is firmly in line with the GST expectation that weak social support intensifies the consequences of strain.

Other important moderating variables in the strain-cyberbullying relationship include family and personality variables. For example, Hood and Duffy (2018) found that the relationship between cybervictimization and cyberbullying perpetration was diminished by strong parental monitoring and amplified by moral disengagement (a proneness to distance oneself from accepted moral rules or actions). Alternatively, they found no diminishing effect of empathy and no amplifying effect of time spent on the Internet. Thus, taken together, these studies support GST’s suggestion that effects of strain on cyberbullying may be conditional upon other factors, but support for this is inconsistent.

Using GST to Explain the Consequences of Cybervictimization

The research above largely pertains to the role that strain plays in affecting cybercrime offending. An additional application involves the causal importance of cybercrime victimization. Such victimization fits squarely within GST’s definition of strain. By the logic of GST, it therefore should affect such things as negative emotions and involvement in crime and deviance of both cyber and non-cyber forms. In this sense, a full application of GST recognizes that cybercrime can be considered not only as an outcome (i.e., as a dependent variable) but also as a causal variable (i.e., as an independent variable). In this section, we consider this second application of GST, focusing especially on cyberbullying victimization and cyber dating abuse; these are the forms of cybervictimization that have been most studied as predictors of individual behavior and well-being.

Theoretical Arguments

In an important revision of GST, Agnew (2001) argued that the theory should devote greater attention to which strains in particular are of greatest consequence. He argued that some strains matter more than others because they are, for example, high in magnitude (rather than merely annoying) or unjust (rather than merely unfortunate). Strains satisfying these conditions should disrupt life more severely and produce more harmful consequences. Agnew highlighted criminal victimization and peer abuse as examples of this, describing the latter as involving “insults/ridicule, gossip, threats,


attempts to coerce, and physical assaults” (p. 346). Agnew (2001) made no reference to cyber forms of these strains, but his attention to abusive peer relations would later be extended to cyberspace, with much of this research focusing on the harmful consequences of cyberbullying victimization (Hay et al. 2010; Hinduja and Patchin 2010) and cyber dating abuse (Zweig et al. 2014). Indeed, in making this extension, some scholars have suggested that cyber forms of victimization may be especially consequential. Hay et al. (2010) argued that the electronic nature of cyberbullying may allow it to go unnoticed by parents or teachers. Also, once information is posted to the Web, it may be difficult to remove, and it can reach a wide audience. Mason (2008, p. 324) expresses similar concerns in emphasizing that cyberbullying “can harass individuals even when [they are] not at or around school . . . [U]nlike traditional forms of bullying, home may no longer be a place of refuge.”

GST’s expectations for these types of strains match those for other strains – they should increase problematic outcomes for individuals through a mediated process in which cybervictimization produces negative emotions that, in turn, increase criminal, antisocial, or harmful behaviors. Because bullying and dating abuse are salient to a wide range of behavioral science disciplines, much of this research has attended to harmful outcomes that are noncriminal in nature, including such things as suicidal ideation and self-harm. Moreover, just as GST normally predicts, there is an expectation that the consequences of cybercrime victimization for these outcomes will be moderated by individual qualities and features of the social environment that reflect constraints on or predispositions for criminal, antisocial, and harmful coping.

Empirical Evidence

In numerous ways, the empirical evidence supports the arguments of GST. Many studies indicate that cybervictimization affects criminal outcomes such as substance use, violence, school delinquency, and weapon carrying (Beran and Li 2005; Mitchell et al. 2007; Ybarra et al. 2007; McCuddy and Esbensen 2017). For example, in a longitudinal study of a large sample of US middle school students, McCuddy and Esbensen (2017) found effects of being a cyberbullying victim on numerous forms of delinquency. Other studies document noncriminal but damaging responses to cyberbullying, including depression, anxiety, paranoia, fear, difficulty concentrating, negative school performance, self-harm, and suicidal ideation (Cassidy et al. 2013; Henson et al. 2013; Hay and Meldrum 2010; Holt et al. 2013; Schenk and Fremouw 2012). Beyond cyberbullying, these negative consequences have been demonstrated for victimizations involving cyberstalking (Wright 2018) and cyberhate, with Teo et al. (2016) finding cyberhate victimization to predict lower well-being and happiness.

Some studies considered a possibility raised earlier that effects of cybervictimization may be greater than effects of traditional “physical space” forms of the same victimization. The weight of evidence supports this possibility. Holfeld and Leadbeater (2018), for example, found in a study of 4th–6th grade Canadian children that effects of cybervictimization on later aggression exceeded the effects of traditional


victimization. Similar patterns were observed by Hay et al. (2010), McCuddy and Esbensen (2017), and Bonanno and Hymel (2013) for outcomes such as offending, self-harm, suicidal ideation, depressive symptoms, and anxiety. This pattern extends to cyber dating abuse, with cyberstalking victims more likely than traditional stalking victims to adopt self-protective behaviors (e.g., changing a job or quitting school) despite having a shorter mean duration of victimization (Nobles et al. 2014).

Mediating and moderating processes also have been examined. Hay and Meldrum (2010) considered both processes in a study of middle and high school students in the southeastern United States. With respect to mediation, they found that a composite measure of negative emotions (including anxiety, depression, and low self-worth) explained 25–30% of the effects of cyberbullying victimization on self-harm and suicidal ideation. With respect to moderation, they found that high self-control and exposure to good parenting reduced the harmful effects of cyberbullying. For good parenting, this was true for suicidal ideation, while for self-control, this was true for both self-harm and suicidal ideation. Similar findings emerged from McLoughlin, Spears, and Taddeo (2017). Li et al. (2018) also considered both mediation and moderation, finding that psychological insecurity explained roughly 50% of the effects of cyberbullying victimization on depression. On moderation, this study revealed that cyberbullying had its greatest effect on depression among those who experienced high levels of social support. This is counter to GST’s expectation that social support generally should serve as a constraint on harmful or antisocial responses to strain. Further analyses indicated, however, that this followed in part from a pattern in which those with low social support were at greater risk for depression in the first place – regardless of whether they had been cyberbullied.
Taken together, these studies and others indicate consistent support for GST’s mediating arguments but only mixed support for its moderating arguments (see also Hood and Duffy 2018; Leung et al. 2018).

Conclusion: Addressing Gaps in Knowledge on GST and Cybercrime

An emerging emphasis on theory and theory testing represents an important development in cybercrime scholarship. This chapter has argued that Agnew’s GST makes an important contribution to this development because of its attention to strainful relationships and events and the negative emotional states they engender. These types of variables are neglected by existing theories, but as the prior discussions clarify, they are applicable to important forms of cyberviolence, especially cyberbullying, cyber dating abuse and stalking, and cyberhate and cyberterrorism. The theory’s arguments also are useful for understanding the process by which cybervictimization affects behavioral and emotional outcomes for victims. A new but growing body of empirical evidence supports these applications of GST by revealing numerous findings: strainful events increase cyberoffending; effects of strain often are explained by heightened negative emotions, especially anger; these effects are sometimes (but certainly not always) conditional on circumstances and qualities that reflect an access to and predisposition for criminal coping; and


cybervictimization significantly affects individual outcomes, often in ways that parallel the mediating and moderating processes just described. With these findings in mind, GST merits important consideration in cybercrime research, and a series of GST-inspired studies in recent years suggests an appreciation of this point (Hay and Meldrum 2010; Lianos and McGrath 2018; Paez 2018; Patchin and Hinduja 2011).

It bears emphasizing that there is much still to learn, especially regarding gaps in the empirical scrutiny of GST’s cybercrime hypotheses. Most notably, cyberbullying and (to a lesser extent) cyber dating abuse are the most heavily investigated outcomes in GST-cybercrime research, leaving other outcomes neglected. Indeed, much of what is known about the link between strain, negative emotions, and cybercrime is gleaned from research on these two outcomes. Thus, more research is needed on the wide variety of neglected cybercrimes. This includes cyberhate and cyberterrorism, which, as argued above, are likely to have special relevance to GST. Other cybercrimes – related to such things as cyber-trespass, cyber-theft, and cyberporn/obscenity – also may be affected by strainful relationships and events. On the other hand, involvement in digital piracy may be largely unaffected by GST processes (Hinduja 2012; Morris and Higgins 2009). Thus, as researchers seek to fill gaps in the literature, new studies should be designed with an appreciation for how results may vary across different cyberoffenses.

Further research also is needed on mediating and moderating processes. To be clear, much attention has been devoted to these processes, especially in connection to cyberbullying. But even for that outcome, there are key neglected areas. Most notably, with respect to mediating mechanisms, anger remains the most examined negative emotion; quantitative examinations that simultaneously consider a wide range of negative emotions have been rare.
And on moderating processes, there remain significant questions about the accuracy of GST’s hypotheses. As described, some analyses confirm GST’s expectations that seemingly “good” circumstances (e.g., supportive relationships, prosocial skills and qualities) buffer the harmful effects of strain on cyberoffending and of cybervictimization on behavior. Other studies find no such buffering effect, and still others reveal the opposite pattern – one in which “bad” circumstances (e.g., poor social support) crowd out (i.e., diminish) the effects of cyberstrain, perhaps by greatly heightening the odds of depression and delinquency regardless of one’s exposure to cyberstrain. With these patterns in mind, more research is needed on moderating processes, and the results could suggest needed modifications to GST’s arguments.

Last, it bears emphasizing that across all of these areas, rigorous tests are needed to better address the causality concerns that naturally emerge in the behavioral sciences. Many studies have examined cybercrime causes and consequences with cross-sectional data that cannot establish causal order; also, in most studies, there is minimal attention to issues of spuriousness and self-selection. This appears true not just in tests of GST but also in cybercrime tests of other theories. In this sense, research on the causes and consequences of cybercrime is theoretically suggestive more than it is conclusive. This is far from surprising – theoretically oriented cybercrime research is a new development, and there is much more research to be done. As the preceding sections have argued, Agnew’s GST can play a central role in such research.


References

Agnew, R. (1985). A revised strain theory of delinquency. Social Forces, 64(1), 151.
Agnew, R. (1992). Foundation for a general strain theory of crime and delinquency. Criminology, 30, 47–87.
Agnew, R. (2001). Building on the foundation of general strain theory: Specifying the types of strain most likely to lead to crime and delinquency. Journal of Research in Crime and Delinquency, 38, 319–361.
Agnew, R. (2010). A general strain theory of terrorism. Theoretical Criminology, 14(2), 131–153.
Agnew, R., & White, H. R. (1992). An empirical test of general strain theory. Criminology, 30, 475–499.
Agnew, R., Brezina, T., Wright, J. P., & Cullen, F. T. (2002). Strain, personality traits, and delinquency: Extending general strain theory. Criminology, 40, 43–72.
Ak, S., Ozdemir, Y., & Kuzucu, Y. (2015). Cybervictimization and cyberbullying: The mediating role of anger, don’t anger me! Computers in Human Behavior, 49, 437–443.
Back, L. (2002). Aryans reading Adorno: Cyber-culture and twenty-first century racism. Ethnic and Racial Studies, 25(4), 628–651.
Baron, S. W. (2004). General strain, street youth and crime: A test of Agnew’s revised theory. Criminology, 42, 457–483.
Bennett, D., Guran, E., Ramos, M., & Margolin, G. (2011). College students’ electronic victimization in friendships and dating relationships: Anticipated distress and associations with risky behaviors. Violence and Victims, 26(4), 410–429.
Beran, T., & Li, Q. (2005). Cyber-harassment: A study of a new method for an old behavior. Journal of Educational Computing Research, 32, 265–277.
Bonanno, R. A., & Hymel, S. (2013). Cyber bullying and internalizing difficulties: Above and beyond the impact of traditional forms of bullying. Journal of Youth and Adolescence, 42(5), 685–697.
Brezina, T. (1996). Adapting to strain: An examination of delinquent coping responses. Criminology, 34(1), 39–60.
Brezina, T. (1998). Adolescent maltreatment and delinquency: The question of intervening processes. Journal of Research in Crime and Delinquency, 35(1), 71–99.
Broidy, L. M. (2001). A test of general strain theory. Criminology, 39(1), 9–36.
Broidy, L. M., & Santoro, W. A. (2018). General strain theory and racial insurgency: Assessing the role of legitimate coping. Justice Quarterly, 35(1), 162–189.
Capowich, G. E., Mazerolle, P., & Piquero, A. (2001). General strain theory, situational anger, and social networks: An assessment of conditioning influences. Journal of Criminal Justice, 29, 445–461.
Cassidy, W., Faucher, C., & Jackson, M. (2013). Cyberbullying among youth: A comprehensive review of current international research and its implications and application to policy and practice. School Psychology International, 34(6), 575–612.
Christofides, E., Muise, A., & Desmarais, S. (2009). Information disclosure and control on Facebook: Are they two sides of the same coin or two different processes? Cyberpsychology & Behavior, 12(3), 341–345.
Cloward, R. A., & Ohlin, L. E. (1960). Delinquency and opportunity: A theory of delinquent gangs. New York: Free Press.
Cohen, A. K. (1955). Delinquent boys: The culture of the gang. New York: Free Press.
Cohen-Almagor, R. (2015). Netcitizenship: Addressing cyberrevenge and sexbullying. Journal of Applied Ethics and Philosophy, 7, 14–23.
Craig, J. M., Cardwell, S. M., & Piquero, A. R. (2017). The effects of criminal propensity and strain on later offending. Crime and Delinquency, 63(13), 1655–1681.
Elmer, N. (1991). What do children care about justice? The influence of culture and cognitive development. In R. Vermunt & H. Steensma (Eds.), Social justice in human relations (Societal and psychological origins of justice, Vol. 1). New York: Plenum.


Forester, T., & Morrison, P. (1994). Computer ethics: Cautionary tales and ethical dilemmas in computing. Cambridge, MA: The MIT Press.
Gerson, R., & Rappaport, N. (2011). Cyber cruelty: Understanding and preventing the new bullying. Adolescent Psychiatry, 1(1), 67–71.
Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Palo Alto: Stanford University Press.
Halder, D., & Jaishankar, K. (2013). Revenge porn by teens in the United States and India: A sociolegal analysis. International Annals of Criminology, 51(1–2), 85–111.
Hay, C., & Evans, M. (2006). Violent victimization and involvement in delinquency: Examining predictions from general strain theory. Journal of Criminal Justice, 34, 261–274.
Hay, C., & Meldrum, R. (2010). Bullying victimization and adolescent self-harm: Testing hypotheses from general strain theory. Journal of Youth and Adolescence, 39, 446–459.
Hay, C., Meldrum, R., & Mann, K. (2010). Traditional bullying, cyber bullying, and deviance: A general strain theory approach. Journal of Contemporary Criminal Justice, 26(2), 130–147.
Henson, B., Reyns, B. W., & Fisher, B. S. (2013). Fear of crime online? Examining the effect of risk, previous victimization, and exposure on fear of online interpersonal victimization. Journal of Contemporary Criminal Justice, 29(4), 475–497.
Higgins, G. E., Marcum, C. D., Freiburger, T. L., & Ricketts, M. L. (2012). Examining the role of peer influence and self-control on downloading behavior. Deviant Behavior, 33(5), 412–423.
Hinduja, S. (2012). General strain, self-control, and music piracy. International Journal of Cyber Criminology, 6(1), 951–967.
Hinduja, S., & Patchin, J. W. (2009). Bullying beyond the schoolyard: Preventing and responding to cyberbullying. Thousand Oaks: Sage.
Hinduja, S., & Patchin, J. W. (2010). Bullying, cyberbullying, and suicide. Archives of Suicide Research, 14(3), 206–221.
Hirschi, T. (1969). Causes of delinquency. Berkeley: University of California Press.
Hoffmann, J. P., & Cerbone, F. G. (1999). Stressful life events and delinquency escalation in early adolescence. Criminology, 9, 33–51.
Hoffmann, J. P., & Miller, A. S. (1998). A latent variable analysis of strain theory. Journal of Quantitative Criminology, 14, 83–110.
Holfeld, B., & Leadbeater, B. J. (2018). The interrelated effects of traditional and cybervictimization on the development of internalizing symptoms and aggressive behaviors in elementary school. Merrill-Palmer Quarterly, 64(2), 220–247.
Holt, T. J. (2012). Exploring the intersections of technology, crime, and terror. Terrorism and Political Violence, 24(2), 337–354.
Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35, 20–40.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2010). Social learning and cyber-deviance: Examining the importance of a full social learning model in the virtual world. Journal of Crime and Justice, 33(2), 31–61.
Holt, T. J., Chee, G., Ng, E. A. H., & Bossler, A. M. (2013). Exploring the consequences of bullying victimization in a sample of Singapore youth. International Criminal Justice Review, 23(1), 25–40.
Holt, T. J., Freilich, J. D., & Chermak, S. M. (2017). Exploring the subculture of ideologically motivated cyber-attacks. Journal of Contemporary Criminal Justice, 33(3), 212–233.
Holtfreter, K., Reisig, M. D., & Pratt, T. (2008). Low self-control and fraud: Offending, victimization, and their overlap. Criminal Justice and Behavior, 37(2), 188–203.
Hood, M., & Duffy, A. L. (2018). Understanding the relationship between cyber-victimisation and cyber-bullying on social network sites: The role of moderating factors. Personality and Individual Differences, 133, 103–108.
Jang, H., Song, J., & Kim, R. (2014). Does the offline bully-victimization influence cyberbullying behavior among youths? Application of general strain theory. Computers in Human Behavior, 31, 85–93.
Kellerman, I., Margolin, G., Borofsky, L. A., Baucom, B. R., & Iturralde, E. (2013). Electronic aggression among emerging adults: Motivations and contextual factors. Emerging Adulthood, 1, 293–304.

29  General Strain Theory and Cybercrime

Leung, A. N. M., Wong, N., & Farver, J. M. (2018). Cyberbullying in Hong Kong Chinese students: Life satisfaction, and the moderating role of friendship qualities on cyberbullying victimization and perpetration. Personality and Individual Differences, 133, 7–12.
Li, T., Li, D., Li, X., Zhou, Y., Sun, W., Wang, Y., & Li, J. (2018). Cyber victimization and adolescent depression: The mediating role of psychological insecurity and the moderating role of perceived social support. Children and Youth Services Review, 94, 10–19.
Lianos, H., & McGrath, A. (2018). Can the general theory of crime and general strain theory explain cyberbullying perpetration? Crime & Delinquency, 64(5), 674–700.
Lyndon, A., Bonds-Raacke, J., & Cratty, A. D. (2011). College students' Facebook stalking of ex-partners. Cyberpsychology, Behavior and Social Networking, 14, 711–716.
Maier-Katkin, D., Mears, D. P., & Bernard, T. J. (2009). Towards a criminology of crimes against humanity. Theoretical Criminology, 13(2), 227–255.
Mason, K. L. (2008). Cyberbullying: A preliminary assessment for school personnel. Psychology in the Schools, 45(4), 323–348.
Mazerolle, P., & Maahs, J. (2000). General strain and delinquency: An alternative examination of conditioning influences. Justice Quarterly, 17, 753–778.
Mazerolle, P., & Piquero, A. (1997). Violent responses to strain: An examination of conditioning influences. Violence and Victims, 12, 323–343.
McCuddy, T., & Esbensen, F. A. (2017). After the bell and into the night: The link between delinquency and traditional, cyber, and dual-bullying victimization. Journal of Research in Crime and Delinquency, 54(3), 409–441.
McLoughlin, L., Spears, B., & Taddeo, C. (2017). The importance of social connection for cybervictims: How connectedness and technology could promote mental health and wellbeing in young people. International Journal of Emotional Education, 10(1), 5–24.
Merton, R. K. (1938). Social structure and anomie. American Sociological Review, 3, 672–682.
Meško, G. (2018). On some aspects of cybercrime and cybervictimization. European Journal of Crime, Criminal Law, and Criminal Justice, 26(3), 189.
Miller, L. L., Martin, L. T., Yeung, D., Trujillo, M., & Timmer, M. J. (2014). Information and communication technologies to promote social and psychological well-being in the Air Force: A 2012 survey of airmen. Santa Monica: RAND Corporation.
Mitchell, K. J., Ybarra, M., & Finkelhor, D. (2007). The relative importance of online victimization in understanding depression, delinquency, and substance use. Child Maltreatment, 12(4), 314–324.
Moon, B., & Jang, S. J. (2014). A general strain approach to psychological and physical bullying: A study of interpersonal aggression at school. Journal of Interpersonal Violence, 29(12), 2147.
Morris, R. G., & Higgins, G. E. (2009). Neutralizing potential and self-reported digital piracy: A multitheoretical exploration among college undergraduates. Criminal Justice Review, 34(2), 173–195.
Müller, K., & Schwarz, C. (2018). Fanning the flames of hate: Social media and hate crime (May 21, 2018). Available at SSRN: https://ssrn.com/abstract=3082972 or https://doi.org/10.2139/ssrn.3082972.
Newsweek. (2017). Online hate crimes to be taken just as seriously as offline offenses in England, Wales. By Stav Ziv, published on August 21, 2017.
Nobles, M. R., Reyns, B. W., Fox, K. A., & Fisher, B. S. (2014). Protection against pursuit: A conceptual and empirical comparison of cyberstalking and stalking victimization among a national sample. Justice Quarterly, 31(6), 986–1014.
Paez, G. R. (2018). Cyberbullying among adolescents: A general strain theory perspective. Journal of School Violence, 17(1), 74–85.
Patchin, J. W., & Hinduja, S. (2011). Traditional and nontraditional bullying among youth: A test of general strain theory. Youth and Society, 43(2), 727–751.
Paternoster, R., & Mazerolle, P. (1994). General strain theory and delinquency: A replication and extension. Journal of Research in Crime and Delinquency, 31(3), 235–263.
Pauwels, L., & De Waele, M. (2014). Youth involvement in politically motivated violence: Why do social integration, perceived legitimacy, and perceived discrimination matter? International Journal of Conflict and Violence, 8(1), 134–153.


C. Hay and K. Ray

Perry, B., & Olsson, P. (2009). Cyberhate: The globalization of hate. Information & Communications Technology Law, 18(2), 185–199.
Piquero, N. L., & Sealock, M. D. (2000). Generalizing general strain theory: An examination of an offending population. Justice Quarterly, 17(3), 449–484.
Pollitt, M. M. (1998). Cyberterrorism: Fact or fancy? Computer Fraud and Security, 2, 8–10.
Pronk, R. E., & Zimmer-Gembeck, M. J. (2010). It's "mean," but what does it mean to adolescents? Relational aggression described by victims, aggressors, and their peers. Journal of Adolescent Research, 25(2), 175–204.
Ray, L., & Smith, D. (2002). Hate crime, violence and cultures of racism. In P. Iganski (Ed.), The hate debate. London: Profile Books.
Rebellon, C. J., Manasse, M. E., Van Gundy, K. T., & Cohn, E. S. (2012). Perceived injustice and delinquency: A test of general strain theory. Journal of Criminal Justice, 40(3), 230.
Schenk, A. M., & Fremouw, W. J. (2012). Prevalence, psychological impact, and coping of cyberbully victims among college students. Journal of School Violence, 11, 21–37.
Sirianni, J. M. (2015). Bad romance: Exploring the factors that influence revenge porn sharing amongst romantic partners. New York: State University of New York.
Smischney, T. (2016). Examining the impact of cyberbullying victimization in a postsecondary institution: Utilizing general strain theory to explain the use of negative coping mechanisms. East Lansing: Michigan State University.
Southern Poverty Law Center. (2014). White homicide worldwide. By Heidi Beirich, published March 31, 2014.
Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2016). Online hate and harmful content: Cross-national perspectives. London: Routledge.
Varjas, K., Talley, J., Meyers, J., Parris, L., & Cutts, H. (2010). High school students' perception of motivations for cyberbullying: An exploratory study. Western Journal of Emergency Medicine, 11(3), 269–273.
Wall, D. S. (2001). Cybercrimes and the internet. In D. S. Wall (Ed.), Crime and the internet (pp. 1–17). New York: Routledge.
Walters, M. A. (2011). A general theories of hate crime? Strain, doing difference and self control. Critical Criminology, 19(4), 313–330.
Wright, M. F. (2018). Cyberstalking victimization, depression, and academic performance: The role of perceived social support from parents. Cyberpsychology, Behavior and Social Networking, 21(2), 110–116.
Wright, M. F., & Li, Y. (2013). The association between cybervictimization and subsequent cyber aggression: The moderating effect of peer rejection. Journal of Youth and Adolescence, 42, 662–674.
Wright, J. P., Cullen, F. T., Agnew, R. S., & Brezina, T. (2006). "The root of all evil"? An exploratory study of money and delinquent involvement. Justice Quarterly, 18(2), 239–268.
Ybarra, M. L., Espelage, D. L., & Mitchell, K. J. (2007). The co-occurrence of internet harassment and unwanted sexual solicitation victimization and perpetration: Associations with psychosocial indicators. Journal of Adolescent Health, 41(6), S31–S41.
Zweig, J. M., Dank, M., Yahner, J., & Lachman, P. (2013). The rate of cyber dating abuse among teens and how it relates to other forms of teen dating violence. Journal of Youth and Adolescence, 42, 1063–1077.
Zweig, J. M., Lachman, P., Yahner, J., & Dank, M. (2014). Correlates of cyber dating abuse among teens. Journal of Youth and Adolescence, 43(8), 1306–1321.

Critical Criminology and Cybercrime

30

Adrienne L. McCarthy and Kevin F. Steinmetz

Contents
Critical Criminology and Cybercrime ......................................................... 602
Critical Criminology and Critical Social Theory: Origins ........................... 603
Radical Criminology ..................................................................................... 604
Radical Criminology and Cybercrime .......................................................... 607
Postmodern Criminology .............................................................................. 612
Conclusion ..................................................................................................... 616
References ..................................................................................................... 617

Abstract

Critical criminology consists of a diverse assortment of theoretical perspectives that share an attunement to the role of power and conflict in crime, criminalization, and crime control, yet its applications to cybercrime issues are lacking. Applying these perspectives to cybercrime helps us recognize that online criminal actors should be understood as agents navigating social structure, intergroup conflict, and power, and draws attention to kinds of social harm beyond those derived from legal definitions.

Adrienne L. McCarthy is a doctoral student of sociology in the Department of Sociology, Anthropology and Social Work at Kansas State University. Kevin F. Steinmetz is an Associate Professor of Sociology in the Department of Sociology, Anthropology and Social Work at Kansas State University. A. L. McCarthy (*) · K. F. Steinmetz Department of Sociology, Anthropology and Social Work, Kansas State University, Manhattan, KS, USA e-mail: [email protected]; [email protected] © The Author(s) 2020 T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_27



Keywords

Critical criminology · Cybercrime · Theory · Marxism · Radical criminology · Postmodernism

Critical Criminology and Cybercrime

Critical criminology consists of a diverse assortment of theoretical perspectives that share an attunement to the role of power and conflict in crime, criminalization, and crime control, including radical criminology, cultural criminology, postmodern criminology, peacemaking criminology, feminist criminology, and critical race theory, to name a few (Bernard et al. 2016; DeKeseredy 2011). These perspectives have been deployed to understand a constellation of crime and criminal justice issues like gangs (e.g., Brotherton 2015), graffiti artists (e.g., Ferrell 1996), drug dealing (e.g., Adler 1993), corporate crime (e.g., Friedrichs 2007), environmental crime (e.g., Lynch 1990), violence (e.g., Currie 2008), and many others. Yet its applications to cybercrime issues are lacking. Such an oversight is disheartening, as phenomena like hacking, hacktivism, piracy, online fraud, and online harassment, to name a few, are rich sites for critical criminological analysis, a view which recognizes that online criminal actors should be understood as agents navigating social structure, inter-group (and often hierarchical) conflict, and power. For instance, a burgeoning area of research argues that hackers must be understood as computer aficionados who interact with and within social structures and cultural forces like law and information capitalism (e.g., Steinmetz 2016; Banks 2018). Additionally, critical criminology expands beyond legal definitions of crime to consider other kinds of social harm (DeKeseredy 2011). For example, critical criminologists have examined harms which can perhaps be classified as "cybercrimes of the powerful" wrought by government surveillance (e.g., Ventura et al. 2005), corporate control of intellectual property and Internet access (e.g., Steinmetz 2016; Yar 2005), and the persecution/prosecution of online whistleblowers (e.g., Rothe and Steinmetz 2013; Steinmetz 2012).
The need for critical criminology in cybercrime analyses is more pressing than ever in an era where state actors use the Internet to undermine democratic elections, hacktivists fight back against corporate and political corruption, numerous political and legal battles are waged over online civil liberties, and online criminal opportunities proliferate under information capitalism. To detail the promise of critical criminology in the area of cybercrime, this chapter first provides a general overview of critical criminology. Second, this chapter explores examples of existing critical criminological research in the area as well as potential unexplored areas of application. Because critical criminology encompasses a wide assortment of perspectives, space does not permit a comprehensive exploration of the field. Instead, two well-established areas of critical criminology and their application to cybercrime will comprise our focus, specifically radical criminology and postmodern criminology. The chapter concludes with thoughts on future applications of critical theory to cybercrime.


Critical Criminology and Critical Social Theory: Origins

The origins of critical criminology are somewhat nebulous, with no single author or study agreed upon as the progenitor. There is general agreement that its seeds were planted in the early 1900s when criminology in the United States began to diverge from the study of law into the social sciences. Though positivist perspectives on crime were still pervasive, some scholars began to view the criminal justice system as unjust in its tendency to disproportionately ensnare the poor, laying the foundation for critical criminology as there was "a profound shift in the interpretation of motives behind the actions of the agencies that deal with crime" (Sykes 1974, p. 208). These new interpretations led criminal law and social control to be perceived as "largely instruments deliberately designed for the control of one social class by another" (Sykes 1974, p. 209). These insights were profoundly influenced by the work of Karl Marx, Friedrich Engels, and other social conflict theorists, who tended to view rules as shaped by the powerful and weaponized in their interests (Turk 1976). One of the earliest mentions of Marxism in criminology can be traced to Willem Bonger in 1916, who connected macrostructures like the law and economy to individual degeneracy (Bernard et al. 2016; Bonger 1916; Copes and Topalli 2010). Shortly thereafter, Rusche and Kirchheimer (1939) connected the punitive system to the labor market (Weis 2017). Another major influence on contemporary critical criminology was the Frankfurt School, which revitalized Marxist theory in the 1920s (Agger 2013; Cowling 2008). The Frankfurt School's communist and socialist ideology was perceived as a threat to the Nazi movement, which eventually forced the school's members to relocate to Columbia University in the United States a decade later.
Major players within this intellectual movement included Theodor Adorno, Herbert Marcuse, and Max Horkheimer. After fleeing the Nazis and witnessing the rise of Stalinism, Frankfurt School theorists distanced themselves from orthodox Marxism while retaining many of the analytic tools and ideas developed within Marxism, such as the influence of economic structure on individual behavior (Agger 2013; DeKeseredy 2011). In contrast to the previous era of positivism, this "critical theory" adopted a holistic and historically sensitive standpoint to examine how economic and political power operated not only as a material force but as a cultural and ideological one (Adorno and Horkheimer 1944). In Marxist terms, the social world and its various inequalities and oppressions were not only shaped by material circumstances (ownership and control of resources and the means of production) but also by influence or outright control over the formation of ideas. With few exceptions, criminology abstained from dabbling in critical theory throughout the early twentieth century. Some theories, like Merton's (1938) strain theory and Shaw and McKay's (1942) social disorganization theory, recognized the role of social conflict and inequality in crime within American society, but they largely eschewed the radical focus on oppression, power, and control taken by critical theorists. Criminologists began to embrace this perspective more fully in the 1960s amidst the social turmoil of the civil rights and anti-war movements, as faith in traditional institutions and ideas about equality were challenged. Amidst this


turmoil came a cascade of scholarship critical of the State and other major institutions, including the reimagining of radical and critical theories on crime, criminalization, and control. For instance, William Chambliss wrote a series of papers in the 1970s which explicitly argued that criminal justice and crime are manifestations of conflicts of power and resources (Chambliss and Zatz 1993; Chambliss 1974; Chambliss and Mankoff 1976; Hamm 2010). Similarly, Richard Quinney (1970) wrote The Social Reality of Crime, which argued that crime is constructed by those who have control over the definition of crimes and the implementation of social control. These works, and others, demonstrated a rising sensitivity to the role of power in the creation of criminal categories and in the implementation of criminal justice controls on populations. Simultaneously, the upheaval of the Vietnam War and the civil rights movement also spread overseas to Europe and revitalized Marxist theory with the development of what became known as the New Criminology (Walton and Young 1998). Of course, the history presented here is truncated and limited mostly to the period between the early and late twentieth century. The objective, however, is to show that critical criminology emerged amidst periods of political turmoil and intellectual challenges to the social order. Today, critical criminology remains a vibrant area of theoretical development with a cornucopia of perspectives emerging to deal with a diverse arrangement of social problems including, but not limited to, cultural, feminist, radical, and postmodern theories. Due to the breadth of theory within critical criminology, it would take an entire book to cover every theoretical paradigm that has been applied to cybercrime. Hence, this chapter will be selective in its theoretical explorations.
In particular, this chapter will explore the application of the following theories to the area of cybercrime: (1) radical criminology, a perspective often conflated with critical theory that draws on Marxism to analyze inequalities that are products of capitalism, and (2) postmodern criminology, which, among other things, emphasizes the role of language or discourse in crime and crime control. Table 1, however, provides a brief overview of other critical criminological perspectives which may be of interest to the reader. The reader may also wish to refer to Gresham's ▶ Chap. 4, "Race, Social Media, and Deviance" and Marganski's ▶ Chap. 31, "Feminist Theories in Criminology and the Application to Cybercrimes," which explore race issues in cybercrime and applications of feminist theory, domains often subsumed within critical criminology.

Radical Criminology

Radical criminology is a theoretical orientation that exists as a subset of critical criminology, drawing on Marxism and focusing explicitly on inequalities and patterns of oppression that arise from abuses of capitalist power. Marx understood capitalism as a system that thrives by dividing society into two broad classes: the poor working class, who are fundamentally exploited by the richer owners of the means of production. This inherent class conflict within capitalism and its tendency


Table 1 Summary of select critical criminological perspectives

Radical criminology: Perceives social problems as a product of a conflict between the owners of the means of production (bourgeoisie) and the working class, with the criminal justice system and law a primary tool to uphold the capitalist interests of the bourgeoisie. Resolutions for crime and oppression revolve around returning the means of production to the working class. Examples of major works: Chambliss and Zatz (1993), Chambliss (1994), Chambliss and Mankoff (1976).

Postmodern criminology: Draws on deconstructionism to explain how individuals derive unique meaning and identity from the interplay of language, material, and social objects. Power is traced through these relationships to understand more symbolic and discursive forms of control. Examples of major works: Arrigo (2003) and Foucault (1975).

Feminist criminology: A perspective which draws attention to the role of gender in crime and criminal justice. Arose in the 1960s and 1970s as a reaction to the androcentrism that dominated criminological thought, which took the experiences of boys and men for granted and largely ignored or wrote off the experiences of girls and women. Examples of major works: Daly and Chesney-Lind (1988), Miller (1998), and Messerschmidt (1993).

Critical race theory: Understands the unequal impact of crime, crime control, and discriminatory use of law as a product of socioeconomic systems that have a historical basis in controlling categories of race. Examples of major works: Bell (1980) and Crenshaw (1988).

Cultural criminology: Built upon a foundation comprised of radical criminology, symbolic interactionism, feminist criminology, postmodern and constitutive criminology, cultural theory, and other critical perspectives, cultural criminology provides a theoretical tool kit designed to draw connections between individual experience and broader social structural circumstances through cultural mediations and political conflict. Examples of major works: Ferrell et al. (2015), Presdee (2000), and Young (2007).

Queer criminology: A perspective which focuses on the experiences of queer (non-heterosexual or non-cisgendered) people in both crime and criminal justice. Examples of major works: Buist and Lenning (2015) and Dwyer et al. (2016).

Peacemaking criminology: Influenced by Marxism, feminist theory, humanist philosophies, Buddhism, and other perspectives, peacemaking criminology focuses on the development of alternative methods of conflict resolution and the amelioration of injustice to repair communities and address social harms. Examples of major works: Pepinsky and Quinney (1991).

Convict criminology: A perspective where persons formerly incarcerated in jail or prison or otherwise supervised under the criminal justice system are directly involved in the creation of criminological knowledge. Examples of major works: Carceral and Bernard (2015) and Irwin (1970).

Anarchist criminology: Like radical and conflict criminology, anarchist criminology sees social problems as a product of conflicting interests between groups stratified within social hierarchies. For anarchists, most if not all forms of social organization and hierarchy tend toward oppression and domination. Anarchists thus work toward the destruction and creation of social hierarchies with the objective of achieving more socially just forms of social organization. Examples of major works: Ferrell (1996) and Williams and Arrigo (2001).

Left realism/critical realism: A perspective which attempts to address the real harms caused by crime while also addressing those caused by social structural oppression and inequities. It arose in the 1980s to address criticisms that critical criminology romanticized criminals and did little to provide solutions to solving crime problems. Examples of major works: Matthews (2014) and Young and Lea (1984).

to maintain the status quo drives oppression and inequality (Bernard et al. 2016; Weis 2017). From this view, the social control of crime is a function of the state to maintain the interests of capitalists, interests which are inherently in conflict with those of the working class. Capitalism likewise "shapes social institutions, social identities, and social action" (Lanier et al. 2015, p. 262). Generally speaking, there exist two major trajectories of radical criminological thought. On the one hand, instrumental Marxists argue that inequality and oppression stem from capitalists consciously manipulating the state into using the criminal justice system to support their interests. For example, during the labor movement of the late 1800s and early 1900s, police forces and private security contractors were deployed on behalf of business owners to control organized laborers (Harring 1983; Zinn 2003). On the other hand, structural Marxists argue that the state works to preserve the overall system of capitalism, even if that means neglecting the interests of individual capitalists or even reining in some of their excesses to do so. In other words, the state operates according to "long-term capitalist interests rather than in the short-term


interest of powerful corporations" (Lanier et al. 2015, p. 264). It could be argued, for instance, that while the business regulations implemented alongside Roosevelt's New Deal stifled individual capitalists, these measures ultimately saved American capitalism from the excesses of its greediest companies. In another example, structural Marxists have pointed out that the American criminal justice system operates in a manner which reifies class distinctions, rendering the poor and other "problem populations" as deserving of punishment through incarceration, while the gross harms caused by white collar and corporate offenders are often construed as regulatory or civil offenses – if they are considered offenses at all (Reiman and Leighton 2017; Spitzer 1975). Despite major changes to the political economy in the age of the Internet that have created a host of new crime problems and criminal opportunities, applications of radical criminology to cybercrime and security are limited. It is toward the limited research from a radical perspective on these subjects that this chapter now turns.

Radical Criminology and Cybercrime

Though radical criminology does address crime and crime causation, many radical criminologists focus on criminalization and control. It is thus perhaps fitting that one of the earliest pieces of radical criminology in the realm of cybercrime focuses on the formation of an industrial complex dedicated to cybersecurity. In this piece, Majid Yar (2008a) builds from the work of Nils Christie (2000), who described the emergence of Crime Control as Industry, or what others have termed the prison-industrial complex or corrections-commercial complex (e.g., Lilly and Knepper 1993). Yar (2008a) notes the emergence of an industry dedicated to computer crime control, following trends in other areas of crime control, which increasingly relies on market-based solutions. Further, Yar (2008a) notes that computer security has become "responsibilized," where the onus is placed on individuals to protect themselves from criminal predation, a trend he attributes to "a transition to a neo-liberal mode of capitalism" that displaced security to "non-market quasi-public regulatory bodies," "communities of self-policing," and private companies (Yar 2008a, pp. 193, 195). In other words, security is provided as a commodity and service. The computer crime control industry has itself become entrenched within "nodal" approaches to crime control – where responsibilities for managing cybercrime threats are dispersed among an assortment of public and private organizations and actors. These "nodal" or decentralized methods of control have been heavily criticized. For instance, a reliance on nonpublic entities for security suffers from a lack of oversight and accountability (Loader and Sparks 2007, pp. 82, 91). Concerns have also been expressed about the role of privatization and the dispersal of control in the erosion of the state's power for governance – that the state is being "hollowed out" (Holliday 2000; Jessop 2004; Rhodes 1994).
Further, functional limitations may also confront these approaches. Yar (2011, pp. 10–13), for example, argues that such cooperative control efforts may break down or otherwise fail due to three endemic tendencies: (1) "failure due to inter-systemic conflicts and discordant rationalities" or


the tendency for these networks to be disrupted by competing interests, agendas, and governing logics between state, market, and other organizations/actors; (2) "failure due to intra-systemic competition" or the tendency of actors within governing sectors, like local and federal law enforcement, to compete with each other; and (3) "failure due to multiple spatial-temporal scales" or the tendency of governing organizations and actors to undermine each other by operating at different scales or levels of analysis (see also: Levi and Williams 2013). Banks (2015, p. 260) provides a noteworthy analysis demonstrating the computer crime control industry at work in the aftermath of the discovery of the "Heartbleed" bug. Heartbleed exploited "vulnerable versions of OpenSSL software" and enabled "individuals to read the memory content of systems by compromising the 'secret keys' which are employed in the identification of service providers, the encryption of traffic, usernames and passwords and the content of messages." Banks's analysis examines the reporting of the bug, whereby news media and information security companies peddled a narrative of risk and encouraged fear about the possible outcomes of the coding error, creating a "culture of fear" surrounding cybercrime (Wall 2008). To ameliorate their anxieties, users were encouraged to take matters into their own hands, including acquiring products and services from various information security vendors. As the author indicates, this is a common tactic taken by private companies provisioning crime control and security – fear generates demand (Hall and Winlow 2005; Hayward 2004). Of course, as he acknowledges, "it may well be the case that the hyperbolic news reporting that follows the discovery of a new digital danger acts to drive public awareness and increase the speed with which online populations identify and deploy preventative measures and remedies" (Banks 2015, p. 275).
Banks (2015), however, remains skeptical, arguing that the coverage of Heartbleed was – at least in part – an attempt to cultivate sales of security services and products. Recent works have also explored hacking and hacktivism (a portmanteau of "hacking" and "activism") through radical criminology. In their recent analyses, Banks (2018) and Steinmetz (2016) argue that these issues are mired in social constructionist distortions which contribute to a social unease and fear about information security and personal safety. As Banks (2018, p. 112) explains, the fear generated creates benefits for "governments, law enforcement agencies, the technosecurity industry, commercial corporations and the media" by allowing for the expansion of state surveillance apparatuses, government power through harsh punishments of offenders, and the creation of a multibillion dollar cybersecurity industry. In this manner, the actual threat posed by hackers and hacktivists is potentially distorted in the funhouse mirrors of news media, popular culture, political rhetoric, and corporate propaganda (Ferrell et al. 2015; Steinmetz 2016). Steinmetz (2016) further adds that contemporary information capitalism simultaneously harnesses the creative productive potential of hackers while seeking to suppress the threat they pose to the capitalist mode of production. He highlights three "fronts of social construction" through which this process occurs: "first, hackers are constructed as dangerous, which allows a criminal classification to be imposed. Second, intellectual property is reified, giving capitalists a set of property 'rights' to protect from the threats of individuals like hackers. Finally, technological infrastructures are

30

Critical Criminology and Cybercrime

609

portrayed as vulnerable, lending a degree of urgency to the need to fight off the hacker ‘menace’ along with other technocriminals” (Steinmetz 2016, pp. 178, 218). Once formed, these ideological machinations permit the expansion of corporate and state control over intellectual property and technological production. Digital piracy, or the unauthorized digital duplication and distribution of copyrighted content, is also a cybercrime that can be understood through radical criminology. Many studies have been published which are critical of copyright law as well as the rhetoric and behavior of content industries (music, movies, software, etc.) (e.g., Boyle 2003; Halbert 2016; Lessig 2004, 2012; Mueller 2016; Parkes 2012; Perelman 1998, 2002, 2014). Though an exhaustive exploration of this literature is beyond the scope of this chapter, the current analysis will highlight scholarship on piracy within criminology consistent with the radical criminological tradition. In particular, these studies understand piracy as a product of a criminalization process that largely serves the interests of capital accumulation increasingly dependent on global vectors of information transmission (Yar 2005, 2008a; Steinmetz and Pimentel 2018). Importantly, radical criminologists regard piracy as a problem without moorings in a social consensus about harm but, rather, as a product of processes which affirm the interests of capital accumulation (Yar 2008b, p. 606). Yar (2005, p. 
684) argues that we need to “examine with a critical eye the social processes by which ‘piracy’ is itself being constituted (and reconstituted) as a legal (and moral) violation, and to consider how quantifications of the ‘piracy’ problem are themselves the outcome of a range of contingent (and interest-bound) processes and inferences.” In the late 1990s and early 2000s, peer-to-peer (p2p) filesharing systems like Napster, LimeWire, Kazaa, and BitTorrent emerged which allowed for rapid and decentralized distribution of files across the Internet. These systems were largely embraced for their ability to make accessible large troves of copyrighted content for free download. Needless to say, copyright industries panicked. They decried the growth of so-called online piracy, foretold significant economic losses to industries and content creators, and called for increased copyright protections and enforcement efforts (Lessig 2004; Yar 2005). During this period, criminology produced a host of studies examining potential causes, correlates, and harms of piracy, much of it uncritically accepting the narratives and statistics produced by industry organizations (e.g., Gunter 2009; Higgins et al. 2008; Morris and Higgins 2010; Smallridge and Roberts 2013; Tade and Akinleye 2012). Yet some research echoed the scholarship in other fields which was becoming increasingly skeptical about claims made by industries regarding dangers and losses resulting from piracy. In 2005, Majid Yar provided an early radical criminological take on piracy by examining the social construction of piracy as a crime problem by corporate interests. In particular, he attacks the reliance on industry-produced statistics on the magnitude of the issue, including claimed losses to industries and content creators, used to create a narrative about the prevalence of and harm produced by piracy (Yar 2005). 
As he explains, these figures are often biased in favor of content industries, which have a vested interest in maximizing the totals to heighten the perceived urgency, among lawmakers and law enforcement agencies, of dealing with copyright violators (see also: Kariithi 2011; Lessig 2004). He further argues that:

610

A. L. McCarthy and K. F. Steinmetz

These industry “guesstimates” usually become reified into incontrovertible facts, and provide the basis for further discussion and action on the “piracy problem.” From the industry’s viewpoint, the inflation of the figures is the starting point for a “virtuous circle” – high figures put pressure on legislators to criminalize, and on enforcement agencies to police more rigorously; the tightening of copyright laws produces more “copyright theft” as previously legal or tolerated uses are prohibited, and the more intensive policing of “piracy” results in more seizures; these in turn produce new estimates suggesting that the “epidemic” continues to grow unabated; which then legitimates industry calls for even more vigorous action. What the “true” underlying levels and trends in “piracy” might be, however, remain inevitably obscured behind this process of socio-statistical construction. (Yar 2005, p. 690)

In this manner, Yar (2005) argues that a possible explanation of the increase in illegal downloading is not necessarily a change in behaviors among Internet users but may be a result of industries crafting a narrative portraying piracy as on the rise and their industry in greater need of state protections. In 2008, Yar also provided a radical criminological exploration of industry-produced anti-piracy educational campaigns. According to him, these campaigns are designed to convince young people that piracy is an antisocial activity and that intellectual property should be treated as analogous to forms of physical property. They are lesson plans which promote a particular view of intellectual property in the interests of content industries – views juxtaposed with alternative assessments, such as the argument that content is equivalent to speech and that any attempt to curtail such speech (i.e., prevent piracy) is an encroachment on speech (e.g., Dame-Boyle 2015; Doctorow 2016; Stallman 2002). Thus Yar (2008b, p. 607) argues that “anti-piracy campaigns must be viewed as ideological in character, providing support for this construction of value by both drawing upon and extending longstanding (albeit contested) notions of property and individual property rights, which themselves ought to be seen as essential underpinnings of capitalist accumulation.” More recently, Steinmetz and Pimentel (2018) authored a radical criminological examination arguing that intellectual property and piracy are caught up in dialectical contradictions presented by classical liberal values endemic to both capitalism and many sectors of tech culture, including hacking and piracy. One key argument made in their analysis is that “piracy is produced under the same historical circumstances that birth intellectual property – one cannot exist without the other” (Steinmetz and Pimentel 2018, p. 200). 
From this view, each time a new technology emerges that alters the way information is disseminated, certain uses of the technology emerge which are decried by established business interests as “piracy” (Lessig 2004). Yet, Steinmetz and Pimentel (2018) point out that it would be insufficient to simply claim that piracy is the antithesis of copyright and content industries. Instead, variations of similar ideological values underpin both business interests and piracy – that the friction between these opposing sides may stem from contradictions endemic to liberal values. For instance, they stated that “on the one hand, the liberal value of property ownership as an expression of freedom undergirds intellectual property” but “on the other hand, liberalism also supports the view of pirates that expressive content is akin to speech and, as such, any limit on its circulation is fundamentally amoral” and that “piracy reimagines the enlightenment values of sharing information as a social good” (Steinmetz and Pimentel 2018, p. 200).

This chapter has highlighted just a few of the many potential applications of radical criminology to cybercrime, and other possibilities abound. For example, online fraud can be understood as at least partially a result of the proliferation of criminal opportunities created within the context of contemporary information capitalism. Banks, for example, now use online services to access and transfer money rapidly at a global scale, expanding both the capacity for traditional bank-mediated fraud by larger businesses and the capacity for white-collar crime, as workers with exclusive access gain opportunities to transfer funds (Levi 2008). In other words, structural changes to the transfer and management of money have created new structures of opportunity for criminal activity. The same goes for the existence of online illicit marketplaces which trade in login credentials and financial information (e.g., credit card numbers). These marketplaces can only exist in an era of information capitalism which depends on the rapid and relatively unfettered transmission and storage of financial information for the purposes of commerce. Examining the intersection between money management technologies, legal market structures, and illicit marketplaces may therefore be a fruitful area of research for radical criminologists. Another potential area of radical inquiry concerns how the transition of some gambling to Internet platforms has opened a gateway for transnational crimes. For example, Crawford (2014) analyzes the conflict between the United States and Antigua and Barbuda over illicit online gambling, drawing parallels to the capitalist outsourcing of labor; that is, the dispute “characterize[s] the outsourcing of manufacturing from a rich country, where wages and other costs tend to be high, to a poorer nation, where the skill of workers is equivalent to that of the domestic labour force” (p. 134). 
US domestic law enforcement tended to pursue offshore online casinos and gambling operations less vigorously than domestic gambling suppliers, thus reflecting capitalist interests rather than upholding moral codes and rendering the state criminogenic. In another example, Banks (2014) demonstrates how online gambling has opened up pathways for risk of fraud and theft victimization which users must learn to navigate across licit, illicit, and semi-illicit gambling marketplaces. Additionally, such businesses engage in various strategies in attempts to manage users’ perceptions of risk regarding “underage and problem gambling, significant financial loss, crime, and victimization” – thus portraying online gambling as a safe risk (Banks 2014, p. 60). He also points out that the transition to online spaces exposed gambling operators to new forms of extortion – threats that their sites would be subjected to denial of service attacks unless they paid. Considering the complex interplay of economic interests, legal regulations, and management strategies employed by gambling site operators, users, and fraudsters, online gambling presents a tremendous opportunity for radical criminological analysis. Up to this point, the chapter has focused primarily on power in the context of capitalist social relations. While the radical perspective of tracing power through capitalism is important, power can also be discursive and nuanced, requiring a theoretical perspective more attuned to the relationship between language and power. Enter postmodern criminology.

Postmodern Criminology

Postmodern criminology is by nature difficult to describe because it encompasses a vast and complex assortment of theories perhaps better described as postmodern criminologies. Fortunately, some consistencies exist among the varieties of postmodern criminology, notably the fact that most adopt a multidisciplinary approach and embrace ontological subjectivity (Einstadter and Henry 2006; Sellers and Arrigo 2018). In this manner, Einstadter and Henry (2006, p. 284) explain that:

As an ideology, or a form of theorizing, postmodernism embraces the pastiche of fragmented images, bits and bites, the collage of differences, random access, multiple views, and a resistance to privileged knowledges, believing that all knowledge is no more than claims to truth and that no one truth claim is superior to any other.

Universal truths are rejected in favor of a more deconstructionist approach. Much postmodern literature traces the relationship between the individual and the unique meaning drawn from the interplay of material and social objects. Within critical criminology, power is also traced through these relationships to understand more symbolic and discursive forms of control – those which rely on the framing of a situation through language and imagery. Jacques Derrida’s concept of deconstruction, an approach which dialectically examines meaning and meaning construction, provides a crucial tool for postmodern criminologists. Deconstruction asks us to consider what people, groups, systems, and processes are rendered visible or invisible through their framing in language. It requires the analyst to “look between the lines” to consider not only what is said but what is left unsaid. From this view, any utterance or articulation is only capable of revealing partial truths (Arrigo 2003). Further, postmodernists argue that not only are these partial or provisional truths the result of limitations in language, but they reflect power as well. The ways we choose to articulate events or characterize persons may reinforce or subvert power; deconstructing concepts brings to light what is deemed important, thereby constructing meaning (DeKeseredy 2011). For example, Arrigo (2003) demonstrates that the court system accepts only legalistic language and rejects alternative meanings and speech, creating implicit bias against perceived criminals. By reinforcing legalistic language, the court arena becomes exclusionary toward the lower class, who may not possess legalistic language skills and instead use colloquial or “street” language. Power is thereby effectively distributed to those – typically upper class – who are capable of understanding and using legalistic language, while those who cannot are marginalized. 
Actors within the court system thus reproduce the larger system of language that supports the status quo, thereby reproducing ideologies of larger institutions (Bernard et al. 2016). Postmodern criminology has spent considerable time exploring the role of technology in social control, perhaps most notably in its examinations of surveillance (Arrigo and Sellers 2018; Bigo 2008; Bogard 1996; Foucault 1975; Haggerty and Ericson 2000; Lewis 2003; Mathiesen 1997). This body of research is heavily influenced by Michel Foucault’s (1975) foundational work, Discipline and Punish, which traces the evolution of discursive forms of power and control through a
historical analysis of punishment and the concomitant development of related technology. ▶ Chapter 8, “Surveillance, Surveillance Studies, and Cyber Criminality” by Nusbaum explores surveillance in detail, so we will not belabor the point here. Instead, it may be more fruitful to examine other areas of cybercrime and control to uncover the value of postmodern perspectives for the area. Perhaps a useful starting point for a postmodern analysis of cybercrime is a consideration of the very notions of “cyber,” “cybercrime,” and related vernacular. Prior work, including works by one of this chapter’s authors, has argued that “cyber” is a problematic prefix (e.g., Steinmetz and Nobles 2018; Wark 1997; Yar and Steinmetz 2019). Without entirely rehashing the points made in these previous explications, it is worth noting that cyber has undergone connotative changes in meaning since its inception in the 1940s. The term was originally coined by Norbert Wiener and Arturo Rosenblueth in their pioneering work in the field of “cybernetics” (Wiener 1948). “Cyber” was derived from the Greek word for “steersman,” and cybernetics was envisioned as the study of machines and feedback systems (Rid 2016; Wiener 1948, p. 19). Over the years, however, the prefix became detached from its mooring in the study of feedback systems by cyberneticists, becoming a stylish way to describe seemingly anything related to computers. In fact, the prefix was often used to sensationalize actions and things associated with the Internet and other networked technologies (e.g., cyberspace, cybersex, and cybersurfing). Media theorist McKenzie Wark (1997, p. 154) describes the problem as “cyberhype” or the use of certain prefixes to give the illusion of explanatory power and significance to concepts. 
As he explains: The problem with cyberhype is the easy assumption that the buzzwords of the present day are in some magical way instantly transformable into concepts that will explain the mysterious circumstances that generated them as buzzwords in the first place. Cyber this, virtual that, information something or other. Viral adjectives, mutated verbs, digital nouns. Take away the number you first thought of and hey presto! Instant guide to the art of the new age, cutting edge, psychotropic anything-but-postmodern what have you. Not so much a theory as a marketing plan.

At some point, however, we collectively decided that not every online activity needs to be designated as “cyber.” Terms like “cyber-shopping” and “cybersex” now seem passé. Cyber, however, persists largely in application toward harmful or illicit activities like cybercrime, cyberstalking, cyberharassment, and cyberterrorism. While early uses of cyber were “pregnant with the promise of technology,” the prefix has since come to connote danger online (Steinmetz and Nobles 2018, p. 3; see also Yar 2014; Wall 2012, p. 5). Thus cyber no longer refers to the field of cybernetics; instead it mostly describes antisocial Internet activity. Its use now has a connotation of danger which can be used to “cyberhype” criminal activity online – little to no explanatory power is offered by the concept, but it portends an elusive but seemingly omnipresent constellation of threats in our increasingly networked world. In a similar manner, McGuire (2018) examines the way that cybercrime has been framed within the academic discourse, drawing from a distinction between semantics and syntax. As he explains, “the general idea (roughly) is to say that a syntax involves grammar – the rules which determine the correct use of symbols and their combinations in any language while a semantics determines what a syntactically correct sequence of
symbols mean [emphasis in original]” (McGuire 2018, p. 137). For him, cybercrime has largely been conceived in the language of syntax – viewed as a technical problem requiring technical solutions. He describes numerous consequences as a result of viewing cybercrime syntactically. For example, he explains that “because of the incomprehensible language of syntax which drives our digital machines that we see outcomes beyond our control” (McGuire 2018, p. 144). Syntactic thinking also tends to encourage the view that cybercrimes and their associated harms are entirely new. Instead, McGuire (2018, p. 144) argues that “criminologists have often failed to point out the basic implausibility of this conclusion. The number of ways in which humans can harm other humans is ultimately rather limited – so genuinely novel harms are therefore rare.” For instance, one could argue that while automated victimization via malware is a relatively novel form of victimization, it is not wholly dissimilar from prior forms of sabotage, vandalism, trespass, and espionage. Further, the syntactic nature of cybercrime, according to McGuire (2018), tends to encourage the view that traditional methods of regulating criminal behavior are insufficient – that these crimes require entirely new solutions. He points out, however, that the basic elements of criminal investigation and prosecution of cybercrimes differ little from those of traditional crimes, adding that “there is also a long history of ways in which different technologies have been policed and legally managed. From the printing press to the motor car and beyond, new legal structures have invariably evolved to deal with new technologies” (McGuire 2018, p. 145). 
For him, then, cybercrime discourse views cybercrime as a new form of crime beyond the control of traditional approaches and institutions – that the only remedy can be found in deriving technical solutions to the problem or “fighting fire with fire.” Instead, McGuire (2018, p. 139) points out that “a syntactic view of cybercriminality appears disturbingly close to an article of religious faith because it encourages us to view cybercrime as a phenomenon which we may be able to describe in various ways, but interpret only in terms of a single way [emphasis in original].” Postmodern criminology also may help us make sense of a concept which has emerged in an era of states using network technologies to undermine each other, a phenomenon often described as information warfare or, increasingly, “cyberwarfare” (Clarke and Knake 2010, p. 44; Denning 1999). Some military and policy strategists have advised that adversarial states could make use of information technology to spread propaganda, disrupt communications, sow discord, conduct espionage, and sabotage operations, with the most alarmist accounts prophesying potentially crippling attacks that could result in widespread loss of life and limb (e.g., Clarke and Knake 2010, pp. 64–68). Such accounts indicate that the future of warfare could depend on information security. In response to such concerns, multiple countries, like the United States, United Kingdom, Russia, China, North Korea, Israel, and others, have developed cadres of operatives dedicated to developing and executing offensive and defensive operations in so-called cyberspace. Some accounts argue that cyberwarfare may be particularly appealing to countries lacking in military might, a notion called “asymmetric” warfare (Clarke and Knake 2010, p. 50). 
Others argue that the resources, infrastructure, and expertise needed to conduct cyberwarfare mean that it will likely be the province of “states with long-established and well-funded cyber warfare programs” (Lindsay 2013, p. 388).

Postmodern criminology would ask that we pause and consider the way that “cyberwarfare” is framed as a social problem – to deconstruct the very concept itself. First, it is imperative to understand that “warfare” in this context is perhaps best understood as a metaphor. Warfare is generally the use of kinetic energy (violence) to accomplish state objectives (Rid 2013). It is a horrendously violent affair. Though some analysts, like Richard A. Clarke, claim that cyberwarfare “may actually increase the likelihood of the more traditional combat with explosives, bullets, and missiles” (Clarke and Knake 2010, p. xiii), cyberwarfare itself does not – as of yet – involve these components. As Rid (2013) explains in Cyber War Will Not Take Place, such views are misleading as strategies associated with cyberwarfare may actually reduce the violence involved in war:

First, at the technical high-end, weaponized code and complex sabotage operations enable highly targeted attacks on the functioning of an adversary’s technical systems without directly physically harming the human operators and managers of such systems. . . Secondly, espionage is changing: computer attacks make it possible to exfiltrate data without infiltrating humans first in highly risky operations that may imperil them. . . And finally subversion may be becoming less reliant on armed direct action: networked computers and smartphones make it possible to mobilize followers for a political cause peacefully. In certain conditions, undermining the collective trust and legitimacy of an established order requires less violence than in the past, when the state may have monopolized the means of mass communication. [Emphasis in original] (Rid 2013, pp. xiv–xv)

From this description, it is easy to see that “cyberwarfare” is a bit of a misnomer – it would be better to describe such tactics as sabotage, espionage, and subversion. While these strategies may be used to facilitate the execution of war, they are not in and of themselves war. As Rid (2013, p. 13) also explains “lethal cyber attacks, while certainly possible, remain the stuff of fiction novels and action films. Not a single human being has ever been killed or hurt as a result of a code-triggered cyber attack” (see also Brenner 2016, pp. 3–4). As explained by linguists Lakoff and Johnson (1980, p. 455), “the essence of metaphor is understanding and experiencing one kind of thing or experience in terms of another.” Further, these metaphors have the power to “structure how we perceive, how we think, and what we do” (Lakoff and Johnson 1980, p. 454). Thus if we come to understand activities like computer intrusions, data exfiltration, malware deployment, and other strategies used by governments against the networks of others through the metaphor of warfare, there is a potential for us to conflate these behaviors with the casualties and collateral damage involved in traditional forms of kinetic warfare. What does this accomplish? Examining other social issues framed through the metaphor of war may be useful here. For instance, Kappeler and Kraska (1997, p. 1) have argued that: The ideological filter encased within the war metaphor is “militarism,” defined as a set of beliefs and values that stress the use of force and domination as appropriate means to solve problems and gain political power, while glorifying the tools to accomplish this – military power, hardware, and technology.

Instead of viewing drugs, for example, as a public health issue, the war metaphor allows drugs to be understood as a problem to be dealt with by force, which they argue has directly contributed to the increased militarization of US police forces over the past 30 years (Kappeler and Kraska 2013). In this context, the framing of crime problems as wars has directly contributed to the growth of coercive state power in the form of militarized policing forces and, indirectly, mass incarceration (Kappeler and Kraska 2013; Simon 2007). Framing state conflicts over computer networks as warfare has similar consequences. As Alves (2015, p. 390) argues:

The use of a “battlefield” approach frames the discussion of online security around a drive towards securitization and militarization of cyberspace, exacerbating some of the risks and threats, while dismissing the negative impacts of increased control and surveillance in online trust, freedom, and creativity.

Rhetoric like “cyberwarfare” thus may be directly responsible – at least in part – for increases in state surveillance and control on the Internet. Postmodernism has proven to be an effective lens for understanding social control and discursive forms of power, particularly in its applications within surveillance studies. While surveillance studies are important, the breadth of postmodern theory could be utilized more effectively. In addition to the topics discussed in this chapter, future studies could also consider broader uses of postmodernism by analyzing the growth of extreme right-wing groups and their relationship to Internet subcultures. This would entail analyzing discourse on online platforms – how such platforms create, manipulate, and regulate oppressive ideology and how this ideology may ultimately spill over into the material world. Another application of postmodern criminology could examine the legal and political discourses that have emerged framing cybercrime as a social problem. Deconstructing such language could build upon prior work examining the social construction of cybercrime and provide insights into the political and economic interests tied to cybercrime regulation or criminalization.

Conclusion

Critical criminology is a large tent encompassing a variety of theoretical approaches that could be applied to the study of cybercrime. This chapter has provided an introduction to critical criminology and explored the application of two prominent critical criminological traditions, radical and postmodern criminologies, to cybercrime. We argue that cybercrime research would benefit from the inclusion of such perspectives, as critical criminology allows research to examine cybercrime in the context of social structural, cultural, and political economic power and inequality. Rather than focus on individual cybercrimes, a critical criminological perspective understands cybercrime as a multifaceted phenomenon and drives scholarly inquiry toward the roots of crime and social issues by tracing power from the individual to greater structures that may influence the actor. With such a model of crime that
integrates larger systems, the roots of oppression and the factors linking it to criminal behavior may be better understood and addressed. ▶ Chapters 6, “Organized Crime and Cybercrime,” and ▶ 24, “Environmental Criminology and Cybercrime: Shifting Focus from the Wine to the Bottles,” cover other branches of criminology that may overlap with critical criminology to provide further insight into how crime can be understood in the context of greater society. Other critical criminological perspectives not covered in this book, such as cultural criminology, have also been applied to cybercrime (see Steinmetz 2016; Yar 2018). Cybercrime studies may also benefit from serious engagement with theories of racial inequality, queer theory, and related perspectives. In other words, there is plenty of room for further research in these broad fields of inquiry.

References

Adler, P. A. (1993). Wheeling and dealing: An ethnography of an upper-level drug dealing and smuggling community (2nd ed.). New York: Columbia University Press.
Adorno, T., & Horkheimer, M. (1944). Dialectic of enlightenment. New York: Verso.
Agger, B. (2013). Critical social theories: An introduction. Oxford: Oxford University Press.
Alves, A. M. (2015). Between the “battlefield” metaphor and promises of generativity: Contrasting discourses on cyberconflict. Canadian Journal of Communication, 40, 389–405.
Arrigo, B. A. (2003). Postmodern justice and critical criminology: Positional, relational, and provisional science. In M. Schwartz & S. E. Hatty (Eds.), Controversies in critical criminology (pp. 43–54). Cincinnati: Anderson Publishing.
Arrigo, B. A., & Sellers, B. G. (2018). Postmodern criminology and technocrime. In K. F. Steinmetz & M. R. Nobles (Eds.), Technocrime and criminological theory (pp. 133–146). New York: Routledge.
Banks, J. (2014). Online gambling and crime: Causes, controls and controversies. Farnham: Ashgate Publishing Limited.
Banks, J. (2015). The Heartbleed bug: Insecurity repackaged, rebranded and resold. Crime, Media, Culture, 11(3), 259–279.
Banks, J. (2018). Radical criminology and the techno-security-capitalist complex. In K. F. Steinmetz & M. R. Nobles (Eds.), Technocrime and criminological theory (pp. 102–115). New York: Routledge.
Bell, D. (1980). Brown v. Board of Education and the interest convergence dilemma. Harvard Law Review, 93(3), 518–533.
Bernard, T. J., Snipes, J. B., & Gerould, A. L. (2016). Vold’s theoretical criminology. New York: Oxford University Press.
Bigo, D. (2008). Globalized (in)security: The field and the ban-opticon. In D. Bigo & A. Tsoukala (Eds.), Terror, insecurity and liberty: Illiberal practices of liberal regimes after 9/11 (pp. 10–49). Abingdon/Oxford: Routledge.
Bogard, W. (1996). The simulation of surveillance: Hypercontrol in telematic society. Cambridge, UK: Cambridge University Press. 
Bonger, W. (1916). Criminality and economic conditions. Boston: Little, Brown, and Company.
Boyle, J. (2003). The second enclosure movement and the construction of the public domain. Law and Contemporary Problems, 66, 33–74.
Brenner, S. W. (2016). Cyberthreats and the decline of the nation-state. New York: Routledge.
Brotherton, D. C. (2015). Youth street gangs: A critical appraisal. New York: Routledge.
Buist, C. L., & Lenning, E. (2015). Queer criminology. New York: Routledge.
Carceral, K. C., & Bernard, T. J. (2015). Prison, Inc. New York: NYU Press.
Chambliss, W. J. (1974). Toward a political economy of crime. Theory and Society, 2, 149–170.

618

A. L. McCarthy and K. F. Steinmetz

Chambliss, W. J., & Mankoff, M. (1976). Whose law, what order? A conflict approach to criminology. New York: Wiley.
Chambliss, W. J., & Zatz, M. S. (1993). Making law: The state, the law, and structural contradictions. Bloomington: Indiana University Press.
Christie, N. (2000). Crime control as industry: Toward gulags, Western style (3rd ed.). New York: Routledge.
Clarke, R. A., & Knake, R. K. (2010). Cyber war: The next threat to national security and what to do about it. New York: Ecco.
Copes, H., & Topalli, V. (2010). Criminology theory: Readings and retrospectives. New York: McGraw-Hill.
Cowling, M. (2008). Marxism and criminological theory: A critique and a toolkit. New York: Palgrave Macmillan.
Crawford, M. (2014). The online gambling conflict: Antigua & Barbuda vs. the United States. The Estey Centre Journal of International Law and Trade Policy, 15(2), 133–161.
Crenshaw, K. (1988). Race, reform, and retrenchment: Transformation and legitimation in antidiscrimination law. Harvard Law Review, 101(7), 1331–1387.
Currie, E. (2008). The roots of danger: Violent crime in global perspective. Upper Saddle River: Prentice Hall.
Daly, K., & Chesney-Lind, M. (1988). Feminism and criminology. Justice Quarterly, 5, 497–538.
Dame-Boyle, A. (2015, April 16). EFF at 25: Remembering the case that established code as speech. Electronic Frontier Foundation. Retrieved 8 Oct 2018 at https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech
DeKeseredy, W. S. (2011). Contemporary critical criminology. New York: Routledge.
Denning, D. (1999). Information warfare and security. New York: Addison-Wesley.
Doctorow, C. (2016, March 1). Federal judge rules US government can’t force Apple to make a security-breaking tool. Boing Boing. Retrieved 8 Oct 2018 at https://boingboing.net/2016/03/01/federal-judge-rules-us-governm.html
Dwyer, A., Ball, M., & Crofts, T. (2016). Queering criminology. London: Palgrave Macmillan.
Einstadter, W. J., & Henry, S. (2006). Criminological theory: An analysis of its underlying assumptions (2nd ed.). Lanham: Rowman & Littlefield Publishers.
Ferrell, J. (1996). Crimes of style: Urban graffiti and the politics of criminality. Boston: Northeastern University Press.
Ferrell, J., Hayward, K., & Young, J. (2015). Cultural criminology: An invitation. Thousand Oaks: Sage.
Foucault, M. (1975). Discipline and punish: The birth of the prison. New York: Vintage Books.
Friedrichs, D. O. (2007). Trusted criminals: White collar crime in contemporary society (3rd ed.). Belmont: Wadsworth.
Gunter, W. D. (2009). Internet scallywags: A comparative analysis of multiple forms and measurements of digital piracy. Western Criminology Review, 10(1), 15–28.
Haggerty, K., & Ericson, R. (2000). The surveillant assemblage. British Journal of Sociology, 51(4), 605–622.
Halbert, D. (2016). Intellectual property theft and national security: Agendas and assumptions. The Information Society, 32(4), 256–268.
Hall, S., & Winlow, S. (2005). Anti-nirvana: Crime, culture and instrumentalism in the age of insecurity. Crime Media Culture, 1(1), 31–48.
Hamm, M. (2010). Encyclopedia of criminological theory. In W. J. Chambliss (Ed.), Power conflict and crime. Thousand Oaks: Sage Publications.
Harring, S. (1983). Policing a class society: The experience of American cities, 1865–1915. New Brunswick: Rutgers University Press.
Hayward, K. (2004). City limits: Crime, consumer culture and the urban experience. London: Glasshouse Press.

30

Critical Criminology and Cybercrime

619

Higgins, G. E., Wolfe, S. E., & Marcum, C. D. (2008). Digital piracy: An examination of three measurements of self-control. Deviant Behavior, 29(5), 440–460.
Holliday, I. (2000). Is the British state hollowing out? The Political Quarterly, 71(2), 167–176.
Irwin, J. (1970). The felon. Berkeley: University of California Press.
Jessop, B. (2004). Hollowing out the nation-state and multilevel governance. In P. Kennett (Ed.), A handbook of comparative social policy (pp. 11–26). Cheltenham: Edward Elgar.
Kappeler, V. E., & Kraska, P. B. (2013). Normalizing police militarization, living in denial. Policing and Society: An International Journal of Research and Policy, 25(3), 268–275.
Kariithi, N. K. (2011). Is the devil in the data? A literature review of piracy around the world. The Journal of World Intellectual Property, 14(2), 133–154.
Kraska, P. B., & Kappeler, V. E. (1997). Militarizing American police: The rise and normalization of paramilitary units. Social Problems, 44(1), 1–18.
Lakoff, G., & Johnson, M. (1980). Conceptual metaphor in everyday language. The Journal of Philosophy, 77(8), 453–486.
Lanier, M. M., Henry, S., & Anastasia, D. (2015). Essential criminology (4th ed.). New York: Routledge.
Lessig, L. (2004). Free culture. New York: Penguin.
Lessig, L. (2012). Remix: How creativity is being strangled by the law. In M. Mandiberg (Ed.), The social media reader (pp. 155–169). New York: New York University Press.
Levi, M. (2008). Organized fraud and organizing frauds: Unpacking research on networks and organization. Criminology & Criminal Justice, 8(4), 389–419.
Levi, M., & Williams, M. L. (2013). Multi-agency partnerships in cybercrime reduction: Mapping the UK information assurance network cooperation space. Information Management & Computer Security, 21(5), 420–443.
Lewis, L. (2003). The surveillance economy of post-Columbine schools. Review of Education, Pedagogy, and Cultural Studies, 25(4), 335–355.
Lilly, J. R., & Knepper, P. (1993). The corrections-commercial complex. Crime & Delinquency, 39(2), 150–166.
Lindsay, J. R. (2013). Stuxnet and the limits of cyber warfare. Security Studies, 22(3), 365–404.
Loader, I., & Sparks, R. (2007). Contemporary landscapes of crime, order and control: Governance, risk and globalization. In M. Maguire, R. Morgan, & R. Reiner (Eds.), The Oxford handbook of criminology (4th ed., pp. 78–101). Oxford: Oxford University Press.
Lynch, M. J. (1990). The greening of criminology: A perspective for the 1990s. The Critical Criminologist, 2(3), 3–4, 11–12.
Mathiesen, T. (1997). The viewer society: Michel Foucault’s panopticon revisited. Theoretical Criminology, 1(2), 215–234.
Matthews, R. (2014). Realist criminology. London: Palgrave Macmillan.
McGuire, M. R. (2018). Cons, constructions, and misconceptions of computer related crime: From a digital syntax to a social semantics. Journal of Qualitative Criminal Justice and Criminology, 6(2), 137–156.
Merton, R. K. (1938). Social structure and anomie. American Sociological Review, 3(5), 672–682.
Messerschmidt, J. W. (1993). Masculinities and crime: A critique and re-conceptualization of theory. Lanham: Rowman and Littlefield.
Miller, J. (1998). Up it up: Gender and the accomplishment of street robbery. Criminology, 36(1), 37–66.
Morris, R. G., & Higgins, G. E. (2010). Criminological theory in the digital age: The case of social learning theory and digital piracy. Journal of Criminal Justice, 38, 470–480.
Mueller, G. (2016). Piracy as labour struggle. Communication, Capitalism & Critique, 14, 333–345.
Parkes, M. (2012). Making plans for Nigel: The industry trust and film piracy management in the United Kingdom. Convergence: The International Journal of Research into New Media Technologies, 19(1), 25–43.


Pepinsky, H. E., & Quinney, R. (1991). Criminology as peacemaking. Bloomington: Indiana University Press.
Perelman, M. (1998). Class warfare in the information age. New York: St. Martin’s Press.
Perelman, M. (2002). Steal this idea: Intellectual property rights and the corporate confiscation of creativity. New York: Palgrave Macmillan.
Perelman, M. (2014). The political economy of intellectual property. Socialism and Democracy, 28(1), 24–33.
Presdee, M. (2000). Cultural criminology and the carnival of crime. New York: Routledge.
Quinney, R. (1970). The social reality of crime. New York: Little, Brown and Company.
Reiman, J., & Leighton, P. (2017). The rich get richer and the poor get prison. New York: Routledge.
Rhodes, R. A. W. (1994). The hollowing out of the state: The changing nature of the public service in Britain. The Political Quarterly, 65(2), 138–151.
Rid, T. (2013). Cyber war will not take place. Oxford: Oxford University Press.
Rid, T. (2016). Rise of the machines. New York: W. W. Norton & Company.
Rothe, D. L., & Steinmetz, K. F. (2013). The case of Bradley Manning: State victimization, realpolitik and WikiLeaks. Contemporary Justice Review, 16(2), 280–292.
Rusche, G., & Kirchheimer, O. (1939). Punishment and social structure. New York: Columbia University Press.
Sellers, B. G., & Arrigo, B. A. (2018). Postmodern criminology and technocrime. In K. F. Steinmetz & M. R. Nobles (Eds.), Technocrime and criminological theory (pp. 133–146). New York: Routledge.
Shaw, C., & McKay, H. (1942). Juvenile delinquency and urban areas. Chicago: University of Chicago Press.
Simon, J. (2007). Governing through crime: How the war on crime transformed American democracy and created a culture of fear. New York: Oxford University Press.
Smallridge, J. L., & Roberts, J. R. (2013). Crime specific neutralizations: An empirical examination of four types of digital piracy. International Journal of Cyber Criminology, 7(2), 125–140.
Spitzer, S. (1975). Toward a Marxian theory of deviance. Social Problems, 22(5), 638–651.
Stallman, R. (2002). Free software free society: Selected essays of Richard M. Stallman. Boston: Free Software Foundation, Inc.
Steinmetz, K. F. (2012). Wikileaks and realpolitik. Journal of Theoretical & Philosophical Criminology, 4(1), 14–52.
Steinmetz, K. F. (2016). Hacked: A radical approach to hacker culture and crime. New York: NYU Press.
Steinmetz, K. F., & Nobles, M. R. (2018). Technocrime and criminological theory. New York: Routledge.
Steinmetz, K. F., & Pimentel, A. (2018). Deliberating the information commons: A critical analysis of intellectual property and piracy. In S. C. Brown & T. J. Holt (Eds.), Digital piracy: A global, multidisciplinary account (pp. 185–207). New York: Routledge.
Sykes, G. M. (1974). Criminology: The rise of critical criminology. Journal of Criminal Law & Criminology, 65(2), 206–213.
Tade, O., & Akinleye, B. (2012). “We are promoters not pirates”: A qualitative analysis of artists and pirates on music piracy in Nigeria. International Journal of Cyber Criminology, 6(2), 1014–1029.
Turk, A. T. (1976). Law as a weapon in social conflict. Social Problems, 23(3), 276–291.
Ventura, H. E., Miller, J. M., & Deflem, M. (2005). Governmentality and the war on terror: FBI project carnivore and the diffusion of disciplinary power. Critical Criminology, 13(1), 55–70.
Wall, D. S. (2008). Cybercrime and the culture of fear. Information, Communication & Society, 11, 861–884.
Wall, D. S. (2012). ‘The devil drives a Lada’: The social construction of hackers as cyber-criminals. In C. Gregoriou (Ed.), Constructing crime: Discourse and cultural representations of crime and ‘deviance’. Basingstoke: Palgrave Macmillan.


Walton, P., & Young, J. (Eds.). (1998). The new criminology revisited. New York: Palgrave.
Wark, M. (1997). Cyberhype: The packaging of cyberpunk. In A. Crawford & R. Edgar (Eds.), Transit lounge (pp. 154–157). North Ryde: Craftsman House.
Weis, V. A. (2017). Marxism and criminology: A history of criminal selectivity. Boston: Brill.
Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. New York: Wiley.
Williams, C. R., & Arrigo, B. A. (2001). Anarchaos and order: On the emergence of social justice ideology. Theoretical Criminology, 5(2), 223–252.
Yar, M. (2005). The global ‘epidemic’ of movie ‘piracy’: Crime-wave or social construction? Media, Culture & Society, 27(5), 677–696.
Yar, M. (2008a). Computer crime control as industry: Virtual insecurity and the market for private policing. In K. F. Aas, H. O. Gundhus, & H. M. Lomell (Eds.), Technologies of InSecurity: The surveillance of everyday life. New York: Routledge.
Yar, M. (2008b). The rhetorics and myths of anti-piracy campaigns: Criminalization, moral pedagogy and capitalist property relations in the classroom. New Media & Society, 10(4), 605–623.
Yar, M. (2011). From the “governance of security” to “governance failure”: Refining the criminological agenda. Internet Journal of Criminology. Retrieved 21 June 2018 at https://docs.wixstatic.com/ugd/b93dd4_bcc9194abe534c75be23f6f295beb4cb.pdf
Yar, M. (2014). The cultural imaginary of the internet. New York: Palgrave Macmillan.
Yar, M., & Steinmetz, K. F. (2019). Cybercrime & society (3rd ed.). Thousand Oaks: Sage.
Young, J. (2007). The vertigo of late modernity. Thousand Oaks: Sage.
Young, J., & Lea, J. (1984). What is to be done about law and order? New York: Penguin Books.
Zinn, H. (2003). A people’s history of the United States. New York: Harper Perennial.

Feminist Theories in Criminology and the Application to Cybercrimes

31

Alison J. Marganski

Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
What Is Feminist Theory and Why Is It Important in Criminology? . . . . . . . . . . . . . . . . . 625
Understanding Sex, Gender, and Crime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Feminist Movements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
Feminist Theory/Theories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
Feminist Theory’s Application to Contemporary Criminological Issues . . . . . . . . . . . . . . . 635
Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Cross-References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646

Abstract

Feminist theories and perspectives place gender at the center of discourse and analysis. This chapter examines feminist theories and perspectives in the field of criminology. Basic concepts/content central to such studies (e.g., sex, gender, “doing gender,” intersectionality) are reviewed, and the development of feminist work and social movements as related to crime is discussed. The chapter follows with an overview of various feminist theories and articulates principles of each, along with how they have enriched criminological thought. The chapter subsequently highlights the application of feminist theories to technologically-perpetrated and technologically-facilitated crimes including “revenge pornography” and image-based abuse, online harassment and e-bile, hacking and hacktivism, and hate crime transgressions. By studying sex, gender, and power dynamics, in combination with other micro- and macro-level factors, a more holistic understanding of various offenses is offered. Importantly, the chapter also draws attention to how feminist thought shapes implications for research and practice. It closes by emphasizing the need for integrating feminist theories and frameworks into mainstream criminology to continue advancing what we know about victimization, offending, and practice.

A. J. Marganski (*)
Le Moyne College, Syracuse, NY, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_28

Keywords

Feminist theory · Feminism · Gender and crime · Gender violence · Inequality

Introduction

The purpose of this chapter is to explore feminist theories and perspectives in the field of criminology. The chapter first provides background information as to what feminist theories are, along with the reasons they emerged and the historical context that has paved the way for examining the lived experiences of women and girls. Following this, the chapter explains progress that has been made in the field thanks to feminist frames of mind, which have opened doors to more fully and intricately understanding crime-related issues. Feminist studies have advanced our ways of thinking about crime and criminality through recognizing patterns that were once commonly overlooked (e.g., sex differences in victimization/offending) and through shifting from overly simplistic, biologically deterministic explanations of offending to more sophisticated approaches that view gender and related dynamics as something much more complex, with various factors intersecting to shape ideals and behavior that, in turn, shape social systems and structures that stratify persons and contribute to inequalities. A review of various feminist theories is also offered in this chapter, along with a discussion as to how they have enriched criminology.

In all, feminist theories and perspectives offer a more nuanced way of thinking about crime and the dynamics underlying criminal events and criminogenic structures. Despite being routinely suppressed in mainstream research and other communications, these perspectives have made substantial contributions that will continue to advance knowledge in the time ahead. While research on feminist theory’s application to technological crimes is relatively new, the chapter discusses some of the empirical studies available and highlights the utility of feminist theories in understanding modern-day transgressions ranging from mundane offenses (e.g., cybersexism) to those that are much more extreme and even lethal (e.g., mass murder events).
The chapter also brings attention to studies that have feminist implications, even when the research does not directly recognize or discuss it. Yet further feminist research is warranted, especially from an intersectional standpoint that gives voice to those overlooked, marginalized, or oppressed in some way. By working to be more inclusive while recognizing (and appreciating) differences between and among groups, we can develop knowledge that helps us arrive at more accurate truths that represent the diversity in people’s lived experiences. Further, we can be more competent in deriving effective and necessary solutions for justice that focus on the needs of others. The chapter closes by considering the ways by which feminist theories/thought can and should be integrated into mainstream criminology to advance research on victimization, offending, institutional response, and practice.


What Is Feminist Theory and Why Is It Important in Criminology?

Feminist theories in criminology place gender at the forefront of discourse and analysis, with the understanding that gender, along with other factors (e.g., age, race, class, etc.), intersects to influence individuals’ thoughts, behaviors, relationships/interactions with other persons and institutions, and societal structure in a way that creates, reproduces, and maintains power inequalities. In contrast to the belief that “feminism” or “gender” refer to only women and girls, both concepts are far more inclusive and, in fact, can greatly advance criminological studies if integrated into existing discourse and frameworks. Feminist criminology challenges existing theoretical (and methodological) perspectives in a way that derives more “truths” from lived experiences. Different dimensions of gender have also been realized at the individual, relational, community, and societal levels. Rooted in historical, social, economic, political, and other realities, feminist criminological theories examine the lived experiences of women and girls – as well as men and boys and any queer, non-binary or fluid folks – to provide meaningful context and insight into various criminological issues and concerns. Often subsumed in critical criminology, feminist approaches consider social conflict that arises due to gender inequalities and how this translates into criminal victimization, perpetration, and responses (Britton 2000). Historically, some groups have held more power than others, and the effects of those asymmetries seep into the fabric of society today in structural and interpersonal ways. Power imbalances place certain individuals in positions of vulnerability, which can influence interactions and behaviors.
Therefore, we must critically think about sociocultural and historical context along with issues of privilege, oppression, and displays or manifestations of learned behavior to become more aware of complexities that greatly advance criminological thought and pave the way for diverse, innovative solutions.

The study of crime has come a long way. For most of its existence, criminology has been a male-oriented, male-centered, and male-studied field (Naffine 1997). It has been criticized for being androcentric and having a patriarchal legacy (Britton 2000), which trivialized the experiences of women and other marginalized groups and ignored important structural factors that impact members of society. Likewise, the criminal justice system (i.e., police, courts, corrections) has been comprised of male practitioners dealing most often with male offenders. Thus, like many other fields, criminology/criminal justice has been male-dominated, with male-oriented frames of reference. Since theories of crime were developed by men to explain male behavior, female criminality was generally overlooked; so too were females as victims (historically, the few who caught attention were blamed for the actions of the male perpetrators, a phenomenon known as victim blaming) and women as researchers as well as practitioners. It was thought that females were passive and obedient persons who did not offend; in the rare event they did, they were considered biologically and psychologically more similar to men than other women – Lombroso’s masculinity hypothesis (see Lombroso and Ferrero 1895). Social movements illuminated issues of inequality and struggles faced by women and girls, and feminist criminology came about as a response to mainstream criminology’s neglect of them. This form of criminology attempted to fill the gaps
in knowledge by including female experiences (since male experiences could not be generalized to them or did not apply), especially as related to their vulnerability for crimes like sexual and domestic violence. Through research, we have learned that women report experiencing higher levels of severe partner violence and sexual assault than men (Breiding et al. 2014), report greater fear associated with violent male partners as compared to men with violent female partners (Phelan et al. 2005; Walton et al. 2007), and tend to suffer far greater negative consequences (Caldwell et al. 2012; Hamberger 2005). Explorations have evolved to now examine an array of crimes perpetrated by women and girls as well as men and boys. They have also placed offending in a larger social context by paying attention to power (or lack thereof) and structural inequities. Feminist criminology has allowed us to understand that the social world is systematically shaped by relations of sex and gender and these factors function at various levels of society by structuring the self as well as interactions with other people, places, and institutions (Daly and Chesney-Lind 1988). In short, it has offered a more inclusive way of thinking about and producing knowledge on crime, criminology, and criminal justice. Through feminist frameworks (e.g., see Campbell and Wasco 2000), we have grown to recognize the importance of not only sex but also gender, differentiating these concepts and going beyond binaries to illuminate the nuances. Thanks to feminist advocates, scholars, and allies, women and girls are now studied as victims and offenders alongside male counterparts, and women can serve as professionals (e.g., researchers, faculty) and practitioners in criminology and criminal justice. By focusing on those who were excluded, the field of criminology has grown.
We now know more about male and female victimization and perpetration, including the role of gender (e.g., masculinities and femininities) (Connell 1987), gender-specific types of offending/victimization and experiences (e.g., Browne et al. 1999), and pathways into crime (e.g., for research on victimization to offending and pathway to incarceration, see Gilfus 1992, 2002; Saar et al. 2015) as well as struggles in the system (Owen 1998). Such research has ushered in gender-specific programming and responses (e.g., Covington and Bloom 2007; Karp 2010). In addition to acknowledging differences between men and women and recognizing the context and quality of their experiences, feminist perspectives emphasized variation within women and within men via intersectional approaches that pay close attention to how gender intersects with race, class, and other variables.

Understanding Sex, Gender, and Crime

From the time one is born, we hear “it’s a boy!” or “it’s a girl!” We color-code the lives of these children (e.g., blue = boys, while pink = girls, although history tells us that this has not always been the case) and mold the little human beings into culturally appropriate roles with certain toys and presents (e.g., dolls for girls and trucks for boys). As they grow, we police their behaviors and bodies (e.g., certain behaviors like cursing, sexual exploration, etc. may be called out for being unladylike, while similar acts may be encouraged and reinforced for the opposite sex),
which molds expectations that are then evidenced in social systems. Thus, early childhood development, socialization experiences, and interactions with institutions in later years affect the internalization of social mores and externalization of behaviors that shape systems. Social arrangements and processes of everyday life create, reinforce, and maintain gender differences, translating into inequalities that are observed in social, political, economic, and other realms.

In Western societies like the United States, there is a tendency to connect masculinity to males and femininity to females. However, this describes cis-gender persons only, representing two combination possibilities of many. The reality is that connections are complex. Sex is a biological and physiological category that primarily organizes people according to sexual and reproductive organs, hormones, and chromosomes – resulting in sex assignment as male, female, or intersex (although it actually gets even more complicated than this, as noted by various scientists). Gender, in contrast, refers to attributes, representations, or expressions that a society or culture equates or associates with being “masculine” or “feminine”; one’s gender can be masculine, feminine, genderqueer, non-binary, or fluid. Gender identity refers to how one perceives themselves to align with their sex (see Killerman 2013 for the Genderbread Person that nicely illustrates these and other concepts). Some cultures recognize a third gender (e.g., the Hijras of India – see Nanda 1986), and others advocate for a reconceptualization of the concept to recognize fluidity. Nevertheless, through the process of gender socialization, many are taught to behave in accordance with an assigned gender, based on sex assigned at birth.

When it comes to explaining criminal behavior, it is well-established in criminological research that males perpetrate more crime than females.
As found in nearly all types of data (official and self-reported), men have a greater likelihood of engaging in crime than their counterparts (with a few notable exceptions). The gender gap is even more apparent when it comes to violence. Men are disproportionately responsible for killing, raping, severely injuring and harming others (this does not mean all men are violent, but it does show us most violent people are men), and repeat offenses, while women are more responsible for “less serious” non-violent crimes such as welfare fraud, shoplifting, and prostitution. Thus, women’s patterns of offending are markedly different from men’s (Chesney-Lind and Pasko 2004), which is important. Often the crimes are associated with traditional gender roles and expectations. While men are taught to assert their strength and virility in patriarchal cultures (thereby contributing to violence via displays of machismo), women are expected to be nurturers, yet they are also overly sexualized and treated as commodities (contributing to survival crimes to support children/selves in climates marked by the feminization of poverty) (Chesney-Lind 1986; Morash 2006). Yet sometimes they too violate these roles.

Gender essentialist views hold that androgens like testosterone or physical size are responsible for behavior and a desire to assert dominance over another. While biological influences exist, social factors also explain behavior through differential socialization. Sociologists, for instance, have long argued that gender is more of a social and cultural performance than a biological fact. Since the 1980s, gender has been seen as a type of accomplishment (West and Zimmerman 1987). Women and
men “do gender” in response to normative beliefs about femininity and masculinity (e.g., hegemonic masculinity and emphasized femininity push culturally informed ideals – see Connell 1987). For instance, crime is a way that men and boys do gender, a contemplated way of accomplishing masculinity (e.g., see Messerschmidt 1997). Gender ideals are reinforced and reproduced in institutional systems like family, school, and work, which influences individual behavior and structures of society. Gender is therefore an activity that is learned based on our interactions with others in social systems, and this shapes how we present ourselves and reflects the systems we live in. In other words, it is a macro-level social structure that impacts individual, relational, and institutional dimensions of society (Risman 2004), and this enhances our understanding of criminality (Flavin 2001).

Gender relations are influenced by constructions of masculinity and femininity that are organizing principles of social life inherent in patriarchal societies. The shaping of gender through socialization and culture and its expression creates social arrangements that produce inequality and create social divides. This, in turn, serves to maintain inequalities. Acceptable behaviors and attitudes are socially constructed and prescribed in cultures (yet often leave out categories like intersex), which creates hierarchies of those superior versus those inferior, those powerful and those powerless, etc., thereby perpetuating inequalities. Stereotypical views dictate what is commonly learned and expected, with men and women placed at opposite ends: men are taught to be strong while women are considered weak, men are active while women are passive, men are soul while women are bodies, and so on (Hirdmann 1996). Such a split has consequences that impact the way individuals, institutions, and society operate.
Patterned gender relations turn into systems, or “the sum of various complicated and interacting actions, practices, ideas, processes, etc.” (Hirdmann 1996, p. 8). In turn, systems structure behavior. Yet we also now know that there is a range of masculinities, femininities, and gender expressions. In short, progress has been made from a time when simple biological explanations of sex ruled thinking on behavior to the present day, where more complex actualities are realized: from overgeneralized behavioral views (e.g., “boys will be boys”) to understanding the complexity of social factors that shape gendered behavior and how interactions with and expectations of others influence performances that are tied into inequalities. Gender is now understood as a multidimensional construct (Risman 2004). Nonetheless, popular conceptions about gender lead one to believe that men and women are very different – men are from Mars and women are from Venus – despite research suggesting that men and women are actually very much alike (Kimmel 2014). Observed differences are typically the result of gender inequality, not the cause, and serve as a type of social control that maintains formal and informal systems. By being defined as different from men in a polar way, women have been stigmatized as inferior and therefore denied the broad spectrum of rights afforded to men (Naffine 1997), contributing to their disempowerment in patriarchal systems. Challenges by subordinate groups seeking equal rights have occurred time and time again, yet they result in backlash from members of dominant groups. Across the world, power differentials influence policy. Men’s violence against women is tolerated and prevalent in many patriarchal societies (Krug et al. 2013; WHO 2013) while

31

Feminist Theories in Criminology and the Application to Cybercrimes

629

women’s deviation from gender roles is subject to punishment. For instance, in Russia, a country where certain types of domestic violence were recently decriminalized, a woman may be more likely to go to prison for posting a meme suggesting that women retaliate against men’s violence than a man who actually engages in domestic abuse (Denejkina 2018). Rather than being treated as a crime, domestic violence is viewed as a private matter subject to fines instead of intervention and criminal sanctions. Likewise, Saudi Arabia recently granted women the right to drive, yet human rights groups have repeatedly noted that female activists are being tortured and harassed for speaking up (Smith-Spark 2018). Feminist criminology has become global, recognizing patterns of subordination and oppression shaped by gender socialization and gender structures.

Feminist Movements

Feminist movements and ideologies have concentrated on equal rights. While some persons may say “I’m not a feminist because I believe men and women should have equal rights,” or “I’m a humanist,” the irony is that they are indeed feminists, as feminism strives to promote equality in social, political, economic, personal, and other realms. It is not to say that everyone should be alike but rather that everyone should have the ability to choose a path rather than be relegated to one. By toning down the language, we make the mistake of ignoring our troubled history with women and failing to reflect on our culture; movements developed in response to discrimination and unfair treatment from a time when women were deemed property and marital rape and domestic violence were legal. Still, we have a long way to go, as this is not the case everywhere, and even places that have granted freedoms and protections experience high rates of gender violence and social inequities. Feminist social movements came about to rebel against traditional gender norms and eliminate sexism in patriarchal cultures, thereby advocating for equality. Legal rights were fought for during a time when females were viewed only as daughters and wives of men, not autonomous persons (e.g., the right to vote, own property, and have access to education, employment, etc.). Activists questioned sex roles and pushed for social change whereby women could be treated as human beings with the same rights and opportunities as men. Although there are many early contributions to feminism and feminist thought, feminism has been viewed chronologically in waves (see Krolokke and Sorensen 2005). In the United States, the first wave of feminism is associated with women’s suffrage.
Beginning in the nineteenth century around the time of the Seneca Falls convention in 1848 and continuing into the early part of the twentieth, women and allies gathered to discuss the social conditions and civil rights of women, integrating this with the abolitionist movement. They drafted a declaration of independence for women, modeled after the existing Declaration of Independence, to bring attention to grievances and resolutions. Elizabeth Cady Stanton, Lucretia Mott, Sojourner Truth, Maria Stewart, Frances E. W. Harper, Susan B. Anthony, Jane Addams, Ida Wells-Barnett, Alice Paul, Margaret Sanger, and male allies like John Stuart Mill are some
of those associated with women’s suffrage (and the larger movement). Many marched, protested, and fought for the right to vote. Some were subjected to violence. Ultimately, this led to women winning the right to vote in 1920. Despite the significant work by women of color, the first wave became viewed as a movement for white women characterized by racial and class animus. White femininity coevolved alongside white masculinity, leaving out other persons. The movement was largely by and for white middle- and upper-class women, some of whom argued that their vote was needed to counteract black voting power after the 15th Amendment granted African-American men the right to vote in 1870 (some even prohibited African-American women from participating alongside them in demonstrations). This conservative type of feminism supported the interests of a few, limiting the empowerment of all women. It should come as no surprise, then, that women’s enfranchisement yielded little social change and that feminism lost much of its glamor after women won the right to vote. While “feminism” was in use, there were negative connotations associated with the term, some of which were a by-product of the treatment of persons of color during the first wave and some of which were propagated by men in the male-dominated mass media who discouraged women from self-identifying as such. The second wave of feminism sought to address women’s oppression and free women from rigid and oppressive gender roles. Commonly referred to as women’s liberation, this wave took place in the 1960s through the 1980s, with a spotlight on sexual politics and the sexist treatment of women that valued looks (e.g., the Miss America pageant) more than abilities. The 1960s and 1970s marked a key time for the emergence of feminist thought; Betty Friedan’s The Feminine Mystique (1963) described the dissatisfaction many women felt in society post-World War II and the longing for more than marriage and housework.
During this wave, women called for the ability to exist in society without government interference or restrictions and with rights that protected them from men’s violence, thereby allowing them to thrive. Betty Friedan, Gloria Steinem, and Susan Brownmiller are among those well known for critically assessing gender norms and sexist power structures, speaking of women’s experiences, and advocating for women’s rights and economic as well as sexual freedom. Activists called for an end to discrimination and for assimilation into larger society. More women joined the workforce, female enrollment in institutions of higher learning grew, and reproductive rights (e.g., contraception and birth control) were gained for the first time in history, allowing women to be independent and career-oriented. It was during this time that people began to view women beyond the roles of mothers, wives, and daughters and that we started treating domestic and sexual violence more seriously. Yet like the first wave, women who were most victimized by structural systems of inequality were left out of the conversation and still largely invisible (see Davis 2011; hooks 1984). The third wave of feminism continued from the 1980s into the twenty-first century. This wave, associated with intersectionality and empowerment (as well as Riot Grrrl, the punk movement that defied gender stereotypes while simultaneously tackling issues like sexual assault, dating violence, and harassment), saw complexities in the lived experiences of women and girls. It recognized diversity in
femaleness, sexuality, and existence. It questioned existing power structures and worked to strengthen support for those oppressed worldwide. Seminal works, such as From Margin to Center by bell hooks, called out the double discrimination against minority women (i.e., sexism and racism). Rather than looking at a single characteristic such as gender when discussing women’s experiences of oppression, it is critical to consider other factors that comprise one’s social identity. A black woman has different experiences than a white woman, and compounded forms of discrimination can have a multiplicative effect that makes experiences of oppression substantially different than if facing one form alone. For example, sexism can interact with racism, classism, ableism, and other factors to alter interactions and experiences. The matrix of oppression (Collins 2000) illustrates this and helps us better understand victimization, offending, and justice system responses (also, see Lynn 2014 for a discussion of triple oppression, popularized by Claudia Jones; King 1995). Thus, this wave sought to be inclusive of all experiences by recognizing variation within and between groups. Judith Butler, Angela Davis, Audre Lorde, Gloria Anzaldúa, and many others made invaluable contributions here. In addition to recognizing the importance of social identity, this wave also brought about a poststructuralist interpretation of gender and sexuality, seeing sex and gender outside of the binary, dualistic, and mutually exclusive categories that have historically existed and calling for acceptance of all to transcend boundaries and embrace diversity in all its beautiful forms.
Some say that we are still in the third wave, yet others argue that we are in a fourth, connecting through technology (e.g., see Blevins 2018) and continuously examining the status of women and men worldwide (e.g., think of the #MeToo movement, the work of men like Terry Crews and Jackson Katz (2013) in engaging men to end men’s violence against others, survivor criminology, etc.). Nevertheless, the need to analyze positionality along with social conditions and structures (including technological ones) remains present.

Feminist Theory/Theories

The contributions of feminist perspectives to criminology are bountiful yet vary in terms of their propositions, diversity, and applications (see Belknap 1996; Chesney-Lind 2006; Daly and Chesney-Lind 1988; DeKeseredy 2011; Renzetti 2012; Miller and Mullins 2006; Simpson 1989). Those who don’t understand feminist theory may incorrectly equate it with something concerned with women only, but it is much broader. In its simplest definition, feminist theory is a perspective that focuses on sex and gender and how they fundamentally shape the human experience. It is a way of seeing the world and a tool by which one can study it. Feminist theory includes theories that place sex and gender at the forefront of analysis. Such theories may examine sex, gender, sex/gender roles, gendered performances, gendered relations in society, and/or gender structure along with gender inequalities and challenges. According to Gelsthorpe (2002), feminist perspectives view sex/gender as central organizing principles of social life, recognize processes that construct realities, and
consider how power dynamics shape social relations. Generally speaking, feminist perspectives examine social stratification and describe cultural norms that support or contribute to men’s greater power and influence and women’s less advantaged position. It is not a single theory but rather a constellation of ideas using social, political, and other discourse to critically assess the roles of those in society. Hence, perhaps more appropriately, it is also referred to as feminist theories (plural). When men and women are not treated as equals or when traits that are masculine or feminine are not equally valued, this is a form of discrimination known as sexism, which produces and sustains inequality. Feminist criminologists have unearthed information regarding the pervasiveness of violence against women, mostly by men, including but not limited to domestic/dating violence, sexual assault, street harassment, and cyber victimization. Although violence against women is common in nearly all societies, a few exceptions have been noted in pre-industrial societies (see Counts et al. 1992), which signals that social relations can be organized to lower rates of violence against women. Nevertheless, across most cultures, traditional sex/gender socialization posits that men have a right to authority in their families and over their female partners (Anderson and Umberson 2001). When women are cast into roles that significantly differ from men’s, there are implications. Early feminist work examined differences between men and women in terms of power and oppression (e.g., Connell 1987; Pence and Paymar 1993), along with the resulting system of patriarchy (Johnson 1997).
We know that the emancipation of women alone did not greatly impact rates of offending, but the work of early researchers allowed us to see that crime was gendered: females often committed minor, non-violent crimes such as shoplifting, check forgery, and prostitution; the gender gap for violence did not narrow; and agents of social control responded to females’ delinquency, including status offenses, at rates higher than for male counterparts due to paternalistic views and a preoccupation with female sexuality (Chesney-Lind 1989). Race and other variables also impacted practice. Importantly, we have come to understand that men too have gender and do gender, which has placed men’s violence against others in a new light. Despite advances, sexist attitudes and internalized beliefs regarding the roles of men and women remain, with some embracing traditional gender roles and arguing that men should hold power over women: this idea underlies benevolent sexism (Glick and Fiske 2001). Hostile sexism is more direct but works in tandem with benevolent sexism to maintain gender inequality. Further, the warmth of benevolent sexism serves to mask the ideological systems that uphold such inequality (see Hopkins-Doyle et al. 2018), thereby continuing the harm. Men, women, and others partake in such systems and behavior. In terms of solutions, simply adding women to the equation (e.g., policing, courts, corrections, etc.) does not solve these issues, as we see that they acclimate to masculinized environments (e.g., Hautzinger 1998). Feminist criminology emanates from feminist thought and has yielded complex investigations into various gender issues and solutions. Jody Miller (2014) describes feminist criminology as “a body of research and theory that situates the study of crime and criminal justice within a complex understanding that the social world is systematically shaped by relations of sex and gender” (p. 15), while Claire Renzetti
(2012) extends this, stating that feminist criminology “is a paradigm that studies and explains criminal offending and victimization as well as institutionalized responses to these problems as fundamentally gendered and which emphasizes the importance of using the scientific knowledge we acquire from our study of these issues to influence the creation and implementation of public policy that will alleviate oppression and contribute to more equitable social relations and social structures” (p. 131). Adler (1975), Simon (1975), and Smart (1976) made noteworthy contributions to early work on women as offenders and as victims. Others also impacted the field by challenging its androcentricity (Daly and Chesney-Lind 1988), looking at gender disparities as a by-product of the unequal power distribution between men and women in a capitalist society (e.g., Messerschmidt 1986), and calling for engagement via activist criminology (e.g., Belknap 2015). In much the same way that our understanding of sex and gender has grown, so too has theory and knowledge in the discipline. Burgess-Proctor (2006) and others have summarized existing feminist perspectives and highlighted the evolution of theory from the “sameness” of women to “difference” models that noted variation within women. Examples of the wide array of feminist criminological theories include:

– Traditional or conservative – posits that gender inequality is caused by biological and physiological differences that impact social behavior (e.g., males’ greater size and strength = more aggression), rather than social factors; some psychological explanations have also been included.
– Liberal or mainstream – associated with early waves of feminism (as well as the work of Freda Adler and Rita Simon), which viewed gender role socialization as well as the emancipation of women and women’s entry into the workforce as opening doors to crime, in addition to sharing a belief in equal opportunity for working women. Changes in female offending were considered to be connected with female status and opportunity, with the struggle for social and economic equality relating to behavior. Within this view, subgroups also exist.
– Marxist – a major feature is that society is constructed of relations people form through work, which creates class divisions that affect other institutions like family. Men earn wages, while women’s labor in the home is uncompensated and devalued, thereby contributing to the feminization of poverty and rendering women subordinate and powerless. Marxist feminist criminology explores gender in labor and capitalist societies to understand how this is a source of oppression that shapes status, exploitation, and gendered interactions (see Schwendinger and Schwendinger 1983 for a discussion of the corrosive nature of capitalism).
– Radical – examines the role of patriarchy, a male-oriented and male-dominated system, in constructing the social world and argues that gender inequality is the most prominent source of oppression, as it is found across various class structures. Attention is given to power and control (or lack thereof), with focus on male supremacy and female subordination evidenced by crimes of gender violence (e.g., sexual assault, intimate partner violence, and other violence against women – see MacKinnon 1989) and, respectively, gendered institutional responses that treat men’s violence against women as less serious than men’s violence against men. This perspective points to the gender privileges men enjoy over women and advocates for dismantling the oppressive system in favor of a more egalitarian one.
– Socialist – draws from radical and Marxist views to examine the interaction between gender inequality and class inequality, along with the patriarchal capitalist structures that impact victimization, offending, and systems. This view observed females coming from pockets of poverty who engaged in traditionally female offenses like sex work or shoplifting, and noticed men who occupy powerful economic positions getting away with or receiving less scrutiny for corporate, environmental, and other crimes, including egregious human rights abuses detrimental to many (see Messerschmidt 1986). This perspective also explores variation within men by highlighting the differential treatment of street crimes and crimes of the higher-ups, reflecting racist and classist systems.
– Intersectional/multiracial/multicultural – social systems are complicated, and multiple inequality “-isms” (e.g., sexism, racism, classism, etc.) can be present at the same time, which fundamentally impacts and compounds experiences of oppression. Intersectionality (see Crenshaw 1989, 1991a, b) is a way of understanding social relations by looking at several identity markers (e.g., gender, race, class, etc.; also see Andersen and Collins 1994) and axes of oppression. One’s social location or positionality helps us understand advantage/disadvantage (e.g., Collins 1990), with multiple oppressions being interactive rather than additive. A woman who is white, heterosexual, and middle-class has experiences that greatly differ from those of a woman who is African-American, lesbian, and poor. While the first woman may experience sexism, she does not experience racism, homophobia, or classism in the ways the second does. Intersectional perspectives value the experiences of all persons and bring attention to those most overlooked and harmed. Such theories include critical race, black feminist (e.g., Potter 2006), Latinx (e.g., Lopez and Pasko 2017), and other perspectives that make racial and ethnic differences critical to analyses involving gender and the social world. Women’s victimization, arrest rates, and incarceration, like men’s, vary by race and class; persons of color who are poor are more likely to be victimized, and they are also more likely to be arrested and sentenced than white counterparts or wealthy men, so intersectional perspectives are essential to the field of criminology.

Of course, there are limitations and shortcomings to all, yet each has made contributions to criminology. Important to note, the list offers only a sample of such theories and is by no means exhaustive. Many others exist (e.g., left realism, postmodern, etc. – see Morash 2017; Renzetti 2013); also, some theories overlap, and some continue to evolve. In a discussion of feminist perspectives, Sokoloff and Dupont (2005) bring attention to the pressing need for studying micro- and macro-level factors. First, they highlight Mann and Grimes’ (2001) multicultural feminism, which calls for examination of the intersections of gender, race, and class to learn from and give voice to those who have been marginalized; second, they emphasize Andersen and Collins’ (2001) work focusing on broad systems of power and privilege affecting
those residing within. Contemporary feminist approaches, therefore, understand that it is imperative to study gender, along with intersecting variables, from both individual and structural perspectives to develop more comprehensive models that facilitate knowledge. Feminism seeks to learn people’s truths and advocates for social, political, economic, and legal equality to promote a more just world. It acknowledges complexity and diversity while striving to do better and be better, particularly in terms of promoting equality for those who have been marginalized, oppressed, subjugated, or harmed due to unequal power distributions affecting human lives. Feminist methods also offer a sound and more empathetic approach than traditional ones (Renzetti 2013) and should be considered.

Feminist Theory’s Application to Contemporary Criminological Issues

Feminist theories have much to offer in terms of understanding offenders and victims, along with the respective power dynamics and inequalities that exist between them, due to their focus on gender arrangements produced by social conditions. These theories also help us recognize the situational context, social factors, and cultural factors that shape criminal events. Motivations, target choices, and consequences can be better understood by applying this framework. Feminist theories are therefore essential in the field of criminology and can be applied to technocrimes. Technology (e.g., smartphones, tablets, and computers that provide access to the Internet and social media) has become embedded in human interactions across many societies, and our dependence on it is rapidly growing. Accordingly, it is used for instrumental and expressive purposes, which include the perpetration of crime to gain/maintain status, for thrills, to react to failure or social identity threats, or otherwise to demonstrate power over others. It has become a way for people to “do gender.” Marganski (2018) describes the multitude of ways by which feminist theory informs technocrimes, including understanding the facilitation and perpetration of sex-related offenses (e.g., child pornography, revenge pornography and image-based abuse, snuff films, sex trafficking, prostitution, etc.), online harassment (e.g., rape and death threats, doxing, workplace harassment, cybermob attacks, etc.), cyberstalking (e.g., electronic surveillance, GPS tracking, etc.), and other modern-day transgressions like the planning of mass murder events through electronic means. Most often, these acts are perpetrated by persons who, on the basis of social identity, are in some privileged position(s) and harm or exploit those from vulnerable or marginalized groups (e.g., younger persons, females, economically disadvantaged groups) who have less power.
Research by Hall and Hearn (2019) supports the notion of victim-offender differentials. In their investigation of the non-consensual online distribution of explicit images of current or former partners and others by hackers, ex-lovers, etc., also referred to as “revenge pornography,” they found that the majority of offenses (over 90%) were perpetrated by men against former female intimate
partners (Hall and Hearn 2019). Through discourse analysis, they also showed that this kind of online gender violence and abuse, exhibited on revenge pornography sites such as MyEx.com, reflected elements of manhood by summoning stereotypical characteristics of masculinity (i.e., hypermasculinity). Thus, men’s gender practices inform gender violence. Violating others through misogynistic vitriol, degradation, humiliation, etc., for instance, can be viewed as an act of revenge – a strategy used to regain power by those who feel wronged in some way – or as a means of gaining control over others so that they comply with demands. It can also be symbolic of sexuality and sexual behavior in that the practice over-sexualizes women, and information and communication technologies now extend traditional in-person encounters into a multitude of ways whereby others can participate and “get off” on shared content. It also holds serious consequences (Bates 2017). In all, perpetrators as well as consumers of non-consensual pornography are demonstrating ways by which they “do gender” with little to no concern for the persons being gazed upon and harmed. The most common themes for perpetration observed in the comments sections centered on power, control, and heterosexuality, with much victim blaming that sought to absolve offenders of their crimes and compensate for their perceived emasculation. Other research highlights themes of victim blaming that contribute to unwelcoming and hostile environments (e.g., Lumsden and Morgan 2017; Zaleski et al. 2016). For example, Zaleski and colleagues (2016) examined comment sections of newspaper article postings on social media forums that pertained to cases of rape and sexual assault. Using naturalistic observation, they explored attitudes about rape, rapists, and victims/survivors and found that victim blaming was a highly present and recurring theme, regardless of whether the case represented popular culture or not.
The credibility of victims and the circumstances surrounding their victimization were often in question, and victims were deemed responsible for perpetrators’ behaviors. Further, all but one of the articles had perpetrator support present, which included sharing personal stories praising the offender, degrading victims, lamenting gender standards, or raising the issue of false accusations, in addition to intense ridicule of victims and events. Such tactics reflect abuser strategies to evade accountability (e.g., DARVO, which refers to the reaction of perpetrators confronted with their wrongdoing whereby they deny the incident, attack the victim, and reverse the roles of victim and offender – see Freyd 1997; the fact that these tactics are observed in collective online thought speaks to an online culture that is particularly frightening for victims). Gendered vitriol in the form of rape and death threats as well as body shaming and hyper-sexualized insults, also referred to as “e-bile” (see Jane 2014), has been well documented online. Such vulgar displays of power are widespread and prolific; misogynistic harassment occurs countless times a day all across the globe. This kind of violence is symbolic, too, in that it targets women and girls as well as members of underrepresented groups in efforts to “keep them in their place” (see Lumsden and Morgan 2017). This behavior is often cold and calculated, signaling to those who are vulnerable that they are not welcome in the cybersphere. Photoshopped images of graphic violence, meme image harassment, sketches of sexual violence, and other
strategies have been used as means of communicating to others that they are the dominant rulers of this virtual land. Although some might argue that these behaviors constitute trolling (i.e., posting deliberately offensive messages or comments to evoke an inflammatory response in others) and that trolling is relatively innocuous and should be brushed off, we know that “gendertrolling” is a serious issue driven by malevolence (Mantilla 2015). Therefore, placing the burden on victims to police and modify their own behavior online (as well as in person) to avoid abuse, while ignoring harmful perpetrator behavior, is not only highly problematic but also highly dangerous. Beyond inflicting violence, normalizing and minimizing the harm, blaming victims for perpetrators’ behaviors, etc., cybersexism is manifested in some less obvious ways. For instance, Ringrose and colleagues (2012) highlighted that, while some sexual content sharing may initially appear consensual between opposite-sex peers in exchanges like sexting, the acts are often coerced through peer pressure and harassment, which signals that there are power imbalances impacting decisions and behaviors. In this study, which focused on a sample of children rather than adults, the researchers found that girls (like women) were disproportionately harmed, while boys (like men) represented the majority of perpetrators. In addition to being harmed by opposite-sex peers, girls were also hurt by societal practices – a type of double victimization. The gender dynamics of peer groups were influenced by larger cultural practices relating to gender norm expectations that encouraged boys to sexualize girls (and rewarded them for it) while shaming girls who engaged in any sexualized behavior or were accused of it.
This, in turn, “creates gender specific risks where girls are unable to openly speak about sexual activities and practices, while boys are at risk of peer exclusion if they do not brag about sexual experiences” (p. 7). Environments saturated with patriarchal gender ideologies are therefore observed early in life, which fosters problematic encounters that can be detrimental to persons in non-dominant groups in present and future situations. Technology also amplifies the consequences as it can be used to reach anyone at anytime from anywhere and mimic these patterns. The growing recognition of electronic aggression/transgressions and its relation to gender along with other variables should produce copious feminist research investigations across various disciplinary perspectives, yet the application of feminist theory to technocrimes is underdeveloped. Much research to date relies on tests of rational choice-oriented (e.g., Young et al. 2007) or lifestyle-routine activities theories (e.g., Holt and Bossler 2008; Marcum et al. 2010; Reyns et al. 2011). The paucity of feminist research leaves us with gaps in knowledge about modern-day crimes. By conducting interdisciplinary research that looks at micro- and macrolevel gender factors, we can better understand individual experiences, motivations, and consequences from an intersectional view while paying attention to the larger structures that shape, reinforce, and maintain inequalities. While research applying feminist theory to offenses perpetrated or facilitated via technology is still in its early stages and focuses mainly on gender violence, there have been efforts to test integrated theories, thereby modifying existing theory. In one such study, van Baak and Hayes (2018) examined social control theory and used


A. J. Marganski

a feminist framework to assess correlates of cyberstalking. In their work, the Gender Stereotyping Scale (Foshee et al. 2001) and the Adversarial Heterosexual Beliefs Scale (Lonsway and Fitzgerald 1995) were used to measure beliefs in gender stereotypes and perceptions about the nature of relations between men and women. Contrary to expectations, findings indicated that those who held stronger gender stereotypes had lower rates of victimization. No significant differences emerged for perceptions about the nature of gendered relations, but the study did find gender differences in victimization experiences, with females reporting victimization at higher rates than male counterparts, indicating that this warrants further study. Such studies are not without conceptual and operational limitations as related to feminist work, yet they underscore the importance of gender. Future research should continue to test variations of integrated theory, such as combining feminist theory with techniques of neutralization in light of previous discussions around victim blaming, or gender role socialization with strain theory, among countless other combinations.

Feminist thought is a powerful tool that can be used to generate knowledge as well as to consider the implications of existing research. For example, a cross-cultural study examined deviant dating behaviors and found that being younger, being female, being American, having “hooked up,” and having sexted were related to experiencing socially interactive partner aggression victimization (Marganski and Fauth 2013); nationality was one of the most robust predictors in the victimization model, and gender differences emerged, making these important variables to study.
In the perpetration model, being female, having “hooked up,” and having sexted increased socially interactive partner aggression perpetration (interestingly, van Baak and Hayes 2018 also found that females reported higher rates of victimization and perpetration than male counterparts). While lifestyle theory was central in explaining how engagement in sexualized behaviors relates to partner aggression, the researchers brought attention to how sex role socialization may account for some behaviors, context, and observed differences. Sometimes, the connection to feminist work in research may not be explicit, yet it is present. A study on cyberstalking by Reyns et al. (2012) tested a cyber-lifestyle-routine activities theory framework. They discovered that victims were mostly female, nonwhite, nonheterosexual, and non-single, thereby signaling the need for future research that is intersectional in its approach. In all, despite some mixed or inconclusive findings, research points to the importance of studying gender and intersecting characteristics for online harassment (e.g., Finn 2004), cyberstalking (e.g., Nobles et al. 2014; Reyns et al. 2012; Jerin and Dolinsky 2001; van Baak and Hayes 2018), and other transgressions that occur in and outside of intimate relationships. A feminist lens can be applied to a wide array of technocrimes beyond gender violence as well. Broadly speaking, crimes that occur online (e.g., identity theft, fraud, embezzlement, bullying or harassment, etc.) are frequently those that work toward improving one’s financial or social position, and this is socially shaped by culturally defined goals and expectations for gender. Such offenses can yield not only income and bragging rights for those who enter such territory but also status,

31

Feminist Theories in Criminology and the Application to Cybercrimes


especially among like-minded peers. Along these lines, researchers have argued that, in contrast to existing theories and beliefs on cybercrimes, gender role socialization may play a critical role and should be more closely examined as related to motivations underlying “wizards and warriors of tech” (Schell and Holt 2009). Sex and gender are key factors in technocrimes that need to be made more visible. Research suggests that single young men generally are far more likely than counterparts to engage in computer offenses such as hacking (e.g., Donner 2016; Young et al. 2007), and they do so to demonstrate masculinity or “manliness” via gender-informed power moves that exercise authority, control, aggression, and/or domination over others (e.g., Taylor 2003; Sterling 1994; Yar 2005). This, in turn, creates and maintains masculinized spaces, which then influence the subsequent actions of persons in that space. Feminist theory helps analyze power dynamics, struggles, and inequalities that create or accelerate motivation for criminal events. It not only places a spotlight on offender and victim characteristics, but it also considers the cultural and historical context underlying activities and events. In some online communities, highly charged gendered language is used to identify acts of bravery, defiance, violence, etc., which can be understood from a feminist lens. For example, “cybersoldiers” are persons who engage in hacking for political and social objectives (see Denning 2011), which may include hacktivism (i.e., hacking combined with activism), electronic jihad (i.e., cyberattacks conducted on behalf of al-Qaida and the global jihadist movement), and patriotic hacking (i.e., hacking by citizens to defend one’s homeland or country). The term “soldier” denotes strength, toughness, and protection, among other things, all of which are associated with masculinity, and these acts of cyber-warfare (e.g., defacement of websites, denial-of-service attacks, etc.)
represent manifestations of powerlessness whereby persons who feel wronged, silenced, or inadequate in some way are putting up a fight and responding with attacks designed to gain power or status, protest against others, or permit voices to be heard. Thus, language is also important. Further, while men comprise most of these offenders, this may in part be due to the marginalization or suppression of women online, whether subconsciously through gender-blind oblivion or intentionally due to sexism and misogynistic will (see Tanczer 2016). A feminist framework contributes to an understanding of gender dynamics in today’s complicated world. Investigations into the Gamergate controversy, Microsoft’s AI chatbot, or countless other moments provide evidence of the ways in which persons from privileged groups harm others, most notably women and persons from underrepresented groups. To expand, while many may speak about Gamergate as an issue involving journalism ethics, we have learned about the online harassment and vitriol that Zoë Quinn, Brianna Wu, Anita Sarkeesian, and other feminist bloggers/gamers received from some men. Much of the harassment was sexually charged and extremely graphic in nature, designed to silence them and keep them from critiquing or challenging existing structures. Hate campaigns and cybermob attacks ensued, which resulted not only in victim fear and threats to their personal safety but also in cyber-vandalism (e.g., denial-of-service attacks, spamming, etc.), defamation of character online, and continued cyber-harassment (e.g., Sarkeesian 2012). As for Microsoft’s AI Twitter chatbot named Tay, which learned language


through interacting with others and was shut down in less than 24 hours, we see it is not only sexism we have to worry about, but also racism, homophobia, and other hateful views that close spaces for members of these underrepresented identity groups. Many persons who are vulnerable or oppressed in some way face and experience harassment in places/spaces dominated by men who are ready to inflict violence to assert their power or gain control and maintain their “safe space” where misogyny and discrimination can run amok. In recent years, groups that focus on aspects of male supremacy (e.g., A Voice for Men, the Red Pillers, Men Going Their Own Way, involuntary celibates (incels), and pickup artist communities like Return of Kings) have been identified as active hate groups by the Southern Poverty Law Center for advancing extremely harmful ideologies that advocate for the subjugation and harm of women and girls (Male Supremacy 2018). Not surprisingly, this can be viewed as a type of backlash to gender progress (Faludi 1991), with persons from historically privileged groups advocating for violence against non-dominant persons through practices such as femicide and rape. Misinformation, pseudoscience, distorted religion, and hate have also been put forth to justify violence against women, but much reflects socially constructed thought that has been deeply woven into the fabric of our culture. While some may argue that men holding extreme hostility toward women and girls represent a few bad apples (or trolls), we see evidence that suggests otherwise; groups dedicated to misogyny have memberships in the thousands (Kini 2018), and such violence is systemic. Popular misogyny and related discriminatory practices (e.g., sexism, racism, homophobia, etc.) are networked, promoted, and largely unchecked, which has led to the online radicalization of wayward minds. Deeply disturbing and insidious views are more present than ever before in our online landscapes.
Such cyber-talk is not without consequences; it has been linked by news media to various offline tragedies including but not limited to intimate partner violence (Ng 2019), gang rape (Mortimer 2016), and even several mass shooting events (Hudson 2018; North 2018). The Center on Extremism (2018) identified misogyny as a key element of white supremacy, thereby connecting persons of historically dominant groups to transgressions against those with lesser power (i.e., women, children, persons of color, etc.). White male supremacists commonly blame their misery and woes on women, minority groups, and others who differ from them. They demean, devalue, and strip outsiders of their humanity in a struggle to maintain dominance and show how “manly” or tough they are. This hypermasculine performance (a display of hegemonic masculinity, one of many masculinities that exist but the one that prevails; see Connell and Messerschmidt 2005) is a response to perceived victimhood and identity threat. The increasingly popular narrative of white men being the oppressed group flourishes in the Manosphere, which reinforces social divides rather than seeking remedies to close gaps. The rage expressed by some men against women (and others) in these forums is commonly misinformed and misdirected; their anger would be better directed at structures such as patriarchy and capitalism, which dictate the restricted ways they are allowed to be and what they should strive for. Interestingly, however, research has found that men’s exclusion of and


discrimination against women reinforce male solidarity and the belief in their dominance (Kim 2018). Men perform a certain kind of masculinity for other men by engaging in behaviors (including technologically facilitated and perpetrated violence against women) that offer a reward: a sense of superiority. In addition to holding rigid ideas about masculinity that deprive them of various emotions/behaviors, men in misogynistic groups often attack others who advocate for equality. Operating under the guise of “men’s rights” or gender neutrality, some members taunt “social justice warriors,” while others advocate for cuts to services, funding, policies, etc. designed to protect and help those who have faced trauma rather than seeking to increase support (see Dragiewicz 2008). Interestingly, it is thanks to feminists, not meninists or male supremacists, that we know about male victims/survivors of abuse, partner violence, and other crimes and have services for them. Importantly, it is not only these far-right groups who engage in misogyny but also others in contemporary society. This normalizes violence against women as something ordinary and expected. It is no wonder that one in three women worldwide experiences an act of physical and/or sexual violence in her lifetime (WHO 2013) and that we continue to see offenses ranging from street harassment to intimate partner and sexual violence to gender technocrimes run rampant despite years of pushing for change. Threats to social identity or feelings of failure when one does not live up to standard (i.e., hegemonic) expectations can result in gender role stress. For men, this can lead to lashing out in predetermined and culturally prescribed ways that have been communicated across interactions with other people and institutions throughout life.
Toxic masculinity, which represents repressive elements of traditional masculinity (e.g., devaluation of women, reckless use of violence and belief in it as a go-to conflict resolution strategy, etc.), is particularly harmful. This is evidenced in patriarchal-capitalist cultures, for instance, when men and boys are restricted to narrow emotional expressions, expected to sexually objectify women, and view themselves in individualistic and competitive ways. They fear being associated with anything feminine or gay, hold limited prosocial strategies for coping with challenges to their maleness, and are quick to put on a macho front (see Banet-Weiser and Miltner 2016 for a discussion of #MasculinitySoFragile). Anger is one of the few acceptable emotions, and violence is the default response for disses, slights, or failures as it aims to restore status. Further, codes exist that prevent men from getting help or speaking out and require them to act in ways counter to their well-being. In recent years, the American Psychological Association (2018) designated traditional aspects of masculinity as hazardous, noting that they contribute to poor mental health outcomes, higher rates of suicide, substance abuse, violence, and even premature death. Despite the recognition of harms, there has been pushback from those with pro-male identities who fear change. Instead of recognizing the macro-level, structural issues and attempting to change cultural codes that promote or condone violence against others, or instead of expanding on and integrating healthy, prosocial gender scripts into children’s early education, we find many “solutions” are fixated on individual-level factors or thought not to be problems at all. Others place the onus on victim/survivors


(i.e., most often, those who exhibit some disadvantages or oppressions) rather than perpetrators (i.e., most commonly, members of dominant groups) to avoid harm and navigate hostile waters. Until that changes, these dynamics will continue.

Discussion

Just as women and girls have not been equal to men and boys in offline public and private spheres, they have not been equal in virtual, online, or tech ones; the structures that rule are also the same. This is why we see the gendered patterns found in the “real world” reflected in the virtual one. Consider the experiences and treatment of victims. For example, the 2012 Steubenville rape case was rife with victim blaming. In this horrific crime, male student athletes physically and sexually violated an unconscious female multiple times, documenting the assaults via technology and then sharing evidence in the form of tweets and digital photos on social media (see Fairbairn and Spencer 2017). This secondary form of victimization was not only far-reaching but also led to the victim being targeted and attacked with harassment and threats by others for merely being the victim of a crime. The event divided a community in much the same way as the cases of Rehtaeh Parsons, Audrie Pott, Daisy Coleman, and others like them. Power differentials and how they manifest in victim treatment are interesting to study, as there are variations across cases. Cases involving male victims have seen male perpetrators with higher status who often found protection from peers and/or institutions (e.g., Jerry Sandusky’s crimes against young boys at Penn State and the silence of his superiors, the Catholic Church’s sexual abuse of boys and girls, sexual assault of men and women in the military, etc.). In many instances, we see women and children are victims and survivors of men’s abuse. This is not to say that women cannot perpetrate such offenses against others, nor is it to exclude adult male victims/survivors; it is, however, to bring common patterns to light.
Transgressions once limited to face-to-face encounters now occur without perpetrator and victim being in physical proximity to one another, which makes interpersonal technological crimes particularly concerning. Research has found that intimate partner cyber abuse, for instance, is a common victimization experience (Marganski and Melander 2018), perhaps the most common type of partner violence, and that it co-occurs with in-person victimization experiences. Likewise, research has found that victims of cyberstalking may also experience in-person stalking, as nearly one-quarter of victims reported both (Baum et al. 2009). Drawing on research into such victimization and the toll it takes, a report by the United Nations Broadband Commission for Digital Development Working Group warned that cyber violence against women and girls (e.g., cyber-harassment, cyberstalking, etc.) is a systematic societal concern with devastating consequences that too often go unrecognized and unpunished (see Tandon and Pritchard 2015). Technology-facilitated and technology-perpetrated violence violates and undermines women’s autonomy, producing inequalities. If women and girls are disproportionately experiencing victimization such as online harassment, cyberstalking,


etc. (e.g., Jerin and Dolinsky 2001; Reyns et al. 2011, 2012; Tandon and Pritchard 2015); or if they are closed out of online space due to institutionalized or hostile sexism (e.g., Taylor 2003; Zaleski et al. 2016), they may disconnect to avoid harmful experiences. Female persons make up half the population. If the majority experience cyber aggression and are also discouraged from joining online spaces, communities, or initiatives, it can be detrimental. This has the potential to impact not only their well-being but also the well-being of others including family, friends, communities, and the social world we live in (Tandon and Pritchard 2015). Technocrimes may impact females (and males) differently, especially when considering multiple markers of social identity, so intersectional approaches are essential in understanding variation in lived experiences. It is also worth noting that pathways into cyber violence perpetration may vary. For instance, a study by Ménard and Pincus (2012) found childhood sexual maltreatment predicted cyberstalking for men and women, but men who stalked had narcissistic vulnerability in interaction with sexual abuse, while women had insecure attachment and alcohol issues. Further differences may emerge when accounting for additional factors in one’s social location as well as societal and cultural factors that are frequently unaccounted for in research studies. Social roles of women differ from those of men in a patriarchal (and capitalist, white supremacist, ableist) society. Privilege blinds many in dominant groups (e.g., not only in terms of gender but also in terms of race, class, sexual orientation, religion, nationality, etc.) so that they fail to recognize and understand the ways in which individuals experience oppression or even contribute to it. Institutions operate with normative assumptions and selectively reward or punish practices. 
Further, oppressive systems (e.g., sexism, racism, classism) are interconnected (i.e., sexism is tied to racism) and relate to experiences; structural violence such as poverty and gender or racial discrimination relates to interpersonal violence (e.g., domestic violence) (see Merry 2009; Morash 2006). To add to the complexity, one can be anti-racist yet a total sexist, be against homophobia yet exhibit xenophobia, support LGBT+ rights yet oppose policies that help those living in destitute conditions, and so on. The lack of empathy for “other” experiences and recognition of their subjugation results in a lack of unity among those with little or no power, which maintains power for advantaged persons and perpetuates harm against vulnerable or already harmed persons. The coming together of dominant attributes such as gender, race, and class has allowed us to see how some individuals exert power in their privileged positions in ways that harm others and how broader structures have permitted such behavior or been inextricably linked to it. Powerful white men in prestigious occupations and positions have abused their privilege and violated countless persons who had less power, less prestige, etc. – from doctors who violated patients in their care, shaking institutions like Michigan State, the University of Southern California, and Ohio State (with later reactionary responses adding protective measures to safeguard students and to minimize institutional liability), to coaches, mentors, leaders, protectors, and entertainers from the likes of Penn State University, the Boy Scouts, the Catholic Church, law enforcement and the military, Hollywood and the film industry, etc. As we move forward with research relating to technological crimes, it


will become increasingly important to apply intersectional feminist perspectives to better understand perpetration as well as the experiences of vulnerable and marginalized persons who are at greatest risk for experiencing harassment, exploitation, and harm at the hands (or keyboards and computers) of those wielding more power in hyper-masculinized spaces. Feminist criminology is essential in the area of technocrimes. Technology-based violence is everywhere, and it can take a variety of forms – from revenge pornography and other image-based abuse to online harassment and cyberstalking to additional forms of gender violence and abuse. These offenses thrive thanks to the veil of anonymity offered by technology and the ease with which one can engage in such behavior. Further, hegemonic power structures reproduce themes online that are found in popular culture (e.g., victim blaming). While some theories shed light on why some persons are at risk (e.g., lifestyle-routine activities theory, strain theory, etc.), they may lack substantive insights into gender issues and culture. Therefore, adopting feminist frameworks has the potential to make significant contributions by complementing such research and adding new research that analyzes gender as a multidimensional concept encompassing macro- and micro-level factors that shape societal structures, systems, and the individuals within them.

Conclusion

Sex and gender are central features of social location, and social structures organize relations of inequality. For many men who are accustomed to privilege, the fear of appearing weak, powerless, or feminine can result in acts that aim to prove just how manly one is. This fear, insecurity, or threat to one’s sense of self shapes the lives of many and affects all those around them. Instead of a normative commitment to equitable gender arrangements, gender performances take place that continue to uphold existing structures that harm both women and men. The end result is that women are locked out of opportunities because of barriers built on sexism and patriarchy, and men are pressured into actions defined by them. Thankfully, feminist frameworks are more popular in criminology today than ever before (Rafter and Heidensohn 1995), and intersecting social identities are increasingly recognized in relation to crime, agency, and empowerment. Criminology has become more gender inclusive, yet issues remain (Chesney-Lind and Chagnon 2016). Only in recent years have sex, gender, and gender-specific programming been taken seriously, with recognition that women’s and girls’ lived experiences and behavior differ from those of male counterparts. Feminist advocates for system-involved women and girls emphasize the need to listen to those women and girls and to consider their specific needs in a gendered context that is mindful of other key factors including race, sexual orientation, socioeconomic standing, ability, etc. (Chesney-Lind et al. 2008; Comack 2006). Research has also questioned current approaches to crime and looked more closely at policies that have affected women (e.g., Balfour 2006). It is essential that criminology moves toward inclusion of underrepresented


groups, listens to all voices, and reconsiders strategies for “justice” to advance in the years ahead. Instead of attempting to engage in dialogue on how we as a society can improve, some people are quick to excuse, defend, or dismiss sexist, racist, homophobic, and other discriminatory rhetoric while being equally quick to vilify those who speak out against these issues. Let us not act as if there’s nothing we can do in moving forward – thereby dooming ourselves to repeat the same patterns – and let us not continue to contribute to the normalization of these kinds of violence; just because it is in music, on TV, or something you grew up with does not mean it is okay. Let us break away from old patterns and help those who are stuck. Let us recognize the humanity in others. Let us hold ourselves to higher standards. Let us work to do better and to be better. Engaging in, tolerating, or minimizing misogyny, whether online or offline (and offering limited ways to “do gender”), has very real and even deadly consequences. In order to understand sexism (as well as racism, classism, ableism, or other detrimental inequality -isms) and its relation to violence, one needs to understand power dynamics; and in order to understand power dynamics, one needs to listen to the powerless instead of disrupting their claims. We need to be attentive to patterns in the lived experiences of all while simultaneously being brave enough to recognize and confront privilege and inequalities in ways that work toward human progress. We also must stop legitimizing violence. An Ubuntu saying states that we affirm our humanity when we acknowledge that of others (Radcliffe et al. 2014). To move toward a more just society, we need to value all lives by supporting those who have been harmed, marginalized, and oppressed, rather than being concerned only with matters that affect us directly.
We cannot afford to view oppressions as competing woes; instead, we must recognize unique and overlapping struggles and work toward remedying harm rather than criminalizing persons for trauma inflicted by other persons or systems. We must unite in solidarity and reinvent “justice” in ways that are restorative and rehabilitative, empowering persons with agency while simultaneously ameliorating and redressing toxic structural inequalities. Masculinities and femininities continuously evolve, and they can change social, structural, and cultural conditions just as these conditions can change them.

Cross-References

▶ Child Sexual Exploitation: Introduction to a Global Problem
▶ Critical Criminology and Cybercrime
▶ Cyberstalking
▶ Dating and Sexual Relationships in the Age of the Internet
▶ Hate Speech in Online Spaces
▶ Image-Based Sexual Abuse: A Feminist Criminological Approach
▶ Intimate Partner Violence and the Internet: Perspectives
▶ Prostitution and Sex Work in an Online Context
▶ Revenge Pornography


▶ Sexting and Social Concerns
▶ Technology Use, Abuse, and Public Perceptions of Cybercrime
▶ The Dark Web as a Platform for Crime: An Exploration of Illicit Drug, Firearm, CSAM, and Cybercrime Markets
▶ The Rise of Sex Trafficking Online
▶ The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research

References

Adler, F. (1975). Sisters in crime: The rise of the new female criminal. New York: McGraw-Hill.
American Psychological Association. (2018). APA guidelines for psychological practice with boys and men. https://www.apa.org/about/policy/boys-men-practice-guidelines.pdf
Andersen, M., & Collins, P. H. (1994). Race, class, and gender: An anthology. Belmont: Wadsworth.
Andersen, M., & Collins, P. H. (2001). Introduction. In M. Andersen & P. H. Collins (Eds.), Race, class and gender: An anthology (4th ed., pp. 1–9). Belmont, CA: Wadsworth.
Anderson, K. L., & Umberson, D. (2001). Gendering violence: Masculinity and power in men’s accounts of domestic violence. Gender and Society, 15(3), 358–380.
Balfour, G. (2006). Re-imagining a feminist criminology. Canadian Journal of Criminology and Criminal Justice, 48(5), 735–752.
Banet-Weiser, S., & Miltner, K. M. (2016). #MasculinitySoFragile: Culture, structure, and networked misogyny. Feminist Media Studies, 16(1), 171–174.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42. https://doi.org/10.1177/1557085116654565.
Baum, K., Catalano, S., Rand, M., & Rose, K. (2009). Stalking victimization in the United States. Washington, DC: US Department of Justice.
Belknap, J. (1996). Invisible woman. Belmont: Wadsworth.
Belknap, J. (2001). The invisible woman: Gender, crime, and justice (2nd ed.). Belmont: Wadsworth.
Belknap, J. (2015). Activist criminology: Criminologists’ responsibility to advocate for social and legal justice. Criminology, 53(1), 1–22.
Blevins, K. (2018). Bell hooks and consciousness-raising: Argument for a fourth wave of feminism. In J. R. Vickery & T. Everbach (Eds.), Mediating misogyny: Gender, technology, & harassment. Denton: Palgrave Macmillan.
Breiding, M. J., Chen, J., & Black, M. C. (2014). Intimate partner violence in the United States – 2010. Atlanta: National Center for Injury Prevention and Control, Centers for Disease Control and Prevention.
Britton, D. M. (2000). Feminism in criminology: Engendering the outlaw. Annals of the American Academy of Political and Social Science, 571, 57–76.
Browne, A., Miller, B., & Maguin, E. (1999). Prevalence and severity of lifetime physical and sexual victimization among incarcerated women. International Journal of Law and Psychiatry, 22, 301–322.
Burgess-Proctor, A. (2006). Intersections of race, class, gender, and crime: Future directions for feminist criminology. Feminist Criminology, 1(1), 27–47.
Caldwell, J. E., Swan, S. C., & Woodbrown, V. D. (2012). Gender differences in intimate partner violence outcomes. Psychology of Violence, 2(1), 42.
Campbell, R., & Wasco, S. M. (2000). Feminist approaches to social science: Epistemological and methodological tenets. American Journal of Community Psychology, 28(6), 773–791.


Center on Extremism. (2018). When women are the enemy: The intersection of misogyny and white supremacy. The Anti-Defamation League. https://www.adl.org/media/11707/download
Chesney-Lind, M. (1986). Women and crime: The female offender. Signs, 12, 78–96.
Chesney-Lind, M. (1989). Girls’ crime and woman’s place: Toward a feminist model of female delinquency. Crime & Delinquency, 35, 5–29.
Chesney-Lind, M. (2006). Patriarchy, crime, and justice: Feminist criminology in an era of backlash. Feminist Criminology, 1(1), 6–26.
Chesney-Lind, M., & Chagnon, N. (2016). Criminology, gender, and race: A case study of privilege in the academy. Feminist Criminology, 11(4), 311–333.
Chesney-Lind, M., & Pasko, L. (2004). The female offender: Girls, women, and crime. Thousand Oaks: Sage.
Chesney-Lind, M., Morash, M., & Stevens, T. (2008). Girls’ troubles, girls’ delinquency, and gender responsive programming: A review. Australian & New Zealand Journal of Criminology, 41(1), 162–189.
Collins, P. H. (1990). Black feminist thought: Knowledge, consciousness, and the politics of empowerment (pp. 221–238). London: HarperCollins.
Collins, P. H. (2000). Black feminist thought: Knowledge, consciousness, and the politics of empowerment (2nd ed.). New York: Routledge.
Comack, E. (2006). The feminist engagement with criminology. In G. Balfour & E. Comack (Eds.), Criminalizing women (pp. 22–55). Halifax: Fernwood.
Connell, R. W. (1987). Gender and power: Society, the person and sexual politics. Sydney: Allen & Unwin.
Connell, R. W., & Messerschmidt, J. W. (2005). Hegemonic masculinity: Rethinking the concept. Gender & Society, 19(6), 829–859.
Counts, D. A., Brown, J., & Campbell, J. (1992). Sanctions and sanctuary: Cultural perspectives on the beating of wives. Boulder: Westview Press.
Covington, S. S., & Bloom, B. E. (2007). Gender responsive treatment and services in correctional settings. Women & Therapy, 29(3–4), 9–33.
Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 139–167.
Crenshaw, K. (1991a). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6), 1241–1299.
Crenshaw, K. (1991b). Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory, and antiracist politics. In K. Bartlett & R. Kennedy (Eds.), Feminist legal theory (pp. 57–80). Boulder: Westview.
Daly, K., & Chesney-Lind, M. (1988). Feminism and criminology. Justice Quarterly, 5, 497–538.
Davis, A. Y. (2011). Women, race, & class. New York: Vintage.
DeKeseredy, W. S. (2011). Feminist contributions to understanding woman abuse: Myths, controversies, and realities. Aggression and Violent Behavior, 16(4), 297–302.
Denejkina, A. (2018). In Russia, feminist memes buy jail time, but domestic abuse doesn’t. Foreign Policy. https://foreignpolicy.com/2018/11/15/in-russia-feminist-memes-buy-jail-time-but-domestic-abuse-doesnt/
Denning, D. E. (2011). Cyber conflict as an emergent social phenomenon. In T. Holt & B. Schell (Eds.), Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 170–186). Hershey: IGI Global.
Donner, C. M. (2016). The gender gap and cybercrime: An examination of college students’ online offending. Victims & Offenders, 11(4), 556–577.
Dragiewicz, M. (2008). Patriarchy reasserted: Fathers’ rights and anti-VAWA activism. Feminist Criminology, 3, 121–144. https://doi.org/10.1177/1557085108316731.
Fairbairn, J., & Spencer, D. (2017). Virtualized violence and anonymous juries: Unpacking Steubenville’s “big red” sexual assault case and the role of social media. Feminist Criminology, 13(5), 477–497.

648

A. J. Marganski

Faludi, S. (1991). Backlash: The undeclared war against women. New York: Crown. Finn, J. (2004). A survey of online harassment at a university campus. Journal of Interpersonal Violence, 19, 468–483. Flavin, J. (2001). Feminism for the mainstream criminologist: An invitation. Journal of Criminal Justice, 29(4), 271–285. Foshee, V. A., Linder, F., MacDougall, J. E., & Bangdiwala, S. (2001). Gender differences in the longitudinal predictors of adolescent dating violence. Preventive Medicine, 32(3), 128–141. Freyd, J. J. (1997). Violations of power, adaptive blindness, and betrayal trauma theory. Feminism & Psychology, 7, 22–32. Gelsthorpe, L. (2002). Feminism and criminology. In M. Maguire, R. Morgan, & R. Reiner (Eds.), The Oxford handbook of criminology (3rd ed., pp. 112–143). Oxford: Oxford University Press. Gilfus, M. E. (1992). From victims to survivors to offenders: Women’s routes of entry and immersion into street crime. Women & Criminal Justice, 4(1), 63–89. Gilfus, M. E. (2002). Women’s experiences of abuse as a risk factor for incarceration. Harrisburg: VAWnet, a project of the National Resource Center on Domestic Violence/Pennsylvania Coalition Against Domestic Violence. Glick, P., & Fiske, S. T. (2001). An ambivalent alliance: Hostile and benevolent sexism as complementary justifications for gender inequality. American Psychologist, 56(2), 109–118. https://doi.org/10.1037/0003-066X.56.2.109. Hall, M., & Hearn, J. (2019). Revenge pornography and manhood acts: A discourse analysis of perpetrators’ accounts. Journal of Gender Studies, 28(2), 158–170. Hamberger, L. K. (2005). Men’s and women’s use of intimate partner violence in clinical samples: Toward a gender-sensitive analysis. Violence & Victims, 20, 131–151. https://doi.org/10.1891/ vivi.2005.20.2.131. Hautzinger, S. (1998). Machos and policewomen, battered women and anti-victims: Combatting violence against women in Brazil. Baltimore: Johns Hopkins University. Hirdmann, Y. (1996). 
Key concepts in feminist theory: Analysing gender and welfare (FREIA’s tekstserie, 34). Aalborg: Department of History, International and Social Studies, Aalborg University. https://doi.org/10.5278/freia.14136339. Holt, T. J., & Bossler, A. M. (2008). Examining the applicability of lifestyle-routine activities theory for cybercrime victimization. Deviant Behavior, 30(1), 1–25. hooks, b. (1984). Feminist theory: From margin to center. Boston: South End Press. Hopkins-Doyle, A., Sutton, R. M., Douglas, K. M., & Calogero, R. M. (2018). Flattering to deceive: Why people misunderstand benevolent sexism. Journal of Personality and Social Psychology, 116, 167–192. Hudson, L. (2018). The internet is enabling a community of men who want to kill women. They need to be stopped. The Verge. https://www.theverge.com/2018/4/25/17279294/toronto-massa cre-minassian-incels-internet-misogyny Jane, E. A. (2014). “Your a ugly, whorish, slut” understanding E-bile. Feminist Media Studies, 14(4), 531–546. Jerin, R., & Dolinsky, B. (2001). You’ve got mail! You don’t want it: Cyber-victimization and online dating. Journal of Criminal Justice and Popular Culture, 9, 15–21. Johnson, A. G. (1997). The gender knot: Unraveling our patriarchal legacy. Philadelphia: Temple University Press. Karp, D. R. (2010). Unlocking men, unmasking masculinities: Doing men’s work in prison. The Journal of Men’s Studies, 18(1), 63–83. Katz, J. (2013). Violence against women: It’s a men’s issue. TED. https://www.ted.com/talks/ jackson_katz_violence_against_women_it_s_a_men_s_issue Killerman, S. (2013). The Genderbread Person, Version 3. It’s Pronounced Metrosexual. Accessed from: https://www.itspronouncedmetrosexual.com/2015/03/the-genderbread-person-v3/ Kim, J. (2018). In J. R. Vickery & T. Everbach (Eds.), Mediating misogyny: Gender, technology, & harassment. Denton: Palgrave Macmillan. Kimmel, M. (2014). The gendered society. New York: Oxford University Press.

31

Feminist Theories in Criminology and the Application to Cybercrimes

649

King, D. K. (1995). Multiple jeopardy, multiple consciousness: The context of Black feminist ideology. In B. Guy-Sheftall (Ed.), Words of fire: An anthology of African American feminist thought (pp. 294–318). New York: The New Press. Kini, A. N. (2018). Feminists were right: Ignoring online misogyny has deadly consequences. The Washington Post. https://www.washingtonpost.com/news/posteverything/wp/2018/04/30/femi nists-were-right-ignoring-online-misogyny-has-deadly-consequences/?utm_term=.5908762321f8 Krolokke, C., & Sorensen, A. S. (2005). Three waves of feminism: From suffragettes to Grrls. Thousand Oaks: Sage. Krug, E. G., Mercy, J. A., Dahlberg, L. L., & Zwi, A. B. (2002). The world report on violence and health (pp. 1–346). Geneva: The World Health Organization. Lombroso, C., & Ferrero, G. (1895). The female offender (Vol. 1). New York: D. Appleton. Lonsway, K. A., & Fitzgerald, L. F. (1995). Attitudinal antecedents of rape myth acceptance: A theoretical and empirical reexamination. Journal of Personality and Social Psychology, 68(4), 704–711. Lopez, V., & Pasko, L. (2017). Bringing Latinas to the forefront: Latina girls, women, and the justice system. Feminist Criminology, 12(3), 195–198. https://doi.org/10.1177/ 1557085117703235. Lumsden, K., & Morgan, H. (2017). Media framing of trolling and online abuse: Silencing strategies, symbolic violence, and victim blaming. Feminist Media Studies, 17(6), 926–940. Lynn, D. (2014). Socialist feminism and triple oppression: Claudia Jones and African American women in American Communism. Journal for the Study of Radicalism, 8(2), 1–20. MacKinnon, C. A. (1989). Toward a feminist theory of the state. Cambridge, MA: Harvard University Press. Male Supremacy. (2018). The Southern Poverty Law Center. https://www.splcenter.org/fightinghate/extremist-files/ideology/male-supremacy Mann, S. A., & Grimes, M. D. (2001). Common and contested ground: Marxism and race, gender and class analysis. Race, Gender and Class, 8(2), 3–22. 
Mantilla, K. (2015). Gendertrolling: Gendertrolling: How misogyny went viral. Santa Barbara: ABC-CLIO. Marcum, C. D., Higgins, G. E., & Ricketts, M. L. (2010). Potential factors of online victimization of youth: An examination of adolescent online behaviors utilizing routine activity theory. Deviant Behavior, 31(5), 381–410. Marganski, A. (2018). Feminist theory and technocrime: Examining online harassment, stalking, and gender violence in contemporary society. In K. F. Steinmetz & M. R. Nobles (Eds.), Technocrime and criminological theory. Boca Raton: Routledge/CRC Press. Marganski, A., & Fauth, K. (2013). Socially interactive technology and contemporary dating: a cross-cultural exploration of deviant behaviors among young adults in the modern, evolving technological world. International Criminal Justice Review, 23(4), 357–377. Marganski, A., & Melander, L. (2018). Intimate partner violence victimization in the cyber and real world: Examining the extent of cyber aggression experiences and its association with in-person dating violence. Journal of interpersonal violence, 33(7), 1071–1095. 0886260515614283. Ménard, K. S., & Pincus, A. L. (2012). Predicting overt and cyber stalking perpetration by male and female college students. Journal of Interpersonal Violence, 27(11), 2183–2207. Merry, S. E. (2009). Gender violence: A cultural perspective. Malden: Wiley-Blackwell. Messerschmidt, J. W. (1986). Capitalism, patriarchy, and crime: Toward a socialist feminist criminology. Totowa: Rowman & Littlefield. Messerschmidt, J. W. (1997). Crime as structured social action: Gender, race, class and crime in the making. Thousand Oaks: Sage. Miller, J. (2014). Feminist criminology. In M. Schwartz & S. Hatty (Eds.), Controversies in critical criminology (pp. 15–27). New York: Routledge. Miller, J., & Mullins, C. W. (2006). The status of feminist theories in criminology. In F. T. Cullen, J. Wright, & K. Blevins (Eds.), Taking stock: The status of criminological theory (pp. 
217–249). New Brunswick: Transaction Publishers.

650

A. J. Marganski

Morash, M. (2006). Understanding gender, crime, and justice. Thousand Oaks: Sage. Morash, M. (2017). Feminist theories of crime. London: Routledge. Mortimer, (2016). Father who repeatedly raped daughter told police ‘it was fun while it lasted’, court hears. Independent. https://www.independent.co.uk/news/world/australasia/father-whorepeatedly-raped-daughter-told-police-it-was-fun-while-it-lasted-court-hears-a7098236.html Naffine, N. (1997). Feminism and criminology. Malden: Polity Press. Nanda, S. (1986). The Hijras of India: Cultural and individual dimensions of an institutionalized third gender role. Journal of Homosexuality, 11(3–4), 35–54. Ng, A. (2019). Cyberstalkers are crowdsourcing danger to victims’ doorsteps with dating apps. CNET. https://www.cnet.com/news/cyberstalkers-are-crowdsourcing-danger-to-victims-doorstepswith-dating-apps/ Nobles, M. R., Reyns, B. W., Fox, K. A., & Fisher, B. S. (2014). Protection against pursuit: A conceptual and empirical comparison of cyberstalking and stalking victimization among a national sample. Justice Quarterly, 31(6), 986–1014. North, A. (2018). How mass shooters practice their hate online. Vox. https://www.vox.com/identi ties/2018/10/31/18039294/scott-beierle-tallahassee-shooting-pittsburgh-gab Owen, B. (1998). “In the mix”: Struggle and survival in a women’s prison. Albany: State University of New York Press. Pence, E., & Paymar, M. (1993). Education groups for men who batter: The Duluth model. New York: Springer Publishing Company. Phelan, M. B., Hamberger, L., Guse, C., Edwards, S., Walczak, S., & Zosel, A. (2005). Domestic violence among male and female patients seeking emergency medical services. Violence & Victims, 20, 187–206. https://doi.org/10.1891/vivi.2005.20.2.187. Potter, H. (2006). An argument for black feminist criminology: Understanding African American women’s experiences with intimate partner abuse using an integrated approach. Feminist Criminology, 1(2), 106–124. 
Radcliffe, K., Scott, J., & Werner, A. (2014). Anywhere but here: Black intellectuals in the Atlantic world and beyond. Jackson: University of Press of Mississippi. Rafter, N. H., & Heidensohn, F. (Eds.). (1995). International feminist perspectives in criminology: Engendering a discipline (pp. 1–14). Buckingham: Open University Press. Renzetti, C. M. (2012). Feminist perspectives in criminology, Ch. 9. In W. DeKeseredy & M. Dragiewicz (Eds.), Routledge handbook of critical criminology (pp. 129–137). Abingdon: Routledge. Renzetti, C. M. (2013). Feminist criminology. New York: Routledge. Reyns, B. W., Henson, B., & Fisher, B. S. (2011). Being pursued online: Applying cyberlifestyle–routine activities theory to cyberstalking victimization. Criminal Justice and Behavior, 38(11), 1149–1169. Reyns, B. W., Henson, B., & Fisher, B. S. (2012). Stalking in the twilight zone: Extent of cyberstalking victimization and offending among college students. Deviant Behavior, 33, 1–25. Ringrose, J., Gill, R., Livingstone, S., & Harvey, L. (2012). A qualitative study of children, young people and ‘sexting’: A report prepared for the NSPCC. London: National Society for the Prevention of Cruelty to Children. Risman, B. J. (2004). Gender as social structure: Theory wrestling with activism. Gender & Society, 18, 429–450. Saar, M. S., Epstein, R., Rosenthal, L., & Vafa, Y. (2015). The sexual abuse to prison pipeline: The girls’ story. Human rights project for girls, Center on Poverty and Inequality, Georgetown Law. http://rights4girls.org/wp-content/uploads/r4g/2015/02/2015_COP_sexual-abuse_layout_web1.pdf Sarkeesian, A. (2012). Image based harassment and visual misogyny. Feminist Frequency. https:// feministfrequency.com/2012/07/01/image-based-harassment-and-visual-misogyny/ Schell, B. H., & Holt, T. J. (2009). A profile of the demographics, psychological predispositions, and social/behavioral patterns of computer hacker insiders and outsiders. 
In Online consumer protection: Theories of human relativism (pp. 190–213). Charlotte: IGI Global.

31

Feminist Theories in Criminology and the Application to Cybercrimes

651

Schwendinger, J. R., & Schwendinger, H. (1983). Rape and inequality (pp. 178–179). Beverly Hills: Sage. Simon, R. J. (1975). Women and crime (pp. 33–47). Lexington: Lexington Books. Simpson, S. S. (1989). Feminist theory, crime, and justice. Criminology, 27(4), 605–632. Smart, C. (1976). Women, crime and criminology: A feminist critique. Boston: Routledge. Smith-Spark, L. (2018). Saudi Arabia tortured activists including women, rights groups claim. CNN. https://www.cnn.com/2018/11/21/middleeast/saudi-arabia-detainee-abuse-claim-intl/ index.html Sokoloff, N. J., & Dupont, I. (2005). Domestic violence at the intersections of race, class, and gender: Challenges and contributions to understanding violence against marginalized women in diverse communities. Violence Against Women, 11(1), 38–64. Sterling, B. (1994). The hacker crackdown: Law and disorder on the electronic frontier. New York, NY: Bantam Books. Tanczer, L. M. (2016). Hacktivism and the male-only stereotype. New Media & Society, 18(8), 1599–1615. Tandon, N., & Pritchard, S. (2015). Cyber violence against women and girls: A world-wide wakeup call. UN Women, UNDP and ITU. Accessed from: https://en.unesco.org/sites/default/files/ genderreport2015final.pdf Taylor, P. (2003). Maestros or misogynists? Gender and the social construction of hacking. In Y. Jewkes (Ed.), Dot.cons: Crime, deviance and identity on the Internet (pp. 134–154). Cullompton: Willan. van Baak, C., & Hayes, B. (2018). Correlates of cyberstalking victimization and perpetration among college students. Violence & Victims, 33(60), 1036–1054. Walton, M. A., Cunningham, R., Chermack, S., Maio, R., Blow, F., & Weber, J. (2007). Correlates of violence history among injured patients in an urban emergency department: Gender, substance use, and depression. Journal of Addictive Diseases, 26, 61–75. https://doi.org/10.1300/ J069v26n03_07. West, C., & Zimmerman, D. H. (1987). Doing gender. Gender & Society, 1(2), 125–151. World Health Organization (WHO). 
(2013). Global and regional estimates of violence against women: prevalence and health effects of intimate partner violence and non-partner sexual violence. Geneva 27, Switzerland: World Health Organization. Yar, M. (2005). Computer hacking: Just another case of juvenile delinquency? The Howard Journal of Criminal Justice, 44(4), 387–399. Young, R., Zhang, L., & Prybutok, V. R. (2007). Hacking into the minds of hackers. Information Systems Management, 24(4), 281–287. Zaleski, K. L., Gundersen, K. K., Baes, J., Estupinian, E., & Vergara, A. (2016). Exploring rape culture in social media forums. Computers in Human Behavior, 63, 922–927.

32 The Psychology of Cybercrime

Alison Attrill-Smith and Caroline Wesson

Contents

Introduction 654
Defining Cybercrime 654
The Psychology of Cybercrime 655
Cybercrime Typologies 656
The Internet as a Tool for New and Old Crimes 670
  Domestic and Sexual Abuse 670
  Revenge Pornography 671
Researching Online Crime 674
Concluding Comments 675
References 676

Abstract

Criminological and forensic psychology has had a long history of offering us unique insights into criminal behavior in the offline world. Yet the advent of the Internet has meant that we have had to think about the psychology of criminal behavior from a different perspective. The Internet brings with it new crimes and new ways to commit old crimes, and has the potential to make a criminal of the unsuspecting and naïve Internet user. This chapter introduces the reader to the area of cyberpsychology and provides an overview of current psychological understandings of online crime. We start by considering cybercrime typologies, outlining the categories of cybertrespass, cyberdeception and theft, cyber-pornography and obscenity, and cyber-violence that have informed psychological theorizing in this area, as well as highlighting some of the problems inherent within these categorization systems. We then focus on the psychology of using the Internet as a tool for new and old crimes, where a perpetrator may use the Internet as a "weapon" that has the potential to cause harm or damage. Here, we focus on the old crime of domestic abuse that is committed through the new technology of the Internet and the associated new crime of revenge pornography that relies on the Internet and linked technologies to cause damage to its victims. We finish by considering some of the difficulties of researching online crime from a psychological perspective and propose a need for psychological theorizing to move away from categorizing large swaths of mildly related crimes under umbrella labels.

Keywords

Internet psychology · Cyberpsychology · Criminal psychology · Psychology typologies · Forensic psychology

A. Attrill-Smith (*) · C. Wesson
Cyberpsychology Research Group, Department of Psychology, University of Wolverhampton, Wolverhampton, UK
e-mail: [email protected]; [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_25

Introduction

As humankind has evolved over thousands of years, so too has its capacity to commit crimes: crimes against individuals, against groups, and against the self, inanimate objects, and animals. As criminal activity has evolved, so too have the punishments administered, which have been continuously developed and changed to address and redress criminal activities. From a criminological and forensic psychology perspective, we are now better placed than ever before to understand and explain the hows, whys, and wherefores of offline criminal activity, largely because our understanding thereof has developed with the evolution of humankind. Over the last few decades, however, this understanding, along with the prevention and solution of crime, has been thrown a huge curveball in the shape of the World Wide Web (www). With the Internet and www, humans have new tools for their criminal activity. They can extend their existing criminal activity to a wider population, create entirely new crimes, more adeptly carry out old crimes, or simply be lured into criminal activity online without even realizing that they are acting criminally. Almost any human desire or emotion can be readily gratified through legal or illegal activity online. Relatively speaking, the length of time we have had to understand the role of the Internet in criminal behavior is minuscule compared to the amount of time we have had to understand offline crime. This chapter will therefore provide a brief outline of our current understanding of the psychology of online crime. It will then consider a number of the main typologies and theoretical frameworks that have been put forward to understand the psychology of online crime, prior to discussing the difficulties associated with theorizing about, and researching, online crime.

Defining Cybercrime

Prior to clarifying our understanding of what constitutes cybercrime, it is worth pointing out that we will refer to the Internet throughout this chapter. In doing so, we are actually referring to what the reader likely considers to be the World Wide Web (www). For the purposes of this chapter, however, the term Internet includes any form of connected communication or interaction, via any technology that facilitates that interconnectivity (for example, mobile phones, PCs, gaming consoles, and the like). Throughout this book, you will likely have come across various definitions of cybercrime. For this chapter, we consider cybercrime to be an act conducted or enabled through digital technologies that causes either online or offline harm to another person(s), item, or animal.

The Psychology of Cybercrime

From a psychological perspective, our interest lies in understanding the thoughts, motivations, intentions, and internal dialogues that underlie or drive people's criminal behavior, as well as the impact that being a victim has on these psychological processes. In particular, psychologists seek to create theories that enable them to promote a stable psychological world, a world of predictable intentions, behaviors, and outcomes. To this end, psychology seeks to explain why people engage in online criminal activity and the wider-reaching implications thereof. This involves considering people's underlying motivations to achieve either conscious or unconscious behavioral goals.

Oftentimes, when readers without a psychology background see the word psychology, especially in a criminology or forensic setting, they immediately think of criminal profiling. While criminal profiling is important to understanding the psychology of criminal behavior, it is a specific technique employed to identify a potential perpetrator based on evidence found at a crime scene and known characteristics of a victim. Criminal profiling relies on assumptions of similarities and consistencies in the modus operandi and intentions of criminals (Holmes 1989), and is often used as expert evidence in criminal trials. There is, however, much debate around the validity and reliability of criminal profiling, with Chifflet (2015) noting, for example, the reliance on anecdotal evidence rather than on robustly tested theories as a key concern in its ever-increasing application. Given that profiling is often based on linking evidence and information, one might assume that it lends itself to understanding online crimes, especially as many of these leave a digital trail open to analytical consideration. This could not be further from the truth.
The psychology of cybercrime is much more than attempting to pinpoint a perpetrator based on co-occurring evidence. Understanding the psychology of online crime is not only tasked with explaining perpetrator behavior but also with considering victim behavior, as well as how to minimize and/or prevent online crime. Moreover, cyberpsychology draws on a range of theories across many different areas within psychology, including forensic, social, personality, cognitive, and developmental areas of study, to name but a few. It seeks to provide a scientific understanding based on robust, valid, and replicable theoretically driven research. In order to achieve this, cyberpsychology borrows from existing psychological approaches to provide a basis of understanding that can be built upon with new theoretical perspectives and new categorizations of online behaviors.

As you read through this chapter, you will be introduced to both old and new approaches to understanding criminal acts in a digital world. Throughout, emphasis will be given to the diversity and range of online crimes that occur. Criminal activity ranges, for example, from large-scale organized crimes of monetary greed to smaller individual acts borne of a need to survive, financially, emotionally, or in other ways. The organized crime of a sex trafficking ring is thus likely driven and motivated by very different psychological factors to an individual hacker trying to prove that the FBI and NASA systems are hackable. This in turn differs markedly from crimes that supposedly occur at a government level, which are often portrayed through confusing and suggestible media reports of cyberattacks on large corporate companies to inflict harm on a wider societal group. We could continue with this list of examples, but the point is that we need to consider Internet crimes from many different psychological perspectives, remain open-minded as to possible explanations of those crimes, and remember not to assume that all crimes are committed for equal or similar reasons or purposes.

While, historically, offline criminal behavior has been explained through a multifaceted combination of the various subdisciplines of psychology, there has hitherto been a tendency for theorists and researchers to attempt to explain large sweeps of online behavior with single all-encompassing theories or typologies. Working in this area, we argue that this is an insufficient approach to understanding the psychology of cybercrime, and that we need to move to a more thorough exploration of the different areas of psychology employed offline. While we do not negate the initial usefulness of typologies, we do now suggest a move away from grouping crimes together under umbrella terms towards considering different types of cybercrime as being borne from different individual factors, needs, and motivations.
In doing so, we will also touch upon the difficulties that this approach faces in light of ever-developing technological advances. That said, in order to structure this chapter and to illustrate this point, we will use some of the most well-developed cybercrime typologies that have served psychology well in creating an understanding of specific types of online crime that extends beyond a single theoretical approach.

Cybercrime Typologies

A psychological typology is a grouping of behaviors into limited categories. This is not the same as categorizing people for criminal profiling purposes. Probably one of the most obvious examples of psychological research that uses typologies outside of cybercrime is personality psychology. Most of us, psychologists and non-psychologists alike, are familiar with categorizing people according to their personality traits. For example, you might be told that you are more or less outgoing in social situations; in other words, you are categorized as an extravert or introvert. You might also be told that you are more or less neurotic, open, conscientious, or agreeable. These are five of the most well-known categories that researchers have used for decades to group people together. Specifically, this five-factor model (or OCEAN model) of personality is a taxonomy that groups people on the dimensions of openness, conscientiousness, extraversion, agreeableness, and neuroticism (Costa and McCrae 1985).

The aim of such typologies or taxonomies is to understand people based on limited information. In this instance, if we know that someone is an introverted, agreeable person, we know what to expect from them in a social situation. As humans, we are cognitive misers (Fiske and Taylor 1991): we do not want to waste a lot of brain time or energy trying to understand who a person is or why they behave as they do. We want to be able to quickly process any incoming information we encounter about them. In doing so, we can create an expectation about how a person will likely behave in a predictable situation. We are therefore using limited information to think heuristically and create a stable, predictable social world. In combating online crime, such predictions can help us anticipate how a criminal might behave. There have been many adaptations of the OCEAN model over the years, all of which work on the same assumptions of predictability.

The OCEAN model's importance for the current chapter is not only to illustrate the usefulness of such categorizations of human behavior but also to demonstrate that a taxonomy is usually based on fixed factors (personality traits) that can be empirically measured. A typology, on the other hand, can be based on heuristics and arbitrary concepts that are not necessarily measurable. According to a cybercrime typology, for example, perpetrators might be categorized on factors that may have little in common, may be arbitrarily linked, or may be so similar that it is difficult to disentangle them.

One of the most well-known typologies of cybercrime was created by David Wall (2001). Wall's typology offers a useful structure around which to introduce many of the forms of cybercrime that psychology looks to explain, so we will begin by outlining some of the work relevant to his four broad categories: cybertrespass, cyber-deception and theft, cyber-pornography and obscenity, and cyber-violence.
In doing so, we could get carried away with offering examples of cybercrimes and their associated psychology but that would be outside the limited scope of this chapter. We therefore limit each section to a couple of examples. Cybertrespass refers to the unauthorized accessing of computers and networks to which the person does not have legal access and/or to their monitoring of private information which they do not have permission to access. Think of it as crossing boundaries without permission, much as if you were jumping over a fence onto private land offline. You might be more familiar with the terms hacking or cracking to describe these types of crimes. The key component of this behavior which makes it a crime is the accessing of a system without authorization or permission. Much early research that considered this category of crimes focused on hacking offences. Attempts were made to psychologically subcategorize those who cybertrespass, including Young’s (1993) distinction between utopians (hackers who believe they are contributing to society by demonstrating system vulnerabilities), cyberpunks (intentionally cause targeted harm as an aggressive antiestablishment act), and cyber-spies or cyber-terrorists (expert crackers who intentionally gain targeted access to inflict targeted and/or widespread harm). Young’s subcategories thus rely on understanding hackers’ intentions for their acts. More recently, hackers have been distinguished as white, black, or gray hat hackers (Hoffman 2013). Whereas black hat hackers are more likely to write malware to gain access to systems for financial

658

A. Attrill-Smith and C. Wesson

gain, blackmail, or cyber espionage, white hat hackers purport to use their hacking skills for good. The latter often work for a company, with permission to hack its systems, making their work legal. The gray hat hacker sits between the white and the black, often looking for system vulnerabilities so that they can report them and request a fixing fee from the system's owner(s). Already we see, then, that these types of hackers work for different psychological reasons, some for greed and some for the thrill of being able to crack a system. What we need to ask ourselves from a psychology perspective, however, is whether the white and gray hats really are just trying to prove a point. Their motivations are not altruistic but are likely about bragging rights or proving their own ability. Some hackers may simply get carried away with themselves, without realizing or consciously registering that they are crossing the border into the illegal. Indeed, over the last few years, there have been a number of reports of individuals on the autism spectrum, some with severe mental health issues, who are able to hack into government systems. We refer the reader to the case of Lauri Love, a UK-based activist who gained access to the US Army, NASA, and other systems. In 2016, proceedings were brought to extradite Love to the USA to face charges. His parents fought for him to be charged in the UK in recognition of his diagnosis of Asperger's syndrome, a mild autism spectrum disorder that causes social interaction difficulties and repetitive patterns of behavior, among other factors. This situation paints a very different picture from the common misconception that most hackers are juvenile boys sitting in their bedrooms getting carried away with their technological abilities. 
It also demonstrates that the psychology of hacking is too complex for everyone to be grouped under a single definition of cybertrespass that does not address individual characteristics and psychological drives, needs, and motivations. It is this category of Wall's typology that has probably received the least psychological research, as it is more often considered from a technological ability perspective. Love's case highlights the need to include a clinical psychology perspective in understanding online crime. From a cybercrime viewpoint, clinical psychology has likely contributed most to understanding extreme behaviors that would also be considered illegal or detrimental to one's offline existence, such as any type of addiction, lower cognitive functioning, or a reduced understanding of risk, moral reasoning, and behavioral consequences. Clinical psychology is mostly associated with maintaining and restoring mental health, therapy, and psychological well-being. In the case of Lauri Love, emphasis was given to his clinical condition of Asperger's syndrome even though little evidence exists to demonstrate that having this condition caused his illegal behavior. A more encompassing consideration would have provided a fuller understanding of his hacking behaviors. Cyber-deception and theft refer to the acquisition of material from cyberspace without permission. Immediately, you may think of identity theft or money-related theft, given that these are the types of theft most commonly talked about in the mass media, but the category also includes the illegal downloading of music and videos and the stealing of intellectual property online. Almost daily, we hear or see reports of Internet fraud, where someone has been fleeced of thousands of pounds through

32

The Psychology of Cybercrime

659

online banking theft. Gradually, knowledge of the methods used by fraudsters to gain access to people's bank accounts, email addresses, and other personal information has filtered through to public knowledge, but we still know relatively little about the psychology of the organized crime groups who engage in fraud and deception for monetary gain. These are a completely different category of offenders from those who "steal" a song, video, or image from the Internet, and yet the organized gang and the individual are classified alike as Internet thieves. One important factor in understanding Internet fraud is to acknowledge that these crimes are not reserved for those who set out to commit a crime. Without awareness, many innocent people can easily become online criminals, mainly through not realizing or recognizing that they are committing a theft online. Smith and Hogan (1999) suggest that existing definitions of theft are problematic for understanding online theft as they revolve around something being permanently removed from an owner, depriving said owner(s) of that item's use. Downloading images or music online does not remove those items from their owners' use, and yet you can easily become a criminal simply by copying and dispersing a copyrighted image. The psychological motivations and intentions behind such a download are very different from those of a gang of thieves sat in a room with dozens of computers whirring away, hacking as many bank accounts as possible to steal as much money as possible. Another subcategory of offenders in this group are those who buy and sell academic work online. Often, those selling essays and dissertations online have not produced them themselves but have in fact accessed them on open university databases. They then sell them on to unsuspecting students, having changed them sufficiently to avoid detection by academic misconduct recognition software (e.g., Turnitin). 
The student should, however, be aware that such essays are rarely changed sufficiently to avoid well-read academics recognizing them as fraudulent pieces of work! The risk here is that a student may also not know that they are likely committing an act of intellectual property infringement. The student's motivations in this instance may be born of laziness or a desire to do better than they are academically able. They have not bought the work for monetary gain but to achieve a desired goal. It is this difference in motivation that is key to understanding behavior from a psychological perspective. If we were to turn back time to the early 1990s, we might be fooled into thinking that similar offline crimes did not exist. That is not the case. Back then, for example, it was just more time-consuming to go through someone's rubbish bin to find information that could be used to misrepresent oneself to a bank employee and gain access to another person's bank account. The Internet now facilitates this type of crime on a larger scale and with a broader geographical scope. However, it is also important to acknowledge that many of us are unaware of the actual levels of fraudulent crime committed online. This could be due to the mass media feeding off our fears and neuroses to perpetuate the notion that we are all going to fall victim to cybercrime. There is a level of unwarranted scaremongering in the mass media. A report by the Federal Trade Commission in 2016 (www.ftc.gov) suggested, for instance, that identity theft reports consistently accounted for around 13% of complaints of criminal acts falling into this category from 2013 through to 2016. Actual fraud


complaints declined from 2013 (56%) to 2016 (40%). This is far less than the media would have us believe. Psychologically speaking, why do we fear such crime more and more? The most important factor is probably knowledge. We may simply hear more about crimes as people become increasingly aware of their occurrence. The Internet, by its sheer nature of connectivity, enables the dissemination of information like never before. Many crimes that occur offline have not risen statistically, but because we hear more about them, and know more about them, we fear them more. We are simply exposed to more information that plays on our psychology as humans. Furthermore, the availability heuristic (Tversky and Kahneman 1974) suggests that the likelihood of an event is inferred on the basis of how quickly an association comes to mind. For instance, if we are exposed to a disproportionate amount of media coverage regarding a particular crime, then we will infer an equally inflated likelihood of the risk of said crime. The mass media's intention to create public awareness may thus be creating nothing other than public neuroses and fear of cybercrime. A further important factor that contributes to fear propaganda is emotional contagion, which occurs when one person's emotions are experienced and adopted by another person (Hatfield et al. 1993). This can then lead to social contagion (Le Bon 1895), whereby people adopt the perceived behavioral norms. Recently, we experienced all three of these psychological processes in the fear, emotional, and social contagion of the Momo Challenge. Most people heard about the Momo Challenge via social media. Posts suggested that a user named Momo would appear during a game or online interaction and would lure children and adolescents into performing tasks of escalating danger, to the point of self-harm and even suicide. 
It transpired, however, that the Momo Challenge was most likely a hoax, with not a single occurrence of a related crime or damage recorded worldwide. The notoriety of the challenge took on a life of its own through the perpetuation of fear. Emotions associated with the possible outcomes if children accepted Momo's challenges constituted the emotional contagion, which was spread through most forms of mass media. Offline, social contagion was evidenced through schools advising parents to stop their children interacting via online services and games, based on nothing other than a psychological fear created through word of mouth. This example highlights how easily a phenomenon can cause people to behave in a way that reflects psychological acceptance without question or evidence of a real threat or crime. The fear and the emotional and social contagion were psychological constructs based on nothing more than word of mouth or hearsay. In the next section on Wall's typology, we consider a more tangible threat to online interactions. Cyber-pornography and obscenity: First, let us distinguish between these two. There exists a complex legal landscape as to what constitutes pornography and what is considered obscene online. Something that is obscene is usually offensive and morally unacceptable; in legal terms (in the UK, according to the Obscene Publications Act of 1959), it is a crime to publish material which might "deprave or corrupt" those who are likely to read, see, or hear it. Obscene acts can be violent or involve drug taking but most commonly refer to those depicting certain forms of sexual activity. While pornography is usually considered to be obscene, other


non-pornographic content can also be considered obscene. For example, a music video would be unlikely to be pornographic but can contain sufficient nudity and sexual dancing to be considered obscene. This makes it somewhat difficult to definitively state what can be considered obscene online. Most of us have some idea of what we find morally or societally offensive, but we cannot always draw that distinctive line between the acceptable and the unacceptable. Arguably, obscenity, and indeed what is considered pornographic, is socially constructed, reflecting the historical, moral, societal, and cultural norms of a given country at any given period in time. For instance, Kuipers (2006) highlights the differences in reactions towards the online dangers of cyberpornography between the Netherlands and the USA. In the Netherlands, where online pornography is generally accepted, cyberpornography is normalized, whereas in the USA, where online pornography is heavily regulated and controversial, cyberpornography is associated with moral panic. In the UK, an attempt to regulate who sees online pornography is currently being made through the AgeID program. Pornography websites will be required to create a non-pornographic landing page, which will ask visitors to register with age identification to prove that they are over the age of 18 via a mobile SMS, through the use of a credit card, or by providing a copy of their passport or driving license. Viewing sexual content online is not necessarily illegal. Cyberpornography becomes a criminal offense when it causes harm to persons or animals. It can include, but is not limited to, the creation, distribution, publishing, downloading, or importing of material depicting sexual activity. Now, you might be asking yourself whether sexting (sending text messages containing sexually explicit material) is legal. 
As with all sexually explicit material disseminated through any technological means, it depends entirely on who is doing the sexting, who the recipient is, and the anti-pornography laws of the country in which you live. If the sexting involves persons under the age of consent within a particular country, however, it is an illegal act. Immediately, we see that this category is problematic for understanding these behaviors from a psychological perspective. Early psychological research struggled to come up with all-encompassing explanations as to why people engage in acts of pornography online, not least because of the diversity of cyber-sexual behaviors. Are the bantering sexts exchanged between adult lovers as criminal as the pornography ring that abducts young children and streams live sexual abuse of those children across continents and countries? The motivations for these acts could not be more diverse. As with the other categories of crime already outlined, it is exactly this diversity that poses the problem for understanding online sexual behavior from a single psychological perspective. There are also other factors that come into play, such as the availability of sexually explicit material. As early as 1993, Young stated that the availability of sexually explicit material online leads to reduced inhibitions. Over time, this has contributed to changes in people's levels of acceptance of online pornography, and even to their participation in viewing or uploading their own sexual content and/or sexting. To draw an analogy with offline behavior: in the 1970s in the UK, there were three major TV channels; Channel 4 then arrived in 1982, and cable TV quickly grew through the 1980s and 1990s. During these early times of limited terrestrial TV, there was very limited


nudity and virtually no sexual content on national TV. The Carry On films were considered to be extremely risqué, and Ursula Andress walking out of the sea in a bikini in the James Bond film Dr No was considered overly sexually explicit. As recently as 1994, Eva Herzigova's Hello Boys ad campaign for the company Wonderbra supposedly stopped people in their tracks and in their cars. It was the first time that they had seen a picture of a woman in her bra plastered across a billboard. Fast forward to modern living and homes full of TVs, live streaming, and technological abundance. There is a 9 pm TV watershed in the UK which supposedly protects children from being exposed to any content, sexual or otherwise, that could psychologically harm them. Equally, however, there are daytime TV shows discussing most things sexual. The gradual development of our acceptance of sexual content on our TV screens is no different from the way our acceptance thresholds have been rapidly reduced, if not eroded, by Internet content. Nowadays, any and every sexual desire can be satisfied at the click of a button, whether with flirtatious or criminal intent. And it is this increased accessibility that Young, as early as 1993, cited as being one of the biggest hurdles to reducing pornography online. Of course, the problem is far more complex than pornography simply being available, affordable, and accessible to the masses online. What these three attributes do promote, however, is a curiosity that could lead a person very quickly and unintentionally into criminal behavior. Some early work suggested that offenders who had viewed child pornography online often stated that they were unaware that their acts were illegal and that their viewing was initially born of curiosity. However, the more they observe or interact with that material, the less satisfied they are by it. Their gratification thresholds shift in much the same way as our acceptability thresholds shifted for TV sex over the years. 
The danger here is that a completely innocent observer could gradually cross that very thin line between obscenity and pornography, with the worst outcome being an escalation of their behavior and a subsequent crossing of the online-offline boundary to become a contact offender offline. The main psychological risk of online pornography might thus be considered to be that people might fall into behaviors they would otherwise not engage in, simply due to the availability of pornography online. Equally, the habitual and criminal user of online sexual content may find it more difficult to resist their urges to commit crime given the availability of materials at their fingertips. It could also be argued that a person of sound psychological well-being and make-up would not demonstrate this curiosity to begin with. Humans are, however, sexual beings, with sexual urges, wants, and needs. If these are not being met offline, a person may seek to gratify them online, either through watching pornographic content or by actively interacting with paid and/or unpaid online content. This does not make that person a criminal in the same way as a person accessing online child pornography. Whereas the former might be considered normal human behavior, the latter could be explained through a theory put forward by Ward and Beech (2006) to aid our understanding of online sexual offending. According to their integrated theory of sexual offending, social and biological learning factors come together to shape psychological functioning. If this development is in any way faulty, for example, due to adverse developmental environments, maldevelopment of the brain and neurological systems can lead to clinical symptoms. If those clinical


symptoms suggest a leaning towards the sexually obscene or pornographic, neuropsychological systems (motivation and emotion, action selection and control, and perception and memory) can become compromised and fail to appropriately regulate and control sexual urges and behaviors. Moreover, these behaviors may initially be subtly acted upon, but as the perpetrator garners gratification from his/her behaviors without being criminalized, the threshold for gratification may shift, and behaviors will then need to escalate to maintain that satisfaction. According to this model, online sexual offending is thus likely born of an integrated biological and social upbringing that seeks out an environment conducive to gratifying increasingly demanding needs or urges. The model can be used to distinguish those who do and do not act criminally and could potentially be adapted to predict who might escalate their access to increasingly inappropriate sexual material. It is worth mentioning that another outcome of the repeated viewing of online pornography may be the development of porn addiction, whereby a person excessively views online pornography, coupled with excessive masturbation. From a psychological perspective, we would ask what leads the person to develop this addiction rather than satisfying their needs offline through human contact. An overview of this literature is beyond the scope of this chapter, but we do surmise from it that trust, self-worth, self-esteem, mood, emotional stability, relationship dissatisfaction, and many other psychological factors can come into play to push a person towards online pornography addiction. Many of these factors can be seen in the extreme outcomes of online sexual misconduct that we now move on to discuss. 
Prior to doing so, however, we note that for further reading on this topic, the reader is directed to a rapidly growing body of work by Professor Mark Griffiths of Nottingham Trent University in the UK, who has researched and written extensively in this area. One of the issues around researching and theorizing about online pornography is the sheer diversity and volume of sexual content that can be shared worldwide. It is often bounced through different countries' Internet systems to avoid prosecution by legal jurisdictions that do not cooperate with one another. The problem can also extend beyond the viewing of online sexual material. The Internet has been implicated in many cases of human sex trafficking. This can begin with a type of online grooming, whereby an offender (or offenders) will begin an online dialogue with a potential victim. More often than not, this communication begins via social media. The offender will often engage in a rapid acceleration of the online relationship to gain the victim's trust. In doing so, the offender will quite quickly tap into the victim's desires and needs. These may not necessarily be born of a need for human belonging and love (Baumeister and Leary 1995) but could also be the promise of materialistic wealth or a job, for example. The perpetrator can thereby very quickly isolate the victim or restrict their interactions with the people around them in their offline world. They will often ask the victim to share information that they can subsequently use to emotionally, psychologically, or socially blackmail them. As soon as they have this, they can begin to manipulate the victim, playing on their psychology and mental well-being while tempting them into moving away from their family and/or friends and straight into the snare of the perpetrator(s).


There is a mountain of emerging work around online pornography that we could cover in this chapter, and indeed this section is already a lot longer than those on Wall's other categories. However, for the sake of brevity, we will come back to some of the sexual-material and relationship-associated crimes in the remaining sections of the chapter. Cyber-violence occurs when behavior carried out online, or through any technological communications device, leads to virtual or physical, psychological, or emotional harm to an individual or group. The most well-known types of cyber-violence relate to cyberbullying, which incorporates cyberstalking, harassment, and sometimes online abuse. That is not to say that each of these cannot exist individually. It is very difficult to disentangle the effects of online abuse on people's offline worlds and vice versa. We will focus on online violence which can lead to physical harm and to psychological and financial consequences offline. Prior to doing so, however, we briefly consider cyber-harassment and cyberstalking. Cyber-harassment is abusive or threatening behavior that occurs via computer-mediated communication with the intention to harm a person, group, or organization. Like offline harassment, it is characterized by control, manipulation, and threats of harm born of a perpetrator's need for power and control. Unlike offline harassment, however, it does not require physical harm to be considered harassment and is more associated with psychological, social, and emotional harm. Cyber-harassment can also be related to cyberstalking. Cyberstalking occurs when a person (or persons) receives unwanted attention via digital communication. The effects may not be physically evident offline, but they can take a toll on a person's emotional and psychological well-being. Both of these behaviors can be linked to online abuse. 
In much the same way as described for online sex trafficking, a person may initially communicate with someone else online, and the abuse or violence may gradually creep into that relationship. Before they realize it, the victim has given the abuser sufficient information about themselves that the perpetrator can use to emotionally blackmail or bully them. Misrepresentation online can also be considered cyber-violence when a person fabricates an online persona in order to commit a crime offline. For example, if a child is groomed online by an older person who intends to sexually abuse them when meeting offline, this could be considered an act of cyber-violence. The astute reader will note that we already discussed sex trafficking and sexual abuse under the rubric of cyber-pornography, thus illustrating another way in which these overarching categories are difficult to disentangle and define. In attempting to understand why these acts of violence lead to oftentimes devastating consequences, whether online, offline, or both, we need to draw on many different areas of psychology. First, and possibly foremost, underlying victim behavior in these instances is a basic human need to belong and to feel wanted, desired, liked, and needed (Baumeister and Leary 1995). It may be that these needs are not being met in a victim's offline world, so they seek out someone online who can provide this. A now very famous case illustrates exactly this line of reasoning. Amanda Todd met a man online when she was 12 years old. During their online interactions, he convinced her to show him her


breasts via webcam, at which point he took a screenshot of her doing so. He then used that one image to blackmail her, with her family even moving home and Amanda moving schools to avoid the embarrassment heaped upon her when he shared the image with her classmates. Eventually, at the age of 15, Amanda created a YouTube flashcard video outlining her story, which she posted prior to taking her own life. Amanda's story, as tragic as it is, illustrates the progression from a naïve interaction to a devastating outcome via online communication, and the role that trust, emotional blackmail, and bullying can play in creating and destroying such a relationship. It is an interaction born on the one side of a need to feel liked and wanted, and on the perpetrator's side of a need for control and dominance, as will be considered now in relation to online bullying. Cyberbullying occurs online but can also carry over to offline worlds. Equally, it may begin in the offline world and carry over to online behaviors. It can occur across many forms of communication. It can exist in images and videos, not just words, and it can occur at any level for any member of any given society, much the same as most of the crimes which we discuss in this chapter. According to the www.cyberbullying.co.uk website, there is an extensive list of behaviors which constitute cyberbullying: denigration, flaming, impersonation, outing, trickery, blackmail, threatening behavior, grooming, and exclusion, to name a few. Willard (2004) also put forward a similar seven-item typology of cyberbullying which included flaming (insults), exclusion (blocking and/or ignoring), denigration (negative posts about a person), outing behaviors and masquerading (misrepresentation) online, harassment, and cyberstalking. There are many difficulties associated with understanding cyberbullying, not least that all of these behaviors are grouped together under a single label. 
One of the biggest hurdles for psychological research is that bullying has only been recognized as an abusive and detrimental behavior since the end of the twentieth century. Up until the late 1970s and early 1980s, bullying behaviors were often considered to be banter, or character building, a part of growing up. Bullying was mainly a behavior tolerated in the school playground through pushing and shoving, hair-pulling, and other physical acts. There was no recognition of the harm caused by verbal abuse (Campbell 2005), as shown by a piece of research carried out by Glover et al. (1998). In their research, teachers and pupils were given a list of physical and verbal bullying behaviors. Both teachers and pupils were more likely to correctly identify physical than verbal acts of bullying. Times have changed somewhat over the last decade, and offline bullying is now considered to be any repeated intentional act that causes harm to another person(s). By its very nature, it needs to include at least two people but can extend to larger groups, even to a societal level. It causes victims to feel powerless and to lose self-respect, confidence, and self-esteem, to the point of depression and other associated mood disorders, even to suicide. However, transferring our understanding of offline bullying to online behavior faces the initial difficulty of understanding that bullying can also be wrapped up in words, especially when much of online communication is text-based. Another hurdle we face in understanding cyberbullying is that it is often considered to be less real and less consequential than offline bullying. There is often a notion that all you have to do to make the bullying stop or go away is to turn off your


technology, delete associations, or simply ignore the online bully. This is, however, far too simplistic an approach to solving the online bullying problem. In many cases, the acts are far too intertwined and entangled between a person's online and offline worlds. People do not want to self-exclude or feel left out of group activities. We know that people who bully offline are more than twice as likely as non-bullies to extend that behavior to their online interactions (Hinduja and Patchin 2008). This becomes particularly problematic for children being bullied by their school peers. The bullying may start in the playground but can quickly escalate online, spiralling out of control. Many children are members of some form of social network, even if there are age restrictions in place which should prevent such memberships. For instance, many children who own a smartphone are likely members of WhatsApp groups or similar. These groups can include whole classes of pupils. The behavioral contagion that occurs in these groups can see a single child very quickly become ostracized, picked on, or horrendously bullied. In these large technology-driven groups, other children might not want to self-exclude or remove themselves from the confrontation because of potential offline consequences. They might not defend the victim because they fear the repercussions, but equally they might not want to align with the bully, and so they try to remain neutral. The bullying within the group thus affects more than the bully and victim and can be just as distressing to the innocent virtual bystander. When discussing cyberbullying during lectures, the question that most students ask is whether there is a "typical" cyberbully. 
There have been many reported attempts to come up with a list of personality correlates of a cyberbully, much in the same way as people assume that cyber profiling takes place, but given the diversity of this behavior and its wide-reaching consequences, it is unsurprising that, as yet, we are unable to offer a definitive cyberbully profile. What we do know is that early reports of cyberbullying cited a perceived sense of anonymity and reduced social cues as fostering negative behaviors a person might otherwise not engage in offline. Often, these types of online behavior can also be linked to reduced sensitivity towards others (Ang and Goh 2010), possibly because people feel that online behavior is less real than offline behavior. Ang and Goh (2010) also identified that both low affective empathy (the ability to experience and share others' emotions) and low cognitive empathy (the ability to understand the emotions of others) were related to higher levels of cyberbullying perpetration among a group of 12–18-year-olds they tested on different personality factors. Much early work attempted to carry over what is known about offline bullying to understanding online bullying. However, the experience of being a bully or being bullied online can be very different from being a bully or being bullied offline. Offline, bullying is usually experienced as repeated negative behavior towards oneself (Olweus 2003), with a single act of verbal or physical abuse being classified as violence rather than bullying. Online, however, a single act of threatening, derogatory, or negative behavior can take on a life of its own (Vandebosch and Van Cleemput 2009). Imagine, for example, that someone posts a negative comment on a Facebook or Twitter feed. After some consideration, they regret that comment and remove it. In the meantime, however, there was potential for any number of people to see that

32

The Psychology of Cybercrime

667

post. Digital bullying can extend much further than offline bullying, with the victim potentially feeling exposed and humiliated before an infinite audience (Slonje and Smith 2008). If someone makes a single comment offline, there is no evidence that it has been said unless the act was witnessed by others. Often a victim will question their hearing, understanding, and interpretation of the bully's verbal abuse. Online, although much textual communication does also depend on an individual's perception and interpretation thereof, a single email, text, or instant message of a bullying nature can be far more detrimental to the victim. If someone is verbally abusive on a single occasion offline, over time the memory of that abuse may fade. If it remains stored online, even if only the victim has access to it, they may repeatedly read it over a longer period of time. It could feed into their negative ruminations about themselves and about the content and people associated with the bullying material. This could, in turn, affect how they see and feel about themselves, and potentially lead to extreme negative psychological harm. One similarity between online and offline bullying is the bully's need for power and control. Offline, a bully will likely experience the immediate gratification of seeing their victim suffer, which could offer them a sense of power and superiority over the victim. Online, that power may be enhanced by people joining in with the act of bullying. Remember the example of the WhatsApp group in which children jump on board with the bully. This joining in could convey to them a sense of importance and power. They may, however, also increase their bullying behavior due to delayed gratification. In other words, a victim might not instantly see or respond to the online bullying.
In fact, they might not respond at all, which of course would diminish these feelings of importance and power in the bully, who therefore keeps on bullying with the hope of a response, and so begins a vicious cycle of bullying that increases in intensity and escalates in content. Our final note on bullying relates to the bullying of larger societal groups. Research on online bullying tends to focus mainly on school children, with a nod to bullying in the workplace. There are however larger groups of people being bullied due to arbitrary labels and factors that make them a targetable group. When it comes to large-scale groups of both victims and perpetrators of bullying, psychology struggles to transcend the widespread borders of societies and cultures that might be involved. We know from history that large-scale bullying can lead to heinous crimes, and yet cross-continental and cross-cultural law enforcement agencies' hands are often tied in reducing these crimes. It also makes their psychology extremely difficult to research and understand. Wall's initial typology has thus far served us well in providing some structure and focus to this chapter in outlining just some of the psychology related to different types of cybercrime. It is worth mentioning that Wall (2007) also proposed a different typology which used only three categories of cybercrimes: computer integrity crimes (any act which attacks network security), computer-assisted/related crimes (identity theft, phishing, and fraud), and computer-content crimes (illegal online content such as pornography). We will leave it to the reader to decide whether this is a sensible category reduction based on our outline hitherto of the difficulty in using four overarching categories of cybercrime. However, in doing so, we

668

A. Attrill-Smith and C. Wesson

emphasize that both of Wall's typologies have been groundbreaking and informative in guiding much psychology (and other disciplines') research and theorizing into online crime. There are of course many other taxonomies and typologies available that aim to categorize cybercrime and aid our psychological understanding thereof. However, many of these have not been able to keep up with the digital times. As technology develops, so too do new and old crimes that utilize that technology. One taxonomy that has more recently been put forward does aim to be more flexible in terms of its ongoing development with advances in technology. In a chapter in the recently published Oxford Handbook of Cyberpsychology (2019), Jason Nurse offers a taxonomy that proposes a new approach to classifying cybercrimes against individuals. He focuses on the types of cybercrime that are currently most prevalent but uses categories that he considers sufficiently flexible to deal with new and emerging criminal behaviors online: (i) social engineering and trickery, (ii) online harassment, (iii) identity-related crimes, (iv) hacking, and (v) denial of service and information. One could argue that there is a role of social engineering in many types of online crime. It relates to how criminals capitalize on people's fears and psychological weaknesses and is a central tenet of perpetrator behavior online. Social engineering relies on a criminal being able to understand others' body language and behavior in order to adapt their manipulations accordingly. They need to have a good understanding of human psychology even if they have never studied the topic. This type of manipulation was demonstrated in a recent advertisement campaign by Barclays Bank in the UK, in which fraudsters call bank customers. They use a language and communication strategy that leads the victim to reveal information to the fraudster, enabling them to access their bank accounts.
In the advertisements, the victim offers up his/her personal PIN. In real life, the adept social engineer will be able to make the victim believe whatever they wish him/her to believe, a skill which requires establishing a certain level of trust and rapport. Among other psychological factors that Nurse cites as important in the perpetration of cybercrime, or falling victim thereof, are naivety in human decision-making, capitalizing on people's states of anxiety and/or stress during certain periods of their life, a need for social or financial support, and the cognitive shortcuts (heuristics) that people use when judging electronic communications. If we take this last process as an example, many of us receive phishing emails on a daily basis, and we would all like to think that we would not fall for their trickery. But these emails are becoming more and more sophisticated. When people do fall for them, they subsequently express embarrassment at not seeing the obvious signs of fraud in the emails. Oftentimes, however, victims do not want to see the warning signs. Either consciously or not, on some level, they choose to ignore them. Nurse also utilizes the notion that all humans feel the need to belong, a factor that is also evident in our own research. Having carried out a number of preliminary studies on catfishing in our cyberpsychology research group at Wolverhampton, we have established that people often ignore the warning signs, or red flags, when interacting with others online. This is particularly the case when looking for a new romantic partner online. Most people like to think that they would be sufficiently savvy to identify a rogue or corrupt dater a mile off, and indeed most of our participants were
able to correctly identify when a vignette about an online dating scenario did or did not include a red flag. However, what is most interesting about this work is that participants were only likely to recommend that the online dater cease communication with the fraudster or criminal when they detected a financial risk to the online dater, but not when there was an identifiable emotional or psychological risk (Attrill et al. 2015). We argued that online daters will usually have invested some time in getting to know the person posing the risk. In doing so, they likely feel emotionally connected to that person and do not want to see the bad in them. This is no different to offline dating – if you invest a lot of time and energy in getting to know someone, you are more likely to make allowances for their behavior and to strive for the relationship to continue. Online behavior is no different. Either online or offline, daters do not want others' behavior to reflect badly on them. If a person they are linked with turns out to be a fraudster, they may see it as their own fault, or as embarrassing by association. They might also feel somewhat silly for having believed the other person. In order to avoid these feelings, they may persist in trying to see the good in the other person and downplay their negative attributes or behaviors. This is in line with another suggestion by Nurse, namely that humans like to be seen as being kind and/or good people. Not only do we want others to think of us as good or kind, but we also want to believe that others are kind too. Most victims do not want to believe that they have wasted their time and emotional energy on someone who is bad or the opposite of what they thought they were. This can become especially heightened when someone is falling victim to an online dating scam. Imagine the embarrassment if you have been open to your friends and family about meeting the love of your life online.
You gush and rejoice in telling everyone how you know that this is the perfect person for you. Do you regale them with all of that person's negative traits or do you focus on their positive attributes? We are guessing the latter, because as soon as you start noticing those negatives, or accepting them as real attributes, your perfect prince or princess is suddenly knocked off their throne, and you have to tell everyone that your gushing was unfounded. It is this type of supposed humiliation that a perpetrator will play on, especially when it comes to victims not reporting crimes. Many of these crimes go unreported because the victim feels stupid or embarrassed that they were duped by the nefarious actions of the cybercriminal in the first place. We note also that Nurse and Bada (2018) consider the group elements of cybercrime. In doing so, they outline a typology put forward by McGuire (2012) which categorizes group-related cybercrime into three categories: crimes that occur primarily online, crimes that utilize both offline and online activities, and crimes which occur offline but which are enabled by technology. This approach is very much in line with Kirwan and Power's (2012) suggestion that there are three types of cybercrime: Internet-enabled crimes, Internet-specific crimes, and crimes in a virtual world. By considering this role of technology in online crime, we can examine social roles, group identities, and many other often-overlooked features of online crime. In particular, we can begin to understand how the Internet is nothing other than a tool for cybercrime and neither instigator nor perpetrator thereof.


The Internet as a Tool for New and Old Crimes

For the remainder of this chapter, we will focus on the use of the Internet for crime. We emphasize here that our approach is very much one of focusing on the Internet and any related technology as nothing other than tools. In this manner, we analogize the Internet to any weapon that a perpetrator may use to cause harm or damage. It is not the knife or the gun that does the killing but the person(s) who wields that weapon. To illustrate this, we will focus on the way in which technology is used by domestic abusers as a new tool for old crimes, but equally how it can be a useful tool to encourage people to speak out about crime. We will also consider how the Internet has been used to develop new crimes. In doing so, we very much advocate for a move away from categorizing groups of criminal activities based on their behavioral similarities, without negating the benefits to law enforcement agencies in seeking behavioral patterns across different crimes. As we have already illustrated, however, from a psychological perspective, we need to consider the motivations, desires, and needs that drive both perpetrator and victim behavior, and these can be vastly different within categories of crimes that share some behavioral features. We focus on two types of crime to illustrate this: domestic and sexual abuse, and revenge pornography. Both of these behaviors can be considered more or less violent and cause more or less harm to others. The form of that harm can be physical, psychological, or emotional. It can present itself in many associated forms of criminal behavior, from stalking and harassment to psychological manipulation and control. And yet, perpetrators of these crimes can have very different goals or motivations, despite often utilizing the Internet in similar ways to achieve those goals.
To this end, these crimes serve as a prime example to highlight the Internet as a tool for crime and not as an object to be blamed for that crime.

Domestic and Sexual Abuse

When talking to students about the role of the Internet in criminal behavior, they are often surprised to learn that we will discuss domestic abuse or violence. Much like bullying, for a long time, many people believed that domestic abuse manifests itself only in physical behaviors, and that it is mainly characteristic of a dominant male-female relationship. This is not the case. According to the Office for National Statistics in the UK, in the year ending March 2017, there were just under two million reported cases of domestic abuse in England and Wales. Of these, 1.2 million were reported by women and 713,000 by men, which is an increase from the figure of 70% of reports being made by women between 2013 and 2016. These statistics will likely look very different in a couple of years' time, with the emergence of the #MeToo movement. A Tweet encouraging people to use this hashtag to speak out about their experiences of sexual harassment and sexual assault went viral in October 2017 to the point that it brought down prominent media moguls and workforce members alike the world over. The ferocity with which this hashtag took on meaning and the power thereof to empower mainly women the world over to speak out about
sexual assault and harassment highlights one way in which the Internet has been useful as a tool for combatting crime. The psychological consequence of realizing that other people have experienced similar sexual harassment can be life-changing. It can emphasize to the victim that they did not imagine, misinterpret, or overplay what happened to them and can thus validate their emotional, psychological, or even physical responses to their experience. This is very similar to what might happen to a victim of domestic abuse or violence when they find solace on the Internet or when they find people who have coped and dealt with a similar situation to their own. From a more negative perspective, the Internet can be implicated in perpetration of domestic violence and abuse in many ways, not least as a tool for controlling a victim. This usually manifests itself in the monitoring of social media and/or other forms of communication such as emails, or through both overt and covert abuse via the same interaction tools. This abuse can comprise insults, nefarious comments, or even threats of harm to the victim, their family, or friends. It can however also include the use of spyware or GPS tools. One need only think of the "Find My Friends" app on iPhones. While this was likely developed with good intentions, like so many other forms of technology, people will adapt it for their own wants and needs. Throughout the history of mankind, humans have changed and manipulated tools for use to their advantage, good or bad. The Internet is no different. For many victims, the abuse may begin offline and be extended online, usually by a current or ex-partner (see www.womensaid.org.uk). From a perpetrator perspective, their abusive behaviors feed into their insecurities and desires for power and control over their victim. The Internet is simply another tool which they can use to exert this control. They can restrict their partner's communication and control who they interact with online.
This can have serious psychological consequences for the victim, lowering their self-worth and self-esteem and making them feel devalued and embarrassed in front of their online peers. To this end, there are few differences between online and offline abuse, with technology being simply another tool used for a long-existing crime.

Revenge Pornography

In the age of selfies and Instagram, we are constantly uploading photos and videos and then sharing them for others to see. It may be something that makes us laugh, something that makes us cry, something beautiful, or something shocking. We often share without thinking too much about it, with this being a normal part of our daily lives. But sometimes people make the decision to share things to humiliate others, to upset others, to ruin reputations, and/or to exact revenge. This may involve the sharing of photos or videos from an intimate relationship. An intimate relationship that had ended. Badly. In England and Wales, revenge porn became a criminal offence in 2015, and similar legislation is being introduced internationally, most recently in Australia in 2018. The legality of the offence varies somewhat in different countries, as do the sentences associated with it. If found guilty of committing an act of revenge porn,
you could face a prison sentence of between 2 years (England and Wales) and 7 years (Australia). It can be challenging to define exactly what we mean by revenge porn as a vast array of terms are used in relation to this (see Walker and Sleath 2017 for a discussion), but for the purposes of this chapter, we define it according to the legal description in England and Wales as "the sharing of private, sexual materials, either photos or videos, of another person, without their consent and with the purpose of causing embarrassment or distress" (UK Government 2015). Despite its recent criminalization, it is difficult to establish the extent of revenge porn as the act is not always reported, or when reported, a complaint is not always pursued by the victim. In England, a recent BBC report found that since becoming a criminal offence in 2015, there have been 7806 reported incidents of revenge porn in England and Wales, but a third (2813) of these allegations were later withdrawn by the accuser (BBC 2018). A survey of 4274 people in Australia found that 20% of respondents had had sexually explicit images/videos taken of them without their consent, of which 11% had been publicly shared (Henry et al. 2017). Men and women were equally likely to be victims of revenge porn, but perpetrators were more likely to be men than women. Young people (under 30) were found to be more at risk than older people, as were people with disabilities. Lesbian, gay, or bisexual participants were also at higher risk of victimization than heterosexual participants. Clearly revenge porn is a growing problem. But who does it and why? And what is it like to be a victim of revenge porn? These are some of the questions that psychological research seeks to answer. Psychological research into revenge porn is still in its infancy, although parallels of course can be drawn to previous literature in areas such as romantic revenge (which can occur online or offline) and sexting.
Of the studies that have considered revenge porn specifically, most have focused on the extent to which people nonconsensually share sexually explicit images and videos. It is difficult to establish whether these were always acts of “revenge porn” per se as the underlying motivations for the sharing were not considered, but in Walker and Sleath’s 2017 systematic review of literature relating to revenge porn, they found that the prevalence of perpetration of nonconsensual sharing of images/videos (how many people admitted to doing it) was between 1.4% and 26% in an adult sample. The higher figure comes from a study on sexting by Strohmaier et al. (2014). While they used an adult sample, they actually asked their participants about behaviors from when they were minors (so under 18), and this revealed that 26% had forwarded a sexually suggestive image/video of a partner without their permission to a good friend. Prevalence rates for forwarding to an acquaintance were much lower at 3%. These figures are likely to be gross underestimates however. A 2017 survey by children’s charity Childnet International found that over half of teenagers in the UK knew someone else their age who had shared nude or partially nude images of someone they knew; just under a quarter had witnessed sexual images being secretly taken and shared online by other teens. Twelve percent of UK teens also reported that they had been pressured into sharing nude images in the past year. The astute reader will note that this is reminiscent of some of the crimes we have previously outlined, especially in relation to bullying and harassment and the Amanda Todd case, where peer
pressure or emotional blackmail can play a large role in manipulating a victim into behaving in a way they might otherwise not. That teenagers are potentially more at risk of being victims – and perpetrators – of revenge porn may be a reflection of the journey of normal adolescent sexual experimentation in the digital age. Indeed it has been suggested that sexting is now a normal part of adolescent relationships (Scott and Gavin 2018). The normalization of revenge porn was also supported in a study by Pina et al. (2017), who found that there may be a public acceptance of the act. They found that, while just over half of their participants showed a propensity to engage in the act of revenge porn, a majority endorsed the act, enjoying revenge porn (87%) and approving of it (99%). Pina et al. suggest that such bystander behavior has a facilitating role in the dissemination of revenge porn. Therefore these bystanders are also complicit in the act. We therefore know that a lot of people have reported being a victim of revenge porn, and a lot have admitted to the nonconsensual sharing of sexually suggestive images/videos, but why do people do it? The name suggests that it is an act of revenge, but is it that straightforward? We have already mentioned that there is a lack of consistency in the terminology used relating to revenge porn, but the term "revenge porn" is itself problematic when it comes to thinking about why people nonconsensually share sexual images. Revenge porn is not always motivated by revenge, the content being shared is not always what we would consider to be "pornographic," and while perpetrators are often ex-partners, this is not always the case (Powell and Henry 2018).
These issues have led some researchers to argue that the term "revenge porn" is too narrow and instead we should consider using a broader term such as "image-based sexual abuse." Despite this argument, for the purposes of this chapter, we shall continue to use the more commonly known term "revenge porn." So why do people commit acts of revenge porn? There is a paucity of research in this area and we do not yet fully understand the motivations for this but, although sometimes used for the purposes of blackmail or extortion by a stranger (Laird and Toups 2013), the offence is most often committed by an offender known to the victim in a way that exerts control over them. Leading on directly from the previous section in this chapter, such control or threatening behavior towards a current or ex-partner has been likened to a form of domestic violence (Henry and Powell 2015). Indeed, an association has been found between online and offline behavior, with online abuse by a partner, such as cyber-victimization or digital dating abuse, being linked to domestic and sexual violence (Marganski and Melander 2015; Reed et al. 2016). As revenge porn often happens in the context of a broken relationship, it could be argued that as well as having the aim of hurting the partner responsible for the break up, revenge porn is also motivated by the need to save face for the spurned lover. Research on sexting has shown us that for males, who we have already seen are the most likely perpetrators of revenge porn, sharing intimate images positively impacts upon their status while negatively impacting upon a female's reputation (Ringrose et al. 2013). We have considered the perpetrators of revenge porn above, but what about the victims? If, for example, you were the victim of revenge porn, with images or videos made in an intimate relationship later used against you, how would that make you feel? It is easy to assume that you would be reluctant to trust future
partners for fear of this happening again, but the consequences are more far-reaching than this. Being a victim of revenge pornography can result in lifelong mental health consequences, damaged relationships, and social isolation (Kamal and Newman 2016). The mental health consequences (e.g., trust issues, PTSD, anxiety and depression, destroyed self-esteem and confidence, and loss of control) can lead to negative coping mechanisms (e.g., avoidance/denial, excessive drinking of alcohol) similar to those experienced by sexual assault survivors (Bates 2017). At its most serious, 51% of victims of revenge porn in the USA said they had considered suicide (Cyber Civil Rights Initiative 2014). These negative consequences may be compounded by the victim-blaming culture that is associated with revenge porn. Victims of revenge porn often find themselves blamed, due to having (sometimes but not always) allowed the photos or videos to be made in the first place. This is partly an issue with the terminology surrounding revenge porn, bringing with it the notion that the victim was "asking for it." Victim-blaming has already been extensively explored in the literature relating to sexual assault, but psychologists are also starting to look at this specifically in relation to revenge porn. Of the few studies that have been conducted in this area to date, the findings have been mixed. For example, Bothamley and Tulley (2018) found that the general public did not really attribute blame to the victims in a vignette study on revenge porn, whereas Scott and Gavin (2018) found a high level of responsibility being attributed to victims of revenge porn. Recent research by Owen et al. (n.d.) suggests that even when participants overtly reported that they did not blame a victim for initially sharing the sexual material with a perpetrator, they unconsciously, or covertly, did blame the victim.
This is an important area for further research, as the idea that a victim may be blamed for their victimization has been found to be a factor that may make victims of sexual assault, including revenge porn, reluctant to report the offence.

Researching Online Crime

Researching online crime from a psychological perspective comes with a number of difficulties. Firstly, most research focuses on victim behavior and attempts to educate people about how to avoid becoming a victim. While this may be sensible, it does not prevent criminal behavior. It is, however, very difficult to gain access to criminals who want to divulge information about themselves and their criminal activities. Even when incarcerated, perpetrators will likely see a stigma attached to revealing their modus operandi, not least because one day they might be released and want to reoffend. What we do know of suggested profiles of perpetrators' psychological makeup is that they are often based on very few case studies, or inferences made from observing police tape recordings, gathering information that perpetrators have shared across the Internet and other offline resources, as well as possibly speaking to those once close to them. It is therefore likely less objective, and more subject to researcher bias, than we would like for robust psychological testing. There is also a risk that when criminals do divulge information, they may report distorted memories thereof, often glorifying or over-embellishing their acts to the point that
deriving the truth or actual behavior from their narrative can become difficult. From this perspective, psychologists need to work in tandem with other disciplines to understand online perpetrator behavior. A number of recent studies have pointed toward future research directions here. Linguistic analysis is recognized as a useful forensic tool in a legal setting; however, for this to be truly useful, forensic evidence language experts are required to do this analysis. For example, Drouin et al. (2018) recently looked at whether the psychological language used by Internet child sex sting offenders can predict recidivism. In other words, are there aspects of the way that people talk online that may give us clues as to how likely it is that they will reoffend? In a related area, Baryshetev and McGlone (2018) consider how the language used by sexual predators online may help us to understand what motivates such offenders. Growing a research body in the area of offender language use will allow for computerized programs to be developed that identify the linguistic patterns of online offenders. Such information is valuable in investigative processes as well as for practitioners who want to understand the motivations behind offending behaviors. Similarly, Al Mutawa et al. (2019) suggest that applying behavioral evidence analysis – an approach that takes evidence from a case and deductively analyzes it to identify characteristics of an offender – to digital forensics may be an effective way of understanding offenders' motivations for engaging in online crimes. While we have suggested that there needs to be more of a research focus on perpetrators rather than victims of online crime, Martens et al. (2019) argue that we cannot ignore the fact that people are the weakest link in cybersecurity. However, they suggest that rather than focusing on victimization, future research should consider people's motivations for protecting themselves against cybercrime.
Their findings in this area have shown that there are different motivations for different forms of cybercrime. Another difficulty in understanding online crime is the rapid development and progress of technology combined with the length of time it takes to produce academic and/or practical research. Timescales can be anywhere from an extremely optimistic few months to a few years for research ideas to be developed, data to be collected and analyzed, and the work to be written up. And that is not the end of it: there is then a lengthy publication process of submissions, rejections, and editing to go through that can delay the dissemination of research further. By the time much of the research reaches the public domain, technology has advanced and the research is likely no longer wholly accurate. We do not have the answer to how to make this work more timely and relevant, other than for researchers to aim to keep up with technological developments and incorporate them into their ongoing research programs.

Concluding Comments

In this chapter, we have offered a whirlwind tour of a snippet of the psychology associated with online criminal behavior. By focusing on the use of typologies and taxonomies, we were able to highlight the need for criminal behaviors online to each
be considered in their own right, rather than attempting to categorize them as groups or types of criminal acts. We thus advocated for a psychological approach that draws on multiple areas of psychology which have been used to explain offline behavior. While we use a number of crimes as examples and signpost to further information on a whole range of diverse online crimes, we do note that rapidly developing technological advances can make researching and understanding both perpetrator and victim behavior online very difficult. We do, however, also suggest that more work needs to be carried out that specifically focuses on understanding the motivations, needs, and psychological drives underlying online crime, as these may hold the key to moving forward with a multifaceted approach to preventing crime in the future, rather than focusing on preventing victims falling foul of inevitable online crime. We have largely focused on the Internet not only as a tool for old and new crimes but equally as a tool for extending existing criminal behavior beyond the offline world. We hope that we have awakened in the reader an interest to explore some of the areas of work mentioned further and to maintain an open-minded, inquisitive approach to doing so.

References

Al Mutawa, N., Bryce, J., Franqueira, V. N. L., Marrington, A., & Read, J. C. (2019). Behavioural digital forensics model: Embedding behavioural evidence analysis into the investigation of digital crimes. Digital Investigation, 28, 70–82.
Ang, R. P., & Goh, D. H. (2010). Cyberbullying among adolescents: The role of affective and cognitive empathy, and gender. Child Psychiatry and Human Development, 41(4), 387–397.
Attrill, A., Fullwood, C., & Chadwick, D. (2015). Catfish: The detection of red flags, dangers and suspicious behaviours in the pursuit of love online. Paper presented at the Social Networking in Cyberspace Conference, Wolverhampton, September 2015.
Baryshetev, M. V., & McGlone, M. S. (2018). Pronoun usage in online sexual predation. Cyberpsychology, Behavior and Social Networking, 21(2), 117–122.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–529.
BBC. (2018). Revenge porn: One in three allegations dropped. https://www.bbc.co.uk/news/uk-england-44411754. Retrieved 20 Nov 2018.
Bothamley, S., & Tulley, R. (2018). Understanding revenge pornography: Public perceptions of revenge pornography and victim blaming. Journal of Aggression, Conflict & Peace Research, 10(1), 1–10.
Campbell, M. A. (2005). Cyber bullying: An old problem in a new guise? Australian Journal of Guidance and Counselling, 15(1), 68–76.
Chifflet, P. (2015). Questioning the validity of criminal profiling: An evidence-based approach. Australian and New Zealand Journal of Criminology, 48(2), 238–255.
Costa, P. T., & McCrae, R. R. (1985). The NEO personality inventory manual. Odessa: Psychological Assessment Resources.
Cyber Civil Rights Initiative. (2014). Revenge porn statistics. https://www.cybercivilrights.org/wp-content/uploads/2014/12/RPStatistics.pdf. Retrieved 20 Nov 2018.
Drouin, M., Boyd, R. L., & Greidanus Romaneli, M. (2018). Predicting recidivism among internet child sex sting offenders using psychological language analysis. Cyberpsychology, Behavior and Social Networking, 21(2), 78–83.

32

The Psychology of Cybercrime


Federal Trade Commission. (2016). https://www.ftc.gov/news-events/press-releases/2016/03/ftc-releases-annual-summary-consumer-complaints. Retrieved 16 Nov 2018.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Glover, D., Cartwright, N., Gough, G., & Johnson, M. (1998). The introduction of anti-bullying policies: Do policies help in the management of change? School Leadership & Management, 18, 89–105.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1993). Emotional contagion. Current Directions in Psychological Science, 2(3), 96–99.
Henry, N., & Powell, A. (2015). Embodied harms: Gender, shame, and technology-facilitated sexual violence. Violence Against Women, 21, 758–779.
Henry, N., Powell, A., & Flynn, A. (2017). Not just ‘revenge pornography’: Australians’ experiences of image-based abuse. A summary report. Melbourne: RMIT University.
Hinduja, S., & Patchin, J. W. (2008). Cyberbullying: An exploratory analysis of factors related to offending and victimization. Deviant Behaviour, 29, 129–156.
Hoffman, C. (2013). Hacker hat colors explained: Black hats, white hats, and gray hats. https://www.howtogeek.com/157460/hacker-hat-colors-explained-black-hats-white-hats-and-gray-hats/. Retrieved 16 Nov 2018.
Holmes, R. M. (1989). Profiling violent crimes: An investigative tool. New York: Sage.
Kamal, M., & Newman, W. J. (2016). Revenge pornography: Mental health implications and related legislation. Journal of the American Academy of Psychiatry & Law, 44, 359–367.
Kirwan, G., & Power, A. (2012). The psychology of cyber crime: Concepts and principles. Hershey: Information Science Reference.
Kuipers. (2006). The social construction of digital danger: Debating, defusing and inflating the moral dangers of online humor and pornography in the Netherlands and the United States. New Media & Society, 8(3), 379–400.
Laird, L., & Toups, H. (2013). Victims are taking on ‘revenge porn’ websites for posting photos they didn’t consent to. ABA Journal, 99(1), 1–10.
Le Bon, G. [1895] (1903). The crowd: A study of the popular mind. London: Fisher Unwin.
Marganski, A., & Melander, L. (2015). Intimate partner violence victimization in the cyber and real world: Examining the extent of cyber aggression experiences and its association with in-person dating violence. Journal of Interpersonal Violence, 33(7), 1071–1095.
Martens, M., De Wolf, R., & De Marez, L. (2019). Investigating and comparing the predictors of the intention towards taking security measures against malware, scams and cybercrime in general. Computers in Human Behavior, 92, 139–150.
McGuire, M. (2012). Organized crime in the digital age. London: John Grieve Centre for Policing and Security & Detica.
Nurse, J. R. C. (2019). Cybercrime and you: How criminals attack and the factors that make them successful. In A. Attrill-Smith, C. Fullwood, M. Keep, & D. Kuss (Eds.), Oxford handbook of cyberpsychology. Oxford: Oxford University Press.
Nurse, J. R., & Bada, M. (2018). The group element of cybercrime: Types, dynamics and criminal operations. In A. Attrill-Smith, C. Fullwood, M. Keep, & D. Kuss (Eds.), Oxford handbook of cyberpsychology. Oxford: Oxford University Press.
Obscene Publications Act (1959).
Olweus, D. (2003). Bullying at school: What we know and what we can do. Cambridge, MA: Blackwell Publishers, Inc.
Owen, M., Attrill-Smith, A., Wesson, C., Marsh, L., & Brown, K. (n.d.). The role of victim blaming in real life reports of revenge porn for gifted and stolen images and videos.
Pina, A., Holland, J., & James, M. (2017). The malevolent side of revenge porn proclivity: Dark personality traits and sexist ideology. International Journal of Technoethics, 8(1), 30–43.
Powell, A., & Henry, N. (2018). Policing technology-facilitated sexual violence against adult victims: Police and service sector perspectives. Policing and Society, 28(3), 291–307.
Reed, L. A., Tolman, R. M., & Ward, L. M. (2016). Snooping and sexting: Digital media as a context for dating aggression and abuse among college students. Violence Against Women, 22, 1556–1576.
Ringrose, J., Harvey, L., Gill, R., & Livingstone, S. (2013). Teen girls, sexual double standards and ‘sexting’: Gendered value in digital image exchange. Feminist Theory, 14, 305–323.


Scott, A. J., & Gavin, J. (2018). Revenge pornography: The influence of perpetrator-victim sex, observer sex and observer sexting experience on perceptions of seriousness and responsibility. Journal of Criminal Psychology, 8(2), 162–172.
Slonje, R., & Smith, P. K. (2008). Cyberbullying: Another main type of bullying? Scandinavian Journal of Psychology, 49, 147–154.
Smith, J. C., & Hogan, B. (1999). Criminal law: Cases and materials. London: Butterworths.
Strohmaier, H. I., Murphy, M., & DeMatteo, D. (2014). Youth sexting: Prevalence rates, driving motivations, and the deterrent effect of legal consequences. Sexuality Research & Social Policy: A Journal of the NSRC, 11(3), 245–255.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
UK Government. (2015). Revenge porn. https://www.gov.uk/government/publications/revenge-porn. Retrieved 20 Nov 2018.
Vandebosch, H., & Van Cleemput, K. (2009). Cyberbullying among youngsters: Profiles of bullies and victims. New Media and Society, 11(8), 1349–1371.
Walker, K., & Sleath, E. (2017). A systematic review of the current knowledge regarding revenge pornography and non-consensual sharing of sexually explicit media. Aggression and Violent Behavior, 36, 9–24.
Wall, D. S. (2001). Crime and the internet. New York: Routledge.
Wall, D. S. (2007). Cybercrime: The transformation of crime in the information age. Cambridge, MA: Polity Press.
Ward, T., & Beech, A. (2006). An integrated theory of sexual offending. Aggression and Violent Behaviour, 11, 44–63.
Willard, N. (2004). An educator’s guide to cyberbullying and cyberthreats. Center for Safe and Responsible Internet Use. www.http.ewa.org. Retrieved 16 Nov 2018.
Young, L. F. (1993). Utopians, cyberpunks, players and other computer criminals. In IFIP TC9/WG9.6 Working Conference on Security and Control of Information Technology in Society.

Internet Addiction and Cybercrime

33

Bernadette Schell

Young people are increasingly committing and being drawn into cyber-criminality. For example, in 2015, a UK telecommunications company had a security breach and lost valuable data. Five suspects were arrested in connection with the investigation, all aged between 15 and 20 years of age. The company has subsequently reported that the data breach has cost the firm up to £60 million; an estimated 101,000 customers have left the company following the hack. In October 2016 the company was fined £400,000 for theft of customer details and their failure to implement basic cybersecurity measures. The UK Information Commissioner’s Office, which imposed the fine, said security was so poor that the attack succeeded ‘with ease,’ which is notable given the age of those arrested. – Aiken et al. (2016a, p. 2)

Contents

Introduction . . . 680
A Brief History of Computer Addiction . . . 680
How Addiction and Computer/Internet Addiction Are Defined . . . 683
How Internet Addiction Was First Measured . . . 684
Trends and Controversies Related to Internet Addiction . . . 686
Internet Addiction Missing from DSM 5 . . . 687
Psychological and Psychiatric Cofactors Reportedly Underlying Internet Addiction . . . 688
Exploring the Link Between Internet Addiction and Cybercrime: Is There Such a Thing as “Crime Addiction”? . . . 690
Cybercrime and Its Costs Continue to Spiral . . . 691
Cybercrime Is Increasingly Appealing to Youth and Young Adults . . . 692
Profiling Cyber Criminals and the Possible Role of Social and Psychological Aspects (Like Internet Addiction) in Perpetrators . . . 693

B. Schell (*) Faculty of Management, Laurentian University, Sudbury, ON, Canada e-mail: [email protected] © The Author(s) 2020 T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_26



Understanding Youth’s Pathways into Cybercrime . . . 697
Psychological Interventions for Reducing Internet Addiction and Other Psychological Precursors Possibly Related to Cybercrime . . . 698
Conclusion . . . 700
Cross-References . . . 700
References . . . 700

Abstract

This chapter looks at whether a credible link exists between “internet addiction” – the phenomenon of being “hooked on” online activities – and cybercrime. The bulk of this chapter explores trends and controversies surrounding internet addiction and then examines research findings linking internet addiction to cybercrime. Psychologists’ interventions for reducing the adverse effects of internet addiction are also discussed at the chapter’s end.

Keywords

Computer addiction · Diagnostic criteria · Cofactors · Cybercrime links · Criminal profiling · Psychological interventions

Introduction

This chapter examines a possible link between computer/internet addiction and cybercrime (Nykodym et al. 2008), focusing, in particular, on how certain cybercrimes targeting businesses and organizations can be motivated by computer addiction. The chapter opens with a brief history of computer/internet addiction and then moves to how computer/internet addiction is defined and measured in the psychological literature. Trends and controversies related to virtual addiction are then investigated, as well as the reported psychological and psychiatric cofactors underlying such addiction. The discussion then turns to the view of crime as an addiction (MacQueen 2004), with an exploration of the link between computer/internet addiction and cybercrime. The various types of cybercrimes likely to target businesses or individuals are detailed, along with the harms manifesting as a result. The chapter closes with known treatments for computer/internet addiction as a means of reducing the propensity to commit cybercrimes and, thus, the resulting harms caused to persons and property.

A Brief History of Computer Addiction

Computer addiction has been talked about since the 1970s – both by experts in the fields of criminology and in psychology/psychiatry. Back then, computer programmers and hackers were often referred to as “addicts” by psychologists and


mainstream society – even though, at that time, really smart people sitting at their computer terminals for 20 h or more at a stretch were considered to be useful contributors to both technology and society (Reed 2002). During that period, the government owned most of the huge computer installations that only really smart people could run. Then, during the 1980s, home computers – known as personal computers or PCs – appeared on the scene, transforming computers into user-friendly machines that could be used by those in the mainstream. But access to computers in the 1980s also opened the doors to computer addiction for everyday folks. Indeed, computer “addiction” became a more widespread term in the 1980s, in part because it fit in well with the social scene at the time in the United States, where drug usage and addiction were popular topics in the media. It is little wonder that commentators on the so-called obsessive use of computers back then referred to the PC as the “LSD of the 1980s” (Reed 2002; Elmer-DeWitt 1993). Probably the most notable scholarly paper of the 1980s to raise concerns over the harms of computer addiction was written by Margaret Shotton (1989). She said that “in all instances where personal contact was made with people who worked with computers and computer users, there was belief in the occurrence of this syndrome” (Shotton 1989, p. 20). Shotton went on to say that the negative consequences of computer addiction could include harms like work problems and a marked decrease in social interactions. She also affirmed that, from a demographic perspective, computer addiction was more likely to be found in young, unmarried males who were the first-born offspring of professional parents in the science and technology fields.
By the 1990s, the notion of computer hacking (defined as “the unauthorized access to and unauthorized use of or manipulation of other people’s computer systems”; Yar 2005) and computer addiction entered the mainstream through media stories portraying repeat hackers like Kevin Mitnick as individuals bordering on “the criminal.” They required, argued the authorities, a combination of rehabilitative therapy, computer use restriction, and time in federal prison (Reed 2002). Mitnick and others at the time who interacted primarily with those similarly computer-driven, and who found comfort in what was then called “the Computer Underground or CU,” had their own subculture. Three norms persistently drove hackers’ behaviors regardless of their ethical intent (known as white hat tendencies) or malicious intent (known as black hat tendencies) (Holt and Kilger 2008):

1. The presence of technology, with an intimate connection to technology facilitating the ability to hack
2. Secrecy, where there is a need to avoid unwanted attention from government and law enforcement agencies, coupled with a desire to brag and share accumulated knowledge
3. Mastery, or the continual learning of new skills and the mastering of one’s social and physical environment

What Kevin Mitnick was so good at, and what brought him so much recognition within the CU, was his particular talent for social engineering – involving the use of


nontechnical skills to manipulate and take advantage of naive or inadequately trained employees in IT security matters. In other words, back in the 1990s as now, some hacking exploits were committed without much technical sophistication, instead capitalizing on the “weakest link” in the system (Schell et al. 2002). It is important to note that in the 1990s, besides the Mitnick case introducing and legitimizing the notions of clinical computer addiction and social engineering in the US legal system (Reed 2002), the term internet addiction disorder, or IAD, was coined by US psychiatrist Dr. Ivan Goldberg in 1995 to describe pathological and obsessive computer use. He actually used the term as a joke to describe his own avid use of the internet when communicating with fellow psychiatrists (Schell 2016). Like Shotton before him, Goldberg suggested more seriously a year later, in 1996, that the diagnostic criteria for IAD – if it were to be more broadly received by the mental health expert community – would likely be based on the diagnostic criteria for substance abuse disorder (SAD) found in the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (the DSM) and would tend to focus on symptoms related to tolerance, withdrawal, and forgoing or reducing important social and occupational activities (Campanella et al. 2015). But by 2005, IAD was still not listed as a legitimate “addiction” endorsed by the American Psychiatric Association (APA); rather, it was described under the category of impulse control disorder (ICD) in the DSM 4 (Wieland 2005). The ability to control one’s impulses or urges distinguishes humans from other species and acts as a marker of humans’ psychological maturity. While most humans take their ability to think before they act for granted, criminals have been shown to be lacking in this capability – a deficit known as low self-control or poor impulse control.
Michael Gottfredson and Travis Hirschi’s (1990) general theory of crime, or self-control theory, argues that individuals commit crime because they are unable to resist temptation and therefore commit acts whose long-term costs outweigh their short-term benefits. In fact, a lack of self-control, or poor impulse control, has been demonstrated to be one of the most influential correlates of crime, both in the traditional land-based crime sense and in the cybercrime sense. From a mental health perspective, people with an impulse control disorder cannot seem to resist the urge to do something harmful to themselves or to others. Impulse control disorders include addictions to alcohol and drugs, compulsive gambling, poorly inhibited sexual fantasies, behaviors involving nonhuman objects or children, fire setting, and intermittent explosive attacks of rage. Seen from this angle, it becomes clear how the psychological and criminological impacts of poor impulse control become critically intertwined. Usually, a person feels increasing tension or arousal before committing the act characterizing the disorder. During the act, the person probably will feel pleasure, gratification, or relief. Afterward, the person may blame himself or feel regret or guilt (Ploskin 2018). By 2012, the nosology and optimal diagnostic criteria for IAD remained controversial in the mental health domain. While IAD was proposed to merit inclusion in the DSM 5, what still needed to be determined was whether it is more accurately described as a behavioral addiction, an impulse control disorder, or a manifestation of other underlying psychiatric disorders (Dong et al. 2012).


How Addiction and Computer/Internet Addiction Are Defined

Before defining the term computer/internet addiction, a definition of addiction itself will be discussed. In the mental health community, an addiction is generally defined as a compulsive, continued use of a substance or behavior that is known by the user to be harmful physically and/or psychologically. In this context, an addiction is often thought to be symptomatic of an impulse control disorder – characterized by a tendency to gratify a desire or impulse, despite the negative consequences that such gratification may create for oneself or for significant others (Lesher 1997; American Psychiatric Association 2000). As noted earlier, criminologists take a similar view. Addictions are commonly thought to include a constellation of traits seen in addicted clients, including the following (Young 1996):

• Withdrawal from daily activities
• Tolerance issues
• A preoccupation with the substance or behavior in question
• Heavier or more frequent use of the substance or behavior than intended
• Centralized activities to obtain more of the substance or to engage in the behavior
• Loss of interest in other social, work, or recreational activities formerly enjoyed
• Denial, defined as the outright disregard for the physical or psychological consequences caused by the use of the substance or the behavior in question

It is important to point out that in the mental health field, there are the purists who feel that the term “addiction” should be reserved only for individuals suffering from substance abuse – commonly alcohol and drugs (Rachlin 1990; Walker 1989). However, in recent years, more liberal thinkers have published findings related to a broader range of virtual addictions like computer/internet addiction, gaming addiction, mobile phone addiction, cybersex, and online social media addiction (Yu and Chao 2016; Sarda et al. 2016; Hwang and Jeong 2015; Laier et al. 2014; De Cock et al. 2014). Below is a light-hearted media piece that reflects the personal hassles of smartphone addiction, per se, and the personal decision made to rid oneself of the addiction (Howell 2018, p. P5): Smartphones, if you think about it, are a little bit like wearing underwear. You’re certainly free to walk through life without them, but people might think you’re crazy. So consider me crazy, because I’ve given mine up. (The smartphone, not the briefs.) In August, I purchased a Nokia 8110, an updated release of the banana-shaped Matrix phone that was first released in 1996. It texts! It calls! It has predictive text! Neo would be proud. “Okay,” you’re asking, “Why?” A few reasons. Firstly, my trusty iPhone 5C – which I used for roughly four years, a lifetime in our world of planned obsolescence – is pretty much dead. May it gather dust in peace. Like any smartphone, my 5C was great for checking e-mails, Twitter and the occasional Facebook message. It was a little too great at these actions, which brings me to my second reason for the switch: My use of the thing quickly evolved to something akin to a glorified fidget spinner. Pulling down on the feeds that are already up-to-date is addictive. It felt unhealthy.


So I bought a dumb phone – or, more accurately, a “feature phone,” a basic mobile device that lacks “smart” technology – not wanting to spend a small fortune on a luxury product that felt more like a social obligation than actual retail therapy.

As for computer/internet addiction in particular: a year after the rather light-hearted comment made by Dr. Goldberg in 1995, Dr. Kimberly Young presented the term “computer addiction” at the 1996 American Psychological Association annual convention in Toronto, Ontario, and this time she underscored that internet addiction is worthy of serious concern. She further noted in an academic paper published that same year (Young 1996) that internet addictions are not unlike other substance addictions, in that they can cause a number of negative life side effects for those so impaired – such as loss of control, social isolation, dysfunctional marital and family relationships, and/or impaired academic or work achievements. Young (1996) noted, as well, that not only do computer-addicted or internet-obsessed types tend to neglect their loved ones and their chores, but they also tend to have odd sleeping patterns and poor eating habits. Of all the diagnoses referenced in the DSM at the time, she argued, pathological gambling can be viewed as most closely related to the pathological nature seen in internet “extreme users.” By using pathological gambling as a model, “internet addiction” can be defined, she said, “as an impulse control disorder not involving an intoxicant.” Young (1996) also noted that computer- or internet-addicted types are generally not able to maintain a task- and socially/emotionally balanced regimen over the longer term and that there are often related psychological cravings or physical withdrawal symptoms when an extreme user is unable to get his or her online “fix.”

How Internet Addiction Was First Measured

In 1998, Dr. Young made what is considered to be the first attempt to measure internet addiction and to assess at what level addiction differs from normal internet usage. To this end, using pathological gambling as a model, Young (1998) developed a brief eight-item questionnaire referred to as a Diagnostic Questionnaire (DQ). The items comprising this instrument are listed below (p. 238):

1. Do you feel preoccupied with the internet (think about previous online activity or anticipate next online session)?
2. Do you feel the need to use the internet with increasing amounts of time in order to achieve satisfaction?
3. Have you repeatedly made unsuccessful efforts to control, cut back, or stop internet use?
4. Do you feel restless, moody, depressed, or irritable when attempting to cut down or stop internet use?
5. Do you stay online longer than originally intended?
6. Have you jeopardized or risked the loss of a significant relationship, job, educational, or career opportunity because of the internet?


7. Have you lied to family members, a therapist, or others to conceal the extent of involvement with the internet?
8. Do you use the internet as a way of escaping from problems or of relieving a dysphoric mood (e.g., feelings of helplessness, guilt, anxiety, or depression)?

Participants were volunteers who responded to national and international newspaper advertisements, flyers posted on college campuses, and postings on electronic support groups geared toward internet addiction for online respondents. Individuals searching for the keywords “internet” or “addiction” on Web search engines were also approached for participation. The participants who answered “yes” to five or more of the criteria were classified by Young (1998) as dependent internet users (Dependents), while the remainder were classified as nondependent internet users (Nondependents). A cut-off score of five or more was declared to be consistent with the number used to assess pathological gambling. In Young’s initial test sample of 157 male Dependents and 239 female Dependents, the mean ages were found to be 29 years and 43 years, respectively, and the group’s mean educational background was found to be 15.5 years. These findings indicate that male Dependents were considerably younger than female Dependents. Moreover, for her initial test sample of 64 male Nondependents and 36 female Nondependents, the mean ages were found to be 25 years and 28 years, respectively – clearly less demarcated along gender lines than the former group. Their mean educational background was found to be 14 years. Young (1998) also reported that Dependents spent, on average, 38.5 h weekly online (SD = 8.04 h), while Nondependents spent, on average, 4.9 h weekly online (SD = 4.70 h). Like alcoholics, she said, the Dependents gradually developed up to ten times their initial use as familiarity with the internet increased.
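Young’s cut-off rule is simple enough to express as a short scoring routine. The sketch below is a hypothetical illustration of that rule only – the function name and the example responses are invented for illustration and are not part of Young’s published materials – showing how a respondent answering “yes” to five or more of the eight DQ items would be classified as a Dependent:

```python
# Hypothetical sketch of Young's (1998) DQ cut-off rule:
# "yes" to five or more of the eight items -> dependent internet user.

def classify_dq(answers):
    """answers: a sequence of 8 booleans, True meaning a 'yes' response."""
    if len(answers) != 8:
        raise ValueError("The DQ has exactly eight items")
    # Booleans sum as 0/1, so this counts the 'yes' responses
    score = sum(answers)
    return "Dependent" if score >= 5 else "Nondependent"

# Example: a respondent endorsing items 1, 2, 4, 5, 7, and 8 (six 'yes' answers)
responses = [True, True, False, True, True, False, True, True]
print(classify_dq(responses))  # prints "Dependent"
```

Note that the cut-off of five was chosen for consistency with the pathological gambling criteria rather than derived psychometrically, which is one reason the instrument’s reliability and validity have since been questioned.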
Thus, internet addiction seemed to be related, in part, to poor impulse control, and, in part, to a behavior-oriented addiction. According to Young, those who get hooked to being online are addicted to what they do and the feelings they experience when doing it. To this point, computers become a means to compensate for other things lacking in a person’s life – such as meaningful relationships, a strong marriage, a fulfilling career, or financial security. Given that Dependents said that they spent most of their time in chat rooms (35%) and in multiuser dungeons (MUDs), Young further reported that this group seemed to enjoy the aspects of the internet that allowed them to meet, socialize, and exchange ideas with new people through these highly interactive mediums, often preferring online socializing to real-life socializing. In contrast, the Nondependents said that they spent most of their time sending and receiving emails (30%), surfing the Web (25%), or seeking information (24%), and they also enjoyed real-life socializing. An illustration appearing in Young’s (1996) article demonstrates well the plight of a 43-year-old married homemaker who was a volunteer in one of her internet addiction studies. This woman said that unlike the stereotypical young male computer geeks who might become internet-obsessed because of their technical savvy, she was basically computer illiterate at the start of her journey into the virtual world.


She reportedly was able to go online by using the family’s recently acquired personal computer only because of the menu-driven applications provided by her internet service provider. At first, she said, she spent just a few hours online entertaining herself in chat rooms, defined as virtual communities where online users can chat with one another in real time. However, she affirmed, within just 3 months, she became more immersed in the virtual world, spending up to 60 h a week there. The woman said that she developed a sense of community in the chat room. Though she often intended to spend at most 2 h there in the mornings, she wound up spending as much as 14 h in a single sitting. Besides taking part in online chats, she also confessed to constantly checking her email. Eventually, she said, she would be online all night, finally cutting herself off from the virtual world in the early morning hours. The woman said that she experienced feelings of depression, anxiety, and irritability when she was not able to be on her computer; so in order to avoid these adverse feelings, she tried to stay in the virtual world as long as she could. She stopped visiting with her friends, she cancelled appointments, and she reduced the amount of time she spent with her family. Not only did she stop doing enjoyable activities like playing bridge with colleagues in her bridge club, but she stopped performing the normal routine chores of buying groceries or cleaning the house. Yet, despite this vast change in her behavior, she did not view her internet addiction to be a problem needing correcting (Young 1996). Other family members began to complain about the woman’s changed behaviors. Her two teenage daughters said that they felt ignored by her, and her husband of 17 years complained about the high internet service provider fees levied at that time (as much as $400 a month) because of his wife’s excessive time spent online. 
He was extremely concerned, he shared with her, about her lack of interest in their marital relationship. Nevertheless, the woman insisted that her online behavior was “normal,” that she was not addicted to the internet, and that she refused to get medical assistance because of her family members’ erroneous perceptions. In just 1 year after the family’s purchase of the personal computer, the woman was legally separated from her husband and was emotionally distant from her two daughters (Young 1996). Eventually, the woman admitted that she would need some kind of medical assistance because on her own, she was unable to stop going online. She further admitted to being unable to form a healthy, open relationship with her family members (Young 1996).

33 Internet Addiction and Cybercrime

Trends and Controversies Related to Internet Addiction

Despite the many studies completed in recent years and across different cultures on maladaptive internet use (Yu and Chao 2016; Lam et al. 2009; Wu and Cheng 2007; Niemz et al. 2005), or internet addiction disorder (IAD), there remains to this day no consistent set of diagnostic criteria or measurement scale for the phenomenon. In fact, since 1996, when Young first brought the term to the broader psychology community, numerous measures – grounded in different theoretical considerations – have been created and applied by researchers in psychology and other fields for assessing problematic internet use. These instruments include Brenner's Internet-Related Addictive Behavior Inventory, the Generalized Problematic Internet Use Scale, the Online Cognition Scale, the Internet Addiction Scale, and the Chinese Internet Addiction Inventory; more recently, a research team created a single-factor questionnaire titled the Compulsive Internet Use Scale. Yet scant psychometric data are available even on the widely used Internet Addiction Test and the eight-item Diagnostic Questionnaire created by Young (Koronczai et al. 2011). Consequently, these measures appear to lack the rigorous analyses of reliability and validity that would justify their widespread usage. While such a confirmatory approach was applied to a few of these measures, some of those studies were criticized for relying on samples that were too small or too homogeneous (Koronczai et al. 2011).

Because of this lack of methodological consistency – unlike for well-established impulse control disorders, such as pathological gambling – information on the population prevalence of internet addiction has not been consistently forthcoming. Nevertheless, recent studies undertaken in various countries have suggested that the population prevalence of internet addiction ranges from 0.3% in the United States to 1% in Norway. Among adolescents, in particular, the prevalence has been estimated at about 8% in Greece. And while early studies indicated that the bulk of people suffering from internet addiction were mostly young males with introverted personalities, more recent reports have indicated that the disorder is increasingly manifesting in females.
Moreover, in recent years, with the greater availability of the internet in most Asian countries, internet addiction has become a noticeable mental health issue among adolescents – with prevalence rates in Taiwan and China almost doubling in 4 years, from 6% in 2000 to about 11% in 2004 (Lam et al. 2009). The same kinds of definition and measurement problems exist for other kinds of noncriminal online addictions, like gaming addiction. This lack of criteria has resulted in inconsistent reporting on the prevalence, progression, and adequate treatment of online gaming overuse (Cho et al. 2014).

Internet Addiction Missing from DSM 5

It is interesting to note that despite these definition and measurement problems, in 2008, Dr. Jerald Block forcefully argued that internet addiction, or Internet Use Disorder (IUD), appeared to be a common disorder in the general population and that it merited full inclusion in the DSM 5 when it was published. Conceptually, he argued that an IUD diagnosis in the DSM would be defined as a compulsive-impulsive spectrum disorder involving online computer usage and consisting of at least three subtypes: excessive gaming, online behaviors indicating a sexual preoccupation, and excessive email/text messaging (Block 2008). All of these variants, he noted, share the following four components (Block 2008):


1. Excessive use, often associated with a loss of sense of time or a neglect of basic drives
2. Withdrawal, including feelings of anger, tension, and/or depression when the computer is inaccessible
3. Tolerance, including the need for better computer equipment, more software, or more hours of use
4. Negative repercussions, including arguments, lying, poor achievement, social isolation, and fatigue resulting from excessive online behaviors

Interestingly, internet gaming disorder (IGD), not IUD, was identified in Section III of the DSM 5 (released in 2013) as a condition warranting more clinical research and experience before it might be considered for full inclusion in this distinguished mental health diagnostic book as a recognized disorder. The DSM 5 stated that internet gaming disorder is most common in male adolescents aged 12–20 years, that experts believe its prevalence is greatest in Asian countries (as compared to North America and Europe), and that this belief needs further study. Also noteworthy is the fact that internet gambling disorder was not included under the label of IGD, primarily because it was already included under the gambling disorder criteria in DSM 4 (Sarkis 2014).

The DSM 5 (American Psychiatric Association 2013) proposed the continued vetting of nine diagnostic criteria for internet gaming disorder, based on many criteria earlier outlined by Dr. Block (2008):

1. Having a preoccupation with internet gaming
2. Having withdrawal symptoms when the internet is withdrawn or not available
3. Developing tolerance, defined as having the need to spend increasing amounts of time engaged in internet gaming
4. Having unsuccessful attempts to control internet gaming use
5. Having ongoing, excessive internet use despite knowledge of the negative psychosocial problems
6. Experiencing a loss of interests, previous hobbies, or real-life entertainment as a result of excessive internet gaming use
7. Escaping or relieving a dysphoric (low) mood by relying on internet gaming
8. Deceiving family members, therapists, or others regarding the amount of internet gaming one is actually engaged in
9. Jeopardizing or losing a significant relationship, job, or educational or career opportunity because of excessive internet use

Psychological and Psychiatric Cofactors Reportedly Underlying Internet Addiction

In 2005, Wieland reported that gender plays a role in internet addiction, with men going online to feed an appetite for information, to play explicit or aggressive games, and to engage in cybersex – seeking to satiate their needs for control, power, influence, and dominance. Women, on the other hand, turn to the internet as a source of friendship, romance, and support. Thus, the types of experiences men gain from heavy internet usage are more likely to become destructive to themselves and to others. A more recent study by Hetzel-Riggin and Pritchard (2011) found that in a sample of 425 college students, rates of internet addiction were higher in males, likely due to comorbid psychological disorders and negative beliefs about one's body. This study's findings suggest that men and women may have different psychological reasons for excessive internet use, including different types of psychological distress and coping styles; unlike women, men may use the internet heavily because of weight concerns.

While at least 9.8% of German adults report at least one negative consequence of internet use (Beutel et al. 2011), South Korea has identified internet addiction as a serious public health issue (Block 2008). Furthermore, research on patients addicted to the internet, particularly to massively multiplayer online role-playing games (MMORPGs), has demonstrated that these games induced seizures in at least ten patients (Lam et al. 2009).

More recently, researchers have debated the nosology and optimal diagnostic criteria for internet addiction above and beyond gender or cultural differences. Yet whether internet addiction is most accurately described as a behavioral addiction, an impulse control disorder, or a manifestation of other underlying psychiatric disorders continues to be a topic of heated debate (Dong et al. 2012). Regardless of which view is correct, internet addiction has been argued to involve impaired inhibitory control.
While impulsivity has always been recognized as central to impulse control disorders in both the psychology and criminology fields, mental health experts have increasingly acknowledged its role in both the elevated risk for and the maintenance of addictive disorders. Preexisting impulsivity, it is argued, may increase one's vulnerability to developing addictive disorders, and engagement in addictive behaviors may, in turn, exacerbate aspects of impulsivity (Dick et al. 2010). Defined as a personality trait characterized by the urge to act spontaneously without reflecting on an action or its consequences, impulsivity has been implicated in important psychological processes and behaviors, including self-regulation, risk-taking, and poor decision-making. Moreover, experts have suggested that impulsivity is part and parcel of clinical conditions like hyperactivity disorder, borderline personality disorder, the manic phase of bipolar disorder, alcohol and drug abuse, and pathological gambling (Aiken et al. 2016a, b; Aiken 2016). Internet-addicted individuals seem to demonstrate high impulsivity – as measured by both a behavioral task of response inhibition (GoStop) and a self-report questionnaire (the Barratt Impulsiveness Scale-11) – and both behavioral and self-reported impulsivity have been reported to correlate positively with the severity of internet addiction. Furthermore, an event-related potential (ERP) study of response inhibition (a Go/No-Go task) reported both higher amplitude and longer peak latency in internet-addicted individuals as compared to healthy controls, suggesting less efficient inhibitory processing in the internet-addicted group (Dong et al. 2012).


Importantly, a high percentage of psychiatric comorbidity in the substance-abusing or addictive population has been well documented in the mental health literature. By definition, comorbidity is the extent to which two pathological conditions occur together in a given population. The documented percentage of comorbidity with internet addiction, however, while the subject of considerable research, is not yet known (Schell 2016). Over the past several years, relationships have been demonstrated between internet addiction and individual, familial, and environmental factors. Personality characteristics such as sensation seeking have reportedly been associated with this addictive behavior, as have poor mental health and depression. Low family function, low life satisfaction, and poor self-esteem have also frequently been associated with internet addiction (Ko et al. 2007). Furthermore, documented comorbidity has been cited for gaming addiction and four psychiatric pathological conditions, in particular (Volkow 2004):

1. Mood disorders – including depression, bipolar disorders, and substance-induced mood disorders
2. Anxiety disorders – including social phobias and generalized anxiety disorder
3. Attentional disorders – including attention deficit disorder and attention deficit hyperactivity disorder
4. Substance use disorders – including amphetamine abuse or dependence and cocaine abuse or dependence

Exploring the Link Between Internet Addiction and Cybercrime: Is There Such a Thing as "Crime Addiction"?

From about 2002 onward, an increasing number of experts have explored the concept of crime as a possible addiction. For example, in a 2002 scholarly article in Computer Fraud & Security, Arkin argued that tracing and profiling malicious computer attackers was possible because computer crimes (like hacking) are often serial in nature – thus presenting as a form of crime addiction. Along this line of argument, organizations modeled after Alcoholics Anonymous, such as Crime Addiction Anonymous in Vancouver, Canada, have arisen at times in North America. These facilities seem to have been built on the premise that crime can be an illness as tenacious as a dependency on alcohol or drugs. Though mental health experts and criminologists nowadays tend to be reluctant to label crime as an addiction per se, most would concede that crime seems to be driven by a number of psychological factors, including pronounced compulsiveness and poor impulse control (MacQueen 2004).

Commonly cited motivations for committing land-based crimes have included seeking power and reassurance, trying to assert one's power over another, and seeking retaliation for past harms (Turvey 2002). Among the numerous concepts explored by criminologists for explaining the motives of hackers causing harm, one proposed by Kilger et al. (2004), termed MEECES, is an acronym created from the words "money," "ego," "entertainment," "cause," "entrance into social groups," and "status." Besides pronounced compulsiveness and impulsivity, another possible link between crime as an addiction and the profiling of, say, hackers is that a particular individual tends to manifest his or her own motivational needs through serial hacking acts. The establishment of a particular cybercriminal's serial motivation for hacking therefore becomes a key element when creating an offender's psychological profile (Lickiewicz 2011).

Though there is no clinical term for crime addiction in the DSM, clinical psychologists have attempted to classify motives for crime in order to develop effective treatments that help criminals overcome their harmful behaviors and prevent them from engaging in future criminal activity. Furthermore, the frequency of crimes committed in the virtual world has led to the emergence of a field aimed at assisting law enforcement agencies to catch computer criminals. In the literature, this field is referred to as cyber forensics, computer forensics, or digital forensics – utilizing experts who identify, collect, analyze, and examine electronic evidence while preserving the integrity of the data (Kent et al. 2006).

Just as everyday folks enjoy using the internet to chat online with others, buy things without leaving home, and glean scores of information with the click of a send button, all types of criminals – hackers, spies, sexual predators, and murderers – have used the internet to facilitate and ease their criminal activity, causing personal and/or property harm to victims. Present-day online risks to the general population have included cyberbullying, cyber stalking, cyber pornography, and internet fraud (Schell 2016; Yu and Chao 2016) – often resulting in an increased risk of suicide attempts as well as clinical depression and anxiety among victims.
A highly publicized example of a cyberbullied and sextorted teen victim was Canadian Amanda Todd, who committed suicide in 2012. In January 2014, a 35-year-old Dutch man, Aydin Coban, was arrested in connection with the case, though he claimed in 2015 that he was innocent of the charges and was really a victim of bad press and shoddy investigations naming him as the perpetrator (White 2015).

Cybercrime and Its Costs Continue to Spiral

Among the reasons for the internet's appeal to criminals are its extensive global reach, its capability for investigating key traits and behaviors of potential victims (both companies and individuals), and its anonymity. Criminals can alter their personalities online to obtain information directly from the intended targets (known as social engineering), and they can enter unauthorized networks by hacking – defined broadly as human activity interfering with the proper operation of computer systems and networks.

Cybercrime – using a computer, digital devices, or the internet to commit crimes – is big business. It is, in fact, an international problem with considerable cost implications for business enterprises and the economy. In 2016, it was estimated that crimes in cyberspace cost the global economy $445 billion – and this number continues to spiral upward. As Steve Morgan wrote in Forbes magazine that same year (2016), cybercrime costs are projected to reach $2 trillion by 2019. What's more, cybercrime costs more than the market cap of Microsoft ($411 billion), Facebook ($314 billion), or ExxonMobil ($332 billion), according to an estimate from the World Economic Forum's 2016 Global Risks Report. While the threat of state-sponsored attacks aimed at taking down critical infrastructure continues to haunt cybersecurity experts, many believe that the greater threat is state-sponsored attacks targeting business interests. The 2014 Sony hack has become the poster child in recent years of an isolated nation-state [in this case, North Korea] attacking the network of a business enterprise (Taylor 2016).

There is little doubt that hackers and other varieties of cybercriminals today are considerably more skilled, better organized, and better funded than in previous times – and this reality keeps criminologists and IT security experts awake at night, despite laws in jurisdictions worldwide meant to criminalize such "computer trespassing" offenses (see, e.g., the US Computer Fraud and Abuse Act). Because of these many factors, today's hackers are better at finding network weaknesses, penetrating previously effective security barriers, and creating more havoc once inside a company's network. Though "old-school" crimes like stealing and reselling credit card numbers on the Dark Web continue to provide a number of cybercriminals with their "bread and butter," cyber thieves are growing their operations by using encryption and ransomware to extort money from targets while still avoiding detection.
In fact, security experts know that stealth hackers will often spend months inside a company's network, siphoning off key information and creating "back doors" so they can reenter the network at will or attack the company's customers or supply chain (Taylor 2016).

Cybercrime Is Increasingly Appealing to Youth and Young Adults

Importantly, cybercrime experts are becoming increasingly alarmed not only by the victimization of youth in the virtual world but also by the global surge in young people being arrested for cybercrimes, particularly hacking, cyber harassment, and cyber stalking. In the behavioral sciences, it is well established that impulsivity and risk-taking behavior increase throughout the formative years, and there have been reports of the increasing involvement of youth in criminal activity in the virtual world for some time now. In 2015, the Australian Bureau of Crime Statistics and Research reported that cyber fraud offenses committed by people under 18 years of age increased by 26% in the previous 2 years and by 84% in the previous 3 years (Harris 2015). Moreover, in a recent survey conducted by an IT security company, about one in six teenagers in the United States and about one in four teenagers in the United Kingdom reported that they had tried some form of internet hacking.

In general, law enforcement officials have noted that young people, particularly IT-savvy males, are increasingly committing cybercrime offenses that range from money laundering for criminal gangs to hacking and to using remote access Trojans (RATs) to take over control of machines. For example, the Blackshades RAT malware, costing just $40–50, is able to log keystrokes, capture passwords, and encrypt files until a ransom is paid. In 2014, according to court documents, the Blackshades RAT was bought by several thousand users – 6,000 customer profiles, many of them teens – in more than 100 countries and used to infect more than half a million computers globally (Peters 2014).

Below is a case study of a male teen using a RAT to extort sexual favors from a female teen, extracted from a white paper of real-world cases written by a research team of experts in the fields of criminology, developmental psychology, neurobiology, and the emerging realm of cyber psychology (Aiken et al. 2016a, p. 15):

Jack [name changed] is a well-mannered boy from a stable family. He attends a prestigious US high school where he attains consistently high grades in math and science. He has an inquisitive mind and quickly develops a deep interest in computing that leads him to some basic understanding of hacking techniques. At the same time, being socially inept, he develops a fantasy of spying on girls via their computers' webcams. By viewing Youtube tutorials Jack learns that this can be achieved easily using the Blackshades remote access trojan. The Blackshades application has a user-friendly graphical interface designed for hackers with minimal technical knowledge. Once the desired functionality is chosen, Blackshades generates an executable program – the remote access Trojan – that needs to be run on the victim's computer to seize control of it. Once the victim's computer is infected, the hacker can access it remotely through Blackshades to run programs, observe the victim through the webcam, and eavesdrop through its microphone.
The Trojan can be delivered to the victim in many ways, including, for example, an email attachment pretending to come from a trusted friend. Jack manages to install the Trojan into the laptop of a female student attending the same school. Through lucky circumstances he is not discovered. Jack enjoys the adrenaline rush of it combined with the ability to fulfill his erotic fantasies. He continues to infect computers of other female students and gradually becomes more confident in his hacking. At some point he contacts one of his victims via email and uses her private pictures to extort sexual favours. The victim contacts the police, who are able to track and arrest Jack based on his electronic communications.

Profiling Cyber Criminals and the Possible Role of Social and Psychological Aspects (Like Internet Addiction) in Perpetrators

While many IT security and law enforcement agents spend considerable time analyzing the technical and mechanical aspects of cybercrime, dissecting malware and exploit tools, and forensically analyzing code and hacker techniques, few of these experts have actively focused on the social and psychological aspects of cybercrime: who attacks, what motivates them, how and when the so-called deviant behaviors began, and whether there was an underlying problem with internet addiction or some other mental health issue (Aiken et al. 2016b).


For example, in her 2016 book entitled The Cyber Effect: A Pioneering Cyberpsychologist Explains How Human Behavior Changes Online, psychologist Mary Aiken, who has acted as an advisor on cybercrime to Interpol, the FBI, and the White House, argues that cybercriminals may have personality disorders, such as internet addiction, that contribute to their propensity to commit crime – on land and online. In her book (Aiken 2016, pp. 45–46), this psychologist presents the 2010 real-world case of Alexandra Tobias, a 22-year-old mother in Florida who called 911 to report that her 3-month-old son Dylan had stopped breathing. He allegedly had been pushed off the sofa by the family dog and hit his head on the floor. Later, Tobias told the police that she had been playing "FarmVille" on her computer and lost her temper when the baby's crying distracted her from the Facebook game. She said that she picked up the baby and began shaking him violently. His head hit her computer, and the baby was pronounced dead at the hospital from head injuries and broken legs. Tobias was sentenced to 50 years in prison for second-degree murder. Facebook later warned online players of the highly addictive nature of the game [it was actually the top game by daily active users on Facebook between August 2009 and December 2010], and the social media platform created a page dedicated to Farmville Addicts Anonymous (FAA).

Let us now turn to cyber profiling in the cyber forensics domain – a field encompassing numerous disciplines, from the technical sciences (dealing with hardware and software) to the legal, military, and academic fields like criminology and psychology. Since about 2000, experts in these fields have worked to advance interdisciplinary research to increase online users' knowledge about the perils lurking in the virtual world (Kshetri 2009; Lickiewicz 2011).
In 2005, for example, the US research team of Nykodym, Taylor, and Vilela categorized cybercrime into four main types, explaining how the differences in motives influence the kind of perpetrator willing and able to follow through on these cybercrimes targeting business networks. These four cyber profiling categories are described below (Nykodym et al. 2008):

1. Espionage: Generally committed by a spy planted in a competitor's company, with the primary purpose of obtaining highly important and confidential information. Because these cyber spies need to be placed high in the hierarchy to be authorized to have access to this data, often these perpetrators will be 30 years of age or older, will reflect the race and culture of the particular company in question, will appear outwardly to be calm and reserved, and will have a thorough understanding of computers and IT in order to execute the cybercrime and then effectively hide the evidence.

2. Sabotage: Generally committed by an individual for personal reasons – typically to get revenge. Though these cyber saboteurs are generally IT-savvy and well placed in the hierarchy to be authorized to have access to confidential information, they are often upset at management, say, because of a missed promotion. The act of hacking the network can be done while employed (known as an "insider") or after leaving the company. These cybercriminals tend to be in the 25–40 age bracket and have been employed with the company long enough to be able to identify weaknesses in the information system that can be capitalized upon.


3. Theft: Generally committed by individuals for personal financial gain, these cyber thieves often access confidential information and use it later to reap the financial harvest. Trend data seem to indicate that when the cybercrime costs a company less than $100,000, the perpetrator is generally a male or female who is lower in the hierarchy and is generally between the ages of 25 and 35 years. The trend data also seem to indicate that when the cybercrime costs the company $100,000 to $1,000,000, the perpetrator is generally a male in this 25–35 age bracket, and when the damages exceed $1,000,000, the perpetrator is generally a male over the age of 35 and in top management.

4. Personal Abuse: Generally committed by individuals within the company for personal use, such as reading the news online for prolonged periods, checking personal email for prolonged periods, doing online gambling or gaming, or watching pornography while on company time. Cyber abusers of company time and property are the most common of the four categories. Though individual cases do not account for much financial loss, what is concerning is the loss of productivity within the company as a whole after one factors in the sum of abusers within the company. Moreover, research has shown that those who continually abuse company time and property often manifest opposition toward supervisors and are non-conformant to a variety of company rules/regulations.

Nykodym and Marsillac (2007) have suggested that managers concerned about insider and outsider cybercrime containment can set up multiple authentication log-on accounts for employees, install firewalls, and limit access to a need-to-know basis and according to job titles.
Furthermore, managers and IT security officers need to be observant of dysfunctional online behaviors and be forthright about educating the workforce on the company’s computer usage policies as well as penalties that will be levied if the policies are breached – up to and including firing. The research team also pointed out that with improved observation and measurement of internet addiction, managers may be better able to “red flag” possible insider cybercriminals. Nykodym et al. (2008) concluded that while no research has been reported that shows how addictive personalities could be a determinate of insider cybercrime (p. 59), “it is easy to consider how addiction could cause cybercrime in an organization. It has already been shown how many people with computer addiction also have other addictions (e.g. gambling addiction), and how these addictions could easily lead to abuse of the company network.” Addictive personalities could also lead to an increased desire to sabotage or steal for financial gain from a company. Further research needs to be done, these researchers argued, to show the extent of addiction as a reason for crime and to see if programs set up by companies to help employees with addictive behaviors, especially addiction to computers, could minimize instances of insider cybercrime. In short, addictive personalities, aided by various opportunities – including access to and availability of the internet and computers and fueled by criminal motives – could facilitate the making of a cybercriminal. It is this understanding that may be used in analyzing the modus operandi of cybercriminals moving forward. In another Canadian study on both insider and outsider hackers, Schell et al. (2002) completed a multidimensional legal, psychological, and business-related

696

B. Schell

survey study on over 200 male and female hackers attending the 2000 Hackers on Planet Earth (HOPE) conference in New York City and the DefCon 8 hacker conference in Las Vegas. Their findings suggested that some previously reported hacker profiling findings and perceptions held about those in the Computer Underground (labeled “myths”) were founded, while others were not. Key findings included the following: • Though a definite trend existed along the troubled childhood hacker composite – with almost a third of the hacker respondents saying that they had experienced childhood trauma or significant personal losses – the majority of hacker respondents did not make such claims. • The stress symptom checklist developed by Derogatis and colleagues in 1974 was embedded in the study survey to assess short-term stress symptoms of the conference attendees. The findings revealed that the respondents reported mild but not pronounced stress presentations – a finding running counter to common beliefs at that point in time. • Accepting psychologist Dr. Kimberly Young’s 1996 measure for computeraddicted individuals as spending, on average, 38 h a week online (compared to nonaddicted online users who spend about 5 h a week online), the study findings revealed that contrary to popular myths, the hacker conference participants were rated as heavy online users rather than as online addicts. • Because of well-developed cognitive capacities among those in the hacker world, as Meyer’s earlier 1998 work suggested, the hacker conference study findings indicated a fair degree of multitasking capability among hackers – where they were reportedly engaged in 3–4 hacking projects at any one time. • The 70-item Grossarth-Maticek and Eysenck 1990 Type inventory assessing Type A, Type B, Type C, and Psychopathic personality and behavioral tendencies in adults was also embedded in the survey. 
The study findings revealed that both male and female hacker conference attendees tended to self-report "self-healing" Type B tendencies, followed by Type C "cancer-prone" tendencies, rather than the Type A "cardiovascular-prone" tendencies widely perceived as existing in the hacker population. The study findings also revealed that, on the whole, conference attendees' psychopathic tendencies placed at the low rather than the pronounced end of the continuum.
• Given that the 20-item 1995 Creative Personality Test of Dubrin was also embedded in the survey, the findings revealed that those attending hacker conferences, both male and female, placed in the "highly creative" domain.
• In terms of self- and other-destructive traits uncovered in the hacker conference attendees, the study found that compared to their over-age-30 counterparts, some hackers (about 5%) in the under-age-30 segment had a combination of higher-risk propensity traits for committing cybercrime. These traits included elevated narcissism, frequent bouts of depression and anxiety (likely related to anger regarding attachment loss and abandonment by significant others in childhood), and clearly computer-addicted behavior patterns.

33

Internet Addiction and Cybercrime

697

The research team of Schell et al. (2002) suggested that this “at-risk” segment of hackers should seek psychological intervention therapy to reduce their risk propensity in terms of self-induced and other-induced harms.
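Young's hours-per-week criterion used in the study above is, at bottom, a simple threshold rule. The sketch below illustrates that rule only; the function name, the category labels, and the handling of intermediate values are invented for illustration, with only the 38 h and 5 h figures taken from the text.

```python
def classify_online_use(hours_per_week):
    """Illustrative classifier based on Young's (1996) weekly-hours criterion
    as summarized in the text: computer-addicted individuals average ~38 h/week
    online, while nonaddicted users average ~5 h/week. The labels and the
    treatment of the middle band are assumptions, not part of the measure."""
    if hours_per_week >= 38:
        return "online addict"
    if hours_per_week > 5:
        # the band where Schell et al.'s (2002) conference respondents fell
        return "heavy online user"
    return "typical online user"

print(classify_online_use(20))  # heavy online user
```

Under this reading, a respondent averaging 20 h a week would be flagged as a heavy user rather than an addict, matching how the conference participants were rated.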

Understanding Youth's Pathways into Cybercrime

Cyberpsychology, a relatively new area in the mental health domain over the past decade, has much to offer in terms of better understanding youths' hacking and the broader field of antisocial/criminal behavior development in an online environment. Experts in this emerging field have generally hypothesized that earlier antisocial and criminal activity was generally bound by the laws of proximity and domain. In the virtual world, however, tech-savvy youth have become psychologically immersed in the protection of anonymity. Moreover, young people – fueled by their life-stage propensity for disinhibition and impulsivity – may tend to actively seek out like-minded youth who syndicate online to rationalize, normalize, socialize, and facilitate their cybercriminal behavior. In short, cyberspace has become a medium where youthful agents can test the boundaries and unregulated influence of potentially damaging online acts – acts which, at the time of the exploit, seem worthy of execution because of the perceived potential high gains but perceived lower personal risk of being caught (Aiken et al. 2016a, b; Aiken 2016). Criminologists' contributions to better understanding the pathways to criminal behavior in youth have made similar strides over the past decade. From this vantage point, according to a general rule of profiling suggested by Ressler (1997), "what" plus "why" equals "who." Thus, after collecting sufficient information at a crime scene – whether it be on land or online – forensic criminologists can define the key characteristics comprising the central elements of youthful offenders. From a psychological vantage point, as earlier noted, computer crimes are often of a serial nature, allowing for an adequate collection of the required amount of data for criminal profiling.
One of the basic assumptions of psychological profiling is that the offender's method of operating (or modus operandi) is linked, among other factors, to his or her personality traits. Furthermore, in the criminal profiling model employed by the FBI, it is assumed that there is an adequate collection of data so that the classification of the crime, the reconstruction of the crime, and the preparation of a credible criminal profile can be obtained – whether the crime was committed offline or in cyberspace (Lickiewicz 2011). By adopting this kind of profiling model, and after conducting stakeholder interviews with experts having behavioral science knowledge of adolescent hackers, the research team of Aiken et al. (2016a) noted that common pathway factors for teens entering the cybercrime world likely include the following:

• Having an interest in and an aptitude for technology
• Having a will to engage in low-level illegal activity at first – which often escalates through positive reinforcement by the online peer network


• Having an increasing level of criminal activity occurring online that is markedly different from the earlier minor acts of online deviance
• Deriving intrinsic pleasure from the escalating challenge associated with higher-level cybercrimes
• Placing much importance on one's online reputation with peers to compensate for one's lack of self-esteem in the real world
• Minimizing cyberdeviance like digital piracy or copyright violation and accepting that the internet is a place with few or no guardians – thus, the law can easily be bypassed if one hones the right kind of technical skills
• Becoming part of a peer network that not only normalizes but also encourages illegal online behavior
• Committing online crimes and then finding that the euphoria that follows becomes addictive
• Believing that moving up the peer network hierarchy is a kind of game-playing and skill-testing contest that needs to be won
• Placing importance on fulfilling one's goals through online cybercrime activities – whether these goals are financial gain, social affiliation, or increased online reputation
• Being willing and able to invest copious amounts of cognitive and emotive resources in order to build one's online reputation score

Psychological Interventions for Reducing Internet Addiction and Other Psychological Precursors Possibly Related to Cybercrime

Though internet addiction, unlike gambling disorder, has not yet been cited as a disorder in the DSM-5, mental health experts have begun developing treatment protocols for both internet and gaming addiction, based on the belief that these psychological precursors may be related to the self- and other-induced harms that also present in cybercrime. Cognitive behavioral therapy (CBT), in particular, has been shown to be an effective treatment for compulsive disorders like pathological gambling, as well as for substance abuse, emotional disorders like recurrent panic attacks, and eating disorders – and for this reason, it has been used by mental health experts to reduce the adverse symptoms of internet addiction (Schell 2016). In CBT, clients are taught not only to monitor their thoughts in order to identify those provoking addictive feelings and actions but also to hone new stress-coping skills to prevent relapses from occurring. The protocol typically requires CBT sessions lasting from 5 to 20 weeks for each of these compulsive disorders (Hucker 2004). At the early stages of CBT, the therapy is behavioral in nature, with a focus on specific behaviors or life situations where a client's impulse control disorder causes the greatest difficulty. As the sessions evolve, the focus tends to move to the cognitive assumptions and distortions that have become ingrained in the user, triggering the compulsive behavior and various adverse effects. For internet addiction, the therapist will advocate moderated and controlled use of the internet by the


user, with the primary goal of abstaining from problematic applications of internet behavior (such as watching online pornography at work) while retaining controlled use of the computer for employment and social interaction practices (Schell 2016; Young 2009). Near the beginning of the CBT sessions, suggestions for a routine change may be in order. If, say, a client's internet habit involves checking email first thing in the morning, the therapist may suggest that the client take a shower or eat breakfast first instead of going directly online. Moreover, the goal-setting plan must be specific, structured, and reasonable. To help the client avoid relapse, a therapist may encourage the client to spend 20 h online by scheduling those 20 h in specific time slots. The client will also be advised to keep the internet sessions brief but frequent in order to avoid cravings and withdrawal (Schell 2016; Young 2009). If this goal-setting moderation plan is ineffective, then abstinence from that particular online application is the next appropriate intervention. While the client must stop all online activity involving that application, less appealing online applications can still be allowed (Schell 2016; Young 2009). Generally, abstinence is most applicable for clients who also have a history of prior addiction, such as alcoholism or drug use, and who often find the internet to be a physically safe substitute addiction. In these cases, clients may feel more comfortable working toward an abstinence goal, as their prior recovery from substance abuse likely involved this model (Schell 2016; Young 2009). Besides CBT – and so that the client feels less anxious in the real world and less disposed to escape into the virtual one – Young (2009) notes that other remedies such as assertion training, behavioral rehearsal, coaching, support groups, and relaxation training have also been shown to be effective.
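The moderation plan just described – a fixed online allowance broken into brief, frequent slots – can be sketched as a small schedule builder. This is only an illustrative sketch: the function name, the one-hour slot length, and the weekly interpretation of the 20 h figure are assumptions, not part of Young's protocol.

```python
from itertools import cycle

def build_moderation_schedule(total_hours=20, slot_hours=1.0):
    """Spread a fixed online allowance across brief, frequent daily slots,
    in the spirit of the goal-setting plan described in the text.
    Slot length and day-by-day distribution are illustrative assumptions."""
    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    schedule = {day: 0.0 for day in days}
    n_slots = int(total_hours / slot_hours)
    # deal slots out round-robin so sessions stay short but frequent
    for day, _ in zip(cycle(days), range(n_slots)):
        schedule[day] += slot_hours
    return schedule

week = build_moderation_schedule()
print(sum(week.values()))  # 20.0
```

A client and therapist could step `total_hours` downward over successive weeks while keeping the brief-but-frequent structure intact.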
Finally, family therapy is encouraged for marriages and family relationships that have been disrupted and adversely impacted by internet addiction. Besides these behavioral interventions, Freeman (2008) adds that when disorders comorbid with internet or gaming addiction are present, client outcomes are greatly improved if the disorders are addressed concurrently. As with other addictions or dependencies, the most effective treatments combine psychopharmacology and psychotherapy, such as CBT. The role of the neurotransmitters norepinephrine and dopamine when there is a concurrent addiction, say to gaming and to substances, is also widely accepted. However, when the client's addiction is primarily behavioral – like excessive online gaming – and does not involve substance abuse, research has similarly shown dopamine and serotonin to be involved. Besides these interventions, in 2011 the research team of Su et al. conducted a pilot study involving the development of an online expert system named the Healthy Online Self-Helping Center (HOSC) as a less invasive and less time-consuming intervention tool to help those who wished to reduce their online usage. The study also explored the effectiveness of HOSC for college students' internet addiction behaviors. After a 1-month follow-up period, the results indicated that HOSC proved effective in both natural and laboratory environments, but the research team concluded that further study was needed.


Conclusion

While this chapter explored the possible links between internet addiction and cybercrime, it became apparent by the chapter's end that many "boxes" remain unchecked before an empirically reported connection can be corroborated. First, there is no consistent definition among mental health experts and criminologists of what constitutes internet addiction. Second, there is not yet a universally reliable and valid measurement for this concept. Third, prevalence rates for this phenomenon in adults, teens, and criminals remain questionable. Given these unchecked boxes, the link between internet addiction and cybercrime remains a tentative concept demanding further consideration and research exploration by teams of experts comprised of criminologists, forensic psychologists, and cybersecurity scientists. This chapter also explored some other possible psychological links worthy of further investigation as they relate to cybercrime propensity, including impulse control disorder, substance addictions, and gaming or gambling addictions. Furthermore, the importance of a compulsive-impulsive spectrum disorder in cybercriminals, already related to self-harm, needs further exploration in terms of potential harms caused to others. There is little question that credible research investigating many of these aspects will lead to more effective treatments for cybercriminals with underlying or comorbid psychological/psychiatric conditions.
Finally, while profiling experts in psychology and criminology have just begun to scratch the surface regarding the salient personality, motivational, and cognitive/behavioral predispositions empirically linked to cybercrime – such as childhood abandonment, narcissism, poor anger management, and inadequate stress-coping mechanisms – a broader range of such variables needs further attention and delineation by cyber experts in order to bring clarity to this critical and very costly societal issue.

Cross-References

▶ Dating and Sexual Relationships in the Age of the Internet
▶ Negative Emotions Set in Motion: The Continued Relevance of #GamerGate
▶ Sexting and Social Concerns
▶ Technology Use, Abuse, and Public Perceptions of Cybercrime
▶ The Psychology of Cybercrime

References

Aiken, M. (2016). The cyber effect: A pioneering cyberpsychologist explains how human behavior changes online. New York: Spiegel & Grau.
Aiken, M., Davidson, J., & Amann, P. (2016a). Youth pathways into cybercrime. Retrieved 21 Nov 2018, from http://www.maryaiken.com/new-blog/2016/11/21/youth-pathways-into-cybercrime-aiken-davidson-amann-2016


Aiken, M., McMahon, C., Haughton, C., O'Neill, L., & O'Carroll, E. (2016b). A consideration of the social impact of cybercrime: Examples from hacking, piracy, and child abuse material online. Contemporary Social Science, 11(4), 373–391.
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
Arkin, O. (2002). Tracing hackers: A concept for tracing and profiling malicious computer attackers. Computer Fraud & Security, 5, 8–11.
Beutel, M. E., Brähler, E., Glaesmer, H., Kuss, D. J., Wölfling, K., & Müller, K. W. (2011). Regular and problematic leisure-time Internet use in the community: Results from a German population-based survey. Cyberpsychology, Behavior and Social Networking, 14, 291–296.
Block, J. (2008). Issues for DSM-V: Internet addiction. American Journal of Psychiatry, 165, 306–307.
Campanella, M., Mucci, F., Baroni, S., Nardi, L., & Marazziti, D. (2015). Prevalence of Internet addiction: A pilot study in a group of Italian high-school students. Clinical Neuropsychiatry, 12(4), 90–93.
Cho, H., Kwon, M., Choi, J.-H., Lee, S.-K., Choi, J. S., Choi, S.-W., & Kim, D.-J. (2014). Development of the Internet Addiction Scale based on the Internet gaming disorder criteria suggested in DSM-5. Addictive Behaviors, 39, 1361–1366.
De Cock, R., Vangeel, J., Klein, A., Minotte, P., Rosas, O., & Meerkerk, G.-J. (2014). Compulsive use of social networking sites in Belgium: Prevalence, profile, and the role of attitude toward work and school. Cyberpsychology, Behavior, and Social Networking, 17(3), 166–171.
Dick, D. M., Smith, G., Olausson, P., Mitchell, S. H., Leeman, R. F., O'Malley, S. S., & Sher, K. (2010). Understanding the construct of impulsivity and its relationship to alcohol use disorders. Addiction Biology, 15, 217–226.
Dong, G., DeVito, E. E., Du, X., & Cui, Z. (2012). Impaired inhibitory control in 'Internet addiction disorder': A functional magnetic resonance imaging study. Psychiatry Research: Neuroimaging, 203, 153–158.
Elmer-DeWitt, P. (1993). Cyberpunk. Time Magazine, 141(6), 58–65.
Freeman, C. B. (2008). Internet gaming addiction. Journal for Nurse Practitioners, 4, 42–47.
Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Stanford: Stanford University Press.
Harris, L. (2015). Rise in child and teen fraud arrests mainly due to increase of Internet-based crimes. Retrieved 28 Nov 2018, from https://www.dailytelegraph.com.au/news/nsw/rise-in-child-and-teen-fraud-arrests-mainly-due-to-increase-of-internetbased-crimes/news-story/fc620acdb8379e30ab46f17493e40475
Hetzel-Riggin, M. D., & Pritchard, J. R. (2011). Predicting problematic Internet use in men and women: The contributions of psychological distress, coping style, and body esteem. Cyberpsychology, Behavior, and Social Networking, 14(9), 519.
Holt, T., & Kilger, M. (2008). Techcrafters and makecrafters: A comparison of two populations of hackers. In WOMBAT workshop on information security threats data collection and sharing (pp. 67–78).
Howell, J. (2018, November 10). Dumb, but happy. The Globe and Mail, p. P5.
Hucker, S. J. (2004). Disorders of impulse control. In W. O'Donohue & E. Levensky (Eds.), Forensic psychology. New York: Academic.
Hwang, Y., & Jeong, S.-H. (2015). Predictors of parental mediation regarding children's smartphone use. Cyberpsychology, Behavior, and Social Networking, 18(12), 737.
Kent, K., Chevalier, S., Grance, T., & Dang, H. (2006). Guide to integrating forensic techniques into incident response. Gaithersburg: Computer Security Resource Center.
Kilger, M., Arkin, O., & Stutzman, J. (2004). Profiling. In Know your enemy: Learning about security threats. Boston: Addison Wesley.
Ko, C.-H., Yen, J.-Y., Yen, C.-F., Lin, H.-C., & Yang, M.-J. (2007). Factors predictive for incidence and remission of Internet addiction in young adolescents: A prospective study. Cyberpsychology & Behavior, 10(4), 545–551.


Koronczai, B., Urban, R., Kokonyei, G., Paksi, B., Papp, K., Kun, B., Arnold, P., Kallai, J., & Demetrovics, Z. (2011). Confirmation of the three-factor model of problematic Internet use on off-line adolescent and adult samples. Cyberpsychology, Behavior, and Social Networking, 14(11), 657.
Kshetri, N. (2009). Positive externality, increasing returns, and the rise of cybercrimes. Communications of the ACM, 52, 141–144.
Laier, C., Pekal, J., & Brand, M. (2014). Cybersex addiction in heterosexual female users of Internet pornography can be explained by gratification hypothesis. Cyberpsychology, Behavior, and Social Networking, 17(8), 505.
Lam, L. T., Peng, Z.-W., Mai, J.-C., & Jing, J. (2009). Factors associated with Internet addiction among adolescents. Cyberpsychology & Behavior, 12(5), 551–555.
Leshner, A. I. (1997). Addiction is a brain disease, and it matters. Science, 278, 807–808.
Lickiewicz, J. (2011). Cyber crime psychology – Proposal of an offender psychological profile. Problems of Forensic Science, 87, 239–252.
MacQueen, K. (2004). Hooked on crime. Maclean's, 117(52), 82–83.
Morgan, S. (2016, January 17). Cyber crime costs projected to reach $2 trillion by 2019. Retrieved 9 Apr 2019, from https://www.forbes.com/sites/stevemorgan/2016/01/17/cyber-crime-costs-projected-to-reach-2-trillion-by-2019/#2f4eb8153a91
Niemz, K., Griffiths, M., & Banyard, P. (2005). Prevalence of pathological Internet use among university students and correlations with self-esteem, the General Health Questionnaire (GHQ), and disinhibition. Cyberpsychology & Behavior, 8(6), 562–570.
Nykodym, N., & Marsillac, E. L. (2007, March 4). The manager's guide to understanding, detecting, and thwarting computer crime: An international issue. In Proceedings of the National Business and Economic Society international meeting 2007.
Nykodym, N., Ariss, S., & Kurtz, K. (2008). Computer addiction and cyber crime. Journal of Leadership, Accountability and Ethics, 35, 55–59.
Peters, S. (2014). Over 90 arrested in global FBI crackdown on Blackshades RAT. Retrieved 28 Nov 2018, from https://www.darkreading.com/over-90-arrested-in-global-fbi-crackdown-on-blackshades-rat/d/d-id/1252912
Ploskin, D. (2018). What are poor impulse controls? Retrieved 24 Nov 2018, from https://psychcentral.com/lib/what-are-impulse-control-disorders/
Rachlin, H. (1990). Why do people gamble despite heavy losses? Psychological Science, 1, 294–297.
Reed, L. (2002). Governing (through) the Internet. European Journal of Cultural Studies, 5(2), 131–153.
Ressler, R. (1997). Criminal personality profiling. Problems of Forensic Sciences, 35, 32–41.
Sarda, E., Begue, L., Bry, C., & Gentile, D. (2016). Internet gaming disorder and well-being: A scale validation. Cyberpsychology, Behavior, and Social Networking, 19(11), 674.
Sarkis, S. (2014). Internet gaming disorder in DSM-5. Retrieved 28 Nov 2018, from https://www.psychologytoday.com/us/blog/here-there-and-everywhere/201407/internet-gaming-disorder-in-dsm-5
Schell, B. (2016). Online health and safety: From cyberbullying to Internet addiction. Santa Barbara: Greenwood.
Schell, B., Dodge, J., & Moutsasos, S. (2002). The hacking of America: Who's doing it, why, and how. Westport: Quorum Books.
Shotton, M. (1989). Computer addiction? A study of computer dependency. London: Taylor & Francis.
Su, W., Fang, X., Miller, J. K., & Wang, Y. (2011). Internet-based intervention for the treatment of online addiction for college students in China: A pilot study of the Healthy Online Self-Helping Center. Cyberpsychology, Behavior, and Social Networking, 14(9), 497.
Taylor, H. (2016). An inside look at what's driving the hacking economy. CNBC. Retrieved 28 Nov 2018, from https://www.cnbc.com/2016/02/05/an-inside-look-at-whats-driving-the-hacking-economy.html


Turvey, B. (2002). Criminal profiling. London: Academic.
Volkow, N. D. (2004). The reality of comorbidity: Depression and drug abuse. Biological Psychiatry, 56, 714–717.
Walker, M. B. (1989). Some problems with the concept of "gambling addiction": Should theories of addiction be generalized to include excessive gambling? Journal of Gambling Behavior, 5, 179–200.
White, P. (2015, January 29). Suspect in Todd case proclaims innocence. The Globe and Mail, p. A3.
Wieland, D. M. (2005). Computer addiction: Implications for nursing psychotherapy practice. Perspectives in Psychiatric Care, 41, 153–161.
Wu, C.-S., & Cheng, F.-F. (2007). Internet café addiction of Taiwanese adolescents. Cyberpsychology & Behavior, 10(2), 220–225.
Yar, M. (2005). Computer hacking: Just another case of juvenile delinquency? The Howard Journal, 44, 387–399.
Young, K. S. (1996). Psychology of computer use: XL. Addictive use of the Internet: A case that breaks the stereotype. Psychological Reports, 79, 899–902.
Young, K. S. (1998). Internet addiction: The emergence of a new clinical disorder. Cyberpsychology & Behavior, 1(3), 237–244.
Young, K. S. (2009). Internet addiction: Diagnosis and treatment considerations. Journal of Contemporary Psychotherapy, 39, 241–246.
Yu, T.-K., & Chao, C. (2016). Internet misconduct impact adolescent mental health in Taiwan: The moderating roles of internet addiction. International Journal of Mental Health and Addiction, 14, 921–936.

Biosocial Theories

34

Chad Posick

Contents

Introduction . . . . . . . . . . 706
What Is Biosocial Criminology? . . . . . . . . . . 706
Adolescence in the Digital Age . . . . . . . . . . 707
The Biosocial Causes of Cybercrime . . . . . . . . . . 709
The Biological and Behavioral Outcomes of Cybervictimization . . . . . . . . . . 711
Biosocial Research Methods in the Study of Cybercrime . . . . . . . . . . 713
Preventing Cybercrime Using a Biosocial Approach . . . . . . . . . . 715
Conclusion . . . . . . . . . . 717
Cross-References . . . . . . . . . . 717
References . . . . . . . . . . 717

Abstract

Cybercrime is a far-reaching social phenomenon that impacts millions of people around the globe each year. Therefore, investigating the causes and consequences of cybercrime is paramount for developing prevention strategies and addressing the needs of cybercrime survivors. To date, most of the work in this area has been social or social-psychological in nature. A biosocial model has the potential to fill in knowledge gaps that remain in the study of cybercrime and provide a fruitful framework for approaches that reduce the occurrence of cybercrime and improve outcomes for victims. In this chapter I discuss the role of biosocial criminology in explaining cybercrime and the outcomes that victims of cybercrime face. I will also discuss what a biosocial research program would look like for the study of cybercrime and what it would entail. I finish with recommendations for cybercrime prevention policy and practice.

C. Posick (*)
Criminal Justice and Criminology, Georgia Southern University, Statesboro, GA, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_29

705

706

C. Posick

Keywords

Cybercrime · Hormones · Neuroscience · Physiology · Prevention · Trauma

Introduction

As other chapters in this volume discuss at length, cybercrime offending and victimization have become widespread – impacting individuals across the globe – and the prevalence of cybercrime has caused overwhelming financial, personal, and communal damage (Anderson et al. 2013). This stresses the importance of finding the causes of cybercrime in order to develop prevention and intervention strategies that reduce its prevalence in society and assist in victim recovery. To date, the overwhelming majority of research on the causes and consequences of crime has been sociological in nature (Rocque and Posick 2017). In other words, most studies have considered the role of peer, family, and school factors in either increasing or decreasing the prevalence of cybercrime. While this work is influential and very important, there are opportunities to integrate a broader view of why cybercrime occurs and what can be done to intercede when it does. One such approach is biosocial criminology. In this chapter, I open by briefly discussing what biosocial criminology is for readers who might not be familiar with this subfield within criminology. I then move into how the perspective can make sense of the age distribution of cyber offenders and victims, which mirrors – for the most part – the well-known age-crime curve that exists for traditional crimes (Rocque et al. 2015). Next, I discuss how the biosocial approach can inform inquiry into the causes of cybercrime as well as victim outcomes and recovery. Since the biosocial model has only recently been applied to cybercrime, I discuss in the following section what biosocial research might "look like" in the study of cybercrime, including neuroscientific and physiological experiments. Finally, I review some promising strategies for the prevention of, and intervention in, cybercrime which can be culled from a biosocial perspective.
To repeat, biosocial criminology as a foundation for the study of cybercrime is very new, but it surely has a lot to offer, as should become clear by the end of this chapter.

What Is Biosocial Criminology?

Biosocial criminology is not a uniform concept or field of study. Instead, it is a comprehensive and multidisciplinary perspective that seeks to explain antisocial behavior and exposure to victimization (Barnes and Boutwell 2015). While biosocial criminologists vary in their primary theoretical focus and methodological approaches, most believe that in order to provide a powerful explanatory foundation for behavior, one needs to combine insights from biology (including chemistry, ecology, neuroscience, etc.) and sociology (social psychology, cultural


anthropology, etc.). The basis for this view is that the forces behind behavior are myriad and intertwined. Using only the "bio" side or the "social" side will leave the approach woefully incomplete (Beaver et al. 2015; Wright and Boisvert 2009). Biosocial criminology also integrates ultimate-level explanations (i.e., evolution) with proximal explanations (e.g., neighborhood and family factors). Inherently, this makes biosocial criminology an integrative and multidisciplinary enterprise that can be applied to all types of human behaviors. Most readers will be familiar with many of the social factors that have been linked to crime, including associating with delinquent peers, inadequate family supervision and support, and living in an area characterized by low collectivity, poverty, and inequality. These social factors are highly influential in everything from physical health to mental health and behavior (Duncan et al. 2010; Yoshikawa et al. 2012). Theories that incorporate these sociological factors include social learning theories, social control theories, and strain theories – each a major approach to the explanation of criminal behavior. These factors and theories have been related to traditional crimes as well as cybercrimes, which will be discussed at length in the following sections. Biological factors that are associated with crime are less well-known. Sometimes, academics and the general public alike misunderstand biological causes as purely genetic. Surely, genes influence almost every single facet of our lives, including our traits, personalities, and behaviors (see Polderman et al. 2015). In complex traits – which behavior is – there are hundreds if not thousands of individual genes that interact with each other and the environment to produce an outcome of interest.
There is no "crime gene" and certainly no "cybercrime gene." However, genes control vital physiological functions including neurotransmission, hormone production, and the structure and function of the brain – all of which influence thoughts and behaviors (Rafter et al. 2016; Raine 2013). Biosocial criminology attempts to integrate these biological factors with sociological factors to get a better picture of criminal and delinquent behavior. Biosocial criminology has recently provided a strong theoretical foundation for the study of various antisocial traits and behaviors. Recent research has concluded that antisocial behavior related to domestic violence (Soler et al. 2000), substance abuse (Vaughn et al. 2009), and psychopathic personality traits (Bergstrøm and Farrington 2018) is explained well by biosocial models, and biosocial methods have also been expanded to explain exposure to victimization (Watts et al. 2017). Scholars are beginning to propose biosocial models for the study of cybercrime (Owen et al. 2017), setting the foundation for the future analysis of cyber offenders and victims.

Adolescence in the Digital Age

The biosocial perspective can provide a means for understanding both the causes of cybercrime ("why do people engage in cybercrime?") and the consequences of cybercrime ("what happens to people who are exposed to cybervictimization?"). The causes and consequences of cybercrime will be discussed in detail in the following two sections (see ▶ Chap. 58, "Risk and Protective Factors for Cyberbullying


Perpetration and Victimization”). However, it makes sense to first discuss how the biosocial approach can explain cyberdeviance among adolescents in the digital age. Research on adolescent cybercrime, including sexting and cyberbullying, shows that both offending and victimization peak in the early teen years and decrease after adulthood. A 2018 study in JAMA Pediatrics indicated that 14.8% of youth have “sexted” and 27.4 (over a quarter) had received sexts. A substantial percent of sexting is not consensual with 12.0% of forwarded sexts sent without consent and 8.4% having a sext forwarded to them without consent (Madigan et al. 2018). This time in a young person’s life is rife with opportunities to send and receive sexual content with and without consent resulting in severe consequences – particularly stress and suicidal ideation (Kowalski et al. 2014). A focus on this type of deviant behavior is paramount to ensure healthy youth development. Adolescence is a special time for humans. Typically thought to begin in the very early teen years, adolescence brings with it a change in both biology and the social circumstances in which a young person is exposed. Puberty leads to an influx in hormones during the time that adolescents are moving into peer groups and navigating their social life in school (Wheeler 1991). Three of the most robust changes during this time period include (1) an increase in reward seeking behavior, (2) an increase in risky behavior, and (3) a move from familial attachments to peer attachments (Steinberg 2014). All three of these changes are biosocial in nature and have profound implications for the study of cybercrime. To begin the foray into adolescent behavior, reward seeking must be understood from an evolutionary standpoint. Once individuals reach the point where they can reproduce, their behavior changes. 
For males and females alike, aggressive behavior becomes more normative – not only in engaging in conflict but in pursuing self-interests (Najman et al. 2009). This change makes good sense. Humans must begin to seek out sexual partners, stake out their own moral code, and make their own way in the world. Therefore, they will seek those things that bring rewards, such as jobs, education, sexual partners, stimulation, and the like. Studies have concluded that adolescents engage in reward-seeking behavior – such as sexual behavior, speeding, and drug use – to a greater extent than children and adults (Steinberg 2004). Reward-seeking behavior may also peak during this age as adolescents seek out opportunities to make gains in the digital realm.

Similarly, adolescents will begin to take more risks than they did as children (and more than they will for the rest of their lives). One of the major culprits behind reward- and risk-seeking behavior is dopamine. During puberty, dopamine increases in the body and works as a “gas pedal” for behavior. While dopamine is related to this risky behavior, there is the added issue that the prefrontal cortex is not fully developed until well into a person’s 20s. The prefrontal cortex is the “CEO” of the brain, which helps individuals make careful decisions. Therefore, individuals who experience a dopamine surge at puberty won’t have much of an effective brake pedal from the prefrontal cortex for about another decade (see Zelazo et al. 2008). And since this is the case for almost all adolescents, it is essential to focus on this age group in preventing and intervening in cyberdeviance during this life stage.

Adolescence also marks a time when individuals move from valuing primarily family attachments to valuing peer attachments. Around sixth to eighth grade, adolescents

34

Biosocial Theories

709

move from being attached to their parents to being attached to their peers. During this period, peers fulfill emerging needs – such as camaraderie and sex – which parents cannot meet. Adolescents without strong attachments to parents in childhood often seek out peer attachments at a younger age (Nickerson and Nagle 2005). Associating with delinquent peers and spending time with peers without parental supervision are both risk factors for cybercrime (Holt et al. 2010; Leukfeldt and Yar 2016) and are likely to be associated with the changes that occur during puberty in early adolescence.

In sum, adolescence marks the time when youth experience increases in reward- and sensation-seeking behavior as well as a move from being attached to and supervised by parents to spending time with peers. Adolescents also have easy access to computer technology. A US Pew poll indicates that 87% of teens have access to a desktop or laptop computer, 81% have access to at least one gaming console, and 73% have access to a smartphone (Pew 2015). Changes during adolescence that motivate risky behavior, coupled with easy access to technology, create increased potential for cyberdeviance of all kinds. A biosocial model can assist in explaining cybercrime during this critical period in young people’s lives.

The Biosocial Causes of Cybercrime

When seeking to understand why some people commit crimes with the use of technology, it is useful to approach the question with scientific information from the biological and social sciences. Evidence so far suggests that theories used to understand offline and traditional crime are also useful for understanding cybercrime (Holt and Bossler 2014). Online and offline crime may thus share similar underlying mechanisms that influence antisocial behavior regardless of where they occur. For instance, the three major theories of criminal behavior – (1) social learning theory, (2) social control theory, and (3) strain theory – have all been applied to cybercrime with varying levels of success. One reason these theories are useful for explaining cybercrime is that each can be framed within the biosocial model.

Social learning and differential association theories are based on the idea that whom one chooses to associate with will shape how one behaves (Akers 2017). These theories have found much support in criminological research on crime. Replicable findings indicate that if someone associates with delinquent peers, it is likely that they will engage in delinquent behavior themselves (Pratt et al. 2010). This is also the case for cybercrime (see ▶ Chap. 26, “Deviant Instruction: The Applicability of Social Learning Theory to Understanding Cybercrime”). If someone associates with another person who engages in cyberdeviance, they too are more likely to engage in that type of deviance (Holt et al. 2010). As a general theory of crime and deviance, it appears that social learning can explain, at least in large part, deviance online and offline.

Social learning theory is a traditionally sociological theory. As such, the theory supposes that crime is learned through association with others.
Sometimes behavior is imitated in social situations, and at other times behavior is socially rewarded or punished, directing future behavior (Akers 2017). Cyberdeviance can be seen through this lens. Cybercriminals may imitate those with whom they associate, such
as friends or family members, or those they aspire to be like, such as a significant figure like “Anonymous.” However, there might be other reasons that people learn to be cyberdeviants, including genetics and brain structure. The learning process is not only social; it is biological. Some people experience rewards differently than others, which impacts the recurrence of behavior. Dopamine is a neurotransmitter that acts as the brain’s “gas pedal” – motivating people to seek rewards. Dopamine production and transportation are directed by genes, including the DAT1, DRD2, and DRD4 variants. Certain alleles (slight variations in a gene) in the dopamine system correspond to greater reward-seeking and risk-taking behavior after social motivations or punishments (Forbes et al. 2009). It is likely that individuals with frequent involvement in cybercrime feel greater reward because of their genetic makeup than those who just dabble in cyberdeviance.

Related to the previous section on adolescence, it is also likely that individuals who engage in cyberdeviance do so in the presence of peers not only for social reasons but because their brain functioning favors risk-taking in the presence of peers. Gardner and Steinberg (2005) found that undergraduates took 50% more risks in the presence of peers in a driving game, whereas adults showed no change in risk-taking in the presence of peers. They attribute this to fully formed limbic structures in the brain (the emotional part of the brain) and a still-developing neocortex (the decision-making part of the brain). There is little reason to believe that cyberdeviance is different in this regard. Teens and young adults are more likely than adults to be motivated to cyberbully and commit other forms of deviance online in the presence of peers, whereas adults are more likely to reason and weigh both risks and rewards.

Individuals with learning difficulties also tend to display problematic online behavior.
In particular, adolescents with attention deficit hyperactivity disorder (ADHD) are more likely than their peers to spend time playing video games and are more likely to become addicted (Mazurek and Engelhardt 2013). If those who have learning disabilities have access to technology and are at risk of developing addictive behaviors, it is likely that this group is also at elevated risk of offending and being victimized online.

Social control theory is the second major theory of criminal behavior. Unlike social learning theory, which argues that people learn deviant behavior, control theory argues that people are born with tendencies to act deviantly when such behaviors fulfill individual needs. Therefore, people must be controlled by society. For control theorists, it is society that must informally (through parents, teachers, neighbors, etc.) and formally (through police, prosecutors, prisons, etc.) prevent people from acting antisocially. Closely related, self-control theories suggest that parents can instill control in their children by monitoring their behavior and consistently punishing poor behavior when it occurs. Self-control is thus an inhibition that people carry with them throughout the day (Hirschi 2004). Social control theories, like social learning theories, are related to a variety of antisocial behaviors from violence to minor deviance (Pratt and Cullen 2000). Self-control in particular is related to multiple forms of antisocial behavior online, including piracy, harassment, hacking, and consumption of pornography (Donner et al. 2014; Holt et al. 2012; see ▶ Chap. 28, “The General Theory of Crime”).

Low self-control might also impact cybercrime through deviant peer association, as discussed in the previous section. Those with low self-control may be more likely
to associate with other cyberdeviants, which, in turn, increases their own cyberdeviance (Bossler and Holt 2010). Again, controls of this nature tend to be seen as entirely social. There might, additionally, be biosocial agents at work that influence cyberdeviance from this perspective. Social control theory hinges on attachment to informal and formal mechanisms of control (including parents who instill self-control). Important in this discussion is how well people become attached to others in order to be controlled. Studies have confirmed that not all individuals are able to become attached to prosocial individuals. Genetic variation, especially in the DRD4 allele, accounts for a significant portion of attachment to parents (Brussoni et al. 2000; Gervai et al. 2005). While minor deviance online is fairly normative, those who engage in severe forms of online deviance are not likely to be inhibited by, or fear damaging, their social bonds, partly because it is difficult for them to establish those bonds in the first place.

Finally, strain theories are social-psychological perspectives on the etiology of antisocial behavior. Unlike both social learning and social control theories, strain theories argue that people must be pushed into crime through some form of stress or negative emotionality (Agnew 2001). Anger and depression, two motivating emotions, have been associated with antisocial behaviors that are both internal (self-directed) and external (other-directed) (Ozkan et al. 2018; Posick et al. 2013). Strain theory has been applied to cyberdeviance with success, and anger has been shown to account for some of the association between strain and deviance online (Lianos and McGrath 2018). While strain theory relies heavily on the presence of negative emotions in the link to antisocial behavior, it is fairly silent on the biological causes of negative emotionality.
While strain theorists acknowledge the role of school failure, failed relationships, and punishment in negative emotionality, the role of the body, and the brain specifically, is relatively unexplored. This is surprising, as emotionality is highly heritable and driven by genetic variation among individuals (Polderman et al. 2015). Anger is especially heritable, and genetic alleles, including variants in the COMT and TPH1 genes, are strongly linked to increased anger in individuals (Reuter 2010). High-frequency cyber offenders, like offline offenders, have a high probability of experiencing negative emotions which fuel their online deviance (see ▶ Chap. 29, “General Strain Theory and Cybercrime”). Further, as cyber offenders also tend to be victimized online, there is likely to be some mutual reinforcement between offending, anger, and victimization (Marcum et al. 2014). This possibility should certainly receive more attention in the future in light of epigenetic evidence that victimization can change genetic expression, leading to changes in behavior (Ouellet-Morin et al. 2013). Overall, biosocial criminology has much to offer the study of cybercrime and warrants closer attention moving forward.

The Biological and Behavioral Outcomes of Cybervictimization

The etiology of cybercrime, as discussed, is best examined through a biosocial lens. So too are the outcomes of experiencing cybervictimization. Survivors of traditional crimes, ranging from sexual abuse to assault to burglary, face subsequent challenges related to their health and well-being. Just some of these outcomes include increased
fear of crime, negative emotionality (e.g., depression, anxiety), and somatization (e.g., stomachaches, headaches) related to their traumatic experience. The deleterious impacts of cybercrime on victims can best be appreciated through a biosocial approach which considers social and biological consequences.

Victims of many different types of crime experience very similar health and behavioral outcomes (see Posick and Gruenfelder 2019). It is beyond the scope of this chapter to discuss all of the unique types of victimization and their subsequent outcomes, but bullying can serve as a type of victimization that is fairly normative both online and offline. Therefore, bullying will be used in this section as an example of one type of victimization that has significant impacts on individuals and exemplifies how the biosocial approach can help us understand the consequences of bullying exposure in both forms.

Bullying victimization has a significant impact on the biological functioning of individuals. Youth victims of bullying experience psychosomatic problems (Gini and Pozzoli 2013), heightened depressive symptoms (Lee and Vaillancourt 2018), and post-traumatic stress disorder (Nielsen et al. 2015). Victims of bullying also have a greater likelihood of engaging in later risky behaviors, including drug use (Ttofi et al. 2016), suicidal behavior (Holt et al. 2015), and delinquency (Connolly and Beaver 2016). Many of these issues occur because exposure to bullying disrupts the neuroendocrine system, leading to the dysregulation of stress responses. These effects can be severe and sustained throughout life (Vaillancourt et al. 2013). Due to these consequences, researchers and public officials alike have increasingly focused on solutions to reduce bullying and subsequently improve adolescent health.

Insight into the consequences of cyberbullying is very similar (Zych et al. 2017).
Even after accounting for exposure to traditional bullying and other confounding factors, exposure to cyberbullying increases emotional and behavioral problems (Kim et al. 2018). Victims of cyberbullying are especially likely to develop increased depression and social anxiety (Fahy et al. 2016), both of which are genetic and environmental in origin (Smoller 2016). However, because the consequences of cyberbullying are both biological and sociological, it is difficult to conclude whether cybervictims are predisposed to be both victims and have mental and behavioral health issues (i.e., shared genetic causality, which has been shown in past studies [Connolly and Beaver 2016; Kavish et al. 2019]) or whether cybervictimization itself has consequences for individuals above and beyond shared genetic causes.

Luckily, there are methods to help researchers tease apart the genetic and environmental underpinnings of mental and behavioral health. A useful tool for biosocial researchers is the monozygotic twin study (discussed at greater length later). Monozygotic (MZ) twins are genetically identical. Therefore, any differences between the two cannot be due to baseline genetics and instead must be due to the unique experiences of each twin. A study by Silberg and her co-authors in 2016 revealed that anxiety has a strong genetic component; if one twin was anxious, their co-twin was also likely to be anxious. However, if one twin was bullied while the other was not, the bullied twin had significantly more social and separation anxiety as well as greater suicidal ideation.

The emerging field of epigenetics is revealing just how exposure to victimization can change the human organism. Epigenetics is the study of changes in the expression of genes that do not involve changes to the underlying DNA sequence. One major epigenetic change that occurs within the body is DNA methylation, in which methyl groups are added to the DNA structure without modifying the DNA sequence itself. Epigenetic research indicates that information from the environment (i.e., environmental exposure) can facilitate epigenetic changes in the body. One of the most potent of these environmental factors is exposure to victimization, and this exposure can have a significant impact on behavior.

A 2013 study by Ouellet-Morin and colleagues revealed that being the victim of bullying led to significant epigenetic changes, including the methylation of DNA. Using a sample of 28 MZ twin pairs discordant for bullying (one twin was bullied while the other was not), the researchers found that the bullied twin had significantly higher DNA methylation of the serotonin transporter (SERT) gene. This led to the bullied twin having a blunted cortisol response to stress. While not focused specifically on cyberbullying, a similar study attempting to replicate these findings with cybervictims would add to this body of literature in nuanced ways.

Future research must consider additional linkages that have not yet received adequate attention. The causes and consequences of online and offline bullying are very similar (Waasdorp and Bradshaw 2015), yet biosocial factors and interactions have yet to be investigated. The impact of technology use on people themselves may also increase the risk of victimization by modifying hormone levels, increasing ADHD symptoms, and contributing to obesity and other health problems.
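The discordant-MZ-twin logic described above can be illustrated with a minimal numerical sketch. The methylation values below are invented for illustration; only the design – comparing each bullied twin to a genetically identical, non-bullied co-twin so that any consistent difference must be environmental – reflects the kind of analysis described in this section.

```python
# A hypothetical within-pair comparison for MZ twins discordant for bullying.
# Values are illustrative, not data from Ouellet-Morin et al. (2013).

def within_pair_differences(bullied, not_bullied):
    """Return per-pair differences (bullied minus non-bullied co-twin).

    Because MZ twins share their entire genome, a consistent positive
    difference points to an environmental (here, bullying-related) effect
    on methylation rather than a genetic one.
    """
    if len(bullied) != len(not_bullied):
        raise ValueError("each bullied twin needs a matched co-twin")
    return [b - n for b, n in zip(bullied, not_bullied)]

# Illustrative SERT methylation scores for 5 discordant pairs (arbitrary units).
bullied_twins = [0.42, 0.38, 0.51, 0.47, 0.44]
non_bullied_twins = [0.31, 0.30, 0.39, 0.41, 0.35]

diffs = within_pair_differences(bullied_twins, non_bullied_twins)
mean_diff = sum(diffs) / len(diffs)
print(round(mean_diff, 3))  # average within-pair methylation difference
```

A real analysis would add a paired significance test across pairs, but the core of the design is exactly this within-pair subtraction.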

Biosocial Research Methods in the Study of Cybercrime

One of the major avenues toward understanding how cyber offenders assess their situation and make a decision is to place them into functional magnetic resonance imaging (fMRI) machines and have participants complete simulations, read vignettes, or watch videos related to cyberoffending or cybervictimization. This technique enables researchers to view how the brain “works” – that is, brain functioning. Computers are now smaller, lighter, and easier to use than they once were, which facilitates their use inside an fMRI machine.

Functional MRI measures blood flow in the brain. When a part of the brain is active, it requires oxygen to maintain activity. Oxygen is carried by blood to the part of the brain that is most active during a given task. A statistical algorithm assigns colors to the most active parts of the brain (usually red and orange for active parts and green and gray for inactive parts). Changes in the location of activity are shown by having a participant perform different tasks such as listening, reading, walking, or calculating. One of these activities could be completing cybercrime simulations, reading vignettes, or watching cybercrime take place in a video. By
measuring which parts of the brain are active or muted during these tasks, researchers may gain insight into how the brain reacts to this type of deviance and what might make people engage in cybercrime behaviors.

Other methods that measure central nervous system (CNS) activation can also be used to examine how individuals respond to cybercrime perpetration and victimization in controlled settings. CNS activity can be measured using skin conductance. Monitors placed on areas of the body that sweat – such as fingertips and toes – provide tools to measure how the CNS responds to external stimuli. The monitors emit a very low and undetectable electrical charge, which is then conducted at different levels by perspiration. Changes in conductance can be fairly rapid, and conductance increases as emotional arousal increases.

Hormone secretion studies can also indicate how the body responds, at least temporarily, to external stimuli. This method is somewhat less precise than skin conductance and fMRI studies because hormone levels are relatively stable in the body and are slower to change than brain and CNS activity. However, it still may provide some insight into how the body responds to cybercrime activity. Cortisol would be the logical starting point for measuring bodily reactivity to cybercrime. Cortisol is a steroid hormone that circulates in the body and helps the body respond to environmental stress. Cortisol is closely linked to bodily activity and can change throughout the day – so participants in a cortisol study must be measured at a shared baseline (same time of day, with similar dietary intake and similar environmental exposure). Alpha amylase is an enzyme that can serve as a similar marker in the body as cortisol. Both are detectable in saliva. In a hormone study of this type, participants would provide saliva samples at different points in the study, including a baseline sample and a final sample.
The samples would then be examined for changes in hormone/enzyme levels relative to the environmental exposure the person was confronted with.

As technology continues to advance, there will be more opportunities to explore the genetic contributions to behavior. With genome sequencing becoming cheaper, faster, and more accurate, researchers may be able to pinpoint specific genes and/or combinations of genes that increase the risk for certain behaviors, including cybercrimes (Schuster 2007). To reiterate, there is no crime gene. However, there is likely a cluster of genes that may confer risk on individuals to act antisocially, including involvement in cybercrime. Searching for genetic variation associated with both traditional crime and cybercrime would be important for locating similarities and differences between the two types of behavior.

In any of these methodologies for studying the causes and consequences of cyberoffending and cybervictimization, the use of monozygotic twins would be especially useful. Monozygotic twins share their entire genome, matching them on DNA, age, sex, and other characteristics. They also generally share the same home environment. Therefore, differences in traits and behavior among monozygotic twins are due to different (or unique) environmental experiences. Monozygotic twins who are bullied at different frequencies and intensities show divergences in health and behavioral outcomes (Arseneault et al. 2008). Little is known about twin discordance in cyberbullying, making this a rich area for future analysis.
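The baseline-versus-final sampling protocol described earlier in this section reduces to a simple reactivity computation. A minimal sketch, where the function name and the sample concentrations are hypothetical:

```python
def reactivity(baseline, final):
    """Percent change in salivary cortisol from baseline to final sample.

    Positive values suggest a stress response to the stimulus (e.g., a
    cybercrime vignette). Participants should share a common baseline
    condition (time of day, diet) for the comparison to be meaningful.
    """
    if baseline <= 0:
        raise ValueError("baseline concentration must be positive")
    return 100.0 * (final - baseline) / baseline

# Illustrative cortisol concentrations in nmol/L.
print(reactivity(10.0, 14.5))  # → 45.0 (a 45% rise over baseline)
```

Published cortisol studies typically use richer summaries (e.g., area-under-the-curve measures across several samples), but the percent-change idea is the core of the baseline-to-final comparison described here.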

Preventing Cybercrime Using a Biosocial Approach

By better understanding why cybercrime occurs and the outcomes of cybercrime exposure, practitioners and service providers can respond more effectively with prevention and intervention. By now, it should be clear that addressing only one side of the coin (“bio” or “social”) will go only so far (Vaughn 2016). Therefore, the proposal here is that policies and programs that attempt to stem cybercrime incorporate the lessons learned from the biosocial perspective.

Cognitive behavioral therapy (CBT) is perhaps the most “tried and true” practice for offender rehabilitation. One of the reasons why CBT is so effective is that it engages how a person thinks about their current situation (the cognitive part) and appropriate behavioral strategies for addressing challenges in the current environment (the behavioral part). Going through this process at length (the therapy part) has been shown to change the structure and function of the brain (Vaske et al. 2011). For cybercrime offenders, strengthening the ability to appropriately appraise activity online and make calculated and careful judgments about how to respond is likely to produce appropriate conduct online that does not lead to a cycle of deviance. For individuals found to consistently engage in inappropriate online behavior, mandated CBT through schools or nonprofits might decrease the prevalence of cyberdeviance in school and in the community.

Mindfulness practices (e.g., meditation, yoga) work in much the same way as CBT. Mindfulness is not one particular practice but a collection of practices that trains the brain to focus on the “here and now.” It is fundamentally nonjudgmental and introspective. Individuals who practice mindfulness have been shown to reap several benefits for brain and central nervous system health, including increased gray matter density (Hölzel et al. 2011).
Schools are currently incorporating mindfulness practices into the normal curriculum. A recent meta-analysis suggests that mindfulness practices incorporated into the school day are effective in increasing cognitive performance and reducing stress (Zenner et al. 2014) – both impacts that can improve outcomes for those perpetrating and falling victim to cyberdeviance.

Pharmacological treatments are also an option for those suffering from clinical levels of anxiety, depression, and attention deficit. It can be argued that children are often “overmedicated”; while a full discussion is beyond the scope of this chapter, this is probably true. However, some individuals, given their social circumstances and biological makeup, can benefit from specific pharmacological treatments. A large-scale meta-analysis of individuals taking selective serotonin reuptake inhibitors (SSRIs) indicated that those in the treatment groups had small, but consistent, reductions in depression and anxiety compared to control groups (Locher et al. 2017). If anxiety and depression persist after surviving a victimization experience, supplementing CBT or mindfulness practices with a pharmaceutical treatment could complete an effective treatment program.

CBT and mindfulness, along with other brain-based treatments, may be useful with hard-to-treat populations such as child and repeat sex offenders who may commit their crimes online (see ▶ Chap. 56, “Child Sexual Exploitation: Introduction to a Global Problem”). Research indicates that sexual offending,
particularly pedophilia, is highly related to the structure and function of the brain. Child sexual offenders often have lower gray matter density in the prefrontal cortex compared to non-offending populations – the same area that is the focus of CBT and mindfulness practices. Further, since the brain areas associated with sexual arousal are well known, deep brain stimulation represents an alternative to hormone therapies for treating pedophilia and similar disorders (Fuss et al. 2015). These approaches, especially when coupled together, hold the potential to address violent and sexual offending (see also ▶ Chap. 9, “Technology as a Means of Rehabilitation: A Measurable Impact on Reducing Crime”).

There are also interventions that deal with more ultimate levels of explanation and, therefore, work at the societal level. For instance, the Meaningful Roles program is an antibullying initiative explicitly based on an evolutionary understanding of bullying (see Ellis et al. 2016). Evolutionary perspectives argue that human behavior is adaptive in many situations and is rewarding to the individual in some way. This is an important view, as it suggests that people are not inherently evil, bad, or antisocial – just that behavior is driven by deeply ingrained drives to seek rewards and enhance the probability of survival and reproduction. Bullying is a specific strategy that can increase one’s prestige, status, and respect from others, increasing access to mates. Bullying is not so much an aberrant behavior done by evil kids but something quite normative that has very clear and specific goals. Therefore, responses to bullying must keep this adaptive behavior in mind in order to implement effective interventions. Bullying, and cyberbullying by extension, is a rewarding behavior, and the response to this behavior should likewise be rewarding to the individual.
In Meaningful Roles, participants are placed into a jobs training program that includes peer mentors to meet the status-driven goals of bullying behavior. By replacing the behavior of bullying, which seeks to impress peers and achieve status, with striving to learn skills and being praised by peers for job success, bullying is likely to decrease in prevalence. This type of program may achieve greater success by acknowledging and addressing the ultimate (i.e., evolutionary) reasons for human behavior (Ellis et al. 2016).

Restorative justice is not so much a “treatment” or “program” as an orientation to the way that justice is achieved. I also argue that it is fundamentally biosocial in nature, meeting the evolutionary and social goals of collectivity, mutual collaboration, and the building of empathic understanding. The traditional criminal justice system separates the victim and offender and focuses on establishing guilt. Restorative justice seeks to bring the victim and offender together to focus on how to repair the harm that was caused. Cybercrime victims and offenders, often the same individuals, likely do not understand the impact they have on other people despite experiencing the consequences of victimization themselves. Through engaging in dialogue and coming to a mutually agreed-upon avenue for reparation, the continuation of cyberdeviance may be stemmed before it can take hold of individuals in a specific area.
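The meta-analytic evidence cited earlier in this section (e.g., the SSRI analysis) rests on pooling effects across studies. One standard approach, sketched here as a fixed-effect inverse-variance average, shows the arithmetic involved; all study effects and standard errors below are invented for illustration and are not the data or method of any study cited in this chapter.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted mean effect and its standard error.

    Each study is weighted by 1/SE^2, so more precise studies count more
    toward the pooled estimate.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical standardized mean differences (treatment vs. control)
# from three small studies.
effects = [-0.30, -0.20, -0.25]
ses = [0.10, 0.15, 0.12]

d, se = fixed_effect_pool(effects, ses)
print(round(d, 3), round(se, 3))
```

A negative pooled effect here would indicate a consistent symptom reduction in the treatment groups; real meta-analyses would typically also test for heterogeneity and consider random-effects models.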

Conclusion

As technology becomes even more prevalent and impactful on society, the potential for misusing these advancements will exist alongside the benefits. The scientific community should be at the forefront of understanding the causes of the misuse of technology – namely, the misuse of computer technology – and should seek to uncover the negative health and behavioral consequences of being the victim of misuse. In this way, the scientific community, including criminologists, can lend its expertise to developing best practices for prevention and intervention.

In this chapter, I argued that the best lens through which to view the myriad causes and consequences of cybercrime and cybervictimization is the biosocial approach. The biosocial approach considers all of the biological (genetic, physiological, etc.) and social (delinquent peer association, lifestyle, etc.) factors implicated in cyberdeviance. Similarly, the consequences of exposure to cyberdeviance include both social and biological problems, necessitating biosocial victim recovery strategies. Only by understanding the body and the social environment, as well as how the two interact with one another, can the best policy and prevention approaches be developed. This is beginning to be understood in criminology writ large, and the study of deviance online is beginning to take this approach seriously.

While this area of research is quite new, it is growing (Owen 2017), and I have outlined a few ways forward for biosocial study of cybercrime. There are surely more approaches and considerations that were not covered within this chapter. However, as more individuals become interested and involved in this exciting and important area of research, more will become known. The time to consolidate thinking on biosocial criminology and cyberdeviance is here.

Cross-References

▶ General Strain Theory and Cybercrime
▶ Negative Emotions Set in Motion: The Continued Relevance of #GamerGate
▶ Risk and Protective Factors for Cyberbullying Perpetration and Victimization
▶ The General Theory of Crime
▶ The Psychology of Cybercrime

References

Agnew, R. (2001). Building on the foundation of general strain theory: Specifying the types of strain most likely to lead to crime and delinquency. Journal of Research in Crime and Delinquency, 38(4), 319–361.
Akers, R. (2017). Social learning and social structure: A general theory of crime and deviance. New York: Routledge.

718

C. Posick

Anderson, R., Barton, C., Böhme, R., Clayton, R., Van Eeten, M. J., Levi, M., Moore, T., & Savage, S. (2013). Measuring the cost of cybercrime. In R. Böhme (Ed.), The economics of information security and privacy (pp. 265–300). Berlin: Springer.
Arseneault, L., Milne, B. J., Taylor, A., Adams, F., Delgado, K., Caspi, A., & Moffitt, T. E. (2008). Being bullied as an environmentally mediated contributing factor to children’s internalizing problems: A study of twins discordant for victimization. Archives of Pediatrics & Adolescent Medicine, 162(2), 145–150.
Barnes, J. C., & Boutwell, B. B. (2015). Biosocial criminology: The emergence of a new and diverse perspective. Criminal Justice Studies, 28, 1–5.
Beaver, K. M., Nedelec, J. L., da Silva Costa, C., & Vidal, M. M. (2015). The future of biosocial criminology. Criminal Justice Studies, 28(1), 6–17.
Bergstrøm, H., & Farrington, D. P. (2018). “The beat of my heart”: The relationship between resting heart rate and psychopathy in a prospective longitudinal study. Journal of Criminal Psychology, 8(4), 333–344.
Bossler, A. M., & Holt, T. J. (2010). The effect of self-control on victimization in the cyberworld. Journal of Criminal Justice, 38(3), 227–236.
Brussoni, M. J., Jang, K. L., Livesley, W. J., & MacBeth, T. M. (2000). Genetic and environmental influences on adult attachment styles. Personal Relationships, 7(3), 283–289.
Connolly, E. J., & Beaver, K. M. (2016). Considering the genetic and environmental overlap between bullying victimization, delinquency, and symptoms of depression/anxiety. Journal of Interpersonal Violence, 31(7), 1230–1256.
Donner, C. M., Marcum, C. D., Jennings, W. G., Higgins, G. E., & Banfield, J. (2014). Low self-control and cybercrime: Exploring the utility of the general theory of crime beyond digital piracy. Computers in Human Behavior, 34, 165–172.
Duncan, G. J., Ziol-Guest, K. M., & Kalil, A. (2010). Early-childhood poverty and adult attainment, behavior, and health. Child Development, 81(1), 306–325.
Ellis, B. J., Volk, A. A., Gonzalez, J. M., & Embry, D. D. (2016). The meaningful roles intervention: An evolutionary approach to reducing bullying and increasing prosocial behavior. Journal of Research on Adolescence, 26(4), 622–637.
Fahy, A. E., Stansfeld, S. A., Smuk, M., Smith, N. R., Cummins, S., & Clark, C. (2016). Longitudinal associations between cyberbullying involvement and adolescent mental health. Journal of Adolescent Health, 59(5), 502–509.
Forbes, E. E., Brown, S. M., Kimak, M., Ferrell, R. E., Manuck, S. B., & Hariri, A. R. (2009). Genetic variation in components of dopamine neurotransmission impacts ventral striatal reactivity associated with impulsivity. Molecular Psychiatry, 14(1), 60–70.
Fuss, J., Auer, M. K., Biedermann, S. V., Briken, P., & Hacke, W. (2015). Deep brain stimulation to reduce sexual drive. Journal of Psychiatry & Neuroscience, 40(6), 429–431.
Gardner, M., & Steinberg, L. (2005). Peer influence on risk taking, risk preference, and risky decision making in adolescence and adulthood: An experimental study. Developmental Psychology, 41(4), 625–635.
Gervai, J., Nemoda, Z., Lakatos, K., Ronai, Z., Toth, I., Ney, K., & Sasvari-Szekely, M. (2005). Transmission disequilibrium tests confirm the link between DRD4 gene polymorphism and infant attachment. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 132(1), 126–130.
Gini, G., & Pozzoli, T. (2013). Bullied children and psychosomatic problems: A meta-analysis. Pediatrics, 132(4), 720–729.
Hirschi, T. (2004). Self-control and crime. In R. F. Baumeister & K. D. Vohs (Eds.), Handbook of self-regulation: Research, theory, and applications. New York: Guilford Press.
Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35(1), 20–40.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2010). Social learning and cyber-deviance: Examining the importance of a full social learning model in the virtual world. Journal of Crime and Justice, 33(2), 31–61.

34

Biosocial Theories

719

Holt, T. J., Bossler, A. M., & May, D. C. (2012). Low self-control, deviant peer associations, and juvenile cyberdeviance. American Journal of Criminal Justice, 37(3), 378–395.
Holt, M. K., Vivolo-Kantor, A. M., Polanin, J. R., Holland, K. M., DeGue, S., Matjasko, J. L., Wolfe, M., & Reid, G. (2015). Bullying and suicidal ideation and behaviors: A meta-analysis. Pediatrics, 135(2), e496–e509.
Hölzel, B. K., Carmody, J., Vangel, M., Congleton, C., Yerramsetti, S. M., Gard, T., & Lazar, S. W. (2011). Mindfulness practice leads to increases in regional brain gray matter density. Psychiatry Research: Neuroimaging, 191(1), 36–43.
Kavish, N., Connolly, E. J., & Boutwell, B. B. (2019). Genetic and environmental contributions to the association between violent victimization and major depressive disorder. Personality and Individual Differences, 140, 103–110.
Kim, S., Colwell, S. R., Kata, A., Boyle, M. H., & Georgiades, K. (2018). Cyberbullying victimization and adolescent mental health: Evidence of differential effects by sex and mental health problem type. Journal of Youth and Adolescence, 47(3), 661–672.
Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., & Lattanner, M. R. (2014). Bullying in the digital age: A critical review and meta-analysis of cyberbullying research among youth. Psychological Bulletin, 140(4), 1073–1137.
Lee, K. S., & Vaillancourt, T. (2018). Longitudinal associations among bullying by peers, disordered eating behavior, and symptoms of depression during adolescence. JAMA Psychiatry, 75(6), 605–612.
Leukfeldt, E. R., & Yar, M. (2016). Applying routine activity theory to cybercrime: A theoretical and empirical analysis. Deviant Behavior, 37(3), 263–280.
Lianos, H., & McGrath, A. (2018). Can the general theory of crime and general strain theory explain cyberbullying perpetration? Crime & Delinquency, 64(5), 674–700.
Locher, M. C., Koechlin, M. H., Zion, M. S. R., Werner, M. C., Pine, D. S., Kirsch, I., Kessler, R. C., & Kossowsky, J. (2017). Efficacy and safety of SSRIs, SNRIs, and placebo in common psychiatric disorders: A comprehensive meta-analysis in children and adolescents. JAMA Psychiatry, 74(10), 1011.
Madigan, S., Ly, A., Rash, C. L., Van Ouytsel, J., & Temple, J. R. (2018). Prevalence of multiple forms of sexting behavior among youth: A systematic review and meta-analysis. JAMA Pediatrics, 172(4), 327–335.
Marcum, C. D., Higgins, G. E., Freiburger, T. L., & Ricketts, M. L. (2014). Exploration of the cyberbullying victim/offender overlap by sex. American Journal of Criminal Justice, 39(3), 538–548.
Mazurek, M. O., & Engelhardt, C. R. (2013). Video game use in boys with autism spectrum disorder, ADHD, or typical development. Pediatrics, 132(2), 260–266.
Najman, J. M., Hayatbakhsh, M. R., McGee, T. R., Bor, W., O’Callaghan, M. J., & Williams, G. M. (2009). The impact of puberty on aggression/delinquency: Adolescence to young adulthood. Australian & New Zealand Journal of Criminology, 42(3), 369–386.
Nickerson, A. B., & Nagle, R. J. (2005). Parent and peer attachment in late childhood and early adolescence. The Journal of Early Adolescence, 25(2), 223–249.
Nielsen, M. B., Tangen, T., Idsoe, T., Matthiesen, S. B., & Magerøy, N. (2015). Post-traumatic stress disorder as a consequence of bullying at work and at school: A literature review and meta-analysis. Aggression and Violent Behavior, 21, 17–24.
Ouellet-Morin, I., Wong, C. C. Y., Danese, A., Pariante, C. M., Papadopoulos, A. S., Mill, J., & Arseneault, L. (2013). Increased serotonin transporter gene (SERT) DNA methylation is associated with bullying victimization and blunted cortisol response to stress in childhood: A longitudinal study of discordant monozygotic twins. Psychological Medicine, 43(9), 1813–1823.
Owen, T. (2017). Crime, genes, neuroscience and cyberspace. New York: Palgrave Macmillan.
Owen, T., Noble, W., & Speed, F. C. (2017). Biology and cybercrime: Towards a genetic-social, predictive model of cyber violence. In New perspectives on cybercrime (pp. 27–44). New York: Palgrave Macmillan.


Ozkan, T., Rocque, M., & Posick, C. (2018). Reconsidering the link between depression and crime: A longitudinal assessment. Criminal Justice and Behavior. https://doi.org/10.1177/0093854818799811.
Pew. (2015). Teens, social media and technology overview 2015. Pew Research Center. http://www.pewinternet.org/2015/04/09/teens-social-media-technology-2015/
Polderman, T. J., Benyamin, B., De Leeuw, C. A., Sullivan, P. F., Van Bochoven, A., Visscher, P. M., & Posthuma, D. (2015). Meta-analysis of the heritability of human traits based on fifty years of twin studies. Nature Genetics, 47(7), 702–709.
Posick, C., & Gruenfelder, K. (2019). The health consequences of victimization. In M. G. Vaughn, C. Salas-Wright, & D. B. Jackson (Eds.), International handbook of delinquency and health. New York: Routledge.
Posick, C., Farrell, A., & Swatt, M. L. (2013). Do boys fight and girls cut? A general strain theory approach to gender and deviance. Deviant Behavior, 34(9), 685–705.
Pratt, T. C., & Cullen, F. T. (2000). The empirical status of Gottfredson and Hirschi’s general theory of crime: A meta-analysis. Criminology, 38(3), 931–964.
Pratt, T. C., Cullen, F. T., Sellers, C. S., Thomas Winfree, L., Jr., Madensen, T. D., Daigle, L. E., Fearn, N. E., & Gau, J. M. (2010). The empirical status of social learning theory: A meta-analysis. Justice Quarterly, 27(6), 765–802.
Rafter, N., Posick, C., & Rocque, M. (2016). The criminal brain: Understanding biological theories of crime. New York: New York University Press.
Raine, A. (2013). The anatomy of violence: The biological roots of crime. New York: Vintage.
Reuter, M. (2010). Population and molecular genetics of anger and aggression: Current state of the art. In M. Potegal, G. Stemmler, & C. Spielberger (Eds.), International handbook of anger. New York: Springer.
Rocque, M., & Posick, C. (2017). Paradigm shift or normal science? The future of (biosocial) criminology. Theoretical Criminology, 21(3), 288–303.
Rocque, M., Posick, C., & Hoyle, J. (2015). Age and crime. In W. G. Jennings, G. E. Higgins, D. N. Khey, & M. M. Maldonado-Molina (Eds.), Encyclopedia of crime and punishment. Hoboken: Wiley.
Schuster, S. C. (2007). Next-generation sequencing transforms today’s biology. Nature Methods, 5(1), 16–18.
Silberg, J. L., Copeland, W., Linker, J., Moore, A. A., Roberson-Nay, R., & York, T. P. (2016). Psychiatric outcomes of bullying victimization: A study of discordant monozygotic twins. Psychological Medicine, 46(9), 1875–1883.
Smoller, J. W. (2016). The genetics of stress-related disorders: PTSD, depression, and anxiety disorders. Neuropsychopharmacology, 41(1), 297–319.
Soler, H., Vinayak, P., & Quadagno, D. (2000). Biosocial aspects of domestic violence. Psychoneuroendocrinology, 25(7), 721–739.
Steinberg, L. (2004). Risk-taking in adolescence: What changes, and why? Annals of the New York Academy of Sciences, 1021, 51–58.
Steinberg, L. (2014). Age of opportunity: Lessons from the new science of adolescence. New York: Houghton Mifflin Harcourt.
Ttofi, M. M., Farrington, D. P., Lösel, F., Crago, R. V., & Theodorakis, N. (2016). School bullying and drug use later in life: A meta-analytic investigation. School Psychology Quarterly, 31(1), 8–27.
Vaillancourt, T., Hymel, S., & McDougall, P. (2013). The biological underpinnings of peer victimization: Understanding why and how the effects of bullying can last a lifetime. Theory Into Practice, 52(4), 241–248.
Vaske, J., Galyean, K., & Cullen, F. T. (2011). Toward a biosocial theory of offender rehabilitation: Why does cognitive-behavioral therapy work? Journal of Criminal Justice, 39(1), 90–102.
Vaughn, M. G. (2016). Policy implications of biosocial criminology: Toward a renewed commitment to prevention science. Criminology & Public Policy, 15, 703–710.


Vaughn, M. G., Beaver, K. M., DeLisi, M., Perron, B. E., & Schelbe, L. (2009). Gene-environment interplay and the importance of self-control in predicting polydrug use and substance-related problems. Addictive Behaviors, 34(1), 112–116.
Waasdorp, T. E., & Bradshaw, C. P. (2015). The overlap between cyberbullying and traditional bullying. Journal of Adolescent Health, 56(5), 483–488.
Watts, S. J., Tetzlaff-Bemiller, M. J., & McCutcheon, J. C. (2017). MAOA, drug selling, and violent victimization: Evidence of a gene × environment interaction. Criminal Justice Review, 42(4), 368–383.
Wheeler, M. D. (1991). Physical changes of puberty. Endocrinology and Metabolism Clinics, 20(1), 1–14.
Wright, J. P., & Boisvert, D. (2009). What biosocial criminology offers criminology. Criminal Justice and Behavior, 36(11), 1228–1240.
Yoshikawa, H., Aber, J. L., & Beardslee, W. R. (2012). The effects of poverty on the mental, emotional, and behavioral health of children and youth: Implications for prevention. American Psychologist, 67(4), 272.
Zelazo, P. D., Carlson, S. M., & Kesek, A. (2008). The development of executive function in childhood. In C. A. Nelson & M. Luciana (Eds.), Handbook of developmental cognitive neuroscience (pp. 553–574). Cambridge, MA: MIT Press.
Zenner, C., Herrnleben-Kurz, S., & Walach, H. (2014). Mindfulness-based interventions in schools – A systematic review and meta-analysis. Frontiers in Psychology, 5, 603.
Zych, I., Baldry, A. C., & Farrington, D. P. (2017). School bullying and cyberbullying: Prevalence, characteristics, outcomes, and prevention. In V. Van Hasselt & M. L. Bourke (Eds.), Handbook of behavioral criminology (pp. 113–138). New York: Springer.

Part IV Hacking

Computer Hacking and the Hacker Subculture

35

Thomas J. Holt

Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
Defining Computer Hacking and Hackers . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
The History of Computer Hacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
The Modern Hacker Subculture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
Secrecy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
Demographics of Hackers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
Motivations for Hacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
Discussion and Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740

Abstract

This chapter provides an overview of the phenomenon of computer hacking, and the ways that individuals with an interest in hacking view themselves and the larger social environment in which they engage with others. The historical development of hacking is discussed in the context of technological change, along with various reasons for which individuals may hack various targets.

Keywords

Computer hacking · Hacker · Technology · Exploit · Vulnerability · Social engineering

T. J. Holt (*)
College of Social Science, School of Criminal Justice, Michigan State University, East Lansing, MI, USA
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_31



Introduction

Computers and the Internet have radically transformed the ways that personal and financial data are maintained and communicated. The decentralized nature of data and storage has created unparalleled opportunities for theft and misuse of information, as well as general harm to computer systems and network security in order to gain access to this content (Andress and Winterfeld 2013; Brenner 2008; Wall 2007). Much of this threat stems from the activities of computer hackers, or individuals who utilize technical knowledge and skills to gain access to sensitive devices or information (Furnell 2002). The conception of hacker as criminal has been promulgated in popular media since the early 1980s, when home computers first became available. The presentation of hackers as malicious criminals in movies, television, and video games over the last 30 years has cemented the view of hacking as an illegal behavior in the popular consciousness (Furnell 2002; Steinmetz 2015). This conception has been exacerbated in part by the lack of empirical research on hacking, particularly in the social sciences, which can provide insights into the human dimensions of this technical crime (see Holt 2007; Steinmetz 2015).

The realities of hacking, and hackers, are more nuanced and involve the application of hacking for both legitimate and illegal access to systems and networks. This chapter will attempt to dispel popular misconceptions of hacking by considering the way in which the act and the actors are defined by those within and outside the hacker community. In addition, this chapter will explore the history of hacking and its relationship to the evolution of technology generally. The chapter will conclude with an overview of the subculture of hackers and the ways that those involved in both legitimate and illicit hacking interact in this community.

Defining Computer Hacking and Hackers

The concept of a hack is rooted in the use of computer hardware or software in order to make it operate in a way in which it was not originally intended. This could include simple acts, such as speeding up processing times through mechanical or software-based strategies. For instance, the speed of computer processors could be increased artificially by drawing a line connecting points on the unit with a pencil or other medium. Such a “hack” was perfectly legal, though it may have voided the manufacturer’s warranty. Similar actions were among the first hacks performed by computer programmers and system engineers in order to increase the efficiency of very slow processing times of mainframe computers in the 1940s and 1950s (Holt 2007; Levy 2001; Schell and Dodge 2002; Steinmetz 2015; Turkle 1984).

Individuals may also utilize nontechnical strategies to gain access to sensitive information and computer networks, such as stealing a handwritten password for an email account or even simply guessing what a password may be and entering different options. In fact, many quantitative studies of computer hacking among juvenile and college samples utilize password guessing as a measure of computer

hacking (Bossler and Burruss 2011; Bossler and Holt 2009; Holt 2010; Marcum et al. 2014). In much the same way, hackers may misrepresent themselves via email, phone, or in person in an attempt to obtain personal information or access to different resources (Furnell 2002; Huang and Brockman 2010; Mitnick and Simon 2002). This activity, colloquially known as social engineering, usually involves posing as a service provider, employee, or potential vendor in order to make simple requests or ask questions that provide insights into the structure of a network or of the services used by their target (see ▶ Chap. 40, “Social Engineering”; Mitnick and Simon 2002). Social engineering has been a proven method of obtaining information to facilitate hacking because human beings are often willing to help others and cannot be hardened against access in the same fashion as computer systems, buildings, and equipment (Huang and Brockman 2010; Mitnick and Simon 2002). To that end, a substantial proportion of the data breaches affecting businesses and universities year to year typically involve the use of simple, low-skill attacks on the part of the actor to obtain access to network resources (Verizon 2016).

While low-tech hacks may be effective, they are often not what individuals in the general public recognize as a hack. More technical methods of hacking involve the use of programs and tools to facilitate a compromise. In that respect, hacks involve two key components: vulnerabilities and exploits. A vulnerability is a flaw or error in computer hardware or software that can be leveraged to gain access into a program or system. Hundreds of vulnerabilities have been identified in virtually every piece of software and device that exists, whether Microsoft products or Apple devices (Wang 2006). A hacker can use the presence of a vulnerability to create an exploit, or piece of software that utilizes the error in order to gain access to system-level commands (see ▶ Chap. 38, “Malicious Software Threats”).

The fact that hackers attempt to enter systems or networks that they are not technically allowed to access is why some legislators and researchers have argued hacking is analogous to physical offenses such as burglary (Wall 2001). For instance, Wall (2001) argued that computer hacking should be considered an act of cybertrespass, as an attacker must utilize tools or knowledge to cross established boundaries of a computer network without approval from the owner or operator. The act of hacking without permission is illegal in almost all Western and industrialized nations, in much the same way as burglary or theft (see ▶ Chap. 37, “Global Voices in Hacking (Multinational Views)”). If, however, the same activities were undertaken by an actor who obtained permission from the system owner, then their actions would be perfectly legal. In fact, many cybersecurity firms utilize hacking techniques in order to identify weaknesses in computer networks and secure them from future attack.

The challenges inherent in defining both hackers and hacking are equally present in academic research examining these activities. For instance, Schell and Dodge (2002) defined hackers as “persons who over time have enjoyed learning the details of computer systems and how to stretch their capabilities” (p. 9). The benefit of this definition is that it does not require an individual to have violated the law in order to be considered a hacker. At the same time, their definition included individuals who engaged in specific behaviors that may be illegal, including those who “gained unauthorized access to computer systems, copied software without authorization,

obtained free telephone/data calls by manipulating computer systems, wrote viruses, and gained unauthorized access to private branch exchange (PBX) or voice mail systems” (p. 9). Other scholars including Holt (2010) and Jordan and Taylor (1998) utilized similar legal and extralegal applications of knowledge as a way to define hacking. In fact, Steinmetz (2015) had one of the broadest definitions, recognizing hacking as a form of work situated around specific behaviors that may or may not be illicit. As a result, the general public must recognize that hacking and those who may consider themselves to be hackers may engage in criminal or legitimate applications of knowledge to computer systems and networks.
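The vulnerability-and-exploit distinction described earlier can be illustrated with a deliberately simplified, hypothetical sketch (not drawn from the chapter or any real system): a toy login routine contains a logic flaw, and a short "exploit" function packages knowledge of that flaw into a reliable bypass. The function and password here are invented for illustration only.

```python
# Illustrative sketch of a vulnerability vs. an exploit.
# The "vulnerability" is a logic flaw: the check uses Python's substring
# test ('in') instead of an equality comparison, so an empty string --
# which is a substring of every string -- is always accepted.

def vulnerable_login(password_attempt, real_password="s3cret!"):
    # FLAW: 'in' tests substring membership, not equality.
    return password_attempt in real_password

def exploit():
    # The exploit packages knowledge of the flaw into a reliable bypass:
    # submitting an empty password always grants access.
    return vulnerable_login("")

print(vulnerable_login("wrong-guess"))  # False: ordinary guesses still fail
print(exploit())                        # True: the flaw grants access
```

In the same way that real exploits operationalize a discovered flaw, the `exploit()` function here works every time without knowing the password at all; patching the vulnerability (replacing `in` with `==`) would render the exploit useless.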

The History of Computer Hacking

To better understand hackers and the act of hacking, it is necessary to consider how the act and actors have evolved in tandem with technology and modern history. Many recognize the origins of hacking in the student model railroad club at the Massachusetts Institute of Technology (MIT) in the 1940s (Levy 2001). Club members would use the term hacking to refer to goofing around with the models’ electrical systems and equipment for fun. This phrase was eventually adopted by computer programmers and engineers who worked with the computer systems of the day at various universities. The huge mainframe systems were the size of a room, with limited processing power and memory (Levy 2001). These computers were also not connected in any way, and programmers had to find strategies to keep the systems functioning and improve their capacity. Any steps the programmers could take to increase the processing speed of the systems were beneficial, and they used the phrase hack to refer to their solutions, in keeping with its origins in the railroad club (Levy 2001).

Computer technology improved and became more prominent in public and private industry through the 1960s, and the concept of hacking became more common among computer programmers and engineers. The social upheaval occurring during this period, including the civil rights movement, the Vietnam War, and the feminist movement, also influenced their views and attitudes toward technology. A so-called hacker ethic began to develop among computer programmers, linking the counterculture to computer use (Levy 2001). Specifically, hackers began to emphasize the need to distrust authority and judge individuals based on their technical competency rather than skin color or age. They also recognized that everyone should have equal access to computers and the information and knowledge they could produce. In turn, the rise of technology could improve all aspects of modern life (Thomas 2002).
Countercultural figures also emphasized the free use of technology in the 1960s and 1970s through applications of hacking to telephony, or phone phreaking (Wang 2006). During this period, telephone networks were controlled by regional “Bell” companies who charged customers for all manner of services, particularly long-distance calls that cut across state or national boundaries. Activists and protestors began to encourage the use of phreaking, or manipulating telephone switching and

processing systems, to use these services for free (Landreth 1985). As a result, phreaking became a common problem that drew attention from law enforcement and industry groups alike (Wang 2006). Two technological innovations occurred during this period that also directly influenced hacking and hackers. First, university and military computer systems connected via telephony to create the first instantiation of the Internet (Levy 2001). The production of modems, enabling computers to send and receive information via phone lines, established a method for information sharing and access to systems that could be used by anyone who could afford the technology. Secondly, computer hardware became inexpensive and small enough that hobbyists began to band together to build computer systems and software in their homes. Their efforts to cobble together personal computers, or PCs, that could process information and run simple programs created a culture of modification and experimentation that helped inform the home computer hacker community (Ceruzzi 1998). In some respects, these innovations are the manifestation of the hacker ethic as they enabled computer technology to finally become available to the general public and enable information sharing on a wide scale. The development of PCs in the 1970s did not fully take hold among consumers until the early 1980s as part of a broader technology boom around video games generally (Ceruzzi 1998). Middle-income families were among the first to purchase computer systems primarily for parental use, though young people were attracted to the devices and their potential for gaming and entertainment. Computer systems were originally sold as educational tools, though young people were quick to explore the devices and their potential. Individuals who were able to purchase modems could also identify and explore connected computer networks (Furnell 2002). 
The early Internet became a haven for information sharing through so-called bulletin board systems (BBSs) to provide information, tools, and techniques on modifying or hacking computer hardware and software (Meyer 1989; Scott 2005). Since computers of this time could not display high-definition video or images, the content was posted in plain text and featured messaging systems that would enable individuals to post and respond to messages. The communications power of BBS enabled hackers to connect with one another, leading to the formation of local hacker groups around telephone area codes (Meyer 1989; Slatalla and Quittner 1995). Physical media was also used to link hackers together through homemade magazines which eventually became national publications, such as Phrack and 2600 magazines (Steinmetz 2015). The 1980s technology boom also ushered in fears of the misuse of these devices, especially by young men. This was exemplified by the release of the film WarGames, which told the story of a teenage hacker who uses his computer to access what he believes is a video game company computer network. Instead he unsuspectingly gains access to a military computer system and nearly causes a nuclear war (Schneider 2008). Despite the dramatic aspects of the plot, the film accurately portrayed some aspects of computer use that were novel at the time, such as modem connectivity and remote access to computerized data. Popular media outlets quickly focused on the issue of malicious hacks performed by groups of young men

in part to capitalize on the public interest in computer misuse (Marbach 1983). The first federal law related to computer hacking was also developed and passed in 1984: the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984. This law empowered the US Secret Service to investigate cases involving the use and abuse of credit card information leading to a loss of $5000 or more (Hollinger and Lanza-Kaduce 1988). Two years later, the law was expanded to cover all computerized information held by banks and financial institutions and to criminalize unauthorized access to computer systems (Hollinger and Lanza-Kaduce 1988). The resulting law enforcement efforts to crack down on hacking furthered the idea that computer hacking and hackers were only engaged in wrongdoing (Hollinger and Lanza-Kaduce 1988; Krance et al. 1983).

The increased emphasis on criminal applications of hacking in the general public made it difficult for young, technologically sophisticated hackers to explain their activities to others. The tension between their interests and the misconceptions of their actions was crystallized in the publication of a brief article called The Conscience of a Hacker, or The Hacker Manifesto (Furnell 2002). This piece was published in 1986 in the hacker periodical Phrack and was attributed to “The Mentor.” The brief, but impassioned, text emphasized that young hackers were only interested in gaining knowledge, even if their actions were against the law. Adults’ attempts to criminalize their actions were misguided and only reflected their lack of concern for kids’ well-being and lack of understanding about technology. In this respect, the manifesto diverged from the hacker ethic and presented a new direction in hacking: an embrace of malicious, yet exploratory, hacks.
The perception of hacking as malicious, illegal activity continued throughout the late 1980s and 1990s, with multiple federal investigations targeting high-profile hackers like Kevin Mitnick (Shimomura and Markoff 1996) and Kevin Poulsen (Littman 1997). At the same time, technology became increasingly affordable and user-friendly, and the development of HTML and the World Wide Web made it possible for individuals to connect online in ways that were not possible through BBSs. These changes enabled more people to obtain computers and gain an interest in technology. A new generation of hackers emerged who were thought to be less skilled and more unethical compared to older hackers (Taylor 1999). Young hackers were also more willing than their older peers to share information to facilitate hacking via forums and Internet Relay Chat (IRC) channels (Taylor 1999). The free flow of information in online spaces made it possible for law enforcement to gather evidence of illegal activities, which created a tension in the community. Individuals could gain recognition from their peers for sharing useful information but might present themselves as a target to police agencies. Thus, hackers became extremely careful about who they communicated with and what they shared with others (Kilger 2010; Taylor 1999).

The computer security industry emerged during this same period, often incorporating hackers who were active in the 1980s into legitimate positions applying their knowledge in order to secure vulnerable software and hardware from compromise (Taylor 1999). Some in the hacker community felt that this was against the notion of hacking, as they were selling out their skills to corporate interests (Taylor 1999). Others argued this was a more appropriate transition back to the origins of hacking

35

Computer Hacking and the Hacker Subculture

731

and the hacker ethic (Jordan and Taylor 1998; Taylor 1999). This was clearly evident when Kevin Mitnick, a noteworthy hacker who was arrested by the FBI and sentenced to federal prison, became a security consultant upon his release (Loper 2001). Mitnick was lauded in the hacker community because of his skills and the harsh treatment he received from law enforcement and the justice system. For instance, Mitnick was barred from using a computer or Internet-connected device for years as a condition of his parole over fears that he might cause substantial harm to telephony or private industry (Holt and Bossler 2016). By transitioning from underground hacker to security professional, Mitnick lost a great deal of respect from those who admired his criminal exploits but also became a poster-boy for the professionalization of the hacker community.

At the start of the new millennium, technological innovation had transformed the world. The Internet made it possible for individuals to communicate in near real time from any location. Additionally, the growth of e-commerce and financial service providers made it possible for individuals to buy goods and complete transactions at any time. The conveniences afforded to consumers by technology created substantial economic opportunities for hackers. Hackers began to target financial institutions to acquire sensitive information and engage in fraudulent transactions. In fact, phishing attacks emerged during this period as consumers increasingly depended on email and the web to access their finances (see ▶ Chap. 42, "Phishing and Financial Manipulation"; James 2005; Wall 2007). Malware writers also began to develop tools that could be used on a fee-for-service basis by less-skilled actors. For instance, the rise of botnets during this period engendered a massive rise in spam and denial of service attacks through rented infrastructure (see ▶ Chap. 39, "Cybercrime-as-a-Service Operations"; Holt 2013).
Hacking also became a tool for political and social causes leveraged by nation-states and individual actors alike. As governments came to host sensitive information on web servers, nation-states began to target this infrastructure. For instance, Chinese hackers repeatedly stole massive quantities of information from government computer networks worldwide, ranging from direct downloads from US servers (Jordan and Taylor 2004) to highly sophisticated malicious software programs silently capturing data as it moved through the network (Rid 2013). Individuals also used hacking in the service of political causes, as evident in attacks by the hacker collective "Electronic Disturbance Theater" (Denning 2010; Jordan and Taylor 2004). The group targeted US government websites and servers, as well as those of the Mexican government over its treatment of its citizens (see ▶ Chap. 36, "Hacktivism: Conceptualization, Techniques, and Historical View"; Jordan and Taylor 2004; Schell and Dodge 2002).

The Modern Hacker Subculture

The evolution of hacking over time demonstrates that individuals ascribe diverse meaning and value to the concept of being a hacker. The underground nature of this activity also makes it a marginalized behavior and community at odds with that of

732

T. J. Holt

the larger law-abiding society. As a result, hackers appear to exist within a broader hacker subculture, or group with a shared interest in a deviant act (Holt and Bossler 2016; Wall 2007). Subcultures have a shared set of values that exist in opposition to the dominant culture, with a unique language and belief system that provides members with a way to understand who is within and outside of the subculture (e.g., Brake 1980; Herbert 1998; Miller 1958). Researchers have examined the hacker subculture since the 1980s (Levy 2001; Meyer 1989), though the findings of these studies differ across sample populations and over time (see Holt 2007). When viewed in the aggregate, the literature suggests there are three primary beliefs and values espoused in the modern hacker subculture: (1) technology; (2) knowledge; and (3) secrecy (Holt 2007; Jordan and Taylor 1998; Meyer 1989; Steinmetz 2015; Taylor 1999; Thomas 2002). These norms structure the activities and interests of hackers regardless of their involvement in ethical or malicious hacks. Each norm is discussed below, and its ties to the other norms elaborated upon.

Technology

As demonstrated throughout the history of hacking, hackers have shared an intimate connection to the development and innovation of technology since the inception of computing (Holt 2007; Jordan and Taylor 1998; Meyer 1989; Steinmetz 2015; Taylor 1999; Thomas 2002). The interests of hackers revolve around computer hardware, software, and related devices, including gaming systems, mobile phones, and wearable technologies (Holt 2007; Jordan and Taylor 1998; Turkle 1984). Hackers play an integral role in the improvement of these devices through their ability to identify vulnerabilities and either attack or secure them. Additionally, hackers' appreciation for and modifications to technology help to introduce incremental and, in some cases, transformative change to the ways we interact with computers (Jordan and Taylor 1998; Taylor 1999). In order to develop a deep connection to technology, hackers must spend time reading and experimenting with computer hardware and software (Jordan and Taylor 1998). Young hackers may first experiment with modifying their computers to play games or with developing unique programs to perform simple tricks or pranks (Holt 2007). Those who cannot afford their own computers may gain knowledge by using devices at public cafes, schools, and social clubs (Holt 2010; Holt et al. 2017; Kilger 2010). The importance of technology also directly shapes hackers' views of others who engage in hacks. The deeper an individual's connection to and mastery of technology, the more they can demonstrate their ability through novel hacks and the application of their "craft" (Holt 2007; Jordan and Taylor 1998; Steinmetz 2015; Taylor 1999; Thomas 2002). For instance, Kevin Poulsen utilized knowledge of telephony and hacking to rig a radio call-in contest at the Los Angeles radio station KIIS-FM. He was able to remotely block all inbound calls to the radio station to ensure he won the grand prize, a Porsche 944 (Smith et al.
2003). This was a distinct hack that drew him a great deal of respect in the criminal underground. Similarly, a hacker using the nickname iskorpitx exploited vulnerabilities in web servers to set world records for web defacements, a hack where the content of a website is actively changed to content of the attacker's choosing (Woo et al. 2004). His creative ability to identify vulnerable software and then exploit it led to hundreds of thousands of web pages being changed simultaneously. Such novel attacks, while illegal, are good examples of the ways that hackers' connection to technology is made manifest.

Knowledge

The central importance of technology in the hacker subculture leads individuals to spend as much time as possible building their mastery of computer hardware and software (Holt 2007; Meyer 1989; Steinmetz 2015; Thomas 2002). In this respect, the hacker subculture is a meritocracy where individuals are judged on their commitment to learning and understanding technology (Holt 2007; Jordan and Taylor 1998; Levy 2001). The importance of knowledge has historical roots in the notion of the hacker ethic of the 1960s, as hackers espoused the importance of judging others based on their skill alone (Levy 2001). Hackers do not suggest individuals learn through direct mentorship, but rather through personal experience and practice (Holt 2007; Jordan and Taylor 1998; Taylor 1999). There are, however, indirect methods of information sharing among hackers that foster a community of knowledge. For instance, hackers frequently share tutorials and ideas on methods of hacking and programming through forums, social media, and other online platforms (Holt 2009; Meyer 1989). The knowledge development process of hacking is demonstrative of the importance of maintaining a commitment to technology (Holt 2007; Jordan and Taylor 1998; Taylor 1999). For instance, hackers are informally encouraged not to ask obvious questions in online forums, as there are myriad ways to find answers via search engines and books (Holt 2007). Additionally, if a person asks a question, they must be able to demonstrate they have put forth sufficient effort to figure out an answer on their own. If it is apparent a person is seeking shortcuts or does not understand how to apply practical concepts, they will be shamed publicly by others in forums and on other platforms (Holt 2007). Thus, individuals must spend a great deal of time learning how programs work and cultivate their skills through hours of reading and experimentation (Holt 2007, 2009; Jordan and Taylor 1998; Taylor 1999).
At the same time, hackers can gain social status and recognition for sharing quality information with others. There are several ways individuals can share ideas, whether in written form through forum posts or through tutorials distributed on social media (Holt et al. 2009; Steinmetz 2015). In the modern age, hackers also share information through videos in which they can demonstrate attack methods and processes in an interactive way (Holt et al. 2017). In addition, hackers frequently come together in the real world at conferences and meet-ups. It may seem unusual to hold conferences related to illegal activities, though they have evolved from underground get-togethers in the early 1990s to professional security events that draw thousands of attendees every year (Holt 2007; Steinmetz 2015). Today, hacker events primarily focus on discussions of cybersecurity concerns and novel hacks that extend technologies or demonstrate new threats (Holt 2007; Steinmetz 2015). Those who demonstrate their mastery of technology and their abilities in these settings are able to gain the respect of their peers. Individuals who have minimal skills and make it clear to others that they are unable to hack are often rejected and mocked by the broader community (Holt 2007; Jordan and Taylor 1998; Meyer 1989; Steinmetz 2015).

The significance of knowledge within the subculture shapes the ways that hackers refer to one another based on their ability and skill (Furnell 2002; Holt 2007, 2010; Jordan and Taylor 1998; Taylor 1999). Hackers can be placed along a continuum relative to their connection to technology and their understanding of hacking generally. Those who are new to hacking and have relatively limited ability to manipulate computer hardware and software are usually referred to as noobs, or newbies (Holt 2010). Noobs generally have the least status within the hacker subculture, and the term can be used reflexively by an actor in an attempt to highlight their limited knowledge and justify why they need assistance (Holt 2010). Others may use the term noob in a derogatory fashion to shame a person who has limited knowledge. Over time, an individual may move beyond their noob status as they gain more knowledge of computers and technology. Often, their early attempts at hacking involve the application of scripts, tools, and kits that can be downloaded at no charge from hacker websites and forums (Furnell 2002; Holt 2010). These scripts may automate portions of hacks, making individuals more likely to complete an attack successfully.
The idea of easily and quickly completing an attack may be attractive to young hackers who believe it will help them gain status or respect from others within the subculture (Furnell 2002; Holt 2007; Taylor 1999). At the same time, the tools they download may not be properly assembled, or the users may not understand how to utilize them properly in practice. As a result, these attacks are likely to fail and simply annoy the target rather than cause it real harm (Furnell 2002; Holt 2007, 2010; Taylor 1999). Individuals who utilize these kits are likely to be referred to as script kiddies by individuals within the hacker subculture as well as in the cybersecurity community (Furnell 2002; Holt 2007; Taylor 1999). The word script recognizes the automated computer code, or "script," applied by the attacker's tool kit. It is also utilized in a derogatory fashion to reflect the lack of skill on the part of the script user (Holt 2010). Some may also refer to script kiddies as lamers or wannabes to recognize their general lack of skill (Furnell 2002).

Should an individual be able to cultivate sufficient knowledge of computer hardware and software, they may be able to move beyond the level of script kiddie to be recognized as a hacker (Holt 2010). Several studies note that individuals may not refer to themselves as hackers, but rather allow others to say that they are hackers (see Holt 2010). Some may refer to themselves as a hacker after they have developed sufficient knowledge of programming languages, hardware, and scripts (Holt 2007; Taylor 1999). Additionally, individuals within the hacker subculture use unique phrasing to define their ethical attitudes toward hacking in keeping with the concepts of the hacker ethic and manifesto. Those who use their knowledge to engage in hacks to secure systems and identify vulnerabilities are often referred to as white hats (see Furnell 2002; Holt 2007, 2010; Thomas 2002). By contrast, black-hat hackers use their knowledge to gain access to systems without authorization or to harm computer systems and sensitive data (Furnell 2002; Holt 2007, 2010). Actors who use their skills in any way they see fit may be referred to as gray-hat hackers, recognizing their ethical flexibility over time (Furnell 2002; Holt 2010).

Secrecy

Though knowledge and information sharing are key to gaining status within the hacker subculture, the questionable legal nature of some hacks means individuals must take care to minimize their risk of arrest (Jordan and Taylor 1998; Taylor 1999; Thomas 2002). Law enforcement agencies monitor communications in active hacker communities, which presents substantial risk for hackers who overtly brag about their activities in online spaces (Hutchings and Holt 2017; Jordan and Taylor 1998; Taylor 1999). Hackers realize this and espouse the use of various techniques to reduce the likelihood of detection when communicating about their activities in public spaces (Holt 2007). One of the key strategies hackers employ to minimize their risk of detection is to use a handle, or screen name, at all times when engaging with others on- or off-line about their hacking activities (Furnell 2002; Holt 2010; Kinkade et al. 2013). A handle serves not only as a shield for one's real identity but also as a unique digital representation of one's self (Furnell 2002). Individual handles may be overtly aggressive or violent in an attempt to suggest the actor is dangerous, as with groups of younger hackers in the 1980s who referred to themselves as the Masters of Deception and the Legion of Doom (Furnell 2002; Slatalla and Quittner 1995). Others may use humor in an attempt to defuse their perceived risk, such as the hacker TweetyFish, who used that name under the belief that no judge would ever think a serious criminal hack would be associated with such a ridiculous name (Furnell 2002). Once an individual establishes an online handle, they tend to use it at all times, including at conferences and real-world meet-ups (Holt 2010; Jordan and Taylor 1998; Steinmetz 2015). The use of handles in real-world meetings is essential to enable individuals to connect an identity they may have communicated with in a forum or online community to a real person.
Handles are also protective, as law enforcement agencies and investigators have arrested individuals at hacker conferences for illegal activity they may have performed (e.g., Holt 2007; Slatalla and Quittner 1995). Thus, individuals may be very careful about whom they communicate with in person in order to further conceal their illicit activities from outsiders (Holt 2007). In fact, a certain degree of paranoia and suspicion has been normalized at hacker conferences to proactively encourage participants to consider who they are speaking to and what they should realistically share (Holt 2007). The importance of secrecy also shapes the way hackers communicate with one another and where they do so online. Historically, hackers have communicated through BBS and forums which could be identified by anyone through various search engines or public postings. Such groups are instrumental as initial sources of information and insights into the subculture. More sophisticated actors may, however, want a place where they can discuss ideas with others who share their level of skill or simply want to keep their actions hidden from law enforcement (Dupont et al. 2017). As a result, there are hacker communities that operate on a private basis, where individuals have to register with the site in order to view posts and communicate with others (Holt 2010). In the last decade, a range of forums have also emerged that place greater restrictions on access, including operating on an invitation-only basis and requiring participants to pay a fee in order to participate (e.g., Dupont et al. 2017; Hutchings and Holt 2015). This shift is thought to stem from increased attention on hackers' online communications by law enforcement and security researchers. As a result, hackers take great pains to shield overt criminal exchanges from the public (Dupont et al. 2017; Jordan and Taylor 1998).

Demographics of Hackers

Just as there are various ways to perform a hack, there are myriad ways to define what constitutes a hacker. While it may seem obvious to suggest a person who hacks is a hacker, the meaning of the term can differ based on an individual's prior involvement with hacking and their social position (Holt 2007; Jordan and Taylor 1998; Schell and Dodge 2002; Taylor 1999). For instance, a member of the general public may assume a hacker is a young male with anti-social tendencies who may engage in illegal or at least inappropriate use of technology based on popular media representations (Furnell 2002; Steinmetz 2015). Individuals who have performed hacks are more likely to define a hacker as someone with technical skills and possibly an inquisitive temperament (Holt 2010; Steinmetz 2015). There are also unique terms used by individuals involved in hacking as a means to classify people relative to their abilities and ethical attitudes toward the use of hacking techniques, as discussed earlier in the chapter (Holt 2010). Though there is a lack of consistent definitions for hacker, there are some commonalities evident in the demographics of those who hack. Research suggests the majority of individuals who hack begin in early adolescence after finding an interest in technology (Holt 2007; Schell and Dodge 2002). Many individuals heavily engaged in hacking also appear to be young, in keeping with general evidence from the age-crime curve as to the relationship between age and offending (Bachmann 2010; Jordan and Taylor 1998). It is less evident if there is a natural desistance point from hacking or if individuals persist in the behavior over time. Limited research suggests hackers engaged in malicious activities are predominantly under the age of 30, while the portion of older hackers observed appears to be gainfully employed in the security community (Bachmann 2010; Gilboa 1996; Jordan and Taylor 1998; Schell and Dodge 2002).
The majority of hackers also appear to be male, with evidence from qualitative scholarship arguing that a gender gap exists in hacking (Holt 2010; Gilboa 1996; Jordan and Taylor 1998; Taylor 1999). There is no clear explanation for this dynamic, though there is speculation it reflects the gendering of technology use toward boys in early adolescence (Taylor 1999). Evidence from quantitative research also finds males are more likely to report involvement in various forms of hacking (Bossler and Burruss 2011; Holt et al. 2010; Marcum et al. 2014). It is not clear if this gender gap will persist as access to technology equalizes or if there may be greater gender equity in hacking in the near future. The profound technical knowledge and skills hackers possess may lead some to believe they are highly educated. Empirical evidence suggests, however, that hackers may have only high school educations, though some have completed 2- or 4-year degrees (Bachmann 2010; Holt et al. 2009; Schell and Dodge 2002). Instead, most hackers cultivate their knowledge through a combination of self-guided, experiential learning and formal education. In addition, while popular media sources portray hackers as social isolates, they frequently learn through social networks of peers on- and off-line (Bossler and Burruss 2011; Holt et al. 2009; Leukfeldt et al. 2017; Skinner and Fream 1997). Many hackers report meeting and forming relationships with others who share an interest in computers and technology through online forums, IRC channels, and other forms of computer-mediated communication (Holt 2009; Jordan and Taylor 1998; Skinner and Fream 1997). A small portion also maintain social ties with other hackers in the real world, though other peers in their local network may not share their interests (Leukfeldt et al. 2017).

Motivations for Hacking

Though there are some consistent demographic relationships observed in the hacker population, there are multiple reasons individuals may report for their involvement in hacking (Kilger 2010). Individuals may hack for instrumental reasons, such as direct financial gain (Holt et al. 2016), or for expressive reasons, such as emotional fulfillment and ideological expression (Woo et al. 2004). Additionally, hackers may be motivated by multiple concepts simultaneously, making it difficult to determine any single reason for an act (Kilger 2010). An actor can also have the same motivation regardless of their use of deviant or legitimate hacks. Perhaps the most commonly identified motivation to hack from a criminal justice system perspective is the desire to make money, especially through the use of illegal means of computer compromise (Holt et al. 2016; Kilger 2010). Many in the general public may recognize hacks that target financial institutions or sensitive data as an immediate and easy way for hackers to profit from the sale of this information or its use to engage in fraud (Holt et al. 2016; Leukfeldt et al. 2017). In fact, hackers regularly sell financial data acquired through phishing and hacking to others through markets operating on the open and Dark Web (Hutchings and Holt 2015). Hackers may also write malware and sell it to others on a fee-for-service basis in order to monetize their skills (see ▶ Chap. 39, “Cybercrime-as-a-Service Operations”; Holt 2013; Leukfeldt et al. 2017).


At the same time, individuals may be able to profit from hacking as legitimate cybersecurity practitioners (Schell and Dodge 2002; Taylor 1999). For instance, individuals can utilize hacking techniques as penetration testers, attempting to compromise computer networks on a fee-for-service basis in order to secure the systems from subsequent attacks (Schell and Dodge 2002). In addition, hackers may be paid by large software vendors through so-called “bug bounty” programs to identify unknown flaws so that they can be patched before being compromised by actors in the wild (Kranenbarg et al. 2018). Thus, money is a common motivation for ethical and malicious hackers alike.

Beyond the instrumental, one of the most common expressive motivations noted among ethical and malicious hackers alike is the sense of entertainment they feel as a result of successful hacks (Holt 2007; Kilger 2010). The intrinsic sense of enjoyment hackers find in modifying and playing with computer technology is an essential reason why many persist in hacking over time (e.g., Holt 2007; Kinkade et al. 2013; Steinmetz 2015). In fact, some of the first hacks individuals report performing are simple modifications to their computers or mobile devices in order to customize the equipment or software to better suit their needs (Holt 2007; Jordan and Taylor 1998). Some hacks are also simply entertaining or amusing, such as web defacements where the attacker can change a website’s content to images and text of the hacker’s choosing (see ▶ Chap. 36, “Hacktivism: Conceptualization, Techniques, and Historical View”; Woo et al. 2004). This form of attack allows hackers to express themselves, which may be inherently amusing depending on the content they post. A related pair of motivations to hack includes ego and social status.
Individuals often report feeling a sense of pride when they are able to make a device work in a way that it was not originally designed to function (Holt 2007; Kinkade et al. 2013; Steinmetz 2015; Voiskounsky and Smyslova 2003). Hackers may also feel good when they are recognized for their actions by their peers, whether for sharing information in a relatable fashion or for demonstrating new attack techniques to others (Kilger 2010). In fact, hackers actively engaged in the sale of personal data or hacking tools may receive positive feedback from their customers, which may increase their overall reputation and status in the community (Dupont et al. 2017; Holt and Lampke 2010; Motoyama et al. 2011). In that regard, a secondary motivation to hack is to gain entrance to more sophisticated hacker groups. This dynamic was present in the 1980s, when hackers who could demonstrate their expertise or knowledge in bulletin board systems would be given access to closed communities. These groups would frequently host pirated games and software and promote active trading and information sharing. These relationships persist today, particularly among underground hacker groups heavily engaged in illicit activities. These forums typically allow individuals to join a group if they have been vetted by other members and, in some cases, pay for access to the community. Thus, some hackers may aspire to join hidden, high-skill communities to improve their abilities and gain access to more sophisticated tools and programs (see ▶ Chap. 39, “Cybercrime-as-a-Service Operations”).

One of the least examined motives related to hacking involves ideological, religious, social, and political causes (Holt and Kilger 2012; Jordan and Taylor 2004; Woo et al. 2004). A number of hacks have been documented where attackers targeted governments and businesses through data breaches and web defacements because those targets were perceived to have caused harm to animals or the environment (Holt et al. 2019). Additionally, groups like Al-Qaeda recognize the value in targeting Western nations with cyberattacks in order to cause economic harm and spread fear over the potential weaknesses of the Internet (Denning 2010). Nation-states also engage in hacking with great frequency in order to acquire sensitive information and sow discord between minority groups in society (Rid 2013). As a result, it is possible that such ideological attacks may continue throughout the coming decades (Andress and Winterfeld 2013; Holt et al. 2019; Rid 2013).

Discussion and Conclusions

This chapter demonstrated the breadth and depth of the meanings associated with the act of hacking and with being a hacker. While many may assume hacking is simply a criminal act, it is in reality a skill that can be applied for benevolent or malicious purposes. Hackers are also motivated by a range of instrumental and expressive needs that vary by person and place. Thus, any examination of hacking must take care to recognize the tensions that exist between legitimate and illicit applications of knowledge. To that end, future scholarship is vital to continuously assess the relationship between hackers and technology (see also Holt 2007). The constant evolution of technology toward smaller and more wearable smart devices that store information in the cloud will undoubtedly increase the opportunities malicious hackers have to obtain access to these data. It will also energize the security-minded hackers who attempt to secure these devices from others. As a result, the landscape of hacking and the skills of hackers will diversify in tandem with technological innovation.

There is also a need for more research examining the different methods and tactics employed by hackers on the basis of their motivations to hack. Though there is a fair amount of research examining the economically driven hacker and those interested in hacking for fun and excitement, there is far less research considering those driven by a cause. For instance, there are limited qualitative studies assessing attacks driven by an ideological or social agenda (e.g., Holt et al. 2017; Holt et al. 2019; Jordan and Taylor 2004). Research is needed considering the ways that nation-state-sponsored hacks and those performed by individuals for nationalist or political reasons operate, and how they may differ from attacks performed for other reasons (see also Andress and Winterfeld 2013; Holt et al. 2019). There is also a need for continuous research examining the hacker subculture.
The majority of research to date has focused on hacker communities in the USA and Europe (e.g., Holt 2007; Jordan and Taylor 1998; Meyer 1989; Steinmetz 2015; Taylor 1999). Less is known about the consistency of the norms and values of the subculture across place, especially in nations that have less access to technology or restrictions on Internet use (see Holt et al. 2017). Such research could vastly improve our understanding of the contours of the hacker subculture and the ways in which local values and beliefs intersect with those of a more global online subculture.


References

Andress, J., & Winterfeld, S. (2013). Cyber warfare: Techniques, tactics, and tools for security practitioners. Waltham: Syngress.
Bachmann, M. (2010). The risk propensity and rationality of computer hackers. The International Journal of Cyber Criminology, 4, 643–656.
Bossler, A. M., & Burruss, G. W. (2011). The general theory of crime and computer hacking: Low self-control hackers? In T. J. Holt & B. H. Schell (Eds.), Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 38–67). Hershey: IGI Global.
Bossler, A. M., & Holt, T. J. (2009). On-line activities, guardianship, and malware infection: An examination of routine activities theory. International Journal of Cyber Criminology, 3, 400–420.
Brake, M. (1980). The sociology of youth cultures and youth subcultures. London: Routledge and Kegan Paul.
Brenner, S. W. (2008). Cyberthreats: The emerging fault lines of the nation state. New York: Oxford University Press.
Ceruzzi, P. (1998). A history of modern computing. Cambridge, MA: MIT Press.
Denning, D. E. (2010). Cyber-conflict as an emergent social problem. In T. J. Holt & B. Schell (Eds.), Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 170–186). Hershey: IGI Global.
Dupont, B., Côté, A. M., Boutin, J. I., & Fernandez, J. (2017). Darkode: Recruitment patterns and transactional features of “the most dangerous cybercrime forum in the world”. American Behavioral Scientist, 61(11), 1219–1243.
Furnell, S. (2002). Cybercrime: Vandalizing the information society. London: Addison-Wesley.
Gilboa, N. (1996). Elites, lamers, narcs, and whores: Exploring the computer underground. In L. Cherny & E. R. Weise (Eds.), Wired_Women (pp. 98–113). Seattle: Seal Press.
Herbert, S. (1998). Police subculture reconsidered. Criminology, 36(2), 343–370.
Holt, T. J. (2007). Subcultural evolution? Examining the influence of on- and off-line experiences on deviant subcultures. Deviant Behavior, 28, 171–198.
Holt, T. J. (2009). Lone hacks or group cracks: Examining the social organization of computer hackers. In F. Schmalleger & M. Pittaro (Eds.), Crimes of the internet (pp. 336–355). Upper Saddle River: Pearson Prentice Hall.
Holt, T. J. (2010). Examining the role of technology in the formation of deviant subcultures. Social Science Computer Review, 28, 466–481.
Holt, T. J. (2012). Examining the forces shaping cybercrime markets online. Social Science Computer Review, 31(2), 165–177.
Holt, T. J. (2013). Exploring the social organisation and structure of stolen data markets. Global Crime, 14(2–3), 155–174.
Holt, T. J., & Bossler, A. M. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. London: Routledge.
Holt, T. J., & Kilger, M. (2012). Know your enemy: The social dynamics of hacking. The Honeynet Project. [Online] Available at: https://honeynet.org/files/Holt%20and%20Kilger%20-%20KYE%20-%20The%20Social%20Dynamics%20of%20Hacking.pdf
Holt, T. J., & Lampke, E. (2010). Exploring stolen data markets on-line: Products and market forces. Criminal Justice Studies, 23, 33–50.
Holt, T. J., Burruss, G. W., & Bossler, A. M. (2010). Social learning and cyber deviance: Examining the importance of a full social learning model in the virtual world. Journal of Crime and Justice, 33, 15–30.
Holt, T. J., Bossler, A. M., & May, D. C. (2012). Low self-control, deviant peer associations, and juvenile cyberdeviance. American Journal of Criminal Justice, 37(3), 378–395.
Holt, T. J., Freilich, J. D., & Chermak, S. M. (2017). Exploring the subculture of ideologically motivated cyber-attackers. Journal of Contemporary Criminal Justice, 33(3), 212–233.
Holt, T. J., Smirnova, O., & Chua, Y. T. (2016). Data thieves in action: Examining the international market for stolen personal information. New York: Springer.

35 Computer Hacking and the Hacker Subculture

Holt, T. J., Stonhouse, M., Freilich, J., & Chermak, S. M. (2019). Examining ideologically motivated cyberattacks performed by far-left groups. Terrorism and Political Violence, 1–22.
Holt, T. J., Kilger, M., Strumsky, D., & Smirnova, O. (2009). Identifying, exploring, and predicting threats in the Russian hacker community. Presented at the Defcon 17 Convention, Las Vegas, Nevada.
Hollinger, R. C., & Lanza-Kaduce, L. (1988). The process of criminalization: The case of computer crime laws. Criminology, 26(1), 101–126.
Huang, W., & Brockman, A. (2010). Social engineering exploitations in online communications: Examining persuasions used in fraudulent e-mails. In T. J. Holt (Ed.), Crime on-line: Causes, correlates, and context (pp. 87–112). Raleigh, NC: Carolina Academic Press.
Hutchings, A., & Holt, T. J. (2015). A crime script analysis of the online stolen data market. British Journal of Criminology, 55, 596–614.
James, L. (2005). Phishing exposed. Rockland: Syngress.
Jordan, T., & Taylor, P. (1998). A sociology of hackers. The Sociological Review, 46, 757–780.
Jordan, T., & Taylor, P. (2004). Hacktivism and cyber wars. London: Routledge.
Kilger, M. (2010). Social dynamics and the future of technology-driven crime. In T. J. Holt & B. Schell (Eds.), Corporate hacking and technology-driven crime: Social dynamics and implications (pp. 205–227). Hershey: IGI Global.
Kinkade, P. T., Bachmann, M., & Bachmann, B. S. (2013). Hacker Woodstock: Observations on an off-line cyber culture at the Chaos Communication Camp 2011. In T. J. Holt (Ed.), Crime online: Correlates, causes, and context (2nd ed., pp. 19–60). Raleigh: Carolina Academic Press.
Krance, M., Murphy, J., & Elmer-Dewitt, P. (1983). The 414 gang strikes again. Time. [Online] Available at: www.time.com/time/magazine/article/0,9171,949797,00.html
Kranenbarg, M. W., Holt, T. J., & van der Ham, J. (2018). Don't shoot the messenger! A criminological and computer science perspective on coordinated vulnerability disclosure. Crime Science, 7(1).
Landreth, B. (1985). Out of the inner circle. Seattle: Microsoft Press.
Leukfeldt, R., Kleemans, E. R., & Stol, W. (2017). Origin, growth, and criminal capabilities of cybercriminal networks: An international empirical analysis. Crime, Law and Social Change, 67, 39–53.
Levy, S. (2001). Hackers: Heroes of the computer revolution. New York: Penguin.
Littman, J. (1997). The watchman: The twisted life and crimes of serial hacker Kevin Poulsen. New York: Little Brown.
Loper, K. (2001, November). Profiling hackers: Beyond psychology. Presented at the annual meeting of the American Academy of Sociology.
Marbach, W. (1983). Cracking down on hackers. Newsweek, 34.
Marcum, C. D., Higgins, G. E., Ricketts, M. L., & Wolfe, S. E. (2014). Hacking in high school: Cybercrime perpetration by juveniles. Deviant Behavior, 35(7), 581–591.
Meyer, G. R. (1989). The social organization of the computer underground. Master's thesis.
Miller, W. B. (1958). Lower class culture as a generating milieu of gang delinquency. Journal of Social Issues, 14(3), 5–19.
Mitnick, K. D., & Simon, W. L. (2002). The art of deception: Controlling the human element of security. New York: Wiley Publishing.
Motoyama, M., McCoy, D., Levchenko, K., Savage, S., & Voelker, G. M. (2011). An analysis of underground forums. IMC'11, 71–79.
Rid, T. (2013). Cyber war will not take place. London: Hurst Publishers.
Schell, B. H., & Dodge, J. L. (2002). The hacking of America: Who's doing it, why, and how. Westport: Quorum Books.
Schneider, H. (2008). Wargames. United Artists.
Scott, J. (2005). BBS: The documentary.
Shimomura, T., & Markoff, J. (1996). Takedown: The pursuit and capture of Kevin Mitnick, America's most wanted computer outlaw – by the man who did it. New York: Hyperion.
Skinner, W. F., & Fream, A. M. (1997). A social learning theory analysis of computer crime among college students. Journal of Research in Crime and Delinquency, 34, 495–518.

T. J. Holt

Slatalla, M., & Quittner, J. (1995). Masters of deception: The gang that ruled cyberspace. New York: Harper Collins Publishers.
Smith, R., Grabosky, P., & Urbas, G. (2003). Cybercriminals on trial. Cambridge: Cambridge University Press.
Steinmetz, K. F. (2015). Craft(y)ness: An ethnographic study of hacking. British Journal of Criminology, 55, 125–145.
Taylor, P. (1999). Hackers: Crime in the digital sublime. London: Routledge.
Thomas, D. (2002). Hacker culture. Minneapolis: University of Minnesota Press.
Turkle, S. (1984). The second self: Computers and the human spirit. Cambridge, MA: MIT Press.
Verizon. (2016). Verizon 2016 Data Breach Investigations Report. [Online] Available at: https://www.verizon.com/about/news/verizons-2016-data-breach-investigations-report-finds-cybercriminals-are-exploiting-human
Voiskounsky, A. E., & Smyslova, O. V. (2003). Flow in computer hacking: A model. Human.Society@Internet 2003.
Wall, D. S. (2001). Cybercrimes and the internet. In D. S. Wall (Ed.), Crime and the internet (pp. 1–17). New York: Routledge.
Wall, D. S. (2007). Cybercrime: The transformation of crime in the information age. Cambridge, UK: Polity Press.
Wang, W. (2006). Steal this computer book 4.0: What they won't tell you about the internet. Boston: No Starch Press.
Woo, H., Kim, Y., & Dominick, J. (2004). Hackers: Militants or merry pranksters? A content analysis of defaced web pages. Media Psychology, 6(1), 63–82.

36 Hacktivism: Conceptualization, Techniques, and Historical View

Marco Romagna

Contents

Introduction
Defining Hacktivism
Hacktivism: Taxonomy
Toward a New Way of Conceptualizing Hacktivism
Hacktivism, Cyberterrorism, and State-Sponsored Hacktivism: Similar but Different in Their Complexity
Organizational Aspects of Hacktivists
Techniques Used by Hacktivists
  Denial of Service (DoS), Distributed Denial of Service (DDoS), and Virtual Sit-Ins
  Cyber Trespass
  Website Defacements
  Data Theft/Leak
  Website Redirect
The Evolution of Hacktivism Through Actors and Actions
  Phase 1: From the 1980s to the Mid-2000s – From the Origins of the Hacker-Activist Spirit to the Affirmation of Hacktivism
  Phase 2: From the Mid-2000s to 2012 – Anonymous Era and Rise of the Hackers
  Phase 3: From 2012 to Present Days – Toward New Forms of Hacktivism
Conclusion
References

Abstract

Hacktivism is a relatively new phenomenon which originated in the 1980s from the meeting of hackers' communities and technological-enthusiast activists. It grew in popularity in the late 1990s, becoming particularly famous with the advent of the collective Anonymous. The word hacktivism is a combination of the terms "hacking" and "activism" and is generally described as the use of hacking techniques to promote a political agenda on the Internet. Using a critical approach and following a historical perspective, this chapter analyzes the intricate conceptualization of hacktivism, exploring the different existing definitions and taxonomies. The chapter will then focus on the organizational aspects of hacktivists' groups and the techniques used to promote their ideologies. Last, it will provide an overview of the historical evolution of hacktivism, concentrating on the most important actors and operations from the origins of the phenomenon to present days.

Keywords

Activism · Cybercrime · Hacker subculture · Hacking · Hacktivism · Ideology

M. Romagna (*)
Centre of Expertise Cyber Security, The Hague University of Applied Sciences, The Hague, The Netherlands
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_34

Introduction

In the last 30 years, the rise of the Internet and its technologies has deeply influenced every field of society, from economics to technology and from psychology to politics. Activism, as a social phenomenon, did not escape this evolution. It adopted new methods of protest and developed new ideologies that have been heavily influenced by cyberspace's infrastructure, its users, and its tools (Denning 2015). Since the early 1990s, the Internet community has witnessed the growth of a new phenomenon that has played a significant role in showing how the Internet itself can become a fertile and valuable place for political struggles, civic engagement, and social protests (Li 2013; Milan 2015). This phenomenon, which combines typical aspects of hacking with sociopolitical values and ideologies taken from traditional forms of activism, is known as "hacktivism" (Denning 2015; Samuel 2004), and it has become part of a well-established tradition of radical media practices (Milan 2015). The term appeared for the first time in the hacking scene as a consequence of hackers' and "techno-savvies'" actions aimed at promoting a sociopolitical message (Jordan and Taylor 2004; Samuel 2004). The word "hacktivism" combines "hacking" and "activism" (Denning 2001), and it has been identified with the "nonviolent use of illegal or legally ambiguous digital tools in pursuit of political ends" (Samuel 2004, p. 2). Yet, hacktivism is not limited to established political ideologies taken online. The mixed background of its members has created groups that either embraced values typical of traditional activism and civil disobedience, such as global justice, freedom of expression, and left-wing movements (Jordan and Taylor 2004; Samuel 2004), or supported specific causes such as environmental issues in a certain region, political opposition to a national government, and freedom of the Internet (Karatzogianni 2015).
Hacktivism has been addressed in different sectors of the cybersecurity industry and academia, mainly in the fields of political science, law, sociology, security studies, and governance. The academic community has particularly focused on the origins and the evolution of hacktivism (Denning 2001, 2015; Jordan 2002; Jordan and Taylor 2004;
Karatzogianni 2005; Krapp 2005; Manion and Goodrum 2000; Milan 2015; Samuel 2004; Tanczer 2017; Taylor 2005; Vegh 2003); the characteristics of specific groups like Anonymous (Asenbaum 2018; Coleman 2014; Firer-Blaess 2016); the connection between hacktivism as a new form of civil disobedience and more traditional social protests (Delmas 2018; Dominguez 2008; Hampson 2012; Houghton 2010; Knapp 2015); the conflictual legal and ethical position of hacktivism (Gillen 2012; Goode 2015; Himma 2007; Karagiannopoulos 2013, 2018; Li 2013; O'Malley 2013); and the specific techniques used by hacktivists (Sauter 2014). These studies have generated a rich literature around the phenomenon, providing ample material for further investigation in a variety of other fields. Moreover, they have helped stimulate the debate concerning a more tolerant approach to hacktivism, possibly regarded as a new form of digital social protest. The aim of this chapter is to provide an overview of the most relevant aspects of hacktivism and hacktivists following a historical perspective. The first two sections will focus on the definition and the taxonomy of the phenomenon; the third and fourth will look, respectively, at the organizational aspects and the techniques used by hacktivists, while the last section will explore the most important events and actors in the timeline of hacktivism.

Defining Hacktivism

Like any other social phenomenon, hacktivism evolves, mutates, and is influenced and triggered by events that happen around the world, as demonstrated by the history of groups such as the Critical Art Ensemble (CAE), the Electronic Disturbance Theater (EDT) (Samuel 2004), and Anonymous (Coleman 2014). Hacktivism is closely linked to hacking, and its public image has been influenced by the fortune and misfortune of hackers. Although extensively analyzed and debated, hacking remains a complicated concept that went through several changes over the years, and it is still highly disputed, especially in the computing community (Yar 2013). The term hacking was used in the 1960s to describe an innovative application of technology (especially in computer science), but it later became known as unauthorized access to someone else's computer. While the early explorations were driven by curiosity, it later became clear that several of them were carried out with malicious and criminal intent aimed at exploiting, sabotaging, or stealing from the victims' devices (Ibid.). The pop culture that promoted a romanticized idea of hackers as innovators and developers of creative solutions (Ibid.) has since shifted toward a less positive interpretation of them (Krapp 2005). Governments, law enforcement agencies, the security industry, and the mass media have started a labelling process that helped to construct hacking as a criminal phenomenon (Yar 2013). Both the criminal justice sector and the mass media have been selling the image of hacking as a systemic threat to digital media, governments, corporations, and society more broadly (Chandler 1996; Rogers 2010), prompting Yar (2013) to say that, in extremis, hackers might have been made into "folk devils." The idea of hacking as
an elite's activity is challenged by Yar (Ibid.) and Milan (2013), who affirm that automated software can nowadays do the work without requiring the person to possess great hacking skill. These characteristics have been attributed to hacktivism as well, though softened by hacktivists' operations, which have met the approval of part of the public opinion (Coleman 2014). Because hacktivism employs computer hacking techniques (Jordan and Taylor 2004, p. 2) and because part of the hacker ideology has been wholeheartedly embraced by hacktivists themselves, it is sometimes difficult to identify the limit between the two communities (Yar 2013). Given that hacktivism has been defined in many ways and conflated with similar notions like cyber activism, computer activism, electronic activism, digital activism, and electronic civil disobedience (Gillen 2012), it has become a controversial term. Some studies have widened the definition to accommodate a larger number of actions and actors, while others have narrowed it down. Supposedly, the word was coined in the mid-1990s by a member of the Cult of the Dead Cow, a group of long-term political hackers, self-identified as "hacktivists" (Samuel 2004) in the broad sense of the term. The concept was later remodeled by the media, the security industry, and academia to describe diverse forms of actions and ideologies that found a place in the online environment. For some, hacktivism is a form of electronic direct action that works toward social change by merging programming skills with critical thinking (Vegh 2003). Others use it as a synonym for malicious, destructive, and disruptive acts that undermine the security of the Internet to promote an ill-defined sociopolitical agenda (Krapp 2005). Yet others specifically associate it with the expression of political thought, free speech, human rights, and information ethics on the Internet (Vegh 2003). Even the spelling of the word hacktivism is controversial.
The most common form uses the word "hacktivism" (meaning the attempts at exploration, discovery, testing, and overcoming of technical limitations or security barriers) and tends to emphasize the technological legacy and, as a consequence, the hacking component and culture. The least common form prefers to focus on the more sociological aspects of a possibly radicalized activism, and it is more inclined to use the term "hactivism" (Krapp 2005). In comparison to previous works, this chapter narrows down the number of actions and behaviors traditionally identified as hacktivism by stressing the importance of the computer hacking component and, in particular, of hacktivists' deviant (or illegal) actions and behaviors. There are three key elements traditionally present in studies on hacktivism: (a) the need to support an ideology or cause that has its bases in sociopolitical struggles (a concept that can accommodate diverse motivations); (b) the Internet (and cyberspace) both as the necessary infrastructure that allows the activity and as the target of an attack (Milan 2015); and (c) the desire of any group or single individual to promote a sociopolitical agenda that should either lead to a change in society or preserve the status quo. Borrowing parts of Milan's (2015, p. 550), Denning's (2001, p. 241), and Samuel's (2004, p. 2) definitions, hacktivism is here outlined as: The promotion of a sociopolitical agenda usually linked (but not limited) to ideologies typical of traditional activism and applied in cyberspace through individual and collective actions, using illegal or legally ambiguous computer hacking
techniques that exploit, hinder, and disrupt the ICT infrastructure's technical features, without the use of physical violence and without gaining direct economic benefits.

In this definition, the computer hacking component is an essential element for identifying hacktivism. Computer hacking techniques are not only required as a tool for the operation (thereby excluding from hacktivism those actions that do not make use of such techniques); the hacking factor also helps to shape the ideology and the dynamics behind hacktivism itself, providing a certain set of values and a specific mental approach that is embedded in the hacker mentality. For clarity, other authors (Jordan and Taylor 2004; Samuel 2004) are less keen to treat hacking or any type of disruptive action as a necessary feature of hacktivism. Rather, they tend to leave it out of the equation, preferring to refer to "Internet technology" as a tool to foster human rights and to freely exchange information. The lack of physical violence is another component mentioned when discussing hacktivism (Samuel 2004) and, to some extent, used to mitigate its consequences. Yet, lack of physical violence does not mean absence of negative or severe consequences for the targets. Indeed, the attacks have the potential to cause extensive economic and social harm to individuals and businesses as well as governmental organizations (Holt et al. 2017), which contributes to identifying hacktivism as a threat. The controversial terminological aspects are influenced by ethical, legal, and sociopolitical elements which must be carefully weighed and used to identify hacktivism. The analysis of hacktivism faces not only a complex definition but also an intricate and challenging taxonomy, as will be shown in the next section.

Hacktivism: Taxonomy

The conceptualization of hacktivism stems directly from its definition and from hacktivists' actions and behaviors. This means that the words and elements chosen for the definition directly influence the taxonomy, excluding some actions, incorporating others, and possibly creating consequences for the legal and sociopolitical approach (Krapp 2005). There have been so many different contexts in which hacktivism was used that any online activity which involves a political element seems to deserve the label "hacktivism." This is an indiscriminate and wrong approach (Karagiannopoulos 2013). Especially in the past, a large portion of hacktivism was a form of civil disobedience (Samuel 2004), which involved disobeying and openly violating norms and laws for the sake of protesting them (Himma 2007). Recently, the situation has shifted in favor of actions that involve patriotism, vigilantism, and diverse ideologies, rather than pure civil disobedience. There are two main taxonomies that have tried to clarify the limits and aspects of the phenomenon. The first one, created by Jordan and Taylor (2004), takes a broad approach and mainly focuses on hacktivism as a form of resistance to neoliberal globalization (Samuel 2004, p. 24), dividing it into two categories: mass action hacktivism and digitally correct hacktivism. Mass action hacktivism is a
combination of politics and technology. It is the closest form to traditional mass protests taken online. It represents the promotion of electronic civil disobedience (an online form of traditional civil disobedience) carried out using virtual sit-ins and DDoS actions, and it was popular in the late 1990s and the early 2000s. The most important groups that partook in these forms of action were the Critical Art Ensemble, the Electronic Disturbance Theater, and the Electrohippies (Jordan and Taylor 2004), and to a certain extent Anonymous (Sauter 2014). Digitally correct hacktivism represents instead "the political application of hacking to the infrastructure of cyberspace" (Jordan and Taylor 2004, p. 69). It focuses on the right of any person to get access to information (Ibid.). Among the most visible groups were the Cult of the Dead Cow and Hacktivismo (an independent branch of the former). This classification takes a very broad approach to the phenomenon but represents a good starting point for further elaboration. The most successful taxonomy is likely the one elaborated by Samuel (Houghton 2010), based on the combination of types of actors (hackers, programmers, artists), methods (defacements, DDoS, data theft), and orientations (transgressive or outlaw). Samuel (2004) divides hacktivism into three categories.

[Figure: Representation of Samuel's taxonomy on hacktivism (Samuel 2004, pp. 35–38). The diagram divides hacktivism into three branches: political cracking, performative hacktivism, and political coding.]

First, political cracking is the most aggressive form of hacktivism and finds its origins in the (now almost abandoned) dichotomy between hacking and cracking, making use of website defacements, cyber trespasses, website redirects, and DDoS attacks. Second, performative hacktivism consists of legally vague actions like virtual sit-ins and website parodies. Though not necessarily illegal, virtual sit-ins are able to hinder the targeted website for a certain amount of time (from a few minutes to a few hours). Website parodies can affect a business's or governmental agency's activity by undermining, for instance, its public image. Lastly, political coding is described as the creation and development of software meant for political uses, for instance, to avoid censorship or to remain anonymous on the Internet (Samuel 2004).

Toward a New Way of Conceptualizing Hacktivism

Both taxonomies presented above were innovative and represented a substantial contribution to understanding the topic (Houghton 2010), but they are certainly not
immune to criticism and necessary updates, which will only be briefly addressed in this chapter. Jordan and Taylor's (2004) work seems to be too broad to create a clear distinction among diverse actors. Although it worked 15 years ago, when hacktivism was still in its first phase, it has since proved incomplete in its ability to address the most recent developments in the field. The two main issues are the excessive focus on the idea that hacktivism is only the emergence of popular political action taken online (activism gone electronic) and the fact that important tools traditionally used by hacktivists are left out of the analysis (Houghton 2010), thereby relegating the computer hacking component to a less important position. Samuel's (2004) taxonomy is instead more complete, partially closing the gaps left by previous works (Houghton 2010). Nevertheless, the author's decision to create a division into categories based on types of actors, methods, and the outlaw/transgressive distinction, though easy to use, once again seems to create some issues. According to the author of this chapter, only political cracking and some elements of political coding are forms of hacktivism, while much of performative hacktivism is not. First, political cracking fits the definition of hacktivism and should simply be renamed "hacktivism," also considering that the hacking/cracking distinction is not used anymore. The individuals involved in this category use computer hacking techniques and have a background that fully embraces the hacking culture. Second, political coding is on the edge between being a form of hacktivism and being a simple transposition of traditional activism to the online environment. As Samuel (2004) claims, the actors involved are the hacker-programmers. However, while the political crackers are outlaws, the coders are depicted as transgressors who tend to circumvent policies and rules without explicitly breaking them.
For political coding to be accepted as a form of hacktivism, hacktivists should create their own specific tools. These tools should be used to influence the cyberspace infrastructure, as happened, for instance, with FloodNet. Only then would the hacking component be strong enough to place political coding under hacktivism. Last, performative hacktivism is too broad. In fact, if the whole performative hacktivism category were accepted as part of hacktivism, this would mean that many sociopolitical activities done online using the Internet as a simple communication medium would become a form of hacktivism. The specific issue with this category is that the computer hacking component is weak, if not nonexistent, losing the intrinsic nature of hacktivism itself. This choice of rearranging Samuel's categories is justified by two main reasons. The first and most important is the fact that hacktivism needs the computer hacking element and is deeply influenced by the hacking culture: both are present in political cracking and partially in political coding but are lacking in performative hacktivism. Second, there has been a significant growth in the number of hackers who are using disruptive methods to promote their sociopolitical agenda, while the other categories have registered a substantial decrease. Although Samuel's (2004) taxonomy remains the most important in the field (Houghton 2010), other scholars have suggested different approaches to hacktivism. In his study on Anonymous, Asenbaum (2018) speaks of cyborg activism, focusing on the fact that the cyborg element defines the individual as a hybrid of biology and technology. For Asenbaum (Ibid., p. 1547), "cyborg activism is defined by the
continuous process of reconfiguration of the modern binaries of equality/hierarchy, reason/emotion and nihilism/idealism." Wong and Brown (2013, p. 1015) take a different angle, depicting hacktivism as a form of e-banditry. Hacktivists represent a sort of modern "Robin Hood, resisting the power that [. . .] threaten the desire to keep the Internet free [. . .] and capitalize on the Internet and other information technologies to lead disembodied, virtual attacks against physical targets in order to encourage political change." Looking at the operations launched in the last 30 years, it is clear that hacktivists have often used deviant, legally ambiguous, or completely illegal techniques to promote their sociopolitical agenda. The use of these techniques, though possibly inspired by good causes, remains criminalized in many national legislations (Gresty et al. 2005; Li 2013). At the same time, on the ethical side, hacktivism often struggles to provide an acceptable justification for its actions (Denning 2001; Samuel 2004). This happens either because it inflicts substantial damage on the victims or because the motivations adduced by the participants are irrelevant or not regarded as morally acceptable by society (Gillen 2012) and (sometimes) by other groups of hacktivists (Karagiannopoulos 2013). While Anonymous might not hesitate (as it has done) to deface a website or launch a DDoS attack, other hacktivists may consider these tactics illegitimate forms of censorship, which are not at all in line with the original spirit of hacktivism (Milan 2015).

Hacktivism, Cyberterrorism, and State-Sponsored Hacktivism: Similar but Different in Their Complexity

According to more critical voices, belonging mainly to the security industry and to governmental organizations, some hacktivists' actions are only thinly distinguished from cyberterrorism, prompting the criminal justice sector to potentially implement harsher punishments (Denning 2001; Gillen 2012; Hardy 2010; Knapp 2015; Kubitschko 2015; Manion and Goodrum 2000). On this point, Hardy (2010, p. 483) has clearly underlined how the Australian legislation on counter-terrorism could have been easily applied to attacks launched by hacktivists even though no lasting physical or economic damage had occurred. Such an approach might dangerously place on the same level phenomena that have some overlapping elements but are generally distinct. Nevertheless, the idea that some forms of hacktivism are comparable to cyberterrorism should not be dismissed too hastily. The notion of cyberterrorism does not help to bring clarity, as the term is slippery, broad, and conveniently adaptable to rhetorical discussions on a political level (Yar 2013). So far, there have not been episodes of cyberterrorism that conform to existing definitions of cyber terror. Attacks launched by cyberterrorists using malware and able to directly harm the population by hijacking trains or planes, destroying pipelines, and the like are still only hypothetical (Holt 2012). The forms of cyberterrorism encountered are mainly linked to online propaganda, recruitment, and the obtainment of financial support (Ibid.). Given these considerations, the distinction between hacktivism and cyberterrorism should be sought in the combination of three elements that differentiate the two phenomena: the
existence/lack of physical harm or at least of a severe disruption against a vital infrastructure; the presence/absence of fear in the victim of the attack and in other individuals or organizations that might be influenced by it; and the different use of a preset label to identify the nature of the action. To begin with, it is not the type of technique used for an operation that draws the line among cyberterrorism and hacktivism, but rather the effects linked to its use. Hampson (2012, p. 539) underlines how “permissible forms of hacktivism should have as their primary purpose the nonviolent communication of a coherent message, [while] forms of hacktivism that pose a threat of physical damage or violence [. . .] are better described as cybercrime or cyberterrorism.” The focus on the lack of physical violence or physical harm is particularly important (Vegh 2003). Should a person use a malware to launch a protest, and should the malware directly or indirectly physically harm another individual, then the hacktivism label would be abandoned, and, depending on the final aim, the cybercrime or terrorist one would be applied (with all the necessary legal consequences). Nevertheless, Holt (2012) argues that given cyberspace, the physical harm would not be necessary to identify a terrorist action. A disruption able to severely disable, hinder, or cripple financial services, power grids, or of any other critical infrastructure would probably end up under the terrorism label, especially if it creates panic or fear in its targets. Fear is intrinsic in terrorism. Terrorists seek a political change by coercing society and public institutions with threats and by generating fear and/or harm with violent actions (Holt 2012; Samuel 2004). Hacktivists, instead, refuse the use of physical violence and do not aim to create fear in their targets (Tanczer 2017), with some of them even claiming that any form of disruption should not be tolerated (Manion and Goodrum 2000). 
Their goal is to obtain visibility, spread a message, and possibly gain more consensus among the population (Himma 2007) in order to promote a change in society. The last point is more disputable, since it merges the labelling process, self-perception, and political considerations. Vegh (2003) criticizes the fact that the distinction between hacktivism and cyberterrorism has become more blurred because of media and governments' interests. They attempted (and often still do) to push hacktivism into the field of terrorism in order to gain more power and capabilities to respond to any type of illegal cyber action (Manion and Goodrum 2000). This decision is in line with the "information warfare" policies promoted in the early 2000s, especially among Western governments (Vegh 2003), but later adopted by other states. The perception issue is heavily influenced by the political situation of a country. To use a parallelism taken from terrorism studies, in the eyes of a hacktivist these operations are legitimate forms of protest, while in the view of the condemner (usually the state) they are unjust and unjustifiable forms of criminal activity or terrorism. It is quite similar to the saying, "one man's terrorist is another man's freedom fighter." Schmid (2004, p. 397) is particularly critical of this concept, stating that "its moral relativism is highly unsatisfactory from an ethical and intellectual point of view." Schmid's assumption is correct, but he has to admit that it is difficult to escape this relativism, though it is necessary and worthwhile to try (Ibid.). A couple of examples can help to clarify the point. The first concerns a group of hackers that goes under the name of United Cyber Caliphate. The United Cyber Caliphate has hit the news for cyberattacks that mainly involved website defacements and online propaganda. The group's operations have never resulted in physical harm or violence, but its affiliates have always openly supported the Islamic State with messages and statements (Schori Liang 2017). If to a certain extent the Caliphate's actions could be considered forms of hacktivism, the fact that its members clearly and directly support an officially recognized terrorist organization (List of designated terrorist groups n.d.) and that they incite other people to commit criminal acts makes them terrorists as well. This is the approach taken in several national and international legislations. Nevertheless, the knot is even more difficult to untie, as proven by several Kurdish groups. Kurdish hackers have been involved in many cyberattacks against Turkish websites to demand political and territorial recognition of Kurdistan. Once again, their defacements did not seem to create physical harm or fear among the Turkish population, but they certainly called for a significant change in public institutions. While supporting the Islamic State is internationally recognized as a form of terrorism, in the case of the pro-Kurdistan hackers it is mainly the Turkish government that labels them as cyberterrorists (List of designated terrorist groups n.d.). These two examples illustrate how a careful analysis is always required to correctly place an action under one or the other banner. It is not only a technical and definitional problem, since the political component heavily influences the labelling process and its connected reactions. In both cases there is nothing close to the dramatic picture of cyberterrorism drawn by the security industry and by governments. But in the first case the international community easily recognizes the support as a form of terrorism, while in the second the terrorist label is applied by one government only, certainly under the influence of clear political implications.
As a final remark, there is a particular form of hacktivism, known as state-sponsored hacktivism, which represents an intricate "interaction between hacktivists that voluntarily embrace the national cause and governments that subsidize, support and motivate them to hack" (Romagna in press). The main problem with state-sponsored hacktivism lies in the difficulty of drawing a line between spontaneous hacktivists' actions and hacktivists' operations launched with massive support from governmental resources. Once again, this is not only an academic exercise. It would not be surprising if, in the near future, governments were to employ highly motivated patriotic hacktivists for campaigns that have more in common with cyber warfare than with pure hacktivism. This approach would create several problems of attribution at the international level, as hacktivists would fall under the category of non-state actors, requiring a different analysis that leaves considerable space for legal interpretation (Mačák 2016).

Organizational Aspects of Hacktivists

As seen in the definition provided above, hacktivism can either stem from a single person, thanks to the great visibility that the Internet can provide to lone actors (Milan 2015), or be the result of collective action (Samuel 2004). Hacktivists' operations are traditionally and more commonly linked to group or network actions, which provide better opportunities and likely higher chances of success (Jordan and Taylor 2004; Milan 2015; Samuel 2004). On this point, Milan (2013, p. 89) argues that


the group's identity (the "we"), supported by a feeling of belonging and the sharing of the same ideas and values, is the product of various self-contained "I's" which are self-sufficient and technically independent. Tasks within the group are assigned based on technical skills, and knowledge tends to be shared among members (Ibid.). Group members remain autonomous but normally agree to give away part of this freedom – how to proceed, which targets to select, what kinds of messages to publish – for the benefit of the collectivity (Samuel 2004). In hacktivist teams, "individuals become functional to the group they are part of, [while] the group [itself] attributes meaning to the individual" (Milan 2013, p. 79). This feeling of belonging to a group, sharing the same identity, and being at the same level are important aspects of hacktivism, especially when considering the traditional refusal (typical of the hacker subculture) of a top-down, hierarchical organization (Asenbaum 2018). Hacktivism seems to adopt a horizontal, networked approach which emphasizes the autonomy and the role of the individual within the group, in a manner that mirrors the nodal structure of cyberspace (Milan 2013). This situation has been observed several times, for instance among the core members of Anonymous/LulzSec (Coleman 2014) and among other groups both in the past and in more recent times (Romagna in press). Yet, though this horizontal structure seems to be preferred, it should not be idealized. While there is evidence of leaderless groups that use democratic procedures to take decisions, there are also strong personalities that tend to take a leading role within the team (Milan 2013). This is particularly true of small teams (Romagna in press), usually made up of 10–20 individuals with sufficient or extensive computer hacking skills, which tend either to have a leader (normally the creator of the group) or a figure that represents a role model for the others.
Computer and hacking skills are central to hacktivism, even though some scholars (Milan 2015) argue that this knowledge has lost importance in recent years, since new software makes it easier for people without particular IT skills to engage in disruptive actions. Examples are the FloodNet tool used by the supporters of the Electronic Disturbance Theater (Samuel 2004) and the Low Orbit Ion Cannon (LOIC) software run by Anonymous' members during Operation Payback (Coleman 2014; Olson 2013). Although this thesis is correct when considering the average Internet user, it fails to address the other half of hacktivism, characterized by hackers or at least by enthusiastic would-be hackers. Operations like the ones launched by the Electronic Disturbance Theater or Anonymous would have been impossible without the work of the programmers and hackers who created these tools. To clarify, this last argument does not diminish the importance of participants who lack technical skills, but it points out that the essence of hacktivism cannot be completely understood without considering the large role played by hackers' culture, ideologies, and skills (Romagna in press).

Techniques Used by Hacktivists

Hacktivists have made use of different techniques to promote their agendas. Many of them are illegal (Karagiannopoulos 2013). The message itself is sometimes unclear and difficult to place within the boundaries of traditional activism, creating a complex situation that pushes hacktivism toward pure criminal action rather than a tolerable or justifiable one. Some of the techniques used by hacktivists might find legal justification if applied in a less aggressive way (Sauter 2014). Others will likely never lose their criminal component and will therefore remain a threat, given the consequences that come with their use.

Denial of Service (DoS), Distributed Denial of Service (DDoS), and Virtual Sit-Ins

Denial-of-service and distributed denial-of-service attacks are among the least technically advanced methods used by hacktivists to promote their sociopolitical agenda (Karagiannopoulos 2013). Because of their simplicity, they have been and still are highly popular, especially among Anonymous' members. DoS and DDoS are attacks that overwhelm the targeted device, website, or server by bombarding it with requests, forcing it to slow down or crash (O'Malley 2013). When the attack comes from one source only, it is called a DoS; when it comes from multiple sources, it becomes a DDoS. Both techniques are normally used to hinder web servers, but they can disable any other type of computer system, causing mild to severe (though normally temporary) consequences for the target (Beck 2016). DDoS attacks can be particularly effective as they use multiple computer systems to coordinate a simultaneous action, avoiding the problem of a single server being shut off or locked out during the operation (O'Malley 2013). Sauter (2014) notices that, even when motivated by sociopolitical reasons, DoS and DDoS are clearly labelled by the mass media, the criminal justice sector, and public opinion as purely criminal activities with no acceptable justification. The use of the term "attack" instead of "action" seems to prove this point (Ibid.). Hacktivists have tried to disengage from this framing (Dominguez 2008) by using a different terminology to identify forms of electronic civil disobedience that make use of DDoS actions: the virtual sit-ins. Virtual sit-ins are regarded as a sort of cyber equivalent of street protest methods (Samuel 2004) in which the social element of the protest becomes more important than the technological component (Taylor 2005). It is necessary to underline that a long tradition in the academic literature seems to consider virtual sit-ins and DDoS as two separate phenomena.
For O'Malley (2013, p. 142), a virtual sit-in is only "similar to a low level DoS attack [...and] by definition motivated by a social or political sentiment." Both these approaches, if not wrong, appear to need further clarification in order to avoid possible future misconceptions. First, a virtual sit-in always and necessarily consists of a DDoS attack, because its aim is to block or hinder a certain target using the combined action of multiple devices. Second, history has proven that virtual sit-ins are not necessarily low-level attacks, as shown by the EDT (Samuel 2004) and by Anonymous (Sauter 2013). Virtual sit-ins can be divided into three categories with different levels of responsibility and direct involvement for the participants. The first, which is the simplest, requires the participation of hundreds or, better, thousands of protesters who rapidly, manually, and individually reload a specific web page on a targeted server, overloading it (Samuel 2004). This is the only technique that requires the synchronized action of thousands of people without involving any hacking skills. It is also the original form of sit-in as theorized by the EDT, where organizers and participants would accept direct responsibility for their digital blockade (O'Malley 2013), in line with traditional civil disobedience (Dominguez 2008). The first category of virtual sit-ins became less effective as website traffic infrastructure improved (Karagiannopoulos 2013). Nowadays, only small websites would be unable to handle the traffic generated by this technique (assuming that enough people participated). The second category is obtained not through manual reloading but by using specific software pre-installed on the user's device. The use of software to hinder a computer network has regularly been punished, as the history of the LOIC used by Anonymous' affiliates shows (Olson 2013). The LOIC software (and FloodNet before it) was downloaded by thousands of Anons (the name used by several Anonymous affiliates to identify their group membership) in a clear collective struggle to crash web servers. The users simply had to click the button and let the software continuously reload the targeted IP address. Project Chanology and Operation Payback were likely the two most successful operations launched by Anonymous using the LOIC (Sauter 2014). Nevertheless, Operation Payback represented a step further, both technically and legally. In fact, what almost none of the participants knew was that the people who coordinated the attack had asked for the support of two relatively large botnets, without which the operation would not have been successful (Olson 2013). The third and last category makes use of voluntary botnets to launch the attacks.
This means that a person offers his or her device to voluntarily become part of a botnet. Employing botnets to launch a DDoS attack, no matter whether it happens with or without the participants' consent, represents a violation of the criminal law, making the whole operation immediately illegal. Although Anonymous has been the group under the spotlight, many other teams of hacktivists have launched DDoS attacks to promote their ideas and ideologies, often with the support of botnets. Due to the particular way in which individuals participate in virtual sit-ins, there might be room for the criminal justice sector to tolerate some actions (especially the first category) as legitimate forms of democratic protest protected by freedom of expression (O'Malley 2013). This would be in line with an ethical and political approach that finds its roots in the theoretical structure of nineteenth-century civil disobedience (Dominguez 2008). Nevertheless, if we exclude the 2001 Lufthansa case, in which the German judge decided in favor of the hacktivists and considered that specific DDoS a legitimate form of protest (Karagiannopoulos 2013; Sauter 2014), virtual sit-ins have always been punished. Vegh (2003, pp. 184–185) reports that several "politically minded hackers," especially those belonging to the first wave that merged hacking culture and politics, like the members of the Cult of the Dead Cow (cDc), did not (and still do not) see this type of hacktivism as a positive use of technology to advance human rights. On the contrary, even the apparently less invasive virtual sit-ins are considered immoral and aggressive methods that block the victims' freedom of speech (Vegh 2003; Deseriis 2016). Using DDoS attacks as a form of protest has been negatively identified as "slacktivism" or "clicktivism" (Sauter 2014). Participants are perceived as lazy and uncommitted, especially when compared to street demonstrators (Vegh 2003). Nevertheless, there are less critical voices: Sauter (2014) and Beck (2016) argue that DDoS attacks have a "new nomadic potential [...] necessary to challenge and change the structures of corporate and statist oppressive regimes" (Ibid., p. 347), though with the necessary adjustments (Karanasiou 2014) and possibly excluding the use of non-voluntary botnets. Virtual sit-ins, and more generally DDoS attacks, have occupied an important position in the history of hacktivism. The former were predominant in the early stages of the phenomenon, from the mid-1990s to the early 2000s, though they were also used by Anonymous in the late 2000s (Coleman 2014). The latter are still common today, as seen in the aftermath of the arrest of WikiLeaks' founder Julian Assange, when the OpEcuador campaign seemed to have triggered up to 40 million DDoS attacks against Ecuadorian websites in just a few weeks (Engineering and Technology 2019).
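The mechanism underlying DoS and DDoS attacks can be illustrated with a harmless local sketch: a toy queueing model, with no networking involved, in which all rates, capacities, and limits are hypothetical numbers chosen for illustration. It shows why a server that comfortably absorbs one source's requests is overwhelmed when many individually modest sources act simultaneously.

```python
# Toy queueing model of why distributed request floods overwhelm a server.
# All numbers (capacity, rates, queue limit) are hypothetical, for illustration only.

def simulate_server(request_rates, capacity_per_tick, queue_limit, ticks=100):
    """Each tick, every source adds its requests to the queue; the server
    serves up to `capacity_per_tick` requests; backlog beyond `queue_limit`
    is dropped (i.e., the service is effectively unavailable to those requests)."""
    queue, dropped = 0, 0
    for _ in range(ticks):
        queue += sum(request_rates)             # arrivals from all sources
        queue -= min(queue, capacity_per_tick)  # requests actually served
        if queue > queue_limit:
            dropped += queue - queue_limit      # overflow: requests lost
            queue = queue_limit
    return dropped

# A single source sending 50 requests/tick: the server (capacity 100) keeps up.
single = simulate_server([50], capacity_per_tick=100, queue_limit=500)

# 200 sources sending only 5 requests/tick each (1,000/tick in total):
# individually modest, collectively crushing. This is the essence of a DDoS.
distributed = simulate_server([5] * 200, capacity_per_tick=100, queue_limit=500)

print(single, distributed)  # the single source drops nothing; the distributed flood drops heavily
```

The model also captures why blocking any one distributed source barely helps: removing a single entry from `request_rates` reduces the arrival rate only marginally, which is precisely the resilience that O'Malley (2013) attributes to DDoS over DoS.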

Cyber Trespass

Cyber trespass is the first step for many other actions and consists of accessing a computer or any other device or networked system in order to obtain control of it, to steal information, or to install malicious software. Due to its clear illegality, it was negatively received among the first hacktivists of the 1990s, often causing disapproval in public opinion (Samuel 2004). Although cyber trespass increases the risk of punishment, it has become an important and invaluable tool in the hacktivists' repertoire. It has found particular favor among small groups of hacktivists with good hacking skills.

Website Defacements

Website defacements have been part of the hacktivists' repertoire since the 1990s and consist of accessing a web server (using techniques such as SQL injection, file inclusion, brute-force attacks, URL poisoning, online social engineering, and so on) and "replacing a web page with a new page bearing [a socio-]political message" (Samuel 2004, p. 8). Defacements enable hacktivists to post messages and images in which they state their ideologies and beliefs while simultaneously gaining status within the hacktivist and hacker communities (Holt 2011). One of the first recorded defacements took place in 1996 in protest of the US Communications Decency Act (CDA). A group of hacktivists displayed on the homepage of the US Department of Justice the words "Department of Injustice" combined with pornographic images (Denning 2015). From that moment on, hacktivists found in website defacements a privileged form of protest: easy to use, perfect for reaching thousands of people, and adaptable to any needs and causes (Holt 2011). Another notable example was the operation launched by Anonymous in 2012 that involved the defacement of many Chinese websites with messages against the Chinese government's censorship and control of citizens (O'Malley 2013). Website defacements have been compared to street graffiti (Samuel 2004). Yet they can achieve greater visibility if the target is a well-known website.

Data Theft/Leak

Data and information theft has become another important tool of protest for hacktivists, especially after the rise of Anonymous. The collective has often caught public attention using these techniques, as in the 2011 HBGary Federal case. HBGary was a company that worked for the American government and whose CEO, Aaron Barr, claimed to know who Anonymous' members were (Olson 2013). This statement prompted a reaction by several Anons, who accessed Barr's devices, locked him out, and destroyed his reputation by publishing confidential information and communications (Olson 2013; O'Malley 2013). Another major actor that employs data leaks is WikiLeaks (Sauter 2014). WikiLeaks belongs to the part of hacktivism that mainly supports freedom of information. Its aim is to make governments open and accountable by suggesting constructive alternatives to capitalist relations through collaborative networks based on peer production (Karatzogianni 2015; Wong and Brown 2013). As previously argued by Tanczer (2017), however, WikiLeaks does not seem to completely satisfy the criteria used in this chapter to identify a hacktivist organization. Its operations are focused on collecting and divulging leaked information hacked by someone else, rather than on conducting its own hacks. Indeed, WikiLeaks plays the part of a media agency that uses the Internet simply as a communication tool, often lacking the hacking component. Data theft has traditionally been considered an illegal and unacceptable form of protest and has often taken the shape of vigilante hacktivism rather than a sociopolitical one. Cause-based hacktivists have often employed this technique, for instance during operations against alleged pedophiles or terrorists. Yet they have been harshly criticized, since they arrogated to themselves a power that belongs only to the state, possibly endangering the lives of relatives, friends, and acquaintances of their targets (Delmas 2018).

Website Redirect

A website redirect is a technique that normally involves two phases: the (usually illegal) access to a website and the modification of its address. If visitors type that specific address, they will land on another, tailor-made website different from the original (Samuel 2004). This technique gained popularity among hacktivists in the 1990s and early 2000s, but has since lost part of its appeal given companies' improved cybersecurity. The alternative websites normally display a critical message against the original website with an explanation of why the user was redirected (Hampson 2012). One relevant example happened in June 2000, when visitors who tried to connect to the Nike.com website were redirected to the webpage of s11, an anti-globalization group (Samuel 2004).
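At the protocol level, a redirect rests on a simple mechanism: the server answers a request with a 3xx status code and a `Location` header, and the browser silently fetches the new address. The following self-contained sketch runs entirely on localhost; the paths and message are hypothetical, and the illegal-access phase described above (the part that makes hacktivist redirects criminal) is not shown, only the redirect mechanism itself.

```python
# Minimal demonstration of the HTTP redirect mechanism on localhost: the server
# answers with a 3xx status and a Location header, and the client follows it.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/original":
            self.send_response(301)                   # "moved permanently"
            self.send_header("Location", "/protest")  # where the visitor lands instead
            self.end_headers()
        else:
            body = b"You have been redirected."
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/original") as resp:
    body = resp.read().decode()  # urlopen follows the 301 automatically
server.shutdown()
print(body)
```

In a real incident the redirect is typically planted by editing the compromised site's pages or server configuration rather than by running a new server, but the visitor-facing effect is the same: a request for the original address transparently delivers the protesters' page.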

The Evolution of Hacktivism Through Actors and Actions

The history of hacktivism can be divided into three phases, following the growing recognition among hacktivists and hackers of cyberspace as an arena for sociopolitical action (Milan 2015). The phases should not be interpreted as closed compartments with no points of contact with each other. Rather, they should be understood as chronological tools that help clarify and discuss the historical development of hacktivism. In fact, some actors or ideologies spanned multiple phases but were truly influential in one period only. One example is the EDT, still active today but mostly known for its operations in the 1990s. The chronological division presented here therefore tries to merge more than 35 years of different features and actors, and it represents the best approach according to the author of this chapter. Karagiannopoulos (2018) and Milan (2015) prefer to split hacktivism's history into two periods, marking the dividing point with the appearance of Anonymous. Though this is a possible solution, given how Anonymous evolved and influenced later groups, it seems better to adopt a three-period framework justified by more recent changes in society, especially outside Western countries. The first phase represents the start, when traditional ideologies and values of activism met the Internet for the first time. The second phase is dominated by Anonymous, which positioned itself as a new heterogeneous force interested in picking any type of fight so long as its members saw it as a response to injustice. The last phase is a development of the second, with a massive and amplified use of online social network platforms and a prominent position of new groups that often represent cultures different from the Western one (which had long been predominant on the Internet). The first phase was characterized by an activist approach usually linked to left-wing ideologies (Samuel 2004).
These activists, who were experiencing a new technology for the first time, met a computer hacking community inspired by the original hacker mentality: experimental and curious rather than disruptive (Jordan and Taylor 2004). While activists were looking for new methods to push forward their political agenda, hackers were discovering that hacking could be used to achieve goals other than the simple manipulation of computers and software (Samuel 2004). This first period should actually be split into two sub-phases. The pre-web sub-phase covers the beginning of hacktivism from the 1980s to the early 1990s. The web sub-phase starts with the creation of the web, when the links between physical space and cyberspace were becoming real (Milan 2015, p. 552). This first phase ends in the early years of the new millennium, with the evolution of the web into the more interactive Web 2.0 (Romagna in press).


The second phase can be identified with the popularization and increased visibility of hacktivism, thanks to groups such as Anonymous (Karagiannopoulos 2018; Milan 2015). The "Anonymous era" spans from 2004 to 2012 and coincides with the apex reached by Anonymous in the period 2010–2012. Anonymous transformed hacktivism from marginal skirmishes launched by sporadic cell-based groups of techno-savvies into transnational and decentralized networks of hackers who try to "intervene regularly into the real-world struggles" (Milan 2015, p. 554). In this period there were other active groups, but none of them came even close to the popularity and media attention received by Anonymous. The last phase started in 2013 and represents the current phase of hacktivism. One of the most relevant episodes in the history of hacktivism took place on 5 November 2013: the Million Mask March promoted by Anonymous. The march involved dozens of demonstrations in many cities around the globe, with hundreds of people wearing Guy Fawkes masks (Coleman 2014). Phase 3 has witnessed the rise of several small groups of hacktivists, especially in those countries affected by the 2010–2012 turmoil of the "Arab Spring." Another important element was the fast growth of online social networks outside Western nations, which allowed the hacktivist community to increase its reach and spread its ideologies. At the same time, the fluid nature of Anonymous (Coleman 2014), as seen in phase 2, stimulated the birth of several independent groups that were, and are, focused on specific issues rather than on complex political ideologies. This particular situation helped develop many cause-based hacktivist cells, which work independently or in small networks and which are typical of the new era of hacktivism. This does not mean that such cells did not exist in the previous phases; rather, they are now much more visible and have become the most relevant actors in this phenomenon.

Phase 1: From the 1980s to the Mid-2000s – From the Origins of the Hacker-Activist Spirit to the Affirmation of Hacktivism

Hacktivism originated in the early 1980s, when hackers were mainly focused on values of the hacking subculture such as freedom of intellectual property and information, unlimited access to computers and data, and open-source software (Sorell 2015), while simultaneously starting to explore more traditional political ideologies, as the German Chaos Computer Club did (Jordan and Taylor 2004). At the same time, traditional activist groups identified the PC and the Internet as a new tool and space to promote their sociopolitical agenda. The first recorded episode of hacktivism was carried out using malware known as WANK (Worms Against Nuclear Killers). WANK was deployed in 1989 by a group of hacker-activists based in Australia to protest against NASA and the US Department of Energy over the launch of a shuttle carrying radioactive plutonium (Denning 2015). Though clearly political in its intent, the message still represented an isolated case. The initial separation of hacktivism from hacking and the subsequent embrace of more traditional sociopolitical values became a reality only in the mid-1990s with the genesis of a politically motivated hacking movement (Jordan and Taylor 2004).


The Internet massively contributed to the evolution of hacktivism and of the hacktivists' mentality. It gave them more visibility and increased the chances of a higher impact. It also created interaction among individuals beyond national and physical borders, often facilitating communication and organization (Milan 2013). The first political DDoS attack was launched by the Zippies in 1994, targeting thousands of e-mail accounts in the United Kingdom in protest of a bill that would have outlawed outdoor dance festivals (Denning 2015). Just months later another hacktivist group, the Strano Network, targeted the French government over its nuclear and social policies, confirming the growing prevalence of political ideals over traditional hacker values (Ibid.). Since the mid-1990s, it has been possible to recognize two main courses of action that shaped the direction of hacktivism (Romagna in press). The first took inspiration from sociopolitical ideologies typical of traditional left-wing movements, connected with alternative or anti-globalization movements (Samuel 2004), which represent the purest form of hacktivism. The second leads toward regional and international geopolitical tensions and conflicts, which imply the support of patriotic and nationalistic forms of hacktivism (Karatzogianni 2015). Simultaneously, other hackers started developing a different approach, less sociopolitical and more linked to specific causes (like vigilantism against pedophiles or counter-terrorism), falling under what some scholars defined as cause-based hacking (Holt 2011). The new wave of cyber protests of the mid-1990s was mainly amplified by two groups that largely influenced the ideology and techniques used by future hacktivists. These groups were the Critical Art Ensemble (CAE) and the Electronic Disturbance Theater (Samuel 2004).
Although CAE struggled to meet the criteria of a hacktivist group, it had a relevant impact and influence on the history of this phenomenon. CAE explored the intersections between art, technology, and political activism, and it helped theorize the concept of electronic civil disobedience. CAE’s main idea posited a strong connection between hacking expertise and electronic civil disobedience: hacking should be used to counterbalance the imbalance of power held by the establishment and to broaden the political effect of civil disobedience (Jordan and Taylor 2004, p. 71). This approach created the foundations for a form of hacktivism in which the hacking component is particularly important, if not essential (Romagna in press). The Electronic Disturbance Theater, by contrast, is a collective of hacktivists created in 1997 by Ricardo Dominguez, a former member of CAE. The approach taken by EDT was quite different from the idea developed by CAE. Though EDT’s members were hackers themselves, they believed that a sociopolitical cause would be better supported, also from a moral perspective, by a large number of people with no knowledge of computer hacking. According to Dominguez, this approach was to be preferred over a small group of skilled hacker-activists pursuing a technologically advanced form of protest (Jordan and Taylor 2004). One of EDT’s most successful operations was the 1997 support for the Zapatista movement in Chiapas (Mexico), which aligns itself with alternative globalization and anti-neoliberal social movements, seeking more control over the indigenous

36

Hacktivism: Conceptualization, Techniques, and Historical View

761

population’s local resources (Jordan 2002). During this operation, allegedly thousands of people used FloodNet, a tool that automatically loaded and reloaded the target’s web page on the computer of every participant, causing the site or network to slow down or crash (Samuel 2004, p. 77). EDT was interested in claiming the online space as an arena for political protest, equivalent in its social, cultural, and legal dimensions to the physical one (Sauter 2014, p. 43). It had a clear political agenda aimed at involving the general public, and it interpreted its actions as a way of challenging governments and corporations in line with a tradition belonging to left-wing activism (Samuel 2004). The virtual sit-ins held to protest against the Mexican government or the Pentagon are clear examples of the desire to use nondisruptive tools similar to the ones adopted for street protests, in clear contrast with other groups that preferred to exploit less legal hacking techniques (Ibid.). EDT’s campaigns did not go unnoticed, especially in the hacking community, where they gained criticism rather than appreciation. Several hackers claimed that the use of FloodNet was undemocratic, disruptive, and not in line with the hacking spirit (Ibid.). Nevertheless, EDT’s members replied to these criticisms by adopting a no-anonymity policy and taking personal responsibility (and risk), as any activist would have done during street protests (Romagna in press). Other hacktivist groups clearly did not adopt the no-anonymity policy, which came as no surprise. Anonymity has proven to be a strength for hacktivists, though it has had both merits and demerits. It helped to protect them and to shape their legend, especially among enthusiastic young techno-savvies, but it also contributed to creating a confusing image of hacktivism, undermining public trust (Fish and Follis 2016). FloodNet was not used only by EDT.
The Electrohippies employed it during the 1999 World Trade Organization meeting in Seattle (Samuel 2004). The operations of the late 1990s are characterized by the need to pair digital actions with street ones, and to a certain extent this pairing would appear again with Anonymous. Though DDoS actions and virtual sit-ins had the advantage of mobilizing many individuals by making them active actors, other tools proved to have the same if not a higher level of efficacy in promoting the hacktivists’ agenda (Romagna in press). In 1998, the group Milw0rm accessed the computers of the Bhabha Atomic Research Centre (BARC) in Mumbai and posted an antinuclear weapons agenda and peace messages. One month later they hacked the British web hosting company Easyspace, inserting the antinuclear mushroom cloud message on more than 300 of Easyspace’s websites (Denning 2001). Alongside ideologies typical of traditional activism, the end of the 1990s and beginning of the 2000s witnessed the development of cause-based hacking (Holt 2011), particularly for patriotic and religious reasons. There have been several examples of operations conducted to promote national identity, to intervene during regional or international geopolitical tensions (Samuel 2004), or even to support ethno-religious ideologies (Karatzogianni 2005). Denning (2001, p. 272) shows how operations can also be motivated by internal conflicts, such as the “Free East Timor” action in September 1998, when 40 Indonesian websites were defaced to denounce the violation of human rights by the Indonesian government (Ibid.).


Online skirmishes became a common ground for the development of patriotic hacktivism, generally mirroring tensions or even serious confrontations happening in the physical space (Romagna in press). During the Kosovo conflict, pro-Kosovo hacktivists started a series of defacements leaving messages that praised a “Free Kosovo,” while opponents belonging to the nationalistic hacktivist group the Serb Black Hands engaged in cyberattacks against NATO’s computer networks (Denning 2001). Threats and incidents have occurred on a daily basis among hacktivists supporting opposite sides of the Israeli-Palestinian conflict and of the Indo-Pakistani tensions (Samuel 2004). Both cases are based on ethno-religious motivations, as proven by the written messages, or by the national flags, left on the defaced websites (Karatzogianni 2015). When analyzing hacktivism for patriotic reasons, it is necessary to carefully distinguish whether an action was actually done by hacktivists or was a form of state-sponsored hacktivism. A controversial example was the large operation launched in 2001 against more than 1,400 American websites by over 140 Chinese hacktivist groups (among which were the notorious Honker Union of China and China Eagle) following the collision between a US Navy surveillance plane and a Chinese fighter jet (Denning 2015).

Phase 2: From the Mid-2000s to 2012 – Anonymous Era and Rise of the Hackers

Although the 9/11 attacks happened at the beginning of the New Millennium, their consequences had an immense impact in the following years, including on hacktivism. The wars and geopolitical tensions that flared up particularly in the Middle East (Karatzogianni 2015) were recognizable already during the 2003 Iraq war, when Iraqi hacktivists allegedly used malware and launched thousands of defacements and DDoS attacks in the first weeks of the conflict. The aftermath of 9/11 led to an escalation of cyberattacks between anti-war and pro-war hacktivists and between pro-Islamist and anti-Islamist hacktivists (Ibid.), which remain relevant today. Episodes of hacktivism became more common in the New Millennium, with the evolution of new tactics, the larger deployment of computer hacking techniques, and the development of greater transnational networks (Milan 2015). Several factors contributed to this evolution, and at least three were particularly relevant for hacktivism. First, the evolution of the web into Web 2.0 allowed a higher level of interaction between users, who transformed from passive receivers into active content creators. Second, this increasing amount of shared information enriched one of the pillars of hacker ideology, “information wants to be free.” The freedom of information was fostered by the fact that the Internet offered a relatively safer environment in which to show political dissatisfaction and promote a sociopolitical agenda without directly exposing the individual, who could make use of a different identity. The ability to create content and to share values and ideologies influenced the sociopolitical debate, and emerging new ideas were combined with recent technological developments, allowing more people to join the debate.
The third factor was the millions of people, especially youngsters in developing countries, who gained access
to once-unaffordable ICT technologies and exploited new ways of using them (Romagna in press). In this period, hacktivism more often became the object of negative attention, especially from the mass media (Klein 2015). Usually, the problem was not the goal or the motivation but the means used by hacktivists to reach their target (Samuel 2004). This new form of hacktivism was perfectly represented by Anonymous, though it should be clarified that the collective has always represented an anomaly rather than the rule in the hacktivist landscape. Instead of the typical small, highly organized groups of hacktivists, Anonymous showed the ability to engage with thousands of people (Coleman 2014), creating a phenomenon that almost (if not completely) reached the dimension of a social movement (Coleman 2012a). Considering the number of participants that took part in its actions, its longevity, its incredible mutability, and its heterogeneous support of many diverse causes, Anonymous would deserve a chapter of its own. The first indications of something that could vaguely resemble Anonymous surfaced in 2004 on the 4chan forum platform (Klein 2015). Initially, Anonymous was not a collective but rather a sort of mental state, a feeling of belonging, and a melting pot of pranks, of Internet subcultures’ values, and of fuzzy sociopolitical ideologies (Coleman 2014). Anonymous is “one of the most extensive movements to have arisen almost directly from certain quarters of the Internet, [. . .] part digital direct action, part human rights technology activism, and part performance spectacle” (Coleman 2012a, p. 210). The direct sociopolitical actions (though not perceived as such by many participants) began only in 2008, when Project Chanology was launched against the Church of Scientology (Olson 2013).
Anonymous’ affiliates engaged in dozens of operations in the following years, selecting the most disparate targets (Coleman 2014) and proving to be a multiform actor, very difficult to classify or to tie to a specific cause (Klein 2015). By the end of 2010, Anonymous was no longer only an idea but an army (Coleman 2012b). Some of these operations (shortened among participants to “Ops”) took place only on the Internet, like OpBart and OpSony in 2011 and OpRussia in 2012. Others were instead organized to support street protests, as in the case of OpTunisia in 2011 and OpQuebec in 2012 (Olson 2013). A key to the success of Anonymous was its peculiar ability to attract and manipulate the media coverage surrounding its raids (Sauter 2014, p. 60). Anonymous was able to read the feelings of the Internet community, attracting new people to its cause and exploring different techniques to promote its agenda: from traditional DDoS (Sauter 2014) and website defacements to the infamous data leaks and more invasive attacks against ICT system networks (Olson 2013). The growth of the Internet increased the sense of empowerment among individuals (Milan 2015) and especially among hacktivists. While street protests have always depended on a high turnout, small but well-equipped groups of hacktivists could be extremely effective and achieve higher visibility through carefully planned cyber operations, even though the result would not necessarily produce a change in society (Denning 2001). It would be incorrect to believe that hacktivism in this second phase was limited to Anonymous and WikiLeaks (Romagna in press). Certainly, both gained a lot of media attention because they targeted Western organizations. However, many other groups transitioned
from phase 1 into phase 2 or were created ex novo in this period. Although there were changes in members and goals, teams such as the Honker Union of China and China Eagle remained highly operative in phase 2, the former with attacks against Vietnamese, Philippine, and Japanese targets (Yip and Webber 2011) and the latter engaging in skirmishes with American hacktivists (Denning 2015). Most of the new groups in the hacktivist scene had strong connections with patriotic and ethno-religious ideologies (Denning 2015). An example is the 2005 defacement campaign launched by Turkish hacktivists to protest the publication by a Danish newspaper of a cartoon perceived as an insult to Islam (Holt 2011). The image featured the prophet Muhammad with a bomb in his turban and was perceived as highly disrespectful by the Muslim community. Thousands of websites were disrupted by Turkish hackers in response to the publication, revealing a growing cultural struggle that found fertile ground on the Internet (Ibid.). As noticed by Karatzogianni (2015), the religious component gained more attention in the hacktivist community and would continue to grow in the next phase. This trend is perfectly in line with the geopolitical and cultural situation that has characterized the first 20 years of the twenty-first century. These geopolitical issues also influenced the development of state-sponsored hacktivism, which became another important topic, especially during the cyberattacks against Estonia in 2007 and the Russo-Georgian war in 2008 (Ibid.). It was in this phase that LulzSec made its first appearance, with operations against Sony Pictures and News Corporation among others (Milan 2015). Though known for engaging for the “lulz” (a slang derivative of “laughing out loud” popular in Internet communities), LulzSec did not disdain political action, or at least embedded some forms of protest in its attacks (Olson 2013).
Considering that the group lasted slightly more than 50 days before disbanding (Ibid.), it managed to obtain visibility and influenced future teams that still use that name. Overall, phase 2 was definitely shaped by Anonymous, which had the merit (or demerit) of inspiring the creation of several new cells of hacktivists that appeared just after the fall of LulzSec. In particular, it encouraged forms of vigilante and cause-based hacktivism, which would find fertile ground in the second decade of the New Millennium (Romagna in press).

Phase 3: From 2012 to Present Days – Toward New Forms of Hacktivism

The arrest of LulzSec’s members was a shock within the Anonymous environment (Olson 2013). Nevertheless, Anonymous probably became stronger, because the realization that its members were not as untouchable as they had believed prompted the whole collective to go through a process of diversification that resulted in new independent sections (Romagna in press). Examples include Anon Philippines, which engaged in several attacks against Chinese and Filipino websites; Anonymous Venezuela, which was involved in protesting against the internal political situation of Venezuela; and the recent actions of Anonymous Catalunya, which targeted several
Spanish websites to support the Catalan protests against the government in Madrid. On some occasions Anonymous worked together with other groups in order to launch large-scale attacks (Denning 2015), as in the case of “OpIsrael” against the Israeli state and “OpISIS” (Caldwell 2015) against the terrorist organization ISIS. In both cases there is a clear connection with phase 2, as the teams that supported Anonymous in these operations were a direct product of the North African and Middle Eastern geopolitical tensions of the previous years (Romagna in press). OpIsrael Birthday in 2014 was, as the name suggests, a recurrence of the original operation, supported by more than 20 groups, including the Afghan Cyber Army; AnonGhost; Anonymous groups from Jordan, Lebanon, and Syria; the Gaza Hacker Team; the Izz ad-Din al-Qassam Cyber Fighters; and even the Syrian Electronic Army (SEA) (Denning 2015). Ironically, but in line with the fluid nature of Anonymous, some of these groups became targets of the collective themselves just months later, because they openly supported a regime (the SEA) or terrorist groups (the Cyber Caliphate promoting ISIS’ ideology) (Denning 2015). The operation against Israeli websites in particular triggered a reaction from Israeli hacktivists, consistent with patriotic and ethno-religious cyber conflicts and in line with the developments of phase 2 (Romagna in press). A relatively more recent cause embraced by many hacktivists is counter-terrorism, which can be considered a form of vigilantism (Kosseff 2016; Richards and Wood 2018). Several teams like Ghost Security Group, Ghost Squad Hackers, New World Hackers, and Skynet Central started their own campaigns using more advanced techniques than traditional website defacements. They mainly prefer to block websites and forums with DDoS attacks and to dox individuals they consider to be terrorists or believe to sympathize with terrorist organizations.
Skynet Central was created by former Anonymous members who no longer believed in Anonymous’ agenda and technical approach. Skynet did not limit its operations to terrorist supporters in the Middle East; it also engaged in attacks against Turkish institutions due to the recent political, civil, and military developments in the country (Romagna in press). Turkey is indeed a good example of a hot zone for hacktivist action. Already in the past (Holt et al. 2017), and even more in recent years, there have been hundreds of episodes of hacktivism driven by patriotism (examples are the Turk Hack Team and the Ayyldiz Tim), by internal sociopolitical struggles (Red Hack Team), or by the wish to promote the independence of different ethnic groups (pro-Kurdistan hacktivists). In recent years, cause-based hacktivists seem to have gained a more relevant role in the hacktivist community, sharing it with the ever-present Anonymous collective. Though it is impossible to be sure, this situation will likely represent the trend in the upcoming years, partially mirroring the feelings, ideas, and values of our society.

Conclusion

Hacktivism has constantly changed over the last 30 years, and it has been able to adapt to the evolution of society, technology, and sociopolitical ideologies. Although generally recognized by the security industry and criminal justice sector as a cyber
threat determined by sociopolitical motivations, the definition and conceptualization of hacktivism are still objects of discussion, especially in the academic community. In fact, hacktivism is still struggling to find an acceptable position as a tolerated means of protest to promote a political agenda or a certain ideology. This unclear situation often depends on the tools and methods used by hacktivists, in combination with the values and ideologies they want to promote. The distinction between hacktivism and terrorism remains a complex issue, particularly in the eyes of state governments and of the security industry. It is not only a technical problem, as its intricacies derive from a labelling process that is heavily influenced by political and cultural factors. Especially in the present phase of hacktivism, groups have abandoned the mass-participation actions (virtual sit-ins) typical of the late 1990s and early 2000s in favor of targeted attacks that rely on personal hacking skills within a group or a small network of teams. This new wave of hacktivism has been profoundly influenced by the advent of Anonymous, which has shaped a new approach to cyberspace but also to sociopolitical ideology. The fluidity of Anonymous is in line with the mutability of the Internet, and the new hacktivist community seems to be formed around a cause-based approach rather than a more complex left- or right-wing ideology. This positioning makes it more difficult to draw the line between tolerable forms of political hacking and purely illegal cyberattacks driven by personal motivations. The average Internet user is now only a supporter or even a sort of spectator of the hacktivists’ actions. In fact, many new hacktivist groups (but Anonymous as well) often seem to prefer pursuing their own ethical or moral ideals.
This might be in line with a “new” form of hacktivism that embraces a vigilante form of justice or a strong feeling of patriotism, rather than supporting ideologies embedded in traditional forms of civil disobedience. The steady growth of cyberspace will likely increase the opportunities, scope, and targets of hacktivism. Nevertheless, it remains difficult to foresee how far hacktivists will go, or will be allowed to go, considering the immense impact that the ability to influence the Internet community will have in the future (and already has today).

References

Asenbaum, H. (2018). Cyborg activism: Exploring the reconfigurations of democratic subjectivity in Anonymous. New Media & Society, 20(4), 1543–1563. https://doi.org/10.1177/1461444817699994.
Beck, C. (2016). Web of resistance: Deleuzian digital space and hacktivism. Journal for Cultural Research, 20(4), 334–349. https://doi.org/10.1080/14797585.2016.1168971.
Caldwell, T. (2015). Hacktivism goes hardcore. Network Security, (5), 12–17. https://doi.org/10.1016/S1353-4858(15)30039-8.
Chandler, A. (1996). The changing definition and image of hackers in popular discourse. International Journal of the Sociology of Law, 24, 229–251.
Coleman, G. (2012a). Coding freedom: The ethics and aesthetics of hacking. Princeton: Princeton University Press.
Coleman, G. (2012b). Our weirdness is free: The logic of Anonymous – Online army, agents of chaos, and seeker of justice. Triple Canopy. Retrieved from http://canopycanopycanopy.com/15/our_weirdness_is_free
Coleman, G. E. (2014). Hacker, hoaxer, whistleblower, spy: The many faces of Anonymous. New York: Verso.
Delmas, C. (2018). Is hacktivism the new civil disobedience? Raisons Politiques, 69, 63–81. https://doi.org/10.3917/rai.069.0063.
Denning, D. E. (2001). Cyberwarriors: Activists and terrorists turn to cyberspace. Harvard International Review, 23 (Summer), 70–75.
Denning, D. E. (2015). The rise of hacktivism. Georgetown Journal of International Affairs.
Deseriis, M. (2016). Hacktivism: On the use of botnets in cyberattacks. Theory, Culture and Society, 1–22. https://doi.org/10.1177/0263276416667198.
Dominguez, R. (2008). Electronic civil disobedience post-9/11: Forget cyber-terrorism and swarm the future now. Third Text, 22(5), 661–670. https://doi.org/10.1080/09528820802442454.
Engineering and Technology. (2019, April 17). Ecuador faced 40 million DDoS attacks after Assange arrest. Retrieved May 1, 2019, from Engineering and Technology: https://eandt.theiet.org/content/articles/2019/04/ecuador-faced-40-million-ddos-attacks-in-days-after-assange-arrest/
Firer-Blaess, S. (2016). The collective identity of Anonymous: Web of meanings in a digitally enabled movement (Doctoral thesis), Uppsala University, Uppsala. Retrieved from http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A926671&dswid=6136
Fish, A., & Follis, L. (2016). Gagged and doxed: Hacktivism’s self-incrimination complex. International Journal of Communication, 10, 3281–3300.
Gillen, M. (2012). Human versus inalienable rights: Is there still a future for online protest in the Anonymous world? European Journal of Law and Technology, 3(1), 1–19.
Goode, L. (2015). Anonymous and the political ethos of hacktivism. Popular Communication, 13(1), 74–86. https://doi.org/10.1080/15405702.2014.978000.
Gresty, D. W., Taylor, M. J., & Lunn, J. (2005). Legal considerations for cyber-activism. Systems Research and Behavioral Science, 22(6), 565–570.
https://doi.org/10.1002/sres.625.
Hampson, N. C. (2012). Hacktivism: A new breed of protest in a networked world. Boston College International and Comparative Law Review, 35(2), 511–542.
Hardy, K. (2010). Operation Titstorm: Hacktivism or cyber-terrorism? UNSW Law Journal, 33(2), 474–502.
Himma, K. E. (2007). Hacking as politically motivated digital civil disobedience: Is hacktivism morally justified? In K. E. Himma (Ed.), Internet security: Hacking, counterhacking, and society (pp. 73–98). Sudbury: Jones & Bartlett Publishers.
Holt, T. J. (2011). The attack dynamics of political and religiously motivated hackers. In T. Saadawi & J. D. Colwell Jr. (Eds.), Cyber infrastructure protection (pp. 159–180). Carlisle: Strategic Studies Institute. Retrieved from https://ssi.armywarcollege.edu/pubs/display.cfm?pubID=1067.
Holt, T. J. (2012). Exploring the intersections of technology, crime, and terror. Terrorism and Political Violence, 24(2), 337–354.
Holt, T. J., Freilich, J. D., & Chermak, S. M. (2017). Exploring the subculture of ideologically motivated cyber-attackers. Journal of Contemporary Criminal Justice, 33(3), 212–233. https://doi.org/10.1177/1043986217699100.
Houghton, T. J. (2010). Hacktivism and Habermas: Online protest as neo-Habermasian counterpublicity (Published doctoral thesis), University of Canterbury, Christchurch. Retrieved from https://ir.canterbury.ac.nz/bitstream/handle/10092/5377/thesis_fulltext.pdf;sequence=1
Jordan, T. (2002). Activism! Direct action, hacktivism and the future of society. London: Reaktion Books.
Jordan, T., & Taylor, P. (2004). Hacktivism and cyberwars: Rebels with a cause. London: Routledge.
Karagiannopoulos, V. (2013). The regulation of hacktivism in contemporary society: Problems and solutions (Published doctoral thesis), University of Strathclyde, Glasgow. Retrieved from http://oleg.lib.strath.ac.uk/R/?func=dbin-jump-full&object_id=22688
Karagiannopoulos, V. (2018). Living with hacktivism: From conflict to symbiosis.
London: Palgrave Macmillan.
Karanasiou, A. P. (2014). The changing face of protests in the digital age: On occupying cyberspace and Distributed-Denial-of-Services (DDoS) attacks. International Review of Law, Computers & Technology, 28(1), 98–113. https://doi.org/10.1080/13600869.2014.870638.
Karatzogianni, A. (2005). The politics of cyberconflict: Ethnoreligious conflicts in computer mediated environments (Published doctoral thesis), University of Nottingham, Nottingham. Retrieved from http://eprints.nottingham.ac.uk/12112/
Karatzogianni, A. (2015). Firebrand waves of digital activism 1994–2014. London: Palgrave Macmillan UK.
Klein, A. G. (2015). Vigilante media: Unveiling Anonymous and the hacktivist persona in the global press. Communication Monographs, 82(3), 1–23. https://doi.org/10.1080/03637751.2015.1030682.
Knapp, T. M. (2015). Hacktivism – Political dissent in the final frontier. New England Law Review, 49(2), 259–296.
Kosseff, J. (2016). The hazards of cyber-vigilantism. Computer Law and Security Review, 32(4), 642–649. https://doi.org/10.1016/j.clsr.2016.05.008.
Krapp, P. (2005). Terror and play, or what was hacktivism? Grey Room, (21), 70–93. https://doi.org/10.1162/152638105774539770.
Kubitschko, S. (2015). Hackers’ media practices: Demonstrating and articulating expertise as interlocking arrangements. Convergence, 21(3), 388–402. https://doi.org/10.1177/1354856515579847.
Li, X. (2013). Hacktivism and the first amendment: Drawing the line between cyber protests and crime. Harvard Journal of Law and Technology, 27(1), 302–330.
List of designated terrorist groups. (n.d.). Retrieved May 10, 2019, from Wikipedia: https://en.wikipedia.org/wiki/List_of_designated_terrorist_groups.
Mačák, K. (2016). Decoding Article 8 of the International Law Commission’s Articles on State Responsibility: Attribution of cyber operations by non-state actors. Journal of Conflict and Security Law, 21(3), 405–428. https://doi.org/10.1093/jcsl/krw014.
Manion, M., & Goodrum, A. (2000).
Terrorism or civil disobedience: Toward a hacktivist ethic. Computers and Society, 30, 14–19.
Milan, S. (2013). Social movements and their technologies: Wiring social change. London: Palgrave Macmillan.
Milan, S. (2015). Hacktivism as a radical media practice. In C. Atton (Ed.), The Routledge companion to alternative and community media (pp. 550–560). London: Routledge.
O’Malley, G. (2013). Hacktivism: Cyber activism or cyber crime? Trinity College Law Review, 16, 137–160.
Olson, P. (2013). We are Anonymous: Inside the hacker world of LulzSec, Anonymous, and the global cyber insurgency. London: Heinemann.
Richards, I., & Wood, M. A. (2018). Hacktivists against terrorism: A cultural criminological analysis of Anonymous’ anti-IS campaigns. International Journal of Cyber Criminology, 12(1), 187–205. https://doi.org/10.5281/zenodo.1467895.
Rogers, M. K. (2010). The psyche of cybercriminals: A psycho-social perspective. In S. Ghosh & E. Turrini (Eds.), Cybercrimes: A multidisciplinary analysis (pp. 217–235). London: Springer.
Romagna, M. (in press). Evolution of hacktivism: From origins to now. In O. Guntarik & V. Grieves-Williams (Eds.), From sit-ins to #revolutions: Media and the changing nature of protests. New York: Bloomsbury Academic.
Samuel, A. W. (2004). Hacktivism and the future of political participation (Unpublished doctoral thesis), Harvard University, Cambridge, MA. Retrieved from https://archive.org/stream/fp_Hacktivism_and_the_Future_of_Political_Participation/Hacktivism_and_the_Future_of_Political_Participation#page/n0/mode/2up
Sauter, M. (2013). “LOIC will tear us apart”: The impact of tool design and media portrayals in the success of activist DDOS attacks. American Behavioral Scientist, 57(7), 983–1007. https://doi.org/10.1177/0002764213479370.
Sauter, M. (2014). The coming swarm. New York/London: Bloomsbury Academic.
Schmid, A. P. (2004). Terrorism – the definitional problem. Case Western Reserve Journal of International Law, 36(2), 375–419.
Schori Liang, C. (2017). Unveiling the “united cyber caliphate” and the birth of the E-terrorist. Georgetown Journal of International Affairs, XVIII(III), 11–20.
Sorell, T. (2015). Human rights and hacktivism: The cases of WikiLeaks and Anonymous. Journal of Human Rights Practice, 7(3), 391–410. https://doi.org/10.1093/jhuman/huv012.
Tanczer, L. M. (2017). The terrorist – hacker/hacktivist distinction: An investigation of self-identified hackers and hacktivists. In M. Conway, L. Jarvis, S. Macdonald, & L. Nouri (Eds.), Terrorists’ use of the internet (pp. 77–92). Amsterdam: IOS Press. https://doi.org/10.3233/978-1-61499-765-8-77.
Taylor, P. (2005). From hackers to hacktivists: Speed bumps on the global superhighway? New Media & Society, 7(5), 625–646. https://doi.org/10.1177/1461444805056009.
Vegh, S. (2003). Hacking for democracy: A study of the Internet as a political force and its representation in the mainstream media (Published doctoral dissertation), University of Maryland, College Park. Retrieved from https://www.proquest.com/. Accession Order No. 3094551.
Wong, W. H., & Brown, P. A. (2013). E-bandits in global activism: WikiLeaks, Anonymous, and the politics of no one. Perspectives on Politics, 11(4), 1015–1033. https://doi.org/10.1017/S1537592713002806.
Yar, M. (2013). Cybercrime and society (2nd ed.). London: Sage. https://doi.org/10.4135/9781446212196.
Yip, M., & Webber, C. (2011). Hacktivism: A theoretical and empirical exploration of China’s cyber warriors. In Proceedings of the 3rd international web science conference, Koblenz, 14–17 June 2011. ACM. Retrieved from https://eprints.soton.ac.uk/272350/

37 Global Voices in Hacking (Multinational Views)

Marleen Weulen Kranenbarg

Contents
Introduction
Judicial Population Research
  Quantitative Survey Research
    Personal Characteristics and Situational Risk Factors
    Social Networks
    Specialization and Motives
  Qualitative Interview Research
    Labelling
    Cyborg Hackers
  Research Based on Police Records and Files
    Quantitative Life-Course Research
    Qualitative Research on Organized Cybercrime
  Concluding Remarks on Studying Hackers with Judicial Population Research
General Populations Research
Conclusion and Discussion
Cross-References
References

Abstract

This chapter will give an overview of research on hacking in the Netherlands and other non-English-language countries. Findings from these countries will be compared to general findings in the literature. The chapter will start with both qualitative and quantitative research based on judicial populations. These studies address topics like personal characteristics and situational risk factors of hackers, their social networks, their specialization and motives, labelling, the extent to which hackers can be seen as cyborg hackers, life-course research, and the role of hackers in organized cybercrime. Afterward, survey research on cyber-dependent offending among youth, based on general population samples, will be discussed. The chapter ends with a discussion of the need for international comparative research on hackers.

M. Weulen Kranenbarg (*)
Vrije Universiteit (VU) Amsterdam, Amsterdam, The Netherlands
e-mail: [email protected]

© The Author(s) 2020
T. J. Holt, A. M. Bossler (eds.), The Palgrave Handbook of International Cybercrime and Cyberdeviance, https://doi.org/10.1007/978-3-319-78440-3_33

Keywords

Hacking · System trespassing · Cyber-dependent offenders · International · Comparison · Samples · Quantitative · Qualitative

Introduction

The majority of the English-language literature on hackers is based on research from English-speaking countries. These studies provide valuable insight into hackers, their characteristics, modus operandi, and networks. However, this focus on English-speaking countries also limits the generalizability of these studies and the range of methods used. This chapter will, therefore, discuss research from non-English-language countries, with a specific focus on hacking research from the Netherlands. In addition, it will discuss research published in English that originates from other non-English-language countries, as these studies can provide unique perspectives on hacking, other samples or methodologies, or cross-country comparisons. Where possible, the results from these studies will be compared to general findings in the literature.

There are several reasons why Dutch research on hacking can provide valuable insights. Calculations by Internet Live Stats (2018) show that in 2016, 93.7% of the Dutch population had access to the Internet at home. This places the Netherlands among the top ten countries with the highest Internet penetration rate, above the United Kingdom and the United States, and creates substantial opportunities for crime and risks of victimization. This is reflected in the Dutch nationally representative victimization surveys, which have shown a remarkable trend in hacking victimization. While bicycle theft has long been the most common type of victimization in the Netherlands, hacking is now more prevalent: in 2017 only 3.3% of the population had their bicycle stolen, whereas 4.9% of the population was victimized by hacking (Statistics Netherlands 2018).

With respect to research on hackers, Dutch studies have been using unique data sources and methodologies. While much of the international hacking literature focuses on forum data from, for example, English- or Russian-language forums (see, e.g., Dupont et al. 2016; Holt 2013; Holt et al. 2012b, 2016a, b; Macdonald and Frank 2017), there are no well-known and large-scale Dutch forums. Therefore, Dutch research tends to focus on other types of data. In general, two types of samples are used. First, collaborations with the police and prosecutor's office provide opportunities to interview or survey known offenders (Van Der Wagen 2018a; Van Der Wagen et al. 2016; Weulen Kranenbarg 2018; Weulen Kranenbarg et al. 2019). These collaborations also allow for qualitative and quantitative research using police records. The growing focus on cybercrime in the police force will continue
to provide this type of valuable information on criminal hackers. Second, the size of the country allows for surveying representative general population samples, usually focused on youth (Rokven et al. 2017b; Van Der Laan and Beerthuizen 2018; Van Der Laan and Goudriaan 2016).

The developments above recently resulted in more attention to human factor research on cybercrime in the Netherlands. One of the results of this development is a research agenda on The Human Factor in Cybercrime and Cybersecurity, which includes a chapter on individual offenders (Weulen Kranenbarg et al. 2017) and one on cybercriminal networks (Leukfeldt et al. 2017). This state-of-the-art review was not specifically focused on Dutch hacking research and also included research on other types of cyber-offending and research from other (mostly English-language) countries. Since the publication of this research agenda, new and innovative Dutch research on hackers has been published, which will be the focus of this chapter in relation to the more general findings in the hacking literature.

This chapter will start with both quantitative and qualitative research based on judicial populations and police files. The nature of these samples, and the associated permissions that are required to use them, means that these studies are mostly focused on adult hackers (Van Der Wagen 2018a; Van Der Wagen et al. 2016; Weulen Kranenbarg 2018; Weulen Kranenbarg et al. 2019). Afterward, the chapter will focus on self-report general population studies, which are usually focused on youth (Rokven et al. 2017b; Van Der Laan and Beerthuizen 2018; Van Der Laan and Goudriaan 2016). Throughout the chapter, studies from other non-English-language countries will be discussed as well, and their findings will be compared with the general hacking literature. In the conclusion and discussion, the need for international comparative research will be addressed.

Judicial Population Research

Dutch research uses judicial information on known criminal hackers in basically two ways: first, by quantitatively surveying (Weulen Kranenbarg 2018; Weulen Kranenbarg et al. 2019) or qualitatively interviewing (Van Der Wagen 2018a; Van Der Wagen et al. 2016) individuals who have been in contact with the judicial system for criminal hacking and, second, by conducting research on police records, which can take the form of both quantitative research on the full judicial population (Ruiter and Bernaards 2013; Weulen Kranenbarg et al. 2018b) and qualitative case studies based on police files (Van Der Wagen 2018a; Van Der Wagen et al. 2016). Some studies combine these methods (Van Der Wagen 2018a; Van Der Wagen et al. 2016).

Quantitative Survey Research

Weulen Kranenbarg (2018) used a survey among a sample of 535 known offenders registered by the Dutch public prosecutor's office. Half of this sample had been suspected of committing a traditional crime; the other half had been suspected of
committing a cyber-dependent crime. The cyber-dependent crimes in this study (in order of prevalence) were guessing passwords, defacing, digital theft, other types of hacking, damaging data, taking control over an IT system, phishing, malware use, intercepting communication, DoS attacks, selling somebody else’s data, spamming, and selling somebody else’s credentials (for prevalence rates, see Weulen Kranenbarg 2018; Weulen Kranenbarg et al. 2019). These are almost all crimes that require some form of hacking. The goal of this study was to examine these offenders’ personal characteristics and situational risk factors, their social networks, the extent to which they specialize or also commit traditional offenses, and their motives for offending. In addition, the goal was to compare the cyber-dependent offenders on these aspects with traditional offenders. However, in this chapter the focus will be on the findings for the cyber-dependent offenders. Detailed information on the survey-based comparison can be found in Weulen Kranenbarg (2018) and Weulen Kranenbarg et al. (2019). A unique feature of this survey study was the inclusion of an objective information technology (IT) skills test. Several cybercrime studies (e.g., Holt et al. 2010; Lee 2018) use a subjective IT skills survey question based on Holt et al. (2012a, p. 
389), in which respondents are asked to indicate which of the following statements applies to their IT skills: "I am afraid of computers and don't use them unless I absolutely have to," "I can surf the net, use common software, but cannot fix my own computer," "I can use a variety of software and fix some computer problems I have," and "I can use Linux, most software, and fix most computer problems I have." As Weulen Kranenbarg (2018) used a sample in which the IT skills of some offenders were expected to be very strong, an additional statement was added to these four: "I can use different programming languages and am capable of detecting programming errors." More importantly, the survey included ten multiple-choice test questions. These varied from very easy questions on, for example, what a valid email address looks like, which was answered correctly by 92.49% of the sample, to very challenging questions in which respondents had to find a coding error and come up with a way to prevent misuse of that error, which was only answered correctly by 4.34% of the sample. This objective measure is an interesting additional way of measuring IT skills and showed a strong correlation with the subjective five-statement measure discussed above.

Personal Characteristics and Situational Risk Factors

In the international literature, one consistent correlate of both traditional offending (Berg and Felson 2016; Jennings et al. 2012; Lauritsen and Laub 2007) and cyber-offending (in both English and non-English research; see, e.g., Bossler and Holt 2009; Kerstens and Jansen 2016; Morris 2011; Ngo and Paternoster 2011; Wolfe et al. 2008) is that offenders are often also victims. One of the explanations is that offenders and victims share characteristics like low self-control and risky daily activities that increase both their risk for offending and victimization. In line with this general picture in the literature, a Dutch study on a large youth sample also
found this victim-offender overlap for online auction fraud, virtual theft, and online identity fraud (Kerstens and Jansen 2016). Subsequently, Weulen Kranenbarg et al. (2019) showed that this overlap can also be found in a sample of known Dutch cyber-dependent offenders, as 9.59% of the sample reported both having committed a cyber-dependent crime and having been victimized by one in the preceding 12 months. In addition, 8.06% of the sample committed a cyber-dependent crime without being victimized. Further examination of these two groups revealed that it is important to study their characteristics separately, something that has not yet been done in the aforementioned international literature on hacking. The offenders who had not been victimized appeared to have committed the more sophisticated types of cybercrime (Weulen Kranenbarg et al. 2019). The analyses of their characteristics also revealed that they were the group of offenders with very specific characteristics that clearly distinguished them from traditional offenders. For example, they did not score significantly low on self-control; they had strong IT skills and specific situational risk factors like spending a lot of time on forums where they could learn how to commit these more sophisticated types of crime. It appears that these characteristics provide them with the opportunities to commit these offenses and the ability to prevent themselves from being victimized. On the other hand, the offenders who were also victimized committed the less sophisticated types of cybercrime and showed a more general risk profile. Apart from their online situational risk factors, they were more comparable to traditional victim-offenders.
These cyber-dependent victim-offenders had low self-control, some IT skills (but less than the first group), and more general online activities in which both their opportunities for easy-to-commit cybercrime and their risks for being victimized were increased. In line with the explanation that offenders and victims share risk factors, the characteristics of the victim-offenders, such as low self-control, seem to increase their risk-taking behavior, which is related to both offending and victimization. The IT skills of victim-offenders were, apparently, not strong enough to prevent their victimization. In short, this study revealed that different correlates can be found when looking at different types of hacking (Weulen Kranenbarg et al. 2019). Similar findings can be found in the international literature. For example, while some studies find that hacking or other cybercrimes are related to low self-control (Donner et al. 2014; Hu et al. 2013; Marcum et al. 2014), others find that the effect of self-control differs in relation to IT knowledge and the extent to which social learning plays a role (Bossler and Burruss 2011; Holt et al. 2012a). With respect to the learning of skills, forums and other online networks have also been found to be an important source of information (Holt 2007; Holt et al. 2012b; Hutchings 2014; Hutchings and Clayton 2016), which explains the correlation between forum use and more sophisticated types of offending (Weulen Kranenbarg et al. 2019). In the light of these results, it should also be noted that there is some international research on the extent to which autism traits are related to hacking (Harvey et al. 2016; National Crime Agency 2017; Schell and Melnychuk 2011). However, while it makes sense to assume that some autism traits are related to hacking, there is no strong evidence for this claim yet.


Social Networks

As already briefly mentioned above, many studies that mostly originate from English-language countries have indicated that having delinquent peers is an important correlate of hacking (Bossler and Burruss 2011; Donner et al. 2014; Holt 2007; Holt et al. 2010, 2012a; Holt and Kilger 2008; Hu et al. 2013; Marcum et al. 2014; Morris 2011; Morris and Blackburn 2009; Rogers 2001). These studies are, however, mostly based on student or school samples. In line with this international literature, the Dutch judicial population research of Weulen Kranenbarg et al. (2019) also found this relationship between offending and deviance of social contacts, even when controlling for other characteristics that are similar among social contacts, like gender and age. Controlling for similar characteristics was enabled by the type of network data collected in this study, which had not yet been done in the international literature mentioned above. An even more important addition to the international literature, however, came from the comparison with traditional offenders. This revealed that the relationship between offending and deviance of social contacts is much weaker for cyber-dependent crime than for traditional crime. Weulen Kranenbarg et al. (2019) specifically focused on important social ties, with whom offenders discuss important matters. These social contacts traditionally have the strongest impact on offending (Rokven et al. 2016, 2017a). The fact that their deviance is less strongly correlated to cyber-dependent offending can be explained in two ways. First, it could be the case that cyber-dependent offenders have looser online social ties. Second, it could be that cyber-dependent offenders are more on their own and less inclined to seek close contact with others.
The Internet could also provide a source of information on how to commit these offenses without having to have contact with others (Goldsmith and Brewer 2015; Weulen Kranenbarg et al. 2019). At the moment, it is unclear to what extent the possible explanations above are valid. In general, Weulen Kranenbarg et al. (2019) showed that research on cyber-dependent offenders should broaden its perspective to other types of social contacts. Forums, for example, could be a new place for social learning and social interactions but may also influence someone's behavior in a completely different way than traditional offline social interactions. As stated in the introduction, it should be noted that there are no well-known Dutch hacking forums. Therefore, existing Dutch forum research has focused on interaction on English-language forums (Soudijn and Zegers 2012), similar to the aforementioned general literature on hacking forums (Dupont et al. 2016; Holt et al. 2012b; Macdonald and Frank 2017). Planned future longitudinal research will distinguish between online and offline social contacts and examine the extent to which both influence and selection processes underlie the correlation with peer offending.

Specialization and Motives

Lastly, the self-report data from the judicial population survey have been used to examine to what extent specialization occurs in cyber-dependent crime and which motives offenders report for committing these offenses (Weulen Kranenbarg 2018). With respect to specialization, in this sample most offenders do not combine cyber-dependent
offenses with traditional offenses. In addition, within the group of cyber-dependent offenders, some forms of specialization can be found. Hacking is generally seen as a first step in an offender's modus operandi (Leukfeldt et al. 2013; Maimon and Louderback 2019). This was also indicated by Weulen Kranenbarg (2018), as offenders usually combined hacking with offenses like stealing or damaging data. With respect to the motives for committing these offenses, this study indicated that most offenders commit their crimes for intrinsic motives like curiosity or the challenge of breaking systems. For some offenses, extrinsic motives like sending a message or revenge were also mentioned (Weulen Kranenbarg 2018). This is mostly in line with the early work of Jordan and Taylor (1998) and Taylor (1999), which included qualitative fieldwork on motives in the international hacker culture, and with other international research on motives (e.g., Chiesa et al. 2008; Denning 2011; Holt 2007, 2009; Voiskounsky and Smyslova 2003; Woo 2003). In the sample of Weulen Kranenbarg (2018), financial motives were almost completely absent. This is striking, as the reported motives for traditional offenses were often financial, and as some argue that all cybercrime is now financially motivated (Chan and Wang 2015; Grabosky 2017; Holt and Kilger 2012; Kshetri 2009; Provos et al. 2009; Smith 2015; White 2013). In line with those arguments, it should be noted that a German self-report study found strong financial motives for identity theft-related types of hacking (Fotinger and Ziegler 2004). These similarities and differences between studies indicate that different samples may result in very different motives for committing these offenses. As some studies use motives in addition to characteristics like organization, resources, expertise, and target to develop threat actor typologies (De Bruijne et al. 2017), it is important to find empirical evidence for these motives.

Qualitative Interview Research

Van der Wagen (2018a; Van Der Wagen et al. 2016) has conducted research on Dutch hackers using different methodologies, including qualitative interviews and the analysis of police files. The interviews are with hackers in general, both those who commit crimes and those who try to improve cybersecurity. In Van Der Wagen et al. (2016), ten qualitative in-depth interviews with hackers are used to explore processes of labelling in the hacker community. Nine of these hackers had the Dutch nationality, and half of them considered themselves to be white hat or ethical hackers, while the other half had been involved in black hat hacking. Thus, sampling was not specifically based on offenders who had been in contact with the judicial system. However, these interviews were complemented with information from five police files on hacking cases.

Labelling

In Van Der Wagen et al. (2016), three dimensions of the deviant identity were examined: the way in which hackers perceive that other "normal people" see them, the way in which they see themselves and their actions, and the way in which
as outsiders they see themselves in relation to the conventional society and in relation to other outsiders. The interviewed hackers experience negative labelling effects. They feel that “normal people” have a negative image of them. Nevertheless, their perception of their own identity is positive. They feel like they have a special gift, which offers them more opportunities than others have. The online community offers positive reinforcement of their actions, which reduces the negative effect of the offline conventional society. Therefore, Van Der Wagen et al. (2016) conclude that the negative labelling does not result in stigmatization of these hackers. In addition, with respect to how they see themselves in comparison to other outsiders, these hackers, both white hat and black hat, all agree that they are not real cybercriminals. In line with the research discussed above, only one of the black hat hackers indicated that he had hacked for financial gain. All respondents in this research said that hacking is only criminal if the offender has financial motives. This was also clear in some of the studied police files with hacking cases. Labelling has been almost completely absent in the international literature on hacking. Similar findings on this topic can, however, be found in research from a different non-English country. The earlier work of Turgeman-Goldschmidt (2005, 2008, 2009, 2011a, b) is based on 54 interviews with Israeli hackers. This research also indicated that hackers experience negative labelling by others, but they see themselves as positive deviants (Turgeman-Goldschmidt 2008, 2011b). In addition, they also differ from other deviants in their use of neutralization techniques (Turgeman-Goldschmidt 2009). They do not use external justifications for their behavior. They only use internal justifications like denial of injury, denial of victim, condemnation of the condemners, appeal to higher loyalties, and self-fulfillment. 
These neutralization techniques are also in line with the labelling processes described in Van Der Wagen et al. (2016) and Turgeman-Goldschmidt (2008) and with other international research on neutralization among hackers (e.g., Chua and Holt 2016; Hutchings and Clayton 2016; Morris 2011; Young et al. 2007). The hackers in Turgeman-Goldschmidt (2005), again, did not commit their crimes for financial gain. Their hacking can be seen as a form of social entertainment. Turgeman-Goldschmidt (2011a) further argues that the absence of a financial motive and of external justifications means that hackers cannot be considered white-collar offenders, even though they do have some similar characteristics. Similarities and differences between cyber-offenders and white-collar offenders have also been found in cases from the United States (Pontell and Rosoff 2009).

Cyborg Hackers

The main theme in the research of Van der Wagen is the cyborg perspective based on actor-network theory (Latour 1992, 2005). In this line of research, Van Der Wagen (2018b) examines to what extent the relationship between humans and technology should be part of our understanding of cybercriminal behavior. Almost all international criminological research discussed in this chapter focuses on the human and looks at technology only as an instrument to commit cybercrimes.
In this research, on the other hand, technology is considered an active and vital part that interacts with the human actor. This perspective has been used to study different aspects of cybercrime, including botnets (Van Der Wagen and Pieters 2015), cybercriminal networks (Van Der Wagen and Bernaards 2018), victims (Van Der Wagen and Pieters 2018), and hackers (Van Der Wagen 2018a), using both interview data and police files. As this chapter is on hackers, it will focus on the extent to which hackers can be considered cyborgian deviants and how hackers view their relationship with technology. The ten interviews with both white hat and black hat hackers indicated that, both in becoming and in being a hacker, the relationship with technology is a vital component. Van Der Wagen (2018a) states that hackers and technology should not be seen as two completely different things. Hackers interact with technology in different ways. They work with technology, they act through technology, and they sometimes act against technology. In these interactions, they look for the boundaries of technology and see to what extent they can overcome these boundaries, which may, but need not, result in crime. This notion of boundaries can also be found in the Israeli work of Turgeman-Goldschmidt (2005) and the ethnographic US-based research from Steinmetz (2015). Based on the interviews, Van Der Wagen (2018a) describes five dimensions of the relationship between hackers and technology: mind, performance, identity, body, and transgression. With respect to the mind, hackers are interested in the underlying processes of systems. They have an analytical perspective and want to fully understand a system, so that they can use it in innovative ways and completely control it. In their perspective, systems are more than just static tools (Van Der Wagen 2018a).
The botnet police file case study (Van Der Wagen and Pieters 2015) also revealed that, by using hacking techniques, a hacker can make a system work for him. By hacking into websites, the hacker was able to automatically spread malware, which in turn added new computers to the botnet. In line with the arguments provided earlier about learning from sources other than strong social ties (Goldsmith and Brewer 2015; Weulen Kranenbarg et al. 2019), hackers describe that they also learn by trial and error in interaction with systems (Van Der Wagen 2018a). Their performance is, however, not only the result of their own capabilities but also of the capabilities of the system that they use. The hackers describe their identity as having a natural or innate connection with technology and an ability to see things other people do not see, which provides them with the abilities and drive to keep learning about systems. Hackers rely on their body (mostly their brain) to respond intuitively to technological challenges. That is also why they keep challenging both their own capabilities and those of the technology. Lastly, with respect to transgression, the euphoric feeling after a successful hack challenges even the white hat hackers. The world of possibilities when one has finally gained access to a system may be too tempting, which can result in misuse of the system (Van Der Wagen 2018a). Although only Van Der Wagen specifically applied actor-network theory to this relationship between hackers and technology, other international studies have found similar characteristics of hacker culture (e.g., Steinmetz 2015; Taylor 1999; see ▶ Chap. 35, "Computer Hacking and the Hacker Subculture").


Research Based on Police Records and Files

In the research of Van der Wagen described above, police files have been used as case studies, mostly to enrich interview data. However, there is a long tradition of collaboration between researchers and police forces in the Netherlands, which enables Dutch criminologists to use police records and files as their main source of data as well. Police records provide limited data on the full offender population, which are usually analyzed in a quantitative manner. Police files, on the other hand, provide in-depth data on specific cases that are usually studied in a qualitative manner. This section will first discuss two quantitative longitudinal life-course studies based on police records (Ruiter and Bernaards 2013; Weulen Kranenbarg 2018). Afterward, it will discuss some qualitative studies on police files (Bijlenga and Kleemans 2018; Bulanova-Hristova et al. 2016; Kruisbergen et al. 2018b; Leukfeldt et al. 2017a, b, c, d; Odinot et al. 2017, 2018).

Quantitative Life-Course Research

A first longitudinal study on the life course of hackers was conducted by Ruiter and Bernaards (2013). In this study, a group of 323 hackers who had been registered as suspects in the Dutch police registration system was compared to other suspects on sociodemographic characteristics and age-crime curves. The analyses indicated no differences in ethnicity and gender. To the same extent as other suspects, some registered criminal hackers already had a criminal record, or they committed other crimes after their registration for hacking. Recidivism in hacking could not be found in these data. However, as these are only the crimes that the police know about, it is likely that recidivism did take place. This is also confirmed by the survey of Weulen Kranenbarg (2018), in which a proportion of the hackers who had been in contact with the police self-reported recidivism in hacking. Even though the data of Ruiter and Bernaards (2013) have limitations, the longitudinal comparison of the criminal careers of hackers with those of other offenders provides a unique perspective. The analyses showed that the criminal careers of hackers, as registered by the police, follow a similar pattern in age-crime curve, onset, and persistence as those of other criminals.

This first study could only examine some basic sociodemographic characteristics, which means that it could not examine which life circumstances are related to a person's offending or desistance. Weulen Kranenbarg et al. (2018b) therefore combined the police registration data for all adult suspects (N = 870 cyber-dependent suspects and N = 1,144,740 other suspects) with other registration data from Statistics Netherlands for the period 2000–2012. This enabled a longitudinal within-person examination of the relation between offending and living with a partner or family, being employed, and being enrolled in education.
In these within-person analyses, the years in which a person is, for example, employed are compared with the years in which that same person is unemployed, to see in which years that person is more likely to commit a cyber-dependent offense. In this research, a distinction is made between employment

37  Global Voices in Hacking (Multinational Views)  781

and education in the IT sector and general employment or education. The authors provide several arguments for why these traditionally important preventive life circumstances may not have the same effect on cyber-dependent offending. In contrast to these arguments, cyber-dependent offenders turned out to be similar to other offenders with respect to the effect of a partner or family: they are less likely to offend in years in which they live together with a partner or a family (partner and child) than in years in which they live alone. For all types of crime, including cyber-dependent crime, living as a single parent can increase offending. With respect to employment and education, on the other hand, the results were in line with the expectations. In general, being employed had a preventive effect on cyber-dependent offending in this full suspect population. More importantly, however, people were more likely to commit cyber-dependent offenses in years in which they were employed in the IT sector or enrolled in education in general. This indicates that indirect social control of family can prevent cyber-dependent offending, but that opportunities for these offenses occur in very different environments than opportunities for other crimes. Some life circumstances, like specific types of employment, can create these opportunities, and social control in those situations is not strong enough to prevent offending (Weulen Kranenbarg et al. 2018b). This is also reflected in the international literature: a survey of American hackers at a hacker convention revealed that they have weaker social ties and more time to hack when they are unemployed (Bachmann 2010). On the other hand, studies that focus on insiders have indicated that cybercrimes can be employment-enabled (Grabosky and Walkley 2007; Nykodym et al. 2005; Randazzo et al. 2005). Whether employment is a protective factor or a risk factor thus seems to depend on the type of employment and the opportunities it creates.
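The within-person logic of these analyses can be illustrated with a minimal sketch. All data below are hypothetical toy records; the actual study used fixed-effects models on registration data from Statistics Netherlands, not this simplified calculation:

```python
# Minimal sketch of a within-person comparison: each person's offending in
# employed years is compared with that same person's unemployed years.
from collections import defaultdict

# Hypothetical person-year records: (person_id, year, employed, offended)
records = [
    ("A", 2005, False, True), ("A", 2006, True, False), ("A", 2007, True, False),
    ("B", 2005, True,  True), ("B", 2006, True,  True), ("B", 2007, False, True),
    ("C", 2005, False, False), ("C", 2006, True, False),
]

by_person = defaultdict(list)
for pid, year, employed, offended in records:
    by_person[pid].append((employed, offended))

# Only persons observed in BOTH states contribute within-person variation.
diffs = []
for pid, rows in by_person.items():
    emp = [off for e, off in rows if e]
    unemp = [off for e, off in rows if not e]
    if emp and unemp:
        diffs.append(sum(emp) / len(emp) - sum(unemp) / len(unemp))

# A negative average difference means less offending in employed years,
# i.e., a within-person preventive effect of employment.
avg_diff = sum(diffs) / len(diffs)
print(f"average within-person difference (employed - unemployed): {avg_diff:+.2f}")
```

Because each person is compared only with him- or herself, stable differences between persons (e.g., skills or personality) cannot explain the estimated effect, which is the key advantage of this design over between-person comparisons.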

Qualitative Research on Organized Cybercrime

The quantitative research based on police records discussed above is unique in the sense that it provides longitudinal information on the full population of suspects registered by the police. However, the nature of the data does not allow for very detailed and in-depth analyses of each specific case. In qualitative case studies based on police files, on the other hand, the goal is not to present an overall representative picture but to provide a more in-depth understanding of individual cases. This information is very valuable in itself, but also in combination with research based on other methodologies. For example, Bijlenga and Kleemans (2018) analyzed five criminal investigations and found that some organized crime groups specifically contact employees in the IT sector to help them with parts of their crime script that require IT expertise, which is in line with the life-course research discussed above (Weulen Kranenbarg et al. 2018b). In the Netherlands, research based on police files has mainly been used to study organized crime in the so-called Organized Crime Monitor (see Research and Documentation Center 2018). In recent years, cybercrime cases have been added to this data collection. These cases cover a very wide range of cybercrime, from online drug trade to advanced

782  M. Weulen Kranenbarg

banking malware cases. This chapter focuses on the results concerning hackers in these cases. For a broader English summary of recent results about organized crime and IT in the Netherlands, see Kruisbergen et al. (2018b, pp. 109–118). At the moment, seven cybercrime cases have been analyzed as part of the Organized Crime Monitor; other studies have analyzed numerous additional cases. In general, when hackers are involved in these cases, they mainly act as facilitators. The most well-known case is probably a drug trafficking case in which two hackers were used by the organization to hack into the systems of the port, so that their container with drugs could enter the Netherlands undetected. In addition, in two cases malware writers wrote banking malware that was used to take over the IT systems of the victims and manipulate their online transactions (Kruisbergen et al. 2018b). In these cases, criminal networks that largely exist offline may use forums to find such malware writers (Leukfeldt et al. 2017a); the rest of the members of these networks do not have very strong IT skills. This shows how international research on these forums and the social organization of online cybercriminal services (Dupont et al. 2016; Holt 2013; Holt et al. 2012b, 2016a, b; Hutchings 2014; Hutchings and Clayton 2016; Macdonald and Frank 2017) can be very relevant even when studying a domestic and largely offline case. In contrast to the research on individual hackers discussed earlier in this chapter (Denning 2011; Holt 2007, 2009; Turgeman-Goldschmidt 2005, 2008, 2011a; Van Der Wagen et al. 2016; Voiskounsky and Smyslova 2003; Weulen Kranenbarg 2018; Woo 2003), these organized crime studies tend to find financial motives for acting as a facilitating hacker in these networks (Kruisbergen et al. 2018b). This clearly stresses the importance of using different samples to study hackers.
With respect to this financial motive, it is interesting that even when the crime script is high-tech, the networks still show a need for cash (Kruisbergen et al. 2018a, b), and many networks still consist of groups of offenders who know each other offline (Leukfeldt et al. 2017c). This has been found not only in the Netherlands but in several other countries as well (Lusthaus 2018). A case study on Romania by Lusthaus and Varese (2017), for example, has also shown that cybercrime can have an important local and offline dimension. The cases discussed above are part of research on organized crime in general. Some case studies, however, have specifically added more high-tech organized cybercrime cases to their analyses. Hackers often have a more central role in these cases; see, for example, Odinot et al. (2017). This report also discusses some background characteristics of the 39 members of the organized crime groups who committed the IT-related parts of the crime script. In these cases, they are younger than the other members of the group (29 vs. 37 years). In line with Ruiter and Bernaards (2013) discussed earlier, nine had previous convictions, but only three had been convicted for hacking. Although money was the main motive in these cases, some hackers had motives related to revenge or to hacking being their hobby, and some were pressured or forced by others. Lastly, a few studies have used these case files in international comparisons. The study discussed in the previous paragraph was part of a cross-national comparison between the Netherlands (11 cases), Sweden (15 cases), and Germany (18 cases)
(Bulanova-Hristova et al. 2016; Odinot et al. 2018). These comparisons seem to confirm the overall picture of the Dutch cases. Some important findings are that offenders in these networks tend to be younger than offenders in other organized crime networks, and that the possibility of contacting hackers for a part of the crime script enables traditional organized crime groups to engage in forms of cybercrime as well. At the same time, new groups that were not previously involved in organized crime are emerging in the field of organized cybercrime. In a similar manner, Leukfeldt et al. (2017b, d) compared Dutch results on the use of online crime markets and on the origin, growth, and criminal capabilities of cybercriminal networks with cases from Germany and the United Kingdom. It should be noted, however, that the selective and nonrepresentative nature of this type of data makes international comparisons difficult.

Concluding Remarks on Studying Hackers with Judicial Population Research

As shown above, judicial populations provide unique ways to study hacking, and cybercrime in general. However, in these studies it is not always clear how technical the hacking is, and many studies include a very broad range of cyber-offenses including hacking. Research on police files has shown that hacking is often only a small part or the starting point of the crime (Leukfeldt et al. 2013). Hacking cases generally involve only a few suspects, which means that these cases will not be studied in organized crime research as discussed above. In addition, this research showed that in many case files on hacking, crucial information about the way in which the suspect hacked into a system is missing. As hacking can be done by simply guessing a password, not all suspects registered as hackers will have the technical capabilities that we generally associate with hackers. Lastly, the hackers who are caught may be the ones who are less capable of hiding their crimes, which may mean that research based on these hackers underestimates the skills of the general criminal hacker population. However, as indicated by Weulen Kranenbarg et al. (2019), more sophisticated types of hacking do emerge in these samples, which is also reflected in the IT skills of those offenders. This may be the result of police forces specifically targeting high-tech cybercrime.

General Population Research

To gain more insight into hacking in the general population, it is important to review survey research focused on general population samples. The advantage of such samples is that the results are more representative than the results discussed above. However, it should also be noted that these studies generally do not focus primarily on high-tech forms of hacking, as these offenses are not prevalent enough in the general population. Nevertheless, as Dutch research in this area focuses on youth, it does provide information that cannot be found in the judicial
population research discussed above. In the Netherlands this type of data is collected in the Juvenile Crime Monitor and is combined with data from police records in a biennial report (Van Der Laan and Beerthuizen 2018). The most recent self-report study was conducted in 2015 and published in the 2016 report (Van Der Laan and Goudriaan 2016). These reports are mostly descriptive and provide information on all kinds of crime, including different types of cybercrime. The 2017 report presents an overall picture of the number of police records and convictions for cyber-dependent crimes (including hacking) among youth. The conclusion is that these numbers are very low, but that this must be a result of the data source, as the 2016 report that includes self-report data shows much higher numbers (Van Der Laan and Beerthuizen 2018; Van Der Laan and Goudriaan 2016). Because of these limitations, new ways of extracting quantifiable data from police files and new types of prevalence research among youth are being explored (see, e.g., Van Der Heijden et al. 2017). The 2016 results based on the self-report data paint the following picture of the five cyber-dependent offenses included (changing someone's password; hacking, i.e., logging in to someone else's computer/account; hacking including altering data; spreading a virus; DDoS attacks). Young adults are relatively more involved in these offenses than children are (22% age 18–23; 17% age 12–17; 7% age