Cybercrime in Context: The Human Factor in Victimization, Offending, and Policing (ISBN 3030605264, 9783030605261)

This book is about the human factor in cybercrime: its offenders, its victims, and the parties involved in tackling cybercrime.


English, 414 pages [405], 2021




Table of contents:
The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participation in the 2018 and 2019 Meetings
The Annual Conference on the Human Factor in Cybercrime
The Present Study
Analytic Strategy
Part II: Victims
The Online Behaviour and Victimization Study: The Development of an Experimental Research Instrument for Measuring and Explaining Online Behaviour and Cybercrime Victimization
Online Behaviour and Cybercrime Victimization
Unsafe Online Behaviour as a Predictor of Online Victimization
Explaining Online Behaviour
Other Factors
Measuring Online Behaviour
Research into Self-Reported Behaviour
Research into Actual Online Behaviour
Self-Reported Behaviour Versus Actual Behaviour
Online Behaviour and Victimization Study
Outline of the Research Instrument
Measuring Seven Clusters of Online Behaviour
Detailed Description of the Measurements of Actual Online Behaviour
Appendix: Survey Items Self-Reported Online Behaviour
No Gambles with Information Security: The Victim Psychology of a Ransomware Attack
Ransomware as Data Loss
Neural and Behavioural Responses to Loss Feedback
Individual Differences in Loss Response
Personality Risk Factors and Ransomware
Response Strategies for Groups
Loss-Threatening Incentives and Minimal Loss
The Generalizability of Gambling-Task Experiments to Ransomware
Future Directions and Conclusion
Shifting the Blame? Investigation of User Compliance with Digital Payment Regulations
Current Study
Participant Recruitment
Experience with Online Banking
Awareness of Security Guidelines (RQ1)
Compliance with Security Guidelines (RQ2)
The Effect of Awareness on Compliance with Security Guidelines (RQ3)
Limitations and Future Research
Protect Against Unintentional Insider Threats: The Risk of an Employee’s Cyber Misconduct on a Social Media Site
Insider Threat: A Background
Research Problem and Current Investigative Approach
Method and Data
Phase 1: Threat Vector
Phase 2: Human Factor
Phase 3: Insider Threat Prevention
Conclusion and Further Work
Assessing the Detrimental Impact of Cyber-Victimization on Self-Perceived Community Safety
Defining End-User Cyber-Victimization
Digital Victimization, Localized Victimization, and Self-Perceived Community Safety
Systemic Desensitization
Independent Variables
Dependent Variable
Independent Samples T-Test
Analysis of Variance (ANOVA) Results
Analysis of Covariance (ANCOVA) Results
Conclusion and Implications
Show Me the Money! Identity Fraud Losses, Capacity to Act, and Victims’ Efforts for Reimbursement
Contacting Agencies and Reimbursement
Incident Characteristics
Victim Characteristics
Victims of Cybercrime: Understanding the Impact Through Accounts
Hacking Offences
Computer Virus Offences
The Impact of Computer Misuse Crime in Context
A Continuum of Impact
Incident of Minor Inconvenience
Gweneth [Individual, Computer Virus]
Jackie [Individual, Hacking]
Vanessa [Individual, Computer Virus]
Crime of Inconvenience
Joanna [Individual, Hacking]
Rachael [Individual, DDoS]
Lily [Individual, Computer Virus, Sextortion]
Angeline [Individual, Hacking]
Justin [SMO, Hacking]
Serious Crime of Personal Violation or Significant Financial Loss or Fear of Loss
Catherine [Individual, Hacking]
Claire [Individual, Hacking]
Leo [Individual, Hacking]
Sam [Individual, Hacking, Sexploitation]
Wayne [Individual, Online Harassment]
Patricia [Individual, Multiple]
Sophie [Individual, Hacking]
The Impact of a Canadian Financial Cybercrime Prevention Campaign on Clients’ Sense of Security
Cybercrime Prevalence
Impact of Cybercrime
Prevention Campaigns
Sense of Security
Descriptive Analyses
Limitations and Future Directions
Part III: Offenders
Saint or Satan? Moral Development and Dark Triad Influences on Cybercriminal Intent
Personality and Cybercriminal Intent
Dark Triad Traits
Moral Reasoning
Data, Instruments, and Methodology
Applying the Theory of Planned Behavior to Cybercrime
Measuring the Dark Triad
Measuring Moral Development
Methods and Analytical Strategy
Discussion of Findings and Implications
Practical Implications and Future Research Directions
Cyber-Dependent Crime Versus Traditional Crime: Empirical Evidence for Clusters of Offenses and Related Motives
Cyber-Dependent Crime
Motives and Typologies of Cyber-Dependent Offenders
Justifications or Neutralizations
The Current Study
Data and Methods
Sample and Procedure
Self-Reported Offending
Analytical Strategy
Offending Clusters
Intrinsic Motives of Cyber-Dependent Crime
Financial Motives of Cyber-Dependent Crime
Extrinsic Motives of Cyber-Dependent Crime
Comparison of Motives Between Traditional Crimes
Comparison of Cyber-Dependent and Traditional Crime
Conclusion and Discussion
Appendix 1: Pattern Matrix Principal Component Analysis
Appendix 2: Evidence for Significant Differences in Motives Between Clusters
Examining Gender-Responsive Risk Factors That Predict Recidivism for People Convicted of Cybercrimes
Prior Research on Women’s Involvement and Risk to Reoffend
Women’s Pathways into “Street” Crime
Gender-Responsive Risk Factors and Assessment in Corrections
The Current Study
Outcome Variable
Independent Variables
Control Variables
Analysis Plan
Describing Women and Men Convicted of Cybercrime
Identifying Gender-Responsive Risk Factors for Cybercrime
Overall Risk to Reoffend
Criminogenic Needs Areas
Exploring Masculinities and Perceptions of Gender in Online Cybercrime Subcultures
Literature Review
Gendering Cybercrime
Data and Methods
The CrimeBB Dataset
Quantitative Methods
Data Cleaning
NLP Text Pre-processing
Frequency Distribution
Bigrams and Collocations
Topic Modelling
Qualitative Methods
Results and Findings
Data Science Findings
Bigrams and Collocations
Topic Modelling
Qualitative Results
Stalking and Gendered Victimisation as a Point of Initiation
Gender as a Resource and a Risk in Cybercriminal Practices
Gender Norms and Access to ‘Hacker’ Identity
Discussion and Concluding Thoughts
Child Sexual Exploitation Communities on the Darkweb: How Organized Are They?
Cyber-Facilitated CSE
Entrepreneurial and Illegal Governance Structures
Associational Structures
Case File Analysis
Complementary Interviews
Case File Descriptions
Darkweb CSE Fora as Criminal Marketplaces: Organization and Role Differentiation
Entrepreneurial Structures
Illegal Governance
Associational Structures
Part IV: Policing
Infrastructural Power: Dealing with Abuse, Crime, and Control in the Tor Anonymity Network
Introduction: Power, Crime, and Control Online
Context and Review of the Literature
Platforms, Privacy, and Abuse
Tor: The Dark Web as a Privacy Infrastructure
Internet Infrastructures and Strategies of Control
Navigating Crime and Power as a Rebel Infrastructure
Theory and Methodology: A Social Worlds Approach to Studying Internet Infrastructure
Research Methods
Privacy as a Structure: The Engineer World and Standardisation
Privacy as a Service: The Infrastructuralist World and Neutralisation
Privacy as a Struggle: The Activist World and Reclamation
Democratisation: From Disruption to Governmentality
Discussion and Concluding Thoughts: Infrastructural Power and its Limits
Cybercrime Reporting Behaviors Among Small- and Medium-Sized Enterprises in the Netherlands
Literature Review
The Current Study
Dependent Variables
Independent Variables
Self-Reported Victimization
Vignette Study
Self-Reported Victimization
Text Mining for Cybercrime in Registrations of the Dutch Police
Previous Research
Data Selection
Data Preparation
Classifier Selection
Classifier Evaluation
Estimating the Number of Police Registrations of Cybercrime
Classifier Selection
Classifier Evaluation
Estimation of Number of Registrations of Cybercrime
Law Enforcement and Disruption of Offline and Online Activities: A Review of Contemporary Challenges
Law Enforcement and Cybercrime Policies
The Budapest Cybercrime Convention (BCC)
Challenges in Policing Cybercrimes
Legal Framework(s), Countries’ Sovereignty, and Jurisdictions
Context of the Digital World
Current State of Policing Expertise and Resources
Detection and Reporting Rates
Policing Offline and Online: On the Effectiveness of Various Approaches
Police Offline Interventions: Current State of Knowledge
Hot Spots Policing and Crackdowns
Community-Oriented Policing
Problem-Oriented Policing
What Works in Policing Online Crime?
Thoughts on Online Unfocused Policing
Online Focused Policing: Previous Attempts and Suggestions for the Future
Partnerships Beyond Community Policing
Unique Offender, Unique Response? Assessing the Suitability and Effectiveness of Interventions for Cyber Offenders
Research Method
Are Cyber Offenders Unique? A Brief Overview
Motivational Factors
Personal and Contextual Factors
Personality and Psychological Factors
Online Peer Influence
Limited Parental Control
Low Risk (Perceptions) of Getting Punished and Limited Awareness of Illegality
Limited Awareness of Inflicted Harm
What Makes Interventions (Not) Work? Three Perspectives on the Effectiveness of Interventions
Deterrence Approach
What Works Approach
Desistance Approach
Assessing the Effectiveness of Interventions
Assessing the Effectiveness of Interventions Directed at Deterrence
Assessing the Effectiveness of Risk-Based and Strength-Based Interventions
Diagnoses of Needs, Strengths and Responsivity
Applying Traditional Interventions on Cyber Offenders
Risk-Based Interventions for Cyber Offenders
Strength-Based Interventions for Cyber Offenders
Combinations of Risk-Based and Strength-Based Interventions


Crime and Justice in Digital Society 1

Marleen Weulen Kranenbarg Rutger Leukfeldt Editors

Cybercrime in Context The human factor in victimization, offending, and policing

Crime and Justice in Digital Society

Series Editors
Anastasia Powell, Royal Melbourne Institute of Technology, Melbourne, VIC, Australia
Murray Lee, University of Sydney, Sydney, NSW, Australia
Travis Linnemann, Eastern Kentucky University School of Justice Studies, Richmond, KY, USA
Robin Cameron, School of Management, Royal Melbourne Institute of Technology, Melbourne, VIC, Australia
Gregory Stratton, Royal Melbourne Institute of Technology, Melbourne, VIC, Australia

Crime and Justice in Digital Society offers an exciting new platform for theoretical and empirical works addressing the challenges and opportunities of digital society and the Internet of Things for crime, deviance, justice and activism. As digital technologies become progressively embedded into our everyday lives, so too are human-technological interactions embedded into everyday crimes, as well as in cultural representations and justice responses to crime. There is a need for scholarly examination of the ways in which shifts in digital technologies, as well as socio-political and socio-cultural structures and practices, are producing and reproducing crime, justice and injustices in contemporary societies. This new book series aims to publish and promote innovative, interdisciplinary, and forward-thinking scholarship on crime, deviance, justice and activism in the context of digital societies. Both established and early career scholars are encouraged to submit proposals for research monographs or edited volumes. Crime and Justice in Digital Society is particularly welcoming of research that addresses issues of inequalities and injustices in relation to gender, race, sexuality, ability and/or class, as well as works that push the boundaries of conventional 'cyber' crime studies and engage with interdisciplinary frameworks from across criminology, sociology, studies of technology and society, media and cultural studies, politics, computer sciences and beyond.

More information about this series at


Editors Marleen Weulen Kranenbarg Department of Criminal Law and Criminology Vrije Universiteit (VU) Amsterdam Amsterdam, The Netherlands

Rutger Leukfeldt Netherlands Institute for the Study of Crime and Law Enforcement (NSCR) Amsterdam, The Netherlands Centre of Expertise Cyber Security The Hague University of Applied Sciences The Hague, The Netherlands

ISSN 2524-4701    ISSN 2524-471X (electronic)
Crime and Justice in Digital Society
ISBN 978-3-030-60526-1    ISBN 978-3-030-60527-8 (eBook)

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.


Part I Introduction

Introduction ........ 3
Marleen Weulen Kranenbarg and Rutger Leukfeldt

The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participation in the 2018 and 2019 Meetings ........ 5
Asier Moneva

Part II Victims

The Online Behaviour and Victimization Study: The Development of an Experimental Research Instrument for Measuring and Explaining Online Behaviour and Cybercrime Victimization ........ 21
M. Susanne van 't Hoff-de Goede, E. Rutger Leukfeldt, Rick van der Kleij, and Steve G. A. van de Weijer

No Gambles with Information Security: The Victim Psychology of a Ransomware Attack ........ 43
David L. McIntyre and Richard Frank

Shifting the Blame? Investigation of User Compliance with Digital Payment Regulations ........ 61
Sophie Van Der Zee

Protect Against Unintentional Insider Threats: The Risk of an Employee's Cyber Misconduct on a Social Media Site ........ 79
Guerrino Mazzarolo, Juan Carlos Fernández Casas, Anca Delia Jurcut, and Nhien-An Le-Khac

Assessing the Detrimental Impact of Cyber-Victimization on Self-Perceived Community Safety ........ 103
James F. Popham

Show Me the Money! Identity Fraud Losses, Capacity to Act, and Victims' Efforts for Reimbursement ........ 123
Johan van Wilsem, Take Sipma, and Esther Meijer-van Leijsen

Victims of Cybercrime: Understanding the Impact Through Accounts ........ 137
Mark Button, Dean Blackbourn, Lisa Sugiura, David Shepherd, Richard Kapend, and Victoria Wang

The Impact of a Canadian Financial Cybercrime Prevention Campaign on Clients' Sense of Security ........ 157
Cameron Coutu and Benoît Dupont

Part III Offenders

Saint or Satan? Moral Development and Dark Triad Influences on Cybercriminal Intent ........ 175
Nicole Selzer and Sebastian Oelrich

Cyber-Dependent Crime Versus Traditional Crime: Empirical Evidence for Clusters of Offenses and Related Motives ........ 195
Marleen Weulen Kranenbarg

Examining Gender-Responsive Risk Factors That Predict Recidivism for People Convicted of Cybercrimes ........ 217
Erin Harbinson

Exploring Masculinities and Perceptions of Gender in Online Cybercrime Subcultures ........ 237
Maria Bada, Yi Ting Chua, Ben Collier, and Ildiko Pete

Child Sexual Exploitation Communities on the Darkweb: How Organized Are They? ........ 259
Madeleine van der Bruggen and Arjan Blokland

Part IV Policing

Infrastructural Power: Dealing with Abuse, Crime, and Control in the Tor Anonymity Network ........ 283
Ben Collier

Cybercrime Reporting Behaviors Among Small- and Medium-Sized Enterprises in the Netherlands ........ 303
Steve G. A. van de Weijer, Rutger Leukfeldt, and Sophie van der Zee

Text Mining for Cybercrime in Registrations of the Dutch Police ........ 327
André M. van der Laan and Nikolaj Tollenaar

Law Enforcement and Disruption of Offline and Online Activities: A Review of Contemporary Challenges ........ 351
Camille Faubert, David Décary-Hétu, Aili Malm, Jerry Ratcliffe, and Benoît Dupont

Unique Offender, Unique Response? Assessing the Suitability and Effectiveness of Interventions for Cyber Offenders ........ 371
Wytske van der Wagen, Tamar Fischer, Sifra Matthijsse, and Elina van 't Zand

Index ........ 391

Part I

Introduction

Marleen Weulen Kranenbarg and Rutger Leukfeldt

The second Annual Conference on the Human Factor in Cybercrime was organized in October 2019 in The Netherlands. During this three-day, small-scale conference, many well-known international researchers presented their latest work on the human factor in cybercrime. The small scale of the conference enabled us to make all sessions plenary, which resulted in lively discussions of the presented research and very useful feedback for the presenters. A large selection of the presented work is included as chapters in this book.

This collection of chapters represents the state of the art of research on the human factor in cybercrime. All chapters are based on high-quality empirical research and span a variety of disciplines and theoretical and methodological approaches, all related to human factors in cybercrime. The goal of this edited volume and the annual conference is to inform academics about new developments in cutting-edge research on the human factor in cybercrime and to stimulate future research and collaborations. The next chapter, "The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participation in the 2018 and 2019 Meetings", presents descriptive analyses of the goals and network of participants in the first two editions of the conference.

Due to the global COVID-19 pandemic, the third annual conference, which was scheduled to be held in Montréal in November 2020, has been postponed until 2021. We are confident that we will continue to strengthen connections in this community and we
hope that the annual conference and its publications, such as this book, will inspire many new and established researchers in the field.

Similar to the session structure during the conference, the chapters in this book are grouped around three main themes: victims, offenders, and policing. Within these overarching themes, some subthemes can be identified. The victims section contains chapters on victim characteristics and behavior, consequences of victimization, and prevention of victimization. The offenders section contains chapters on both individual offenders and their characteristics, and organized groups or communities of offenders. The policing section discusses several aspects of policing cybercrime, such as the context of the Tor network, problems related to reporting crime to the police and the analysis of crime reports, and potential interventions.

The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participation in the 2018 and 2019 Meetings

Asier Moneva

Introduction

In the land of research, there is a vast forest of academic conferences that grows thicker as we speak. In this forest, many researchers, especially the most inexperienced, and generally the youngest, become uncertain about which conferences to attend. Generally constrained by limited budgets, researchers must choose a handful of these events at which to disseminate their work and build their networks if they want to have an impact on society. But the forest is so dense that one can easily get lost. Many trails lead to "first and only" events crafted with carefully chosen names broad enough to attract a wide range of participants (e.g. the first International Conference on Technology, Knowledge and Human Behaviour).1 And to accommodate many participants in a short time, large conferences need formulas that allow research to be presented simultaneously. But parallel sessions mean that most research goes unnoticed by many scholars who might find it relevant to their own work. Thus, many researchers end up in these generic events, where the task of effectively exchanging knowledge is overly complex. At these events, disguised as interdisciplinary, participants are likely to have such different agendas that it is difficult for them to find usefulness in each other's research. Amidst all this confusion, which conferences should one attend?

1  Very possibly there will never be a second edition of such conferences. Note that any resemblance to reality is pure coincidence.

A. Moneva (*)
Netherlands Institute for the Study of Crime and Law Enforcement (NSCR), Amsterdam, Netherlands
The Hague University of Applied Sciences, The Hague, Netherlands
e-mail: [email protected]




Fortunately, there are other formulas that bring together groups of committed, active participants who seek to put research into practice. Some groups of scholars and practitioners have tried to address the problem of abstraction by organising small conferences focused on particular problems, for example the Environmental Criminology and Crime Analysis (ECCA) Symposia (Wortley & Townsley, 2017). By bringing together the most influential actors in the field, these settings help raise the level of discussion and advance the discipline (see Bottoms, 2012). The participants of these conferences become a kind of "soft peer reviewer" who helps shape research designs and interpret results in a stimulating context. In addition, their criticism helps uncover alternative explanations for findings, discuss results, and identify directions for future research. Sadly, such conferences are needles in a haystack.

In pursuit of the same goals that advance science, the Annual Conference on the Human Factor in Cybercrime was conceived. This chapter provides an overview of the 2018 and 2019 editions of this conference in order to analyse their strengths and identify which aspects could be improved, so as to guide the organisation of future editions. After describing the event and its structure in the next section, the chapter presents a series of descriptive analyses of aspects such as conference attendance, the origin of the participants, and the collaboration networks amongst them.

The Annual Conference on the Human Factor in Cybercrime

To learn about the origins of the Annual Conference on the Human Factor in Cybercrime, one needs to go back a few years and understand the development of intellectual movements in the context of growing interest in cybercrime research. One of the pioneering movements in this domain was the International Interdisciplinary Research Consortium on Cybercrime (IIRCC). Formally established in 2015, the IIRCC was conceived as a global initiative that brings together leading scholars in the field of cybercrime and cybersecurity with practitioners, regardless of their background, to achieve two main objectives: advancing the state of the art in the discipline and providing solutions for a secure Internet.2 As prolific researchers, the original proponents of this movement had a strong presence at the most important international scientific events, which constituted ideal scenarios for promulgating the principles of the IIRCC. Inspired by this initiative, researchers from all over the world began to join its ranks. According to its website,

2 It was during an informal gathering at the second Annual Interdisciplinary Conference on Cybercrime, hosted by Michigan State University (see eventdisplay/20375/the-2nd-annual-interdisciplinary-conference-on-cybercrime), that the participating scholars came up with the idea of giving their meetings a formal structure, thus originating what is now known as the IIRCC. For more information about the IIRCC, visit: https://



IIRCC members currently represent institutions from at least nine different countries. And they continue to thrive.

In parallel, the growing interest in cybercrime research was becoming evident at the two major criminology conferences: the Annual Conference of the European Society of Criminology (EUROCRIM) and the American Society of Criminology (ASC) Annual Meeting. After two decades of history, nobody disputes that these are the most important criminological research conferences in their respective continents.3 Over the years, participation in both conferences has continued to grow (Aebi & Kronicz, 2019),4 along with the presence of cybercrime researchers, setting the tone for new initiatives that transcend territorial boundaries. In response to the increasing volume of cybercrime contributions at both conferences (Fig. 1), a group of leading scholars in the field—including many members of the IIRCC—resolved to organise the participation of cybercrime researchers by founding the European Society of Criminology Working Group (ESC WG) on Cybercrime in 2016 and the Division of Cybercrime in 2019. As shown in Table 1, although each has its particularities, both groups preserve the essence of the IIRCC. This is reflected in their mission of bringing together scholars from the fields of cybercrime and cybersecurity to exchange knowledge from a global perspective.

Because of its recent creation, the Division of Cybercrime is still in its infancy, but the ESC WG on Cybercrime has been operating for a few years now. To promote the objectives set within the framework of EUROCRIM, one of the fundamental tasks undertaken by the chairs of the ESC WG on Cybercrime consists of arranging all cybercrime presentations in such a way that there are no parallel sessions, so that all scholars interested in the topic can attend every presentation. This is no easy task, as to date the working group comprises 83 researchers, but it certainly favours cybercrime research and creates a meeting point for cybercrime scholars.

However, even if these organisations succeed in this task and manage to improve the cybercrime research scenario at their respective conferences, they would still have to solve other problems in order to achieve their goals. First, the scope of these conferences is, as their names suggest, territorially limited, and to attend such events one must become a member of the societies that organise them (i.e. the ESC or the ASC) by paying a fee. Therefore, the very nature of each conference limits the networking capacity of the participants and, with it, the ability to advance the field. Second, EUROCRIM and the ASC Annual Meeting are criminology conferences that are full of criminologists. This matters because cybercrime research encompasses a very wide field and incorporates many objects of study that are approached from very different theoretical frameworks and methodologies, so it is impractical to approach it from a single discipline. This breadth also sometimes makes communication between cybercrime researchers difficult.

3 For more information about the conferences, visit for EUROCRIM, and html for the ASC Annual Meeting.
4 The historical ASC Annual Meeting attendance figures can be consulted in: https://www.asc41.com/history/Annual%20Meeting%20Misc/ASC_Annual_Meeting_Attendance_Figures.pdf



Fig. 1 Number of contributions presented at the ASC Annual Meeting and EUROCRIM with the string "cyber" in the title. (Source: ASC Annual Meeting and EUROCRIM final programmes, 2001–2019)
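A count like the one in Fig. 1 can be reproduced from a conference programme with a simple case-insensitive substring match on presentation titles. The sketch below is illustrative only: the titles are invented placeholders, not entries from the actual ASC or EUROCRIM programmes.

```python
# Count programme titles containing the substring "cyber", matched
# case-insensitively, as in Fig. 1. Titles are invented examples.
titles = [
    "Routine Activities and Cybercrime Victimization",
    "Burglary Patterns in Urban Areas",
    "Policing the Cyber Underground",
    "CYBER Fraud and the Elderly",
]

cyber_count = sum("cyber" in title.lower() for title in titles)
print(cyber_count)  # 3
```

A real reproduction would apply the same one-line filter to the title column of each year's final programme and plot the yearly totals.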

Table 1 Objectives of the ESC WG on Cybercrime and the Division of Cybercrime

ESC WG on Cybercrime:
1. Advancing knowledge and research on cybercrime and cybersecurity across Europe (both substantively and methodologically) and other parts of the world, including the United States, the Middle East, and Asia, with plans to expand to other parts of the world
2. Creating a network for information exchange and international collaboration between leading scholars, starting scholars, graduate students, government agencies, and private organizations involved in cybercrime research

Division of Cybercrime:
1. To bring together in one multi- and inter-disciplinary division those actively engaged in research, teaching, and/or practice in the field of cybercrime and cybersecurity
2. To encourage scholarly, scientific, and practical exchange and collaboration concerning cybercrime and cybersecurity within a global perspective
3. To develop effective cybercrime prevention strategies and practices
4. To provide a forum for interaction and the exchange of ideas among persons involved in cybercrime and cybersecurity
5. To promote conference sessions pertaining to cybercrime and cybersecurity

Source: the ESC WG on Cybercrime website and the Constitution of the Division of Cybercrime



To illustrate the potential communication problems, note that there is a great distance between the most technical approaches, which require knowledge of computer engineering and data science, and the most theoretical approaches, which require a deep understanding of phenomena from the social sciences. In a field as young as cybercrime, this divergence allows many research questions to be examined. But to achieve greater depth and scientific rigour, certain questions need to be pursued from the standpoint of specialisation. In favour of the latter, scholars promoting a new conference model urged a more specific thematic shift: from cybercrime to the human factor in cybercrime.

The Human Factor in Cybercrime encompasses several aspects: the victims who suffer from it, the offenders who commit it, the police strategies implemented for its formal control, the role that people and institutions play in its social control, the interaction between all these actors and the environment for its prevention, and the contribution of criminological theory to understanding and modelling all of them (Leukfeldt, 2017; Leukfeldt & Holt, 2020). The study of these topics is approached primarily through the lens of the social sciences, but it needs both interdisciplinarity to thrive and a strong venue for the transfer of knowledge.

To overcome these obstacles, two IIRCC members proposed at the 2017 ASC Annual Meeting in Philadelphia to organise a different conference scheme in 2018. The new conference would not be merely continental but global, and no membership fees would be requested; participants would pay only the costs incurred by their participation. Participation would be open to any academic active in the field who wants to present their work among peers. Submitted abstracts would then undergo a peer review process that would keep the conference small in participation and linear in its development (i.e. no parallel sessions).
In addition to the panels, sessions would include roundtables that address hot topics, keynotes by stakeholders to identify research needs, and pitch sessions to promote collaboration on upcoming research ideas. Such format would encourage all presentations to be heard and receive input from the audience, thus generating richer discussions that improve the knowledge produced. Furthermore, this simpler structure would facilitate the incorporation of stakeholders into these discussions, so that the research produced can be applied, reach the public, and impact on society. In this way, the conference would help to strengthen the link between academia and practice, to promote international collaboration between scholars in the same field, to provide soft peer review on the scholars’ work, and ultimately to provide an environment that focuses on advancing the field. This conference would have one additional peculiarity: it would narrow its thematic scope to the Human Factor in Cybercrime. As a result of both a new conference format and a thematic shift, the Annual Conference on the Human Factor in Cybercrime was created. After the first edition was held in 2018 in Israel, a second one was held in 2019 in The Netherlands consolidating its presence. The third edition—to be held in 2021 in Canada—is already in preparation, ensuring its continuity.


A. Moneva

The Present Study To better understand the growth and development of the Annual Conference on the Human Factor in Cybercrime, an overview of the participation in its two editions of 2018 and 2019 is provided. Inspired by the work of Bichler and Malm (2008) on the ECCA group, in this chapter descriptive analyses—including network analysis— are conducted to better understand the participation in the conference and its structure. The ultimate goal is to assess its strengths and weaknesses to evaluate whether the conference is directed towards achieving the objectives for which it was intended.

Data Three main data sources are used in this paper. The first is the data retrieved from the public programme of the conferences and related emails5; the second is the information about the members of the ESC WG on Cybercrime and the IIRCC publicly available on their respective websites; and the third are the original participation files maintained by the organisers. The latter had to be used to complement the others because the public programme of the 2019 conference only contained the names of the presenters and not all the co-authors. Note that data pertaining to members of the Division of Cybercrime were not included since they were not yet publicly available due to its still recent creation. In addition, informal conversations with the organisers, and other secondary and external public sources were consulted to complete information on participants (e.g. Google Scholar, personal and institutional websites). The final dataset includes the name of participants, their affiliation, and country where they develop their professional activity, whether they are members of the ESC WG on Cybercrime or members of the IIRCC—the seeds of the Annual Conference on the Human Factor in Cybercrime—whether they are stakeholders or academics, whether they participated in each of the two meetings of the conference, whether they constituted the organising committee, and their network of co-authors in such meetings. Regarding the latter, tidy data required to explore the collaboration network is composed of two separate datasets, one for the participants and their characteristics (i.e. nodes) and another delineating the connections between the nodes (i.e. edges). Note that participation is, therefore, measured by the authorship of the contributions submitted, not by physical attendance.

5  For the 2018 programme, see; for the 2019 programme, see

The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participati…


Analytic Strategy A dual analysis strategy is used in this paper. Firstly, a descriptive analysis of the variation in the volume and composition of participation between the 2018 and 2019 conference editions is carried out. This includes the variation in attendance with respect to the type of participants, the type of institutions, and the number of countries involved. Secondly, Social Network Analysis (SNA) is conducted to examine the collaborative networks in each of the conference editions. SNA allows to study the individuals that compose a network and the relations that exist between them (Wasserman & Faust, 1994). In this study, the individuals that comprise the network are the participants of the two editions of the conference, and the relationships that exist between them are the collaborations found in the contributions presented at the conference. The collaborations in each network are displayed in the form of cliques (Luce & Perry, 1949), subnetworks of participants that are connected to each other. The cohesion of the network is measured by calculating its density, which indicates the ratio of existing relationships (ER) to possible relationships (PR),

Density =


where PR is calculated depending on the size of the network (n).

PR n =

n × (n − 1) 2

So, if all participants collaborated with each other forming a large clique, the density of the network would be 1, whereas individual participation in all cases would produce a density of 0. Data transformation and data visualisation were executed using the tidyverse R package version 1.3.0 (Wickham et  al., 2019), the sf R package version 0.9–3 (Pebesma, 2018), and the igraph R package version 0.8.1 (Csárdi & Nepusz, 2006) in RStudio version 1.2.5042 (RStudio Team, 2019) for the R free software version 3.6.2 (R Core Team, 2020).

Results The results of the descriptive analysis of participation at the conferences held in 2018 and 2019, and how it varied from one edition to another, are shown in Table 2. To this end, participation was analysed at three levels of aggregation: individual, institutional, and national. In general terms, participation in 2019 multiplied compared to 2018, which reflects in an increase in absolute numbers of each of the parameters in the table. However, the percentages reveal the change in participation


A. Moneva

Table 2  Variation in participation records in the two editions of the Annual Conference of the Human Factor in Cybercrime

Attendance Participants  Organising committee  ESC WG on cybercrime  IIRCC  Stakeholders Institutions  Law enforcement  Research  Government Countries

Conference edition 2018 2019 n % n 26 79 6 23.1 5 12 46.2 27 7 26.9 10 1 3.8 8 14 33 1 7.1 4 13 92.9 28 0 0.0 1 5 10

% 6.3 34.2 12.7 10.1 12.1 84.8 3.0

Variation n % 53 −1 −16.8 15 −12.0 3 −14.2 7 6.3 19 3 5.0 15 −8.1 1 3.0 5

in relative terms. Thus, even though the number of members of the ESC WG on Cybercrime doubled with respect to 2018, their participation decreased by 12% with respect to the total number of attendees. And a similar effect is observed for IIRCC participants (−14.2%). The number of stakeholders involved increased from 1 to 8, representing a 6.3% increase over total attendance. Note that being a member of the ESC WG on Cybercrime and/or the IIRCC, and being a stakeholder are non-­ exclusive categories (i.e. stakeholders can also be members of these organisations). At the institutional level, the number of unique entities represented increased by 19. In this case, the variation in the distribution of participation at the institutional level implied a relative increase in the participation of government representatives (3%) and law enforcement agencies (5%) to the detriment of research entities, whether they are universities or research institutes (−8%). Finally, the number of countries represented increased from 5 to 10. While in 2018 only three continents were represented (i.e. Europe, America and Asia), in 2019 all five continents have some form of representation. However, in both cases most participants came from Western Europe and North America (Fig. 2). Below, the second part of the analysis serves to graphically illustrate conference networking and to examine it in detail. Figure 3 shows the collaborative networks for the 2018 and 2019 meetings. Three features were used to characterise the participants in the network: the size, to distinguish the organising committee; the colour, to indicate whether the participants belong to the ESC WG on Cybercrime and/or the IIRCC; and the shape, to differentiate whether the participants are stakeholders or not. The existing collaborations in the network are displayed as cliques of two or more nodes, which increased from 7 in 2018 to 24 in 2019. 
For both conference editions, such collaborations are generally mixed between ESC WG on Cybercrime and/or IIRCC members and non-members. In contrast, stakeholders are rarely involved in collaborations with academics, a circumstance only observed on two occasions at the 2019 meeting. Apparently, the members of the ESC WG on

The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participati…


Fig. 2  Participating countries in the two editions of the Annual Conference of the Human Factor in Cybercrime

Fig. 3  Collaboration networks in the two editions of the Annual Conference of the Human Factor in Cybercrime

Cybercrime and/or IIRCC play a fundamental role in promoting the cohesion of the network, as they are usually the nexus between various collaborations. Some of them also integrate the organising committee in both editions, which seems more engaged in collaboration in the 2019 meeting. Regarding the cohesion of the network, density analyses yield a value of 0.07 in the 2018 network versus 0.03 in the 2019 network. This means that the ratio of collaborations per participant was higher in the first edition.


A. Moneva

Discussion Although research on the development and purpose of academic conferences is scarce, such an object of study constitutes a cornerstone for the exchange of knowledge that allows for the advancement of scientific disciplines. Research pieces such as Bichler and Malm (2008) on ECCA Symposiums are as infrequent as they are undervalued. In their paper, the authors identify some weaknesses in the social structure of the symposia that allows for their reinforcement in the future. At the very least, the most relevant conferences should consider appointing a commission to conduct this type of research, which serves to evaluate their function and reorient their design. For this reason, we dedicate this chapter to the analysis of the participation in the two Annual Conferences of the Human Factor in Cybercrime held in 2018 and 2019 and the collaboration networks generated within them. Such action allows us to outline some important aspects to be taken into account for the organisation of future editions of the conference. The first aspect to be highlighted from the conferences is that participation increased considerably in the second edition. One possible explanation is that the success of the first meeting held in Israel increased its popularity among researchers, but it is also likely that the venue for the second meeting (i.e. The Netherlands) was more accessible to participants given their predominant Western background. Such Western dominance has also been observed in similar conferences (Bichler & Malm, 2008). Here it should be noted that participation is mediated by the organising committee. Since its members are responsible for selecting the contributions presented at the conference, it is possible that their preferences bias participation. For example, they may prioritise those contributions in which they collaborate, or they may favour some methodological approaches over others based on their own expertise (e.g. quantitative vs. qualitative). 
This, in turn, would affect participant diversity. A second relevant aspect to be discussed is the increased presence of stakeholders representing law enforcement agencies and government entities in the conference. Although they still constitute a small percentage of the total number of participants (10.1%), their presence has escalated in the 2019 meeting, even resulting in some joint collaborations with academics. A third aspect to be noted is that the participation of representatives from other countries also increased, bringing a greater diversity of perspectives to the debate due to the more diverse background of the participants (Bichler & Malm, 2008). Together, this resulting expansion is also reflected in the collaborations between scholars, which have increased in total numbers with respect to the first meeting. Such growth causes the cohesion of the network to decrease in the second meeting, since an arithmetical increase in participation requires a geometric increase in collaborations to maintain the same density. For example, a conference with five participants forming one clique would have a density of 1. If the following year this conference doubled the participation to 10, it would not be enough to also double the collaboration by forming two cliques of five participants, since the potential collaborations would be many more in the second case (i.e. PR5  =  10 compared to

The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participati…


PR10 = 45). For this reason, a lower density does not necessarily mean that there is less collaboration among conference participants in the 2019 meeting compared to the 2018 meeting, but rather that it is the result of the growth of the network. Given that the two networks analysed differ greatly in size, it is appropriate to consider network density as an individual measure and not as a comparative one, at least for the time being. Having assessed the strengths and weaknesses of the conference, a number of recommendations can be listed to help improve its future orientation. Firstly, it appears that the work of the organising committee is bearing fruit by increasing the popularity of the conference in terms of participation and outreach. Future meetings will have to find the balance between size and cohesion so that communication between participants is fluid and encourages both the formation of new collaborations and the enrichment of scientific discussions. A good practice in this regard is the central role assumed by the organising committee in the collaboration networks of the 2019 edition. Upcoming meetings could benefit from the committee’s position to cohere the network of participants. Secondly, the participant networks of both editions show that collaboration between researchers and stakeholders is still scarce. Although the involvement of stakeholders is not an objective of the conference, for the research presented to be applied, it is important to encourage the presence of stakeholders that constitute the link between academics and practitioners for two reasons: so that research can be used to solve real problems, and so that strategies to solve such problems are evidence-based. After all, any  actors working to mitigate cybercrime and contributing to a better society may benefit from working together. Thirdly, the diversity of participants is essential. 
Participants from different backgrounds can help the network of academics to identify research needs and provide stakeholders with new perspectives on solving existing problems. Keeping the conference focused on interdisciplinarity would be a step forward in this direction. However, there are some aspects that were not addressed in this chapter and that—at the same time—pave the way for future research. Note that this chapter only measures collaborations within The Annual Conference on the Human Factor in Cybercrime network, so any other existing collaborations not reflected in the conference programmes were not considered in the analysis. Therefore, it is likely that collaborations between the members of the network are more frequent than what is shown here. Participation of early career researchers was not specifically examined either. Future research should address this issue by devoting special attention to the definition of early career researcher and collecting appropriate data. Generally, it is indispensable that participants’ affiliation and membership data are up to date for a rigorous analysis. In this regard, along with continuing to use open data sources, it is advisable to design a specific instrument to collect the data required for evaluating participation in future editions of the conference (i.e. a questionnaire that includes the informed consent of the participants and that complies with current data protection regulations).


A. Moneva

Conclusions This chapter assessed the participation in the two editions of the Annual Conference on the Human Factor in Cybercrime. Two main conclusions can be drawn from the analyses: (1) that the 2019 edition enjoyed a more numerous and varied participation, both in terms of individuals, institutions and countries; and (2) that the members of the ESC WG on Cybercrime and the IIRCC are instrumental in sustaining collaborative networks among participants, despite the fact that there are still many isolated nodes. Overall, it seems that the latest edition of the conference is closer to achieving the objectives for which it was conceived. Of course, the fact that only two conference meetings were held limits the scope of the recommendations presented in this paper. Nevertheless, with the information available, some structural patterns in participation can be observed that allow useful recommendations to be made. Data from future editions of the conference will allow for more robust analyses that will in turn serve to provide more reliable suggestions. Acknowledgements  To E. Rutger Leukfeldt and Marleen Weulen Kranenbarg for providing the data for the study. To Thomas J. Holt for providing details on the origin of the IIRCC. To Cristina Del-Real for her comments on an earlier draft of this manuscript. To the reviewers, for their insightful comments that helped improve the manuscript. Funding: This work was supported by the Spanish Ministry of Science, Innovation and Universities under Grant FPU16/01671, and under Grant EST18/00043.

References Aebi, M. F., & Kronicz, G. (2019). ESC Executive Secretariat annual report 2018. Newsletter of the European Society of Criminology, 18(2), 4–8. Bichler, G., & Malm, A. E. (2008). A social network analysis of the evolution of the Environmental Criminology and Crime Analysis (ECCA) symposiums. Crime Patterns and Analysis, 1, 5–22. Bottoms, A. (2012). Developing socio-spatial criminology. In M. Maguire, R. Morgan, & R. Reiner (Eds.), The Oxford handbook of criminology (pp. 450–489). Oxford: Oxford University Press. Csárdi, G., & Nepusz, T. (2006). The igraph software package for complex network research. InterJournal, Complex Systems, 1695(5), 1–9. Leukfeldt, E. R. (Ed.). (2017). The human factor in cybercrime and cybersecurity. The Hague: Eleven International Publishing. Retrieved from catalogus/research-­agenda-­the-­human-­factor-­in-­cybercrime-­and-­cybersecurity-­1 Leukfeldt, E.  R., & Holt, T.  J. (Eds.). (2020). The human factor of cybercrime. Abingdon: Routledge. Luce, R. D., & Perry, A. D. (1949). A method of matrix analysis of group structure. Psychometrika, 14(2), 95–116. Pebesma, E. (2018). Simple features for R: Standardized support for spatial vector data. The R Journal, 10(1), 439.­2018-­009 R Core Team. (2020). R: A language and environment for statistical computing (version 4.0.0) [Computer software]. Vienna: R Core Team. Retrieved from https://www.R-­

The Annual Conference on the Human Factor in Cybercrime: An Analysis of Participati…


RStudio Team. (2019). RStudio: Integrated development environment for R (Version 1.2.5) [Computer software]. Vienna: RStudio Team. Retrieved from Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge: Cambridge University Press. Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., … Yutani, H. (2019). Welcome to the Tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi. org/10.21105/joss.01686 Wortley, R., & Townsley, M. (Eds.). (2017). Environmental criminology and crime analysis (2nd ed.). Taylor & Francis Group: Routledge.

Part II


The Online Behaviour and Victimization Study: The Development of an Experimental Research Instrument for Measuring and Explaining Online Behaviour and Cybercrime Victimization M. Susanne van ’t Hoff-de Goede, E. Rutger Leukfeldt, Rick van der Kleij, and Steve G. A. van de Weijer

Introduction Cybercrime is common and its impact can be significant for victims (Cross, Richards, & Smith, 2016; Jansen & Leukfeldt, 2018; Leukfeldt, Notté, & Malsch, 2019). Cybersecurity professionals have tried to reduce victimization with technical measures such as anti-virus scanners and firewalls. However, these measures often have only a limited effect and much victimization can be traced back to human behaviour (Jansen, 2018; Leukfeldt, 2017). For example, internet users may fill in information on a phishing1 website when they should not, thereby allowing 1  Phishing is a form of an online scam, in which criminals copy the emails or websites of legitimate organisations to mislead victims in order to obtain login details and gain access to online accounts.

M. S. van ’t Hoff-de Goede (*) Centre of Expertise Cyber Security, The Hague University of Applied Sciences, The Hague, The Netherlands e-mail: [email protected] E. R. Leukfeldt Centre of Expertise Cyber Security, The Hague University of Applied Sciences, The Hague, The Netherlands Netherlands Institute for the Study of Crime and Law Enforcement (NSCR), Amsterdam, The Netherlands R. van der Kleij Centre of Expertise Cyber Security, The Hague University of Applied Sciences, The Hague, The Netherlands The Netherlands Organisation for Applied Scientific Research (TNO), The Hague, The Netherlands S. G. A. van de Weijer Netherlands Institute for the Study of Crime and Law Enforcement (NSCR), Amsterdam, The Netherlands © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 M. Weulen Kranenbarg, R. Leukfeldt (eds.), Cybercrime in Context, Crime and Justice in Digital Society I,



M. S. van ’t Hoff-de Goede et al.

criminals to misuse that information. Therefore, research into internet users is essential to reduce victimization (Leukfeldt, 2017; Rhee, Kim, & Ryu, 2009; Talib, Clarke, & Furnell, 2010). If we want to prevent cybercrime victimization, we must first explain victimization. Previous cybercrime victimization studies have focused on establishing a risk profile for victims and have attempted to identify factors that could increase the risk of victimization. In these studies, personal characteristics and routine activities are often central, for example, by assuming that certain routine activities, such as using social media, make potential victims more visible to cybercriminals. However, taking the studies together, it does not seem possible to establish an unambiguous risk profile (Bossler & Holt, 2009, 2010; Holt & Bossler, 2013; Sheng, Holbrook, Kumaraguru, Cranor, & Downs, 2010; Van de Weijer & Leukfeldt, 2017). Cybercriminals are apparently not too picky and do not select whom they attack: Everyone is a potential victim of cybercrime. Moreover, it appears that certain online activities are only related to the risk of victimization of specific forms of cybercrime. There do not seem to be any routine activities that are by definition risk-­ increasing (Leukfeldt & Yar, 2016). It is thus not possible to outline a profile of high-risk personal characteristics or routine activities for cybercrime victimization (Leukfeldt, 2014). The current study focuses on the behaviour of internet users in explaining online victimization. It has been widely recognized that humans are the “weakest link” in cybersecurity. Unsafe online behaviour, such as using weak passwords and not updating software regularly, may increase the risk of cybercrime victimization (Leukfeldt, 2014; Shillair et  al., 2015). However, knowledge about how citizens defend themselves against cybercrime is scarce (for an overview, see, for example, Leukfeldt, 2017). 
It is still unknown how well internet users protect themselves against cybercrime, partly because how people say or think they behave online is not always the same as how people actually behave online (Crossler et  al., 2013; Debatin, Lovejoy, Horn, & Hughes, 2009; Workman, Bommer, & Straub, 2008). However, such knowledge is indispensable for the empirical foundation of possible behavioural interventions. It is therefore necessary to gain more insight into the way internet users actually behave online and which factors are associated with this. This chapter will outline the development of the research tool for the online behaviour and victimization study that can measure actual online behaviour along with possible explanatory factors. The added value of this research instrument is evident: we go beyond existing studies that are often based on self-report by measuring both perceived and actual behaviour among a large-scale sample. Moreover, we do aim to explain not only victimization of specific forms of cybercrimes, but also several clusters of online behaviour. After all, there are many types of behaviour that increase the risk of certain cybercrimes. In addition, simultaneously, it does not have to be the case that a certain behaviour always leads to a certain form of victimization. On one occasion, falling for a phishing email can lead to an empty

The Online Behaviour and Victimization Study: The Development of an Experimental…


bank account, while on another, it can lead to a ransomware2 infection or be the start of a spear phishing3 attack on the company where the victim works (see, for example, Leukfeldt, Kleemans, & Stol, 2017; Lusthaus, 2018). Therefore, the research instrument presented in this chapter objectively measures a number of behaviours that we know to be directly related to the victimization of various cybercrimes, such as sharing personal information and using weak passwords. Furthermore, this research instrument is innovative because it measures various explanations for online behaviour and victimization, while existing studies often only examine attitudes or awareness Finally, the tool includes several experiments to determine, for example, whether persuasion techniques used by criminals make individuals more likely to engage in unsafe online behaviour.

Online Behaviour and Cybercrime Victimization Unsafe Online Behaviour as a Predictor of Online Victimization Unsafe online behaviour can directly contribute to increased risk of victimization. Victims of online banking fraud, for example, often appear to have inadvertently given their personal information to fraudsters, for example by clicking on a hyperlink in a phishing email or entering information on a phishing website (Jansen, 2018; Jansen & Leukfeldt, 2015, 2016). An important condition for online safety is therefore safe online behaviour (i.e. cyber hygiene behaviour, Cain, Edwards, & Still, 2018). People who behave safely online—or cyber hygienically—adhere to “golden” rules (best practices). For example, they avoid unsafe websites, prevent clicking on unreliable hyperlinks, use strong passwords and keep their technical security measures up to date (Cain et al., 2018; Crossler, Bélanger, & Ormond, 2017; Symantec, 2018). Based on previous empirical studies, we identified seven central behavioural clusters for this study: password management, backing up important files, installing updates, using security software, being alert online, online disclosure of personal information and handling attachments and hyperlinks in emails. When internet users display safe behaviour within each cluster, this may protect them from cybercrime victimization (for more information, see Cain et  al., 2018; Crossler et  al., 2017; Van Schaik et al., 2017). Previous studies, based on both self-reported behaviour and actual behaviour in experimental settings, have shown that many people only behave safely online to a limited degree or even display patently unsafe online behaviour, on each of the

2  Ransomware is a malicious software that blocks a computer or encrypts files. Only when you pay a ransom you are able to use the computer or files again. 3  Spear phishing is a targeted phishing attack against a person or a specific group of people.


M. S. van ’t Hoff-de Goede et al.

seven behavioural clusters. Many people do not have a malware4 scanner or firewall on their home computer, or do not keep them up to date (Cain et  al., 2018; Van Schaik et al., 2017). In addition, young people are lax with their smartphone security (Jones & Heinrichs, 2012; Tan & Aguilar, 2012). While the use of unique strong passwords is an important security measure, studies have shown that 50–60% of passwords are reused across platforms, and that many people would share their passwords with others (Alohali, Clarke, Li, & Furnell, 2018; Cain et  al., 2018; Kaye, 2011). Another example of unsafe online behaviour is that people share personal information on social media on a large scale (Christofides, Muise, & Desmarais, 2012; Debatin et  al., 2009; Talib et  al., 2010), which can be used to make phishing emails more credible (spear phishing) or to commit identity fraud. For example, many of the respondents in the study by Talib et al. (2010) shared their full name and email address (62%), date of birth (45%), or full address (7%) on an online social network. Finally, online deviant behaviours, such as illegal downloading, online bullying and threatening others, are common and contribute to online victimization, possibly especially among young people (Bossler & Holt, 2009; Holt & Bossler, 2013; Maimon & Louderback, 2019; Ngo & Paternoster, 2011). A further conclusion that can be drawn from the literature is the added value of focusing on behaviour rather than on specific cybercrimes. Hacking victimization, for example, can be caused by many different behaviours. For example, people can be hacked because they have shared personal information, downloaded malware, or do not have up-to-date security. Moreover, these behaviours can also lead to victimization of other forms of cybercrime, such as online fraud or identity fraud. Studies that focus on specific crimes only provide insight into a small part of the complexity of online behaviour and cybercrime. 
By focusing on online behaviour, on the other hand, a wide range of cybercrimes can potentially be tackled.

Explaining Online Behaviour Although safe online behaviour may be of great importance to prevent cybercrime victimization, unsafe online behaviour is common. How can this be explained? Based on two theories that have previously been used to explain behaviour, Protection Motivation Theory (PMT) (Floyd, Prentice-Dunn, & Rogers, 2000; Norman, Boer, & Seydel, 2005) and the COM-B framework (Capability, Opportunity, Motivation, Behaviour) (Michie, Van Stralen, & West, 2011), several constructs can be distinguished that each may play a role in unsafe online behaviour. These are motivation for safe online behaviour, knowledge about safe online behaviour (i.e. awareness) and opportunity for safe online behaviour. After discussing these factors and previous studies on their relationships with online behaviour, this chapter will also focus on other potentially relevant factors.

4  Malware is malicious software that is installed on your computer unsolicited and usually unnoticed. Examples of malware are viruses, Trojan horses, worms, and spyware.

The Online Behaviour and Victimization Study: The Development of an Experimental…


Motivation According to PMT, how well we protect ourselves is influenced by the degree to which we are motivated to protect ourselves (Floyd et  al., 2000; Norman et  al., 2005). People with high protection motivation supposedly act more cautiously and take measures to protect their safety (Crossler & Bélanger, 2014; Floyd et al., 2000). It is argued in PMT that protection motivation is influenced by coping appraisal and threat appraisal; a persons’ evaluation of the threat and the measures against this threat (Floyd et al., 2000). Both threat appraisal and coping appraisal have several components. The components of threat appraisal are perceived vulnerability (assessment of one’s own vulnerability to the threat) and perceived severity (assessment of the severity of the threat). Coping appraisal includes the components response-­ efficacy (whether a measure will be effective against the threat), self-effectiveness (whether he/she is able to implement an effective measure) and response costs (whether the estimated costs of taking measures are worth it). PMT has previously been applied to online behaviour. Previous studies found that estimated response-efficacy, self-efficacy and response costs seem to be important predictors of safe online behaviour (Arachchilage & Love, 2014; Crossler et al., 2017; Crossler & Bélanger, 2014; Jansen & van Schaik, 2017; Rhee et al., 2009; Van Schaik et al., 2017; Workman et al., 2008). However, perceived vulnerability may not be related to safe online behaviour in the expected manner. People who consider themselves vulnerable to online attacks do not behave differently (Jansen, 2018) and may even behave less safely (Crossler & Bélanger, 2014). Related to perceived vulnerability, Boss, Galletta, Lowry, Moody, and Polak (2015) found that fear of victimization did not seem to affect the intention of computer users to back up their files, while it did seemed to increase their intention to use anti-malware software. 
Finally, most studies find a relationship between perceived severity and online behaviour (Crossler et al., 2017; Jansen, 2018; Jansen & van Schaik, 2017). However, Downs, Holbrook, and Cranor (2007) did not find the estimated severity of the consequences of a successful phishing attack to be a predictor of precautionary behaviour in their sample of 232 computer users. Unfortunately, very few studies have gone beyond studying protection motivation and attitudes to measure online behaviour. The few that did mainly focused on self-reported precautionary behaviour. It therefore remains unclear how motivation may be related to actual online behaviour.

Knowledge/Awareness The theoretical COM-B framework (Michie et al., 2011) suggests that in addition to motivation, a necessity for safe online behaviour is capability (i.e. knowledge about online safety), also referred to as awareness. Examples are knowledge about online threats, information security and safety measures, and being able to recognize malicious URLs.


M. S. van ’t Hoff-de Goede et al.

Previous studies that investigated the extent to which knowledge of IT and cybersecurity influences online behaviour yielded ambiguous results (Alohali et al., 2018; Arachchilage & Love, 2014; Cain et al., 2018; Downs et al., 2007; Holt & Bossler, 2013; Ovelgönne, Dumitras, Prakash, Subrahmanian, & Wang, 2017; Parsons, McCormac, Butavicius, Pattinson, & Jerram, 2014; Shillair et al., 2015). For example, Arachchilage and Love (2014) showed that knowledge, such as recognizing an unreliable URL, increases self-efficacy and may contribute to phishing risk-avoiding behaviour. In addition, people who are able to evaluate URLs and understand internet icons and internet terms may be less vulnerable to phishing attacks (Downs et al., 2007). Furthermore, people who say they are IT experts seem less likely to display unsafe online behaviour (Alohali et al., 2018). On the other hand, Ovelgönne et al. (2017) found that software developers exhibit risky online behaviours more often than other respondents do. This may be because people sometimes overestimate their knowledge of internet security and unjustly classify themselves as IT experts (Debatin et al., 2009); indeed, Cain et al. (2018) found that people who considered themselves experts in IT behaved less safely online. Moreover, no difference in safe behaviour was found between those who were trained in IT or cybersecurity and those who were not. These studies have made an important step towards exploring the relationship between knowledge and online behaviour. However, findings remain mixed and more research is needed, in particular research into actual online behaviour and its association with knowledge. Opportunity According to the COM-B framework, knowledge and motivation alone may not be enough to elicit safe online behaviour. Opportunity is also needed, which refers to the social and material environment that makes behaviour possible or impossible (Michie et al., 2011).
While the association between opportunity and behaviour has attracted the attention of researchers in other fields, such as dietary behaviour (Michie et al., 2011), research into the influence of opportunity on online behaviour is scarce. The social environment refers to how the people around us influence our behaviour. For example, the privacy settings of users of online social networks are related to the number of online friends with private profiles (Lewis, Kaufman, & Christakis, 2008). Moreover, Herath and Rao (2009) showed that the social influence of direct colleagues and managers can have a major impact on safe online behaviour within organizations. To the best of our knowledge, however, the relationship between social environment and online behaviour in private settings has not been studied further. The material environment refers to the availability of financial resources, time and tools that support safe practices. Many companies offer their employees tools, such as privacy screens, that should enable safe online behaviour. Such tools and resources can help strengthen self-confidence in displaying desired behaviour (self-effectiveness) among employees (Herath & Rao, 2009). To date, however, the role that the material environment plays in online behaviour outside companies has been



the subject of few studies. It is therefore unclear how the material environment influences online behaviour in a private setting, where tools are available in a different way than in companies: citizens must actively purchase and implement safety measures and keep them up to date themselves. Financial opportunity is therefore a relevant factor: people who know they should not send personal photos via free transfer websites (knowledge) and are motivated to use a safer (paid) option (motivation) also need the financial leeway to do so (opportunity). Other Factors Another factor that can influence online behaviour is people's previous experiences, such as previous cybercrime victimization. Past experiences can be an important predictor of future behaviour (Debatin et al., 2009; Rhee et al., 2009; Vance, Siponen, & Pahnila, 2012). People may adjust their online behaviour after they have become victims of a cyberattack and start to behave more safely. For example, Facebook users who have had unpleasant experiences because they shared personal information on the platform seem to be more aware of the risks and better able to protect themselves (Christofides et al., 2012; Debatin et al., 2009). However, not all studies point in this direction, and previous victimization may not always directly lead to a change in online behaviour (Cain et al., 2018). It has also been argued that self-control is related to online behaviour (Bossler & Holt, 2010; Ngo & Paternoster, 2011). Self-control theory states that people with low self-control are impulsive, do not avoid risks and mainly focus on the short term (Gottfredson & Hirschi, 1990), which could increase their risk of becoming victims of cybercrime (Ngo & Paternoster, 2011).
The link between self-control and online victimization, however, may be indirect, running through other factors such as motivation (Floyd et al., 2000), being more active online (Van Wilsem, 2013), delinquent behaviour and associating with offenders (Bossler & Holt, 2010). It remains unclear, however, if and how the relationship between self-control and online victimization is influenced by online behaviour, or how self-control is related to online behaviour. Another potentially important predictor of online behaviour is "locus of control", a term that refers to the sense of responsibility that people have with regard to their own safety (Rotter, 1966). Whether someone considers themselves responsible (i.e. internal locus of control) or places that responsibility on others, such as the police or the bank (i.e. external locus of control), may affect the actions that they take to prevent a successful cyberattack, i.e. the way they behave online (Debatin et al., 2009; Jansen, 2018; Workman et al., 2008). It is expected that someone with a high internal locus of control will take responsibility and be motivated to take their online safety into their own hands. Indeed, previous studies found a significant positive association between locus of control and safe online behaviour (Jansen, 2018; Workman et al., 2008). However, it is also possible that a greater sense of responsibility leads to an unjustified sense of security. When people consider themselves responsible and capable of protecting themselves from cybercriminals, they may underestimate online risks (Rhee et al., 2009), which may result in unsafe online behaviour.



Measuring Online Behaviour Online behaviour, and the degree to which it is safe or unsafe, has so far been measured in two ways. Some researchers have measured perceived behaviour by asking how respondents typically behave or how they would behave in a fictional online situation. In other studies, actual online behaviour has been observed. This section will provide an overview of the methods used in previous studies.

Research into Self-Reported Behaviour Most previous studies into online behaviour have focused on self-reported behaviour. Respondents in these studies were asked about their behaviour using items (e.g. "I open emails from unknown senders") or questions ("What percentage of your passwords do you change every three months?") (Cain et al., 2018; Crossler & Bélanger, 2014). An example of a research tool that works with such statements is the Human Aspects of Information Security Questionnaire (HAIS-Q; Parsons et al., 2014, 2017). This instrument measures knowledge, attitudes and perceived behaviour on a number of relevant topics, such as password management. Self-reported behaviour can also be investigated in questionnaire research using vignettes and role-play (Downs et al., 2007; Jong, Leukfeldt, & van de Weijer, 2018; Sheng et al., 2010). These methods make it possible to ask respondents about the behaviour they think they would exhibit in a fictitious situation set out by the researchers (Vance et al., 2012). An important advantage of this research method is that it enables researchers to control situational factors that could otherwise bias questionnaire research. In a role-play, researchers can, on the one hand, hold certain factors constant for everyone (e.g. "imagine your name is Tom Johnson and you work at a bakery"). On the other hand, researchers can manipulate factors, whereby subgroups of respondents are presented with an adapted situation. For example, researchers may differentiate between subgroup one ("imagine you have never been a victim of a crime") and subgroup two ("imagine you have been defrauded in an online web shop in the past"). Based on the outlined circumstances, respondents are asked how they would act in this situation (Downs et al., 2007; Jong et al., 2018; Sheng et al., 2010). Questionnaire research has several advantages as a research method.
For example, the investments needed for questionnaire research are relatively low, while a large representative research population can be achieved. The answers to standardized questions are also suitable for quantitative analysis in order to distinguish explanatory factors and easily compare answers between respondents. However, there are also drawbacks to researching behaviour using questionnaires and vignettes. In studies of self-reported behaviour, researchers focus on how people say they typically behave online or would behave in a hypothetical situation. Although most people indicate that cybersecurity is important (Madden & Rainie,



2015), their self-reported behaviour does not always correspond to their actual behaviour (Smith & Louis, 2008; Spiekermann, Grossklags, & Berendt, 2001). When research focuses solely on self-reported online behaviour, it may result in an incorrect picture of how people actually behave online.

Research into Actual Online Behaviour Instead of self-reported behaviour, research can also measure actual behaviour. Previous studies in which actual behaviour has been measured are scarce within the domain of cybersecurity. The studies that have been carried out mostly focus on phishing victimization. These studies often use phishing tests, with both fake phishing emails and legitimate emails, to measure susceptibility to phishing, i.e. to test resistance to phishing attacks (see, for example, Cain et al., 2018, for an overview). By measuring how often the hyperlinks in the emails are clicked and how often people who click actually leave confidential or personal information on a legitimate or phishing website, it can be determined how safely people behave online with regard to phishing. An important objection to this method is that people are misled for research purposes, as participants in a phishing test often have not given permission to participate in advance. Kaptein, Markopoulos, De Ruyter, and Aarts (2009) looked at how easy it is to persuade people to give out personal information. More specifically, they looked at a type of information that cybercriminals can use in phishing attacks: email addresses. Participants first completed a survey that consisted of so-called dummy questions: the questions did not matter. The actual measurement took place after respondents completed the survey. Respondents were asked to provide email addresses of friends and acquaintances who might also want to participate in the survey. Various persuasion techniques were applied to this request. For example, respondents were told that other respondents had already given email addresses to the researchers (social proof) or that the results of the study would be sent to them if they provided at least one email address (reciprocity). Applying a persuasion technique resulted in significantly more email addresses being retrieved.
Junger, Montoya Morales, and Overink (2017) have gone a step further. They looked at how easy it is to entice people to provide personal information that can be used in a more effective form of phishing, namely spear phishing, where the victim’s personal information is used to give him or her a false sense of security. In the study, people were approached on the street to take part in a survey. In this survey, a number of questions were asked about online shopping behaviour: whether they had ever bought something online, and if so, where and what. They were also asked to provide part of their personal identification number and email address. Surprisingly, people were willing to give such personal information to the interviewers. With this information, a very targeted and effective (spear) phishing attack can potentially be carried out.



There are, however, several downsides to these types of studies. Although they provide better measurements of actual online behaviour, the studies are often performed on a small scale and few other factors are observed. The observed actual online behaviour can therefore not be attributed to explanatory factors. Moreover, measurements of actual behaviour are not feasible in all situations, for example if we want to know how people behave during an actual ransomware attack. In addition, such measurements can be costly and time-consuming to perform.

Self-Reported Behaviour Versus Actual Behaviour Online behaviour can thus be measured in various ways. We argue that measures of actual behaviour are preferable to self-reports of behaviour. Self-reports can deviate from reality because they appeal to the memory of respondents or because respondents may give socially desirable answers. Measures of actual online behaviour can therefore make a major contribution to our knowledge of the circumstances that influence online behaviour (Maimon & Louderback, 2019). However, such measurements also have practical disadvantages. For each study, it will therefore be necessary to determine the most suitable way of measuring online behaviour in terms of costs and benefits. A combination of the best of both worlds can be achieved through a "population-based survey experiment", also called an "experimental survey" (Mutz, 2011). This method combines the advantages of questionnaire research, such as the possibility to study a large representative sample, with the advantages of experimental research, in which actual behaviour can be measured and causal relationships can be determined (Mullinix, Leeper, Druckman, & Freese, 2015). In practice, such an experimental survey often consists of an online questionnaire with built-in experiments. Respondents can be manipulated through these experiments (for example, by imposing time pressure), and measurements of actual behaviour can be taken during the survey.

Online Behaviour and Victimization Study Outline of the Research Instrument The aim of the online behaviour and victimization study was to build a research instrument that can measure actual online behaviour simultaneously with possible explanatory factors that have emerged from the literature. A population-based survey experiment was used, consisting of a questionnaire containing questions and vignettes on self-reported online behaviour and explanatory factors (presented in Table 1 and discussed in section “Explaining Online Behaviour”), as well as measurements of actual online behaviour with experimental manipulations. Moreover,



Table 1  Overview of survey topics, other than online behaviour

Section              | Theoretical model | Topics
Motivation           | PMT & COM-B       | Protection motivation
Knowledge            | COM-B             | Self-reported knowledge of online safety; knowledge test (objective)
Opportunity          | COM-B             | Material opportunity; social opportunity
Mood                 |                   | Mood (PANAS)
Victimization        | PMT               | Fear of victimization; previous online victimization
Self-control         |                   | Self-control (BSCS)
Device               |                   | Type of device used to fill out survey; use of online devices; security measures
Time pressure        |                   | Time pressure
Persuasion technique |                   | Authority; reciprocity
Threat appraisal     | PMT               | Perceived vulnerability; perceived severity
Coping appraisal     | PMT               | Response-efficacy; self-effectiveness; response costs
Locus of control     |                   | Locus of control
Control factors      |                   | Gender; education level; age; daily activity/occupation; cohabiting (yes/no); children (< age 16) in household
Routine activities   |                   | Internet use; online activities

background characteristics of respondents (e.g. age, gender, educational level, occupational status), respondents' mood (e.g. the degree to which someone feels optimistic or depressed) and the device that was used are measured to serve as control variables. Figure 1 schematically shows the order in which the different sections of the survey are presented to the respondents.5 The items used are based on existing questionnaires, which were, where necessary, translated into Dutch and adapted to the specific context of this study. Where no questionnaire was available, such as for measuring opportunity, the researchers developed one themselves.

5  An English translation of the original Dutch questionnaire is available upon request from the authors.



Fig. 1  Schematic overview of the order of survey sections. [Figure not reproduced; its elements include control variables, fear of victimization, PMT variables, online behaviour, email vignettes, a knowledge test, routine activities, internet/device use, and the measurements of actual behaviour (password use, clicking behaviour, sharing personal information), with the time pressure and persuasion technique experiments marked; the legend distinguishes experiments, dependent variables and independent variables.]

Measuring Seven Clusters of Online Behaviour The research instrument presented in this chapter measures seven behavioural clusters, based on the literature study. In this experimental survey, online behaviour is measured in three ways. First, all behavioural clusters are measured through self-reports (see Table 2 and the items in the Appendix). Second, real phishing emails were adapted for use as vignettes in order to measure respondents' handling of (phishing) emails. Respondents are shown three emails addressed to a fictional person: two phishing emails, supposedly from a bank and a festival organization, and one legitimate email from an internet provider. Respondents are asked to pretend to be this fictional person and are then asked to choose from nine options how they would respond to each of these emails (e.g. reply, click on link, etc.). Respondents behave unsafely if they report that they would open the linked website from one or both phishing emails. Third, respondents encounter (fictitious) cyber-risk situations while completing the survey (see section "Detailed Description of the Measurements of Actual Online Behaviour" for more details), in order to measure actual online behaviour within the clusters "password management", "being alert online" and "online disclosure of personal information". It proved impossible to measure actual online behaviour within the other behavioural clusters for a number of reasons. First, mimicking cybercrime is not always possible or morally justified, for example in the case of testing technical preventive measures. In some cases, it also proved technically unfeasible to incorporate a measurement in the questionnaire in a satisfactory manner. A pragmatic approach was therefore taken, and it was decided to measure behaviour objectively only where this was practically feasible and morally responsible. Table 2 provides an overview of the ways in which each online behaviour cluster is measured in the survey.
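The vignette coding described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the study's actual implementation; the email labels and response options are hypothetical placeholders.

```python
# Hypothetical coding of the email-vignette measure: a respondent chooses
# one of nine response options per email; opening the linked website in a
# phishing email is coded as unsafe. Labels below are illustrative only.

PHISHING_EMAILS = {"bank", "festival"}            # the two phishing vignettes
UNSAFE_OPTIONS = {"click_link", "open_website"}   # options treated as unsafe

def vignette_unsafe(responses: dict) -> bool:
    """True if the respondent would open the linked website
    from one or both phishing emails."""
    return any(
        email in PHISHING_EMAILS and choice in UNSAFE_OPTIONS
        for email, choice in responses.items()
    )
```

Clicking the link in the legitimate provider email is deliberately not counted, mirroring the coding rule stated above.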



Table 2  Overview of measurements of online behaviour per behavioural cluster

Online behaviour                                 | Self-report: questionnaire | Self-report: vignette | Objective measurement
1. Password management                           | Yes                        |                       | Yes: password strength (no experimental condition)
2. Backing up important files                    | Yes                        |                       |
3. Installing updates                            | Yes                        |                       |
4. Using security software                       | Yes                        |                       |
5. Being alert online                            | Yes                        |                       | Yes: clicking behaviour (experimental condition: time pressure)
6. Online disclosure of personal information     | Yes                        |                       | Yes: disclosure of personal information (experimental condition: persuasion techniques)
7. Handling attachments and hyperlinks in emails | Yes                        | Yes                   |

Detailed Description of the Measurements of Actual Online Behaviour The measurements of actual online behaviour in the experimental survey of the online behaviour and victimization study will now be described in detail. Three objective measurements of online behaviour are included in the survey (Table 2). While completing the survey, respondents encounter, unbeknownst to them, three simulated cyber-risk situations, and how they deal with these situations is registered. First, at the beginning of the questionnaire, respondents are asked to create a username and password for privacy reasons (see Fig. 2).6 The chosen password itself is not registered; only its strength is measured. This allows researchers to determine the strength of the passwords respondents choose to protect their personal information. At the end of the survey, respondents are asked a control question to investigate whether they would normally choose a similar type of password: "Did you choose a password similar to those you would normally choose to protect your personal data?"
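The privacy constraint described here (registering only a derived strength score, never the password itself) can be sketched as follows. The scoring rubric is a hypothetical example; the chapter does not specify how password strength is computed.

```python
import string

def password_strength(password: str) -> int:
    """Score 0-4 for length and character variety (illustrative rubric)."""
    score = 0
    if len(password) >= 8:
        score += 1
    if any(c.islower() for c in password) and any(c.isupper() for c in password):
        score += 1
    if any(c.isdigit() for c in password):
        score += 1
    if any(c in string.punctuation for c in password):
        score += 1
    return score

def register_response(password: str) -> dict:
    # Only the derived strength score is registered, mirroring the
    # privacy constraint described in the chapter; the password is discarded.
    return {"password_strength": password_strength(password)}
```

In a production instrument one would compute such a score client-side, so the raw password never reaches the researchers' server at all.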

6  The objective measurement of password management is shown in the figure, a screenshot of the survey. In English, this states: In accordance with Dutch privacy legislation, we now ask you to create a temporary user account. For the purpose of this study, your personal data will be stored in this account. You will need to use this account one more time, at the end of the questionnaire. Please enter a username and password below. Username: Password: Re-enter password:



Fig. 2  Screenshot of measurement of password management

Fig. 3  Screenshot of measurement of being alert online

Later in the survey, the extent to which respondents are alert while online is measured. Respondents are asked to watch a short video before answering the next question. However, the video does not start playing. Suddenly, a pop-up appears stating that software needs to be downloaded, called “Vidzzplay” (see Fig. 3).7 This software supposedly comes from an unknown source (thus unreliable). Here

7  The objective measurement of clicking behaviour is shown in the figure, a screenshot of the survey. In English, this states: Before you answer the next question, we ask you to watch a short video on online shopping (30 s). Click on the play button in the screen below. // This video is being processed. Try again later. // We are sorry. // User account management. // Do you allow the following program from an unknown publisher to make changes to this computer? // Program name: Vidzzplay. // Publisher: unknown // Origin:



researchers can see which choice the respondents make: download the software (unsafe choice), not download it (safe choice) or skip the question (safe choice). Third, at the end of the questionnaire, respondents are asked to share personal information. This starts with standard questions, such as marital status, but the privacy value of the information increases with each question (full name, date of birth, email address), ending with a request for the final three digits of their bank account number. For each question, respondents can click the button "I'd rather not say", which is considered the safe choice. If respondents fill out their personal information, only the fact that they answered the question is registered, not the contents of their answer. The more types of personal information respondents share, the more unsafe their behaviour is considered to be.
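A minimal sketch of how this disclosure measure could be registered, assuming a boolean "answered" flag per item as described above; the item names are illustrative, not the study's actual variable names.

```python
# Hypothetical registration of the disclosure measure: per item only a
# boolean "answered" flag is stored, never the content, and the unsafety
# score is simply the count of items the respondent disclosed.

ITEMS = [
    "marital_status", "full_name", "date_of_birth",
    "email_address", "bank_account_last3",
]

def disclosure_score(flags: dict) -> int:
    """Count items filled in rather than answered 'I'd rather not say'."""
    return sum(1 for item in ITEMS if flags.get(item, False))
```

Because only flags are stored, the score can be computed without the researchers ever holding the personal data itself.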

Experiments In two of the measurements of actual online behaviour, experimental conditions are included (Table 2). In these cases, variations of an objective measurement of actual online behaviour are presented to different subgroups of respondents. In the first experiment, during the objective measurement of "clicking behaviour", where respondents are asked to download software, time pressure is imposed on half of the respondents. Respondents are asked to fill out a part of the survey in no more than 5 min. In the experimental condition, respondents are told that this was not sufficient time for previous respondents and are urged to work quickly. The other respondents are informed that 5 min is sufficient time and that they can continue at their own pace. Respondents are then asked about their online routine activities, after which they are asked to watch a video and the pop-up requesting permission for a software download appears (the measurement of actual clicking behaviour). Control questions concerning the time pressure experienced are asked afterwards. The second experiment takes place during the objective measurement of "online disclosure of personal data", in which respondents are asked to enter personal data such as their address and the last three digits of their account number. Various persuasion techniques are used to manipulate respondents' willingness to share personal information (1/3 the "authority" persuasion technique, 1/3 the "reciprocity" persuasion technique, 1/3 no persuasion technique). All respondents are told, "we would like to ask you some final questions". One third of the respondents moves on to the questions about personal information without a persuasion technique. In the reciprocity category (one third of respondents), respondents are promised a chance of winning a gift certificate if they fully complete all questions concerning personal information.
In the authority category (one third of respondents), the researchers urge the respondents to fully complete all their personal information because of the importance of the scientific study.
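The random assignment to the two manipulations (half of the respondents under time pressure, thirds across the persuasion categories) could be implemented along these lines. This is a hypothetical sketch, not the study's actual survey software.

```python
import random

def assign_conditions(rng: random.Random) -> dict:
    """Independently assign a respondent to the two experimental
    manipulations: 1/2 time pressure, and equal thirds for the
    authority / reciprocity / no-persuasion categories."""
    return {
        "time_pressure": rng.random() < 0.5,
        "persuasion": rng.choice(["authority", "reciprocity", "none"]),
    }
```

Passing in an explicit `random.Random` instance (rather than the module-level functions) makes the assignment reproducible from a stored seed, which helps when auditing the randomization.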



Discussion This chapter outlined the development of a research instrument for the online behaviour and victimization study. The literature review conducted at the start of this study clearly shows that there is a lack of studies measuring actual online behaviour. One explanation is that this research area is still relatively young. Most studies that have been conducted can be seen as exploratory or mainly test whether existing criminological or psychological models can be used to explain self-reported unsafe online behaviour or cybercrime victimization (for an overview, see Leukfeldt, 2017). The available studies in which actual online behaviour has been measured suffered from limitations, for example the use of non-representative samples. Moreover, while these studies have yielded valuable results on the prevalence of unsafe online behaviour, they have seldom focused on a broad range of explanatory factors. A possible connection between factors such as knowledge and motivation and actual (objectively measured) online behaviour has hardly been investigated to date. Moreover, the association between unsafe actual online behaviour and online victimization has rarely been researched. While some studies have described online victimization that can be traced back to unsafe online behaviour, such as sharing personal information online, it remains unclear how unsafe online behaviour affects the risk of online victimization, or how this may be related to individual or contextual factors. In the online behaviour and victimization study, a research instrument has therefore been developed that offers new possibilities for the research field in various ways. It was deliberately decided to measure both self-reported and actual online behaviour. After all, we know that although most people indicate that cybersecurity is important, people's actual behaviour does not always match their attitudes or perceived behaviour.
By using a population-based survey experiment, a method that combines the advantages of questionnaire research with the benefits of experiments, the added value of this research instrument is evident: it makes it possible to go beyond existing studies by measuring actual online behaviour in a large representative sample. Moreover, this instrument is innovative in another way: we aim to explain not only victimization by specific forms of cybercrime but also several clusters of online behaviour. After all, it is behaviour that increases the risk of all kinds of online crime. While designing the experimental survey, several ethical issues arose that should be discussed in detail. During the experimental survey, respondents are presented with various fictitious cyber-risk situations. Respondents are also asked to create a password and enter personal details. In addition, there was concern that (compared to other studies) striking questions and situations would deter respondents, which could result in high levels of abandonment or contacts with the help desk.8

8  However, during a pilot of the research tool, this only occurred at low frequencies.

A university ethics committee has therefore approved the instrument. Requesting a password and personal data is ethically permitted if the answers are not registered. It remains




therefore unknown to the researchers, for example, which password respondents chose; only how strong that password is. In addition, the personal data that respondents fill in are not released to the researchers, only whether or not a respondent answered a specific question about personal information. Finally, all respondents are informed (as much as possible) in advance by means of informed consent and are subsequently notified, by means of a debriefing, of the cyber-risk situations and manipulations to which they had been "exposed" (whether or not they completed the survey). Like any measurement instrument, this research instrument also has limitations. First, it measures the dependent and explanatory factors at the same point in time. A second wave of data collection, in which cybercrime victimization in particular is measured over time, is necessary to examine causal relationships between behaviour and victimization. Second, the objective measurements and experiments each have their own limitations. Due to the length of the questionnaire, it was not possible to include objective measures and experiments for all seven behavioural clusters. Moreover, password strength is determined, but it remains unknown whether the password is unique and never used in other applications by the respondent, which is a second condition for safe password management. In addition, in accordance with the GDPR,9 the information that respondents share is not recorded, so it cannot be verified whether these are actual or correct data. When measuring whether or not respondents download unsafe software (i.e. clicking behaviour), the instrument uses a pop-up made in the style of the Windows operating system. Non-Windows users are less familiar with this pop-up, which may make them more suspicious and less likely to click yes.
Further development of this objective measurement is necessary, with various pop-ups that are technically genuine and adapted to different devices and operating systems. Third, although the method (a survey with experiments) is very suitable for this kind of research, it is possible that respondents feel safe in the online environment of the survey. As a result, they may make unsafe choices more readily than in actual cyber-risk situations in real life. This may mean that in the home environment, the percentage of unsafe behaviour is lower than the research instrument determines. However, it is important to note that the purpose of the research instrument is to measure online behaviour in an apparently safe environment: criminals also often imitate a safe environment (for example, an online bank or web shop) and entice people to click on a hyperlink or give away personal information. Finally, it is possible that participants differ from non-participants on unregistered properties. Given the aim of the study, respondents are not fully informed in advance about the content of the study; they expect to answer questions only about what they do online. Certain questions of a suspicious nature may therefore scare participants, and respondents who are more suspicious or observant may drop out faster.
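The password measurement described above (registering only a strength score, never the password itself) could work along the lines of the following sketch. The scoring heuristic here is an illustrative assumption, not the instrument's actual algorithm:

```python
import string

def password_strength(password: str) -> int:
    """Return a 0-5 strength score; the raw password is never stored.

    Illustrative heuristic: one point for length >= 8, one more for
    length >= 12, and one for each character class present
    (lowercase, uppercase, digits or symbols).
    """
    score = 0
    if len(password) >= 8:
        score += 1
    if len(password) >= 12:
        score += 1
    if any(c.islower() for c in password):
        score += 1
    if any(c.isupper() for c in password):
        score += 1
    if any(c.isdigit() or c in string.punctuation for c in password):
        score += 1
    return score  # only this number would be registered

print(password_strength("abc"))          # weak: short, one class
print(password_strength("Tr0ub4dor&3"))  # stronger: longer, mixed classes
```

In a survey context the score would be computed client-side and transmitted alone, consistent with the requirement that answers are not registered.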



M. S. van ’t Hoff-de Goede et al.

Despite the limitations mentioned here, this research instrument makes it possible to study self-reported online behaviour and actual online behaviour, as well as the differences between them, and to explain the occurrence of unsafe online behaviour and cybercrime victimization. This is relevant for future interventions that focus on making online behaviour safer.

Appendix: Survey Items Self-Reported Online Behaviour

Password management
- I share my personal passwords with others (R)
- I use simple, short passwords, with for example only one number or capital letter (R)
- I use the same password for different applications, for example, for both social media and online banking and web shops (R)

Backing up important files
- I back up important files
- I store personal information in an encrypted manner, so that others cannot easily read it

Installing updates
- I install operating system updates on my devices as soon as a new update is available
- I install updates to the apps or software that I use as soon as a new update is available
- I update my security software as soon as a new update is available

Using security software
- There is security software installed on my devices to scan for viruses and other malicious software
- I use browser extensions (a) to help me to surf safely, such as software to block advertisements or pop-ups

Being alert online
- I download software, films, games or music from illegal sources (R)
- I use public Wi-fi (for example, in hotels, restaurants, bars, or public transport), without a VPN connection (b) (R)
- I check the privacy settings on my devices, apps or social media

Online disclosure of personal information
- I share personal information such as my home address, email address or telephone number via social media (R)
- I am selective in accepting social media connection requests from others

Handling attachments and hyperlinks in emails
- I immediately delete emails that I do not trust
- When in doubt about the authenticity of an email, I contact the sender to ask if an email has actually been sent to me
- I open attachments in emails, even if the email comes from an unknown sender (R)

Notes: (R) reversed item. (a) A browser extension is software that offers additional functionality to a browser, such as managing cookies or advertisements while surfing the internet. (b) A VPN (Virtual Private Network) connection gives a user secure and anonymous access to a network and thus makes the internet connection safer.
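Items marked (R) are reverse-scored before cluster scores are computed, so that higher scores consistently indicate safer behaviour. A minimal sketch of that recoding step, assuming a 1-5 Likert response format (an assumption; the appendix does not state the format):

```python
def reverse_code(response: int, scale_max: int = 5, scale_min: int = 1) -> int:
    """Map a Likert response to its mirror image, e.g. 1 -> 5, 2 -> 4."""
    return scale_max + scale_min - response

# Cluster score for "password management": all three items are (R),
# so safer behaviour yields a higher score after reverse-coding.
# The variable names are illustrative shorthand for the items above.
responses = {"share_passwords": 1, "simple_passwords": 2, "reuse_passwords": 1}
recoded = {item: reverse_code(r) for item, r in responses.items()}
cluster_score = sum(recoded.values()) / len(recoded)
print(recoded)        # {'share_passwords': 5, 'simple_passwords': 4, 'reuse_passwords': 5}
print(cluster_score)  # mean of the recoded items
```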




No Gambles with Information Security: The Victim Psychology of a Ransomware Attack David L. McIntyre and Richard Frank

The first known strain of ransomware, the AIDS Trojan, was discovered in 1989 and was spread using floppy disks. Upon infection, a ransom of $189 was demanded (Richardson & North, 2017), but since the malware could only spread through the sharing of floppy disks, and because it was built upon weak encryption, the attack was unsuccessful. Other ransomware strains followed, mainly used by organized criminals in Russia, often targeting other victims within Russia and bordering countries (Richardson & North, 2017). Today, modern ransomware spreads through infected email attachments and operating system or application exploits. These computer-based extortion scams utilize malicious programs which encrypt the user's data in the background, demanding a cryptocurrency payment in exchange for file restoration (Luo & Liao, 2007). Upon payment, it is promised that the criminal will send the victim the decryption keys, allowing the files to be restored to their original state. The recent WannaCry ransomware attack brought the threat of encryption-based computer crime into international awareness. Targets of ransomware range from private individuals to large organizations and businesses, and the ransom demanded generally varies accordingly: typically in the neighbourhood of $300 for individuals, and more than $50,000 for businesses (Paddon, 2018). The FBI estimated that the total monetary loss caused by ransomware, including data loss and negative effects on productivity, would exceed $1bn in 2016 (Brewer, 2016). Not only is ransomware profitable, but the number of steps between crime (infection of a computer) and profit is much smaller than in other cybercrimes; moreover, because victims send money directly to the criminals, there need not be any

D. L. McIntyre (*) Department of Psychology, Simon Fraser University, Burnaby, BC, Canada e-mail: [email protected] R. Frank School of Criminology, Simon Fraser University, Burnaby, BC, Canada © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 M. Weulen Kranenbarg, R. Leukfeldt (eds.), Cybercrime in Context, Crime and Justice in Digital Society I,




intermediaries in the criminal process (all of whom must typically be compensated, resulting in lower profits for the end criminals). Owing to these factors, the number of strains of malware specializing in ransom has skyrocketed, from two between the years 2005 and 2012 to 15 within just the first three months of 2016 (Symantec, 2016). Many general anti-cybercrime security strategies help to block ransomware: users are advised to beware of spam, of links that impersonate real websites, and of other ways of introducing malware onto a secure computer or network (Brewer, 2016). Though ransomware attacks are initiated by such technological means, there is an intermediate step after infection, during which the criminals must coerce their victims' cooperation to make a profit. There are generally three possible outcomes of a successful infection: (1) a user who has a secure recent backup of his data will not feel compelled to pay; (2) an unprepared victim would rather accept the data loss than pay the ransom; or (3) the victim values the data more highly than the ransom being asked, and decides to pay. The goal of the criminal after infection is to exert psychological pressure on the victims to manipulate them into paying. The factors involved in accomplishing this persuasion, many of which are designed or contrived before the infection takes place, will be explored here. Due to the anonymous nature of bitcoin and other cryptocurrencies, there is no guarantee that the criminals will deliver the promised decryption key upon receiving their ransom (Bohr & Bashir, 2014); to pay the perpetrators seems, at first glance, to be an outright gamble.
However, evidence seems to indicate that the criminals will usually hand over the decryption keys (Trustwave, 2017), and this practice appears to be in the best long-term interest of the criminal: if it were public knowledge that no decryption key would be provided upon payment, future victims would have no incentive to pay the ransom; likewise, if the public perceives that there is a high chance of receiving the decryption key upon payment (and thereby recovering the ransomed files), then that strategic credibility will make it difficult for the victim to rule out cooperation. One may therefore be tempted to view ransom payment as a natural response to the crime, especially if valuable data has been encrypted; it may seem as though a victim's response can be predicted and explained entirely by assessing how much he values the lost data. But once the victim is notified of his loss, the playing field is far from level. The decision of whether to pay is no longer as simple as weighing the value of the data against the price of the ransom, as if the victim were merely shopping. It matters that the data has already been lost: feedback indicating a loss (hereafter 'loss feedback', a type of negative feedback) is known to have psychological effects which can shift the probabilities of the recipient's subsequent behaviours (e.g. Masaki, Takeuchi, Gehring, Takasawa, & Yamazaki, 2006). Applying these research findings to ransomware victims is one of the main goals of this chapter. Ransomware relies both on the strength of its encryption and on the persuasiveness of its message to manipulate the victim into making a payment. Therefore, any complete explanation of how perpetrators profit from ransomware attacks must include a psychological evaluation of ransom tactics.
Persuasive elements of ransomware include limiting the available time in which to make a payment, displaying a countdown timer to induce stress, and making the system all-but-unusable until the ransom is



paid (Patyal, Sampalli, Ye, & Rahman, 2017). These threatening elements contained in a ransomware display, together with their implicit delivery of loss feedback, are argued to comprise a critical stage in ransomware crimes, and can reasonably be expected to affect the probability of payment by the victim. The argument will be put forward that ransomware attacks take advantage of the human psychology of loss aversion, creating a psychological context that is advantageous to the attackers, and which affects individuals differently according to their unique neural phenotypes. It will be shown that, once the victim is aware that he has lost his data, loss-related psychological mechanisms are at play, and that this response to coercion scales with the value of the lost data. With this knowledge, we can begin to understand why the persuasion stage of ransomware is successful, and start to devise strategies to disrupt this malicious business model from a psychological perspective and make it less profitable for the criminals.
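The naive cost-benefit view sketched above, in which a victim simply weighs the expected value of recovered data against the ransom before any loss-related psychology enters, can be made explicit with a toy model. The dollar figures and probability below are illustrative assumptions, not data from the chapter:

```python
def pays_naively(data_value: float, ransom: float, p_key: float) -> bool:
    """A purely 'rational' victim pays only when the expected recovered
    value exceeds the ransom. Loss feedback, time pressure, and loss
    aversion are deliberately absent from this baseline model."""
    return p_key * data_value > ransom

# Illustrative scenario: a ~$300 individual ransom, data the victim
# values at $1,000, and a high chance the key is actually delivered.
print(pays_naively(data_value=1000, ransom=300, p_key=0.9))  # True
print(pays_naively(data_value=1000, ransom=300, p_key=0.2))  # False
```

The chapter's argument is precisely that real victims deviate from this baseline: loss feedback and coercive presentation shift the decision beyond what any such value comparison predicts.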

Ransomware as Data Loss

Neural and Behavioural Responses to Loss Feedback

The profitability of ransomware depends not just on the designer's ability to effectively infect computers and encrypt data, but on a psychological attack, which must be powerful enough that sufficiently many victims are willing to send money to the very people who robbed them in the first place. The cognitive experience of a ransomware victim can resemble that of a gambler on a losing streak. The initial encryption robs the user of data and proceeds to notify the user of his loss. The user may feel ashamed for having been foolish or incautious at some earlier stage (e.g. having fallen for a social engineering tactic like a malicious email) and regretfully realize that diligent backup practices would have nullified the threat. Like the gambler, the ransomware victim has lost his property, and the only chance to regain it is to risk more money in the same context. For our purposes, the most important feature of this analogy is that the decisions of a ransomware victim or a gambler are typically made following loss feedback. Ransomware displays themselves constitute loss feedback, and so the behavioural trends observed in gambling paradigm research can be gainfully applied to understand the effects of ransomware on victims. Physiological responses by the human brain to loss feedback have been captured in numerous studies (e.g. Taylor et al., 2006). These studies are not necessarily conducted on clinical populations, such as compulsive gamblers; gambling-task paradigms expose the participants to many feedback events, which is conducive to collecting robust datasets of behavioural and physiological activity to correlate with those feedback events.
Electroencephalography (EEG), which is often recorded as a means to study the temporal course of attentional or perceptual activity in the brain on the order of milliseconds (Luck, 2014), has been collected during many of these gambling experiments and analyzed to identify feedback-related potentials.
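The trial-averaging logic behind event-related potentials can be illustrated with synthetic data: a fixed, time-locked deflection buried in zero-mean noise survives averaging, while the noise shrinks with the number of trials. The signal shape and noise level below are assumptions for illustration only:

```python
import math
import random

random.seed(0)

def simulate_trial(n_samples=100):
    """One noisy synthetic epoch: a negative deflection (a toy stand-in
    for a feedback-related component) around sample 50, buried in
    Gaussian noise representing irrelevant brain activity."""
    return [
        -2.0 * math.exp(-((t - 50) ** 2) / 50.0)  # event-locked component
        + random.gauss(0.0, 3.0)                  # zero-mean noise
        for t in range(n_samples)
    ]

def erp(trials):
    """Average across trials: zero-mean noise cancels out, while the
    time-locked component is preserved."""
    n = len(trials)
    return [sum(trial[t] for trial in trials) / n for t in range(len(trials[0]))]

average = erp([simulate_trial() for _ in range(500)])
peak_index = min(range(len(average)), key=lambda t: average[t])
print(peak_index)  # falls near sample 50, where the component was placed
```

With 500 trials the per-sample noise is reduced by a factor of about the square root of 500, which is why the averaged waveform reveals the buried component.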



Previous EEG research has revealed multiple physiological activation ‘components’ which appear in the course of decision-making activities involving risk and uncertainty. An experiment using a feedback-based gambling paradigm found that additional choices following losses were riskier as compared to choices that followed gains (Gehring & Willoughby, 2002). In addition to the behavioural data, electrical activity related to the feedback was isolated through a technique called event-related potentials (ERPs), which averages many trial events to identify the neurological signals of interest from amongst other, irrelevant brain activity (see Bressler & Ding, 2006; Luck, 2014). The study determined that a medial-frontal negative potential (MFN), located near the anterior cingulate cortex (ACC), appeared specifically in response to losses; it increased in magnitude after loss trials, and correlated with a higher likelihood of choosing the risky option on the next trial (Gehring & Willoughby, 2002). A subsequent functional magnetic resonance imaging (fMRI) study observed activation of the rostral ACC in response to loss, but not in cases where available gains were withheld due to errors (Taylor et al., 2006); the authors reasoned that this result reflects the human bias to weigh potential losses more heavily than potential gains, or loss aversion (Tversky & Kahneman, 1991). Ransomware victims might be expected, on average, to exhibit this same tendency to adjust behaviour towards risk in response to loss: they may undergo the same loss-related physiological process indexed by MFN, resulting in a shift of probabilities towards a greater willingness to risk. Moreover, it has been shown that the magnitude of a loss correlates linearly with a higher likelihood of subsequently choosing a riskier bet (Gehring & Willoughby, 2002). Attempts have been made to generalize such laboratory results using more naturalistic tasks. 
One such study involved a simulated balloon-pumping task (the Balloon Analogue Risk Task: BART), in which participants were able to sell the balloon at a value linearly increased by the number of pumps: popping the balloon resulted in a loss of all accumulated value for that trial (Schonberg, Fox, & Poldrack, 2011). Greater dorsal ACC and prefrontal cortex (PFC; specifically within the right dorsolateral region in the balloon study) activation was interpreted as evidence that such risk-sensitive regions track the increasing potential loss as the balloon grows larger (Schonberg et al., 2011). The right PFC in particular has been shown to play a role in inhibiting risky behaviours—disruption of this area using repetitive transcranial magnetic stimulation (rTMS) caused participants to perform poorly on a decision task in which reward risk was related to reward magnitude (i.e. unlikely successes yielded larger rewards; Knoch et al., 2006). In the context of ransomware, not only does encrypting the victim’s data create a strong incentive to cooperate, but the loss feedback compounds his willingness to risk further resources for the chance to restore his data. It is difficult to estimate how long this tendency will persist, because the result comes from a study in which the next decision is made within seconds. The importance of time is not explicitly known, but it may be possible to resist or reduce the psychological influence of ransomware by waiting before making a choice. Although it seems unlikely that one response event could affect behaviour after several days, reminding the user that the files are ransomed (through an image on the screen, often including a live countdown timer; Patyal et al., 2017) may serve to



renew or sustain the behavioural effect. Additionally, the visual qualities of a ransomware notification ought not to be overlooked as an element of the overall social persuasion attack. In the short term, it is known that the visual qualities of feedback affect the magnitude of the feedback-related negativity (FRN) in response to both gain and loss feedback (Liu, Nelson, Bernat, & Gehring, 2014). When a feedback letter (indicating a gain or loss) was flanked by dissimilar letters, the FRN was boosted in amplitude. Though it is difficult to directly apply such basic research about visual incongruency to ransomware screens, it is worth investigating whether notifications that highlight the loss feedback message by increasing its local salience (making it pop out from its surroundings; see Baluch & Itti, 2011) are more startling and memorable. A reasonable expectation is that more prominent ransomware screens contribute to a greater likelihood of extracting a ransom. Visually threatening and memorable displays may correspond to a serially recurring sense of loss in the victim (though it is unclear whether an MFN-like response occurs during recollection of, or repeated exposure to, the same loss feedback). An ERP study simulating such a situation might, for example, repeatedly display loss feedback that was unambiguously recognizable, presented in such a way that it could not be interpreted as related to events intervening between its first and subsequent appearances; it would compare the amplitudes of ERPs in early trials versus late trials, to determine whether responses to the same feedback attenuate with repetition. Regardless, victims can be expected to avoid the loss of their data to an extent disproportionate to its perceived value. As we shall see, individual differences in the response to loss suggest that there are also differences in susceptibility to social persuasion strategies that feature loss feedback.

Individual Differences in Loss Response

An fMRI study observed a correlation between risky behaviour and decreased brain activity in several gain-sensitive brain regions, including the ventral anterior cingulate cortex (the MFN is tentatively thought to be sourced somewhere in the ACC); activity in the ventral ACC and other gain-sensitive areas decreased in proportion to the size of a potential loss (Gehring & Willoughby, 2002; Tom, Fox, Trepel, & Poldrack, 2007). Crucially, it was shown that individuals who were more loss-averse in their behaviour also showed decreased activity in related brain areas as potential losses increased; these neural indices correlated with a behavioural model of loss aversion at an individual level. It has been suggested that various neuropsychiatric and behavioural disorders could be partially explained by characterizing individual differences in these neural responses to risk (Tom et al., 2007). The ability to predict loss-averse behaviour using neural imaging implies there must be a spectrum of individuals who fall within a risk-tolerant neural phenotype, who are more likely to risk money to recover data, and who perhaps have a higher risk of being addicted to gambling. In other words, the extent to which ransomware shapes behaviour will depend partially on the degree of that individual's loss aversion bias; it might be



rationally expected that individuals with a greater relative fear of losses compared to gains of similar magnitude (i.e. a higher degree of loss aversion) might be more willing to pay for their data, regardless of its actual or personal value (Tversky & Kahneman, 1991). Ironically, another BART study suggests that high trait anxiety individuals exhibit an attenuated FRN compared to low trait anxiety counterparts (Takács et al., 2015). It was thought that a pessimistic bias explains this reduced response to loss feedback: high trait anxiety individuals' expectations under risk are more conservative, and therefore less dramatically violated in the event of a loss (Takács et al., 2015). Further investigation of the role of anxiety as a moderator of physiological responses to loss feedback is warranted, especially if the FRN can be shown to track changes in the likelihoods of subsequent actions under risk. A countdown to the destruction of one's personal property has obvious potential to cause distress; people rely on their personal computers and the data therein for various daily tasks, including professional responsibilities. Even in cases where the victim decides not to pay, the anxiety and uncertainty introduced by ransomware is bound to persist until the payment deadline passes. Research suggests, for example, that a rapidly moving clock, compared with a slow-moving one, induces heart rate slowing, faster reaction times, and low, inaccurate estimates of the amount of time that has passed during a decision under risk (the clock speeds were illusory: the same real amount of time was given for responses in each condition; Jones, Minati, Harrison, Ward, & Critchley, 2011). Indeed, a case could be made for the coercive power of ransomware that largely sets aside the risk-encouraging effects of loss feedback alone: loss feedback, threatening visual qualities, and time pressure are all part of the coercive attack of ransomware.
Though it is beyond the scope of this paper, it is worth noting that emotional endurance is another individual difference that may help decide whether the ransom is ultimately paid. We have some choice about how to counter these crimes and reduce their profitability: one option is to undermine their rate of success. A strategic step towards cutting the profits of ransomware attackers would be to recognize that individual neurological differences likely shape how a victim responds to ransomware. Such factors may particularly help to explain cases where (irreplaceable items like photos or written work aside) ransoms are paid to recover relatively low-value data, or perhaps as a way of achieving emotional relief by ending the suspense. Reducing the success rate of ransomware crimes by tackling these cases may become more and more of a priority if the prevalence of ransomware continues to climb. The best societal strategy to demoralize the criminals would be to refuse to pay, commit to better backup practices in the future, and move on—but it is difficult to harmonize individual behaviours, especially as the odds of data recovery are fair (Trustwave, 2017). Targeted education strategies, aimed at populations with especially valuable data or at those with specific risk factors, may help to cut the profitability of ransomware. Attempts to identify personality risk factors are discussed next.

No Gambles with Information Security: The Victim Psychology of a Ransomware Attack


Personality Risk Factors and Ransomware It may be useful to identify personality features that predict responses to coercion and social persuasion. One personality study found that Type A and Type B personalities responded differently to perceived coercion, claiming that Type B females are especially vulnerable to persuasion, and that Type B males resist only when the coercion is weak (Carver, 1980). Though these results seem to confirm that personality (a set of observable behavioural trends ultimately traceable to biological differences, in our view) can be informative about vulnerability to ransomware attacks, they cannot be applied directly to the context of ransomware. The behavioural observations used to create the Type A (stereotyped as driven, competitive, active) and Type B (stereotyped as relaxed, physically passive, and patient) categories will not necessarily be the most predictive set of qualities for social persuasion and behavioural responses to stress; they were originally validated because they predicted risk for heart disease (Friedman & Rosenman, 1959). In the context of ransomware, the coercion is strong by design: payment is demanded within a specified period of time, after which access to the files is permanently lost as the decryption key is deleted. For this reason, it would be worthwhile to study neural and other physiological responses to coercion, and potentially to recapitulate the predictive qualities of the Type A and B categories for coercion responses, but within a modern personality psychology paradigm. To date, few studies have investigated the connection between cybercrime victimization and personality.
It has been shown that victims of cybercrime share many of the Big Five personality traits that are risk factors for victims of traditional crime: lower conscientiousness and emotional stability, and greater openness to experience (van de Weijer & Leukfeldt, 2017; extraversion and agreeableness scores were not predictive). The authors noted that their Big Five-derived model of victimization is missing factors that could more powerfully explain cybercrime victimization (van de Weijer & Leukfeldt, 2017). Scores on the HEXACO Personality Inventory have also been correlated with fraud victimization in older adults: non-victims scored higher on honesty-humility and conscientiousness, and had better cognitive ability (Judges, Gallant, Yang, & Lee, 2017). Such models might gainfully be tested for their ability to predict victim responses to ransomware—perhaps victims who score higher on honesty and conscientiousness would also be less willing to cooperate with criminals. Explaining the predictive value of personality traits may point towards novel preventative measures for cybercrime, perhaps connected to other lifestyle factors associated with traditional crime victimization. One recently developed model, the Susceptibility to Persuasion-II (StP-II), was created with the explicit purpose of measuring susceptibility to persuasion; it was assessed for reliability and predictiveness on scam compliance data, and has been applied to cybercrime victims in forthcoming research (Modic, Anderson, & Palomäki, 2018). The field of cybercrime psychology is growing.


Although personality psychology cannot explain the consequences of a ransomware attack at the biological level, it can provide a ‘zoomed out’, sociological-level perspective that may be more conducive to real-life applications. Personality trait surveys are substantially easier to collect than electrophysiological data. In the context of malicious communications that involve verbal persuasion, as opposed to outright threats (like destruction of a decryption key; Patyal et al., 2017), evidence suggests it is beneficial for potential victims to be made aware of their own susceptibility to manipulative and illegitimate persuasion (Sagarin, Cialdini, Rice, & Serna, 2002). Personality indices would likely be engaging and memorable components of cybersecurity awareness programmes, which will be most effective at undermining persuasion when they demonstrate the participant’s vulnerability to illegitimate persuasion—a vulnerability people tend to expect in others but not in themselves (Sagarin et al., 2002). The strength of these preconceptions may be related to personality measures. Regardless, education programmes would also invoke the benefits of forewarning, which, when used to highlight manipulative intent, can produce resistance to persuasion even in children (Rozendaal, Buijs, & Reijmersdal, 2016). It is likely, however, that the state of high emotional arousal induced by ransomware notifications could counteract or overwhelm such preparatory strategies. It has been shown experimentally that highly aroused adults reported greater purchase intentions in response to fraudulent advertisements (real advertisements were used as stimuli), relative to a low-arousal condition (Kircanski et al., 2018). Although an improved understanding of the personal psychological factors influencing ransomware compliance would be valuable, prevention-oriented educational strategies should clearly be tailored, using such evidence, to keep potential victims out of those coercive circumstances in the first place.
But suppose we are particularly curious about the physiological responses to coercion. The tightly time-locked epochs of EEG used to measure event-related potentials are not well suited to monitoring a participant through continuous exposure to coercive communication (such as the information presented when ransomware takes over the computer). It is not yet clear where the behavioural potentiation of loss feedback is represented once the MFN or other feedback-related ERPs have faded. However, a variety of techniques are available to deepen future investigations, including oscillatory EEG analysis techniques (see Herrmann, Strüber, Helfrich, & Engel, 2016). One can imagine an eye-tracking study in which the participant must read a coded high- or low-coercion instruction while skin conductance, heart and breathing rates, blood pressure and other such ‘fight-or-flight’ responses (Taylor, 1991), as well as the participant’s neural activity, are recorded. Analyses that tracked changes in these metrics over the course of a coercive message could reveal differences in the physiological response to coercion, helping to broaden our understanding of how humans process forceful persuasion. Future studies of this kind could seek correlations between Big Five or HEXACO traits and the likelihood of being persuaded, and compare these directly with StP-II subscales.

Response Strategies for Groups Applied to the context of corporations, it seems sensible that the loss of an entire company network of data simply overrides any need for individually targeted psychological coercion, but that is not to say there will not be key decision makers within the company whose thought processes are relevant. An experimental attempt to determine whether experienced traders could learn to avoid ‘myopic loss aversion’ (MLA: a strong focus on short-term benefits leading to a strong negative reaction to recent losses) found the opposite; traders who are used to gain-or-loss choices display an especially strong tendency towards myopic loss-averse behaviour (Haigh & List, 2005). The traders placed the highest bets in conditions where a fixed bet was ‘rolled’ multiple times before they would receive feedback, and were least willing to commit resources to bets where they would receive feedback after every round (Haigh & List, 2005). In the context of ransomware, this behavioural result seems to imply that traders would be frugal if they decided to pay ransoms computer-by-computer, perhaps refusing to continue if the rate of data restoration proved too low after the first few trials. If ransomware could recognize that it had reached a business’s network, proceed to encrypt the whole network, and demand a lump-sum payment for the entire set of hard drives, an experienced trader might be more willing to order a single, risky payment for the whole company—and watch each computer decrypt or not (this is the characterization of ‘myopic’ behaviour). It seems unlikely that the directors of any sizeable business would make decisions computer-by-computer, and more probable that they would commit to a global ransom-paying directive.
Though it will often be a foregone conclusion that the business must attempt to restore as much data as possible, or restore normal function to as many computers as possible, companies may benefit from paying one ransom at a time in cases where only select computers contain important data—especially if good recordkeeping allows them to decide which hard drives can safely be wiped and restored to function. Having generic backups for each type of workstation (prepared with the requisite programs for normal usage) would enable swift restoration of function on any computer whose local data is unimportant. This would remove ‘restoration of function’ as a motivation to pay the ransom, leaving lost data as the primary concern. Supposing that the company can survive comfortably without completely maximizing its data returns (i.e. without paying every ransom out of desperation), a step-by-step decision plan raises the possibility that the victim organization will decide to stop after only a few failed decryptions, or after decrypting the most important drives and accepting the loss of others. Depending on the size and health of the company, its backup policies, and the data lost on each computer, it may be wiser to accept the loss of a few workstations of data, and save possibly thousands of dollars in the process, than to attempt to decrypt everything. This strategy will be less likely if the decision maker has training or experience which has encouraged ‘myopic’, all-at-once decisions that bypass repeated rounds of feedback.
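The computer-by-computer strategy described above amounts to a sequential decision with a stopping rule, and its expected cost can be compared against a lump-sum payment in a simple simulation. This is a sketch under stated assumptions: the per-machine ransom, the decryption success rate, and the failure tolerance are hypothetical figures chosen for illustration, not empirical rates.

```python
import random

def sequential_spend(n_machines, ransom, p_decrypt, max_failures, rng):
    """Pay per-machine ransoms one at a time, stopping once max_failures
    decryptions have failed. Returns (total_paid, machines_recovered)."""
    paid = recovered = failures = 0
    for _ in range(n_machines):
        paid += ransom                  # pay before learning the outcome
        if rng.random() < p_decrypt:
            recovered += 1              # working key delivered
        else:
            failures += 1
            if failures >= max_failures:
                break                   # stop paying after repeated failures
    return paid, recovered

# Hypothetical scenario: 50 machines, $300 each, 60% decryption rate,
# stop after 2 failures; compare with paying for everything up front.
rng = random.Random(0)
lump_sum = 50 * 300
mean_paid = sum(sequential_spend(50, 300, 0.6, 2, rng)[0]
                for _ in range(1000)) / 1000
```

Under these assumptions the stopping rule caps the average outlay far below the lump sum; the point is not the particular numbers, but that feedback between payments creates exit opportunities that an all-at-once directive forgoes.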


One also wonders whether the effects of data loss will fall equally on all company employees. It is reasonable to assume that no individual employee will experience the loss of company data in the same way as the loss of their own personal data and property. This would mean that the application of loss feedback research to ransomware is informative only about private individuals, and not about company personnel making decisions about other people’s data. There is, however, some relevant evidence about gambling with others’ resources. A study of decision-making for others has indicated that loss aversion is reduced if the decision maker is not affected by the choice (Andersson, Holm, Tyran, & Wengström, 2014). On the other hand, participants have reported greater unhappiness when losing on behalf of their friends as opposed to losing on behalf of a stranger, also displaying an increased FRN for such losses (Leng & Zhou, 2014). Another, earlier study established that the FRN, which shares likely neural sources with the MFN, appears in response to losses by a cooperative ally (Itagaki & Katayama, 2008). One can imagine an embarrassed, loss-afflicted IT team requesting authorization to pay the ransoms, seeking partial redemption by making the decision themselves. None of the data is their personal property, but its loss affects their allies (and, in career terms, it affects them). It would seem logical that IT staff who are not directly responsible for security should be the ones to make the technical decisions in a ransom-payment context; this is more pragmatic and viable than hiring a neutral consultant on short notice.

Loss-Threatening Incentives and Minimal Loss Humans motivated by loss aversion respond differently in working memory tasks. For instance, participants in one working memory study involving ‘loss-threatening incentives’ recognized previously viewed images more quickly and accurately, and this behavioural change correlated with greater PFC activation as a result of the loss motivation (Krawczyk & D’Esposito, 2013); medial PFC expresses an increased BOLD signal during both win and loss feedback when compared to no-change feedback (Treadway, Buckholtz, & Zald, 2013). Alongside that behavioural evidence of improved memory, it was found that the amygdala and striatum were especially active when money was at risk of being lost—it is thought that these areas inform executive control areas through evaluation of the loss incentive during working memory tasks (Krawczyk & D’Esposito, 2013); for example, risky behaviour in Parkinson’s disease is thought to be related to amygdala dysfunction in particular rather than impairment of global cognition (Kobayakawa, Koyama, Mimura, & Kawamura, 2008). Predictions can be made about the functioning of these value-related processing centres in the context of a coercive ransomware attack. Immediately following their discovery of the attack, ransomware victims contend with the threat of further loss; the common countdown feature of ransomware (Patyal et al., 2017; this encourages victims to pay before the price increases or the decryption key is deleted outright) would supply the loss-threatening incentive. The drive to take a minimal loss is indexed by activity in the PFC and striatum, which
show greatest activation when the lowest possible loss among alternatives is offered (Krawczyk & D’Esposito, 2013), and it is thought that, together with the insula, these regions integrate motivation, urgency and emotional information to make decisions in the context of risk (Jones et al., 2011). It seems natural, therefore, that reducing the motivation to pay, the urgency of the payment, and the emotional stress of ransomware attacks should each have concrete influence over victim decisions. Whether the lowest possible loss would be to pay the ransom or to lose the data must certainly vary case by case; regardless, it may be helpful to remind the victim that paying to recover the data is not a true ‘minimal loss’ if the attackers do not pass along a decryption key, and thereby adjust the perceived value of each option. Ransomware criminals who successfully maintain the public perception that paying ransoms will lead to decryption are legitimizing that payment as a potential minimal-loss choice. One finding of critical importance is that an individual’s perceived stress level over the past month is negatively related to the BOLD level in mPFC in response to loss feedback, suggesting that chronic stress may impair processing of value information (Treadway et al., 2013). This result builds on research indicating that chronic stress is related to morphological mPFC changes, including atrophy, and it has been speculated that such changes may underlie observed relationships between stress and compulsive decision-making (Dias-Ferreira et al., 2009; see Cleck & Blendy, 2008). It is therefore likely that—in keeping with this pattern of morbidity—chronic stress is a risk factor for ransomware compliance, because it undermines the individual’s value-related decision-making. The evolutionary account of why humans are predictably loss-averse includes the claim that life-threatening losses are simply too risky, even if they could result in substantial gains (Minati et al., 2012).
It is reasonable to suppose that such biases, which undermine balanced weighting of gains against losses, may have been evolutionarily selected for. In one major exception, men have been shown to be gain seekers in mating contexts (Li, Kenrick, Griskevicius, & Neuberg, 2012); perhaps activated sex hormones shift decision-making heuristics to weight potential genetic transmission above the danger to the organism. This may be a unique mechanism leading male victims of online romance scams to risk their own money and resources (e.g. Whitty & Buchanan, 2012). Setting aside that major exception to loss-averse behaviour: an attempt to reproduce real-world decision-making tendencies during EEG or fMRI monitoring discovered a correlation between the magnitude of a potential loss on a gamble trial and the power in the alpha frequency band of EEG waves (Minati et al., 2012). It was suggested that additional neural processing mechanisms are activated in decision-making for sufficiently strong loss scenarios, as compared to more balanced risk-versus-reward considerations. It is possible that the superlative negative outcome of losing all data to ransomware taps into this purported survivalist assessment system, the activity of which (partially tracked by alpha band activity) varies separately from other electrophysiological indices of economic assessment (Minati et al., 2012). Models of risk-versus-reward decision-making must account for the exceptional case of severe losses, which recruits processing mechanisms that are not directly sensitive to the
probability or magnitude of potential gains (Minati et al., 2012) and can exacerbate loss aversion in scenarios where self-protection is the motivation (Li et al., 2012). Critical loss scenarios such as ransomware attacks could lead victims to make decisions which may not be predicted by risk-versus-reward models that fail to include such contextual information: this perspective may also help to explain cases where victims pay ransom fees even when the cost greatly exceeds their valuation of the information. Furthermore, if victims are willing to pay for the restoration of invaluable or irreplaceable data, like photographs or written work, it may owe something to the element of self-injury. Unique and personally precious data in particular may trigger the loss aversion boost associated with self-protection situations; after all, people are generally less averse to loss in decisions made for others, even when those others’ real resources are on the line (Polman, 2012). Future economic decision models may increase their predictive accuracy by further investigating and characterizing such scenarios.
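The minimal-loss comparison discussed in this section can be written down as a simple expected-value calculation. A minimal sketch, with illustrative assumptions: the probability that the attacker actually delivers a working key (p_key) and the chance of recovering the data without paying (p_self_recover) are unknowable in practice and purely hypothetical here.

```python
def expected_loss(pay, ransom, data_value, p_key=0.5, p_self_recover=0.0):
    """Expected monetary loss of paying versus refusing. If the key never
    arrives, a paying victim loses both the ransom and the data."""
    if pay:
        return ransom + (1 - p_key) * data_value
    return (1 - p_self_recover) * data_value

# With a $300 ransom, $500 of data, and a 50/50 chance of a working key,
# paying is not the minimal loss: 300 + 0.5 * 500 = 550 > 500.
```

Paying only becomes the minimal-loss option when p_key exceeds ransom / data_value, which is exactly why attackers work to maintain the public perception that payment reliably yields decryption.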

The Generalizability of Gambling-Task Experiments to Ransomware For the sake of completeness, the propriety of applying results from gambling-task psychology experiments to ransomware victims must be examined in detail. Although it is clear that the ransomware notification constitutes loss feedback, there are contextual differences between a willing player in a gambling task and a victim who realizes they have lost their data. A sensible objection to the application of such studies to ransomware might be that the ransomware victim never willingly risked his resources. A gambler plays willingly, and a research participant consents to engage in the experiment; the participant receives feedback on a decision, whereas the ransomware victim receives a surprise notice of having been robbed. Moreover, it is uncertain whether the victim is tech-savvy enough to recognize his loss as a mistake—as the result of lax backup practices, of clicking a suspicious email, or of failing to use an up-to-date operating system. For these reasons, there may be doubt that gambling-task studies can truly provide insight into the persuasion stage of ransomware. The combined findings of several EEG studies yield insight into whether these contextual differences moderate or otherwise change the consequences of loss feedback. In Gehring and Willoughby (2002), the MFN was distinguished from a different error-detection component—the error-related negativity (ERN). The ERN has been difficult to separate from the MFN, because it appears in many similar situations and is also sourced to the ACC (see Dehaene, Posner, & Tucker, 1994; Kiehl, Liddle, & Hopfinger, 2000), but this separation is critical to determining what kinds of stimuli encourage risky behaviour. During the experiment, participants chose between two bet magnitudes and received feedback; the possibilities varied from trial to trial. There were conditions where both the large and small bets would result in a loss,
making the lesser gamble the correct choice on a principle of minimizing loss (Gehring & Willoughby, 2002). Likewise, in the case where both bet sizes result in a win, it is an ‘error’ to select the smaller gamble, because the gain is sub-optimal. Under these conditions, it was learned that the MFN was significantly determined by whether the trial was a loss or a gain, but not by the outcome’s relationship to the alternative choice (i.e. whether it was an error; Gehring & Willoughby, 2002). It seems we should not expect it to make a difference, with respect to loss feedback, whether the ransomware victim has made mistakes (or perceived mistakes) in cyber security prior to the attack. Of course, it may also be argued that only research which simulates the unpredictability of ransomware is generalizable to the real world—proponents of that position might propose a modification of the gambling task, such as a resource trading game that included periodic gain-or-loss events (‘donation’ and ‘robbery’ events) beyond the participant’s control. Two studies come very close to this mark: the first, a study of the feedback negativity (FN; a dorsal ACC-sourced component detected in EEG), identified an FN in response to losses after participants either made no relevant choice or took no action at all (Yeung, Holroyd, & Cohen, 2004). The magnitude of the FN was merely reduced compared to trials where the participant had been in control (Yeung et al., 2004). The second study showed that participants exhibited an FRN in response to an antagonistic opponent’s gains, in a game where the opponent’s winnings were taken from the player’s own pool of points (Itagaki & Katayama, 2008). The FRN was therefore shown to track the player’s assessment of his own fortunes, regardless of whether he made decisions or had any involvement in the outcome (Itagaki & Katayama, 2008).
This suggests that a ransomware victim will be affected by simple negative feedback indicating data has been lost; regrets about the failure to create backups and other such concerns could be considered exacerbating factors, perhaps having behavioural influence through separate affective avenues. Further such experiments would help to clarify how humans react to losses that feel totally out of their control, which would provide additional applicable evidence for the ransomware case.

Future Directions and Conclusion The connections between cognitive neuroscience and cybercrime are currently limited, and greater investigation is needed to extend basic science laboratory findings into naturalistic contexts. Neuroscience will expand to provide deeper biologically based explanations for the decision-making patterns observed in human organisms, as part of an expanding psychological science—a science which, as it matures, will empower us to understand human interactions ever more clearly, including coercive, criminal ones. Observations of population-wide patterns of behavioural economics, such as loss aversion, are collected downstream from the biology that produced them; explaining the regularities of human psychological responses will involve farsighted evolutionary accounts and minute, modern models of individual
differences, including personality and neurological profile. Critically, these psych-cybercrime theories will recognize that criminals and victims are humans, and that ransom attacks can be usefully regarded as attacks on a human brain, ones designed to take advantage of the characteristics of that target. Many research directions can take us further. For example, we could attempt to determine the roles of loss and gain salience in affecting future behaviours, and encourage neurologically informed decision-making habits (recent studies exploring the role of learning and context include Schutte, Kenemans, & Schutter, 2017, and Zheng, Li, Wang, Wu, & Liu, 2015). Such information would be especially useful in the event of the emergence of a ‘blackmail’ ransom program, which would ask for a payment, decrypt and go dormant, then resurface, turning hard-drive access into a subscription service (never mind that computer manufacturers of the future may see fit to do this themselves, in one form or another). We may also examine risk assessment at pre-encryption stages of a ransomware attack. Companies will find it useful to instruct and survey their employees about information security frequently; a recent study found that participants’ security behaviours, as modelled by the Iowa Gambling Task, matched their self-reported measures when the issue of information security was made prominent (Vance, Anderson, Kirwan, & Eargle, 2014). It was also discovered that individual neural responses to feedback predicted security behaviours, providing further evidence of individual variance in risk-taking behaviours correlated with EEG evidence (Vance et al., 2014)—but it hardly seems ethical for companies to subject their employees to such tests. Stressing the importance of information security will help to encourage secure behaviours.
The functional relationship that explains the correlation between a larger MFN and greater risk-taking behaviours is still poorly understood, but the connection presently gives some insight into behaviour performed under a mantle of risk and salient loss feedback, like ransomware payments. Further research will help to clarify which distinct processes may be indexed by the various feedback-sensitive components. There will also surely be further investigation into how risk-related feedback processing changes with age; younger adults have been shown to be comparatively more risk-seeking than their elders (West, Tiernan, Kieffaber, Bailey, & Anderson, 2014), a finding that matches a recent survey of gambling behaviour, in which participation peaked among respondents in their 20s and 30s before falling off (Welte, Barnes, Tidwell, & Hoffman, 2011). As the tech-savvy youth grow older, it will be interesting to see whether they prove resistant to ransomware demands. Astute commentators might suggest that new computer software and technology could be used to monitor for ransomware, bypassing the need to educate the public about such attacks and advise them against cooperation. Unfortunately, some recently proposed ransomware detectors, while innovative, may pose too serious a threat to privacy to achieve widespread usage. One recent proposal is a cloud-monitoring system with access to a user’s computer, monitoring files and the network, while another proposes monitoring specifically of I/O requests and of the NTFS Master File Table, which would supposedly detect most current ransomware viruses (Kharraz, Robertson, Balzarotti, Bilge, & Kirda, 2015; Lee, Moon, & Park, 2017). It is difficult to determine whether the public would tolerate these tools, or
how they would be profitably implemented in a privacy-respecting way—a way that does not monetize information collected from protective monitoring. For instance, it does not seem appropriate to give a cloud-based system access to data kept by public institutions like hospitals, even though such institutions have been notably disrupted by ransomware in the past (see Patyal et al., 2017). Research about how best to educate compulsive gamblers suggests that the education route of prevention is viable. A recent study determined that animated videos were more likely to alter gamblers’ habits than video recordings (Wohl, Christie, Matheson, & Anisman, 2010). Such a video series about ransomware would provide the additional advantage of telling users to back up crucial files and keep their software up to date. Combining these methods with ransomware-sensitive antivirus software will help to limit the profitability of ransomware attacks. Most importantly, recognizing and understanding the social persuasion element of ransomware attacks will provide a wider range of strategies and recommendations to help victims in the grey area to resist payment. Increasing awareness and availability of technological countermeasures and good practices will also undermine the persuasiveness of ransomware. As we come to understand the mechanisms of psychological coercion at play in ransomware, it becomes clearer that pre-emptive, psychologically informed educational strategies may be necessary to tackle ransomware from the victim side. Encouraging better security practice and directing victims to consult with neutral third parties will help to bleed attackers dry.

References Andersson, O., Holm, H. J., Tyran, J. R., & Wengström, E. (2014). Deciding for others reduces loss aversion. Management Science, 62(1), 29–36. Baluch, F., & Itti, L. (2011). Mechanisms of top-down attention. Trends in Neurosciences, 34(4), 210–224. Bohr, J., & Bashir, M. (2014, July). Who uses bitcoin? An exploration of the bitcoin community. In Twelfth Annual International Conference on Privacy, Security and Trust (PST) (pp. 94–101). IEEE. doi: Bressler, S.  L., & Ding, M. (2006). Event-related potentials. Wiley encyclopedia of biomedical engineering. Hoboken, NJ: Wiley. Brewer, R. (2016). Ransomware attacks: Detection, prevention and cure. Network Security, 2016(9), 5–9. Carver, C. S. (1980). Perceived coercion, resistance to persuasion, and the type a behavior pattern. Journal of Research in Personality, 14(4), 467–481. Cleck, J. N., & Blendy, J. A. (2008). Making a bad thing worse: Adverse effects of stress on drug addiction. The Journal of Clinical Investigation, 118(2), 454–461. Dehaene, S., Posner, M. I., & Tucker, D. M. (1994). Localization of a neural system for error detection and compensation. Psychological Science, 5(5), 303–305. Dias-Ferreira, E., Sousa, J. C., Melo, I., Morgado, P., Mesquita, A. R., Cerqueira, J. J., … Sousa, N. (2009). Chronic stress causes frontostriatal reorganization and affects decision-making. Science, 325(5940), 621–625. Friedman, M., & Rosenman, R. H. (1959). Association of specific overt behavior pattern with blood and cardiovascular findings: Blood cholesterol level, blood clotting time, incidence of arcus


D. L. McIntyre and R. Frank

senilis, and clinical coronary artery disease. Journal of the American Medical Association, 169(12), 1286–1296.
Gehring, W. J., & Willoughby, A. R. (2002). The medial frontal cortex and the rapid processing of monetary gains and losses. Science, 295(5563), 2279–2282.
Haigh, M. S., & List, J. A. (2005). Do professional traders exhibit myopic loss aversion? An experimental analysis. The Journal of Finance, 60(1), 523–534.
Herrmann, C. S., Strüber, D., Helfrich, R. F., & Engel, A. K. (2016). EEG oscillations: From correlation to causality. International Journal of Psychophysiology, 103, 12–21.
Itagaki, S., & Katayama, J. I. (2008). Self-relevant criteria determine the evaluation of outcomes induced by others. Neuroreport, 19(3), 383–387.
Jones, C. L., Minati, L., Harrison, N. A., Ward, J., & Critchley, H. D. (2011). Under pressure: Response urgency modulates striatal and insula activity during decision-making under risk. PLoS One, 6(6), e20942.
Judges, R. A., Gallant, S. N., Yang, L., & Lee, K. (2017). The role of cognition, personality, and trust in fraud victimization in older adults. Frontiers in Psychology, 8, 588.
Kharraz, A., Robertson, W., Balzarotti, D., Bilge, L., & Kirda, E. (2015). Cutting the Gordian knot: A look under the hood of ransomware attacks. In International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (pp. 3–24). Cham: Springer.
Kiehl, K. A., Liddle, P. F., & Hopfinger, J. B. (2000). Error processing and the rostral anterior cingulate: An event-related fMRI study. Psychophysiology, 37(2), 216–223.
Kircanski, K., Notthoff, N., DeLiema, M., Samanez-Larkin, G. R., Shadel, D., Mottola, G., … Gotlib, I. H. (2018). Emotional arousal may increase susceptibility to fraud in older and younger adults. Psychology and Aging, 33(2), 325.
Knoch, D., Gianotti, L. R., Pascual-Leone, A., Treyer, V., Regard, M., Hohmann, M., & Brugger, P. (2006). Disruption of right prefrontal cortex by low-frequency repetitive transcranial magnetic stimulation induces risk-taking behavior. Journal of Neuroscience, 26(24), 6469–6472.
Kobayakawa, M., Koyama, S., Mimura, M., & Kawamura, M. (2008). Decision making in Parkinson's disease: Analysis of behavioral and physiological patterns in the Iowa gambling task. Movement Disorders, 23(4), 547–552.
Krawczyk, D. C., & D'Esposito, M. (2013). Modulation of working memory function by motivation through loss-aversion. Human Brain Mapping, 34(4), 762–774.
Lee, J. K., Moon, S. Y., & Park, J. H. (2017). CloudRPS: A cloud analysis based enhanced ransomware prevention system. The Journal of Supercomputing, 73(7), 3065–3084.
Leng, Y., & Zhou, X. (2014). Interpersonal relationship modulates brain responses to outcome evaluation when gambling for/against others: An electrophysiological analysis. Neuropsychologia, 63, 205–214.
Li, Y. J., Kenrick, D. T., Griskevicius, V., & Neuberg, S. L. (2012). Economic decision biases and fundamental motivations: How mating and self-protection alter loss aversion. Journal of Personality and Social Psychology, 102(3), 550.
Liu, Y., Nelson, L. D., Bernat, E. M., & Gehring, W. J. (2014). Perceptual properties of feedback stimuli influence the feedback-related negativity in the flanker gambling task. Psychophysiology, 51(8), 782–788.
Luck, S. J. (2014). An introduction to the event-related potential technique. Cambridge, MA: MIT Press.
Luo, X., & Liao, Q. (2007). Awareness education as the key to ransomware prevention. Information Systems Security, 16(4), 195–202.
Masaki, H., Takeuchi, S., Gehring, W. J., Takasawa, N., & Yamazaki, K. (2006). Affective-motivational influences on feedback-related ERPs in a gambling task. Brain Research, 1105(1), 110–121.
Minati, L., Grisoli, M., Franceschetti, S., Epifani, F., Granvillano, A., Medford, N., … Critchley, H. D. (2012). Neural signatures of economic parameters during decision-making: A functional MRI (FMRI), electroencephalography (EEG) and autonomic monitoring study. Brain Topography, 25(1), 73–96.

No Gambles with Information Security: The Victim Psychology of a Ransomware Attack


Modic, D., Anderson, R., & Palomäki, J. (2018). We will make you like our research: The development of a susceptibility-to-persuasion scale. PLoS One, 13(3), e0194119.
Paddon, D. (2018, May 16). Dozens of Canadian firms have paid ransoms to regain control of data, study finds. The Globe and Mail.
Patyal, M., Sampalli, S., Ye, Q., & Rahman, M. (2017). Multi-layered defense architecture against ransomware. International Journal of Business and Cyber Security, 1(2), 52–64.
Polman, E. (2012). Self–other decision making and loss aversion. Organizational Behavior and Human Decision Processes, 119(2), 141–150.
Richardson, R., & North, M. M. (2017). Ransomware: Evolution, mitigation and prevention. International Management Review, 13(1), 10–21.
Rozendaal, E., Buijs, L., & Reijmersdal, E. A. V. (2016). Strengthening children's advertising defenses: The effects of forewarning of commercial and manipulative intent. Frontiers in Psychology, 7, 1186.
Sagarin, B. J., Cialdini, R. B., Rice, W. E., & Serna, S. B. (2002). Dispelling the illusion of invulnerability: The motivations and mechanisms of resistance to persuasion. Journal of Personality and Social Psychology, 83(3), 526.
Schonberg, T., Fox, C. R., & Poldrack, R. A. (2011). Mind the gap: Bridging economic and naturalistic risk-taking with cognitive neuroscience. Trends in Cognitive Sciences, 15(1), 11–19.
Schutte, I., Kenemans, J. L., & Schutter, D. J. (2017). Resting-state theta/beta EEG ratio is associated with reward- and punishment-related reversal learning. Cognitive, Affective, & Behavioral Neuroscience, 17(4), 1–10.
Symantec. (2016). Symantec 2016 Internet security threat report. Tempe, AZ: Symantec.
Takács, Á., Kóbor, A., Janacsek, K., Honbolygó, F., Csépe, V., & Németh, D. (2015). High trait anxiety is associated with attenuated feedback-related negativity in risky decision making. Neuroscience Letters, 600, 188–192.
Taylor, S. E. (1991). Asymmetrical effects of positive and negative events: The mobilization-minimization hypothesis. Psychological Bulletin, 110(1), 67.
Taylor, S. F., Martis, B., Fitzgerald, K. D., Welsh, R. C., Abelson, J. L., Liberzon, I., … Gehring, W. J. (2006). Medial frontal cortex activity and loss-related responses to errors. Journal of Neuroscience, 26(15), 4063–4070.
Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315(5811), 515–518.
Treadway, M. T., Buckholtz, J. W., & Zald, D. (2013). Perceived stress predicts altered reward and loss feedback processing in medial prefrontal cortex. Frontiers in Human Neuroscience, 7, 180.
Trustwave. (2017). 2017 Trustwave global security report. Chicago, IL: Trustwave.
Tversky, A., & Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. The Quarterly Journal of Economics, 106(4), 1039–1061.
van de Weijer, S. G., & Leukfeldt, E. R. (2017). Big five personality traits of cybercrime victims. Cyberpsychology, Behavior and Social Networking, 20(7), 407–412.
Vance, A., Anderson, B. B., Kirwan, C. B., & Eargle, D. (2014). Using measures of risk perception to predict information security behavior: Insights from electroencephalography (EEG). Journal of the Association for Information Systems, 15(10), 679.
Welte, J. W., Barnes, G. M., Tidwell, M. C. O., & Hoffman, J. H. (2011). Gambling and problem gambling across the lifespan. Journal of Gambling Studies, 27(1), 49–61.
West, R., Tiernan, B. N., Kieffaber, P. D., Bailey, K., & Anderson, S. (2014). The effects of age on the neural correlates of feedback processing in a naturalistic gambling game. Psychophysiology, 51(8), 734–745.
Whitty, M. T., & Buchanan, T. (2012). The online romance scam: A serious cybercrime. CyberPsychology, Behavior, and Social Networking, 15(3), 181–183.



Wohl, M. J., Christie, K. L., Matheson, K., & Anisman, H. (2010). Animation-based education as a gambling prevention tool: Correcting erroneous cognitions and reducing the frequency of exceeding limits among slots players. Journal of Gambling Studies, 26(3), 469–486.
Yeung, N., Holroyd, C. B., & Cohen, J. D. (2004). ERP correlates of feedback and reward processing in the presence and absence of response choice. Cerebral Cortex, 15(5), 535–544.
Zheng, Y., Li, Q., Wang, K., Wu, H., & Liu, X. (2015). Contextual valence modulates the neural dynamics of risk processing. Psychophysiology, 52(7), 895–904.

Shifting the Blame? Investigation of User Compliance with Digital Payment Regulations

Sophie Van Der Zee

The world is becoming increasingly connected and networked (Scheerder, van Deursen, & van Dijk, 2017). Every year, more people gain access to the Internet. The International Telecommunication Union (ITU), a United Nations specialized agency for information and communication technologies, estimated that by the end of 2019, 4.1 billion people (53.6% of the global population) were connected to the Internet (ITU, 2020). In comparison, 10 years earlier, in 2009, only 25.8% of the global population had Internet access. Not only is the number of connected people growing; the types and number of tasks that people perform online are expanding as well, from social and commercial to governmental activities. A particularly interesting development is the use of digital payments instead of cash transactions. According to the World Payments Report 2020 by Capgemini and BNP Paribas, digital payment transactions grew by 14% from 2018 to 2019, to 708.5 billion transactions. Examples of digital payments include credit card transactions, debit card transactions, e-wallet transactions, and online banking transactions. With the increased use of digital payments, the security of such payments has gained in importance. Alongside the increase in legal online activities, illegal online activities have also been increasing. Criminals have recognized the potential of digital crime, and the cost of cybercrime has been rising ever since. According to the Center for Strategic and International Studies and McAfee, cybercrime now costs the world almost $600 billion annually (Lewis, 2018). As a result, cybercrime is now globally the third biggest economic crime, just behind government corruption and narcotics. In the Netherlands, the country in which the presented study was executed, cybercrime is now the most common type of crime. In 2019, 13% of citizens reported having been a victim of cybercrime (CBS, 2019).
The two most common crimes in the Netherlands are no longer traditional crimes but cybercrimes: hacking (5.5%) and online consumer fraud (4.6%).

S. Van Der Zee (*)
Department of Applied Economics, Erasmus School of Economics, Erasmus University Rotterdam, Rotterdam, The Netherlands
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
M. Weulen Kranenbarg, R. Leukfeldt (eds.), Cybercrime in Context, Crime and Justice in Digital Society I,

In response, governments, industry, and individuals are spending billions each year on preventive information security for protection against cybercrime (Egelman & Peer, 2015), although some security experts have argued that this money is better spent on the legal response to cybercrime than on prevention (Anderson et al., 2013). In the past, much emphasis was placed on the technical aspects of information security. In the past decade, the role that users play, the human factor, has gained increased attention, and with good reason. The majority of successful cyberattacks involve passive actions (e.g., not installing anti-virus software or the latest software updates) and/or active actions (e.g., downloading a file containing malware, clicking on a phishing link) by the user. In other words, victims of cybercrime often play a crucial role in information security. Users are often referred to as the weakest link in cybersecurity (Schneier, 2000), although such statements have also been criticized from a usability perspective (Krol, Spring, Parkin, & Sasse, 2016). When security measures are unusable, user compliance can decrease (Sasse, Brostoff, & Weirich, 2001). Increasing user compliance with security guidelines is desirable, since more secure user behavior could help decrease the number of successful cyberattacks and, consequently, the cost and negative consequences of cybercrime. Research has shown that there are individual differences that influence the chance of becoming a victim of cybercrime (Bossler & Holt, 2010; Gratian, Bandi, Cukier, Dykstra, & Ginther, 2018; Holtfreter, Reisig, & Pratt, 2008; Modic & Lea, 2012; Ngo & Paternoster, 2011; Van de Weijer & Leukfeldt, 2017). Specifically, researchers have studied the digital behaviors of users to identify the main weaknesses and their causes. This line of research demonstrates that some people are more likely than others to become a victim of cybercrime. Specifically, Gratian et al.
(2018) showed that roughly 5–23% of the variance in cybersecurity-related behavioral intentions is explained by individual differences such as financial risk-taking, rational decision-making, and extraversion. The most extensive study so far on individual differences and cybercrime was conducted by Van de Weijer and Leukfeldt (2017). They examined the effect of the Big Five personality traits on the chance of becoming a victim of different types of cybercrime and found that lower scores on conscientiousness and emotional stability, and higher scores on openness to experience, were linked to cybercrime victimization risk. Modic and Lea (2012) also examined which personality traits are related to cybercrime victimization and found that people who scored higher on extraversion, openness to experience, self-control, premeditation, and sensation seeking, and lower on urgency, were more likely to become victims of cybercrime. Several other studies have also found that low self-control can increase the risk of cybercrime victimization (Bossler & Holt, 2010). Noncontact crimes such as fraud and cybercrime often rely on some degree of victim cooperation, and people with low self-control tend to cooperate more easily (Holtfreter et al., 2008). However, while Ngo and Paternoster (2011) did find an effect of self-control, they found it for only two of the seven types of cybercrime they studied, suggesting an influence of context. Taken together, several individual differences influence the digital behaviors people engage in and thereby their risk of becoming a victim of cybercrime.



Current Study

This paper examines compliance with banking regulations for digital payments. Although individuals may differ in their likelihood of cybercrime victimization, banks need to minimize the chance of victimization for as many people as possible. In contrast to more traditional offline crimes, banks play a crucial role in the handling of cybercrime. When cybercrime victimization leads to financial losses, customers are more likely to report their victimization to their bank or other organizations than to the police (Van de Weijer, Leukfeldt, & Van der Zee, 2020). From a victim's point of view, the decision to reach out to organizations other than the police is understandable. The majority of cybercrime victims have negative experiences with reporting their victimization to the police (Van de Weijer et al., 2020), and banks play a role both in detecting anomalies to prevent cybercrime from happening and in handling incidents once a crime has occurred. It is therefore unsurprising that banks have taken several measures to increase user compliance, including the implementation of Information Security Awareness (ISA) programs (Bauer, Bernroider, & Chudzikowski, 2017) and the recommendations for the security of Internet payments issued by the European Central Bank (ECB, 2013). In the Netherlands, banks have taken a unified approach in their battle against cybercrime and created guidelines to improve user compliance. More specifically, on 1 January 2014, the Dutch Banking Association (DBA; Nederlandse Vereniging van Banken) launched a set of five security guidelines to enhance the reliability and security of electronic payments. Each guideline was divided into a specified set of actions that customers have to comply with in order to follow the guideline (cf. ING, n.d.). This set of guidelines and specified actions provided Dutch banks with a tangible measure of compliance.
If a customer fails to comply with one of the guidelines, or even with one of the specified actions, the bank can claim negligence on the part of the customer. Such a negligence claim can cost the customer up to 150 euros when he or she becomes a victim of cybercrime (Volkskrant, 2013). The security guidelines can be found on the websites of participating banks. Arguably, such guidelines only improve security when people are aware of their existence and comply with them. To investigate whether this is the case, we developed two research questions: Are people aware of the security guidelines implemented by the DBA? (RQ1) And are people compliant with the security guidelines implemented by the DBA? (RQ2). Many training programs and awareness campaigns aimed at improving information security focus primarily on informing people. This is based on the assumption that when people know what the risks are, how to recognize them and, more broadly, how to operate safely online, they are more likely to do so. Lab studies confirm this assumption. For example, Parsons, McCormac, Butavicius, Pattinson, and Jerram (2014) developed the Human Aspects of Information Security Questionnaire (HAIS-Q) based on people's attitudes, knowledge, and self-reported behavior. In a range of lab experiments, Parsons and colleagues demonstrate that



people with higher HAIS-Q scores are better at identifying phishing emails (Parsons et al., 2014, 2017). However, Jones, Towse, and Race (2015) demonstrated that in human-factor cybersecurity research, not all lab results translate to actual behavior in the real world. This insight led to the third research question: Does awareness of the security guidelines implemented by the DBA influence compliance with those guidelines? (RQ3). To test these three research questions, we developed a user survey about awareness of, and compliance with, the security guidelines implemented by the DBA.

Methods

This study was approved by the Research Ethics Committee of the Psychology Department of the Vrije Universiteit Amsterdam, the author's former university. The study is in line with the World Medical Association Declaration of Helsinki.

Participant Recruitment

Data collection took place in spring 2017 in the Netherlands. To increase the probability of collecting a representative sample of the Dutch population, we deliberately avoided recruiting around the university campus. Instead, participants were recruited in public spaces such as public transport and libraries. Participants could either participate using pen and paper or scan a QR code to participate online. The majority of the data was collected during train journeys. After each train stop, the experimenter would enter a new train car and approach everyone in that car to ask them to participate in the experiment. Approaching everyone in a train car was chosen to decrease sampling bias. First, the experimenter briefly explained to each potential participant what the survey was about and asked whether they were willing to participate. Upon agreement to take part, a paper survey and a pen were handed to the participant. The survey also contained a QR code linked to a Qualtrics website with a digital version of the survey, so participants who preferred to complete the survey online could do so. All participants signed the consent form before completing the survey, and participants could ask questions at any time, either in person or by email. Upon completion, pen-and-paper participants handed back the completed paper copy of the survey; online participants simply closed their web browser. A minority of the data was collected in public libraries, following a similar sampling procedure (approaching all potential participants the experimenter came across). Response rates (i.e., the percentage of people who agreed to take part) were unfortunately not recorded.



Participants

In total, 133 Dutch participants took part in this study. All participants indicated that they use online banking, so no participants were removed for this reason. However, 14 participants started but did not complete the online survey and were removed from the sample, leaving a final sample of 119 participants (67 females, 56.3%). Participants with incidental missing responses remained in the dataset; as a result, the sample size may differ slightly between questions. On average, participants were 36.14 years old (SD = 14.27, range 19–76). Almost half of the participants had completed a university degree (44.5%), followed by higher professional education (29.4%), secondary vocational education (12.6%), and secondary/high school (13.4%). Study participation took 5–10 min and was rewarded with the option to take part in a prize draw for a €50 gift voucher. Participants entered the prize draw by providing their email address; they could also choose to remain fully anonymous by not providing it. Upon completion of data collection, a randomly selected participant was contacted via email about winning the €50 voucher.

Materials

We developed a survey to measure whether people are aware of the security guidelines implemented by the Dutch Banking Association and the extent to which they report complying with these guidelines. Since this study investigates compliance with Dutch security guidelines, the questionnaire was distributed in Dutch. The questionnaire started with a set of demographic questions, including gender, age, and education level. Next, several questions about the participant's experience with digital payments were asked, including whether they use online banking and, if so, which devices they use for this purpose (e.g., PC, laptop, tablet, smartphone, smartwatch). Since previous cybercrime victimization might influence future digital behaviors, participants were also asked to report previous victimization by digital payment incidents. Next, participants were asked whether they were aware of the existence of the security guidelines of the Dutch Banking Association and, if so, to what extent they agreed with them. The final and main part of the survey comprised a set of questions based on the security guidelines implemented by the Dutch Banking Association. Each of the five main guidelines, and each of the specified actions people have to perform in order to comply with them, was translated into a statement. We developed one statement per guideline or action, and all guidelines and actions were covered. Participants were invited to respond to each statement with "correct," "incorrect," or "don't know." The statements about the five general guidelines are:



1. I keep my security codes secret.
2. Besides me, no one ever uses my bank card.
3. I make sure all devices I use for online banking are well secured.
4. I check my online bank statements at least every 2 weeks, and/or my paper bank statements within 2 weeks of arrival by post.
5. I report any online banking related incidents immediately to my bank.

Each general guideline statement was followed by a set of concrete action statements. These action statements clarified what actions people have to perform in order to adhere to the associated general guideline. For example, the specified actions for the guideline "I keep my security codes secret" are: (a) I keep the PIN of my bank card secret; (b) I keep the PIN and/or password of my online banking account secret; (c) I have not written down my security codes anywhere in recognizable format, including digitally and on paper; (d) My password does not contain information that is easy to guess, such as a postcode, a house number, a year of birth, or the name of a family member; (e) When I type in my security codes, I always make sure no one is watching me; and (f) I never give out my security codes (PINs and passwords) via telephone or email. For a complete overview in English of the guideline and action statements, please see Table 1. At the end of the survey, participants were asked about one additional guideline plus the specified actions needed to comply with it. This guideline is currently not included in the set of security guidelines implemented by the Dutch Banking Association, but it is often mentioned by security experts as advice on safe digital behavior and may be implemented in the future (ING, n.d.).
This additional guideline is "I always check whether the Internet connection with the bank is secure." The specified actions for this guideline are: (a) I check whether a little lock is present in the web browser; (b) I check whether the "s" in https is present; (c) I always check the Internet address of my bank's website before entering my personal details; and (d) I always check the transfer amount, the account number, and the date before approving an online transaction.
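The URL-level checks among these actions can be mirrored programmatically. The sketch below is a minimal illustration in Python; the domain `mybank.example` and the function name are placeholders for this example, not part of the guidelines themselves:

```python
# Sketch of the checks the guideline describes, applied to a URL:
# the scheme must be https (the "s" / the lock icon), and the host
# must exactly match the bank's own web address.
# "mybank.example" is a placeholder, not a real bank site.
from urllib.parse import urlparse

EXPECTED_HOST = "mybank.example"

def connection_looks_safe(url: str) -> bool:
    parts = urlparse(url)
    uses_tls = parts.scheme == "https"            # the "s" in https
    right_site = parts.hostname == EXPECTED_HOST  # the bank's own address
    return uses_tls and right_site

print(connection_looks_safe("https://mybank.example/login"))  # True
print(connection_looks_safe("http://mybank.example/login"))   # False (no TLS)
print(connection_looks_safe("https://mybank.evil.example/"))  # False (wrong host)
```

Note that the exact-host comparison also rejects look-alike domains such as `mybank.evil.example`, which a casual glance at the address bar can miss.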

Results

Experience with Online Banking

All participants (100%) in our survey indicated that they use online banking. Participants use different devices for this purpose, most commonly a PC or laptop (94.1%), followed by a smartphone (73.9%) and a tablet (26.9%). Only one participant indicated using a smartwatch for online banking purposes (0.8%), and no other devices were mentioned. Out of 119 participants, only nine (7.6%) indicated having experienced an online banking incident. One person had transferred money to the wrong recipient. The other eight incidents (6.7%) can be considered different forms of cybercrime. Three



Table 1  Compliance rates in percentages per guideline and specified action

Guideline/action | Comply, % | Don't comply, % | Don't know or missing, %
[G] I keep my security codes secret (for example, PIN numbers and passwords) | 82.4 | 1.7 | 16.0
  I keep the PIN of my bank card secret | 96.6 | 3.4 | 0.0
  I keep the PIN and passwords for online banking secret | 97.5 | 1.7 | 0.8
  I have not written down my security codes anywhere in recognizable format, including digitally and on paper | – | – | –
  My password does not contain information that is easy to guess, such as a postcode, a house number, a year of birth, or a name of a family member | – | – | –
  When I type in my security codes, I always make sure no one is watching me | – | – | –
  I never give out my security codes (PINs and passwords) via telephone or email | – | – | –
[G] Besides me, no one ever uses my bank card | 55.5 | 43.7 | 0.8
  If I have to hand over my bank card, upon return I always check whether I received my own bank card back | 71.4 | 20.2 | 8.4
  My bank card is always safely stored | 64.7 | 29.4 | 5.0
  I frequently check whether I still have my bank card in my possession | 87.4 | 10.1 | 2.5
[G] I make sure all devices I use for online banking are well secured (for example, computer, tablet, mobile phone, smartwatch) | 70.6 | – | –
  The most recent security update is installed on all devices | – | – | –
  Anti-virus software is installed on all devices | 79.8 | 13.4 | 6.7
  A firewall is installed on all devices | 70.6 | 11.8 | 17.7
  No illegal software or files are installed on any of the devices I use for online banking | 58.0 | 31.1 | 10.9
  Access to all devices I use for online banking is shielded with a security code (for example, a password or a PIN) | – | – | –
  Unauthorized people do not have access to online banking on any of my devices | – | – | –
  I always log out when finished with online banking | 88.2 | 10.9 | 0.8
  I never use online banking on an insecure Wi-Fi network (i.e., an open Wi-Fi network without password protection) | 63.9 | 32.8 | 3.4
[G] I check my online bank statements at least every 2 weeks, and/or my paper bank statements within 2 weeks of arrival by post | 73.9 | – | –
[G] I report any online banking related incidents immediately to my bank | 72.3 | 5.0 | 10.1
  I contact my bank when I do not have my bank card in my possession or when I don't know where my bank card is | – | – | –
  I contact my bank when I suspect another person knows or has used my security codes | – | – | –
  I contact my bank when unauthorized transactions have taken place on my bank account | 93.3 | – | –
  I contact my bank when a mobile device (such as a phone, tablet, or smartwatch) I use for online banking is no longer in my possession, without me removing the online banking application first | – | – | –
[A] I always check whether the internet connection with the bank is secure | 53.8 | 21.0 | 25.2
  I check whether a little lock is present in the web browser | 68.1 | – | –
  I check whether the "s" in https is present | 52.9 | 41.2 | 5.8
  I always check the internet address of my bank's website before entering my personal details | 75.6 | 20.2 | 4.2
  I always check the transfer amount, the account number, and the date before approving an online transaction | 95.0 | – | –

[G] marks the five general guideline statements; [A] marks the additional guideline. A dash (–) indicates a value that could not be recovered.

people had money removed from their bank account, or attempts to do so were made. Two participants experienced auction fraud, one was skimmed, one was asked via email for their password, and one could no longer log in to their own online banking account. Seven of the eight cybercrime incidents (87.5%) were reported to the bank. In addition to reporting to the bank, one of the eight victims (12.5%; auction fraud) also reported the victimization to the police, and the skimming victim also reported the crime to the organization where his or her card was skimmed. One person did not report the incident at all. Three of the eight victims indicated that they changed their behavior because of the incident. The occurrence of cybercrime in this sample is low compared to cybercrime victimization in other studies. For example, Van de Weijer et al. (2020) conducted a study on police reporting of cybercrime victimization, relying on a larger and more representative sample. In their study, 303 out of 595 participants (50.9%) indicated cybercrime victimization. Interestingly, only 13.1% of offenses were reported to the police, which is in line with the 12.5% reporting rate found in the current study. Although Van de Weijer et al. (2020) also found that victims were more likely to report their victimization to organizations other than the police (33.2%), this effect was much larger in our study (87.5%).



The low cybercrime occurrence in our study has methodological implications. The number of incidents is too small to draw conclusions from or to include in any statistical analyses, so the cyber incidents will not be discussed further in this paper.

Awareness of Security Guidelines (RQ1)

To answer the first research question, participants were asked whether they were aware of the security guidelines implemented by the DBA. A quarter of our sample (24.4%) indicated being aware of these guidelines, while three quarters indicated not being aware of them (74.8%). One participant did not answer this question (N = 118). These results indicate that a large majority of our sample was not aware of the existence of the security guidelines, even though these guidelines could influence their reimbursement by the bank in case of financial loss due to cybercrime.

Compliance with Security Guidelines (RQ2)

To answer the second research question, participants were asked to self-report whether they comply with the five security guidelines implemented by the DBA. For each guideline, we also specified which concrete actions have to be performed in order to comply. Each guideline and each action was turned into a concrete statement with which participants could agree ("correct"/yes), disagree ("incorrect"/no), or indicate that they did not know whether they comply ("don't know"). According to the guidelines, people need to follow each of the five guidelines and all specified actions in order to fully comply. Importantly, failing to comply with one of the guidelines, or even just one of the concrete actions, can already be enough for a bank to not fully reimburse victims of cybercrime due to negligence. First, participants' responses to the five general guidelines were analyzed. We created a new variable called "Guidelines Compliance." Participants were categorized as compliant when answering "correct" to all five guidelines, indicating that they follow the guidelines. Participants were categorized as non-compliant when responding "incorrect" to at least one of the five guidelines. Participants could also answer "don't know," and when answering by pen and paper, they could leave fields open (missing data). When a single statement was left unanswered but the statements above and below it were answered, the participant was not removed from the data file. In such cases, the omission is likely a deliberate choice by the participant to avoid answering the question, suggesting that this type of missing data is not random but informative, and should therefore not be removed. To deal with this meaningful missing data, we created a dummy variable in which the "don't know" answers and open fields (i.e., missing data) were combined into an "it is unknown whether all guidelines were followed" answer category.
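The categorization described above amounts to a simple decision rule. A minimal sketch in Python follows; the column names and answer codes are hypothetical, not taken from the study's actual data file:

```python
# Hypothetical coding of the "Guidelines Compliance" variable described above.
# Each of the five guideline columns holds "correct", "incorrect",
# "dont_know", or None (an unanswered pen-and-paper field).

GUIDELINES = ["codes_secret", "card_use", "devices_secured",
              "check_statements", "report_incidents"]

def guidelines_compliance(respondent):
    answers = [respondent.get(g) for g in GUIDELINES]
    if all(a == "correct" for a in answers):
        return "compliant"        # "correct" on all five guidelines
    if any(a == "incorrect" for a in answers):
        return "non-compliant"    # "incorrect" on at least one guideline
    return "unknown"              # "don't know" answers and blanks pooled

# One "don't know" and no "incorrect" answers falls in the unknown category:
example = dict.fromkeys(GUIDELINES, "correct")
example["card_use"] = "dont_know"
print(guidelines_compliance(example))  # -> unknown
```

The order of the rules matters: any "incorrect" answer makes a respondent non-compliant even when other answers are missing, which matches the categories summing to 100% in the results below.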


S. Van Der Zee

Roughly a quarter (23.5%) of participants reported following all five guidelines. The majority (60.5%) reported not following at least one guideline, thereby risking a negligence claim by the bank when falling victim to cybercrime. In a minority of cases (16.0%), it was unclear whether the participant complied with all five guidelines. Of the five guidelines, most participants indicated complying with "I keep my security codes secret" (82.4% complied), followed by "I check my online bank statements at least every 2 weeks, and/or my paper bank statements within 2 weeks of arrival by post" (73.9% complied), "I report any online banking related incidents immediately to my bank" (72.3% complied), "I make sure all devices I use for online banking are well secured" (70.6% complied), and "besides me, no one ever uses my card" (55.5% complied). Taken together, less than a quarter of people report complying with the security guidelines, even though noncompliance with even a single guideline can have negative real-life financial consequences.

For each of the five guidelines, it is specified which actions need to be undertaken in order to comply. Importantly, failing to comply with even one of those actions can already lead banks to claim customer negligence. Therefore, we also calculated the percentage of participants who reported complying with all specified actions. For this purpose, we created a new variable called "Specific Action Compliance," using the same labeling of correct, incorrect, and don't know/missing as above. Results indicate that when made specific, only 3.4% of participants indicated complying with all actions. An overwhelming majority (84.0%) indicated not complying with at least one action, and for 12.6% of participants it remained unclear whether they complied with all specified actions.
The actions that most participants complied with were: "I keep the pin number and/or password of my online banking account secret" (97.5% complied), "I keep the pin number of my bank card secret" (96.6% complied), and "I report to my bank when unauthorized transactions have taken place on my bank account" (93.3% complied). The actions least complied with were: "No illegal software or files are installed on any of the devices I use for online banking" (58.0% complied), "I never use online banking on an insecure Wi-Fi network" (63.9% complied), and "my bank card is always safely stored" (64.7% complied). For a full overview of the results, please see Table 1.

Lastly, we examined the extent to which participants complied with the additional security guideline "I always check whether the Internet connection with the bank is secure" and the set of associated specified actions. This guideline is currently not included in the set of five guidelines posited by the DBA, but is recommended by Dutch banks on their websites about safe online behavior. In total, 53.8% of participants indicated complying with the general guideline, 21.0% indicated not complying, and for 25.2% of participants it is unknown whether they comply. Three quarters of participants (75.6%) reported checking the web address of their bank, 68.1% reported checking for the lock, and 52.9% reported checking the "s" in https. On the bright side, 95% of participants indicated that they check the bank account number of the recipient and the date before approving a digital payment.
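For illustration, the manual checks this guideline asks of users (the web address, the lock, and the "s" in https) amount to verifying the scheme and host of the URL shown in the address bar. The bank domain below is hypothetical, and note that a real lock icon also reflects certificate validity, which this sketch does not verify:

```python
from urllib.parse import urlparse

def looks_like_my_bank(url, expected_host="www.examplebank.nl"):
    """Mimic the manual checks: an 'https' scheme (the 's' and the lock)
    and an exact match on the bank's host (the web address)."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname == expected_host

print(looks_like_my_bank("https://www.examplebank.nl/login"))         # True
print(looks_like_my_bank("http://www.examplebank.nl/login"))          # False: no TLS
print(looks_like_my_bank("https://www.examplebank.nl.evil.example"))  # False: wrong host
```

The third case shows why an exact host comparison matters: a lookalike domain that merely starts with the bank's name still passes the "s in https" check, which is one reason the usefulness of these indicators is debated later in this chapter.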

Shifting the Blame? Investigation of User Compliance with Digital Payment Regulations


The Effect of Awareness on Compliance with Security Guidelines (RQ3) We tested whether people who are aware of the five security guidelines are more compliant with those guidelines than people who are not aware. For this first analysis, we focused solely on the five statements directly related to the guidelines; we did not include the action-based statements. A 2 × 3 chi-square analysis of Awareness (yes/no) on Guidelines Compliance (yes/no/don't know) showed that people who are aware of the guidelines do not comply with them more than people who are not aware, p = 0.108. For an overview of all compliance results, please see Table 1.

Next, we tested whether people who are aware of the five security guidelines are more compliant with the set of specified actions than people who are not aware. For this analysis, all responses to action-based statements were analyzed. We could not perform a chi-square analysis because two cells contained fewer than five observations. Instead, we performed a 2 × 3 Fisher's Exact Test (FET) of Awareness (yes/no) on Specific Action Compliance (yes/no/don't know). The results revealed a significant association between Awareness and Specific Action Compliance, FET (2) = 8.26, n = 118, p = 0.012, indicating that people who are aware of the guidelines reported different behavior regarding specific actions than people who are not aware of the guidelines. Because Specific Action Compliance has three answer options (i.e., yes, no, and don't know/missing), we need to examine the row percentages before interpreting the results further; see Table 2. Interestingly, the percentage of people who comply with the set of specific actions is the same (3.4%) for people who are aware of the guidelines and people who are not. This means that even though there is a significant relationship between awareness and specific action compliance, being aware of the guidelines does not lead to more compliance per se.
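For readers who wish to reproduce this kind of analysis, an exact test for a 2 × 3 table can be computed by enumerating every table with the observed margins (the Freeman–Halton extension of Fisher's test). The sketch below uses only the Python standard library and the cell counts reported in Table 2; it is an illustration, not the software the authors used:

```python
from math import comb

def fisher_exact_2x3(row1, row2):
    """Exact p-value for a 2x3 table: sum the probabilities of all
    tables with the same margins that are no more likely than the
    observed table (Freeman-Halton definition)."""
    cols = [a + b for a, b in zip(row1, row2)]   # column totals
    n1 = sum(row1)                               # row-1 total
    N = n1 + sum(row2)                           # grand total
    denom = comb(N, n1)                          # ways to fill row 1

    def prob(cells):                             # multivariate hypergeometric
        p = 1
        for c, x in zip(cols, cells):
            p *= comb(c, x)
        return p / denom

    p_obs = prob(row1)
    p_value = 0.0
    # Enumerate every feasible row-1 allocation (a, b, c) with a+b+c = n1.
    for a in range(min(cols[0], n1) + 1):
        for b in range(min(cols[1], n1 - a) + 1):
            c = n1 - a - b
            if c <= cols[2]:
                p = prob((a, b, c))
                if p <= p_obs * (1 + 1e-9):      # tolerance for float error
                    p_value += p
    return p_value

# Observed counts from Table 2 (complied, did not comply, don't know):
p = fisher_exact_2x3((1, 20, 8), (3, 80, 6))     # aware vs. not aware
print(p)
```

Dedicated statistics packages implement the same enumeration with better numerics; the point of the sketch is that the test reduces to summing hypergeometric probabilities over tables as extreme as the one observed.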
Table 2  Results of Fisher's Exact Test examining the effect of guideline awareness on specific action compliance

            Complied    Did not comply    Don't know    Total
Aware       1 (3.4%)    20 (69.0%)        8 (27.6%)     29 (100%)
Not aware   3 (3.4%)    80 (89.9%)        6 (6.7%)      89 (100%)
Total       4 (3.4%)    100 (84.7%)       14 (11.9%)    118 (100%)

Exact counts and percentages are presented

When unaware of the guidelines, 89.9% of participants admitted noncompliance, compared to 69.0% admitting noncompliance when aware, a decrease of 20.9%. At the same time, we saw a 20.9% increase in participants responding that they do not know whether they comply with the guidelines when aware of the guidelines (27.6%), compared to participants who



were not aware (6.7%). This suggests that the significant association between Awareness and Specific Action Compliance is caused by people who are not aware of the guidelines openly admitting that they do not comply, while people who are aware of the guidelines more often respond that they do not know whether they comply, or leave the question unanswered.

To test how we should interpret these results, we created two dummy variables and ran an additional chi-square test and Fisher's Exact Test. If the significant effect were caused by an increase in compliance when aware, one would still expect to find a difference when "did not comply" and "don't know" are combined into one dummy variable. Because two cells contained fewer than five observations, we ran a Fisher's Exact Test of Awareness (yes/no) on Dummy 1 Specified Action Compliance (yes/no & don't know) and did not find a significant effect, p = 0.682. However, if our interpretation is correct that the significant effect is caused by a shift from "did not comply" when unaware to "don't know" when aware, one would still expect to find a difference when "complied" and "did not comply" are combined into one dummy variable. We ran a chi-square analysis of Awareness (yes/no) on Dummy 2 Specified Action Compliance (yes & no/don't know) and indeed found a significant effect, X2(1) = 9.09, n = 118, p = 0.006. In sum, being aware of security guidelines seems to make people more hesitant to admit noncompliance with those guidelines, rather than actually increasing their compliance on a specified action level.
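The Dummy 2 chi-square statistic can be reproduced from the Table 2 counts by collapsing "complied" and "did not comply" into a single category. A standard-library sketch (computed without a continuity correction, which recovers the reported statistic; the function name is ours):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = rows[i] * cols[j] / n      # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Dummy 2 from Table 2: "complied" + "did not comply" vs. "don't know"
aware     = (1 + 20, 8)    # 21 answered either way, 8 don't know
not_aware = (3 + 80, 6)    # 83 answered either way, 6 don't know
print(round(chi_square_2x2((aware, not_aware)), 2))  # 9.09, matching the reported X2(1)
```

With one degree of freedom, a statistic of 9.09 is well past the conventional 3.84 cutoff for p < 0.05, consistent with the significant effect reported above.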

Discussion Banks across the world have been establishing security guidelines to increase secure digital behavior by their customers, in an attempt to reduce the costs of cybercrime (ING, n.d.; ECB, 2013). The rationale behind such guidelines is that when people know how to behave securely, they are more likely to do so, and lab research supports this assumption (Parsons et al., 2014, 2017). Whether security guidelines indeed have the desired effect will partly depend on people's awareness of the existence, and content, of such guidelines. In this study, we tested whether people are aware of the five security guidelines implemented by the Dutch Banking Association (DBA) on the first of January 2014. Results show that only a quarter of our participants were aware of these guidelines. When asked about compliance with the five general guidelines, less than a quarter reported following all five. Importantly, when asked about compliance with the specified actions needed to comply with these five guidelines, only a few participants (3.4%) reported complete compliance.

This finding uncovers two important issues. First, people report behaving insecurely online, which puts them at risk of cybercrime victimization. Second, the difference between compliance with the general guidelines and compliance with the specified actions suggests that the phrasing of a statement (or question) influences the answer participants give. Specifically, our data suggest that the more precisely a statement is framed, for example by referring to a specific action



instead of a general guideline, the more realistic the answer will be. As a result, solely asking people about general guidelines or statements may lead to overly optimistic results. This finding is in line with a previous observation by Anderson et al. (2013), who found that general statements such as "I comply with information security policies" are susceptible to bias and not suitable for testing security knowledge. The same argument seems to apply to testing behavioral compliance with general guidelines. A practical implication of this research is therefore to ask people about specific behaviors when testing their online security level.

We also tested whether people who are aware of the guidelines are more likely to be compliant. In theory, knowledge should inform behavior. Empirical evidence for this assumption is provided by Parsons et al. (2014, 2017), who demonstrated that people with more information security knowledge are better at identifying phishing emails, a measure of secure online behavior. However, our results indicate that awareness of the existence of the security guidelines did not actually improve compliance. While we did find a significant relationship between awareness and compliance, follow-up analyses showed that this effect was more likely caused by a shift from admitting noncompliance when unaware of the security guidelines to responding "don't know" when aware of them. In other words, our data suggest that people who were aware that there were guidelines they did not comply with found it harder to admit noncompliance than people who were unaware of the existence of such guidelines. This finding suggests a certain degree of social desirability in the participants' answers.

In addition to the research we conducted on the five security guidelines posited by the DBA, we also measured compliance with an additional informal guideline about checking whether the Internet connection with the bank is secure.
Results showed that only half of participants (53.8%) comply with this guideline, a finding that is in line with previous research demonstrating that people are often confused by the lock icon and certificate checking (Dhamija, Tygar, & Hearst, 2006; Jakobsson, 2007). The extent to which this advice is useful is also debatable, since cybercriminals can manipulate these security indicators, potentially providing a false sense of security (Claessens, Dem, De Cock, Preneel, & Vandewalle, 2002).

The results of this study pose a societal problem, since awareness of, and compliance with, this set of security guidelines can have real-world consequences for online banking customers. It is becoming increasingly complicated in the Netherlands to use banking services without relying on online banking, which means that compliance with these guidelines applies to the majority of the Dutch population, with noncompliance potentially leading to negative financial consequences in the case of cybercrime victimization. In addition, Jansen and Leukfeldt (2016) concluded, based on 30 interviews with victims of online banking fraud, that there is no typical target when it comes to online banking fraud; instead, everyone is at risk.

To summarize, the majority of Dutch citizens use online banking. Dutch banks have created a set of guidelines and a list of specified actions that customers have to comply with in order to safely use online banking. Noncompliance can have negative financial consequences in case of cybercrime victimization. The majority of our participants were not aware of the existence and content of such guidelines. The great



majority of our participants indicated not complying with the five general guidelines, and almost none indicated complying with all specified actions. The latter is especially problematic, since noncompliance with even one specified action can already lead to negligence claims (own risk) and negative financial consequences. These results suggest that Dutch banks are currently expecting security-related behaviors from their customers that (a) customers are not aware of and (b) customers do not comply with. This raises the question of whether it is reasonable for banks to expect these security-related behaviors from their customers, or whether these guidelines and the associated financial consequences are better interpreted as shifting liability from the bank to the customer. In practice, such a liability shift means that victims of cybercrime can be blamed for their victimization through negligence claims by their bank.

Requesting a certain level of security-related behavior in order to avoid crime victimization is not uncommon. If you leave your bicycle unattended without locking it and the bicycle subsequently gets stolen, insurance may not pay out; an insurance company may ask for the key(s) of your bicycle lock to prove it was locked when stolen. Other behaviors, such as leaving your bicycle outside overnight, may put the bicycle owner more at risk, but are not used to determine negligence. A problem with the security guidelines for online banking is that in order to comply, people have to adhere to five general guidelines and 21 specified actions. Instead of being judged on one or two very specific actions (e.g., locking your bike), with online banking people are judged on their entire digital hygiene, plus several physical activities such as bankcard sharing. And this research shows that people do not have their entire digital hygiene in order.
Due to the implementation of the security guidelines for online banking in 2014 by the DBA, 96.6% of our participants could be held liable by their bank for their victimization if they fell victim to cybercrime. Regardless of intention, this shifts blame and liability from the bank to the victim. Blaming victims of crime is unfortunately a common and serious problem for crime in general, and for cybercrime specifically (Cross, 2015). Victims of online fraud in particular are often portrayed as greedy and gullible, and are often blamed for the actions that led to their losses. By implementing and enforcing these security guidelines, the blaming of victims for the actions that led to their financial losses now also seems to apply to the context of online banking. Researchers have argued that instead of unfairly shifting liability to cybercrime victims, society should improve the way it treats victims of cybercrime, thereby reducing the emotional stress caused by such victimization (Cross & Blackshaw, 2014). This advice may be applicable to Dutch banks as well.

Limitations and Future Research Studies investigating digital behavior tend to make use of a variety of methodological approaches. While some focus specifically on intentions and self-reported behavior, others prefer measuring actual behaviors, either in the lab or in the real world. Sometimes, lines of research that started in the lab are later validated in the



real world, such as the development and predictive value of the Security Behavior Intentions Scale (SeBIS; Egelman, Harbach, & Peer, 2016). In some cases, results from self-reported questionnaires and even lab studies do not match the results of field studies, highlighting the importance of ecologically valid research in the area of cybersecurity (Coventry, Briggs, Jeske, & van Moorsel, 2014; Jones et al., 2015).

The first limitation of this study is the relatively small sample size (n = 119) and the unstructured method of data collection. Previous research demonstrated that age affects both Internet use and cybercrime victimization, with younger people spending more time online and having higher chances of becoming a victim of cybercrime (Öğütçü, Testik, & Chouseinoglou, 2016). For this reason, we avoided participant recruitment through the university participant system and on campus. Instead, participants were recruited in public spaces such as trains and public libraries. While this recruitment method allowed for a more diverse sample (e.g., age range: 19–76), it was also time consuming, which resulted in a smaller sample size compared to online data collection. In addition, not all types of people are equally likely to frequent trains and libraries, limiting the generalizability of our results.

A second weakness of the current study is that we asked participants to self-report which digital behaviors they do and do not engage in. These self-reports may not reflect actual behavior for a number of reasons: for example, people may answer in a socially desirable manner, or they may underestimate their own risky digital behavior. The results from this study suggest that at least the social desirability bias plays a role, since participants who were aware of the security guidelines seemed more prone to answering that they "don't know" whether they comply with a specified security action than people who were not aware of the guidelines.
In other words, it seems that when people know that what they are doing is wrong, they are less likely to openly admit their noncompliance. Although field experiments are quite rare in this field, several researchers have developed either software or research methodologies with which real-world behaviors of participants can be monitored (Bravo-Lillo, Egelman, Herley, Schechter, & Tsai, 2013; Rajivan & Gonzalez, 2018). Follow-up research measuring the specified security actions could help establish the real-world reliability of our findings.

A third limitation of this research is the possibility for participants to respond with "don't know" to questions about their digital behavior. Including this answer option may have affected the outcomes of this study, since we cannot know for sure whether "don't know" really means that participants do not know whether they engage in such behavior, or whether it is a less direct and perhaps more socially acceptable way of admitting to behaving insecurely. The results displayed in Table 2, which show that people who were aware of the guidelines more often responded that they do not know whether they complied, or left the question unanswered, than people who were not aware, suggest the latter. We could have phrased the answer options for the questions about the guidelines and specified actions differently, for example by leaving out the "don't know" option or by implementing a Likert scale. The problem with leaving out the "don't know" option is that it forces people to choose between "yes" and "no," while there may be questions they genuinely do not know the answer to. For example, not everyone may know whether they



have a firewall installed on their computer, and forcing them to choose between "yes" and "no" would corrupt the dataset. Implementing a Likert scale as an answer option allows testing the extent to which people comply with security guidelines. However, in this paper we set out to test whether the current DBA guidelines and subsequent possible actions by Dutch banks can be considered fair (based on RQ1 and RQ2), and for this particular research question, it is not necessary to know to what extent people comply: one failure to comply is technically enough for banks to claim negligence, with negative financial consequences for the customer as a result. In conclusion, even though we acknowledge the theoretical benefits of applying a Likert scale answer option, we argue that the current approach sufficiently answers the stated research questions.

Conclusion We conducted a survey to test the extent to which Dutch people are aware of, and comply with, the security guidelines implemented by the DBA. The results provide several valuable insights into security guideline compliance and risky digital behavior. First, we demonstrate that only a quarter of our participants were aware of the DBA security guidelines, and even fewer fully comply with them. Second, we demonstrate that the more specifically statements are phrased, the less likely people are to report compliance. This finding highlights the importance of formulating security advice and questions as specifically as possible to gain a realistic insight into self-reported security behavior. Third, we demonstrate that awareness of the DBA security guidelines does not lead to more self-reported compliance with these guidelines. Instead, security guideline awareness reduced the likelihood of people honestly reporting on their own security-related behavior. Fourth, an investigation into which risky digital behaviors people exhibit most revealed that more than a third of participants admitted to having installed illegal software or files on devices used for online banking, using insecure open Wi-Fi networks for online banking, and not storing their bank card safely. Insights into the types of risky behavior people engage in can be used to inform policy, training, and awareness campaigns aimed at influencing people's digital hygiene.

Noncompliance with the DBA security guidelines seems to be the current norm, but can nonetheless have negative financial consequences when falling victim to cybercrime. Therefore, we advise Dutch banks to reconsider their current policy regarding the security guidelines. It is based on unrealistic expectations and, in its current form, unfairly shifts responsibility, blame, and negative financial consequences to the customer.
More effort is needed to inform people and actively help them change their digital behaviors before those behaviors can be used to impose financial consequences. Instead of blaming people when victimized, banks could take a more proactive role in improving the digital hygiene of their customers, thereby helping to reduce the cost of cybercrime. This is also to the banks' advantage, since a large part of the costs of digital payment fraud is still covered by banks rather than customers.



References

Anderson, R., Barton, C., Böhme, R., Clayton, R., van Eeten, M., Levi, M., … Savage, S. (2013). Measuring the cost of cybercrime. In The economics of information security and privacy (pp. 265–300). Berlin: Springer-Verlag.
Bauer, S., Bernroider, E. W. N., & Chudzikowski, K. (2017). Prevention is better than cure! Designing information security awareness programs to overcome users' non-compliance with information security policies in banks. Computers & Security, 68, 145–159.
Bossler, A. M., & Holt, T. J. (2010). The effect of self-control on victimization in the cyberworld. Journal of Criminal Justice, 38, 227–236.
Bravo-Lillo, C., Egelman, S., Herley, C., Schechter, S., & Tsai, J. (2013). You needn't build that: Reusable ethics compliance infrastructure for human subjects research. In Cybersecurity Research Ethics Dialog & Strategy Workshop. San Francisco, CA: IEEE.
CBS. (2019). Less traditional crime, more cybercrime. Retrieved April 5, 2020, from https://www.
Claessens, J., Dem, V., De Cock, D., Preneel, B., & Vandewalle, J. (2002). On the security of today's online electronic banking systems. Computers & Security, 21, 253–265.
Coventry, L., Briggs, P., Jeske, D., & van Moorsel, A. (2014). SCENE: A structured means for creating and evaluating behavioral nudges in a cyber security environment. International Conference of Design, User Experience, and Usability, 2014, 229–239.
Cross, C. (2015). No laughing matter: Blaming the victim of online fraud. International Review of Victimology, 21(2), 187–204.
Cross, C., & Blackshaw, D. (2014). Improving the police response to online fraud. Policing: A Journal of Policy and Practice, 9(2), 119–128.
Dhamija, R., Tygar, J. D., & Hearst, M. (2006). Why phishing works. CHI '06 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2006, 581–590.
Egelman, S., Harbach, M., & Peer, E. (2016). Behavior ever follows intention? A validation of the Security Behavior Intentions Scale (SeBIS). In The 2016 CHI Conference (pp. 5257–5261). San Jose, CA: CHI.
Egelman, S., & Peer, E. (2015). Scaling the security wall: Developing a security behavior intentions scale (SeBIS). In CHI 2015. Seoul: CHI.
European Central Bank (ECB). (2013). Recommendations for the security of internet payments. Retrieved April 5, 2020, from
Gratian, M., Bandi, S., Cukier, M., Dykstra, J., & Ginther, A. (2018). Correlating human traits and cyber security behavior intentions. Computers & Security, 73, 345–358.
Holtfreter, K., Reisig, M. D., & Pratt, T. C. (2008). Low self-control, routine activities, and fraud victimization. Criminology, 46, 189–220.
ING. (n.d.). Uniforme veiligheidsregels [Uniform security rules]. Retrieved April 5, 2020, from ING_uniforme-veiligheidsregels_tcm162-41790.pdf
ING Veilig Internetbankieren [ING safe internet banking]. (n.d.). Retrieved April 5, 2020, from veilig-bankieren/veilig-bankzaken-regelen/veilig-bankzaken-regelen-met-mijn-ing/index.html
ITU. (2020). ITU statistics on individuals using the Internet, 2005–2019. Retrieved April 5, 2020, from
Jakobsson, M. (2007). The human factor in phishing. Privacy & Security of Consumer Information, 7, 1–19.
Jansen, J., & Leukfeldt, E. R. (2016). Phishing and malware attacks on online banking customers in the Netherlands: A qualitative analysis of factors leading to victimization. International Journal of Cyber Criminology, 10(1), 79–91.
Jones, H. S., Towse, J. N., & Race, N. (2015). Susceptibility to email fraud: A review of psychological perspectives, data-collection methods, and ethical considerations. International Journal of Cyber Behavior, Psychology and Learning, 5(3), 13–29.



Krol, K., Spring, J. M., Parkin, S., & Sasse, M. A. (2016). Towards robust experimental design for user studies in security and privacy. In Learning from authoritative security experiment results (LASER), USENIX (pp. 21–31). San Jose, CA: USENIX.
Lewis, J. (2018). Economic impact of cybercrime—No slowing down. McAfee report, February 2018.
Modic, D., & Lea, S. E. G. (2012, September 10). How neurotic are scam victims, really? The big five and internet scams. Retrieved April 5, 2020, from
Ngo, F. T., & Paternoster, R. (2011). Cybercrime victimization: An examination of individual and situational level factors. International Journal of Cyber Criminology, 5(1), 773–793.
Öğütçü, G., Testik, Ö. M., & Chouseinoglou, O. (2016). Analysis of personal information security behavior and awareness. Computers & Security, 56, 83–93.
Parsons, K., Calic, D., Pattison, M., Butavicius, M., McCormack, A., & Zwaans, T. (2017). The human aspects of information security questionnaire (HAIS-Q): Two further validation studies. Computers & Security, 66, 40–51.
Parsons, K., McCormac, A., Butavicius, M., Pattinson, M., & Jerram, C. (2014). Determining employee awareness using the human aspects of information security questionnaire (HAIS-Q). Computers & Security, 42, 165–176.
Rajivan, P., & Gonzalez, C. (2018). Creative persuasion: A study on adversarial behaviors and strategies in phishing attacks. Frontiers in Psychology, 9, 135.
Sasse, M. A., Brostoff, S., & Weirich, D. (2001). Transforming the 'weakest link': A human/computer interaction approach to usable and effective security. BT Technology Journal, 19(3), 122–131.
Scheerder, A., van Deursen, A., & van Dijk, J. (2017). Determinants of internet skills, uses and outcomes: A systematic review of the second- and third-level digital divide. Telematics and Informatics, 34(8), 1607–1624.
Schneier, B. (2000). Secrets and lies: Security in a digital world. Hoboken, NJ: John Wiley and Sons.
Van de Weijer, S., Leukfeldt, R., & Van der Zee, S. (2020). Reporting cybercrime victimization: Determinants, motives, and previous experiences. Policing: An International Journal, 2020, 1363-951X.
Van de Weijer, S. G. A., & Leukfeldt, E. R. (2017). Big five personality traits of cybercrime victims. Cyberpsychology, Behavior and Social Networking, 20(7), 407–412.
Volkskrant. (2013). 'Eigen risico voor klanten banken bij cybercrime' [Deductible for bank customers in cases of cybercrime] by Peter van Ammelrooy. Retrieved April 5, 2020, from eigen-risico-voor-klanten-banken-bij-cybercrime~bf53db4a/
World Payments Report 2019 by Capgemini and BNP Paribas. (2019). Retrieved April 5, 2020, from

Protect Against Unintentional Insider Threats: The Risk of an Employee's Cyber Misconduct on a Social Media Site

Guerrino Mazzarolo, Juan Carlos Fernández Casas, Anca Delia Jurcut, and Nhien-An Le-Khac

Introduction Until the past decade, we were used to interacting with people face to face. We captured memories with analog cameras, talked to one another in person, or sent handwritten letters to our family. In a matter of years, those everyday common acts have become outdated. The evolution of the internet has brought about a new age of social communication, and this phenomenon has extended into every aspect of modern life. A new model of society is emerging: instant communication, endless engagement, follower counts, liking, posting, and sharing content are the pillars of a society based on appearance rather than being. The number of people using social media has increased significantly, to more than 2.46 billion in 2017, surpassing 3 billion in 2021 (see Fig. 1).

The influence of social media on businesses, as well as people, cannot be denied. Today, almost every enterprise has its own social media channel, enabling businesses to gain exposure, traffic, and market insights. However, not all that glitters on social media is gold: social media presents a cybersecurity risk for every business. Individuals share almost everything about themselves on the web: friendships, demographics, family, activities, and work-related information. This can present a risk for businesses if organisational policies, training, and technology fail to properly address the issue. In many cases, it is employees' behaviour that puts key company information in danger. Most personnel lead a very connected life, constantly checking and posting a large amount of information on social media. This can lead

G. Mazzarolo (*) · A. D. Jurcut · N.-A. Le-Khac University College Dublin, Dublin, Ireland e-mail: [email protected] J. C. F. Casas University of Leicester, Leicester, UK © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 M. Weulen Kranenbarg, R. Leukfeldt (eds.), Cybercrime in Context, Crime and Justice in Digital Society I,



G. Mazzarolo et al.



Fig. 1  Number of social network users worldwide from 2010 to 2021 (Statista, 2020)

some employees to divulge private company data through platforms that they believe are protected and private. In 2018, the Office of the Information and Privacy Commissioner of Alberta (OIPC) reported a breach that occurred when an employee, looking for technical support from a close contact, accidentally sent spreadsheets containing private information without authorisation (OIPC, 2018). People often fail to consider how attractive data can be for cybercriminals, state intelligence gathering, data brokers and marketers. Once any data becomes public, what it is used for is outside the customer’s control. The information shared can contribute to a cybersecurity risk and may be difficult to manage or mitigate (Zulkarnaen, Daud, Ghani, & Hery, 2016). Social media has turned into a reconnaissance tool for malicious individuals, and user accounts are now seen as a goldmine for cybercriminals. Any data disclosure can be used for different malicious purposes: phishing and social engineering, intelligence gathering, intellectual property theft or unfair competition.

The problem of cybersecurity in relation to social media is real, persistent and continues unabated. In April 2019, the US Federal Bureau of Investigation (FBI) issued a security alert to private sector partners regarding foreign intelligence services using social media accounts to target and engage employees with US government clearance (Cimpanu, 2019). In 2014, iSIGHT Partners revealed a 3-year cyber espionage operation targeting and spying on foreign military and political leaders using social networking. According to the iSIGHT Partners report, hackers used fake accounts on Facebook,

Protect Against Unintentional Insider Threats: The Risk of an Employee’s Cyber…


Twitter, LinkedIn, Google+, YouTube and Blogger, alleging that they worked in journalism, government or defence (iSIGHT Partners, 2014). Mika Aaltola, researcher at the Finnish Institute of International Affairs, published a paper detailing a Chinese preference for LinkedIn as a means of acquiring classified information from states and enterprises (Aaltola, 2019). These examples are neither unusual nor limited to a single social media platform; many intelligence agencies conduct similar activities.

When data leakage occurs, security professionals are faced with the same question: ‘how can we prevent this from happening again?’ Over the past decade researchers and practitioners have discussed and examined the causes and characteristics of the perpetrators of insider threats. With the development of risk strategy, it has become clear that mitigation cannot rely solely on security control measures and other security-related tools (Mahfuth, 2019). Increasingly, research communities have focused their interest on technical and behavioural indicators as well as human factors (Gamachchi & Boztas, 2017). The investigation of insider threats on social media is at an embryonic stage and thus not well understood. To advance knowledge in this field there is a continuous need for new techniques to detect and deter insider threats (Holt & Bossler, 2016).

The purpose of this paper is to address this challenge and put forward a grounded framework analysing the contributions that have been made to date. The research included here examines social media security risks from the unintentional insider view and provides a testing environment, based on cybersecurity defence and theory, to better understand how human personality engages with this unique domain.
A secondary purpose is to verify whether a new indicator from these theories could be developed, with the inherent potential for practical implementation resulting in a reduced overall risk of data breach.

Insider Threat: A Background Insider threats existed well before the existence of technology. For centuries humanity has told stories about infamous attacks coming from trusted people. The quote that can be considered a mantra for insider threat hunters is ‘Et tu, Brute?’, a Latin phrase meaning ‘Even you, Brutus?’. It is allegedly attributed to Julius Caesar at the moment of his assassination in the Senate house, addressed to his beloved Brutus (Shelley, 2013). These words have come to represent an ultimate betrayal from the most unexpected source, such as a trusted partner or family member. More and more often, cybercrime cases originate within organisations. The report ‘2020 Cost of Insider Threats: Global’ disclosed that the number of incidents has increased by 47% and the average annual cost of insider threats has grown by 31% to $11.45 million over the past 2 years (Ponemon Institute, 2020). In addition, statistics suggest that insider threats account for roughly 30% of



all cybersecurity incidents in government departments and organisations (IBM, 2017). The dangers that come from inside are more difficult to predict and discover because employees are familiar with the organisation’s infrastructure and the security controls applied; this makes it possible for them to access classified material effortlessly (Mazzarolo & Jurcut, 2020). Insider crimes are usually conducted by two types of users: malicious users acting on purpose and employees accidentally causing data breaches and leaks. The result can be the same: data leakage, fraud, theft of confidential information, robbery of intellectual property and the sabotage of computer systems. Descriptions of the types of insider threat can be found in different authoritative sources; here the US-CERT definitions are used for the sake of completeness. The ‘Guide to Insider Threats’ defines the intentional insider as follows: A malicious insider threat is a current or former employee, contractor, or business partner who has or had authorised access to an organisation’s network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organisation’s information or information systems (Cappelli, Moore, & Trzeciak, 2012).

The report ‘Unintentional Insider Threats: A Foundational Study’ defines unintentional threats as: … a current or former employee, contractor, or business partner who has or had authorised access to an organisation’s network, system, or data and who, through action or inaction without malicious intent, causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organisation’s information or information systems (CERT Insider Threat Team, 2013, p. 2).

Governmental agencies, security firms and academic researchers have come together to confront the mutual enemy and propose alternative, multidisciplinary solutions (Karampelas, 2017). In order to understand what drives different insiders to illegal deeds, the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) established five distinct insider profiles: sabotage, theft (of intellectual property), fraud, espionage and unintentional insiders (Kont, Pihelgas, Wojtkowiak, Trinberg, & Osula, 2015). Where sabotage, theft, fraud and espionage require a deliberate malicious factor, the unintentional player will probably not even know they are doing something wrong, but will have inadvertently harmed an organisation’s assets by leaking data or providing access to external cybercriminals. Figure 2 shows the types of insider threat and their actions.1

1  In the counter-intelligence (CI) field, the acronym MICE (Money, Ideology, Coercion/Commitment and Ego) has for decades been accepted by the CI community as capturing the main ‘motivational and emotional aspects’ of the act of disclosing information. These four factors imply some kind of weakness or vulnerability, and a mix of two or three of them can also be decisive as motivation. Nowadays an alternative framework is being discussed and accepted by some CI experts: the path from MICE to RASCLS, the acronym for reciprocation, authority, scarcity, commitment (and consistency), liking and social proof. According to former CIA National Clandestine Service (NCS) officer Randy Burkett, today’s CI departments often deal



Fig. 2  Insider threat type

Previous research has analysed insider threat cases and tried to find common characteristics that lead to an incident. These indicators are essential for cybersecurity operators to monitor, detect and respond to possible incidents. The common behaviours that could indicate an insider threat can be divided into digital and personal behaviours; some frequent patterns are reviewed by the security company Varonis (Petters, 2020). Digital hints are associated with an employee’s usage of data, especially if the actions are not directly part of their routine job description: for example, seeking, saving, moving or printing large amounts of classified information, accessing documents that are not linked to the employee’s role, or using unauthorised remote data storage. Real-life behaviour can also be a precursor signalling further incidents. Displaying disgruntled conduct, ethical flexibility, logging in to the firm’s network during off-hours, repeatedly violating organisational policies or requesting access exceeding ‘need to know’ responsibilities are all warning signs requiring attention.

Insider threat occurrences can impact organisations in a multitude of ways; however, the riskiest consequences are financial and reputational. In July 2019, a federal court charged engineer Paige Thompson with computer fraud and abuse for an intrusion into the stored data of Capital One (Sheetz, 2019). The banking corporation revealed that it had suffered a data breach exposing hundreds of thousands of customers’ personal information. Capital One believed the financial impact of the 2019
with non-state actors with complex mixtures of competing loyalties, including family, tribe, religion, ethnicity and nationalism (Burkett, 2013).



breach to be between $100m and $150m because of costs associated with the disaster recovery plan, including customer notifications, credit monitoring, technology costs and legal support (Warwick, 2019). Insider attacks can additionally produce losses that are difficult to quantify or recover, such as damage to an organisation’s reputation. Cases such as those of Edward Snowden,2 Chelsea Manning3 and Robert Hanssen4 resulted in enormous damage to the reputation of United States government agencies.

In order to reduce the insider threat risk, it is crucial to implement a layered approach including policies, procedures and technical controls. Nowadays, as a countermeasure against data leakage within corporations, the implementation of a comprehensive Insider Threat Programme (InTP) is highly recommended. In deploying this programme, corporations should take into consideration the fact that every organisation has to tailor its approach to meet its unique needs. The Intelligence and National Security Alliance (INSA) provides a framework for implementing an InTP with the Insider Threat roadmap, based on a 13-step model representing actions taken by current successful programmes in both business and government (INSA, 2015) (Fig. 3).

2  Edward Snowden (1983) is an American citizen, a former Central Intelligence Agency (CIA) employee and subcontractor (Booz Allen Hamilton Co.) who leaked top-secret information from the National Security Agency (NSA) in 2013. Snowden gradually became disillusioned with the NSA global surveillance programmes in which he was involved, as he considered them a clear intrusion into people’s private lives. Although he tried to raise his ethical concerns through internal channels, nobody paid enough attention to the warning signs that Snowden was becoming an ‘insider threat’. Edward Snowden can be considered an example of an ‘insider threat’ with an ethical commitment. Snowden considers himself a whistle-blower rather than a leaker, since he did not leak the intelligence for ‘personal profit’.
3  Chelsea Manning (1987), born Bradley Manning, came out as a woman in 2013. She is a former US Army soldier who worked as an intelligence analyst posted in Iraq in 2009. She leaked sensitive US intelligence (up to 750,000 documents) to WikiLeaks and was imprisoned from 2010 until 2017, when her sentence was commuted. According to several military psychiatrists who assessed Manning’s personality and psychology during the trial, Manning had been isolated in the Army while dealing with her gender identity. The specialists considered that Manning had the perception that her leaks were positively changing the world. Chelsea Manning can be considered an ‘insider threat’ under the parameters of psychological strain (gender identity struggles and ego) combined with an ethical commitment to a better world.
4  Robert Hanssen (1944) is a former Federal Bureau of Investigation (FBI) senior intelligence officer who spied for the Soviet Main Intelligence Directorate (GRU) from 1979 to 2001. Hanssen sold thousands of classified documents to the KGB for more than $1.4 million in cash and diamonds.
The intelligence Hanssen provided to the Russians detailed US strategies in nuclear war, military weapons technologies and counter-intelligence. He is currently serving 15 consecutive life sentences. According to the US Department of Justice, Hanssen’s acts of espionage were ‘possibly the worst intelligence disaster in US history’. Robert Hanssen is a clear example of an ‘insider threat’ with a profit motivation (money).



Fig. 3  Insider Threat Programme Roadmap (INSA, 2015)

Research Problem and Current Investigative Approach While malicious actors within the organisation are the most sophisticated and dangerous to manage, recent investigations highlight that unintentional insider threats represent a major risk for business. These threats can also become potential attack vectors for both intentional insiders and external adversaries (Trzeciak, 2017). This study investigates unintentional insider threats (UIT), examining related research and best practice developed to date in order to support a better understanding of their origin. Through the development of a specific use case, based on data disclosure on social media, a preliminary theory is presented regarding potential mitigation strategies and countermeasures.

Even if insider threats as a whole represent a unique risk for the organisation, UIT pose an exceptional challenge and differ completely from intentional actors in terms of motivation and indicators. While policy, technical controls, monitoring and incident handling have a decisive impact in detecting, deterring and responding to malicious threats, the same cannot be said for accidental cases. The adoption of teleworking, cloud solutions, Bring Your Own Device (BYOD) arrangements and continuous interaction with the internet and social media has blurred the separation between work and private life. The result is that organisations



need to re-assess their security boundaries in order to implement appropriate protective measures and avoid leakage of internal information. The threat landscape encompasses different scenarios: during their day-to-day business activities, employees can accidentally click a phishing email, install unapproved software, upload sensitive information to the cloud, or transfer confidential data to unauthorised USB devices or via email. Fortunately, technical controls are now available which can block or deter these activities and preserve an organisation’s confidentiality and integrity. However, accidental disclosure of sensitive information on social media remains a grey area: the CERT National Insider Threat Centre acknowledges organisational obstacles to establishing, monitoring and enforcing policy regarding what personnel publish on social media sites (CERT, 2018).

Users of social media such as Facebook, Instagram, LinkedIn, etc. regularly self-disclose large amounts of information. This becomes a real problem when the data disclosed is linked with professional activity, such as posting images of the workspace, expressing negative views of employers or colleagues and, in the worst cases, sharing classified information publicly. Despite the impressive scale of information disclosure, very little is understood about what motivates users to disclose personal information.

Human error results in the majority of insider threat incidents. Because of the human factor, a multidisciplinary, people-centric approach is required as a supplementary tier of defence (Elifoglu, Abel, & Tasseven, 2018). Supervising employee conduct and maintaining data privacy obligations on social media is a challenge. Enterprises must guarantee employee rights on legal and ethical grounds, while ensuring that their online activities do not compromise company reputation or leak classified information.
Recommendations for a first line of deterrence include implementing a social media policy that provides a clear code of conduct, and non-disclosure agreements with associated disciplinary procedures in case of employee misconduct or infringement. Certain monitoring tools allow the tracking of employee comments and negative sentiment regarding businesses (Cross, 2014). Even if those applications help to reduce the risk, they still require tailored configuration and close analysis in order to perform well. Training and security awareness are certainly among the most effective countermeasures: proper training influences employees, preventing them from clicking links or prompting them to think twice before posting information on the web (Trend Micro, 2018). A blend of policy, awareness and technical controls can reduce the number of security incidents resulting from unintentional behaviour by more than 50% (Friedlander, 2016).

Reducing insider threats is not a straightforward task. There are several behavioural indicators that can support investigation and identify where a potential threat is coming from. This, however, should be integrated with trustworthy insider threat detection tools that allow the gathering of full data on user activities. Zuffoletti, CEO of SafeGuard Cyber, advises companies to think about social media both as a vector for threat hunting and as part of their attack surface (Sheridan, 2019). Due to the scope of the subject, its complexity and the continuous debate on how to reduce the incident landscape, our research restricted the domain and focused the effort on (a)



targeting the misuse of employers’ information on social media and (b) the analysis and correlation of human personality based on risk-taking tolerance.

Our hypotheses were tested and discussed according to the Pareto model. Pareto analysis, also called the 80/20 rule, assumes that the large majority of problems (80%) stem from a small number of significant causes (20%). The Pareto framework was preferred for this work because it allows the discovery of the most important causes of internal data disclosure on social media based on risky personality, and the results obtained can be translated into actionable countermeasures (Powell & Sammut-Bonnici, 2015).

The collection of evidence regarding employees mishandling information was based on internet search methodology and relied on Social Media Intelligence (SOCMINT). SOCMINT refers to a subset of Open-Source Intelligence (OSINT) that collects information exclusively from social media sites. The term ‘SOCMINT’ was coined by Omand, Bartlett, and Miller (2012). In the same document, the authors stressed that this practice should be restricted to non-intrusive collection from open sources (Omand et al., 2012). The Șușnea and Iftene definition of SOCMINT was found to be compatible with this research approach: it defines SOCMINT as a convergence of OSINT designs and web-mining techniques applied to social media information, used to identify and understand situations that could become a threat to national security. This model is adapted here and applied to industry defence strategy (Șușnea & Iftene, 2018).

Different tools have been developed for data reconnaissance and intelligence gathering. Some popular applications for collecting different types of public information are Creepy, Maltego, theHarvester and Recon-ng, among many others (Chauhan & Panda, 2015).
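The 80/20 screening described above can be sketched as a small routine that ranks causes by incident count and keeps the few that cover most of the total. The category names and counts below are hypothetical illustrations, not figures from this study.

```python
from collections import Counter

def pareto_cut(cause_counts, threshold=0.8):
    """Return the smallest set of causes covering `threshold` of all incidents."""
    total = sum(cause_counts.values())
    vital, cumulative = [], 0
    for cause, count in Counter(cause_counts).most_common():
        vital.append(cause)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return vital

# Hypothetical incident counts per disclosure cause
incidents = {
    "job-detail oversharing": 70,
    "workplace photos": 12,
    "negative employer comments": 10,
    "infrastructure details": 5,
    "other": 3,
}
print(pareto_cut(incidents))  # → ['job-detail oversharing', 'workplace photos']
```

The output is the ‘vital few’ causes a countermeasure programme would target first.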
As a search-engine technique, Nihad and Rami (2018) recommend the use of Google dorks for sophisticated research, since the engine has countless dedicated operators that enable advanced and targeted searches. Deanna Caputo, behavioural scientist at MITRE’s Social, Behavioral, and Linguistic Sciences Department, suggested that ‘technology always in some way involves human beings’ and therefore ‘you can’t tackle a technological challenge without taking into account human nature’ (Caputo, 2012). Human elements are also a major factor in UIT.

Previous research has explored personality traits with the aim of discovering specific characteristics that indicate insider threat, and previous studies stress how personality can put people at risk of cybercrime. Holt et al. investigated the extent to which personality traits and user behaviours affect the likelihood of malicious software infections (Holt, Van Wilsem, Van de Weijer, & Leukfeldt, 2018). Van de Weijer and Leukfeldt (2017) examined how the big-five personality attributes can cause exposure to attacks, and found evidence of certain traits being directly linked with cyber victimisation. Borwell et al. remarked that the human element is recognised as the weakest link in information security, and that there is often a connection between human behaviour and cybercrime exploitation (Borwell, Jansen, & Stol, 2018). A particularly interesting study explored insider threat events with malicious intent and proposed an explanation based on a connection between these events and ‘Dark



Triad’ personality attributes (difficult personalities with traits such as Machiavellianism, narcissism and psychopathy) (Maasberg, Warren, & Beebe, 2015). The above-mentioned papers have been used as background for exploring the possible link between unintentional incidents and user behaviour. Further analysis took into consideration the outcomes of previous research and the correlations between the big five (openness, conscientiousness, agreeableness, extraversion and neuroticism), DISC (dominance, influence, steadiness, conscientiousness) and cybercrime activity. Most existing works, however, focus on intentional insider threats within the organisational boundary. Nurse et al. (2014) describe personality characteristics as antecedents, or key initial reasons, for understanding an individual’s propensity to attack. Additionally, INSA underlines that certain personality traits may predispose an employee to acts of espionage, theft, violence or destruction (INSA, 2017). Personality, as a collection of behaviours, cognitions and emotional patterns, shapes an individual’s thinking and actions (Cherry, 2019). As a result, it could be of use in indicating possible involvement in activities that threaten organisations. Our research was based on an extensive literature review and an assessment of current cybersecurity defence capabilities to contain UIT. This work permitted the identification of a specific use case that includes current challenges in an unexplored environment, i.e. social media, while providing the opportunity to define an innovative countermeasure approach based on personality.

Method and Data In this section, recommendations include targeted, risk-based approaches that focus on two areas: event detection and personality screening. A thorough literature review supported the development of a framework that combines threat vectors and human factors, with the aim of establishing a methodology that is doubly impactful: detecting data leakage on social media and identifying personality traits, thereby supporting preventive cyber defence through a risk-mitigation process (Fig. 4).
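One way to picture the two-axis framework is as a weighted combination of an event-detection score and a personality-screening score. The weighting scheme and the 0-1 scales below are illustrative assumptions, not values taken from the study.

```python
def overall_risk(event_score: float, personality_score: float,
                 w_event: float = 0.6, w_person: float = 0.4) -> float:
    """Combine event-detection and personality-screening scores (each 0-1)
    into a single risk figure. Weights are illustrative assumptions."""
    return w_event * event_score + w_person * personality_score

# A profile with a confirmed disclosure event (1.0) and a moderate
# personality score (0.5) ranks above one with neither signal.
print(overall_risk(1.0, 0.5))
print(overall_risk(0.0, 0.5))
```

The weights would in practice be tuned against labelled incidents; the point is only that neither axis alone decides the ranking.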

Phase 1: Threat Vector In our research, datasets were collected following the best practice of the social media intelligence discipline (SOCMINT), a branch of open-source intelligence (OSINT) (Schaurer, 2012). SOCMINT describes methods and technologies that allow the monitoring of social media websites such as LinkedIn, Instagram, Facebook or Twitter while simultaneously collecting publicly available data.



Fig. 4  UIT method framework

Table 1  Google Dorking operators

Operator   Type of information
Intitle:   Looks for the mentioned words in the page title
Inurl:     Looks for the mentioned words in the URL
Filetype:  Finds specific file types
Intext:    Searches for specific text on the page
Site:      Restricts results to a specific website
OR         Searches for one of two keywords
AND        Searches for both keywords
“ ”        Searches for a specific combination of keywords

Google dorking is a passive information-gathering method based on querying the Google engine for certain specific information. It has many special features to help find sensitive data that we could apply in this research. More details on using Boolean strings to refine and target specific hunts in Google, in order to discover information about companies, employees and geolocations, can be found in Johnny Long’s book ‘Google Hacking for Penetration Testers’ (Long, 2004). Some of the most valuable operators available in Google dorking are shown in Table 1.

The variables of interest in our study were: site; country/region; company name; and employee categories (contractor, consultant, full time, temporary). The execution of the following query ‘ inurl:in (“Region Area, Country” AND “company name”) and (“consultant” OR “contractor” OR “full time” OR “temporary”)’ resulted in a first list of raw data equal to n = 866 records.
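A query of the shape used above can be assembled programmatically, which makes it easy to repeat the search across regions or companies. This is a generalised sketch, not the study’s exact query; the site, region and company values are placeholders.

```python
def build_dork(site, region, company, categories):
    """Assemble a Google dork similar in shape to the query used above.
    All parameter values passed in here are placeholders for illustration."""
    cats = " OR ".join(f'"{c}"' for c in categories)
    return (f'site:{site} inurl:in ("{region}" AND "{company}") '
            f'AND ({cats})')

q = build_dork("linkedin.com", "Example Region, Country", "ExampleCorp",
               ["consultant", "contractor", "full time", "temporary"])
print(q)
```

The resulting string is pasted into the search engine manually; automating submission would fall outside the non-intrusive SOCMINT practice described earlier.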



Different methods have been investigated in recent years and various solutions have been provided for collecting data online. Two of these techniques have generated great interest and results: web scraping and application programming interfaces (APIs) (Willers, 2017). The final goal of both web scraping and APIs is to retrieve web information. Web scraping extracts publicly accessible data from websites by means of software; APIs, by contrast, offer direct access to and extraction of the data. Our research, using the open-source tool Data Scraper, extracted data from the HTML web pages previously identified with Google dorking and imported it into Microsoft Excel spreadsheets.

The dataset obtained was then subjected to data wrangling with the application OpenRefine, comprising inspection (detecting unexpected, incorrect and inconsistent data), cleaning (fixing or removing any identified anomalies) and verification (inspecting results to confirm correctness). Following this process, n = 470 results with unique and consistent profiles remained.

In the first phase of this analysis, the intention was anomaly detection based on infringements of security policies and non-disclosure agreements on social media. Due to the small number of data points, a manual qualitative analysis was performed (with the support of an automatic string search in Python). This attribute-based analysis took into consideration all sections of LinkedIn; however, the most significant data for investigation were found in the ‘Summary’ and ‘Experience’ sections. The ‘Summary’ normally includes name, photo, headline, most recent company, education, contact information and a brief career story. The ‘Experience’ section usually contains job title, company, location, dates of employment and detailed information about each job experience. The user detection list was established according to the information disclosure classification in Table 2 and ranked as low- or high-risk impact.
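The inspect/clean/verify wrangling steps can be sketched in a few lines. The field names (`name`, `profile_url`) are assumptions for illustration; the study used OpenRefine rather than custom code.

```python
def wrangle(records):
    """Inspect, clean and verify scraped profile records (list of dicts).
    Mirrors the inspection (drop incomplete rows) and cleaning
    (drop duplicates) steps described above."""
    cleaned, seen = [], set()
    for r in records:
        name = (r.get("name") or "").strip()
        url = (r.get("profile_url") or "").strip().lower()
        if not name or not url:   # inspection: incomplete row
            continue
        if url in seen:           # cleaning: duplicate profile
            continue
        seen.add(url)
        cleaned.append({"name": name, "profile_url": url})
    return cleaned                # verification happens on this output
```

Run over the raw scrape, a routine of this shape is what reduces a list like the study’s 866 raw records to a smaller set of unique, consistent profiles.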
Social networks encourage users to divulge information about their job functions, responsibilities, family, interests, hobbies and more, ostensibly to engage them in a wider network with their friends and colleagues. However, the amount and sensitivity of the information disclosed without any form of protection or filtering can become extremely risky. The description of what you do in your current (or past) job can lead to the disclosure of classified company information.

Table 2  Information disclosure classification
Type of information
• Internal sensitive information
• Internal ICT infrastructure
• Sensitive role information
• Personal information linked with the job

This paper focuses on the leak of internal sensitive information: classified projects, future mergers or acquisitions, future financial investments in specific areas of interest; internal ICT infrastructure:



Fig. 5  User risk assessment

Table 3  High-risk traits

Traits           Male (n = 46)                  Female (n = 3)
Age              24–58 years                    24–32 years
Work experience  6 months to 18 years           3 months to 3 years
Role             20% technical, 80% managerial  100% managerial

software and hardware in use; sensitive role information: which employees have access to critical systems or key stakeholders, or who has authority within the company; personal information linked with the job: travel, connections, comments.

Qualitative interpretation and evaluation of each profile resulted in a list of n = 120 users who exposed information about their employer on social media. Figure 5 shows additional insight: 59% of users were not relevant and were labelled as low-risk profiles, while 41% of the examined profiles disclosed a certain level of sensitive information. The sample population of this research is shown in Table 3.
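The automatic string search that supported the manual review can be sketched as a keyword match against the Table 2 categories. The keyword lists below are hypothetical illustrations, not the study’s actual search terms, which would be tuned to the organisation’s own terminology.

```python
# Hypothetical keyword lists, one per Table 2 category.
RISK_KEYWORDS = {
    "internal sensitive information": ["classified project", "merger", "acquisition"],
    "internal ICT infrastructure": ["firewall", "SAP", "Active Directory"],
    "sensitive role information": ["system administrator", "key stakeholder"],
    "personal information linked with the job": ["business trip", "my manager"],
}

def classify(profile_text):
    """Flag a profile as high risk if any category keyword appears in it,
    returning the risk level and the set of matched categories."""
    text = profile_text.lower()
    hits = {cat for cat, words in RISK_KEYWORDS.items()
            if any(w.lower() in text for w in words)}
    return ("high", hits) if hits else ("low", hits)

print(classify("I manage the Active Directory and firewall for ExampleCorp"))
```

A match only shortlists the profile; the final low/high ranking in the study still came from qualitative interpretation of each profile.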

Phase 2: Human Factor The second phase of this research related to human factors and building a personality model for each user. This was a preliminary study constrained by finance, time and manpower, so focus remained on quickly developing and deploying a pragmatic solution. Following careful analysis of the open-source tools available on the market, several products were tested, although two received primary attention because of their similar characteristics: Crystal and Emma. Both products are based on artificial intelligence and are easy to configure. The two tools were evaluated based on ratings and reviews.



Their capabilities were finally verified through ten volunteers who took part in a study aimed at comparing, verifying and confirming the reliability of the results. Crystal was ultimately chosen because it appeared more mature and accurate, and was able to provide a more in-depth analysis. Through the use of algorithms that evaluate the communicative content available on LinkedIn, followed by statistical modelling, this application assesses personalities according to the DISC model classification system (Marston, 2008). DISC is a behaviour assessment focused on four personality traits: Dominance (D), Influence (I), Steadiness (S) and Conscientiousness (C). Crystal’s algorithms assess the public data available in any profile and provide text-sample analysis of writing style and structure (D’Agostino & Skloot, 2019). Additionally, Crystal analyses what others in your close network circles have written about you (Fig. 6). Crystal can only retrieve information that is publicly shown. The final goal is to identify people’s behavioural patterns as accurately as possible. The benefit of a text-sample approach is its accuracy and an independent framework that avoids third-party involvement. The two major flaws are the need for a sufficient data sample for the analysis and the possibility of intentionally ingesting poisoned data to alter the result. Traits are shown in Table 4.

Previous research has demonstrated that personality elements have a degree of impact on internal threats (Xiangyu, Qiuyang, & Chandel, 2017). This supports the assertion that an evaluation of employee personality traits could be used as an indicator in overall risk screening. Previous literature has also revealed that high scores in traits such as extraversion and openness, and low scores in neuroticism, agreeableness and conscientiousness, correspond with risk-taking behaviour. Specifically, Xiangyu et al.
emphasised that personality profiles, based on the ‘Big




Fig. 6  Personality profile


Personality Profile


Protect Against Unintentional Insider Threats: The Risk of an Employee’s Cyber…


Table 4  DISC personality traits Pros


Dominance Direct Result-oriented Decisive Extrovert Neurotics

Influence Inspirational Interactive Outgoing Extrovert Impulsive

Steadiness Patient Tactful Agreeable Slow Sensitive

Conscientiousness Analytical Reserve Precise Calculating Condescending

Five’ methodology (Goldberg, 1990), could be used to predict risk-­taking behaviour according to the following description: high extraversion and openness stream the motivational force for risk taking; low neuroticism and agreeableness supply the insulation against guilt or anxiety about negative consequences, and low conscientiousness makes it easier to cross the cognitive barriers of need for control, deliberation and conformity (Nicholson, Soane, Fenton-O’Creevy, & Willman, 2005).

Research on the influence of personality on self-disclosure of information on social media demonstrates that individuals who are more extroverted disclose more accurate personal information in an attempt to gain relevance and improve their position on the web (Chen, Pan, & Guo, 2016). This correlation supports both the Five-Factor Model and the DISC personality assessment (Jones, Morris, & Hartley, 2013). Following on from this research, and after reviewing the traits in each model, the following parallels to DISC have been drawn:

• 'Conscientiousness' = a parallel to DISC personality type C.
• 'Agreeable' = a parallel to DISC personality type S.
• 'Extroverted' = a parallel to DISC personality type I.
• A combination of 'Openness' and 'Neuroticism' = a parallel to DISC personality type D.
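The Big Five-to-DISC parallels listed above amount to a small lookup. The sketch below is illustrative only and is not part of the authors' tooling; the function and dictionary names are hypothetical.

```python
# Sketch of the Big Five -> DISC parallels described in the text.
# 'Openness' and 'Neuroticism' jointly map to D; the others map one-to-one.
BIG_FIVE_TO_DISC = {
    "conscientiousness": "C",
    "agreeableness": "S",
    "extraversion": "I",
    "openness": "D",
    "neuroticism": "D",
}

def disc_parallel(big_five_trait: str) -> str:
    """Return the DISC type paralleling a Big Five trait, per the mapping above."""
    return BIG_FIVE_TO_DISC[big_five_trait.lower()]

print(disc_parallel("Extraversion"))  # I
```

Note that the mapping is many-to-one in one direction (two Big Five traits collapse into D), so it cannot be inverted without losing information.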

In addition, when looking at a DISC profile, both the S and C personality styles fall on the more 'introverted' side of the DISC spectrum, while the D and I styles are considered more classically 'extroverted'. Based on the above analyses, the following hypotheses were made:

• Hypothesis 1: Dominance traits correlate with high risk-taking. These traits include: decisiveness, having a high ego, strength, being a risk taker and overstepping authority.
• Hypothesis 2: Influence traits correlate with high risk-taking. These traits include: persuasiveness, talkativeness, impulsiveness, being emotional and being more concerned with acceptance than concrete results.
• Hypothesis 3: Steadiness traits correlate with low risk-taking. These traits include: being predictable, understanding, friendly and compliant towards authority.
• Hypothesis 4: Conscientiousness traits correlate with low risk-taking. These traits include: sticking to rules, standards, procedures and protocols.




Fig. 7  Personality Risk Indicators

From the previous phase, 'Threat Vector', the users (n = 120) were processed with the Crystal software and categorised into the four DISC groups, as shown in Fig. 7. The resulting personality traits provide the most accurate personality profile possible based on the information available on LinkedIn (Skloot, 2019).

Phase 3: Insider Threat Prevention

These predictions were based on the interpretation of previous literature and on an experiment conducted between March and October 2019, and the results offer significant validation of the proposed hypotheses. The relationship between threat vector and human factor is summarised in Table 5. Forty-nine incidents were recorded: 9 associated with dominance traits (18%), 3 with influence (6%), 6 with steadiness (12%) and 31 with conscientiousness (63%). Pareto analysis was used to correlate the two variables, incidents and behavioural characteristics. The evaluation of the causes of information exposure (Fig. 8) reveals that the major source of incidents is related to two types of employee traits: roughly eighty percent of incidents were caused by unintentional actions attributable to conscientious and dominant behaviour. Focusing countermeasure efforts, in the form of security awareness and additional technical controls, on those specific groups could reduce or contain the disclosure of information.

Investigating how personality affects data disclosure on LinkedIn provided the following findings. Hypotheses 1, 2 and 3 were positively correlated with the expected data exposure on social media (1 and 2 as high risk-taking profiles and 3 as low risk-taking behaviour). Higher scores on the behavioural traits of dominance (H1) and influence (H2) were significantly related to a high number of data disclosures. In contrast, emotional stability/steadiness (H3) recorded a low number of disclosure cases. Contrary to research expectations, conscientiousness (H4) showed the highest number of data disclosures; H4 was therefore negatively correlated with the results.



Table 5  Data analysis

DISC                Events   Incidents   % Data disclosed
Dominance             21         9              18
Influence              6         3               6
Steadiness            23         6              12
Conscientiousness     70        31              63
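The Pareto analysis reported in the text can be reproduced from the incident counts in Table 5. The sketch below is illustrative only; the function name is hypothetical, and the figures are taken from Table 5.

```python
# Pareto analysis of UIT incidents per DISC group (incident counts from Table 5).
incidents = {"Dominance": 9, "Influence": 3, "Steadiness": 6, "Conscientiousness": 31}

def pareto(counts):
    """Return (label, % share, cumulative %) tuples sorted by descending count."""
    total = sum(counts.values())
    rows, cum = [], 0.0
    for label, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        share = 100 * n / total
        cum += share
        rows.append((label, round(share, 1), round(cum, 1)))
    return rows

for label, share, cum in pareto(incidents):
    print(f"{label:18s} {share:5.1f}%  cumulative {cum:5.1f}%")
# Conscientiousness and Dominance together account for ~81.6% of incidents,
# matching the "roughly eighty percent" figure discussed in the text.
```

The cumulative column is what a Pareto chart plots as its line series; the two leading categories crossing the 80% mark is the basis for concentrating countermeasures on those groups.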

Fig. 8  Pareto Analysis on UIT

• The trait characteristics of dominant personality styles are openness, neuroticism, extroversion and aversion to authority. It was reasonable to hypothesise that staff with traits of openness would show low levels of concern about data disclosure and, as a consequence, share unnecessary information.
• The influence profile characteristics are extroversion, sociability and talkativeness; I-styles want to be the centre of attention and crave interaction with others. As a result, a higher level of disclosure of personal information was expected and confirmed.
• People with steadiness personality characteristics demonstrate traits including being careful, calm, stable, more passive, predictable and reliable. The lowest number of data disclosures in this group appears to confirm the thesis supported in this research.
• Individuals who rate highly in conscientiousness strictly follow procedures and standards. They tend to be cautious and contemplative and are not natural risk takers. It was not possible to identify a causal explanation for why this personality trait resulted in the highest level of data disclosure.



Discussion

This research shows that individual differences in personality can be used as an additional indicator for deterring UIT. Three of the four hypotheses were positively confirmed. Individuals with higher 'dominance' and 'influence' traits were more prone to accepting risk and increased the number of incidents resulting in sensitive data disclosure. Conversely, individuals with 'reliable' and 'extremely loyal' characteristics were associated with a lower incident rate. The conscientiousness profile did not confirm the hypothesis of low risk-taking. Explaining that result requires an assumption based on inductive reasoning: a conscientious style, taken to an extreme, can display addiction to work, perfectionism, attention to detail and compulsive behaviour traits. Even though conscientiousness interacts positively with psychological well-being, theoretical and empirical work suggests that individuals can be excessively conscientious, resulting in obsessive-compulsiveness and thereby less positive individual outcomes (Carter, Guan, Maples, Williamson, & Miller, 2015). This argument does not appear to satisfactorily explain the result, however, and further analysis should be employed to understand why this profile was associated with the highest information disclosure.

Limitations and challenges, both theoretical and technical, were encountered during this research. Social media platforms and search engines continually revise their privacy policies, which made it difficult for programmers to interact with application programming interfaces (APIs) and to automate functions that collect and analyse data online. The dataset also changes continually as users add, modify or remove data, and the ability to interpret or predict what will happen based on behavioural traits is limited. When proposing DISC behavioural styles, it is recommended that consideration be given to style blends rather than focusing solely on a person's most highly scoring trait: most people show more of some traits and less of others, and may have some of all four. Finally, there are concerns about the implications that large-scale data mining and analytics could have for society, particularly regarding privacy, mass surveillance regimes and social bias (Kennedy & Moss, 2015).

Conclusion and Further Work

Accidental insiders pose a serious threat to every business. Employees are the most valuable assets of any company, but they can also become its most substantial security threat. Research has indicated that a considerable percentage of cybersecurity incidents and data leakages are caused by a current or former employee acting inadvertently. The expansion of online activity and social networking in recent years has jeopardised security and caused significant losses to organisations through the leakage of information by their own employees (Johnson, 2016).



When assessing the overall risk of unintentional insider threat, it is important to consider the different aspects of preventing, detecting and responding to an incident. The preventative measures currently in use, based on purely technical approaches, are insufficient; defending against insider threats requires more than technology (Stahie, 2019). The aim of this research was to develop a framework that provides possibilities for detecting data leakage within an organisation and explores the personality sphere linked to those incidents, in order to find common characteristics that can support a predictive defence capability. Building a comprehensive IT security programme should also take into consideration a reduction in the blaming and punishment of end users, because they are often the victims. As recommended by CERT, positive incentives offer the possibility of a more reasonable and beneficial approach to reducing the insider threat, with fewer undesirable consequences (CERT, 2016).

The evolution of technology has progressed to the same extent as security challenges (Schneier, 2018). Threats arising from internal personnel's activities or lack of awareness appear to represent a higher risk to information security than challenges triggered by outside attackers (Hekkala, Väyrynen, & Wiander, 2012). This study contributed to the understanding of how additional protection against data leakage from UIT can be achieved on social media. Expanding on previous research, a new framework for insider threat detection and prevention was presented and developed based on social media domains and personality traits. This paper differentiates itself from existing research, which is based almost exclusively on technical indicators. The test environment included an attempt to correlate incident detection, tracked using advanced search techniques, with risk-taking personality traits, based on SOCMINT techniques and the DISC methodology.

Contrary to research expectations, the results obtained show that conscientiousness traits constitute the riskiest profile for data disclosure. Additionally, steadiness characteristics seem to confirm an aversion to risky attitudes and the avoidance of incidents. Once again, this demonstrates the complexity of the insider threat subject and the difficulty of finding trustworthy indicators that can prevent data leakage. Even when they have signed non-disclosure agreements and are bound by security policy and regulation, people disclose information. Revealing information about someone to others is part of being human, but doing so over social media can be extremely dangerous. A social media audience is unlimited, and cybercriminals scan and target employees in real time with the purpose of collecting sensitive data that can be utilised in offensive activity such as social engineering or phishing. The result of information disclosure can have dramatic financial and reputational consequences (Long, Fang, & Danfeng, 2017). Incident detection correlated with personality trait analysis could help decrease the overall risk, but it needs to be integrated with other indicators. Used alone, it risks being overinterpreted, since personality cannot be considered a constant, established trait; people's actions are influenced by different circumstances. This research was based on risk-taking behaviour; however, other relevant human factors, such as fatigue, stress and environmental variables, can also influence daily activity (Carnegie Mellon University, 2013).



Unintentional insider threat presents a problem to security practitioners and academics alike, and further research is necessary to develop a more exhaustive understanding of risk tolerance in the context of UIT. In the future, organisations will inevitably rely more on online services such as the cloud. Cutting-edge technology and unpredictable situations (such as the COVID-19 pandemic, during which a large portion of the workforce suddenly transitioned to teleworking) can significantly increase insider risk. Randy Trzeciak, director of the CERT National Insider Threat Centre, is quoted as saying that 'this extraordinary situation has increased risk factors for insider incident' (Carnegie Mellon University, 2020). As a result, insider threats will become progressively more complex and difficult to identify, and traditional detection methods will soon be insufficient; new approaches, such as those analysed in this case study, will be needed. Technologies such as data science and artificial intelligence might soon be implemented to support the detection of insider threats before they cause irreversible damage (Jou, 2019). In February 2020, during the most recent RSA Conference, one of the most important information security summits worldwide, experts suggested that enterprises should develop their own risk algorithms by combining machine learning capabilities with behavioural analytics (Asokan, 2020). Next steps will concern further work on joint multi-risk domain indicators that could be used as a UIT deterrent.
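The suggestion above, combining machine learning capabilities with behavioural analytics into a per-organisation risk algorithm, can be sketched minimally. Everything in the snippet is hypothetical: the weights, the per-DISC risk values and the function name are illustrative assumptions, not part of this study or of any cited tool.

```python
# Hypothetical composite insider-risk score: a weighted blend of a
# behavioural-analytics anomaly score (in [0, 1]) and a personality-based
# risk weight (in [0, 1]). All values below are illustrative only.

# Illustrative per-DISC weights, loosely ordered on this study's finding that
# conscientiousness and dominance accounted for most incidents.
DISC_RISK = {"D": 0.7, "I": 0.3, "S": 0.2, "C": 0.8}

def composite_risk(anomaly_score: float, disc_type: str,
                   w_anomaly: float = 0.6, w_personality: float = 0.4) -> float:
    """Blend an anomaly score with a DISC-based weight; higher means riskier."""
    return w_anomaly * anomaly_score + w_personality * DISC_RISK[disc_type]

print(composite_risk(0.9, "C"))  # high anomaly score, high-risk trait group
print(composite_risk(0.1, "S"))  # low anomaly score, low-risk trait group
```

Consistent with the caveat in the text, a score like this should only ever be one indicator among several: personality is not a constant trait, and a static weight table would need recalibration per organisation.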

References

Aaltola, M. (2019). Geostrategically motivated co-option of social media. Retrieved from content/uploads/2019/06/bp267_geostrategically_motivated_co-option_of_social-media.pdf

Asokan, A. (2020). How machine learning can strengthen insider threat detection. Retrieved from machine-learning-strengthen-insider-threat-detection-a-13790

Borwell, J., Jansen, J., & Stol, W. (2018). Human factors leading to online fraud victimisation: Literature review and exploring the role of personality traits. Retrieved from https://www.

Burkett, R. (2013). An alternative framework for agent recruitment: From MICE to RASCLS. Studies in Intelligence, 57(1), 7–17.

Cappelli, D. M., Moore, A. P., & Trzeciak, R. F. (2012). The CERT guide to insider threats.

Caputo, D. (2012). Applying behavioral science to the challenges of cybersecurity. Retrieved from at-mitre/employee-voices/applying-behavioral-science-to-the-challenges-of-cybersecurity

Carnegie Mellon University. (2013). Unintentional insider threats: A foundational study. Pittsburgh, PA: Carnegie Mellon University.

Carnegie Mellon University. (2020). Insider threats in the time of COVID-19. Pittsburgh, PA: Carnegie Mellon University. Retrieved from events/news/article.cfm?assetId=638958

Carter, N. T., Guan, L., Maples, J. L., Williamson, R. L., & Miller, J. D. (2015). The downsides of extreme conscientiousness for psychological well-being: The role of obsessive compulsive tendencies.



Retrieved from Extreme_Conscientiousness_for_Psychological_Well-being_The_Role_of_Obsessive_Compulsive_Tendencies

CERT, SEI. (2013). Unintentional insider threats: A foundational study (CMU/SEI-2013-TN-022). Retrieved from view.cfm?assetid=58744

CERT, SEI. (2016). The critical role of positive incentives for reducing insider threats. Retrieved from

CERT, SEI. (2018). Common sense guide to mitigating insider threats (6th ed., pp. 5–6). Retrieved from

Chauhan, S., & Panda, N. K. (2015). Hacking web intelligence: Open source intelligence and web reconnaissance concepts and techniques (p. 101). Waltham, MA: Syngress.

Chen, X., Pan, Y., & Guo, B. (2016). The influence of personality traits and social networks on the self-disclosure behavior of social network site users. Retrieved from https://www.researchgate.net/publication/303316556_The_influence_of_personality_traits_and_social_networks_on_the_self-disclosure_behavior_of_social_network_site_users

Cherry, K. (2019). What is personality and why does it matter? Retrieved from is-personality-2795416

Cimpanu, C. (2019). FBI warning: Foreign spies using social media to target government contractors. Retrieved from fbi-warning-foreign-spies-using-social-media-to-target-government-contractors/

Cross, M. (2014). Social media security: Leveraging social networking while mitigating risk. Waltham, MA: Syngress Publishing, Inc.

D'Agostino, D., & Skloot, G. (2019). Predicting personality: Using AI to understand people and win more business. Hoboken, NJ: John Wiley & Sons.

Elifoglu, I. H., Abel, I., & Tasseven, O. (2018). Minimizing insider threat risk with behavioral monitoring. Retrieved from of-business-382-june_2018.pdf

Friedlander, G. (2016). How to change user behavior and reduce risk by over 50%. Retrieved from how-to-change-user-behavior-and-reduce-risk-by-over-50/

Gamachchi, A., & Boztas, S. (2017). Insider threat detection through attributed graph clustering. 16th IEEE International Conference on Trust, Security and Privacy in Computing and Communications.

Goldberg, L. R. (1990). An alternative "description of personality": The big-five factor structure. Journal of Personality and Social Psychology, 59(6), 1216–1229. https://doi.org/10.1037/0022-3514.59.6.1216

Hekkala, K., Väyrynen, R., & Wiander, T. (2012). Information security challenges of social media for companies. Retrieved from publication/264894370_Information_Security_Challenges_of_Social_Media_for_Companies

Holt, T. J., & Bossler, A. M. (2016). Cybercrime in progress: Theory and prevention of technology-enabled offenses. Crime science series (p. 156). London: Routledge.

Holt, T. J., Van Wilsem, J., Van de Weijer, S., & Leukfeldt, R. (2018). Testing an integrated self-control and routine activities framework to examine malware infection victimization. Social Science Computer Review, 38, 187.

IBM. (2017). Insider threats: The danger within. Retrieved from federal/cybersecurity-insider-threats

INSA. (2017). Assessing the mind of the malicious insider: Using behavioral model and data analytics to improve continuous evaluation. Retrieved from content/uploads/2017/04/INSA_WP_Mind_Insider_FIN.pdf

Intelligence and National Security Alliance. (2015). Insider threat program roadmap. Retrieved from threat-roadmap/

iSIGHT Partners. (2014). Newscaster: An Iranian threat within social networks. Retrieved from https://paper. Iranian_Threat_Within_Social_Networks/file-2581720763-pdf.pdf



Johnson, C. (2016). How social media jeopardizes data security. Retrieved from https://www.

Jones, C. S., Morris, R., & Hartley, N. T. (2013). Comparing correlations between four-quadrant and five-factor personality assessments.

Jou, S. (2019). How to use artificial intelligence to prevent insider threats. Retrieved from https://Blog/How-to-use-Artificial-Intelligence-to-Prevent-Insider-Threats/ba-p/2686761

Karampelas, P. (2017). An organizational visualization profiler tool based on social interactions (pp. 369–394). Cham: Springer International Publishing.

Kennedy, H., & Moss, G. (2015). Known or knowing publics? Social media data mining and the question of public agency. Retrieved from pdf/10.1177/2053951715611145

Kont, M., Pihelgas, M., Wojtkowiak, J., Trinberg, L., & Osula, A. M. (2015). Insider threat detection study. Retrieved from CCDCOE.pdf

Long, C., Fang, L., & Danfeng, Y. (2017). Enterprise data breach: Causes, challenges, prevention, and future directions. Retrieved from Enterprise_data_breach_causes_challenges_prevention_and_future_directions

Long, D. J. (2004). Google hacking for penetration testers. Rockland, MA: Syngress Publishing, Inc.

Maasberg, M., Warren, J., & Beebe, N. L. (2015). The dark side of the insider: Detecting the insider threat through examination of dark triad personality traits. Retrieved from https://the_Insider_Threat_Through_Examination_of_Dark_Triad_Personality_Traits

Mahfuth, A. (2019). Human factor as insider threat in organizations. International Journal of Computer Science and Information Security (IJCSIS), 17(12).

Marston, W. M. (2008). Emotions of normal people. Louth: Cooper Press.

Mazzarolo, G., & Jurcut, A. D. (2020). Insider threat in cybersecurity: The enemy within the gates. European Cyber Security Journal, 6(1), 57–63. Retrieved from https://media/ECJ_vol6_issue1.pdf

Nicholson, N., Soane, E., Fenton-O'Creevy, M., & Willman, P. (2005). Personality and domain-specific risk taking. Retrieved from Personality_and_Domain-Specific_Risk_Taking

Nihad, & Rami. (2018). Open source intelligence methods and tools: A practical guide to online intelligence. New York, NY: Apress.

Nurse, J. R. C., Buckley, O., Legg, P. A., Goldsmith, M., Creese, S., Wright, G. R. T., & Whitty, M. (2014). Understanding insider threat: A framework for characterizing attacks. Retrieved from

Office of the Information and Privacy Commissioner of Alberta. (2018). OIPC investigation finds City of Calgary properly responded to privacy breach. Retrieved from https://www.oipc.and-events/news-releases/2018/oipc-investigation-finds-city-of-calgary-properly-responded-to-privacy-breach.aspx

Omand, D., Bartlett, J., & Miller, C. (2012). Introducing social media intelligence (SOCMINT). Retrieved from Introducing_social_media_intelligence_SOCMINT

Petters, J. (2020). What is an insider threat? Definition and examples. Retrieved from https://www.threats/

Ponemon Institute. (2020). 2020 cost of insider threats global report. Retrieved from https://www.of-insider-threats/

Powell, T., & Sammut-Bonnici, T. (2015). Pareto analysis. Retrieved from



Schaurer, F. (2012). Social media intelligence (SOCMINT). Same song, new melody? OSINT blog. Retrieved from media-intelligence-socmint-same-song-new-melody/

Schneier, B. (2018). How changing technology affects security. Retrieved from https://www.wired.com/insights/2014/02/changing-technology-affects-security/

Sheetz, M. (2019). Meet Paige Thompson, who is accused of hacking Capital One and stealing the data of 100 million people. Retrieved from thompson-alleged-capital-one-hacker-stole-100-million-peoples-data.html

Shelley, V. (2013). Shhh… can you keep a secret? Protecting against insider threats. Retrieved from threats-how-keep-secret/

Sheridan, K. (2019). Capital One: What we should learn this time. Retrieved from https://www.one-what-we-should-learn-this-time/d/d-id/1335426

Skloot, G. (2019). How accurate is Crystal? Retrieved from crystal-accuracy

Stahie, S. (2019). Insider threat is still the biggest danger for companies – Data loss prevention is not working. Retrieved from insider-threat-is-still-the-biggest-danger-for-companies-data-loss-prevention-is-not-working/

Statista. (2020). Number of social network users worldwide from 2010 to 2023. Retrieved from

Șușnea, E., & Iftene, A. (2018). The significance of online monitoring activities for the social media intelligence. Retrieved from 2018_0.pdf#page=230

Trend Micro. (2018). The importance of employee cybersecurity training: Top strategies and best practices. Retrieved from the-importance-of-employee-cybersecurity-training-top-strategies-and-best-practices/

Trzeciak, R. (2017). 5 best practices to prevent insider threat. Retrieved from https://insights.sei.best-practices-to-prevent-insider-threat.html

Warwick, A. (2019). Former AWS engineer arrested for Capital One data breach. Retrieved from AWS-engineer-arrested-for-Capital-One-data-breach

Wijer, S. G. A., & Leukfeldt, E. (2017). Big five personality traits of cybercrime victims. Retrieved from Traits_of_Cybercrime_Victims

Willers, J. (2017). Methods for extracting data from the Internet. Retrieved from https://lib.

Xiangyu, L., Qiuyang, L., & Chandel, S. (2017). Social engineering and insider threats. Retrieved from Insider_Threats

Zulkarnaen, R., Daud, M., Ghani, S., & Hery. (2016). Human factor of online social media cybersecurity: Risk impact on critical national information infrastructure (p. 196). Cham: Springer International Publishing.

Assessing the Detrimental Impact of Cyber-Victimization on Self-Perceived Community Safety

James F. Popham

Introduction

Academic study of cyber harms targeting end-users has demonstrated that victimization of this nature has a significant adverse impact on the individual's sense of safety in their digital and physical communities, often simultaneously (Brunton-Smith, 2017; Henry, Flynn, & Powell, 2018; Henson, Reyns, & Fisher, 2013; Jansen & Leukfeldt, 2018). Internet-mediated peer-to-peer interactions, having become a primary form of socialization, often situate potential victims virtually adjacent to their aggressors and undermine their mechanisms for escape (Navarro, Clevenger, & Marcum, 2016). These end-user focused harms generally involve peer aggression communicated across networks, inclusive of social and public media, occurring between either acquaintances or strangers and incorporating symbolic forms of violence (Hinduja & Patchin, 2013; Nilan, Burgess, Hobbs, Threadgold, & Alexander, 2015; Wall, 2007). Yet despite academic and popular dialogue about the harms associated with internet-borne forms of harassment, governmental initiatives exploring perceptions of community safety in North America have substantively overlooked internet-based harms (Choo, 2015; Espelage & Hong, 2017). In these cases, indicators of community safety generally extend from empirical self-report studies, which tend to substantiate politically charismatic concerns about visible forms of social decay and disorder while overlooking the multiple aetiologies of personal safety (Whitzman, 2008). In a Canadian context, populist legislation in the province of Ontario (Canada's most populous jurisdiction) has dictated that all municipalities create a "community safety and well-being plan" (Safer Ontario Act, S.O. 2018, c.3, s.143[1]); however, the government-issued parameters for these plans centre on risk, pointing generally

J. F. Popham (*) Wilfrid Laurier University, Brantford, ON, Canada e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 M. Weulen Kranenbarg, R. Leukfeldt (eds.), Cybercrime in Context, Crime and Justice in Digital Society I,




toward crimes of disorder as key issues. Ontario's harm-oriented approach to assessing community safety is reflective of historic trends in the United States that connect it with crime prevention, responsibilizing the state as the chief arbiter of law and order while reifying criminality as a prima facie indicator (Edwards, Hughes, & Lord, 2013; Hope, 2005). Uniform Crime Report (UCR) surveys, recording police/public interactions, remain a primary data source for North American studies of public safety (Scassa, 2018), connecting the concept with the socio-legal definitions of crime present in governing legislation like the Canadian Criminal Code (R.S.C., 1985, c. C-46). While some effort has been made to incorporate data extending from victimization surveys like the Canadian General Social Survey (GSS), subsequent analyses have defaulted to employing measures of fear of crime aligned with legislated definitions (Olajide, Lizam, & Adewole, 2015; Scassa, 2018; Van Dijk, 2015; Whitzman, 2008). These outcomes often come at the cost of considering broader social influences on safety and of a capacity to reflexively address new or emerging social trends, such as the impact of digital victimization. Framed as a soft or trivial crime, end-user cyber-victimization tends to be deprioritized as a consumer responsibility and overlooked by criminal justice systems (Jewkes, 2013; Manning, 2018; Reiss, 1985), contributing toward its absence from rigid studies of community safety (Percy, 1978; Wall, 2007). Choo's (2015) systemic desensitization theoretical frame lends some insight into this absence: she argues that end-user cyber-victimizations are generally associated with juvenile forms of deviancy, leading to a trivialization by the state and its justice systems.

Systemic desensitization toward end-user cyber-victimization can also be linked with desultory responses by police services, who tend to disdain, triage, and ignore reported cybercrimes (Bidgoli & Grossklags, 2016; Bossler & Holt, 2012; Broll & Huey, 2015), in turn leading to a structural under-recording of cybercriminality in official crime statistics (Popham, McCluskey, Ouellet, & Gallupe, 2020). Given the linkage between official crime statistics and studies of public safety, the omission of digital victimization from these studies may be a function of its systemic under-reporting.

This paper develops a comparative analysis using Canadian victimization data to assess the impact of digital victimization on perceptions of community safety. The analytical method reflects similar European studies of community safety that rely on a holistic scale as an indicator of safety, rather than strictly fear of crime (Kullberg, Karlsson, Timpka, & Lindqvist, 2009; Van Dijk, 2015). Three groups are contrasted to assess the role of victimization in perceptions of community safety: (1) individuals who reported digital end-user cyber-victimizations in the past 12 months but no criminal victimizations; (2) individuals who reported criminal victimizations in the past 12 months but no digital victimizations; and (3) individuals who reported both a criminal and a digital victimization in the past 12 months ("polyvictimization"; see Hamby et al., 2018). Data were collected as part of the 2014 Canadian General Social Survey, which includes measures of self-reported victimization.


Background

Defining End-User Cyber-Victimization

The informationalization of western societies and the subsequent encroachment of technology into our day-to-day social interactions have opened up new avenues for digital victimization (Weulen Kranenbarg, Holt, & Van Gelder, 2019). These digital harms, broadly defined as cybercrime and entailing any criminal interaction that involves the use or targeting of a computer or networked technology (Maras, 2017; Wall, 2007), rely on the networked nature of society and particularly the informational flows occurring between nodal individuals (Castells, 2013). While a typological range of cybercrimes has been established through academic study and policy development (e.g. Popham, 2017; Wall, 2004), this paper is primarily concerned with those harms occurring through interpersonal, local, and often highly intimate means. Namely, end-user cyber-victimizations include a range of harms that involve using digital technologies to propagate peer aggression and interpersonal violence through both private and public media (e.g. text messaging, social media, malicious websites), potentially occurring between both acquaintances and strangers (Hinduja & Patchin, 2013). Harms can include acts of symbolic violence (Nilan et al., 2015); intimate partner violence (Melander, 2010); gender-, race-, and sexual orientation-based hate (Henry & Powell, 2018); sexual exploitation (Navarro et al., 2016); along with a constellation of other interactions. Cyber-harassment victimization rates have scaled with the popularization of the internet, affecting a range of 20–40% of people in Westernized societies per several recent meta-analyses (e.g. Aboujaoude, Savage, Starcevic, & Salame, 2015; Bottino, Bottino, Regina, Correia, & Ribeiro, 2015; Henry & Powell, 2018; Jenaro, Flores, & Frías, 2018), with even higher rates of victimization presenting in the global south (Kshetri, 2013).
A major field of academic study has emerged surrounding end-user cyber-victimization (Jenaro et al., 2018), providing insights about its prevalence, the correlates predicting victimization, and its interim and long-term impacts. In terms of correlates, studies have demonstrated connections between victimization and gender as well as sexual orientation, while mixed evidence exists linking victimization to additional indicators including socioeconomic status, race, geography, and educational attainment (Aboujaoude et al., 2015; Tokunaga, 2010). Studies have also demonstrated that frequency and variety of internet use, as well as histories of abuse, substance use, and in-person harassment or bullying, can load significantly on predictive models for victimization (Hinduja & Patchin, 2008). In terms of impact, emotional distress including feelings of sadness, anger, and frustration is amongst the most commonly encountered immediate effects (Patchin & Hinduja, 2012); these emotions are often supplanted by feelings of fear, guilt, and anxiety over time (Button & Cross, 2017; Elipe, Mora-Merchán, Ortega-Ruiz, & Casas, 2015; Jansen & Leukfeldt, 2018). End-user cyber-victimization has also been demonstrably linked with long-term affective disorders. For example, Chadwick’s (2014) meta-analysis of global cyber-harassment studies indicates that anti-social


J. F. Popham

behaviours, social and academic difficulties, depression, suicidal ideation, and attempted suicide stem from victimization. Kowalski and Limber (2013) add that physical health issues may also stem from victimization, as well as higher rates of absenteeism from school and work along with an overall sense of dissociation from one’s community.

Digital Victimization, Localized Victimization, and Self-Perceived Community Safety

Many of these indicators and impacts are analogous to those extending from studies of self-perceived community safety and its relationship with victimization. For instance, a large body of research assessing the impacts stemming from localized criminal victimization—itself a key determinant of community safety (e.g. Kullberg et al., 2009; Henson et al., 2013)—has repeatedly demonstrated correlation with self-perceived community safety measured along vectors like social withdrawal (Rader, May, & Goodrum, 2007); depression (Hochstetler, DeLisi, Jones-Johnson, & Johnson, 2014); substance use (Hughes, McCabe, Wilsnack, West, & Boyd, 2010); heightened feelings of unsafety at home and on the street (Tseloni & Zarafonitou, 2008); and long-term detriment to sense of safety (Russo & Roccato, 2010). In perhaps the most direct of connections, a report by the UK Office for National Statistics (2017) indicates that previous victimization instigates negative perceptions about safety and local/national crime rates. Similar to studies of cybercrime victimization, the body of literature on criminal victimization in physical spaces has also identified contrasts in victimization experiences across gender, sexual orientation, age, living arrangements (e.g. alone or with others), and geographic frames (e.g. Otis, 2007; Pain, 2001; Whitzman, 2008). The point of discussion here is that end-user cyber-victimizations are not only analogous to localized victimizations but are also proxy forms of victimization occurring in a relatively new space with great frequency and effect. This conceptual standpoint was explored at length by Virtanen (2017), who observed connections between personal risk interpretation ratings and fear of crime.
Of particular relevance to this study, Virtanen (2017) observed that the residual impact of cybercrime victimization is not gendered, but rather more closely related to low confidence in computing abilities. Relatively few studies have explored the relationship between digital victimization and self-perceived community safety; however, Brunton-Smith (2017) empirically identifies a significant correlation between fear of cybercrime and more general perceptions of safety. His study indicates that (1) cybercrime victimization correlates with further worry about cybercrimes; and (2) worry about cybercrime also translates to broader concerns about one’s sense of safety. Specifically, the author notes that respondents who reported feeling unsafe walking in their neighbourhood at night were 40% more likely to report being worried about online crime
compared to those who felt safe/very safe (Brunton-Smith, 2017). Similar observations were made in the context of mental health by Hamby et al. (2018), who found that polyvictimization in both digital and physical worlds manifests as post-traumatic stress and anxiety/dysphoria symptoms in a sample of rural US residents. The authors suggest that the expanding range of social interactions afforded through informationalization may expose individuals to victimization opportunities at greater frequency and variety, which in turn amplifies adverse mental health symptoms. Cénat, Smith, Hébert, and Derivois (2019) add a longitudinal element to these findings by identifying the long-term impacts of polyvictimization across digital and physical domains on the presentation of psychological distress within a sample of French tertiary students. A range of studies on cyberbullying and its implications for the non-digital lives of affected students generally mirror these findings, with particular emphasis on feelings of safety at school (e.g. Faucher, Jackson, & Cassidy, 2014; Foody, Samara, & Carlbring, 2015; Guan, Kanagasundram, Ann, Hui, & Mun, 2016; Seralathan, 2016), a locale often understood to be an extension of community (Osterman, 2000). While a research gap exists linking cyber-victimization to personal sense of safety, the international field of cybercrime has explored the personal consequences of victimization at length, providing suitable proxies when considering self-perceived community safety. For instance, Hinduja and Patchin’s (2007) defining study of offline youth experiences following online victimization observed significant indicators of strain (as defined by Agnew, 1992) that manifested as feelings of anger, frustration, and sadness. More contemporary studies have expanded on these findings beyond youth: Jansen and Leukfeldt’s (2018) study of phishing and malware victims observed enduring emotional impacts among a sample of Dutch adults.
Interestingly, the authors observed that negative secondary effects of victimization, such as the time lost dealing with banking institutions, affected victims’ sense of justice. As explored above (e.g. Office for National Statistics, 2017), negative experiences extending from victimization can breed apathy or disdain toward criminal justice and public safety. Button and Cross (2017) unpack this relationship, linking unsympathetic justice systems that often tacitly resort to victim blaming with the subsequent under-reporting of cyber frauds. Jansen and Leukfeldt (2018) add that victimization often spurs behavioural modifications as an avoidance tactic, connecting with previous findings offered by Reisig, Pratt, and Holtfreter (2009), and may ultimately lead to digital avoidance. These observations lend credence to cyber-victimization studies that employ an opportunity theory approach (e.g. Bossler & Holt, 2009; Ngo & Paternoster, 2011; Leukfeldt & Yar, 2016). The duality of long-term access and variety of interaction/formats likely increases the level of exposure that informs cyber-lifestyle elements of adapted routine activities and lifestyle-exposure lenses (Bossler & Holt, 2009; Cohen & Felson, 1979; Reyns, Henson, & Fisher, 2011; van Wilsem, 2011), situating personal safety at deeply individual levels. Connecting with the research questions herein, a host of qualitative studies have provided contextual evidence about the harms occurring at this convergence, experienced at deeply intimate levels. For instance, Navarro et al. (2016) discuss the role of
technology in continuing victimization and intimate partner violence beyond physical spaces. Their interview research with victims of interpersonal violence identifies the ways in which traditional approaches to finding safety, such as the imposition of physical barriers or distance, have been undermined by technological encroachment. Given the presumptive adverse effects of longer-term access to the internet paired with a greater variety of interactions, it seems likely that the lifestyle and guardianship arms of Cohen and Felson’s (1979) theory are fulfilled.

Systemic Desensitization

The knowledge gap addressed by Brunton-Smith (2017) with respect to the impact of digital victimization on personal sense of safety presents an important consideration for studies of cybercrime. As the author notes, a perplexing situation exists wherein significant fields of study in community safety and cybercrime have demonstrated both the effect of victimization from individual standpoints and the growing presence of digital interpersonal and property crimes within general crime rates, yet the two concepts have rarely overlapped. The systemic desensitization frame raised by Choo (2015) lends some insight into these gaps. Choo (2015) outlines a three-step “anxiety hierarchy” (p. 54) that ranges from generalized awareness about a digital issue, to local or contextual awareness, and concludes with personal and intimate experience with digital victimization. In essence, this perspective holds that complex social issues like cyberbullying are subject to a hierarchy of public responses extending from the relative social “nearness” of consequential events. While significant digital harms—such as the suicide of a young person in response to cyberbullying—may become part of public dialogue about the nature of the internet, the locale of the incident remains distant (either socially or geographically) and its nuances uncommunicated, thus abstracting its impact. In many cases, this abstraction drives trivialization of end-user cyber-victimization as the events are viewed through impersonal lenses that inform personal invulnerability and generalized notions of the internet as a lawless prairie (Hampson, 2011). Choo (2015) concludes that public awareness about end-user cyber-victimizations has rarely ascended beyond the first step of the anxiety hierarchy, ultimately framing harmful acts as personal consequences extending from an unwillingness to “log off” (Bossler & Holt, 2012; Broll & Huey, 2015; Button & Cross, 2017).
Responses are subsequently focused on accountability rather than redress, informing policy that addresses generalized fears but does little to curb victimization (Choo, 2015; Levin & Goodrick, 2013; Loveday, 2017; Smyth, 2010). By extension, this de minimis framing of cyber-victimization may also explain the limited evaluation of its impact on personal sense of safety. As Wall (2008) points out, an enormous gulf rests between mythologized public constructions of the internet and the actual day-to-day experiences in a digitized society. For instance, cyber-harassment is often alternatively referred to as cyberbullying, associating the harms with historic notions of schoolyard bullying and minimizing them as nothing more
than child’s play. These underestimations contribute toward a normalizing effect wherein the phenomenon is largely overlooked and dismissed as an unimportant, everyday experience contrasted against dystopian constructions of the “omnipotent super-hackers” who pose the true threats to digital sovereignty (Wall, 2008, p. 871; Wall, 2004). This trivialization has manifested in North America as a reluctance on the part of police services and law enforcement officers to pursue or investigate end-user oriented cybercrimes. For instance, in a 2012 study of US-based police officers’ attitudes and responses to cybercrime, Bossler and Holt found that fewer than one in five officers felt that local cybercrimes should be responded to by the police service, preferring instead that they be handled by federal agencies. Similar dismissive attitudes have been recorded in Canada (e.g. Broll & Huey, 2015) and Mexico (Becerra, 2018). Subsequently, cyber-harassment remains under-recorded in formal measures of safety. For instance, cybercrime is under-reported in the Canadian incident-based version of the Uniform Crime Reporting Survey (UCR2), which collects police-reported crime statistics from all police services operating within the country (Popham et al., 2020). This omission means that information about the extent of cybercrime is absent from policymaking processes that rely on the UCR2 as a primary indicator of public safety (Bidgoli & Grossklags, 2016; Cook & Roesch, 2012). The relative absence of cybercrime from the UCR2 then tautologically exacerbates its minimization, as lax reporting leads to deprioritization and under-developed legislative responses, which further diminish reporting rates.
When the politicized nature of community safety research is taken into account, one can see how cyber-victimization is continuously overlooked: operationalized indicators that substantiate community safety claims are generally imposed (rather than created) through managerial agenda-setting with the aim of confirming existing state-led safety measures (Whitzman, 2008).

Method

Data

Data for this study were collected from the 28th cycle of the Canadian General Social Survey (GSS), “Canadians’ Safety and Security (Victimization)”. The GSS has been collected in Canada since 1985, with the primary objectives of (1) gathering data on social trends relating to the living conditions and well-being of Canadians; and (2) developing knowledge relating to current or upcoming social policy issues or matters of public interest. The 28th cycle of the survey, which was collected via telephone interviews in 2014, focused on the safety and victimization experiences of Canadians aged 15 years or older, as well as their perceptions of crime and views regarding the criminal justice system. Survey participants were selected using a stratified random sampling approach that consists of two phases: first, a series of
exclusive strata, including provincial and census metropolitan areas, were established; second, a simple random sample of participants was contacted from each stratum until pre-determined quotas were met. Contact information was drawn from existing telephone registries and cross-tabulated with an address register to ensure coverage and accurate representation of the strata. Responses were collected using proprietary computer-assisted telephone interviewing, and 33,089 viable responses were collected from a frame of 63,706 calls placed between January and December 2014. The data are made publicly available as a public use microdata file through Statistics Canada’s Data Liberation Initiative and were accessed via the library at the author’s institution. The GSS 28 questionnaire focused on victimization, covering a range of topics including crime prevention, risks, and perceptions; abuse by spouse/partner and ex-spouse/ex-partner; criminal harassment; and other types of victimization; as well as key demographic and wellness indicators. Notably, the 2014 iteration of the victimization survey suppressed a number of questions relating to internet use, risk, and prevention that had been present on the previous (2009) victimization file; for 2014 this section was reduced to five questions specific to cyber-harassment, referred to in the survey as cyberbullying.
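As a rough illustration of the two-phase selection just described, the sketch below partitions a frame into exclusive strata and then draws a simple random sample from each stratum up to a quota. The frame, stratum labels, and quota values are invented for illustration; they are not Statistics Canada's actual sampling parameters.

```python
import random

def stratified_sample(frame, quotas, seed=0):
    """Two-phase selection: partition the frame into exclusive strata,
    then draw a simple random sample from each stratum up to its quota."""
    rng = random.Random(seed)
    # Phase 1: establish exclusive strata (here, keyed by a region label)
    strata = {}
    for person in frame:
        strata.setdefault(person["stratum"], []).append(person)
    # Phase 2: simple random sample within each stratum until the quota is met
    sample = []
    for name, members in strata.items():
        k = min(quotas.get(name, 0), len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical frame: two strata with made-up quotas
frame = [{"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(100)]
sample = stratified_sample(frame, {"urban": 10, "rural": 5})
print(len(sample))  # 15
```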

Independent Variables

A four-point categorical variable (SuperStatus) was computed using participants’ responses to two questions: the variables for most serious personal incident [in the past 12 months] and cyberbullying—past 12 months, the latter inclusive of all forms of cyberbullying. Individuals who did not report either localized or cyber-victimization were recorded as (1) “non-victimized”; those who reported personal victimization but no cyber-victimization were recorded as (2) “localized victimization only”; those who reported no personal victimizations but did report cyber-victimization were recorded as (3) “cyber-victimization only”; and individuals who reported both personal and online victimization were recorded as (4) “polyvictimization”. A second group of independent variables was selected for use in a hierarchical multiple regression analysis. These included living arrangement of respondent’s household (single/multiple members), population centres indicator (rural/urban), sex of respondent (binary coding by Statistics Canada), and age group. With the exception of the age group variable, all were converted to dummy variables in order to align with the principles of general linear modelling. The selected variables represent existing knowledge about general factors influencing perceived safety, such as those presented in Reyns’ (2015) study on the predictors of safety in Canada using a previous iteration of the GSS, as well as the consolidated knowledge discussed above by authors such as Otis (2007), Pain (2001), and Whitzman (2008).
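The SuperStatus coding and the subsequent dummy conversion described above can be sketched as follows. Variable and category names are illustrative stand-ins, not the GSS codebook's actual field names.

```python
def super_status(local_victim: bool, cyber_victim: bool) -> int:
    """Cross the two victimization indicators into the four-point
    SuperStatus categories used in the analysis:
    1 = non-victimized, 2 = localized victimization only,
    3 = cyber-victimization only, 4 = polyvictimization."""
    if local_victim and cyber_victim:
        return 4
    if cyber_victim:
        return 3
    if local_victim:
        return 2
    return 1

def dummies(status: int) -> dict:
    """One-hot (dummy) coding against the non-victimized reference
    category, as required for a general linear model's design matrix."""
    return {
        "local_only": int(status == 2),
        "cyber_only": int(status == 3),
        "poly": int(status == 4),
    }

print(super_status(False, True))  # 3
print(dummies(3))                 # {'local_only': 0, 'cyber_only': 1, 'poly': 0}
```

Coding against a "non-victimized" reference category means each regression coefficient is read as the change in the safety score relative to respondents who reported no victimization.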
Dependent Variable

A composite variable, Safety Scale (α = 0.64), was constructed from eight indicators and computed to assess self-perceived community safety. In general, composite variables improve statistical power in multivariate tests, particularly when correlations present between each original and outcome variable (Song, Lin, Ward, & Fine, 2013). More to the point, composite variables help to address the formless nature (e.g. Brunton-Smith, 2017; Ferraro & Grange, 1987) of community safety while also avoiding alignment with unitary and legalistic definitions of crime and safety, an important criticism set out by authors like Whitzman (2008) and Van Dijk (2015). The items used for this study were selected as the closest proxy to the concepts set out in the Canadian Index of Wellbeing (CIW) (Scott, 2010), and the scale items are detailed in Table 1. Two variables (neighbourhood—higher or lower amount of crime compared to the rest of Canada, and feeling of safety—alone at home at night) were keyed in the opposite direction from the other items and were recoded accordingly. Overall, the Safety Scale can range from a score of 8 at the lowest end to 29 at the highest, with lower scores indicating a greater sense of community safety. Many of the items prescribed within the CIW were developed in partnership with the Canadian Council on Social Development and reflect contemporary knowledge about the breadth of factors affecting community safety. For instance, a sense of belonging within one’s physical locale (Kern, 2005), places of study or work (Goldweber, Waasdorp, & Bradshaw, 2013; Wormington, Anderson, Schneider, Tomlinson, & Brown, 2016), and social communities (Murray, 2008) all weigh heavily on self-perceived safety. Nofziger and Williams (2005) demonstrated that positive interactions with local police services and overall impressions about their effectiveness are both connected with sense of safety, as well as risk (Ho & McKean, 2004).
Similar studies of the courts have also demonstrated their impact on perceived community safety, particularly when the courts appear to be achieving public goals of safety (Berthelot, McNeal, & Baldwin, 2018; Bradford, Jackson, Hough, & Farrall, 2008). Satisfaction with one’s overall safety from crime—that is, an overall assessment of the justice system’s functionality in personal context—has been connected with an overall sense of community safety, particularly among young people (Brennan, 2011). While some authors have raised concerns about contextual safety indicators, such as feeling safe while alone (e.g. Ferraro, 1995), the concepts nonetheless remain an important indicator of safety in historic and contemporary studies (Boyce, Eklund, Hamilton, & Bruno, 2000; Brennan, 2011; Forde, 1993; Ratnayake, 2017).

Table 1  Self-perceived Community Safety Scale

Item                                                            Range  Mean (SD)
Feeling of safety—Walking alone at night                        1–4    1.57 (0.68)
Satisfaction with personal safety from crime                    1–5    1.77 (0.77)
Perception (local police)—Ensuring safety of citizens in area   1–3    1.33 (0.54)
Sense of belonging—Local community                              1–4    1.99 (0.82)
Criminal courts—Confidence                                      1–4    2.12 (0.76)
Confidence in the police                                        1–4    1.62 (0.68)
Feeling of safety—Alone at home at night (a)                    1–3    1.12 (0.34)
Neighbourhood—Higher or lower amount of crime (a)               1–3    1.27 (0.53)

(a) Recoded
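A minimal sketch of the scale construction described above: the two opposite-keyed items are reverse-coded before summing, and internal consistency is checked with the standard Cronbach's alpha formula. The tiny three-respondent data set is invented for illustration and does not reproduce the GSS data.

```python
from statistics import variance

def reverse_code(value: int, lo: int, hi: int) -> int:
    """Flip an item so that low scores mean greater perceived safety,
    matching the direction of the other scale items."""
    return hi + lo - value

def cronbach_alpha(items):
    """items: list of per-item response lists (one list per scale item).
    Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent scale totals
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))

# Hypothetical mini data set: three respondents, three items
item_a = [1, 2, 3]
item_b = [1, 2, 3]
item_c = [3, 2, 1]                                # keyed in the opposite direction
item_c = [reverse_code(v, 1, 3) for v in item_c]  # now [1, 2, 3]
alpha = cronbach_alpha([item_a, item_b, item_c])
print(round(alpha, 2))  # 1.0
```

With real survey columns the same helpers would apply item by item before computing each respondent's total Safety Scale score.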

Hypotheses

The goal of the present study is to identify any similarities in self-perceived community safety among disparate groups, informed by past research indicating the impact of interpersonal end-user cyber-victimization on individuals. To this end, the analyses test three hypotheses:

H1. Individuals who have encountered any form of victimization, inclusive of cyber-victimization, will have a diminished sense of community safety compared to those who have not been victimized.

H2. Individuals who reported cyber-victimization will have a diminished sense of community safety compared to non-victimized individuals.

H3. Individuals who have experienced interpersonal cyber-victimization will have a similarly diminished sense of community safety compared to those who have been locally victimized.

Results

Descriptives

Table 2 sets out the frequencies associated with each victimization category, and the mean self-perceived community safety score across each category. The vast majority of respondents (90.7%, n = 30,000) did not report experiencing any form of victimization in the past 12 months. Comparatively, only 1.9% of respondents (n = 625) indicated that they had experienced cyber-victimization.

Table 2  Frequency of victimization groups and mean self-perceived community safety scale

Group                       Frequency  Percent  Mean (SD)
No victimizations           30,000     90.7     11.95 (2.77)
Local victimization only    2464       7.4      13.73 (3.16)
Cyber-victimization only    473        1.4      12.97 (3.13)
Polyvictimization           152        0.5      14.78 (3.62)
Total                       33,089     100.0    12.10 (2.86)

Independent Samples T-Test

A Welch t-test was run to determine if there were differences in self-perceived community safety between individuals who had experienced any form of victimization in the past 12 months and those who had not. This test was selected because the assumption of homogeneity of variances was violated, as assessed by Levene’s test for equality of variances (p