


SOCIAL AND BEHAVIORAL RESEARCH FOR HOMELAND SECURITY

Edited by JOHN G. VOELLER
Black & Veatch


Copyright © 2014 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993, or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

ePDF: 9781118651858
ePub: 9781118651803

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1


CONTENTS

Preface

1. Social and Psychological Aspects of Terrorism
2. Training and Learning Development for Homeland Security
3. Human Sensation and Perception
4. Human Behavior and Deception Detection
5. Speech and Video Processing for Homeland Security
6. Training for Individual Differences in Lie Detection Ability
7. Deterrence: An Empirical Psychological Model
8. Social, Psychological, and Communication Impacts of an Agroterrorism Attack

Index


PREFACE

Adapted from the Wiley Handbook of Science and Technology for Homeland Security.

The topic of homeland security did not begin with the World Trade Center or the IRA or the dissidents of past empires; it began when the concept of a nation versus a tribe took root and allegiance to a people was a choice, not a mandate. Terrorism is part of homeland security, but there are other risks to homeland security, such as those that come from Mother Nature or negligence of infrastructure maintenance. Indeed, these factors have much higher probabilities of causing substantial damage and loss of life than any group of terrorists could ever conceive. Hence, the focus here is on situations that put humans at risk and can disrupt and damage infrastructure, businesses, and the environment, and on scientific and technological tools that can assist in detecting, preventing, and mitigating such situations and in recovering from and repairing their effects.

The number of science and technology (S&T) related topics involved in the physical, cyber, and social areas of homeland security includes thousands of specialties in hundreds of disciplines, so no single collection could hope to cover even a majority of them. Instead, our intention is to discuss selected topics in ways that will allow readers to acquire basic knowledge and awareness and encourage them to continue developing their understanding of the subjects.

Naturally, in the context of homeland security and counterterrorism, some work has to be classified so as not to "communicate our punches" to our adversaries, and this is especially true in a military setting. However, homeland security is concerned with solutions to domestic situations, and these must be communicated to officials, law enforcement, and the public. Moreover, having experts speak in an open channel is important for informing researchers, academics, and students so that they can work together and increase our collective knowledge.

There are many ways to address homeland security concerns and needs, and many different disciplines and specialties. An ongoing open conversation among experts is needed, one that will allow them to connect with others and promote collaboration, shared learning, and new relationships. Certainly, creating a forum in which theories, approaches, solutions, and implications could be discussed and compared would be beneficial. In addition, reliable sources from which experts and lay persons alike could learn about various facets of homeland security are needed. It is equally important that policy and decision makers get the full picture of how much has been done and how much still needs to be done in related areas.

Even in places that have dealt with terrorism for over a century, there are no strong, cost-effective solutions to some of the most pressing problems. For example, from a distance, we have very limited ability to spot a bomb in a car moving toward a building in time to decide whether to destroy or divert the car before it can damage the target. Even simpler, the ability to spot a personnel-borne improvised explosive device (IED) in a crowd entering a busy venue is still beyond our collective capability. Therefore, the bounds of what we know and do not know need to be documented.

Finding additional uses for technologies developed originally to solve a homeland security problem is one of the most important aspects of the economics involved. An inescapable issue in many areas of homeland security S&T is that even a successful solution, when applied to only a small market, will likely fail because of insufficient returns. For example, building a few hundred detectors for specific pathogens is likely to fail because of limited demand, or it may never receive funding in the first place. The solution to this issue is finding multiple uses for such devices. A chemical detector for contraband or dangerous materials could, for instance, also detect specific air pollutants in a building and thus help allergy sufferers. In this way, capabilities developed for homeland security may benefit other, more frequently needed uses, thereby making the invention more viable.

The editors of this work have done a superb job of assembling authors and topics and ensuring good balance between fundamentals and details in the chapters. The authors were asked to contribute material that is instructional, discusses a specific threat and a solution, or provides a case study on different ways a problem could be addressed and what was found to be effective. We wanted new material where possible. The authors have produced valuable content and worked hard to enhance the quality and clarity of the chapters. And finally, the Wiley staff has taken on the management of contributors with patience and energy beyond measure.

Senior Editor
John G. Voeller


1
SOCIAL AND PSYCHOLOGICAL ASPECTS OF TERRORISM
Fathali M. Moghaddam and Naomi Lee
Georgetown University, Washington, D.C.

1.1 INTRODUCTION

Claims that “one person’s terrorist is another person’s freedom fighter” have made it notoriously difficult to define terrorism [1]. From a social psychological perspective, terrorism can be defined as politically motivated violence, perpetrated by individuals, groups, or state-sponsored agents, intended to bring about feelings of terror and helplessness in a population in order to influence decision making and to change behavior [Reference 2, p. 161]. Social and psychological processes are at the heart of terrorism, because it is through bringing about particular feelings and perceptions (terror and helplessness) that terrorists attempt to change actual behavior of victim individuals and societies.

1.2 SOCIAL ROOTS OF TERRORISM

In order to explain why people commit terrorist acts, a variety of socio-psychological explanations have been put forward [3, 4]. These include irrationalist explanations influenced by Freud, as well as rationalist, materialist explanations. An overlooked factor is functionality: terrorism is adopted as a tactic because it sometimes works effectively. For example, it is generally agreed that the March 11, 2004, terrorist attacks in Madrid, resulting in close to 200 deaths and over 1000 serious injuries, led to the ruling party in Spain being voted out of power because of its close alliance with the Iraq policies of the Bush administration. Of course, this kind of political impact tends to be short term and limited in scope.

In this discussion, our focus is on terrorism carried out by fanatical Muslims, particularly violent Salafists, because at the dawn of the twenty-first century this type of terrorism poses the greatest threat at the global level, as reflected by the focus of research [5–12]. Other types of terrorism, such as that carried out by members of Euskadi ta Askatasuna, Basque Homeland and Freedom (ETA) in Spain or the Tamil Tigers in Sri Lanka, have not ended, but they tend to be confined to particular regions and separatist causes and are a less serious threat globally.

We outline the social and psychological aspects of terrorism in two main parts. First, we examine the roots of terrorism; second, we explore the consequences of terrorism. In order to better understand the roots of terrorism, it is useful to adopt a staircase metaphor [3]: imagine a narrowing staircase winding up a multistory building. Everyone begins on the ground floor, and it may be that people are sufficiently satisfied with conditions to remain on the ground floor. However, under certain conditions, people will feel they are being treated unjustly and some individuals will start climbing up the staircase, searching for ways to change the social–economic–political situation. The climb up the staircase to terrorism involves radicalization. The challenge is to transform the conditions, to facilitate deradicalization, so that people are not motivated to climb up, and those who have climbed up become motivated to climb back down.

The weight of evidence suggests that contextual rather than dispositional factors best explain movement up and down the staircase to terrorism (e.g. see References 13–15). Terrorism is not explained by psychopathology, illiteracy, or poverty [3, 16, 17]. Under certain conditions, individuals with "normal" psychological profiles will do harm to others [18]. The staircase metaphor helps to highlight the role of context, as well as the psychological processes that characterize thought and action on each floor of the staircase to terrorism.

1.2.1 Radicalization: Moving Up the Staircase

Radicalization typically involves a step-by-step process, well documented in almost a century of research on conformity and obedience (see Reference 19, Chapters 15 and 16). As individuals move up the staircase, step by step, they gradually adopt the attitudes, beliefs, and morality that condone terrorism, and some of them eventually are recruited to carry out terrorist attacks. This process begins with the radicalization of entire communities on the ground floor.

Ground floor. The ground floor is occupied by about 1.2 billion Muslims. Psychological processes central to thought and action on this floor are relative deprivation and identity. In the Near and Middle East, as well as in North Africa—including other important Islamic countries such as Egypt, Saudi Arabia, and Pakistan—Muslims are ruled by governments that cannot be voted out by popular will, yet are supported by Western powers (e.g. the United States). This support comes in the form of political and military interventions (as in the case of Kuwait and Saudi Arabia) and economic aid (as in the case of Egypt and Pakistan). Oil producing countries have suffered from an "oil paradox" [Reference 3, pp. 74–76]: instead of improving the lives of the masses, oil revenue has allowed despotic ruling groups, such as the Saudis, to pay for a stronger security apparatus and to win the support of Western powers through enormous arms purchases and promises of reliable, cheaper oil supplies.

Two factors have helped to raise expectations and to create fraternal (collective) relative deprivation among the populations on the ground floor. First, the global mass media have presented the impoverished Islamic masses with images of an opulent life that is available to people in some countries. Second, Western politicians have promised democratization and reform. Consequently, the expectation of greater choice and greater participation has been raised among the Islamic masses.


In practice, most people in the Near and Middle East lack choices in both the economic and political spheres. In the economic arena, wealth disparities are enormous and the standards of educational and social services have remained poor. In the political sphere, little actual progress has been made toward giving people a voice in government, although there has been considerable publicity about "democratic changes" in places such as Egypt and Saudi Arabia.

Globalization has also helped to create an identity crisis in Islamic communities [3]. In the midst of social–economic–technological global changes, one set of extremists in Islamic societies urges the abandonment of traditional life-styles and the copying of the West; other extremists push for a return to "pure Islam" as it was (supposedly) practiced in its original form 1400 years ago. The "become copies of the West" strategy has led to the "good copy problem" [3], because following this option means Muslims will lack an authentic identity and at best can only become "good copies" of a Western ideal. The "return to pure Islam" option is also associated with enormous problems, because it is being used by fundamentalists to implement regressive interpretations of Islam. An alternative, secular "middle ground" needs to be constructed, but for this to happen the governments of Islamic societies must allow greater political freedom. At present, procedures to allow people to participate in decision making about the cultural, social, economic, and political future of their societies are still not in place. Social psychological research suggests that procedural justice is vitally important and influences how fair people believe a system is, independent of the actual outcome of decision making.

First floor. Individuals climb to the first floor particularly motivated to achieve individual mobility, and central to their experiences is procedural justice. The importance of openness and circulation has been emphasized by thinkers from Plato to modern theorists: closed systems lead to corruption, a sense of injustice, and eventual collapse [2]. Individuals who feel that paths for progress are not available move further up the staircase.

Second floor. Those who arrive on the second floor are experiencing tremendous frustration because the paths to change and improvement seem blocked to them. They become vulnerable to the influence of radical preachers as well as government propaganda, displacing aggression onto Westerners, the United States and Israel in particular, as the "cause of all problems". Research demonstrates that displacement of aggression is a powerful factor in redirecting frustrations onto external targets [20].

Third floor. Individuals who climb to the third floor already perceive their own societies to be unjust, and perceive external targets (particularly the United States) as the root cause of injustice. On the third floor, these individuals gradually "disengage" from moderate policies and morality, and engage with a morality supportive of terrorism, often seeing terrorist tactics as the only weapon at the disposal of Muslims fighting for justice.

Fourth floor. Recruitment takes place on the fourth floor, where individuals become integrated into the culture of small, secretive terrorist cells. The new recruits are trained to view the world in a rigidly categorical, us versus them, good versus evil manner, and to see the terrorist organization as legitimate. Unfortunately, the categorical thinking of extremist Islamic groups tends to mirror, and be reinforced by, the categorical "us versus them" thinking of extremists in the West.

Fifth floor. In the animal kingdom, intraspecies aggression is limited by inhibitory mechanisms brought on by one animal's display of submission to another. Inhibitory mechanisms prevent serious injury and death. In order to carry out terrorist acts, often resulting in multiple deaths and injuries, individuals must learn to sidestep the inhibitory mechanisms that function to prevent human aggression under normal circumstances. This "learning" takes place on the fifth floor, and in part involves further distancing from and dehumanizing of targets. Having to live in isolation, separated from the rest of society by secrecy and fear, results in even tighter bonds within terrorist cells.
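The floor-by-floor progression can be summarized compactly. The sketch below is only an illustrative restatement of the staircase metaphor as described above; the dictionary structure and labels are our own expository device, not part of the published model.

```python
# Illustrative summary of the staircase-to-terrorism metaphor described above [3].
# The floor labels paraphrase the text; the data structure itself is hypothetical
# and serves only to make the progression easy to scan.
STAIRCASE = {
    0: "Ground floor: fraternal relative deprivation and threatened identity",
    1: "First floor: search for individual mobility; perceived procedural injustice",
    2: "Second floor: frustration displaced onto external targets",
    3: "Third floor: moral disengagement; adoption of a morality supportive of terrorism",
    4: "Fourth floor: recruitment into secretive cells; rigid us-versus-them thinking",
    5: "Fifth floor: inhibitory mechanisms against aggression sidestepped; attacks carried out",
}

def describe(floor: int) -> str:
    """Return the psychological characterization associated with a floor."""
    return STAIRCASE.get(floor, "unknown floor")

if __name__ == "__main__":
    for level in sorted(STAIRCASE):
        print(f"{level}: {describe(level)}")
```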

1.2.2 Deradicalization: Moving Down the Staircase

Using the staircase metaphor, as well as insights from earlier research on deradicalization [16, 21, 22], we arrive at important general guidelines for deradicalization programs. First, research suggests that for any given individual, the path to deradicalization is not necessarily the opposite of the path that person took to radicalization; the path down is not always the same as the path up. Second, deradicalization programs need to be designed for each set of individuals depending on the floor they have reached on the staircase to terrorism. For example, individuals on the top floor are ready to carry out terrorist attacks, and deradicalization can be most effective after the terrorist has been captured. However, individuals who reach the third floor are in the process of adopting terrorist morality, and they can be influenced by deradicalization programs without necessarily first being captured. Third, resources should be focused particularly on the ground floor, where the vast majority of people reside.

International surveys reveal that the populations of many important Islamic societies have become radicalized on the ground floor [23]. This is associated with a rise in conspiratorial thinking. For example, in 2006 the percentages of people who believed that Arabs did not carry out the 9/11 attacks were: Indonesia 65%, Egypt 59%, Jordan 53%, and Pakistan 41%. In the traditionally "pro-Western" society of Turkey, the percentage of Muslims who expressed disbelief that Arabs carried out the 9/11 attacks went up from 43% in 2002 to 59% in 2006. In Egypt 28% and in Jordan 29% of Muslims believe that violence against civilian targets in order to defend Islam is sometimes justified [23]. These findings reflect a broad radicalization process associated with some support for terrorism and generally higher anti-Western sentiment.

1.3 SOCIAL PSYCHOLOGICAL CONSEQUENCES OF TERRORISM

Research attention to the effects of terrorism on civil society and psychological well-being was ignited after the attacks of September 11, 2001. This research can be organized into three general topics: political attitudes, prejudice, and mental health.

1.3.1 Political Attitudes

Terrorism is associated with demonstrable changes in political attitudes, as both experimental studies and surveys have shown. Research linking terrorist attacks to support for more authoritarian political policies and the abdication of civil liberties is discussed here.

Authoritarianism is a personality trait popularized by Adorno [24] and subsequently refined by Altemeyer [25] as consisting of submissiveness to authority, aggressiveness toward outgroups, and conventionalism. This personality trait appears both to predict people's responses to aggression and to increase in response to aggression. In a quasi-experimental study of the effects of the Islamic terrorist attacks in Madrid (March 11, 2004), right-wing authoritarianism and conservatism were measured in Spanish citizens both before and after the attacks [26]. Right-wing authoritarianism increased, and Spanish citizens reported a stronger attachment to traditional conservative values. Since the study was quasi-experimental, a causal link between the attacks and changes in political beliefs could not be established. In a controlled laboratory experiment [27], the presence of a terrorist threat was manipulated. Results showed that the more authoritarian participants were prior to the threat, the less they supported democratic values and the more they supported military aggression. It was concluded that threats increase the activation of an authoritarian response.

Repeated attacks (whether terrorist or military) appear to elicit support for escalating retaliatory actions among young, voting-age US citizens in controlled experiments [28]. Retaliatory responses were stronger when the attacks were perpetrated by terrorists rather than a militia. The signing of a peace treaty prior to attacks led males to retaliate more than females, supporting the thesis that men act with vengeance after a transgression while women pursue conciliation. In all permutations of the experiment (terrorist vs. military attack, peace treaty vs. no peace treaty, democratic vs. nondemocratic adversary), repeated attacks corresponded with responses that eventually matched or surpassed the conflict level of the initial attack. These studies have important implications for policies designed to contain conflicts.

The issue of civil liberties in the context of the US "War on Terror" has received extensive media coverage. The scholarly literature on this topic, however, is limited to correlational analyses based on public polling. Although these analyses do not permit causal inferences, they are highly informative. In a review of all the major political polls conducted pre- and post-September 11th, 2001, US respondents expressed increased willingness to abdicate civil liberties, increased confidence in the government's ability to protect the United States from terrorist threats, and increased support for the use of ground troops in combating terrorism [29]. In the months following the attacks, however, perceived threat declined, as did support for surveillance of Americans' communications and respondents' confidence in the US government's ability to prevent future attacks.

1.3.2 Prejudice and Social Cohesion

Well-established social psychological research on intergroup relations demonstrates that people placed into groups will discriminate against outgroup members and favor ingroup members [30]. When placed into groups, people also exaggerate the homogeneity of their ingroup and its distinctiveness from outgroups. These effects are present even when groups are formed on the basis of such trivial dimensions as one's estimation of how many dots appear on a piece of paper. These well-established research findings provide a backdrop to reports of rising anti-Arab and anti-Muslim prejudice in the United States since September 11th, 2001.

Nearly all studies of prejudice in the United States concern White prejudice toward Blacks. This focus warrants broadening, particularly in light of evidence suggesting that prejudice is directed more toward Arabs than Blacks [31]. Both immediately after 9/11 and one year later, American college students reported higher levels of prejudice toward Arabs than Blacks. Those students with higher levels of media exposure displayed higher levels of overall minority prejudice, whether toward Arabs or Blacks. Anti-Arab prejudice was also higher among those who more strongly endorsed social hierarchies, more strongly identified as "American", and believed future terrorist attacks are likely [32].


Terrorism is also linked to increased social cohesion, as international research demonstrates. Akhahenda [33] documented how the terrorist bombing in Kenya (August 1998) helped Kenyans forge a new national identity that united previously fractured social identities. A negative aspect of increased social cohesion, however, is decreased intergroup contact. Persistent violence between Catholics and Protestants in Northern Ireland over the past 30 years has led to segregation in the areas of education, residence, and personal life. This segregation limits contact between Catholic and Protestant communities and arguably plays a major role in maintaining intergroup conflict [34].

1.3.3 Mental Health

Mental health has been the most intensively researched aspect of terrorism's psychological consequences, with posttraumatic stress disorder (PTSD) comprising the majority of studies. The most common psychological effects of a traumatic event such as a terrorist attack are acute stress disorder (in the short term) and PTSD (in the longer term), with depression, anxiety disorders, and substance abuse as the next most frequent effects [35].

Which factors determine who will suffer psychologically after a terrorist attack? This matter has been disputed. Silver et al. [36] conducted a nationally representative longitudinal study of US residents' psychological response to the attacks of September 11th, 2001. They found that proximity or degree of exposure was not a necessary precondition for high levels of acute and posttraumatic stress symptoms at 2 weeks and 12 months post-9/11. These results indicate the need to study the effects of indirect exposure to terrorism. In contrast, Schlenger's [37] review of the major studies of psychological distress post-9/11 concluded that PTSD following the attack was concentrated in the New York City metropolitan area. Furthermore, PTSD prevalence was strongly associated with direct connection to the attacks. Though many adults across the United States were distressed by the attacks, Schlenger [37] concludes that much of this distress resolved over time without professional treatment.

It is important to recognize that the vast majority of the mental health literature follows a Euro-American academic tradition and adopts a Western medical perspective. It follows that important cross-cultural differences in response to terrorism may exist that are not captured by predominant methods. De Jong [38], for instance, has asserted that the predominant diagnostic criteria (DSM-IV and ICD-10) are not always appropriate for non-Western cultures. Research on the effects of terrorism is still limited, but growing. The more expansive literature on traumatic events such as war and natural disasters can complement and further enrich our understanding of terrorism's social psychological consequences.

REFERENCES

1. Cooper, H. H. A. (2001). The problem of definition revisited. Am. Behav. Sci. 44, 881–893.
2. Moghaddam, F. M. (2005a). The staircase to terrorism: A psychological exploration. Am. Psychol. 60, 161–169.
3. Moghaddam, F. M. (2006). From the Terrorists' Point of View: What They Experience and Why They Come to Destroy. Praeger Security International, Westport, CT.
4. Pyszczynski, T., Solomon, S., and Greenberg, J. (2003). In the Wake of 9/11: The Psychology of Terror. American Psychological Association, Washington, DC.


5. Booth, K., and Dunne, T., Eds. (2002). Worlds in Collision: Terror and the Future Global Order. Palgrave Macmillan, New York.
6. Davis, J. (2003). Martyrs: Innocence, Vengeance, and Despair in the Middle East. Palgrave Macmillan, New York.
7. Kegley, C. W. Jr., Ed. The New Global Terrorism: Characteristics, Causes, Controls. Prentice Hall, Upper Saddle River, NJ.
8. Khosrokhavar, F. (2005). Suicide Bombers: Allah's New Martyrs (D. Macey, Trans.). Pluto Press, London.
9. Pape, R. A. (2005). Dying to Win: The Strategic Logic of Suicide Bombing. Random House, New York.
10. Pedahzur, A. (2005). Suicide Terrorism. Polity Press, London.
11. Sageman, M. (2004). Understanding Terror Networks. University of Pennsylvania Press, Philadelphia, PA.
12. Silke, A., Ed. (2003). Terrorism, Victims, and Society: Psychological Perspectives on Terrorism and its Consequences. Wiley, Hoboken, NJ.
13. Atran, S. (2003). Genesis of suicide terrorism. Science 299, 1534–1539.
14. Bongar, B., Brown, L. M., Beutler, L. E., Breckenridge, J. N., and Zimbardo, P., Eds. (2006). Psychology of Terrorism. Oxford University Press, New York.
15. Stout, C. E., Ed. (2002). The Psychology of Terrorism, Vol. 4. Praeger Publishers, Westport, CT.
16. Horgan, J., and Taylor, M. (2003). The Psychology of Terrorism. Frank Cass & Co., London.
17. Ruby, C. L. (2002). Are terrorists mentally deranged? Anal. Soc. Issues Public Policy 2, 15–26.
18. Zimbardo, P. (2007). The Lucifer Effect: Understanding How Good People Turn Evil. Random House, New York.
19. Moghaddam, F. M. (2005b). Great Ideas in Psychology: A Cultural and Historical Introduction. Oneworld, Oxford.
20. Miller, N., Pedersen, W. C., Earleywine, M., and Pollock, V. E. (2003). A theoretical model of triggered displaced aggression. Pers. Soc. Psychol. Rev. 7, 75–97.
21. Bernard, C., Ed. (2005). A Future for the Young: Options for Helping Middle Eastern Youth Escape the Trap of Radicalization. RAND Corporation, Santa Monica, CA.
22. Crenshaw, M. (1991). How terrorism declines. Terrorism Polit. Violence 3, 69–87.
23. Pew Research Center. (2006). Conflicting views in a divided world. Retrieved from http://pewglobal.org/.
24. Adorno, T. W., Frenkel-Brunswik, E., Levinson, D. J., and Sanford, R. N. (1952/1982). The Authoritarian Personality. W. W. Norton & Company, New York.
25. Altemeyer, B. (1996). The Authoritarian Spectre. Harvard University Press, Cambridge, MA.
26. Echebarria-Echabe, A., and Fernández-Guede, E. (2006). Effects of terrorism on attitudes and ideological orientation. Eur. J. Soc. Psychol. 26, 259–265.
27. Hastings, B. M., and Schaffer, B. A. (2005). Authoritarianism and sociopolitical attitudes in response to threats of terror. Psychol. Rep. 92, 623–630.
28. Bourne, L. E., Healy, A. F., and Beer, F. A. (2003). Military conflict and terrorism: General psychology informs international relations. Rev. Gen. Psychol. 7, 189–202.
29. Huddy, L., Khatib, N., and Capelos, T. (2002). Reactions to the terrorist attacks of September 11, 2001. Public Opin. Q. 66, 418–450.
30. Taylor, D. M., and Moghaddam, F. M. (1994). Theories of Intergroup Relations: International Social Psychological Perspectives, 2nd ed. Praeger Publishers, Westport, CT.


31. Persson, A. V., Musher, E., and Dara, R. (2006). College students' attitudes toward Blacks and Arabs following a terrorist attack as a function of varying levels of media exposure. J. Appl. Soc. Psychol. 35, 1879–1893.
32. Oswald, D. L. (2006). Understanding anti-Arab reactions post-9/11: The role of threats, social categories, and personal ideologies. J. Appl. Soc. Psychol. 35, 1775–1799.
33. Akhahenda, E. F. (2002). When Blood and Tears United a Country: The Bombing of the American Embassy in Kenya. University Press of America, Lanham, MD.
34. Campbell, A., Cairns, E., and Mallet, J. (2005). Northern Ireland: The psychological impact of "the Troubles". In The Trauma of Terrorism: Sharing Knowledge and Shared Care. An International Handbook, Y. Danieli, D. Brom, and J. Sills, Eds. Haworth Press, New York, pp. 175–184.
35. Danieli, Y., Engdahl, B., and Schlenger, W. E. (2004). The psychosocial aftermath of terrorism. In Understanding Terrorism: Psychosocial Roots, Consequences, and Interventions, F. M. Moghaddam, and A. J. Marsella, Eds. American Psychological Association, Washington, DC, pp. 223–246.
36. Silver, R. C., Poulin, M., Holman, E. A., McIntosh, D. N., Gil-Rivas, V., and Pizarro, J. (2004). Exploring the myths of coping with a national trauma: A longitudinal study of responses to the September 11th terrorist attacks. J. Aggress. Maltreat. Trauma 9, 129–141.
37. Schlenger, W. E. (2004). Psychological impact of the September 11, 2001 terrorist attacks: Summary of empirical findings in adults. J. Aggress. Maltreat. Trauma 9, 97–108.
38. De Jong, J. T. V. M. (2002). Public mental health, traumatic stress and human rights violations in low-income countries. In Trauma, War, and Violence: Public Mental Health in Socio-cultural Context, J. T. V. M. De Jong, Ed. Kluwer Academic/Plenum Publishers, New York, pp. 1–92.

FURTHER READING

Alexander, Y. (2002). Combating Terrorism: Strategies of Ten Countries. University of Michigan Press, Ann Arbor, MI.
Bloom, M. (2005). Dying to Kill: The Allure of Suicide Terror. Columbia University Press, New York.
Crenshaw, M., Ed. (1995). Terrorism in Context. Pennsylvania University Press, University Park.
Horgan, J. (2005). The Psychology of Terrorism. Routledge, London.
Hunter, S. T., and Malik, H., Eds. (2005). Modernization, Democracy, and Islam. Praeger Publishers, Westport, CT.
McDermott, T. (2005). Perfect Soldiers: The Hijackers: Who They Were, Why They Did It. HarperCollins Publishers, New York.
Moghaddam, F. M., and Marsella, A. J., Eds. (2004). Understanding Terrorism: Psychosocial Roots, Consequences, and Interventions. American Psychological Association, Washington, DC.


2
TRAINING AND LEARNING DEVELOPMENT FOR HOMELAND SECURITY
Eduardo Salas and Elizabeth H. Lazzara
University of Central Florida, Orlando, Florida

2.1 INTRODUCTION

On December 22, 2001, Richard Colvin Reid hid explosives in his shoes in an effort to destroy American Airlines Flight 63, bound for the United States from Paris (BBC News, 2008) [1]. His attempt was ultimately unsuccessful because other passengers were able to resolve the situation; however, the world would come to know this man as the "shoe bomber". The incident marked a drastic change in the policies and procedures of commercial airlines to ensure the safety of all people onboard. Because of the high-risk nature of the situation and the consequences of possible outcomes, all employees responsible for screening passengers boarding aircraft would be mandated to undergo intense training to detect any clues that might prevent another such occurrence. This example illustrates the importance of training and learning development in Homeland Security (HS).

Salas and colleagues [2] define training as "the systematic acquisition of knowledge (i.e. what we need to know), skills (i.e. what we need to do), and attitudes (i.e. what we need to feel) (KSAs) that together lead to improved performance in a particular environment" (p. 473). Learning occurs when there is a permanent cognitive and behavioral change brought about by acquiring the requisite competencies to perform the job. We submit that learning is facilitated when training design and delivery are guided by findings from the science of learning (and training). The purpose of this chapter is to provide some insights about that science and to offer some principles to help in designing, developing, implementing, and evaluating training.

2.2 THE PHASES OF TRAINING

The design of training is a process that consists of a set of interrelated phases; to be effective, it must be applied systematically. In this chapter, we discuss four general training phases. These phases, and their associated principles and guidelines, represent what we know from the science about what works and what must be done when designing and delivering training in any organization. We hope that these will guide those in the practice of designing and implementing training for HS purposes. As noted, effective training requires attention to four phases [3]. These are discussed below with specific principles to guide the focus and shape the actual elements in each phase.

2.2.1 Phase 1: Analyze the Organizational Training Needs

This is one of the most critical phases of training because many important decisions are made at this juncture. It is in this phase that skill deficiencies are determined and the environment is prepared and set for learning and transfer to occur in the organization. Therefore, before training can be successfully designed and implemented, it is necessary to assess the needs of the organization. This is done in order to properly set up the learning environment, uncover the necessary KSAs, and prepare the organization for the training.

2.2.1.1 Uncover the Required KSAs. To determine what KSAs are needed, all of the required tasks to be performed must be analyzed. Ideally, the analysis focuses on the competencies that must be acquired and not on the actual tasks to be performed, because competencies are common across a variety of tasks. To uncover the requisite KSAs, organizations should conduct a task analysis and/or a cognitive task analysis. Task analyses are needed to determine what competencies are required to perform a job successfully. Cognitive task analysis goes deeper and uncovers the knowledge or cognitions underlying job performance. These analyses set the foundation for designing a successful training program. They help in establishing the training objectives and learning outcomes, and they provide the learning expectations for both trainers and trainees. Furthermore, the training objectives outline the conditions that will take place during job performance, and they provide the acceptable criteria against which to measure performance [4].

In addition to uncovering and analyzing the necessary competencies, it is also critical to determine who exactly needs to be trained and what they need to be trained on. Conducting a person analysis ensures that the right people get the appropriate training. Employees possess and need different KSAs; therefore, they do not necessarily require the same kind of training. More experienced employees would not need the extensive, intense training sessions that new, inexperienced employees require.

2.2.1.2 Prepare the Organization. Before a training system can be designed and implemented, the organization needs to be prepared. Goldstein and Ford [5] proposed that the aspects of the organization to be considered include "an examination of organizational goals, resources of the organization, transfer climate for training, and internal and external constraints present in the environment" (p. 41). In other words, do the goals of the organization and the training program align? Does the training support the strategic goals of the organization? What are the available resources (e.g. finances, technology, and so on)? What are the possible limitations that the training might encounter based upon the existing resources? Lastly, does the organizational climate foster learning and the importance of the training? That is, are the climate and culture conducive to transferring the newly acquired KSAs to the actual operating environment? Is the organization motivating the trainees to attend training? To set up the appropriate climate, organizations need to send out positive messages about training so that trainees will see its value. Trainees will also be more supportive of the training system if it is voluntary rather than mandatory. If training must be mandatory, deliver it with as few obstacles as possible. Overall, the organizational climate should support and encourage the training to ensure its success.

In total, determining the precise training needs is imperative. Knowing what, why, who, when, and how to train before designing training is a must. Organizations get the most out of training when the required KSAs are uncovered and when they prepare for the training and set a climate that supports learning.

2.2.2 Phase 2: Design and Develop Instruction

The second phase is about designing and developing the instructional content, storyboards, lesson plans, materials, and curriculum, and preparing all the resources needed to deliver and implement the training. A number of factors are important here, most notably reliance on the science of training to drive the decisions as much as possible. This science has produced many guidelines, tips, and examples that can be applied [3, 6, 7].

2.2.2.1 Rely on Scientifically Rooted Instructional Principles. Clearly, effective training is about applying pedagogically sound principles to the design of instruction. It is about using the science to create a learning environment that will engage, motivate, propel, and immerse the trainee in acquiring KSAs. Thus, when designing training it is critical to consider individual factors (e.g. cognitive ability, self-efficacy, and motivation) as well as organizational factors (e.g. policies, procedures, prepractice conditions, and feedback), because they are extremely influential in the learning outcomes. For example, a trainee's motivation level can determine their ability to acquire, retain, and apply trained skills; therefore, training should be designed to enhance trainees' motivation to learn [8, 9].

2.2.2.2 Set up Prepractice Conditions. In addition to establishing a positive organizational climate, organizations must set up prepractice conditions to enhance the effectiveness of the training system [10]. The efforts made prior to training will positively affect learning and ultimately performance; therefore, trainees should be prepared even before training begins. They should receive preparatory information about the training (e.g. brochures and pamphlets) or advance organizers to manage the information [11]. Furthermore, providing trainees with attentional advice can guide them in deciding what strategies will foster learning [3]. The benefit of setting up prepractice conditions is that doing so not only optimizes learning for trainees but is also a cost-effective way to facilitate the success of the training system.

2.2.2.3 Create Opportunities to Practice and Receive Feedback. Any training seeks to give information about needed concepts, demonstrate required cognitions and behaviors, and create opportunities to practice and receive feedback. The instructional delivery should be guided by the training objectives, and the information, demonstration, and/or practice-based strategies should target the wanted KSAs. The practice opportunities should be challenging and vary in difficulty, because it is not the quantity of practice per se that is important but rather the quality of practice. Mere repetition does not necessarily enhance learning; therefore, as trainees learn and improve their KSAs, the scenarios should become more difficult and varied. To ease comparisons and ensure standardization, scenarios should be designed a priori [12]. Moreover, developing the scenarios prior to training eases the burden on trainers by allowing them more control. In addition, instructors can focus on providing trainees with feedback, which will foster training by providing guidance on what areas are lacking and still need improvement [13].

2.2.2.4 Seek to Diagnose KSAs' Deficiencies. In order to establish whether trainees learned the requisite KSAs, performance measures must be created to assess the trained competencies against the stated objectives. Ideally, performance measures evaluate processes as well as outcomes at both the individual and team level (if applicable) [3]. The effectiveness of the training lies heavily on the ability to assess and diagnose performance [14]. Therefore, organizations should take careful consideration when deciding what tool to use to evaluate performance against the trained objectives. One approach is to utilize a behavioral checklist (e.g. Targeted Acceptable Responses to Generated Events or Tasks, TARGETs), which evaluates trainees by recording the presence or absence of desired behaviors in response to scripted events [15]. Other approaches are available as well (see [16]).
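A behavioral checklist of this kind can be sketched in a few lines. The example below is a hypothetical, simplified illustration in the spirit of an event-based checklist such as TARGETs; the scenario, event name, and target behaviors are invented for illustration and are not taken from the published instrument [15].

```python
# Hypothetical event-based behavioral checklist, loosely in the spirit of TARGETs [15]:
# each scripted trigger event lists acceptable responses, and scoring records only
# whether each desired behavior was observed (hit or miss).
from dataclasses import dataclass, field

@dataclass
class TriggerEvent:
    name: str
    desired_behaviors: list[str]
    observed: dict[str, bool] = field(default_factory=dict)

    def record(self, behavior: str, was_observed: bool) -> None:
        # Only behaviors targeted by this event are scored.
        if behavior not in self.desired_behaviors:
            raise ValueError(f"{behavior!r} is not a target behavior for {self.name!r}")
        self.observed[behavior] = was_observed

    def hit_rate(self) -> float:
        """Proportion of desired behaviors that were observed during the event."""
        hits = sum(self.observed.get(b, False) for b in self.desired_behaviors)
        return hits / len(self.desired_behaviors)

# Invented example: scoring one scripted event in a screening-checkpoint scenario.
event = TriggerEvent(
    name="unattended bag near checkpoint",
    desired_behaviors=["notify supervisor", "establish perimeter", "query nearby passengers"],
)
event.record("notify supervisor", True)
event.record("establish perimeter", True)
event.record("query nearby passengers", False)
print(f"{event.name}: {event.hit_rate():.0%} of targeted behaviors observed")
```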

2.2.3 Phase 3: Implement the Training

The third phase is the implementation or actual execution of the training program or system. This is the more "mechanical" part, but attention must be paid to the location, resources, instructor, and the delivery of the instructional system (e.g. information or practice based).

2.2.3.1 Put Everything into Action. After the training has been designed, it is time to implement it. The training site must be identified and prepared prior to training; it should be a comfortable setting equipped with the proper resources. Instructors must also be trained and prepared to address any issues or concerns that may arise during training. At this point, the instructional materials are finally put to use and the training is completely functional. Preferably, the fully functioning training should be pilot tested to discover any potential problems and to allow the appropriate adjustments to be made [17]. Because of the possibility that things will go wrong, relapse prevention procedures should be created in order to solve any dilemmas.

2.2.4 Phase 4: Evaluate the Training

The fourth phase is one that most organizations want to implement; however, most avoid it altogether or simply do not go deep enough to truly determine the effectiveness of the training. Evaluations are designed to determine what worked and to assess the impact of the training system on the organization.

2.2.4.1 Use a Multilevel Approach. Incorporating a training program into an organization does not stop once it has been implemented. The training must be evaluated to truly determine its effectiveness. Ideally, researchers suggest taking a multilevel approach to evaluation in order to obtain the complete picture. Kirkpatrick [18] devised a popular evaluation strategy measuring reactions, learning, behavioral change, and organizational impact. A multilevel approach will identify the successful aspects of the training program as well as the elements that are still lacking and need further adjustment in order to improve. When evaluations are based on only one dimension, it is easy to obtain an inaccurate assessment of the impact of the training intervention. For example, it is possible that trainee reactions are positive, yet learning did not take place [19]. Therefore, it is beneficial to examine the higher levels as well (e.g. learning and behavioral change) [20]. Assessments at the behavioral level will indicate whether the trained KSAs are transferred to on-the-job performance [5]. Thus, it is not only crucial that trainees react positively and learn the material, but also important that they apply the trained KSAs to the job.

2.2.4.2 Ensure Transfer of the Acquired KSAs. Training is only beneficial to the organization when the KSAs are not only learned during the training but also applied and maintained on the job [7, 21]. Hence, organizations must prepare the climate to facilitate using the KSAs learned during training [22]. For example, trainees need opportunities to perform [23], because a substantial delay between training and job performance can lead to significant skill decay [24]. Supervisors should also encourage trainees to use their trained skills on the job by providing positive reinforcement (e.g. verbal praise and monetary rewards) [25]. Positive reinforcement, when applied appropriately (i.e. immediately following the behavior), will lead to repetition [26]. Having supervisory support and providing reinforcement sends a positive message to trainees, which is imperative to the success and effectiveness of training.
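The multilevel evaluation logic described above can be illustrated with a small sketch. The record type, example scores, and the "weakest level" heuristic below are hypothetical; only the four levels themselves (reactions, learning, behavior change, organizational impact) come from the text.

```python
# Hypothetical record of a multilevel, Kirkpatrick-style training evaluation [18].
# The four levels come from the text above; the scores and the simple diagnostic
# heuristic are invented for illustration.
from dataclasses import dataclass

@dataclass
class EvaluationRecord:
    reactions: float   # e.g. mean post-course satisfaction, scaled 0-1
    learning: float    # e.g. knowledge-test gain, scaled 0-1
    behavior: float    # e.g. proportion of trained behaviors later observed on the job
    results: float     # e.g. normalized change in an organizational outcome

    def weakest_level(self) -> str:
        """Name the evaluation level that most needs further adjustment."""
        levels = {
            "reactions": self.reactions,
            "learning": self.learning,
            "behavior": self.behavior,
            "results": self.results,
        }
        return min(levels, key=levels.get)

# Positive reactions alone can mask weak transfer, as noted above.
record = EvaluationRecord(reactions=0.9, learning=0.7, behavior=0.4, results=0.3)
print("Level most in need of attention:", record.weakest_level())
```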

2.3 LEARNING DEVELOPMENT

Now that we have an understanding of the science behind designing, developing, implementing, and evaluating a training program, we can discuss some of the possible training strategies. Because employees must apply a variety of information and skills on a daily basis, it is necessary to have a variety of training strategies in your arsenal in order to customize and adapt to all of the different competencies required to perform each task. As technology permeates businesses, more complex skills are required to complete tasks in the work environment; therefore, our training strategies must become more complex as well to adjust to these growing changes. Due to the popularity of technology and the growing demand on organizations to use teams to perform complex tasks, we elaborate on simulation-based training (SBT) and games as learning development strategies. Moreover, because organizations often lack the time to implement a formal training program, we also discuss an informal technique called on-the-job training (OJT).

2.3.1 Simulation-based Training

SBT is an interactive, practice-based instructional strategy that provides opportunities for trainees to develop the requisite competencies and enhance their expertise through scenarios and feedback [12]. The scenarios serve as the "curriculum": the learning objectives derived from the training needs analysis are embedded within the scenarios.

The SBT "life cycle" consists of a number of interrelated and critical stages, and each step is fundamental to the next [27]. The first step is to verify trainees' existing skills and their previous performance record. Next, determine the tasks and competencies that will be emphasized during training. As a result of the second step, the training/learning objectives can be established. Upon the completion of these steps, scenarios can be created. The scenarios are scripted and designed to elicit the requisite competencies by incorporating "trigger" events. Afterwards, performance measures must be developed to assess the effectiveness of the training. Then, the performance data are collected and compared to the existing, previous data. The collected data serve as the foundation and guide for providing feedback to the trainees. Lastly, all of this information can be used to make any adjustments or modifications to the training program.

SBT can be an optimal instructional strategy because it has many benefits. First, SBT mimics the job environment; therefore, it is very realistic, which makes transferring skills to the job easier [28]. Second, SBT allows an organization to explore training with a variety of scenarios, which facilitates and accelerates expertise [2]. Third, SBT is interactive and engaging; being engrossed in training is influential to motivation, and researchers have shown that motivation enhances learning [29]. Last, SBT, when utilizing carefully crafted scenarios and measures, can facilitate the diagnosis of performance.
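The SBT life cycle enumerated above can also be sketched as a simple loop from objectives to scenario delivery to feedback and revision. The function below is a minimal, hypothetical walk-through of that ordering; the placeholder scores, objective names, and field names are ours and do not come from any specific SBT toolkit.

```python
# Minimal sketch of the SBT life cycle described above [12, 27]. Only the ordering
# of steps (objectives -> scripted scenarios with trigger events -> performance
# measures -> data collection and comparison -> feedback and revision) follows the
# text; the contents are hypothetical placeholders.
def run_sbt_cycle(prior_performance: dict[str, float], objectives: list[str]) -> dict:
    # Steps 1-3: existing skills and targeted competencies yield the objectives (passed in).
    # Step 4: script scenarios, embedding one trigger event per learning objective.
    scenarios = [{"objective": o, "trigger_event": f"scripted event targeting {o}"} for o in objectives]
    # Step 5: tie performance measures to the same objectives.
    measures = {o: "acceptable response observed (yes/no)" for o in objectives}
    # Step 6: collect performance data (stubbed here) and compare with prior data.
    scores = {o: 0.5 for o in objectives}
    improvement = {o: scores[o] - prior_performance.get(o, 0.0) for o in objectives}
    # Steps 7-8: the comparison drives feedback to trainees and revision of the program.
    return {"scenarios": scenarios, "measures": measures, "improvement": improvement}

result = run_sbt_cycle({"threat recognition": 0.4}, ["threat recognition", "team communication"])
print(result["improvement"])
```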

2.3.2 Games

Recently, the military, along with other organizations, has started to use games as instructional tools for acquiring knowledge, skills, and attitudes applicable in the workplace as well as in other settings. A game can be defined as "a set of activities involving one or more players. It has goals, constraints, payoffs, and consequences. A game is rule-guided and artificial in some respects. Finally, a game involves some aspect of competition, even if that competition is with oneself" [30, p. 159]. Although the definition of what constitutes a game is debated by researchers, because games are available in a wide array of formats (e.g. board games, console-based games, and PC-based games), there is agreement that games, used as training tools, provide educational benefits to learning. For example, Vogel and colleagues [31] conducted a meta-analysis and found that cognitive and attitudinal abilities were enhanced in participants when they used interactive games and simulations as opposed to traditional instruction methods.

Games have become a popular instructional tool because they not only benefit the learner but are also advantageous for developers and instructors. Users benefit by "playing" because the skills necessary to accomplish the goals within the game are applicable to other situations. Furthermore, games elicit motivation in users because they are interactive, fun, and engaging [32]. Developers and instructors benefit from leveraging games as well, because games are modifiable (i.e. instructional features can be added, in some cases with ease) and a cost-effective approach to learning.

2.3.3 On-the-Job Training

Frequently, in HS and in other organizations, there is not sufficient time or resources to implement formal training because new policies and procedures must be integrated immediately; OJT is one possible solution. OJT is "job instruction occurring in the work setting and during the work" [33, p. 3]. Because it occurs on the job and does not require instructors or trainees to leave the job site, it is a very economical alternative. Moreover, occurring in the actual work environment has the added benefit of facilitating training transfer, since trainees can see that the training is relevant and applicable to completing the job tasks; the KSAs therefore have more significance. However, in order to reap the benefits of such an applicable, customizable, low-cost alternative, OJT needs to be executed correctly. All OJT is not created equal.

Practitioners need to abide by several learning principles in order to optimize the effectiveness of OJT. First, as with any other training, the top of the organization and its leaders need to support the OJT; as noted earlier, organizations can show support through rewards and incentive programs [34]. Second, OJT facilitators need to be included throughout the process [35]: they need to be involved in designing and developing the program as well as being trained on instructional techniques (e.g. coaching and mentoring). Often, facilitators are knowledgeable in their field but lack the expertise to effectively teach others. Once the organization and the training facilitators are supportive, the trainees must be prepared. Preparatory information about the content of the upcoming OJT will not only establish the appropriate expectations, it will also foster motivation [10]. Third, it is absolutely critical that the OJT be structured and guided to be optimally effective. Structured OJT ensures standardization, reducing discrepancies in the way training is delivered and executed. OJT is a useful strategy when it, too, is guided by the science of learning.

2.4 CONCLUDING REMARKS

Regardless of the strategy being implemented (e.g. SBT, games, or OJT), training must follow basic principles to ensure its success [6]. It must be developed systematically, because all of its facets are interrelated, each serving as the foundation for the next: assessing the needs of the organization, identifying the necessary resources, developing the practice scenarios, evaluating the effectiveness, and providing feedback to make adjustments. To ensure that trainees learn the requisite KSAs, the design, delivery, implementation, and evaluation of the training must be guided by the science of learning and training.

REFERENCES

1. BBC News (2001, December 28). Who is Richard Reid? Retrieved January 14, 2008, from http://news.bbc.co.uk/1/hi/uk/1731568.stm.
2. Salas, E., Priest, H. A., Wilson, K. A., and Burke, C. S. (2006). Scenario-based training: improving military mission performance and adaptability. In Minds in the Military: The Psychology of Serving in Peace and Conflict, Vol. 2, Operational Stress, A. B. Adler, C. A. Castro, and T. W. Britt, Eds. Praeger Security International, Westport, CT, pp. 32–53.
3. Salas, E., and Cannon-Bowers, J. A. (2000a). Design training systematically. In The Blackwell Handbook of Principles of Organizational Behavior, E. A. Locke, Ed. Blackwell Publisher Ltd, Malden, MA, pp. 43–59.
4. Goldstein, I. L. (1993). Training in Organizations, 3rd ed. Brooks, Pacific Grove, CA.
5. Goldstein, I. L., and Ford, J. K. (2002). Training in Organizations: Needs Assessment, Development, and Evaluation, 4th ed. Wadsworth, Belmont, CA.
6. Salas, E., and Cannon-Bowers, J. A. (2000b). The anatomy of team training. In Training and Retraining: A Handbook for Business, Industry, Government, and the Military, S. Tobias, and J. D. Fletcher, Eds. MacMillan Reference, New York, pp. 312–335.


7. Salas, E., and Cannon-Bowers, J. A. (2001). The science of training: a decade of progress. Annu. Rev. Psychol. 52, 471–499.
8. Quinones, M. A. (1995). Pretraining context effects: training assignment as feedback. J. Appl. Psychol. 80, 226–238.
9. Quinones, M. A. (1997). Contextual influences on training effectiveness. In Training for a Rapidly Changing Workplace: Applications of Psychological Research, M. A. Quinones, and A. Ehrenstein, Eds. American Psychological Association, Washington, DC, pp. 177–200.
10. Cannon-Bowers, J. A., Rhodenizer, L., Salas, E., and Bowers, C. A. (1998). A framework for understanding pre-practice conditions and their impact on learning. Pers. Psychol. 51, 291–320.
11. Cannon-Bowers, J. A., Burns, J. J., Salas, E., and Pruitt, J. S. (1998). Advanced technology in scenario-based training. In Making Decisions Under Stress: Implications for Individual and Team Training, J. A. Cannon-Bowers, and E. Salas, Eds. American Psychological Association, Washington, DC, pp. 365–374.
12. Fowlkes, J., Dwyer, D. J., Oser, R. L., and Salas, E. (1998). Event-based approach to training (EBAT). Int. J. Aviat. Psychol. 8(3), 209–221.
13. Salas, E., and Cannon-Bowers, J. A. (1997). Methods, tools, and strategies for team training. In Training for a Rapidly Changing Workplace: Applications of Psychological Research, M. A. Quinones, and A. Ehrenstein, Eds. American Psychological Association, Washington, DC, pp. 249–280.
14. Salas, E., Wilson, K. A., Priest, H. A., and Guthrie, J. W. (2006). Training in organizations: the design, delivery and evaluation of training systems. In Handbook of Human Factors and Ergonomics, 3rd ed., G. Salvendy, Ed. John Wiley & Sons, Hoboken, NJ, pp. 472–512.
15. Fowlkes, J. E., and Burke, C. S. (2005). Targeted acceptable responses to generated events or tasks (TARGETs). In Handbook of Human Factors and Ergonomics Methods, N. Stanton, H. Hendrick, S. Konz, K. Parsons, and E. Salas, Eds. Taylor & Francis, London, pp. 53-1–53-6.
16. Brannick, M. T., Salas, E., and Prince, C., Eds. (1997). Team Performance Assessment and Measurement: Theory, Methods, and Applications. Lawrence Erlbaum Associates, Mahwah, NJ.
17. Clark, D. (2000). Introduction to Instructional System Design. Retrieved January 17, 2008 from http://www.nwlink.com/~donclark/hrd/sat1.html#model.
18. Kirkpatrick, D. L. (1976). Evaluation of training. In Training and Development Handbook: A Guide to Human Resource Development, 2nd ed., R. L. Craig, Ed. McGraw-Hill, New York, pp. 1–26.
19. Howard, S. K., Gaba, D. M., Fish, K. J., Yang, G., and Sarnquist, F. H. (1992). Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviat. Space Environ. Med. 63, 763–770.
20. Salas, E., Wilson, K. A., Burke, C. S., and Wightman, D. (2006). Does CRM training work? An update, extension, and some critical needs. Hum. Factors 48(2), 392–412.
21. Baldwin, T. T., and Ford, J. K. (1988). Transfer of training: a review and directions for future research. Pers. Psychol. 41, 63–105.
22. Tracey, B. J., Tannenbaum, S. I., and Kavanagh, M. J. (1995). Applying trained skills on the job: the importance of the work environment. J. Appl. Psychol. 80, 239–252.
23. Ford, J. K., Quinones, M. A., Sego, D. J., and Sorra, J. S. (1992). Factors affecting the opportunity to perform trained tasks on the job. Pers. Psychol. 45, 511–527.
24. Arthur, W., Bennett, W., Stanush, P. L., and McNelly, T. L. (1998). Factors that influence skill decay and retention: a quantitative review and analysis. Hum. Perform. 11, 79–86.
25. Rouiller, J. Z., and Goldstein, I. L. (1993). The relationship between organizational transfer climate and positive transfer of training. Hum. Resour. Dev. Q. 4, 377–390.


26. McConnell, C. R. (2005). Motivating your employees and yourself. Health Care Manag. (Frederick) 24(3), 284–292.
27. Salas, E., Wilson, K. A., Burke, C. S., and Priest, H. A. (2005). Using simulation-based training to improve patient safety: what does it take? Jt. Comm. J. Qual. Patient Saf. 31(7), 363–371.
28. Oser, R. L., Cannon-Bowers, J. A., Salas, E., and Dwyer, D. J. (1999). Enhancing human performance in technology-rich environments: guidelines for scenario-based training. In Human/Technology Interaction in Complex Systems, E. Salas, Ed., Vol. 9. JAI Press, Greenwich, CT, pp. 175–202.
29. Colquitt, J. A., LePine, J. A., and Noe, R. A. (2000). Toward an integrative theory of training motivation: a meta-analytic path analysis of 20 years of research. J. Appl. Psychol. 85(5), 678–707.
30. Dempsey, J. V., Haynes, L. L., Lucassen, B. A., and Casey, M. S. (2002). Forty simple computer games and what they could mean to educators. Simul. Gaming 33(2), 157–168.
31. Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., and Wright, M. (2006). Computer gaming and interactive simulations for learning: a meta-analysis. J. Educ. Comput. Res. 34(3), 229–243.
32. Garris, R., Ahlers, R., and Driskell, J. E. (2002). Games, motivation and learning: a research and practice model. Simul. Gaming 33(4), 441–467.
33. Rothwell, W. J., and Kazanas, H. C. (1994). Improving On-the-Job Training: How to Establish and Operate a Comprehensive OJT Program. Jossey-Bass, San Francisco.
34. Levine, C. I. (1996). Unraveling five myths of OJT. Techn. Skills Train. 7, 14–17.
35. Derouin, R. E., Parrish, T. J., and Salas, E. (2005). On-the-job training: tips for ensuring success. Ergon. Des. 13(2), 23–26.


3 HUMAN SENSATION AND PERCEPTION

Robert W. Proctor
Department of Psychological Sciences, Purdue University, West Lafayette, Indiana

Kim-Phuong L. Vu
Department of Psychology, California State University, Long Beach, California

3.1 INTRODUCTION

Sträter begins his book Cognition and Safety with the statement “Human society has become an information processing society” [1, p. 3]. This statement is as true for homeland security tasks as for any other tasks that require people to interact with machines and other people in complex systems. Homeland security involves people interacting with information technology, and use of this technology to communicate effectively is an important aspect of security [2]. For communication to be effective, human–machine interactions must conform to users' perceptual, cognitive, and motoric capabilities. In particular, because all information that a person processes enters by way of the senses, sensory and perceptual processes are going to be critical factors. These processes are relevant to detecting a weapon in luggage during screening, identifying vulnerable targets for which risk is high, and communicating warnings to individuals. Given the masses of data extracted from intelligence gathering activities of various types, these data need to be integrated and displayed to appropriate security personnel in an easy-to-perceive form at the proper time. These and other aspects of homeland security systems require an understanding of fundamental concepts of sensation and perception.

3.2 BACKGROUND

Much is known about the methods for studying perception, the structure and function of the sensory systems, and specific aspects of perception such as the role of attention [3]. Understanding how people sense, perceive, and act on the information they receive is essential for homeland security because many surveillance tasks involve monitoring, detecting, and reporting events. This chapter provides an overview of sensation and perception, with emphasis on topics that seem relevant to homeland security.

FIGURE 3.1 Illustration of the primary sensory receiving areas in the cerebral cortex.

Five sensory modalities are typically distinguished: vision, hearing, touch, smell, and taste, all of which are relevant to certain aspects of homeland security. For the sake of brevity, we cover vision and hearing in most detail, describing the other senses only briefly. The reader is referred to longer and more specialized review chapters [4], as well as to textbooks on sensation and perception [3].

All sensory systems have receptors that convert physical stimulus energy into electrochemical energy in the nervous system. The sensory information is coded in the activity of neurons and travels to the brain via structured pathways consisting of interconnected networks of neurons. For most senses, two or more pathways operate in parallel to analyze and convey different kinds of information from the sensory signal. The pathways project to primary receiving areas in the cerebral cortex (Fig. 3.1) and then to many other areas within the brain.

The study of sensation and perception involves not only the anatomy and physiology of the sensory systems, but also behavioral measures of perception. Psychophysical data obtained from tasks in which observers detect, discriminate, rate, or recognize stimuli provide information about how the properties of the sensory systems relate to what is perceived. They also provide information about the functions of higher-level brain processes that interpret the sensory input through mental representation, decision-making, and inference. Thus, perceptual experiments provide evidence about how the sensory input is organized into a coherent percept on which actions are based.

3.3 METHODS FOR INVESTIGATING SENSATION AND PERCEPTION

Many methods for studying sensation and perception exist. We emphasize behavioral and psychophysiological methods because of their relevance to homeland security.


3.3.1 Threshold Methods and Scaling

Classical psychophysical methods for measuring detectability and discriminability of stimuli are based on the concept of a threshold, the minimum amount of stimulation necessary for an observer to detect a stimulus (absolute threshold) or distinguish a stimulus from another one (difference threshold). Examining how thresholds change in different settings can tell us much about perception and whether specific stimuli such as alarms may be effective. Many techniques have been developed for measuring thresholds in basic and applied settings [5]. Classical psychophysics also provides methods for building scales of perceived magnitude [6]. Indirect methods construct scales from an observer's accuracy at discriminating stimuli, whereas direct methods construct scales from an observer's magnitude estimates. Scaling methods can be used to quantify perceptual experience on any dimension that varies in magnitude, such as perception of risk. They can be used as design tools in development of new methods for displaying information, for example, data sonifications (representations of data by sound; [7]).
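To make the threshold idea concrete, the sketch below simulates one widely used adaptive procedure, a 2-down/1-up staircase, which converges near the stimulus level detected on about 70.7% of trials. It is an illustrative example only and is not drawn from the chapter or from reference [5]; the simulated observer, its logistic detection function, and all parameter values are hypothetical.

```python
import math
import random

def two_down_one_up(run_trial, start_level=10.0, step=1.0, n_trials=80):
    """Minimal 2-down/1-up adaptive staircase; converges near the ~70.7%-correct level."""
    level, n_correct, last_direction, reversal_levels = start_level, 0, None, []
    for _ in range(n_trials):
        if run_trial(level):
            n_correct += 1
            if n_correct < 2:
                continue                          # need two correct in a row to step down
            n_correct, direction = 0, "down"
            level -= step
        else:
            n_correct, direction = 0, "up"
            level += step
        if last_direction is not None and direction != last_direction:
            reversal_levels.append(level)         # record the level at each reversal
        last_direction = direction
    tail = reversal_levels[-6:] or [level]
    return sum(tail) / len(tail)                  # threshold estimate: mean of last reversals

def simulated_trial(level, true_threshold=5.0, slope=1.0):
    """Hypothetical observer: detection probability rises smoothly around the true threshold."""
    return random.random() < 1.0 / (1.0 + math.exp(-slope * (level - true_threshold)))

print(round(two_down_one_up(simulated_trial), 2))
```

In practice, the same loop would wrap a real detection trial (for example, an operator responding to a faint alarm) in place of the simulated observer.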

3.3.2 Signal Detection Methods

An observer's judgments, when stimuli are difficult to detect or discriminate, are influenced by the willingness to give one response or another. Signal detection methods allow measurement of this response criterion, or bias, separately from detectability or discriminability [8]. Situations for which signal detection is applicable involve a “signal” (e.g. a weapon in luggage) that an observer must discriminate from “noise” (e.g. other items in luggage). If the observer is to respond “yes” when a signal is present and “no” when it is not, the outcome can be classified as a hit (“yes” to signal), false alarm (“yes” to noise), miss (“no” to signal), or correct rejection (“no” to noise). Measures of detectability (how accurately a weapon can be discriminated from other items) can be calculated from the difference between the hit and false alarm rates, and measures of response bias (the tendency to open the luggage regardless of whether a weapon is present) from the overall rate of responding “yes” versus “no”. For a given level of detectability, the possible combinations of hit and false-alarm rates vary as a function of the observer's response criterion. For example, immediately after a terrorist attempt, screeners may adopt a liberal criterion and open any luggage that they think might possibly contain a weapon, yielding a high hit rate coupled with a high false alarm rate. Detectability can be improved by providing better screening equipment and operator training, whereas a desired response bias can be induced by an appropriate reward system. Signal detection methods and theory provide powerful tools for investigating and conceptualizing performance of other security-related tasks such as maintaining vigilance [9].
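The standard equal-variance Gaussian model behind these measures can be computed in a few lines. The following sketch is not part of the chapter; it applies the conventional formulas d′ = z(hit rate) − z(false alarm rate) and c = −[z(hit rate) + z(false alarm rate)]/2, together with a common log-linear correction for extreme rates, and the screening counts are hypothetical.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian signal detection indices from raw response counts."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    # Log-linear correction keeps rates of exactly 0 or 1 finite after the z-transform.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    d_prime = z(hit_rate) - z(fa_rate)              # detectability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias (negative = liberal)
    return d_prime, criterion

# Hypothetical screening tallies: a liberal criterion after an incident produces
# a high hit rate together with a high false-alarm rate.
print(dprime_and_criterion(hits=45, misses=5, false_alarms=30, correct_rejections=70))
```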

3.3.3 Psychophysiological Methods and Brain Imaging

Methods for measuring physiological reactions to stimuli are useful in studying perception [10]. Measures of electrical brain activity, electroencephalograms, can be recorded from the scalp. Event-related potentials, which measure brain activity locked to an event such as stimulus onset, provide detailed information about the timecourse of brain activation. Functional neuroimaging techniques, which measure brain activity indirectly through bloodflow, provide insight into the spatial organization of brain functions. These methods can be used to determine whether a particular behavioral phenomenon has its locus in processes associated with sensation and perception or with subsequent response selection and execution. Their use for applied purposes is being explored in the areas of neuroergonomics [11] and augmented cognition [12], which have the goals of implementing and adapting high-technology interfaces to facilitate communication of large amounts of information.

3.4 VISION

Vision is arguably the most vital sense for interacting with the world. It provides detailed information about objects in the surrounding environment and about their locations and movements. Complex information can be depicted in high-fidelity displays that mimic the external environment, in more abstract graphical formats that represent data or interactions among system components, and in alphanumeric form to convey verbal messages and numerical values.

3.4.1 Visual Sensory System

The stimulus for vision is light energy generated by, or reflected from, objects in the environment. Light travels in waves, with the wavelengths of the visual spectrum varying from 400 to 700 nm. Light enters the eye through the cornea and passes through the pupil and lens (Figure 3.2). The pupil adjusts between 8 mm in diameter in dim light and 2 mm in bright light, allowing a larger percentage of the available light to enter when light is scarce. The cornea and lens focus images on the photoreceptors, located on the retina at the back of the eye. The cornea provides a fixed focusing power, and the lens changes its shape through a process of accommodation to provide increased focusing power as the distance of a fixated object changes from far to near. The amount of rotation of the eyes inward, the vergence angle, also increases as the distance of a fixated object is reduced. Because accommodation and vergence require muscular activity, tasks that necessitate rapid and numerous changes in them will cause visual fatigue. The retina contains two types of photoreceptors, rods and cones, which have photopigments that begin a process of converting light into neural signals.

FIGURE 3.2 Illustration of the primary structures of the eye, with an object's image focused on the retina. (Adapted from E. B. Goldstein (2002). Sensation and Perception (6th ed.). Pacific Grove, CA: Wadsworth.)

Rods are responsible for night vision and do not support color perception. Cones are responsible for daylight vision and for perception of color and detail. The image of a fixated object will fall on the fovea, a small retinal region containing only cones. The retina also contains another region, the blind spot, where the optic nerve leaves the eye and there are no photoreceptors. The nerve fibers leaving the eye form two pathways. One is devoted to rapid transmission of global information across the retina. It carries high temporal frequency information needed for motion perception and detection of abrupt changes. The other is devoted to slower transmission of detailed features from the fovea and plays a role in color and pattern perception. The optic nerve projects into the lateral geniculate nucleus and then into the primary visual cortex, located at the back of the brain. More than 30 cortical areas subsequent to the primary visual cortex are involved in the processing of visual information [13]. Two different pathways play distinct roles in perception. The ventral pathway, which goes to a region in the temporal lobe, is involved in identifying objects. The dorsal pathway, which goes to a region in the parietal lobe, is involved in determining where objects are located. This dissociation of “what” and “where” processing affects performance as well; for example, navigational tasks that rely on “where” information are performed well under low lighting levels at which pattern recognition is impaired [14].

3.4.2 Visual Perception

Sensitivity to light increases for a period after entering the dark (see Figure 3.3). Several factors contribute to this dark-adaptation process: larger pupil size, photopigments returning to a light-sensitive state, and a shift from cones to rods. Because cones have a spectral sensitivity function that peaks at higher wavelengths than that for rods, short-wavelength stimuli appear relatively brighter when dark adapted. Displays intended for use in the field need to be designed with the different sensitivities of day and night vision taken into account.

FIGURE 3.3 Dark adaptation function illustrating sensitivity to visual stimuli as a function of time in the dark for cones and rods.

FIGURE 3.4 Visual acuity as a function of retinal location.

Acuity is high in the fovea and decreases as stimulus location becomes more peripheral. This acuity function reflects the density of cones being greatest in the fovea (see Figure 3.4) and the lesser convergence of foveal than peripheral photoreceptors in the sensory pathway. Acuity can be measured in several ways, including with a standard Snellen eye chart, and the resulting measures are not perfectly correlated. Resolution acuity can be specified by a spatial contrast sensitivity function, which for an adult shows maximum sensitivity at a spatial frequency of 3–5 cycles/degree of visual angle. Because high spatial frequencies convey detail and low frequencies the global properties of stimuli, acuity tests based on contrast sensitivity provide a more detailed analysis than standard acuity tests of the aspects of vision necessary for performing various tasks. For example, contrast sensitivity at intermediate and low spatial frequencies predicts detectability of signs at night [15]. An abrupt change in a display to signal a change in system mode may go undetected. The change is more likely to attract attention if it is signaled by a flickering stimulus. Conversely, stimuli such as displays on cathode ray tube (CRT) screens may flicker but with the intent of being seen as continuous. The highest rate at which flicker can be perceived is called the critical flicker frequency. A display intended to be seen as flickering should be well below the critical flicker frequency, whereas a display intended to be seen as continuous should be well above that frequency. People show good lightness constancy: a stimulus appears to be constant on a white-to-black dimension under different amounts of illumination. However, lightness contrast, for which an object looks darker when a surrounding or adjacent object is white rather than black, may occur when the intensity of local regions is changed, as in displays or signs. Because color perception is a function of the output of the three cone types, color vision is trichromatic: any spectral color can be matched by a combination of three primary colors from the short, middle, and long wavelength regions of the spectrum.
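Spatial frequency in cycles per degree depends on both the physical size of a pattern and the viewing distance, so the same display can fall near or far from the 3–5 cycles/degree sensitivity peak depending on where the viewer stands. The short sketch below illustrates the conversion; it is not from the chapter, and the grating size, cycle count, and viewing distances are hypothetical.

```python
import math

def cycles_per_degree(grating_width_cm, n_cycles, viewing_distance_cm):
    """Spatial frequency of a displayed grating in cycles per degree of visual angle."""
    visual_angle_deg = 2 * math.degrees(math.atan(grating_width_cm / (2 * viewing_distance_cm)))
    return n_cycles / visual_angle_deg

# Hypothetical case: a 10 cm pattern containing 40 light-dark cycles viewed from 57 cm
# subtends about 10 degrees, i.e. roughly 4 cycles/degree, near the adult sensitivity peak;
# viewing the same pattern from twice the distance roughly doubles its spatial frequency.
print(round(cycles_per_degree(10.0, 40, 57.0), 1))
print(round(cycles_per_degree(10.0, 40, 114.0), 1))
```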


This fact is used in the design of color televisions and computer monitors, for which all colors are generated from combinations of pixels of three different colors. For many perceptual phenomena, blue and yellow are paired in opposing manners, as are red and green: one color of the pair may produce an afterimage of the other; a background of one color may induce the other in a figure that would otherwise be seen as a neutral color; and combinations of the two colors are not perceived. These complementary color relations are based in the visual sensory pathways. That is, output from the cones is rewired into opponent-process neural coding in the optic nerve. A given neuron can signal, for example, blue or yellow, but not both at the same time. Finally, 8% of males are color blind and cannot distinguish all of the colors that a person with trichromatic vision can, which may cause objects in those colors to be less conspicuous [16]. Thus, the use of color to convey information must be done with care.

3.4.3 Higher-Level Properties of Visual Perception

The patches of light stimulating the photoreceptors must be organized into a perceptual world of meaningful objects. This is done effortlessly in everyday life, with little confusion. However, organization can be critical for constructed displays. A symbol on a sign that is incorrectly grouped may not be recognized as intended. Similarly, if a warning signal is grouped perceptually with other displays, then its message may be lost. The investigation of perceptual organization was begun by the Gestalt psychologists. According to the Gestalt psychologists, perceptual organization follows the principle of Prägnanz: the organizational processes will produce the simplest possible organization allowed by the conditions [17]. The first step in perceiving a figure requires separating it from the background. The importance of figure-ground organization is seen in figures with ambiguous figure-ground organizations, such as the well-known Rubin's vase (see Figure 3.5). When a region is seen as figure, the contour appears to be part of it, the region seems to be in front of the background, and it takes on a distinct form. Several factors influence figure-ground organization: symmetric rather than asymmetric patterns tend to be seen as figure; a region surrounded completely by another tends to be seen as figure and the surround as background; and the smaller of two regions tends to be seen as figure and the larger as ground. Figure-ground principles can be used to camouflage targets in the field.

FIGURE 3.5 Rubin's vase: an illustration of reversible figure-ground relations.


The way that the figure is grouped is also important to perception. Grouping principles include proximity (display elements that are located close together tend to be grouped together); similarity (display elements that are similar in appearance, for example in orientation or color, tend to be grouped together); continuity (figures tend to be organized along continuous contours); closure (display elements that make up a closed figure tend to be grouped together); common fate (elements with a common motion tend to be grouped together); connectedness (elements can be grouped by lines connecting them); and common region (a contour drawn around elements causes those elements to be grouped together).

Another distinction is between integral and separable stimulus dimensions [18]: stimuli composed from integral dimensions are perceived as wholes, whereas those composed from separable dimensions are perceived in terms of their component dimensions. Speed of classification on one dimension is unaffected by the relation to the other dimension if the dimensions are separable. However, for integral dimensions, classifications are slowed when the value of the irrelevant dimension is uncorrelated with that of the relevant dimension but speeded when the two dimensions are correlated. Combinations of hue, saturation, and lightness, and of pitch and loudness, have been classified as integral, and size with lightness or angle as separable. The distinction between integral and separable dimensions is incorporated in the proximity compatibility principle [19]: if a task requires information to be integrated mentally (i.e. processing proximity is high), then that information should be presented in an integral display (i.e. one with high display proximity). High display proximity can be accomplished by increasing the spatial proximity of the display elements so that the elements are integrated and appear as a distinct object. The idea is to replace the cognitive computations that someone must perform to combine the pieces of information with a less mentally demanding pattern-recognition process.

To survive, a person must be able to perceive locations of objects accurately. Moreover, representational displays should provide the information necessary for accurate spatial perception. Many cues play roles in the perception of distance and spatial relations [20; see Figure 3.6], and the perceptual system constructs the three-dimensional percept using these cues. Among the possible depth cues are accommodation and vergence angle, which, at relatively close distances, vary systematically as a function of the distance of the fixated object from the observer.

FIGURE 3.6 Diagram of oculomotor and visual depth cues. (Adapted from R. Sekuler and R. Blake (1994). Perception (3rd ed.). New York: McGraw-Hill.)

Binocular disparity is a cue that is a consequence of the two eyes viewing objects from different positions. A fixated object falls on corresponding points of the two retinas. For objects in front of or behind a curved region passing through the fixated object, the images fall on disparate locations. The direction and amount of disparity indicate how near or far the object is from fixation. Binocular disparity is a strong cue to depth that can enhance the perception of depth relations in displays of naturalistic scenes and may be of value to scientists and others in evaluating multidimensional data sets (e.g. a three-dimensional data set could be processed faster and more accurately to answer questions that required integration of the information if the display was stereoptic than if it was not [21]).

There are many static, or pictorial, monocular cues to depth. These include retinal size (larger images appear to be closer) and familiar size (for example, a small image of a car provides a cue that the car is far away). The cue of interposition is that an object that blocks part of the image of another object appears to be located in front of it. Other cues come from shading, aerial perspective, and linear perspective. Texture gradient, which is a combination of linear perspective and relative size, is important in depth perception [22]. Depth cues become dynamic when an observer moves. If fixation is maintained on an object as your location changes, as when looking out a train window, objects in the background will move in the same direction in the image as you are moving, whereas objects in the foreground will move in the opposite direction. This cue is called motion parallax. When you move straight ahead, the optical flow pattern conveys information about how fast your position is changing with respect to objects in the environment [23]. The size of the retinal image of an object varies as a function of the object's distance from the observer. When accurate depth cues are present, size constancy results: perceived object size does not vary as a function of changes in retinal image size that accompany changes in depth. Size constancy breaks down and illusions of size appear when depth cues are misleading. Misperceptions of size and distance also can occur when depth cues are minimal, as when navigating at night.

For displayed information to be transmitted accurately, the objects and words must be recognized. Pattern recognition is typically presumed to begin with feature analysis. Alphanumeric characters are analyzed in terms of features such as vertical or horizontal line segments (see Figure 3.7). Confusion matrices obtained when letters are misidentified indicate that an incorrect identification is most likely to be a letter whose features overlap with those of the one that was displayed. Letters are components of syllables and words.

FIGURE 3.7 Pattern (letter) recognition through analysis of features.

Numerous studies have provided evidence for the need to distinguish several different levels of reading units [24]. Pattern recognition is also influenced by “top-down” information of several types [25]: regularities in the mapping between spelling and spoken sounds, as well as orthographic, syntactic, semantic, and pragmatic constraints. For accurate pattern recognition, the possible alternatives need to be physically distinct and consistent with expectancies created by the context. For a skilled reader, the pattern recognition involved in reading occurs almost instantaneously and relatively automatically. This is true for other pattern recognition skills as well (e.g. identifying enemy tanks or intrusion detection patterns). The important point is that, with experience, people can come to recognize very complex patterns that would seem meaningless to a novice. In fact, efficient pattern recognition is generally thought to underlie expertise in most domains [26]. Some stimuli, such as faces, are special in that they are processed by different areas of the brain than other objects, and their recognition is more sensitive to global configuration and orientation [27].

3.5 HEARING

The sense of hearing is also used extensively to convey information [2]. It is an effective modality for warnings because sound can be heard from any direction and because rapid onsets tend to attract attention.

3.5.1 Auditory Sensory System

Sound waves are fluctuations in air pressure produced by mechanical disturbances; the frequency of the oscillations correlates with the sound's pitch and the amplitude with its loudness. A sound wave moves outward from its source at 344 m/s, with the amplitude being a decreasing function of distance. When sound reaches the outer ear, it is funneled into the middle ear (see Figure 3.8).
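As a rough quantitative illustration of how level falls off with distance, an idealized point source in a free field loses about 6 dB of sound pressure level per doubling of distance. The sketch below applies that textbook relation; it is not taken from the chapter, real rooms and directional sources will deviate from it, and the alarm level used is hypothetical.

```python
import math

def spl_at_distance(spl_ref_db, ref_distance_m, distance_m):
    """Free-field level of an idealized point source: about 6 dB lower per doubling of distance."""
    return spl_ref_db - 20.0 * math.log10(distance_m / ref_distance_m)

# Hypothetical alarm measured at 100 dB SPL at 1 m: roughly 94 dB at 2 m and 80 dB at 10 m.
for d in (1.0, 2.0, 10.0):
    print(f"{d:>5.1f} m: {spl_at_distance(100.0, 1.0, d):.1f} dB SPL")
```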

FIGURE 3.8 Illustration of the major structures of the ear.

The eardrum, which separates the outer and middle ears, vibrates in response to the fluctuations in air pressure produced by the sound wave. The middle ear contains a system of three bones that move when the eardrum vibrates, and this movement gets transferred to the fluid-filled inner ear. A flexible membrane, the basilar membrane, runs the length of the inner ear. Movement of this membrane bends hair cells, which are the sensory receptors that initiate neural signals. The pathways from the auditory nerve project to the primary auditory cortex in the temporal lobe after first passing through several neuroanatomical structures. The auditory cortex contains neurons that extract complex features of auditory stimulation.

3.5.2 Auditory Perception

Loudness is affected by many factors in addition to amplitude. Humans are relatively insensitive to tones below 200 Hz and, to a lesser extent, to tones exceeding 6 kHz. This is illustrated by equal loudness contours, which show that low- and high-frequency sounds must be of higher amplitude to be equal in loudness to tones of intermediate frequency (see Figure 3.9). Extraneous sounds can mask targeted sounds. This is important for work environments, in which the audibility of auditory input must be evaluated with respect to the level of background noise. The amount of masking depends on the spectral composition of the target and noise stimuli. Masking occurs only from frequencies within a critical bandwidth. A masking noise exerts a much greater effect on sounds higher in frequency than the noise than on sounds lower in frequency, an asymmetry due to properties of the basilar membrane.

FIGURE 3.9 Equal loudness contour curves. Each curve indicates the intensity required for tones of different frequencies to sound as loud as a 1 kHz tone at the indicated intensity level.

3.5.3 Higher-Level Properties of Auditory Perception

The principles of perceptual organization apply to auditory stimuli. Grouping can occur on the basis of similarity (e.g. frequency) and of spatial and temporal properties (see Figure 3.10). Tones can be grouped into distinct streams based on similarities on various dimensions [28]. Being able to identify where a threat is coming from is important to survival. Two different sources of information, interaural intensity differences and interaural time differences, are relied on to perceive the location of sounds around us. For sources directly in front of or behind the listener, the intensity of the sound and the time at which it reaches the two ears are equal. As the sound source moves progressively toward one side or the other of the listener's head, the sound becomes increasingly more intense at the closer ear than at the farther one, and it also reaches the closer ear first. The interaural intensity cue is most effective for high-frequency tones, and the interaural time cue for low-frequency sounds. Localization accuracy is poorest for tones between 2 and 4 kHz, where neither cue is effective. Because both cues are ambiguous at the front and back, front-back confusions of the location of brief sounds often occur.
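The interaural time cue can be approximated geometrically. The sketch below uses the classic spherical-head (Woodworth) approximation, which is not presented in the chapter; the head radius and the assumption of a perfectly spherical head are simplifications, and the 344 m/s speed of sound is taken from the earlier discussion.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound_m_s=344.0):
    """Woodworth spherical-head approximation of the interaural time difference (in seconds)."""
    theta = math.radians(azimuth_deg)  # 0 = straight ahead, 90 = directly to one side
    return (head_radius_m / speed_of_sound_m_s) * (math.sin(theta) + theta)

# The time difference grows from 0 microseconds straight ahead to roughly 650 microseconds
# at the side for an average-sized head, and it is the same for front and back locations,
# which is one reason front-back confusions occur.
for azimuth in (0, 30, 60, 90):
    print(azimuth, "deg:", round(interaural_time_difference(azimuth) * 1e6), "microseconds")
```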

FIGURE 3.10 Somatotopic map of the cerebral cortex. (Based on one from W. Penfield and T. Rasmussen (1950). The Cerebral Cortex of Man. New York: Macmillan.)

3.6 BODY SENSES, SMELL, AND TASTE

Though we cannot go into detail on the remaining sensory modalities, they have important implications for homeland security as well.

3.6.1 Touch, Proprioception, Pain, and Temperature

The body senses are composed of four distinct modalities [29]: touch, proprioception, pain, and thermal sensation. These are elicited, respectively, by mechanical stimulation of the skin, mechanical displacements of the muscles and joints, stimuli of sufficient intensity to damage tissue, and cool and warm stimuli. The receptors for these senses are the endings of neurons whose fibers enter the back (dorsal) side of the spinal cord. The fibers follow two major pathways, dorsal and anterolateral. The former pathway conveys information about touch and proprioception, and the latter conveys information about pain and temperature. The fibers project to the somatosensory cortex, which is organized as a homunculus representing the opposite side of the body. Areas of the body for which sensitivity is greater have larger cortical areas devoted to them than areas with lesser sensitivity (see Figure 3.10). Some of the cells respond to complex features of stimulation, such as movement of an object across the skin.

Vibrotaction is an effective way of transmitting complex information [30]. When mechanical vibrations are applied to a region of skin, the frequency and location of the stimulation can be varied. For frequencies of less than 40 Hz, the size of the contactor area does not influence the absolute threshold for detecting vibration. For higher frequencies, the threshold decreases with increasing size of the contactor, indicating spatial summation of the energy within the stimulated region. For multicontactor devices, which can present complex spatial patterns of stimulation, masking stimuli presented in close temporal proximity to the target stimulus can degrade identification. However, with practice, pattern recognition capabilities with these types of devices can become quite good. As a result, they can be used as reading aids for the blind and, to a lesser extent, as hearing aids for the hearing impaired [30].

A distinction is commonly made between active and passive touch [31]. Passive touch refers to situations in which the individual does not move her hand and the touch stimulus is applied passively, as in vibrotaction. Active touch refers to situations in which the individual intentionally moves the hand to manipulate and explore an object. Pattern recognition with active touch is superior to that with passive touch. However, the success of passive vibrotactile displays for the blind indicates that much information can also be conveyed passively.

3.6.2 Smell and Taste

Smell and taste can communicate information about potential danger. The smell of a toxic substance or the taste of rancid potato chips may be noxious and convey that they should not be consumed. Contaminated water also may have a noxious smell and taste, and a chemical attack may produce a burning sensation in the throat and nose. Both sensory modalities can be used for warning signals. For example, ethylmercaptan is added to natural gas to warn of gas leaks because humans are sensitive to its odor. The sensory receptors for taste are groups of cells called taste buds located on the tongue, throat, roof of the mouth, and inside the cheeks.


Sensory transduction occurs when a taste solution comes in contact with the taste buds. The nerve fibers from the taste receptors project to several nuclei in the brain and then to the insular cortex, located between the temporal and parietal lobes, and to the limbic system. Four basic tastes can be distinguished: sweet, sour, salty, and bitter, though many sensations fall outside of their range [32].

For smell, molecules in the air that are inhaled affect receptor cells located in a region of the nasal cavity. Different receptor types have different proteins that bind the odorant molecules to the receptor. The fibers from the smell receptors project to the olfactory bulb, located in the front of the brain. From there, the fibers project to a cluster of neural structures called the olfactory brain. Although odors are useful as warnings, they are not very effective at waking someone from sleep, which is why smoke detectors that emit a loud sound are needed. The sense of smell shows considerable plasticity, with associations of odors to events readily learned and habituation occurring to odors of little consequence [33].

3.7 MULTIMODAL SENSORY INTERACTIONS AND ROLE OF ACTION

In everyday life, we receive input constantly through the various senses. This input must be integrated into a coherent percept. It is important, therefore, to understand how the information from different senses is weighted and combined in perception, and how processing of input from one modality is affected by processing of input from another [34]. Many systems tend to overload the visual system with displays. As a result, there is increased interest in multimodal display technologies, which use other modalities to augment visual perception. For example, auditory and tactile displays have been used to direct an observer's attention to areas of a visual display that require further analysis [35]. Multimodal displays also allow information to be presented to users in virtual worlds that represent real-world interactions of the senses [36]. The use of multiple display and control modalities enables different ways of presenting and responding to information, the incorporation of redundancy into displays, and the emulation of real-life environments. Multimodal interfaces can reduce mental workload and make human-computer interactions more naturalistic. However, designing effective multimodal interfaces is a challenge because many interactive effects between different modalities may arise. These effects must be taken into account if the full benefits of multimodal interfaces are to be realized.

There is a tendency to think of perception as independent from action because “input precedes output.” However, a close relation between perception and action exists. For example, it is natural to orient attention to the location of a sound, making auditory displays a good choice for actions that require users to respond to the location of the sound (e.g. fire alarms should be placed close to the exit). As a result, the decisions and actions that need to be made in response to a signal or display must be taken into account when designing to optimize perception [37].

3.8 CONCLUSION

Many of the technical devices that have been, and are being, developed to aid in homeland security depend on successful human-system interactions. Human perception is an important aspect of such interactions. Operators must be able to sense and perceive the displayed information accurately and efficiently, and in a way that maps compatibly onto the tasks and actions that they must perform, for the system to achieve its goals. Regardless of the exact forms that future security technologies take, as long as humans are in the system the basic principles and concepts of sensation and perception must be taken into account.

REFERENCES

1. Sträter, O. (2005). Cognition and Safety: An Integrated Approach to Systems Design and Assessment. Ashgate, Burlington, VT.
2. Robinson, D. (2006). Emergency planning: the evolving role of regional planning organizations in supporting cities and counties. In The McGraw-Hill Homeland Security Book, D. G. Kamien, Ed. McGraw-Hill, New York, pp. 297–310.
3. Wolfe, J. M., Kluender, K. R., Levi, D. M., Bartoshuk, L. M., Herz, R. S., Klatzky, R. L., and Lederman, S. J. (2006). Sensation and Perception. Sinauer, Sunderland, MA.
4. Proctor, R. W., and Proctor, J. D. (2006). Sensation and perception. In Handbook of Human Factors and Ergonomics, 3rd ed., G. Salvendy, Ed. John Wiley & Sons, Hoboken, NJ, pp. 53–88.
5. Gescheider, G. A. (1997). Psychophysics: The Fundamentals, 3rd ed. Lawrence Erlbaum Associates, Hillsdale, NJ.
6. Marks, L. E., and Gescheider, G. A. (2002). Psychophysical scaling. In Stevens' Handbook of Experimental Psychology, Methodology in Experimental Psychology, H. Pashler, and J. Wixted, Eds. John Wiley & Sons, New York, pp. 91–138.
7. Walker, B. N. (2002). Magnitude estimation of conceptual data dimensions for use in sonification. J. Exp. Psychol. [Appl.] 8, 211–221.
8. Macmillan, N. A., and Creelman, C. D. (2005). Detection Theory: A User's Guide, 2nd ed. Cambridge University Press, New York.
9. See, J. E., Howe, S. R., Warm, J. S., and Dember, W. N. (1995). Meta-analysis of the sensitivity decrement in vigilance. Psychol. Bull. 117, 230–249.
10. Kanwisher, N., and Duncan, J., Eds. (2004). Functional Neuroimaging of Visual Cognition: Attention and Performance XX. Oxford University Press, New York.
11. Parasuraman, R., and Rizzo, M. (2007). Neuroergonomics: The Brain at Work. Oxford University Press, New York.
12. Schmorrow, D. D., Ed. (2005). Foundations of Augmented Cognition. Lawrence Erlbaum Associates, Mahwah, NJ.
13. Frishman, L. J. (2001). Basic visual processes. In Blackwell Handbook of Perception, E. B. Goldstein, Ed. Blackwell, Malden, MA, pp. 53–91.
14. Andre, J., Owens, A., and Harvey, L. O., Jr., Eds. (2003). Visual Perception: The Influence of H. W. Leibowitz. American Psychological Association, Washington, DC.
15. Evans, D. W., and Ginsburg, A. P. (1982). Predicting age-related differences in discriminating road signs using contrast sensitivity. J. Opt. Soc. Am. 72, 1785–1786.
16. O'Brien, K. A., Cole, B. L., Maddocks, J. D., and Forbes, A. B. (2002). Color and defective color vision as factors in the conspicuity of signs and signals. Hum. Factors 44, 665–675.
17. Palmer, S. E. (2003). Visual perception of objects. In Experimental Psychology, Handbook of Psychology, A. F. Healy, and R. W. Proctor, Eds., Vol. 4. John Wiley & Sons, Hoboken, NJ, pp. 179–211.


18. Garner, W. (1974). The Processing of Information and Structure. Lawrence Erlbaum Associates, Hillsdale, NJ.
19. Wickens, C. D., and Carswell, C. M. (1995). The proximity compatibility principle: its psychological foundation and relevance to display design. Hum. Factors 37, 473–494.
20. Proffitt, D. R., and Caudek, C. (2003). Depth perception and the perception of events. In Experimental Psychology, Handbook of Psychology, A. F. Healy, and R. W. Proctor, Eds., Vol. 4. John Wiley & Sons, Hoboken, NJ, pp. 213–236.
21. Wickens, C. D., Merwin, D. F., and Lin, E. (1994). Implications of graphics enhancements for the visualization of scientific data: dimensional integrality, stereopsis, motion, and mesh. Hum. Factors 36, 44–61.
22. Gibson, J. J. (1950). The Perception of the Visual World. Houghton Mifflin, Boston, MA.
23. Bruno, N., and Cutting, J. E. (1988). Minimodularity and the perception of layout. J. Exp. Psychol. Gen. 117, 161–170.
24. Healy, A. F. (1994). Letter detection: a window to unitization and other cognitive processes in reading text. Psychon. Bull. Rev. 1, 333–344.
25. Massaro, D. W., and Cohen, M. M. (1994). Visual, orthographic, phonological, and lexical influences in reading. J. Exp. Psychol. Hum. Percept. Perform. 20, 1107–1128.
26. Ericsson, K. A., Charness, N., Feltovich, P. J., and Hoffman, R. R., Eds. (2006). The Cambridge Handbook of Expertise and Expert Performance. Cambridge University Press, New York.
27. Farah, M. J., Wilson, K. D., Drain, M., and Tanaka, J. (1998). What is “special” about face perception? Psychol. Rev. 105, 482–498.
28. Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press, Cambridge, MA.
29. Gardner, E. P., Martin, J. H., and Jessell, T. M. (2000). The bodily senses. In Principles of Neural Science, E. R. Kandel, J. H. Schwartz, and T. M. Jessell, Eds., Vol. 4. Elsevier, Amsterdam, pp. 430–450.
30. Summers, I. R., Ed. (1992). Tactile Aids for the Hearing Impaired. Whurr Publishers, London.
31. Gibson, J. J. (1966). The Senses Considered as Perceptual Systems. Houghton Mifflin, Boston, MA.
32. Schiffman, S. S., and Erickson, R. P. (1993). Psychophysics: insights into transduction mechanisms and neural coding. In Mechanisms of Taste Transduction, S. A. Simon, and S. D. Roper, Eds. CRC Press, Boca Raton, FL.
33. Doty, R. L., Ed. (2003). Handbook of Olfaction and Gustation, 2nd ed. Marcel Dekker, New York.
34. Calvert, G., Spence, C., and Stein, B. E., Eds. (2004). The Handbook of Multisensory Processes. MIT Press, Cambridge, MA.
35. Proctor, R. W., Tan, H. Z., Vu, K. P. L., Gray, R., and Spence, C. (2005). Implications of compatibility and cuing effects for multimodal interfaces. In Foundations of Augmented Cognition, D. D. Schmorrow, Ed. Lawrence Erlbaum Associates, Mahwah, NJ, pp. 3–12.
36. Stanney, K. M., Ed. (2002). Handbook of Virtual Environments: Design, Implementation, and Applications. Lawrence Erlbaum Associates, Mahwah, NJ.
37. Proctor, R. W., and Vu, K. P. L. (2006). Stimulus-Response Compatibility Principles: Data, Theory, and Application. CRC Press, Boca Raton, FL.


FURTHER READING

Bolanowski, S. J., and Gescheider, G. A., Eds. (1991). Ratio Scaling of Psychological Magnitude. Lawrence Erlbaum Associates, Hillsdale, NJ.
Macmillan, N. A. (2002). Signal detection theory. In Stevens' Handbook of Experimental Psychology, Methodology in Experimental Psychology, H. Pashler, and J. Wixted, Eds., Vol. 4. John Wiley & Sons, New York, pp. 43–90.
Wickens, T. D. (2001). Elementary Signal Detection Theory. Oxford University Press, New York.


4 HUMAN BEHAVIOR AND DECEPTION DETECTION

Mark G. Frank and Melissa A. Menasco
University at Buffalo, State University of New York, Buffalo, New York

Maureen O'Sullivan
University of San Francisco, San Francisco, California

4.1 INTRODUCTION

Terrorism at its core is a human endeavor. Human beings cultivate what they hate, and then plan and execute terrorist attacks. Thus, any information that can aid the intelligence or security officer in weighing the veracity of the information he or she obtains from suspected terrorists, or from those harboring them, would help prevent attacks. This would not only add another layer to force protection but would also facilitate future intelligence gathering. Yet the face-to-face gathering of information from suspected terrorists, informants, or witnesses is replete with obstacles that affect its accuracy, such as the well-documented shortcomings of human memory, honest differences of opinion, and, the focus of this chapter, outright deception [1]. The evidence suggests that in day-to-day life most lies are betrayed by factors or circumstances surrounding the lie, and not by behavior [2]. However, there are times when demeanor is all a homeland security agent has at his or her disposal to detect someone who is lying about his or her current actions or future intent. Because a lie involves deliberate, conscious behavior, we can speculate that this effort may leave some trace, sign, or signal that may betray that lie. What interests the scientist, as well as society at large, is (i) whether there are clues perceptible to the unaided eye that can reliably discriminate between liars and truth tellers; (ii) whether these clues consistently predict deception across time, types of lies, situations, and cultures; and, if (i) and (ii) are true, (iii) how well our counter-terrorism professionals can make these judgments, and whether they can do so in real time, with or without technological assistance.


4.2 SCIENTIFIC OVERVIEW—BEHAVIORAL SIGNS OF DECEPTION

To date no researcher has documented a “Pinocchio response”; that is, a behavior or pattern of behaviors that in all people, across all situations, is specific to deception (e.g. [3]). All the behaviors identified and examined by researchers to date can occur for reasons unrelated to deception. Generally speaking, the research on detecting lies from behavior suggests that two broad families of behavioral clues are likely to occur when someone is lying: clues related to the liar's memory and thinking about what he or she is saying (cognitive clues), and clues related to the liar's feelings and feelings about deception (emotional clues) [3–8].

4.2.1 Cognitive Clues

A lie conceals, fabricates, or distorts information; this involves additional mental effort. The liar must think harder than a truth teller to cover up, to create events that have not happened, or to describe events in a way that allows multiple interpretations. Additional mental effort is not solely the domain of the outright liar, however; a person who must tell an uncomfortable truth to another will also engage in additional mental effort to come up with the proper phrasing while simultaneously trying to reduce the potential negative emotional reaction of the other. This extra effort tends to manifest itself in longer speech latencies, increased speech disturbances, less plausible content, less verbal and vocal involvement, less talking time, more repeated words and phrases, and so forth [9]. Research has also shown that some nonverbal behaviors change as a result of this mental effort. For example, illustrators, the hand or head movements that accompany speech and are considered by many to be a part of speech (e.g. [10]), decrease when lying compared to telling the truth [11, 12].

Another way in which cognition is involved in telling a lie is through the identification of naturalistic memory characteristics. This means that experienced events have memory qualities, apparent when the events are described, that differ from those of events that have not been experienced (the “Undeutsch hypothesis” [13]). Events that were not actually experienced feature more ambivalence, fewer details, a poorer logical structure, less plausibility, and more negative statements, and they are less embedded in context. Liars are also less likely to admit lack of memory and make fewer spontaneous corrections (reviewed by [8, 9]), and they may use more negative emotion words and fewer self and other references [14]. Mental effort clues seem to occur more in the delivery of the lie, whereas memory recall clues tend to rest more in the content of the lie. We note that not all lies will tax mental effort; for example, it is much less mentally taxing to answer a closed-ended question like “Did you pack your own bags?” with a yes or no than to answer an open-ended “What do you intend to do on your trip?” Moreover, a clever liar can appear more persuasive if he or she substitutes an actually experienced event as the alibi rather than creating an entirely new event. This may be why a recent general review [9] found consistently nonhomogeneous effect sizes for these mental effort and memory-based cues across the studies it reviewed: the particular paradigms used by researchers varied greatly in the extent to which the lies that were studied mentally taxed the liars.

4.2.2 Emotional Clues

Lies can also generate emotions, ranging from the excitement and pleasure of “pulling the wool over someone's eyes” to fear of getting caught to feelings of guilt [4]. Darwin [15] first suggested that emotions tend to manifest themselves in facial expressions, as well as in voice tones, and that these could be reliable enough to accurately identify emotional states. Research has since shown that for some emotions (e.g. anger, contempt, disgust, fear, happiness, sadness/distress, and surprise) cultures throughout the world recognize and express them similarly in both the face and voice [16]. To the extent that a lie features higher stakes for getting caught, we would expect to see more of these signs of emotion in liars compared to truth tellers. If the lie is a polite lie that people tell often and effortlessly, there would be less emotion involved (e.g. [17]). Meta-analytic studies suggest that liars do appear more nervous than truth tellers, with less facial pleasantness, higher vocal tension, higher vocal pitch, greater pupil dilation, and more fidgeting [9]. If the lie itself is about emotions (e.g. telling someone that one feels calm when in fact one is nervous), the research shows that signs of the truly felt emotion appear in the face and voice despite attempts to conceal them, although these signs are often subtle and brief [18, 19].

4.2.3 Measurement Issues

One issue in measuring lie signs is to make clear what is meant by the terms cognition and emotion. For example, in deception research, the term arousal is used interchangeably with emotion, but often refers to many different phenomena: an orienting response [20], an expression of fear [21], a more indeterminate affect somewhere between arousal and emotion ([22]; see also the discussion by Waid and Orne [23]), as well as physiological states as different as stress, anxiety, embarrassment, and even anger [24]. A second issue in measuring lie signs is to clarify the level of detail of measurement as well as to specify why that level of detail may or may not correlate with lying [25]. Many meta-analyses of behavioral deception clues report nonsignificant effect sizes, but the variance among effects is not homogeneous (e.g. [3, 9, 26–28]). For example, some studies investigated behavior at the most elemental physical units of measurement, such as counting the movements in the hands, feet, arms, legs, torso, eye movements, eye blinks, pupil dilation, lip pressing, brow lowering or raising, lip corner puller (smiling), fundamental frequency, amplitude, pauses, filled pauses, response latency, speech rate, length of response, connector words, unique words, self-references, and so forth. Other studies investigated behavior at the most elemental psychological meaning units of measurement. Some of these included manipulators—which involve touching, rubbing, etc., of various body parts—which could be composed of a number of hand, finger, and arm movements, but which were scored for theoretical rather than merely descriptive reasons. Other psychologically meaningful units of measurement include illustrators, which accompany speech to help keep the rhythm of the speech, emphasize a word, show direction of thought, etc., or emblems, which are gestures that have a speech equivalent, such as a head nod meaning “yes”, a shrug meaning “I am not sure”, or facial emblems such as winking. The psychological meaning units might also include vocal tension, speech disturbances, negative statements, contextual embedding, unusual details, logical structure, unexpected complications, superfluous details, self-doubt, and so forth. Finally, other studies investigated behavior at the most interpretative/impressionistic unit level; these are further unarticulated composites of the physical and psychological meaning units described earlier. Some of these impressionistic variables include fidgeting, involvement, body animation, posture, facial pleasantness, expressiveness, vocal immediacy and involvement, spoken uncertainty, plausibility, and cognitive complexity (see the review by [9]). The problem, of course, is that as one moves from physical to impressionistic measures, it would seem to become harder to make those judgments reliably. This is not always the case, though; for example, the term “smile” has rarely been defined in research reports, yet independent coders are typically above 0.90 reliability when coding smiles (see [29] for a review). Although research suggests that people can be more accurate when they employ indirect inferences to deception (e.g. does the person have to think hard? [30]), “gut” impressions tend to be uncorrelated with accuracy [26]. This suggests that we must be cautious about clues at the impressionistic level, and that it may be more productive to study them at their psychological level, where they might be more meaningful to understanding deception.
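Reliability of this kind of behavioral coding is usually quantified with an agreement statistic. The sketch below is a generic illustration (not drawn from any specific study, and the smile codes are hypothetical) of computing Cohen's kappa for two coders judging the presence or absence of a smile in each video segment; published reports may instead use percent agreement or correlations.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical smile codes (1 = smile present, 0 = absent) for 10 segments
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.8 for this toy example
```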

4.2.4 Prognosis on Generalizability of Deception Findings Across Time, Lies, Situations, and Cultures

It is safe to conclude that although there are some clues that betray a lie at rates greater than chance, none of them are exclusive to deception. This conclusion applies to machine-based physiological approaches as well. However, the origins of these signs—mental effort, memory, and emotion—are universal. This suggests that if the context in which the information is gathered is controlled, and designed to differentially affect liars and truth tellers, it would greatly increase the chances of being able to distinguish people with deceptive intent from those with truthful intent. Polygraph examiners have done this by controlling their question style to improve hit rates, but to date this has not been done systematically in behavioral studies. Thus its effects are unknown, but we can speculate based upon what we know about normal, truthful human behavior. If the lie is of no significance to the person, with no costs for getting caught, and involves a simple yes or no answer, odds are there will not be many clues to distinguish the liar from the truth teller. If the situation has significance to the person, there are consequences for getting caught, and the person is required to recount an event in response to an open-ended question, then we would expect more clues to surface that would distinguish the liar from the truth teller. This may be a curvilinear relationship; a situation of extraordinarily high mental effort and emotion—e.g. one in which a person is being beaten, screamed at, and threatened with execution—will generate all the “lie clues” described earlier, but equally in liars and truth tellers. Nonetheless, information about mental effort, experienced memory, and emotion can provide very useful clues to Homeland Security personnel to identify behavioral “hot spots” [4] that can provide information about issues of importance to the subject. A counterterrorism intelligence officer who knows when a subject is feeling an emotion or thinking hard can know what topics to pursue or avoid in an interview, and whether the subject is fabricating, concealing information, or merely feeling uncomfortable with the topic although truthful.

4.3 SCIENTIFIC OVERVIEW—ABILITIES TO SPOT LIARS

Research over the past 30 years suggests that the average person is statistically, but not practically, better than chance at identifying deception. The most recent review of over 100 studies has shown that when chance accuracy is 50%, the average person is approximately 54% accurate [31]. There are a number of reasons for this poor ability, among them poor feedback in daily life (i.e. a person only knows about the lies he or she has caught), the general tendency among people to believe others until proven otherwise (i.e. a “truth bias”; [32]), and especially a faulty understanding of what liars actually look like (i.e. the difference between people’s perceived clues to lying and the actual clues; [26]).

4.3.1 General Abilities of Specialized Groups

Most of the studies reviewed were laboratory based and involved observers judging strangers, but similar results are found even when the liars and truth tellers are known to the observers (also reviewed by [31]). If the lies being told are low stakes, so that little emotion is aroused and the lie can be told without much extra cognitive effort, there may be few clues available on which to base a judgment. But even studies of high-stakes lies, in which both liars and truth tellers are highly motivated to be successful, suggest an accuracy level that is not much different from chance. Studies that examined unselected professionals involved in security settings—police, federal agents, and so forth—have typically found that they too are no more accurate in their abilities to spot deception than laypeople (e.g. [27, 33–36]). However, within these studies there have been a handful of groups that performed better than 60% accurate on both lies and truths, and what these groups are doing might be informative for Homeland Security applications. The first group identified was a group of Secret Service agents who, as a group, were superior at detecting lies about one’s emotions; moreover, the agents who were more accurate were more likely to report using nonverbal clues than those who were less accurate. The authors [33] speculated that the Secret Service agents were more accurate than the other groups because they were trained in scanning crowds for nonverbal behaviors that did not fit, and because they dealt with assassination threats, many of which were made by mentally ill individuals. Unlike most police officers, whose assumption of guilt in suspects is high [37], reflecting the experience of their daily work, Secret Service agents interviewed suspects knowing that the base rate of true death threats was low. The second set of groups identified included forensic psychologists, federal judges, selected federal law enforcement officers, and a group of sheriffs [34]. A commonality among these groups seemed to be their very high motivation to improve their lie detecting skills. A third group identified was police officers examining real-life lies, who showed 65% overall accuracy in detecting lies and truths [38].

4.3.2 Individual Differences

As with any ability, research suggests that some people are better able than others to detect deception in high-stakes lies (e.g. [39]); this skill does not seem to translate to lower-stakes lies [32]. One element of better skill in higher-stakes settings is the ability to judge micromomentary displays of emotion [33, 39]. Other groups who showed better than 60% accuracy included people with left hemisphere brain lesions that prevented them from comprehending speech [40]; in addition, subjects who scored higher on a test of knowledge of clues to deceit were more accurate than those who scored lower [41]. A different approach has been to identify individuals who obtain high scores on lie detection tests and to study them in detail [42]. After testing more than 12,000 people using a sequential testing protocol involving three different lie detection accuracy measures, O’Sullivan and Ekman identified 29 highly accurate individuals. These individuals had a kind of genius with respect to the observation of verbal and nonverbal clues, but since genius often connotes academic intelligence, the expert lie detectors were labeled “truth wizards” to suggest their special talent. Although this term is unfortunate in mistakenly suggesting that their abilities are due to magic rather than talent and practice, it does reflect the rarity of their abilities. One of the first findings of the Wizard Project was a profession-specific sensitivity to certain kinds of lies. About one-third of the wizards were highly accurate on all three of the tests used. Another third did very well on two of the tests, but not on the third, in which people lied or told the truth about whether they had stolen money. Nearly all of these wizards were therapists who had little, if any, experience with lies about crime. The remaining third of the wizards were law enforcement personnel—police and lawyers—who did very well on the crime lie detection test, but not on a test in which people lied or told the truth about their feelings. Compared with a matched control group, expert lie detectors are more likely than controls to attend to a wide array of nonverbal behaviors and to be more consciously aware of inconsistencies between verbal and nonverbal behaviors. Although expert lie detectors make almost instantaneous judgments about the kind of person they are observing, they are also more cautious than controls about reaching a final decision about truthfulness.

4.4 CRITICAL NEEDS ANALYSIS

Research on human behavior and deception detection can make a useful contribution to Homeland Security needs as long as scientists and practitioners understand what it is they are observing—signs of thinking or signs of feeling. This rule applies to automated approaches that measure physiology as well. Even with this limitation, training in behavioral hot spot recognition may make security personnel better at spotting those with malfeasant intent. Other critical needs are discussed below.

4.4.1 More Relevant Laboratory Paradigms and Subjects

We must recognize that general meta-analyses of the research literature, although useful, are limited in their applicability to security contexts, since such analyses tend to combine studies that feature lies told with few stakes and low cognitive demands with those featuring higher stakes and stronger cognitive demands. Thus, we should be more selective about which studies to examine for clues that may be useful or relevant to security contexts. This also means it is important for scientists to develop research paradigms that more closely mirror the real-life contexts in which security personnel work. Although laboratory settings are not as powerful as real-world settings, high-stakes laboratory deception situations can provide insights with the best chance of applicability. Consistent with this approach, two current airport security techniques capitalize on behaviors identified by research studies on stress, with anecdotal success (i.e. the Transportation Security Administration (TSA)’s Screening Passengers by Observation Techniques and the Massachusetts State Police Behavioral Assessment System). One way to facilitate this type of progress is to have Homeland Security personnel advise laboratory research, as well as to allow researchers to spend on-the-job time with them. We believe that pairing researchers and practitioners would eventually result in calls for laboratory studies featuring higher stakes for the liars, different subject populations beyond US/Europeans (as research suggests that people can detect deception in other cultures at rates greater than chance; [43, 44]), and differing interview lengths, such as examining shorter interviews (i.e. a 30–90 s security screening) and longer interviews (i.e. a 1–4 h intelligence interview).

4.4.2 Examination and Creation of Real-World Databases

There have been very few studies of real-world deception (e.g. [38]), yet the technological capability exists to create many more. The biggest problem with real-world data is determining the ground truth (was the person really lying, or did he or she truly believe what he or she just stated?). Estimating ground truth—as compared to knowing ground truth—will slow down the identification of any patterns or systems. Clear criteria must be established a priori to determine this ground truth. For example, confessions of malfeasance are a good criterion, but false confessions do happen. Catching someone with contraband (i.e. a “hit”) is also a good criterion, but occasionally the person may be truthful when he or she states that someone must have snuck it into his or her luggage. Moreover, academics should advise on the capture and recording of these databases, to ensure that the materials can be examined by the widest range of researchers and research approaches. For example, most of the police interview video we have seen is of such poor quality that we cannot analyze facial expressions in any detail. It is only when these databases are combined with laboratory work that we can more sharply identify behaviors or behavioral patterns that will increase the chances of catching those with malfeasant intent. To use this information optimally, though, we must also examine in detail known cases of false negatives and false positives, as well as correct hits, to determine why mistakes were made in these judgments.

4.4.3 Ground Truth Base Rates

Security personnel do not know the base rates for malfeasance in their settings. Although it may be logistically impossible to hand-search every piece of hand luggage in a busy airport, or to follow every investigative lead, it would be essential to know this base rate in order to ascertain the effectiveness of any new behavioral observation technique. This would also permit more useful cost–benefit analyses of various levels of security and training. A less satisfying but still useful way to ascertain effectiveness is to compare contraband hit rates for screeners using various behavioral observation techniques with hit rates for travelers who are stopped randomly (as long as the day of the week and the time of day and year are scientifically controlled).
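As a sketch of that comparison, with made-up counts rather than real screening data, one could test whether the contraband hit rate under behavioral observation exceeds the hit rate from random stops using a standard two-proportion z-test:

```python
from math import sqrt, erf

def two_proportion_z(hits1, n1, hits2, n2):
    """One-sided z-test that group 1's hit rate exceeds group 2's."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail probability
    return z, p_value

# Hypothetical counts: 40 hits in 2,000 behavior-based stops versus
# 15 hits in 2,000 random stops.
z, p = two_proportion_z(40, 2000, 15, 2000)
print(round(z, 2), round(p, 5))
```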

4.4.4 Optimizing Training

The most recent meta-analysis of the research literature on training people to improve deception detection from behavior has shown that, across over 2000 subjects, there was a modest effect for training, despite the use of substandard training techniques [45]. This suggests that better training techniques will yield larger improvements in people’s abilities to sort out truth from lie. One training change would be to train on behavioral clues that are derived from similar situations and supported by research. For example, one study trained research subjects to recognize a set of behavioral clues that are believed to be indicative of deception, and are often taught to law enforcement personnel as signs of deception, although many of these signs are not supported by the scientific literature [46]. This study reported a 10% decrease in accuracy for the groups receiving such training. Therefore, the first step in adequate training is to identify what information is useful for training (see above). The second step is to determine the most effective way to deliver that information. For example, what training duration maximizes comprehension—one full day, three full days, or more? Should it be done in a group or through self-study? Does it need simple repetition or more creative approaches, and how many training items are needed? Does it need to be reinforced at particular intervals? How many clues should be taught—i.e. at what point do you overwhelm trainees? How do you train in such a way as to improve accuracy without overinflating confidence? These are just a few of the questions with unknown answers.

4.4.5 Identifying Excellence

Another critical need is to identify who within relevant organizations shows signs of excellence, through higher hit rates or whatever other clear criteria can be applied. This strategy is similar to that of the “wizards” study [42]. One caution is that, to date, most testing material is laboratory-experiment based, and the generalizability of that information to real-world contexts is not perfect. An examination of the convergent validity of laboratory tests of deception detection and other more naturalistic measures (peer ratings, or field observations at airports or other points of entry, with accuracy determined by an individual’s rate of contraband “hits” compared to random selection) would be a great start.

4.5 FUTURE RESEARCH DIRECTIONS

The aforementioned critical needs suggest several research questions, but by no means is that section comprehensive. As we peer into the future, there is much work to do. A partial list of future directions is shown below.

• Examine the role of technology in facilitating behavioral observation. A number of computer vision algorithms are now available that can aid observation, such as recognizing emotional expressions in the face (e.g. [47]). What is unknown is how robust these algorithms are in real-world contexts. What is also unknown is how best to combine technological observation of behavior with human judgment. Would there be a tendency for humans to overrely upon the technology over time?
• Identify the optimal environmental setup for surveillance, whether with technology or the unaided eye. This includes proxemic placement of tables, lines, stanchions, other individuals, and so forth. One goal would be to create an environment that reduces the typical stress felt by the normal traveler, which would hopefully increase the salience of any sign of stress exhibited by a malfeasant and improve the chances of its being observed.
• Identify the optimal interaction style between security agents and the public. One can aggressively question and threaten travelers, but that might render behavioral observation useless due to the overall stress engendered. A rapport-building approach (e.g. [48]) might be better, but this needs more research.
• Identify the optimal interview style. Phrasing of questions is important in obtaining information, but this has not been researched in the open literature. Small changes in phrasing—e.g. open versus closed ended—might add to the cognitive burden of the liar and thus could be useful. The order of questions will also be important, as well as whether one should make a direct accusation. But only additional research will tell.
• Identify the optimal way to combine behavioral clues. Research tends to examine individual behavioral clues to ascertain their effectiveness, yet more modern neural network and machine learning approaches may be successful in identifying patterns and combinations of behaviors that better predict deception in particular contexts (a minimal sketch of this idea follows the list).
• Identify the presence of countermeasures. An inevitable side effect of the release of any information about what behaviors are being examined by security officers, to identify riskier individuals in security settings, is that this information will find its way onto the Internet or other public forums. This means a potential terrorist can learn what to do and what not to do in order to escape further scrutiny. The problem is that we do not yet know whether people can conceal all of their behaviors in these real-life contexts. Moreover, some of these behaviors, like emotional behavior, are more involuntary [16] and should be harder to conceal than more voluntary behavior like word choice. Thus it remains an open question whether a potential terrorist can countermeasure all of the critical behaviors.
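The sketch below illustrates the combination idea only: a generic classifier over several coded behavioral cues, trained on synthetic data rather than cues coded from any real study, with scikit-learn assumed to be available. Any operational system would need validated features, real ground truth, and careful evaluation of false positive costs.

```python
# Illustrative only: combine several coded behavioral cues into a single
# deception score with a simple classifier. Feature values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical cue columns: response latency (s), illustrator rate (per min),
# vocal pitch rise (Hz), count of implausible or contradictory details.
truthful = rng.normal([1.0, 12.0, 2.0, 0.5], [0.5, 3.0, 1.5, 0.5], size=(n, 4))
deceptive = rng.normal([1.6, 9.0, 4.0, 1.5], [0.6, 3.0, 2.0, 1.0], size=(n, 4))

X = np.vstack([truthful, deceptive])
y = np.array([0] * n + [1] * n)  # 0 = truthful, 1 = deceptive

clf = LogisticRegression(max_iter=1000)
print("Cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 2))
```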

Space limitations preclude an exhaustive list of needs, future directions, and research. In general, the research suggests that there are a limited number of clues useful for sorting out liars and truth tellers, and that most people cannot spot them. However, a closer examination of this literature suggests that some behavioral clues can be useful to security personnel, and some people can spot these clues well. We feel that it may ultimately be most productive to expand our thinking about behavioral clues to deceit to include behavioral clues to a person’s reality—clues that someone is recounting a true memory, thinking hard, or having an emotion he or she wishes to hide. This would enable a security officer to make the most accurate inference about the inner state of the person being observed, which, when combined with better interaction and interviewing techniques, would enable the officer to better infer the real reasons for that inner state, be it intending us harm, telling a lie, or telling the truth.

REFERENCES

1. Haugaard, J. J., and Repucci, N. D. (1992). Children and the truth. In Cognitive and Social Factors in Early Deception, S. J. Ceci, M. DeSimone-Leichtman, and M. E. Putnick, Eds. Erlbaum, Hillsdale, NJ.


2. Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., and Ferrar, M. (2002). How people really detect lies. Commun. Monogr. 69, 144–157.
3. Zuckerman, M., DePaulo, B. M., and Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In Advances in Experimental Social Psychology, L. Berkowitz, Ed. Academic Press, San Diego, CA, Vol. 14, pp. 1–59.
4. Ekman, P. (1985/2001). Telling Lies. W. W. Norton, New York.
5. Ekman, P., and Frank, M. G. (1993). Lies that fail. In Lying and Deception in Everyday Life, M. Lewis, and C. Saarni, Eds. Guilford Press, New York, pp. 184–200.
6. Hocking, J. E., and Leathers, D. G. (1980). Nonverbal indicators of deception: a new theoretical perspective. Commun. Monogr. 47, 119–131.
7. Knapp, M. L., and Comadena, M. E. (1979). Telling it like it isn't: a review of theory and research on deceptive communication. Hum. Commun. Res. 5, 270–285.
8. Yuille, J. C., Ed. (1989). Credibility Assessment. Kluwer Academic Publishers, Dordrecht.
9. DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., and Cooper, H. (2003). Cues to deception. Psychol. Bull. 129, 74–112.
10. McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, Chicago.
11. Ekman, P., and Friesen, W. V. (1972). Hand movements. J. Commun. 22, 353–374.
12. Vrij, A. (1995). Behavioral correlates of deception in a simulated police interview. J. Psychol. 129, 15–28.
13. Undeutsch, U. (1967). Beurteilung der Glaubhaftigkeit von Aussagen. In Handbuch der Psychologie. Bd. II: Forensische Psychologie, U. Undeutsch, Ed. Verlag für Psychologie, Göttingen, pp. 26–181.
14. Newman, M. L., Pennebaker, J. W., Berry, D. S., and Richards, J. M. (2003). Lying words: predicting deception from linguistic styles. Pers. Soc. Psychol. Bull. 29, 665–675.
15. Darwin, C. (1872/1998). The Expression of the Emotions in Man and Animals, 3rd ed. (with commentaries by Paul Ekman). Oxford University Press, New York.
16. Ekman, P. (2003). Emotions Revealed. Henry Holt, New York.
17. DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., and Epstein, J. A. (1996). Lying in everyday life. J. Pers. Soc. Psychol. 70, 979–995.
18. Ekman, P., Friesen, W. V., and O'Sullivan, M. (1988). Smiles when lying. J. Pers. Soc. Psychol. 54, 414–420.
19. Ekman, P., O'Sullivan, M., Friesen, W. V., and Scherer, K. (1991). Invited article: face, voice, and body in detecting deceit. J. Nonverbal Behav. 15, 125–135.
20. deTurck, M. A., and Miller, G. R. (1985). Deception and arousal: isolating the behavioral correlates of deception. Hum. Commun. Res. 12, 181–201.
21. Frank, M. G. (1989). Human Lie Detection Ability as a Function of the Liar's Motivation. Unpublished doctoral dissertation, Cornell University, Ithaca.
22. Burgoon, J. E., and Buller, D. B. (1994). Interpersonal deception: III. Effects of deceit on perceived communication and nonverbal behavior dynamics. J. Nonverbal Behav. 18, 155–184.
23. Waid, W. M., and Orne, M. T. (1982). The physiological detection of deception. Am. Sci. 70, 402–409.
24. Steinbrook, R. (1992). The polygraph test: a flawed diagnostic method. N. Engl. J. Med. 327, 122–123.
25. Frank, M. G. (2005). Research methods in detecting deception research. In Handbook of Nonverbal Behavior Research, J. Harrigan, K. Scherer, and R. Rosenthal, Eds. Oxford University Press, London, pp. 341–368.


26. DePaulo, B. M., Stone, J., and Lassiter, D. (1985). Deceiving and detecting deceit. In The Self and Social Life, B. R. Schlenker, Ed. McGraw-Hill, New York, pp. 323–355.
27. Vrij, A. (2000). Detecting Lies and Deceit: The Psychology of Lying and the Implications for Professional Practice. John Wiley & Sons, Chichester.
28. Zuckerman, M., and Driver, R. E. (1985). Telling lies: verbal and nonverbal correlates of deception. In Multichannel Integration of Nonverbal Behavior, W. A. Siegman, and S. Feldstein, Eds. Erlbaum, Hillsdale, NJ, pp. 129–147.
29. Frank, M. G. (2003). Smiles, lies, and emotion. In The Smile: Forms, Functions, and Consequences, M. Abel, Ed. The Edwin Mellen Press, New York, pp. 15–43.
30. Vrij, A., Edward, K., and Bull, R. (2001). Police officers' ability to detect deceit: the benefit of indirect deception detection measures. Leg. Criminol. Psychol. 6, 185–196.
31. Bond, C. F. Jr., and DePaulo, B. M. (2006). Accuracy of deception judgments. Pers. Soc. Psychol. Rev. 10, 214–234.
32. DePaulo, B. M., and Rosenthal, R. (1979). Telling lies. J. Pers. Soc. Psychol. 37, 1713–1722.
33. Ekman, P., and O'Sullivan, M. (1991). Who can catch a liar? Am. Psychol. 46, 913–920.
34. Ekman, P., O'Sullivan, M., and Frank, M. G. (1999). A few can catch a liar. Psychol. Sci. 10, 263–266.
35. DePaulo, B. M., and Pfeifer, R. L. (1986). On-the-job experience and skill at detecting deception. J. Appl. Soc. Psychol. 16, 249–267.
36. Kraut, R. E., and Poe, D. (1980). Behavioral roots of person perception: the deception judgments of customs inspectors and laymen. J. Pers. Soc. Psychol. 39, 784–798.
37. Meissner, C. A., and Kassin, S. M. (2002). "He's guilty!": investigator bias in judgments of truth and deception. Law Hum. Behav. 26, 469–480.
38. Mann, S., Vrij, A., and Bull, R. (2004). Detecting true lies: police officers' abilities to detect suspects' lies. J. Appl. Psychol. 89, 137–149.
39. Frank, M. G., and Ekman, P. (1997). The ability to detect deceit generalizes across different types of high stake lies. J. Pers. Soc. Psychol. 72, 1429–1439.
40. Etcoff, N. L., Ekman, P., Magee, J. J., and Frank, M. G. (2000). Superior lie detection associated with language loss. Nature 405(11), 139.
41. Forrest, J. A., Feldman, R. S., and Tyler, J. M. (2004). When accurate beliefs lead to better lie detection. J. Appl. Soc. Psychol. 34, 764–780.
42. O'Sullivan, M., and Ekman, P. (2004). The wizards of deception detection. In The Detection of Deception in Forensic Contexts, P. A. Granhag, and L. Stromwell, Eds. Cambridge University Press, Cambridge, pp. 269–286.
43. Bond, C. F. Jr., and Atoum, A. O. (2000). International deception. Pers. Soc. Psychol. Bull. 26, 385–395.
44. Bond, C. F., Omar, A., Mahmoud, A., and Bonser, R. N. (1990). Lie detection across cultures. J. Nonverbal Behav. 14, 189–204.
45. Frank, M. G., and Feeley, T. H. (2003). To catch a liar: challenges for research in lie detection training. J. Appl. Commun. Res. 31, 58–75.
46. Kassin, S. M., and Fong, C. T. (1999). "I'm innocent!": effects of training on judgments of truth and deception in the interrogation room. Law Hum. Behav. 23, 499–516.
47. Bartlett, M. S., Littlewort, G., Frank, M. G., Lainscsek, C., Fasel, I., and Movellan, J. (2006). Fully automatic facial action recognition in spontaneous behavior. J. Multimedia 6, 22–35.
48. Collins, R., Lincoln, R., and Frank, M. G. (2002). The effect of rapport in forensic interviewing. Psychiatry Psychol. Law 9, 69–78.

5 SPEECH AND VIDEO PROCESSING FOR HOMELAND SECURITY

Mark Maybury
Information Technology Center, The MITRE Corporation, Bedford, Massachusetts

5.1 SPEECH AND VIDEO FOR HOMELAND SECURITY

As articulated in the National Strategy for Homeland Security (www.whitehouse.gov/homeland/book) [1], homeland security requires effective performance of a number of primary missions such as border and transportation security and critical infrastructure protection. These activities are human intensive, both in terms of the objects of focus (e.g. citizens or foreigners crossing a border) and in terms of the government or contractor personnel performing these functions (e.g. TSA at US airports). Automation is necessary to ensure effective, objective, and affordable operations. Speech and video processing are important technologies that promise to address some of the severe challenges of the homeland security mission. Furthermore, there is some hope that the detection of visual or acoustic anomalies (e.g. unnatural human motion and voice stress) could yield improved deception detection. With thousands of miles of border with Mexico and Canada and 95,000 miles of shoreline, border and transportation security is a daunting challenge. Some important applications include the following:

• Video surveillance for anomalous and/or hostile behavior detection, which has important applications at border crossings as well as in monitoring remote border areas.
• Identification and tracking of individuals using biometrics (e.g. speech, face, gait, and iris). For example, speaker identification can be used for authentication for both physical access control and computer account access. While details of biometrics are beyond the scope of this chapter, we refer the reader to an overview text [2] or a more detailed algorithmic approach [3].

Other critical homeland security applications are as follows:

• Critical infrastructure protection, including key site monitoring (e.g. transportation, energy, food, and commerce) and video surveillance of public areas. This could include automated video understanding, in particular the detection, classification, and tracking of objects such as cars, people, or animals in time and space in and around key sites. Beyond object detection and tracking, it would include recognition of relationships and events.
• Automated processing of audio and video to understand broadcast news and/or index video surveillance archives.
• Audio hot spotting for surveillance at a border crossing or large-scale public events.
• The use of audio or video analysis to detect deception (e.g. irregular physical behavior and/or speech patterns), but also audio and video cryptography to obscure message content, audio and video steganography to hide its very existence, and countermeasures thereof.

Some of the requirements for these applications are severe. These include

• broad area surveillance;
• long duration: 24 × 7 detection;
• real-time detection;
• high accuracy and consistency;
• completely autonomous operation;
• low or intermittent communications bandwidth (e.g. for storage and exfiltration);
• low acquisition and maintenance cost.

Some deployments may also require low power consumption (and/or long battery life), limited storage, and intermittent connectivity.

5.2 THE CHALLENGE OF SPEECH

The ability to detect and track criminal or adversary communications is essential to homeland security. Whether for law enforcement or intelligence, searching conversational speech is a grand challenge. Telephone conversations alone illustrate the scale of the challenge, with over a billion fixed lines worldwide creating 3785 billion minutes (63 billion hours) of conversations annually, equivalent to about 15 exabytes of data (ITU 2002). Add to this rapidly growing mobile and wireless communication. In addition, 47,776 radio stations add 70 million hours of original radio programming per year. Further complicating this, approximately 6800 languages and as many as 10,000 dialects are spoken globally. In spite of this untapped audio gold mine, audio search requirements are only beginning to appear. As Figure 5.1 illustrates, there are over 300 spoken languages with more than one million speakers, but only 66 of these are written and have a translation dictionary. Of these, we have automatic speech recognition (ASR) and machine translation (MT) for only 44, and only 20 of these are considered “done” in the sense that systems exist for automated transcription and translation.

FIGURE 5.1 Spoken foreign language systems and needs. (Source: Linguistic Data Consortium’s DARPA Surprise Language Experiment assessment of FL resources, 2003.)

In addition to the challenge of lack of written materials, which we will return to subsequently, there are many challenges beyond scale. These include challenges with language in general, such as polysemy, ambiguity, imprecision, malformedness, intention, and emotion. And in addition to the traditional set of challenges with automated speech recognition, such as noise, microphone variability, and speaker disfluencies, the kind of conversational speech that occurs in telephone calls, meetings, and interviews has the following additional challenges:


• Multiparty. Multiple, interacting speakers.
• Talkover. Multiple simultaneous speakers talk over speaker turns.
• Spontaneity. Unpredictable shifts in speakers, topics, and acoustic environments.
• Diverse settings. Conversation is found in many venues including outdoor border crossings, indoor meetings, radio/TV talk shows, interviews, public debates, and lectures or presentations that vary in degree of structure, roles of participants, lengths, degree of formality, as well as variable acoustic properties.
• Acoustic challenges. Spoken conversations often occur over cell phones or handheld radios, which come in and out of range and have highly variable signal-to-noise ratios.
• Nonacoustic conversational elements. Speakers use clapping, laughing, booing, whistling, and other sounds and gestures to express agreement, disagreement, enjoyment, and other emotions; there is also outdoor noise (e.g. weather and animals) and indoor noise (e.g. machinery and music).
• Real time and retrospective. Access during the speech event (e.g. real-time stream processing) or after.
• Tasks. Speaker identification, word hot spotting, audio document routing (doc/passage/fact), retrieval or question answering, tracking entities and events, and summarization (e.g. speakers and topics).
• Multilingual. Multiple languages, sometimes from the same speaker.
• References. Since conversations are often performed in a physical context, the language often contains references to items therein (exophora).

Compounding these challenges, expert translators, particularly for low density languages, are expensive and scarce.


FIGURE 5.2 NIST benchmarks over time: word error rates of the best systems on read, broadcast, conversational, and meeting speech tasks, 1988–2011, shown relative to the range of human error in transcription. (http://www.nist.gov/speech/history.)

In addition to the challenges with speech, for large collections of audio, there exist many retrieval challenges such as triage, storage, query formulation, query expansion, query by example, results display, browsing, and so on.

5.3 AUTOMATED SPEECH PROCESSING

Figure 5.2 illustrates the significant progress made over the years in spoken language processing. The figure shows the best systems each year in evaluations administered by NIST to objectively benchmark the performance of speech recognition systems over time. The graph reports the reduction of word error rate (WER) over time. The systems were assessed on a wide range of increasingly complex and challenging tasks, moving from read speech, to broadcast (e.g. TV and radio) speech, to conversational speech, to spontaneous speech, to foreign language speech (e.g. Chinese Mandarin and Arabic). Over time, tasks have ranged from understanding read Wall Street Journal text, to understanding foreign television broadcasts, to the so-called “switchboard” (fixed telephone and cellphone) conversations. Future plans include meeting room speech recognition (NIST; [4]). As Figure 5.2 illustrates, word error rates for English (clean, well-formed, single-speaker speech, spoken clearly to a computer) are well below 10%. For example, computers can understand someone reading the Wall Street Journal with a 5% word error rate (1 word in 20 wrong). Conversations are harder, with broadcast news often achieving only a 15–20% WER and the CALLHOME data collection (phone calls) achieving 30–40% WER.
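WER itself is the ratio of word-level substitutions, insertions, and deletions (found via an edit-distance alignment against a reference transcript) to the number of reference words. A minimal, self-contained sketch (the example sentences are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a standard edit-distance dynamic program."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

reference = "the suspect crossed the border at noon"
hypothesis = "the suspect crossed a border at new"
print(round(word_error_rate(reference, hypothesis), 2))  # 2 errors / 7 words = 0.29
```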

5.4 AUDIO HOT SPOTTING

As an illustration of the state of the art, the Audio Hot Spotting (AHS) project [5–7] aims to support natural querying of audio and video, including meetings, news broadcasts, telephone conversations, and tactical communications/surveillance. As Figure 5.3 illustrates, the architecture of AHS integrates a variety of technologies, including speaker ID, language ID, nonspeech audio detection, keyword spotting, transcription, prosodic feature and speech rate detection (e.g. for detecting speaker emotion), and cross language search. An important innovation of AHS is the combination of word-based speech recognition with phoneme-based audio retrieval, so that each compensates for the other on keyword queries. Phoneme-based audio retrieval is fast and more robust to spelling variations and audio quality, but may have more false positives for short-word queries. In addition, phoneme-based engines can retrieve proper names or words not in the dictionary (e.g. “Shengzhen”) but, unfortunately, produce no transcripts for downstream processes. In contrast, word-based retrieval is more precise for single-word queries in good quality audio and provides transcripts for automatic downstream processes. Of course it has its limitations too: for example, it may miss hits for phrasal queries and out-of-vocabulary words, degrades in noisy audio, and is slower in preprocessing. Figure 5.4 illustrates the user interface for speech search, which includes a speaker and keyword search facility against both video and audio collections. The user can also search by nonspeech audio (e.g. clapping and laughter). For crosslingual needs, a query in English is translated to a foreign language (e.g. Spanish or Arabic) and used to retrieve hot spots in a transcription of the target media, which is then retrieved and translated into the query language. This process is illustrated in Figure 5.5. The user-typed word “crisis” is translated into an Arabic query term and used to search the target media, which is subsequently translated as shown.
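As a rough sketch of the mutual-compensation idea (not the actual AHS implementation, whose components and scoring are not described here), one could merge hit lists from the two engines, keeping word-based hits and falling back to sufficiently confident phoneme-based hits that the word engine missed. The hit format, time window, and confidence threshold below are all invented for illustration.

```python
def merge_hits(word_hits, phoneme_hits, window=1.0, phoneme_floor=0.7):
    """Fuse keyword hits: keep all word-based hits; add phoneme-based hits
    that are confident enough and not already covered by a word-based hit.
    Each hit is a (start_time_seconds, confidence) pair."""
    merged = list(word_hits)
    for t, conf in phoneme_hits:
        covered = any(abs(t - wt) <= window for wt, _ in word_hits)
        if not covered and conf >= phoneme_floor:
            merged.append((t, conf))
    return sorted(merged)

word_hits = [(12.4, 0.92), (87.1, 0.81)]
phoneme_hits = [(12.6, 0.88), (45.3, 0.75), (60.0, 0.55)]
print(merge_hits(word_hits, phoneme_hits))
# Word hits are kept; the uncovered, confident phoneme hit at 45.3 s is added.
```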

FIGURE 5.3 AHS architecture.

FIGURE 5.4 AHS search interface.

5.5 DECEPTION DETECTION

Detection of deception is important for assessing the value of informants, identifying deception at border crossings, and combating fraud; deception can be revealed by the face, voice, and body [8]. Evidence of increased pitch and vocal tension in deceptive subjects has been found in a literature survey [9]. The most widely cited sources of evidence of deception using speech include latency, filled pauses, discourse coherence, and the use of passive voice and contractions. However, most research on deceptive behavior has focused on visual cues such as body and facial gestures, or on descriptive as opposed to empirical studies, much less automated means of detection. Hirschberg et al. [10] and Graciarena et al. [11] report on the use of a corpus-based machine learning approach to automated detection of deception in speech. Both leverage the Columbia-SRI-Colorado (CSC) corpus, which consists of 22 native American English speakers who were motivated by financial reward to deceive an interviewer on two tasks out of six in sessions lasting between 25 and 50 min. Using a support vector machine based on prosodic/lexical features combined with a Gaussian mixture model based on acoustic features, Graciarena et al. [11] report 64.4% accuracy in automatically distinguishing deceptive from nondeceptive speech. Although these efforts are promising, one national study [12] argues for the need for significant interdisciplinary research in this important area.
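For readers unfamiliar with the modeling step, the sketch below shows only the generic pattern of training a support vector machine on prosodic/lexical features labeled deceptive versus nondeceptive. The features and values are synthetic, scikit-learn is assumed to be available, and this is not the system of Graciarena et al. [11] or the CSC corpus.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
# Hypothetical per-utterance features: mean pitch (Hz), pitch variance,
# speaking rate (words/s), filled-pause rate (per 100 words), latency (s).
nondeceptive = rng.normal([180, 400, 2.8, 3.0, 0.8], [25, 120, 0.4, 1.5, 0.3], (n, 5))
deceptive = rng.normal([195, 520, 2.5, 4.5, 1.2], [25, 140, 0.4, 1.8, 0.4], (n, 5))

X = np.vstack([nondeceptive, deceptive])
y = np.array([0] * n + [1] * n)  # 0 = nondeceptive, 1 = deceptive

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("Cross-validated accuracy:", round(cross_val_score(model, X, y, cv=5).mean(), 2))
```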

FIGURE 5.5 AHS crosslingual audio hot spotting.

5.6 THE CHALLENGE OF VIDEO

Just as acoustic information provides vital information for homeland security, so too is visual information a critical enabler. Although static images are commonly used to identify suspects, characterize facilities, and/or describe weapons and threats, motion pictures have become increasingly valuable because of their ability to capture not only static objects and their properties but also dynamic events. The following are the challenges faced by video processing:

• Broad area coverage. 24 × 7 video surveillance of a broad area poses challenges with processing, storage, power, and sustainability. For example,
  ◦ thousands of cameras are deployed in the United Kingdom for tasks such as facility surveillance, traffic monitoring, and environmental observations (e.g. river levels).
• Real-time processing. Events (e.g. border crossings and crimes) occur in real time and frequently require immediate intervention. For example,
  ◦ a new nationwide network of cameras at the National Automatic Number Plate Recognition Data Centre north of London will record up to 50 million license plates a day to detect duplicates and track criminals.
• Massive volume. Video requires roughly 10 times as much storage as audio; therefore, methods for compression should be efficient for storage and dissemination. Moreover, real-time or retrospective human review of material is tedious and an ideal opportunity for automation.
• Accuracy and consistency of detection, identification, and tracking. Object and event detection and recognition in a broad range of conditions (lighting, occlusion, and resolution) are severe challenges.
• Privacy preservation. The broad deployment of cameras raises challenges for privacy as well as cross-boundary sharing of identical systems.
• Processing. Effective understanding of video requires many subchallenges including format conversion, detection, segmentation, object/face recognition, gesture and gait recognition, and event understanding.
• Nature. Occlusion (e.g. fog and rain), lighting, object orientation, and motion require size, rotation, shape, and motion invariant detection that is robust to natural variation.
• Noise. Noise from lenses, cameras, the environment (e.g. lighting and smoke/fog/snow), storage, and transmission.
• Variability. The natural variability in foreground, background, objects, relationships, and behaviors as well as wide variations in illumination, pose, scale, motion, and appearance.

There are many benefits of automated video processing, including the following:

• Automated identification and tracking.
• Correlation. Storage and indexing can enable correlation of objects across time and space, pattern detection, forensics, as well as trend analysis.
• Cross cuing. Initial detection of objects or events can cue more complete or higher quality tracking.
• Compression. Object ID and tracking can dramatically reduce storage and dissemination needs.

There are many important application areas of video processing, from interview deception detection to monitoring of border crossings or facilities (e.g. airport and military base entrances). For example, the Bordersafe project [13] automatically extracts license plate numbers from video as cars travel in and around Tucson, Arizona. Tucson Customs and Border Protection (CBP) has captured over 1 million records of license plate number, state, date, and time from over 225,000 distinct vehicles from both the United States and Mexico. Comparison revealed that plates from over 13,000 of those border crossing vehicles (involved in nearly 60,000 border crossings) were associated with criminal records from Tucson and Pima County law enforcement.
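That comparison is essentially a join of plate reads against law enforcement records. A toy sketch of the idea, with entirely hypothetical record formats and values:

```python
# Illustrative cross-referencing of captured plate reads against a
# law-enforcement watch list; fields and values are hypothetical.
border_reads = [
    {"plate": "ABC1234", "state": "AZ", "timestamp": "2008-03-01T09:12"},
    {"plate": "XYZ9876", "state": "AZ", "timestamp": "2008-03-01T09:15"},
    {"plate": "ABC1234", "state": "AZ", "timestamp": "2008-03-02T17:40"},
]
watch_list = {("ABC1234", "AZ"), ("JKL5555", "AZ")}

matches = [r for r in border_reads if (r["plate"], r["state"]) in watch_list]
flagged_vehicles = {(r["plate"], r["state"]) for r in matches}
print(len(matches), "crossings by", len(flagged_vehicles), "flagged vehicle(s)")
```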

5.7 AUTOMATED VIDEO PROCESSING

The key elements necessary for automated understanding of video have been explored since the early days of vision research in robotics and artificial intelligence. In addition to systems to process imagery from security surveillance cameras, algorithms are needed to analyze the 31 million hours of original television programming per year from over 20,000 broadcast stations around the world. For example, as illustrated in Figure 5.6, using an integration of text, audio, imagery, and video processing, the Broadcast News Navigator [14] enables a user to browse and perform content-based search on videos personalized to his or her interests. By searching directly for specific content, users can find content two and one half times faster than with sequential video search, with no loss in accuracy. The related Informedia system (www.informedia.cs.cmu.edu) has explored video privacy protection via methods such as face pixelizing, body scrambling, masking, and body replacement.

FIGURE 5.6 Broadcast news navigation.

Homeland security users may need to monitor not only broadcast news but also other video sources such as security cameras. As illustrated in Figure 5.7, research at MIT has integrated question answering technology with video understanding methods to create a video question answering system. Figure 5.7 illustrates motion tracks detected in two different settings: an airport tarmac (a) and an entrance gate to an office park (b).

FIGURE 5.7 Motion tracks detected on airport tarmac (a) and office park (b).

This capability is used by Katz et al. [15] in a prototype information access system, called Spot, that combines a video understanding system with a question answering natural language front end to answer questions about video surveillance footage taken around the Technology Square area in Cambridge, Massachusetts. Spot can answer questions such as the following:


• “Show me all cars leaving the garage.”
• “Show me cars dropping off people in front of the white building.”
• “Did any cars leave the garage toward the north?”
• “How many cars pulled up in front of the office building?”
• “Show me cars entering Technology Square.”
• “Give me all northbound traffic.”

This kind of intuitive, query-based access to information can dramatically enhance facility situational awareness and enable focused investigation.
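In a deployed system, the natural language front end maps such questions onto structured queries over detected tracks and events. The toy sketch below shows only that final filtering step, with an invented event schema unrelated to Spot's actual representation.

```python
# Toy illustration of answering a "cars leaving the garage" query over
# motion-track events; the event schema and values are hypothetical.
events = [
    {"object": "car", "from_zone": "garage", "to_zone": "street", "time": "09:02"},
    {"object": "person", "from_zone": "lobby", "to_zone": "street", "time": "09:05"},
    {"object": "car", "from_zone": "street", "to_zone": "garage", "time": "09:11"},
    {"object": "car", "from_zone": "garage", "to_zone": "street", "time": "17:48"},
]

def cars_leaving(zone):
    """Return track events where a car exited the named zone."""
    return [e for e in events if e["object"] == "car" and e["from_zone"] == zone]

for e in cars_leaving("garage"):
    print(e["time"], "car left the", e["from_zone"])
```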

5.8 MULTICAMERA VIDEO ANALYSIS

In addition to moving object detection, identification, and tracking, the employment of active multicamera systems enables wide area surveillance, mitigates occlusion, and reveals 3D information [16]. However, multicamera systems require solutions for emplacement and use, selection of best views, cross camera handoff of tracked objects, and multisensor fusion. Such systems have been successfully used for surveillance of people at the Super Bowl and for traffic monitoring. Active cameras—those that support pan, tilt, and zoom—allow automated focusing of attention on objects of interest in scenes. In addition to the visible spectrum, infrared sensors can help track humans, animals, and vehicles hidden in dense foliage. Multicamera environments can, for example, enable continuous monitoring of critical infrastructure (e.g. an airport or seaport, military facility, or power plant), detect perimeter breaches, track moving people or vehicles, pan/tilt/zoom for identification, and issue alerts.

5.9 STATE OF THE ART

With all of the rapid advances in video processing, how well do these systems work? As illustrated in Figure 5.8, NIST organizes an annual benchmarking activity to compare the performance of video understanding systems. As can be seen, this annual event has grown from a few participants in 2001 processing about a dozen hours of video to dozens of participants processing hundreds of hours of video to support search for particular video segments. For example, in the 2004 NIST TRECVID benchmarking activities [17], participants included IBM Research, Carnegie Mellon University, and the University of Amsterdam. They applied their systems to four tasks required to find relevant segments in video data sets: shot boundary detection, story boundary detection, feature detection, and search. The video data set contained over 184 h of digitized news episodes from ABC and CNN, with the task of discovering 10 types of segments, in particular:

• Boat/ship. Segment contains video of at least one boat, canoe, kayak, or ship of any type.
• Bill Clinton. Segment contains video of Bill Clinton.
• Madeleine Albright. Segment contains video of Madeleine Albright.
• Train. Segment contains video of one or more trains or railroad cars that are part of a train.
• Beach. Segment contains video of a beach with the water and the shore visible.
• Airplane takeoff. Segment contains video of an airplane taking off, moving away from the viewer.
• People walking/running. Segment contains video of more than one person walking or running.
• Physical violence. Segment contains video of violent interaction between people and/or objects.
• Road. Segment contains video of part of a road, any size, paved or not.
• Basket scored. Segment contains video of a basketball passing down through the hoop and into the net to score a basket—as part of a game or not.

FIGURE 5.8 TRECVID trends: participants and video hours per year, 2001–2008.

To address the diversity of potential video data and to continually challenge researchers, the data sets grow and the evaluation tasks are expanded each year. For example, the TRECVID 2005 data set added multilingual video (Chinese and Arabic in addition to English), and the topics were slightly different, ranging from finding video segments of people (e.g. prisoner), places (e.g. mountain, building exterior, and waterscape/waterfront), and things (e.g. car, map, and US flag) to events (e.g. people walking/running, explosion or fire, and sports). In 2007, a video summary task was added to the existing shot boundary, search, and feature detection tasks, and in 2008 surveillance event detection was added along with 100 h of airport surveillance video. Effectiveness on video segment retrieval is measured primarily using mean average precision (the mean of the average precision of each query), which ranges widely by topic. Other measures include search processing time and precision at various depths. For interactive searches, participants are encouraged to collect data on usability as seen by each searcher. For example, in 2006, interactive retrieval of Tony Blair segments was achieved at nearly 90% mean average precision, whereas segments of people entering or leaving a building were recognized at only the 10% level.
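Mean average precision summarizes how high the relevant shots sit in each query's ranked result list and then averages that score across queries. A small sketch with two invented topics:

```python
def average_precision(ranked_relevance):
    """Average precision for one query, given 0/1 relevance judgments
    for the ranked result list (1 = the shot at that rank is relevant)."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_query_relevance):
    return sum(average_precision(q) for q in per_query_relevance) / len(per_query_relevance)

# Two hypothetical topics: one where relevant shots are ranked high,
# one where they are scattered far down the list.
queries = [
    [1, 1, 0, 1, 0],           # AP = (1/1 + 2/2 + 3/4) / 3, about 0.92
    [0, 0, 1, 0, 0, 0, 1, 0],  # AP = (1/3 + 2/7) / 2, about 0.31
]
print(round(mean_average_precision(queries), 2))  # about 0.61
```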

5.10 FUTURE RESEARCH

The challenges of audio and video analysis are daunting, but with the rapid growth of sources, the need is equally great. Spoken dialog retrieval is an exciting research area precisely because it contains all the traditional challenges of spoken language processing together with the challenges imposed by the retrieval task. Some important spoken conversation processing challenges include [18]

• dealing with multiple speakers;
• dealing with foreign languages and associated accents;
• incorporating nonspeech audio dialog acts (e.g. clapping and laughter);
• conversational segmentation and summarization;
• discourse analysis, such as analyzing speaking rates, turn taking (frequency and durations), and concurrence/disagreement, which often provides insights into speaker emotional state, attitudes toward topics and other speakers, and roles/relationships.

Some important speech retrieval challenges include the following:

• How can we provide query by example for a speech or audio signal, for example, find speech that sounds (acoustically and perceptually) like this? (See Sound Fisher in Reference 19.)
• How can we provide (acoustic) relevancy feedback to enhance subsequent searches?
• How do we manage whole story/long passage retrieval that exposes users to too much errorful ASR output or too much audio to scan?
• Because text-based keyword search alone is insufficient for audio data, how do we retain and expose valuable information embedded in the audio signal?
• Are nonlinguistic audio cues detectable and useful?
• Can we utilize speech and conversational gists (of sources or segments) to provide more efficient querying and browsing?

Some interesting application challenges are also raised, such as dialog visualization, dialog comparison (e.g. for call centers), and dialog summarization, alongside the challenge of addressing speech and dialog themselves. Like audio analysis, video analysis has many remaining research challenges. These include

• scalable processing to address large-scale video collections;
• processing of heterogeneous video sources, from cell phone cameras to handheld video cameras to high definition mobile cameras;
• robustness to noise, variability, and environmental conditions;
• bridging the “semantic gap” between low level features (e.g. color, shape, and texture) and high level objects and events.

The combination of both audio and video processing is an area of research that promises combined effects. These include




• cross modal analysis to support cross cuing for tasks such as segmentation and summarization;
• cross modal sentiment analysis for detection of bias and/or deception;
• cross media analysis for biometrics and identity management, to overcome the noise and errorful detection in single media (e.g. audio or video) identification;
• utilization of speech and conversational gists (of video sources or segments) to provide more efficient video querying and browsing.

In conclusion, speech and video processing promise significant enhancement to homeland security missions. Addressing challenges such as scalability, robustness, and privacy up front will improve the likelihood of success. Mission-oriented development and application promises to detect dangerous behavior, protect borders, and, overall, improve citizen security.

REFERENCES

1. Office of Homeland Security (2002). National Strategy for Homeland Security. http://www.whitehouse.gov/homeland/book.
2. Woodward, J., Orlans, N., and Higgins, P. (2003). Biometrics: Identity Assurance in the Information Age. McGraw-Hill, Berkeley, CA.
3. Gonzales, R., Woods, R., and Eddins, S. (2004). Digital Image Processing using MATLAB. Prentice-Hall, Upper Saddle River, NJ.
4. Zechner, K., and Waibel, A. (2000). DiaSumm: flexible summarization of spontaneous dialogues in unrestricted domains. Proceedings of the 18th Conference on Computational Linguistics, Saarbrücken, Germany, pp. 968–974.
5. Hu, Q., Goodman, F., Boykin, S., Fish, R., and Greiff, W. (2003). Information discovery by automatic detection, indexing, and retrieval of multiple attributes from multimedia data. The 3rd International Workshop on Multimedia Data and Document Engineering. September 2003, Berlin, Germany, pp. 65–70.
6. Hu, Q., Goodman, F., Boykin, S., Fish, R., and Greiff, W. (2004). Audio hot spotting and retrieval using multiple audio features and multiple ASR engines. Rich Transcription 2004 Spring Meeting Recognition Workshop at ICASSP 2004. Montreal.
7. Hu, Q., Goodman, F., Boykin, S., Fish, R., and Greiff, W. (2004). Audio hot spotting and retrieval using multiple features. Proceedings of the HLT-NAACL 2004 Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval. Boston, USA, pp. 13–17.
8. Ekman, P., O'Sullivan, M., Friesen, W., and Scherer, K. (1991). Face, voice and body in detecting deception. J. Nonverbal Behav. 15(2), 125–135.
9. DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., and Cooper, H. (2003). Cues to deception. Psychol. Bull. 129(1), 74–118.
10. Hirschberg, J., Benus, S., Brenier, J., Enos, F., Friedman, S., Gilman, S., Girand, C., Graciarena, M., Kathol, A., Michaelis, L., Pellom, B., Shriberg, D., Stolcke, A. (2005). Distinguishing deceptive from non-deceptive speech. Interspeech 2005. September 4–8, Lisbon, Portugal, pp. 1833–1836.
11. Graciarena, M., Shriberg, E., Stolcke, A., Enos, F., Hirschberg, J., and Kajarekar, S. (2006). Combining prosodic, lexical and cepstral systems for deceptive speech detection. Proceedings of IEEE ICASSP. Toulouse.


12. Intelligence Science Board (2006). Educing Information. Interrogation: Science and Art. National Defense Intelligence Council Press, Washington, DC, http://www.dia.mil/college/pubs/pdf/3866.pdf.
13. Chen, H., Wang, F.-Y., and Zeng, D. (2004). Intelligence and security informatics for homeland security: information, communication, and transportation. IEEE Trans. Intell. Transp. Syst. 5(4), 329–341.
14. Maybury, M., Merlino, A., and Morey, D. (1997). Broadcast news navigation using story segments. ACM International Multimedia Conference. November 8–14, Seattle, WA, pp. 381–391.
15. Katz, B., Lin, J., Stauffer, C., and Grimson, E. (2004). Answering questions about moving objects in videos. In New Directions in Question Answering, Maybury, M., Ed. MIT Press, Cambridge, MA, pp. 113–124.
16. Trivedi, M. M., Gandhi, T. L., and Huang, K. S. (2005). Distributed interactive video arrays for event capture and enhanced situational awareness. IEEE Intell. Syst. 20(5), 58–66.
17. Smeaton, A. F., Over, P., and Kraaij, W. (2006). Evaluation campaigns and TRECVid. In Proceedings of the 8th ACM International Workshop on Multimedia Information Retrieval (Santa Barbara, California, USA, October 26–27, 2006). MIR '06. ACM, New York, NY, pp. 321–330.
18. Maybury, M. (2007). Searching conversational speech. Keynote at Workshop on Searching Spontaneous Conversational Speech, International Conference on Information Retrieval (SIGIR-07). 27 July 2007. Seattle, WA.
19. Maybury, M. Ed. (1997). Intelligent Multimedia Information Retrieval. AAAI/MIT Press, Menlo Park, CA (http://www.aaai.org:80/Press/Books/Maybury-2/).

FURTHER READING

Maybury, M. Ed. (2004). New Directions in Question Answering. AAAI/MIT Press, Cambridge, MA.
NIST Meeting Room Project: Pilot Corpus. http://www.nist.gov/speech/test beds.
Popp, R., Armour, T., Senator, T., and Numrych, K. (2004). Countering terrorism through information technology. Commun. ACM 47(3), 36–43.
Tao, Li., Tompkins, R., and Asari, V. K. (2005). An Illuminance-Reflectance Nonlinear Video Enhancement Model for Homeland Security Applications. 34th Applied Imagery and Pattern Recognition Workshop (AIPR'05), pp. 28–35.


6 TRAINING FOR INDIVIDUAL DIFFERENCES IN LIE DETECTION ABILITY

Maureen O'Sullivan
University of San Francisco, San Francisco, California

Mark G. Frank
University at Buffalo, State University of New York, Buffalo, New York

Carolyn M. Hurley
University at Buffalo, State University of New York, Buffalo, New York

6.1 INTRODUCTION

Catching terrorists is a multilayered process. Although technological sensors are both rapid and reliable, as in the use of thermographic or facial and body analysis programs, there are points in the process of assessing deception where only a human lie detector can be used. This may occur after the automated system shows a “hit” on an individual, which subjects him or her to further scrutiny, or in other security domains where access to technology is limited or nonexistent. Given these situations, it is important to determine who should interview such potential terrorists. Should we train all security personnel to improve their basic abilities? Or, should we select those most amenable to training, because of their motivation, skill, or other characteristics? Or, should we select already expert lie catchers; and if we do, how do we find them? The literature on how to increase lie detection accuracy through training has been sparse, although an increasing number of scientists are addressing this issue. This overview will enumerate some of the factors involved in designing a good training study and examine the current state of knowledge concerning training for improved lie detection accuracy.


6.2 INDIVIDUAL DIFFERENCES IN LIE DETECTION ABILITY

Over the last 50 years, a general presumption has been that lie detection accuracy is a particular ability or cognitive skill [1] that might be an aspect of social-emotional intelligence [2]. This widely held belief implies something approximating a normal distribution of lie detection accuracy scores, with most scores in the average range and a few being very high or very low. However, a recent study questioned this assumption. A 2008 meta-analysis [3] of 247 lie detection accuracy samples concluded that although there was reliable evidence that people vary in the ease with which their lies can be detected, there is no evidence of reliable variance in the ability to detect deception. This rather controversial conclusion was criticized on a variety of grounds [4, 5]: most of the studies used college students, not professional lie catchers; the statistical model did not satisfy the classical test theory on which it was based; the metric used was standard deviations without reference to means, a highly misleading unit of measurement; and the authors ignored a substantial literature demonstrating convergent validity between lie detection accuracy and various social and psychological variables. Furthermore, in the last several years, as researchers use lie scenarios more appropriate to security personnel in their research, the number of reports in which highly accurate groups have been identified has increased [6]. The study of highly accurate individual lie detectors has been less common [7–9]. These studies suggest, however, that practice and motivation to detect deception are important variables. Moreover, expert lie detectors are more accurate with lies relevant to their profession [5, 9, 10]. Frank and Hurley [10] found that among law enforcement personnel, accuracy was greater for those with more experience in different domains of law enforcement. Homicide investigators, for example, were more accurate than fraud investigators who were more accurate than patrolmen walking a beat. Similarly, O’Sullivan [11] found, as predicted, that college administrators were more accurate in detecting the lies of college students than other non-faculty college personnel. In addition to supporting the view that experience makes a difference in lie detection accuracy, some of these studies support the view that experience with a particular kind of lie is important in lie detection. By extension, training to enhance lie detection accuracy should emphasize the particular lie of interest. Evidence relating to this point is reviewed below.

6.3 HOW EFFECTIVE IS TRAINING TO INCREASE LIE DETECTION ACCURACY?

In a review of 11 lie detection training studies completed between 1987 and 1999, Frank and Feeley [12] reported a small, but significant, positive effect of training. Their methodological review suggested that the literature was hindered by several weaknesses in the research designs of most of the studies performed. They emphasized the importance of several variables in designing training programs and evaluating them: (i) the relevance of the lie to the lie detectors being trained. Training college students to detect lies about friends told by other college students may not generalize to training law enforcement personnel about lies about past or present crimes; (ii) whether the lie scenario uses high stakes lies—lies that involve strong rewards and punishments for successful and unsuccessful deceiving—may affect both lie detection accuracy and training conducted with them. A recent meta-analysis [6] suggests that even professional lie catchers, such


as police personnel, will not be accurate in detecting low stakes lies, lies that are not important to the liars’ or the truth tellers’ self-identity, or lies without significant rewards or punishments. Their meta-analysis found that the average lie detection accuracy of police tested with high stakes lies was significantly higher than that of police tested with low stakes lies; (iii) in many studies, training consists of a brief, written description of potential cues to deception with no actual examples of the behaviors, no feedback, and no practice with similar or related kinds of behavior. Adequate training needs practice, feedback, and exemplars similar to the materials; (iv) basic experimental protocol should be followed, ideally, through the use of randomly determined experimental (trained) and control (untrained) groups with pre- and post-testing of both the experimental and the control groups. Different liars and truth tellers should be included in the pre- and post-testing measures. And, of course, the difficulty of the two measures should be calibrated for equivalence; (v) assuming that a bona fide training effect is found (based on a standard experimental protocol), and that training with one kind of lie has been shown to increase accuracy with that lie, another issue is whether the training is lie-specific or generalizes to increased accuracy with other kinds of lies; (vi) in addition to generalization to other kinds of lies (what Frank and Feeley [12] called Situational Generality), a related issue is time generality. How long does such increased accuracy last? Is it a permanent learning effect? Or one that dissipates outside of the training environment? These six factors are sine qua nons for lie detection training research.

In a more recent methodological review, Frank [13] expanded the discussion of these topics and included many suggestions about ways in which to improve lie detection accuracy studies. In the present overview, however, we use the Frank and Feeley [12] paradigm to examine the nine lie detection training studies that were completed from 2000 to 2007. Table 6.1 summarizes the strengths and defects of these studies in the light of the Frank and Feeley paradigm. In conclusion, we will discuss the importance of individual differences in designing training programs, over and above the variation in individual lie detection accuracy.

As Table 6.1 shows, of the nine training studies, three found no significant training effect; in one of these studies the lie scenario may have been irrelevant to the test takers [14]. In the others, the training may have been inadequate [18, 20]. Among 16 different groups tested, nine (Table 6.1, groups 4–8, 12, 13, 15, 16) showed a significant lie detection accuracy increase, ranging from 2% to 37% (median increase = 20%).

6.4 RELEVANCE

Frank and Feeley [12] argued that training should be on lies relevant to the trainees. We agree, but in a recent publication [6] we refined this argument. It may be even more important that the lie scenario used for training contains the kinds of behaviors, both verbal and non-verbal, that provide clues to deception than that the lie superficially looks like a lie of interest. This distinction is what test psychologists call face validity versus construct validity and what experimental psychologists term mundane realism versus experimental realism. A lie scenario may seem relevant to a law enforcement lie detection situation because it shows a felon being interviewed by a police officer (face validity, mundane realism). But if the lie is about a topic of no importance to the felon, the emotional and cognitive aspects of a high stakes lie will not be present. Conversely, a college student discussing a strongly held belief, who will receive substantial rewards

if he tells the truth successfully or lies successfully and who will be punished if he is unsuccessful, may better simulate the behaviors seen in a law enforcement interview (construct validity, experimental realism). So while the construct validity or experimental realism of a scenario is the more important variable, the relevance or interest of the lie to the lie catcher (its face validity or mundane realism) must also be considered. In screening expert lie detectors from several different professional groups including law enforcement personnel and therapists, O'Sullivan [5] found that about one-third of the experts were at least 80% accurate on each of three different lie detection tasks. The remaining two-thirds of the experts obtained 80% on two of the three tests. For this second group, their lowest score was either on a test in which young men lied about stealing a significant amount of money or a test in which young women lied or told the truth about whether they were watching a gruesome surgical film or a pleasant nature film. Not surprisingly, the lowest of the three scores for therapists was on the crime test; for law enforcement personnel, their lowest score was on the emotion test. This finding was highly significant.

Among recently published lie detection accuracy studies, several meet the criterion of relevance, whether this term is used to refer to importance to the trainees (mundane realism, face validity) or actual validity for the lies that lie catchers need to be accurate on (experimental realism, construct validity). Hartwig [17] tested police officers using a mock theft scenario and allowed the trainees to interview the experimental suspects. Akehurst [14], on the other hand, used test stimuli in which children lied or told the truth about an adult taking a photograph. Since it is unlikely that much arousal happened, whether this scenario had either mundane or experimental realism for the subjects is doubtful. All of the other studies used college students as target liars and truth tellers. Insofar as the trainees were students or therapists, who work with clients in that age group, such materials are probably relevant to them.

TABLE 6.1 Lie Detection Accuracy Training Studies, 2000–2007

Group | Study              | n   | Sample Trained  | Accuracy Pre/Post | Relevance of Test | High Stakes of Test | Training Adequacy | Testing Adequacy | Situational Generality | Time Generality
------|--------------------|-----|-----------------|-------------------|-------------------|---------------------|-------------------|------------------|------------------------|----------------
1     | Akehurst [14]      | 26  | Police          | Ns                | No                | No                  | Yes               | Yes              | No                     | No
2     | Akehurst [14]      | 14  | Social workers  | Ns                | No                | No                  | Yes               | Yes              | No                     | No
3     | Akehurst [14]      | 18  | College         | Ns                | No                | No                  | Yes               | Yes              | No                     | No
4     | Crews [15]         | 29  | College         | 42/69             | Yes               | No                  | Yes               | Yes              | No                     | No
5     | Crews [15]         |     | College         | 44/64             | Yes               | No                  | Yes               | Yes              | No                     | No
6     | George [16]        | 177 | Air Force       | 54/60             | Unknown           | Unknown             | Yes               | Unknown          | No                     | No
7     | George [16]        |     | Air Force       | 47/61             | Unknown           | Unknown             | Yes               | Unknown          | No                     | No
8     | Hartwig [17]       | 164 | Police trainees | 56/85a            | Yes               | Perhaps             | Yes               | Yes              | No                     | No
9     | Levine [18]        | 256 | College         | Ns                | Yes               | No                  | No                | No               | No                     | No
10    | Levine [18]        | 90  | College         | Ns                | Yes               | No                  | No                | No               | No                     | No
11    | Levine [18]        | 96  | College         | Ns                | Yes               | No                  | No                | No               | No                     | No
12    | Levine [18]        | 158 | College         | 56/58a            | Yes               | No                  | Yes               | No               | No                     | No
13    | O'Sullivan [19]    | 78  | College         | 57/61             | Yes               | Yes                 | Yes               | No               | No                     | No
14    | Porter [20]        | 151 | College         | Ns                | Yes               | Yes                 | No                | Yes              | No                     | No
15    | Porter [21]        | 20  | Parole officers | 40/77             | No                | Yes                 | Yes               | Yes              | No                     | Perhaps
16    | Santarcangelo [22] | 97  | College         | 65/69             | Yes               | No                  | Perhaps           | Yes              | No                     | No

Note: College: college students; Accuracy: pretest accuracy/post-test accuracy scores for same individuals.
a Accuracy for post-test only design: untrained accuracy/trained accuracy scores.

6.5 HIGH STAKES LIES

Among the nine training studies published between 2000 and 2007, four used what we consider to be high stakes lies. Porter [20, 21] used a scenario in which targets lied or told the truth about highly emotional events in their personal lives. We consider lies with a strong self-identity aspect to be high stakes. O’Sullivan [19] used a scenario in which both personal identity and a large cash reward were involved. Although the Hartwig study [17] used a sanctioned mock theft scenario which reduces the stakes for the liars and truth tellers, the targets also received a lawyer’s letter which may have “bumped up” the stress of the situation. (Three of these four studies achieved a significant learning effect.) The other studies included scenarios in which college students told social lies about friends or lied about whether they had headphones hidden in their pockets. (They had been directed to do so by the experimenter, so little emotional arousal could be expected.)

6.6 TRAINING

Outstanding expertise in lie detection is likely the result of a host of individual difference variables such as interest, extensive and varied life experience, motivation, practice, and


feedback with professionally relevant lies that most expert lie detectors seem to share. In addition, there are probably particular kinds of skills such as visual or auditory acuity, pattern recognition, and social or emotional memory that vary from expert to expert and that will cause them to be more or less expert on different kinds of lies, depending on their particular subset of skills. So while expert lie detection employs a host of skills, training for lie detection accuracy in a particular course or a particular study might more efficiently proceed by training in a focused skill or set of skills known to be related to lie detection. Many of the recent lie detection studies used this approach, narrowing their focus and evaluating the effectiveness of training with a particular kind of knowledge or subset of cues.

Santarcangelo [22] found that informing trainees about either (i) verbal content cues (plausibility, concreteness, consistency, and clarity, which are included in the more extensive Criteria-Based Content Analysis (CBCA) protocol); (ii) nonverbal cues (adaptors, hand gestures, foot and leg movements, and postural shifts); or (iii) vocal cues (response duration, pauses, speech errors, and response latency) resulted in lie detection accuracy greater than that of a no-cues control group.

Levine [18] conducted a series of studies on how to increase lie detection accuracy that also used mere verbal description of cues. In three of the studies, a lecture describing general behavioral cues comprised one condition. A second condition was a bogus training group in which incorrect information about lie detection clues was given to the subjects. The control group received no information about lie detection clues. None of the three studies obtained significant results in the predicted direction. In the fourth study, behavioral cues actually occurring in the stimulus materials were used for the lecture condition. In this condition, a significant difference was found between the training lecture (58%) and the control condition (50%). However, the bogus training also resulted in significantly increased accuracy (56%), which was not significantly different from the authentic training condition. Interpretation of this study is complicated by the use of only two different stimulus persons as the target liars and truth tellers. Other researchers are also designing training studies which teach those behavioral cues actually existing in the training and testing materials [15, 23]. For studies using this training method, situational generality (testing on other lie detection tests as well) is particularly important.

Hartwig [17] took a novel approach by training police trainees to adjust the timing of their questions. Rather than assessing the nonverbal behaviors of the liars and truth tellers, actual evidence (eyewitness testimony, fingerprints, etc.) was available and the liars and truth tellers were informed of this during the interview. The Hartwig study found that if interviewers held back knowledge of the evidence until later in the interview, liars were more likely to make inconsistent statements, which increased detection accuracy for the interviewers. This training is much more like the kind of interview situation in which law enforcement officers decide the honesty of suspects. Such training, however, may not generalize to interview situations in which no evidence is available.
An unusual feature of deception research, although certainly not new in other kinds of training, is the use of computer programs in lieu of instructor presentation or printed materials. Crews [15] and George [16] demonstrated that there was no difference between a computer-based training program and the same material presented by a human instructor. In both cases, significantly increased accuracy was achieved. Although most of the studies provided examples of honest and deceptive behaviors for trainees, some did not. Subjects in the Levine [18] and Santarcangelo [22] studies, for example, only received a written sheet of cue information that could be read rather


quickly. It is interesting that these studies found a significant, albeit small (4%) increase in accuracy, whereas studies using more lengthy training procedures [15, 17] reported gains in excess of 20%.

6.7 TESTING

(a) Randomization. Trainees were randomly assigned in all of the studies. Most of the studies used a pre–post design except those of Hartwig [17] and Levine [18], which utilized a random assignment, post-group comparison design. Random assignment in a post-group-only design assumes that all assigned interviewers or judges are alike prior to training and that differences afterwards are due to the training alone. A post-test-only design does not completely rule out the possibility that trained and untrained interviewers or judges, even if randomly assigned, were different before the experiment (a simple numerical illustration follows this list).

(b) Independence of items in the stimulus materials. Although most of the lie detection materials used different liars or truth tellers for each “item,” some did not. Levine [18], for example, used only two targets, who both lied and told the truth about items on a test. When “items” are not independent, biases, personal likes and dislikes of particular kinds of people, and familiarity with particular kinds of people or particular behavioral styles can all affect the final scores. These biases may reflect factors other than lie detection accuracy.

(c) Independence of targets in pre–post designs. All of the pre- and post-test studies, except O'Sullivan's [19], used different liars and truth tellers for their pre- and post-tests. Although a control group ameliorates the effect of mere familiarity on increased lie detection accuracy, it is preferable to have different individuals as targets in the pre- and post-test measures and to ensure that the tests are of equivalent difficulty. The Crews study [15] did an especially careful job of determining that their pre- and post-tests were equivalent in difficulty, establishing their norms in a pilot study. None of the other studies did this, or if they did, they did not mention it.

(d) Numbers of targets. Except for Levine [18], who used only two test subjects, most of the studies used 6 to 12 subjects for the pre-test and/or post-test measures.
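As a simple illustration of why the pre/post design with a control group matters (the accuracy scores below are hypothetical, not data from any study reviewed here), the training effect can be estimated as the difference in gains between the trained and control groups, whereas a post-test-only comparison cannot separate training from pre-existing group differences:

```python
# Hypothetical percent-correct lie detection scores; purely illustrative.
# The control group happens to start with a higher baseline than the trained group.
trained_pre,  trained_post = [52, 48, 55, 50], [68, 63, 70, 66]
control_pre,  control_post = [56, 54, 58, 55], [58, 56, 60, 57]

def mean(scores):
    return sum(scores) / len(scores)

# Pre/post design with a control group: training effect as the difference in gains.
gain_trained = mean(trained_post) - mean(trained_pre)   # 15.5 points
gain_control = mean(control_post) - mean(control_pre)   #  2.0 points
print(f"estimated training effect: {gain_trained - gain_control:.1f} points")  # 13.5

# Post-test-only comparison: conflates the training effect with the chance
# baseline difference between the randomly assigned groups.
print(f"post-test-only difference: {mean(trained_post) - mean(control_post):.1f} points")  # 9.0
```

Under these illustrative numbers, the post-test-only comparison understates the effect because the control group started higher; with different baselines it could just as easily overstate it.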

6.8 SITUATIONAL GENERALITY

All of the studies used a single kind of lie so the generalizability of training for lie detection accuracy is unknown. Given that some of the studies with the greatest increase in accuracy taught and emphasized the cues that were actually contained in the materials [15, 16], the issue of situational or lie generality is an important one.

6.9 TIME GENERALITY

None of the studies reviewed examined the temporal stability of any gain in lie detection accuracy, so we have no way of knowing whether gains in lie detection accuracy survive


the time span of the training course. Researchers are aware of this issue, however. Porter [21] spread the training over five weeks, and found a highly significant increase in detection accuracy. Whether this gain would last longer than five weeks, however, is unknown. Marett [24] was specifically interested in the effect of lie detection history (training over time) on final accuracy, but the small number of subjects and items did not allow them to reach any conclusions. (This study is not reviewed since no accuracy means were reported.)

6.10 INDIVIDUAL DIFFERENCES RELATED TO LIE DETECTION ACCURACY

In training to increase lie detection accuracy, a variety of individual difference abilities need to be considered. The already existing ability of the trainees is one that has often been overlooked. It seems reasonable, however, that training which provides new information to mediocre lie detectors may be superfluous to expert ones. And providing specialized training, in verbal content analysis or facial expression recognition or other nonverbal cues, might be more advantageous for those already at an average or above average lie detection accuracy level. No research exists which examines the role of pre-existing lie detection accuracy on the efficacy of different lie detection training paradigms.

In our work with expert lie detectors who have been trained in facial expression recognition, several of them have reported a disruption of their ability to assess truthfulness in the months immediately following the training. With practice, however, according to their self-reports, they were able to incorporate the new information into their skill set. Kohnken [25] and Akehurst [14] also described reports from police trainees that they needed more time to incorporate the new information provided. (In these studies it was verbal content training rather than facial expression recognition.) A difficulty in examining this hypothesis (that more expert lie detectors may have an initial disruption effect, resulting in a decrement in lie detection accuracy) may occur due to the ceiling effect or regression to the mean for the lucky guessers in the first testing. If trainees are already highly accurate prior to training (70% or better), there is little room for improvement as measured by most existing lie detection accuracy measures. Many lie detection accuracy tests are relatively brief; the median number of items is ten. Clearly, new tests containing more items of greater difficulty are necessary. The issue of item difficulty is also an important one. Many items in existing lie detection measures are difficult because the lies are trivial and there are no emotional and/or cognitive clues to discern. Item difficulty should be based on subtle cues that are present, although difficult to distinguish, or should reflect the kinds of personality types (outgoing, friendly) that are particularly difficult for American judges to perceive as liars.

Other individual difference variables that have been largely overlooked in studies of lie detection accuracy training are the intelligence and cognitive abilities of the lie detector. O'Sullivan [26] demonstrated that the fundamental attribution error was negatively related to accurate detection of liars. Whether such cognitive biases can be corrected through training has not been examined. Although many people seem to believe that lie detection is a natural ability unrelated to education or training, O'Sullivan noted [27] that more than half of her 50 expert lie detectors have advanced degrees and all have at least a two-year associate's degree. The interpretation of the many cognitive and emotional cues that


occur while lying and telling the truth may take a superior baseline level of intelligence to decipher. This hypothesis has also not been examined. On the other hand, Ask and Granhag [28] found no relationship between lie detection accuracy and cognitive or personality variables such as need for closure, attributional complexity, and absorption. The lie scenarios they used, however, may not have provided sufficient score variance to examine their hypotheses adequately.

Many expert lie detectors seem to have an ongoing life commitment to seeking the truth [5]. This kind of commitment and practice cannot be taught in a single training program, which suggests that selecting already accurate lie detectors might be a more sensible approach to use when staffing personnel to perform lie detection interviews. This option, however, may be difficult to implement given the relative rarity of expert lie detectors (from 1 per thousand in some professional groups to 20% in others [5]) and the personnel restrictions in some agencies.

In addition to individual differences in lie detection accuracy as a factor to be considered in designing and implementing lie detection accuracy training courses, the role of other individual difference factors needs to be considered. Deception researchers [9] have noted the extraordinary motivation of expert lie detectors to know the truth. Porter [29] attempted to examine motivation by randomly assigning subjects to one of two levels of motivation to succeed at a lie detection task. This motivation manipulation had no impact on consequent lie detection accuracy. An experimentally manipulated motivation to detect deception, however, may not be a sufficient analog for the life-long commitment to discern the truth in one's profession and one's life that some expert lie detectors show.

To date there is mounting evidence that certain law enforcement personnel groups [6, 30, 31] and individuals [5, 7] are accurate at least with certain kinds of lies. There is replicated evidence that groups of forensic specialists (psychologists and psychiatrists), federal judges [31], and dispute mediators [5] are also significantly above chance in their ability to discern the truth. In all of these studies, comparison groups, usually of college students, have average accuracies at the chance level on the tests used. This provides some support for the view that the lie detection tests are not easy, which rules out one explanation for their high accuracy.

While commitment to lie detection is an aspect of some expert lie catchers' professional lives, O'Sullivan [19] found that even among college students, concern for honesty was significantly related to lie detection accuracy. Students who reported rarely lying to friends obtained higher accuracy on a lie detection measure than students who lied to friends frequently. In this same study, a high rating for honesty as a value when compared with other values (such as a comfortable life) also distinguished more and less accurate lie detectors.

Given the importance of emotional clues in detecting deception, it is not surprising that a number of studies have reported significant correlations between emotional recognition ability and lie detection accuracy. Warren, Schertler, and Bull [32], for example, demonstrated that accuracy at recognizing subtle facial expressions using the SETT (Subtle Expression Training Tool [33]) was positively related to accuracy in detecting emotional lies, but not nonemotional ones.
(This study underscores the need for situational generality of lie scenarios as discussed earlier.) Ekman and O’Sullivan [30], Frank and Ekman [34], and Frank and Hurley [10] all found a significant relationship between micro-expression detection accuracy and lie detection accuracy using precursors of the Micro-Expression Training Tool (METT) [35]. Frank [36] also found that being trained on micro-expressions significantly improved detecting emotions that occurred while lying.


Many IQ tests are highly saturated with verbal content, so it is likely that the ability to apply one type of verbal system (e.g., CBCA) in improving lie detection accuracy may be related to verbal intelligence. Vrij [37] found individual differences in the ability to learn CBCA in order to lie or tell the truth more effectively. While the ability to learn CBCA may have a cognitive component, the study also found that ability to use CBCA in truth and lie performance was related to social anxiety. Porter's [29] report of a significant correlation between handedness and lie detection accuracy (left-handed lie catchers being superior) also suggests a biologically based individual difference that should be considered in lie detection accuracy programs. Etcoff and her colleagues [38] also reported a similar right brain advantage in lie detection.

Other individual difference variables of interest have included gender and personality variables such as social skill and Machiavellianism. For all of these variables, conclusions are difficult to draw because of the widely varying adequacy of the lie detection scenarios used, or the lack of variance in lie detection accuracy of some of the subjects. For example, in one study [39] which reported an interaction effect between gender and increased accuracy with training, the differing mean accuracies of the two genders at the start of the study compromise this conclusion. Before training, average accuracy for males was 47%, which increased to 70% after training. For females, pretraining accuracy was 68%, which decreased to 62% after training. Pretraining performance for females was significantly higher than for males, giving females less headroom for improvement. Even though the males' accuracy increased significantly while the females' did not, the difference in their final accuracy levels was not significant. This effect might reflect a room-for-improvement phenomenon rather than a gender one. Some low-scoring females might have shown some improvement. The confounding of base accuracy level and gender would need to be clarified before conclusions can be drawn about gender effects. Overall, no consistent gender superiority in lie detection accuracy or in training effectiveness has been demonstrated.

Training studies with relevant tasks, focused training programs, and reliable test materials known to contain behavioral clues or other evidence relevant to lie detection have resulted in a growing body of research demonstrating that lie detection is difficult for most people, but that improvement is possible with well-honed training programs. Selecting the best detectors within an organization may be more cost-effective, but it too is fraught with problems. The tasks used to determine who goes forward need to mirror the structural features of the scenarios to which these personnel will apply their skills. And, ideally, it would be useful to develop some metric as to how well they do in the real world, compared to those not selected. For example, criteria such as how much contraband is confiscated, how many cases go to trial and result in a conviction, or other goals specific to the agency may be useful. This would require a new way of thinking about security, but it may violate assumptions about equal treatment for all agency personnel.

6.11 CONCLUSION

We end on an optimistic note. Increasingly, researchers are identifying highly accurate lie catchers. This increased range of lie detection accuracy can provide a proving ground for developing lie-specific training. Research on how expert lie detectors do what they do can suggest materials to be included in lie detection courses. Researchers have also


become increasingly sophisticated about the need for experimental validity in their work. They have also become more sophisticated about the value of training on one particular skill or clue domain at a time (e.g., CBCA, METT). We believe the tools of the scientist can be successfully applied to real-world security settings. But more work is needed in order to calibrate the cost/benefit ratio because so much of the science is not directly relevant to security personnel. We see this as a call for increased cooperation between scientists who are sympathetic to the pressures on security personnel and practitioners who desire scientific help in their professions. Once we achieve that combination of forces, we can move this issue forward to identify the optimal way to deploy people in the lie detection process.

REFERENCES

1. Ekman, P. (2001). Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. W. W. Norton & Co, New York.
2. O'Sullivan, M. (2005). Emotional intelligence and detecting deception. Why most people can't "read" others, but a few can. In Applications of Nonverbal Communication, R. E. Riggio, and R. S. Feldman, Eds. Erlbaum, Mahwah, NJ, pp. 215–253.
3. Bond, C. F. Jr., and DePaulo, B. M. (2008). Individual differences in judging deception: accuracy and bias. Psychol. Bull. 134(4), 501–503. DOI: 10.1037/0033-2909.134.4.477.
4. Pigott, T. D., and Wu, M. (2008). Methodological issues in meta-analyzing standard deviations: comment on Bond and DePaulo (2008). Psychol. Bull. 134(4), 498–500. DOI: 10.1037/0033-2909.134.4.498.
5. O'Sullivan, M. (2008). Home runs and humbugs: comment on Bond and DePaulo (2008). Psychol. Bull. 134(4), 493–497. DOI: 10.1037/0033-2909.134.4.493.
6. O'Sullivan, M., Frank, M. G., Hurley, C. M., and Tiwana, J. Police lie detection accuracy: the effect of lie scenario. Law Hum. Behav., In press.
7. Bond, G. A. (2008). Deception detection expertise. Law Hum. Behav. 32(4), 339–351. DOI: 10.1007/s10979-007-9110-z.
8. O'Sullivan, M. (2007). Unicorns or Tiger Woods: are lie detection experts myths or rarities? A response to On lie detection 'Wizards' by Bond and Uysal. Law Hum. Behav. 31(1), 117–123. DOI: 10.1007/s10979-006-9058-4.
9. O'Sullivan, M., and Ekman, P. (2004). The wizards of deception detection. In The Detection of Deception in Forensic Contexts, P. A. Granhag, and L. Stromwell, Eds. Cambridge University Press, Cambridge, pp. 269–286.
10. Frank, M. G., and Hurley, C. M. (2009). Detection Deception and Emotion by Police Officers. Manuscript in preparation.
11. O'Sullivan, M. (2008). Lie detection and aging. Annual Conference Society for Personality and Social Psychology. Albuquerque, NM.
12. Frank, M. G., and Feeley, T. H. (2003). To catch a liar: challenges for research in lie detection training. J. Appl. Commun. Res. 31(1), 58–75.
13. Frank, M. G. (2005). Research methods in detecting deception research. In Handbook of Nonverbal Behavior Research, J. A. Harrigan, K. R. Scherer, and R. Rosenthal, Eds. Oxford University Press, New York, pp. 341–368.
14. Akehurst, L., Bull, R., Vrij, A., and Kohnken, G. (2004). The effects of training professional groups and lay persons to use criteria-based content analysis to detect deception. Appl. Cogn. Psychol. 18(7), 877–891. DOI: 10.1002/acp.1057.


15. Crews, J. M., Cao, J., Lin, M., Nunamaker, J. F. Jr., and Burgoon, J. K. (2007). A comparison of instructor-led vs. web-based training for detecting deception. J. STEM Educ. 8(1/2), 31–40.
16. George, J. F., Biros, D. P., Adkins, M., Burgoon, J. K., and Nunamaker, J. F. Jr. (2004). Testing various modes of computer-based training for deception detection. Proc. Conf. ISI. 3073, 411–417.
17. Hartwig, M., Granhag, P. A., Stromwall, L. A., and Kronkvist, O. (2006). Strategic use of evidence during police interviews: when training to detect deception works. Law Hum. Behav. 30(5), 603–619. DOI: 10.1007/s10979-006-9053-9.
18. Levine, T. R., Feeley, T. H., McCornack, S. A., Hughes, M., and Harms, C. M. (2005). Testing the effects of nonverbal behavior training on accuracy in deception detection with the inclusion of a bogus training control group. West. J. Commun. 69(3), 203–217. DOI: 10.1080/10570310500202355.
19. O'Sullivan, M. (2003). Learning to detect deception. Annual Conference of the Western Psychological Association. Vancouver, BC.
20. Porter, S., McCabe, S., Woodworth, M., and Peace, K. A. (2007). 'Genius is 1% inspiration and 99% perspiration' . . . or is it? An investigation of the impact of motivation and feedback on deception detection. Leg. Criminol. Psychol. 12(2), 297–309. DOI: 10.1348/135532506X143958.
21. Porter, S., Woodworth, M., and Birt, A. R. (2000). Truth, lies, and videotape: an investigation of the ability of federal parole officers to detect deception. Law Hum. Behav. 24(6), 643–658. DOI: 10.1023/A:1005500219657.
22. Santarcangelo, M., Cribbie, R. A., and Hubbard, A. S. (2004). Improving accuracy of veracity judgment through cue training. Percept. Motor Skill. 98(3), 1039–1048.
23. Cao, J., Lin, M., Deokar, A., Burgoon, J. K., Crews, J. M., and Adkins, M. (2004). Computer-based training for deception detection: What users want? Proc. Conf. ISI. 3073, 163–175.
24. Marett, K., Biros, D. P., and Knode, M. L. (2004). Self-efficacy, training effectiveness, and deception detection: a longitudinal study of lie detection training. Proc. Conf. ISI. 3073, 187–200.
25. Kohnken, G. (1987). Training police officers to detect deceptive eyewitness statements: Does it work? Soc. Behav. 2(1), 1–17.
26. O'Sullivan, M. (2003). The fundamental attribution error in detecting deception: the boy-who-cried-wolf effect. Pers. Soc. Psychol. Bull. 29(10), 1316–1327. DOI: 10.1177/0146167203254610.
27. O'Sullivan, M. (2009). Are there any "natural" lie detectors? Psychol. Today. Available at http://blogs.psychologytoday.com/blog/deception/200903/are-there-any-natural-lie-detec1tors.
28. Ask, K., and Granhag, P. A. (2003). Individual determinants of deception detection performance: Need for closure, attribution complexity and absorption. Goteborg Psychol. Rep. 1(33), 1–13.
29. Porter, S., Campbell, M. A., Stapleton, J., and Birt, A. R. (2002). The influence of judge, target, and stimulus characteristics on the accuracy of detecting deceit. Can. J. Behav. Sci. 34(3), 172–185. DOI: 10.1037/h0087170.
30. Ekman, P., and O'Sullivan, M. (1991). Who can catch a liar? Am. Psychol. 46(9), 189–204.
31. Ekman, P., O'Sullivan, M., and Frank, M. G. (1999). A few can catch a liar. Psychol. Sci. 10(3), 263–266.
32. Warren, G., Schertler, E., and Bull, P. (2009). Detecting deception from emotional and unemotional cues. J. Nonverbal Behav. 33(1), 59–69. DOI: 10.1007/s10919-008-0057-7.
33. Ekman, P., and Matsumoto, D. (2003). Subtle Expression Training Tool.


34. Frank, M. G., and Ekman, P. (1997). The ability to detect deceit generalizes across different types of high-stake lies. J. Pers. Soc. Psychol. 72(6), 1429–1439.
35. Ekman, P., Matsumoto, D. M., and Frank, M. G. (2003). Micro Expression Training Tool v1.
36. Frank, M. G., Matsumoto, D. M., Ekman, P., Kang, S., and Kurylo, A. (2009). Improving the Ability to Recognize Micro-expressions of Emotion. Manuscript in preparation.
37. Vrij, A., Akehurst, L., Soukara, S., and Bull, R. (2002). Will the truth come out? The effect of deception, age, status, coaching, and social skills on CBCA scores. Law Hum. Behav. 26(3), 261–283. DOI: 10.1023/A:1015313120905.
38. Etcoff, N. L., Ekman, P., Magee, J. J., and Frank, M. G. (2000). Lie detection and language comprehension. Nature 405(6783), 139. DOI: 10.1038/35012129.
39. deTurck, M. A. (1991). Training observers to detect spontaneous deception: the effects of gender. Commun. Rep. 4(2), 79–89.

FURTHER READING

Ekman, P. (2003). Emotions Revealed. Henry Holt, New York.
Harrington, B., Ed. (2009). Deception: From Ancient Empires to Internet Dating. Stanford University Press, Stanford, CA.
Lindsay, R. C. L., Ross, D. F., Read, J. D., and Toglia, M. P., Eds. (2007). The Handbook of Eyewitness Psychology Vol II Memory for People. Lawrence Erlbaum, Mahwah, NJ.
Toglia, M. P., Read, J. D., Ross, D. F., and Lindsay, R. C. L., Eds. (2007). The Handbook of Eyewitness Psychology Vol I Memory for Events. Lawrence Erlbaum, Mahwah, NJ.


7 DETERRENCE: AN EMPIRICAL PSYCHOLOGICAL MODEL

Robert W. Anthony
Institute for Defense Analyses, Alexandria, Virginia

7.1 INTRODUCTION

Deterrence has not led to a strategic victory to date against the entire loosely knit network of cocaine traffickers. However, it has shut down nearly all direct smuggler flights into the United States [1, 2], eliminated Peru as a major cocaine producing country [2, 3], and recently closed down nearly all Caribbean go-fast boat traffic. Section 3 recounts how data obtained from these various success stories facilitated the derivation and calibration of an unexpectedly simple mathematical function representing the psychology of deterrence [1, 3]. It goes on to explain how these tactical victories teach several practical lessons and reveal operational dilemmas. To apply these results to terrorism, Section 4 summarizes an analysis of terrorist preparations for the 9/11 attacks. This analysis suggests that "deterrence" influences decision making for terrorists perpetrating complex plots. The section also explains the methods for estimating the deterrent effect of a mixture of several possible consequences and methods for estimating the deterrence contribution of multilayer defenses. Section 5 introduces several testable hypotheses concerning the generality of these findings and possible explanations for the willingness function. It also emphasizes the importance of interdisciplinary, integrated research to focus all available knowledge on understanding the risk judgments of criminals, insurgents, and terrorists.

7.2 DEFINITIONS AND SOURCES

A great deal of deterrence research addresses the prisoner's dilemma gaming of the cold war standoff, rate of loss models of military attrition, or guidance to law enforcement in various situations, often with the underlying assumption of a linear relationship between effort and effect. By contrast, this work focuses on the psychology of perpetrators represented as a fraction of a pool willing to act. Therefore, this approach does not discriminate between individual behavior and distributions across a perpetrator population.


The US military has formally defined both deterrence and strategic deterrence; the first applies to thwarting terrorists in general, while the second applies to complex plots that could damage the vital interests of the United States. Remarkably, these definitions include a psychological interpretation of deterrence. Primary data sources in the public domain are cited at the end of this section. Unfortunately, many organizations applying deterrence in their operations cannot publicly release their classified data, and others with fewer restrictions are reluctant to do so. Moreover, these organizations also do not see their mission as one of justifying support for sustained applied research or any basic science.

7.2.1 Definition of Deterrence

The US Department of Defense (DoD) defines deterrence as "the prevention from action by fear of consequences—deterrence is a state of mind brought about by the existence of a credible threat of unacceptable counteraction" [4]. Even suicide terrorists must fear some consequences, especially risks that undermine their motives for taking such drastic action. For example, some terrorists might fear failure, arrest, or loss of life without completing their mission; dishonoring or bringing retribution upon their families; embarrassing their cause and supporters of their cause; or revealing a larger scheme or its supporting network.

7.2.2 Definition of Strategic Deterrence

Recently, the DoD introduced a related concept: "strategic deterrence is defined as the prevention of adversary aggression or coercion threatening vital interests of the United States and/or our national survival; strategic deterrence convinces adversaries not to take grievous courses of action by means of decisive influence over their decision making" [5]. This definition should exclude individuals who are mentally ill, act impulsively, or act alone. Strategic deterrence primarily applies to complex plots and networks with sufficient resources to threaten national vital interests. Although the empirical quantitative model reveals that deterrence will not thwart everyone, its cumulative and systemic impact on complex plots or networks should be capable of debilitating virtually all of them.

7.2.3 Information from Operational Sources

Operational organizations provided an interview report summarizing the responses of a very diverse population of 109 imprisoned drug smugglers. Analyses of these data led to the development of a simple mathematical expression representing the psychology of deterrence [1, 3]. Two reports provide more details on the interviews and operational data from major countercocaine operations [3, 6] used to verify and calibrate the deterrence model. Unfortunately, other data sets are not available for public release.

7.3 PRINCIPAL FINDINGS

Deterrence is essential for amplifying limited interdiction capabilities to thwart hostile activity. For example, lethal consequences can amplify interdiction effort by more than


a factor of 10. The following quantitative representation of the psychology of deterrence and associated tactical lessons has been used to size forces, guide operations, and assess operational effectiveness in counterdrug and counterterrorism operations. Although the references provide more detail, one case is summarized: the air interdiction operations against smugglers flying cocaine from Peru to Colombia. This case illustrates the effectiveness of deterrence, verifies essential features of the mathematical form of the willingness function, and provides calibration for lethal consequences.

FIGURE 7.1 The willingness function. (Willingness, 0.0–1.0, plotted against probability of interdiction, 0.0–1.0; three curves mark the boundaries for material loss to capture, capture to prison, and prison to loss of life; interview data series show self caught, associate caught, self imprisoned, and associate imprisoned.)

7.3.1 Willingness Function

The "willingness function" expresses the psychological aspects of deterrence in mathematical terms. It facilitates an estimate of the fraction of all would-be perpetrators willing to challenge the risks of interdiction. It has one independent variable, the probability of interdiction, P_I, and one constant parameter, the threshold of deterrence, P_0, calibrated to the specific perceived consequences of interdiction. Figure 7.1 plots the willingness functions for three different values of the deterrence threshold. The vertical axis represents the fraction of perpetrators and the horizontal axis represents the probability of interdiction. To interpret a willingness function, consider the light curve. As the interdiction probability increases from zero, all would-be perpetrators remain willing to continue until their perception of the interdiction probability reaches the deterrence threshold at a probability of interdiction of 0.13. Beyond the deterrence threshold, the fraction of the perpetrators still willing to perpetrate, W(P_I), declines in proportion to the inverse of the perceived probability of interdiction:

W(P_I) = \frac{P_0}{P_I} \quad (1)

Voeller

80

v01-c07.tex V1 - 12/04/2013 11:39am Page 80

DETERRENCE: AN EMPIRICAL PSYCHOLOGICAL MODEL

As the interdiction probability approaches 1.0, however, a small fraction, P_0, of the perpetrators persist, even expecting certain interdiction. In interviews with imprisoned drug smugglers, some commented that they would continue smuggling knowing they would be imprisoned since one fee, given in advance, would more than compensate for their prison time [3]. Scofflaw fishermen violating restrictions that protect living marine resources also behave according to the deterrence model and show no indication of quitting out to an 80% probability of interdiction [1].

Heavy, medium, and light curves in Figure 7.1 illustrate willingness functions bounding the ranges of four different types of consequences. The heavy curve represents the boundary between "lethal" consequences and "imprisonment" and is determined by a threshold of deterrence of 0.02. The medium weight curve separates "imprisonment" from "capture followed by release" and has a threshold of 0.05. The light curve separates "capture and release" from "loss of material assets" and has a threshold of 0.13.

Figure 7.1 also shows four sets of data obtained from voluntary interviews of imprisoned smugglers. Each was asked whether he or she would be willing to continue to smuggle if the chance of interdiction equaled successively higher values as indicated by data symbols along the trend lines. The same willingness questions were asked for different consequences, for example, being caught then released or being imprisoned, and for two different perceptual orientations, answering for themselves and answering as if they were a former associate smuggler. As the researchers anticipated, the interviewees estimated their associates would be more willing to continue smuggling than they would be now that they have experienced incarceration. These cumulative trends illustrate how well the willingness function boundaries parallel and bracket the interview responses.

In such very high-risk activities, perpetrators appear to decide whether the risks are acceptable before even considering the adequacy of the rewards. For example, all inmates stated their willingness to smuggle without any reference to wages. On separate questions exploring the sensitivity of willingness to wage levels, significantly higher wage offers did not increase the previously declared fraction of the smugglers willing to face the risks. However, if risks do increase, the wage necessary to sustain smuggler willingness at their previously declared levels increases quadratically relative to the increased risk.
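As a rough illustration of how Equation (1) behaves across the three boundary thresholds quoted above (0.02, 0.05, and 0.13), the short Python sketch below evaluates the willingness function at a few interdiction probabilities. The sketch and its function names are ours, not part of the original analysis; it simply clips willingness at 1.0 below the threshold, as the text describes.

```python
def willingness(p_interdiction: float, p_threshold: float) -> float:
    """Eq. (1): fraction of would-be perpetrators still willing to act.

    Below the deterrence threshold everyone remains willing; above it,
    willingness declines as the inverse of the perceived interdiction
    probability and bottoms out at the threshold value itself.
    """
    if p_interdiction <= p_threshold:
        return 1.0
    return p_threshold / p_interdiction

# Boundary thresholds quoted in the text (the Figure 7.1 legend labels).
BOUNDARIES = {
    "prison to loss of life": 0.02,
    "capture to prison": 0.05,
    "material loss to capture": 0.13,
}

for label, p0 in BOUNDARIES.items():
    row = [round(willingness(p_i, p0), 2) for p_i in (0.1, 0.3, 0.6, 0.9)]
    print(f"{label:>26}: {row}")
```

At an interdiction probability of 1.0 the function returns exactly the threshold, which matches the observation that a small fraction of perpetrators persist even when they expect certain interdiction.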

7.3.2 Surge Operations

Surge operations typically consist of at least doubling the interdiction pressure and sustaining it long enough to convince perpetrators that they cannot simply outwait the interdictors (typically 2–5 months for counterdrug operations). Surges have effectively communicated risks to perpetrators and caused lasting deterrence, even as interdiction efforts substantially relax from surge levels [1, 3]. A surge operation can provide valuable intelligence since it can induce perpetrators to react, thereby revealing their clandestine activity and the level of their deterrence threshold. Focusing surges on criminal hot spots should amplify the visibility of criminal reaction to deterrence, and has proven capable of doing so in urban areas [7]. However, if perpetrators can change their mode of operation or shift their location, the interview data suggest they will do so whenever the interdiction risk reaches only approximately one-half of the deterrence threshold [1, 3]. Operators must therefore take this possibility into account in their subsequent planning.

7.3.3 Breakouts from Deterrence

A mathematical property of the willingness function shows that deterrence, once established, is at risk of instability. After deterrence has suppressed attempts, the estimated fraction of perpetrators actually interdicted tends to remain constant at a magnitude equal to the deterrence threshold:

W \cdot P_I = \frac{P_0}{P_I} \cdot P_I = P_0. \quad (2)

Under normal conditions, defenders need only interdict this constant fraction to deter. However, any diversion of interdiction effort elsewhere, or additional recruitment expanding the pool of potential perpetrators, possibly as the result of an external event, could cause the fraction interdicted to drop below the deterrence threshold. This would most likely trigger a burst of perpetrator attempts, threatening a breakout from deterrence. Interdictors, therefore, need to maintain a reserve capacity, or other overwhelming threat of counteraction, to prevent breakout or reestablish deterrence.
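As a numerical illustration of Eq. (2) (the 10% interdiction rate here is chosen for convenience and is not taken from the operational data): with a lethal-consequence threshold of P_0 = 0.02 and an interdiction probability of P_I = 0.10, the willing fraction is W = 0.02/0.10 = 0.2, and the fraction of all would-be perpetrators actually interdicted is W \cdot P_I = 0.2 × 0.10 = 0.02, which is exactly the threshold value.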

7.3.4 Deterrence Model

The deterrence model estimates the fraction of all perpetrators thwarted by interdictors, P_t, that is, those who are either interdicted or deterred:

P_t = 1 - (1 - P_I) \cdot W(P_I^*), \quad (3)

where P_I^* is the perceived probability of interdiction. The probability of thwarting an attempt equals one minus the probability that a perpetrator is both willing and able to avoid interdiction. Under steady conditions with well-informed perpetrators, the willingness function represents the subjective aspects of perceived risk, and P_I^* equals P_I. During surges or other transition periods, however, there might be a diversity of perceptions with many misunderstandings of the real situation.
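A minimal Python sketch of Eq. (3), again purely illustrative (the willingness calculation is repeated inline, and the example probabilities are invented for the demonstration):

    def fraction_thwarted(p_interdiction, p_threshold, p_perceived=None):
        """Eq. (3): Pt = 1 - (1 - PI) * W(PI*). Under steady conditions the perceived
        probability PI* equals the actual PI; pass p_perceived to represent a surge
        or other transition period in which perceptions lag reality."""
        p_star = p_interdiction if p_perceived is None else p_perceived
        w = 1.0 if p_star <= p_threshold else p_threshold / p_star
        return 1.0 - (1.0 - p_interdiction) * w

    # Equilibrium case versus a transition in which perpetrators still perceive low risk
    print(fraction_thwarted(0.10, 0.02))                     # steady state: PI* = PI
    print(fraction_thwarted(0.10, 0.02, p_perceived=0.01))   # perception lags: only interdiction thwarts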

7.3.5 Example—Peruvian Drug Flights

A series of operations to interdict and deter air traffickers flying cocaine base from Peru to Colombia provided an estimate of the deterrence threshold for lethal consequences [1, 3]. These operations also demonstrated the impact of an initial surge and showed that perpetrators will ignore even lethal consequences under some conditions. The US detection and monitoring support to the Peruvians provided nearly perfect coverage of trafficker flights, and the combined capacities of those flights closely matched satellite estimates of the coca crop during periods without deterrence. This enabled an estimate of those willing, while complete and verified interdiction records gave the probability of interdiction.

Figure 7.2 shows the principal operational periods plotted over two deterrence model curves. The vertical axis is the fraction of flights thwarted and the horizontal axis shows the probability of interdiction. Each operational period lasted from 7 to 11 months, identified 100–500 smuggler flights, and involved 6–17 interdictions. Ovals represent conservative estimates of the asymmetric uncertainty ranges from both statistical and systematic sources. Open circles represent periods of nonlethal consequences during which air traffickers carried all cocaine base destined for Colombia. Filled circles represent periods with lethal consequences.

[FIGURE 7.2 Deterrence model for lethal interdiction showing operational periods intended to stop smuggler flights from Peru to Colombia. Vertical axis: fraction thwarted; horizontal axis: probability of interdiction; curves: “interdiction only” and “prison to loss of life”; operational periods labeled “before,” “early,” “during,” “after,” and “final.”]

Three periods of lethal interdiction illustrate the transition from no deterrence to full deterrence, after passing through an intervening surge. Figure 7.2 labels these as “before,” “during,” and “after.” In the 10-month “before” period, there is no evidence for deterrence; smugglers simply ignored lethal consequences. Because the Peruvians did not have US detection and monitoring support, they shot down only seven smugglers. This is well within the statistical uncertainty range of the deterrence threshold for lethal interdiction indicated by the heavy curve.

To aid the Peruvians in protecting their national security against an ongoing insurgency, a US Presidential Directive resumed intelligence support to their air force. This initiated the surge period “during” the transition. In the first month, Peruvian interceptors interdicted eight trafficker flights. Unusually high levels of lethal interdiction continued, and smuggling flights plummeted as trafficker pilots communicated and adjusted their perception of the risks. Full deterrence had set in by the period labeled “after.” Since the probability of interdiction in the transition period exceeded the trafficker pilots’ perceptions of that probability, the point labeled “during” is out of equilibrium and does not lie on the deterrence model curves.

In the first month of the “after” period, interdictors relaxed their pressure, and smuggler flights increased fourfold. Interdiction support resumed the next month, and once again, traffickers were deterred. Thereafter, intelligence reports indicating depressed coca prices sustained the support for interdiction. Illicit Peruvian coca cultivation eventually declined to less than one-third of its previous levels.

The best-fit value for the deterrence threshold for lethal consequences, excluding the “during” period, is 1.2 ± 0.2%. Since the distribution of interdictions by month is a Poisson distribution, the operational variation about the threshold is comparable to the threshold itself. Consequently, operational planners adopt a conservative value of 2.0% for the lethal threshold to cover this variation.

7.3.6 Interdictor’s Dilemma

The Peruvian experience illustrates the interdictor’s dilemma: is deterrence working, or are perpetrators simply avoiding detection? In the general case, the only resolution to this dilemma is convincing corroborating intelligence proving damage to the illicit activity. Often this is supplemented by intelligence indicating perpetrator intent, the consequences perpetrators fear, and clandestine attempts.

7.3.7 Defender’s Dilemma

Defense can be a thankless task. If there are no explicit hostile acts, why do we need to continue operations? If deterrence fails and there are attacks, whom do we hold accountable? Defensive operations driven by concerns over accountability promote routine activities that become vulnerable to terrorist probes. Two potential sources of information can transform passive and reactive defenses into dynamic ones that take the initiative. First, deterrence operations can be augmented with intelligence collection on perpetrator attempts to probe or defeat our defenses, and, second, red teams, exercises, and gaming can be employed to continually introduce new and adaptive elements into our defenses. These activities could also provide credible information for evaluating effectiveness and justifying resources.

7.4 IMPORTANT APPLICATIONS

Do lessons learned from criminals transfer to insurgents and terrorists? Analysis of the preparations for the 9/11 attacks indicates consistency between the drug smugglers’ deterrence threshold for lethal consequences of 0.012 and the inferred subjective criterion used by Mohamed Atta to initiate the attack. Although factors other than psychological ones might also have applied, there was evidence of deterrence further up the leadership hierarchy. The 9/11 Commission Report stated on page 247, “According to [Ramzi] Binalshibh, had Bin Laden and [Khalid Sheikh Mohammed] KSM learned prior to 9/11 that Moussaoui had been detained, they might have canceled the operation.” A second application of the willingness function extends it to estimate the deterrence effect of combinations of consequences. A third application extends the deterrence model to estimate the contribution of deterrence to multiple layers of defense.

7.4.1 Deterrence of 9/11 Terrorists

Although dedicated suicide terrorists perpetrated the 9/11 attacks, analysis reveals that they were probably deterred from hasty action until they developed confidence in their plan [8]. Terrorists must exercise extreme caution day-to-day while preparing for a complex attack, and this risk aversion provides a basis for deterrence. Their cautious preparations and practice flights were analyzed as a system reliability problem: for a plot consisting of all four hijacked flights reaching their targets, how many unchallenged “practice” flights would be necessary to reduce their perceived risk of failure to a level comparable to the deterrence threshold for lethal interdiction derived from studies of drug smugglers? By this criterion, in addition to the flights necessary to assemble the team in the United States, the 9/11 plot leaders would have had to practice 20–40 more times to be confident of the success of the attack. After this analysis was published, Chapter 7 of the 9/11 Commission Report mentioned at least 80 flights, half of which were domestic; 8 of those used the hijacking routes, box cutters and all. This analysis illustrates how our imperfect deterrence of individuals could have compounded to undermine their complex plot.

7.4.2 Deterrence through Combining Consequences

Interdictors need a means of estimating the deterrence effect of a combination of risks, especially for anticipating the effect of multiple layers of defense. A logically consistent method for doing this is obtained by drawing an analogy with expressions for expected utility and related models from the psychology of decision making under risk:

\sum_{i=1}^{N} \frac{P_{I,i}}{P_{0,i}} = P_I \cdot \sum_{i=1}^{N} \frac{P_{I,i}/P_I}{P_{0,i}} = \frac{P_I}{P_0} = \frac{1}{W}, \quad \text{where } P_I = \sum_{i=1}^{N} P_{I,i}. \quad (4)

This represents a combination of N risks, each with probability of interdiction P_{I,i} and deterrence threshold P_{0,i}. The combination also recovers the mathematical form of an inverse willingness function by identifying the following expression as a deterrence threshold:

P_0 = \left[ \sum_{i=1}^{N} \frac{P_{I,i}/P_I}{P_{0,i}} \right]^{-1}. \quad (5)

Since W ≤ 1.0 implies deterrence, the corresponding condition is 1/W ≥ 1.0. Note that the individual risks, P_{I,i}/P_{0,i}, can all be below their respective thresholds, yet their combination can deter. Since the consequences represent losses, the inverse willingness, 1/W, can be interpreted as a measure of risk. Those familiar with the economics of choice among lotteries or the psychology of judgment under uncertainty will recognize the left-hand expression in Eq. (4) as similar to that for estimating risk, with 1/P_{0,i} corresponding to the utility function or, more generally, the subjective utility.

Other than the Peru–Colombia flights, all of the operations for which there are data involved a combination of consequences [1, 3], and these followed the willingness function. As an example of mixed consequences, consider the wide range of consequences faced by cocaine smugglers at each of the five transactional steps required to break down multiton loads from Colombia into gram-sized purchases by millions of users in the United States. Remarkably, traffickers at all levels share the risk, since traffickers lose on average 12% of their loads at each step [2]. The following equation illustrates how a plausible mixture of consequences could result in a 12% deterrence threshold:

\frac{P_I}{P_0} = \frac{0.12}{0.12} = 1.0 = \frac{P_{I,\text{lethal}}}{P_{0,\text{lethal}}} + \frac{P_{I,\text{prison}}}{P_{0,\text{prison}}} + \frac{P_{I,\text{drugs}}}{P_{0,\text{drugs}}} = \frac{0.004}{0.02} + \frac{0.022}{0.05} + \frac{0.094}{0.25}. \quad (6)

Here, a 0.4% chance of death, a 2.2% chance of being imprisoned, and a 9.4% chance of losing the drugs and most likely the smuggling vehicle could combine to yield the 12% threshold. Note that each of the individual contributions is below its respective deterrence threshold. Although the logical consistency and plausibility of this method for combining consequences can be verified, in general one must exercise caution and plan to verify the estimated combination, since research on descriptive risk judgments describes many deviations from the simple prescriptive form of expected utility [9–11]. Mathematical simplicity is an overriding practical consideration for counterterrorism operations, and the simplicity of the willingness function is remarkable relative to other models from the literature that require several parameters to represent subject responses. A fundamental difference, however, between the willingness function and expressions found in the literature is that acceptance or attractiveness of a gamble is generally interpreted as the negative of risk rather than its reciprocal [12]. Why the willingness function fits the available data so well remains a mystery. Possibly perpetrator preoccupation with extreme risk reduces the complex general case to a simpler asymptotic form.
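The combination rule in Eqs. (4)–(6) can be checked with a few lines of Python. The sketch below is illustrative only (the function and variable names are invented) and simply reproduces the arithmetic of Eq. (6):

    def combined_deterrence(risks):
        """Combine risks per Eqs. (4)-(5). `risks` maps a consequence label to a
        (PI_i, P0_i) pair; returns the total interdiction probability, the combined
        deterrence threshold, and the risk measure 1/W."""
        p_total = sum(p_i for p_i, _ in risks.values())
        inverse_w = sum(p_i / p0_i for p_i, p0_i in risks.values())
        p0_combined = p_total / inverse_w      # algebraically equivalent to Eq. (5)
        return p_total, p0_combined, inverse_w

    # The mixture used in Eq. (6): death, imprisonment, and loss of drugs/vehicle
    risks = {"lethal": (0.004, 0.02), "prison": (0.022, 0.05), "drugs": (0.094, 0.25)}
    p_total, p0_combined, inverse_w = combined_deterrence(risks)
    print(round(p_total, 3), round(p0_combined, 3), round(inverse_w, 2))
    # -> 0.12, 0.118, 1.02: the combination sits essentially at its deterrence
    # threshold even though each individual risk is below its own threshold.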

7.4.3 Defense in Depth

Estimating the ability of several layers of defense to thwart terrorists requires an understanding of how terrorists might perceive those defenses. Some circumstances might cause terrorists to perceive all of the layers as one barrier (e.g. if penetrating the first layer required penetrating all layers, as with passengers on a ship, or if terrorist planners required several members of a cell to be able to penetrate all of the layers). By contrast, other situations would allow perpetrators to attempt penetrations one layer at a time.

If all layers are perceived as one barrier, each layer becomes a separate risk, and all layers a combination of those risks. Again, for such a combination, individual layers might not pose sufficient risk to exceed the deterrence threshold, yet together they could. This advantage of layers perceived as one barrier is offset by the high rate of undeterrables, numerically equivalent to the deterrence threshold for only one barrier. If, however, the layers are viewed as independent risks, some or all must pose a risk above the deterrence threshold if deterrence is to contribute. Since the layers each thwart a fraction of the perpetrators, their effects compound multiplicatively to suppress residual leakage. This also assumes that undeterrables at one layer might be deterred by a risk at a subsequent layer. If it were otherwise, terrorist planners employing a team of less cautious undeterrables for a complex plot would risk revealing it before it could be executed.

Figure 7.3 shows the deterrence model for two-layer defenses plotted against the probability of interdiction for one layer that is assumed representative of both layers. A large deterrence threshold of 0.2 expands the graphic scale to ease visualization. With two layers perceived as one barrier, deterrence begins at approximately one-half the deterrence thresholds of the individual layers. (With very large thresholds at each layer, the probability of confronting deeper layers would be discounted by the chances of being interdicted at earlier ones.) Also, in Figure 7.3, the two layers acting separately compound to thwart relatively more perpetrators beyond an interdiction rate of approximately 0.33.

[FIGURE 7.3 Comparison of deterrence models for two-layered defenses. Vertical axis: fraction thwarted; horizontal axis: probability of interdiction for each layer; curves: individual layer, 2-layers interdiction only, 2-layers perceived as one barrier, 2-layers perceived separately.]

Correlations among layers could undermine or enhance deterrence relative to these baseline cases. Perpetrators might view both layers as equivalent, so that after crossing one the other is an assured passage, hence undermining deterrence. Alternatively, the first layer could alert interdictors at subsequent layers to suspicious individuals for a more in-depth examination, or perpetrators falsifying statements at one layer might increase the consequences if interdicted at a subsequent layer; both of these possibilities would enhance deterrence if they were known to would-be perpetrators.
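The two baseline cases in Figure 7.3 can be sketched in a few lines of Python. This is one plausible reading of the description above rather than the authors’ implementation: the one-barrier case combines the per-layer risks as in Eqs. (4)–(5), the separate-layer case compounds leakage multiplicatively, and the exact crossover point depends on modeling conventions not spelled out in the text.

    def thwarted_one_layer(p, p0):
        """Eq. (3) for a single layer with PI* = PI."""
        w = 1.0 if p <= p0 else p0 / p
        return 1.0 - (1.0 - p) * w

    def thwarted_two_layers_separate(p, p0):
        """Layers viewed as independent risks: residual leakage compounds multiplicatively."""
        leak = 1.0 - thwarted_one_layer(p, p0)
        return 1.0 - leak * leak

    def thwarted_two_layers_one_barrier(p, p0):
        """Layers perceived as a single barrier: per-layer risks combine as in
        Eqs. (4)-(5), so deterrence begins near one-half of the per-layer threshold."""
        risk = 2.0 * p / p0                      # 1/W for two identical layers
        w = 1.0 if risk <= 1.0 else 1.0 / risk
        p_caught = 1.0 - (1.0 - p) ** 2          # interdicted at either layer
        return 1.0 - (1.0 - p_caught) * w

    p0 = 0.2  # the large illustrative threshold used in Figure 7.3
    for p in (0.05, 0.10, 0.20, 0.40):
        print(p, round(thwarted_two_layers_one_barrier(p, p0), 2),
              round(thwarted_two_layers_separate(p, p0), 2))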

7.5 RESEARCH DIRECTIONS

How broadly does the willingness function apply? How might the willingness function be knit into the body of established psychological and behavioral findings? Future research should integrate these findings and other work on deterrence into a unified area of study so that lessons transfer and deeper understanding informs our ongoing counterterrorism efforts.

7.5.1 General Result

Several testable hypotheses suggest that the understanding of deterrence presented here applies to those taking extreme risks, including drug traffickers, insurgents, and terrorists:

• People can judge risk directly [1, 3, 9–11], and with simple mathematical regularity in extreme situations.
• Underlying motives are more alike than different. Even drug traffickers seek respect from their reference group, need to maintain a lifestyle, pursue the thrill of risk taking, and, in some cases, fund insurgencies and terrorism.
• The mathematical simplicity of the willingness function is difficult to explain without appealing to some overriding principle, given the intricacies of the psychological theories and models as well as the diversity of subjects and situations covered by the willingness function.

7.5.2 Explaining the Willingness Function

Future research might examine two alternative explanations of the willingness function and connect them with the study of decision under uncertainty:

• In the psychology of persuasion, the persuasiveness of a communication is a sum over salient novel arguments; thus, the constant fraction interdicted might represent a constant rate of persuasive argumentation against perpetrating acts [13].
• If the decline of those willing represents the distribution of those with greater needs than the likely consequences of deterrence, then the decline might parallel the Pareto distribution that extends toward lower incomes [14].

Extensive research into the psychology of judgment under risk should be applicable to deterrence, yet the models and methods address acceptance as the negative rather than the reciprocal of risk. Might there be a universal asymptotic distribution converging on an inverse power law?

7.5.3 Integrating the Research Community

Understanding the psychology of deterrence as it applies to terrorists requires information on, among other things, terrorist perspectives, intentions, perceptions of risk, and behavior. The results presented here indicate that it appears possible to relate the deterrence of terrorists and insurgents to that of criminals and other extreme risk takers. A national research effort to understand deterrence would have to integrate intelligence sources, operational experience, and the various social science research communities. Today, the barriers between these three communities are formidable. It is hoped that this handbook will raise awareness of the value of, and need for, a synthesis across these institutional barriers, and catalyze efforts toward that end.

REFERENCES

1. Anthony, R.W. (2004). A calibrated model of the psychology of deterrence. Bull. Narc.: Illicit Drug Markets LVI(1 and 2), 49–64. United Nations Office on Drugs and Crime.
2. Anthony, R.W., and Fries, A. (2004). Empirical modeling of narcotics trafficking from farm gate to street. Bull. Narc.: Illicit Drug Markets LVI(1 and 2), 1–48. United Nations Office on Drugs and Crime.
3. Anthony, R.W., Crane, B.D., and Hanson, S.F. (2000). Deterrence Effects and Peru's Force-Down/Shoot-Down Policy: Lessons Learned for Counter-Cocaine Interdiction Operations. Institute for Defense Analyses, IDA Paper P-3472, p. 252.
4. Department of Defense Dictionary of Military and Associated Terms. (2000). JCS Pub 1-02, Joint Chiefs of Staff Publication.
5. U.S. Strategic Command. (2004). Strategic Deterrence Joint Operating Concept, Director, Policy, Resources and Requirements, Offutt AFB, NE, p. 77.
6. Crane, B.D. (1999). Deterrence Effects of Operation Frontier Shield. Institute for Defense Analyses, IDA Paper P-3460, 25 March 1999.
7. Sherman, L.W., and Weisburd, D. (1995). General deterrent effects of police patrol in crime "hot spots": a randomized, controlled trial. Justice Q. 12(4), 625–648.
8. Anthony, R.W. (2002). Deterrence of the 9-11 Terrorists. Institute for Defense Analyses, Document D-2802, 15 December 2002.
9. Kahneman, D., and Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263–291.
10. Weber, E.U. (1997). The utility of measuring and modeling perceived risk. In Choice, Decision, and Measurement: Essays in Honor of R. Duncan Luce, A.A.J. Marley, Ed. Lawrence Erlbaum Associates, pp. 45–56, 472.
11. Jia, J., Dyer, J.S., and Butler, J.C. (1999). Measures of perceived risk. Manage. Sci. 45(4), 519–532.
12. Weber, E.U., Anderson, C.J., and Birnbaum, M.H. (1992). A theory of perceived risk and attractiveness. Organ. Behav. Hum. Decis. Process. 52, 492–523.
13. Perloff, R.M. (2003). The Dynamics of Persuasion: Communication and Attitudes in the 21st Century, 2nd ed. Lawrence Erlbaum Associates, New Jersey and London, p. 392.
14. Reed, W.J. (2001). The Pareto, Zipf and other power laws. Econ. Lett. 74, 15–19.

FURTHER READING

The references to the psychological literature and the "Research Directions" section provide a starting point for further reading.


8 SOCIAL, PSYCHOLOGICAL, AND COMMUNICATION IMPACTS OF AN AGROTERRORISM ATTACK

Steven M. Becker
University of Alabama at Birmingham School of Public Health, Birmingham, Alabama

8.1 INTRODUCTION

As policy makers, the agriculture sector, researchers, emergency planners, and communities prepare to meet the enormous challenge posed by agroterrorism, increasing attention has been devoted to such critical issues as field and laboratory detection, surveillance, mapping, improved outbreak modeling, vaccine development and improvement, and disposal and decontamination options. Far less consideration, however, has been given to social, psychological, and communication issues. Yet, the manner in which these issues are approached will be one of the principal determinants of an agroterrorism event’s outcome. The ultimate aim of an agroterrorism attack, after all, is not to harm crops or ruin agricultural products; rather, it is to destroy confidence in the food supply and in societal institutions, create fear and a sense of vulnerability in the population, reduce people’s hope and resolve, and weaken the society and the nation. Effectively addressing key social, psychological, and communication issues will be crucial to the success of quarantines or other mitigation measures, and to efforts to minimize exposure to threat agents, reduce the impacts of an incident, maintain public confidence and trust, and better assist affected individuals, families, and communities [1]. It is no exaggeration, therefore, to say that social, psychological, and communication issues constitute “make or break” factors in any effort to manage an agroterrorism event. Without sufficient attention devoted to these issues, “response efforts after a terrorist attack might be successful in narrowly technical terms but a failure in the broader sense. In effect, the battle might be ‘won,’ but the war would be lost” [2, p. 16].


8.2 LEARNING FROM THE 2001 FOOT-AND-MOUTH DISEASE OUTBREAK

Among the best ways to understand the nature and extent of the social, psychological, and communication challenges that an agroterrorism attack could pose is to learn from recent experience with large-scale disease outbreaks. In this regard, the 2001 foot-and-mouth disease outbreak in the United Kingdom is probably the most instructive. Although the 2001 outbreak was not the result of terrorism, it "presented unprecedented challenges which no one in any country had anticipated" [3, p. 6]. This included a host of serious social, psychological, and communication impacts. In addition, because of the open, forthright, and thorough way that British society has examined the successes and failures in the handling of the epidemic, others have a rich opportunity to learn from this experience.

Foot-and-mouth disease is a viral disease that mainly affects cattle, pigs, goats, and sheep. Its symptoms include fever, vesicles (blisters) in the mouth or on the feet, pain, lameness, loss of appetite, and loss of condition [4]. The virus can survive for long periods of time and is powerfully contagious. Indeed, foot-and-mouth disease has variously been described as "the most contagious of all diseases of farm animals" [5, p. 2], "the most feared infection of domestic livestock" [6, p. 1], and "the most contagious disease of mammals" [7, p. 425]. Not only can animals be infective without displaying signs of the disease, the virus can also be transmitted in a host of ways. "The virus is present in fluid from blisters, and can also occur in saliva, exhaled air, milk, urine and dung. Animals pick up the virus by direct or indirect contact with an infected animal. Indirect contact includes eating infected products and contact with other animals, items or people contaminated with the virus, such as vehicles, equipment, fodder and anyone involved with livestock." [8, p. 13]

The rapidity with which the 2001 epidemic spread was astonishing. British officials estimate that by the time the virus was confirmed on February 20, some 57 farms in 16 counties had already been infected. By February 23, when a movement ban was imposed, 62 more premises were thought to have been infected, involving seven more counties [8, p. 14]. In addition, the scale of the outbreak was remarkable. At the height of the crisis, "more than 10,000 vets, soldiers, field and support staff, assisted by thousands more working for contractors, were engaged in fighting the disease. Up to 100,000 animals were slaughtered and disposed of each day" [8, p. 1]. By the time the outbreak ended, 221 days after it began, the toll was enormous: animals were slaughtered at more than 10,000 farms and related agricultural premises in England, Scotland, and Wales. Approximately 2000 locations were "slaughtered out" because foot-and-mouth disease had been confirmed there, while another 8000 were targeted either because they neighbored an infected farm ("contiguous culling") or because it was suspected that animals could have been exposed to the virus ("dangerous contacts"). While efforts were made to reduce pain and suffering, there were all too many situations where this aim was not achieved due to the scale of the operation and a shortage of trained personnel. Reports of frightened animals taking flight, animals being wounded, or animals being shot multiple times were not uncommon.
Piles of dead animals awaiting disposal were a regular sight in affected areas, particularly in the early days of the culling operation; so, too, were trenches where carcasses were buried and "funeral pyres" where carcasses were burned. In the end, the total number of animals slaughtered for disease
control purposes was staggering: over 4.2 million. Beyond that, 2.3 million other animals were slaughtered under "welfare provisions" because strict movement restrictions in affected regions made it impossible to get feed to them.

People living in the midst of the epidemic and associated carnage were hit hard emotionally, as when farms that had been in the family for generations were wiped out or when children's pets were required to be slaughtered. In addition, people were battered economically. Agricultural communities, including farmers and their families, people employed in agriculture, and area businesses, saw livelihoods and financial security disappear virtually overnight. Tourism, a vital industry in many of the affected areas, dropped precipitously, causing even greater economic damage and dislocation. Before the outbreak finished, it had even gone international, spreading to a limited extent to France, the Netherlands, Northern Ireland, and the Republic of Ireland [3, 8].

It is common in most disaster situations for people's responses and reactions to be marked by resilience and helping behaviors. The foot-and-mouth epidemic was no exception. Many communities remained united in the face of the invisible threat and there were countless acts of assistance and support. Amongst farmers and farming families, there was a continuing commitment to agriculture as a way of life despite the tremendous difficulties caused by foot-and-mouth disease [9]. In addition, many veterinarians and other professionals endured difficult conditions and went above and beyond the call of duty to help bring the outbreak under control. Finally, there were many examples of public sympathy and support for affected farmers and farming communities. People in the Southwest and other parts of the United Kingdom, for example, participated in a huge fund-raising effort aimed at helping those whose livelihoods had been ravaged by the epidemic. The Green Wellie Appeal, launched in March by the Western Morning News, saw participation from celebrities, businesses, schools, and thousands of people sympathetic to the plight of affected farmers. More than £1 million was raised [10].

At the same time, the outbreak also caused new strains, sharp conflict and division, profound distress, widespread loss of trust, and a host of other serious social, psychological, and communication impacts. These were partly a result of the damage wrought by the outbreak itself, but they were also compounded by serious shortcomings in preparedness and response efforts. Initially, "no-one in command understood in sufficient detail what was happening on the ground." By the time the extent of the problem was fully grasped, a cascade of social, psychological, and communication effects had already begun. "A sense of panic appeared, communications became erratic and orderly processes started to break down. Decision making became haphazard and messy . . . . The loss of public confidence and the media's need for a story started to drive the agenda" [3, p. 6].

While no two events are ever alike, the range of individual, family, community, and societal effects experienced during foot and mouth provides a clear indication of the kinds of social, psychological, and communication impacts that could result from a large-scale agroterrorism attack. Some of the most significant effects evidenced during the 2001 outbreak are reviewed below.

8.3 SOCIAL, PSYCHOLOGICAL, AND COMMUNICATION IMPACTS

8.3.1 Isolation

Efforts to control the spread of the virus had the unintended consequence of causing widespread social isolation. A ban on animal movements, the creation of large exclusion
zones around affected farms, the posting of "keep out" signs, the placing of disinfectant baths and mats, the closure of footpaths, parks, tourist attractions and heritage sites, prohibitions against all nonessential travel, and the closure of widespread areas of the countryside often combined to bring community life to a standstill. Farmers' markets, fairs, art shows, and other events were cancelled, and many other facets of social life (visiting neighbors, going to the pub, attending religious services, shopping, participating in clubs and community groups) ceased. Even the utilization and delivery of health and social services were affected. In the words of one official report, "children and families could not conduct normal lives . . . ." [11, p. 9]. Thus, at a time of maximum difficulty and stress, people were often cut off from normal social outlets, from each other, and from their community support networks.

8.3.2 A Sense of Being under Siege

Even where some degree of movement or interaction was possible, fear that other people could potentially spread the virus caused many farmers, farming families, and others to barricade themselves off from the outside world. The farthest that one could safely venture was to the end of his or her property. Children were even kept home from school for an extended period of time. The sense of being on edge and under siege was reinforced every time someone ignored warning signs or violated a closure order. Such occurrences were reported at many different times and in many different locations [12]. Reported problems included walkers pulling down disease-warning signs, people entering closed areas and footpaths, and people crossing farm property. A spokesperson for one police department was quoted as saying that numerous complaints had been received alleging that "people are either ignoring the signs or ripping them down. On one occasion a man walking his dog ripped a sign down and went straight down the path. Another time, a man led a child on horseback down a path" [13, p. 33]. In some instances, there were direct conflicts when farming families trying to protect their property from the virus encountered outsiders. Among the incidents described in media reports were one in which a farmer's wife confronted cyclists with a shotgun, and another in which a farmer was attacked by two men walking a dog after he asked them to leave his farmland [13].

8.3.3 Hoaxes and Threats

Compounding the fear, uncertainty, and distress experienced by farming communities were hoaxes and threats perpetrated in the wake of the outbreak. In one case, for example, a farmer reported having found a pig's head that had apparently been thrown into the middle of his field of dairy cows. In another case, a vial and bloodstained gloves were left near a sensitive area of a farm. The overall number of such incidents was relatively small; but in the context of the enormous worries and uncertainties already being experienced by much of the countryside, even this small number was sufficient to add greatly to people's fears and sense of being under siege [12].

8.3.4 Noncompliance with Infection Control Measures

Adherence to measures aimed at controlling the spread of infection is a key to crisis management during a large-scale outbreak. During the foot-and-mouth disease outbreak,
cooperation and compliance were often good. However, many exceptions were seen over the course of the outbreak. At times, and in some areas, the lack of compliance occurred often enough and was sufficiently serious to constitute a major concern. Compliance problems, which were identified in relation to both farms and transport, included unlicensed movement of animals, dirty vehicles, and vehicles spilling organic material onto roads. Some of these problems might have stemmed from lack of awareness, lack of training, unclear instructions, or ineffective communication. There is evidence, for example, that terms such as biosecurity, blue box, and red box were not always well understood. But other problems, including the deliberate alteration of movement licenses and illegal entry to infected premises, were clearly intentional violations of infection control measures. In a number of cases, violators who were caught were fined or prosecuted [12].

8.3.5 Conflict within Communities

Differences between those involved in agriculture and those dependent on tourism, changes and perceived inconsistencies in valuation and compensation levels, and divergent views on approaches to dealing with the crisis sometimes created new tensions and sharp conflicts. These conflicts divided neighbors and friends and had broader impacts as well. As a member of one farming family explained, the situation was damaging "not just the farming lifestyle, but the farming communities, the farming relationship" (quoted in [14], p. 274). One of the most powerful descriptions of the combined effect of isolation, the state of siege, and splits between people was given by a resident of Holne at the Devon Foot and Mouth Inquiry (2002, p. 58):

Divisions occurred within people and between different groups—"us and them." The "us" became narrower and smaller—only the immediate family. Thus psychological isolation exacerbated physical isolation. People withdrew from the nurturing of the community. The dangerous "not us" became wider and bigger: farmers, walkers; MAFF/DEFRA; those with no bio-security and those with excellent bio-security; those who left, those who remained; organic farmers, postmen, people with dogs; horse drivers and horse riders; children at school and not; open pubs and closed pubs; those compensated and those not; those who cheated and those who played straight. Suspicion, guilt, panic, fear and abandonment were all apparent. What is left is lack of confidence, depression, lack of ability to respond, and despair.

8.3.6 Psychological Impacts

As the Royal Society of Edinburgh [11, p. 9] summed up, "for those involved, or even those not involved but living in the locality, there was trauma . . . . For many of these people, and perhaps especially their children, the events of 2001 were a nightmare . . . " Only a relatively small number of systematic studies of the outbreak's psychological impact were conducted, perhaps in part because of the difficulties inherent in a situation involving severe travel restrictions. But the research that was conducted has reinforced the conclusion that this was a highly distressing experience. In a study carried out shortly after the official end of the outbreak, Peck et al. [15] compared psychological morbidity in a badly affected area (Cumbria) and an unaffected area (the Highlands) using a 12-item version of the General Health Questionnaire that was mailed to farmers. Though small sample size limits how far the results can be generalized, the study
found that farmers in the affected area had significantly higher levels of psychological morbidity than those in the unaffected area.

Other research (e.g. [16]) carried out in various locations and using a variety of methodologies has also examined emotional well-being and mental health in relation to the outbreak. Olff et al. [17] studied farmers whose animals were slaughtered during the outbreak and found that approximately half had high levels of traumatic stress symptoms. Deaville et al. [18] carried out a health impact assessment of the foot-and-mouth outbreak in Wales. Using a multimethod approach that combined validated quantitative instruments with qualitative interviews, the assessment found significant mental health effects in the study sample and identified such symptoms as sleeplessness, tearfulness, frustration, anger, and lack of motivation. Hannay and Jones [19] used a mail survey to examine how farmers and tourism workers in Dumfries and Galloway, Scotland, were affected by the outbreak. The results indicated that both groups had experienced negative impacts in the areas of daily activities, feelings, overall health, social activities, social support, and quality of life [20]. Finally, Mort et al. [21] conducted a longitudinal qualitative analysis of weekly diaries and concluded that the foot-and-mouth experience was accompanied by distress, feelings of bereavement, fear of a new disaster, and loss of trust.

Looking across the psychological impacts of the outbreak, Peck [20] concluded that, despite the high levels of distress, there had been no increase in demand for mental health services in affected areas. Rather, farmers turned to "family, friends and veterinary surgeons for support" (p. 272). In addition, noted Peck, there was "an expressed willingness to use anonymized sources of support, such as telephone or internet helplines" (p. 275). This is fully consistent with reports from the many organizations that provided support to farmers, farming families, and others in affected communities. Crisis hotlines and stress helplines were flooded with calls, so that hours had to be extended and staffing had to be increased. The Rural Stress Information Network, for example, reported that with the onset of the outbreak, it had received more calls in a single month than in the entire preceding year [12].

No direct, systematic studies of the outbreak's effect on children, generally considered a vulnerable population, were carried out [22]. Nevertheless, it was apparent that the situation took a significant emotional toll on them. Children were often nearby when parents' and grandparents' farms were slaughtered out. They witnessed piles of dead animals, saw and smelled the funeral pyres that burned for days, and sometimes even lost their own pets as a consequence of the crisis. In addition, children shared in the isolation that affected farm communities. They missed school for extended periods of time, were unable to socialize with friends, and saw their families' own distress on a daily basis. As one parent told the Devon Foot and Mouth Inquiry, "my children had never seen me cry before" [23, p. 50]. Children's stress manifested itself in many ways, from angry e-mail postings [24] to problems with bed-wetting. As one rural nurse wrote, "as time passed we had an increase in referrals for children who were bed-wetting, often after long periods of being dry" [25, p. 60]. In a health assessment carried out in Wales by Deaville et al.
[18], over half of the study's respondents indicated that the outbreak had affected their children. Although most attention has focused on farmers and their families, it should also be borne in mind that foot-and-mouth was often a distressing experience for those charged with fighting the outbreak. Professionals on the front lines worked very long hours, were often away from home, and regularly witnessed horrific sights. Furthermore, although some frontline personnel felt that their work was supported by farmers, community
residents, and the broader public, this was often not the case. Indeed, because of the high level of controversy, anger, frustration, and mistrust surrounding almost every aspect of foot-and-mouth, it was not uncommon for frontline staff to find themselves the target of relentless hostility and derision. Some professionals even reported that they were ashamed to be identified as government agency staff members. This state of affairs undoubtedly made an already emotionally taxing situation even more difficult for some frontline workers.

8.3.7 An Overwhelming Demand for Information

Just as the crisis developed with breathtaking rapidity, so too did the demand for information. Requests for information quickly exceeded all expectations, and communication resources and personnel were severely stretched. For example, during the early part of the outbreak, staff at the Carlisle Disease Emergency Control Centre found themselves having to field some 6500 calls per week even as they worked feverishly to deal with the outbreak. On the national level, the resources of a helpline at the headquarters of the Ministry of Agriculture, Fisheries and Food were quickly exceeded, as were those of a much larger governmental foot-and-mouth disease helpline that had been set up utilizing a call center at the British Cattle Movement Service. As a result, officials established an overflow service through a private contractor. By March–April, the national foot-and-mouth disease helpline was receiving 7000 calls per day. Over the course of the 31-week outbreak, government-sponsored helplines responded to literally hundreds of thousands of calls from farmers and the general public [8, 12].

Aside from the overwhelming numbers of calls, one of the biggest challenges affecting the helpline effort was the difficulty those operating it had in obtaining information that was sufficiently detailed, accurate, and up-to-date. Helpline staff often had to rely on the website operated by the Ministry of Agriculture, Fisheries and Food. Although the Ministry had succeeded in quickly establishing the website after the outbreak began, and although it was widely used (by March–April it was seeing an average of 50,000 user sessions per day), the site did not always contain the most recent information [26, p. 321; 8; 12]. Particularly in situations where other sites were more up-to-date, this added to confusion and suspicion.

Poortinga et al. [27] carried out a multimethod study of how people (n = 473) in two communities, one potentially at risk from foot-and-mouth and another not close to any cases, viewed the trustworthiness of various sources of information about the outbreak. Among those scoring lowest on trust were government ministers and food manufacturers. The media fell exactly in the middle of the list (number 7 out of 13 information sources), perhaps because of concerns about sensationalism and exaggeration. Who, then, were seen as the most trustworthy sources of information? Topping the list were veterinary surgeons, followed by farmers, and then friends and family. In other words, people often trusted animal health professionals and local sources (e.g. word of mouth, the grapevine) far more than the national media and the national government. The crisis also saw the emergence of new "virtual" communities and networks that were able to link people despite the isolation created by the outbreak [28].

8.3.8 Conflict over Control Measures

Efforts to dispose of the huge number of slaughtered animal carcasses encountered significant community opposition. In part, this was due to a lack of consultation with
stakeholders. "The speed with which decisions were taken, from site selection to construction and use, meant that there was little time for consultation . . . . The lack of consultation angered local communities . . . the lack of information and perceived insensitivity to local concerns aggravated the situation" [3, p. 114].

One major focus of opposition was the so-called funeral pyres (fires) that were extensively used in affected areas. Concerns included smoke contamination, dioxins, the powerful stench, and the problem of ash removal. In one locale, protests by business people and other residents forced officials to substantially reduce the size of a major burning operation. In another location, families blockaded trucks carrying carcasses to a funeral pyre. In yet another area, residents blocked trucks from entering a pyre site [12].

Plans for burial of carcasses also provoked anger and protest. People's concerns included possible transport leakage, seepage of leachate, and contamination of watercourses and drinking water supplies. Near one proposed site, for example, several hundred people from three villages came together to oppose burial plans. Although the vast majority of protests against burial sites were peaceful, there were isolated exceptions. In one situation, for example, earth-moving equipment was used to crush a police van after protesters attempted to stop plans for mass burial of animal carcasses [12].

At times, opposition and protest were local in nature. But at other times, the issue of what to do with the carcasses of dead animals pitted region against region. In one area, for example, hundreds of people marched to protest plans to bring dead sheep from other areas of Britain to their county for burial [12]. In such situations, there was a powerful sense that people were being asked to shoulder more than their fair share of the burden. As Bush et al. [29] commented, "in the final analysis, local hostility to the burial sites was not only about the shortcomings of consultation and the failure to take seriously local knowledge, or the doubts about possible risks to either human health or the local environment. It was equally about the injustice of being singled out as a local repository for the by-product" of a national disaster.

8.3.9 A Breakdown of Trust and Confidence

Despite dedication and hard work from many civil servants, disease control professionals, and frontline staff, strategic problems such as a slow recognition of the severity of the outbreak, a slow early response, controversy over the mass slaughter policy, perceived inconsistencies in compensation procedures, conflict over carcass disposal, and a lack of adequate consultation with stakeholders, all contributed to a loss of faith in the overall handling of the situation. Communication problems further damaged public confidence [26]. In the end, the foot-and-mouth disease crisis resulted in a “breakdown of trust between many of those affected directly or indirectly and their Government” [3, p. 7].

8.4 IMPLICATIONS FOR AGROTERRORISM PREPAREDNESS AND RESPONSE

The 2001 foot-and-mouth disease outbreak in the United Kingdom, while not a terrorist event, provides a clear indication of the types of social, psychological, and communication impacts that could occur as a consequence of a large-scale agroterrorism attack. The spectrum of effects ranges from the distress suffered by individual farming families who see their life's work disappear overnight to broad social impacts such as community
division, regional conflict, and loss of trust. Furthermore, as the 2001 experience makes clear, these impacts may be profound and widespread. Indeed, there is a real potential for the severity of social, psychological, and communication impacts of an agroterrorism attack to be even greater than what was seen during the foot-and-mouth epidemic. For example, an event involving a zoonotic agent would present an additional layer of challenges. Likewise, the possibility of multiple or repeated attacks could make it vastly more difficult to reestablish people's sense of security. It will be crucial to learn from the foot-and-mouth outbreak and other experiences and incorporate these insights into agroterrorism contingency planning, training, preparedness, and response. Some of the key lessons that relate to social, psychological, and communication issues are discussed in the following sections.

8.4.1 Enlist the Public as a Partner

Although some level of disagreement and conflict is probably inevitable in a situation like the foot-and-mouth outbreak, it is now generally accepted that the situation was made far worse because of a lack of consultation with communities during the crisis. However, the problem ran deeper; even before the outbreak, there was a failure to adequately engage stakeholders, including communities, in the emergency planning process. For example, stakeholders were "not formally consulted in preparing contingency plans" [8, p. 40]. Today, foot-and-mouth preparedness planners in the United Kingdom employ a much more inclusive, participatory approach.

Nearly every aspect of managing an agroterrorism event will depend upon gaining the cooperation and confidence of agricultural communities and the broader public. Thus, it is essential for agroterrorism planning and preparedness efforts to view them as full-fledged partners. Stakeholders need to be involved in plan development long before an event occurs [30], and their participation in training exercises is vital. Similarly, the development of emergency information and outreach strategies cannot possibly be fully effective without community input and feedback. More broadly, there is a need to engage agricultural communities and the public in discussions about the agroterrorism threat long before an event occurs. This will permit full consideration of different management strategies, disposal options, compensation issues, and other potentially controversial matters, and facilitate the development of participatory decision-making processes that are seen as fair, transparent, credible, and effective.

8.4.2 Adequate Resources and Preparation for Information Hotlines

It is clear from the foot-and-mouth experience that, in the event of an agroterrorism attack, the demand for information from official hotlines will be massive. If public confidence is to be maintained, agencies will need to have well-rehearsed plans, phone facilities, and trained personnel to rapidly set up and operate such hotlines. Hotline arrangements—including mechanisms to ensure that accurate and up-to-date information is available—should be regularly and realistically tested through exercises. Depending on the nature of an agroterrorism event, there may also be substantial information demands from veterinarians, county extension agents, health departments, doctors, and others involved in responding to the situation. Thus, agencies will also need to be able to rapidly provide special hotlines and appropriate informational materials tailored to meet the needs of professionals.

8.4.3 Adoption of a Pre-Event Message Development Approach

An agroterrorism event and its resulting impacts could unfold with great speed, leaving agencies little or no time to develop effective communication strategies, informational materials, and emergency messages. In such a situation, events could easily outstrip communication efforts, leaving information vacuums that could quickly be filled with misinformation and rumors. This, in turn, could greatly complicate efforts to control an outbreak and contribute to the erosion of trust and confidence. One promising solution that has broken new ground is to adopt what has come to be known as the “pre-event message development” approach. In a nutshell, the idea is to carry out research on the concerns, information needs, and preferred information sources of key audiences; utilize the findings to prepare emergency messages and other materials; and carefully test them long before an event occurs [31–33]. Interest in this approach developed out of the experience of the Centers for Disease Control and Prevention (CDC) during the 2001 anthrax letter incidents. With concern about the incidents growing rapidly, CDC found itself having to field large numbers of calls from the public, requests by health officials for real-time information, and inquiries from the media. With events moving quickly and with staff already stretched assessing and managing the incidents, it became difficult to keep up with the demand for information. Reflecting on the experience, CDC later concluded that efforts to manage future emergencies would benefit from the use of a more proactive approach wherever possible. The agency enlisted the assistance of four US schools of public health, which carried out a multiyear, multisite research program to (i) understand the perceptions, information needs, self-protection concerns, preferred information outlets, and trusted sources for a range of population groups; (ii) identify core content for emergency messages; and (iii) pre-test draft message components (including the identification of confusing terms). CDC is now using these findings to craft more effective emergency messages, materials, and web content related to the human health aspects of unconventional terrorism agents. The communication challenges associated with an agroterrorism event would be immense. So too would the stakes. Should public trust and confidence be lost, they will be difficult to regain. The “pre-event” approach is not easy. It requires investment in research and a commitment to translate that research into practice. However, adoption of a “pre-event” approach increases the chances that agencies can stay “ahead of the curve” rather than falling hopelessly behind. Rather than starting from scratch and guessing what information key stakeholders and the general public want, the use of a “pre-event” approach enables agencies to build on an empirically grounded foundation. “During an actual emergency, the focus of attention can be on developing incident specific information” that can quickly be incorporated into already tested materials [31]. 8.4.4

8.4.4 A Broader Approach to Communication

Clearly, a vital part of any effective communication strategy during an agroterrorism event will involve working closely with the news media to get needed information out to the public. As practical experience and the literature on risk communication have shown, this means having the infrastructure and trained personnel to rapidly respond to media requests for information; being able to provide experienced, credible, well-informed spokespersons for interviews; being able to provide opportunities for visuals; and having press kits with relevant statistics and succinct and clear resource materials available. In addition, an effective communication strategy also requires reaching out to different types of media, including television, radio, and newspapers [34].

However, as important as the media component of a communication strategy may be, it is essential to remember that some population segments may not be reached through the media or may prefer or trust other sources of information. As noted earlier, during the 2001 foot-and-mouth disease outbreak in the United Kingdom, it was not uncommon for people to give more credence to trusted local sources, word of mouth, and the “grapevine” than to the national media or national government. This is consistent with some recent research on bioterrorism issues suggesting that, in some situations, there could be urban–rural differences in preferred information sources. For example, one recent study noted that, whereas urban respondents reported looking to the media first for information, rural respondents reported looking first to local authorities [35].

In light of these findings, it is critical for an agroterrorism communication strategy to complement the mass media component with a carefully thought-out community outreach component. This should include steps to ensure that accurate, up-to-date information is rapidly and continuously provided directly to trusted local figures (e.g. county extension agents and veterinarians) and trusted community organizations and networks (e.g. farming organizations, houses of worship). The extensive involvement of stakeholders well before an event should greatly facilitate the identification of community networks that may be important for such outreach efforts.

During the foot-and-mouth outbreak, parts of the farming community (particularly younger farmers and their families) also made extensive use of information technology. In an agroterrorism situation, it will be important to ensure that informational websites are easily found, user friendly, written in clear language, informed by an understanding of people’s concerns, and regularly updated with the latest information.
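
Pre-testing with real audiences is the only reliable way to catch confusing terms, but draft web content can also be screened quickly with standard readability heuristics before it ever reaches a focus group. The sketch below applies the widely used Flesch-Kincaid grade-level formula to a draft message; the crude syllable counter and the example text are assumptions for illustration, and the score is a rough screen rather than a substitute for audience testing.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count vowel groups, adjust for a trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level: lower scores indicate easier reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

draft = ("Foot-and-mouth disease does not normally infect people. "
         "It spreads quickly among cattle, sheep, and pigs. "
         "Do not move animals until officials say it is safe.")
print(round(flesch_kincaid_grade(draft), 1))
```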

8.4.5 Ability to Rapidly Expand Crisis Hotlines and Peer/Social Support

As noted earlier, many people having to cope with the impacts of the foot-and-mouth outbreak turned to crisis hotlines and stress helplines. With an agroterrorism attack likely to produce widespread emotional distress, it will be vital for emergency response plans to include mechanisms for rapidly expanding crisis/stress hotline services. Facilities, needed equipment and resources, and trained personnel should be identified in advance, as should ways of communicating the availability of the services. In addition, strategies for facilitating peer/social support should be included in planning. For example, mental health professionals can play “an educational and consultative role for veterinary surgeons, farming organizations, self-help groups . . . and local radio” [20, p. 275].

8.4.6 Special Services and Materials for Children

In any disaster situation, children have unique vulnerabilities. They may be exposed to the same frightening sights, sounds, and smells as adults, but not have the maturity or experience to interpret and understand what is going on around them. Although children are often resilient, there is no doubt that an agroterrorism event would be a highly distressing situation for them. It is important, therefore, for agroterrorism preparedness planning to include appropriate mental health support and interventions for children. This should include a particular focus on schools and day-care settings. “Children spend the majority of their waking hours at school or in a child-care setting. These settings are familiar and comfortable to children, and generally are experienced as safe, secure environments. As such, school and child-care settings are excellent locations for working with children before, during, and after a disaster” [22, p. 24]. In addition, it will be important to develop age-appropriate informational materials, explanations, coloring books, and messages to help children and families understand and cope with the situation [22].

8.4.7 Support for Frontline Personnel

As the foot-and-mouth epidemic demonstrated, the job of managing a large-scale outbreak can put frontline personnel under enormous strain. Likewise, during an agroterrorism event, long work hours, fatigue, extended periods of time away from home and family, the risk of injury, regular exposure to upsetting images, the uncertainty of the situation, and perhaps even public hostility could put frontline personnel at significantly increased risk for emotional distress. Agroterrorism planning, therefore, should include a robust mental health component aimed at supporting frontline personnel. This should include such measures as predeployment briefings, provision of self-care and stress management information, regular rest breaks, buddy/peer support arrangements, and support groups.

8.4.8 Human Health Issues

To the extent that human health concerns arise in relation to a suspected or actual agroterrorism attack (e.g. when zoonotic agents are involved or simply when rumors of possible human health effects gain prominence), it will be essential for agencies and spokespersons with a high level of credibility on health issues to be at the center of public communication efforts.

Research on terrorism situations involving unconventional agents (including biological threats) has shown that many of people’s concerns, and many of the questions they want answered, relate directly or indirectly to health [32, 35–37]. In addition, other research on terrorism in general has demonstrated that when people are asked who they would trust to “give accurate and reliable information about what is happening and what to do in the event of a terrorist attack,” it was the professionals and organizations knowledgeable about health and health care that were ranked the highest [38]. The CDC was ranked the highest, with 84% of respondents indicating they would either “completely trust” or “somewhat trust” the agency to provide accurate and reliable information. Others on the list included “Doctor who is expert” (83%), the Surgeon General (76%), and the National Institutes of Health (75%). Figures such as the Secretary of Homeland Security and the Attorney General ranked much lower (68% and 65%, respectively).

The lesson is clear. If human health issues are involved in an agroterrorism event, communication with the general public needs to put health issues at the center, messages need to be “front-loaded” with information that answers people’s health questions, and the information should be provided by spokespersons recognized for having high credibility on health issues (e.g. the CDC).

8.4.9 More Realistic Plans and Exercises

There is a pressing need to better integrate social, psychological, and communication issues into agroterrorism contingency plans and training exercises. Many plans and exercises continue to give only minimal attention to these crucial considerations. Key areas (e.g. provision of appropriate services, development of an effective risk communication strategy, maintenance of trust and confidence) need to be explicitly addressed, and relevant roles and coordination issues need to be delineated and practiced on a regular basis. Without adequate consideration of relevant social, psychological, and communication issues, plans and exercises will be unrealistic and of limited value in preparing agencies and responders to deal with the complex challenges posed by an agroterrorism attack.

8.5 RESEARCH DIRECTIONS

In addition to implementing the lessons learned from the foot-and-mouth outbreak and other relevant experiences, it will be important in the coming years to carry out further research related to the social, psychological, and behavioral aspects of agroterrorism. In this regard, the topics identified in the 2002 National Research Council report on agricultural terrorism continue to be relevant [1]. For example, it would be useful to conduct additional work on how best to assist individuals and communities affected by an agroterrorism attack and how best to speed recovery.

Another key area of research involves improving our understanding of the factors that affect compliance with infection control measures during large-scale agricultural disease outbreaks. What factors serve to facilitate compliance and what factors make compliance less likely? How, for example, do different work practices, economic situations, or local customs come into play? A better understanding of such factors will aid in the development of more realistic and more effective infection control strategies.

Finally, it would be valuable to expand research on emergency communication during large-scale agricultural disease outbreaks. It is clear from the foot-and-mouth experience that communication problems exacerbated the outbreak’s impacts and damaged public trust and confidence. The stakes and the costs of failure could be even higher in an agroterrorism event. There is, therefore, a pressing need for additional research to better understand people’s concerns, information needs, and preferred information sources in relation to agroterrorism threats. Improved emergency communication—including the development of empirically grounded, pre-event messages—could play an important role in reducing an outbreak’s spread, mitigating its impacts, and maintaining trust, social cohesion, and public confidence.

ACKNOWLEDGMENTS

This chapter is based, in part, on fieldwork conducted by the author in the United Kingdom during and after the 2001 foot-and-mouth disease outbreak. The author is grateful to the many individuals and organizations that helped facilitate this work. Special thanks are due to the US Embassy in London, the Department for Environment, Food and Rural Affairs, the Rural Stress Information Network, the Ministry of Defence, and the National Farmers Union. Thanks are due as well to A. Becker, D. Franz, and R. Gurwitch, who provided helpful comments on earlier versions of the manuscript. Finally, the author wishes to thank the Lister Hill Center for Health Policy and the Smith Richardson Foundation (International Security and Foreign Policy Program), which provided support for the research.

REFERENCES

1. National Research Council (2002). Countering Agricultural Bioterrorism, Committee on Biological Threats to Agricultural Plants and Animals. The National Academies Press, Washington, DC.
2. Becker, S. M. (2001). Meeting the threat of weapons of mass destruction terrorism: toward a broader conception of consequence management. Mil. Med. 166(S2), 13–16.
3. Anderson, I. (2002). Foot and Mouth Disease 2001: Lessons to be Learned Inquiry, Stationery Office, London.
4. Donaldson, A. (2004). Clinical signs of foot-and-mouth disease. In F. Sobrino, E. Domingo, Eds. Foot and Mouth Disease: Current Perspectives, Horizon Bioscience, Norfolk, pp. 93–102.
5. Brown, F. (2004). Stepping stones in foot-and-mouth research: a personal view. In F. Sobrino, E. Domingo, Eds. Foot and Mouth Disease: Current Perspectives, Horizon Bioscience, Norfolk, pp. 1–17.
6. Rowlands, D. J., Ed. (2003). Foot-and-mouth Disease, Elsevier Science B.V., Amsterdam.
7. Blancou, J., Leforban, Y., and Pearson, J. E. (2004). Control of foot-and-mouth disease: role of international organizations. In F. Sobrino, E. Domingo, Eds. Foot and Mouth Disease: Current Perspectives, Horizon Bioscience, Norfolk, pp. 425–426.
8. National Audit Office (2002). The 2001 Outbreak of Foot and Mouth Disease, Stationery Office, London.
9. Bennett, K., Carroll, T., Lowe, P., and Phillipson, J., Eds. (2002). Coping with Crisis in Cumbria: Consequences of Foot and Mouth Disease, Centre for the Rural Economy, University of Newcastle upon Tyne, Newcastle upon Tyne.
10. Western Morning News (2001). Foot and Mouth: How the Westcountry Lived Through the Nightmare, Western Morning Press, Plymouth.
11. Royal Society of Edinburgh (2002). Inquiry Into Foot and Mouth Disease in Scotland, Royal Society of Edinburgh, Edinburgh, Scotland.
12. Becker, S. M. (2004b). Learning from the 2001 foot and mouth disease outbreak: social, behavioral and communication issues. Scientific Panel on Agricultural Bioterrorism: Countering the Potential for Impact of Biothreats to Crops and Livestock, American Association for the Advancement of Science, Seattle, Washington, April 14, 2004.
13. Ingham, J. (2001). Look at the human suffering caused by efforts to keep this invisible enemy at bay. Daily Express, p. 33.
14. Bennett, K., and Phillipson, J. (2004). A plague upon their houses: revelations of the foot and mouth disease epidemic for business households. Sociol. Ruralis 44(3), 261–284.
15. Peck, D. F., Grant, S., McArthur, W., and Godden, D. (2002). Psychological impact of foot-and-mouth disease on farmers. J. Ment. Health 11(5), 523–531.
16. Garnefski, N., Baan, N., and Kraaij, V. (2005). Psychological distress and cognitive emotion regulation strategies among farmers who fell victim to the foot-and-mouth crisis. Pers. Individ. Dif. 38(6), 1317–1327.
17. Olff, M., Koeter, M. W. J., Van Haaften, E. H., Kersten, P. H., and Gersons, B. P. R. (2005). Impact of a foot and mouth disease crisis on post-traumatic stress symptoms in farmers. Br. J. Psychiatry 186(2), 165–166.
18. Deaville, J., Kenkre, J., Ameen, J., Davies, P., Hughes, H., Bennett, G., Mansell, I., and Jones, L. (2003). The Impact of the Foot and Mouth Outbreak on Mental Health and Well-being in Wales, Institute of Rural Health and University of Glamorgan, Glamorgan, November.
19. Hannay, D., and Jones, R. (2002). The effects of foot-and-mouth on the health of those involved in farming and tourism in Dumfries and Galloway. Eur. J. Gen. Pract. 8, 83–89.
20. Peck, D. F. (2005). Foot and mouth outbreak: lessons for mental health services. Adv. Psychiatr. Treat. 11(4), 270–276.
21. Mort, M., Convery, I., Baxter, J., and Bailey, C. (2005). Psychosocial effects of the 2001 UK foot and mouth disease epidemic in a rural population: qualitative diary based study. Br. Med. J. 331, 1234.
22. Gurwitch, R. H., Kees, M., Becker, S. M., Schreiber, M., Pfefferbaum, B., and Diamond, D. (2004). When disaster strikes: responding to the needs of children. Prehospital Disaster Med. 19(1), 21–28.
23. Mercer, I. (2002). Crisis and Opportunity: Devon Foot and Mouth Inquiry 2001, Devon Books, Tiverton, Devon.
24. Nerlich, B., Hillyard, S., and Wright, N. (2005). Stress and stereotypes: children’s reactions to the outbreak of foot and mouth disease in the UK in 2001. Child. Soc. 19(5), 348–359.
25. Beeton, S. (2001). How foot and mouth disease affected a rural continence service. Nurs. Times 97(40), 59–60.
26. Gregory, A. (2005). Communication dimensions of the UK foot and mouth disease crisis, 2001. J. Public Aff. 5(3–4), 312–328.
27. Poortinga, W., Bickerstaff, K., Langford, I., Niewohner, J., and Pidgeon, N. (2004). The British 2001 Foot and Mouth crisis: a comparative study of public risk perceptions, trust and beliefs about government policy in two communities. J. Risk Res. 7(1), 73–90.
28. Hagar, C., and Haythornthwaite, C. (2005). Crisis, farming & community. J. Community Inform. 1(3), 41–52.
29. Bush, J., Phillimore, P., Pless-Lulloli, T., and Thomson, C. (2005). Carcass disposal and siting controversy: risk, dialogue and confrontation in the 2001 foot-and-mouth outbreak. Local Environ. 10(6), 649–664.
30. Levin, J., Gilmore, K., Nalbone, T., and Shepherd, S. (2005). Agroterrorism workshop: engaging community preparedness. J. Agromedicine 10(2), 7–15.
31. Vanderford, M. L. (2004). Breaking new ground in WMD risk communication: the pre-event message development project. Biosecur. Bioterror. 2(3), 193–194.
32. Becker, S. M. (2004a). Emergency communication and information issues in terrorism events involving radioactive materials. Biosecur. Bioterror. 2(3), 195–207.
33. Becker, S. M. (2005). Addressing the psychosocial and communication challenges posed by radiological/nuclear terrorism: key developments since NCRP 138. Health Phys. 89(5), 521–530.
34. U.S. Department of Health and Human Services (2002). Communicating in a Crisis: Risk Communication Guidelines for Public Officials, Center for Mental Health Services, Substance Abuse and Mental Health Services Administration, U.S. Department of Health and Human Services, Washington, DC.
35. Wray, R., and Jupka, K. (2004). What does the public want to know in the event of a terrorist attack using plague? Biosecur. Bioterror. 2(3), 208–215.
36. Glik, D., Harrison, K., Davoudi, M., and Riopelle, D. (2004). Public perceptions and risk communication for botulism. Biosecur. Bioterror. 2(3), 216–223.
37. Henderson, J. N., Henderson, L. C., Raskob, G. E., and Boatright, D. T. (2004). Chemical (VX) terrorist threat: public knowledge, attitudes, and responses. Biosecur. Bioterror. 2(3), 224–228.
38. Marist College Institute for Public Opinion (2003). How Americans Feel About Terrorism and Security: Two Years After 9/11. Survey conducted on behalf of the National Center for Disaster Preparedness and the Children’s Health Fund, August.

FURTHER READING

Brown, C. (2003). Vulnerabilities in agriculture. J. Vet. Med. Educ. 30(2), 112–114.
Chalk, P. (2004). Hitting America’s Soft Underbelly: The Potential Threat of Deliberate Biological Attacks Against the U.S. Agricultural and Food Industry, The Rand Corporation, Santa Monica, CA.
Hugh-Jones, M. E. (2002). Agricultural bioterrorism. In High-Impact Terrorism: Proceedings of a Russian–American Workshop, National Research Council in Cooperation with the Russian Academy of Sciences, The National Academies Press, Washington, DC, pp. 219–232.
