Wrongful convictions and forensic science

Received: 26 July 2020

Revised: 3 October 2020

Accepted: 18 October 2020

DOI: 10.1002/wfs2.1406

OVERVIEW

Catherine L. Bonventre
Department of Criminal Justice, North Carolina Agricultural and Technical State University, Greensboro, North Carolina

Correspondence: Catherine L. Bonventre, Department of Criminal Justice, North Carolina Agricultural and Technical State University, Greensboro, NC. Email: [email protected]

Abstract
More than 2,600 exonerations have been documented in the United States since 1989. Forensic science—in the form of postconviction DNA testing—has played a critical role in the revelation that wrongful convictions are a problematic feature of criminal justice. Yet, forensic science is also among the many factors—including eyewitness misidentification, false confessions, informants, and more—that are correlates of wrongful convictions. Forensic science contributes to erroneous convictions when analysts provide invalid testimony at trial or when such evidence fails to correct false crime theories. Moreover, while intentional forensic misconduct certainly exists, the effects of confirmation biases may present a greater threat to forensic analyses. The preceding mechanisms and reform efforts are discussed.

This article is categorized under:
Jurisprudence and Regulatory Oversight > Expert Evidence and Narrative

KEYWORDS
confirmation bias, exonerations, forensic science, misconduct, wrongful convictions

1 | INTRODUCTION

In 2015, the Federal Bureau of Investigation (FBI) announced the results of an ongoing review of FBI forensic examiners' testimony on microscopic hair comparison evidence used in criminal cases that occurred before the year 2000. According to the FBI, "[A]t least 90 percent of trial transcripts the Bureau analyzed as part of its Microscopic Hair Comparison Analysis Review contained erroneous statements" (FBI, 2015, para. 1). Specifically, of the 286 cases involving inculpatory testimony at trial that the FBI and its collaborators had reviewed by the time of the announcement, 257 contained erroneous statements (FBI, 2015). The FBI conducted the review in collaboration with the U.S. Department of Justice (DOJ), the Innocence Project, and the National Association of Criminal Defense Lawyers (FBI, 2015). The review began after The Washington Post reported in 2012 that, for years, the Justice Department had been aware of faulty forensic work that might have contributed to wrongful convictions but had not thoroughly reviewed the cases (Hsu, 2012a). Government officials identified almost 3,000 cases involving microscopic hair testimony and reports that would be subject to review (FBI, 2015). Santae Tribble, of Washington, DC, was wrongly convicted of murder based, at least in part, on FBI hair testimony. He spent nearly 30 years in prison and was exonerated when DNA testing on the evidentiary hairs excluded him as the source (Hsu, 2012b). Because of Tribble's wrongful conviction, a judge ordered the DC government to pay him $13.2 million (Hsu, 2016).

Wrongful convictions cause psychological, emotional, and economic injuries to the innocent people convicted of crimes they did not commit (Campbell & Denov, 2004; Grounds, 2004; Westervelt & Cook, 2014). They also produce social harms to the original crime victims (Williamson, Stricker, Irazola, & Niedzwieki, 2016), disrupt family relationships (Westervelt & Cook, 2010), may reduce public trust in our justice systems (Norris & Mullinix, 2019), and lead to additional victims of the actual perpetrators who remain at liberty (Acker, 2012/2013; Baumgartner, Grigg, Ramirez, & Lucy, 2017/2018; Norris, Weintraub, Acker, Redlich, & Bonventre, 2019).

For this review, the term wrongful conviction means the conviction of a factually innocent person—such as when the wrong person is identified as the perpetrator and convicted of a crime that someone else committed, or when no crime occurred and someone was nevertheless held responsible. Several common factors are correlated with wrongful convictions, including eyewitness misidentification, false confessions, forensic science issues, inadequate defense lawyering, informants, perjury, and official misconduct (Borchard, 1932; Garrett, 2011; Gross, Jacoby, Matheson, Montgomery, & Patil, 2005; Scheck, Neufeld, & Dywer, 2003; West & Meterko, 2015/2016). The presence of forensic science on this list of correlates reveals its complicated relationship with wrongful convictions. While DNA evidence has been a powerful tool in helping to reduce skepticism that justice systems convict factually innocent people, research on exonerations indicates that forensic science also contributes to these erroneous outcomes (Cole, 2012; Cole & Thompson, 2013; Garrett, 2011; Thompson, 2008). The study of wrongful convictions and forensic science spans multiple disciplines—including legal scholarship, social psychology, social science, science and technology studies, and forensic science. This overview highlights some perspectives from those domains on how forensic science contributes to wrongful convictions in the United States and considers some recent efforts to reduce that contribution.

2 | COUNTING WRONGFUL CONVICTIONS

Researchers identify and keep track of wrongful convictions by counting exonerations. An exoneration entails a convicted person's official clearance of a crime based on evidence of innocence. In the United States, the two primary organizations that compile exoneration information at the national level are the Innocence Project and the National Registry of Exonerations (NRE). Attorneys Barry Scheck and Peter Neufeld founded the Innocence Project at Cardozo Law School in 1992. The organization provides legal advocacy to exonerate the wrongly convicted through postconviction DNA testing and engages in criminal justice policy reform advocacy (Innocence Project, n.d.-a). The NRE was founded in 2012 and is currently a project of the Newkirk Center for Science and Society at the University of California Irvine, the University of Michigan Law School, and Michigan State University College of Law. The NRE compiles and analyzes information on all known exonerations from 1989 to the present. To be included in the NRE database, an exoneration must be based on some official declaration of innocence of the crime or official relief from all legal consequences of the crime due to evidence of innocence (NRE, n.d.-a). Examples of such official actions include executive pardons, acquittals, or dismissals based on evidence of innocence (NRE, n.d.-a). Thus, an exoneree is a "person who was convicted of a crime and later officially declared innocent of that crime, or relieved of all legal consequences of the conviction because evidence of innocence that was not presented at trial required reconsideration of the case" (NRE, n.d.-a).

As of July 2020, postconviction DNA testing has led to the exoneration of over 360 people in the United States (Innocence Project, n.d.-a). Overall, the number of U.S. exonerations identified from 1989 to July 2020—whether accomplished through DNA or other means—exceeds 2,600 (NRE, n.d.-b). Wrongful conviction scholars acknowledge that the exoneration counts likely represent a fraction of all wrongful convictions, and the actual rate of such convictions "is not merely unknown but unknowable" (Gross, O'Brien, Hu, & Kennedy, 2014, p. 7230). Thus, researchers have estimated wrongful conviction rates based on analyses of death-penalty convictions (Gross et al., 2014; Gross & O'Brien, 2008; Risinger, 2007), the opinions of justice professionals (Ramsey & Frank, 2007; Zalman, Smith, & Kiger, 2008), postconviction DNA testing on archived biological evidence (Roman, Walsh, Lachman, & Yahner, 2012; Walsh, Hussemann, Flynn, Yahner, & Golian, 2017), and inmate self-report data (Loeffler, Hyatt, & Ridgeway, 2019; Poveda, 2001). For example, recent analyses have produced estimates ranging from 4.1% among death-sentenced defendants (Gross et al., 2014) and 6% among inmates serving noncapital felony sentences in Pennsylvania (Loeffler et al., 2019) to more than 11% among 1970s and 1980s convictions involving sexual assault in Virginia (Walsh et al., 2017).

3 | FORENSIC SCIENCE AS A CORRELATE OF WRONGFUL CONVICTIONS

Like all human enterprises, forensic science is fallible. The increasing crime rate in the 1970s was accompanied by a growth in the number of crime laboratories in the United States (Peterson & Leggett, 2007).

According to Peterson and Leggett (2007), "[w]hile the growth was necessary, it was unregulated and without clear guidance from, or adherence to, national standards. Thus, while crime-laboratory services expanded, some of the underlying problems of quality assurance and minimum scientific standards simply multiplied" (p. 625).

Early studies of DNA exonerations indicated that forensic science evidence was a correlate of the cases studied (see, e.g., Connors, Lundregan, Miller, & McEwan, 1996). For example, Scheck and colleagues identified "tainted or fraudulent science" as a contributor in one-third of the more than 60 DNA exonerations they analyzed (Scheck et al., 2003, p. 246). Using data provided by the Innocence Project, Saks and Koehler reviewed 86 DNA exonerations and found that 63% of the cases involved "forensic science testing errors" and 27% involved "false/misleading testimony by forensic scientists" (Saks & Koehler, 2005, p. 892), although it is unclear how the authors of this latter study operationalized their terms. Later, Garrett's analysis of the first 200 DNA exonerations found that 57% of the exonerees "were convicted based on forensic evidence, chiefly serological analysis and microscopic hair comparison" (Garrett, 2008, p. 60). An analysis of DNA exonerations from 1989 through 2014 indicated that the misapplication of forensic science was a contributing factor in 47% of the 325 cases studied (West & Meterko, 2015/2016). In that study, the misapplication of forensic science included "scientific error, overstatement, gross negligence, or misconduct," but not "instances in which forensic science was applied properly" (West & Meterko, 2015/2016, pp. 743–744). However, when researchers examine all DNA and non-DNA exonerations together, the apparent role of forensic science drops. For example, Gross and Shaffer (2012) found that "false or misleading forensic evidence" was a contributing factor in 24% of the 873 exonerations identified between 1989 and 2012. Notably, the proportion of cases in the NRE database in which forensic science was a contributing factor appears to be stable. As of July 5, 2020, the percentage of cases involving "false or misleading forensic evidence" at trial is 24% (n = 636 out of 2,640) (NRE, n.d.-b). Researchers at the NRE currently define false or misleading forensic evidence to mean that the "conviction was based at least in part on forensic information that was (1) caused by errors in forensic testing, (2) based on unreliable or unproven methods, (3) expressed with exaggerated and misleading confidence, or (4) fraudulent" (NRE, n.d.-a).

These findings have limitations. First, although studying exonerations has advanced our understanding of the factors that contribute to wrongful convictions, the known cases are not necessarily representative of all wrongful convictions. For example, the DNA exonerations consist primarily of sexual assault and homicide cases (West & Meterko, 2015/2016), making the data unrepresentative of criminal cases in general (Garrett & Neufeld, 2009; Gross, 2008). The prevalence of those case types among DNA exonerations makes sense given that violent crimes such as sexual assaults and homicides are likely to have biological evidence collected (Roman, Reid, Chalfin, & Knight, 2009; Weedn & Hicks, 1998). Thus, the contributing factors differ in degree between DNA exonerations and exonerations in general, the latter of which include a more extensive range of crime categories.
Second, multiple factors can and do contribute to convictions; thus, it would be difficult to say, in a scientific sense, that any particular correlate caused a wrongful conviction (see Leo, 2005). As Cole and Thompson (2013) observed, counting factors that were present at trial is distinct from "a measure of whether that factor caused the wrongful conviction or even of how much influence that factor may have had on various criminal justice system actors" (p. 118). Indeed, LaPorte recently analyzed 133 DNA exonerations that involved forensic science as a contributing factor and found that 98% of the cases involved multiple additional contributing factors (LaPorte, 2018).

Finally, it is important to note that, over time, a variety of terms describing forensic science issues have appeared in the literature. Some argue that terms such as faulty, bad, junk, unreliable, or fraudulent, when used broadly to describe forensic science as a contributing factor to wrongful convictions, obscure the multiple types of errors or issues that are involved (see, e.g., Christensen, Crowder, Ousley, & Houck, 2014). Also, such loose terminology may engender disputes regarding the extent to which forensic science is a contributor to wrongful convictions (see Cole, 2012; Collins & Jarvis, 2009).1

Evidence also indicates that the role of forensic science as a contributor to wrongful convictions, at least as reflected in DNA exonerations, may be declining over time. In their analysis of DNA exonerations through 2014, in which the misapplication of forensic science contributed to 47% of the cases, West and Meterko (2015/2016) examined the contributing factors by year of conviction. They found that the misapplication of forensic science dropped to 20% for convictions obtained in 1996 or later (West & Meterko, 2015/2016). At least in the DNA cases, which primarily involve sexual assault and murder, this may be because of the increasing use of DNA testing pretrial (West & Meterko, 2015/2016). It may also be due to the declining use of conventional serology and hair microscopy; these disciplines were the most frequent contributors, at 55% and 47%, respectively, among the 154 DNA exonerations involving the misapplication of forensic science studied by West and Meterko (2015/2016). Similarly, LaPorte's (2018) study of the DNA exonerations identified by the NRE as having forensic science as a factor found that 83% of the convictions in the 133 cases examined occurred before 1991.

It is important to remember, though, that among all exonerations in the NRE database—including both DNA-based and non-DNA cases—24% involve false or misleading forensic evidence (NRE, n.d.-b).
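As a concrete illustration of the counting exercise behind figures like these, the short Python sketch below tabulates contributing-factor percentages from a handful of made-up exoneration records and separately counts how many forensic-evidence cases also carry other factors. It is a minimal sketch with hypothetical field names and data; it is not the coding scheme used by the NRE or the Innocence Project.

from collections import Counter

# Hypothetical records; each lists the contributing factors tagged for one exoneration.
records = [
    {"id": 1, "factors": {"false or misleading forensic evidence", "false confession"}},
    {"id": 2, "factors": {"mistaken witness identification"}},
    {"id": 3, "factors": {"false or misleading forensic evidence", "official misconduct"}},
    {"id": 4, "factors": {"perjury or false accusation"}},
]

total = len(records)
factor_counts = Counter(factor for record in records for factor in record["factors"])

# Share of exonerations in which each factor was tagged (presence, not causation).
for factor, count in factor_counts.most_common():
    print(f"{factor}: {count}/{total} = {count / total:.0%}")

# Among the forensic-evidence cases, how many also involved at least one other factor?
forensic_cases = [r for r in records if "false or misleading forensic evidence" in r["factors"]]
with_other_factors = sum(1 for r in forensic_cases if len(r["factors"]) > 1)
print(f"Forensic cases with additional factors: {with_other_factors}/{len(forensic_cases)}")

The point of the sketch is the one made in the text: such tallies record the presence of a factor at trial, not its causal influence.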

3.1 | How forensic science contributed to known wrongful convictions

Some commentators have argued that merely reporting the percentage of exonerations in which forensic science evidence was present at trial may have the unintended consequence of inflating the perception of the extent to which forensic science contributes to wrongful convictions (see Collins & Jarvis, 2009). While it may be true that the mere presence of forensic science evidence at trial does not necessarily mean the evidence was faulty, scrutiny of forensic science testimony tends to indicate that there are problems. To better understand the role of forensic science in wrongful convictions, Garrett and Neufeld reviewed the trial testimony in DNA exoneration cases to determine the validity of the forensic analysts' statements (Garrett & Neufeld, 2009; see also Garrett, 2011, for an expanded analysis). The study authors found that in 60% of the 137 trial transcripts they examined, the analysts provided "invalid testimony," which they defined as "testimony with conclusions misstating empirical data or wholly unsupported by empirical data" (Garrett & Neufeld, 2009, p. 2). Examples included analysts interpreting nonprobative evidence as inculpatory, discounting exculpatory evidence, presenting inaccurate statistics, and more (Garrett & Neufeld, 2009). The invalid testimony was provided mainly in cases involving serology and microscopic hair comparison because, as the authors noted, most of the cases in the study set involved sexual assault—cases in which biological evidence of that nature was routinely collected (Garrett & Neufeld, 2009). The Garrett and Neufeld study shed light on an empirically underexamined aspect of forensic practice—trial testimony—yet the authors were careful to note the study's limitations. For example, they noted that their data, drawn primarily from cases involving sexual assault, were not representative of criminal cases in general or of current forensic testimony. They also made no claims about the prevalence of invalid testimony. Nevertheless, Cole (2012) observed that one of the crucial contributions of such trial transcript analysis is that it moves beyond counting exonerations in which forensic science evidence was present at trial and specifies how forensic science contributed to the cases (see Cole, 2007).

In another study, Gould and colleagues attempted to identify the factors that statistically differentiate wrongful-conviction cases from innocent-defendant cases that resulted in pretrial dismissals or acquittals at trial ("near misses") (Gould, Carrano, Leo, & Hail-Jares, 2014). The authors compared a set of wrongful convictions to a control group of near misses and, using logistic regression, identified a set of 10 statistically significant factors that influence the likelihood of a wrongful conviction. The factors included "the age and criminal history of the defendant, punitiveness of the state, Brady violations, forensic error, weak defense, weak prosecution case, family defense witness, non-intentional misidentification, and lying by a non-eyewitness" (Gould et al., 2014, p. 477). Forensic science contributed to the wrongful convictions by compounding other mistakes rather than correcting them, or through officials' failure to use it because other evidence of guilt made the forensic evidence appear nonprobative (Gould et al., 2014).
The study also included a qualitative analysis by an expert panel of criminal justice stakeholders, who found that "the most common forensic error was improper testimony at trial by a state's witness who overstated the precision or inculpatory nature of the results" (p. 77) and that errors in laboratory testing or fraud were less frequent. As the authors observed, "In these instances, it appears that the state used forensic science merely to confirm its case rather than provide a rigorous, independent assessment of the defendant's guilt" (Gould et al., 2014, p. 500). On the other hand, the authors found that properly used forensic science evidence increased the likelihood that innocent indicted defendants would have their cases dismissed. Thus, as Cole has argued, viewing forensic science as a discrete item on a list of contributing factors may be unhelpful. Instead, the "function of forensic science should be to correct false theories developed by police as well as to support true theories" (Cole, 2012, p. 735; see also Cole & Thompson, 2013). This framework—thinking about forensic science as a system factor—supports the argument that the role other criminal justice actors play in the use of forensic science evidence should be addressed (see, e.g., Dioso-Villa, 2016; Laurin, 2013; Oliva & Beety, 2017). As the story below illustrates, prosecutors may disregard or explain away properly produced exculpatory DNA testing results (Appleby & Kassin, 2016). In their analysis of the first 325 DNA exonerations, West and Meterko (2015/2016) cited 28 cases in which DNA testing excluded the defendants at the time of trial—yet they were still convicted. Thus, from a system perspective, more attention should be paid to how prosecutors and other criminal justice actors use—or misuse—forensic evidence (Gershman, 2003).

The Norfolk Four false-confession case illustrates the limits of the power of proper forensic science evidence to correct erroneous police or prosecution theories about criminal events, especially when other powerful forms of evidence, such as confessions, are present. Beginning in 1997, investigators with the Norfolk Police Department arrested four Navy sailors for the rape and murder of a young woman. After lengthy interrogations, the sailors gave false confessions, which they subsequently retracted (Wells & Leo, 2008). Each of the four defendants provided biological samples for forensic DNA testing, the results of which excluded each as the source of the semen found on the victim (Wells & Leo, 2008). Knowledge of the exculpatory DNA results should have prompted the investigators to reconsider the guilt of the original four defendants. Instead, the prosecutors continually reshaped their narrative of how the crime occurred and sought additional suspects to explain away the inconsistent biological evidence (Wells & Leo, 2008). Even after the actual perpetrator confessed that he had committed the rape and murder alone—and his DNA matched the crime-scene DNA—the prosecution of the Norfolk Four continued through to their convictions. In 2017, 20 years after the case began, then-Virginia Governor Terry McAuliffe pardoned the four sailors (Jackman, 2017a).

4 | A LANDMARK CRITICAL ANALYSIS OF FORENSIC SCIENCE

In 2005, Congress directed the National Academy of Sciences (NAS) to conduct a comprehensive study of forensic science in the United States. That study, completed by the Committee on Identifying the Needs of the Forensic Science Community, culminated in a 2009 report (the National Research Council [NRC] report2) that identified several deficiencies in forensic science practice, including case backlogs, underfunding, understaffing, lack of standardized methods, lack of oversight, and limited systematic research to support the validity of many of the forensic science disciplines (NRC, 2009). While acknowledging the valuable role of forensic science evidence in the successful prosecution of criminal cases, the committee opined:

Those advances, however, also have revealed that, in some cases, substantive information and testimony based on faulty forensic science analyses may have contributed to wrongful convictions of innocent people. This fact has demonstrated the potential danger of giving undue weight to evidence and testimony derived from imperfect testing and analysis. Moreover, imprecise or exaggerated expert testimony has sometimes contributed to the admission of erroneous or misleading evidence. (NRC, 2009, p. 4)

The committee issued 13 recommendations to advance forensic science practice. Two of the more controversial proposals were removing crime laboratories from law enforcement control and the creation of a federal entity to "promote the development of forensic science practice into a mature field of multidisciplinary research and practice" (NRC, 2009, p. 19; see also Ballou, 2019). The committee classified forensic science practices as either "pattern/experience evidence" or "analytical evidence"—the former including fingerprints, bite marks, and microscopic hair comparison, and the latter including the analysis of DNA and chemicals (e.g., controlled substances) (NRC, 2009). The committee recommended funding for peer-reviewed research "to address the issues of accuracy, reliability, and validity in the forensic science disciplines," especially for those falling under the pattern/experience evidence category (NRC, 2009, p. 22). Recommendation 5 stressed the need for research on the sources of human error and bias in forensic practice and on procedures designed to minimize such errors and bias (NRC, 2009).

5 | HUMAN FACTORS

Over the past several years, there has been a growing body of literature on the human factors in forensic science practice. The study of human factors is "a multidisciplinary field that examines ways in which human performance (e.g., the judgments of experts) can be influenced by cognitive, perceptual, organizational, social and cultural factors, and other human tendencies" (National Commission on Forensic Science [NCFS], 2017a). The next sections address some of these concerns.

5.1 | The problem with "bad actors"

According to the Bureau of Justice Statistics, there were 409 publicly funded crime laboratories in the United States in 2014, with over 14,000 employees, more than half of whom were "analysts or examiners who prepared and analyzed evidence and reported on their conclusions" (Durose, Burch, Walsh, & Tiry, 2016, p. 5).

It is reasonable to assume that most forensic analysts are well intentioned and approach their duties in good faith. However, research has documented several high-profile laboratory scandals (Turvey, 2013). For example, the NRC report included the case of Fred Zain, a trooper with the Serology Division of the West Virginia State Police Crime Laboratory, who produced false evidence and testimony in numerous cases in the 1980s (NRC, 2009). Zain's misconduct was so egregious that West Virginia's highest court ruled his testimony invalid and unreliable as a matter of law (In the Matter of West Virginia State Police Crime Laboratory, Serology Division, 1993). As of this writing, the NRE database includes seven exonerations in which Zain provided false or misleading trial testimony (NRE, n.d.-c; see also Giannelli, 2010a). More recently, Annie Dookhan, a chemist at the Hinton Drug Laboratory in Massachusetts, engaged in "drylabbing," tampered with evidence, and provided false testimony about her credentials (Cunha, 2014). Drylabbing occurs when a scientist reports on laboratory tests that were not performed. In 2013, Dookhan pleaded guilty and was sentenced to three to five years in prison (Ellement, 2016). Dookhan's misconduct ultimately led prosecutors in Massachusetts to dismiss over 20,000 drug cases (Jackman, 2017b). How many of those defendants were innocent remains unknown.

Such laboratory scandals, in which intentional forensic science misconduct or errors favored the prosecution (Giannelli, 1997, 2007; Turvey, 2013), provide some evidence that pro-prosecution bias exists. Because law enforcement agencies administer most publicly funded crime laboratories in the United States (NRC, 2009; Peterson, Mihajlovic, & Bedrosian, 1985), some scholars have suggested that it is this structural relationship that creates the potential for pro-prosecution bias (Giannelli, 1997; Jonakait, 1991; Moenssens, 1993; NRC, 2009; Thomson, 1974). Consequently, many have argued that crime laboratories should be removed from the administrative control of law enforcement (DiFonzo, 2005; Giannelli, 1997; Scheck et al., 2003). As stated in the NRC report, "The best science is conducted in a scientific setting as opposed to a law enforcement setting. Because forensic scientists are driven in their work by a need to answer a particular question related to the issues of a particular case, they sometimes face pressure to sacrifice appropriate methodology for the sake of expediency" (NRC, 2009, p. 23). Presumably, independence from law enforcement would limit the potential for pro-police or pro-prosecution bias among analysts, including motivational and cognitive biases (Giannelli, 2010b). Laurin (2013) cautions, however, that such arrangements would require "thoughtful calibration" and research on "best practices in the division of responsibilities and collaboration between and among investigators, prosecutors, and crime laboratory personnel" (p. 1112). Furthermore, Findley and Scott (2006) warn that "organizational independence from law enforcement is not a guarantee that forensic scientists will not share a police investigator's tunnel vision" (p. 394). The Annie Dookhan scandal is a case in point: the laboratory that employed Dookhan during her misconduct was not a police laboratory; it was operated by the Massachusetts Department of Public Health.
The state Office of the Inspector General (OIG) investigated and concluded that the Massachusetts State Police, with its infrastructure, resources, and drug-testing accreditation, was the appropriate agency to handle the Hinton Laboratory's drug testing; that transfer ultimately occurred (Cunha, 2014). The OIG investigation report labeled Dookhan "the sole bad actor" (Cunha, 2014, p. 113). While the OIG report described several failures at the management level—including lack of oversight, insufficient training of the chemists, and ineffective quality control—the "bad actor" label itself is problematic. As Thompson (2008) explained:

Individualistic explanations channel our thinking toward individualistic solutions (replacing the bad apples) and divert attention from broader institutional, structural and cultural factors that may contribute to laboratory foul-ups. We tend to think that replacing the bad apples solves the underlying problem without considering why we have so many bad apples in the first place, why we find more bad apples in some environments than others, and why the apples repeatedly seem to go bad in the same familiar ways. (p. 972)

Consequently, instances of forensic science errors or ethical lapses in which there are identifiable "bad actors" nevertheless require an effort to understand what aspects of the organization facilitated the lapse (Doyle, 2010; Thompson, 2008). For example, Jeanguenat and Dror (2018) argue that workplace stress, pressure, and anxiety are understudied human factors that deserve more attention for their role in affecting forensic decision making (see also Dror, 2020).

5.2 | Cognitive bias

Many experts argue that the more significant threat to unbiased and impartial forensic science arises not from intentional misconduct but from cognitive biases, which affect forensic decision making unconsciously (see, e.g., Dror & Pierce, 2020). That is, even if most forensic analysts are well intentioned, cognitive biases may pose a more significant source of error than deliberate wrongdoing (Byrd, 2006; Dror & Cole, 2010; Kassin, Dror, & Kukucka, 2013; Risinger, Saks, Thompson, & Rosenthal, 2002).

Social psychologists have documented many cognitive biases, including, for example, confirmation bias. In a comprehensive review of confirmation bias, Nickerson (1998) noted that "[i]f one were to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration" (p. 175, emphasis in the original). Confirmation bias refers to "several more specific ideas that connote the inappropriate bolstering of hypotheses or beliefs whose truth is in question" (Nickerson, 1998, p. 175). Nickerson notes that the process by which this occurs is unconscious. In short, people tend "to seek and interpret information in ways that are partial toward existing beliefs…[and] to avoid information that would contradict those beliefs and support alternative hypotheses" (Ask & Granhag, 2005, p. 45). In the context of a criminal case, the hypothesis or belief whose truth is in question is the guilt or innocence of the suspect. Thus, researchers are paying increasing attention to how cognitive biases affect forensic decision making.

In an early analysis, Risinger et al. (2002) reviewed the literature on context effects and described how these phenomena might affect forensic scientists, particularly those whose work involves subjective interpretation. As the authors stated, "An elementary principle of modern psychology is that the desires and expectations people possess influence their perceptions and interpretations of what they observe. In other words, the results of observation depend upon the state of the observer as well as the thing observed" (Risinger et al., 2002, p. 6). For example, when police or prosecutors ask forensic analysts to reexamine inconclusive or exculpatory results, there is an implicit suggestion to find more inculpatory results (Risinger et al., 2002). Kassin, Bogart, and Kerner (2012) analyzed DNA exonerations involving false confessions to determine whether the confessions had a corrupting influence on other types of evidence in those cases. They found that false confession cases had a significantly higher frequency of forensic science errors than, for example, cases involving only eyewitness errors. The authors hypothesized that when the confession preceded other evidence in temporal order, knowledge of the admission increased the possibility of confirmation bias among subsequent witnesses (Kassin et al., 2012).

In a systematic review of the available research in this domain, Cooper and Meterko (2019) concluded that the research on the effects of cognitive biases on forensic decision making supports the notion that forensic scientists are susceptible to such biases. For example, experimental studies have shown that forensic analyses, particularly those involving pattern recognition or subjective interpretation, are vulnerable to the effects of cognitive biases (see Dror & Cole, 2010, for a review). In one study, experienced fingerprint examiners reached contradictory (within-subject) conclusions when examining the same latent prints under different contexts (Dror & Charlton, 2006). It is important to note that instrumentation-based forensic science disciplines, like DNA analysis, are not immune from bias (Dror & Pierce, 2020; Thompson, 2009). For example, experimental evidence indicated that analysts conducting DNA mixture interpretation were vulnerable to bias and contextual influences (Dror & Hampikian, 2011).
In their systematic review, Cooper and Meterko (2019) also concluded that the research supported "the potential value of procedures designed to reduce access to unnecessary information and to promote use of multiple comparison samples rather than a single suspect exemplar and replication of results by analysts blinded to previous results" (p. 43). Indeed, many scholars and practitioners have recommended approaches to minimize cognitive biases in forensic casework and so reduce the risk of error (see, e.g., Dror & Pierce, 2020; Krane et al., 2008; Thompson, 2009). For example, the linear sequential unmasking (LSU) approach requires the analyst to first examine and document the evidentiary material before exposure to the reference material (e.g., suspect exemplars)—thus forcing the analyst to "[work] from the evidence to the suspect, rather than from the suspect to the evidence" (Dror et al., 2015, p. 1111; see also Scherr, Redlich, & Kassin, 2020). The LSU approach is flexible because it allows the analyst to revise the initial judgment after exposure to the reference material, with limitations on the number of post-exposure revisions and documentation of the analyst's confidence in the initial judgment (Dror et al., 2015). However, LSU is not without practical challenges, including how to select what contextual information is relevant to the analysis and what should be shielded (Langenburg, 2017). Another method is to have a case manager engage with all the case information to determine which tests are appropriate and assign the testing to an analyst who is shielded from potentially biasing contextual information (see Krane et al., 2008). Others have argued that the preferable approach is peer review through retesting and reanalysis, or verification of the initial results through a second, independent analysis by an examiner who does not know the original conclusion (Budowle et al., 2009), or that accreditation programs enhance the professionalism of laboratories and provide mechanisms to identify and correct errors or fraud (Collins & Jarvis, 2009; Giannelli, 2007).
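To make the ordering that LSU imposes concrete, the Python sketch below enforces the sequence described above: the analyst must document the evidentiary material and an initial judgment before any reference material is revealed, and post-exposure revisions are capped and logged. It is offered only as a minimal illustration; the class, method, and parameter names are hypothetical and do not come from Dror et al. (2015) or from any laboratory's case-management software.

class LSUCaseworkSession:
    """Toy model of a linear-sequential-unmasking-style workflow (hypothetical)."""

    def __init__(self, max_revisions=1):
        self.evidence_notes = []        # observations of the trace evidence alone
        self.initial_judgment = None    # (conclusion, confidence) recorded before exposure
        self.reference_exposed = False
        self.revisions = []             # documented post-exposure changes
        self.max_revisions = max_revisions

    def document_evidence(self, note):
        if self.reference_exposed:
            raise RuntimeError("Evidence must be documented before reference exposure.")
        self.evidence_notes.append(note)

    def record_initial_judgment(self, conclusion, confidence):
        if self.reference_exposed:
            raise RuntimeError("The initial judgment must precede reference exposure.")
        self.initial_judgment = (conclusion, confidence)

    def expose_reference(self, reference_id):
        if not self.evidence_notes or self.initial_judgment is None:
            raise RuntimeError("Document the evidence and an initial judgment first.")
        self.reference_exposed = True
        self.reference_id = reference_id

    def revise_judgment(self, conclusion, reason):
        # Revisions after exposure are allowed, but they are limited and logged.
        if not self.reference_exposed:
            raise RuntimeError("No reference material has been examined yet.")
        if len(self.revisions) >= self.max_revisions:
            raise RuntimeError("Revision limit reached; document and escalate instead.")
        self.revisions.append({"conclusion": conclusion, "reason": reason})


# Example: working from the evidence to the suspect, not the reverse.
session = LSUCaseworkSession(max_revisions=1)
session.document_evidence("Latent print: partial, nine usable minutiae, left delta region.")
session.record_initial_judgment("suitable for comparison", confidence="moderate")
session.expose_reference("suspect exemplar 1")
session.revise_judgment("identification", reason="corresponding minutiae observed after comparison")

A gate of this kind enforces only the ordering and the documentation; the harder question flagged by Langenburg (2017), namely deciding which contextual details are task-relevant and therefore cannot be withheld, remains a human judgment.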

6 | GOING FORWARD

Following the publication of the NRC report, efforts to advance forensic science have proceeded on several fronts (Butler, 2015).

The NRC recommended the creation of an independent federal entity, the "National Institute of Forensic Science" (NIFS), to "support and oversee" forensic science in the United States (NRC, 2009, pp. 18–19). The report authors considered and dismissed the possibility of locating the NIFS within existing federal agencies in favor of starting an entirely new entity (NRC, 2009). For example, the authors noted that locating an independent entity designed to serve all stakeholders within the DOJ would be at odds with the DOJ's mission "to enforce the laws and defend the interests of the United States" (NRC, 2009, p. 17). Nevertheless, the NIFS as envisioned by the NRC report authors did not materialize. Ultimately, the DOJ and the National Institute of Standards and Technology established a federal advisory committee, the NCFS, to "enhance the practice and improve the reliability of forensic science" (NCFS, 2017b, p. 3). The NCFS membership included multiple stakeholders from across the criminal justice system and the scientific community, including lawyers (defense and prosecution), scientists, judges, forensic science practitioners, victim advocates, and more. Although the NCFS served in an advisory role to the DOJ, it aimed ultimately to affect forensic science policy at the state and local levels (NCFS, 2017b). The NCFS operated from 2013 until 2017, when Attorney General Jeff Sessions declined to renew its charter (Hsu, 2017). Seven subcommittees of the NCFS produced several work products related to the issues raised in the NRC report, including, as relevant here, human factors and reporting and testimony (NCFS, 2017b). For example, the Human Factors Subcommittee produced a document aimed at addressing the challenge of cognitive bias by discussing ways to minimize forensic analysts' exposure to task-irrelevant information such as suspects' confessions or criminal histories (NCFS, 2015). As noted, the NCFS was an advisory body to the DOJ only, so the extent to which its recommendations will be adopted and implemented by that body, and by the forensic science community more broadly, remains to be seen (see Epstein, 2018).

7 | CONCLUSION

The wrongful conviction of an innocent person is a tragic miscarriage of justice. Researchers and advocates have identified the common correlates of wrongful convictions through the study of exonerations and have shown forensic science evidence to be among them. As with all contributing factors to wrongful convictions, it is necessary to dig deep and understand the mechanisms by which forensic science plays a role at both the individual and systemic levels. Psychology research has shown that cognitive biases can lead to errors in forensic decision making, which in turn can lead to erroneous outcomes. Through their analysis of trial transcripts, legal scholars and science and technology scholars have demonstrated that invalid forensic science testimony is another mechanism. These two mechanisms are not mutually exclusive, however, since cognitive biases can affect all human decisions. Even when wrongful convictions are the product of forensic misconduct, it is necessary to step back and examine the institutional and cultural forces that supported such outcomes rather than inhibited them (Doyle, 2010; Dror, 2020).

As discussed above, there is evidence that forensic science as a contributor to wrongful convictions may be declining among DNA exonerations. It was also noted, however, that among both DNA and non-DNA exonerations, 24% involve false or misleading forensic science. Thus, vigilance concerning the role of forensic science as a contributor to wrongful convictions should remain—as should efforts to improve forensic practice in general.

Finally, an important area of research remains underexplored. DNA exonerations have been a rich source of data for understanding forensic science and wrongful convictions. Yet, only 10% of the first 325 DNA exonerations involved guilty pleas (West & Meterko, 2015/2016). This figure stands in contrast to the vast majority of felony convictions in the United States—more than 90%—that are secured through guilty pleas (Durose & Langan, 2007). Research into the extent to which forensic science evidence contributes to false guilty pleas would aid in understanding whether, for example, innocent people are more likely to plead guilty in cases involving forensic evidence and drug crimes (see, e.g., Gabrielson & Sanders, 2016).

ACKNOWLEDGMENT
The author thanks the anonymous reviewers and the WIREs editors for their insightful suggestions and comments.

CONFLICT OF INTEREST
The author has declared no conflicts of interest for this article.

ORCID
Catherine L. Bonventre https://orcid.org/0000-0003-3809-4922

ENDNOTES

1. LaPorte's (2018) analysis of DNA exonerations highlights the potential challenges of classifying forensic science as a contributor to wrongful convictions. LaPorte compared the case narratives of DNA exonerations described on the Innocence Project website with those on the NRE website and found inconsistencies in the classification of cases involving forensic science as a factor. In short, 24 of the cases listed on the Innocence Project website as having forensic science errors were not identified as such in the NRE database (LaPorte, 2018). As noted above, the NRE's definition of "false or misleading forensic evidence" includes forensic testing errors, unreliable methods, exaggerated or misleading expressions of confidence, or fraud (NRE, n.d.-a). LaPorte's analysis used content from the Innocence Project website as it appeared in 2016, which then classified the role of forensic science as "unvalidated or improper forensic science" (p. 12). As of this writing, the Innocence Project uses the term "misapplication of forensic science," which is defined on the organization's website as: "[F]orensic evidence that is unreliable or invalid and expert testimony that is misleading. It also includes mistakes made by practitioners and in some cases misconduct by forensic analysts. In some cases, scientific testimony that was generally accepted at the time of a conviction has since been undermined by new scientific advancements in disciplines…" (Innocence Project, n.d.-b). The Innocence Project website cites microscopic hair comparison as an example of generally accepted testimony that subsequent advancements have undermined. It is unclear where the definitional differences lie between the wrongful conviction catalogs. It is clear, however, that a one-size-fits-all term or definition for forensic science as a contributor to wrongful convictions remains elusive and, in fact, may be undesirable.

2. This report is also referred to in the literature as "the NAS Report."

RELATED WIREs ARTICLES
The testifying forensic discipline expert-A primer
Interpretation continues to be the main weakness in criminal justice systems: Developing roles of the expert witness and court

FURTHER READING
President's Council of Advisors on Science and Technology (2016). Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods. Retrieved from https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf
Simon, D. (2012). In doubt: The psychology of the criminal justice process. Cambridge, MA: Harvard University Press.
Zalman, M., & Carrano, J. (Eds.). (2014). Wrongful conviction and criminal justice reform: Making justice. New York, NY: Routledge.
Findley, K. A., & Scott, M. S. (2006). The multiple dimensions of tunnel vision in criminal cases. Wisconsin Law Review, 2006, 291–397.
U.S. Department of Justice (2015). Charter: National Commission on Forensic Science. Retrieved from https://www.justice.gov/archives/ncfs

REFERENCES
Acker, J. R. (2012/2013). The flipside injustice of wrongful convictions: When the guilty go free. Albany Law Review, 76(3), 1629–1712.
Appleby, S. C., & Kassin, S. M. (2016). When self-report trumps science: Effects of confessions, DNA, and prosecutorial theories on perceptions of guilt. Psychology, Public Policy, and Law, 22(2), 127–140. https://doi.org/10.1037/law0000080
Ask, K., & Granhag, P. A. (2005). Motivational sources of confirmation bias in criminal investigations: The need for cognitive closure. Journal of Investigative Psychology and Offender Profiling, 2(1), 43–63. https://doi.org/10.1002/jip.19
Ballou, S. M. (2019). The NAS Report: 10 years of response. Journal of Forensic Sciences, 64(1), 6–9. https://doi.org/10.1111/1556-4029.13961
Baumgartner, F. R., Grigg, A., Ramirez, R., & Lucy, J. S. (2017/2018). The mayhem of wrongful liberty: Documenting the crimes of true perpetrators in cases of wrongful incarceration. Albany Law Review, 81(4), 1263–1288.
Borchard, E. M. (1932). Convicting the innocent: Sixty-five actual errors of criminal justice. Garden City, NY: Garden City Publishing.
Budowle, B., Bottrell, M. C., Bunch, S. G., Fram, R., Harrison, D., Meagher, S., … Stacey, R. B. (2009). A perspective on errors, bias, and interpretation in the forensic sciences and direction for continuing advancement. Journal of Forensic Sciences, 54(4), 798–809. https://doi.org/10.1111/j.1556-4029.2009.01081.x
Butler, J. M. (2015). U.S. initiatives to strengthen forensic science & international standards in forensic DNA. Forensic Science International: Genetics, 18, 4–20. https://doi.org/10.1016/j.fsigen.2015.06.008
Byrd, J. S. (2006). Confirmation bias, ethics, and mistakes in forensics. Journal of Forensic Identification, 56(1), 511–525.
Campbell, K., & Denov, M. (2004). Burden of innocence: Coping with a wrongful imprisonment. Canadian Journal of Criminology and Criminal Justice, 46(2), 139–163. https://doi.org/10.3138/cjccj.46.2.139
Christensen, A. M., Crowder, C. M., Ousley, S. D., & Houck, M. H. (2014). Error and its meaning in forensic science. Journal of Forensic Sciences, 59(1), 123–126. https://doi.org/10.1111/1556-4029.12275
Cole, S. A. (2007). Where the rubber meets the road: Thinking about expert evidence as expert testimony. Villanova Law Review, 52(4), 803–842.
Cole, S. A. (2012). Forensic science and wrongful convictions: From exposer to contributor to corrector. New England Law Review, 46, 711–736.
Cole, S. A., & Thompson, W. C. (2013). Forensic science and wrongful convictions. In C. R. Huff & M. Killias (Eds.), Wrongful convictions & miscarriages of justice: Causes and remedies in North American and European criminal justice systems (pp. 111–135). New York, NY: Routledge.
Collins, J. M., & Jarvis, J. (2009). The wrongful conviction of forensic science. Forensic Science Policy & Management, 1, 17–31. https://doi.org/10.1080/19409040802624067
Connors, E., Lundregan, T., Miller, N., & McEwan, T. (1996). Convicted by juries, exonerated by science: Case studies in the use of DNA evidence to establish innocence after trial. U.S. Department of Justice, National Institute of Justice. Retrieved from https://www.ncjrs.gov/App/Publications/abstract.aspx?ID=161258
Cooper, G. S., & Meterko, V. (2019). Cognitive bias research in forensic science: A systematic review. Forensic Science International, 297, 35–46. https://doi.org/10.1016/j.forsciint.2019.01.016
Cunha, G. A. (2014). Investigation of the drug laboratory at the William A. Hinton State Laboratory Institute. Boston, MA: Office of the Inspector General. Retrieved from https://www.mass.gov/doc/investigation-of-the-drug-laboratory-at-the-william-a-hinton-state-laboratoryinstitute-2002-0/download
DiFonzo, J. H. (2005). The crimes of crime labs. Hofstra Law Review, 34(1), 1–11.
Dioso-Villa, R. (2016). Is the expert admissibility game fixed? Judicial gatekeeping of fire and arson evidence. Law & Policy, 38, 54–80.
Doyle, J. M. (2010). Learning from error in American criminal justice. The Journal of Criminal Law & Criminology, 100(1), 109–147.
Dror, I. E. (2020). Cognitive and human factors in expert decision making: Six fallacies and the eight sources of bias. Analytical Chemistry, 92, 7998–8004.
Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56(4), 600–616.
Dror, I. E., & Cole, S. A. (2010). The vision in "blind" justice: Expert perception, judgment, and visual cognition in forensic pattern recognition. Psychonomic Bulletin & Review, 17(2), 161–167. https://doi.org/10.3758/PBR.17.2.161
Dror, I. E., & Hampikian, G. (2011). Subjectivity and bias in forensic DNA mixture interpretation. Science and Justice, 51, 204–208. https://doi.org/10.1016/j.scijus.2011.08.004
Dror, I. E., & Pierce, M. L. (2020). ISO standards addressing issues of bias and impartiality in forensic work. Journal of Forensic Sciences, 65(3), 800–808. https://doi.org/10.1111/1556-4029.14265
Dror, I. E., Thompson, W. C., Meissner, C. A., Kornfield, I., Krane, D., Saks, M., & Risinger, M. (2015). Context management toolbox: A linear sequential unmasking (LSU) approach to minimizing cognitive bias in forensic decision making. Journal of Forensic Sciences, 60(4), 1111–1112. https://doi.org/10.1111/1556-4029.12805
Durose, M. R., Burch, A. M., Walsh, K., & Tiry, E. (2016). Publicly funded forensic crime laboratories: Resources and services, 2014. Washington, DC: U.S. Department of Justice, Bureau of Justice Statistics. Retrieved from https://www.bjs.gov/content/pub/pdf/pffclrs14.pdf
Durose, M. R., & Langan, P. A. (2007). Felony sentences in state courts, 2004. Washington, DC: U.S. Department of Justice, Bureau of Justice Statistics. Retrieved from https://www.bjs.gov/content/pub/pdf/fssc04.pdf
Ellement, J. R. (2016, November 16). Annie Dookhan, key figure in state lab scandal, released from prison. Boston Globe. Retrieved from https://www.bostonglobe.com/metro/2016/04/12/annie-dookhan-key-figure-state-lab-scandal-released-from-prison/lp7q98UmWKucv4F7O3R4DI/story.html
Epstein, J. (2018). The National Commission on Forensic Science: Impactful or ineffectual? Seton Hall Law Review, 48, 743–771.
Federal Bureau of Investigation (FBI) (2015, April 20). FBI testimony on microscopic hair analysis contained errors in at least 90 percent of cases in ongoing review [Press release]. Retrieved from https://www.fbi.gov/news/pressrel/press-releases/fbi-testimony-on-microscopichair-analysis-contained-errors-in-at-least-90-percent-of-cases-in-ongoing-review
Gabrielson, R., & Sanders, T. (2016, July 7). How a $2 roadside drug test sends innocent people to jail. The New York Times Magazine. Retrieved from https://www.nytimes.com/2016/07/10/magazine/how-a-2-roadside-drug-test-sends-innocent-people-to-jail.html
Garrett, B. L. (2008). Judging innocence. Columbia Law Review, 108, 55–142.
Garrett, B. L. (2011). Convicting the innocent: Where criminal prosecutions go wrong. Cambridge, MA: Harvard University Press.
Garrett, B. L., & Neufeld, P. J. (2009). Invalid forensic science testimony and wrongful convictions. Virginia Law Review, 95(1), 1–97.
Gershman, B. L. (2003). Misuse of scientific evidence by prosecutors. Oklahoma City University Law Review, 28, 17–41.
Giannelli, P. C. (1997). The abuse of scientific evidence in criminal cases: The need for independent crime laboratories. Virginia Journal of Social Policy & the Law, 4, 439–478.
Giannelli, P. C. (2007). Wrongful convictions and forensic science: The need to regulate crime labs. North Carolina Law Review, 86, 163–235.
Giannelli, P. C. (2010a). Scientific fraud. Criminal Law Bulletin, 46(6), 313–1333.
Giannelli, P. C. (2010b). Independent crime laboratories: The problem of motivational and cognitive bias. Utah Law Review, 2010(2), 247–266.
Gould, J. B., Carrano, J., Leo, R. A., & Hail-Jares, K. (2014). Predicting erroneous convictions. Iowa Law Review, 99, 471–517.
Gross, S. R. (2008). Convicting the innocent. Annual Review of Law and Social Science, 4, 173–192. https://doi.org/10.1146/annurev.lawsocsci.4.110707.172300
Gross, S. R., Jacoby, K., Matheson, D. J., Montgomery, N., & Patil, S. (2005). Exonerations in the United States 1989 through 2003. The Journal of Criminal Law & Criminology, 95(2), 523–560.
Gross, S. R., & O'Brien, B. (2008). Frequency and predictors of false conviction: Why we know so little and new data on capital cases. Journal of Empirical Legal Studies, 5(4), 927–962. https://doi.org/10.1111/j.1740-1461.2008.00146.x
Gross, S. R., O'Brien, B., Hu, C., & Kennedy, E. H. (2014). Rate of false conviction of criminal defendants who are sentenced to death. Proceedings of the National Academy of Sciences, 111(20), 7230–7235. https://doi.org/10.1073/pnas.1306417111
Gross, S. R., & Shaffer, M. (2012). Exonerations in the United States, 1989–2012: Report by the National Registry of Exonerations. Retrieved from https://repository.law.umich.edu/other/92/
Grounds, A. (2004). Psychological consequences of wrongful conviction and imprisonment. Canadian Journal of Criminology and Criminal Justice, 46(2), 165–182. https://doi.org/10.3138/cjccj.46.2.165
Hsu, S. S. (2012a, July 10). Justice Dept., FBI to review use of forensic evidence in thousands of cases. The Washington Post. Retrieved from https://www.washingtonpost.com/local/crime/justice-dept-fbi-to-review-use-of-forensic-evidence-in-thousands-of-cases/2012/07/10/gJQAT6DlbW_story.html
Hsu, S. S. (2012b, December 14). Santae Tribble cleared in 1978 based on DNA hair test. The Washington Post. Retrieved from https://www.washingtonpost.com/local/crime/dc-judge-exonerates-santae-tribble-of-1978-murder-based-on-dna-hair-test/2012/12/14/da71ce00-d02c11e1-b630-190a983a2e0d_story.html
Hsu, S. S. (2016, February 28). Judge orders D.C. to pay $13.2 million in wrongful FBI hair conviction case. The Washington Post. Retrieved from https://www.washingtonpost.com/local/public-safety/judge-orders-dc-to-pay-132-million-in-wrongful-fbi-hair-conviction-case/2016/02/28/da82e178-dcde-11e5-81ae-7491b9b9e7df_story.html
Hsu, S. S. (2017, April 10). Sessions orders Justice Dept. to end forensic science commission, suspend review policy. The Washington Post. Retrieved from https://www.washingtonpost.com/local/public-safety/sessions-orders-justice-dept-to-end-forensic-science-commissionsuspend-review-policy/2017/04/10/2dada0ca-1c96-11e7-9887-1a5314b56a08_story.html
In the Matter of West Virginia State Police Crime Laboratory, Serology Division (1993). 190 W. Va. 321.
Innocence Project (n.d.-a). Retrieved from https://www.innocenceproject.org
Innocence Project (n.d.-b). Overturning wrongful convictions involving misapplied forensics. Retrieved from https://www.innocenceproject.org/overturning-wrongful-convictions-involving-flawed-forensics/
Jackman, T. (2017a, March 21). Norfolk 4 wrongly convicted of rape and murder, pardoned by Gov. McAuliffe. The Washington Post. Retrieved from https://www.washingtonpost.com/news/true-crime/wp/2017/03/21/norfolk-4-wrongly-convicted-of-rape-and-murderpardoned-by-gov-mcauliffe/?utm_term=.001e55e8ee38
Jackman, T. (2017b, April 18). Prosecutors dismiss more than 21,500 drug cases in wake of Mass. lab chemist's misconduct. The Washington Post. Retrieved from https://www.washingtonpost.com/news/true-crime/wp/2017/04/18/prosecutors-dismiss-more-than-19000-drugcases-in-wake-of-mass-lab-chemists-misconduct/
Jeanguenat, A. M., & Dror, I. E. (2018). Human factors affecting forensic decision making: Workplace stress and wellbeing. Journal of Forensic Sciences, 63(1), 258–261. https://doi.org/10.1111/1556-4029.13533
Jonakait, R. N. (1991). Forensic science: The need for regulation. Harvard Journal of Law & Technology, 4, 109–191.
Kassin, S. M., Bogart, D., & Kerner, J. (2012). Confessions that corrupt: Evidence from the DNA exoneration case files. Psychological Science, 23(1), 41–45. https://doi.org/10.1177/0956797611422918
Kassin, S. M., Dror, I. E., & Kukucka, J. (2013). The forensic confirmation bias: Problems, perspectives, and proposed solutions. Journal of Applied Research in Memory and Cognition, 2, 42–52. https://doi.org/10.1016/j.jarmac.2013.01.001
Krane, D. E., Ford, S., Gilder, J. R., Inman, K., Jamieson, A., Koppl, R., … Thompson, W. C. (2008). Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation. Journal of Forensic Sciences, 53(4), 1006–1007. https://doi.org/10.1111/j.1556-4029.2008.00787.x
Langenburg, G. (2017). Addressing potential observer effects in forensic science: A perspective from a forensic scientist who uses linear sequential unmasking techniques. Australian Journal of Forensic Sciences, 49(5), 548–563.
LaPorte, G. (2018). Wrongful convictions and DNA exonerations: Understanding the role of forensic science. National Institute of Justice Journal, 279, 11–25.
Laurin, J. E. (2013). Remapping the path forward: Toward a systemic view of forensic science reform and oversight. Texas Law Review, 91, 1051–1118.
Leo, R. A. (2005). Rethinking the study of miscarriages of justice: Developing a criminology of wrongful conviction. Journal of Contemporary Criminal Justice, 21(3), 201–223. https://doi.org/10.1177/1043986205277477
Loeffler, C. E., Hyatt, J., & Ridgeway, G. (2019). Measuring self-reported wrongful convictions among prisoners. Journal of Quantitative Criminology, 35, 259–286. https://doi.org/10.1007/s10940-018-9381-1
Moenssens, A. A. (1993). Novel scientific evidence in criminal cases: Some words of caution. Journal of Criminal Law & Criminology, 84, 1–21.
National Commission on Forensic Science (2015). Views of the Commission: Ensuring that forensic analysis is based upon task-relevant information. Retrieved from https://www.justice.gov/archives/ncfs/file/818196/download
National Commission on Forensic Science (2017a). Human factors. Retrieved from https://www.justice.gov/archives/ncfs/human-factors
National Commission on Forensic Science (2017b). Reflecting back: Looking toward the future. Retrieved from https://www.justice.gov/archives/ncfs
National Registry of Exonerations (n.d.-a). Glossary. Retrieved from http://www.law.umich.edu/special/exoneration/Pages/glossary.aspx
National Registry of Exonerations (n.d.-b). Contributing factors and type of crime. Retrieved from http://www.law.umich.edu/special/exoneration/Pages/ExonerationsContribFactorsByCrime.aspx
National Registry of Exonerations (n.d.-c). Browse cases. Retrieved from http://www.law.umich.edu/special/exoneration/Pages/browse.aspx
National Research Council (NRC). (2009). Strengthening forensic science in the United States: A path forward. Washington, DC: The National Academies Press. https://doi.org/10.17226/12589
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Norris, R. J., & Mullinix, K. J. (2019). Framing innocence: An experimental test of the effects of wrongful convictions on public opinion. Journal of Experimental Criminology. https://doi.org/10.1007/s11292-019-09360-7
Norris, R. J., Weintraub, J. N., Acker, J. R., Redlich, A. D., & Bonventre, C. L. (2019). The criminal costs of wrongful convictions: Can we reduce crime by protecting the innocent? Criminology & Public Policy, 19(2), 367–388. https://doi.org/10.1111/1745-9133.12463
Oliva, J. D., & Beety, V. E. (2017). Discovering forensic fraud. Northwestern University Law Review, 112, 121–138.
Peterson, J. L., & Leggett, A. S. (2007). The evolution of forensic science: Progress amid the pitfalls. Stetson Law Review, 36, 621–660.
Peterson, J. L., Mihajlovic, S., & Bedrosian, J. L. (1985). The capabilities, uses, and effects of the nation's criminalistics laboratories. Journal of Forensic Sciences, 30(1), 10–23. https://doi.org/10.1520/JFS10959J
Poveda, T. G. (2001). Estimating wrongful convictions. Justice Quarterly, 18(3), 689–708. https://doi.org/10.1080/07418820100095061
Ramsey, R. J., & Frank, J. (2007). Wrongful conviction: Perceptions of criminal justice professionals regarding the frequency of wrongful conviction and the extent of system errors. Crime & Delinquency, 53(3), 436–470. https://doi.org/10.1177/0011128706286554
Risinger, D. M. (2007). Innocents convicted: An empirically justified factual wrongful conviction rate. Journal of Criminal Law & Criminology, 97(3), 761–806.
Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). The Daubert/Kumho implications of observer effects in forensic science: Hidden problems of expectation and suggestion. California Law Review, 90, 1–56.
Roman, J., Walsh, K., Lachman, P., & Yahner, J. (2012). Post-conviction DNA testing and wrongful conviction. Washington, DC: Urban Institute.
Roman, J. K., Reid, S. E., Chalfin, A. J., & Knight, C. R. (2009). The DNA field experiment: A randomized trial of the cost-effectiveness of using DNA to solve property crimes. Journal of Experimental Criminology, 5, 345–369. https://doi.org/10.1007/s11292-009-9086-4
Saks, M. J., & Koehler, J. J. (2005). The coming paradigm shift in forensic identification science. Science, 309(5736), 892–895. https://doi.org/10.1126/science.1111565
Scheck, B., Neufeld, P., & Dwyer, J. (2003). Actual innocence: When justice goes wrong and how to make it right. New York, NY: New American Library.
Scherr, K. C., Redlich, A. D., & Kassin, S. M. (2020). Cumulative disadvantage: A psychological framework for understanding how innocence can lead to confession, wrongful conviction, and beyond. Perspectives on Psychological Science, 15(2), 353–383. https://doi.org/10.1177/1745691619896608
Thompson, W. C. (2008). Beyond bad apples: Analyzing the role of forensic science in wrongful convictions. Southwestern University Law Review, 37, 971–994.
Thompson, W. C. (2009). Painting the target around the matching profile: The Texas sharpshooter fallacy in forensic DNA interpretation. Law, Probability and Risk, 8(3), 257–276. https://doi.org/10.1093/lpr/mgp013
Thomson, M. A. (1974). Bias and quality control in forensic science: A cause for concern. Journal of Forensic Sciences, 19, 504–517. https://doi.org/10.1520/JFS10205J
Turvey, B. E. (2013). Forensic fraud: Evaluating law enforcement and forensic science cultures in the context of examiner misconduct. Waltham, MA: Academic Press.
Walsh, K., Hussemann, J., Flynn, A., Yahner, J., & Golian, L. (2017). Estimating the prevalence of wrongful conviction. National Criminal Justice Reference Service. Retrieved from https://www.ncjrs.gov/pdffiles1/nij/grants/251115.pdf
Weedn, V. W., & Hicks, J. W. (1998). The unrealized potential of DNA testing. Washington, DC: U.S. Department of Justice, National Institute of Justice. Retrieved from https://www.ncjrs.gov/pdffiles/170596.pdf
Wells, T., & Leo, R. A. (2008). The wrong guys: Murder, false confessions, and The Norfolk Four. New York, NY: New Press.
West, E., & Meterko, V. (2015/2016). Innocence Project: DNA exonerations, 1989–2014: Review of findings from the first 25 years. Albany Law Review, 79(3), 717–795.
Westervelt, S. D., & Cook, K. J. (2010). Framing innocents: The wrongly convicted as victims of state harm. Crime, Law and Social Change, 53(3), 259–275. https://doi.org/10.1007/s10611-009-9231-z
Westervelt, S. D., & Cook, K. J. (2014). Life after death row: Exonerees' search for community and identity. New Brunswick, NJ: Rutgers University Press.
Williamson, E. J., Stricker, J. M., Irazola, S. P., & Niedzwieki, E. (2016). Wrongful convictions: Understanding the experiences of the original crime victims. Violence and Victims, 31(1), 155–166. https://doi.org/10.1891/0886-6708.VV-D-13-00152
Zalman, M., Smith, B., & Kiger, A. (2008). Officials' estimates of the incidence of "actual innocence" convictions. Justice Quarterly, 25(1), 72–100. https://doi.org/10.1080/07418820801954563

How to cite this article: Bonventre CL. Wrongful convictions and forensic science. WIREs Forensic Sci. 2020; e1406. https://doi.org/10.1002/wfs2.1406