[Dissertation] Stop Making Sense: A Critique of Contemporary Evidence-Based Practices Through the Lens of Music Education


English · 227 pages · 2019



STOP MAKING SENSE


Stop Making Sense: A Critique of Contemporary Evidence-Based Practices Through the Lens of Music Education

Eric Reimnitz

A Dissertation Submitted to the Faculty of
The Chicago School of Professional Psychology
In Partial Fulfillment of the Requirements
For the Degree of Doctor of Psychology

Claude Barbre, PhD
Todd Dubose, PhD

October 28, 2019


Unpublished Work © 2020 by Eric Reimnitz
All Rights Reserved


Stop Making Sense: A Critique of Contemporary Evidence-Based Practices Through the Lens of Music Education

A Dissertation Submitted to the Faculty of
The Chicago School of Professional Psychology
In Partial Fulfillment of the Requirements
For the Degree of Doctor of Psychology

Eric Reimnitz
2019

Approved By:

Claude Barbre, PhD, Chairperson
Distinguished Full Professor, The Chicago School of Professional Psychology

Todd Dubose, PhD, Member
Distinguished Full Professor, The Chicago School of Professional Psychology

Acknowledgements

There are many people I need to thank for helping me throughout this journey. Most importantly, my family, and especially my mother: her work ethic set the example I needed to complete this work. I also wish to thank Dr. Barbre and Dr. DuBose for their guidance and support. I could not have completed this dissertation without their understanding of the nature of my work and their encouragement to stay focused on this topic. They have both helped me feel a little more sane in this crazy world since the moment I met them. My family deserves further praise, especially my mother, not only for raising me but also for tolerating my behavior for the past 40 years. Thanks as well to my brothers and sister for putting up with me and teaching me more than I ever expected. My brother Adam taught me that wisdom comes in many forms and that we are all scientists in our daily lives. My brother Cory taught me that true intelligence is more than knowledge, and the importance of social skills. My sister Sara has shown me the true meaning of happiness and what mental health really looks like. My nieces also deserve mention, as they make life a lot more enjoyable and provide some happiness in these crazy times. Finally, to Paola: finding you feels like the second chance I never thought I'd get. I finally feel as though I truly have a chance to be happy. I could not have completed this dissertation, or experienced the success I have had, without you.

Table of Contents

Chapter 1: Nature of the Study
    Introduction to the Research Problem
    Evidence-Based Practices
    Statement of the Problem
Chapter 2: Literature Review
    What Is Evidence?
    A History of Evidence-Based Practices
    The Role of Philosophy
    Problems with Logical Positivism
    Subjectivity and Science
    The Myth of Quantitative Superiority
    Problems with Definitions
    What's in a Name?
    Randomized-Control Trials
    Philosophical Assumptions
    Critical Thinking Skills
    The Importance of Creativity
    Other Theories
    Authoritarian Mentality
    Faulty Research
    Music Education
    History of Music Education
    Philosophy of Music Education
    Best Practices in Music Education
Chapter 3: Methodology
Chapter 4: Results
    EBP Outcomes
    Criticism
Chapter 5: Discussion, Limitations, and Implications for Future Research
    Conclusion
    Limitations
    Implications for Future Research
References

Chapter 1: Nature of the Study

Introduction to the Research Problem

According to the University of Michigan's Monitoring the Future project, about 40% of all high school students enroll in music programs (Gorman, 2016). With an estimated 3.5 million students attending high school in the United States (US; National Center for Education Statistics, 2016), this means there are around 1.4 million music students in U.S. high schools. When these music students are compared with high school students not enrolled in music education, the differences are stark. According to the National Association for Music Education (NAfME), music training in childhood can fundamentally alter the nervous system (Skoe & Kraus, 2012). Learning to play an instrument improves focus in later years ("Hearing the Music, Honing the Mind," 2010) and develops cognitive structures that aid in multiple disciplines (Portowitz et al., 2009). Students who study music voluntarily show higher math skills (Broh, 2002; Gardiner et al., 1996) and improved spatial reasoning (Graziano et al., 1999; Gromko & Poorman, 1998; Hetland, 2000; Rauscher & Zupan, 1999). Music instruction is linked to improved reading skills (Broh, 2002; Gardiner et al., 1996; Standley, 2008), improved auditory discrimination, improved fine motor skills, increased vocabulary, and heightened nonverbal reasoning (Forgeard et al., 2008). Increased sensitivity to speech (Patrick et al., 2007), improved verbal memory (Chan et al., 1998; Ho et al., 2003), and higher academic success (Johnson & Memmott, 2007; Kelly, 2012) are also associated with music education, as are increased IQ (Schellenberg, 2004), increased self-esteem (Costa-Giomi, 2004; Jenlink, 1993), and increased social capital (Broh, 2002). These results demonstrate the important role music plays in a well-rounded liberal arts education as well as the intellectual benefits available when studying music.
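The 1.4 million figure above is simple arithmetic on the two cited statistics. A back-of-the-envelope check, taking the 40% participation rate and the 3.5 million enrollment estimate exactly as reported in the text (and not independently verified), can be sketched as:

```python
# Back-of-the-envelope check of the enrollment estimate cited above.
# Inputs are the figures as reported (Gorman, 2016; NCES, 2016).
participation_rate = 0.40         # share of high school students in music programs
high_school_students = 3_500_000  # estimated U.S. high school enrollment

music_students = participation_rate * high_school_students
print(f"Estimated high school music students: {music_students:,.0f}")
# Estimated high school music students: 1,400,000
```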


While these benefits have long been known, questions have rarely been asked about how music is taught or about the psychology of teaching music. Is reading music notation an absolute necessity when learning music? Regarding the purpose of music education: can enjoyment and fun be part of this purpose and, if so, how are these abstract aspects taught? What about the roles of the teacher and the student? To what extent is the teacher the expert and, conversely, to what extent does the student have autonomy when learning a creative and subjective skill such as music? Can creativity and subjectivity even be part of music education and, if so, how are these abstract ideas taught? These questions combine to form the central focus of this dissertation: the psychology behind teaching music and, more specifically, the impact evidence-based practices (EBPs) have on that psychology. This dissertation sought to explore the consequences EBPs are having in society, using music education as an example of those consequences, and to offer potential alternatives to current practices. To answer these questions, this dissertation focuses on two topics in particular: educational philosophy and psychology. While it can be assumed that many people are familiar with scientific study, society may be less familiar with the role philosophy plays in scientific fields such as psychology. Few people appreciate that logic and philosophy form the basis for all science (Popper, 2002). In other words, without a strong philosophical basis, all the scientific experimentation in the world will not mean a thing (Lewontin, 1991). And while the effects of music on the brain have been studied scientifically, the educational philosophy of music has rarely been questioned or explored for its psychological impact.
It is time for science to question its philosophy and assumptions regarding not only music and psychology but the expressive arts in general. Questioning the current philosophy of music education therefore means questioning EBPs, the current trend in the education system, which rests on a logical positivist philosophy employed in


the human and natural sciences. It would be impossible to study education in general without understanding EBPs and logical positivism in depth. But why is it necessary to understand EBPs? This dissertation will argue that philosophy is a large part of any subject, especially psychology, and that the philosophy underlying EBPs is incoherent and incomplete. There has long been, and continues to be, considerable debate about EBPs (Sehon & Stanley, 2003). For example, logical positivism has long taught that only what is visible and observable is valid, meaning that important facets of people's lives, such as emotion and creativity, are not deemed scientifically important because they are difficult to measure. While abstractions such as emoting and creating are admittedly difficult to study objectively, the idea that they should not be studied simply because they are difficult to observe is illogical, and it illustrates how even science is ideological and subject to change. But why do psychologists need to be concerned with philosophy in music? In fact, why does humanity need to be concerned with philosophy at all, or with music education, for that matter? One primary reason is that philosophy teaches people how to think. It is the root of all thought and the basis for all that humanity thinks and does. Without philosophy there is no understanding of math, science, music, or art. One cannot reach the top of a mountain without first climbing its base, and the base of this mountain is philosophy. It is no coincidence that humanity's current understanding of science developed alongside philosophy. Aristotle, for example, is considered by many to be the godfather of modern science, even though he was a philosopher. His ideas brought forth a way of thinking that would slowly develop into the modern process now called science.
Immanuel Kant believed philosophy was the most basic result of man's rational capabilities (Kant & Meiklejohn, 2015). In other words, philosophy is the most basic and necessary of all subjects; scientific study does not exist without it. According to the American Philosophical Association (n.d.), the study of philosophy


“develops intellectual abilities important for life as a whole, beyond the knowledge and skills required for any particular profession… It enhances analytical, critical, and interpretive capacities that are applicable to any subject matter and in any human context” (p. 5). Research has shown many benefits to studying philosophy, including higher scores on the GRE and LSAT than almost any other field of study (Sifford, 2012). Philosophy also teaches critical thinking skills (Shim & Walczak, 2012). Some experts even believe the study of philosophy and critical thinking are the primary tools lacking in the current education system. Critical thinking matters, moreover, because it applies across multiple fields of study rather than benefiting only certain subjects. To understand the current education system, including music education, it is necessary to start at this foundation: science must understand its philosophy and how that philosophy affects the individual and group psyche. This dissertation deconstructs both scientific and philosophical arguments regarding EBPs before exploring how EBPs, when applied to music education, cause unintended consequences. In doing so, it reveals the psychological effects EBPs have when applied to many subjects, music education chief among them. This thesis, then, is an interdisciplinary study placing evidence-based approaches in contradistinction with contrasting approaches to music education. The unintended consequences that occur when EBPs are applied to music education include teaching students to be disciplined at the expense of creativity; promoting art that is technically proficient but lacking in emotional content; promoting music that is technical but not enjoyable; and reducing student retention, motivation, skill, credible and reasonable goals, ability, emotional understanding, and enjoyment.
In summation, EBPs may have many advantages, but these advantages also come at a cost and have many unintended consequences.


The importance of studying EBPs is manifold. For one, research into this subject promotes the questioning of the United States' educational philosophy, a philosophy that has rarely been questioned and badly needs criticism. Even where research suggests that the psychology of music education is functioning as it should, the fact that the US's teaching philosophy is so rarely questioned should itself be cause for concern. Research in this field also helps science understand how to approach education and research in general, improving the quality of research and knowledge and, by multiple criteria, the country's education system. Is the scientific method applicable to every question, or is the underlying philosophy a necessary ingredient as well? How does society even know whether a subject is being taught well, and are evaluations or standardized tests enough to judge success? Just as important is understanding the differences between creative subjects, such as music and art, and more objective subjects, such as math and science. Should these subjects be taught using the same methods and the same objective measures? Is this not, at the very least, a debate the country should be having? This dissertation addresses some of the issues that arise when music is taught in the same manner as math or science, and the effects of more positivistic psychological approaches to teaching music. My interest in this subject began many years ago when I started teaching music myself. My interest in music has been lifelong, and my interest in psychology has spanned most of my adult life. These feelings about music and music education have been growing throughout my life; they began when I was in the fourth grade, although I did not know it at the time. I was auditioning for band, and I wanted to play the saxophone, specifically the tenor sax.
Little did I know at the time that I was fortunate to have chosen an instrument that was not incredibly popular.


The girl sitting next to me during the meeting was not as fortunate; she asked to play the flute. The band director's reply? "I already have enough flutes; why don't you play the clarinet?" This early experience impacted me profoundly. At the time, I was just a child, so I did not know how to articulate my feelings, but I knew instinctively that something about the situation was not right. Today I can articulate the concern I had: this band director was prioritizing his own wants and needs above those of his students. What impact would this prioritization have on the student's education? How would she pursue an instrument she had not asked to play? How excited would she be to take lessons? To attend band rehearsal? What kind of rapport would she have with a teacher who had rejected her wants and desires? And why was this done at all? What prevented the band director from simply saying, "Great! I'm so happy you're interested in band and want to learn an instrument. I'm grateful I get to be the one to teach you, and hopefully I can add some quality to your life through music"? Fast forward more than a decade, and I was now the music teacher. I could finally articulate what was not right about that earlier situation. Moreover, I now witnessed many practices within the music education community that I felt were not right (based on what I would come to understand as empathy for the music student), or that at the very least did not serve the best interests of students. Why is a student's musical goal not allowed to be something as simple as learning a few Taylor Swift songs? What about the student whose only interest is playing Metallica? What about jazz? Why are students not allowed to study jazz at the beginning of their training? In time I began to see that many of the practices I disagreed with were quite "quantitative" in nature.
I was flabbergasted: how could a field as creative and


subjective as music become such a quantitative discipline? This dissertation and critical review of the literature on the psychology of music education explores these questions. During my 15-year tenure as a music instructor, I witnessed the power one's own philosophy has over one's education, both as a student and as an instructor. I saw how questioning the purpose of music education can lead to positive results. I taught the students whose only goal was to learn a couple of Taylor Swift songs after the public music system told them this goal was unacceptable, and I saw how accepting those goals increased their quality of life and their personal enjoyment of music. I experienced the difference it makes to teach a student a difficult Metallica tune even though they cannot yet play "Mary Had a Little Lamb." I watched student retention and enjoyment increase when I stopped requiring my students to practice an hour a day. I saw a difference in my teaching once I accepted that not every student has to aim to become an excellent player. After beginning a career in psychology years later, I was introduced to the term evidence-based and recognized how widespread the use of EBPs has become across many fields. Although this dissertation delves thoroughly into the definition of EBPs, for the sake of clarity it defines EBPs here as quantitatively based research and practices. These quantitatively based practices have invaded medicine, nursing, business, management, and even education. I began to wonder: how much of what I was seeing in the psychology of music education could be traced back to EBPs? What is the goal of these practices, and how practical and effective are they? I have also experienced the consequences of the country's move toward so-called EBPs. As a psychologist, I experience them weekly and often daily.
Because evidence-based practice means basing decisions on quantitative data, I can no longer diagnose a client simply on the basis of self-report; I have to give them a questionnaire asking the same


questions I would naturally ask, because faulty reasoning holds that quantifiable evidence is more valuable and trustworthy. One of my favorite, and most frustrating, examples of EBPs is the constant need for change that this unending reliance on quantitative data has caused. Computer programs and phone apps are constantly changed and updated in the name of improvement. Cable and retail companies, internet providers, and phone providers keep changing their services after receiving customer feedback. Corporations and businesses regularly update their products in an effort to appeal to a larger base and generate more profit. Schools and universities seek evaluations and consistently revise their policies, practices, and beliefs to improve their image and education practices and to compete with other schools. All of these examples have produced many positive changes. However, they have also produced negative consequences that are rarely recognized or discussed, including an actual decrease in quality and, not least, a complete lack of consistency in so many aspects of people's daily lives. This problem illustrates both the role of philosophy and the consequences of EBPs, because it rests on the belief that constant measurement and revision will yield improvement, which is itself a philosophical idea. Yet rarely do people stop to ask the simple question: what will be the consequence of this change? This dissertation explores how EBPs impose overt and latent consequences on individuals and groups, and it argues that such a one-sided approach affects other areas of people's lives, especially the psychology of music education. Over time, during my study of music education, it has become clear that there are often logical reasons why scientific practices in music education exist.
However, I have also noticed that these EBPs are rarely questioned and that the reasons underlying them are not always complete or rational. A good example is the teaching of music notation, or


the art of reading written music. While teaching music notation may have some advantages, it leaves other important elements out of the equation, such as student motivation, interest, retention, and enjoyment. Moreover, thousands of excellent musicians do not read music, meaning that notation is demonstrably not a necessary skill for being a good musician; the belief that reading music is necessary ignores music history and the many successful musicians who never read at all. This is one of many examples of teaching music in a more quantitative manner. This research questions the practice, develops ideas regarding the effectiveness of using EBPs in music education, and challenges many currently held assumptions. In summation, this dissertation introduces the subject of the psychology of music education, or more accurately, the psychology of how people learn to play and perform music, as a way to critique modern EBPs. It examines a specific part of the current scientific system, EBPs, and how these practices influence many subjects, especially music education. As an important part of this process, the paper emphasizes the importance of philosophy to the scientific process and how an understanding of philosophy shapes psychology and the psychology of music education. It explores the definitions of EBPs and how these definitions affect the scientific process of psychology. After exploring these definitions, the dissertation summarizes the problems with them and with EBPs, and offers theoretical solutions. The final hypothesis demonstrates the incompleteness and problematic nature of modern EBPs in order to promote conversation about how music education should be approached and about how the study of philosophy can influence people's understanding of this subject within psychology.

Evidence-Based Practices

To begin the discussion of EBPs, it is first necessary to define them. As shall soon be seen, any attempt to define EBPs quickly becomes problematic. However, there are many things EBPs purport to be, and these ideas need to be understood to inform proper discussion. It should also be understood that several terms are used interchangeably, such as EBPs and evidence-based medicine (EBM); these terms are generally considered to name the same philosophy (Nelson & Steele, 2007). EBPs have grown in recent years amid calls to place greater reliance on evidence when creating policy (Slavin, 2008). They exist in many fields, including medicine, nursing, and other health care professions (Gibbs & Gambrill, 2002). EBP proponents will often publicly criticize so-called non-evidence-based treatments, saying that clients who have not received EBP treatments, usually meaning manualized therapies, have not received adequate treatment (Shedler, 2017). Developing a definition of EBP is very difficult, and there is no consistent agreement on what EBPs are. Most people, however, agree on a few basic points. As Goldenberg (2005) summarized:

The EBM movement centres around five linked ideas: first, clinical decisions should be based on the best available scientific evidence; second, the clinical problem, and not the habits or protocols, should determine the type of evidence to be sought; third, identifying the best evidence means using epidemiological and biostatistical ways of thinking; fourth, conclusions derived from identifying and critically appraising evidence are useful only if put into action in managing patients or making health care decisions; finally, performance should be constantly evaluated. (p. 2622)


Haynes (2002) provided further detail:

Evidence-Based Medicine (EBM) is based on the notion that clinicians, if they are to provide, and continue to provide, optimal care for their patients, need to know enough about applied research principles to detect studies published in the medical literature that are both scientifically strong and ready for clinical application. (p. 1)

So far so good; these ideas seem far from having any negative consequences. Further reading suggests that EBPs exist in medicine and other social science fields and are usually understood as the empirical standard by which people are expected to practice. They are seen as a way to decrease uncertainty (Goldenberg, 2005) and have frequently been described as a paradigm shift that will forever change the face of many fields (Sehon & Stanley, 2003). Many claim there is much that is new about EBPs and that they are not just the same practice under a different name (Gibbs & Gambrill, 2002). EBP proponents claim their philosophy is new and unseen in previous science (Goldenberg, 2005), a "paradigm shift" away from "business as usual." This implies that EBPs are a new way to practice science, which many supporters say is true (Sehon & Stanley, 2003). One reason many believe EBPs to be a new form of science is a belief in their impartiality. Indeed, EBPs claim to provide the promise of consistent and impartial evidence (Goldenberg, 2005), and one way they claim to provide this impartiality is by valuing statistical evidence with little consideration of more qualitative evidence (Goldenberg, 2005). As Sehon and Stanley (2003) noted, "When comparing EBPs to clinical experience and observational studies, we do have a shift in the sort of evidence that is most highly valued for diagnosis, therapy, and prognosis questions" (p. 3).


Supporters believe EBP is superior because effectively testing a policy is necessary to know if it works. Practice is therefore superior to theory, values, or political will (Morrison, 2001). Indeed, as shall be seen, EBP supporters very much value testing in all its forms and have a keen reliance on quantitative evidence. In the field of psychology, EBPs have become the dominant narrative. These supporters also claim a shift in the field, saying, “EBP is a systemic approach to helping clients in which research findings related to important practice decisions are sought and critically appraised, what is found (including nothing) is shared with the client, and clients are involved as informed participants” (Gibbs & Gambrill, 2002, p. 464). Gibbs and Gambrill (2002) continued: EBP emphasizes consideration of the values and expectations of clients regarding goals sought, methods used, and outcomes attained. Clients’ personal opinions regarding effectiveness are important to consider, as suggested in professional codes of ethics. Efforts are made to minimize the play of personal opinion in critical appraisal of practicerelated research literature by clear description of search procedures used and the use of rigorous criteria to evaluate practice-related research. (p. 464) EBM proponents want everyone, including patients, clinicians, managers, and policy makers, to use the best findings from research needed to meet dual requirements of being scientifically valid and ready for use in clinical application (Haynes, 2002). The emphasis here is that EBPs encourage the sharing of research and knowledge, allowing the client to participate in their treatment and possibly choose the type of treatment they receive. One explanation for this valuing of statistical evidence over qualitative reasoning is a hierarchy of evidence that reflects a shift in beliefs and change in philosophy (Sehon & Stanley, 2003). 
EBPs represent a concerted effort to obtain only objective evidence (Silk et al., 2010). What separates EBPs from other ways of approaching scientific practice is the priority given to certain forms of evidence, most prominently the emphasis placed on randomized controlled trials (RCTs; Sehon & Stanley, 2003). EBPs can essentially refer to the practice of valuing RCTs as the highest form of research (Sehon & Stanley, 2003). There is a preference for certain methodologies; EBP favors methods that critically appraise claims so that therapists do not misinform themselves and their clients (Gibbs & Gambrill, 2002, p. 464). EBP proponents proclaim experts unreliable and fallible, whereas research adhering to strict criteria is held to be less likely to err (Haynes, 2002). This philosophy rests on the assumption that practitioners who base their decisions on current research will be more successful than those relying on their own understanding of the science (Haynes, 2002). Supporters believe this standardization of treatment will produce more thorough study of treatment outcomes and more consistency in outcome and practice (Addis et al., 1999). The evidence-based movement promotes scientific approaches over more unsystematic or intuitive methods of practice. It claims a more scientifically rigorous approach, achieved through methodological examination and use of the most current clinical research (Goldenberg, 2005). EBPs consider themselves scientific because they promote a methodological and systematic approach to evidence gathering. They criticize the use of intuition and unsystematic means of evidence gathering, such as clinical experience and patient and practitioner values (Goldenberg, 2005). EBPs also seek to answer certain questions, such as how treatment can be presented in a manner more scientific and legitimate to clinicians and people outside a scientific field (Addis et al., 1999). For instance, “EBP describes in detail a series of steps for integrating research and practice and honoring ethical guidelines” (Gibbs & Gambrill, 2002, p. 461).
Gibbs and Gambrill continued, “Considerable attention is devoted to formulating questions and using methodological filters” (2002, p. 461), and “Evidence-based practice (EBP) is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of clients” (Gibbs & Gambrill, 2002, p. 452). Supporters of EBP claim many new advances, including new databases and research centers that have come into existence primarily in the past decade. Additional guidelines for EBPs are also new, such as, “We have a file of more than 300 well-formulated effectiveness questions from social work students collected over 19 years; many have been answerable to some extent by electronic searches” (Gibbs & Gambrill, 2002, p. 463). Furthermore, other authors have claimed that EBPs are “designed to create professionals who are lifelong learners who draw on practice-related research findings and involve clients as informed participants in decisions made” (Gibbs & Gambrill, 2002, p. 452) and that “teaching EBP involves helping students to pose answerable questions and to seek and critically appraise related research findings. Those who teach EBP teach skills for posing answerable questions of relevance to practice and critically appraising related research findings” (Gibbs & Gambrill, 2002, p. 462). EBP supporters believe EBPs will empower teachers, as teachers can then do their own research and discover the policies that “work” (Morrison, 2001). “EBP classes do not indoctrinate students to value treatment research, but rather provide valuable exposure to EBPs and help develop skills to employ these practices” (Nelson & Steele, 2007, p. 327). Advanced training will be needed for effective use of manualized treatments (Addis et al., 1999). There are five critical issues related to teaching EBPs: defining EBP, modeling the complexities of EBPs, examining the curriculum, coordinating organizations, and shifting the culture (Springer, 2007). Dawes (2008) described EBPs as aiming to provide the best possible evidence at the point of contact with clients. Sackett et al.
(1996) defined EBPs as “the conscientious, explicit, judicious use of current best evidence in making decisions about the care of individual patients” (p. 71).


EBP supporters claim that EBP does not ignore clinical expertise, a common criticism of EBPs. Supporters believe clinical expertise is an important part of EBP, saying EBP values client input and desires (Gibbs & Gambrill, 2002). EBP is not a cookbook or one-size-fits-all form of treatment: “Consideration of client values and expectations as well as the extent to which research findings apply to a particular client shows that it is not a cookbook approach” (Gibbs & Gambrill, 2002, p. 459). The choice of treatment should be up to the client (Gibbs & Gambrill, 2002). The American Psychological Association’s (APA’s) definition of EBP encompasses the use of scientifically proven treatments but also emphasizes clinical expertise and client context (Nelson & Steele, 2007). In the business world, EBPs are based on a new managerialist strategy (Davies, 2003; Hammersley, 2001). Under this strategy, called new-managerialism, staff are pitted against each other for ever-decreasing resources (Davies, 2003). Although EBPs have been heralded as the “gold standard” of scientific and educational research, they also cater to corporate principles such as efficiency, profit maximization, and accountability (Silk et al., 2010). Certain types of knowledge are given priority over others, largely because of market forces (Davies, 2003), and new-managerialism emphasizes measurable outcomes and goals defined by management at the highest levels of the system. Those not at the top of the system are cut out of the debate about the outcomes and their own fate (Dennis, 1995). EBPs seek to change individuals through the idea of “continuous improvement” and documented individual commitment to striving for it. Little or no attention is paid to the work itself, only the outcome: “As long as objectives have been specified and strategies for their management and surveillance put in place, the nature of the work itself is of little relevance to anyone” (Davies, 2003, p. 93). Davies (2003) continued,


If the auditing tools say that the work has, on average, met the objectives, it is simply assumed that the work has been appropriately and satisfactorily tailored according to the requirements of the institution (and often of the relevant funding body). (p. 93)

EBPs can be characterized as the “removal of the locus of power from the knowledge of practicing professionals to auditors, policy-makers and statisticians, none of whom need know anything about the profession in question” (Rose, 1999, p. 172). They can also be described as manipulating the inflow and outflow of both information and financial resources in a way that appears natural and normal, blinding us to its effects (Davies, 2003).

EBM posits that practitioners must be ready to accept and deal with uncertainty (rather than seeking the reductionist allure of basic science), and to acknowledge that management decisions are often made in the face of relative ignorance of their underlying nature or true impact for individual patients. (Haynes, 2002, p. 2)

EBPs have grown in popularity over the last couple of decades. In fact, the term “evidence-based” has become something of a psychological buzzword (Shedler, 2017). Because EBPs have no standard definition, there is a danger of their being interpreted as a catchphrase for any practice or idea supported by some evidence (Springer, 2007). Some clinicians have suggested the principles of EBPs should be part of required training and that those who violate these concepts should face potential suspension of their license (Sehon & Stanley, 2003). However, others claim rigidity and forcefulness are not part of the orientation (Addis et al., 1999). As seen, EBPs are not limited to clinical research; they exist in a wide variety of settings, including medicine, education, psychology, and financial systems. They have been implemented in medicine for many decades. They also exist in social work, education, nursing, finance, psychology, and engineering.
Many areas of study claimed to follow medicine after its reported success using EBPs. For example, the National Association of Social Workers shares much in common with the psychological code of ethics, in part because of the common use of EBPs (Gibbs & Gambrill, 2002). Supporters claim practitioners are not left helpless if there is no research available upon which to base treatment decisions (Gibbs & Gambrill, 2002), which is a common criticism. In response to frequent complaints, the current push for EBPs is part of a larger movement to better integrate research into clinical skills (Nelson & Steele, 2007). Part of the reasoning behind the EBP philosophy is to limit the role of uncertainty in science: “The uncertainty involved in decision making and potential sources of bias are emphasized in EBP” (Gibbs & Gambrill, 2002, p. 464), with many authors highlighting the role critical thinking plays within the EBP system (Gambrill, 2000). A final feature is that EBPs tend to be correlated with certain theories, philosophies, and, in the field of psychology, orientations. For example, the use of EBPs can be predicted by theoretical orientation, with most proponents of EBPs endorsing a cognitive behavioral therapy (CBT) orientation (Nelson & Steele, 2007). They also tend to follow a logical positivist philosophy, although many supporters claim EBPs are a-philosophical and avoid philosophical discussions and questions altogether. In summary, EBPs are many things, but they generally emphasize basing treatment decisions on the best evidence, which they generally define as quantitative research. This emphasis on quantitative research has produced a hierarchy of evidence, with randomized controlled trials considered the most important form of evidence by EBP supporters. There are disagreements over the various definitions, but many supporters say EBPs are an entirely new way of approaching science and believe they represent a change in the scientific system. EBP supporters also believe in testing every client, including treatment outcomes.
These ideas are being integrated into a wide variety of fields, including medicine, finance, education, psychology, nursing, and social work. EBPs are related to certain ideas and theories, the most common being the CBT orientation in the field of psychology and a logical-positivist philosophy. Perhaps the most apt way to describe EBPs is to say they embody the philosophical idea of basing treatment only on what works, leaving behind all other discussions or concerns.

Statement of the Problem

There are plenty of criticisms regarding EBPs (Haynes, 2002), and there has been much debate about their value, with not everyone agreeing on their merits. For starters, few people actually believe that following evidence constitutes the revolutionary paradigm shift supporters of EBPs have claimed (Sehon & Stanley, 2003). In the field of psychology, many of these concerns exist because of complaints by clinicians (Addis et al., 1999). While many of the previously mentioned features sound appealing, there are flaws within the system. The most immediate issue is that there is actually no set definition of EBPs (Springer, 2007). For the major philosophy of any industry or field to remain undefined, or lacking agreement, is problematic to say the least, but even more so when this philosophy is shared among many industries. This topic is covered more thoroughly later in the dissertation. For now it is enough to know there is no agreement on an EBP definition, and definitions run the gamut from so broad and abstract that they define essentially anything to so narrow and specific as to be unusable. There is also debate in the community regarding how useful EBPs actually are (Springer, 2007): “The very reason for the origin of EBP…in spite of intentions of professionals to provide competent, ethical services informed by practice-related research, they do not do so” (Gibbs & Gambrill, 2002, p. 463).
The processes upon which EBPs are based have many problems, including a reduction in critical thought, a lack of responsible dissent, pervasive subliminal fear and anxiety, a devalued self, a shift away from the personal, a shift from the complex towards the simple, and an emphasis on economic considerations above all others (Davies, 2003).

Evidently, over the past decade or so, there has been a concerted backlash against various forms of subjective, interpretive, and constructivist thought that is as much economic in its derivation as it is epistemological in its effects. Academic freedom meets fiscal constraint, resulting in widespread intellectual compliance to the corporate scientific norm; with little or no guarantee that science and scientific thought, as classically understood, is actually being advanced. (Silk et al., 2010, p. 107)

Among the criticisms are numerous significant challenges to applying EBPs, and these challenges are not always addressed by the EBP community (Haynes, 2002). For example, ethical issues exist within EBPs across multiple disciplines (Sehon & Stanley, 2003). Common criticisms of EBPs include ignoring client values, promoting a manualized approach, being nothing more than a cost-cutting tool, and leading to therapeutic nihilism (Springer, 2007). Other criticisms include ignoring clinical expertise and judgment; ignoring clients’ values, desires, and preferences; being manualized and therefore a cookbook approach; existing simply as a cost-cutting measure; being limited to clinical research only; being impossible to perform in the real world; and resulting in therapeutic nihilism, often providing no information (Gibbs & Gambrill, 2002). Many criticisms of EBP claim it promotes a cookbook approach to care (Haynes, 2002). While many supporters of EBPs say clinical judgment continues to matter, many clinicians are still expected to follow manualized treatments rather than their own judgment.
Proponents of EBPs frequently dismiss and disparage relationship-based and insight-oriented therapy, using phrases such as “the disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment” (Shedler, 2017). EBP supporters say their beliefs take into account clinical values and client beliefs, but proponents of other theories make the same claim (Sehon & Stanley, 2003). Researchers who do not conform to the values of EBP are often accused of failing to think or act with intellectual integrity, forsaking scientific rigor, forsaking honest inquiry, promoting self-gratification, or researching for the sole sake of ideology, greed, routinization, or efficiency (Murray et al., 2008): “EBR echoes with an all too familiar unquestioning air in which any ontological or epistemological position that may counter it is usually viewed with suspicion at best and outright hostility at worst and predictably marginalized” (Silk et al., 2010, p. 111). Another legitimate complaint about EBPs is that they are not actually the paradigm shift they claim to be. People who claim a positive shift is taking place with the move towards EBPs are mistaken and are polarizing the EBP debate. The new ideas of EBPs are not inconsistent with many older ideas about evidence and science, meaning that EBPs are not really new and are not changing many practices in the field of science (Sehon & Stanley, 2003). Science has relied on evidence for centuries, and the belief that EBPs are new is a faulty claim at best. Regarding the field of psychology, the term evidence-based, when used for psychotherapy, has come to mean something different than originally intended. It has become a buzzword for a specific ideology and agenda. It is now code for manualized treatments, which are frequently brief, one-size-fits-all therapies, and which typically refer to a type of CBT (Shedler, 2017). Criticism of EBPs includes an overreliance on the value of clinical trials (Sehon & Stanley, 2003). Some proponents of EBPs propose only using evidence from RCTs and believe practice is independent of science and interpretation. These same proponents have also supported


a hierarchy of evidence that removes science and basic reasoning (Sehon & Stanley, 2003). EBP proponents seek to undermine other legitimate forms of evidence, but these claims by supporters of EBPs about the superiority of their methods and RCTs contain serious problems (Morrison, 2001). This evidence hierarchy, so essential to EBPs, tends to devalue evidence not at the top of the hierarchy and treats certain types of evidence as unimportant (Goldenberg, 2005). The goals of this hierarchy of evidence are not compatible with the goals of a person-centered style of care (Goldenberg, 2005) and are separated from scientific reasoning and a general understanding of human behavior. Further problems include difficulties transferring research into practice: “Treatments with substantial research support can take 15–20 years before they are fully integrated into routine clinical practice” (Nelson & Steele, 2007, p. 320). Many EBPs fail to move from research settings into clinical practice (Nelson & Steele, 2007), leading to the criticism that EBPs are not effective treatments and are promoted by academics rather than clinicians:

Behind the “evidence-based” therapy movement lies a master narrative that increasingly dominates the mental health landscape. The master narrative goes something like this: “In the dark ages, therapists practiced unproven, unscientific therapy. Evidence-based therapies are scientifically proven and superior.” The narrative has become a justification for all-out attacks on traditional talk therapy—that is, therapy aimed at fostering self-examination and self-understanding in the context of an ongoing, meaningful therapy relationship. (Shedler, 2017, p. 320)

Evidence-based, as the term is currently used, “is a perversion of every founding principle of evidence-based medicine” (Shedler, 2017, p. 328). Resistance to EBPs is considered ignorant and dismissed. It is assumed critics do not understand bottom-line issues or the basic processes upon which EBPs are based (Davies, 2003). Some claim EBPs exist only to continue supporting those already in power (Silk et al., 2010). Regarding the issue of applying research to practice, one of the larger complaints concerns the time constraints EBP places on clinicians (Springer, 2007). Because of the breadth of research available to professionals in modern times, it is unrealistic to expect teachers, educators, professors, and even researchers to keep up with all available information (Stambaugh & Dyson, 2016). This reliance on research also leads some critics to emphasize the value of the clinical experience and judgment of the individual clinician (Sehon & Stanley, 2003), claiming EBPs de-emphasize the role and importance of clinical experience. This idea is consistent with the concern that EBPs, especially manualized treatments, do not facilitate the development of a therapeutic relationship (Addis et al., 1999):

Existing manualized treatments are not always successful....manualized treatments are often regarded as highly technical and disorder specific. How many different manualized treatments must the working clinician learn in order to best serve a diverse group of clients… Using a treatment manual is not a matter of theoretical or empirical debate for practitioners. It is a psychological reality. By psychological reality we mean to emphasize first that practitioners are the ones who must grapple with attitudes and feelings regarding autonomy, competence, and the perceived threat of manualized treatments. (Addis et al., 1999, p. 431)

Certain words become problematic as well when used as part of the EBP dialogue. One current problem is the use of the term statistically significant when measuring client outcomes. Statistical significance in a study does not indicate that a client got well or improved in any meaningful way; a result can be statistically significant even though the client did not meaningfully improve (Shedler, 2017).


Other criticisms focus more on the foundations of the EBP philosophy, claiming EBPs exist largely because of financial constraints and motives. The more one is seen as performing objective and good science, the more likely one is to receive large amounts of funding (Silk et al., 2010). There are also claims that EBPs are based more on political agenda than on science (Sehon & Stanley, 2003). For example, critics of EBPs “argue that EBP is just another name for authority-based practice in which decisions are made on the basis of authority rather than on a careful appraisal of the evidentiary base related to recommendations made” (Gibbs & Gambrill, 2002, p. 469).

Some practicing clinicians are concerned about their ability to learn and successfully implement manual-based treatments. These fears may be reasonable given current political and economic pressures: their livelihoods may depend upon adopting empirically based practices. An extension of this concern is that political forces will take on a big brother presence, dictating treatment interventions while monitoring outcomes. (Addis et al., 1999, p. 434)

There is clearly a need for more extensive research on practitioner satisfaction and comfort using manualized treatments. It would be helpful to view manualized treatments as pieces of technology that, while they may be helpful to clients, will only be used by therapists if they are perceived as helpful, satisfying to use, and manageable to learn. As payment for mental health services has become more restricted and funding sources have dried up, there is a tremendous amount of pressure on practicing clinicians to meet productivity and revenue requirements dictated by organizational policies. (Addis et al., 1999, p. 435)

Many clinicians believe they already offer effective treatment and are unmotivated to consider manualized treatments. These clinicians only fuel the problems that EBP practitioners identify (Addis et al., 1999). Psychologist Jonathan Shedler (2017) pointed out, “There is a mismatch between the questions studies of ‘evidence-based’ therapy tend to ask versus what patients, clinicians, and health care policymakers need to know” (p. 322). Even successful magazines such as Newsweek have fallen prey to the idea that EBPs are scientific and non-EBPs are unscientific (Begley, 2009). Part of the problem with the research is that it is possible to find evidence for any point of view if one searches hard enough (Gibbs & Gambrill, 2002). Shedler (2017) wrote about the common assertion that EBPs are scientifically proven and therefore superior to other forms of psychotherapy; however, the empirical research does not support these claims. Critics say there is also an “art” to science (Miettinen, 2001), and supporters of EBPs appear to hold a theory of evidence on which all knowledge that passed for evidence prior to the EBP movement was not evidence at all, although this theory is not expressed outright (Sehon & Stanley, 2003). This observation is related to the concern that effectiveness becomes a matter of personal opinion (Gibbs & Gambrill, 2002) and that EBPs often treat the subjective as factual and scientific. The emphasis on manualized treatments also ignores some large problems, such as manualized treatments being standardized or scripted in such a way that they cannot address the needs of individual patients (Shedler, 2017). The new-managerialist style so prevalent in EBPs excels at using virtuous and moral language, blinding many to necessary critique and to the negative consequences of the system. Systems such as new-managerialism are exhausting and debilitating, potentially leading to depression and other problems. These systems send a message that members are not good enough unless they conform to the rules and ideals set by others.
New-managerialism is organized in such a way that everyone has to work harder to be considered good enough for the system and to meet the expected standards:

Individuals involved in implementing (or simply caught within) new managerialist systems are often seduced by its rhetorics of efficiency and accountability, and by its morally ascendant promise of a desired comeuppance for those perceived to be faulty or inadequate in conducting their own conduct. (Davies, 2003, p. 95)

It is based on the idea of compliance and conformity and requires obedience to a code which practitioners are expected to accept through fear and guilt (Rose, 1999). Those who adopt EBP simply become reverent towards a different authority, that of the researcher (Gibbs & Gambrill, 2002). New-managerialism places an emphasis on personal responsibility; however, this sense of responsibility is driven by a subliminal fear of negative consequences and surveillance rather than by a sense of personal value and success within a social fabric. Individual motivation is no longer an important factor in the new-managerialism upon which EBPs are based. The locus of control has shifted from internal to external. Motivation has been replaced by fear of punishment. Fear of being reported, of breaking the rules, of surveillance, and of punishment has replaced the desire for personal fulfillment (Davies, 2003). Other problems with new-managerialism include a reduction in freedom, the loss of a code of morality, the favoring of economy over morality, and the celebration of machismo and competition:

These changes are favouring a much tougher, more ‘macho’ kind of academic, and encourage a climate where due process, equity, and respect for academic freedom are overwhelmed by the need to respond quickly to opportunities, reinvent, repackage, and position oneself and one’s institution in the market-place. (Davies, 2000, p. 177)


One of the many failed examples of new-managerialism is the lack of equality it has provided for women in the workforce, even though proponents of these ideals promised a re-visioning of universities and workplaces (Davies, 2003). EBPs exist in a system where any questioning of the system itself is silenced or trivialized, as the system is assumed to be perfect. This system is assumed to be both natural and inevitable, making questioning futile (Davies, 2003). In EBP systems, people no longer obtain their sense of self-worth and value from their conduct, abilities, or knowledge (Davies, 2003). This system includes the unquestioned authority of scientific evidence (Goldenberg, 2005), and a very specific type of scientific evidence at that. Many proponents of EBPs seem to avoid the difficult debate around EBPs by simply defining them so broadly that disagreement becomes impossible. These definitions include defining EBPs as the best possible combination of basic science, clinical experience, and clinical trials. By defining EBPs in this way, supporters come close to defining EBP as the best way to practice in general. In other words, they respond to the second-order conceptual question (What are EBPs?) by saying EBPs are whatever approach best answers the first-order question (How ought professionals practice?), giving the illusion of having answered both questions when in fact neither has been answered. This definition of EBPs makes any debate impossible and futile, as it rests on circular reasoning and is overly broad (Sehon & Stanley, 2003). This definition also avoids the debate over how one defines the best possible evidence. While EBP supporters would claim RCTs and other types of quantitative research are the best types of evidence, simply stating this does not make it so. The debate around what constitutes good evidence rages on, despite claims otherwise.
There is a danger that federal forces will foreclose on this narrow definition of scientific research and that this definition will become the standard for science and federal support (Eisenhart & Towne, 2005). Scholarship with commercial value in a marketplace takes precedence over any intellectual value under the EBP system. Because of these problems, EBPs have caused financial and methodological concerns to take precedence over the natural scientific method (Silk et al., 2010). Some have even argued that EBPs exist not because of science and research but because of money, politics, and the new-managerialism system (Eisenhart & Towne, 2005). This new system supports knowledge that places the political above broader civil concerns and public values, causing some to predict that EBPs will lead to the death of true knowledge (Silk et al., 2010). In certain settings, these problems with EBPs are exacerbated. For example, places such as community mental health centers experience unique difficulties when attempting to implement EBPs (Nelson & Steele, 2007). Problems also exist with the teaching of EBPs, which are typically taught as a new system that will save the current, corrupt one. This framing can appear threatening in a certain light, and EBPs should instead be taught through a lens of support rather than the current lens of threat (Addis et al., 1999). Perhaps no issue better exemplifies the problems EBPs present than the standardized testing used in education. The criticisms are numerous. For example, much of the available financial resources have been used for test preparation rather than for improving the quality of teaching or education. Additionally, many of the standardized tests being used were not designed to measure the trait they are being used for (Kohn, 2000), and the basis for assessing performance in systems such as EBPs is externalized, constantly increasing and changing, at odds with much professional knowledge, and often not based on previous practices (Davies, 2003). Returning to the field of psychology, even more criticisms of EBPs exist.
For example, one frequently mentioned criticism of EBPs is that they can ignore clinical expertise. This criticism gains credence when one realizes EBPs are currently superimposed upon practice, repeating and reinforcing the biases and assumptions they carry (Goldenberg, 2005). To summarize, numerous criticisms of EBPs exist, suggesting they are not the perfect system or form of practice their supporters claim. The most common criticism is that EBPs are not the huge success or great change in the system they promised to be; furthermore, they have not delivered the rapid improvement in results they promised. Further criticisms include EBPs being an overly simplified system of science, lacking a set definition, using definitions so broad as to be vague, encouraging conformity over critical thinking, discouraging variety or variation, not allowing for dissent, not being practical or useful in a clinical setting, not bridging the gap between practice and research, being based on a faulty philosophy, ignoring philosophy, and hijacking the use of the term evidence.


Chapter 2: Literature Review

What is Evidence?

There are problems with EBPs (Berkwits, 1998; Howick, 2015). There is disagreement as to the cause of these problems, but there has generally been consensus that problems exist. However, much research can also be found supporting EBPs, proclaiming them to be one of the most important revelations in scientific thinking in modern times (Berkwits, 1998). This statement highlights one of the problems with EBPs: Despite popular belief, EBPs are hardly new; they rest on beliefs that stem from logical positivist thinking—ideas that have been around for well over a century. In actuality, the argument over whether to use EBPs comes down to a debate about what constitutes evidence. Logical positivism, a branch of philosophy based on the ideas of empiricism and occasionally referred to as logical empiricism, holds as one of its central tenets verificationism: the idea that a statement is meaningful only if it is empirically observable. At first glance this principle makes sense, yet it contains its own undoing: if only empirically observable statements are meaningful, then statements that cannot be observed empirically are not studied at all, a consequence that causes problems within fields like medicine, psychology, and education. Again, it can easily be argued that just because a variable or idea is not directly observable or definable does not mean it is negligible:

In this regard, and although it appears there will be some movement from a covertly politicized epistemology, evidence still appears as incontestable, as if it speaks the truth, and is pure in the sense of being free from the inherent messiness of human language, human values, or indeed anything even recognizably human. (Silk et al., 2010, p. 107)

This idea of logical empiricism also creates a hierarchy of evidence, which proponents of EBPs support (Upshur, 2003). For example, little is known about emotions in the field of psychology. A logical empiricist philosophy suggests ignoring emotions because they are hard to define and study. However, ignoring emotion creates huge problems and chasms within the field of psychology. Furthermore, this idea of focusing only on what can be observed and operationally defined has led to the current divide between quantitative data—data that can be operationally defined and measured—and qualitative data—data that can only be categorized—with quantitative data assumed to be empirically observable. So, if something cannot be seen, logical positivism says it is not meaningful. In certain subjects, such as the study of music, this ideology creates huge problems, as will be illustrated, for much of what is valued and enjoyed in music is unobservable.

Current discussions label evidence as data from properly conducted research that are applicable and provide relevant information (Berkwits, 1998). In logical positivism, evidence is also knowledge that has been subject to critical testing and has passed these tests (Gambrill, 2010; Upshur, 2003). For example, in the field of psychology, certain inventories are used to study a person's psychological makeup after thorough and routine vetting and verification. However, as will soon be seen, the definition of evidence that logical empiricism promotes is not, in practice, the definition many adhere to. As sound as this definition is (and this paper supports such a definition), it is also ambiguous and expansive, allowing for a wide range of interpretations of what evidence might be. This openness makes the definition more accurate, but it also makes it harder for some individuals to adhere to and apply.
These same individuals crave a definition that is specific and detailed, hence their reliance on quantitative data as, in their view, the only criterion that fits the definition of evidence.

The debate around EBPs centers on two questions: (a) What constitutes scientifically-based evidence? and (b) Is science the best or only way to approach meaningful study (Eisenhart & Towne, 2005)? The question of what constitutes scientific evidence has historically been widely debated and continues to be; consensus regarding what is scientific has never been established (Eisenhart & Towne, 2003). Practitioners who take issue with the science and research behind EBPs are not necessarily antievidence or antiresearch. They have generally expressed more openness to other types of research, including "effectiveness" research conducted in clinical settings with clients similar to those they see in treatment (Nelson & Steele, 2007). Proponents of EBPs, however, believe their methods are superior because they base their policies on evidence rather than on ideology, fads, marketing, or politics (Slavin, 2008). This claim carries the assumption that non-EBP practitioners are basing their ideas on those qualities rather than on a different value system or philosophy.

Much of this debate revolves around the term scientifically-based research and how this term is defined (Eisenhart & Towne, 2005). The philosopher Edmund Husserl believed evidence to be so important because it is the component that separates science from all other activities (Husserl, 2014). One possible definition of evidence, for example, is a reason for belief or for taking action (Goodman, 2004). Clearly, defining EBPs is difficult and complex, and they should be challenged on the simple grounds that defining evidence through the philosophy of science has itself been problematized. Additionally, EBPs maintain an "antiquated" understanding of evidence: they treat evidence as a set of facts and believe scientific beliefs stand or fall in light of these
facts. Yet evidence is not self-apparent or given, even when gathered in the most objective setting (Goldenberg, 2005). Other problems with evidence exist as well, such as its appearance of being so objective that findings are assumed to be reliable and generalizable (Silk et al., 2010):

"Evidence," we learn, is far from neutral; "truth" and "evidence" are always overdetermined by the social, historical and political contexts that lend them their currency and power. These inform our methodologies, and we know that these methodologies, in turn, directly and indirectly shape the object of inquiry. (Murray et al., 2007, p. 92)

Critics of EBPs argue there is more to a field than a simplified version of evidence, claiming that EBPs frequently promote a misunderstanding of the scientific process and that basic science is essential to understanding a subject. Evidence and basic science need each other; they work together rather than separately, and the supposed debate between them makes no sense (Sehon & Stanley, 2003). Opponents of EBPs are concerned about the underlying positivistic philosophy surrounding EBPs and its reliance on a very simplified definition of evidence (Eisenhart & Towne, 2005), and only a specific type of evidence at that: quantitative data (Goldenberg, 2005; Morrison, 2001). But this reliance is flawed, as even fields as controversial as homeopathy base their treatments on evidence and argue they use the best evidence available (Sehon & Stanley, 2003). Furthermore, nothing in the standard definition of EBPs explains why examples such as homeopathy are wrong or fail to be evidence-based (Sehon & Stanley, 2003), other than to suggest that one version of evidence is superior.

A common criticism of EBPs rests on the difference between teaching students how to think and teaching them what to think: critics charge that EBPs confuse the two, teaching students what to think while claiming to teach them how to think. EBP supporters deny this, claiming they teach students only how to think (Gibbs & Gambrill, 2002); thus, according to their supporters, EBPs are a guide for thinking, and for thinking about how decisions should be made (Springer, 2007). The process of EBPs and of teaching EBPs includes posing clear and well-structured questions, demonstrating transparency and ethical values in decision making, using critical thinking and problem-solving skills, using Socratic questioning, challenging assumptions, and applying knowledge when making practice and policy decisions (Gambrill, 2006): "EBP is indeed complex, and thus the teaching of EBP is inherently messy and fraught with challenges" (Springer, 2007, p. 620). Eisenhart and Towne (2005) captured the popular EBP belief that only quantitative data counts as true science:

Some researchers have worried, with good reason, given the current political climate, that important ways of knowing, sometimes referred to as "nonscientific," (e.g., philosophical, historical, cultural, affective, postmodern, and practice-oriented), will be forgotten in the rush to achieve scientifically-based research. (p. 31)

As previously mentioned, one of the most blatant examples of the effects of EBPs exists in education and the modern emphasis on standardized testing. Definitions of education research first began appearing in law with the passage of the Reading Excellence Act in 1999. The main goal of this law was to ensure that funds dedicated to the program would not be misused, a goal accomplished by tying funding to the use of the most current and "best" evidence.
This idea required educators to create a definition of scientific research (Eisenhart & Towne, 2005).

However, ideas about evidence within academia are dangerous, naïve, and lack common sense. Current ideas about evidence suggest seeing is believing, but fail to account for multiple other factors that determine what is believed, such as norms, standards, political dominance, and who establishes the questions that can be asked regarding truth (Silk et al., 2010). This idea that seeing is believing is certainly understandable, helping to explain some of the modern popularity of EBPs. However, according to many philosophers, seeing and believing are not necessarily related (Silk et al., 2010) and do not make for infallible science. Furthermore, the scientific process, and therefore evidence, is much more complex than basing all knowledge on the visible. What about the theoretical? What about what cannot be seen directly but can be seen indirectly? What if scientists have evidence for something but cannot prove its effectiveness or certainty, such as the existence and importance of emotion? What if a case is so subjective that nothing factual can be demonstrated, and yet people understand its importance, such as the preference for a certain sound in music? Are these questions unimportant just because they cannot be directly measured?

During the Bush era, evidence came to be considered valid if it advanced the public agenda and appeared factual, but also if it supported a particular regime or person. Evidence that contradicts the political agenda under EBP is seen as flawed when, in reality, it could be incredibly valid. Government manipulation of science has also occurred in recent years, with the government funding science less and less; this lack of funding has contributed to the idea that good science must be clearly objective and quantifiable (Silk et al., 2010).
Additionally, advocates of EBP seem to hold a philosophy of evidence under which nothing that existed before their theory became dominant counts as evidence (Sehon & Stanley, 2003), as if the basic idea of evidence came into existence around the 1990s and all science before that time does not count as science. In actuality, this stance follows logically, given the
EBP idea that evidence generally only counts if it is quantitative. This idea amounts to a hijacking of the term evidence, and the consequences of this hijacking have made it nearly impossible to have meaningful discussions about what constitutes good therapy (Shedler, 2017). In 2001, the National Research Council stated that scientific research is based on principles rather than methods, reporting that these principles remain the same regardless of the methods being used and apply across many subjects, including the natural sciences, social sciences, education, medicine, and agriculture (Eisenhart & Towne, 2005). But this EBP reliance on quantitative data is flawed. First, evidence can become fixed and support an individual's way of viewing the world, rather than adjusting one's view of the world to the evidence (Murray et al., 2007, 2008). Second, much evidence is simply dismissed under EBP, or at the very least, ranked lower in importance (Silk et al., 2010).

In summary, the definition of evidence has never been completely established and likely never will be; it is a complex and philosophical subject with differing definitions depending on the source. EBP proponents define a hierarchy of evidence, claiming randomized controlled trials (RCTs) and other quantitative methods as superior forms of knowledge. This hierarchy diminishes the amount of knowledge available and limits the capacity of science. A simplified definition of evidence leads to a simplified form of science, lacking in complexity.

A History of Evidence-Based Practices

The history of EBPs, much like their definitions, is somewhat mired in controversy regarding their origins, with some saying EBPs began in Canada and others saying they began with American businesses (Upshur, 2003). However, the path EBPs took over time is more clearly understood, with EBPs invading other areas of research as businesses began touting EBPs as the solution to ensuring money was not being wasted.
The first area affected by EBPs was medicine, where EBPs were promoted on the grounds that a standardized approach would decrease
variation and therefore improve care (Upshur, 2003). This idea is a great example of the importance of philosophy and reasoning in science, for it could be argued that decreases in variation do not necessarily equate to improvements in care. In time, psychology, economics, social work, and even education succumbed to the temptation of the EBP mindset. Initial indications reveal that the primary motive behind EBPs was financial; however, other motivations for adopting this ideology exist as well. For starters, EBPs seem to exist only in democratic societies, and some have suggested that EBPs may serve as a method of taming public involvement and opinion (Berkwits, 1998). In more modern times, EBPs increasingly share goals with the marketing of products and advertisements (Gambrill, 2010). Examples include the use of zen sand gardens to evoke peace of mind when marketing credit cards, or companies like Target tracking consumer spending habits to target advertising and coupons toward certain individuals. Others suggest that, like any scientific form of inquiry, EBPs emerged naturally, evolving from social circumstances and necessity (Berkwits, 1998). These individuals claim EBPs are a reaction to other decision-making models in which decisions were based on consensus, anecdotal experience, or tradition (Gambrill, 2010). In their view, EBPs are simply part of a long line of scientific reasoning that will continue to grow with the times, serve only to increase scientific accuracy, and are scientifically superior to all other philosophical methods of knowledge (Goldenberg, 2005). However, as will soon be seen, this belief does not fit with modern definitions of EBPs.
Supporters of EBPs believe that using quantitative evidence as the basis of decision making can only improve a given situation because, as previously noted, standardized care is assumed to decrease inconsistencies and thereby improve the quality of care (Upshur, 2003). Supporters of this argument are seriously mistaken and frequently polarize the EBP debate (Sehon &
Stanley, 2003). The underlying assumptions behind this belief are rarely, if ever, stated: why would decreasing inconsistencies improve care? Of all the studies reviewed for this research project, not one study supporting EBPs offered any evidence or reasoning for the assumption that standardized care automatically improves care. Is it not possible that inconsistencies could help improve care? Why does standardization automatically improve it? This problem is compounded by the fact that humans are vulnerable to such logical traps as confirmation bias, wishful thinking, and hindsight bias (Gambrill, 2010). These biases explain common human behaviors such as seeking evidence that confirms what people already believe, interpreting evidence in faulty and unrealistic ways, and only completely understanding evidence or a situation in hindsight rather than interpreting results logically. According to logical positivists, if scientists use quantitative evidence as the basis of their decisions, they are less likely to succumb to these traps. EBP supporters also believe this process will decrease the gaps between research and practice, maximize opportunities to help others, and avoid doing harm in the process (Gambrill, 2010). In reality, these ideas have not come to fruition in practice, and education is a prime example of an area where EBPs may actually be doing more harm than good: there are still gaps between practice and research, plenty of people and students are not receiving the help they need, and it can easily be argued that plenty of harm is being done in the process. In fact, many have claimed that EBPs are not about science at all, but rather about money and politics (Eisenhart & Towne, 2003). The modern idea of evidence-based practice actually evolved from business, government, and management models of the mid-20th century.
However, the philosophy of logical positivism began much earlier, in 1920s Vienna. The Vienna Circle, as it came to be known, rejected knowledge that could not be scientifically verified. This was the critical change in the philosophy of science that would eventually develop into what are now called EBPs
(Goldenberg, 2005). The circle dismissed subjects such as metaphysics and ethics, meaning that other considerations, such as definitions, were considered unimportant to the scientific process (Goldenberg, 2005; The Editors of Encyclopedia Britannica, 2015). The only form of decision making that matters is what is scientifically proven to work. Clinical psychologists have identified themselves as scientist-practitioners since the Boulder conference of 1949, but the marriage of clinical work and science has always been troubled (Addis et al., 1999), with some psychologists believing the field was not scientific enough. EBPs originated in medicine based on concerns that practitioners were not receiving access to new research findings as they emerged, thus making their treatments out of date (Gibbs & Gambrill, 2002). EBP advocates declared a "new paradigm" in medicine in 1992, one in which evidence from health care research is the basis for decisions made in the health care system (Haynes, 2002). The term evidence-based comes from the medical world and the work of the Cochrane Collaboration, a large database of empirically validated studies (Morrison, 2001). Many fields of study began following this idea, establishing their own evidence bases along similar lines. However, as with many things in EBPs, there is some confusion surrounding the origin of these ideas. Depending upon the source, EBPs originated in various ways, with some claiming evidence-based medicine originated from clinical epidemiology at McMaster University, in both concept and name (Haynes, 2002). Still others suggest a different beginning: the term evidence-based comes from medicine and first gained attention in the 1990s, initially as a call for critical thinking, used to call attention to the logical fallacy in the "we've always done it this way" (Shedler, 2017, p. 319) justification and mentality. Yet there are a few things most origin stories agree
upon: EBPs began with third-party services such as case reviewers, insurance panels, HMO administrators, government agencies, and so forth (Addis et al., 1999). They began in either medicine or finance, depending upon the perspective, and began because of either a lack of scientific rigor or a demand for fiscal accountability:

Over the last 20 years practitioners have begun to feel the direct effects of economic and accountability contingencies, and will continue to do so. More than ever, clinicians must answer questions about why they're treating particular clients, why they're choosing particular interventions, and whether such choices are justified economically in terms of outcomes. (Addis et al., 1999, p. 431)

These changes were justified for multiple reasons. A significant shift in the mental health workforce began about 30 years ago with the advent of managed behavioral care. Since that time, the marketplace has become highly competitive and a variety of disciplines have applied similar methodologies (Campbell et al., 2018). Also, starting in the 1970s, it became apparent that political will, rather than scientific evidence or "what works," was driving policy decisions (Morrison, 2001). Social workers had been calling for the application of research findings for decades (Gibbs & Gambrill, 2002), and many practitioners were concerned about the relevance of some research and how it related to clinical work (Nelson & Steele, 2007). Research texts had long advocated for the use of current best practices in clinical work because research has suggested that practice-related research findings are rarely used. No texts within social work contain chapters on how to formulate well-crafted and detailed questions for scientific reasoning (Gibbs & Gambrill, 2002), another reason EBPs grew in popularity. On the financial side, EBPs began with new-managerialism, in which benign leaders observed the professional work of staff at a distance (Davies, 2003). This developing way of running a business viewed dispassionate, nonemotional, and distant observation of employees as a good
thing, assuring everyone that this style of work would improve business for everyone, including workers. This idea is one of the assumptions upon which EBPs are based. The later management models in the business and government worlds sprang into existence because of financial concerns and the desire to hold others accountable. This idea is itself problematic, as truth and evidence really have more to do with logic and rationality than with financial responsibility. Still, other fields began adopting these principles of accountability and the need for quantifiable evidence after their benefactors and financial supporters requested evidence that their money was not being wasted or spent unwisely. To this day, many of those who support EBPs and write the guidelines have financial reasons to do so, such as ties to pharmaceutical companies in the medical field (Gambrill, 2010). Perceived flaws in people, including an inability to live up to the expectations of others, were paramount in starting new-managerialism, with these flaws used as an excuse to support the downfall of the old system and the rise of a new system in which these issues would not be problematic (Davies, 2003). These policies and ideas soon transferred to many fields, including, most prominently, education. In the United Kingdom (UK), the Secretary of State for Education said social science should determine what is effective and this effectiveness should determine policy (Morrison, 2001), paving the way for EBP supporters to claim their methods were the key to more effective science. One key component of this reported increase in science in the U.S. education system was the push for standardized testing, which increased drastically after the publication of A Nation at Risk in 1983 (Perrone, 1991).
No Child Left Behind, passed in 2001, required the use of programs based on scientifically-based research (Slavin, 2008), giving the appearance of concern about scientific thinking. Despite these reported changes
in standardized testing, however, the format has remained the same (Amrein & Berliner, 2002; Bhattacharyya et al., 2013). In 2003, the President's New Freedom Commission on Mental Health released its report on mental health services in the United States and identified the dissemination of EBPs as a priority, defining EBPs as "a range of treatments and services whose effectiveness is well documented" (Nelson & Steele, 2007, p. 319). As shall be seen in a later section of this dissertation, this phrase is just one of many definitions of EBP. The push for testing continued, with supporters convincing others that the key to improving the quality of education in the US lay in the ability to test students and then use that information to change the system. Within this education system, EBPs have been pushed as a way to define science, policies, and practices (Silk et al., 2010). Motivations for this push include supporting a back-to-basics teaching mentality as well as casting public schools in a negative light to support privatization of the school systems (Kohn, 2000). While this idea sounds well intended, and was, the reality did not work out as well, with very few changes leading to the improvements Americans were promised. The emphasis on standardized testing has only increased:

Americans like tests so much that they have gradually structured society around them. For many children, the testing phenomenon begins in their early school career. Head Start children are tested before entry into the four-year-old's program. Kindergartners are tested to see if they are ready to begin school. Before entering first grade, students are already on the testing roller coaster. Why are we so obsessed with standardized tests? (Bhattacharyya et al., 2013, p. 637)

However, as Americans have suffered from too much standardized testing, businesses have reaped the rewards, with testing companies generating more than a quarter of a billion
dollars in revenue in 1999, often capitalizing further on their tests by selling study materials as well. Few countries administer standardized tests before the age of 16, and students in those countries perform as well as, if not better than, U.S. students (Kohn, 2000). As previously mentioned, a large part of the problem seems to be how the information gained from these tests is used, with government officials who advocate their use deciding where to place students and how to allocate services (Kohn, 2000). Still, one has only to look at history to understand that, as well-intentioned as people's beliefs in using EBPs for improvement are, high-stakes testing has historically failed wherever it has been tried (Kohn, 2000). Despite the agreed-upon invalidity of standardized testing amongst teachers, the practice continues because of society's desire to measure education (Bhattacharyya et al., 2013). Furthermore, this continuation goes against one of the primary principles of EBPs: test something, and then discontinue or alter whatever program you are testing if it is not working. As the evidence has clearly shown that high-stakes testing does not lead to the improvements proponents have promised, perhaps EBPs' own philosophy should suggest that it does not work and should be discontinued. This emphasis on high-stakes testing has led to other problems in the classroom as well, one of the largest complaints being the trend of teachers teaching to the test. In fact, test scores often increase after teachers begin teaching to the test, but then drop again once a different test is used. Administrators often use these examples to claim schools are failing, even though the same pattern happens frequently across many different school districts (Kohn, 2000). A large part of EBP use is the emphasis on using research databases to make educated decisions based on what works.
However, many teachers are unfamiliar with electronic databases and do not know how to formulate answerable questions. Additionally, they may never have
heard of the idea of a well-formulated question. They can also be put off by computers and the complexity of these large research databases (Gibbs & Gambrill, 2002). Additional changes have occurred as a result of EBPs. For example, in recent years, knowledge production has become privatized rather than remaining public information. Peer review boards have also become more privatized since the Bush era, as evidenced by the increase of board members from private industries (Silk et al., 2010). Funding initiatives have been created to facilitate the dissemination of EBPs into applied settings (Nelson & Steele, 2007). There is an aggressive push to define science, policies, and programs through evidence-based programs (Silk et al., 2010). Evidence-based reform has prospered in part because of the promotion it has received in federal policies (Slavin, 2008). However, despite the increased push for EBPs in fields such as psychology and education, they appear to be used less than EBP supporters desire, and previous attempts to push empirical treatments on practitioners have been met with controversy (Nelson & Steele, 2007). Changes in health care reimbursement have contributed to the current culture, causing anxiety, fear, and anger in many practicing clinicians (Addis et al., 1999). Research has shown that individual practitioners vary considerably in how open they are to integrating EBPs and that practitioner use differs according to clinical setting (Aarons, 2004), with EBP use highest in hospital and university settings (Nelson & Steele, 2007). To summarize the current state of EBPs, at least in education, there is currently a fear that political forces will define what counts as science so narrowly that it will create a vast number of problems, including extreme difficulty for many researchers (Eisenhart & Towne, 2003).
Indeed, the trend in scientifically conceived research has been toward a narrow definition of science used to justify how money is spent (Eisenhart & Towne, 2003). This narrow definition is shaped by neoliberal, neoconservative, and neoscientist forces (Silk et al., 2010).

Indeed, it is possible to make sense of the evidence-based practice movement within the framework of neoliberalism (Davies, 2003), and modern science has become shaped by neoliberal values (Silk et al., 2010). This new corporate model of research redefines the importance of knowledge and undermines other aspects of science, such as theorizing, pedagogy, and meaning; these forms are no longer seen as a public good (Silk et al., 2010). These business practices have invaded the world of education as well, especially universities. Teaching patterns that lack high standards of evidence are both natural and commonplace within the academic guild and agency settings (Springer, 2007). Colleges have increasingly become, and see themselves as, training grounds for corporations (Silk et al., 2010). This new corporate reliance on colleges has removed the ethical and moral bases for the purpose and meaning of higher education:

Higher education is increasingly being redefined in market terms as corporate culture subsumes democratic culture, and crucial learning is replaced by an instrumental logic that celebrates the imperatives of the bottom line, downsizing, and outsourcing. In this formulation, academics become obsessed with grant writing, fund raising, and capital improvements, and higher education. (Silk et al., 2010, p. 112)

In the field of psychology, the changes brought about have been stark. APA guidelines require all graduate training programs in psychology to teach EBPs (APA, 2019). Additionally, the EBP movement is seeing excitement and support that has not been experienced since logical positivism flourished between the 1920s and 1950s (Goldenberg, 2005). There have been extensive discussions about EBPs amongst clinical researchers (Addis et al., 1999), and the mental health field has moved toward widespread use of EBPs in recent years (Nelson & Steele, 2007).
A drastic increase in the amount of quantitative research being conducted has been witnessed (Jorgensen & Ward-Steinman, 2015). This increase is largely due to a rejection of earlier research, perceived weaknesses of qualitative research, and the use of some decidedly unscientific methods. The use of nonscientific research and interventions led many psychologists to search for more scientific treatments. A stark example of this lack of scientifically based treatment was the use of rebirthing procedures, which eventually led to the death of a child during a therapy session in Colorado (Josefson, 2001). Examples such as this one have been used by EBP proponents to justify the need for their methods. Support for EBPs also grew from a recognition that evidence gathered through observational studies and clinical experience is subject to a number of flaws, including expectation effects and placebo effects. However, EBPs have yet to fix these flaws (Sehon & Stanley, 2003). The APA's policy supporting EBPs closely resembles the Institute of Medicine's definition of EBP and reflects a growing belief that treatment needs to be based on evidence (Nelson & Steele, 2007). EBPs are also informed by financial interests. In therapy, insurance companies have an incentive to pay for the cheapest and shortest service, and EBPs, with their focus on reducing symptomatology alone, fit this bill nicely (Shedler, 2017). However, the current EBP atmosphere lacks the desire to be socially responsible that marked earlier periods of social productivity (Davies, 2003). Some supporters claim a middle ground, saying EBPs and other methodologies should be combined (Sehon & Stanley, 2003), while others argue a culture shift is needed to move from opinion-based practice to evidence-based practice (Springer, 2007).
The desire for accountability illustrates one of the dangers of EBPs. At face value, EBPs have many positive attributes, such as a lack of ambiguity, tangible evidence, the appearance of accountability, and the appearance of promoting values such as fairness and hard work. These attributes stand in stark contrast to a more philosophical system of evidence that acknowledges ambiguity, embraces the undefinable, accepts a lack of accountability as unavoidable, and embraces the intangible. While this more philosophical view of evidence may be more accurate, it is also more difficult for many to embrace and understand.

In summation, there are diverging ideas regarding where and how EBPs began, but most believe they began in the financial world as a way to justify spending and hold people accountable. There is also certainty that they developed from a logical-positivist philosophy, which originated with the Vienna Circle in the 1920s. They have grown in popularity since the 1990s in a variety of fields, the most important being medicine. Many fields followed medicine, such as education, psychology, social work, and nursing. Today, EBPs are the dominant orientation throughout many of these fields, with the APA mandating that all accredited clinical programs in psychology teach this methodology. They are also prevalent throughout the education system, as evidenced by the vast increase in standardized testing. However, research has also shown that these treatments have not delivered the change they promise and are not without controversy.

The Role of Philosophy

The movement towards EBPs has contained some controversy (Nelson & Steele, 2007). The notion that the EBP movement is a huge departure from business as usual is simply untrue: "It is unhelpful and misleading to portray EBM as a paradigm shift and that such portrayals have polarized the debate in an unfortunate way" (Sehon & Stanley, 2003, p. 2). It is a mistake, both philosophically and practically, to view EBPs as a scientific paradigm shift (Sehon & Stanley, 2003). EBP proponents proclaim that EBP is a new way to make decisions (Gibbs & Gambrill, 2000).
This idea is not true. However, one of the most significant changes that EBPs have brought about is a shift in philosophy. This shift is somewhat ironic, as many EBP supporters believe that philosophy is not a necessary part of the scientific method and is unnecessary when using EBPs. For example, "It is fair to say that not very much attention was paid by the originators of EBM to the philosophy of science" (Haynes, 2002, p. 5). Haynes (2002) also stated, "It is also easy to agree with Alan Chalmers that most scientists and EBM advocates are ignorant of the philosophy of science and give little or no thought to constructing a philosophical basis for their activities" (p. 5). Furthermore, EBP supporters do not take a stance on many philosophical problems, such as the difference between epidemiological studies and individualistic practice (Haynes, 2002). As will be seen, this belief that philosophy is not important to science is not accurate. The history of the philosophy of science over the past half century has rested primarily on two grounds: arguments about how observation is theory laden, and the idea that theories are underdetermined by data (Goldenberg, 2005). The EBP philosophy provides a mode of inquiry not available through a positivistic framework (Stambaugh & Dyson, 2016). The claim by EBP supporters that philosophy need not be attended to is itself mistaken, as philosophy naturally exists in the study of any subject or field; it is one of the basic foundations of science. The scientific method itself is a philosophy. For EBP proponents to proclaim themselves more scientific while dismissing philosophy as unimportant makes no sense. Furthermore, EBPs are based on a logical-positivist philosophy, which holds that only empirically validated knowledge, gained from experiment, counts as knowledge.
All other methodologies are not only inferior, they are ignored (Editors of Encyclopedia Britannica, 2015). Thus, if research or an intervention is based on something other than a randomized controlled trial (RCT), it does not count as research or science and is not considered worthy of concern. However, this belief is, in and of itself, a philosophy, and a very important one at that, for it is the primary influence in much of today's culture, determining to a large extent how professionals practice medicine, education, finance, and psychology. And yet, as will be seen, this philosophy is inherently problematic. This statement is not just my opinion; rather, it is an idea with wide appeal. Many authors have called the ideas underlying EBPs "absurd" and "irrational" (Couto, 1998). EBP raises many philosophical questions, including epistemological issues (Haynes, 2002). This section explores some of these ideas.

The first problematic idea in the EBP philosophy is the oversimplification of the scientific process. As described, EBPs endorse a very simplified philosophy: if you cannot measure it and experiment with it, ignore it. Supporters believe in the well-intentioned idea of basing treatment on what has been proven to work and be effective, but forget that what counts as proven and effective is somewhat subjective. Even their hierarchy of evidence is a subjective belief. Evidence left standing after a scientific inquiry is assumed to be factual and is then given the title of scientific evidence (Goldenberg, 2005). This idea is appealing to many. However, it is a huge oversimplification of science, human behavior, communication, and research. EBPs endorse simple directives as being accurate and scientific, as compared to complex processes, and there is a need to return to a more complex understanding of science (Silk et al., 2010). EBP supporters, to their credit, believe in the value of objectivity. When applied to education, this quest for objectivity may lead us to measure students on the basis of far less important criteria. However, there is no reason to suggest humanity
will ever achieve this perfect ideal of absolute objectivity, as many proponents of quantitative data suggest (Kohn, 2000). As an example, consider the following: "A reading intervention programme may raise students' measured achievements in reading but may provoke an intense dislike of books or reading for pleasure" (Morrison, 2001, p. 78). This example demonstrates the complex nature of science. There is never just one method, one reason, or a perfect explanation. Reasoning cannot be narrowed down to a simple and easily definable answer, which one could argue EBPs attempt to do. The answer is never certain, and more work always remains to be done. Science is a complex and frustratingly imperfect process. It will be shown later in this dissertation just how flawed some research can be. For example, the field of statistics currently has no answer for what to do when some outcome measures are statistically significant while others are not (Slavin, 2008). This does not mean people should discount statistics as flawed or ignore statistical reasoning, but it does remind society that science is neither perfect nor simple: "All innovations have advantages and disadvantages: EBP is no exception" (Gibbs & Gambrill, 2002, p. 453). Many people agree about the value and importance of science but are unable to agree on what science is (Eisenhart & Towne, 2003). Opponents of EBPs are not antiscience. Rather, they work towards a more inclusive form of science, free from the constraints and limiting factors of EBPs (Silk et al., 2010). EBPs are important and have their place, but they also have their limitations (Morrison, 2001). As stated in a previous section, part of the problem with EBPs is their strict definition of what counts as scientific.
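One way flawed research accumulates can be illustrated concretely: even when a true effect is zero, conventional significance testing guarantees a steady trickle of "positive" findings. The following simulation is a hypothetical sketch; the function name and all numbers are invented for illustration and are not drawn from any cited study.

```python
import math
import random

def simulate_null_studies(n_studies=10_000, n_per_group=10, seed=42):
    """Simulate many small two-group studies in which the true effect is zero.

    Each 'study' draws both groups from the SAME normal distribution and
    runs a two-sided z-test on the difference of means (sigma known = 1).
    Returns the fraction of studies that nonetheless reach p < .05.
    """
    rng = random.Random(seed)
    significant = 0
    for _ in range(n_studies):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        diff = sum(a) / n_per_group - sum(b) / n_per_group
        z = diff / math.sqrt(2 / n_per_group)          # SE of a mean difference
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        if p < 0.05:
            significant += 1
    return significant / n_studies

if __name__ == "__main__":
    # Roughly 5% of truly null studies come out "significant" by construction.
    print(f"False-positive rate: {simulate_null_studies():.3f}")
```

Because roughly 5% of null studies cross the conventional threshold, a field that runs thousands of small studies will accumulate hundreds of apparent confirmations of effects that do not exist, echoing Slavin's (2008) point that an effect "supported" by many underpowered studies may simply be supported by many flawed studies.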
One of the problems with this strict definition is that, according to the Duhem-Quine thesis in the philosophy of science, any given body of evidence can be used to support multiple different and even contradictory theories (Goldenberg, 2005): "Very similar conditions can produce very dissimilar outcomes. Regularity and conformity break down to irregularity and diversity, and effects are not straightforward, continuous functions of causes" (Morrison, 2001, p. 77). Frequently, what works for one group of people may not work for another. Corporate restructuring is a good example: it may work well for executives and those at the top, but the workers would hardly describe it as something that works for them (Morrison, 2001). The idea that science can be a simple and concrete system is simply flawed. In the words of Descartes (Descartes & Cress, 1999), nothing is certain.

There are other flaws in statistical reasoning as well. For example, a large number of studies supporting an effect does not mean the effect exists; many such studies are conducted with too small a sample size, meaning the effect is simply supported by many flawed studies (Slavin, 2008). It is also widely recognized that it is impossible to hold a variable constant in a dynamic, fluid, unique, and evolving situation. This is universally recognized in healthcare, and yet it seems to carry no weight when considering EBPs (Morrison, 2001). This recognition is itself a philosophy, not purely science; if the EBP philosophy acknowledged it, EBPs might look very different. In general, this oversimplification, and defining science in such a limited manner, produces a number of problems. Furthermore, to claim there is only one way to solve a problem is dangerous and potentially a form of indoctrination (Morrison, 2001). Again, the beliefs behind EBPs are philosophical; to claim otherwise is to misunderstand EBPs, the scientific method, or philosophy.

Not only is EBPs' strict definition of evidence problematic; theories are also never determined completely by evidence: "The appeal to the authority of evidence
that characterizes EBPs does not increase objectivity but rather obscures the subjective elements that inescapably enter all forms of human inquiry" (Goldenberg, 2005, p. 2621). EBPs cannot be completely accurate because they fail to ask where an individual's beliefs and values, including scientific beliefs, come from (Goldenberg, 2005). EBPs do not take into account ontological, axiological, epistemological, methodological, and political approaches to science and reasoning (Silk et al., 2010), failing to consider the complexities of human and social behaviors. They may be suited to the hard sciences but are far too simplistic for human behavior (Morrison, 2001). Observations are not "data" or "givens" and should not be seen as such. They are interpretations, the result of subjective experience and thinking: "What someone perceives is not independent of one's beliefs and expectations… Even within the confines of strictly evidence-based practice, empirical evidence undergoes numerous subjective interpretations" (Goldenberg, 2005, p. 2624). This demonstrates the philosophical position that there is no such thing as a truly objective viewpoint (Silk et al., 2010). In the process of deciding what counts as evidence and which theories and ideas are superior, science uses extraempirical criteria, which are subject to preferences, biases, whims, beliefs, and social fancies (Goldenberg, 2005). Subjective preference is part of the scientific process, which remains contingent on human thought, itself prone to error and full of subjective thinking. Objective questions with a single, clear-cut, precise answer bear no resemblance to problems in the real world and require no use of logic or reasoning (Kohn, 2000). People choose which theories and ideas to believe through a subjective process: "The idea of unambiguous objects of perception is a myth" (Goldenberg, 2005, p. 2624).
Regarding standardized testing, the evidence is clear: standardized tests, despite what their proponents claim, are not objective (Kohn, 2000). They are objective only in terms of scoring, and only because they are machine tallied, meaning they are not necessarily objective in other areas of measurement (Bhattacharyya et al., 2013). The type of test, content, structure, number of test items, choice of response, instructions given, test results, and use of these results are all subjective (Boser, 2000). Interpretation, writing the questions, creating the answers, test inclusion and selection, and defining the definitions all contribute to the lack of objectivity around standardized tests (Kohn, 2000). These statements all support the idea that the process by which EBPs come to exist is faulty. EBPs simplify the process by eliminating important factors such as culture, context, and other subjects of knowledge (Goldenberg, 2005). For example, people's perceptions of education are constructed in part by their formal education experience (Salazar & Randle, 2015). Furthermore, no direct evidence exists to show that the assumption of EBP superiority is true, and there is currently no consensus on what constitutes "best" evidence (Haynes, 2002). Some EBPs can improve practices, but they should not be understood as automatically able to determine the best practice in all situations (Goldenberg, 2005). As shall be explored further in a later section of this dissertation, EBPs rest on an assumption: it is assumed that EBPs are beneficial (Addis et al., 1999). Science needs to question this assumed status of evidence, and EBPs should be challenged based on the way evidence has been defined in the philosophy of science (Goldenberg, 2005). EBPs are seen as a way to exist between competing philosophies (Goldenberg, 2005), but it is not a tenable solution to expect all field placements to require, support, and use EBPs, especially in certain settings, such as rural ones (Springer, 2007). Furthermore, overly broad descriptions of EBPs have done much to decrease the philosophical debate (Charleton & Miles, 1998; Sehon & Stanley, 2003; Tonelli, 1998).
EBP proponents believe that all methods or ways of understanding are not equally valuable when finding the truth (Gibbs & Gambrill, 2002). While the supporters may be correct, it does not follow that RCTs are perfect and all other research is flawed and should be ignored. That is flawed thinking, and it demonstrates how a philosophy can fundamentally alter the way science is practiced. Additionally, this idea is a belief and demonstrates that EBPs are not fully quantitative. How, then, does science judge what is scientific, or in the words of EBP supporters, what works? It is unclear. The idea of what works is still a matter of judgment and implies a degree of subjectivity, regardless of how empirical the research or study has been: "What works is a value statement, not simply an empirical statement" (Morrison, 2001, p. 77). Furthermore, deciding what works requires more than simply establishing causality (Morrison, 2001). For example, it is important to agree on basic ground rules when evaluating effectiveness, as "educational policy cannot support the adoption of proven programs if there is no agreement on what they are" (Slavin, 2008, p. 7). Furthermore, "it is essential not only that conclusions be correct but also that the process by which they are arrived at be open, consistent, impartial, and in accordance with both science and common sense" (Slavin, 2008, p. 7). The claim that the EBP philosophy is flawed and unscientific might lead one to ask: Why doesn't science just fix the philosophy, or fix the problems with the scientific method? The idea certainly sounds attractive. Sadly, methodologies used to correct one issue often create problems elsewhere; for instance, recomputing analyses for RQEs to control for clustering invariably leads to nonsignificant results (Slavin, 2008). In other words, science is not a black-and-white, dichotomous method. It is complex.
Changing one variable often changes another, and unfortunately there is no perfect system. While the desire to find a
simplified method of research is understandable, the desire does not make it accurate. Science is, and always will be, complex, and it needs to embrace this complexity rather than run from it.

Another flaw in the EBP philosophy is the idea that RCTs are superior to other forms of research. This idea is seriously flawed. To be very clear: RCTs are indeed superior to many other forms of research in many ways. However, they are also far from perfect and contain inherent flaws, as all research does. The belief in their superiority, which is a philosophy, does not make them perfect. There may be circumstances in which observational studies are a better choice than clinical trials (Benson & Hartz, 2000). The imperfections inherent in RCT research are explored further in a later section.

The EBP philosophy also fails to answer two basic questions: What is EBP, and how ought we practice? It tries to answer each question by appeal to the other, defining EBP as whatever leads to the best practices, and thus fails to further the debate or provide a substantive philosophy or reasoning (Sehon & Stanley, 2003). There is an urgent need to reach consensus on the definition of EBPs (Springer, 2007).

There are other philosophical problems with EBPs. They support a noncritical scientific outlook and expect interpreters to accept results as absolute scientific fact without scrutiny (Silk et al., 2010). In other words, their belief system leads to a lack of critical thought and decision making. Furthermore, there are problems with science's current interpretational beliefs, such as the idea that multiple interpreters are superior. Logically speaking, the number of interpreters has nothing to do with an interpretation's accuracy (Silk et al., 2010). Yet the EBP philosophy holds that review by multiple peers and journals automatically qualifies research as superior.
EBP still has many issues to address: agreement on what constitutes best evidence, how to generalize beyond research, how practitioners can accurately and efficiently communicate research findings to clients, how policies can be made to include patient considerations, and moral issues such as distributive justice and individual autonomy (Haynes, 2002).

Perhaps the most obvious example of the problems with the EBP philosophy exists within standardized testing in the public education system. These problems, as shall be seen, are numerous. For starters, standardized testing oversimplifies the nature of student scores (Schrag, 2000). Once again, this practice rests upon a philosophy: EBP supporters believe that student skills can easily be measured and compared accurately using standardized tests. This belief is a philosophical idea, and it is what led supporters to push for standardized testing, demonstrating that EBPs are indeed based upon a philosophy, just like every other scientific theory or idea. American culture is obsessed with attaching numbers and value to everything, making standardized testing an attractive option. However, multiple-choice standardized answers simply do not measure the same skills as other types of testing, such as free-form responses and essay writing (Kohn, 2000). The counterargument for standardized testing is that it is the only way to assess and understand the success of ideas (Bhattacharyya et al., 2013). One problem with this philosophy is that classrooms and other naturalistic settings do not offer the reductionistic, analyzable, and antiseptic conditions needed for laboratory-like experiments. They are dynamic environments that demonstrate the complexity of the social sciences (Morrison, 2001). Furthermore, many proclaim that standardized testing does education more harm than good (Rebora, 2012). Indeed, there is much evidence that
standardized testing is deleterious (Perrone, 1991). Again, the idea that standardization is the only way to accurately measure knowledge is a philosophy, and the research does not bear this praise of standardized testing out. High scores on early testing have not been strongly correlated with high scores on testing later in life (Perrone, 1991). Many successful adults would fail the standardized tests given to high school students; they may not even pass the tests given to fourth graders (Kohn, 2000). These tests are most questionable for young children, whose growth is very uneven, and they do not accurately represent children's intelligence, achievement, or competence. Furthermore, there is no evidence that universal screening of young children is beneficial (Perrone, 1991). For example:

There are countless cases of magnificent student writers whose work was labeled as not proficient because it did not follow the step-by-step sequence of what the test scorers (many of whom are not educators, by the way) think good expository writing should look like. (Kohn, 2000, p. 7)

In addition, further issues exist. For example, timed testing is problematic because it places an emphasis on students' speed rather than their knowledge (Kohn, 2000). The reasoning EBP supporters offer for why EBPs are necessary is also problematic: "The implication here would seem to be that teachers and students could be doing a better job but have, for some reason, chosen not to do so and need only be bribed or threatened into improvement" (Kohn, 2000, p. 4). This question of motivation does not seem to be considered by many EBP supporters and was rarely mentioned in the articles studied for this research, and yet it is a huge logical hole in any argument offered for standardized testing.
Further holes in the logic behind testing exist as well, such as the fact that a number on a test is not an accurate indicator of gained knowledge (Sternberg, 1998). It has been known for
some time that testing itself is not a perfect medium for measuring performance; some intelligent students routinely perform poorly on tests (Wolf et al., 1992). Standardized tests have not been an accurate indicator of college performance, and several hundred colleges no longer require these tests for admission (Kohn, 2000). Standardized tests are one of many tools available to evaluate students, but when used alone, they do not represent an accurate picture of student performance (Bhattacharyya et al., 2013). Teachers have suggested that students be tested in multiple ways, with class projects, self-reflections, research assignments, demonstrations, and displays existing alongside standardized tests. Furthermore, there is much evidence that scores on standardized tests are not true measures of a student's intelligence (Bhattacharyya et al., 2013). Again, the idea that standardized tests are the best measure of student intelligence and performance is a philosophy and a belief, not science. When it comes to education, the current system applies a different standard of assessment than most people encounter in other areas of their lives:

When someone is going to judge the quality of your work, whether you are a sculptor, a lifeguard, a financial analyst, a professor, a housekeeper, a refrigerator repairman, a reporter, or a therapist, how common is it for you to be given a secret pencil-and-paper exam? (Kohn, 2000, p. 4)

This statement demonstrates the importance of philosophy. Supporters of EBPs believe testing is important in education but do not emphasize testing elsewhere; consequently, people are not tested nearly as much outside of education. This observation is a direct indicator of the influence philosophy has on the system.

The facts stated in the previous paragraph again demonstrate the role philosophy plays in EBPs. EBP supporters believe standardized testing accurately measures student abilities
and that these tests are the best way to do so. This idea is not science but philosophy, and supporters would not endorse standardized testing if this philosophy were not part of their belief system. One good argument against this philosophy concerns not just the problems of standardized testing itself, but the idea that resources should be allocated equally to all students, an idea that is common sense in other countries but is not applied in the US. Instead, good schools are rewarded and poorer-performing schools are punished, when it should be the opposite: the money should go where it is needed. Even if standardized testing were more successful, important decisions, such as which schools to fund, which programs to support, and which teaching methods are successful, should not be based on a single standardized test (Kohn, 2000). Again, the idea that poorly performing schools should be punished and denied funds is a philosophy. Furthermore, standardized tests are used against teachers when students receive low scores (Bhattacharyya et al., 2013), and bribes and punishments exist within the education system to ensure teachers focus on test scores (Kohn, 2000). However, rewards and punishments can never produce more than temporary compliance (Kohn, 2000). Classroom resources, parental support, and professional development all contribute to test outcomes (Koegh et al., 2012), yet teachers are held solely liable for a student's education, even though this liability can never be measured and education is a complex process involving many people and many responsibilities (Bhattacharyya et al., 2013). Few parents and teachers believe a single score is an accurate representation of a child's skills, and teachers generally report gaining very little knowledge about their students' intelligence or abilities from standardized tests (Perrone, 1991).
The people who work closest with kids, such as teachers, are the ones who best understand the stakes of high-stakes testing. Support for testing grows as one moves away from those closest to the students; government employees are more
likely to support standardized testing than someone who works closely with students (Kohn, 2000). These ideas demonstrate not only the emphasis on philosophy but also how a philosophy can be mistaken and inaccurate. Supporters of EBPs obviously believe in the importance of standardized testing, but as demonstrated, standardized testing is not the perfect system they believe it to be. It is flawed, as is any philosophy that holds it to be accurate. This demonstrates the importance of having theory intermesh with science. Education often finds itself running from one teaching fad to the next with little thought or careful reflection; these attempts seldom bear fruit and rarely lead to the changes they promise (Duke, 2005). But school systems take their support for standardized testing even further, including in their philosophies the belief that test results should be used to allocate resources. Furthermore, their belief in punishing schools that do not perform well is also a philosophy. These philosophies are problematic because parents expect their children to succeed and not to be at the bottom of the test scores (Bhattacharyya et al., 2013). They also expect school systems to improve their children's test scores (Haney et al., 1993). School rankings and test scores are released to parents and local media outlets, and schools that have not achieved satisfactory scores risk loss of reputation and harsher consequences, including possible closure (Koegh et al., 2012). As with any ranking system, someone has to be at the top and someone has to be at the bottom (Bhattacharyya et al., 2013). With high standards, not everyone will meet the standard; yet schools expect all their students to do so (Sambar, 2001). This belief is a logically flawed philosophy.
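The arithmetic of ranking makes this flaw concrete: in a norm-referenced system, a "bottom" group exists no matter how well everyone performs. The sketch below uses invented scores purely for illustration.

```python
import statistics

def percentile_ranks(scores):
    """Rank scores against one another, as norm-referenced tests do.

    Returns each student's percentile rank: the share of peers scoring below.
    """
    n = len(scores)
    return [sum(other < s for other in scores) / n * 100 for s in scores]

if __name__ == "__main__":
    # Hypothetical class in which EVERY student has mastered the material:
    # all raw scores fall between 90 and 99 out of 100.
    scores = [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
    ranks = percentile_ranks(scores)
    bottom_quartile = [s for s, r in zip(scores, ranks) if r < 25]
    print("Mean raw score:", statistics.mean(scores))    # 94.5
    print("Labeled 'bottom quartile':", bottom_quartile)  # [90, 91, 92]
```

Even with every raw score above 90%, percentile ranking still labels a quarter of the class the "bottom," which is the sense in which a ranking system manufactures the appearance of failure regardless of actual achievement.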
Because every distribution of scores automatically contains a bottom, it will always appear that some students are failing, even if they are doing well and have gained the knowledge they need in a subject (Kohn, 2000). The validity of these standardized
tests is only questioned when the results are low; when the results are high, no one questions the validity of the results (Strauss, 2006). These philosophical ideas are also flawed. Research on the success of standardized testing shows that countries like Finland, and others that routinely score high in educational rankings, have decided to trust teachers to determine what is best for students' learning. These countries have also placed a higher value on education in their societies (Bhattacharyya et al., 2013). Despite low testing scores, the US ranks fifth in money spent per student among developed countries (Ryan, 2013), demonstrating that higher spending on education is not predictive of higher quality education or higher test scores (Ryan, 2013). In other words, US testing philosophies are flawed. They were great ideas, but the research has clearly shown these philosophies have not become reality.

Like EBPs in other fields, today's educational problems have become oversimplified: test scores are too low and need to be made higher, and yet higher test scores do not signal higher quality learning. Research has shown that improved test scores have nothing to do with an improvement in education but rather reflect student and teacher familiarity with the tests (Kohn, 2000). More importantly, research has clearly shown that standardized tests do not improve the quality of education students receive. They tend to encourage a quick form of learning based on memorization rather than a sophisticated understanding of a subject and all its intricacies. The whole point of questions on standardized tests is the quick finding of the answer rather than the demonstration of interpretation, critical thinking, and reflective skills: "It is easier to get agreement on whether a semicolon has been used correctly than on whether an essay represents clear thinking" (Kohn, 2000, p. 3).
Students are not allowed to explain their reasoning for their answers on standardized tests; correct answers do not necessarily indicate an understanding of the issues, and incorrect
answers do not suggest the absence of understanding. In fact, most standardized tests "punish" test-takers who think logically. Students who score well on these tests often understand very little of the subject on which they are being tested. One study showed that about 41% of students answered a standardized question correctly through memorization, while only 11% really understood the subject but did not answer the question correctly because of minor errors (Kohn, 2000). Furthermore, all this emphasis on testing ignores other factors, such as biology, social status, inherent intelligence, and intrinsic and extrinsic motivation. The current education system is built around extrinsic motivation, and yet research has shown extrinsic motivation does not produce lasting results (Kohn, 2000). When people find an activity interesting and enjoyable, they will engage in the activity for the sake of the work itself; even Albert Einstein described curiosity as more important than intellect. People with no intrinsic motivation will often work only as hard as necessary to receive the extrinsic reward (Amabile, 1998). Test scores are easy to calculate and understand compared to ideas such as intrinsic motivation and intellectual exploration (Kohn, 2000). These same ideas have been studied within business organizations, where research has found that intrinsic motivation can be readily increased: factors such as challenge, freedom, resources, work-group features, organizational support, and supervisory encouragement can all be used within organizational systems to increase intrinsic motivation. Extrinsic motivators such as money or other financial rewards do not necessarily make a person excited or happy about their job, and this type of reward can decrease motivation if the person finds the work dull (Amabile, 1998). 
People motivated by intrinsic curiosity are generally more motivated than those who are given external rewards (Amabile, 1996).


The emphasis on more standardized testing and EBPs within fields such as finance, medicine, government, and psychology comes from the idea that more accountability is needed (Kohn, 2000). This idea is also a philosophy, and the previously mentioned research demonstrates that this philosophy is flawed. Talk of "excellence," "raising the bar," "tougher standards," and "higher expectations" really means nothing more than higher test scores. "Objective data" show that barriers to achievement, such as "racism, poverty, fear of crime, low teacher salaries, inadequate facilities, and language barriers," exist. However, the very people promoting the efficacy of objective data and standardized testing often dismiss these barriers as excuses. Research has repeatedly indicated that the amount of poverty within a given community is the primary predictor of test scores, meaning teacher quality and education have very little to do with performance on standardized tests and that these tests are not a measure of school effectiveness. Tests can also be biased because they emphasize knowledge not everyone may have, or that only those from a privileged background are likely to study (Kohn, 2000). "When someone emphasizes the importance of 'higher expectations' for minority children, we might reply, 'Higher expectations to do what? Bubble-in more ovals correctly on a test—or pursue engaging projects that promote sophisticated thinking?'" (Kohn, 2000, p. 4). The commonly used phrase "high standards" implies that not everyone will be able to meet these standards, and yet people continue expecting the opposite. Important matters such as resources have been dismissed based on the idea that if schools just "raise the bar," performance will improve. Again, this belief is a flawed philosophy. The testing process is flawed: it uses flawed tests and processes to hold people accountable for that over which they have very little control. 
Furthermore, society has created a system where schools base their incentives on a standard of outcome rather than a standard of opportunity (Kohn, 2000).


These ideas are not without their reasons. Supporting standardized testing allows people such as politicians, school administrators, and teachers to claim they care about school and student performance and improvement (Kohn, 2000), and oftentimes they do care, even if their ideas are flawed. However, as was explained previously, the further away one gets from teaching students, the less one understands about the testing process and teaching, and the more one supports standardized testing. Politicians are not teachers, and neither are researchers, so this idea may be another flaw in the philosophy of EBPs. Additionally, EBP supporters have presented a narrative that "accountability" in the form of standardized tests will fix all problems. As has been seen, this idea does not stand up to scrutiny. Slogans such as "tougher standards," "accountability," and "raising the bar" allow politicians to appear concerned about improving children's quality of education (Kohn, 2000), but the improvements they have promised have not materialized. According to Senator Paul Wellstone,

Making students accountable for test scores works well on a bumper sticker, and it allows many politicians to look good by saying that they will not tolerate failure. But it represents a hollow promise. Far from improving education, high-stakes testing makes a major retreat from fairness, from accuracy, from quality, and from equity. (Kohn, 2000, p. 3)

Again, all these beliefs rest upon the idea that standardized testing will fix society's education system and can accurately measure student performance. This idea is a philosophy, not a fact proven by science. In 2001, the push for standardized testing increased after the passage of the No Child Left Behind (NCLB) law, championed by the Bush Administration. This law is also based on philosophies:
the support of standardized testing, but also the invalidation of the law of averages. NCLB assumed that all students should be taught in the same manner, advance through class together, and progress at the same pace. This idea completely ignores the fact that some human beings have greater intellectual capabilities than others. The emphasis on standardized testing and the NCLB theory suggests all students learn at the same pace or equally well, which is demonstrably false. The idea that all students should learn the same material in the same fashion leads to one-size-fits-all education and also ignores advice from experts (Kohn, 2000). Furthermore, NCLB only required student achievement in three areas: reading, math, and science (Bhattacharyya et al., 2013), furthering the philosophy that only these subjects are important and that more qualitative subjects such as writing, social studies, history, and the arts are not. The need to assess writing demonstrates a major failing of standardized tests and quantitative data in general: perhaps not everything is testable (Perrone, 1991). According to the National Center for Fair and Open Testing (2007), standardized tests are only able to evaluate certain skills. Among the qualities standardized tests cannot measure are creativity, initiative, imagination, curiosity, conceptual thinking, irony, effort, commitment, judgment, nuance, ethical reflection, and goodwill. It is difficult, if not impossible, to devise a test in which young children can communicate the depth of their understanding of a particular subject (Kohn, 2000). 
Factors that will develop into good learning habits and life skills are not emphasized in the current system because of society's emphasis on standardized testing, teaching to these tests, and the pressure to score well on them (Bhattacharyya et al., 2013). Additionally, most specialists condemn the practice of testing children younger than 8 or 9 through standardized measures (Kohn, 2000). Once again, this idea is a philosophy, not science. Standardized tests are also norm-referenced tests, comparing students against each other, and were therefore never meant to evaluate the quality of a student's education. Norm-referenced tests (NRTs) were not intended to measure how well a student understands or knows a certain subject; they were only meant to compare students to other students. NRTs also contribute to the already "pathological" competitiveness American culture embraces, pitting students against one another rather than making sure each student has gained the knowledge they need. NRTs are not likely to include many questions every student can answer correctly, because they need a certain percentage of students to answer incorrectly. They can thus lead us to believe a school is failing when the school may actually be teaching students what they need to know (Kohn, 2000). The fact that tests are always performed individually is also problematic, encouraging our society to view independence, not cooperation, as the ultimate goal (Kohn, 2000). Some test questions contain words many children have never even seen. Authentic performance-based assessments should be used instead of standardized tests; these assessments measure a child's understanding and growth in a subject and render standardized testing moot (Perrone, 1991). One of the other legacies of the Bush regime is an attempt to define science through a strict epistemological and methodological fundamentalism, what is now being called "Bush science" (Silk et al., 2010). According to Bush science, only policies, programs, ideas, principles, and science with the appropriate evidence are allowed to advance. 
This process has defined science as only that which embraces one very particular methodology, which many refer to as methodological fundamentalism. In other words, this definition is the final say on science
and anything outside this definition does not count as science. This methodological dogmatism is once again a seriously flawed philosophy (Silk et al., 2010): "Methodological dogmatism would be a serious mistake" (Addis et al., 1999, p. 433). The methodological fundamentalism seen in EBP closely resembles moral fundamentalism. In this line of thought, there is only one source of truth; believers in the methodology have access to truth that others do not; authority comes from only one source; access to this truth means correctness; contradictory evidence is dismissed; persuasion can only happen with arguments consistent with these beliefs; people who disagree lack correct insight; those who do not agree with these principles are avoided; beliefs are supported via decree; and other views are curtailed (Silk et al., 2010). Along these lines, supporters of standardized testing promote a certain type of methodological fundamentalism. But as already seen, standardized testing is not a perfect methodology. For example, standardized tests encourage rote memorization as the primary learning mechanism and rarely require the ability to reason or use critical thinking skills. Studies have shown a statistical correlation between high standardized test scores and shallow thinking, or a lack of ability to reason: students who are more focused on memorization than understanding perform better on standardized tests. "Time spent preparing students to succeed on such tests is time that could have been spent helping them become critical, creative, curious thinkers" (Kohn, 2000, p. 23). In fact, standardized tests are positively correlated with a style of learning often considered inferior and "shallow," such as simply memorizing answers, not understanding concepts, and not thinking critically. Their use and deep thinking are inversely related (Kohn, 2000). 
For example, a recent study of standardized math tests found only 3% of questions required the use of high-level conceptual knowledge. And only 5% required the use of high-level
thinking skills such as problem solving and reasoning. These tests required memorization of procedures rather than conceptual reasoning. They also measured knowledge of "arbitrary conventions," such as writing a ratio the correct way or describing mathematical symbols, but did not measure the capacity for logical thinking. Tests of science and social studies amount to little more than the memorization of facts; they are not designed to measure whether a student has developed the ability to think like an artist or historian (Kohn, 2000). Common Core has become the new idea in education following NCLB. However, Common Core does nothing to cure this country of its over-reliance on standardized testing; it can be described as blind faith in the standardization of tests and curriculum. Some parents view the Common Core standards as simply too difficult for students to achieve, noting that the standards are occasionally up to three grade levels above students' current levels (Boser et al., 2016). These arguments regarding what constitutes evidence are all based on philosophical thought, which explains why philosophy is so important to any argument about EBPs. If one values only what can be seen and ignores all other information, one will likely identify with the logical positivist movement and agree with many of the actions promoted by EBPs. And yet, agreement does not make one correct. Regardless of whether one agrees with the central tenets of logical positivism, it can be argued that many of the tenets do not make logical sense, a huge irony given the emphasis logical positivists claim their system places on logic. The importance of philosophy, regardless of the type of philosophical stance taken, cannot be overstated. Yet the prevailing attitude treats philosophy as something less: not equal to, or on par with, the other sciences, even though it is the foundation of all that science does in any field of study and the first step in the scientific process.


Important elements in the philosophy of science that are often lacking in modern interpretation are context, culture, communication, and language, with each of these dimensions having as much influence on the results of scientific research as the data themselves. When research seems to be lacking something, or some element seems to be missing, the missing element can often be traced back to one of these components. Often the research will appear to miss the big picture, and this problem is generally a misunderstanding of the role context plays in research (Berkwits, 1998). In music education, for example, such a misunderstanding looks like a researcher claiming that because a study demonstrates the effectiveness of teaching students a particular scale pattern, studying creativity in music is no longer necessary. Research has indicated that many of philosophy's most important concepts have yet to be taught at a basic level and are not being integrated into research. Ideas such as trial designs and methods directly applicable to certain settings, hypothesis testing with flexible error rates, and Bayesian clinical trials have yet to be integrated from philosophy into research. An example that demonstrates the importance of philosophy to the process of science is the definition of scientific research itself: What exactly is meant by scientifically based research? Are evidence-based approaches really the only way to approach meaningful study (Eisenhart & Towne, 2003)? Are there other possible approaches? Some suggest different philosophical approaches, such as W. V. Quine's metaphor of the web of belief (Sehon & Stanley, 2003). If this term itself cannot be defined, how can science even purport to define what constitutes evidence? In recent decades, logical positivist views have eclipsed almost every other form of knowledge in American society, and this eclipse has largely been caused by financial concerns. 
While concerns about finances are not in and of themselves necessarily bad, the problem occurs when people lose focus on the inclusion of other aspects of the “human” sciences as part of the scientific process (Berkwits, 1998). When subjects such as history, philosophy, critical theory,
art, and other cultural studies are no longer included in society's definition of the sciences, or no longer taken into consideration as possible factors when interpreting data, science suffers. And suffer science has. According to many sources, over the past 30 years U.S. students' critical thinking skills have declined, understanding of basic scientific facts has decreased, and countries with a more well-rounded liberal arts education, such as Canada and Finland, are currently leading the U.S. in test scores (National Center for Education Statistics, 2009). Further philosophical problems with EBPs exist. For example, how one categorizes a particular study largely depends upon what the evidence determines (Jorgensen & Ward-Steinman, 2015), and an article can be considered non-EBP if its authors fail to describe how randomization was carried out (Gibbs & Gambrill, 2000). Critics of anti-EBP arguments proclaim these arguments occur because of a lack of knowledge and understanding about EBP, or unfamiliarity with its usage (Gibbs & Gambrill, 2002). However, this idea does not hold water either. As has been demonstrated, standardized testing does not work, and there are serious flaws in logical positivist thinking. The EBP philosophy is overly simplified; there are problems with the definitions, problems with the name, a lack of agreement upon what constitutes evidence, and inconsistencies in the practice. How are these ideas based on misunderstandings? Is it not far more likely that there are flaws within the EBP system and that supporters of EBPs either have not seen, or are unwilling to see, these flaws?

Accelerating the transfer of research findings into clinical practice is often based on incomplete evidence from selected groups of people, who experience a marginal benefit from an expensive technology, raising issues of the generalizability of the findings, and increasing problems with how many and who can afford the new innovations in care. (Haynes, 2002, p. 1)


A further issue with EBPs is the chasm between research and practice: EBPs are hardly ever followed by practitioners, and they require changes in how people find and use research (Gibbs & Gambrill, 2002). This gap between what EBPs prescribe and what is practiced in the field appears consistent across a variety of disciplines. There is also a large divide between applied and basic research in science, and both have their place and positive characteristics (Haynes, 2002). For example, some clinicians believe empirically based research cannot shed light on clinical practice because the two disciplines are so different (Addis et al., 1999). Part of this problem is related to EBPs trying to bypass a basic science model; there is a consensus that a model of science allowing for the complex integration of many fields is needed (Goldenberg, 2005). The best applied research studies are generally based on established findings of basic science. Applied research and basic research cannot exist without each other; they are two sides of the same coin. Application of research findings remains problematic, as rollouts of these applications have not been as successful as EBP proponents had hoped, and it is still not completely understood how to apply research from groups to an individual (Haynes, 2002):

The task for researchers, then, is to retain consistency, yet this could cause the researcher and the practitioner to come into conflict with each other if it gives rise to ethical, interpersonal, administrative or management problems on the part of the practitioners. What if the practitioners object to the experimental conditions of the research. (Morrison, 2001, p. 
73)

Practitioners are more likely to incorporate research if they view it as directly relevant to their own work (Nelson & Steele, 2007), but interventions supported by EBP advocates can have negative and adverse consequences. These interventions could be beneficial in the
short term but could end up causing more problems in the long term. These types of issues have not been studied, which goes against EBP philosophy (Haynes, 2002). Part of the problem is that EBPs routinely ignore long-term consequences in favor of short-term gains. EBP advocates claim that patients' values need to be incorporated into any decision, but this claim is just an attempt to bypass the fact that scientists currently do not know the long-term effects of EBPs (Haynes, 2002). Furthermore, critics claim the oversimplification of the EBP system does not allow for the flexibility treatments and situations require:

The circumstances in which patients are treated can vary widely from location to location (including locations that are right across the street from one another): the resources, expertise and patients are often quite different and the same research evidence cannot be applied in the same way, or not at all. (Haynes, 2002, p. 5)

Practitioners need to recognize that decisions are often made without recognizing their impacts for the client: "Our understanding of how to determine what patients want is primitive" (Haynes, 2002, p. 5). Furthermore, EBP research methods are pluralistic and expanding, driven by a desire to answer a broad range of research questions (Haynes, 2002) and requiring a large effort to complete successfully: "Efficient access to databases requires constructing well-formulated questions related to client concerns" (Gibbs & Gambrill, 2002, p. 461). Haynes continued,

The is of EBM is that science is producing new and better ways of predicting, detecting and treating disease than were imaginable at the middle of the past century. The ought of the EBM movement, which annoys many practitioners, and would perturb Hume and his followers, is that EBM advocates believe that clinicians ought to be responsible for
keeping up to date with these advances and ought to be prepared to offer them to patients. Thus, EBM has taken on the tones of a moral imperative. But it is premature to get very preachy about the ought of EBM, not that this has stopped EBM's more ardent advocates. (Haynes, 2002, p. 6)

But prediction is not a perfect answer. Often EBPs can only predict different outcomes, as with standardized testing, where different students receive different scores even though they have had the same teacher, same classroom, and same education. This prediction is imperfect and does not explain the reasons behind these scores (Morrison, 2001). This issue returns the discussion to the oversimplification of science. EBPs bypass a basic understanding of science and do not emphasize a basic scientific understanding of a subject. They do not address the role of basic science in research and practice, except to say that basic science is not enough for guidance and use in practice (Haynes, 2002). If medicine were to consider disease as a complex entity involving not only science and a biological understanding of the human body but also social and political factors, it would likely lead to a new scientific method (Goldenberg, 2005). Haynes (2002) stated this problem well, saying that EBM bypasses important knowledge, such as a basic, fundamental foundation built on knowledge of the ways diseases work. He continued, "EBP must continue to evolve, however, to address a number of issues including scientific underpinnings, moral stance and consequences, and practical matters of dissemination and application" (Haynes, 2002, p. 1). Another way to express this idea: in the field of medicine, doctors are expected to cure patients using research, as compared to curing patients based on their understanding of the human body, disease, anatomical functioning, or any other factors. 
Would not a combination of all these factors produce a more thorough and helpful philosophy regarding treatment? In psychology, clinicians are expected to use research to determine the correct treatment as
compared to using their understanding of human behavior, effective treatment, and experience in therapy. This type of practice is problematic and clearly demonstrates how EBPs are not ideal treatments. Should not all these factors be considered in treatment? Goldenberg (2005) described this problem well:

The post-positivist, feminist, and phenomenological philosophies of science that are examined in this paper contest the seemingly unproblematic nature of evidence that underlies EBM by emphasizing different features of the social nature of science. The appeal to the authority of evidence that characterizes evidence-based practices does not increase objectivity but rather obscures the subjective elements that inescapably enter all forms of human inquiry. The seeming common sense of EBM only occurs because of its assumed removal from the social context of medical practice. In the current age where the institutional power of medicine is suspect, a model that represents biomedicine as politically disinterested or merely scientific should give pause. (p. 2621)

This method of ignoring basic science further demonstrates the lack of understanding regarding the scientific process. In many cases, tested empirical solutions are only used until their basic mechanisms and the science behind them are understood; only then are they considered part of the scientific canon:

In the basic science that underpins traditional medicine, the workings of the human body and basic mechanisms of disease can be discovered by observations of an individual human or organism using instruments that are objective and bias free. These mechanisms then can be discerned by inductive logic and known to a certainty. 
By contrast, applied research deals with more complex phenomena than disease mechanisms; often relies on experimentation rather than (just) observation; recognizes that observations of complex phenomena can be biased and takes measures to reduce bias; has groups of patients as the
basis of observation; uses probabilities to judge truth, rather than expecting certainty; and uses deductive and Bayesian logic to progress. (Haynes, 2002, p. 5)

This reliance on "evidence" also creates another problem: it effectively makes evidence the ultimate authority and expects all practitioners to follow suit. Individual authority has been replaced with the idea of collective authority, or the idea that a group of experts knows better than the individual clinician. This appeal to authority is problematic and somewhat authoritarian or tyrannical, although this result was not originally intended (Haynes, 2002). EBP proponents proclaim there is no way to win the argument for EBPs without a universal standard of truth, saying the objectives of science are different from the objectives of medicine:

There is a continuing tension here between the consequentialist, population-based origins of epidemiology (doing the greatest good for the greatest number), which generates most of the best evidence that EBM advocates hope to convince practitioners and patients to pay attention to, and the deontological or individualistic approach of medicine, doing the greatest good for the individual patient, which practitioners are sworn to do. (Haynes, 2002, p. 5)

The expectation of EBM that doctors should keep abreast of evidence from "certain types of health care" research raises many issues. First, what is 'valid' health care research? Second, what are the 'best' findings from this research? Third, when is health care research 'ready' for application? Fourth and fifth, to whom and how does one apply valid and ready evidence from health care research? EBM provides a set of increasingly sophisticated tools for addressing these questions, but, at present, the result is only partly as good as EBM advocates hope it will become. (Haynes, 2002, p. 5)


Other scientists have seconded this idea:

Because EBM is largely an effort to manage the unruly social world in which medicine is practiced via objective scientific procedure, the movement appears to be the latest expression of "scientism," modernity's rationalist dream that science can produce the knowledge required to emancipate us from scarcity, ignorance, and error. However, such efforts tend to disguise political interest in the authority of so-called "scientific evidence." The configuration of policy considerations and clinical standards into questions of evidence conveniently transforms normative questions into technical ones. Political issues are not resolved, however, but merely disguised in technocratic consideration and language. Thus the goals of medicine and other normative considerations lie just below the surface of these evidentiary questions, and evidence becomes an instrument of, rather than a substitute for, politics. (Goldenberg, 2005, p. 7)

This section began by proclaiming that EBPs are not the paradigm shift they claim to be. This claim is one of the more interesting philosophical problems presented by EBPs: they are not the shift in evidence they claim to be, largely because evidence has been used since the invention of science millennia ago. A more accurate statement would be that EBPs are an attempt to redefine science and evidence. This redefinition is part of why their use of the term evidence is so problematic, as evidence was used before EBPs and continues to be used outside of them. In psychology, EBPs generally align themselves with certain orientations. However, non-EBP orientations also use evidence, so how are these orientations not also evidence-based? In medicine, few physicians would claim they do not use evidence in their decisions, including before the EBP movement became popular. 
Goldenberg (2005) stated: “By framing the problems of biomedicine as problems of (lack of) evidence exclusively, the assumptions, methods, and practices of scientific medicine go unquestioned” (p. 2630). To phrase this quote in another
manner, no one questions EBPs because of their use of the term evidence; people assume EBPs must be good because the name itself claims the treatments are based on evidence. Another issue is that EBP use can often have little to do with their philosophy or understanding. In psychology, practitioner training, openness towards EBPs, and attitudes towards EBPs were significant predictors of EBP use (Nelson & Steele, 2007). A practitioner's level of education and clinical experience also largely influenced clinicians' attitudes towards using EBPs (Aarons, 2004). Nelson and Steele (2007) conducted a study attempting to predict practitioner use and endorsement of EBP. They found practitioner training, the culture of the clinical setting, and the practitioner's attitude towards treatment research were predictive of EBP use, whereas the practitioner's academic degree and years of clinical experience were not. EBP use is also often associated with manualized treatments in the field of psychology, meaning practitioners are expected to follow treatments presented in a manualized form. This use is also a problem philosophically. First, the reasons for using manualized treatments largely follow the philosophies presented in this section of the dissertation, which, as has been shown, contain comprehensive flaws: "Ideological arguments that practitioners should be motivated to utilize manualized treatments are insufficient and, at times, counterproductive. The fact is that most don't and there are reasons why" (Addis et al., 1999, p. 431). These manualized treatments can create many problems for clinicians, including pressure to perform a treatment they have not experienced as effective and fear of losing employment if they do not perform said treatment (Dalal, 2018). 
The fact that 80%–90% of patients with panic disorder improve after cognitive behavioral therapy can be incredibly intimidating and problematic for a clinician (Addis et al., 1999). EBPs are more likely to come from outside forces than from clinicians themselves. They are largely based on the belief that we must hold
clinicians accountable, and many clinicians have seen them as problematic and as interfering with practice. As a result, psychologists' reactions to manual-based treatment are often driven by external issues, such as problems with third-party payers, rather than by the actual merit of the treatments (Addis et al., 1999). Practitioner attitudes are also likely to be influenced by their definition of manualized treatment: Those who believe manualized treatment is a protocol imposed by a third party are less likely to endorse EBPs (Addis et al., 1999). This again demonstrates the power of philosophies, showing how a person who believes the EBP philosophy will practice and endorse their use. A common criticism of EBP methodologies is that they do not recognize that research methods need to be tailored to the research question. Different methods of research produce different results and gather the best evidence depending upon the question and design (Goldenberg, 2005). It is time to end the use of dichotomous language when describing research, such as qualitative versus quantitative, and recognize that all research designs are good for the scientific questions they seek to answer (Springer, 2007). For example, opponents of EBPs are not against evidence or RCTs; rather, they recognize that RCTs have their place and seek to use them for a specific purpose. There are limitations to where and when RCTs can be used (Morrison, 2001), and all forms of evidence and research have a role to play in helping the social sciences advance (Springer, 2007). There is a greater need for critical and interpretive methodologies within the sciences (Silk et al., 2010). Regarding the qualitative data EBP proponents are occasionally against, problems such as bias and selective memory do not mean a methodology has no value or is unscientific (Sehon & Stanley, 2003). As previously mentioned, all forms of research are valid, as they all have their strengths and weaknesses. 
They are all flawed and imperfect. This emphasis on the quantitative
purposefully ignores the more human parts of research, such as feelings, emotions, and experience. These phenomenological approaches may not be perfect either, but they also have knowledge to offer, as the technical way in which many CBT treatment manuals are written "may seem inconsistent with a more phenomenological exploration of a client’s feelings and thoughts" (Addis et al., 1999, p. 434). In summation, philosophy plays an important role in the use of EBPs, even though many EBP supporters claim philosophy is irrelevant to their use. Nowhere is this importance more obvious than in the use of standardized testing, which rests on the philosophy that using quantitative data to determine success and the allocation of funds will lead to improvement, despite evidence to the contrary. Furthermore, EBPs themselves are based on a philosophy that defines good science as doing what works and then defines what works primarily in terms of quantitative data and reasoning. Finally, EBPs are based upon a logical positivist philosophy, which claims that only that which can be observed can be known.

Problems with Logical Positivism

As previously mentioned, EBPs are strictly positivist in their philosophical nature (Goldenberg, 2005), and there are serious problems with this philosophy. They are based upon a doctrine of logical positivism, a view containing an almost religious faith in the human ability to observe and analyze in an unbiased manner (Silk et al., 2010). Positivism can be summarized as the belief that science is not only the best mechanism by which to study the natural world, but also the best mechanism by which to inform the natural world and social affairs (Jorgensen & Ward-Steinman, 2015). Logical positivism only recognizes scientifically verifiable propositions as valuable. 
Postpositivist thinking has seriously undermined these philosophies since the height of their popularity, and current philosophical beliefs are incompatible with logical positivism
(Goldenberg, 2005). In other words, philosophy has long since moved past positivism, yet many supporters continue believing in positivism's ideals and values. Most current research beliefs exist within the realm of postpositivism rather than logical positivism, allowing for complexity, values, and social context, and extending into the social sciences rather than only the hard sciences (Eisenhart & Towne, 2005). Gibbs and Gambrill (2002) offered a rebuttal to the criticism that EBPs are closely associated with logical positivism: Many writers…confuse logical positivism and science as we know it today. The former approach to the development of knowledge with its inductive understructure was done in…decades ago. EBP was initiated in medicine. Its origin has nothing to do with behaviorism. Nor is evidence-based social work derived from behaviorism. It is true that there have been many rigorous appraisals of the effectiveness of cognitive-behavioral interventions. And it is true that some authors in this field make inflated claims of effectiveness. But this in no way means that evidence-based social work is derived from behaviorism. This assumption is certainly one of the oddest in the literature critiquing EBP. (p. 465) This statement is inconsistent with the history of EBP development. In psychology, EBPs are directly linked to behaviorism and the schools of thought that developed from it. Furthermore, science and philosophy are distinct: Logical positivism is a philosophy and a basis for scientific reasoning, as discussed in the previous section. It is likely that social work, like other fields, began embracing EBPs once psychology did, just as psychology followed medicine's embrace of EBPs.
When deconstructing the arguments regarding what constitutes evidence, the issues can be summarized as a difference in values: Logical positivism believes in observable verification (Goldenberg, 2005). The problem with this belief arises when people fail to understand the consequences of giving credence only to what can be seen. While the visible may be easier to study and makes certainty more likely, dismissing all statements and ideas simply because they are not visible, or directly observable, creates other logical problems. The first of these is that simply because something is not seen does not mean it does not exist. Consider the previously mentioned psychological problem of emotion. Emotions are self-evident; people know they have them because they experience and feel them every day. Likewise, they know how important emotions are to their everyday functioning because they experience their impact. However, logical positivism largely ignores emotion because it is difficult, or perhaps impossible, to study quantitatively, or rather, to observe. A belief in the observable should not mean that an idea or subject goes unstudied simply because observable and verifiable methods cannot reach it. No idea more thoroughly summarizes the divide between those favoring EBPs and subscribing to logical positivist thinking and those who are opposed. To continue the example, a logical positivist would be satisfied with ignoring emotion, while others, such as existential or systems thinkers, might feel the role emotions play is not only important but possibly essential. 
Until the gap in thinking among the many different philosophies regarding the definition of evidence is narrowed, the debate over what counts as evidence will rage on: By linking the assessment of validity to methodological concerns alone, it conveys a message that the foundations of evidence reside primarily in controlled, scientific
settings, independent of human action and interest. It thus reinforces the distance between research and practice it was developed to overcome. (Berkwits, 1998, p. 1544) There are those today who suggest that logical positivism has taken almost complete control of science, and there are those who argue against this notion (Gambrill, 2010). On the one hand, those who think it has taken over science point to how heavily EBPs are emphasized: Throughout education, testing and assessment have increased, and policy decisions are based on these assessments. On the other hand, those who deny this domination argue that EBPs are the wave of the future and actually need to be expanded and fully realized. Whether or not logical positivism has taken complete control of science, the influence it has had over the past decades, and the influence it currently holds, is far larger than that of any competing theory. There are even those who claim that EBPs are not based on logical positivism (Gambrill, 2010), saying there are no competing philosophical theories, but rather that science is science, based on evidence, and only evidence that can be observed counts. There are also supporters of EBPs who say EBP is a process related to the philosophy of Thomas Kuhn (Kuhn, 2015; Sehon & Stanley, 2003), but this proposal does not hold up: A large part of Kuhn's philosophy was the idea that science can never be completely objective, so the claim contains an inherent contradiction. While there is value to certain parts of the logical positivist argument, much like the evidence-based argument, there are parts of logical positivist thought that are incomplete. 
One of the most glaring inconsistencies is the assumption that a statement that can be studied empirically is more correct or valuable than one studied nonempirically. Once again, the previously mentioned example of emotion is appropriate. Emotion could be the most important facet of human behavior. That emotion is difficult or perhaps impossible to
measure does not logically disqualify it from importance. Emotion could very easily be more important to human behavior than thought, and yet it is largely ignored by logical positivists in the field of psychology. This is especially important when one remembers that processes such as logic and rationality are not necessarily considered empirical; verifiability is what is central to the application of EBPs. This inconsistency is exacerbated by the fact that most of the services EBPs compare themselves to, such as different psychological, medical, nursing, or teaching practices, are of unknown effectiveness (Gambrill, 2010). The inconsistency is related to the logical positivist fear of uncertainty. According to this viewpoint, uncertainty is the enemy of decision-making and one of the negatives EBPs seek to eliminate. EBPs are a method to handle uncertainty in an "honest and informed manner, sharing ignorance as well as knowledge" (Gambrill, 2010, p. 35). Indeed, many of the articles researched for this dissertation frequently discuss the need to handle uncertainty, even though an explanation for why uncertainty is so threatening is never provided. Some studies have even gone so far as to claim that EBPs provide proven methods, assuming that proof is even possible (Gambrill, 2010). This dilemma emphasizes the previously mentioned importance of philosophy to the scientific process. If proponents of EBPs cannot even articulate why uncertainty is so bad, can they make judgments about its worth? Moreover, while the idea of limiting uncertainty sounds good in theory, in practice it rarely happens. One of the issues with the logical positivist view of evidence is that it is so narrowly focused on evidence alone that it ignores philosophical issues, such as the role of critical thinking. 
This narrow view causes people to ignore other factors and important aspects of the scientific process, such as criticism, social factors, and the logical interpretation of evidence, all of which must be involved for a more complete and accurate understanding of science
(Berkwits, 1998). Likewise, logical positivism, and EBPs, encourage conformity. Under these practices, research finds its strength not in the depth of its argument or the solidity of its data, but rather in how well it agrees with other research in the same community (Berkwits, 1998). Once again, there is a logical fallacy at play here: Just because research agrees with other research does not make it correct. It is very possible for both studies to be inaccurate, or for one study to be accurate while the other is not. Consensus does not make research accurate. Likewise, scientific research is never independent of culture. The processes of defining, gathering, and interpreting, as well as the philosophical bases for understanding these processes as part of the scientific method, are always influenced by culture (Berkwits, 1998). For proof of this idea one has only to view a short stretch of history and the corresponding research. Currently, EBPs are in vogue, as is logical positivism, meaning most of the research being conducted conforms to these ideals. In the past, this has not been the case, and in the future, if the philosophy behind logical positivism is questioned, humanity will likely see a change in the type of research conducted and valued as well. One of the most overwhelming problems created by logical positivist philosophy is its emphasis on reductionism as a complete scientific method, problem-solving system, and means of reaching accurate conclusions. EBPs support a reductionistic model of knowledge (Goldenberg, 2005). While it is true that reductionism is an important part of the scientific method and can reach many correct conclusions, as in the field of medicine, an extreme reliance on reductionism as the only answer, and as superior to holism, creates real problems. 
Holism, the practice of examining a problem or issue as a whole rather than subdividing or simplifying it, also needs to be an important part of the scientific process. In fact, without the ability to apply what reductionism learns and recombine
this knowledge with other fields and areas of scientific research, the conclusions reached through scientific study will be inaccurate (Gambrill, 2010). When applied to subjects such as music and the creative arts, one can see how reductionism can be problematic and therefore of limited applicability to music. As EBPs are reductionistic in nature, the conclusion can easily be drawn that EBPs can, at most, play a limited role in a healthy music education. Music is a holistic experience, and it is impossible to separate music from that experience (Finney, 2002; Molenda, 2008). Other philosophical issues include a lack of questioning at the most basic level. This subtopic is related to the issue of making assumptions, which will be discussed later. Continuing the previously mentioned idea of conformity, logical positivism encourages people not to examine the foundations of their philosophy, such as the basis of objectivity (Berkwits, 1998). What qualifies as objective, and what makes objectivity superior to subjectivity? Is objectivity even achievable? The definition of EBPs continues to be problematic. Proponents tend to define them in an overly broad, vacuous way, muddying any possible working definition and making it almost impossible to disagree with what EBPs propose (Sehon & Stanley, 2003). This topic is explored more thoroughly in a later section. A second point of debate between logical positivists and those who do not subscribe to their philosophy is the idea of operational definitions: Logical positivists believe that any concept either can be defined or should be ignored, while those outside this camp hold that not every concept can or should be defined. Operational definitions are a way of naming ideas, actions, or physical items such that they can be empirically studied. 
For example, diseases and illnesses are defined by a set of symptoms, but a more abstract concept such as creativity is difficult to define,
and therefore is often ignored by logical positivists. Once again, the position holds that what cannot be defined should not be studied. This lack of study regarding creativity persists despite knowledge that creativity is incredibly important to human functioning and success (Csikszentmihalyi, 2013). But can everything, or at the very least certain things, actually be defined, especially from an operational point of view? A good example of this argument is the claim that skills such as empathy can be operationally defined. In the past, behavioral scientists have attempted to define empathy through actions such as a nod of the head, a verbal expression, or another visible gesture. But is this definition of empathy truly complete? Does it really capture what empathy is all about? What if empathy is not a characteristic that can be seen, but rather one that needs to be felt? Is it not possible for a person to display all the mentioned characteristics and actions of empathy and yet not be at all empathetic to another person's feelings? By the same token, is it not also possible for a person to display none of the characteristics of empathy and yet completely identify with another person's emotions? Is empathy even a term that can be defined by the visual rather than the emotional? Similar problems exist throughout the research spectrum. For example, how would one operationally define what it means to be a good musician? Is it even possible to define what makes a musician great, and if so, what would this definition entail? And if not, how does one go about teaching music without knowing what makes a musician great? What makes for a great artist? Who is the superior artist, Picasso or Dali? And why? Is it not possible this judgment is simply subjective, and yet still important? For example, it is possible to define whether one has played a group of notes, at a certain speed, with a certain rhythm and a certain volume, correctly. 
However, this overlooks many of the subtleties within music that are not possible to define. In this example,
while it is possible to say a student has performed an example in time or not, in reality the truth is much more complex. One instructor may believe the student's rhythm was metronomically exact; another may say the student was ahead of the beat and fault the imprecision; yet another may agree the student was ahead of the beat and applaud the ability to play that way, demonstrating once again the inherent subjectivity of scientific interpretation. One instructor may praise a warm and beautiful tone while another calls the same sound dark and muddy. There is no way to escape the subjective nature of the interpretive part of the scientific method. Likewise, just as there can be disagreements surrounding metronomic accuracy, there can be disagreements surrounding other musical aspects such as dynamics, tone production, and phrasing. One could argue that any aspect of music can be interpreted subjectively, which raises the question: How can a field of study based on something so subjective, and so creative, be studied objectively if evidence-based models are the primary zeitgeist? To summarize this section, the logical positivist philosophy contains some inherent problems. The most prominent is the idea that only what can be directly observed should be studied. This idea is so narrow that it ignores many important areas, such as the study of emotion or creativity. Furthermore, logical positivism assumes objectivity is not only possible but also an ideal to uphold. This philosophy looks upon subjectivity with disdain and ignores the fact that subjectivity is always a part of the scientific process and can never be eliminated.

Subjectivity and Science

This subject brings up another problem with EBPs: the fact that no subject, action, interpretation, study, or any other aspect of life is completely objective. 
Even the most objectively based science or study is still mired in interpretation, or hermeneutics, and people's
observations are theory laden as well (Kuhn, 2015). In fact, studies of medical journals have shown that the same data are often interpreted differently not only by readers but by authors as well (Berkwits, 1998); in addition, the theories humans form are never based solely on evidence (Duhem & Wiener, 1996; Quine, 2013). Furthermore, the idea that any claim can stand or fall based solely on what one believes to be factual is also flawed (Goldenberg, 2005). Consider one of the most basic and objective subjects of all, mathematics, which is not as objective as many believe. While most people take it as objective truth that 2 + 2 = 4, this truth holds only because of a set of previously agreed upon principles; if one were to reject those basic principles, the equation would no longer necessarily be true. If this is the case with one of the most basic and objective of all sciences, how can humanity be expected to let go of subjectivity in subjects that are far more subjective, like music and the human psyche? An applicable example of this situation is the use of EBPs in the field of human psychology. Promoters of EBPs in psychology believe that using standards and measurability to make their field more objective will increase the accuracy of its conclusions and improve the field. What this belief ignores, however, is the fact that all data and evidence require interpretation. While it can easily be argued that certain interpretations are more logical, accurate, and simply better than others, the fact remains that the interpretation of data is always somewhat subjective, regardless of the methods, standards, or criteria being used. EBPs do "not increase objectivity, but rather obscures the subjective elements that inescapably enter all forms of human inquiry" (Goldenberg, 2005, p. 2621). 
Thus, even when a psychologist uses a standardized assessment such as the MMPI-2, a standard instrument in the study of human personality, a subjective interpretation is still performed on the objective data the test provides. This interpretation occurs no matter the science or area of study, including music.
There are other issues to consider regarding objective interpretation as well. For example, research has shown that so-called objective interpretations often differ from one another (Gambrill, 2010). How does this difference happen? If the interpretations were truly objective, would they not agree with one another? The point is that even within the objective world and community, the methodology is not perfect. This is not to say objectivity is not useful, but rather that it is incomplete and imperfect, much like subjectivity. Subjectivity also enters at the beginning of the evidentiary process, when terms are defined operationally. While creating an operational definition is somewhat objective insofar as it attempts to produce a definition that is testable and measurable, there is still a fair amount of individual judgment involved, meaning that, once again, there is a reliance on human experience and subjectivity to at least a certain degree. The fact that subjectivity enters at both the beginning and the end of the scientific process means that even the most sincere attempts to remain objective are still part of a process surrounded by subjectivity. Furthermore, this observation means subjectivity is not something to be devalued and ignored. It also demonstrates part of the hypocrisy of evidence-based ideals: They claim to embrace objectivity and discredit subjectivity, even though their process is still partly subjective. As an example of this problem, McCluskey and Lovarini (2005) wrote about the importance of improving education regarding EBPs, then concluded that more education did not increase the use of EBPs, and finally acknowledged that the results of their study are an interpretation, verifying the role subjectivity plays in the scientific process. 
Humanity's scientific question needs to be redefined not as whether subjectivity plays a role in science, but rather how large a role it plays. Social
sciences have their limitations, as do the hard sciences (Berkwits, 1998; Lewontin, 1997), as all science is interpreted through culture, contextual factors, and previously held knowledge and beliefs, none of which can or should be eliminated or avoided (Goldenberg, 2005). Music is also a highly subjective area and process. If one were attempting to study music objectively, one would have to start with an operational definition. If the study entailed critiquing tone quality, the individual studying tone would have to operationally define what makes a tone desirable, or subjectively good. Again, even in the very first steps of EBPs, there is subjectivity. After this definition was completed, the scientist could begin collecting and interpreting data. But herein lies the second issue with subjectivity. If musicians list certain characteristics as part of their definition of good tone quality, such as roundness, fullness, a large dynamic range, and a lack of any harsh character, these qualities are themselves naturally subjective. What one person finds a round tone, another may call muted. What one person calls a full sound, another may call stuffy. One person may say a tone is harsh while another says it is clear. These subjective qualities not only muddy the objective waters, they demonstrate that no science is ever completely objective. Complete objectivity is itself impossible: Objectivity is nothing more than the subjective beliefs of a group (Code, 1995). This is one of the biggest problems within EBPs: Evidence that conforms to a logical positivist view of knowledge is assumed to be completely factual and no longer subject to scientific scrutiny (Goldenberg, 2005). This idea opens a debate around another large-scale question: Is subjectivity really an ideal that needs to be avoided, especially in the realms of science, education, and creativity? 
Even within the confines of strictly evidence-based practice, empirical evidence undergoes numerous subjective interpretations…against the insistence of radical empiricists, feminists contend that science is not a value-free enterprise… Scientific
inquiry cannot be value-free, as traditional empiricists required, for cultural and social values make knowledge possible. (Goldenberg, 2005, p. 2626) As much as EBPs try to separate themselves from the non-evidence-based ideas of intuition, habits, and unsystematic clinical experience, they still rely on these ideas to make interpretations: Feminist insights tell us that rather than empirical evidence increasing certainty by factoring out the subjective features of everydayness that bias our understanding of things, the constructs of “objectivity,” “universality,” and “value-free” instead obscure the subjective elements that inescapably enter all forms of human inquiry. Since evidence is by no means objective or neutral, but rather part of a social system of knowledge production, many feminist epistemologists recommend social models of scientific practice. (Goldenberg, 2005, p. 2626) There are huge problems regarding interpretation in EBPs. For example, various program evaluation groups explicitly state that they do not endorse an intervention, while readers still assume the articles these groups publish support the listed interventions (Slavin, 2008). Data always have to be interpreted, and it is impossible to separate interpretation from subjectivity (Morrison, 2001). Viewed as a whole, these problems mean that EBPs do not necessarily increase objectivity, but rather obscure the subjective elements of science (Silk et al., 2010). To summarize, complete objectivity does not exist. The scientific process is inherently subjective at both the beginning and the end of its structures. This subjectivity is not necessarily a negative quality and should not be interpreted as such.

The Myth of Quantitative Superiority

Another idea championed by logical positivists is that quantitative data, or rather data that can be studied empirically, are more objective, or unbiased. Dr. Maya Goldenberg
(2005), a professor of philosophy at the University of Guelph, said that “EBPs maintain an antiquated understanding of evidence as ‘facts’ about the world in the assumption that scientific beliefs stand or fall in light of the evidence” (p. 2622). The idea that quantitative data are automatically superior is again a logical fallacy: Just because data are quantitative, or number based, does not mean they are not susceptible to bias. Nonetheless, this belief seems to be the argument promoted by many EBP proponents. This point illustrates why philosophy is so central to this argument. If quantitative data are superior to qualitative data, then it is indeed true that EBPs are superior. In fact, EBPs propose just this: a hierarchy of evidence, with certain types of evidence being superior to others (Goldenberg, 2005; Upshur, 2003). However, there is no compelling reasoning demonstrating that quantitative data are superior to qualitative data. The general consensus seems to be that because quantitative data are easier to see, measure, and study, they are the superior form of data. This, too, is a logical fallacy: Just because one form of data is easier to collect and observe does not make it superior. A more complete argument would suggest that both forms of evidence have their advantages and that a complete study would utilize a variety of evidence-collection methods. The research has also shown that even though many individuals currently believe EBPs to be a new standard in the scientific community that was only recently invented, EBPs have actually been around for a long time. This observation introduces the next chapter: the definition of EBPs. That chapter will focus on the many different definitions of EBPs and discuss how and why the lack of a set definition is a problem. 
The crux of this issue is one of mutual agreement and understanding: How can authorities say EBPs are the future as well as an important and necessary part of the scientific culture if a definition cannot even be agreed upon? One only has to look through a small amount of literature both for and against EBPs to notice there is a stark contrast in the ways these practices are defined.


In short, the field of medicine contains applicable examples regarding the myth of quantitative superiority. Take cholesterol, for example: The debate over cholesterol still rages on, despite decades of quantitative research (Berkwits, 1998). If quantitative data are so superior, why have they not been able to resolve the mysteries of cholesterol? Treatments for depression and anxiety have fallen victim to this problem as well. EBPs have been in vogue for many years as of the writing of this paper, and yet there exists no evidence demonstrating the remarkable increase in successful treatment EBP supporters have promised. Why not? Even if EBPs are the great system supporters have promised, where are the results? Supporters of quantitative data and supporters of qualitative data each believe their side is superior (Berkwits, 1998), but there are situations where observational studies are more appropriate than clinical trials (Sehon & Stanley, 2003). Supporting the use of quantitative evidence does not mean psychologists should reject clinical experience or observational studies completely. Many advances were made in previous decades using observation and clinical experience, demonstrating that these modalities also have a measure of effectiveness (Sehon & Stanley, 2003). In fact, many scientific organizations agree that qualitative methods should be considered scientific (Eisenhart & Towne, 2003). Qualitative, correlational, and naturalistic data, as well as surveys, are just as important as RCTs, even though they may answer different questions (Morrison, 2001). Additionally, RCTs should be included in scientific methods, but they need to be accompanied by other sources of information as well (Morrison, 2001). In fact, one could argue that the methods used for reviewing program evaluations, especially concerning the methodological and substantive issues, are not much different from the quantitative methods used for meta-analyses (Slavin, 2008).
Furthermore, nonexperimental and qualitative research methods have been added to the accepted methodologies approved by science (Haynes, 2002).


In summation, EBPs are partly based on the philosophical belief that quantitative evidence is superior. Once again, the role of philosophy is demonstrated. This belief exists because quantitative data are easier to see, measure, and study. The belief rests on a logical fallacy and should be tempered with the idea that both quantitative and qualitative data are useful and have their place in science. They answer different types of questions, and there currently exists no evidence showing quantitative data to be superior; such evidence is unlikely ever to exist, as the claim is a value statement rather than a research finding.

Problems with Definitions

Of all the issues with EBPs, perhaps none is more problematic than defining them. This issue is problematic because it is essentially impossible to define EBPs in a clear, concise, successful, and reasonable way. To begin, it must be asked: What is the purpose of EBPs? The scholars and practitioners promoting EBPs claim the whole purpose behind these practices is to “minimize error in treatment selection and administration” (Lilienfeld, 2014; Sackett & Rosenberg, 1995; Upshur, 2003). Supporters also believe standardization will limit error and improve the quality of treatment and care (Upshur, 2003). These authors were specifically referring to EBPs in medicine; however, this claim already reveals many problems. First of all, is not the desire to lessen errors solely in the selection of treatment only one facet of a multidimensional process? Is there not truly more to treatment, education, understanding, and fixing various problems than the correct selection of a treatment? If this goal is the main accomplishment of EBPs, then could it not be argued that proponents of EBPs are making the argument against themselves? If this is the only goal EBPs hope to accomplish, they have in essence admitted EBPs are only a small part of the scientific method, thereby agreeing that other theories and ideas need to be part of the process as well.


The reason EBPs are impossible to fully define is because, as has already been seen, there is no agreed-upon definition. There are many definitions, and these definitions vary greatly. A variety of differing definitions currently exists, and this variety is problematic (Eisenhart & Towne, 2005). To start, many definitions are overly broad (Sehon & Stanley, 2003). In other words, these definitions are so vast, they could apply to essentially anything. Most such definitions include wording such as “allowing for clinical judgment.” However, allowing for clinical judgment essentially defeats the entire purpose of EBPs and makes the use of clinical research moot. For example, the research is quite clear that deep breathing exercises are standard EBP treatments for anxiety. However, what if a clinician's judgment dictates that these treatments are inappropriate or ineffective? What if clinicians use their judgment and experience to decide deep breathing exercises are not effective? The whole purpose of using the best available clinical research for treatment has just been defeated. One of the goals of EBPs is standardization of treatment and care. In this example, this standardization has been completely destroyed by EBPs' own definition. Furthermore, EBPs were founded in part to eliminate less scientific treatments such as the previously mentioned rebirthing procedures. And yet, what if clinical judgment dictates rebirthing procedures are effective? Again, the entire purpose of EBPs has been relinquished. They are considered unscientific, and yet a broad definition of EBPs would allow for their use if they are deemed to be effective. Again, according to this definition, clinicians can essentially claim anything is evidence-based. What if a client is encouraged to smoke marijuana as part of treatment? The clinician could not only defend this treatment based on clinical judgment, they could actually
proclaim this treatment is evidence based. What about encouraging a meat-only diet? Or smoking cigarettes and drinking coffee for schizophrenia? Essentially anything can be called evidence based under this definition. It is simply too broad. EBPs do not benefit from such an overly broad definition (Sehon & Stanley, 2003). EBPs are defined in a “vacuous” manner; EBP proponents “have obscured the current debate by defining EBM in an overly broad, indeed almost vacuous, manner” (Sehon & Stanley, 2003). By defining EBPs in such an open way, advocates have simply defined them as the best way to practice, which is far too open a definition and makes their ideas of improving the quality of practice obsolete: “The debate about the value of EBM has been muddied by an unfortunate tendency to define the term ‘evidence-based medicine’ in an overly broad manner” (Sehon & Stanley, 2003, p. 2). It would seem astonishing that anyone could disagree with EBPs when they are defined as the wise use of the best evidence available. This definition is hardly revolutionary. Who would be against the use of the best evidence (Sehon & Stanley, 2003)? As can be seen, this part of the definition is far from unproblematic. This lack of agreement upon a definition has led to confusion amongst many practitioners. A national online survey of 973 social work faculty found that 73% of respondents viewed EBPs favorably but demonstrated a large degree of disparity in their definitions of EBPs (Rubin & Parrish, 2006). EBP advocates frequently complain about the lack of EBP use. Is it possible EBPs are not being used in part because of the confusion caused by this lack of consensus on a definition? As an example of these problematic definitions, most EBP supporters claim EBPs are simply using “what works,” or what has proven to be scientifically effective (Sehon & Stanley, 2003). Again, this definition is so broad it can encompass essentially anything. If a psychiatrist is treating
depression and were to recommend cocaine as a treatment, as Freud did, this treatment would completely qualify as EBP under this definition. Obviously, this example is an extreme case, but it does demonstrate the philosophical unreasonableness of this definition and how an overly broad definition actually does EBPs more disservice than service. Furthermore, what works is still subjectively determined and defined, meaning this part of the definition will never be the objective truth EBP supporters desire. A standard definition of EBP still needs to be provided (Nelson & Steele, 2007). EBPs have no standard definition, and it is very important to find an agreed-upon one: “If we are to improve the effectiveness of how social work programs teach EBP, then we must first reach some agreement on what we mean when we use the term evidence-based practice” (Springer, 2007, p. 623). Also problematic is the fact that definitions of EBPs have generally been lumped together when there are important distinctions between them (Eisenhart & Towne, 2003). For example, Sackett et al. (2000) proclaimed that there are three different styles of EBP. However, this claim confuses as much as it clarifies, as others have claimed anywhere from no distinct definitions to dozens. Sackett is largely credited as being one of the foremost experts promoting the use of EBPs, and one of the most common EBP definitions is often credited to him. Many consider him the founder, or at the very least the first major proponent, of using EBPs in the social sciences. According to Sackett, EBPs can be defined by the following principles:

1. Integrate treatment using the best available research that is dependent on efficacy;
2. with clinical judgment and expertise; and
3. with client preferences and values (Lilienfeld, 2014; Sackett & Rosenberg, 1995).


It is apparent that this definition is consistent with being too broad and general, as previously mentioned. It fails to endorse anything too specific or meaningful. In other words, under this definition, EBPs can be anything a person decides them to be. For example, if a teacher is trying to teach a student about improvisation and uses a method developed over years of experience, then according to Sackett, this method counts as evidence-based, as experience is considered one of the determining factors of EBPs. According to Sackett, clinical expertise refers to the “ability to rapidly identify each client’s unique circumstances and characteristics” (Gambrill, 2010, p. 32). This claim supports the idea that, according to this definition, EBPs can be anything the practitioner believes is warranted through experience, and indeed, supporters of EBPs do claim that they encourage the use of clinical expertise (Al-Ghimlas, 2013), though this claim is somewhat debatable in real-world practice. The broadness of Sackett’s definition then introduces a whole new set of problems, such as not being able to know whether outcomes are the result of the empirical evidence or rather the practitioner’s experience, or not being able to determine whether research and science have a larger influence on the practice than experience and philosophy. This definition also delegitimizes the very claims about evidence that evidence-based practitioners were trying to make. In other words, by making the definition of EBPs so broad, authentic quantitative data and evidence actually carry no weight or significance. If practitioners are allowed to support their positions with their own experience, then how can quantitative data such as randomized controlled trials (RCTs), which are considered the gold standard of quantitative evidence, be considered more significant than any other form of knowledge? In other words, there is an inherent contradiction written into this definition.
The first part of the definition creates a hierarchy of evidence, giving weighted significance to certain quantitatively based forms of evidence, such as the RCT. However, according to this
definition of EBPs, this very hierarchy can then be completely ignored based on an individual’s experience and expertise. Does this step not completely remove the whole point of the previous step? If one is not required to follow the hierarchy of evidence, what is the point in having it? Sackett’s definition is so broad, it is impossible to take seriously as a definition. Literally any treatment, education, plan, documentation, or idea could be claimed to be evidence-based under this definition. In essence, this definition accomplishes the exact opposite of what EBPs purport to accomplish: narrowing the scope of available practices and eliminating practices that have previously been considered non-evidence-based. But what are other definitions of evidence-based? Before turning to them, note that supporters of Sackett's definition also claim that the research leg of its three-part “stool” needs to carry some extra weight and more emphasis, mostly in the form of rigorously controlled RCTs (Lilienfeld, 2014). It is this emphasis on not only research, but a very specific brand of research, that sets this definition apart from its counterparts. It is important to note that the evidence leg of the stool officially refers to the best and most current knowledge used for decision making about groups and populations (Gambrill, 2010). According to these proponents, when a person’s feelings, intuitions, beliefs, or experience conflict with scientifically gathered evidence produced by others, the evidence should win out (Lilienfeld, 2014). In this manner, what was once a definition so broad it could encompass anything has now narrowed considerably. To emphasize this point, all it took for this definition to change 180 degrees was the inclusion of an emphasis on the superiority of quantitative data.
Another part of this definition, and of many of the definitions, although rarely mentioned, is the necessity to discuss a lack of knowledge when practitioners are uninformed (Gambrill, 2010). This need for practitioners in any field to acknowledge when they do not have the answers was discussed in many of the articles reviewed.


However, once again, this definition fails to mention that many theories support this idea. The need to admit a lack of knowledge is not a new concept initiated by EBPs. In fact, a look through any philosophy of science text explains how researchers have been emphasizing the need to admit to a lack of knowledge for centuries. Another definition is a more simplified version of the first definition mentioned: EBPs are the “conscientious, explicit, and judicious use of the best evidence in making decisions about the care of individual patients” (Al-Ghimlas, 2013, p. 1). Once again, it can be seen that this definition is quite broad, and supporters of this definition again often claim practitioners need to account for a hierarchy of evidence and the art of clinical decision making. The hierarchy of evidence is again intended to mean an emphasis on quantitative data over qualitative; the idea that quantitative data are superior reigns in this definition as well. Other definitions of EBPs exist as well, with some claiming the original definition has never been adhered to. Eileen Gambrill, a leading proponent of EBPs, is noted for consistently writing about how a five-step process identifying what EBPs are has been left out of Sackett’s original definition (Gambrill, 2010). According to Gambrill (2010), this five-step process is:

1. Converting information needs related to practice decisions into well-structured questions.
2. Tracking down, with maximum efficiency, the best evidence with which to answer them.
3. Critically appraising that evidence for its validity (closeness to the truth), impact (size of effect), and applicability (usefulness in practice).
4. Integrating this critical appraisal with clinical expertise and with the client’s unique characteristics and circumstances. This involves deciding whether evidence found (if any) applies to the decision at hand (e.g., is the client similar to those studied? Is there access to the services described?) and considering client values and preferences in making decisions, as well as other application concerns.
5. Evaluating our effectiveness and efficiency in carrying out Steps 1-4 and seeking ways to improve them in the future. (p. 32)

As can be seen, this definition is a more complex expansion of the original definition by Sackett. The largest difference is the emphasis placed on defining what counts as evidence, once again supporting the theory that the biggest distinction between EBPs and previous practices is the emphasis on quantitative data as evidence. While this definition is certainly more applicable to many situations, it still fails to account for many problems, such as the amount of time a clinician would be forced to spend performing research. To continue emphasizing the point that there is no agreed-upon definition of EBPs, Maya Goldenberg (2005) provided yet another definition:

First, clinical decisions should be based on the best available evidence; second, the clinical problem, and not the habits or protocols, should determine the type of evidence to be sought; third, identifying the best evidence means using epidemiological and biostatistical ways of thinking; fourth, conclusions derived from identifying and critically appraising evidence are useful only if put into action in managing patients or making health care decisions; and fifth, performance should be constantly evaluated. (p. 2622)

These definitions, taken as a whole, illustrate part of the problem not only with EBPs but with science itself. There is such variation in the philosophy behind science that it is impossible to agree on a definition of evidence and knowledge, much less a definition of EBPs (Eisenhart & Towne, 2003). Gibbs and Gambrill (2002) stated,

Evidence-based professionals pose specific answerable questions regarding decisions in their practice, search electronically for the answer, critically appraise what they find, carefully consider whether findings apply to a particular client, and, together with the client, select an option to try and evaluate the results. EBP takes advantage of advances in
question formulation, bibliographic databases, search strategies, and computer hardware. (p. 453)

While this definition is not much different from Gambrill’s previously mentioned definition, notice the distinct similarity between this process and the scientific method. This similarity suggests EBPs would be more useful for defining scientific and research methods than for defining treatment. EBPs are characterized by the following hallmarks: (a) an individual assessment and a well-formulated question, (b) a technically efficient electronic search for external research findings related to practice questions, (c) deciding if this evidence applies to the client(s) at hand, and (d) considering this evidence together with the values and expectations of clients. EBP involves using individual expertise to integrate the best external evidence, based on research findings, with information about the client’s characteristics and circumstances, and the client’s preferences and actions. It includes five steps:

1. Convert information needs into answerable questions. Such questions are stated specifically enough to guide a computer search, concern the client’s welfare, relate to a problem that has some chance of a solution, and, ideally, are formed in collaboration with the client. A well-formed question describes the client, course of action, alternate course(s) of action, and intended result.
2. Track down, with maximum efficiency, the best evidence with which to answer the question. (This requires electronic access to bibliographic databases and skill in searching them efficiently and quickly enough to guide practice.)
3. Critically appraise the evidence for its validity and usefulness. (This entails applying a hierarchy of evidence relevant to several question/evidence types.)
4. Apply the results of this appraisal to policy/practice decisions. This requires deciding whether the evidence
applies to the decision at hand based on whether a client is similar enough to those studied, access to interventions described in the literature, weighing anticipated outcomes relative to concerns such as number needed to treat, practical matters, and the client’s preferences.
5. Evaluate outcome. This may entail record keeping, including single-case designs. (Gibbs & Gambrill, 2002, p. 453)

Again, more definitions. And this time, it has been shown that differing definitions can even come from the same individual. This last definition raises some serious questions. For starters, who gets to decide what the best evidence is and whether or not it applies to the case? Is this decision not subjective? Does this idea not automatically imply clinical decision making? Is it possible to have EBPs without clinical decision making, and if so, is it accurate to say none of these other steps make much difference if a clinician still makes choices based on clinical experience, negating other steps and features of EBP? Evidence-based practitioners will use certain techniques to locate the best evidence, including electronic searching methods. These methods are used in collaboration with their clients and occur quickly enough to affect their decisions in practice. Proponents believe this makes EBPs information literate (Valejs, 1991). Why does the search have to be electronic? Is evidence not evidence regardless of how it is found, electronic or not? Why does the question have to be specific and asked in real time? If one is reading a psychology journal, learns about a new technique used for treating depression, and then applies this technique when appropriate, is this not EBP? A current definition of EBM is the explicit, judicious, and conscientious use of current best evidence from health care research in decisions about the care of individuals and populations. A more pragmatic definition is a set of tools and resources for finding and
applying current best evidence from research for the care of individual patients (Haynes, 2002, p. 4). Yet more definitions. This statement demonstrates that there is not even agreement within the EBP community about what EBP is. EBP in education can be defined as (a) basing policy on what works; (b) what works in education being based upon evidence and what works when put into practice; (c) this evidence being derived from RCTs; and (d) RCTs being analyzed through meta-analyses (Morrison, 2001). The final definition is one that many evidence-based supporters passionately disagree with: EBPs are simply practices proclaiming efficacy and supported by methods using and emphasizing quantitative data. Supporters of EBP deny that this definition captures the full meaning, breadth, and versatility of EBPs. Opponents say this definition more accurately describes what happens in practice rather than in theory. In many ways, though, this definition makes more sense, for without a strong emphasis on the importance or superiority of quantitatively based evidence, the previous definitions do not differ much from the ideas that came before them. For example, have students not always been required to participate in testing as part of the public education system? And have not the results of these tests been combined with theory and teacher experience to formulate an educational philosophy? Does this method not follow the principles the Sackett definition outlines? Is it not true that for a major difference to exist between the definition of EBPs and previous philosophies, there has to be an extreme change in the emphasis on quantitative data? Perhaps a more important question would be: What does the definition of evidence-based appear to be in the real world, or everyday life? In other words, regardless of what is being said about EBPs in the research, what is being seen in the professional world that defines evidence-based? The answer is that, in the professional world, EBPs seem to be defined as practices that use only quantitative data. This definition causes most of the problems in the whole evidence-based debate because evidence-based promoters take offense at this definition and say it is inaccurate, while evidence-based opponents say it is accurate and criticize the proponents for not understanding the consequences of their own beliefs. As previously mentioned, one of the founding principles of EBPs is what is commonly referred to as a hierarchy of evidence. This term describes the idea that certain types of evidence, such as quantitative data and randomized controlled trials (RCTs), take precedence over other types of evidence, such as qualitative data like narratives, case studies, phenomenology, and ethnographies. Another way to understand this idea is that quantitative data are trusted more than qualitative data. But why? The principal argument seems to revolve around EBP proponents only accepting, or at the very least highly valuing, quantitative data. In other words, in the professional world, regardless of what someone’s experience says, or the value an individual may place on qualitative data, evidence-based proponents consider only one type of information valuable: quantitative. EBP proponents insist this is not the case, but EBP opponents insist it is. For example, an experienced social worker may believe a client needs a particular intervention, while the EBP social worker will say the client needs a quantitative assessment first. A non-EBP therapist may ask clients how they are feeling, while an EBP therapist will give clients an assessment and base conclusions on the assessment.
An EBP music teacher may focus on a student’s ability to play a particular scale across a certain number of octaves at a certain speed, while a non-EBP music teacher will be more focused on the student’s ability to play a piece of music in a manner that is both pleasant and musical.


When questioned about these habits, and after denying that these examples are true, evidence-based proponents claim examples like the previously mentioned cases exist only because not enough training about EBPs has been given. In their minds, these examples are evidence that more EBPs, as well as more training about EBPs, are needed. This mentality may be based on another assumption: Since these supporters assume that situations can only improve with the implementation of evidence-based policies, there can be only one explanation when these policies do not prove successful. Opponents say these examples are evidence that EBPs do not exist in practice the way proponents believe they do. Regardless of whether these scenarios are true, they illustrate the debate over EBPs more succinctly than any other statement: The most important consideration in this argument revolves around what constitutes evidence and the emphasis that has been put on quantitative over qualitative evidence. Many of the research articles studied for this project did not even appear to consider any other type of data as possible evidence, with some papers outright claiming that evidence is derived only from quantitative data (Al-Ghimlas, 2013). Whether or not it is true that evidence-based practitioners value ideas such as experience and qualitative data, the impression shared by many professionals is that they do not. There is a distinct belief amongst those who have worked with evidence-based professionals that they tend to focus solely on quantitative data, to the exclusion of experience and qualitative data. As previously mentioned, this type of practice makes more sense than some of the listed definitions that subscribe to valuing experience and different types of evidence. In fact, one could argue that if experience and qualitative data are still valued, then EBPs really are no change at all from the previously used, expert-based practices.
For the purposes of this research, it is necessary to define EBPs in terms that apply to all that has been discussed so far. This means viewing the literature through a critical lens and
discovering what these definitions have in common. Not surprisingly, given the discussion in the previous paragraphs, what all definitions of EBPs have in common, across a wide variety of subjects, is an emphasis on quantitative data over qualitative data, or experience, as defining “what works.” For all practical purposes, it could be argued that experience is itself a type of qualitative data. Nonetheless, even though it could be argued that evidence-based supporters value qualitative data and experience to varying degrees, the emphasis on one type of data cannot be ignored. So, if EBPs are to be defined by an emphasis on quantitative data over all other types of information, the next question becomes: What are the consequences, or results, of this emphasis?

What’s in a Name?

One of the largest problems with EBPs is simply the name. EBP supporters have essentially hijacked the word evidence. The master narrative being promoted works in part through the name “evidence-based.” The name is so strong, and so biased, because it implies one is anti-science or anti-evidence if one does not follow modern-day EBPs (Dalal, 2018; Shedler, 2017). The term evidence-based practice has a sense of obviousness to it, making it difficult to argue against (Goldenberg, 2005). Essentially, if you believe in evidence, then you must support EBPs. Likewise, if you disagree with EBPs, then you must be against evidence. In the field of psychology, this issue manifests itself by implying certain treatments are based on evidence while other treatments are not, even though there is evidence for most forms of treatment. Most prominently, supporters tend to believe in the efficacy of CBT treatments, even though the evidence is clear that other treatments are just as successful (Shedler, 2017). Proponents of EBP will frequently disregard evidence supporting theories such as psychodynamic therapy. They often claim these therapies do not count because they are not
manualized or scripted, so they cannot be supported by the evidence. This situation lends credence to the claim that what supporters of EBPs really mean is CBT, not psychodynamic therapies. Evidence really has very little to do with this situation. Criticism of other forms of therapy often occurs linguistically, using phrasing such as non-evidence-based to mean any therapy that is non-manualized (Shedler, 2017). “The very term ‘evidence-based medicine’ seems similarly vacuous—as if any alternative to EBM means doing medicine based on something other than evidence” (Sehon & Stanley, 2003, p. 3), as if doctors only recently began basing their treatments upon evidence. According to these ideas, evidence was a concept that did not even exist until the late 20th century. This obviousness of the name is problematic. After all, who disagrees with evidence? The term is problematic because it implies any action outside of EBPs is based on something other than evidence (Sehon & Stanley, 2003). The name thus makes it impossible to argue with EBP treatments without being labeled anti-evidence. These ideas exist to make it impossible to disagree with EBPs:

Taken at face value, these definitions seem merely to say that EBM is the wise use of the best evidence available. Given that characterization alone, it would be astonishing that there is any dispute about EBM. It would be equally astonishing that anyone could think EBM, defined in this manner, revolutionary or even useful. After all, who could possibly be opposed to using the best evidence wisely? (Sehon & Stanley, 2003, p. 2)

This statement also verifies the previously mentioned idea that EBPs are not the revolutionary paradigm shift their supporters have promised. Scientists have always been using the best available evidence, making this part of the definition no change at all. By including the word “evidence” in the name, supporters have unfairly and inaccurately rigged the contest.
If one were asked to vote on a theory based upon its name alone, EBPs would win
the contest simply because the word evidence is in the title. The word evidence creates major problems: it becomes easy to assume that a theory with this word in its title is more scientifically valid than other theories. Evidence is also a weighted term, one that is nearly impossible to disagree with in today’s scientifically charged culture. When one stops to think, very few people would deny using evidence, or claim not to base their practices in evidence, which raises larger questions: Do psychologists have a firm grasp of the nuances in the arguments surrounding the definition of evidence (Goldenberg, 2005)? Furthermore, how successful is the American education system in teaching philosophical or critical thinking skills if so few people question the fundamental nature of evidence? Because of the name, there has been much misunderstanding about what EBPs are and are not. Many students report believing EBPs are superior because they believe EBPs are the only theory that is scientific. This belief is rooted in a deep misunderstanding of the definitions of both evidence-based and other theories. Some refer to the previous theories by different names; in the field of medicine especially, previous theories are now referred to as practice-oriented theories. Once again, when comparing this name to an evidence-based name, it is easy to see why one might assume the superiority of one theory over the other. The common assumption is that the theory with the word evidence in its title is more rooted in, and places more emphasis on, evidence. This belief is not necessarily true. In fact, one could argue that a practice-oriented theory fits Sackett’s definition of evidence-based practice exactly.
Previous theories did value and focus on evidence; however, they also attended to theory, practice, and experience,
creating a theory that one could argue is more balanced overall, and therefore more likely to be accurate, than a theory that emphasizes one aspect of the scientific process at the expense of the others. The only way for evidence-based practices to be truly different from previous theories is to place a large emphasis on the importance or superiority of quantitative data. This idea explains why the particular definition of EBPs was chosen for this research. As previously mentioned, this definition is controversial among supporters and opponents of EBPs, although some of the confusion is difficult to understand. For example, in the field of psychology there is a movement to promote quantitative data not only as the superior form of data but as the only form of data that should be used. Somehow, proponents believe this position fits Sackett’s definition of combining evidence and experience; opponents see it as a narrow-minded move that limits data collection and does not fit Sackett’s definition at all. To debate the superiority of one type of evidence is one thing; to push for all practitioners to use only one type of evidence is quite another and creates problems of its own. Again, if a definition claims to include practitioner experience and expertise, then why the push to eliminate qualitative data? The word evidence in the title is not the only name-related issue EBPs have. Supporters of EBPs frequently use terms such as best practices, empirically validated, and well-established, further creating the false impression that EBPs are the only scientific approach to have met any of these criteria (Gambrill, 2010). The term evidence-based simply sounds good (Gambrill, 2010).
These terms are emotionally charged, implying superiority over any other form of practice. For example, if one were
not to follow EBPs, then are they really using “best practices”? Again, this title has a ring of obviousness to it and implies a lack of professionalism or knowledge on the part of anyone who disagrees with the methodology. In summary, evidence is a loaded word, one that causes people to assume a positive definition of the theory without questioning their assumptions. The name EBPs is itself problematic, as it leads people to interpret EBP policies as scientific and opponents’ beliefs as unscientific. This is faulty logic. Furthermore, the name implies that previous research, ideas, beliefs, and practices were not based upon evidence. Once again, this implication is false and shows how problematic the name can be.

Randomized Controlled Trials

The hallmark of EBPs is the hierarchy of evidence that consistently places randomized controlled trials (RCTs) at the top (Goldenberg, 2005). They are the gold standard of the philosophy (Silk et al., 2010) and the heart of its treatments: “What separates EBM from other approaches is the priority it gives to certain forms of evidence, and according to EBM the most highly prized form of evidence comes from RCTs (including systematic reviews) and meta-analyses of RCTs” (Sehon & Stanley, 2003, p. 3). There is an assumption in the EBP community that RCTs are flawless and can be used to completely predict human behavior. This assumption is mistaken. RCTs inherently offer an incomplete picture. They are also inherently costly in implementation and time, and psychologists need to ask whether they are worth the time and work that accompany them (Morrison, 2001). RCTs can be used to predict certain aspects of human behavior but not to understand it as a whole: “If there are EBM proponents who think that medical practice can be reduced to an algorithmic application of evidence from RCTs, they are certainly mistaken” (Sehon & Stanley, 2003, p. 9).
It is not acceptable for EBP proponents to determine that best evidence comes only from RCTs (Morrison, 2001). There are significant limitations on RCTs, including their practicality and ethical value (Sehon & Stanley, 2003). Morrison (2001) describes these limitations best:

There are, then, several limitations to RCTs. To suggest that simply striving to describe “what works” by using RCTs [is sufficient] neglects the important issues of (1) what works in what conditions, (2) what are the roles and the behaviours of people and context in contributing to “what works,” (3) why programmes and interventions work or do not work, and (4) how to overcome problems of sampling, reliability and ethics. Though RCTs seek to address these matters, the argument so far has suggested that the very procedure of RCTs renders them unable to do so satisfactorily… There are the larger issues of clarifying the terms of criteria for judging “what works”; the notion has to be qualified in terms of (a) “what works for whom,” (b) “in whose terms ‘what works’ is being judged”; (c) “against what criteria are ‘what works’ being judged”; and (d) “at what cost/benefit is ‘what works’ being ‘judged’” … [t]he natural sciences, whose methodologies are invoked by advocates of RCTs, are turning to new modes of understanding the world through chaos and complexity theories, i.e. methodologies which operate holistically at systems levels rather than in the fragmented, atomized, reductionist world of RCTs…such easy dismissal of methods other than RCTs undermines the notion of “fitness for purpose”: certain types of research are very useful for certain purposes, whilst others (including RCTs) are not. Causality may be addressed in RCTs…but not all educational research is, or should be concerned with causality, just as “what works” is not simply a matter of causality. (Morrison, 2001, p. 76)
Part of the popularization of RCTs is the growth in research, especially health care research, over the last few decades (Berkwits, 1998). However, specific
problems exist within RCTs. For example, a key problem with experimental designs is whether the control group is created retrospectively or prospectively, after or before the experiment is run. Retrospective control groups have been shown to favor statistical significance almost twice as often as prospective control groups. Techniques such as “trim and fill,” which estimates the number of studies with null effects needed to balance against the bias of small sample sizes, have even been developed because of these small-sample problems (Slavin, 2008). Many of these problems are not exclusive to RCTs but demonstrate problems with quantitative research in general. Just as qualitative research has its own issues, quantitative studies contain inherent flaws. For example, brief studies tend to be low in external validity (Slavin, 2008). There are also well-defined and well-known problems with determining causality. Behind every apparent cause A of effect B lurk other causes C and D that are unknown, undetermined, and unaccounted for. This reality makes a simple case of causality elusive, not to mention that the cause could be an unknown combination of variables the RCT has not taken into account (Morrison, 2001). At the top of EBP supporters’ argument, and at the top of their evidentiary hierarchy, is the randomized controlled trial, so named because its method employs randomization and comparison to a control sample. The theory behind this methodology is that randomization and comparison to a control group ensure a higher level of accuracy. Even more esteemed are double- and triple-blind randomized controlled studies, in which the researchers are not privy to details of the variables, reducing certain forms of bias. While these studies can be highly accurate and provide important and valuable information, they are not perfect. Nor do they invalidate other forms of information, study, and research.
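The small-sample and publication-bias problems described above, the ones trim-and-fill procedures try to estimate and correct, can be made concrete with a brief simulation. The sketch below is purely illustrative and drawn from no cited study: the numbers (2,000 hypothetical studies, twenty participants per arm, a true effect of zero) are assumptions chosen for demonstration. It shows how a literature that reports only “significant” small studies can manufacture an apparent effect where none exists.

```python
import random
import statistics

random.seed(42)

def simulate_study(n, true_effect=0.0):
    """One two-arm study with n participants per arm. Returns the estimated
    effect (treatment mean minus control mean) and a crude z statistic."""
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimate = statistics.mean(treatment) - statistics.mean(control)
    std_err = (statistics.pvariance(treatment) / n
               + statistics.pvariance(control) / n) ** 0.5
    return estimate, estimate / std_err

# 2,000 hypothetical small studies of a treatment with NO real effect.
all_estimates, published = [], []
for _ in range(2000):
    estimate, z = simulate_study(n=20, true_effect=0.0)
    all_estimates.append(estimate)
    if z > 1.96:  # only "significant, positive" findings reach the journals
        published.append(estimate)

print(f"mean effect across all studies:      {statistics.mean(all_estimates):+.3f}")
print(f"mean effect in the published subset: {statistics.mean(published):+.3f}")
```

Run as written, the average effect across all simulated studies hovers near zero, while the average among the “published” subset is substantially positive; that gap is exactly the bias trim-and-fill procedures attempt to quantify.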
Of all the misinformation passed around on this subject and all the philosophical errors made, this one seems to be the most often repeated: other forms of data, measurement, and study are inferior to RCTs, simply
because the RCT is just that amazing. As amazing as RCTs may be, other forms of study are valuable as well and provide a different type of information. The problem with RCTs is not necessarily in their methodology but rather in their interpretation. In fact, research demonstrates that RCTs are prone to many of the same errors as other forms of research, and their outcomes are influenced as much by factors such as sociocultural characteristics as by what the data say. This influence can be partially attributed to the current culture’s lack of well-rounded education and its emphasis on a single domain of the scientific method (Berkwits, 1998). In other words, RCTs are still interpreted by the researcher, and this interpretation is always prone to human error. Another problem with randomized controlled trials is that they often fail to provide the type of information needed. For example, much of the theory around RCTs revolves around tests of statistical significance, which supposedly determine the accuracy or truth of a study’s findings. However, philosophy would question this claim, asking, “What is truth?” Not only should science question the assumption and definition of truth, but shouldn’t research also ask what truth means? If a study claims it is true simply because it has reached a level of statistical significance, does that mean the study is applicable to the doctor in a rural town, the therapist at a recovery center, or the schoolteacher at a high school? In fact, if the results are not true or applicable for any one individual attempting to integrate the findings, couldn’t one argue that the results are not true, or significant? If a therapist uses CBT for clients with depression because the research says to, and the research guarantees the results are accurate and significant, and yet this therapist finds CBT does not work, how are the results then true or significant?
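The gap between group-level significance and individual applicability, such as the hypothetical therapist whose clients do not respond as the trials promise, can be illustrated with a toy simulation. The figures below (a 0.2 standard-deviation average benefit, 2,000 hypothetical clients per arm) are assumptions for demonstration, not estimates from any treatment literature.

```python
import random
import statistics

random.seed(7)

# Hypothetical trial: the treatment shifts the average outcome by 0.2 standard
# deviations, while individual responses vary with a standard deviation of 1.
treated = [random.gauss(0.2, 1.0) for _ in range(2000)]
control = [random.gauss(0.0, 1.0) for _ in range(2000)]

diff = statistics.mean(treated) - statistics.mean(control)
std_err = (statistics.pvariance(treated) / len(treated)
           + statistics.pvariance(control) / len(control)) ** 0.5
z = diff / std_err

# Share of treated individuals who still score below the control-group average.
control_mean = statistics.mean(control)
share_below = sum(1 for y in treated if y < control_mean) / len(treated)

print(f"group mean difference: {diff:.2f} (z = {z:.1f})")
print(f"treated individuals below the control average: {share_below:.0%}")
```

The mean difference comes out highly “significant,” yet a large share of treated individuals still score below the untreated average: statistical significance describes the group, not any particular client.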
In other words, just because a study says it is true and significant does not make it so. Just because a study has a mathematical way of determining what it defines as truth does not make it so. In fact, is truth even a mathematically
defined idea? What makes a study true? Is the music teacher who quits teaching his students how to read sheet music, even though tradition and research say teaching students to read music is the best way to teach, wrong? If his experience has shown him this research is flawed, how can the study then be true and accurate? These questions demonstrate the importance of philosophy when questioning not only EBPs but the whole method of research and scientific study. Further, measures of frequency, odds ratios, and event rates are scientific ideas that may or may not be the best representation of reality, and at the very least they need to be questioned philosophically (Berkwits, 1998). The problem with these scientific requirements is that the practices then become self-replicating: “a method which initially served pragmatic social purposes became a self-authenticating criterion for objective clinical truths” (Berkwits, 1998, p. 1541). Such studies, including randomized controlled trials, are treated as true because they say they are. There is no questioning their results or their methods. RCTs are the best evidence, therefore they must be true and accurate, and questioning their results serves little purpose and is generally met with criticism. But shouldn’t all results be open to questioning and critique? As previously mentioned, even the staunchest opponent will admit that RCTs have a purpose and excel in certain areas, but this excellence does not place them above reproach. One of the biggest fears surrounding the use of RCTs is a fear that qualitative studies are not scientific. However, the findings of observational studies frequently agree with the findings of RCTs (Haynes, 2002). In medicine, other approaches besides RCTs take into account the patient’s clinical state and values, meaning this idea cannot separate EBPs from other ways of practice (Sehon & Stanley, 2003).
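The observation that observational studies often agree with RCTs (Haynes, 2002) can itself be illustrated with a small simulation. Everything here is a hypothetical sketch: a true treatment effect of 0.3 is assumed, subjects in the non-randomized arm self-select based on a pretest, and a simple pretest adjustment stands in for the matching that real observational studies use.

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.3  # assumed benefit of the hypothetical treatment

def outcome(pretest, treated):
    """Post-test score: pretest ability plus treatment effect plus noise."""
    return pretest + TRUE_EFFECT * treated + random.gauss(0.0, 0.5)

# --- Randomized design: coin-flip assignment ------------------------------
rand_treat, rand_ctrl = [], []
for _ in range(5000):
    pretest = random.gauss(0.0, 1.0)
    if random.random() < 0.5:
        rand_treat.append(outcome(pretest, 1))
    else:
        rand_ctrl.append(outcome(pretest, 0))
rand_est = statistics.mean(rand_treat) - statistics.mean(rand_ctrl)

# --- Observational design: higher-pretest subjects tend to opt in ---------
subjects = []
for _ in range(5000):
    pretest = random.gauss(0.0, 1.0)
    treated = 1 if pretest + random.gauss(0.0, 1.0) > 0 else 0
    subjects.append((pretest, treated, outcome(pretest, treated)))

naive_est = (statistics.mean(y for p, t, y in subjects if t)
             - statistics.mean(y for p, t, y in subjects if not t))
# Adjust for the pretest (a stand-in for matching on covariates).
adj_est = (statistics.mean(y - p for p, t, y in subjects if t)
           - statistics.mean(y - p for p, t, y in subjects if not t))

print(f"randomized estimate:       {rand_est:+.2f}")
print(f"observational, unadjusted: {naive_est:+.2f}")
print(f"observational, adjusted:   {adj_est:+.2f}")
```

The unadjusted self-selected comparison is inflated by selection bias, but once the pretest is accounted for, the observational estimate lands close to the randomized one, consistent with the agreement Haynes describes.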
The National Research Council (NRC) has recommended a variety of study designs for scientific research, including experimental, case study, ethnographic, and survey designs. The council also emphasized the importance of both qualitative and
quantitative methodologies, as well as the importance of nonscientific means of knowledge such as philosophy and history (Eisenhart & Towne, 2005). The idealized perfection EBP supporters ascribe to RCTs cannot capture many important factors in scientific research. Social processes at work in everyday interactions may be the determining factor in human behavior, and yet RCTs are unable to account for them. It remains unclear how RCTs can disentangle the complexity of human behavior and of complex systems. During experiments it is common for unanticipated factors to come into play, factors which were not predicted and cannot be controlled for. These factors may exert a massive influence on the experiment, and yet RCTs cannot account for them. This situation differs from the world of the laboratory, where the complexity of social beings is not a concern; it is a reality of social and human settings. The importance of context is undeniable in social research and settings, and yet where do RCTs take context into account? In fact, RCTs actively seek to eliminate factors such as context. RCTs are only one source of contributory data, and there are plenty of alternatives to RCTs (Morrison, 2001). Some of the topics mentioned in the previous Philosophy section also apply to RCTs. First of all, RCTs are not completely independent of basic science and still require scientific interpretation (Sehon & Stanley, 2003). Again, EBPs seek to separate research from science by essentially saying research is science: “Statistical information from an RCT is virtually uninterpretable and meaningless if stripped away from the backdrop of our basic understanding of physiology and biochemistry” (Sehon & Stanley, 2003, p. 7). The emphasis on RCTs has changed some of the techniques used for effective science, but there has been no immense change in doctrine (Sehon & Stanley, 2003).
Scientific
knowledge and basic understanding are still needed; the gold standard of RCTs appears ill suited to many of the values, changes, and ideas science promotes (Silk et al., 2010): “Clinical experience, observational studies, and RCTs have much in common” (Sehon & Stanley, 2003, p. 7). EBP supporters believe research is all science needs and that a basic understanding of the scientific method or subject is not required. This idea is false. Even RCT results need to be philosophically plausible and need realistic explanation (Sehon & Stanley, 2003). RCTs also need interpretation, and as discussed previously, interpretation is never completely objective nor infallible. This interpretation makes RCTs just as susceptible to human error as other studies. The amount of confidence placed in data is subjective and a matter of judgment, not a scientific fact. In order for EBPs to be scientifically accurate, they must first answer some questions: how to define what works; who decides what works; how complexity and multidimensionality are addressed; how science addresses the inherently subjective nature of interpreting data; how science understands ethical concerns; what methodologies address what works; what the limitations of RCTs are and how science can use them to provide useful evidence; and how to address some of the technical concerns surrounding RCTs, including generalizability, sampling, meta-analyses, validity, and reliability (Morrison, 2001). While EBPs and RCTs make an important contribution to the sciences, there are limits to how much information science can receive from these sources. Using RCTs as the sole source of information or evidence is a mistake and not scientifically accurate. RCTs have their limitations (Morrison, 2001). Critics of EBPs argue that EBPs overemphasize the importance and value of clinical trials (Sehon & Stanley, 2003). Modern research suggests that the conclusions of these
systematic studies are no more likely to be accurate than those of their counterparts, especially given their tendency to overestimate positive effects (Gambrill, 2010). EBPs seek to base policy upon RCTs, meta-analyses, and other scientifically based research rather than upon politics, ideas, or beliefs, but often struggle to do so. Morrison (2001) presents the numerous problems with RCTs most eloquently:

At what point does the accumulation of a quantity of RCTs become sufficient to be converted to a qualitative shift in policy making? Are the type of causality, the limits to causality, and the parameters of applicability that control RCTs worth their undertaking? What other sources of evidence should be admissible as “evidence” in EBE? The evidence that might answer these questions does not derive from RCTs alone, and this perhaps undermines their own status… [T]he rigour required for RCTs may not be practicable, and evidence-based practices in education that are based largely on RCTs face several problems as they: (a) operate from a restricted view of causality and predictability; (b) understate the value of other data sources; (c) neglect the significance of theories of chaos and complexity; (d) display unrealistic reductionism, simplification and atomisation of a complex whole; (e) underestimate the importance of multiple perspectives on “what works”; (f) are unable to catch the dynamics of non-linear phenomena; (g) are unable to comment on the processes taking place in experiments; (h) neglect the significance of context. Judging “what works” is a deliberative and evaluative as well as an empirical matter… Chaos and complexity theories here are important, for they argue against the linear, deterministic, patterned, universalizable, stable, atomized, objective, controlled, closed systems of law-like behavior which may be operating in the world of medicine and the laboratory but which do not operate in the social world of education.
Chaos and complexity theories identify several factors which seriously
undermine the value of RCTs in education… What is being argued here is that, even if we could conduct an RCT, the applicability of that RCT to ongoing, emerging, interactive, relational, changing, open situations, in practice, may be limited, even though some gross similarities may be identified through meta-analysis. To hold that policy making must be based on the evidence of “what works” as derived from some forms of meta-analysis is perhaps questionable… Even if one wanted to undertake an RCT, to what extent is it actually possible to identify, isolate, control and manipulate the key variables in RCTs, to have truly equivalent groups and, thence, to attribute causality? RCTs may be fallible because they cannot meet the requirements of the principles on which they are based. (Morrison, 2001, p. 76)

Supporters of EBPs and RCTs defend their propositions by saying RCTs produce further information which can then be tested using meta-analysis. This claim may be true, but producing evidence about “what works” is a far cry from demonstrating cause. RCTs are another attempt to simplify a complex process, a process that ultimately cannot be simplified and needs to remain complex. RCTs fail to consider the central role people play in experiments and the fact that people have attitudes, perceptions, motivations, responses, wishes, and desires, all of which have an important effect on the experiment and cannot be reduced to a single variable. RCTs excel at epistemology but are weak regarding ontology (Morrison, 2001). As an example of this complexity, Morrison (2001) continued, “RCTs require exactly the same intervention or programme across the control and experimental groups, i.e. to ensure that the protocols and procedures for the research are observed and are identical. Yet this is impossible” (p. 73).
Trying to control the influence of extraneous factors through techniques such as random assignment also eliminates factors that might explain the cause of behaviors and would allow researchers to understand exactly “what works.” RCTs conceal a wide range of complex factors in their desire to uncover what works, and they fail to consider long-term consequences and other problems that may arise as a result of the research, as demonstrated by the current situation caused by standardized testing in public school systems. Science needs to adopt a multiperspective system when studying behavior, and it is impossible for RCTs to develop this multiperspective system independent of other types of research (Morrison, 2001). Within certain subjects, such as education, it is next to impossible to avoid some of these problems, such as interaction between control and experimental groups (Morrison, 2001). Blind RCTs are perhaps impossible in social research, meaning reliability and validity in these studies are essentially always compromised. The issue of generalizability from a sample has long been recognized as problematic in the field of medicine: “There are limits to generalizability, typicality and representativeness” (Morrison, 2001, p. 73). There is a belief within the scientific community that randomization overcomes almost all problematic factors in research, but exactly how randomization overcomes these problems is unknown or not discussed. Just because randomization exists does not mean all variables and problems can be ruled out (Morrison, 2001). Furthermore, the range of potential outcomes that RCTs address is severely limited (Morrison, 2001). RCTs assume outcomes are capable of being measured and operationalized. They assume the data will present in a straightforward manner, with no unintended consequences and no unmeasured variables presenting as important:

Measures may only catch superficiality.
An RCT may be entirely rigorous but, maybe as a consequence of meeting the canons of rigour by isolating and controlling important
factors out of the experiment, leave only “trivial” factors in the experiment; they yield little worth knowing. Even if causality is shown, it might be in such restricted, local, specific terms as to be non-transferable to other contexts. Though constructing appropriate measures may simply be a technical matter, this is unlikely to be the sole resolution of the problem, as the judgment of adequate construct and content validity is not decided by measures alone, it is deliberative. Interventions also have to be judged not only in their own terms but in terms of their compatibility with the overall conditions, contexts, programmes, practices, purposes, and values of education. RCTs, inherently reductionist and atomizing in their focus and methodology, are incapable of taking in the whole picture… What is being suggested here is that RCTs should be concerned not only with which programmes work, but, more specifically, which programmes work, in what ways, with what consequences, and with which people and, of course, which programmes do not work. In one sense this is simply a matter of sampling and clarifying the parameters of external validity and generalizability, avoiding homogenizing disparate populations. In another sense, the more one recognizes the uniqueness of situations, people, conditions and interactions, the less likely it is that replicability or generalizability become attainable. The question, then, becomes less about “what works” and more about what works with, and for, whom, and in what and whose terms. (Morrison, 2001, p. 78)

The power of RCTs is multiplied through their examination by meta-analyses (Morrison, 2001). In order for meta-analyses to be fair under the terms of traditional science, alternative hypotheses need to be gathered, explored, and ruled out. RCTs do not seek to meet these criteria:
How can we be certain that meta-analysis is fair if the hypotheses for the separate experiments were not identical, if the hypotheses were not operationalisations of the identical constructs, if the conduct of the separate RCTs (e.g. time frames, interventions and programmes, controls, constitution of the group, characteristics of the participants, measures used) were not identical? Further, for meta-analysis to be defensible, there needs to be differentiation between “good” and “bad” RCTs. (Morrison, 2001, p. 78)

The role of random assignment is one of the most contentious issues in program evaluation (Slavin, 2008). The emphasis on random assignment exists because it purports to eliminate selection bias. However, selection bias can occur anywhere in the study, and random assignment does not necessarily eliminate this bias, although it does make it less likely to occur (Slavin, 2008). This belief is based upon the philosophy that randomization will correct for bias, and that a lack of bias means science can be certain the results are accurate, valid, and scientific. But, again, this philosophy is flawed. Just because a sample is selected randomly does not mean the entire study is perfect and contains no flaws. Random assignment simply means that scientists have not been able to influence the selection. That’s it. Randomization provides protection against selection bias, but it does not guarantee different results in practice or protection from all bias:

Random assignment does not guarantee validity. Entirely appropriate policies promoting experiments using random assignment should not be allowed to lead to an emphasis on studies that are brief, small, artificial, or otherwise of little value to practicing educators. (Slavin, 2008, p.
9) Comparisons between studies with random assignment and nonrandom assignment have shown that, in practice, there is frequently no difference between the two, and controlling for variables has been shown to reduce, but not eliminate, differences between
randomized and matched studies. Reviews using effect size estimates adjusted for pretests and some other covariates have shown that the outcomes for randomized and matched samples are essentially identical. Additionally, random assignment studies tend to have smaller sample sizes, making this a significant problem. Research has demonstrated that estimates of program outcomes, when made with no selection bias at the initial level, correlate strongly with pretests for both randomized and matched samples (Slavin, 2008). Many randomized studies have other issues besides the actual assignment, including being too brief, too artificial, and too small. These flaws may seriously limit both internal and external validity. For example, many researchers assign subjects randomly at one level but analyze the data at another, such as analyzing at the student level when assignment occurred at a higher level. This type of methodology can lead to overstated statistical significance (Slavin, 2008). Also, it has long been understood that the differences in experimental effects between low- and high-validity studies are quite small (Smith & Glass, 1977). Large studies also have their problems, as they are unable to focus on the implementation of results as easily as small studies (Slavin, 2008). Randomized quasi-experiments (RQEs) are flawed because they produce more negative differences or statistically significant positive differences than they should, even though their effect size estimates are considered unbiased. Slavin (2008) continued:

A large, prospective matched study may provide more meaningful and reliable information than a small, randomized one.
Limiting reviews to randomized experiments may inadvertently introduce bias if most randomized studies are small…a focus on randomized studies without attention to sample size and other design elements that also have potential to introduce bias can lead to illogical conclusions… Randomized studies are few in number, and many are very small, very brief, very artificial, and/or very old.
Given the increasingly common finding that, in studies in education, randomized and well-matched studies tend to produce similar effect sizes, the rationale for restricting attention to randomized studies alone is diminished. (Slavin, 2008, p. 12)

RCTs also have a variety of other problems, including patients withdrawing from the study. In many studies, patients are withdrawn or withdraw themselves for a variety of reasons, which contribute to “experimental mortality” and attrition rates. This phenomenon has long been recognized as a problem in research, and RCTs and EBPs do nothing to overcome it (Morrison, 2001). Sampling is also an issue with serious ethical dimensions, as the ethics of research often disrupt truly accurate and unbiased sampling methods (Morrison, 2001). Dissimilarity between study groups is also problematic. The fact that the control group and the experimental group can never be exactly alike technically should invalidate nearly any trial done on human subjects; how much these differences influence the experiment can never be known or determined (Morrison, 2001). Time itself is also a factor in research. The fact that time progresses and changes is rarely mentioned in research, and yet one can never test the same subject for the same variable at the same moment twice, meaning that time will always remain an unknown (Morrison, 2001). Further questions need to be answered when using RCTs: To what extent should data from RCTs be used in formulating policy? Are RCTs able to meet their own standards for rigour? What are the costs and benefits of RCTs (Morrison, 2001)? Can RCTs actually study themselves? Can scientists develop a better way to test RCTs? Will they be able to consider ethical, contextual, and social concerns in the future? Will scientists be able to develop a more thorough understanding of science that uses RCTs for the research they perform

STOP MAKING SENSE

130

exceptionally well while acknowledging their limitations and supporting them with additional methods of research?

To bring these problems with RCTs together, consider the example of cancer treatments, an area dominated by RCTs and meta-analyses. A given treatment might be shown to be effective at reducing cancer yet fail to account for the myriad other health problems the treatment produces, problems that could be worse than the cancer itself (Morrison, 2001). And yet these treatments are considered the most prominent and important EBP treatments in medicine. For a doctor not to prescribe them would be to risk career suicide. But what if a doctor were to conclude that the side effects of the treatment were worse than the cancer itself? What if the doctor has enough experience to say the studies supporting these treatments are flawed and that their own practice has not produced the success those studies report? Are these RCTs perfect? Are they absolutely flawless? Do they contain no human error? What is the role of interpretation in these studies? Can scientists truly say they are completely objective just because patients were chosen randomly?

To summarize, RCTs are not the perfect example of quantitative data their supporters claim them to be. They contain many inherent flaws and, like any other form of research, they have their strengths and weaknesses. As previously discussed, they are not completely objective because complete objectivity does not exist. They contain subjective elements in the design and definitions of the study as well as in the interpretations. Furthermore, they often rely on small sample sizes and have yet to develop a method to include cultural and social factors as part of the design.

Philosophical Assumptions

As previously stated, EBPs are a philosophy, and they contain many assumptions, including assumptions about the accuracy of "facts." People have assumptions and beliefs that
color their interpretation of evidence (Goldenberg, 2005), and EBPs involve assumptions as well (Addis et al., 1999). Among them is the assumption that scientific standards are transparent, neutral, objective, and universal (Goldenberg, 2005). The assertions of EBPs seem like common sense when first taken at face value; however, some critics contend that the assumptions of EBPs are "absurd and irrational" (Sehon & Stanley, 2003). One of the biggest consequences of EBPs is likely unintended, and somewhat antithetical to their logical positivist heritage: a lack of logic and evidence used in decision making. Many of the arguments made by proponents of these practices are themselves based on assumptions (Goldenberg, 2005). Many authors in the literature supporting EBPs have argued that EBPs need to be applied in their respective fields of study, frequently claiming that this application is needed to improve the field, yet at no point presenting any evidence or logic as to how or why this application would be an improvement. Many of the articles read for this paper claim EBPs will reduce errors (Lilienfeld, 2014), though none of them cite any support or research for these claims, once again going against their own ideals. The authors have simply assumed EBPs will be an improvement, claiming they can only lead to improvement (Berkwits, 1998), and much of this assumption is likely caused by the name, as previously mentioned. A theory with the word evidence in the title must be better than other theories, correct? And since evidence-based theories claim to be new and innovative, previous theories must not have had an evidentiary base, right? From a logical perspective, both of these claims are false: just because a theory claims to be based on evidence does not mean it is.
Conversely, just because other theories do not mention evidence in their name does not mean they are less evidence-based. As discussed earlier, this is partially why EBPs create problems through their name alone. For all practical purposes,
who is going to disagree with basing a theory or idea on evidence? From the title alone, nearly every person is going to claim to be evidence-based. Other assumptions err in the opposite direction: making false claims about other ideologies rather than defending EBP. One example is the claim that other ideologies do not believe in critically evaluating evidence, or see no reason to do so (Berkwits, 1998). This claim is false, as many other ideologies believe in evaluating evidence, and such assumptions can cause major problems within the whole debate about evidence. Perhaps EBPs are superior, perhaps they are not, but does not the very definition of EBP imply that this claim should be tested before being assumed? And what other assumptions are being made? Much of the literature discusses EBP supporters' concern over why there does not seem to be an increase in the use of EBPs, or why certain professional communities have not completely embraced them. This concern rests on an assumption: that EBPs are good and should be followed. The literature researched for this project also makes large assumptions regarding this concern. Every study discussing the topic assumed that the reason EBPs have not been adopted more thoroughly must be anything other than the possibility that they do not work. Aside from the fact that such an assumption goes against many of the ideals EBPs and logical positivism purport to value, does not the inability of EBP supporters to even consider that their theory may not be the answer they believe it to be demonstrate that the theory is incomplete? If a theory claims to be objective and yet its practitioners are just as subjective as individuals from other theories, does this not show that it is perhaps impossible for humankind to be completely objective, regardless of what a theory says?

For example, is it not possible that EBPs have not been adopted thoroughly by most of the fields in which they have been used because these practices are not as effective as their followers believe them to be? "We assume that manual-based treatments have something (not everything) to offer clinical practitioners… It is assumed that clinical practitioners would like to know and judge whether these treatments are effective" (Addis et al., 1999, p. 434). Then again, perhaps they are that effective, but is this not the point? Should an argument such as this not use evidence to reach a conclusion rather than an assumption? Is this problem not a hypocrisy within EBPs? As a small example of assumptions operating in clinical work, many of the resources previously available as clinical support have been redirected into surveillance and auditing work. This fact is rarely discussed and is assumed to be a positive change (Davies, 2003). But is this change positive? Is it not an assumption to assume so? If science is truly to be evidence-based, should scientists not test this assumption before accepting it as true? Another example brings us back to the familiar subject of standardized testing. Standardized tests make assumptions about what children learn and the knowledge they need to know (Bhattacharyya et al., 2013). The emphasis on standardized testing is based on the idea that teachers and students need to be held accountable for their learning, and on the assumption that the best way to enforce this accountability is through standardized testing. Furthermore, it is assumed that since every student completes the same test, the results will be valid and the method fair and accurate (Bhattacharyya et al., 2013). High-stakes testing uses multiple-choice questions with only one correct answer because it is assumed the student will know the answer (Amrein & Berliner, 2002; Bhattacharyya et al., 2013).
Again, this belief is an assumption, and it emphasizes the importance of philosophy in science. It also underscores that even standardized testing is never truly objective.

To summarize, much of what EBP claims to value and believe rests on assumptions rather than evidence. Perhaps the biggest example is the belief that requiring everyone in a field to practice EBPs will lead to improvement. This claim is an assumption and has not been tested or shown to be true. Further assumptions exist as well.

Critical Thinking Skills

This issue leads to the next criticism: EBPs actually close the doors on curiosity and critical thinking, and they betray scientific values (Silk et al., 2010). Because they so completely emphasize the quantitative over all other forms of data, there is a tendency to accept whatever the quantitative data say as the definitive answer. Is a homeless person malingering? What does the assessment say? Is a client depressed? What does the depression inventory say? Is the music student progressing? Have they learned any new scales or not? In all these examples, the answer is given in simplistic, numerically driven terms that are treated as the answer, rather than as part of a possible answer. One criticism of this approach is that, practically speaking, solutions are never simple; they are complex. There are many factors at play in any system, and any attempt to oversimplify a problem ends with a loss of information or an incomplete solution. In the examples just mentioned, is it not possible that all the answers are incomplete? Is it really possible to say an individual is malingering from one test, or even two or three? Can one assessment really indicate the presence of depression completely and thoroughly? And just because a music student can play a new scale, does that imply they are improving? Is there not a lot more to improvement than the ability to play a scale?

An example comes from the nursing field, related by Gambrill (2010). As Mary points out, after being trained in EBPs, her nurses become frozen and less likely to respond to patients because they worry they will not be following the evidence. She tells the story of one nurse who refused to check on a patient because she had not yet seen the evidence about how to interact with a patient struggling with that particular illness (Gambrill, 2010). One of the ironies surrounding EBPs is that these philosophies are touted as necessary for promoting critical thinking skills (Gambrill, 2010). Again, this claim may be an assumption on the part of EBP promoters, and more research should be done before the statement is accepted as valid, suggesting once again that the EBP philosophy is not perfect. Teachers report that their main job is instilling a love of learning in students, a task accomplished by teaching students the importance of critical thinking (Heubert & Hauser, 1999). Sadly, standardized testing has made it nearly impossible for teachers to instill this love of learning (Bhattacharyya et al., 2013) by promoting memorization as the primary learning tool, thereby diminishing the importance of critical thinking skills (Bhattacharyya et al., 2013; Heubert & Hauser, 1999).

To summarize, EBPs promote a culture of conformity, in which everyone is expected to follow the same protocols, think in a similar manner, and perform the same actions. These policies lead to a decline in critical thinking skills, rather than the increase EBP supporters have claimed.

The Importance of Creativity

EBPs also limit creativity and exploration. In subjects such as music, the exploration of new ideas is not only encouraged but considered necessary, a vital part of developing new forms of art and creativity. If performers are limited
to performing in a certain way because that is how they have been taught to perform based on evidence, when will any innovation occur? In medicine, a field usually years ahead of other areas of research (Lilienfeld, 2014) and one of the first fields to embrace EBPs, the consequences of EBPs can be summarized as follows:

1. An emphasis on, or basis in, the financial (Gambrill, 2010; Howick, 2015).

2. An unrealistic and unmanageable amount of evidence being produced (Howick, 2015).

3. Accuracy issues within the research (Howick, 2015).

4. Questions about the applicability of the evidence when applied and used in the world (Howick, 2015).

More consequences may exist; however, these four complaints cover the bulk of the problems. Science has no idea how the processes of innovation and creativity occur (Amabile, 1988). Scientists have little idea what creativity even is, and, as with EBPs, defining creativity precisely has proven impossible. But they do know that creativity is important (Csikszentmihalyi, 2013). In fact, the most successful people, including scientists, are not necessarily the most intelligent but the ones with the highest levels of curiosity, motivation, and creativity (Amabile, 1998). Creativity is essential to a variety of industries, subjects, and ventures (Amabile, 1996). Business imperatives such as coordination, productivity, and control are recipes for destroying creativity. Organizations frequently kill creativity with deadlines: fake deadlines can breed distrust, while impossible or tight deadlines cause burnout (Amabile, 1998). Environmental factors are the most important consideration in supporting creativity, as people will not be creative if their environment does not support taking new risks. Individual creativity and organizational systems are interdependent (Amabile, 1988). In other words,
humans need systems that allow for creativity in order for it to prosper. Without systems that are open and risk-taking, which EBPs are not, people will not develop new ideas, new ways of thinking, or new treatments. Sadly, creativity is more often terminated than supported (Amabile, 1998). EBPs are the opposite of creativity, and creativity needs new ideas and new ways of approaching the same problems (Amabile, 1988). As already seen, EBPs create a cookie-cutter approach to treatment, education, and medicine, and lack a basic understanding of scientific knowledge. How can creativity grow in this type of environment? In fact, it has long been known that extrinsic constraints in the work environment, such as evaluation, reward, surveillance, competition, and restriction of choice, can indeed undermine creativity (Koestner et al., 1984; Kruglanski et al., 1971; McGraw & McCullers, 1979). People with the least amount of knowledge are often the most creative (Amabile, 1988), and EBPs discourage this type of mentality as well. In music education, the impact of this idea is stark. Music is a creative and subjective field, and as mentioned, EBPs do not handle creativity well. In fact, the latest teaching philosophy encourages conformity and leaves little room to challenge the establishment; creative thinking is not encouraged and is next to impossible to enact (Finney, 2002). One example is the emphasis on technique that EBPs in music education encourage. Technique is viewed as the proper way to play, and most music teachers focus some of their teaching on technical skills. The reasoning behind this teaching is well intended, as it is often believed that good technique is necessary to play well. However, this idea is again a philosophy. When one researches it, one finds a history of very successful musicians playing with not merely flawed but absolutely horrendous technique.
Take, for example, one of the most popular instruments of all time: the electric guitar. If technique were important, musicians
such as Jimi Hendrix and Stevie Ray Vaughan would never have existed. Nor would their contemporary, Jeff Healey, who played with his guitar lying face up on his lap due to his visual impairment. Yo-Yo Ma's technique on cello is flawed, as is Joshua Bell's on the violin. Herbie Hancock's piano technique is reportedly problematic, as is Keith Jarrett's. In fact, Keith Jarrett's technique is so bad that his physician told him he needed to change it (Keith Jarrett – The Art of Improvisation, 2005). What about processes such as composition? How can composition be taught from a strictly EBP perspective? Composition requires risk, creativity, and personal expression. If EBPs had always been in force, music would not have innovators such as Beethoven, Wagner, Bach, Mozart, and Schoenberg. EBPs excel at teaching objective rules, so they could certainly teach the basics of arrangement and structure, and inform a composer of the techniques used by famous composers. But how can one teach composition itself (Werner, 2015)? Another of the most important factors in human behavior, largely ignored by EBPs because of their belief in measurability, is motivation. Of all the aspects determining human behavior, motivation might be the trickiest to study, and not all forms of motivation have the same impact on creativity. Extrinsic motivation, the kind supported by businesses, organizations, and educational systems, often crushes creativity, while intrinsic motivation increases it (Amabile, 1998). In summary, EBPs stifle creativity. They expect everyone to conform and use the same treatments, techniques, and methods rather than encouraging and accepting differences. They fear the different and seek to establish a simplified version of what is right, appropriate, and usable, rather than accepting the complex. Important factors like motivation are also ignored by EBPs.

Other Theories

One of the more recent observations regarding EBPs is that they create a distinction between practitioners and users (Upshur, 2003). There is a growing recognition that EBPs are not used as often as they are promoted, studied, researched, and encouraged; in other words, many practitioners are not using EBPs, regardless of their training or education. This inconsistency between philosophy and practice is creating an "us vs. them" mentality in certain fields, a problem caused in part by the misinterpretation and misunderstanding of other theories and methodologies. For instance, supporters of EBPs claim their beliefs allow them to practice their craft with courage, curiosity, empathy, humility, integrity, and persistence (Gambrill, 2010). They also claim that an important principle of EBP is transparency, which includes the ability to admit what is not known (Gambrill, 2010). While these values are all well and good, are they not also promoted by other theories? The idea spread by proponents of EBP that these values are practiced solely by followers of EBPs is not only inaccurate and misleading, it paints a frightening picture of other ideas and beliefs. It gives the reader the false sense that EBPs are superior because they are the only philosophy of knowledge that encourages these values. A glance through research touting the importance of EBPs reveals claims such as social workers being obligated by their code of ethics to draw on research that allows them to share an evidentiary basis with their clients and will be helpful to those clients (Gambrill, 2010). Is there not a danger in this statement? Does it not imply that anyone using a different methodology than the one EBPs promote is automatically unable to provide an evidentiary basis, or that their work will not be helpful to their clients?

Much research also mentions the need to admit to methodological flaws (Gambrill, 2010). Again, does this statement not imply that other methodologies are unwilling to admit their flaws? Is this not another dangerous assumption? So, what options exist besides EBPs? A wise man once said that criticism works best when the critic presents alternatives, so what alternatives does science have? One alternative to EBPs is simply using basic scientific knowledge: normal deduction from scientific knowledge is a valid way to practice (Sehon & Stanley, 2003). What if psychology switched from evidence-based treatments to science-based treatments? Would that solve the problem? Recently there has been some pushback against EBPs, with some practitioners encouraging a change to the term evidence-informed practices (Morrison, 2001; Nevo & Slonim-Nevo, 2011). This term certainly seems more appropriate; supporters of evidence-informed practice point to the limitations of EBPs, arguing that evidence-informed is not only more realistic but allows for a more comprehensive view of the scientific process. As shall be seen, one of the problems with EBPs is that they are not the effective treatments they claim to be, and they rely on some decidedly problematic science to promote their theories. A simple solution would be to change the meaning of evidence-based to match what the evidence actually supports. In psychology, this new system would look more holistic, incorporating multiple forms of treatment and an emphasis on the therapeutic relationship, since the latter is overwhelmingly supported by evidence. It would be a return to a complex system in which the expertise of clinicians holds value: "Rather than touting the superiority of manual-based psychotherapy, discussions should focus on practitioners' perspectives on the utility of these treatments" (Addis et al., 1999, p. 439).

Regardless, one of the most important and necessary changes is a return to a basic understanding of science and a recognition that the simplistic system supported by EBPs, with its black-and-white, binary view of data and research, is not realistic and has not led to the results society desires. This applies doubly to the definition of evidence. For example, science will likely need to end its reliance on the hierarchy of evidence so heavily encouraged by EBPs, especially the faith placed in RCTs. RCTs are inherently reductionist in nature, which means their findings may be true, but their truth is trivial and unknowable. It is actually impossible to generalize RCTs to a wider population or circumstance, despite standard thinking (Morrison, 2001). In education, this change would appear as less reliance on testing and a fuller appreciation of the complexities within school systems; it would probably mean a return to teachers having more say in how they run their classrooms. In science, it would appear as a return to basic science, where the complexities of systems are understood, acknowledged, accepted, and regarded in a positive manner.

To summarize this section, there are competing ideas the sciences can use besides EBPs. These philosophies also promote scientific values but generally acknowledge a more complex and holistic understanding of science. One of the frontrunners to replace EBPs is evidence-informed practice, a philosophy that seeks to use evidence to inform decision making while lacking some of the extreme traits of EBPs.

Authoritarian Mentality

EBPs also encourage an authoritarian mentality (Gambrill, 2010). This mentality is far different from an authoritarian model of government, and it should be noted that this statement is not an attempt to label supporters of EBPs as evil, vindictive, controlling individuals who favor oppressive government. It is rather a statement about the natural consequences of a
system in which only certain people are allowed to determine what qualifies as evidence and what that evidence says, while everyone else is expected to take their analyses at face value, without questioning, arguing, or disagreement. The product of such a system can only be less critical thought and more authoritarianism (Gambrill, 2010). EBPs are notoriously difficult to implement and enact (McCluskey & Lovarini, 2005). They require a huge amount of education and experience, and even if a teacher, practitioner, doctor, or manager is experienced and educated, there is still no guarantee they will buy into the idea, much less all the ideas, of EBPs. Even the way EBPs are taught influences their effectiveness (McCluskey & Lovarini, 2005). Research has demonstrated that practitioners commonly grow in knowledge of the latest purported evidence yet still do not increase their practice of EBPs (McCluskey & Lovarini, 2005). Professionals also face great barriers when trying to implement EBPs, the most difficult of which is staying current with the latest research. The process of reviewing the literature itself takes tens of hours, requiring the reader to investigate each author's background, uncover who funded the research, and then seek out any criticism. At that point, the reader must still decide whether the researcher's claims make sense and are valid, confirming previous statements about the inherently subjective nature of science. In fact, one of the studies read for this dissertation examined the effectiveness of EBPs and reported that 94% of therapists cited time constraints as the largest factor in not implementing EBPs more often.
The fact that EBPs are difficult to implement not only creates misunderstanding about their effectiveness; it also creates financial, educational, and emotional hardships across the whole system. That EBP supporters expect others to
follow their methods regardless of their own personal beliefs demonstrates how authoritarian this belief system could one day become.

Faulty Research

Perhaps the biggest problem with current EBPs, at least in the field of psychology, is that most of them are actually bad science. The truth about EBPs is that they are not good, effective treatments (Shedler, 2017). The irony is that EBPs claim to be more scientific than other treatments, yet this claim does not match reality, as these beliefs stem from poor science and faulty research. According to Slavin (2008), three factors (involvement of commercial companies, small numbers of studies, and high stakes) should cause reviewers to be skeptical of much of the research being performed. A good study should focus on detailed reporting, thorough review of methodology, and openness about limitations. More issues than these three may well exist. There is a real lack of quality research supporting EBPs, and many poor studies are easily found. This is difficult to excuse, since proponents of the theory proclaim the importance of exceptional research and treat following the findings of research as a pillar of their orientation. For example, during research for this study, one paper promoting the virtues of EBPs itself reported a study designed without a control group (McCluskey & Lovarini, 2005). The social sciences and humanities have a responsibility to display the consequences of changes in democracy, race, gender, class, nation-states, freedom, globalization, and community, and to show how EBPs influence the social structures that lead to these changes (Silk et al., 2010). Sadly, these responsibilities are absent from much research. To begin this section, consider this quote:

We do not have convincing studies showing that patients whose clinicians practice EBM are better off than those whose clinicians do not practice EBM: no one has done a
randomized controlled trial of EBM with patient outcomes as the measure of success. (Haynes, 2002, p. 5)

This statement means there is no real evidence showing EBPs to be superior to other forms of treatment, or to be the vast improvement in treatment they purport to be. So why are they still being pushed, and why do supporters of EBPs continue supporting them? Is not a hallmark of their theory to test everything? Does this not mean their theories have been tested and their hypothesis proven false? By their own standards, does this not mean EBPs are ineffective and that science should not be using them for treatment? Research practices used in many evidence-based treatment studies are problematic and inconsistent with the values evidence-based proponents espouse (Shedler, 2017). Empirical research actually shows EBPs to be unusually ineffective for most people, most of the time. It shows that EBPs are weak treatments and that most patients do not get well; their benefits are trivial and usually do not last (Dalal, 2018; Shedler, 2017). Psychodynamic therapy for depression, for example, has been suggested as effective, though the same research indicates that time-limited treatment is not sufficient for most patients (Driessen et al., 2013). Research has found that the average client receiving CBT for depression remains depressed after treatment. Panic disorder fares similarly: the average client treated for this disorder still reports at least one weekly panic attack and still endorses four of seven DSM symptoms after receiving CBT. One of the largest findings regarding EBPs is that their effects are temporary. When patients from EBP studies are followed over time, their gains diminish, and the majority of patients receiving an EBP treatment return to therapy within 6 to 12 months (Shedler, 2017). Only about half of therapy clients respond to any intervention, and only a third ever reach remission.
Most clients do not remain in remission unless they are receiving ongoing treatment (Hollon et al., 2002).
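The chain of figures just cited compounds multiplicatively. The sketch below is a hypothetical illustration of one way to read those numbers (it treats the remission figure as applying to responders and assumes a sustained-remission rate, neither of which the cited studies state in this exact form); it is not a calculation reported by Shedler (2017) or Hollon et al. (2002).

```python
# Hypothetical worked arithmetic: how modest response, remission, and
# sustained-remission rates compound into a small fraction of clients
# with lasting benefit. Every value here is illustrative, not reported data.
respond = 0.50      # "about half of therapy clients respond"
remit = 1 / 3       # "only a third ever reach remission" (read as: of responders)
stay_well = 1 / 3   # assumed: only a minority of remitters stay in remission

lasting_benefit = respond * remit * stay_well
print(f"fraction with lasting benefit: {lasting_benefit:.1%}")  # about 5.6%
```

Under these assumed inputs, the product lands in the neighborhood of the 5% figure discussed in this chapter; different but equally plausible readings of the rates would shift the result, which is precisely the point about the role of interpretation.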

If one considers only patients who improve to the point of being mentally healthy and stay mentally healthy, the figure drops to 5% or fewer of all clients originally seeking help. In other words, an accurate interpretation of the research on evidence-based treatments suggests that approximately 5% of all clientele seeking treatment experience lasting benefits from evidence-based treatments (Shedler, 2017). A large part of this lack of improvement rests on the EBP idea that treatment serves only to reduce symptomology. As previously discussed, this idea is part of a philosophy: if the goal of treatment is only to temporarily reduce symptoms, then the treatment is successful. Once again, this example demonstrates the importance of philosophy, as the success of a treatment changes depending on the philosophy applied. The quantitative, evidence-based idea that therapy is successful if symptomology has been reduced has proven ineffective (Shedler, 2017); if the philosophy is changed to define success as lasting remission, then EBPs are not successful treatments. Even CBT-based researchers have admitted that their studies do not prove patients have improved in any meaningful way; for example, most researchers do not know whether their clients' everyday lives have improved in meaningful ways (Kazdin, 2006). One example of problematic research can be found in one of the largest studies on depression, published by the NIMH in a 2013 edition of the American Journal of Psychiatry. This study claimed that the EBP's results were superior even though its own discussion acknowledged no significant difference in effectiveness between the two treatments (Driessen et al., 2013). In this study, the control condition compared to CBT was labeled psychodynamic in nature but was not delivered by practitioners experienced and trained in psychodynamic therapy, as the CBT practitioners were.
Rather, the psychodynamic therapy was performed by graduate students who had received only two days of training in this form of therapy. And
the results of this study were still reported as CBT showing superior results to psychodynamic therapy (Gilboa-Schechtman et al., 2010). Many actions taken during research, such as allowing students to perform therapy in which they were not adequately trained, would be considered unethical if done by a practicing psychologist (Shedler, 2017). Very few studies have been done that directly compare evidence-based treatments with other forms of treatment. A meta-analysis studying this subject found 2,500 studies in existence claiming to compare EBPs with alternatives, while only 14 of the studies actually did. This meta-analysis found no difference between evidence-based treatments and the other therapies used and also found that EBTs were frequently compared to doing nothing in these studies (Wampold et al., 2011), making their claims of effectiveness relative to other treatments false. This study is an example of another problem EBP research frequently demonstrates: not comparing itself to alternative and effective treatments. This lack of comparison happens because control groups compared to CBT groups are usually not genuine alternative therapies performed by experienced practitioners (Shedler, 2017), as seen with the NIMH study. The previously mentioned NIMH study demonstrates another possible research problem: subjects quitting during the research. In that study, many of the patients dropped out, and this fact was not mentioned in the research article until the results section. What do dropouts mean for an effective treatment? Statistical models would ignore these dropouts, but one could certainly argue they support the idea that a treatment is ineffective. Sadly, many studies are written in such a way that understanding the details, and the mistakes made in interpretation, requires going through the entire study with a fine-tooth comb (Shedler, 2017).
In many randomized controlled trials of evidence-based treatments, about two-thirds of patients end up excluded from the study beforehand. Sometimes exclusion rates have exceeded 80%. Because of these high exclusion rates, studies can report better outcomes. The patients often
excluded from these studies may have dual diagnoses, personality disorders, appear to be too unstable, or may be suicidal, similar to the patients seen every day by real-world psychologists (Shedler, 2017). These examples are consistent with the previously mentioned idea that EBPs are not effective science. Methods used in much of the scientific research to prove what works vary in fundamental ways and often lead to inconsistent conclusions (Slavin, 2008). Plus, EBPs serve to bolster the objective claims of science that are not really objective (Silk et al., 2010). There are plenty of articles that claim to be evidence-based but are not (Gibbs & Gambrill, 2000). Similarly, methodological studies frequently disagree with each other, making it easy to understand why proponents of EBPs would feel their methods are superior and necessary (Haynes, 2002). There are many programs and evaluations where only one study or research project has been done, and policy decisions are being made based on this single study. This is bad science (Slavin, 2008). Furthermore, "there is a mismatch between the questions studies of 'evidence-based' therapy tend to ask versus what patients, clinicians, and health care policymakers need to know" (Shedler, 2017, p. 322). In addition to the issue of bad science, many of the advances EBPs report are not actually advances at all. The picture of steadily advancing knowledge proposed by EBP is not accurate. Advances are often incremental and slow, contain many false steps, and true breakthroughs are few and far between. Only a very small amount of the information in scientific studies will contain new knowledge that has been accurately tested and is important enough to make a difference in the way clinicians practice (Haynes, 2002). There are inconsistencies between various agencies and how studies are evaluated, with some agencies refusing to review studies unless they are completely randomly assigned, while other agencies do not focus on random assignment at all (Slavin, 2008).
Multiple problems exist
within the groups gathering EBP information, such as the What Works Clearinghouse (WWC). The WWC specifies in great detail certain inclusion and synthesis procedures needed for use in research but in reality allows for a wide range of variation in research: The WWC has suffered from an inability to meet its own expectations in terms of completion of reviews. After several false starts and many controversies, the WWC announced in 2004 that several of its key reviews, such as those on beginning reading and middle school math, were about to appear. These and others were not posted until summer 2007 and still have major gaps. Potentially, the WWC is the most important of the synthesis efforts for policy, because it alone carries the endorsement of the U.S. Department of Education. (Slavin, 2008, p. 5) Procedures used by programs such as the WWC emphasize randomized, but very small, experiments. These experiments largely determine the ratings given to many programs (Slavin, 2008). The CSRQ, or Comprehensive School Reform Quality Center, claimed to be evidence-based but used research methods that were quite different from the WWC's. Its methodologies focused more on the number of studies, review methods, and statistical significance. Its federal funding was ended in 2007 (Slavin, 2008). The Evidence for Policy and Practice Information and Co-ordinating Centre in the UK based its reviews on variables rather than specific programs (Slavin, 2008). Publication bias is also a big problem, with journals frequently unwilling to make negative evaluations available. Furthermore, this effect is increased for quantitative studies, and there are frequently problems with companies carrying out their own evaluations or contracting out the evaluation (Slavin, 2008).
Regarding the frequently discussed standardized testing, most teachers agree: standardized testing does not accurately portray what students know (Rebora, 2012). A wide variety of test results can be seen in standardized testing depending on the location, culture, age, diversity, and population being tested. Despite these discrepancies in scores, proponents and test-developers continue promoting these tests as effective (Bhattacharyya et al., 2013; Kumeh, 2011). For example, the improvement in student performance in Texas that is often credited as evidence of standardized testing's success occurred at the same time class sizes were decreased, spending on education increased, and court-ordered equalization of resources occurred (Kohn, 2000). This interpretation is bad science and demonstrates that EBPs are not the perfect scientific system they claim to be. As previously mentioned, much of the faulty research exists because of problems with random assignment. Many studies have claimed random assignment but are actually not random (Slavin, 2008). Retrospective designs have their own set of problems, as the field of subjects is narrowed by the passing of time (Slavin, 2008). One of the more problematic issues encountered during research for this dissertation was the size of studies, with smaller studies being more common and unable to predict accurate effect sizes. Many studies, especially in certain fields such as education, use small sample sizes, which can create inadequate statistical power and confounding. These studies with small sample sizes are less likely to have a null result and more likely to have extreme effect sizes (Slavin, 2008). Statistically testing for effect sizes and coding for characteristics and procedures are rarely possible in program evaluations because of the small number of studies within each program. Computing a rating of study quality does not work because the number of studies is usually limited.
This limitation cannot be balanced out by another feature of the study and introduces a significant amount of bias. As an example:
If there were multiple large-scale, randomized, multiyear evaluations of each of several educational programs, then reviewing the evaluations would be straightforward. Given that this is not the case, however, the reviewer faces a dilemma. One could decide to make inclusion criteria extremely stringent, but the result would be a very small set of programs because few have even a single qualifying study. (Slavin, 2008, p. 7) Compromises are needed if a broader set of studies is being performed on a broad set of programs. Reviewers need to decide which compromises are worth making in research (Slavin, 2008). Clustering, or failing to account for the level people are naturally at when being placed into a study, is frequently a problem as well (Slavin, 2008). Few disciplines besides psychology emphasize significance over change, and this reliance on statistical significance is problematic. For one, statistical significance is somewhat subjective. In other fields, when a meaningful change is witnessed, investigators emphasize this change. For example, if a drug lowers a person's blood pressure, science discusses how much it lowers it. For weight loss programs, scientists report the average weight loss. For cholesterol drugs, scientists report how much cholesterol was lowered, not statistical significance (Shedler, 2017). Furthermore, a gap exists in the way research is explained, with researchers often referring to statistical significance and most people mistakenly assuming this phrase means people get better (Shedler, 2017). Publication bias also contributes to the idea that evidence-based treatments are superior to other treatments. This bias refers to the fact that publishers prefer studies showing large effects (Shedler, 2017). Proponents of EBPs also routinely ignore evidence not matching their beliefs, such as automatically dismissing evidence for therapies that are not manualized or scripted (Shedler, 2017).
For example, during the course of research for this dissertation, studies were found where EBP use was assessed using a single question. The authors claimed, "Practitioners may have felt pressure to report high levels of EBP" (Nelson & Steele, 2007, p. 328). In summary, there is much faulty research and science in EBPs. For starters, there are many studies in psychology claiming the superiority of EBPs, but only a dozen or so comparing EBPs to other forms of treatment. These comparative studies have all concluded EBPs have no superiority over other treatments, even though thousands of reportedly scientific studies have made this claim. Many studies contain irrational or inaccurate interpretations and hide details in their discussion sections. There are further problems with sample and effect sizes. In general, the research does not support the superiority of EBPs.

Music Education

Perhaps no other subject demonstrates the incompleteness of EBPs as well as music education. This subject has undergone a devastating transformation in recent years, largely due to the adoption of EBPs across the education system. These problems include a lack of funding and the cutting of an overwhelming number of music programs across the country (NAfME, 2020). There are also many barriers to the learning of popular music in schools. American teenagers spend only slightly fewer hours listening to popular music than they do attending school over their 12-year school careers (Law & Ho, 2015). But other problems exist. For example, music education has demonstrated a severe deficit in its ability to stay in touch with today's students and music (Stambaugh & Dyson, 2016). Other countries have experienced deficits as well. China has experienced a recent resurgence of interest in Western classical music, which is likely the result of the one-child policy.
Thirty-six million children in China study the piano, as compared to 6 million in the United States, and 50 million children in China study the violin. Parents in China encourage their children to excel at music the same way
parents in the United States encourage their children to excel at sports. However, China also struggles to implement policies allowing for the inclusion of today's popular music, thereby diminishing the number of people music education can reach. There are large differences between education methods in various countries. Books used for music appreciation teach the traditional Chinese concept that good music produces good morals. Creativity and individuality are now goals of the curriculum in Chinese music education policy (Huang, 2011). Some studies have questioned the importance of a school music education to students' own musical worlds. Students report popular music as allowing them to "feel the emotion" better. Sadly, an examination of 10 different schools showed they focused more on preserving classical music's heritage than on educating through the use of popular music. Numerous studies have demonstrated that music students feel torn when learning popular music (Law & Ho, 2015). Schools have tried incorporating popular music into curricula but have not experienced much success doing so (Law & Ho, 2015). Facts such as these have caused people within the music education community to voice concern that music education is facing a crisis of irrelevance (Williams, 2007). A large reason for the changes in music education appears to be the promotion of EBPs amongst educators. But what have the effects of this promotion been? The emphasis on quantitative evidence means that more surveys, tests, and questionnaires need to be completed. This emphasis means teachers will not be able to make evaluations unless they have thoroughly documented what they are trying to evaluate and will not be allowed to make assumptions without this documentation. In other words, a teacher cannot make a simple evaluation of how well her students are doing by simply examining their grades and their behavior.
She will need to have her students assessed, fill out surveys, complete documentation, examine parental questionnaires, and more. Similar to the therapist who is no longer allowed to assume their client
is depressed, even when the therapist personally witnesses the depression and hears the client's own report, the teacher's evaluative evidence is discounted. Currently, one's feelings, assumptions, and experience are not considered part of evidence at all and are completely discounted (Eisenhart & Towne, 2003). If a teacher's experience tells her that her class is performing about as well as can be expected, especially given its current circumstances, this opinion counts for nothing because a test, survey, or evaluation may say otherwise. Similar to how a psychologist cannot deem a person intelligent based on anything other than a standardized IQ score, this type of assessment leaves large holes in the way conclusions are reached. Certainly, quantitative and standardized information is helpful and a large part of many determinations when assessing a situation. However, at what point does a standardized assessment become more important than a knowledgeable person's experience? If the teacher has spent all year with her class, could it not be argued that she understands and knows her class better than any assessment? The effect of this movement in education, which is largely influenced by the business world and the desire for financial accountability, has been to make classrooms more quantifiable (Eisenhart & Towne, 2003). Teachers, as well as students, are held accountable, measured, and studied. The impact of this quantification has been large. For starters, teachers no longer feel secure in their jobs and their teaching. Constantly observed and studied, they feel more pressure than they used to, which they claim adversely affects their performance. They also feel as if they no longer have the option to teach authentically, in a manner congruent and natural to themselves; their personal style is gone, and developing a unique method of teaching is no longer an option either.
In the documentary Waiting for "Superman" (2010), filmmaker Davis Guggenheim delves deep into the public education system to witness some of its current horrors. One of the most
potent moments in the documentary occurs when one of the successful teachers being interviewed discusses the process of firing teachers. This process is an exact and applicable demonstration of the effects an extreme reliance on quantitative evidence can have. The need for accountability when firing teachers has made the process so complex, difficult, and long that it has become next to impossible to fire teachers. For example, the process to fire a teacher takes more than 3 years and requires an administrator to complete a 30-page booklet documenting all the reasons the teacher is being fired. During this process, the teacher is given certain warnings and chances to improve their teaching. If, for some reason, the administrator misses some of the exact deadlines, which are not easy to meet, the whole process starts over again. Students are affected by this process as well. The constant evaluation of students, manifested as the need to consistently measure and test them, has created an environment of overtesting. This overtesting has resulted in a culture where teachers complain they now have to teach "to the test," rather than teach for theory or understanding, as they would like to. The results of this teaching to the test have not been kind. The state of education has been on the decline for many years, and though certain improvements have been shown, there have also been many educational regressions. As a contrast, currently the top education system in the world is Finland, a country where students are tested, at most, twice a year. In fact, Europe as a whole engages in much less testing of its students, and perhaps consequently, its education systems are better for it. This trend to "teach to the test" has created its own set of problems, with reports of cheating amongst teachers and students increasing by a large margin since these policies were implemented.
Philosophy, once again, plays a major role in understanding the errors of thought in this process. Teachers and schools unable to achieve the standards are threatened with the prospect of job loss and decreased funding. They are also held responsible for all their
student scores, regardless of whether these scores are their fault or not. What if a teacher happens to have less intelligent students in her class in a particular year? What if a teacher's class comes primarily from an underprivileged area, where most students' parents are not as present, emotional problems are more prevalent, and students are therefore less likely to pay attention and achieve in school? Are these teachers completely responsible for their students' scores? In fact, is any teacher completely responsible for their students' test scores? What about the high school teacher whose students refuse to study? Is this lack of studying, and the low test scores that will likely result, the teacher's fault? This scenario exists in other areas of life as well, including the private sector. One of the most obvious examples is the use of quotas in the business world, which has created similar problems, the most obvious being that quotas make the job more difficult to perform. Research has shown that corporations that actually trust their workers to perform their jobs to the best of their abilities, and do not attempt to hold them accountable, have increased productivity. Since many of the consequences in the educational world mirror those in the corporate world, is it not realistic to believe the educational world would witness a similar increase in productivity if teachers were also trusted and held to less rigid standards? Educational funding is also affected by this change in evidential value, with increased funding going to the schools that achieve the standards set by the Department of Education. This funding structure creates its own set of problems, with schools that are often already better off achieving superior scores and receiving a larger share of the funding. Is this philosophy of spending not backwards?
Should not the schools with inferior scores be receiving the increased funding to help them improve their programs? Once again, a strong philosophical background is the main factor in this argument. The current reasoning assumes that schools with higher test
scores are more responsible and spending their money better; therefore, the financially accountable choice would be to give more money to the schools that have already proven they can spend taxpayer money wisely. But this idea stems from the fear of public taxpayer money being spent unwisely, or on educational concepts the public disagrees with, not from a place of concern about the education of children and what educational policies are most likely to help them. Another philosophical conclusion regarding this policy is that of incentives: schools are incentivized to perform better because they want taxpayer money. In essence, the schools not receiving an increase in public funding based on their education scores will have motivation to improve their school systems in an effort to increase their funding. Once again, is this philosophy complete and sound? Responsibility and accountability should surely be a consideration regarding funding, but should they be the sole incentive for school performance? In essence, these policies are saying a school system's primary motivation is financial. This idea fails to account for motivations such as wanting children and students to succeed, the simple joy of teaching, or caring for others. Similarly, the philosophy behind test scores fails to account for other possible reasons behind low or high scores, such as lack of ability, environmental factors, illness, lack of funding, poor support systems, and more. Is it not possible that these philosophies are faulty and incomplete? One could even argue they are doing more harm than good. Since these evidence-based values have resulted in a fundamental change in the way school systems educate students, it is reasonable to believe that these policies are also affecting music education. For starters, funding for music education has decreased drastically in the past two decades, with many schools dropping their music programs completely.
Entire cities with budget problems in their education systems, such as Chicago, have cut music education programs citywide (Fang, 2013). There is even concern these programs could
eventually be cut across entire states, or nationally, if the current budget systems and priorities continue on their current path. While lack of funding may be the biggest and most important concern in music education, it is hardly the only issue created by this new emphasis on EBPs. However, this area of interest has not been highly researched and is fertile ground for exploration. A brief glimpse at the research suggests that music education is also suffering many of the same consequences as the more generalized subjects of public education (NAfME, 2020). Examples include teachers being held solely responsible for their students' success, as well as standards dictating whether or not these programs will receive funding. Standardization has played a large role in these consequences, with teachers needing to create standard ensembles such as bands, orchestras, marching bands, and jazz bands to be judged and compared to other schools. This process creates needs for the teachers that cannot be ignored, including the limiting of instrument availability to students. If a teacher is fortunate enough to have 50 students who all want to play flute (or unfortunate enough, as many teachers and the NAfME would consider this scenario), a flute ensemble would not be considered appropriate, and the teacher will need to assign students to new instruments. Standardization has also created a limited curriculum for music students and teachers to choose from. Material must be chosen from an approved list and must meet certain agreed-upon standards to qualify for this list. If a beginning student is interested in playing Metallica on the tenor sax, this interest would not be allowed as part of public education (Law & Ho, 2015).
If a student is interested in playing solely in jazz band and has no desire to be in the classical or concert band as well as the marching band, this option is not allowed either, as music educators have determined that all music education must begin and be centered around a solid traditional,
classical background. The reverse is not true; students are allowed to participate in concert band or marching band without joining the jazz band (Salazar & Randle, 2015). One of the most important, and yet seemingly unasked, questions these practices create is how these values impact students. How many students either refuse to join the band or quit because they are required to participate in a musical experience they have no desire to be part of? If a student does not value, or is not moved by, classical music, they have no options within the public school music program, regardless of their level of musical talent or the fact that they may be musically creative and successful. The NAfME standards also dictate that students are to be taught a certain way, through music notation and not through other methods such as demonstration or playing by ear, even though research long ago indicated that students who start playing by ear at a young age are far more successful than their peers and than musicians who focus solely on reading music (Garson et al., 1974). But what is the purpose of these policies? Once again, reading music allows a musician's performance, skill level, and improvement to be measured far more easily than playing without notation. If a student is performing a song they wrote or an arrangement they created, it is far more difficult to judge skill and improvement than if they were reading a piece of music placed in front of them. Music notation also makes it easier to create various performance groups, such as bands and orchestras, which, again, are used to judge teacher accountability and assess funding allocation. In summary, there are many problems with music education, not least of which is a disagreement about music education philosophy. One of the most current battles in this field concerns the ignorance of popular music and the effect this oversight is having on the popularity of music education.
Further research suggests EBPs are partially responsible for the decline in the number of music education programs. Other important issues include the belief that students need to read
music, even though research has demonstrated music notation to be unimportant to a student's abilities.

History of Music Education

Music and music education, in both formal and informal settings, are complex cultural constructs that have existed across multiple cultures for centuries (Law & Ho, 2015). It is an old field of study, likely existing since humans first began creating music (Taruskin & Gibbs, 2013). In the earliest days of music, humanity's musical experience likely consisted solely of singing, although this is debatable. Certainly, by medieval times, the emphasis was almost exclusively on singing, and singing was reserved solely for the church (Burkholder et al., 2014). Not much is known about music education from this era, but it is a logical assumption that the songs and music were passed down in some form, meaning some form of education existed. The days of formal public school music education began in 1838 in Boston. This attempt to teach children in the public schools became the foundation of what is now known as public music education (Birge, 1937). However, music education did not become commonplace and part of the public curriculum throughout the United States until the late 1800s. Before this time it was reserved for the wealthy as a sign of their superiority. However, once it was discovered that music could be a tool to promote social change, music education became an important part of the curriculum, used to educate the poor and promote social change from within the poorer classes. Music became a tool to educate the common folk (Keene, 1982). This idea emphasizes one of the important contentions of this dissertation: The philosophy of music education is important and can be changed. Equally important to this emergence of music education as part of the standards of education was the common school reform era in the 1860s. This era was one of sweeping
reforms in the United States public education system, including an emphasis on school-wide instruction. It was believed that school could be a unifying force in the nation, and this belief began an era emphasizing mass public education and universal schooling. All students would be taught the same material and curriculum, as well as being required to attend school (Rury, 2009). Because of this movement, systematic training and mandates were created for teachers, a process that had a large impact on the field of music education. This early education began with an emphasis on vocal music, and music education was limited to four-part choral singing. Any type of music outside this style was not considered music and was not allowed within the public education system (Mark & Gary, 1992). This decision would also have a big impact on music education in the following decades. This shift from individualized music education outside of school toward an education system that included music forced a change in the way music education was approached. For the first time, learning music was no longer an individualized activity, but a group activity as well (Mark & Gary, 1992). These changes further emphasize the importance philosophy plays in music education and the impact it has had on the system. The following passage from Mark and Gary (1992) summarizes the role philosophy played in the early education system and how it changed the way music was taught: Music was integral to the education system during the progressive education era. However, when this era ceased to be a movement circa the 1950s, the philosophical basis for continuing to teach music was lost. No new education philosophy was instituted to replace what was lost, and there was, consequently, no intended direction for music education to follow. Music education became static.
This idea indicates that the country's educational philosophy plays a large role in how and what it decides to teach. A philosophy of education is just as important as, if not more important than, any evidence one may deem relevant to one's educational goals. This shift in
philosophy had an impact, and music education found itself at an impasse (Jorgensen & Ward-Steinman, 2015). However, this impasse proved to have positive benefits. Music teachers found themselves banding together to promote a more complete curriculum they called comprehensive musicianship (Sindberg, 2012). Comprehensive musicianship taught many aspects of music, including multiple music learnings, a variety of skills, technical proficiency, cognition, and personal meaning (Mark & Gary, 1992; Sindberg, 2012). Comprehensive musicianship was first introduced to the education system in 1965 and quickly reinvigorated and radicalized the music education system; it was soon the standard way of teaching music. In 1963, music educators met at Yale University to devise ways to entice more young people to pursue music. They left with the conclusion that music education needed to be broadened to include other types of music, such as folk and jazz. However, rock music was still deemed inappropriate to teach (NAfME, 2015). This decision proved to have serious consequences for music education, as students became less and less engrossed with the music of past centuries and hungry for an educational system that embraced contemporary styles of music (Kratus, 2007; Kruse, 2014; Ladson-Billings, 1995). Today the education system is headed back toward a focus on the more quantitative subjects, such as math and science. Education used to emphasize the three "Rs": reading, writing, and arithmetic. The current trend in education is headed back in that direction, and it is having an impact on music education (Taylor, 2011). This trend is being influenced by the government and educational spending, both of which seek to more narrowly define what research in education means in order to save money and promote financial accountability.
This trend dates back to the late 1990s and increased during the early 2000s, culminating with the No Child Left Behind Act, which defined educational research in narrow terms, using the familiar evidentiary hierarchy. Since this time, educational researchers and
government authorities have been promoting a system that embraces quantitative evidence, and they have championed randomized controlled trials as the pinnacle of all evidence. In 1999, the government officially defined, for the first time, what constitutes educational research, officially making EBPs part of the education system (Eisenhart & Towne, 2003). Once again, the impetus for this definition was to be certain federal dollars were being appropriately spent (Eisenhart & Towne, 2003).

There was a shift in the type of music research being performed from 1953 to 1978, with research moving from the humanities and philosophical studies to more scientifically-based psychological studies. Recent studies have focused more on the individual behavior of students when learning music. Music research used to be represented by more humanistic, historical, and philosophical approaches, but the type of research being performed changed as the scientific philosophy in the United States shifted toward a more positivist philosophy (Jorgensen & Ward-Steinman, 2015).

Students are currently learning more about popular music in informal music education environments than they are in schools and universities (Law & Ho, 2015). By not allowing the inclusion of popular music study, formal music education is currently disconnected from the ways many students experience music. There is currently a tense, dichotomous relationship between formal and informal music learning within the music education community (Salazar & Randle, 2015). The UK, US, and Australia began incorporating popular music in their curricula by the 1970s, but popular music has still not been fully embraced as a type of music worthy of study by the music education community (Law & Ho, 2015). However, the traditional music taught in music education may be causing it to become irrelevant: "Formal music education has limitations for students in the 21st century" (Salazar & Randle, 2015, p. 281).
Students claim their peers, as opposed to music educators, are actually their primary source of musical knowledge, and most students supported having a more "well-rounded" music education, including an education in popular music. Music teachers have encouraged students to learn popular music but largely want students to do so outside of their formal music training, so their performance and examination scores are not negatively affected. Students are requesting a more "open" music curriculum, one that embraces more variety, diversity, and acceptance of more styles (Law & Ho, 2015).

Most cultures throughout history have regarded music and music education as an important part of their culture and an important part of any individual's life (Reimer, 1997): "Music education has maintained the same traditions for decades. However, the longevity of traditional music education does not necessarily support its perceived value" (Salazar & Randle, 2015, p. 281). American music education is currently more congruent with the way people experienced music in the 19th century than with how students experience music today, and for decades music education has resisted the inclusion of popular music into the curriculum. This resistance is based upon the belief that popular music is inartistic and associated with some of the more negative aspects of students' culture. Music educators have also sought to preserve the history of this art form. These ideas are philosophies, and these philosophies are causing tension within the music education system (Salazar & Randle, 2015).

There has been increasing attention to the National Standards for Music Education, as well as standards-based teaching, since 1994, the year the standards were initially published (Williams, 2007). Because of these standards, music educators are currently focused on teaching students to sing and play in large ensembles (Williams, 2007). Williams (2007) stated,
To date, the profession seems uninterested in broadening its secondary offerings beyond the traditional bands, orchestras, and choruses established over the past century. The current system that both prepares preservice teachers and maintains in-service teachers appears intent on preserving the status quo. We could be protecting the very thing that is destroying us… This marked decline in music performance activity has been all but ignored by the music education profession… Today we are witnessing another societal shift—one that our profession is not prepared to react to, nor seems to care to address in any serious way. Accompanied by rapid advances in digital technologies, we are entering an era when interactions with music are becoming more complex. The lines between the traditional roles of listener, performer, and composer are blurring. Thanks to digital technologies, it's now possible to become the composer, the performer, and the listener at the same time—and much more accessible than at any time in history. Additionally, with the help of technology, music is becoming more multisensory as it becomes increasingly associated with visual stimuli… More so than at any time in our history, students can now do more musically at home without us than they can at school with us in most traditional music programs. (p. 4) Music and other artistic and creative forces have recently been seen in terms of recreation only, disregarding the role they play in an education curriculum. The idea that music and creativity can be an important part of education and exist in equality to other subjects is still not an idea that has been grasped by educators and the general public (Finney, 2002). Finney (2002) explained, But all this has a history. I am describing the case of music and the arts that sought in the 1960s and 1970s to find a place in a world that admired “objective reality” and “truth.” The triumph of the Enlightenment had nurtured a confidence in these notions. These, it
was thought, depended either on empirical verification, something could be shown to be the case, or on a particular form of logical reasoning. That which was accurately measurable, clearly quantifiable or rationally justifiable had objective status and was therefore worthy of assessment. Value judgements were recognized but were viewed with suspicion and thought to be inferior to objective judgements. Music, like all subjects claiming a place in the curriculum, had validity to the extent that it was able to establish objectives amenable to this kind of "assessment." Thus, at examination level certain kinds of knowledge, such as well-codified harmonic progressions and melodic patternings, musical historical truths and established interpretations of music, qualified as being assessable. This socially constructed knowledge rested upon a canon of works and procedures that endorsed high-status knowledge long codified, reduced and abstracted from musical experiences and from personal and social meanings. The objective model that could measure reliably held sway. (p. 121)

Previous generations believed that the arts were an important part of an education, and that no society could be complete without the advancement of the splendid and beautiful. Art, science, and learning are all means by which society grows and by which the lives of the society can become enriched and beautified (Finney, 2002). In the late 1980s there was a fundamental shift in educational values and how these values were being used to create policy. It was determined that student scores and assessment should be based on comparison, rather than comprehension. Since the introduction of music education, the intuitive has constantly been replaced by the formal (Finney, 2002).

Music has long been considered part of the aesthetics, and aesthetics have long been considered an important part of education, society, and history. They have been valued since the
time of Aristotle and Plato, but education reform in the late 1980s changed this idea. Music education was now considered the teaching of some less important skills (Finney, 2002). By the early 1990s, music standards had begun taking the form of quantitative statements, which was unsurprising given the cultural trend at the time. There has long been much talk about how to handle the relationship between the subjective and objective in music education, especially regarding assessment. Music education came to be considered the teaching of skills, and skills can be measured. Composing, performing, and listening were considered the main skills, and these skills can all be taught, measured, and studied (Finney, 2002).

Where music education has been successful, research has shown students growing in personal and social development, increasing both their knowledge and skill in a variety of art forms, improving their social communication, growing in expressive skills, and increasing their creativity. Where music education has not been successful, music students have noted a lack of space from which they can form their own meaning and relevance to the subject at hand (Finney, 2002). However, government reforms have achieved very little change in terms of music education (Finney, 2002). In this modern era of accountability, especially within the American school system, it is impossible to enact the changes needed to allow the independence and creativity necessary to teach music. Music educators around the world and outside of the US have been using pedagogical methods that are more progressive and inclusive for years. To say that music education should be more open to different methods and ways of learning should not be a controversial statement (Salazar & Randle, 2015).

To summarize, the history of music education in this country is long and varied, with many successes and many disappointments. A recent trend to focus on the quantitative elements
of music and an emphasis on assessment, consistent with an EBP approach to music education, has caused drastic consequences, including a diminished role for music in the education system. Other values, such as basing music education on reading music notation and on the quality of ensembles, are causing additional problems.

Philosophy of Music Education

Like the previous philosophy section discussing the philosophy of EBPs, music education is also based upon a philosophy: "Much of the tension between progressive music methods and traditional methods is caused by the inability to comprehend these differences in philosophy, or to accept variations of these philosophies" (Salazar & Randle, 2015, p. 286). A progressive philosophy of music would consider a student's experience of music and the multiple social roles music can play in a person's life, as compared to a traditional music education philosophy, which believes in the superiority of tradition, leadership, and group consistency (Salazar & Randle, 2015). However, the current philosophy of the American music education system is based upon a "narrow" philosophy (Salazar & Randle, 2015), and only recently have music educators begun paying attention to the idea of music philosophy (Stambaugh & Dyson, 2016). But since, as previously mentioned, philosophy exists in all subjects, it automatically exists in music education. Music education is currently based upon a philosophy that states Western musical traditions are superior and should be central to the role of music education. This is a very narrow definition of music education (Salazar & Randle, 2015). Current music education philosophy treats the teacher as the expert and expects students to follow the teacher's recommendations and lessons.
This philosophy contrasts with more comprehensive philosophies where the teacher and student consider themselves co-learners, and where the teacher is considered more of a guide than an autocratic authority (Salazar & Randle, 2015).
Yet there is no standard universal philosophy regarding music education and how to teach music. What basis does education have to believe it can even accurately articulate a music philosophy? There are values that exist in human nature that prevent people from universal agreement on a philosophy (Reimer, 1997). Some believe the goal of music education should be comprehensiveness (Reimer, 1997). Yet performance is the only issue that has been shown to be of equal concern to both music education and music philosophy, and music education is much more concerned with curriculum than music philosophy appears to be. Music philosophy is willing to attend to subjects such as religion and spirituality, while music education has not been open to these ideas. Teacher preparation is also of particular concern to music educators (Stambaugh & Dyson, 2016).

According to Confucius,

Music produces pleasure which human nature cannot be without. That pleasure must arise from the modulation of the sounds and have its embodiment in the movements of the body—such is the rule of humanity. These modulations and movements are the changes required by nature, and they are found complete in music. (Huang, 2011, p. 167)

However, Confucius did not become content with his musical progress until he fully grasped the intent of the composer. How can this skill be measured? Confucius also believed music could be used to overcome social and political problems (Huang, 2011).

There is currently a divide between what music students want to learn and what schools teach. Shouldn't students have some say in what they learn, at least regarding a creative subject? Should not student choice be part of the current music education philosophy? Despite its popularity, popular music has not been a significant part of music education in many countries of the world, including the US, which has the largest popular music industry in the world (Law & Ho, 2015).
And yet music education programs largely refuse to embrace
popular music as part of the curriculum. This goes against the natural instincts of students, as music taught in the standard school curricula is often viewed by younger people as "old people's music," while they view popular music as belonging to their generation (Green, 2006). There is a tension between the way music is currently taught and students' interests in music (Salazar & Randle, 2015).

An increase in popular music within music education is blocked by two separate barriers: adults' attitudes towards popular music and concerns about popular music's relationship to rebellious behavior. Part of this blockage exists because there is a stereotypical view that classical music encourages elite behavior, such as education, while popular music encourages behavior associated with "the unwashed masses." There is also a belief that teaching popular music is merely teaching for entertainment, while teaching traditional music means passing down a culturally preserved tradition (Law & Ho, 2015). A study of Chinese music students suggested that students still want to learn traditional music but want it to be a smaller part of their curriculum, while making popular music a larger part of their curriculum. One of the most prominent reports from music students is their desire to study music that they feel relates to their everyday lives. This desire has been seen across cultures (Law & Ho, 2015).

Students have reported learning about music more effectively through informal educational means, but it will be difficult to embrace popular music as part of the music education curriculum until the government recognizes it. This type of education relates closely to a student-centered education philosophy (Salazar & Randle, 2015). For it is national authorities, not local entities, that determine what is taught and not taught, including how the subject of music education is taught. Additionally, "music and music education can be continually reinvented" (Law & Ho, 2015, p. 318).
Part of the difficulty in developing a philosophy of music education is the complex and ambiguous nature of art itself. As previously mentioned, all creative fields are subjective and inherently hard to define (Csikszentmihalyi, 2013). Music is not just a set of sounds and silences. It has its own conveyed meanings that are socially and culturally influenced. It is inherently ambiguous. It is a complex construct and cannot be narrowed down into a simplified definition (Green, 2006): "Music is not static, but dynamic, and music education should be as well" (Law & Ho, 2015, p. 322). Music is a universal language; it speaks to all times, generations, and cultures (Reimer, 1997). It is a subjective field; this idea has been known for some time. Discussions around this subject often revolve around opinion or experience but are rarely fact-based (Luce, 1965). These statements again verify the importance philosophy plays in people's lives, as there is no set definition of music.

There is an institutional theory of art that states that any form of art, by definition, has no intrinsically identifiable or recognizable qualities, but becomes art simply because one considers it art (Reimer, 1997). In other words, music is whatever a person decides it is. Attempts to define music and art will always fail to some degree. They can be defined to a certain extent, but only to a certain extent (Reimer, 1997). Furthermore, attempts to define music demonstrate the important factors frequently left behind by EBPs, factors such as irreducibility (Reimer, 1997) and the importance of immeasurable human factors such as relationships and emotions. The role feeling plays within music is itself of immense importance: "The 'vitalities' of consciousness itself are charged with feeling, in that they are the foundational determinants of all that humans can experience" (Reimer, 1997, p. 18).
The details of music can be changed from culture to culture, but the way in which music affects each of us as individuals, on an emotional level, is universal and cannot be defined. And
the need to create and share in an experience are two of the most basic human needs (Reimer, 1997).

Music educators have long promoted change within the music education system. Still, little has changed, and music education is still largely focused on traditional Western classical music and the use of large ensembles to verify music education results (Salazar & Randles, 2015). Music teachers in the 20th century have largely been trained to teach the Western music tradition in large ensemble settings (Salazar & Randles, 2015). Music educators are trained to evaluate music based on the final product, such as a large ensemble performance. A more comprehensive philosophy emphasizes the process of creating music and practice as being just as important as the performance. Music education in other countries, such as the UK and Finland, focuses on a more comprehensive teaching of music, including popular music, vernacular learning, and composition (Salazar & Randles, 2015).

Music experiences often happen in an unsupervised setting and are untutored. Informal acts replace strict education and include the process of "making up music." Students naturally have abilities that do not require formal education. Many students have vernacular abilities that are more developed than their formal music abilities, such as reading music (Green, 2006; Salazar & Randle, 2015). In music education, students must accept the tastes, interests, and values of the conductor, which may or may not match their own personal or social experiences. The idea of music developing in ways that are either simple or sophisticated, and applying a superiority to sophistication, is a very Eurocentric philosophy and denies the importance of uniqueness and diversity: "The traditional pedagogy simultaneously disconnects learning from the way in which many students experience music. Music education might relate with more students if it is
compatible with the vernacular processes through which many people experience music" (Salazar & Randle, 2015, p. 282). Very little research exists regarding the practical applications of a more comprehensive philosophy emphasizing a vernacular approach (Salazar & Randle, 2015).

The current educational philosophy represents a separation of fact and feeling:

Other kinds of knowledge, where description, words, statements and propositions held sway. Together with the other arts, music could establish an identity where the hard dualisms of fact and feeling, cognition and affect, objectivity and subjectivity were, if not completely lost, at least suspended. At the heart of aesthetic knowing was feeling, and feeling was knowable. (Finney, 2002, p. 123)

Musical perception is a dynamic and active process that involves interpretation. Feeling and emotion are part of the musical experience and should be included and valued as part of any comprehensive assessment. It is impossible to separate the musical experience from the subjective experience, but this impossibility is not necessarily bad. It is impossible to truly "get" music until you have experienced the music in a physical way (Finney, 2002). Musical intuition and personal experience are necessary if one is to enjoy music: "Knowledge might be more than fact…music's place in the curriculum depended on recognizing it as a non-verbal, intuitive area of experience" (Finney, 2002, p. 124). To emphasize this point, research has shown a correlation between life-changing experiences and work in the arts (Finney, 2002).

Recent changes in music education have also pushed for further emphasis on assessment (NAfME, 2020). Assessment in music has to assure progression against a fixed set of pre-established criteria. It must assure standardization and accountability. Informal techniques such as observation and participation have been shown to be the missing link between an effective
music instructor and a non-effective instructor (Finney, 2002). Salazar and Randle (2015) explained this reality well:

These skills were evaluated through journal entries, rehearsals, and a final performance. However, these methods of evaluation could not extensively demonstrate my musical growth. Only I could accurately determine the extent of my personal progress, because my growth was implicitly based upon my previous musical experiences. (p. 287)

Creating lesson plans based on informal learning is difficult. Music educators are taught that structured lesson plans based on school, state, and national standards are important to a quality education, as is establishing clear methods of evaluation and objective standards to demonstrate the effectiveness of their teaching methods (Salazar & Randle, 2015). They continued:

However, vernacular music making is somewhat contrary to this prescriptive approach. The vernacular process liberates the teacher from prescribing a path toward success in order to allow students to construct their own musical understanding based upon their interests and experiences. While the teacher is an important part of the learning process, setting rigid learning goals could hinder the independent and creative nature of vernacular music making. (Salazar & Randle, 2015, p. 287)

According to Carlos Santana, the idea of assessment in music is a misnomer. Part of this misnomer is caused by the emotional nature of music. He verbalized:

By necessity, the discussion tends to leave the world of the pragmatic, and sails into areas that can be uncomfortable for many players—God, spirituality, openness, self-assessment, self-awareness, ego, habit, ambition, and the potential evil of comfort zones. But if a player seeks to truly jettison stylistic mimicry, and fight like a cornered jackal to develop a unique and personal musical fingerprint, then studying the creative
ramifications of intangibles is an essential part of the mission… Although musicians talk about it a lot, I think the role of the heart in actual creative performance remains a huge mystery. (Molenda, 2008, p. 4)

Santana provides an example of what he is talking about by sharing the following story about his friend, Buddy Guy:

A guitarist can more clearly understand technique. He or she can say, "Buddy Guy is over there, and I've practiced this blues scale forever. I can fly over the notes, add a few tricks, and not give any ground. I'm going to burn." That's not going to help you. I've seen Buddy destroy a couple of musicians. I won't say their names, because I don't want to hurt their feelings. But they were playing a gazillion notes per second, while he was holding one note, and he looked like a man who was on top of the Grand Canyon wielding a lightning bolt that sounded like Jimi Hendrix. He's holding that one note, and he's grinning at these guys playing a bunch of little notes that sound like mosquitoes stuck in a screen door. That's not going to hurt him. But he'll hurt you with one note that transcends the blues, and all the equipment he's using. That's heart, and Buddy Guy has an incredible heart. (Molenda, 2008, p. 11)

So, music transcends technique, and this transcendence cannot be taught. It cannot be defined and can only be received through spiritual means (Molenda, 2008). It is the inexpressible and immeasurable that are the most important parts of music. If this statement is true, and becomes part of a philosophy regarding music education, should a discussion begin regarding whether or not music is even completely measurable? And how does science assess something that is not measurable? And if music is not completely measurable, does science just ignore it, as education has begun doing?
Santana says the ultimate goal for musicians is to create a sense of wonderment for their listeners. Musicians have to learn to tell a story with their instruments, but describing or explaining how to do so is complex and ambiguous. If this sense of wonderment is the ultimate goal, how does music education measure whether or not a musician has achieved it? And how does music education quantify this goal and make it teachable? Can music educators include it in their best practices for music education? What about things like imagination and creativity? Santana says imagination and creativity are responsible for truly great music. He also claims not everything in music can be quantized or understood.

Music goes beyond the physical into the intangible. Part of music is intuition, and this intuition cannot be taught. Music exists in the heart and not the mind. The mind is not able to process the emotions of music the way the heart is. Music that uplifts, transforms, and moves people does exist, but it is unalterable and intangible. The aspects that make it so great, beautiful, and excellent cannot be explained and can only be experienced (Molenda, 2008). Regarding the current state of music education, Santana said, "The more McDonald's that are out there, the more you need a grandma who spends all day in the kitchen stirring the sauce on Thanksgiving" (Molenda, 2008, p. 33). To summarize this issue of the intangibles existing in all artistic endeavors, particularly music, Santana continued:
I think this is the next step for Guitar Player magazine. You
should invite people to go beyond the mechanics of the physical brain and the fingers, and go to that place where— like one of my favorite bands, the Doors—we can all open the doors to perception. When you hear Robby Krieger’s creepy minor-major blues thing at beginning of “The End,” it’s like Dracula giving you a hug, but you don’t mind it. I love music that makes me feel like I’m seven years old, going to the movies for the first time and experiencing Panavision. That’s why we love Jimi Hendrix—he assaults all of our senses. His music has a wide circumference. Hendrix is another name for a bridge to the unknown, because what he was playing, even he couldn’t reproduce sometimes. He couldn’t quantize it—as much as he might have tried to get back there by taking seven tabs of acid and a little bit of wine and some coke. Sometimes, it’s nothing— just the willingness. The willingness to take a deep breath and take what was given to you. It’s inside you, as John Lee Hooker said, and it has to come out. But maybe you won’t let it out, because you want to analyze it before it comes out. Don’t analyze it. Leave that for other people. Just take a deep breath, stop what you’re thinking, and let go. Let God light you up, and let it come out. Then, you can get rid of all the sh*t you know, and play things that sound like singing water… Intangibles and willingness are, by definition, far more mysterious than pragmatic results—such as mastering a difficult melodic run. The poet William Blake was a visionary who spoke to angels, Jackson Pollack was a drunk, and Jimi Hendrix took drugs. Those are just three examples of artists swimming in pure creative inspiration. While I’d never advocate the abuse of alcohol or pharmaceuticals, is there some trigger needed to launch a musician beyond the concrete and into the unknown? 
I don’t need to take acid or mescaline to trip anymore, but I learned enough from that stuff to realize what Einstein meant when he said, “Imagination is more important than knowledge.” Your imagination is your best equipment, and you cannot
learn it or earn it—it was given to you. Take it. It's yours. You don't have to go down to the crossroads, and wait for a black cat or the full moon. You don't have to sell yourself to the devil. What for? You have God's love, and what can be better than that? Whether you're Yngwie Malmsteen or Steve Vai or Joe Satriani or John Scofield, you just need to shut everything off and utilize the main television—your imagination. (Molenda, 2008, p. 13)

In summation, philosophy is important in music education. It informs how music is taught and the methods used. It decides how much input students have in their own learning. It allows the teacher to focus on the metaphysical, such as motivation and intrinsic meaning. It informs all of teaching and needs to be given the highest priority. Currently there is no agreed-upon philosophy of music education, as the subject is often ignored amongst music educators. The current philosophy favors ideas similar to EBPs, and these ideas have caused serious consequences for the music education community.

Best Practices in Music Education

The music standards are comprehensive in nature, causing one to think music education would be focused on activities such as performance, listening, composition, improvisation, notation skill-acquisition, analysis, and developing associations with other artistic fields, as well as with fields outside the arts. However, this does not appear to be the case, as most music education engages in only a couple of activities, such as reading music and developing skills for large ensemble playing (Williams, 2007). The goals of music education have changed as society's desire for education to become more accountable has changed (Finney, 2002).

The NAfME is currently the regulatory body responsible for standards in music education. It lists on its website a set of national standards regarding creating, performing, and responding to music. The standards were
developed in 2014 and are divided into five categories: PK-8 General Music; Composition/Theory; Music Technology; Guitar/Keyboard/Harmonizing Instrument; and Ensemble. Across all of these divisions run four categories: Creating, Performing, Responding, and Connecting. Each of these categories is in turn divided into subcategories. For example, the Creating category contains the subcategories Imagine; Plan and Make; Evaluate and Refine; and Present. Performing contains the subcategories Select, Analyze, and Interpret, followed by Rehearse, Evaluate, and Refine before presenting. Responding contains the subcategories Select, Analyze, Interpret, and Evaluate. Connecting contains the subcategories Connecting #10 and Connecting #11. These subcategories are shared across the original five categories, and each subcategory is divided further into three levels depending upon skill: proficient, accomplished, and advanced. The standards seek to define every one of these categories, subcategories, and levels, providing a definition, an explanation labeled an enduring understanding, and an essential question for each subcategory. For example, the Creating category is defined as generating musical ideas for various purposes and contexts. This definition is explained as developing creative ideas, concepts, and feelings that will influence musicians’ work and come from a variety of sources (NAfME, 2020). The essential question is “How do musicians generate creative ideas?” (NAfME, 2020). Further categories follow the same pattern. While it must be said these standards do a fine job of providing information, definitions, structure, and ideas, they also demonstrate the previously quoted idea that EBPs are either so broad as to be applicable to anything or so specific as to be too narrow. In this case, the standards are so broad that a teacher could defend essentially any lesson plan.
In other words, these standards do very little to change or improve education.


While these standards are certainly an improvement over a micromanaged set of rules in which every student must learn the same instrument and the same songs and creativity is not allowed, they do very little, as demonstrated, to improve the quality of music education, and they are consistent with the broader ineffectiveness of EBPs.

Chapter 3: Methodology

The methodology used for this dissertation was a critical literature review: an in-depth study and critique of the important and relevant research available on the topic to date. The design of the critical literature review is similar to other designs in many ways. It begins with an introduction explaining why the research is necessary and how it can help. After the introduction comes an in-depth review of the literature. This review is large and extremely thorough, comprising the majority of the body of the paper. It explains all the relevant details of the literature completely and discusses all the points that need to be included as part of the research. After the review, the methodology is explained before moving on to the results and, finally, the discussion. The distinctive feature of the critical literature review format lies in the final chapter, where the author is not only permitted to critique the findings but also to present theoretical alternatives, or hypotheses.

Because the research method used was the critical literature review, no special setting was needed for the research. Likewise, this method required no outside participation, meaning that ethical considerations regarding potential research subjects were moot and that the process of searching for and recruiting participants was not a concern. Data collection was simplified by the absence of participants and consisted of collecting the appropriate research articles, books, documentaries, news clips, and interviews. Data analysis included looking for themes repeatedly present throughout the literature, coding these themes, and then presenting them in a structured manner.

The critical review has a long and important history in education. It is often the first step in any type of research being performed.
It has long been an important process in almost any research project, and eventually developed into a complete project of its own. It has a
strong history of being the first consideration for any research project. Every dissertation, regardless of subject or type, includes some summary of existing and previous research. This is necessary to ensure that information, time, and other valuable resources are not wasted. It also ensures the researcher has a complete and thorough understanding of the material being studied and explains the purpose, importance, and necessity of the research to the reader and to those approving it. In the case of this particular research, the aim is to critique the subject and develop alternative theories and ideas. Further research could test and compare these alternatives, confirming whether or not they have any validity.

When performing any kind of research, be it experimental, quantitative, qualitative, physical, or quasi-experimental in design, it is first necessary to become familiar with the subject being studied; hence the literature review. This review allows researchers not only to familiarize themselves with information on the subject but also to understand the purpose of their research. Through this process, the researcher gains a deep and thorough understanding of the topic, discovering whether interest in the question being asked is justified. Has the research been performed before? Is the question important enough to merit research? Is there significant interest in the topic being explored? Is the research that has already been performed sound? Is more research into a certain area needed? The process of the critical literature review requires the researcher to first gather a thorough body of information from which to work.
The vast majority of this information was garnered from scientific articles in research journals covering the many subjects addressed in this dissertation. Among these subjects, the majority of the paper revolved around the topic of EBPs, with arguments both for and against.


The body of research on this particular topic is large, meaning the dissertation delved deeply and completely into this part of the subject. This exploration focused on many aspects and subtopics of EBPs, including the history of evidence, the definition of EBPs, the philosophies behind EBPs, and the consequences of an evidence-based philosophy. After the research on EBPs was thoroughly explored, music education was also researched and the resulting information presented. This information covered many facets, including the history of music education, music education philosophy, standard practices within music education, and current trends in music education. These two subjects, music education and EBPs, comprised the bulk of the research explored for this dissertation. However, other topics also needed to be included: a brief history of philosophy, the importance of philosophy, how philosophy relates to research, general education practices, current issues and statistics in general education, EBPs in general education, and general education philosophy. These subjects, combined with the information on music education and EBPs, constituted the material explored.

Although most of the information was found in scientific articles and journals, other forms of media also presented themselves as legitimate sources of knowledge. These include documentaries, published interviews with experts in a given field, newspaper reports, biographies, and news shows. The information from these sources was documented, cited, and explained, with its purpose, its function, and its relation to the overall subject clearly presented. The next part of the methodology involves examining the information presented in the dissertation from a critical perspective.
Dissertations not only present information; they explore it as well. The existing research explored within the dissertation was
examined with a critical lens, the intent being an examination of the philosophy, logic, content, results, and consistency of the research.

The content of the dissertation was organized into logical chapters, each expanding upon knowledge of the specific topic explored, with the topics flowing from one area to the next. The first chapter of the dissertation is the introduction. This introduction explains what research was performed and why. It also explains the purpose of the research, the research questions being asked, what these questions hope to answer and discover, and the importance of those questions. To summarize, the first chapter outlines the topics being explored and why they are being studied, describes the organization of the dissertation, explains the questions the dissertation hopes to answer, discusses the importance of the research, and explains the impact the dissertation could have for the general public and the scientific community.

The next chapter is the literature review. This is the largest section of the dissertation and contains many subchapters exploring different topics. These topics concern EBPs and music education, with subheadings such as the history of EBPs, the importance of philosophy, the history of logical positivism, problems with quantitative data, the history of music education, and music and measurement, among others. The current chapter explains the methodology and provides a thorough description of the research methods. After the research findings have been presented, the following chapter focuses on a discussion of those findings. This discussion constitutes the critical part of the literature review and focuses mostly on critiquing the literature through logical argument and reasoning.
The logic and reasoning behind these arguments are explained thoroughly and justified.
Additionally, this section addresses limitations of the research, problems, and future directions for the research topic. The final chapter is a conclusion summarizing the research, the findings, and the critique. It includes alternative hypotheses, suggestions for further research, and theoretical ideas I developed over the course of the research. It is the shortest chapter, designed to succinctly explain what the research found and to present alternative ideas.

Other research methods were available for this topic. A critical literature review was chosen because this subject has not previously been researched in dissertation format, so a thorough review of the available research and literature seemed an appropriate first step. The critical literature review allows not only a thorough understanding of all the existing literature and research but also a basis for future research. Indeed, one could argue that such a review is ethically necessary, on the premise that research without a solid historical basis is unfounded and possibly unethical. This statement oversimplifies the issue, but the logic that a critical review of the literature should be the foundation of future research holds merit.

Other research methods were considered, especially qualitative interviews. In the end, the critical literature review was chosen for several reasons, the first being that the state of Illinois has dropped its music education programs in the public school systems. As a result, public school music teachers in Illinois would be hard to find, much less interview. Add to this the Institutional Review Boards and the ethical concerns surrounding the interviewing of research subjects, and the process becomes lengthy and difficult.
Finally, as previously mentioned, because no other research on this subject exists to my knowledge, a thorough literature review is the first step in the process and seemed an appropriate place to begin.


This review attempts to answer many questions, most importantly whether negative consequences in music education have developed as a result of embracing EBPs. This is not the only question the dissertation aims to answer; however, the other questions are less central and the answers less complete. These questions include: What is the role of philosophy in education? What are the goals of music education? Why has society embraced EBPs? What are some possible solutions to these issues? As the first step in a long research process, this review was not capable of answering many of these questions completely or with a great degree of confidence. This limitation, however, does not defeat the purpose of the literature review. The literature review is not a research method designed for, or capable of, answering complex questions with a great degree of certainty. Rather, it is the beginning of a long and complex process, designed to promote knowledge and establish a solid foundation of information before more complex research is undertaken. This dissertation hopes to become part of the foundation for questioning many aspects of the music education system. In summary, the dissertation is organized into chapters, starting with an introduction, moving into the literature review, and then presenting a results chapter exploring the literature and possible interpretations, followed by a discussion.

Chapter 4: Results

EBP Outcomes

Perhaps the most important questions one can ask regarding EBPs are: What have the results been? How well have they worked? This chapter explores these questions, outlining the results of EBPs. As previously mentioned, they have not been the game changers they promised to be. Scientists are far from certain that EBPs will lead to the changes needed in science and society (Silk et al., 2010). EBPs have long since evolved from the initial idea (Haynes, 2002), developing into something different than originally intended, and they are not the paradigm shift they purport to be. They have existed in the field of medicine for quite some time now, and critics claim there is no evidence, and likely never will be, that EBPs provide better care (Sehon & Stanley, 2003). Furthermore, no new theories have emerged that are incompatible with previous theories and that require EBPs to exist; everything learned from science could have happened without the shift towards EBPs (Sehon & Stanley, 2003).

Regarding education, the results of EBPs have been tragic. Standardized testing has not led to an increase in scores or student performance (Amrein & Berliner, 2002), and high-stakes testing has radically altered the type of education students in America are receiving (Kohn, 2000). The results of standardized testing have been dismal, especially for poor and minority children (Perrone, 1991). These tests have failed to close the gap between rich and poor as promised (Kohn, 2000). Social class plays a larger role in the quality of education than it does in other countries (Ryan, 2013), and EBPs have done little to change this fact. Wealthy schools, communities, and families are better able to afford much-needed test preparation, thereby widening the gap between rich and poor even further and demonstrating just how
much these tests fail to accomplish what they promised (Kohn, 2000). The numbers indicate students from low-income and minority groups suffer the most from standardized testing (Bhattacharyya et al., 2013). Standardized tests tend to overlook many factors, such as family home situations, the educational backgrounds of parents, community issues, poverty rates, how many times the test is taken, the instructions, and familiarity with the test (Bhattacharyya et al., 2013). Poor schools are forced to spend money on testing at the expense of other materials, such as books or educational resources. The way the current education system works, the quality of education actually declines for the people who need educational help the most. Noninstructional factors, such as ethnicity, culture, and socioeconomic status, explain most of the variation between test scores; 89% of that variation can be explained by just four variables: the number of parents living at home, parents’ educational background, type of community, and poverty rate (Kohn, 2000).

The emphasis on standardized testing has transformed the teaching profession, with 50% of teachers quitting after their first year on the job and “teaching to the test” the most common criticism of standardized testing (Bhattacharyya et al., 2013). According to the Center for Teaching Quality (2007), teachers now place extra emphasis on tests because test scores determine the resources they will receive in the future. They also emphasize certain information while teaching to make sure their students pass the test, thereby “teaching to the test” (Center for Teaching Quality, 2007). Teachers will change their curriculum to make sure their students can answer these questions, but if the questions are ultimately unimportant to a student’s education and intellect, they have degraded the quality of their teaching by doing so (Kohn, 2000).
Almost all teachers admit changing their curriculum to accommodate standardized testing (Perrone, 1991).
This change occurs because poor scores on standardized tests may stigmatize teachers and students, even though such performances may be unavoidable, placing the blame on the wrong people (Bhattacharyya et al., 2013). For these reasons, “many educators are leaving the field because of what is being done to schools in the name of ‘accountability’ and ‘tougher standards’” (Kohn, 2000, p. 2). State surveys have demonstrated that teachers are leaving the education field at an alarming rate and that fewer people are entering it, because responsibility is placed solely on teachers and not on the system as a whole. Teachers are hesitant to enter a field where test scores matter more than anything else and they alone are pressured to achieve those scores (Kohn, 2000). Under so much stress from standardized testing, many teachers are quitting; they expected to teach, not to shuffle paperwork and spend all of their free time working (Bhattacharyya et al., 2013). It is this emphasis on testing that is driving teachers out (Kohn, 2000). Teachers report feeling helpless to instigate necessary change, saying they will lose their jobs if the NCLB mandates are not fulfilled (Bhattacharyya et al., 2013). In the current system, those who are helping the students most in need are those most likely to be punished and branded as failures (Kohn, 2000). Cheating scandals have occurred in cities such as Washington, DC; New York; and Atlanta, as well as in 23 schools in California (Bhattacharyya et al., 2013). Pushback against standardized testing has generally caused its supporters to dig in their heels and press the issue on teachers (Kohn, 2000). Standardized test scores can improve when students are coached, but studies have suggested that even when students are trained for the test, their level of overall learning still does not improve (Shepard, 2000).
In 2012, the US ranked 12th out of 50 in quality of education (Bhattacharyya et al., 2013). Few countries have used standardized testing for children younger
than high school (Kohn, 2000). Finland, the country that consistently places highest in international education comparisons, is the most outspoken against standardized testing: “Researchers believe, standardized tests yield few benefits to student learning while neglecting higher-order thinking skills” (Bhattacharyya et al., 2013, p. 636).

In the American education system, the philosophy of EBPs presents itself as one that holds accountability and testing as the highest and most important standards. This philosophy has led to such an increase in standardized testing that children in the US are currently tested at a higher rate than in any other country and at any point in history. The emphasis schools place on standardized testing has taught students that education is all about the memorization of facts and is determined by how quickly they can perform certain functions (Kohn, 2000). Standardized testing enforces rote memorization, a teaching technique most teachers consider obsolete (Bhattacharyya et al., 2013). The quality of education has decreased as teachers devote more and more time to material covered by tests. Programs in the arts, recess, elective high school classes, class meetings, current-event discussions, literature, and entire subject areas such as science have been cut because of standardized testing (Kohn, 2000). The focus on standardized testing has led to the loss of many subjects and of certain aspects of teaching (Bhattacharyya et al., 2013). The current system of emphasizing standardized and high-stakes testing actually rewards poor teaching methods and the worst teachers. It has become more difficult for teachers to focus on students’ social and moral development as standardized testing has increased, and they have less time to focus on anything other than the subjects covered by the tests (Kohn, 2000).
Because of the emphasis on the tested subjects, other subject areas are discarded and not taught. Many teachers currently complain they are only teaching math and
reading (Bhattacharyya et al., 2013). According to the National Center for Education Statistics (2009), the increased time being spent on math and reading is not making students better at math and reading. A standardized test is considered reliable only if scores remain the same when a student retakes it; however, changes in scores are quite common (Bhattacharyya et al., 2013). Serious mistakes have also resulted from standardized testing. One such error occurred when a New York school district sent 8,600 students to remedial summer school because of a scoring error on a test (Kohn, 2000). Society has also seen a decrease in activities such as reading for pleasure, as these activities are no longer intrinsically rewarded (Bhattacharyya et al., 2013).

As an example of how EBP treatments can fail, qualitative health research that recognized the social and political context of women’s health proved more effective than the treatments recommended by EBPs (Goldenberg, 2005). Another concern is that EBPs may not actually have the effect they purport to have. Multiple studies have confirmed that outcomes from EBPs do not show significant improvement when compared to previous methods (McCluskey & Lovarini, 2005). Likewise, a simple look at the U.S. education system demonstrates the ineffectiveness of EBPs. Since EBPs were enacted, U.S. student scores on a variety of tests have declined, while student scores in European countries, which use a much more integrated educational philosophy, are far superior. The U.S. education system is mediocre compared to those of other developed countries, according to data from an international ranking of OECD countries (Ryan, 2013). Parents and students are opting out of Common Core testing at an alarming rate (Boser et al., 2016).
The US ranks 17th out of 34 countries in educational quality, 21st in
science, and 17th in reading. These scores have largely remained unchanged since 2000 (Ryan, 2013). Massachusetts, one of the highest-achieving U.S. states in terms of education, still scored two years behind Shanghai in educational quality (Ryan, 2013). In the US, only about 123,000 eighth-graders, or 3% of the eighth-grade population, have scored at the advanced level in reading (Boser et al., 2016). While some areas have made improvements since 2000 and the implementation of NCLB, these gains are marginal and still expose large failings in our educational systems. For example, fourth-graders in Massachusetts improved their math proficiency scores by 13% between 2003 and 2013. However, they are still only 54% proficient, meaning that almost half of fourth-graders in one of the most highly educated states are still not proficient in math (Boser et al., 2016). As a nation, we have devoted a tremendous amount of effort and time to raising test scores and have almost nothing to show for it (Boser et al., 2016). Graduates’ abilities to meet today’s challenges, such as responding to global warming, stem cell research, energy research, or environmental technology, have not improved with all of the changes in education (Bhattacharyya et al., 2013). Accuracy in following standardization procedures on common psychological tests does not differ significantly between doctoral and nondoctoral practitioners (Wolfe-Christensen & Callahan, 2008).
Indeed, the benefits EBPs are reported to confer during graduate training appear not to materialize, as competencies regarding referral questions, case conceptualization, diagnosis, and treatment recommendations have proven harder to develop during graduate training than expected (Price et al., 2017). Regarding evidence-based research: To condone—better put, to pander to—EBR, compromises, if not neuters, everything that we, as critical intellectuals strive for and believe in; it is a powerful virus of sorts that
speaks against our ontological, axiological, epistemological, methodological, and political approaches, acting as mere handmaidens…to the very forces to which we are looking to respond, oppose, and, critique. (Silk et al., 2010, p. 108)

Additionally, there have been well-documented increases in client dissatisfaction despite technological advances in medicine (Goldenberg, 2005), suggesting EBPs are not delivering the huge improvement in quality they promised. In the field of psychology, EBPs have had a significant impact. Regarding the push for manualized therapy: “Manualized treatments are often regarded as highly technical and disorder specific. How many different manualized treatments must the working clinician learn in order to best serve a diverse group of clients?” (Addis et al., 1999, p. 436). One of the most common criticisms of manualized treatments is that they do not pay enough attention to the role of emotion in human behavior. Many clinicians are concerned that structured protocols do not devote enough time to exploring and validating a client’s feelings. Practitioners might be less concerned if they were provided with explicit information about the importance of emotion and how it is addressed within a particular manual-based treatment. For example, it would be instructive to contrast the conceptualization of emotion in CBT versus other approaches (Addis et al., 1999). In a poll of practicing psychologists, 45% of clinicians agreed with the statement “Treatment manuals overemphasize therapeutic techniques”; 47% agreed with the statement “Treatment manuals ignore the unique contributions of individual therapists”; and 33% agreed that “using treatment manuals detracts from the authenticity of the therapeutic interaction” (Addis et al., 1999, p. 431). In medicine, EBM now seeks to augment rather than replace individual clinical experience (Haynes, 2002).
A therapist might have to strive for proficiency in a separate manualized treatment for each of a broad range of disorders (Addis et al., 1999). For example, how will therapists know whether a manualized treatment is appropriate for a particular client? What role would client preference play? In fact, treatment manuals should devote more space to nonspecific factors and the therapeutic relationship; an example of a nonspecific treatment factor is generating hope (Addis et al., 1999). There are also critics who emphasize the value of clinical experience and the judgment of individual physicians; these critics sometimes emphasize the art of medicine, and contrast this with the science of medicine, or they speak of technique vs. theory or compassion vs. reason. (Sehon & Stanley, 2003, p. 1)

The common perception that the therapeutic relationship is of less importance in certain orientations suggests psychological education is not relaying its importance strongly enough. Merely asserting that the therapeutic relationship matters in manualized treatments will not be enough. Clinicians may view the directiveness required by manualized treatments as threatening to the therapeutic alliance; in fact, rigid adherence to protocol is associated with poorer therapeutic outcomes (Addis et al., 1999). In another example, McCluskey and Lovarini (2005) demonstrated in their own research how difficult it is to follow the principles of EBPs. They designed a study to measure the effectiveness of educating practitioners about EBPs. Not only did they fail to use a control group, they also found no significant difference after more education, and they then appealed to the importance of interpretation in their results before concluding that more research was needed and that education regarding EBPs could still be effective.
Would not the very philosophy EBPs promote require accepting this null result as fact, without any further interpretation?


Current research may generate valuable information, but there is doubt that the EBP movement is doing anything to accelerate the transfer of these findings into practice (Haynes, 2002). Complete application of EBPs is too costly, in both finances and resources, which would lead to unforeseen consequences (Haynes, 2002). Furthermore, education supporting EBP research does not necessarily lead to EBP endorsement, though it will often prevent antagonistic feelings towards EBP research (Nelson & Steele, 2007). Empirical research actually shows EBPs to be weak treatments with few benefits, few patient improvements, and temporary effects (Shedler, 2017). The constant threat of punishment and constant expectation of conformity erode skills such as professional judgment (Davies, 2003). EBPs are evidence of a society that has stopped questioning itself, legitimizing and concretizing a simplified form of science (Silk et al., 2010). American classrooms have become not student-centered, or even teacher-centered, but legislature-centered (Kohn, 2000).

Feminists have found bias in the reportedly fair methods used to analyze evidence within EBPs (Goldenberg, 2005), and according to critics EBPs follow a political agenda (Sehon & Stanley, 2003). For example, evidence-based medicine has failed to address the women’s health needs it promised to fix, and EBPs have failed to correct for gender bias in research (Goldenberg, 2005). Certain areas remain under-researched while others remain over-researched; EBPs have failed to correct these problems, again suggesting they are not the remarkable treatments their supporters claim (Goldenberg, 2005). To summarize, EBPs have not proven to be the effective treatments they were proclaimed and promised to be. They have not produced large improvements in their respective fields or the effects they promised.
In many situations, such as in education, they have actually made situations worse and have caused serious problems, such as large amounts of


teachers leaving the workforce and a reduction in students' abilities to learn critical thinking skills and important subjects outside of math and reading.

Criticism

Regarding the functioning of EBPs in music education, criticism is also appropriate. For one, music education does not currently meet the aspirations and goals of music students. It has become entirely about gaining knowledge: a one-size-fits-all approach to education. Formal music education impinges on the subjectivity and creativity inherent in the creation of music (Finney, 2002). These methods often restrict creativity and limit the development of certain musical skills, skills that can instead be developed through vernacular methods (Salazar & Randle, 2015). Formal music education continues to be seen as elitist by young music students: "Formal methods seem exclusive because they cater to an elite portion of musically talented students" (Salazar & Randle, 2015, p. 281). Many are unhappy with the role of assessment in music education, as these assessments can be interpreted as a loss of happiness (Finney, 2002). As yet, no system in existence has been able to accomplish criterion-referencing, moderation, progression, and formative assessment successfully: "Music inspires deep personal responses beyond the reach of language. Attempts to codify, classify or normalise these can only lead to a loss of happiness" (Finney, 2002, p. 121). Furthermore, the subjective nature of music makes accurate assessment difficult (Brophy, 2008). Finney (2002) added that "the outcomes of aesthetic engagement could not be easily prescribed and in the belief that appraisal involving pupils themselves would be the preferred way to assess their achievements" (p. 119). This difficulty exists because judgments about music are personal and cannot be codified using a set of predefined criteria. Any attempt to define music based on a set of defined criteria


denies the inherently subjective nature of music. It is not possible to make a right or wrong judgment when it comes to music and other creative fields (Finney, 2002). Regarding the national standards for music education set by NAfME, it is questionable how much impact these standards have had in the classroom (Williams, 2007). Part of the problem is that less time is spent on the standards that develop creative or artistic skills (Williams, 2007). Studying different objectives can help people understand the flaws in the current music education philosophy. A strict focus on performance can prevent students from using their innate musical abilities, whereas a vernacular education can help students use their abilities outside of school, and "a common term such as 'creativity' has traditional implications that change when used in the progressive context. This can create resistance toward innovative practices, because the language seems to complicate music education with unfamiliar standards" (Salazar & Randle, 2015, p. 283). The process of creating music through vernacular methods aligns more closely with the national music standards than the current traditional methods do; teaching a student to play in a large ensemble through learning to read music directly addresses only two of the national music standards (Salazar & Randle, 2015). The emphasis on performance, particularly large ensemble performance, in music education is also problematic. Williams (2007) explained these problems and how they relate to the current music education philosophy: "I suggest that our fascination with large-group performance has limited our access to students, and at the same time has cut us off from multiple other involvements with music that many students might find exciting" (p. 4). Williams also stated, "The pressures of performance preparation keep many students from receiving anything resembling a rich music education.
I would suggest that our model of music education, as large performance ensembles, has failed and continues to fail” (p. 4). Williams continued,


When music was first introduced into the school systems of this country in the 1830s, it was logical to concentrate on performance and notation-reading skills since that is what people needed — it was the way they experienced music. But as society gradually changed, music education did not evolve to fit the changes. As a result, we are totally out of touch with the musical needs of our society, to the point where students find us irrelevant and unconnected to their lives…as a profession, we must begin to offer substantial opportunities for students beyond the traditional large ensembles. (p. 5) He went on: But where to begin change is a dilemma. Should K-12 teachers scrap traditional programs to offer new programs for which they may not be trained, and that principals and communities might be hesitant to support? Should universities add offerings to train future teachers in new types of music programs, even as degree programs are already overburdened with too many required credit hours? I believe the answer is a little bit of “yes” to both. We need leaders in the K-12 schools to step up and begin to offer programs more relevant to students—programs that embrace everything digital technologies, as well as other relevant alternatives, make possible for student learning. At the same time, innovative universities must begin to address the needs of our future teachers. We need pathfinding programs in the delivery of relevant new pedagogies, as found in digital media, so tomorrow's teachers will be prepared for the societal realities they will face. Perhaps then we can look forward to a time when music in schools will be truly applicable to the society we serve. In the meantime, perhaps we can rethink what we are doing and begin to do it better. (p. 6)


A vernacular music education has the primary advantage of focusing on students' musical interests and experiences (Salazar & Randle, 2015). Music education should be focused not on the musical expertise of the teacher but rather on the student's desires, interests, and abilities: "Dependence on notation and a conductor perpetuates compliance (not creativity) among music students" (Salazar & Randle, 2015, p. 285). Creativity, as previously mentioned, is also ignored in the current music education philosophy. This neglect is somewhat understandable, as creativity is difficult to define and is an ambiguous term; however, it is ultimately of incredible importance to music and music education. The current music education system limits creativity: "If music educators approach assessment in a purely objective manner, learning might become reduced and abstracted from music experiences and from personal and social meanings" (Finney, 2002, p. 121). Because the process of music making is perhaps most meaningful to the musician, it inherently resists purely objective or factual assessment. This could be controversial given the increased emphasis on assessment in American schools (Salazar & Randle, 2015, p. 286). Music education functions best when it is beyond the bounds of a formal education and free from the constraints caused by social function, assessment, and accountability. It is an artistic-creative endeavor and this, ipso facto, leads those who partake in it to bring forth work that cannot be held up against existing criteria. To place what we make and say of musical experience within existing categories, or to benchmark them against theories, rationales and exemplary models, denies the very nature of the creative adventure. (Finney, 2002, p. 121) Students in higher education have demonstrated the danger of having a work of music or a performance judged, as this judgment often has unforeseen consequences and leads to an inherent


loss of meaning for the student. Music is a much more rewarding process when not formally assessed (Finney, 2002), and formal assessments have been shown to be less effective than many claim. Research has shown music education to be ineffective in growing student interest in creative, musical, and artistic pursuits: "Music teachers were disinclined to actualise the musical minds of their pupils or to create a climate of mutuality in which these might grow" (Finney, 2002, p. 131). Furthermore, the criteria and methods used to assess music leave much to be desired. For one, there are limitations to norm-referencing, especially in the arts (Finney, 2002). Criterion-referencing is a much more appropriate assessment tool in the arts: it allows students to participate and demonstrate the knowledge they have, and it measures them on the knowledge they need to know, as opposed to comparing them to other students (Finney, 2002). As Salazar and Randle (2015) expressed: "I learned that vernacular music methods develop covert skills that would be particularly challenging for a teacher to meaningfully assess" (p. 287). Additionally, the study of subjects such as music needs to include discussion and acceptance of ideas such as intuitive knowledge. Programs in the UK have recommended that music education focus on students understanding music through more direct experiences of the processes involved, and the study of music education must include questions about motivation, particularly intrinsic motivation (Finney, 2002). As previously discussed, intrinsic motivation is completely ignored in current music education philosophy, even though the intrinsic state of flow is strongly linked to positive experiences. A person's experience of flow is a strong indicator of their enjoyment of a subject and the likelihood that they will want to continue learning (Csikszentmihalyi, 2009). Once a basic set


of human needs has been met, such as belongingness, love, esteem, and safety, intrinsic motivation naturally thrives (Maslow, 1986). As also previously mentioned, the more human elements of behavior that are harder to measure, such as socialization, context, and emotion, are incredibly important in music. And yet, education continues to make no mention of feeling and self-discrimination as measures to assess. It is not possible to study music, music comprehension, and musical skill attainment solely from a quantitative perspective (Finney, 2002); music educators need to also accept the emotional, the spiritual, and the social, to name a few. Regarding assessment in music, Carlos Santana stated the problems with the current philosophy well: Musicians hear 1,000 voices saying they're not good enough—that they're just lucky, that they always play out of tune or their tone sucks, that they never get it right. John Lennon once said he hated everything he did, because he could have done it better. That's the ego. All those voices are the ego in disguise giving you guilt, shame, judgment, condemnation, and fear that you're never going to be good enough. Then, you have one voice that is very quiet, but it's louder and clearer than the other voices. This voice says, "Pick up the guitar. Here it comes." And out comes a song that's like Jeff Beck playing "People Get Ready" [sings the main melody]. Bam! Your freaking hair stands up, you've got tears coming out of your eyes, and you don't even know why. These are the things that drive me to go inside my heart, and going there is the only thing that is worth attaining for me. When I'm there, the heart will lead me to play a melody that makes families put all their sh*t aside, and just see how beautiful their families are. That's what is really beautiful about music—it brings you into harmony. The other stuff you can learn


by repetition, like a hamster. With willingness, you can truly learn why people adore Jimi Hendrix. (Molenda, 2008, p. 12) Recent changes in music education have attempted to turn music into an objective activity. These changes have ignored the subjective, the creative, and the expressive: the parts of music that make it magical. The change has demoralized students and teachers alike. Finney (2002) described this change well: "An official model of music education had been imposed and there was mounting evidence to suggest that it was deficient" (p. 131). Of the text Teaching Music in the Secondary School, Finney (2002) explained that it presents: A view of effective music teaching for the twenty-first century. It shows teachers how to manage and control learning outcomes that are largely non-aesthetic. The knowledge and understanding gained is far removed from the imperative set out by Reid in earlier times. The case given of the blues is exemplary in this respect. By the end of the project pupils will be able to: use a standard 12-bar blues sequence, play chords in C major, have listened to and recognised the use of ninths in three pieces of music, know the names of notes in chords I, IV and V. (p. 132)


myself, dreaming yet wide awake, able to indwell. The lake has symbolic and metaphoric resonance and my experience is intensified. In knowing the lake I know something important to me. I may remain in an inarticulate state, in a state of intelligent feeling. I become deeply interested in and committed to the lake. How deep is it? Does it contain fish and of what kind? How was it made? I wonder about the lake and about other lakes close by and far away in distant places. I become willing to learn and to know and any kind of calculative thought becomes laden with an aesthetic imperative. Being poetic in this way acknowledges my existence as a part of others, as a social and cultural being. Winnicott (1985) told of the value of “apperception,” a preparedness of mind and body to receive and to give in dialogue with external reality, a harbinger of creativity. Buber (1970) spoke of the twofold human disposition, the I & THOU and the I & IT. The former shows the mutual relationship existing between subject and object, while the latter shows some control, some distance or objectification existing between subject and object. The former would seem to correspond to aesthetic musical encounter. This does not make our knowledge and understanding solipsistic, subjective, nor does it give credence to the hopeless relativity of pupil voice epistemology. To know the blues aesthetically I might feel the weight and nuance of its gestures. I may know something of its sorrow and complaint. This may lead me to wonder why it was made. What did it mean? What does it mean now, to me in my life, to others, to all of us? I would like to sing and play the blues. How shall I get its feel? How is that effect created? The question “What chords does it use?” may then take on significance. Criteria for valuing the experience begin to emerge, not rooted in normative theories of musical development, though these may be helpful, but in the subtleties of musical engagement (see Mellor, 2000). 
Teachers must know what they are looking for, what their pupils are feeling and finding, how to manage


what is a delicate balance between different kinds of knowledge. They must decide on emphasis and weigh the costs. Teachers need to be aware how their pupils become involved in the lake or the blues. We might look for the knower to be in dialogue with the known as she discovers that there can be no knowledge or truth without meaning. This way of thinking about music as aesthetic education broadens its scope and leads teachers to consider the kind of learning climate that is needed for it to flourish, the kinds of relationships to be developed between learners and between pupils and teachers. And in holding these matters together, music as aesthetic education allows teachers to consider their pupils' moral, spiritual, personal, social and cultural development as inevitable subelements of an aesthetic education that concerns their burgeoning identities. However, our National Curriculum holds to an undifferentiated concept of knowledge and understanding. Must all subjects have the same kind of intellectual virility? To differentiate and pluralise would require a willingness to recognise fresh relationships between the knower and the known. (Finney, 2002, p. 132) Holding music education methods static has cost the field dearly in terms of student retention and the popularity of its programs, and it will continue to do so unless greater openness to students' individual preferences is accepted (Salazar & Randle, 2015). To summarize, attempts to make music education more objective have largely failed, in part because music is a creative and subjective activity that need not be taught in this manner. The subjective can and should be embraced in music.


Chapter 5: Discussion, Limitations, Conclusion, and Implications for Future Research

Conclusion

This research concludes that EBPs are not the successful treatments and programs they have promised to be. Although they may have some positive features, they have not proven to be the paradigm shift they promised (Goldenberg, 2005; Haynes, 2002; Sehon & Stanley, 2003). They have not proven themselves to be the ultimate solution, nor has science seen improvements where they have been enacted, especially in the fields of psychology (Shedler, 2017) and education (Kohn, 2000), the two primary focuses of this research. Furthermore, this dissertation concludes that assessment, especially in certain subjects such as music, remains problematic. The assessment of music, for example, can never be completely quantified (Finney, 2002; Molenda, 2008; Williams, 2007). While there may be parts of music that can be objectively quantified, such as playing in tune, as this dissertation has attempted to demonstrate, even the idea of playing in tune is somewhat subjective, as any assessment would still require human interpretation. So the question becomes: How does one assess, and how does one draw conclusions, especially as part of a scientific field? As previously stated, criticisms are more likely to be considered when alternative hypotheses are presented. As this research has also demonstrated, any form of assessment would likely need to be complex to be accurate and to remove itself from the simplified form of science that EBPs support (Goldenberg, 2005). Previous sections discussed the need for a definition of evidence; this research suggests that any definition of evidence needs to allow for complexity in order to be thorough. So a new definition allowing for complex thought, interpretation, and treatment is necessary.
One can almost hear the complaints already: Allowing for complexity in evidence and methodology allows things to become too ambiguous, broad, and imprecise. The response to this


criticism is as follows: Yes, science is broad, not as precise as people would like, and embraces the ambiguous. While these qualities are frustrating, they exist and are part of science, whether people want to accept this fact or not. Take the experience of attempting to diagnose a psychological disorder. Suppose a mother brings her son to a psychologist for psychological testing related to attentional problems at school. Almost immediately, any psychologist will know these problems could be related to 10 or so disorders. While it would be nice if this were a clean and clear-cut case of ADHD, what if the inattention were caused by anxiety? Or depression? Or what if there is an intellectual disability involved, or a learning disability? It would be convenient if the child performed poorly on the Conners Continuous Performance Test (CPT) and all the Conners measures endorsed ADHD. But what if he also takes the Beck Depression Inventory and the Children's Depression Inventory (CDI) and endorses depression as well? How is one to tell whether it is ADHD or depression causing the inattention? This scenario is realistic and occurs often. It is also a great example of the complexity of science. While many would love the simplified and precise answers EBPs claim to provide, the reality is that science is much messier than many would like, and it needs this messiness to be accurate. Using the above situation as an example, it would be easy for an EBP practitioner to simply label the child with ADHD if he meets those testing criteria, or with depression if he meets those. However, this diagnosis may not be accurate. Why? Because, again, situations and life are messy. This scenario has already presented four possibilities: ADHD, depression, ADHD and depression, and no diagnosis. A clinician needs to use interpretive skills to distinguish between these four diagnoses, which automatically makes the process messy and reduces objectivity.
If science desires accuracy, it must embrace a complex definition of evidence, one that allows for nuance and ambiguity. It has done so before and can do so again.


Furthermore, this messiness is compounded by the fact that true objectivity is impossible to obtain in any situation, because human interpretation, which is fallible, is always the final step in any scientific process. For this reason alone, EBPs will never be the precise science they claim and hope to be. Science needs to embrace this fact. Look at all science has accomplished! In the field of psychology, Freud made plenty of scientific advancements despite faulty interpretation and less-than-perfect methodology. His observations linking human behavior to animal behavior and his uncovering of the importance of the unconscious are perhaps two of the most important scientific discoveries regarding human behavior. And Freud was the opposite of EBP! Science can choose not to accept this messy definition of evidence and the scientific process, but the effort will be moot because, as just explained, human interpretation will always be part of science, and any methods not embracing this fact will be fighting against reality. In music, any assessment will need to measure the subjective and recognize this subjectivity as an inherent part of the scientific process, not a part of the process to be ignored. People will need to acknowledge the role of subjectivity in science and recognize that they cannot rid science of it completely; it is simply not possible. Furthermore, this dissertation concludes that EBPs need to change, and the possibilities for these changes are numerous. Let's start with the name. EBPs could simply change their name and the situation would be much improved. One suggestion might be research-based practices. This name change would require very little change in the way supporters of EBPs practice. However, the name would more accurately reflect the philosophy of EBPs and how they can and should function.
Furthermore, this name change would open a much-needed conversation about the role research plays in practice and about the effectiveness of research itself. For example, why are scientists not discussing the fact that most EBPs show only temporary treatment effects (Shedler,


2017)? This name change could promote further discussion about effective research and which research is most appropriate. Another option is evidence-informed practices (Nevo & Slonim-Nevo, 2011). This name has already been used in some places and has garnered support among some psychological practitioners. Its benefits are also numerous, as it would allow EBPs to function in the manner their philosophy intends: as information used to inform practice rather than rule it. However, there has already been some confusion between evidence-informed practices and EBPs, with some claiming the two are synonymous. Regardless, as previously mentioned, a name change is definitely needed, as the name itself is problematic (Dalal, 2018; Goldenberg, 2005; Shedler, 2017), and it would be helpful if any new name lacked the word evidence, as this word seems to be at the heart of the problem. Could the term science be used instead? Or could people simply identify themselves by the techniques they use? In psychology, could CBT practitioners simply refer to themselves as CBT practitioners rather than evidence-based? Another possibility is to change to, or add, practice-based evidence. This change also has supporters and could be used either to completely change the meaning of EBPs or simply to add another dimension to their use. In other words, the research practitioners are currently expected to use to inform their decisions would be expanded to include research informed by practitioner experience. This change would constitute an expansion of research and make a more thorough body of research available for practitioner use. Once again, an expansion of the definition is needed to support this idea, and a broadening of what constitutes evidence is seen. As just discussed, what about the term science-based practices?
This change would also reflect a more accurate intent and perhaps spark debate on the philosophy of science and what is


considered scientific. The role of philosophy could reemerge as important, and science could find itself in the midst of an important discussion about science, research, and evidence. Furthermore, this name would keep the original intent of evidence-based practice, which is not in itself a bad idea. It would allow people to claim they are scientific and follow their scientific understanding of a subject, but it would also allow for a broadening of treatments, since scientific understanding appears to be more comprehensive than the treatments EBPs often suggest. The final ideas involve not changing the name but changing the philosophy. For starters, science can accept that the idea of evidence is broad and that science is complex and irreducible (Goldenberg, 2005), and thus permit a broad use of evidence. While this solution should be acceptable and would be a more accurate use of science, it will enrage those who want to use evidence to control the way others practice and teach. Practitioners or educators who believe everyone should use the same treatments or teach in the same manner will likely balk at this idea. However, their methods have failed and fail to consider that not all students learn at the same pace or in the same way (Kohn, 2000). In the field of psychology, this change would look like a much larger acceptance of the many forms of treatment. As demonstrated in this research, there is substantial evidence for many of the theories. So, shouldn't all the theories be embraced by EBPs? Because they are not all embraced, this idea suggests EBPs would have to broaden their philosophy and accept what the research really says: namely, that EBPs are not the perfect treatments they claim to be. EBPs would have to start embracing what the science actually says.
For example, in psychology this change would look like an emphasis on the therapeutic relationship and a realization that the human elements of therapy, such as empathy and rapport, are the most effective factors in treatment. This type of treatment is scientifically accurate, valid, and justified by the research! It should already be considered evidence-based, so this


change would simply require evidence-based treatments to become what they already claim to be! This change would keep the philosophy of EBPs the same but actually use what the research states. For example, the research is very clear that standardized testing has not been effective, so why not reject standardized testing as part of EBPs? In fact, why are EBPs still supporting standardized tests if they believe in following the evidence and research? In the field of psychology, EBPs would follow the research and recognize that many EBP treatments, such as relaxation and grounding techniques, are only temporary fixes. This change in philosophy would also force EBPs to recognize the success of other treatments and begin focusing on the parts of therapy that can be considered more successful, such as the therapeutic relationship and empathy.

Limitations of the Study

Certain obvious limitations are evident in this dissertation. The first is the qualitative nature of this work. This statement is somewhat ironic given the subject of this dissertation and previous statements suggesting that qualitative research is not inferior to quantitative. Obviously, it is impossible to prove a theory or hypothesis using qualitative research from a mathematical perspective, or at the very least with any mathematical certainty. However, as this dissertation argues, one could easily argue the impossibility of proving an argument with quantitative research as well. Regardless, the nature of this dissertation makes absolute proof impossible. In response to this lack of proof, I suggest that this dissertation, despite its qualitative limitations, does prove one thing: Many claim EBPs are not successful and are problematic. This statement is based on the philosophical idea that if a significant number of people (however that number may be decided) are shown to support an idea,


people can assume there is some truth to that idea. In other words, if an individual claims that many people disagree with the belief that EBPs are effective forms of treatment and practice, and then demonstrates that many people do indeed disagree, as was done in this research, one can conclude that the premise of EBP effectiveness is not settled. This idea is also supported by the philosophical observation that it is much easier to disprove an idea than to prove one, and I would argue that this standard has been met in this research. A qualitative approach was also used because this research would have been self-defeating had it not embraced a qualitative methodology: to critique an overemphasis on quantitative research and then use quantitative research to defend this critique would have been the ultimate hypocrisy. Furthermore, the nature of this research did not lend itself to a quantitative approach. Another limitation is the diverse range of subjects and research materials, in terms of both topic and age. The vast range of topics covered in this research was considered necessary to demonstrate both the vast reach of EBPs and the significance of the subject. This idea is based on the philosophy that a subject is more impactful the more fields of study it affects. In the case of EBPs, they impact many fields, including government, education, finance, social work, psychology, and the sciences. Using all of these areas as part of the study allows the dissertation to demonstrate the importance and reach of EBPs. However, it also prevents a more focused and precise form of research that would concentrate on a single detail rather than many. In other words, the broad range of the topic made it difficult for the research to be more specific and structured. It would have been much clearer if the dissertation had focused on only one aspect of EBPs, such as the philosophy or history, but this realization comes with hindsight.
Because the subject of EBPs is so broad, relevant research also came from many places and periods, and the task of narrowing it down proved more than one
researcher could handle. For example, sources for this dissertation ranged from René Descartes to modern times, with much of the research concentrated around the turn of the century, after the passage of the No Child Left Behind Act. Much research also came from European countries regarding the ineffectiveness of EBPs. The task of finding only current, U.S.-based research proved more demanding than anticipated. Many current research articles agree with those used in this paper; however, because of time constraints and the overwhelming amount of research available on this subject, it was impossible to rely on current research alone and still complete the project. In certain cases, such as that of Alfie Kohn, the author has made clear that his findings remain applicable today. It would also have been valuable to devote more attention to the musical aspect of this dissertation, but a thorough explanation of EBPs, the problems associated with them, and the reasons they should be considered ineffective and unsuccessful seemed more important. Even so, that focus was itself broad, making it difficult to narrow the topic into something more precise.

Implications for Future Research

Future research should explore possibilities for assessment in music education. Such research will likely need to focus on the qualitative and subjective elements of assessment and how these elements can be successfully incorporated into assessment practice. This research should also expand to areas outside of music, especially standardized testing. If, as the research has already shown, standardized testing is not the answer society is looking for (Kohn, 2000), then what is? What other testing possibilities exist, and how can educators incorporate them into the current system?
What about essays and critical thinking?


Perhaps the most important implication of this work is that research needs first to return to a discussion of science and the philosophy of science. What is science? How do scientists conduct research, and what counts as research? If scientists are willing to accept only the simplistic version of science that EBP proponents have pushed for the past couple of decades, society will have a simplified form of research but also a far less thorough understanding of science, and less knowledge. These implications suggest that science needs to focus not necessarily on further research but rather on applying the research already in existence. For example, if standardized testing has been so thoroughly debunked, why are educators, scientists, and researchers still using it? Why has this change not been made? What is preventing science from actually following what the research has found, and why have EBP proponents been able to convince others that they are the supporters of science when their conclusions are anything but scientific? How can these changes be made, and what would their results be? What happens when a school embraces a more open and liberal type of education in which standardized testing is not used? What happens when a psychologist practices from a perspective other than CBT? What are the results?

This dissertation is significant because criticism is an important part of the scientific process; indeed, it is an important part of the EBP process itself. Science needs to determine whether its methods are working and having success. Society needs to know and understand whether scientists' explanations of the world have merit and carry any weight. In the case of EBPs, as this dissertation has demonstrated, these methodologies have not proven successful. EBP methodologies themselves state that everything needs to be tested, and if EBPs
are tested, they fail. They have not been successful. Science needs to return to methodologies that are successful and accurate, and an accurate version of science includes an emphasis on philosophy, as this dissertation claims. Because philosophy is part of every subject and of scientific understanding itself, it is impossible to separate philosophy from science. A push to include philosophy more fully in the scientific discussion can only enhance the quality of science and improve discussion and debate. This dissertation shows the importance of philosophy and the role it plays in science. Society must return to the philosophy of science if it wants science to offer an accurate understanding of nature and human behavior. Perhaps Einstein, as he often did, articulated this importance more succinctly than anyone else. In one of his letters to Robert Thornton in 1944, he proclaimed:

I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today—and even professional scientists—seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is—in my opinion—the mark of distinction between a mere artisan or specialist and a real seeker after truth. (Howard & Giovanelli, 2019, p. 2)

References

Aarons, G. A. (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74. https://doi.org/10.1023/b:mhsr.0000024351.12294.65

Addis, M. E., Wade, W. A., & Hatgis, C. (1999). Barriers to dissemination of evidence-based practices: Addressing practitioners' concerns about manual-based psychotherapies. Clinical Psychology: Science and Practice, 6(4), 430–441. https://doi.org/10.1093/clipsy.6.4.430

Al-Ghimlas, F. (2013). The philosophy of evidence-based clinical practice: Is evidence enough? Annals of Thoracic Medicine, 8(3), 131–132. https://doi.org/10.4103/1817-1737.114282

Amabile, T. (1998). How to kill creativity. Creative Management and Development, 77–87.

Amabile, T. M. (1988). A model of creativity and innovation in organizations. Research in Organizational Behavior, 10, 123–167.

Amabile, T. M. (1996). The motivation for creativity in organizations. Harvard Business School Background Note, 1–13.

American Philosophical Association. (n.d.). Philosophy? Retrieved June 04, 2017, from http://www.apaonline.org/page/major

American Psychological Association. (2019). Guidelines and procedures for accreditation of programs in professional psychology. American Psychological Association.

Amrein, A., & Berliner, D. (2002). High stakes testing, uncertainty, and student learning. Educational Policy Analysis Archives, 10. http://epaa.asu.edu/epaa

Begley, S. (2009, October 1). Why psychologists reject science. Retrieved January 22, 2019, from https://www.newsweek.com/why-psychologists-reject-science-begley-81063


Benson, K., & Hartz, A. J. (2000). A comparison of observational studies and randomized, controlled trials. American Journal of Ophthalmology, 130. https://doi.org/10.1056/nejm200006223422506

Berkwits, M. (1998). From practice to research: The case for criticism in an age of evidence. Social Science & Medicine, 47(10), 1539–1545. https://doi.org/10.1016/S0277-9536(98)00232-9

Bhattacharyya, S., Junot, M., & Clark, H. (2013). Can you hear us? Voices raised against standardized testing by novice teachers. Creative Education, 4(10), 633–639. https://doi.org/10.4236/ce.2013.410091

Birge, E. B. (1937). History of public school music in the United States (New and augmented ed.). Oliver Ditson Company.

Boser, U. (2000). Teaching to the test? Education Week, 19, 1–10.

Boser, U., Baffour, P., & Vela, S. (2016, January 26). A look at the education crisis. https://www.americanprogress.org/issues/education-k-12/reports/2016/01/26/129547/a-look-at-the-education-crisis/

Broh, B. A. (2002). Linking extracurricular programming to academic achievement: Who benefits and why? Sociology of Education, 75(1), 69–95.

Buber, M. (1970). I and thou. T. & T. Clark.

Burkholder, J. P., Grout, D. J., & Palisca, C. V. (2014). A history of western music (9th ed.). W.W. Norton & Company.

Campbell, L. F., Worrell, F. C., Dailey, A. T., & Brown, R. T. (2018). Master's level practice: Introduction, history, and current status. Professional Psychology, Research, and Practice, 49, 299–305.


Center for Teaching Quality. (2007). Performance pay for teachers: Designing a system that students deserve.

Chan, A. S., Ho, Y. C., & Cheung, M. C. (1998). Music training improves verbal memory. Nature, 396, 128.

Code, L. (1995). What can she know? Feminist theory and the construction of knowledge. Cornell University Press.

Costa-Giomi, E. (2004). Effects of three years of piano instruction on children's academic achievement, school performance and self-esteem. Psychology of Music, 32(2), 139–152.

Couto, J. S. (1998). Evidence-based medicine: A Kuhnian perspective of a transvestite non-theory. Journal of Evaluation in Clinical Practice, 4(4), 267–275. https://doi.org/10.1111/j.1365-2753.1998.tb00085.x

Csikszentmihalyi, M. (2009). Flow: The psychology of optimal experience. Harper Row.

Csikszentmihalyi, M. (2013). Creativity: The psychology of discovery and invention. Harper Perennial Modern Classics.

Dalal, F. (2018). CBT: The cognitive behavioural tsunami: Politics, power and the corruptions of science. Routledge.

Davies, B. (2000). Troubling gender, troubling academe. In University Structures, Knowledge Production and Gender Construction Conference. University of Copenhagen.

Davies, B. (2003). Death to critique and dissent? The policies and practices of new managerialism and of "evidence-based practice." Gender and Education, 15(1), 91–103.

Dawes, M. (2008). Evidence-based practice: A primer for health care professionals. Elsevier Churchill Livingstone.

Dennis, D. (1995). Brave new reductionism: TQM as ethnocentrism. Education Policy Analysis Archives, 3, 9. https://doi.org/10.14507/epaa.v3n9.1995


Descartes, R., & Cress, D. (1999). Discourse on method and meditations on first philosophy (4th ed.). Hackett Publishing Company.

Duhem, P., & Wiener, P. (1996). The aim and structure of physical theory. Princeton University Press.

Duke, R. A. (2005). Intelligent college teaching: Essays on the core principles of effective instruction. Learning and Behavioral Resources.

The Editors of Encyclopedia Britannica. (2015, April 28). Logical positivism. Retrieved February 15, 2020, from https://www.britannica.com/topic/logical-positivism

Eisenhart, M., & Towne, L. (2003). Contestation and change in national policy on "scientifically based" education research. Educational Researcher, 32(7), 31–38. https://doi.org/10.3102/0013189X032007031

Finney, J. (2002). Music education as aesthetic education: A rethink. British Journal of Music Education, 19(2), 119–134.

Forgeard, M., Winner, E., Norton, A., & Schlaug, G. (2008). Practicing a musical instrument in childhood is associated with enhanced verbal ability and nonverbal reasoning. PLoS ONE, 3(10), e3566.

Gambrill, E. (2000). The role of critical thinking in evidence-based social work. In P. Allen-Meares & C. Garvin (Eds.), The handbook of social work direct practice (pp. 43–63). Sage Publications.

Gambrill, E. (2006). Social work practice: A critical thinker's guide (2nd ed.). Oxford University Press.

Gambrill, E. (2010). Evidence-based practice and the ethics of discretion. Journal of Social Work, 11(1), 26–48. https://doi.org/10.1177/1468017310381306


Gardiner, M. F., Fox, A., Knowles, F., & Jeffrey, D. (1996). Learning improved by arts training. Nature, 381(6580), 284.

Garson, A., Mills, E., & Murphy, T. C. (1974). The Suzuki concept: An introduction to a successful method for early music education. Music Educators Journal, 61(1), 119. https://doi.org/10.2307/3394681

Gibbs, L., & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections. Research on Social Work Practice, 12(3), 452–476. https://doi.org/10.1177/1049731502012003007

Goldenberg, M. J. (2005). On evidence and evidence-based medicine: Lessons from the philosophy of science. Social Science & Medicine, 62(11), 2621–2632. https://doi.org/10.1016/j.socscimed.2005.11.031

Goodman, K. W. (2004). Ethics and evidence-based medicine: Fallibility and responsibility in clinical science. Cambridge University Press.

Gorman, F. (2016). Number of high school students enrolled in music programs. Retrieved October 06, 2016, from http://oureverydaylife.com/number-high-school-students-enrolled-music-programs-3990.html

Graziano, A. B., Peterson, M., & Shaw, G. L. (1999). Enhanced learning of proportional math through music training and spatial temporal reasoning. Neurological Research, 21, 139–152.

Green, L. (2006). Popular music education in and for itself, and for "other" music: Current research in the classroom. International Journal of Music Education, 24(2), 101–118. https://doi.org/10.1177/0255761406065471

Gromko, J., & Poorman, A. (1998). The effect of music training on preschoolers' spatial-temporal task performance. Journal of Research in Music Education, 46, 173–181.


Hammersley, M. (2001). Some questions about evidence-based practice in education. In Annual Conference of the British Educational Research Association. Leeds.

Haney, W. M., Madaus, G. F., & Lyons, R. (1993). The fractured marketplace for standardized testing. Evaluation in education and human services. Springer.

Haynes, R. B. (2002). What kind of evidence is it that evidence-based medicine advocates want health care providers and consumers to pay attention to? BMC Health Services Research, 2(1). https://doi.org/10.1186/1472-6963-2-3

Hearing the music, honing the mind. (2010). Scientific American, 303(5), 16.

Hetland, L. (2000). Learning to make music enhances spatial reasoning. Journal of Aesthetic Education, 34(3–4), 179–238.

Heubert, J., & Hauser, R. (1999). High stakes: Testing for tracking, promoting, and graduation. National Academy Press.

Ho, Y. C., Cheung, M. C., & Chan, A. (2003). Music training improves verbal but not visual memory: Cross-sectional and longitudinal explorations in children. Neuropsychology, 12, 439–450.

Hollon, S. D., Thase, M. E., & Markowitz, J. C. (2002). Treatment and prevention of depression. Psychological Science in the Public Interest, 3(2), 39–77.

Howard, D., & Giovanelli, M. (2019, September 13). Einstein's philosophy of science. Retrieved October 05, 2020, from https://plato.stanford.edu/entries/einstein-philscience/

Howick, J. (2015). The double-edged sword of the evidence-based medicine renaissance: CEBM. Retrieved November 03, 2016, from http://www.cebm.net/double-edged-sword-evidence-based-medicine-renaissance/


Huang, H. (2011). Why Chinese people play Western classical music: Transcultural roots of music philosophy. International Journal of Music Education, 30(2), 161–176. https://doi.org/10.1177/0255761411420955

Husserl, E. (2014). Ideas for a pure phenomenology and phenomenological philosophy: First book: General introduction to pure phenomenology. Hackett Publishing Company.

Jenlink, C. L. (1993). The relational aspects of a school, a music program, and at-risk student self-esteem: A qualitative study [Doctoral dissertation, Oklahoma State University]. Dissertation Abstracts International. https://hdl.handle.net/11244/316911

Johnson, C. M., & Memmott, J. E. (2007). Examination of relationships between participation in school music programs of differing quality and standardized test results. Journal of Research in Music Education, 54(4), 293–307.

Jorgensen, E. R., & Ward-Steinman, P. M. (2015). Shifting paradigms in music education research (1953–1978): A theoretical and quantitative reassessment. Journal of Research in Music Education, 63(3), 261–280. https://doi.org/10.1177/0022429415601690

Josefson, D. (2001). Rebirthing therapy banned after girl died in 70 minute struggle. British Medical Journal, 322(7293), 1014. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1174742/

Kant, I., & Meiklejohn, J. M. (2015). The critique of pure reason. CreateSpace Independent Publishing Platform.

Kazdin, A. E. (2006). Arbitrary metrics: Implications for identifying evidence-based treatments. American Psychologist, 61(1), 42–49. https://doi.org/10.1037/0003-066x.61.1.42

Keene, J. A. (1982). A history of music education in the United States. University Press of New England.


Keith Jarrett: The art of improvisation. (2005). https://www.imdb.com/title/tt2120083/

Kelly, S. N. (2012). Fine arts-related instruction's influence on academic success. Florida Music Director, 8–10.

Keogh, J., Garvis, S., Pendergast, D., & Diamond, P. (2012). Self-determination: Using agency, efficacy and resilience (AER) to counter novice teachers' experiences of intensification. Australian Journal of Teacher Education, 37, 46–65.

Koestner, R., Ryan, R. M., Bernieri, F., & Holt, K. (1984). Setting limits on children's behavior: The differential effects of controlling vs. informational styles on intrinsic motivation and creativity. Journal of Personality, 52(3), 233–248. https://doi.org/10.1111/j.1467-6494.1984.tb00879.x

Kohn, A. (2000). The case against standardized testing: Raising the scores, ruining the schools. Heinemann.

Kratus, J. (2007). Music education at the tipping point. Music Educators Journal, 94(2), 42–48. https://doi.org/10.1177/002743210709400209

Kruse, A. J. (2014). Toward hip-hop pedagogies for music education. International Journal of Music Education, 34(2), 247–260. https://doi.org/10.1177/0255761414550535

Kuhn, T. S. (2015). The structure of scientific revolutions. The University of Chicago Press.

Kumeh, T. (2011). Education: Standardized tests, explained. http://standardizedtests.procon.org/view.answers.php?questionID=001747

Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465. https://doi.org/10.3102/00028312032003465


Law, W.-W., & Ho, W.-C. (2015). Popular music and school music education: Chinese students' preferences and dilemmas in Shanghai, China. International Journal of Music Education, 33(3), 304–324. https://doi.org/10.1177/0255761415569115

Lewontin, R. C. (1991). The doctrine of DNA: Biology as ideology. Harper Perennial.

Lilienfeld, S. (2014, January 27). Evidence-based practice: The misunderstandings continue. Psychology Today.

Luce, J. R. (1965). Sight-reading and ear-playing abilities as related to instrumental music students. Journal of Research in Music Education, 13(2), 101–109. https://doi.org/10.2307/3344447

Mark, M. L., & Gary, C. L. (1992). A history of American music education. Schirmer Books.

Maslow, A. (1986). Religions, values and peak experiences. Penguin Books.

McCluskey, A., & Lovarini, M. (2005). Providing education on evidence-based practice improved knowledge but did not change behaviour: A before and after study. BMC Medical Education, 5(1). https://doi.org/10.1186/1472-6920-5-40

McGraw, K. O., & McCullers, J. C. (1979). Evidence of a detrimental effect of extrinsic incentives on breaking a mental set. Journal of Experimental Social Psychology, 15(3), 285–294. https://doi.org/10.1016/0022-1031(79)90039-8

Miettinen, O. S. (2001). The modern scientific physician: 1. Can practice be science? Journal of Evaluation in Clinical Practice, 165, 441–442.

Molenda, M. (2008, December 1). Multi-dimensional miracles: Carlos Santana celebrates the power of intangibles. Guitar Player. https://www.santana.com/news/guitar-player-magazine-interview-multi-dimensional-miracles-carlos-santana-celebrates-the-power-of-intangibles-by-michael-molenda/


Morrison, K. (2001). Randomised controlled trials for evidence-based education: Some problems in judging what works. Evaluation & Research in Education, 15(2), 69–83. https://doi.org/10.1080/09500790108666984

Murray, S. J., Holmes, D., Perron, A., & Rail, G. (2007). No exit? Intellectual integrity under the regime of "evidence" and "best-practices." Journal of Evaluation in Clinical Practice, 13, 512–516.

Murray, S. J., Holmes, D., & Rail, G. (2008). On the constitution and status of "evidence" in the health sciences. Journal of Research in Nursing, 13, 272–280.

National Association for Music Education (NAfME). (2020, March 24). Retrieved August 14, 2020, from nafme.org/

National Center for Education Statistics. (2009). The nation's report card: Trends in academic progress in reading and mathematics, 2008. http://nces.ed.gov/nationsreportcard/pubs/main2008/2009479.asp

Nelson, T. D., & Steele, R. G. (2007). Predictors of practitioner self-reported use of evidence-based practices: Practitioner training, clinical setting, and attitudes toward research. Administration and Policy in Mental Health and Mental Health Services Research, 34(4), 319–330. https://doi.org/10.1007/s10488-006-0111-x

Nevo, I., & Slonim-Nevo, V. (2011). The myth of evidence-based practice: Towards evidence-informed practice. British Journal of Social Work, 41(6), 1176–1197. https://doi.org/10.1093/bjsw/bcq149

Perrone, V. (1991). On standardized testing. Childhood Education, 132–142.

Popper, K. R. (2002). The logic of scientific discovery. Routledge.


Portowitz, A., Lichtenstein, O., Egorova, L., & Brand, E. (2009). Underlying mechanisms linking music education and cognitive modifiability. Research Studies in Music Education, 31, 107–129.

Price, S. D., Callahan, J. L., & Cox, R. J. (2017). Psychometric investigation of competency benchmarks. Training and Education in Professional Psychology, 11(3), 128–139. https://doi.org/10.1037/tep0000133

Quine, W. V. (2013). Word and object. MIT Press.

Rauscher, F. H., & Zupan, M. A. (1999). Classroom keyboard instruction improves kindergarten children's spatial-temporal performance: A field study. Early Childhood Research Quarterly, 15(2), 215–228.

Rebora, A. (2012). Teachers place little value on standardized testing. Education Week, 31(14).

Reimer, B. (1997). Should there be a universal philosophy of music education? International Journal of Music Education, os-29(1), 4–21. https://doi.org/10.1177/025576149702900103

Rose, N. S. (2010). Powers of freedom: Reframing political thought. Cambridge University Press.

Rubin, A., & Parrish, D. (2006). Views of evidence-based practice among faculty in master of social work programs: A national survey. Research on Social Work Practice, 17(1), 110–122. https://doi.org/10.1177/1049731506293059

Rury, J. L. (2009). Excerpt on the common school reform movement (1830s–60s). Education and social change: Contours in the history of American schooling, 74–80.

Ryan, J. (2013, December 3). American schools vs. the world: Expensive, unequal, bad at math. The Atlantic.


Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). Churchill Livingstone.

Sackett, D. L., & Rosenberg, W. M. (1995). On the need for evidence-based medicine. Journal of Public Health, 17, 330–334.

Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71

Salazar, R., & Randles, C. (2015). Connecting ideas to practice: The development of an undergraduate student's philosophy of music education. International Journal of Music Education, 33(3), 278–289. https://doi.org/10.1177/0255761415581150

Sambar, C. (2001). Pros and cons of standardized tests. http://www.sambar.com/chuck/pros.htm

Schellenberg, E. G. (2004). Music lessons enhance IQ. Psychological Science, 15(8), 511–514.

Schrag, P. (2000). High stakes are for tomatoes. The Atlantic Monthly, 286, 19–21.

Sehon, S. R., & Stanley, D. E. (2003). A philosophical analysis of the evidence-based medicine debate. BMC Health Services Research, 3(1). https://doi.org/10.1186/1472-6963-3-14

Shedler, J. (2017). Where is the evidence for "evidence-based" therapy? Journal of Psychological Therapies in Primary Care, 47–59.

Shim, W., & Walczak, K. (2012). The impact of faculty teaching practices on the development of students' critical thinking skills. International Journal of Teaching and Learning in Higher Education, 24(1), 16–30.

Sifford, K. (2012). On the benefits of a philosophy major. Retrieved October 08, 2016, from https://pleasandexcuses.com/2012/09/06/philosophy-major/


Silk, M. L., Bush, A., & Andrews, D. L. (2010). Contingent intellectual amateurism, or, the problem with evidence-based research. Journal of Sport and Social Issues, 34(1), 105–128. https://doi.org/10.1177/0193723509360112

Sindberg, L. (2012). Just good teaching: Comprehensive musicianship through performance (CMP) in theory and practice. Rowman & Littlefield Education.

Skoe, E., & Kraus, N. (2012). A little goes a long way: How the adult brain is shaped by musical training in childhood. The Journal of Neuroscience, 32(34), 11507–11510.

Slavin, R. E. (2008). Perspectives on evidence-based research in education—what works? Issues in synthesizing educational program evaluations. Educational Researcher, 37(1), 5–14. https://doi.org/10.3102/0013189X08314117

Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. American Psychologist, 32(9), 752–760. https://doi.org/10.1037//0003-066x.32.9.752

Springer, D. W. (2007). The teaching of evidence-based practice in social work higher education—Living by the Charlie Parker dictum: A response to papers by Shlonsky and Stern, and Soydan. Research on Social Work Practice, 17(5), 619–624. https://doi.org/10.1177/1049731506297762

Stambaugh, L. A., & Dyson, B. E. (2016). A comparative content analysis of Music Educators Journal and Philosophy of Music Education Review (1993–2012). Journal of Research in Music Education, 64(2), 238–254. https://doi.org/10.1177/0022429416646997

Standley, J. M. (2008). Does music instruction help children learn to read? Evidence of a meta-analysis. Update: Applications of Research in Music Education, 27(1), 17–32.

Sternberg, R. J. (1998). A balance theory of wisdom. Review of General Psychology, 2, 347–365. http://dx.doi.org/10.1037/1089-2680.2.4.347

Strauss, V. (2006). The rise of the testing culture. The Washington Post, A09.


Taruskin, R., & Gibbs, C. H. (2013). The Oxford history of Western music. Oxford University Press.

Tonelli, M. R. (1998). The philosophical limits of evidence-based medicine. Academic Medicine, 73(12), 1234–1240. https://doi.org/10.1097/00001888-199812000-00011

Upshur, R. (2003). Are all EBPs alike? Problems in the ranking of evidence. Canadian Medical Association Journal, 671–673.

Wampold, B. E., Budge, S. L., Laska, K. M., Del Re, A. C., Baardseth, T. P., Flückiger, C., Minami, T., Kivlighan, D., & Gunn, W. (2011). Evidence-based treatments for depression and anxiety versus treatment-as-usual: A meta-analysis of direct comparisons. Clinical Psychology Review, 31(8), 1304–1312. https://doi.org/10.1016/j.cpr.2011.07.012

Werner, K. (2015). Effortless mastery: Liberating the master musician within. Jamey Aebersold Jazz.

Williams, D. A. (2007). What are music educators doing and how well are we doing it? Music Educators Journal, 94(1), 18–23. https://doi.org/10.1177/002743210709400105

Winnicott, D. (1985). Playing and reality. Penguin.

Wolf, D. P., LeMahieu, G. P., & Eresh, J. (1992). Good measure: Assessment as a tool for educational reform. Educational Leadership, 49, 8–13.

Wolfe-Christensen, C., & Callahan, J. L. (2008). Current state of standardization adherence: A reflection of competency in psychological assessment. Training and Education in Professional Psychology, 2(2), 111–116. https://doi.org/10.1037/1931-3918.2.2.111

Wong, P., Skoe, E., Russo, N., Dees, T., & Kraus, N. (2007). Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nature Neuroscience, 10, 420–422. https://doi.org/10.1038/nn1872