Table of Contents:
About the Book
About the Author
List of Figures
List of Tables
Chapter 1: Introduction
1.1 The Public’s Trust in Scientific Endeavours and Researchers
1.2 Overview of Metascience
1.3 Benefits of Metascience
1.3.1 Reduce Article Retractions
1.3.2 Prevent Flawed Research from Being Incorporated into Academic Disciplines
1.3.3 Reducing the Wastage of Financial Resources
1.4 Pedagogical Features in This Book
1.5 Objective of This Book
1.6 Summary of Upcoming Chapters
1.6.1 Chapter 2: Preventing the Certification and Proliferation of Specious Research
1.6.2 Chapter 3: Addressing the Reproducibility Crisis
1.6.3 Chapter 4: Evaluating and Improving the Peer Review Process
1.6.4 Chapter 5: Reducing Questionable Research Practices
1.6.5 Chapter 6: Creating Studies That Are Respectful of Autistic Participants
Chapter 2: Preventing the Certification and Proliferation of Specious Research
2.1 The Creation of Predatory Publishers and Beall’s List
2.2 Consequences of Predatory Publishers
2.2.1 Corrupting Research
2.2.2 Undermining the Training of Scholars
2.2.3 Increased Email Correspondence to Academics
2.3 Checklists and Flow Diagrams to Identify Predatory Journals
Chapter 3: Addressing the Reproducibility Crisis
3.1 Defining Reproducibility
3.2 Prevalence of Irreproducible Autism Research
3.3 Strategies to Improve the Reproducibility of Autism Research
3.3.1 Archiving Datasets
3.3.2 Journal Requesting Datasets from Authors
3.3.3 Open Science Badges
3.3.4 Reforming Academic Hiring Practices to Promote Reproducible Research
3.3.5 Pre-registering Studies
3.3.6 Improving the Readability of a Study’s Methodology
3.3.7 Requiring Researchers to Self-examine Their Previous Research
3.3.8 Declaring All Conflicts of Interest
3.3.9 Declaring all Sources of Funding
3.3.10 Publicise Materials
3.3.11 Creating a Journal That Only Publishes Replications of Autism Research
Chapter 4: Evaluating and Improving the Peer Review Process
4.1 The Peer Review Process
4.2 Drawback One: The Pervasive Incentives Placed on Academics to Publish Manuscripts
4.3 Drawback Two: Publication Bias
4.4 Drawback Three: Inconsistent Publishing Policies Between Journals
4.5 Drawback Four: Redundancy in Repeating the Peer Review Process
4.6 Drawback Five: Lack of Formal Education for Early Career Researchers About How to Peer Review a Manuscript
4.7 Drawback Six: Inconsistent Reviews of the Entire Manuscript
4.8 Drawback Seven: The Impacts of Unprofessional Comments by Peer Reviewers
4.9 Drawback Eight: Ambiguity About Citing Preprinted Articles
Chapter 5: Reducing Questionable Research Practices
5.1 Defining Questionable Research Practices
5.1.1 Cherry Picking
5.1.2 P-Hacking
5.1.3 Hypothesising After Results Are Known
5.2 Prevalence of Questionable Research Practices
5.3 Factors That Can Create Questionable Research Practices
5.4 Strategies to Reduce Questionable Research Practices
5.4.1 Registering a Study’s Design
5.4.2 Reforming Grant Awarding Agencies
5.4.3 Educating Scholars About Questionable Research Practices
5.4.4 Reforming the Holistic Process of Publishing Research
5.4.5 Using Evidence-Based Language
5.4.6 Reforming the Publish or Perish Culture
5.4.7 Removing the Financial Incentives for Academic Publishing
5.4.8 Creating an Independent Research Integrity Agency
5.4.9 Developing a Confidential Reporting System
Chapter 6: Creating Studies That Are Respectful of Autistic Participants
6.1 What Is Participatory Research?
6.1.1 Cultivating and Maintaining Respect
6.1.2 Establishing Authenticity
6.1.3 Correcting False Assumptions
6.1.4 Allocating Funding for Stakeholder Engagement
6.1.5 Building Empathy Between Autistic Participants and Non-autistic Researchers
6.2 Establishing an Audit Trail
6.3 Establishing an Autism Advisory Panel
6.4 Identifying Any Risks and Benefits with the Research
6.5 Preparing for the Interview or Focus Group
6.5.1 Preparing for the Interview
6.5.2 Preparing for the Focus Group
6.5.3 Giving the Participants the Interview or Focus Group Questions Before the Study
6.6 Providing Support
6.7 Signing the Consent Form
6.8 Conducting the Interview or Focus Group
6.8.1 Identify When Qualitative Research Interviews Are Appropriate
6.8.2 Familiarise Yourself with the Topic
6.8.3 Create an Interview Guide and Test the Questions
6.8.4 Consider the Cultural and Power Dynamics of the Interview Situation
6.8.5 Building Rapport with the Participants
6.8.6 The Interviewer Is the Co-creator of the Data
6.8.7 Talk Less and Listen More
6.8.8 Adjusting the Interview Guide
6.8.9 Prepare for Unanticipated Emotions
6.8.10 Transcribe the Interviews in Good Time
6.8.11 Check the Data
6.8.12 Initiate the Analysis Early
6.9 Recording the Interview Session or Focus Group
6.10 Conducting Virtual Interviews
6.11 Guides to Facilitate the Participation of Autistics in Research
Appendix A: Results from the PubMed Search
Appendix B: Open Science Badges Published by the Center for Open Science
Appendix C: AutismCRC Participatory and Inclusion Guidelines
Participatory Research Practice Guide 1: Consulting with Autistic People in Research
What Is Consultation?
Methods of Consultation
Practical Strategies for Consultation
Participatory Research Practice Guide 2: Co-producing Research with Autistic People
What Is Co-production?
Principles of Co-production
Guidelines for Including Autistic Adults as Co-researchers
Practical Strategies for Co-production
Participatory Research Practice Guide 3: Supporting Autistic People to Produce Community-Led Research
What Is Community-Led Research?
Supporting Community-Led Research
Inclusive Research Practice Guide 1: Involving Autistic People as Research Participants
Involving Autistic People as Research Participants
Participant Information and Consent
Planning Face-to-Face Research
Planning Group Research
Inclusive Research Practice Guide 2: Disseminating Research Findings
What Is Dissemination?
Dissemination Methods and Materials
Planning for Dissemination
What Makes Dissemination Effective?
Reporting on Community Engagement
Applying Metascientific Principles to Autism Research
Matthew Bennett
Independent Researcher, Adelaide, SA, Australia
ISBN 978-981-19-9239-1    ISBN 978-981-19-9240-7 (eBook)
https://doi.org/10.1007/978-981-19-9240-7

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: Pattern © Melisa Hasan

This Palgrave Macmillan imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.
This book is dedicated to future generations of scholars who will devote their professional lives to the study of autism.
Having watched and been a part of the evolution of autism research over the past two decades, I am excited by what the coming years will bring. Indeed, things are changing rapidly. Our understanding of autism is changing, including what it is, what it means in people’s lives, and what it means for people’s identities. Our thinking about autism is changing. While slow, we are seeing the gradual identification and erosion of ableist views and attitudes about autistic people, including their way of understanding and interacting with other people and the world. These changes have been driven largely by the generous and selfless sharing of insights by autistic people about their lives, views, preferences, and priorities. I see clear evidence of a growing desire, understanding, and commitment from both the autistic and non-autistic communities to work together to bring about positive change in society, as the basis for upholding individual rights and supporting meaningful and valued change in individual people’s lives in ways that they choose and desire.

But is autism research changing? After all, research can play an invaluable role in inspiring, informing, and supporting positive change, if done correctly. Few people are as qualified as Matthew Bennett to answer this question. Bringing together his lived expertise as an autistic person, his research expertise, and his professional expertise in roles that sit at the interface of research, policy, and practice, Matthew is uniquely positioned to offer an authoritative and insightful view. Having read the chapters that follow, my impression is that the answer to the question is yes, autism research is changing, but not enough, and not fast enough.
This book sets out a roadmap for how autism research can be, and should be, conducted. I say can be conducted because for many readers it will open their eyes to new ways of thinking about research itself, such as in Chap. 6, which outlines how to create studies that are respectful of autistic participants. I say should be conducted because the book also provides readers with a masterclass on how to engage in high-quality research and address the proliferation of low-quality research in a practical and applied way.

Chapter 1 introduces the book and provides a concise summary of what metascience is, why it matters, and how the book aims to support readers in designing, carrying out, interpreting, reporting, and applying high-quality research and its findings. Chapter 2 addresses the issue of misleading research and provides guidance to readers about recognising and avoiding predatory publishers. Chapter 3 addresses the reproducibility crisis in autism research; while not unique to this field, Matthew eloquently presents the challenges it poses and how readers can help address these in their research. Chapters 4 and 5 provide guidance on improving peer review processes and avoiding questionable research practices, respectively. As noted, Chap. 6 provides an insightful explanation of how readers can be partners in, and champions of, research that is respectful of autistic participants.

In setting out a roadmap for change, Matthew presents a challenge. My reading is that he challenges readers to embrace new and more appropriate ways of conducting autism research and to do so in a principled manner. This means being ethical and professional, but also humanistic and individualised in our approach. Indeed, most striking to me in this book is the constant connection between what needs to be done to improve autism research, how as individuals we can contribute, and why it matters so much to individual autistic people and those closest to them.
To this end, the book should be a valuable resource for anyone wanting to contribute to positive change moving forward.

David Trembath
Menzies Health Institute Queensland, Griffith University, Southport, QLD, Australia
CliniKids, Telethon Kids Institute, Subiaco, WA, Australia
In the decades after Leo Kanner published his seminal paper, titled ‘Autistic Disturbances of Affective Contact’, countless generations of academics have devoted their professional careers to improving our understanding of autism. Their efforts should be commended, since, among other things, they have given us insights into what it is like to be autistic and what can be done to support autistics in society. However, in their unrelenting quest for new knowledge, academics rarely stop and self-examine the integrity, robustness, and limitations of their actions as researchers. For example, most do not preserve, or even contemplate preserving, their datasets so that others can confirm their findings by repeating an identical analysis. One consequence of this practice is that the truthfulness of findings, including some that have been used to pave the way for newer research, will forever remain uncertain.

Most books focus on a particular aspect of autism. In contrast, relatively few explain the shortcomings associated with the creation of this knowledge. This book addresses this deficit by giving readers the opportunity to contemplate how we have acquired knowledge about autism instead of what has been discovered. Different readers can obtain various benefits from this conceptual distinction. Scholars can gain valuable insights about how they can improve the techniques that they use to create research about autism. Alternatively, if the reader is an editor of an academic journal, they can learn about some of the shortcomings of the peer review process and some applicable solutions. Equipped with this knowledge, they can alter the peer review process they oversee to ensure that the research that is published and disseminated is more reliable.

There is an ongoing debate in the field of autism spectrum research, as well as in the broader field of disability studies, about the most appropriate terminology for addressing members of the autistic community. Some prefer person-first language (i.e., people on the autism spectrum) while others prefer identity-first language (i.e., autistic person). Throughout this book I use identity-first language, since research has shown that most autistics prefer this language convention. Furthermore, it is my belief, as an autistic researcher, that since the autism spectrum is an inseparable part of a person’s identity, the word autistic should be used instead of person with autism or person on the autism spectrum.

Adelaide, SA, Australia
About the Book
There is a considerable amount of research about the autism spectrum that has both improved our understanding of autism and created strategies that can support autistics and their families. However, metascientific strategies that can improve the production of this research have not been comprehensively applied to this field. Consequently, the quality and integrity of research about the autism spectrum are lacking compared to other fields. This book outlines several metascientific concepts and how they can be applied to the study of autism. Equipped with this knowledge, the reader, regardless of their experience, will be able to produce high-quality research about the autism spectrum that will withstand continual scrutiny.
About the Author
Matthew Bennett, PhD, is a researcher who examines various aspects of the autism spectrum. He believes that one of the best ways to improve the lives of autistics is through the creation and dissemination of knowledge that has translational benefits for the autistic community. He has published several books about the autism spectrum, including:

1. Bennett, M., Webster, A. A., Goodall, E., & Rowland, S. (2019). Life on the autism spectrum: Translating myths and misconceptions into positive futures. Springer. https://doi.org/10.1007/978-981-13-3359-0
2. Bennett, M., Goodall, E., & Nugent, J. (2020). Choosing effective support for people on the autism spectrum: A guide based on academic perspectives and lived experience. Routledge. https://doi.org/10.4324/9780367821975
3. Bennett, M., & Goodall, E. (2021). Sexual behaviours and relationships of autistics: A scoping review. Springer International Publishing AG. https://doi.org/10.1007/978-3-030-65599-0
4. Bennett, M., & Goodall, E. (2021). Employment of persons with autism: A scoping review. Springer International Publishing AG. https://doi.org/10.1007/978-3-030-82174-6
5. Bennett, M., & Goodall, E. (2022). Addressing underserved populations in autism spectrum research: An intersectional approach. Emerald Publishing Group. https://doi.org/10.1108/9781803824635
6. Bennett, M., & Goodall, E. (2022). Autism and COVID-19: Strategies for supporters to help autistics and their families. Emerald Publishing Group. https://doi.org/10.1108/9781804550335
Abbreviations

ASA — American Statistical Association
AutismCRC — Cooperative Research Centre for Living with Autism
ECRs — Early career researchers
EQUATOR — Enhancing the QUAlity and Transparency Of health Research
HARKing — Hypothesising After Results are Known
IJPM — Indian Journal of Psychological Medicine
QRPs — Questionable research practices
RQI — Review quality instrument
List of Figures

Fig. 1.1 Number of articles that referenced Wakefield et al.’s (1998) fraudulent article by year after the notice of retraction was published. (Source: Suelzer et al., 2019, p. 5)
Fig. 2.1 The relationships between different types of journals and the author and reader. (Source: Richtig et al., 2018, p. 1443)
Fig. 3.1 Reproducible, robust, replicable, and generalisable research. (Source: Leipzig et al., 2021, p. 2)
Fig. 3.2 Flowchart of the manuscripts handled by Tsuyoshi Miyakawa in Molecular Brain from December 2017 to September 2019. (Source: Miyakawa, 2020, p. 2)
Fig. 3.3 Logos of Open Science badges by the Centre for Open Science. (Source: American Psychiatric Association Journals, 2022)
Fig. 4.1 Task of peer reviews. (Source: Glonti et al., 2019, p. 12)
Fig. 4.2 Biases at different stages of the research process. (Source: Williams et al., 2020, p. 2)
Fig. 4.3 EQUATOR reporting guideline decision tree. ARRIVE, Animal Research Reporting of In Vivo Experiments; CARE, CAse Report; CHEERS, Consolidated Health Economic Evaluation Reporting Standards; CONSORT, CONsolidated Standards of Reporting Trials; MOOSE, Meta-analysis Of Observational Studies in Epidemiology; PRISMA, Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols; SPIRIT, Standard Protocol Items: Recommendations for Interventional Trials; SQUIRE, Standards for QUality Improvement Reporting Excellence; SRQR, Standards for Reporting Qualitative Research; STARD, Standards for Reporting Diagnostic Accuracy; STREGA, STrengthening the REporting of Genetic Association Studies; STROBE, STrengthening the Reporting of OBservational studies in Epidemiology; TRIPOD, Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis. (Source: Struthers et al., 2021, p. 4)
Fig. 4.4 Publishing processes for peer review and preprint manuscripts. (Notes: (A) Traditional peer review publishing workflow and (B) Preprint submission establishing priority of discovery. Source: Tennant et al., 2019, p. 3)
Fig. 5.1 The Centre for Open Science outline for pre-registration of studies. (Source: Centre for Open Science, 2022)
Fig. 5.2 Recommendations for authors, editors, and reviewers of experimental philosophy studies. (Source: Polonioli et al., 2021, p. 67)
Fig. 5.3 Suggested ranges to approximately translate the p value into the language of evidence. (Notes: Boundaries should not be understood as ‘hard’ thresholds. Source: Muff et al., 2022, p. 206)
Fig. 6.1 Process to prepare interview data for analysis. (Source: Class et al., 2021, p. 7)
List of Tables

Table 1.1 Five major aspects of metascience
Table 3.1 Reproducibility indicators
Table 3.2 Ten tips for good research data management
Table 4.1 Examples of strategies that can reduce publication bias
Table 4.2 Five recommendations by Ravinetto et al. (2021) about preprints
Table 5.1 Definitions of different questionable research practices
Table 5.2 Four-stage process of lodging and investigating allegations of academic misconduct and fraud proposed by Fischhoff and colleagues
Table 6.1 The inclusive research protocol outlined by Arnold and colleagues
Chapter 1: Introduction

Abstract: This chapter provides a concise summary of metascience and why it matters to researchers who study the autism spectrum. It also furnishes the reader with an overview of this book’s upcoming chapters.

Keywords: Article retractions • Peer review process • Predatory publishers • Questionable research practices • Reproducibility crisis • Respectful research practices for autistic participants
1.1 The Public’s Trust in Scientific Endeavours and Researchers

Undoubtedly, medical science has improved public health, quality of life, and life expectancy. For instance, it has resulted in the creation of effective vaccines that have protected people against developing coronavirus disease during the COVID-19 pandemic. Despite such benefits, a survey by the Pew Research Centre (2022), which sampled 14,497 adults in the United States from 30 November to 12 December 2021, showed that only 29% of respondents had a high level of confidence in medical scientists.
There are two factors that have reduced the public’s trust in science and researchers. First, high-profile cases of academic misconduct, in which researchers deliberately create fraudulent studies with the intention of obtaining research funding, have eroded the integrity of science disciplines (Marks & Buchanan, 2020; Miles, 2022; Pan & Chou, 2020; SAGE Publications, 2014). Second, due to the demanding and precarious nature of academic employment, researchers are more concerned with publishing high-impact research that can improve their prospects of obtaining both grants and employment (Eisner, 2018). Working under such conditions, researchers have little incentive to improve the standard of their research by incorporating, and then practising, quality control strategies in their research activities.

To prevent any further deterioration of public confidence and trust in research, scholars have started to self-examine their actions and reform the processes that they use to create new knowledge. In conjunction with publishing new discoveries, researchers are starting to describe limitations in the processes they have used to create the research itself. For example, to avoid their neurological research about autism succumbing to citation bias, Nebel et al. (2022, p. 14) wrote a ‘citation diversity statement’ in their article, in which they stated: “Before writing this manuscript, we set an intention of selecting references that reflect the diversity of the neuroscience and statistics fields in the form of contribution, gender, race, and ethnicity”. Along with such acknowledgements and changes to searching for relevant research, other practices intended to improve the quality of research are also being implemented; for instance, publishing datasets along with studies and writing data availability statements in manuscripts are starting to become typical conventions in the production of research.
The need for such changes in research practices has been expressed by Vazire and Holcombe (2021, p. 9), who stated: The American president Ronald Reagan, in negotiation with his Soviet counterpart Mikhail Gorbachev, frequently invoked a Russian proverb that means “trust, but verify” (“Doveryai, no proveryai”). At one time, perhaps, there was little reason to suspect that the sciences could not be trusted and that verification was necessary. The replication crisis, along with the relatively new phenomenon of scientists being spectacularly wrong in public (as we saw with some high-profile errors in COVID-19 research), has highlighted the crucial role of verification in buttressing the credibility of science.
This book does not reveal and discredit any actual or potential fraudulent research about the autism spectrum. Such debunking will be left to other scholars, especially those willing to challenge established findings. Instead, some metascientific strategies that can improve the production and quality of research about the autism spectrum are the focus of this book.
1.2 Overview of Metascience

Metascience, also called meta-research, is the academic discipline concerned with evaluating how research is produced and disseminated. Khakshooy et al. (2020, p. 5) claimed that “metascience involves the use of new and stringent scientific methodology to study science itself for raising the overall quality of scientific knowledge”. They also stated that “the very goal of metascience is to ensure that scientific progress and information grow from accurate, systematically verified, statistically incontrovertible, and unquestionably true facts” (Khakshooy et al., 2020, p. 5). Similarly, Ioannidis et al. (2015, p. 3) explained that metascience “involves taking a bird’s eye view of science”. Although not a complete list, Ioannidis et al. (2015) proposed that metascience is composed of five distinct aspects: methods, reporting, reproducibility, evaluation, and incentives (see Table 1.1).
1.3 Benefits of Metascience

1.3.1 Reduce Article Retractions

Within several fields, including dentistry and biomedical research, the number of articles being retracted has increased (Rapani et al., 2020; Wang et al., 2019). Rapani et al. (2020) examined 180 papers about dentistry that were published between 2001 and 2018. They reported that retractions had increased by 47% from 2014 to 2018 (n = 94) compared with 2009 to 2013 (n = 64). They also reported that the most common reason for an article being retracted was author misconduct (65%), followed by honest scientific errors (12.2%) and publisher-related issues (10.6%). Wang et al. (2019) examined the rate at which, and the reasons why, articles about biomedical research were retracted from open access journals. They claimed that since 2010 the number and rate of retractions had increased. The most common reason for these retractions was errors (24.6%),
Table 1.1 Five major aspects of metascience

Methods: "performing research" (study design, methods, statistics, research synthesis, collaboration, and ethics). Specific interests (nonexhaustive list): biases and questionable practices in conducting research and methods to reduce such biases; meta-analysis, research synthesis, and integration of evidence; cross design synthesis; collaborative team science and consortia; research integrity and ethics.

Reporting: "communicating research" (explaining, disseminating, and popularising research; information to patients, public, and policy makers). Specific interests (nonexhaustive list): biases and questionable practices in reporting and methods to monitor and reduce such issues; reporting standards; study registration; conflicts of interest disclosure and management; other bias prevention measures.

Reproducibility: "verifying research" (sharing data and methods; replication studies; replicability, reproducibility, and self-correction). Specific interests (nonexhaustive list): obstacles to sharing data and methods; replicability and repeatability; reproducibility of published research and methods to improve them; effectiveness of correction and self-correction of the literature and methods to improve it.

Evaluation: "evaluating research" (prepublication peer review, postpublication peer review, research funding criteria, and other means of evaluating scientific quality). Specific interests (nonexhaustive list): effectiveness, costs, and benefits of old and new approaches to peer review and other science assessment methods, and methods to improve them.

Incentives: "rewarding research" (promotion criteria, rewards, and penalties in research evaluation for individuals, teams, and institutions). Specific interests (nonexhaustive list): accuracy, effectiveness, costs, and benefits of old and new approaches to ranking and evaluating the performance, quality, and value of research, individuals, teams, and institutions.

Source: Ioannidis et al. (2015, p. 3)
followed by plagiarism (22.9%), duplication of a study (16.3%), fraud or suspicion of fraud (15.8%), and invalid peer review processes (15%).

Three metascientific practices can help reduce the number of articles that are retracted. First, journals can adopt strategies intended to raise the standard of the research that they publish, such as pre-registration and peer review of a study's design, or mandating that authors and peer reviewers use the most appropriate Enhancing the QUAlity and Transparency Of health Research (EQUATOR) checklist before submitting a manuscript or completing a review. Second, agencies that award grants can require prospective applicants to meet higher standards of
research transparency and production. Such applicants could be asked, for example, to submit their study's results and dataset to the agency so that it can independently verify the reported results (i.e., reproducible research) (Diong et al., 2021). Third, emerging scholars could receive training in how to peer review a paper. Equipped with this training and experience, they should be able to identify flawed manuscripts before they are published. Such preventative action would decrease the number of articles that are retracted (Dennehy et al., 2021).

1.3.2 Prevent Flawed Research from Being Incorporated into Academic Disciplines

Studies published by predatory publishers receive either inadequate or no peer review. Consequently, some are incorporated into other studies, which ultimately undermines the integrity of the discipline. This sentiment was articulated by Beall (2013, p. 47), who stated:

[S]cience is cumulative—contemporary research builds on research recorded as part of the scholarly record. Because many predatory publishers do a fake or minimal peer review, it is possible for bogus research to be published in these journals, masquerading as real science. This work can then get cited in legitimate journals, dirtying future science.
Several studies have lent credence to Beall's view (Rapani et al., 2020). In the field of autism spectrum research, for example, Suelzer et al. (2019) examined whether 1153 studies that cited Wakefield et al.'s (1998) fraudulent study noted that it had been retracted. They reported that after a notice of retraction was published in 2010, the number of studies each year that noted this retraction was larger than the number that did not (see Fig. 1.1). Despite this trend, Suelzer et al. (2019, p. 1) stated:

A significant number of authors did not document retractions of the article by Wakefield et al. The findings suggest that improvements are needed from publishers, bibliographic databases, and citation management software to ensure that retracted articles are accurately documented.
From a metascientific perspective, there are three strategies that can be used to reduce the prospect of fraudulent and flawed research being incorporated into academic disciplines. First, Cukier et al.’s (2020) checklist for
Fig. 1.1 Number of articles that referenced Wakefield et al.’s (1998) fraudulent article by year after the notice of retraction was published. (Source: Suelzer et al., 2019, p. 5)
identifying predatory publishers can be used by researchers to ensure that they do not incorporate into their research any articles that have been published by such publishers. Second, a series of modifications to improve the peer review process can be implemented, such as requiring authors to submit the relevant EQUATOR checklist and dataset along with their manuscript for peer review. Some of these suggestions will be explained in more detail in Chap. 4. Third, like Beall's list, a constantly updated website could be created on which predatory journals and the articles that they have published are blacklisted. This suggestion, however, may not be feasible given the rapid increase in the number of predatory publishers (Shen & Björk, 2015).

1.3.3 Reducing the Wastage of Financial Resources

Within the field of metascience, the term reproducibility refers to validating a study's findings by repeating the same analytical procedures on the same dataset. Freedman et al. (2015) have estimated that during
2015 in the United States of America approximately 50% of all funding for preclinical research (i.e., US$28.2 billion) went to studies that were irreproducible. Stern et al. (2014) estimated that approximately US$58 million in direct National Institutes of Health funding allocated between 1992 and 2012 was spent on articles that were later retracted. By improving the quality of research designs, as metascientists commonly advocate, it is possible that less public funding will be spent on inferior research that might be retracted and/or irreproducible.
1.4 Pedagogical Features in This Book

Readers learn in different ways, and to cater for these differences this book has a variety of pedagogical features that emphasise key concepts, promote better information retention, and consolidate what the reader has learned. This book contains the following features:

• Diagrams or illustrations: Some readers learn concepts best via diagrams and/or illustrations. Whenever a concept can be explained visually, an illustration or diagram is provided.

• Checklists and guidelines: This book contains an EQUATOR reporting guideline decision tree that readers can use to improve their ability to judge a manuscript's quality (see Fig. 4.3). This book also contains guidelines to help researchers incorporate autistic participants in their research activities (see Appendix C).
1.5 Objective of This Book

Since autism was first described there has been a steady increase in the amount of research about the autism spectrum. This trend should be applauded, as it has allowed us to understand what it can be like to be autistic. However, whilst these findings have improved our knowledge about the autism spectrum, they have come at the expense of scrutinising and refining the processes used to create this knowledge. The objective of this book is to address this imbalance by applying aspects of metascience to the current processes for creating research about the autism spectrum. To achieve this objective, this book has five upcoming chapters, which will now be outlined.
1.6 Summary of Upcoming Chapters

1.6.1 Chapter 2: Preventing the Certification and Proliferation of Specious Research

Predatory publishers charge authors an article processing fee in exchange for publishing their manuscript after either an inadequate or no peer review. Chapter 2 begins with an explanation of the creation of predatory publishers and Beall's list. This is followed by three detrimental consequences of predatory publishers, namely corrupting research, undermining the training of scholars, and increasing email correspondence to academics. To inhibit the proliferation of predatory publishers, and thereby curtail the certification and dissemination of inferior research, tools to identify such publishers are outlined. The intention of this chapter is to help maintain the credibility of autism spectrum research by preventing the integration of studies from predatory publishers into this field of research.

1.6.2 Chapter 3: Addressing the Reproducibility Crisis

Within several disciplines, including biomedical and neurological research, the results of studies cannot be confirmed because the study's dataset has been destroyed and/or the processes used to analyse the dataset are inadequately explained and therefore cannot be repeated. Chapter 3 explains the reproducibility crisis. It begins with a definition of the term reproducibility and its distinctions from robustness, generalisability, and replicability. Common factors that can improve reproducibility are then explained. These factors are then applied to eligible studies listed on PubMed that were published from 1 to 31 August 2022 with the phrase autis* in the manuscript's title. This exercise shows the extent of reproducible practices within research about the autism spectrum. This chapter concludes with a series of strategies to improve the reproducibility of this research.
The intention of this chapter is to contribute to the debate about how the practice of creating autism research can be changed to ensure that the findings produced can be verified using reproducibility.
1.6.3 Chapter 4: Evaluating and Improving the Peer Review Process

Typically, a study must pass a peer review examination before it is deemed credible and worthy of publication. Although commonly used, this process is not flawless. Chapter 4 begins with a description of the peer review process, followed by eight drawbacks of this process and possible solutions to them. The intention of this chapter is to outline several strategies that can improve the peer review process and ultimately inhibit flawed research about the autism spectrum from being certified as correct.

1.6.4 Chapter 5: Reducing Questionable Research Practices

Questionable research practices (QRPs), which are not outright academic fraud and misconduct but occupy an ethical grey zone, are the subject of Chap. 5. This chapter begins with a definition of QRPs and three main types of QRPs, namely Hypothesising After Results are Known (HARKing), cherry picking, and p-hacking. The prevalence of QRPs, and the factors that contribute to them, are then outlined. This chapter concludes with a series of suggestions to reduce the prospect of QRPs occurring. The purpose of this chapter is to improve the reader's awareness of QRPs and how they can undermine the creation of knowledge about the autism spectrum.

1.6.5 Chapter 6: Creating Studies That Are Respectful of Autistic Participants

Traditionally, researchers decided what topics about the autism spectrum would be researched and how studies would be performed, including the study's aims, interview questions, and recruitment strategies. Recently, a paradigm shift has occurred in which autistic participants have more influence in deciding what topics they would like examined and how the study should be conducted. This type of research is commonly termed participatory research, and it is the focus of Chap. 6. This chapter describes some of the main findings in the literature about how to conduct participatory research with autistics.
It also explains some of the steps typically required to perform a study, such as obtaining consent to
participate in the study and strategies for conducting an interview or focus group. The purpose of this chapter is to equip researchers with knowledge and tools that they can use to produce research that is more representative of the interests of autistics.
References

Beall, J. (2013). Medical publishing triage—Chronicling predatory open access publishers. Annals of Medicine and Surgery, 2(2), 47–49. https://doi.org/10.1016/S2049-0801(13)70035-9
Cukier, S., Helal, L., Rice, D. B., Pupkaite, J., Ahmadzai, N., Wilson, M., Skidmore, B., Lalu, M. M., & Moher, D. (2020). Checklists to detect potential predatory biomedical journals: A systematic review. BMC Medicine, 18(1), 1–20. https://doi.org/10.1186/s12916-020-01566-1
Dennehy, J., Hoxie, I., di Schiavi, E., & Onorato, G. (2021). Reviewing as a career milestone: A discussion on the importance of including trainees in the peer review process. Communications Biology, 4(1), 1126. https://doi.org/10.1038/s42003-021-02645-6
Diong, J., Kroeger, C. M., Reynolds, K. J., Barnett, A., & Bero, L. A. (2021). Strengthening the incentives for responsible research practices in Australian health and medical research funding. Research Integrity and Peer Review, 6(1), 11. https://doi.org/10.1186/s41073-021-00113-7
Eisner, D. A. (2018). Reproducibility of science: Fraud, impact factors and carelessness. Journal of Molecular and Cellular Cardiology, 114, 364–368. https://doi.org/10.1016/j.yjmcc.2017.10.009
Freedman, L. P., Cockburn, I. M., & Simcoe, T. S. (2015). The economics of reproducibility in preclinical research. PLoS Biology, 13(6), e1002165. https://doi.org/10.1371/journal.pbio.1002165
Ioannidis, J. P., Fanelli, D., Dunne, D. D., & Goodman, S. N. (2015). Meta-research: Evaluation and improvement of research methods and practices. PLoS Biology, 13(10), e1002264. https://doi.org/10.1371/journal.pbio.1002264
Khakshooy, A., Bach, Q., Kasar, V., & Chiappelli, F. (2020). Metascience in bioinformation. Bioinformation, 16(1), 4–7. https://doi.org/10.6026/97320630016004
Marks, D. F., & Buchanan, R. D. (2020). King's College London's enquiry into Hans J Eysenck's 'Unsafe' publications must be properly completed. Journal of Health Psychology, 25(1), 3–6. https://doi.org/10.1177/1359105319887791
Miles, J. (2022). Leading Queensland cancer researcher Mark Smyth fabricated scientific data, review finds. https://www.abc.net.au/news/2022-01-11/qld-cancer-researcher-mark-smyth-fabricated-data-review-finds/100750208
Nebel, M. B., Lidstone, D. E., Wang, L., Benkeser, D., Mostofsky, S. H., & Risk, B. B. (2022). Accounting for motion in resting-state fMRI: What part of the spectrum are we characterizing in autism spectrum disorder? NeuroImage, 257, 119296. https://doi.org/10.1016/j.neuroimage.2022.119296
Pan, S. J. A., & Chou, C. (2020). Taiwanese researchers' perceptions of questionable authorship practices: An exploratory study. Science and Engineering Ethics, 26(3), 1499–1530. https://doi.org/10.1007/s11948-020-00180-x
Pew Research Centre. (2022, September 12). Americans' trust in scientists, other groups declines. https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/
Rapani, A., Lombardi, T., Berton, F., Del Lupo, V., Di Lenarda, R., & Stacchi, C. (2020). Retracted publications and their citation in dental literature: A systematic review. Clinical and Experimental Dental Research, 6(4), 383–390. https://doi.org/10.1002/cre2.292
SAGE Publications. (2014). Retraction notice. Journal of Vibration and Control, 20(10), 1601–1604. https://doi.org/10.1177/1077546314541924
Shen, C., & Björk, B. C. (2015). 'Predatory' open access: A longitudinal study of article volumes and market characteristics. BMC Medicine, 13(1), 1–15. https://doi.org/10.1186/s12916-015-0469-2
Stern, A. M., Casadevall, A., Steen, R. G., & Fang, F. C. (2014). Financial costs and personal consequences of research misconduct resulting in retracted publications. eLife, 3, e02956. https://doi.org/10.7554/eLife.02956
Suelzer, E. M., Deal, J., Hanus, K. L., Ruggeri, B., Sieracki, R., & Witkowski, E. (2019). Assessment of citations of the retracted article by Wakefield et al with fraudulent claims of an association between vaccination and autism. JAMA Network Open, 2(11), e1915552. https://doi.org/10.1001/jamanetworkopen.2019.15552
Vazire, S., & Holcombe, A. O. (2021). Where are the self-correcting mechanisms in science? Review of General Psychology, 26(2), 212–223. https://doi.org/10.1177/10892680211033912
Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., Malik, M., et al. (1998). RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103), 637–641. https://doi.org/10.1016/S0140-6736(97)11096-0
Wang, T., Xing, Q. R., Wang, H., & Chen, W. (2019). Retracted publications in the biomedical literature from open access journals. Science and Engineering Ethics, 25(3), 855–868. https://doi.org/10.1007/s11948-018-0040-6
Preventing the Certification and Proliferation of Specious Research
Abstract  Since the early 2000s there has been a dramatic increase in the number of predatory publishers. These publishers are attractive to academics because they offer an easy and fast way to publish manuscripts, which helps them advance their careers, since career progression is often determined by the number of articles published. This chapter begins with the historical background to the creation of predatory publishers and Beall's list. It then outlines some of the consequences that such publishers pose to the creation of research. This chapter concludes with some practical solutions that can curtail the creation and proliferation of predatory publishers and maintain the integrity of research about the autism spectrum.

Keywords  Article processing fee • Beall's list • Open access journals • Peer review • Predatory publishers
2.1 The Creation of Predatory Publishers and Beall's List

Before the internet became ubiquitous, nearly every academic journal was physically published and disseminated only to institutions that held a subscription with the publisher. During this time the peer review process was
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Bennett, Applying Metascientific Principles to Autism Research, https://doi.org/10.1007/978-981-19-9240-7_2
meticulously managed, resulting in the publication of high-quality research that could withstand scrutiny. During this time there were a handful of low-quality academic publishers. However, researchers were generally aware of their existence and avoided submitting manuscripts to them or citing their published articles (Beall, 2017).

Many college and university libraries in North America during the 1980s and 1990s began to cancel their journal subscriptions because subscription prices had increased while library budgets had decreased. There were three reasons why the cost of journal subscriptions had increased. First, many baby boomers were reaching the age when they were completing their PhDs and entering academic roles in which they were required to publish articles. Thus, the workload of journals increased to accommodate this increased research output, and some journals went from being published bi-annually to quarterly. Second, during the late 1990s the American and Canadian dollars were weaker, making international journal subscriptions more expensive. Third, with baby boomers entering higher education a range of new disciplines emerged, such as nanomaterials and genomics, which meant that it was not commercially viable for academic libraries to pay subscription fees for journals that had only a few interested readers (Beall, 2017).

When subscription fees rose, most within higher education attributed the price rises to the greed of publishers rather than to the higher costs associated with manuscript production. In response to this increased cost, along with the advent of the internet, the open-access movement developed. However, soon after this movement began, predatory journals, which use a pay-to-publish model, began to appear.
Beall first noticed them during 2008, when he started to receive spam emails inviting him to submit manuscripts to them for publication (Beall, 2017). Typically, subscription and open-access journals both use a rigorous peer review process to ensure that high-quality research is published. Consequently, few dubious articles are published. In contrast, predatory publishers have either a minimal or no peer review process and are instead interested in obtaining an author's money in exchange for publishing low-quality manuscripts (Beall, 2016a). This view has been expressed by Beall (2017, p. 275), who explained:

What I learned from predatory publishers is that they consider money far more important than business ethics, research ethics, and publishing ethics
and that these three pillars of scholarly publishing are easily sacrificed for profit. Soon after they first appeared, predatory publishers and journals became a godsend both for authors needing easy publishing outlets and sketchy entrepreneurs wanting to make easy money with little upfront investment.
In response to the proliferation of predatory publishers, Beall (2016b) created four separate lists, which are colloquially referred to as 'Beall's list'. These four lists are:

1. Predatory or questionable publishers
2. Predatory or questionable journals
3. Journals that imitate already established, reputable journals
4. Fake metrics companies (Beall, 2016b)
2.2 Consequences of Predatory Publishers

2.2.1 Corrupting Research

As explained previously, in Sect. 1.3.2, predatory journals disseminate flawed studies that scholars might inadvertently cite in their own academic publications. By citing this substandard and flawed research, scholars undermine the integrity of their own work and invite questions about the entire discipline. This sentiment has been articulated by Tsuyuki et al. (2017, p. 274), who claimed:

Peer review is the coin of the realm of science, and because predatory journals either carry out a fake peer review or are negligent at managing it, they often publish science that has not been properly vetted. And because research is cumulative, unscientific papers pollute the pool of published science (and evidence), threatening future research and making it difficult for clinicians to wade through the evidence.
2.2.2 Undermining the Training of Scholars

[S]ince the advent of predatory publishing, there have been tens of thousands of researchers who have earned Masters and Ph.D. degrees, been awarded other credentials and certifications, received tenure and promotion, and gotten employment—that they otherwise would not have been
able to achieve—all because of the easy article acceptance that the pay-to- publish journals offer. (Beall, 2017, p. 275)
As noted in the quotation above, predatory publishers have given some under-performing academics the opportunity to publish mediocre and flawed research, which has consequently improved their chances of securing academic employment. Since these academics have built their careers on publishing flawed studies, their research skills and credibility as scholars are questionable. A flow-on effect is that they are unable to competently teach and mentor their successors (Beall, 2017).

2.2.3 Increased Email Correspondence to Academics

Academics can be inundated with spam emails from predatory publishers, which can prevent them from focusing on their teaching and publishing activities (Krasowski et al., 2019; McKenzie et al., 2021; Sousa et al., 2021; Wood & Krasowski, 2020). Krasowski et al. (2019) examined, over seven consecutive days, the email inboxes and junk folders of 17 faculty staff (i.e., four assistant, four associate, and nine full professors) and nine trainees (i.e., five medical students, two pathology students, and two pathology fellows). In total, 755 emails met their eligibility criteria (i.e., 417 emails from 328 unique journals, 244 conference invitations, and 94 webinar invitations). They reported that full professors received the most emails (i.e., on average 158 during the study) and that some trainees and assistant professors received more than 30 emails during the study. McKenzie et al. (2021) have also examined the volume of email correspondence that an academic surgeon received through his hospital-provided email account over a six-month period. Two independent reviewers examined a total of 1905 emails. Of these, 608 were classified as fraudulent phishing emails.
2.3 Checklists and Flow Diagrams to Identify Predatory Journals

To help academics identify predatory journals, checklists and flow diagrams have been developed (Deora et al., 2021; Richtig et al., 2018; Salmi & Blease, 2021). Arguably, Cukier et al. (2020) have compiled one of the most comprehensive collections of checklists that can help researchers identify predatory biomedical journals. Most instruments that they listed were
Fig. 2.1 The relationships between different types of journals and the author and reader. (Source: Richtig et al., 2018, p. 1443)
published in English (n = 90, 97%) and could be completed in less than five minutes (n = 68, 73%). Richtig et al. (2018) have created a flow diagram that can be used to understand the main distinctions between predatory, subscription-based, and open-access journals. As illustrated in Fig. 2.1, the peer review process is one of the main distinctions between predatory journals and other journal types: unlike those published in open-access and subscription-based journals, articles published in predatory journals are not subjected to this process. Scholars can use this diagram to avoid submitting their manuscripts to predatory publishers, thereby preserving their academic reputation and the integrity of the discipline that they study. Scholars who study the autism spectrum should use a combination of the checklists compiled by Cukier et al. and the flow diagram provided by Richtig and colleagues to ensure that they neither cite studies published in predatory journals nor submit their own manuscripts to such journals. Failure to take such an approach can undermine their research activities and the entire field of autism spectrum research, and can waste the public funds allocated to this field of study.
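The general logic shared by such instruments, answering a fixed set of yes/no questions about a journal and then reviewing any warning signs triggered, can be sketched in code. The criteria, field names, and example below are hypothetical illustrations of this logic only; they are not items from Cukier et al.'s (2020) checklists or Richtig et al.'s (2018) flow diagram:

```python
from dataclasses import dataclass

@dataclass
class JournalProfile:
    """Hypothetical yes/no answers a researcher records about a journal."""
    has_documented_peer_review: bool      # peer review process described and verifiable
    apc_disclosed_upfront: bool           # article processing charge stated before submission
    indexed_in_recognised_database: bool  # listed in a recognised bibliographic index
    solicited_via_spam_email: bool        # sent unsolicited mass invitations to submit

def warning_signs(journal: JournalProfile) -> list:
    """Return the warning signs that a journal profile triggers."""
    signs = []
    if not journal.has_documented_peer_review:
        signs.append("no documented peer review process")
    if not journal.apc_disclosed_upfront:
        signs.append("article processing charge not disclosed upfront")
    if not journal.indexed_in_recognised_database:
        signs.append("not indexed in a recognised database")
    if journal.solicited_via_spam_email:
        signs.append("solicits manuscripts via spam email")
    return signs

# A journal triggering every warning sign in this illustrative checklist.
suspect = JournalProfile(
    has_documented_peer_review=False,
    apc_disclosed_upfront=False,
    indexed_in_recognised_database=False,
    solicited_via_spam_email=True,
)
print(warning_signs(suspect))
```

In practice, a scholar would work through a published, validated instrument rather than an ad hoc list such as this; the sketch merely shows why such checklists can be completed quickly, as each item is a discrete observation about the journal.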
2.4 Conclusion

This chapter had three objectives: first, to outline the historical origins of predatory publishers and Beall's list; second, to present some of the consequences that these publishers pose to the creation of research; and third, to suggest some practical solutions that can curtail the detrimental impacts of predatory publishers and maintain the integrity of research about the autism spectrum. It is anticipated that scholars who study the autism spectrum will use a combination of checklists and the flow diagram provided by Richtig and colleagues to identify studies published in predatory journals and to avoid submitting their manuscripts to such journals. Such identification can ensure that dubious research neither enters their own work nor undermines their careers.
References

Beall, J. (2016a). Dangerous predatory publishers threaten medical research. Journal of Korean Medical Science, 31(10), 1511–1513. https://doi.org/10.3346/jkms.2016.31.10.1511
Beall, J. (2016b). Best practices for scholarly authors in the age of predatory journals. Annals of the Royal College of Surgeons of England, 98(2), 77–79. https://doi.org/10.1308/rcsann.2016.0056
Beall, J. (2017). What I learned from predatory publishers. Biochemia Medica, 27(2), 273–278. https://doi.org/10.11613/BM.2017.029
Cukier, S., Helal, L., Rice, D. B., Pupkaite, J., Ahmadzai, N., Wilson, M., Skidmore, B., Lalu, M. M., & Moher, D. (2020). Checklists to detect potential predatory biomedical journals: A systematic review. BMC Medicine, 18(1), 104. https://doi.org/10.1186/s12916-020-01566-1
Deora, H., Tripathi, M., Chaurasia, B., & Grotenhuis, J. A. (2021). Avoiding predatory publishing for early career neurosurgeons: What should you know before you submit? Acta Neurochirurgica, 163(1), 1–8. https://doi.org/10.1007/s00701-020-04546-9
Krasowski, M. D., Lawrence, J. C., Briggs, A. S., & Ford, B. A. (2019). Burden and characteristics of unsolicited emails from medical/scientific journals, conferences, and webinars to faculty and trainees at an academic pathology department. Journal of Pathology Informatics, 10, 16. https://doi.org/10.4103/jpi.jpi_12_19
McKenzie, M., Nickerson, D., & Ball, C. G. (2021). Predatory publishing solicitation: A review of a single surgeon's inbox and implications for information technology resources at an organizational level. Canadian Journal of Surgery, 64(3), E351–E357. https://doi.org/10.1503/cjs.003020
Richtig, G., Berger, M., Lange-Asschenfeldt, B., Aberer, W., & Richtig, E. (2018). Problems and challenges of predatory journals. Journal of the European Academy of Dermatology and Venereology, 32(9), 1441–1449. https://doi.org/10.1111/jdv.15039
Salmi, L., & Blease, C. (2021). A step-by-step guide to peer review: A template for patients and novice reviewers. BMJ Health & Care Informatics, 28(1), e100392. https://doi.org/10.1136/bmjhci-2021-100392
Sousa, F., Nadanovsky, P., Dhyppolito, I. M., & Santos, A. (2021). One year of unsolicited e-mails: The modus operandi of predatory journals and publishers. Journal of Dentistry, 109, 103618. https://doi.org/10.1016/j.jdent.2021.103618
Tsuyuki, R. T., Al Hamarneh, Y. N., Bermingham, M., Duong, E., Okada, H., & Beall, J. (2017). Predatory publishers: Implications for pharmacy practice and practitioners. Canadian Pharmacists Journal, 150(5), 274–275. https://doi.org/10.1177/1715163517725269
Wood, K. E., & Krasowski, M. D. (2020). Academic e-mail overload and the burden of "academic spam". Academic Pathology, 7, 2374289519898858. https://doi.org/10.1177/2374289519898858
Addressing the Reproducibility Crisis
Abstract: In the field of metascience, the term reproducibility refers to using the same analytical tools and procedures on the same dataset to confirm that the results are accurate. This chapter begins by defining reproducibility, followed by an assessment of the extent to which studies about the autism spectrum are reproducible. With the discovery that such research is often irreproducible, the chapter concludes with suggestions about how to make such research reproducible.

Keywords: Conflict of Interest • Datasets • Funding • Irreproducible research • Pre-registration • Protocol • Reproducibility
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
M. Bennett, Applying Metascientific Principles to Autism Research, https://doi.org/10.1007/978-981-19-9240-7_3

3.1 Defining Reproducibility

The inability to verify a study's results has been termed the reproducibility crisis, although some argue that this inability does not amount to a crisis (Munafò et al., 2022). Understanding the distinctions between reproducibility, robustness, replicability, and generalisability can help contextualise the debate. Reproducible refers to confirming that a study's results are correct by repeating the analysis of the study's dataset using the same analytical tools and procedures (i.e., code). Replicable means using the same analytical tools and procedures to examine different datasets. Robust is defined as using the same dataset but different analytical tools and procedures. Finally, generalisable refers to analysing different datasets with different analytical tools and procedures (Hejblum et al., 2020; Leipzig et al., 2021) (see Fig. 3.1).

Fig. 3.1 Reproducible, robust, replicable, and generalisable research. (Source: Leipzig et al., 2021, p. 2)

To ensure that a study is reproducible, its dataset, research protocol, analytical tools, and procedures need to be clearly documented and accessible to other researchers (Rauh et al., 2020; Wright et al., 2020). Failure to satisfy these and the other reproducibility indicators listed in the table below can make reproduction of a study impossible, leaving its results open to indefinite doubt (see Table 3.1).
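The four definitions above amount to a simple decision on two axes: same versus different data, and same versus different analysis. The following Python sketch is purely illustrative (the function name is invented for this example) and encodes that mapping:

```python
def classify_study(same_data: bool, same_analysis: bool) -> str:
    """Map the two axes of the taxonomy (Leipzig et al., 2021) to a label.

    same_data:     does the follow-up study reuse the original dataset?
    same_analysis: does it reuse the original analytical tools and procedures?
    """
    if same_data and same_analysis:
        return "reproducible"   # same data, same analysis
    if same_data and not same_analysis:
        return "robust"         # same data, different analysis
    if not same_data and same_analysis:
        return "replicable"     # different data, same analysis
    return "generalisable"      # different data, different analysis

# Rerunning the original dataset through the original code tests reproducibility.
print(classify_study(same_data=True, same_analysis=True))  # reproducible
```

Reading the four labels off these two booleans makes clear that reproducibility is the narrowest check of the four: nothing about the original study is varied.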
Table 3.1 Reproducibility indicators

Article accessibility (articles assessed as open access, paywalled, or full text inaccessible): Ease of access to publications enables interdisciplinary research by removing access barriers. Full-text access allows for validation through reproduction.

Funding statement (presence of the study's funding sources): Funding gives researchers the ability to create new experiments and tangibly investigate their ideas. However, funding sources can influence how researchers conduct and report their study (e.g., scientific bias), which necessitates disclosure.

Conflict of interest statement (presence of a conflict of interest statement): A conflict of interest statement conveys the authors' potential associations that may affect the experimental design, methods, and analyses of the outcomes. Full disclosure of possible conflicts allows for unbiased presentation of the study.

Data statement (presence of a data availability statement, retrieval method, comprehensibility, and content integrity): Raw data availability facilitates independent verification of research publications and can improve the accountability of the outcomes reported and the research published.

Pre-registration statement (presence of a statement indicating registration, retrieval method, accessibility, and contents such as hypothesis, methods, and analysis plan): Pre-registration explicitly reports aspects of the study design prior to the commencement of the research, limiting selective reporting of results and preventing publication bias and p-hacking.

Protocol statement (statement indicating protocol availability and, if available, which aspects of the study are available, such as hypothesis, methods, and analysis plan): The reproducibility of a study depends on the accessibility of its protocol. A protocol is a highly detailed document containing all aspects of the experimental design, providing a step-by-step guide for conducting the study.

Analysis scripts statement (presence of an analysis script availability statement, retrieval method, and accessibility): Analysis scripts are used to analyse study data through software programs such as R, Python, and MATLAB, and provide step-by-step instructions to reproduce statistical results.

Replication statement (presence of a statement indicating a replication study): Replication studies validate previous publications by determining whether similar outcomes can be acquired.

Materials statement (presence of a materials availability statement, retrieval method, and accessibility): Materials are the tools used to conduct the study. Lack of materials specification impedes the ability to reproduce it.

Source: Wright et al. (2020, p. 5)

3.2 Prevalence of Irreproducible Autism Research

The extent of irreproducible research within different academic fields has been measured (Adewumi et al., 2021; Fladie et al., 2019; Hardwicke et al., 2022; Johnson et al., 2020; Okonya et al., 2020; Sherry et al., 2020; Walters et al., 2019), including within social science, psychological, biomedical, and neurological research (Hardwicke et al., 2020, 2022; Iqbal et al., 2016; Rauh et al., 2020; Wallach et al., 2018). Aside from one study published by Kistner and Robbins (1986), the extent of reproducible research about the autism spectrum has not been examined. To address this gap, a search for citations published from 1 to 31 August 2022 with the keyword autis* in the article's title (i.e., "autis*[Title]") was conducted on PubMed on 5 September 2022. This search term was used because it retrieved the terms autism, autism spectrum, and autistic. The search yielded 402 results. Editorials, commentaries, scoping reviews, meta-analyses, literature reviews, study protocols, animal models, bibliometric analyses, studies measuring autistic traits, and book reviews were excluded (n = 127), as were articles that were inaccessible (n = 26) and/or published in a language other than English (n = 8). Using Wright et al.'s (2020) and Rauh et al.'s (2020) reproducibility indicators as a guide, the remaining 241 articles were examined for the following reproducibility and transparency indicators: open access status, sources of funding statement, conflict of interest statement, availability of materials, pre-registration of the study, dataset availability or data availability statement, and open peer review (i.e., publishing of peer review reports and author responses to such reports). Of the 241 eligible articles, the most common reproducibility indicator was a conflict-of-interest statement (n = 198, 82.15%), followed by a source of funding disclosure (n = 174, 72.19%), open access publication (n = 127, 52.69%), availability of a dataset or a data availability statement (n = 108, 44.81%), availability of materials (n = 17, 7.05%), open peer review (n = 7, 2.90%), and pre-registration (n = 2, 0.82%). Based on these results, to improve the reproducibility of research about the autism spectrum, academic journals need to promote the adoption of reproducibility indicators, in particular making datasets and materials available, using open peer review processes, and pre-registering study designs (see Appendix A). Overall, the results from the PubMed search indicated that strategies that can improve the reproducibility of autism research are lacking.
However, this conclusion should be adopted with caution for two reasons. First, for brevity, the search was restricted to articles archived on PubMed in August 2022 that contained the search phrase 'autis*' in the title; different results may have been generated if these search parameters were modified. Second, PubMed does not index some journals that publish research about the autism spectrum, such as Focus on Autism and Other Developmental Disabilities.
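The indicator percentages reported above can be recomputed directly from the raw counts. The sketch below is illustrative rather than part of the chapter; it assumes the reported figures were truncated, not rounded, to two decimal places, which is the convention that reproduces every figure exactly:

```python
# Counts of reproducibility indicators among the 241 eligible articles.
counts = {
    "conflict-of-interest statement": 198,
    "funding disclosure": 174,
    "open access": 127,
    "data availability": 108,
    "materials availability": 17,
    "open peer review": 7,
    "pre-registration": 2,
}
TOTAL = 241

def truncate_pct(n: int, total: int = TOTAL) -> float:
    """Percentage truncated (not rounded) to two decimal places."""
    return int(n / total * 100 * 100) / 100

for name, n in counts.items():
    print(f"{name}: n = {n}, {truncate_pct(n):.2f}%")
```

With conventional rounding, 198/241 would appear as 82.16% rather than the reported 82.15%, which is what suggests truncation was used.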
3.3 Strategies to Improve the Reproducibility of Autism Research

3.3.1 Archiving Datasets

To confirm that a study's results are correct, its dataset needs to be archived and made accessible so that it can be reanalysed using the same analytical tools and procedures that were used originally. Failure to preserve a study's dataset for this purpose means its results can never be independently verified. Currently, it is conventional practice to destroy datasets once a study has been published. Ioannidis (2012, p. 646) wrote about this practice:

[I]t is also possible that a Library of Alexandria actually disappears every few minutes. Currently, there are petabytes of scientific information produced on a daily basis and millions of papers are being published annually. In most scientific fields, the vast majority of the collected data, protocols, and analyses are not available and/or disappear soon after or even before publication. If one tries to identify the raw data and protocols of papers published only 20 years ago, it is likely that very little is currently available. Even for papers published this week, readily available raw data, protocols, and analysis codes would be the exception rather than the rule.
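One piece of the archival advice that follows, the 3-2-1 storage rule listed in Table 3.2 (three copies, two media types, one offsite), is mechanical enough to check in code. The sketch below is purely illustrative; the Copy record and the inventory are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One stored copy of a dataset (a hypothetical inventory record)."""
    medium: str    # e.g. "institutional server", "external HDD", "cloud"
    offsite: bool  # stored at a physically separate site?

def satisfies_321(copies: list[Copy]) -> bool:
    """3-2-1 rule: at least 3 copies, on at least 2 media types, 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    Copy("institutional server", offsite=False),
    Copy("external HDD", offsite=False),
    Copy("cloud storage", offsite=True),
]
print(satisfies_321(inventory))  # True
```

A researcher who keeps three copies on a single USB stick would fail both the media and the offsite conditions, which is precisely why Table 3.2 warns against relying on USB sticks.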
Scholars should use effective data management practices to ensure that their datasets are archived. Appropriate archival gives others the opportunity to reproduce a study and/or combine the archived dataset with their own to improve the generalisability of their findings. Kanza and Knight (2022) have outlined ten strategies that researchers can use to perfect their data management practices (see Table 3.2).

Table 3.2 Ten tips for good research data management

1. Plan your data management strategy right from the start: think about every aspect of your project, from data collection, organisation, and storage to where you plan to publish and share the results.
2. Data management plan: make one right at the beginning and refer to it and improve it throughout the entire project life cycle.
3. Organisation is key: use sensible folder/file structures that have been agreed with the entire team.
4. Version control your work: decide on what version control systems you are going to use and implement these plans from the beginning.
5. Storage strategy: consider your long-term and short-term data storage and implement the 3-2-1 data storage rule (3 copies of the data, on 2 types of media, with 1 stored at a separate site), and never rely on USB sticks.
6. Remember your standards and be FAIR: think about what standards you are going to make your data available in. Data should be findable, accessible, interoperable, and re-useable (FAIR).
7. Consider ethics: if you are interacting with human data in any way, you will need ethics approval. These applications can take a while to write and gain approval for, so start straight away.
8. Factor in resources: time and costs should be factored in for all required resources, including your data management.
9. Future proof your data: metadata alone will not future proof your data; obtain DOIs for your datasets and include relevant READMEs and description files.
10. Communicate: if you are working on collaborative research projects, communication is key, both in setting up the initial organisational strategies and throughout the entire project life cycle, to ensure that team members work consistently with respect to data collection, organisation, and storage.

Source: Kanza and Knight (2022)

3.3.2 Journals Requesting Datasets from Authors

Traditionally, an author is not required to submit the corresponding dataset with their manuscript, and some authors are unwilling to provide their datasets even when an editor requests them. Recalling his role as Editor-in-Chief of Molecular Brain from December 2017 to September 2019, Miyakawa (2020) reviewed 181 manuscripts and requested datasets for 41 of them. Of these, 21 were withdrawn without providing any raw data and the other 20 were resubmitted with some raw data. Of these 20 resubmissions, 19 were rejected due to insufficient raw data and/or a mismatch between the raw data and the results. Of the 21 submissions withdrawn without raw data, 14 were published in other journals, 12 of them in journals that either required or recommended that raw data be provided upon request. When Miyakawa sent requests for raw data to the authors of those 12 manuscripts, he mostly received no response (n = 10) (see Fig. 3.2). Based on these findings, Miyakawa (2020, p. 1) concluded:
Fig. 3.2 Flowchart of the manuscripts handled by Tsuyoshi Miyakawa in Molecular Brain from December 2017 to September 2019. (Source: Miyakawa (2020, p. 2))
Considering that any scientific study should be based on raw data, and that data storage space should no longer be a challenge, journals, in principle, should try to have their authors publicize raw data in a public database or journal site upon the publication of the paper to increase reproducibility of the published results and to increase public trust in science.
Providing a dataset with a submitted manuscript should be a condition of publication so that the published results can be confirmed through reproduction. To reinforce this obligation, editors should automatically reject manuscripts whose authors have not submitted the corresponding datasets. Failure to enforce such a policy will result in the continued publication and proliferation of manuscripts whose results cannot be verified.
3.3.3 Open Science Badges

To increase awareness among academics of the importance of creating reproducible research, the Centre for Open Science has created five Open Science badges (see Fig. 3.3) (see Appendix B). Since their inception, Open Science badges have been endorsed and used by several journals, including the International Journal for the Psychology of Religion (van Elk et al., 2018), the Journal of Social Psychology (Grahe, 2014), and the Canadian Journal of Experimental Psychology (Pexman, 2017). According to the Centre for Open Science, Open Science badges

are included on publications and signal to the reader that the content of the publication has been made publicly available and certify its accessibility in a persistent location. They acknowledge open science practices are incentives for researchers to share data, materials, or to preregister protocols and have proven to be successful and continue to gain visibility in the scientific community. (Kretser et al., 2019, p. 334)
Since their inception, Open Science badges have produced mixed outcomes and require additional investigation. Kidwell et al. (2016) and Rowhani-Farid et al. (2020) both examined whether the Open Data badge motivated researchers to provide datasets along with the manuscripts they submitted to journals. Rowhani-Farid and colleagues reported that the
Fig. 3.3 Logos of Open Science badges by the Centre for Open Science. (Source: American Psychiatric Association Journals, 2022)
Open Data badge did not motivate researchers who published in BMJ Open to share their data. Kidwell and colleagues, however, reported that when badges were earned the data were more likely to be available, correct, usable, and complete. Despite these mixed results, the influence of other Open Science badges on making studies reproducible, such as the Preregistered badge, has not been measured. To determine the utility of these badges, more research needs to be conducted. For example, researchers could randomly select a sample of studies bearing the Preregistered badge and repeat an identical analysis of the deposited datasets. Such a reanalysis would help determine whether the Preregistered badge has improved the description of a study and, by extension, its reproducibility. As of 2023, the main autism-specific journals1 have not endorsed Open Science badges. To improve awareness of the reproducibility crisis among scholars who study the autism spectrum, these journals should embrace Open Science badges.

3.3.4 Reforming Academic Hiring Practices to Promote Reproducible Research

It is important that institutions ensure that organisational structures within which researchers work reward engagement with and adoption of open and transparent research practices. Academic hiring decisions, annual performance reviews, and promotion are often informed by easy-to-calculate research metrics such as the number of research outputs an academic has produced, or the amount of grant income an academic has generated within a particular period. A high score on these metrics does not mean that the underlying research is transparent and robust (often simply that there is a lot of it). Academics need to be incentivised to produce research that is both high-quality and transparent. (Stewart et al., 2021, pp. 2–3)
Obtaining an academic job, promotion, or tenure is primarily influenced by the number of articles a candidate has published and the number of times they have been cited. Consequently, little emphasis has been placed on giving career opportunities to researchers who incorporate reproducibility practices into their research activities. To rectify this trend, the selection criteria for academic positions should include a criterion about reproducible research practices. Such criteria will contribute to a cultural change within academic institutions, away from rewarding academics who have favourable publication metrics (i.e., number of publications and citation counts) and towards rewarding research practices that foster reproducibility and transparency. To facilitate this transition, research institutions that employ researchers who examine aspects of the autism spectrum could incorporate into their hiring practices an assessment of the candidate's knowledge and awareness of questionable research practices (QRPs).

1 The main autism-specific journals are the Journal of Autism and Developmental Disorders, Autism: The International Journal of Research and Practice, Review Journal of Autism and Developmental Disorders, Research in Autism Spectrum Disorders, and Focus on Autism and Other Developmental Disabilities.

3.3.5 Pre-registering Studies

To improve the clarity and thoroughness of a study's design, several journals, including the Journal of Sex Research, offer prospective authors the opportunity to pre-register their study's design (Sakaluk & Graham, 2022). Once pre-registered, a study's design can be scrutinised and improvements can be suggested before the study is conducted (Centre for Open Science, 2022). The prospect of successfully repeating an identical analysis of the dataset increases with this enhanced clarity of the study's design. Of the 241 eligible citations examined in Sect. 3.2, two (0.82%) were pre-registered (i.e., Kallitsounaki & Williams, 2022; Li et al., 2022). Additionally, as of February 2023, none of the main autism-specific journals provides authors with a pre-registered pathway to publication. In the interests of improving the quality and reproducibility of autism research, these journals should provide such a pathway.

3.3.6 Improving the Readability of a Study's Methodology

For a study to be reproducible, the procedures used to analyse the dataset need to be clearly explained. Failure to make these procedures easy to understand can hamper a reproduction of the original study.
Despite the need to clearly explain a study’s methodology for reproducibility purposes, Plavén-Sigray et al. (2017) reported that over time the readability of scientific texts has decreased. They attributed this trend to increased usage of discipline-specific technical and scientific jargon. As they explained:
[W]e showed that there is an increase in general scientific jargon over years. These general science jargon words should be interpreted as words which scientists frequently use in scientific texts, and not as subject specific jargon. This finding is indicative of a progressively increasing in-group scientific language (‘science-ese’). (Plavén-Sigray et al., 2017, p. 5)
To rectify this situation, researchers should be given the opportunity to attend professional learning and development courses on writing methodologies that are comprehensive, simple, and easy to repeat. Such courses can draw on the literature about how to write a manuscript for an academic journal (Barroga & Matanguihan, 2021; Forero et al., 2020).

3.3.7 Requiring Researchers to Self-examine Their Previous Research

In the current research environment, self-correction, or even just critical reconsideration of one's past work, is often disincentivized professionally. The opportunity costs of a self-correction are high; time spent on correcting past mistakes and missteps is time that cannot be spent on new research efforts, and the resulting self-correction is less likely to be judged a genuine scientific contribution. … Researchers might also fear that a self-correction that exposes flaws in their work will damage their reputation and perhaps even undermine the credibility of their research record as a whole. (Rohrer et al., 2021, p. 1265)
Currently, researchers are not incentivised to re-examine their previous academic work. Instead, they may resist doing so for fear that their reputation will be damaged if they reveal that their previous work contained flaws. Without such self-examination, they cannot learn from past mistakes and improve their research skills and processes. Nor can they master the skills needed to make their research reproducible, since the prevailing incentive is to create highly cited research rather than research that can be reproduced (Fiala & Diamandis, 2017). Fiala and Diamandis (2017, p. 3) have described this situation:

Accountability in science is ad hoc. Researchers get credit for a publication well before enough time has passed for the scientific community to really know whether the paper has made a valuable contribution. No wonder that
researchers bent on submitting a paper are obsessed with making the best possible case for its acceptance rather than illustrating its limitations. If researchers are forced to consider how well their paper will stand up five years hence, they will be more careful when doing the work and more critical in their analysis.
There are two strategies that can compel authors to re-examine their own manuscripts after publication. First, publishers can establish a journal in which authors submit post-publication insights about their manuscripts. Such a journal, titled Reflections in Medicine, once existed but was discontinued due to a lack of interest from the authors invited to submit a self-reflection. Second, as a condition of publishing, authors could be required to publish a post-publication reflection on their manuscript. To assist with this process, the editor could provide the author with a series of uncomplicated yes/no questions about the impact that their research has had on their chosen discipline, along with a series of open-ended reflective questions, such as If you could repeat the study what design aspects would you change? and What are some biases in your research (e.g., selection bias, sex and gender bias, and cultural bias)? (Fiala & Diamandis, 2017).

3.3.8 Declaring All Conflicts of Interest

Disclosing all conflicts of interest helps readers identify potential sources of bias in a study. Most eligible studies (n = 198, 82.15%) examined in Sect. 3.2 had a conflict-of-interest statement. To improve the reproducibility of research about the autism spectrum, it is important that consistent conflict-of-interest policies apply to authors in any journal that publishes such research.

3.3.9 Declaring All Sources of Funding

Sources of funding can influence the types of research and the results that are published (den Houting & Pellicano, 2019; McCrabb et al., 2021). To mitigate this bias, all funding sources and financial conflicts of interest should be declared (Daou et al., 2018; Rauh et al., 2020; Wright et al., 2020). There are several journals that predominantly publish articles about the autism spectrum that contain statements
about sources of funding, such as the Journal of Autism and Developmental Disorders. In the interests of creating research about the autism spectrum that is reproducible and free of funding-related bias, declaring sources of funding should become common practice for all journals that publish such research.

3.3.10 Publicising Materials

To confirm that the results documented in a study are correct, its datasets, analysis scripts, and materials need to be published (Rauh et al., 2020; Wright et al., 2020). Examples of analysis scripts and materials include focus group and/or interview questions, surveys, a step-by-step outline for conducting the study, and a description of the computer program that was used to perform the analysis. Few of the articles examined in Sect. 3.2 published their study materials, such as interview questions and surveys (n = 17, 7.05%). To improve the prospect of research being reproducible, journals should make the submission of any relevant materials a condition of publication.

3.3.11 Creating a Journal That Only Publishes Replications of Autism Research

Within several disciplines, such as neuroscience and psychology, academic journals discourage authors from submitting manuscripts that are replication studies. Yeung (2017) examined the aims, scope, and instructions of 465 English-language neuroscience journals from the Scopus database to determine whether each journal had a policy position on publishing replication studies: 28 (6%) explicitly stated that they accepted replication studies, 394 (84.7%) did not state any position, 40 (8.6%) implicitly discouraged replication studies by emphasising that manuscripts had to contain novel concepts and/or findings, and 3 (0.6%) explicitly stated that they rejected replication studies. Martin and Clarke (2017) examined 1151 psychological journals to determine if they published replication studies.
They reported that 33 (3%) journals stated that they accepted replication studies, 728 (63%) did not state any position about replication studies, 379 (33%) implicitly discouraged replication studies by emphasising that manuscripts had to contain novel results, and 12 (1%) explicitly rejected the publication of replication studies. As of 2023, there is no autism-specific journal that only accepts manuscripts that are replications of autism research. However, the Review Journal of Autism and Developmental Disorders specialises in publishing only systematic literature reviews of autism studies (Matson, 2014), so it is plausible to establish an academic journal dedicated to publishing replications of previous studies about the autism spectrum. Such a journal would give scholars the opportunity to repeat a study's analytical procedures on new datasets to test whether its results replicate. Additionally, such a journal would bolster efforts to improve the reproducibility of autism research and improve confidence in the results that have been published.
3.4 Conclusion

This chapter described the metascientific concept of reproducibility: the ability to rerun the same analysis on the same dataset, with the same analytical tools and procedures, to confirm published results. The chapter began by defining reproducibility and the factors that make a study reproducible. The extent of irreproducible research about the autism spectrum was then measured within a sample of citations retrieved from PubMed, using Wright et al.'s (2020) and Rauh et al.'s (2020) reproducibility indicators. The chapter concluded with a series of strategies that can improve the reproducibility of research about the autism spectrum, such as Open Science badges and the pre-registration of studies.
References

Adewumi, M. T., Vo, N., Tritz, D., Beaman, J., & Vassar, M. (2021). An evaluation of the practice of transparency and reproducibility in addiction medicine literature. Addictive Behaviors, 112, 106560. https://doi.org/10.1016/j.addbeh.2020.106560

American Psychiatric Association (APA) Journals. (2022). https://twitter.com/apa_journals/status/1030530992715563008?lang=hi

Barroga, E., & Matanguihan, G. J. (2021). Creating logical flow when writing scientific articles. Journal of Korean Medical Science, 36(40), e275. https://doi.org/10.3346/jkms.2021.36.e275
Centre for Open Science. (2022). Simple registered report protocol preregistration. https://osf.io/rr/

Daou, K. N., Hakoum, M. B., Khamis, A. M., Bou-Karroum, L., Ali, A., Habib, J. R., Semaan, A. T., Guyatt, G., & Akl, E. A. (2018). Public health journals' requirements for authors to disclose funding and conflicts of interest: A cross-sectional study. BMC Public Health, 18(1), 533. https://doi.org/10.1186/s12889-018-5456-z

den Houting, J., & Pellicano, E. (2019). A portfolio analysis of autism research funding in Australia, 2008–2017. Journal of Autism and Developmental Disorders, 49(11), 4400–4408. https://doi.org/10.1007/s10803-019-04155-1

Fiala, C., & Diamandis, E. P. (2017). Make researchers revisit past publications to improve reproducibility. F1000Research, 6. https://doi.org/10.12688/f1000research.12715.1

Fladie, I. A., Adewumi, T. M., Vo, N. H., Tritz, D. J., & Vassar, M. B. (2019). An evaluation of nephrology literature for transparency and reproducibility indicators: Cross-sectional review. Kidney International Reports, 5(2), 173–181. https://doi.org/10.1016/j.ekir.2019.11.001

Forero, D. A., Lopez-Leon, S., & Perry, G. (2020). A brief guide to the science and art of writing manuscripts in biomedicine. Journal of Translational Medicine, 18(1), 425. https://doi.org/10.1186/s12967-020-02596-2

Grahe, J. E. (2014). Announcing open science badges and reaching for the sky. The Journal of Social Psychology, 154(1), 1–3. https://doi.org/10.1080/00224545.2014.853582

Hardwicke, T. E., Thibault, R. T., Kosie, J. E., Wallach, J. D., Kidwell, M. C., & Ioannidis, J. (2022). Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014–2017). Perspectives on Psychological Science, 17(1), 239–251. https://doi.org/10.1177/1745691620979806

Hardwicke, T. E., Wallach, J. D., Kidwell, M. C., Bendixen, T., Crüwell, S., & Ioannidis, J. (2020). An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). Royal Society Open Science, 7(2), 190806. https://doi.org/10.1098/rsos.190806

Hejblum, B. P., Kunzmann, K., Lavagnini, E., Hutchinson, A., Robertson, D. S., Jones, S. C., & Eckes-Shephard, A. H. (2020). Realistic and robust reproducible research for biostatistics. https://doi.org/10.20944/preprints202006.0002.v1

Ioannidis, J. P. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7(6), 645–654. https://doi.org/10.1177/1745691612464056

Iqbal, S. A., Wallach, J. D., Khoury, M. J., Schully, S. D., & Ioannidis, J. P. (2016). Reproducible research practices and transparency across the biomedical literature. PLoS Biology, 14(1), e1002333. https://doi.org/10.1371/journal.pbio.1002333
Johnson, A. L., Torgerson, T., Skinner, M., Hamilton, T., Tritz, D., & Vassar, M. (2020). An assessment of transparency and reproducibility-related research practices in otolaryngology. The Laryngoscope, 130(8), 1894–1901. https://doi.org/10.1002/lary.28322
Kallitsounaki, A., & Williams, D. M. (2022). Implicit and explicit gender-related cognition, gender dysphoria, autistic-like traits, and mentalizing: Differences between autistic and non-autistic cisgender and transgender adults. Archives of Sexual Behavior, 51(7), 3583–3600. https://doi.org/10.1007/s10508-022-02386-5
Kanza, S., & Knight, N. J. (2022). Behind every great research project is great data management. BMC Research Notes, 15(1), 20. https://doi.org/10.1186/s13104-022-05908-5
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L. S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T. M., Fiedler, S., & Nosek, B. A. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456
Kistner, J., & Robbins, F. (1986). Brief report: Characteristics of methods of subject selection and description in research on autism. Journal of Autism and Developmental Disorders, 16(1), 77–82. https://doi.org/10.1007/BF01531580
Kretser, A., Murphy, D., Bertuzzi, S., Abraham, T., Allison, D. B., Boor, K. J., Dwyer, J., Grantham, A., Harris, L. J., Hollander, R., Jacobs-Young, C., Rovito, S., Vafiadis, D., Woteki, C., Wyndham, J., & Yada, R. (2019). Scientific integrity principles and best practices: Recommendations from a scientific integrity consortium. Science and Engineering Ethics, 25(2), 327–355. https://doi.org/10.1007/s11948-019-00094-3
Leipzig, J., Nüst, D., Hoyt, C. T., Ram, K., & Greenberg, J. (2021). The role of metadata in reproducible computational research. Patterns (New York, N.Y.), 2(9), 100322. https://doi.org/10.1016/j.patter.2021.100322
Li, B., Blijd-Hoogewys, E., Stockmann, L., Vergari, I., & Rieffe, C. (2022). Toward feeling, understanding, and caring: The development of empathy in young autistic children. Autism: The International Journal of Research and Practice, 13623613221117955. Advance online publication. https://doi.org/10.1177/13623613221117955
Martin, G. N., & Clarke, R. M. (2017). Are psychology journals anti-replication? A snapshot of editorial practices. Frontiers in Psychology, 8, 523. https://doi.org/10.3389/fpsyg.2017.00523
Matson, J. (2014). Editor's welcome note. Review Journal of Autism and Developmental Disorders, 1(1), 1–1. https://doi.org/10.1007/s40489-014-0013-x
McCrabb, S., Mooney, K., Wolfenden, L., Gonzalez, S., Ditton, E., Yoong, S., & Kypri, K. (2021). "He who pays the piper calls the tune": Researcher experiences of funder suppression of health behaviour intervention trial findings. PLoS One, 16(8), e0255704. https://doi.org/10.1371/journal.pone.0255704
Miyakawa, T. (2020). No raw data, no science: Another possible source of the reproducibility crisis. Molecular Brain, 13(1), 24. https://doi.org/10.1186/s13041-020-0552-2
Munafò, M. R., Chambers, C., Collins, A., Fortunato, L., & Macleod, M. (2022). The reproducibility debate is an opportunity, not a crisis. BMC Research Notes, 15(1), 43. https://doi.org/10.1186/s13104-022-05942-3
Okonya, O., Rorah, D., Tritz, D., Umberham, B., Wiley, M., & Vassar, M. (2020). Analysis of practices to promote reproducibility and transparency in anaesthesiology research. British Journal of Anaesthesia, 125(5), 835–842. https://doi.org/10.1016/j.bja.2020.03.035
Pexman, P. M. (2017). CJEP will offer open science badges. Canadian Journal of Experimental Psychology, 71(1), 1. https://doi.org/10.1037/cep0000128
Plavén-Sigray, P., Matheson, G. J., Schiffler, B. C., & Thompson, W. H. (2017). The readability of scientific texts is decreasing over time. eLife, 6, e27725. https://doi.org/10.7554/eLife.27725
Rauh, S., Torgerson, T., Johnson, A. L., Pollard, J., Tritz, D., & Vassar, M. (2020). Reproducible and transparent research practices in published neurology research. Research Integrity and Peer Review, 5, 5. https://doi.org/10.1186/s41073-020-0091-5
Rohrer, J. M., Tierney, W., Uhlmann, E. L., DeBruine, L. M., Heyman, T., Jones, B., Schmukle, S. C., Silberzahn, R., Willén, R. M., Carlsson, R., Lucas, R. E., Strand, J., Vazire, S., Witt, J. K., Zentall, T. R., Chabris, C. F., & Yarkoni, T. (2021). Putting the self in self-correction: Findings from the loss-of-confidence project. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(6), 1255–1269. https://doi.org/10.1177/1745691620964106
Rowhani-Farid, A., Aldcroft, A., & Barnett, A. G. (2020). Did awarding badges increase data sharing in BMJ Open? A randomized controlled trial. Royal Society Open Science, 7(3), 191818. https://doi.org/10.1098/rsos.191818
Sakaluk, J. K., & Graham, C. A. (2022). New year, new initiatives for the Journal of Sex Research. Journal of Sex Research, 59(7), 805–809. https://doi.org/10.1080/00224499.2022.2032571
Sherry, C. E., Pollard, J. Z., Tritz, D., Carr, B. K., Pierce, A., & Vassar, M. (2020). Assessment of transparent and reproducible research practices in the psychiatry literature. General Psychiatry, 33(1), e100149. https://doi.org/10.1136/gpsych-2019-100149
Stewart, A. J., Farran, E. K., Grange, J. A., Macleod, M., Munafò, M., Newton, P., Shanks, D. R., & UK Reproducibility Network Institutional Leads. (2021). Improving research quality: The view from the UK Reproducibility Network institutional leads for research improvement. BMC Research Notes, 14(1), 458. https://doi.org/10.1186/s13104-021-05883-3
van Elk, M., Rowatt, W., & Streib, H. (2018). Good dog, bad dog: Introducing open science badges. The International Journal for the Psychology of Religion, 28(1), 1–2. https://doi.org/10.1080/10508619.2018.1402589
Wallach, J. D., Boyack, K. W., & Ioannidis, J. (2018). Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLoS Biology, 16(11), e2006930. https://doi.org/10.1371/journal.pbio.2006930
Walters, C., Harter, Z. J., Wayant, C., Vo, N., Warren, M., Chronister, J., Tritz, D., & Vassar, M. (2019). Do oncology researchers adhere to reproducible and transparent principles? A cross-sectional survey of published oncology literature. BMJ Open, 9(12), e033962. https://doi.org/10.1136/bmjopen-2019-033962
Wright, B. D., Vo, N., Nolan, J., Johnson, A. L., Braaten, T., Tritz, D., & Vassar, M. (2020). An analysis of key indicators of reproducibility in radiology. Insights Into Imaging, 11(1), 65. https://doi.org/10.1186/s13244-020-00870-x
Yeung, A. (2017). Do neuroscience journals accept replications? A survey of literature. Frontiers in Human Neuroscience, 11, 468. https://doi.org/10.3389/fnhum.2017.00468
Evaluating and Improving the Peer Review Process
Abstract This chapter begins with an overview of the main tasks and responsibilities that authors, peer reviewers, and editors perform during the peer review process. Eight drawbacks of this process and associated solutions are then explained. These drawbacks are: (1) the pervasive incentives placed on academics to publish manuscripts, (2) publication bias, (3) inconsistent publishing policies between journals, (4) redundancy in repeating the peer review process, (5) the lack of formal education for early career researchers about peer reviewing a manuscript, (6) inconsistent peer reviews, (7) the impacts of unprofessional comments by peer reviewers, and (8) ambiguity about citing preprinted articles. The purpose of this chapter is to improve the peer review process so that research about the autism spectrum that undergoes this process is more robust.

Keywords: Checklists • File drawer phenomenon • Peer review • Professionalism of peer review comments • Publishing policies
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023. M. Bennett, Applying Metascientific Principles to Autism Research, https://doi.org/10.1007/978-981-19-9240-7_4

4.1 The Peer Review Process

Authors, reviewers, and journal editors each have specific tasks in the peer review process. For authors, their manuscripts should contain four elements. First, they should contain a position or hypothesis that can be either supported or discredited by evidence. Second, the methodology must be
described in sufficient detail so that others can validate the experiment by repeating the study using the same dataset. Third, the manuscript must show the results of all experiments, including those that did not support the hypothesis. Fourth, the manuscript must describe appropriate statistical techniques, not statistical tests chosen only to push the results above or below the arbitrary 0.05 p value (Bhattacharya & Ellis, 2018; Marcoci et al., 2022). The main role of peer reviewers is to evaluate the contents of the manuscript and decide whether it should proceed to publication. To perform this function, they need to perform three tasks. First, they should only review manuscripts when they do not have any conflict of interest. Second, they should complete their examination in accordance with the timeline stipulated by the journal that invited them to review the manuscript. Third, they should provide an honest appraisal of the manuscript. Failure to perform any of these tasks can result in either flawed research being published or credible research not being published (Bhattacharya & Ellis, 2018). In conjunction with these three requirements, Glonti et al. (2019) have illustrated some of the most prominent tasks that peer reviewers are expected to perform when they review a manuscript. The task most commonly mentioned by their interviewees was examining the methods of the study (n = 440 statements), followed by examining the manuscript's discussion and conclusion sections (n = 202 statements) and the results in the article (n = 145 statements) (see Fig. 4.1). Journal editors often perform three main tasks during the peer review process. First, they need to make timely decisions about whether a manuscript should be rejected or sent to peer reviewers for examination. Second, they need to ensure that peer reviewers complete their examination of the manuscript by the agreed date.
Third, they need to evaluate the peer reviewers' recommendations and decide whether the manuscript should be accepted, accepted after minor modifications, or rejected (Bhattacharya & Ellis, 2018). The peer review process is generally regarded as the gold-standard approach for examining the relevance and truthfulness of a manuscript. Despite this belief, it is imperfect, and occasionally flawed research is published, such as in the Sokal hoax and, more recently, the grievance studies affair (dubbed "Sokal Squared") (Cole, 2021; Lagerspetz, 2021; Pluckrose et al., 2021; Staller, 2019). In conjunction with these hoaxes, it is estimated that US$100 billion is wasted globally each year on poor-quality published biomedical research (Galipeau et al., 2013). Eventually, some of this
Fig. 4.1 Tasks of peer reviewers. (Source: Glonti et al., 2019, p. 12)
research is retracted (Elango, 2021; Faggion Jr et al., 2018; Nair et al., 2020). However, sometimes several years elapse before this occurs. During these years such research can be cited by others, which undermines the credibility of the citing work (Elango, 2021). Furthermore, even after a piece of research has been retracted it can still be cited as credible, as is the case with Wakefield et al.'s study (Suelzer et al., 2019).
There are many different factors that have contributed to the creation and proliferation of low-quality research, such as predatory publishers and scholars who engage in academic misconduct (e.g., plagiarism) or questionable research practices (e.g., cherry picking information to either confirm or disprove a hypothesis). This chapter will not focus on these factors as they are discussed in other chapters. Instead, some deficits in the peer review process that result in flawed research being published are explained in this chapter, along with suggestions to rectify these deficiencies. The intention of this chapter is to recommend improvements to the peer review process so that the dissemination of flawed research about the autism spectrum can be reduced.
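The inflation that questionable research practices such as cherry picking can produce is easy to demonstrate with a short simulation. The sketch below is illustrative only: the group sizes, the number of outcomes, the random seed, and the normal approximation to the t-test are my own choices, not taken from any study cited in this chapter. It measures how often a "significant" result appears when every measured outcome is pure noise, comparing an honest pre-specified analysis with one that reports whichever outcome happened to work.

```python
import math
import random

def two_sided_p(a, b):
    """Two-sided p-value for a difference in group means, using a normal
    approximation to the t distribution (adequate for n = 30 per group)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
STUDIES, OUTCOMES, N = 2000, 20, 30
honest = cherry_picked = 0
for _ in range(STUDIES):
    # Every outcome is pure noise: there is no real effect to find.
    pvals = []
    for _ in range(OUTCOMES):
        a = [random.gauss(0, 1) for _ in range(N)]
        b = [random.gauss(0, 1) for _ in range(N)]
        pvals.append(two_sided_p(a, b))
    honest += pvals[0] < 0.05            # a single pre-specified outcome
    cherry_picked += min(pvals) < 0.05   # report whichever outcome "worked"

print(f"pre-specified outcome: {honest / STUDIES:.0%} false positives")
print(f"best of {OUTCOMES} outcomes: {cherry_picked / STUDIES:.0%} false positives")
```

With a pre-specified outcome the false-positive rate stays near the nominal 5%, whereas picking the best of 20 noise outcomes produces a "finding" in well over half of the simulated studies, which is why the reviewer duties described above include checking that the reported analyses were not chosen after the fact.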
4.2 Drawback One: The Pervasive Incentives Placed on Academics to Publish Manuscripts

Often a scholar's prospect of obtaining an academic position and winning research grants is determined by a combination of the number of articles they have published and the prestige of the academic journals where those articles appeared. The shrinking pool of available research funds and academic job opportunities has created a publish-or-perish culture within academia (Génova & de la Vara, 2019; Harvey, 2020; Tijdink et al., 2016). This culture has resulted in some scholars working outside regular work hours, and sometimes during weekends, to submit manuscripts for peer review (Barnett et al., 2019). It has also motivated some to engage in unethical research practices, such as dividing their results into slices and then publishing one paper per slice (i.e., salami slicing) (Šupak Smolcić, 2013) and/or text recycling (Anson & Moskovitz, 2021; Moskovitz, 2019; Timmins, 2019). Although scholars who engage in unethical research practices may improve their career prospects, ultimately they undermine the integrity of the discipline they have devoted their careers to studying and prevent future generations from making important discoveries based on accurate information.
4.3 Drawback Two: Publication Bias

As illustrated by Williams et al. (2020), at every stage of the research process there are biases that can undermine the conceptualisation, production, and dissemination of research (see Fig. 4.2). However, this section focuses on publication bias, also known as the file drawer phenomenon. Publication bias usually occurs when an editor selects an article for review, and the peer reviewers then approve it for publication, because it contains results and/or contents that they find appealing. When publication bias happens, studies that are accurate but less appealing may be ignored by editors and subsequently not sent to peer reviewers for evaluation. Consequently, important contributions to knowledge are never published and disseminated to potentially interested scholars (Ayorinde et al., 2020). This view was once expressed by Chambers (2014), who stated:
Fig. 4.2 Biases at different stages of the research process. (Source: Williams et al., 2020, p. 2)
[P]ublication bias is simple human nature: in judging whether a manuscript is worthy of publication, editors and reviewers are guided not only by the robustness of the method but by their impressions of what the results contribute to knowledge. Do the outcomes constitute a major advance, worthy of space within a journal that rejects the majority of submissions? Results that are novel and eye-catching are naturally seen as more attractive and competitive than those that are null or ambiguous, even when the methodologies that produce them are the same. (Kretser et al., 2019, p. 346)
Publication bias has occurred within the discipline of autism spectrum research. Kujabi et al. (2021) stated that there was a significant risk of publication bias in all 32 studies that they examined about neonatal jaundice and the autism spectrum. Carrasco et al. (2012) reported that publication bias has occurred within the field of pharmacological treatments for the repetitive behaviours that autistics exhibit. Since the peer review process has many different components, there are multiple strategies that can be used to reduce the potential for publication bias. Carroll et al. (2017) have provided a list of such strategies (see Table 4.1). In the interests of ensuring that publication bias does not prevent important discoveries about the autism spectrum from being published, all of the suggestions by Carroll and colleagues should be implemented. For example, an autism-specific journal for publishing studies with negative results could be established, since studies with positive results are the ones that tend to be published (i.e., publication bias favours studies whose hypotheses were supported). Autism-specific journals publishing a list of rejected articles is another strategy that can reduce the influence of publication bias on research about the autism spectrum. Such a list should document all rejected articles and explain why the editor did not send them out for peer review. Alternatively, if the peer reviewers rejected the manuscript, then their explanations should also be published on this list (Carroll et al., 2017).
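The distorting effect of the file drawer can also be shown with a small simulation. The sketch below is illustrative, not drawn from any of the studies cited above: the true effect size, sample sizes, seed, and the normal approximation are all my own assumptions. It simulates many identical studies of a small real effect and then compares the average estimate across all studies with the average among only the "significant" studies that survive the file drawer.

```python
import math
import random

random.seed(2)
TRUE_EFFECT, N, STUDIES = 0.1, 40, 5000

def run_study():
    """One two-group study; returns (estimated effect, two-sided p-value)."""
    a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    ma, mb = sum(a) / N, sum(b) / N
    va = sum((x - ma) ** 2 for x in a) / (N - 1)
    vb = sum((x - mb) ** 2 for x in b) / (N - 1)
    z = (ma - mb) / math.sqrt(va / N + vb / N)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return ma - mb, p

every_study, file_drawer_survivors = [], []
for _ in range(STUDIES):
    effect, p = run_study()
    every_study.append(effect)
    if p < 0.05:  # only "significant" results reach publication
        file_drawer_survivors.append(effect)

print(f"true effect:              {TRUE_EFFECT:.2f}")
print(f"mean across all studies:  {sum(every_study) / len(every_study):.2f}")
print(f"mean of published subset: {sum(file_drawer_survivors) / len(file_drawer_survivors):.2f}")
```

Because only studies with large estimates clear the significance threshold, the published subset substantially overstates the true effect even though every individual study was conducted honestly. This is the mechanism that strategies such as registered reports, negative-results journals, and research registration are designed to break.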
4.4 Drawback Three: Inconsistent Publishing Policies Between Journals

Currently, publishing policies are inconsistent between publishers. Klebel et al. (2020) examined the overall clarity of publishing policies across different disciplines and publishers. They reported that the life and Earth sciences had the clearest publishing policies while business, economics, and
Table 4.1 Examples of strategies that can reduce publication bias

- As part of gaining ethical approval and/or by law, researchers would have to guarantee publication of their research, regardless of the findings
- Negative results journals/articles: Having more journals specifically designed to accept research with negative, null, and unfavourable results
- Open reviewing: Requiring that journals name the reviewers and publish their comments with the final manuscript
- Peer review training and accreditation: Requiring all peer reviewers to attend peer review training, after which they would become accredited peer reviewers on a peer review database, which can also highlight potential conflicts of interest
- Post-publication review: Editors make a decision regarding the publication of an article. After publication, other researchers provide review comments which the authors can respond to. Although specific experts can be asked to conduct post-publication review, anyone is free to comment on all or part of the paper
- Pre-study publication of methodology: Researchers publish full details of their planned methodology before commencing the research. The methods are then peer reviewed to help ensure they are well justified. Once the study is completed, the full manuscript is peer reviewed and published, regardless of the findings
- Published rejection lists: Journals would openly archive the abstracts of rejected manuscripts with a summary of why the paper was rejected
- Research registration: Researchers would be required to register their research on specific databases within a certain time frame of commencing the research. Registration would be compulsory for all research, and would include key aspects of the study design, including the primary and secondary outcomes and analysis plans
- Two-stage review: Authors initially submit only their introduction and methods to a journal. These get peer reviewed, after which a decision is made regarding the study quality. If provisionally accepted, the authors would then submit the results and discussion for review. Rejection at this second stage would be justified by concerns over the quality of the reporting/interpreting of the results, but not according to the significance/direction of the results

Source: Carroll et al. (2017, p. 4)
management had the least clarity. Regarding academic publishers, Springer Nature was deemed to have the clearest publishing policies while Wiley was the least clear. The Journal of Autism and Developmental Disorders, Review Journal of Autism and Developmental Disorders, Autism: The International Journal of Research and Practice, Research in Autism Spectrum Disorders, and Focus
on Autism and Other Developmental Disabilities are the main English-language autism-specific journals. The possibility that their publishing policies are inconsistent has not been investigated. To improve the creation of research about the autism spectrum, all of their publication policies should be compared, and any identified inconsistencies should be standardised.
4.5 Drawback Four: Redundancy in Repeating the Peer Review Process

There are many suggestions about how a prospective author can successfully navigate their manuscript through the peer review process (Agathokleous, 2022; Annesley, 2011; Baker et al., 2017). One common recommendation is that authors should incorporate the peer reviewers' suggestions unless doing so would jeopardise the intellectual integrity of the argument explained in the manuscript (Agathokleous, 2022). However, evidence suggests that authors are more inclined to make some minor adjustments to their manuscript and then submit it to another journal (Crijns et al., 2021). Crijns et al. (2021) reported that of the 250 rejected manuscripts they examined, 200 (80%) were published in another journal. Among the 609 substantive actionable items identified in the rejection letters of the 200 manuscripts that were eventually published, 205 (34%) were addressed in the published manuscripts. Based on these results, Crijns et al. (2021, p. 517) concluded:

Our findings suggest that authors often disregard advice from peer reviewers after rejection. Authors may regard the peer review process as particular to a journal rather than a process to optimize dissemination of useful, accurate knowledge in any media.
Crijns and colleagues have shown that authors tend to disregard suggestions proposed by peer reviewers and just submit the same manuscript to another journal. According to Bennett and Goodall: The act of submitting a manuscript to another journal without first incorporating previous peer review suggestions can undermine the production of high-quality research. This act wastes the peer reviewer’s time because they are having to examine a manuscript that has already been reviewed. It also ignores previous peer review suggestions that were valid. (Bennett & Goodall, 2022, pp. 192–193)
To prevent this behaviour, Crijns and colleagues suggest that journals should use a single manuscript submission site that facilitates the transfer of peer review reports from one journal to another. Alternatively, Bennett and Goodall proposed that:

To save a peer reviewer's time and to improve the quality of published manuscripts, a condition of publication should be that authors declare if their manuscript has been previously peer-reviewed. If so, they should also be obligated to provide all reports by the previous peer reviewers. This documentation can accelerate the peer review process because an Editor can make a quick and accurate decision about if it should be sent out to peer-review. Such documentation can also augment another peer reviewer's comments, thus resulting in a more comprehensive assessment of the manuscript and a better-quality study. (Bennett & Goodall, 2022, p. 193)
4.6 Drawback Five: Lack of Formal Education for Early Career Researchers About How to Peer Review a Manuscript

There are very few training programmes that teach early career researchers (ECRs) the steps required to conduct a comprehensive peer review (Kerig, 2021). A combination of formal training and experience should be offered so that they can learn and master these abilities. The benefits of giving them such opportunities have been explained by several mid-career researchers (Dennehy et al., 2021; Muñoz-Ballester, 2021). For example, reflecting on why ECRs should be given opportunities to peer review manuscripts, Elia Di Schiavi claimed:

Including trainees in the review process is pivotal for them (and the whole peer review process), for their education on how to write articles and review them, but also for their professional development. … it's important to give a clear message to early-career researchers that reviewing is an important aspect of science, that it is a difficult and long task, but their job will be somehow acknowledged. Of course, there's also the possibility of adding this experience to build a more appealing CV. (Dennehy et al., 2021, p. 2)
Other researchers have cited their own experiences of research training as to why ECRs should receive opportunities to peer review manuscripts. As John Dennehy explained: I was never asked to participate in a peer review by my mentors. Being asked to review a manuscript was an important milestone for me as it indicated
that I was accepted as a peer in the scientific community. However, it took some time for me to learn the ins and outs of reviewing, a process that would have been facilitated if my mentors had shared reviewing responsibilities with me … As a mentor, I include my mentees in all reviews that I perform. The mentee benefits from first-hand exposure to the peer review process and the authors and editors benefit from an extra set of eyes assessing the work. (Dennehy et al., 2021, p. 2)
4.7 Drawback Six: Inconsistent Reviews of the Entire Manuscript

For a peer review to be comprehensive, all components of the submitted manuscript must be examined. However, in some disciplines few journals require peer reviewers to examine every component of the manuscript. For example, high-impact-factor journals that publish articles about surgical medicine often ask peer reviewers to examine a manuscript's statistical methodology and ethical considerations. In contrast, low-impact-factor journals typically request that peer reviewers judge the manuscript's novelty/originality, scientific validity, and importance to the field (Davis et al., 2018). Due to such inconsistencies, the quality of published manuscripts varies widely between and within disciplines. Regardless of a reviewer's skill and experience, checklists can help them determine whether a manuscript should be published. Checklists can also streamline the peer review process and create consistent outcomes between journals and disciplines (Agathokleous, 2022; Del Mar & Hoffmann, 2015; Salmi & Blease, 2021). Salmi and Blease (2021) have created a checklist that peer reviewers and authors can use to evaluate the quality of a manuscript. Alternatively, once a manuscript has been drafted, the most applicable Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guideline can be used to evaluate its contents (Struthers et al., 2021). Struthers et al. (2021) have published a flow diagram that peer reviewers, manuscript authors, and editors can use to select the most appropriate EQUATOR Network checklist (see Fig. 4.3). Editors can use this flow diagram to identify the most relevant EQUATOR checklist that they would like a peer reviewer to use during their examination of the manuscript. Authors can use it to improve the quality of their manuscript prior to its submission to an academic journal.
Finally, peer reviewers can use this flow diagram to locate the most applicable EQUATOR checklist in the event the editor does not assign one to them.
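The logic of such a decision tree can be sketched as a simple lookup. The snippet below is an illustrative, heavily reduced distillation of the kind of diagram Struthers et al. describe: the design labels, the reduced set of branches, and the fallback message are this sketch's own simplifications, not the full EQUATOR flow diagram. The acronym-to-guideline pairings themselves (e.g., CONSORT for randomised trials, PRISMA for systematic reviews) are the standard ones listed in the figure caption.

```python
# Illustrative mapping from study design to reporting checklist.
# The design labels and reduced branch set are this sketch's own
# simplification of the Struthers et al. decision tree.
GUIDELINE_BY_DESIGN = {
    "randomised controlled trial": "CONSORT",
    "trial protocol": "SPIRIT",
    "systematic review or meta-analysis": "PRISMA",
    "observational study": "STROBE",
    "diagnostic accuracy study": "STARD",
    "qualitative study": "SRQR",
    "case report": "CARE",
    "animal (in vivo) study": "ARRIVE",
}

def suggest_checklist(design: str) -> str:
    """Map a study design to a reporting checklist, with a fallback."""
    key = design.lower().strip()
    return GUIDELINE_BY_DESIGN.get(key, "search the EQUATOR Network library")

print(suggest_checklist("Randomised Controlled Trial"))  # CONSORT
print(suggest_checklist("single-case ethnography"))      # falls back
```

A real implementation would need the diagram's intermediate questions (human vs. animal subjects, protocol vs. completed study, and so on), but even this flat mapping conveys why a shared checklist-selection step can make reviews more consistent across journals.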
Fig. 4.3 EQUATOR reporting guideline decision tree. ARRIVE, Animal Research Reporting of In Vivo Experiments; CARE, CAse Report; CHEERS, Consolidated Health Economic Evaluation Reporting Standards; CONSORT, CONsolidated Standards of Reporting Trials; MOOSE, Meta-analysis Of Observational Studies in Epidemiology; PRISMA, Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols; SPIRIT, Standard Protocol Items: Recommendations for Interventional Trials; SQUIRE, Standards for QUality Improvement Reporting Excellence; SRQR, Standards for Reporting Qualitative Research; STARD, Standards for Reporting Diagnostic Accuracy; STREGA, STrengthening the REporting of Genetic Association Studies; STROBE, STrengthening the Reporting of OBservational studies in Epidemiology; TRIPOD, Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis. (Source: Struthers et al., 2021, p. 4)

4.8 Drawback Seven: The Impacts of Unprofessional Comments by Peer Reviewers

The main task of a peer reviewer is to examine a submitted manuscript and decide whether it should be published. Part of this process involves giving the manuscript's author comments that are either intended to help them refine their paper for publication or to explain why their paper will not be published. Typically, these comments are professional. However, some peer reviewers have written derogatory, insulting, and unprofessional comments (Gerwing et al., 2020; Menon et al., 2021; Silbiger & Stubler, 2019). Menon et al. (2021) examined the structure, tone, and quality of 527 reviewer reports from 291 original articles submitted to the Indian Journal of Psychological Medicine (IJPM) between 1 January 2019 and 15 May 2020. They reported that 24.1% (n = 117) contained insulting words and 8.8% (n = 43) had an unprofessional tone. Based on their 8-item Review Quality Instrument (RQI), they rated most reviewer reports as either poor (n = 266, 50.5%) or below average (n = 203, 38.5%). They concluded that most reviewer reports submitted to IJPM were unstructured and rated poorly on the RQI. Gerwing et al. (2020) have also examined 920 reports from the online review repository Publons and 571 reports from six early career investigators. After evaluating each report against an assessment rubric, they calculated that 41% (n = 611) contained incomplete or inaccurate comments and 12% (n = 179) contained at least one unprofessional comment directed toward the author or their work. Gerwing and colleagues have provided three strategies to improve the professionalism of peer reviewer comments. First, they propose that peer reviewers should only comment on the technical aspects of the manuscript. They should not make any comments about the author's characteristics or background, such as gender, age, sex, or race. Second, after a peer reviewer discovers an error in the manuscript, they should
provide detailed and clear recommendations so that the author can implement their suggestions. Third, peer reviewers should only examine a manuscript if they have the time and expertise to conduct a comprehensive review. In the interests of improving the professionalism of feedback given to scholars who study the autism spectrum, all of Gerwing et al.'s suggestions should be endorsed.
4.9 Drawback Eight: Ambiguity About Citing Preprinted Articles

Unlike peer-reviewed articles, which are assessed by experts before they are published, preprinted articles are published before they have been examined. Preprinted articles are also publicly available while they are being peer reviewed, whereas manuscripts undergoing a traditional peer review process are not accessible to the public while they are being examined (Tennant et al., 2019) (see Fig. 4.4). Preprinting is a quick and
Fig. 4.4 Publishing processes for peer review and preprint manuscripts. (Notes: (A) Traditional peer review publishing workflow and (B) Preprint submission establishing priority of discovery. Source: Tennant et al., 2019, p. 3)
simple way of disseminating research and informing interested parties about the latest developments in a rapidly evolving field of research. For example, during the rapidly changing COVID-19 pandemic, stakeholders such as the media and policy makers obtained the latest research from preprinted articles (Ravinetto et al., 2021). Beyond the pandemic, the number of preprinted articles published each year has increased dramatically since 2017 (Balaji & Dhanamjaya, 2019; George et al., 2021). There are five reasons why scholars disseminate their manuscripts via preprint servers. First, they believe that their manuscript contains important information that they would like published before it undergoes peer review. Second, the manuscript has been rejected by an academic journal and the author believes that it should be published. Third, an open access journal has accepted their manuscript for publication and the author would like it distributed to a broader audience. Fourth, an author may wish to publish negative results that they believe would not be accepted by an academic journal, especially since journals tend to publish only studies with positive results. Fifth, authors may wish to accelerate the distribution of their results instead of being delayed by the time and effort required to take their manuscript through the peer review process (Elmore, 2018). Publishing preprints is a quick and convenient way to disseminate research. However, the quality of preprint manuscripts is often lower than that of peer-reviewed studies (Carneiro et al., 2020). Añazco et al. (2021) examined preprinted articles about the COVID-19 pandemic that were posted on the preprint servers bioRxiv, medRxiv, and Research Square from 1 January 2020 to 31 May 2020. They compared the publication rate, citation counts, and time interval from posting on the preprint server to publication in a scholarly journal.
They reported that of the 5061 preprinted manuscripts examined, only 288 were published in scholarly journals (a publication rate of 5.7%). They concluded that the low publication rate of preprinted manuscripts about the COVID-19 pandemic could be partly attributed to scholarly journals being inundated with article submissions. They also concluded: [O]ur findings show that preprints had a significantly lower scientific impact, which might suggest that some preprints have lower quality and will not be able to endure peer-reviewing processes to be published in a peer-reviewed journal. (Añazco et al., 2021, p. 1)
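As a quick check, the 5.7% publication rate reported by Añazco et al. (2021) follows directly from the two counts given above. The following minimal Python sketch reproduces the calculation; the variable names are illustrative, not from the study:

```python
# Figures reported by Añazco et al. (2021): of 5061 COVID-19 preprints
# examined, 288 were later published in scholarly journals.
preprints_examined = 5061
preprints_published = 288

# Publication rate as a percentage, rounded to one decimal place.
publication_rate = preprints_published / preprints_examined * 100
print(f"Publication rate: {publication_rate:.1f}%")  # Publication rate: 5.7%
```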
4 EVALUATING AND IMPROVING THE PEER REVIEW PROCESS
Despite their abundance, some scholars still do not understand the distinction between peer reviewed and preprinted manuscripts. Sometimes this ambiguity is exacerbated because the distinction is not stated in the manuscript itself. Consequently, scholars might cite flawed research from preprinted manuscripts in their own research (Bourne et al., 2017; Elmore, 2018; Ravinetto et al., 2021). To avoid this outcome, in 2021 the American Medical Writers Association, the European Medical Writers Association, and the International Society for Medical Publication Professionals released a joint statement and recommendations about medical publications, preprints, and peer reviews (American Medical Writers Association, European Medical Writers Association, & International Society for Medical Publication Professionals, 2021). Ravinetto et al. (2021) have also provided similar recommendations (see Table 4.2). Preprinted manuscripts have both positive and negative consequences for the dissemination of research about the autism spectrum. However, scholars who study the autism spectrum should not cite them in their grant proposals or their own manuscripts for three reasons. First, there is no peer review or quality control process to ensure that the preprinted study has a robust design or that its conclusions reflect the data collected and analysed. Second, the preprinted manuscript might contain data that was obtained without ethical approval. Third, there is no mechanism to check that the preprinted manuscript is free of any conflicts of interest. Due to this lack of ethical safeguards and oversight of conflicts of interest, preprint manuscripts should not be cited (Elmore, 2018).

Table 4.2 Five recommendations by Ravinetto et al. (2021) about preprints

1. Consensus should be sought on a term clearer than preprint, such as unrefereed manuscript, manuscript awaiting peer review, or non-reviewed manuscript.
2. Caveats about unrefereed manuscripts should be prominent on their first page, and each page should include a red watermark stating, caution—Not peer reviewed.
3. Preprint authors should certify that their manuscript will be submitted to a peer review journal, and should regularly update the manuscript status.
4. High level consultations should be convened to formulate clear principles and policies for the publication and dissemination of non-peer reviewed research results.
5. In the longer term, an international initiative to certify servers that comply with good practices could be envisaged.

Source: Ravinetto et al. (2021)
4.10 Conclusion

This chapter explained the peer review process and eight drawbacks of this process. These drawbacks were (1) the pervasive incentive placed on academics to publish manuscripts so that they can achieve academic success, (2) publication bias, (3) inconsistent publishing policies between journals, (4) redundancy in repeating the peer review process, (5) a lack of formal education for early career researchers about how to peer review a manuscript, (6) inconsistent reviews of the entire manuscript, (7) the impact of unprofessional comments by peer reviewers, and (8) confusion about citing preprinted articles. Each drawback's potential impact on the production of research about the autism spectrum, along with some solutions, was explained. It is anticipated that these suggested solutions will improve the peer review process and, consequently, the production of research about the autism spectrum.
References

Agathokleous, E. (2022). Mastering the scientific peer review process: Tips for young authors from a young senior editor. Journal of Forestry Research, 33(1), 1–20. https://doi.org/10.1007/s11676-021-01388-8
American Medical Writers Association, European Medical Writers Association, & International Society for Medical Publication Professionals. (2021). AMWA-EMWA-ISMPP joint position statement on medical publications, preprints, and peer review. Current Medical Research and Opinion, 37(5), 861–866. https://doi.org/10.1080/03007995.2021.1900365
Añazco, D., Nicolalde, B., Espinosa, I., Camacho, J., Mushtaq, M., Gimenez, J., & Teran, E. (2021). Publication rate and citation counts for preprints released during the COVID-19 pandemic: The good, the bad and the ugly. PeerJ, 9, e10927. https://doi.org/10.7717/peerj.10927
Annesley, T. M. (2011). Top 10 tips for responding to reviewer and editor comments. Clinical Chemistry, 57(4), 551–554. https://doi.org/10.1373/clinchem.2011.162388
Anson, I. G., & Moskovitz, C. (2021). Text recycling in STEM: A text-analytic study of recently published research articles. Accountability in Research, 28(6), 349–371. https://doi.org/10.1080/08989621.2020.1850284
Ayorinde, A. A., Williams, I., Mannion, R., Song, F., Skrybant, M., Lilford, R. J., & Chen, Y. F. (2020). Publication and related biases in health services research: A systematic review of empirical evidence. BMC Medical Research Methodology, 20(1), 137. https://doi.org/10.1186/s12874-020-01010-1
Baker, W. L., DiDomenico, R. J., & Haines, S. T. (2017). Improving peer review: What authors can do. American Journal of Health-System Pharmacy, 74(24), 2076–2079. https://doi.org/10.2146/ajhp170187
Balaji, B. P., & Dhanamjaya, M. (2019). Preprints in scholarly communication: Re-imagining metrics and infrastructures. Publications, 7(1), 6. https://doi.org/10.3390/publications7010006
Barnett, A., Mewburn, I., & Schroter, S. (2019). Working 9 to 5, not the way to make an academic living: Observational analysis of manuscript and peer review submissions over time. BMJ (Clinical research ed.), 367, l6460. https://doi.org/10.1136/bmj.l6460
Bennett, M., & Goodall, E. (2022). Addressing underserved populations in autism spectrum research. Emerald Publishing Limited.
Bhattacharya, R., & Ellis, L. M. (2018). It is time to re-evaluate the peer review process for preclinical research. BioEssays: News and Reviews in Molecular, Cellular and Developmental Biology, 40(1). https://doi.org/10.1002/bies.201700185
Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider regarding preprint submission. PLoS Computational Biology, 13(5), e1005473. https://doi.org/10.1371/journal.pcbi.1005473
Carneiro, C. F., Queiroz, V. G., Moulin, T. C., Carvalho, C. A., Haas, C. B., Rayêe, D., et al. (2020). Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. Research Integrity and Peer Review, 5(1), 16. https://doi.org/10.1186/s41073-020-00101-3
Carrasco, M., Volkmar, F. R., & Bloch, M. H. (2012). Pharmacologic treatment of repetitive behaviors in autism spectrum disorders: Evidence of publication bias. Pediatrics, 129(5), e1301–e1310. https://doi.org/10.1542/peds.2011-3285
Carroll, H. A., Toumpakari, Z., Johnson, L., & Betts, J. A. (2017). The perceived feasibility of methods to reduce publication bias. PLoS One, 12(10), e0186472. https://doi.org/10.1371/journal.pone.0186472
Chambers, C. (2014). Registered reports: A step change in scientific publishing. https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/registered-reports-a-step-change-in-scientific-publishing
Cole, G. G. (2021). The grievance studies affair; one funeral at a time: A reply to Pluckrose, Lindsay, and Boghossian. Sociological Methods & Research, 50(4), 1937–1945. https://doi.org/10.1177/00491241211009949
Crijns, T. J., Ottenhoff, J., & Ring, D. (2021). The effect of peer review on the improvement of rejected manuscripts. Accountability in Research, 28(8), 517–527. https://doi.org/10.1080/08989621.2020.1869547
Davis, C. H., Bass, B. L., Behrns, K. E., Lillemoe, K. D., Garden, O. J., Roh, M. S., et al. (2018). Reviewing the review: A qualitative assessment of the peer review process in surgical journals. Research Integrity and Peer Review, 3, 4. https://doi.org/10.1186/s41073-018-0048-0
Del Mar, C., & Hoffmann, T. C. (2015). A guide to performing a peer review of randomised controlled trials. BMC Medicine, 13, 248. https://doi.org/10.1186/s12916-015-0471-8
Dennehy, J., Hoxie, I., di Schiavi, E., & Onorato, G. (2021). Reviewing as a career milestone: A discussion on the importance of including trainees in the peer review process. Communications Biology, 4(1), 1126. https://doi.org/10.1038/s42003-021-02645-6
Elango, B. (2021). Retracted articles in the biomedical literature from Indian authors. Scientometrics, 126(5), 3965–3981. https://doi.org/10.1007/s11192-021-03895-1
Elmore, S. A. (2018). Preprints: What role do these have in communicating scientific results? Toxicologic Pathology, 46(4), 364–365. https://doi.org/10.1177/0192623318767322
Faggion, C. M., Jr., Ware, R. S., Bakas, N., & Wasiak, J. (2018). An analysis of retractions of dental publications. Journal of Dentistry, 79, 19–23. https://doi.org/10.1016/j.jdent.2018.09.002
Galipeau, J., Moher, D., Skidmore, B., Campbell, C., Hendry, P., Cameron, D. W., et al. (2013). Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol). Systematic Reviews, 2, 41. https://doi.org/10.1186/2046-4053-2-41
Génova, G., & de la Vara, J. L. (2019). The problem is not professional publishing, but the publish-or-perish culture. Science and Engineering Ethics, 25(2), 617–619. https://doi.org/10.1007/s11948-017-0015-z
George, C. H., Alexander, S. P., Cirino, G., Insel, P. A., Izzo, A. A., Ji, Y., et al. (2021). Editorial policy regarding the citation of preprints in the British Journal of Pharmacology (BJP). British Journal of Pharmacology, 178(18), 3605–3610. https://doi.org/10.1111/bph.15589
Gerwing, T. G., Gerwing, A. M., Avery-Gomm, S., Choi, C. Y., Clements, J. C., & Rash, J. A. (2020). Quantifying professionalism in peer review. Research Integrity and Peer Review, 5, 9. https://doi.org/10.1186/s41073-020-00096-x
Glonti, K., Cauchi, D., Cobo, E., Boutron, I., Moher, D., & Hren, D. (2019). A scoping review on the roles and tasks of peer reviewers in the manuscript review process in biomedical journals. BMC Medicine, 17(1), 118. https://doi.org/10.1186/s12916-019-1347-0
Harvey, L. (2020). Research fraud: A long-term problem exacerbated by the clamour for research grants. Quality in Higher Education, 26(3), 243–261. https://doi.org/10.1080/13538322.2020.1820126
Kerig, P. K. (2021). Why participate in peer review? Journal of Traumatic Stress, 34(1), 5–8. https://doi.org/10.1002/jts.22647
Klebel, T., Reichmann, S., Polka, J., McDowell, G., Penfold, N., Hindle, S., & Ross-Hellauer, T. (2020). Peer review and preprint policies are unclear at most major journals. PLoS One, 15(10), e0239518. https://doi.org/10.1371/journal.pone.0239518
Kretser, A., Murphy, D., Bertuzzi, S., Abraham, T., Allison, D. B., Boor, K. J., Dwyer, J., Grantham, A., Harris, L. J., Hollander, R., Jacobs-Young, C., Rovito, S., Vafiadis, D., Woteki, C., Wyndham, J., & Yada, R. (2019). Scientific integrity principles and best practices: Recommendations from a scientific integrity consortium. Science and Engineering Ethics, 25(2), 327–355. https://doi.org/10.1007/s11948-019-00094-3
Kujabi, M. L., Petersen, J. P., Pedersen, M. V., Parner, E. T., & Henriksen, T. B. (2021). Neonatal jaundice and autism spectrum disorder: A systematic review and meta-analysis. Pediatric Research, 90(5), 934–949. https://doi.org/10.1038/s41390-020-01272-x
Lagerspetz, M. (2021). “The grievance studies affair” project: Reconstructing and assessing the experimental design. Science, Technology, & Human Values, 46(2), 402–424. https://doi.org/10.1177/0162243920923087
Marcoci, A., Vercammen, A., Bush, M., Hamilton, D. G., Hanea, A., Hemming, V., Wintle, B. C., Burgman, M., & Fidler, F. (2022). Reimagining peer review as an expert elicitation process. BMC Research Notes, 15(1), 127. https://doi.org/10.1186/s13104-022-06016-0
Menon, V., Varadharajan, N., Praharaj, S. K., & Ameen, S. (2021). Quality of peer review reports submitted to a specialty psychiatry journal. Asian Journal of Psychiatry, 58, 102599. https://doi.org/10.1016/j.ajp.2021.102599
Moskovitz, C. (2019). Text recycling in scientific writing. Science and Engineering Ethics, 25(3), 813–851. https://doi.org/10.1007/s11948-017-0008-y
Muñoz-Ballester, C. (2021). Transparency and training in peer review: Discussing the contributions of early-career researchers to the review process. Communications Biology, 1115. https://doi.org/10.1038/s42003-021-02646-5
Nair, S., Yean, C., Yoo, J., Leff, J., Delphin, E., & Adams, D. C. (2020). Reasons for article retraction in anesthesiology: A comprehensive analysis. Canadian Journal of Anesthesia, 67(1), 57–63. https://doi.org/10.1007/s12630-019-01508-3
Pluckrose, H., Lindsay, J., & Boghossian, P. (2021). Understanding the “grievance studies affair” papers and why they should be reinstated: A response to Geoff Cole. Sociological Methods & Research, 50(4), 1916–1936. https://doi.org/10.1177/00491241211009946
Ravinetto, R., Caillet, C., Zaman, M. H., Singh, J. A., Guerin, P. J., Ahmad, A., et al. (2021). Preprints in times of COVID19: The time is ripe for agreeing on terminology and good practices. BMC Medical Ethics, 22(1), 106. https://doi.org/10.1186/s12910-021-00667-7
Salmi, L., & Blease, C. (2021). A step-by-step guide to peer review: A template for patients and novice reviewers. BMJ Health & Care Informatics, 28(1), e100392. https://doi.org/10.1136/bmjhci-2021-100392
Silbiger, N. J., & Stubler, A. D. (2019). Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ, 7, e8247. https://doi.org/10.7717/peerj.8247
Staller, K. M. (2019). The darker side of a hoax: Creating a presumption of deception. Qualitative Social Work, 18(2), 149–151. https://doi.org/10.1177/1473325019833833
Struthers, C., Harwood, J., de Beyer, J. A., Dhiman, P., Logullo, P., & Schlüssel, M. (2021). GoodReports: Developing a website to help health researchers find and use reporting guidelines. BMC Medical Research Methodology, 21(1), 217. https://doi.org/10.1186/s12874-021-01402-x
Suelzer, E. M., Deal, J., Hanus, K. L., Ruggeri, B., Sieracki, R., & Witkowski, E. (2019). Assessment of citations of the retracted article by Wakefield et al with fraudulent claims of an association between vaccination and autism. JAMA Network Open, 2(11), e1915552. https://doi.org/10.1001/jamanetworkopen.2019.15552
Šupak Smolcić, V. (2013). Salami publication: Definitions and examples. Biochemia Medica, 23(3), 237–241. https://doi.org/10.11613/bm.2013.030
Tennant, J. P., Crane, H., Crick, T., Davila, J., Enkhbayar, A., Havemann, J., et al. (2019). Ten hot topics around scholarly publishing. Publications, 7(2), 34. https://doi.org/10.3390/publications7020034
Tijdink, J. K., Schipper, K., Bouter, L. M., Maclaine Pont, P., de Jonge, J., & Smulders, Y. M. (2016). How do scientists perceive the current publication culture? A qualitative focus group interview study among Dutch biomedical researchers. BMJ Open, 6(2), e008681. https://doi.org/10.1136/bmjopen-2015-008681
Timmins, F. (2019). Writing for publication-implications of text recycling and cut and paste writing. Journal of Nursing Management, 28(5), 999–1001. https://doi.org/10.1111/jonm.12868
Williams, I., Ayorinde, A. A., Mannion, R., Skrybant, M., Song, F., Lilford, R. J., & Chen, Y. F. (2020). Stakeholder views on publication bias in health services research. Journal of Health Services Research & Policy, 25(3), 162–171. https://doi.org/10.1177/1355819620902185
Reducing Questionable Research Practices
Abstract

Questionable research practices (QRPs) occupy a grey zone: they are neither outright academic fraud nor fully legitimate research. This chapter begins with a definition of QRPs and explanations of the three main types of QRPs. The prevalence of QRPs and the reasons why researchers engage in them are then discussed. The chapter concludes with three strategies that can be used to deter QRPs from occurring. The purpose of this chapter is to explain QRPs and to make recommendations for changing how research is created so that the occurrence of QRPs can be reduced. One consequence of fulfilling this purpose is that research about the autism spectrum will be more robust because it will contain fewer QRPs.

Keywords: Cherry picking • Datasets • Hypothesising After Results are Known • P-hacking • Pre-registration • Questionable research practices
5.1 Defining Questionable Research Practices

QRPs can be conceptualised as occupying the middle of a spectrum of ethical research practices. Dubious research practices (e.g., data falsification and plagiarism) occupy one end of this spectrum and legitimate research occupies the opposite end (Ravn & Sørensen, 2021). Banks et al. (2016, p. 7) have defined QRPs as “design, analytic or reporting practices that have been ‘questioned’ because of the potential for the practice to be employed with the purpose of presenting biased evidence in favor of an assertion”. QRP is an umbrella term that is typically applied to three practices: p-hacking, cherry picking, and Hypothesising After Results are Known (HARKing). Each QRP has been defined in various ways (see Table 5.1). What follows is a detailed explanation of each QRP.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
M. Bennett, Applying Metascientific Principles to Autism Research, https://doi.org/10.1007/978-981-19-9240-7_5

5.1.1 Cherry Picking

The term cherry picking originates from fruit pickers who only select fruit that appears palatable and ripe. Researchers who engage in cherry picking only select information that supports pre-existing notions (Andrade, 2021). There are five types of cherry picking that researchers can use either individually or in combination. First, selecting specific records in time occurs when a researcher only selects records that were created during a specific timeframe. Second, selecting isolated examples occurs when a researcher only selects records that support their pre-formed conclusions. Third, selecting responses from specific locations occurs when a researcher only selects data that were collected from one or a few specific locations. Fourth, selecting isolated papers occurs when a researcher only selects studies that support their conclusions. Fifth, quote mining occurs when a researcher only selects quotes that support their conclusions (Farmer & Cook, 2013).

5.1.2 P-Hacking

P-hacking is the act of deliberately manipulating a study's data or analyses to change its p values so that they either confirm or disprove a hypothesis (Andrade, 2021; Fraser et al., 2018). There are three reasons why academics engage in p-hacking. First, education institutions place upon them the expectation to publish results that are statistically significant. Second, journals that have high impact factors tend to publish only articles containing statistically significant results that support the hypothesis. Third, agencies that allocate grants base their decisions on a researcher's publication history and the results of previous studies. Thus, to obtain a grant a researcher is incentivised to engage in p-hacking (Raj et al., 2018). P-hacking can have three detrimental consequences for the production of research. First, studies that contain fraudulent claims of statistical significance, due to a p value being subjected to p-hacking, can encourage
5 REDUCING QUESTIONABLE RESEARCH PRACTICES
Table 5.1 Definitions of different questionable research practices

Cherry picking
“Cherry picking includes failing to report dependent or response variables or relationships that did not reach statistical significance or other threshold and/or failing to report conditions or treatments that did not reach statistical significance or other threshold.” (Fraser et al., 2018, p. 2)
“Selective outcome reporting, or outcome switching or ‘cherry-picking’ as it is also known, refers to the practice of using multiple outcomes in a research study but reporting only a selection. Selective outcome reporting increases the probability that a statistically significant study finding is due to chance.” (Büttner et al., 2020, p. 1366)
“The researcher cherry-picks only the significant outcomes for the paper that presents the findings; the nonsignificant outcomes are omitted as though those outcomes had not been studied. Or, when discussing the findings of their study, authors may cherry-pick for consideration research that favors their viewpoint and may criticize or even neglect to cite studies that do not support their arguments. Cherry-picking is a QRP because the reader is deceived into seeing a picture that is more favorable than it truly is.” (Andrade, 2021, p. e1)

P-hacking
“P hacking refers to a set of activities: checking the statistical significance of results before deciding whether to collect more data; stopping data collection early because results reached statistical significance; deciding whether to exclude data points (e.g., outliers) only after checking the impact on statistical significance and not reporting the impact of the data exclusion; adjusting statistical models, for instance by including or excluding covariates based on the resulting strength of the main effect of interest; and rounding of a p value to meet a statistical significance threshold (e.g., presenting 0.053 as p