Psychoeducational Assessment and Report Writing, 2nd ed. ISBN 9783030446406, 3030446409


English. 570 pages. 2020.


Table of Contents
Preface
Contents
Editor and Contributors
Part I Overview of the Psychoeducational Assessment and Report Writing Process
1 Purpose of Psychoeducational Assessment and Report Writing
1.1 Definition and Purpose of Psychoeducational Assessment
1.2 Who Conducts Psychoeducational Assessments?
1.3 Psychoeducational Versus Psychological Assessment and Report Writing
1.4 Feedback Conference
1.5 Conclusion
References
2 A Newly Proposed Framework and a Clarion Call to Improve Practice
2.1 Introduction and Historical Overview
2.2 A Renewal of a Commitment
2.3 The First Pillar of Evidence-Based Psychoeducational Assessment: Scientific Decision-Making and Critical Thinking
2.3.1 Errors in Decision-Making
2.3.1.1 Naïve Realism
2.3.1.2 Confirmation Bias
2.3.1.3 Premature Closure
2.3.1.4 Belief Perseverance
2.3.1.5 Illusory Correlation
2.3.1.6 Hindsight Bias
2.3.1.7 Groupthink
2.3.1.8 Overreliance on Heuristics
2.3.1.9 Base Rate Neglect
2.3.1.10 Bias Blind Spot
2.3.1.11 Unpacking
2.3.1.12 Aggregate Bias
2.3.1.13 Search Satisficing
2.3.1.14 Diagnostic Overshadowing
2.3.1.15 Diagnosis Momentum
2.3.1.16 Psych Out Error
2.3.1.17 Pathology Bias
2.3.2 Warning Signs of Pseudoscience
2.3.2.1 Lack of Falsifiability and Overuse of Ad Hoc Hypotheses
2.3.2.2 Lack of Self-Correction
2.3.2.3 Emphasis on Confirmation
2.3.2.4 Avoidance of Peer Review
2.3.2.5 Overreliance on Anecdotal and Testimonial Evidence
2.3.2.6 Absence of Connectivity
2.3.2.7 Extraordinary Claims
2.3.2.8 Ad Antiquitatem Fallacy
2.3.2.9 Use of Hypertechnical Language
2.3.2.10 Reversed Burden of Proof
2.3.2.11 Extensive Appeals to Authority or Gurus (i.e., Eminence-Based Assessment)
2.3.2.12 Heavy Reliance on Endorsements from Presumed Experts
2.3.2.13 Extensive Use of Neurobabble or Psychobabble
2.3.2.14 Tendency of Advocates to Insulate Themselves from Criticism
2.3.2.15 Limited or Non-existent Reporting of Contradictory Findings
2.4 The Second Pillar of Evidence-Based Psychoeducational Assessment: Evidence-Based Instrument Selection, Metrics of Interpretability Evaluation, and Consideration of Diagnostic Utility
2.4.1 Instrument Selection: Psychometric Considerations
2.4.2 Metrics of Interpretability Evaluation
2.4.3 Diagnostic and Treatment Validity/Utility Consideration
2.4.4 Reasons for Continued Popularity of Non-empirically Supported Assessment Practices
2.4.4.1 Illusory Correlation
2.4.4.2 The P.T. Barnum Effect
2.4.4.3 Overperception of Psychopathology (or Disability)
2.4.4.4 The Alchemist’s Fantasy
2.4.4.5 Clinical Tradition and Educational Inertia
2.4.4.6 Confirmation Bias
2.4.4.7 Hindsight Bias
2.4.4.8 Overconfidence
2.4.4.9 Suboptimal Psychometric Training and Education
2.5 The Third Pillar of Evidence-Based Psychoeducational Assessment and Report Writing: Expert Clinical Judgment
2.5.1 The Dimensions of Expertise
2.5.1.1 Domain-Specificity
2.5.1.2 Greater Knowledge and Experience
2.5.1.3 Meaningful Perception
2.5.1.4 Reflective, Qualitative Problem Analysis
2.5.1.5 Effective Strategy Construction
2.5.1.6 Postanalysis Speed and Accuracy
2.5.1.7 Working Memory
2.5.1.8 Self-monitoring Skills
2.5.2 How to Develop Expert Clinical Judgment
2.6 The Fourth Pillar of Evidence-Based Psychoeducational Assessment: A Well-Written, Organized, and Aesthetically Appealing Report that Is Properly Conveyed to Stakeholders
References
3 The Psychoeducational Assessment and Report Writing Process: A General Overview
3.1 Overview
3.2 Steps in the Psychoeducational Assessment and Report Writing Process
3.3 Working with Children
3.4 Observing the Child
3.5 The Testing Environment and the Test Session
3.5.1 Establish a Working, not a Therapeutic, Relationship with the Child
3.5.2 Take Advantage of the Honeymoon Effect
3.5.3 The Room Layout
3.5.4 How to Start the Testing Session
3.5.5 Examiner Anxiety
3.5.6 Be Well Prepared and Adhere to Standardized Directions
3.5.7 Triple Check Protocol Scoring
3.5.8 Breaks, Encouragement and Questions
3.5.9 Debrief the Testing Process
3.6 Conclusion
References
4 Interviewing and Gathering Data
4.1 Introduction
4.2 Interviewing
4.2.1 Do not Overlook the Need to Connect with Caregivers
4.2.2 The Three Main Interviewing Approaches
4.3 The Psychoeducational Interview Format
4.3.1 Caregiver/Parent Format
4.3.2 Teacher Format
4.3.3 Student Interview Format
4.3.4 Interview Format for Mental Health Conditions that Might Impact Educational Functioning
4.3.5 A Semi-structured Psychoeducational Interview
4.4 Gathering Background and Additional Data
4.4.1 Structured Developmental History Questionnaires
4.4.2 Child Development Questionnaire (CDQ)
4.4.3 Ascertaining Additional Background Information
4.5 Conclusion
References
5 Observing the Child
5.1 Introduction
5.2 Types of Observation
5.2.1 Naturalistic Observation
5.2.1.1 Anecdotal Recording
5.2.1.2 A-B-C Recording
5.2.2 Systematic Direct Observation
5.2.2.1 Event Recording
5.2.2.2 Duration Recording
5.2.2.3 Latency Recording
5.2.3 Time Sampling Procedures
5.2.3.1 Momentary Time Sampling
5.2.3.2 Partial Interval Time Sampling
5.2.3.3 Whole Interval Time Sampling
5.2.4 Observation of Comparison Students
5.2.5 Analog Observation
5.2.6 Observation of Permanent Products
5.2.7 Self and Peer Observation of Student Behavior
5.2.8 Observation in the Home
5.2.9 Observation Systems
5.2.9.1 Behavior Assessment System for Children (BASC-3)
5.2.9.2 Behavioral Observation of Students in Schools (BOSS)
5.3 How Many Observations Are Enough?
5.4 Observation of Student Behavior During Administration of Standardized Assessments
5.5 Hawthorne and Halo Effects
5.6 Summary
References
6 General Guidelines on Report Writing
6.1 Overview
6.1.1 Make the Report Aesthetically Appealing
6.2 Structure of the Psychoeducational Report
6.3 Conceptual Issues in Psychoeducational Report Writing
6.3.1 Address Referral Questions
6.3.2 Avoid Making Predictive and Etiological Statements
6.3.3 Make a Classification Decision and Stand by It
6.3.4 Rule Out Other Classifications and State Why You Ruled Them Out
6.3.5 Use Multiple Sources of Data and Methods of Assessment to Support Decision-Making
6.3.6 Eisegesis
6.3.7 Be Wary of Using Computer-Generated Reports
6.3.8 Sparingly Use Pedantic Psychobabble
6.3.9 Avoid Big Words and Write Parsimoniously
6.3.10 Address the Positive
6.3.11 Write Useful, Concrete Recommendations
6.4 Stylistic Issues in Psychoeducational Assessment Report Writing
6.4.1 Report Length
6.4.2 Revise Once, Revise Twice, Revise Thrice and then Revise Once More
6.4.3 Avoid Pronoun Mistakes
6.4.4 Use Headings and Subheadings Freely
6.4.5 Provide a Brief Description of Assessment Instruments
6.4.6 Use Tables and Charts to Present Results
6.4.7 Put Selected Statements Within Quotations to Emphasize a Point
6.4.8 Improve Your Writing Style
6.5 Conclusion
References
Part II Section-by-Section Report Writing Guidelines
7 Identifying Information and Reason for Referral
7.1 Introduction
7.2 Identifying Information
7.3 Reason for Referral
7.3.1 Generic Referral
7.3.2 Hybrid Referral
7.3.3 Example of Hybrid Referrals for IDEA Categories
7.4 Conclusion
References
8 Assessment Methods and Background Information
8.1 Introduction
8.2 Assessment Methods
8.3 Background Information and Early Developmental History
8.3.1 Introduction
8.3.2 Prenatal, Perinatal, and Early Developmental History
8.3.3 Medical and Health
8.3.4 Cognitive, Academic, and Language Functioning
8.3.5 Social, Emotional, Behavioral, and Adaptive Functioning
8.3.6 Strengths and Interests
8.3.7 Conclusion
8.4 Conclusion
References
9 Assessment Results
9.1 Introduction
9.2 Organization of Assessment Results Section
9.3 Format for Presentation of Assessment Instruments
9.4 Understanding Standard and Scaled Scores
9.5 Use of Confidence Intervals
9.6 Comment on Raw, Grade, and Age Equivalent Scores
9.7 Conclusion
References
10 Conceptualization and Classification
10.1 Introduction
10.2 Integration of Information
10.2.1 Guidelines for the Integration of Results
10.3 General Framework for the Conceptualization and Classification Section
10.4 Specific Conceptualization and Classification Examples
10.4.1 Learning Disabilities Conceptualization and Classification
10.4.2 Emotional Disturbance Conceptualization and Classification
10.4.3 Conceptualization and Classification of Autism
10.4.4 Classification of Intellectual Disability
10.4.5 Conceptualization and Classification of OHI
10.5 Conclusion
References
11 Summary and Recommendations
11.1 Introduction
11.2 Summary Section
11.2.1 Contents of the Summary Section
11.3 Recommendations Section
11.3.1 Why Do Psychologists Exclude Recommendations?
11.3.1.1 State Practices
11.3.1.2 Psychologist and School District Practice
11.3.2 Importance of Recommendations
11.4 General Recommendation Writing Guidelines
11.5 Why Are Recommendations Not Implemented?
11.6 Recommendation Examples
11.6.1 Sample Recommendations for Commonly Faced Academic Difficulties
11.6.1.1 Reading Difficulties
11.6.1.2 Writing Difficulties
11.6.1.3 Mathematics Difficulties
11.6.2 Sample Recommendations for Social, Emotional, and Behavioral Difficulties
11.6.2.1 Recommendations for ADHD
11.6.2.2 Referral to a Child Psychiatrist for a Psychotropic Medication Evaluation
11.6.2.3 Recommendations for Counseling/Psychotherapy
11.6.2.4 Social Skills Intervention Recommendations
11.6.2.5 Social Skills Recommendations for Children with Autism Spectrum Disorder
11.6.2.6 Crisis Recommendations
11.6.2.7 Parental Recommendations
11.7 Accommodations
11.7.1 Environmental Accommodations
11.7.2 Testing Accommodations
11.7.3 Homework/Classwork Modifications
11.8 Web Site Suggestions
11.9 Conclusion
References
Part III Guidance Regarding Assessment and Classification of IDEA Categories Including Sample Reports
12 Learning Disabilities
12.1 Overview
12.2 Historical Definition and Conceptual Considerations
12.3 Definition and Identification of Specific Learning Disabilities
12.3.1 Diagnostic and Statistical Manual, Fifth Edition (DSM-5)
12.3.2 Developmental Learning Disorder (ICD-11)
12.3.3 IDEA
12.3.4 Response to Intervention (RTI)
12.3.4.1 Advantages of RTI for SLD Assessment and Intervention
12.3.4.2 Limitations of RTI for SLD Assessment and Intervention
12.3.5 Pattern of Strengths and Weaknesses (PSW)
12.3.5.1 Historical Considerations
12.3.5.2 Patterns of Strengths and Weaknesses (PSW) for SLD Determination
12.3.5.3 Discrepancy/Consistency Model (D/CM)
12.3.5.4 Dual/Discrepancy Consistency Model (DD/C)
12.3.5.5 Concordance/Discordance Model (C/DM)
12.3.5.6 Core-Selective Evaluation Process (C-SEP)
12.3.5.7 Strengths of the PSW Model
12.3.5.8 Potential Weaknesses of the PSW Model
12.4 General Guidance Regarding the Assessment of Learning Disabilities
12.4.1 Alternative Research-Based Procedures
12.5 Comment on Use of IQ Tests
12.6 Conclusion
Appendix
LD REPORT EXAMPLE
LD EXAMPLE WITH SUPPORT FOR ADHD
LD EXAMPLE WITH CONSIDERATION OF OHI/ADHD
References
13 Autism
13.1 Overview
13.2 Definition of Autism Within IDEA
13.3 Definition of Autism Within DSM-5
13.4 Autism Spectrum Disorder (ICD-11)
13.5 Identification of Autism
13.6 General Guidance Regarding Psychoeducational Assessment of ASD
13.6.1 Consider Comorbidity and Rule Out Selected Disorders
13.6.2 Cognitive Ability
13.6.3 Academic Achievement
13.6.4 Communication and Language
13.6.5 Adaptive Functioning
13.6.6 Fine and Gross Motor Skills
13.7 Conclusion
Appendix
Sample Report 1: High Functioning Autism Example
Sample Report 2: Low- to Mid-functioning Autism
Sample Report 3: Lower Functioning Autism
References
14 Emotional Disturbance
14.1 Overview
14.2 IDEA Definition
14.3 Identification
14.4 General Guidance Regarding Psychoeducational Assessment of ED
14.4.1 Instrument Selection
14.4.2 Eschew Projective Measures
14.4.3 A More Behavioral Approach to Identification of ED that Considers Cultural Context
14.5 Conclusion
Appendix
References
15 Intellectual Disabilities
15.1 Overview
15.2 Definition
15.2.1 Etiology
15.2.2 Characteristics of Intellectual Disabilities
15.3 Identification of ID
15.4 General Guidance Regarding Psychoeducational Assessment
15.4.1 Dual Deficit in IQ and Adaptive Behavior
15.4.2 Tests of Intelligence
15.4.2.1 Avoid Rigid Cut Score Application
15.4.2.2 IQ Test Selection: Psychometric Considerations
15.4.2.3 Administer a Second IQ Test?
15.4.3 Adaptive Behavior
15.4.4 Consider Comorbidity and Rule Out Selected Disorders
15.5 Conclusion
Appendix
References
16 Other Health Impaired
16.1 Overview
16.2 Definition
16.3 Identification
16.4 General Guidance Regarding Psychoeducational Assessment of OHI
16.4.1 Attention-Deficit/Hyperactivity Disorder (ADHD)
16.4.2 Other Health Conditions
16.5 Conclusion
Case Background Information
Appendix: Sample Report 1—Qualify Without Outside Diagnosis of ADHD
References
17 Miscellaneous IDEA Categories and Section 504
17.1 Overview
17.2 Section 504
17.2.1 Definition
17.2.2 Identification and Psychoeducational Assessment
17.2.3 Conclusion
17.3 Traumatic Brain Injury (TBI)
17.3.1 Overview
17.3.2 Definition
17.3.3 Correlates of TBI
17.3.4 Guidance Regarding Psychoeducational Assessment
17.3.5 Conclusion
17.4 Visual Impairment/Blindness
17.4.1 Overview
17.4.2 Definition
17.4.3 Identification and Psychoeducational Assessment Considerations
17.4.4 Conclusion
17.5 Hearing Loss and Deafness
17.5.1 Overview
17.5.2 Definition
17.5.3 Degree of Hearing Loss
17.5.4 Identification and Psychoeducational Assessment Considerations
17.5.5 Conclusion
17.6 Orthopedic Impairment
17.6.1 Overview
17.6.2 Definition
17.6.3 Psychoeducational Assessment Considerations
17.6.4 Conclusion
Appendix
References
Part IV Oral Reporting and Miscellaneous Topics in Psychoeducational Assessment and Report Writing
18 Assessment and Identification of Gifted Youth
18.1 Introduction
18.2 Conceptualizations of Giftedness
18.3 Prevalence Rates
18.4 How Do States Define and Identify Giftedness?
18.5 Guiding Concepts for Assessment and Report Writing
18.5.1 Demographics, Reason for Referral, Relevant Background Data
18.5.2 Parent/Teacher Report
18.5.3 Cognitive Ability
18.5.4 Academic Achievement
18.5.5 Rule Out Considerations: ELL, ED, Health Concerns, SLD, Dually Exceptional Student Discussion
18.6 Conclusion
Appendix
References
19 The Use of Curriculum-Based Measures/Response to Intervention Within the Context of Psychoeducational Assessment and Report Writing
19.1 Introduction
19.2 Curriculum-Based Assessment and Curriculum-Based Measurement Approaches to Assessment
19.3 Curriculum-Based Measurement
19.3.1 Curriculum-Based Measures of Reading
19.3.1.1 Oral Reading Fluency
19.3.1.2 Comprehension
19.3.1.3 Early Literacy
19.3.2 Curriculum-Based Measures of Mathematics
19.3.2.1 Computational Fluency
19.3.2.2 Concepts/Application
19.3.3 Curriculum-Based Measures of Writing
19.3.3.1 Total Words Written
19.3.3.2 Correct Word Sequences
19.3.3.3 Emerging Technologies
19.4 The Role of Curriculum-Based Measurement in Specific Learning Disabilities Eligibility Decision-Making
19.4.1 Response to Intervention
19.4.1.1 Data Display and Visual Analysis of Curriculum-Based Measurement in Response to Intervention Models
19.4.1.2 Responders Versus Non-responders to Intervention
19.4.1.3 Specific Learning Disabilities Eligibility and Reporting
19.5 Evidence-Based Practice and Fidelity
19.6 Summary
Appendix
References
20 Culturally and Linguistically Diverse Learners
20.1 Overview
20.2 General Concepts
20.2.1 Pre-referral Bias
20.2.2 Assessment Bias
20.2.3 Translations and Interpreters
20.2.4 Practical Considerations on the Use of Translators and Interpreters
20.2.5 Bilingual Special Education Interface
20.3 Ethical and Legal Considerations
20.3.1 Sensory Issues
20.3.2 Diagnostic Issues
20.3.3 Environmental Issues
20.4 Nondiscriminatory Assessment of Culturally and Linguistically Diverse Students
20.4.1 Choosing Tests that Are not Discriminatory
20.4.2 Choosing Tests Provided in a Child’s Native Language or Other Mode
20.5 Additional Concerns
20.6 Conclusion
References
21 Oral Reporting
21.1 Overview
21.2 Format and Meeting Participants
21.3 General Framework for the Feedback Conference
21.3.1 How to Start the Conference
21.3.2 Brief Description of Evaluation Process
21.3.3 Presentation of the Classification Conclusion
21.3.4 Address Any Questions or Concerns
21.3.5 Discuss the Major Domains Addressed by the Assessment Including a Strength-Based Assessment, Integrate the Findings, and Discuss the Rationale for Your Conclusion
21.3.5.1 Cognitive and Academic
21.3.5.2 Social-Emotional and Behavioral
21.3.6 Integrate Findings and Reiterate the Rationale for Your Conclusion
21.3.7 Discuss Recommendations
21.4 Case Examples
21.4.1 Case Example 1: The Unexpected Response
21.4.2 Case 2: The Caregiver in Denial
21.4.3 Case 3: The Appreciative Caregiver
21.5 General Oral Reporting Guidelines
21.5.1 Facilitate the Meeting and Engage All Appropriate Participants
21.5.2 Team Decision
21.5.3 Avoid Pedantic Psychobabble
21.5.4 Expect the Unexpected
21.5.5 Pressure or Negotiation for a Classification
21.5.6 Be Empathetic but Maintain Boundaries
21.5.7 Be Prepared
21.5.8 Be Direct but Gentle
21.5.9 Speak Slowly and Permit Time for Caregivers to Process Information and Ask Questions
21.6 Conclusion
22 Special Issues in Psychoeducational Assessment and Report Writing
22.1 Overview
22.2 Applicable General Ethical Principles and Test Standards
22.2.1 Beneficence and Nonmaleficence
22.2.2 Respect for Rights and Dignity of Individuals
22.2.3 Competence
22.2.4 Engage in Empirically Validated Practices
22.2.5 Conflict Between the Law and Ethical Standards
22.2.6 Confidentiality and Maintenance of Records
22.2.7 Limits to Confidence
22.2.8 Consent for Assessment
22.2.9 Consent to Share Records
22.2.10 Report Security
22.2.11 Release of Test Protocols
22.2.12 Maintenance of Records
22.3 General Assessment and Report Writing Principles
22.3.1 Avoid the Use of Age and Grade Equivalent Scores
22.3.2 Know Points on the Normal Curve
22.3.3 Evidence-Based Test Use and Interpretation
22.3.4 Avoid the Discrepancy Approach
22.3.5 Adhere to Standardized Directions
22.3.6 Report Test Scores, but Do Not Overemphasize Numbers
22.3.7 Fully Complete Test Protocols
22.3.8 Pressure or Negotiation for a Classification
22.3.9 No Classification, Now What?
22.3.10 Use of New Instruments
22.3.11 Old Versus New Instruments
22.3.12 Consider Culture and Language
22.4 Conclusion
References
Index


Stefan C. Dombrowski, Editor

Psychoeducational Assessment and Report Writing
Second Edition


Editor
Stefan C. Dombrowski
Department of Graduate Education, Leadership and Counseling
Rider University
Lawrenceville, NJ, USA

ISBN 978-3-030-44640-6
ISBN 978-3-030-44641-3 (eBook)
https://doi.org/10.1007/978-3-030-44641-3

1st edition: © Springer Science+Business Media New York 2015
2nd edition: © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

This book is dedicated to the field of school and clinical psychology. May we adopt a more evidence-based approach to psychoeducational assessment and report writing. This book is also dedicated to my wife and two boys, Maxwell (age 15) and Henry (age 13). Life is passing too quickly and I am lucky to have you in my life. Stefan C. Dombrowski

Preface

Now significantly updated and in its second edition, this book advances the practice of psychoeducational assessment and report writing and casts the process within a newly proposed evidence-based psychoeducational assessment and report writing framework. Most assessment and report writing texts are too broad, offering simultaneous guidance on clinical assessment, psychoeducational assessment, adult assessment, and preschool assessment. None of the existing books provide sufficient coverage of the process of psychoeducational assessment and report writing, particularly in relation to the US-based IDEA/state special education classifications for which psychologists in the schools will become responsible. Psychologists working in US public schools are beholden to IDEA/state special education regulations; however, the approach to assessment and report writing offered in this text will be useful regardless of which diagnostic taxonomy is referenced [e.g., IDEA, the Diagnostic and Statistical Manual (DSM), or the International Classification of Diseases (ICD)] and regardless of where the psychologist practices, whether in a US public school or in a clinic, domestically or internationally. This book casts a narrow net, seeking to offer specific guidance on the practice of psychoeducational assessment and report writing for school-aged children. It aims to be a useful resource for faculty in school and clinical child psychology who teach coursework on the evaluation of children. This book will also be useful for practicing psychologists who wish to expand their knowledge base and acquire a more contemporary, evidence-based understanding of this process.

This book comprises four parts. The first part furnishes an evidence-based framework for psychoeducational assessment and report writing and includes a general overview of the process. The second part offers a section-by-section report writing discussion (e.g., Reason for Referral; Assessment Methods and Sources of Data; Assessment Results; Conceptualization and Classification; Summary and Recommendations) with a chapter devoted to each major report component. The third part furnishes general guidance regarding the psychoeducational assessment of major IDEA classification categories (e.g., learning disabilities, emotional disturbance, autism, other health impaired, and intellectual disability). It also presents sample reports for those categories in an appendix at the end of each chapter. Within each sample report, the various report sections are annotated to augment the reader's understanding of the rationale for decisions discussed in the report. The fourth and final part discusses miscellaneous legal, ethical, and professional issues, including practical guidance on the process of furnishing oral report feedback. Additionally, this part includes a discussion of the evaluation of giftedness, the use of curriculum-based assessment/response to intervention in a psychoeducational assessment context for purposes of learning disabilities classification, and the assessment of culturally and linguistically diverse learners.

Objectives

Useful for licensed psychologists, certified school psychologists, and graduate students in school and clinical psychology who wish to expand their knowledge of contemporary practices, this book seeks to accomplish the following objectives:

1. Offer a newly proposed evidence-based psychoeducational assessment framework that encourages psychologists to recognize the markers of pseudoscience and to regard the scientific literature when engaging in the assessment and report writing process.
2. Offer a comprehensive, practical resource that may be useful to instructors, psychologists, and graduate students in school and clinical child psychology on the process of conducting comprehensive psychoeducational assessments, writing reports, and furnishing feedback to caregivers and other stakeholders.
3. Offer specific guidance on gathering information and data on the child via interviewing, rating forms, classroom observations, and developmental history questionnaires.
4. Offer a section-by-section, detailed discussion of each psychoeducational report component, including identifying information, referral reason, assessment methods and sources of data, assessment results, conceptualization and classification, and summary and recommendations.
5. Offer a detailed discussion about how to integrate disparate pieces of assessment data, conceptualize a clinical case, and offer a classification/diagnostic decision.
6. Offer a structured approach to the provision of feedback to parents, caregivers, and teachers.
7. Offer a discussion of ethical, practical, cultural, linguistic, legal, and empirical considerations when engaging in psychoeducational assessment, report writing, and oral reporting.

As a resource for practicing psychologists and graduate students alike, this text assumes that students already have a sufficient grasp of standard written English. Therefore, it will not review basic writing principles. If writing is generally an area of weakness, then remediation is strongly suggested. This text will not cover in detail the assessment of children's intelligence; however, it will provide an evidence-based framework for the interpretation of IQ measures in relation to learning disabilities diagnosis. The assessment of intelligence has been, and will likely continue to be, intertwined with the assessment of learning disabilities specifically and psychoeducational assessment more broadly. Nonetheless, a discussion of the history, mechanics, and components of specific IQ test instruments is adequately covered in existing texts and will not be discussed in this book. This book restricts its focus to the psychoeducational assessment of children in kindergarten through 12th grade. It does not extend its gaze downward to infant and preschool assessment nor upward to college, adult, and geriatric assessment, even though the approach to assessment and report writing offered in this book can be extended to those groups. Additionally, this book does not discuss neuropsychological, vocational, or forensic assessment. Finally, the distinction between psychoeducational and psychological assessment and report writing, including the many areas of overlap, is covered in the first chapter.

The Book's Genesis

I wrote this text out of a genuine desire to advance practice in the field. The assessment of children is big business and should be guided by a structured, organized, and evidence-based approach. I have endeavored to integrate nearly 20 years of research, teaching, and clinical/school-based experience in assessment to accomplish this task. I hope that this book augments your understanding of the process of psychoeducational assessment and report writing and promotes the adoption of an evidence-based approach to the process.

Cherry Hill, NJ, USA
January 2020

Stefan C. Dombrowski, Ph.D.

Acknowledgements

I would like to thank Anna Beier, a graduate student in school psychology and soon-to-be colleague, for her assistance in putting together charts, tables, and PowerPoint slides for numerous chapters.

Contents

Part I: Overview of the Psychoeducational Assessment and Report Writing Process

1. Purpose of Psychoeducational Assessment and Report Writing (Stefan C. Dombrowski)
2. A Newly Proposed Framework and a Clarion Call to Improve Practice (Stefan C. Dombrowski)
3. The Psychoeducational Assessment and Report Writing Process: A General Overview (Stefan C. Dombrowski)
4. Interviewing and Gathering Data (Stefan C. Dombrowski)
5. Observing the Child (Karen L. Gischlar)
6. General Guidelines on Report Writing (Stefan C. Dombrowski)

Part II: Section-by-Section Report Writing Guidelines

7. Identifying Information and Reason for Referral (Stefan C. Dombrowski)
8. Assessment Methods and Background Information (Stefan C. Dombrowski)
9. Assessment Results (Stefan C. Dombrowski)
10. Conceptualization and Classification (Stefan C. Dombrowski)
11. Summary and Recommendations (Stefan C. Dombrowski)

Part III: Guidance Regarding Assessment and Classification of IDEA Categories Including Sample Reports

12. Learning Disabilities (Stefan C. Dombrowski)
13. Autism (Stefan C. Dombrowski)
14. Emotional Disturbance (Stefan C. Dombrowski)
15. Intellectual Disabilities (Stefan C. Dombrowski)
16. Other Health Impaired (Stefan C. Dombrowski)
17. Miscellaneous IDEA Categories and Section 504 (Stefan C. Dombrowski)

Part IV: Oral Reporting and Miscellaneous Topics in Psychoeducational Assessment and Report Writing

18. Assessment and Identification of Gifted Youth (Ryan M. Murphy)
19. The Use of Curriculum-Based Measures/Response to Intervention Within the Context of Psychoeducational Assessment and Report Writing (Karen L. Gischlar)
20. Culturally and Linguistically Diverse Learners (S. Kathleen Krach)
21. Oral Reporting (Stefan C. Dombrowski)
22. Special Issues in Psychoeducational Assessment and Report Writing (Stefan C. Dombrowski)

Index

Editor and Contributors

About the Editor

Stefan C. Dombrowski is Professor and Director of the School Psychology Program at Rider University in Lawrenceville, New Jersey. Dr. Dombrowski is a licensed psychologist and certified school psychologist. You may view more about Stefan C. Dombrowski at http://www.stefandombrowski.com/. Feel free to contact Dr. Dombrowski at [email protected] if you have any questions or positive remarks.

Contributors

Stefan C. Dombrowski, Department of Graduate Education, Leadership and Counseling, Rider University, Lawrenceville, NJ, USA
Karen L. Gischlar, Rider University, Lawrenceville, NJ, USA
S. Kathleen Krach, Florida State University, Tallahassee, FL, USA
Ryan M. Murphy, Rider University, Lawrenceville, NJ, USA; Council Rock School District, Newtown, PA, USA


Part I

Overview of the Psychoeducational Assessment and Report Writing Process

Chapter 1

Purpose of Psychoeducational Assessment and Report Writing

Stefan C. Dombrowski

Assessment is a defining activity of the profession of psychology and may well be the primary defining feature of the profession of school psychology. School psychologists are recognized for their training and skill in the assessment process and are considered experts in psychoeducational assessment. A key goal of any psychologist conducting an assessment is to adhere to evidence-based assessment practices. This was encouraged by Lightner Witmer when he established the first psychological clinic in the USA over a century ago (McReynolds, 1997). It was also endorsed by Shakow et al. (1947) when they proposed a scientist–practitioner model that was subsequently adopted by the psychological community at the Boulder Conference in 1949. Although the emphasis on evidence-based practice appears to be a recent phenomenon, psychology has recognized the need for evidence-based professional practice for well over 100 years. Whether psychology has adopted a truly evidence-based approach is debatable. This book hopes to move the field closer to that ideal. This chapter begins by discussing important definitional and conceptual activities involved in the assessment process, including the relationship between psychoeducational and psychological assessment. Chapter 2 offers a newly proposed evidence-based psychoeducational assessment and report writing framework within which the entirety of this book is placed.


1.1 Definition and Purpose of Psychoeducational Assessment

It is important to define psychoeducational assessment and distinguish it from psychological assessment. The two terms are sometimes used interchangeably, but there are differences. Psychoeducational assessment may be defined as a type of assessment used to understand an individual's cognitive, academic, social, emotional, behavioral, communicative, and adaptive functioning within an educational setting. Psychoeducational assessment may extend downward to the preschool period or upward to the college and adult period, although the majority of psychoeducational assessments are conducted with kindergarten through grade 12 populations.

Psychoeducational assessment addresses whether the individual is eligible for services and what those services might look like in an educational setting. It places primary emphasis upon impairment that occurs in the educational setting rather than, as is customary in clinical classification, upon impairment in environments outside the educational context. Psychoeducational assessment frequently involves an evaluation of a child's learning and academic needs. However, it can also include the evaluation of intellectual, behavioral, social, emotional, communication, and adaptive areas if those areas are suspected to adversely impact educational functioning. As a result, individuals conducting psychoeducational assessments must have a thorough understanding of what may be considered clinical conditions. These include, but are not limited to, autism spectrum disorders, schizophrenia, attention-deficit/hyperactivity disorder (ADHD), disorders of mood (e.g., anxiety, depression, bipolar disorder), disorders of conduct, and medical conditions (e.g., migraine; traumatic brain injury) that may come to bear on educational functioning. Keep in mind, however, that IDEA, not the DSM or ICD, drives classification decisions in the school setting in the USA, so the respective state special education classification categories should be referenced.

Psychoeducational assessment is distinguished from psychological assessment by its narrower scope and its focus on an individual's (typically a child's) functioning in an educational setting. Psychological assessment is broader and may address questions of custody in divorce proceedings, fitness to stand trial, suitability for a job, qualification for social security benefits, or qualification for additional support and services under a diagnosis of intellectual disability. Psychoeducational assessment deals primarily with educationally based classification and services. Although psychoeducational evaluations may also be conducted in university and private clinic settings, the vast majority are conducted with school-aged (K to 12) populations. Within the USA, questions typically addressed include whether a child qualifies for additional support under a specific IDEA classification and what school-based accommodations and services are appropriate. The focus is on how the suspected disability impacts the child's educational functioning within a school setting and what recommendations and accommodations are appropriate to support that functioning.

1.2 Who Conducts Psychoeducational Assessments?

Although precise data are unavailable, it is likely safe to assume that psychologists working in, or contracted by, the schools conduct the vast majority of psychoeducational evaluations. These psychologists (i.e., master's, Ed.S., or doctoral level) are employed by the school district and, in the USA, most commonly receive their school psychologist certification through a state department of education. The school psychologist may also be a licensed psychologist who can "hang up a shingle" and work in private practice. However, in most US states, with few exceptions (e.g., Wyoming), only certified school psychologists may be employed by a school district for the purpose of completing a psychoeducational assessment.

Licensed psychologists working in private practice may also be involved in the completion of psychoeducational assessment reports. Many times, parents seeking answers to questions about their child's functioning may wish to obtain an outside opinion. In these instances, and for a fee, the outside psychologist may be able to devote the time and energy to completing a thorough evaluation. Of course, the parent should be cautioned that the outside psychological/psychoeducational evaluation must be considered, but need not be accepted, by the receiving school district. The outside psychoeducational evaluation may provide valuable information, but it sometimes lacks alignment with the customs and nuances of psychoeducational assessments written by psychologists in the schools. This may limit the utility of such evaluations for classification and educational planning within the school. For graduate students in clinical child psychology and licensed psychologists who may wish to pursue a private psychoeducational assessment practice, this book will be extremely useful.

At times, a parent or legal guardian may wish to challenge the results of the school-based evaluation. When this occurs, the legal guardian is permitted to obtain an Independent Educational Evaluation (IEE). An IEE is generally high stakes and anxiety-evoking because it could involve a due process hearing or litigation. This book will serve as a resource regarding how to properly conduct an evaluation for this purpose.

1.3 Psychoeducational Versus Psychological Assessment and Report Writing

Psychoeducational assessment and report writing has areas of overlap with, and areas of distinction from, psychological assessment and report writing. Those areas of overlap and distinctiveness are discussed below and depicted in Fig. 1.1.

[Fig. 1.1: Psychoeducational assessment and psychological assessment: areas of distinction and overlap]

Similarities

• Assesses most domains of functioning, including cognitive, academic, social, emotional, behavioral, and adaptive functioning.
• The report structure and format are similar.
• The approach to conceptualizing and classifying/diagnosing is data-driven and evidence-based.
• Uses many of the same norm-referenced assessment instruments.

Differences

• Psychoeducational assessment may not fully present the results of an evaluation of family functioning because of concerns over family privacy issues. A psychological assessment may evaluate and discuss family functioning more fully.
• The bulk of psychoeducational assessments occur within the kindergarten to grade 12 period. Psychological assessment spans the lifespan from infant to geriatric.
• To determine a classification, psychoeducational assessment primarily determines whether there has been an adverse impact on the child's educational functioning, whereas psychological assessment investigates whether there has been an impact on social, vocational, and relational functioning.
• Psychoeducational assessment generally stays away from personality assessment and projective measures (e.g., Rorschach). Psychological assessment investigates personality dimensions more fully and may utilize projective measures.
• Psychological assessment encompasses forensic evaluation, including child custody, child welfare, criminal cases, and other forensic matters.
• Psychological assessment usually involves a licensed psychologist. In the US school setting, psychoeducational assessment may be conducted by master's-level or educational specialist (Ed.S.) practitioners.
• Psychoeducational assessment includes observations of children in vivo (i.e., in the classroom or school setting), whereas psychological assessment generally relies on observation in the clinic setting and does not include observation in the classroom or school.
• Psychological assessment may provide interview results from a wider range of individuals (i.e., collateral contacts). Psychoeducational assessment generally gathers interview data only from students, parents, teachers, teacher's aides, and professionals who provide direct support to the child.
• In the USA, psychological assessment utilizes the DSM to make classification decisions, whereas psychoeducational assessment uses IDEA. The DSM is used in Canada for both psychological and psychoeducational assessment, whereas the International Classification of Diseases (ICD) is used in Europe and other parts of the world.

1.4 Feedback Conference

Students learning how to furnish feedback to parents/legal guardians regarding a psychoeducational assessment are best served by a structured framework, and the provision of feedback is a skill that improves with practice. The purpose of the feedback conference is to convey the report's contents and conclusion in as clear and concise a manner as possible. As close to the beginning of the conference as possible, it is important to furnish a summary of your classification conclusion (i.e., whether the child was found eligible for special education support). Otherwise, the parent or caregiver may anxiously anticipate this discussion and may not hear anything that is discussed until you present your classification/diagnosis.

The assumption that the process will unfold as planned is naïve. The oral feedback session will generally unfold smoothly, but there are times when it will not. For instance, consider an oral feedback session, preceding an IEP meeting, that did not go as planned. Sometimes parents will appear unexpectedly with a special education advocate who will raise questions about the report and its conclusions. (This should not be misconstrued to suggest that the presence of a special education advocate automatically makes the meeting contentious.) This may feel unnerving for even the most veteran psychologist, but it need not be if the psychologist assumes a collaborative approach. To assist with the provision of oral feedback, the chapter on oral reporting will offer guidance for various feedback scenarios.

1.5 Conclusion

The practice of psychoeducational assessment and report writing is complex, drawing upon multiple skill sets to arrive at a quality written report. The process serves the purpose of understanding a child's specific intellectual, academic, behavioral, and social-emotional needs. Specific details on the practice and process of psychoeducational assessment and report writing will be discussed throughout this book, but first the process must be cast within an evidence-based framework. That is the subject of the next chapter.

References

McReynolds, P. (1997). Lightner Witmer: His life and times. Washington, DC: American Psychological Association.

Shakow, D., Hilgard, E. R., Kelly, E. L., Luckey, B., Sanford, R. N., & Shaffer, L. F. (1947). Recommended graduate training program in clinical psychology. American Psychologist, 2, 539–558.

Chapter 2

A Newly Proposed Framework and a Clarion Call to Improve Practice

Stefan C. Dombrowski

2.1 Introduction and Historical Overview

Psychologists have recognized the need for an evidence-based assessment approach for well over 100 years. However, the clinical and school psychological community has faced many obstacles in its pursuit of a truly evidence-based approach. This is likely related to the scientific maturity of the discipline on the one hand and the interplay of science, politics, and commerce on the other. Simply put, assessment is big business and politically charged. IDEA is a taxonomy ultimately designed by politicians (with the input of experts and advocacy groups) that has been problematic, if not lamentable in some cases, for decades (e.g., emotional disturbance and specific learning disability classification). Additionally, those who create our instruments and some interpretive approaches have a vested financial interest in the procedures they create, which poses a conflict of interest. I am not suggesting that there is a deliberate attempt to promote practices that lack an evidentiary basis. I am merely stating that the promotion of, and hesitancy to relinquish, non-evidence-based practice becomes more probable when there is a financial conflict of interest.

Of course, one has to be cautious about indicting the commercial side of assessment. Much human well-being has emerged from the creation of assessment instruments and procedures, so we should continue to look with optimism, openness, and yet skepticism toward new advancements in our assessment and interpretive technology. Ironically enough, many of the practices of yesteryear that were deemed by the scientific community as evidence based at the time of their creation are now found to lack such an evidentiary foundation. Thus, it is not just the commercial literature that needs continuous scrutiny but also the scientific community. It is this openness, yet skepticism, toward new ideas that sits at the heart of scientific thinking and evidence-based professional practice.


There is an expanding literature base that critiques selected assessment and interpretive practices. This literature has been available for decades yet perhaps remains under the radar of most psychologists, contributing to the perpetuation of non-empirically based practice. The lack of complete understanding, if not regard, of the empirical assessment literature is certainly a roadblock to a universally adopted evidence-based approach to psychoeducational assessment and report writing. One of the great scientific sins against humanity is to know enough about a subject to think you are right but not enough to know when you are wrong. Within this chapter, I propose an evidence-based psychoeducational assessment and report writing framework that establishes the platform for this book in its entirety. This framework also offers a springboard from which we may view our assessment practices more objectively, attempting to eliminate the numerous biases that obscure our scientific thinking and obfuscate our clinical judgment.

2.2 A Renewal of a Commitment

In the early nineties, the field of clinical psychology renewed its commitment to the scientist–practitioner model under the new label of evidence-based professional practice. At approximately the same time, school psychology endorsed an evidence-based approach to practice through its advancement of functional academic and behavioral assessment, in which assessment was directly linked to intervention, and intervention could subsequently be monitored for its effectiveness (Burns, Riley-Tillman, & Rathvon, 2017; Shinn & Walker, 2010). The endorsement of praxis guided by science has culminated in two distinct but intertwined practices that fall under the umbrella of evidence-based professional practice: (a) evidence-based assessment [EBA] and (b) evidence-based treatment [EBT]. The field of psychology has an extensive literature on EBT. There are numerous guides to "treatments that work" (e.g., Nathan & Gorman, 2015) within clinical psychology, and manuals for evidence-based interventions in school psychology (e.g., Burns et al., 2017). On the other hand, the literature base on the broader topic of evidence-based assessment is less mature and even less well-defined. Although there has been considerable discussion of a framework for instrument selection (i.e., evidence-based testing) within the context of assessment (see Hunsley & Mash, 2007; Youngstrom & Van Meter, 2016), considerably less emphasis has been placed on the overall clinical utility of the psychological/psychoeducational assessment process.

This chapter endeavors to expand the discussion of EBA beyond its predominant emphasis on evidence-based testing/instrument selection and proposes a framework for researchers and practitioners alike. In general, EBA is an approach to assessment that uses scientific decision-making, critical thinking, clinical judgment, and the latest scientific research to guide the constructs to be evaluated, the methods and measures to be used, and how one should integrate the information that is collected to arrive at a diagnostic decision that guides a treatment plan that can subsequently be progress monitored. The proposed framework offers four key dimensions (or pillars) of the evidence-based psychoeducational assessment and report writing process: (1) scientific decision-making and critical thinking; (2) evidence-based instrument selection, metrics of interpretability evaluation, and diagnostic utility consideration; (3) expert clinical judgment; and (4) a well-written and aesthetically appealing report that is properly conveyed to stakeholders. This framework is depicted in Fig. 2.1, with a more elaborate discussion of each pillar forthcoming.

[Fig. 2.1: The four pillars of evidence-based psychoeducational assessment and report writing]

2.3 The First Pillar of Evidence-Based Psychoeducational Assessment: Scientific Decision-Making and Critical Thinking

The field of psychology, and school psychology more specifically, has been accused of using less than scientifically sound practices in the area of assessment (Lilienfeld, Lynn, Namy, & Woolf, 2011). Researchers in the fields of school psychology (e.g., Bradley-Johnson & Dean, 2000; Canivez, 2019; Hoagwood & Johnson, 2003; Hunter, 2003; Kratochwill, 2006, 2007; Kratochwill & Shernoff, 2003; Kratochwill & Stoiber, 2000; Lilienfeld et al., 2011; Miller & Nickerson, 2006; Stoiber & Kratochwill, 2000; Walker, 2004) and clinical psychology (e.g., Lilienfeld, Lynn, & Lohr, 2003) have called for increased attention to evidence-based practices. Lilienfeld, Ammirati, and David (2012) discussed what they contend is a scientist–practitioner gap in school psychology. Notably, this gap manifests when there is a disparity between the evidence base regarding an assessment or treatment approach and what psychologists actually do in their everyday clinical practice. Consequently, both research and practicing psychologists are encouraged to use a more scientifically guided approach to thinking about their assessment practices. What does this mean? What does this look like? The late astronomer and science communicator Carl Sagan (1995) defined scientific thinking as follows:

Science involves a seemingly self-contradictory mix of attitudes. On the one hand, it requires an almost complete openness to all ideas, no matter how bizarre and weird they sound, a propensity to wonder…But at the same time, science requires the most vigorous and uncompromising skepticism, because the vast majority of ideas are simply wrong, and the only way you can distinguish the right from the wrong, the wheat from the chaff, is by critical experiment and analysis. Too much openness and you accept every notion, idea, and hypothesis—which is tantamount to knowing nothing. Too much skepticism—especially rejection of new ideas before they are adequately tested—and you're not only unpleasantly grumpy, but also closed to the advance of science. A judicious mix is what we need. (p. 30)

As Lilienfeld et al. note, practitioners can embrace the scientist side of the scientist–practitioner equation or they can practice non-scientifically. Practitioners do not need to conduct research to be scientists. Rather, they can bring rigorous scientific thinking to the assessment of children's social, emotional, behavioral, academic, and cognitive concerns. Practitioners can be skeptical consumers of the research literature in school and child psychology, attempting to ground their practices in the best available research evidence. Reynolds (2011) notes:

It is not enough to read the literature or to attend in-service or continuing education seminars. We must read and listen carefully. Just because a paper is published in a peer-reviewed journal does not mean the science is accurate or necessarily strong. (p. 5)

Lilienfeld et al. (2012) explained that the concept of the scientific practitioner is not an oxymoron and suggested that school psychologists who do not engage in research can act as scientists in the clinical setting. All psychologists, whether researchers or practitioners, need to develop and maintain a skeptical skill set that permits them to discern evidence-based practices from those that are non-evidence-based, including those that may be characterized as pseudoscientific. The fields of school psychology, clinical psychology, and education may well contain popular, but ill-supported, approaches, including many in the area of assessment.


How do we distinguish practices that are evidence-based from those that are not? One solution is to embrace scientific decision-making and critical thinking. Human beings are prone to errors in thinking (Kahneman, 2011), even those who have completed many years of education and hold advanced degrees in psychology and other disciplines. We are inclined to trust our intuition (i.e., our "gut") and observations, and we tend to downplay scientific findings. Genuine scientific thinking involves knowing enough to recognize evidence and to change one's mind in the presence of that evidence.

2.3.1 Errors in Decision-Making

Errors in decision-making are not limited to the field of psychology. They are found in education, business, and just about every field, including medicine (see Kahneman, 2011 for an in-depth discussion). In fact, throughout history there are accounts of physicians administering treatments based upon their clinical expertise (e.g., bloodletting; the lobotomy¹) that, at the time, were deemed effective but are now recognized as little more than placebo or, worse, harmful (Grove & Meehl, 1996). Lilienfeld and Landfield (2008) suggest that pseudoscientific practices "possess the superficial appearance of science but lack substance. In other words, they are imposters of science, although their proponents are rarely deliberate or conscious charlatans" (p. 1216). Errors in decision-making occur when we adopt practices that seem to be grounded in science but are not. For instance, an intuition-based assessment practice that disregards the empirical evidence in favor of an ipsative approach seeking unique intraindividual patterns may not offer any degree of incremental validity. In fact, it may even harm the children we serve by excluding them from needed services or leading to inaccurate diagnoses and the provision of inappropriate services. Thus, scientific decision-making is considered a safeguard against errors in thinking that might lead to the adoption of pseudoscientific assessment approaches (Lilienfeld et al., 2012). Lilienfeld et al. (2012), like many others (e.g., Etzioni & Kahneman, 2012; Sloman, 2012; Stein, 2013; Tversky & Kahneman, 1973), noted that errors in decision-making are cognitive distortions, like a visual illusion or a specious correlation, that may be adaptive by helping us construct meaning out of patterns to understand the world, even when meaning is not there or our attributions are incorrect. By understanding these cognitive errors and biases, we will be in a better position to critically evaluate phenomena and make more informed, scientifically guided decisions.

¹In 1949, Egas Moniz was awarded the Nobel Prize for his development of the lobotomy to treat schizophrenia. Starting in the 1960s, psychotropic medications were used to treat schizophrenia, supplanting the psychosurgical procedure as the accepted course of treatment for the condition.


Table 2.1 Widespread cognitive errors and biases relevant to school psychologists

Naïve realism: Belief that the world is precisely as we see it
Confirmation bias: Tendency to seek out evidence consistent with our beliefs, and deny, dismiss, or distort evidence that is not
Premature closure: Coming to a premature conclusion before adequate evidence is available
Belief perseverance: Tendency to cling to beliefs despite repeated contradictory evidence
Illusory correlation: Tendency to perceive statistical associations that are objectively absent
Hindsight bias: Error of perceiving events as more predictable after they have occurred
Groupthink: Preoccupation with group unanimity that impedes critical evaluation of an issue
Overreliance on heuristics: Tendency to place too much weight on mental shortcuts and rules of thumb
Availability heuristic: Judging the probability of an event by the ease with which it comes to mind
Anchoring heuristic: Tendency to be unduly influenced by initial information
Affect heuristic: Judging the validity of an idea by the emotional reaction it elicits in us
Representativeness heuristic: Judging the probability of an event by its similarity to a prototype
Base rate neglect: Neglecting or ignoring the prevalence of a characteristic in the population
Bias blind spot: Tendency to see ourselves as immune to biases to which others are prone
Unpacking: Failure to elicit all relevant diagnostic information about an individual
Aggregate bias: Assumption that group data are irrelevant to a given individual or client
Search satisfying: Tendency to end one's search for other diagnostic possibilities once one has arrived at an initial diagnostic conclusion
Diagnostic overshadowing: Tendency for a dramatic, salient diagnosis (e.g., major depression) to lead clinicians to overlook less obvious diagnoses (e.g., learning disability)
Diagnosis momentum: Tendency to uncritically accept previous diagnoses of the same individual
Psych out error: Neglecting the possibility that a behavioral problem is medical, not psychological, in origin
Pathology bias: Tendency to overpathologize relatively normative behavior

Reprinted from Lilienfeld, S. O., Ammirati, R., & David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50(1), 7–36. http://dx.doi.org/10.1016/j.jsp.2011.09.006, with permission from Elsevier

Table 2.1 lists some of the more common cognitive errors and biases encountered by school psychologists. Each of these cognitive errors is described briefly below.

2.3.1.1 Naïve Realism

This is the erroneous assumption that the world is exactly as we see it, leading us to base our decisions on straightforward observation. For instance, the belief that the earth was the center of the universe is a form of naïve realism. For centuries, scientists studying the planets and stars concluded that the earth was stationary and the planets and stars moved around it. Until the Polish mathematician and astronomer Nicolaus Copernicus detailed a heliocentric conception of the universe at the Jagiellonian University in Krakow, Poland, most human beings endorsed a geocentric conception of the universe. Even with the production of clear scientific evidence substantiating a heliocentric conception of our solar system, Copernicus' position was dangerous to hold because it confronted the religious dogma of the era. To counter naïve realism, Lilienfeld et al. (2012) noted that one must use scientific reasoning, rather than one's gut or hunches, as a basis for decision-making. An example in psychology is the purported autism–vaccination link, where a child at the age of 18 months starts to regress in behavior following administration of the MMR vaccine, which typically occurs around that time period in the USA. Some may intuitively conclude that the vaccine is responsible; however, the behavioral regression is likely related to the natural progression of autism spectrum disorder rather than the vaccine. Thus, a parent might understandably, yet mistakenly, conclude that the vaccine was the "cause" of their child's autism. This conclusion would be erroneous and an example of naïve realism (as well as illusory correlation).

2.3.1.2 Confirmation Bias

This refers to the tendency to seek out evidence consistent with our hypotheses and to dismiss or deny evidence that is inconsistent with them (Tavris & Aronson, 2007). Lilienfeld et al. (2011) stated that confirmation bias can be summed up in five words: "seek and ye shall find." A few examples in school psychology are particularly salient. A psychologist working in the schools may have received information from an influential and well-respected teacher that a child is a behavior problem. Instead of approaching the evaluation independently, the psychologist (perhaps unconsciously) attends to evidence that supports this preconceived perception of the child. A second example may involve an investigation of the factor structure of an IQ test that was designed to comport with a particular theory of cognitive ability (e.g., CHC theory). Instead of initially utilizing factor analytic best practice, the test author decides to rely heavily upon confirmatory factor analysis and embark (potentially) on a model-fitting statistical fishing expedition, tweaking elements of the model until it fits a preconceived theoretical structure without fully testing alternative, competing models. A safeguard against confirmation bias in factor analytic model fitting would be to specify a priori which models are to be investigated and perhaps even use exploratory factor analysis as an initial means of identifying competing a priori models to be tested with confirmatory factor analysis. An empirical example of this approach is furnished by Dombrowski, McGill, and Canivez (2017, 2018), who investigated the factor structure of the WJ IV Cognitive using both EFA and CFA and identified an alternative theoretical/factor structure for the cognitive ability instrument using factor analytic practices recommended in the psychometric literature.
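The logic of this safeguard can be sketched in code. The following is a minimal illustration only: the data are simulated, the factor counts are arbitrary, and the open-source factor_analyzer Python package is assumed; it does not reproduce any particular study's methodology.

```python
# A minimal sketch of an EFA safeguard against confirmatory "fishing
# expeditions": specify competing factor counts in advance, fit each,
# and inspect the solutions. All data here are simulated for illustration.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                 # two simulated factors
pattern = np.kron(np.eye(2), np.ones((5, 1)))      # 10 subtests, 5 per factor
scores = latent @ pattern.T + rng.normal(scale=0.7, size=(500, 10))

# Compare a priori two- and three-factor solutions rather than tweaking
# a single model post hoc until it matches a preferred theory.
for n_factors in (2, 3):
    efa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    efa.fit(scores)
    print(f"{n_factors}-factor loadings:\n{efa.loadings_.round(2)}\n")
```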

2.3.1.3 Premature Closure

This reflects the tendency to arrive at a diagnostic conclusion before sufficient data are available or fully analyzed. For instance, a psychologist may observe a second-grade child display a degree of social awkwardness during a presentation to the class and conclude that the child has significant social skills deficits, perhaps even autism spectrum disorder. Or, a caregiver may take a child to a pediatrician who observes the child to be active and fidgety during the annual visit and concludes that the child has ADHD without even considering data from teachers or the child's report card. Both conclusions may be premature without additional supporting data, because the child's behavior during a presentation or an office visit may be quite different from the child's behavior at other times and in other environments.

2.3.1.4 Belief Perseverance

This is the tendency to cling to one's beliefs despite the availability of contrary evidence. For instance, a psychologist may have classified a child with emotional disturbance (ED) in third grade. It turns out that acute adjustment-based factors or medical issues contributed to the child's emotional difficulties. Nonetheless, the psychologist retains the notion that the child should receive an ED classification because the prior evidence was so convincing that the present evidence is overlooked. It is important to abandon prior notions when new evidence is uncovered and suggests the need for reconsideration.

2.3.1.5 Illusory Correlation

Human beings attempt to make connections and find meaning in events. Doing so helps us cope with what might otherwise be a chaotic world. Nonetheless, it sometimes leads us to make associations and connections that are erroneous (Chapman & Chapman, 1967). For instance, there is an abundance of literature discussing how cross-battery assessment (XBA) may be used to differentiate LD from non-LD individuals. Often, the hypotheses undergirding the presumed patterns appear theoretically plausible, particularly when considering the broader cognitive psychology literature. However, the relationship is a specious one: subsequent scientifically guided diagnostic utility research suggests that XBA discerns LD at the level of a coin flip (McGill, 2018; McGill, Dombrowski, & Canivez, 2018; Watkins, 2000).

2.3.1.6 Hindsight Bias

This is the tendency for information received about a child to make a diagnostic conclusion seem more predictable, even prescient, than it actually was. For instance, a psychologist may learn that a child struggles with reading comprehension and reading fluency and had a prior classification of specific learning disability (i.e., reading disability or dyslexia). The practitioner subsequently evaluates the child and arrives at a classification of reading disability in a much more confident fashion than the psychologist would have had this information remained unavailable. Knowing ahead of time that the child had a prior classification of learning disability may have influenced the psychologist's diagnostic decision-making. The same scenario may occur when visiting a physician for a second opinion. If the second physician knows of the first physician's diagnosis, then this may influence the decision-making behind the second opinion. Thus, hindsight bias may contribute not only to overconfidence but also to biased decision-making, wherein second opinions are spuriously corroborated by first opinions.

2.3.1.7 Groupthink

This is the tendency to arrive at uniformity in judgment in an attempt to conform to a group and not rock the boat. The term derives from the Kennedy era, following the decision to invade Cuba (the Bay of Pigs invasion) that attained consensus but was not well thought out. School psychologists are part of a multi-disciplinary team and may be prone to influence from well-respected or powerful individuals on the team. As an example, there is a noted phenomenon of teacher squeak wherein a well-regarded teacher may complain about a child and influence diagnostic decision-making (Gutkin & Nemeth, 1997). There is no doubt that the insights gleaned from the perspectives of many individuals are often superior to the insight offered by a single individual, but one must be cautious about completely deferring to the group decision-making process in the event that its decision-making is not completely thought out. An antidote to groupthink is to ask members of the group to play devil's advocate and intentionally consider countering perspectives and differing diagnostic and treatment options.

2.3.1.8 Overreliance on Heuristics

Heuristics are quick, efficient rules of thumb that help us streamline our processing of information, permitting us to make more expedient decisions. As an example, upon learning that someone else visited a particular expert, we might assume that since the expert was acceptable for that individual, the expert should be acceptable for us, and we base our decision to see that expert on this basis alone. This is known as the recognition heuristic.


Lilienfeld et al. (2012) discussed four other common heuristics used by school psychologists. The first is the availability heuristic. This reflects the perception that a condition might be more likely or prevalent than it is simply because instances come readily to mind. For instance, let us assume that we witness at our school in a given year a number of cases that lead to a classification of autism spectrum disorder. We might be tempted to conclude that autism is on the rise because of our observations, but this would be inaccurate if autism spectrum prevalence has not changed at all.

The second is the anchoring heuristic, which is the tendency to be unduly influenced by initial information or impressions. For instance, we might receive a referral for two children, one of whom we are told has ADHD. We mistakenly mix up the information about the two children, correcting course later on, but remain unduly influenced by the initial misperception and more likely to offer a classification of Other Health Impaired. Note that this may also be related to premature closure, belief perseverance, and confirmation bias.

Third, the affect heuristic suggests that we use our emotions or intuition as guides to decision-making. If we react positively to information, then we are more likely to believe it. If we react negatively to information, then we are likely to reject it. Thus, our decision-making may, unfortunately, be guided by our emotions. For instance, let us assume a psychologist is in the market for an assessment instrument that evaluates children through a multicultural lens. The assessment instrument has a catchy title and is well packaged and marketed as an evidence-based approach to multicultural assessment. This grabs the attention of the psychologist, who subsequently spends a good deal of time learning how to administer and interpret the instrument. The psychologist feels good that he is undertaking best practices and subsequently overlooks the lack of evidence in the instrument's technical manual. When the psychologist learns of critical reviews, the clinician becomes angry or upset, defends the use of the instrument as evidence-based, and uses the resulting emotional response to conclude that the evidence was erroneous (see Benson, Beaujean, McGill, & Dombrowski, 2018, 2019; Schultz & Stephens, 2019 for a critique of an assessment practice followed by an emotional, spirited defense of a position).

Finally, the representativeness heuristic suggests that we have a prototype in mind to which we compare other individuals. For instance, assume our initial experience with the classification of a child with autism spectrum disorder entailed a child who was fascinated with superheroes and who was socially awkward. When we encounter another child with a fascination with superheroes, we might be inclined to weight that interest heavily as a hallmark of ASD, pushing us in the direction of an ASD classification although other aspects may not meet criteria. Algorithms and heuristics are an important part of clinical decision-making, but our initial hunches should be questioned and disconfirming evidence sought prior to arriving at our ultimate decision.

2.3.1.9 Base Rate Neglect

Base rate neglect refers to the error of underemphasizing the prevalence (or base rate) of a condition when evaluating the likelihood of an occurrence (Glutting, McDermott, Watkins, Kush, & Konold, 1997). The higher the base rate, the more likely the condition; conversely, the lower the base rate, the less likely the condition. This may play out in several ways. Assume we have a good working knowledge of what childhood schizophrenia might look like (Dombrowski, Gischlar, & Mrazik, 2011). We note that a child matches the profile we have in mind for the condition. However, we fail to regard the base rate of the condition (extremely low) and may erroneously conclude that the child has schizophrenia based upon our own subjective experience (see the representativeness heuristic just discussed) instead of a far more prevalent condition (e.g., autism spectrum disorder).
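The consequences of ignoring base rates can be made concrete with Bayes' theorem. The numbers below are invented purely for illustration: even a profile indicator with strong sensitivity and specificity yields mostly false positives when the condition is rare.

```python
# A minimal sketch of why base rates matter; all values are hypothetical.
base_rate = 0.0001      # assumed prevalence of the rare condition
sensitivity = 0.90      # P(profile match | condition present)
false_pos_rate = 0.05   # P(profile match | condition absent)

# Bayes' theorem: P(condition | match)
p_match = sensitivity * base_rate + false_pos_rate * (1 - base_rate)
ppv = (sensitivity * base_rate) / p_match
print(f"P(condition | matching profile) = {ppv:.4f}")  # ~0.0018, i.e., <0.2%
```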

2.3.1.10 Bias Blind Spot

Most of us would concur that human beings are inherently biased in their perception and thinking. However, we have a tendency to reject that notion when it is applied to ourselves. We may be convinced that we are open-minded and conscious of bias, suggesting that such cognitive errors apply only to others and not to us. This error in thinking is known as the bias blind spot. An example may be a psychologist who researches, and advocates for, social justice. This psychologist may believe that others are less informed about LGBTQ issues but that he himself is enlightened about the LGBTQ community and therefore beyond bias.

2.3.1.11 Unpacking

This is the failure to elicit or consider all relevant information before arriving at a diagnostic conclusion. It is for this reason that multiple methods of assessment are utilized. An example might be an experienced clinician who feels confident in his or her diagnostic capability. Instead of administering a behavior rating scale to a teacher, the clinician interviews the teacher but fails to query about conditions of a more internalizing nature. The psychologist then arrives at a conclusion that overlooks conditions such as anxiety or depression in the child.

2.3.1.12 Aggregate Bias

This is the assumption that group data are irrelevant to the individual. It is often the argument used by proponents of cognitive profile analytic procedures. They argue that we should be mindful of the profile analytic data but should then regard each child as an individual, using clinically astute decision-making and detective work that may yield useful clinical information. Unfortunately, the field has attempted to use cognitive profiles to make clinical decisions about children for nearly a century without much success.

2.3.1.13 Search Satisfying

This is the tendency to end one's search for diagnostic conclusions once one has obtained sufficient information to offer a diagnosis. This is inappropriate because there may be other conditions faced by the child that require investigation. For example, let us say we evaluate a child for suspected autism spectrum disorder (ASD). We ultimately decide that the child qualifies and then proffer that diagnosis alone. We should have considered other classifications, including intellectual disability, as the child's cognitive ability score was quite low. In this case, our classification would be incomplete because we ended our search for any other classification once we arrived at the classification of ASD.

2.3.1.14 Diagnostic Overshadowing

This is the tendency for dramatic diagnoses (e.g., emotional disturbance) to obscure less dramatic ones (e.g., learning disabilities). In this example, a psychologist may miss the child's learning disability and focus exclusively on the ED classification and its treatment.

2.3.1.15 Diagnosis Momentum

This is the tendency to uncritically accept a prior diagnosis. This may well occur when a psychologist receives an outside classification (e.g., ADHD) from an esteemed institution and merely accepts the diagnosis without independently verifying the classification.

2.3.1.16 Psych Out Error

This pertains to neglecting to consider a medical/biological origin for a particular condition. Psychologists see the world through a psychological lens. However, it is important to conduct a thorough developmental history (see the CDQ; Dombrowski, 2014a, offered free to purchasers of this book) that inquires about medical history prior to arriving at a diagnostic conclusion. Given the genetic linkage of many psychiatric conditions, it is also important to consider family psychiatric history (though we may still wish to protect family privacy within our psychoeducational report) as well as a medical/biological explanation for a condition.

2.3.1.17 Pathology Bias

This is the tendency to pathologize relatively normal behavior. Keep in mind that for an individual to receive a classification, the condition must impair educational performance (IDEA) or social, vocational, or relational performance (DSM or ICD). Thus, be cautious about locating a psychopathological condition within every child, but be mindful that the prevalence of mental health conditions is about 1 in 4 according to World Health Organization estimates. There may also be a compounding effect of various cognitive errors and distortions, leading to even more inaccurate diagnostic decision-making. For instance, a teacher may refer a child for "ADHD." The psychologist happens to observe and test the child on a day the child is to receive an eagerly anticipated video game. The child is unusually active during the observation and the testing session, leading the psychologist to corroborate the teacher's impression that the child struggles with overactivity and loss of focus.

Conclusion In addition to recognizing cognitive distortions and biases in our scientific decision-making, it is important to recognize the signs of pseudoscience within practices that are used or promoted in the field. The warning signs of pseudoscience are discussed next.

2.3.2 Warning Signs of Pseudoscience

It is insufficient to simply recognize that cognitive errors cloud our capacity to embrace scientifically guided clinical decision-making. It is also necessary for clinicians to go one step further and recognize the warning signs of pseudoscience. To assist with this process, I have drawn upon, and augmented, the work of several researchers (e.g., Lee & Hunsley, 2015; Lilienfeld et al., 2012; Meichenbaum & Lilienfeld, 2018) who have written about the warning signs of pseudoscience. These warning signs should be considered by all school psychologists when evaluating claims made at workshops, in presentations at professional conferences, on the Internet in blogs and other forums, and even within the peer-reviewed literature. In the realm of assessment psychology, there is no oversight agency, like the Food and Drug Administration (FDA), to protect against the promotion of practices that have been debunked in the scientific literature. Accordingly, it is important to recognize the following warning signs to guard against the utilization of practices that lack scientific support (Table 2.2). Each warning sign will be useful to psychologists in a position to evaluate an assessment approach and is elaborated upon in the forthcoming discussion, with an example furnished under each.


Table 2.2 Warning signs of pseudoscience

1. Lack of falsifiability and overuse of ad hoc hypotheses
2. Lack of self-correction
3. Emphasis on confirmation rather than refutation
4. Evasion of peer review
5. Overreliance on anecdotal evidence
6. Lack of connection with basic or applied research
7. Extraordinary claims
8. Ad antequitem fallacy
9. Use of hypertechnical language (i.e., neurobabble or psychobabble)
10. Absence of boundary conditions
11. Reversed burden of proof, in which proponents of an approach demand that critics refute claims of effectiveness
12. Extensive appeals to authority or gurus (i.e., eminence-based assessment)
13. Heavy reliance on endorsements from presumed experts
14. Tendency of advocates or followers of an assessment practice to insulate themselves from criticism or be dismissive of criticism
15. Limited or non-existent reporting of contradictory findings

Source Adapted from Lilienfeld, S. O., Ammirati, R., & David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50, 7–36; Lee, C. M., & Hunsley, J. (2015). Evidence-based practice: Separating science from pseudoscience. The Canadian Journal of Psychiatry, 60(12), 534–540. https://doi.org/10.1177/070674371506001203; and Meichenbaum, D., & Lilienfeld, S. O. (2018). How to spot hype in the field of psychotherapy: A 19-item checklist. Professional Psychology: Research and Practice, 49, 22–30. https://doi.org/10.1037/pro0000172

2.3.2.1 Lack of Falsifiability and Overuse of Ad Hoc Hypotheses

For a theory or argument to have a scientific basis, it needs to be falsifiable (Popper, 1959). The ability to reject a theory based on evidence is a key characteristic of science, and the lack of capacity for falsification is one of the hallmarks of pseudoscience because the claim is insulated against countering perspectives. One approach that insulates a theory or assessment practice from rejection is the use of ad hoc hypotheses rendered after the results of a study are known. In this way, researchers and clinicians can explain away negative findings that challenge their hypotheses, essentially creating a loophole or an excuse to dismiss research findings.

Example: An example may be the practice of cognitive subtest interpretation. Although this practice has been empirically debunked, resources in assessment psychology continue to promote and discuss it (e.g., Sattler, 2018). These resources often concede that subtest interpretation may not hold at the group level but suggest instead that each individual is distinct and unique, so patterns of different scores are meaningful. Nonetheless, more than three decades of research has not supported this perspective and has even encouraged test users to "just say no" to subtest-level interpretation (McDermott, Fantuzzo, & Glutting, 1990; McGill et al., 2018; Watkins, 2000, 2003; Watkins, Glutting, & Youngstrom, 2005).

2.3.2.2 Lack of Self-Correction

Lilienfeld et al. (2012) discussed this hallmark of pseudoscience as the lack of evolution of a psychological practice. In other words, the practice remains stagnant over time and is resistant to change; it does not evolve in response to new scientific knowledge.

Example: Lilienfeld et al. offered as an example the notion that one can improve student learning through the use of different learning styles (i.e., visual, auditory, kinesthetic). This pedagogical approach emerged in the 1940s and persists to the present despite contrary research evidence and the concomitant suggestion that teachers focus effort on more empirically supported approaches such as direct instruction. An example for school psychology may well be the continued use of the IQ-achievement discrepancy model in selected school districts throughout the country. The discrepancy approach has been debunked empirically for decades (Aaron, 2016), yet some in the field continue using it. Of course, this may not be the best example, as the IQ-achievement discrepancy is still permitted in special education law, and some may argue that the law simply offers administrative guidelines for how to allocate resources.

2.3.2.3 Emphasis on Confirmation

Lilienfeld et al. defined this as the tendency to seek out evidence to support one's findings and to overlook or ignore evidence contradictory to those desired findings. This is also known as confirmation bias. Science attempts to safeguard against confirmation bias through the use of randomized assignment of individuals (i.e., randomized clinical trials) and large samples that are representative of the population under study. Pseudoscience may ignore this approach and instead emphasize any study that tends to support a desired outcome.

Example: Numerous research studies and IQ test technical manuals use confirmatory factor analysis (CFA) as a means to investigate the structure of an IQ test. There is absolutely nothing wrong with this methodology, and it can be a powerful tool when used appropriately. However, it has potential for misuse. There are examples of studies where researchers used this technique to investigate an IQ test's theoretical structure yet appear to go on a statistical fit-index fishing expedition in search of a final, validated structure that substantiates a predefined one. Although I am not making the case that selected researchers or test publishers are intentionally setting out to "prove" a preferred structure, I am contending that safeguards are needed when using CFA, as it may be prone to confirmation bias. For instance, consider the WISC-V. The technical manual of the WISC-V used flawed standards to arrive at the final, adopted five-factor (verbal, visual-spatial, fluid reasoning, processing speed, and working memory) structure. However, subsequent independent factor analyses (e.g., Canivez, Dombrowski, & Watkins, 2018; Dombrowski, Canivez, & Watkins, 2018) did not support the technical manual's five-factor structure but rather a four-factor structure akin to that of the WISC-IV. Confirmatory factor analysis may well be a powerful tool when used properly, but when used without regard for established scientific practices it can be subject to confirmation bias, where a researcher, likely unintentionally, locates findings that support a preferred outcome.
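One such safeguard, fixing the competing models before estimation and comparing them on identical data, can be sketched as follows. This is an illustrative sketch only: the subtest names, model strings, and simulated data are hypothetical, and the open-source semopy Python package is assumed; it does not reproduce any publisher's analysis.

```python
# A minimal sketch of a priori CFA model comparison using semopy.
# Subtest names (s1..s10), model specifications, and data are hypothetical.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
g = rng.normal(size=(600, 1))  # simulated common variance
data = pd.DataFrame(g + rng.normal(size=(600, 10)),
                    columns=[f"s{i}" for i in range(1, 11)])

four_factor = """
F1 =~ s1 + s2 + s3
F2 =~ s4 + s5 + s6
F3 =~ s7 + s8
F4 =~ s9 + s10
"""
five_factor = """
F1 =~ s1 + s2
F2 =~ s3 + s4
F3 =~ s5 + s6
F4 =~ s7 + s8
F5 =~ s9 + s10
"""

# Both models are fixed BEFORE estimation; fit indices are then compared,
# rather than tweaking one model until it supports a preferred theory.
for name, desc in (("four-factor", four_factor), ("five-factor", five_factor)):
    model = semopy.Model(desc)
    model.fit(data)
    stats = semopy.calc_stats(model)
    print(name, stats[["CFI", "RMSEA", "AIC", "BIC"]], sep="\n")
```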

2.3.2.4 Avoidance of Peer Review

Another warning sign in Lilienfeld et al.'s framework relates to publishing practices that avoid the peer review process in favor of other methods of dissemination. Although not perfect, the peer review process offers some degree of gatekeeping regarding the quality of the research undertaken, with some journals considered more methodologically rigorous than others. One barometer of a journal's rigor may be its citation impact factor and its manuscript acceptance rate. It is noted, however, that some disciplines (e.g., school psychology) are notoriously less cited than others (e.g., the health sciences). The same applies to the major journals within both disciplines. For instance, the Journal of the American Medical Association has an acceptance rate of 8%, with a recent citation impact factor of 51.27 across the 2017–18 period. Comparatively, the present flagship journal in school psychology, the Journal of School Psychology, has an impact factor of 3.076 with an acceptance rate of 23% over a similar time period.

Example: Examples include assessment and interpretive approaches that are primarily disseminated through books, book chapters, Internet blogs, professional presentations, or workshops. Psychologists should be mindful that outlets such as the newsletter of the National Association of School Psychologists (NASP; e.g., the Communiqué) sometimes contain advertisements promoting assessment practices. The presence of these advertisements in the Communiqué should not be misconstrued as an endorsement by NASP that the advertised practices have an evidentiary basis. I want to clarify my stance regarding commercially promoted instruments. Much good has come from commercially available practices and products, and such products have served to advance practice in the field and improve human well-being, but we must nonetheless remain skeptical about advertisements promoting practices as evidence-based unless those practices have documented evidence in their technical manuals or have been vetted in the independent empirical literature. Why do I raise the clarion call of skepticism? Because there have been many practices since the inception of psychology (e.g., cognitive ability subtest analysis and psychiatric evaluation; overemphasis on clinical intuition) that have been promoted in trade books and endorsed by experts and that have taken, and continue to take, our field down a dead-end, non-evidence-based alley.

2.3.2.5 Overreliance on Anecdotal and Testimonial Evidence

Anecdote (i.e., a single case report, the opinion of an expert, or the testimony of consumers) is not evidence. An expert's endorsement may well attest to the quality of a product, but it should neither be construed as, nor supplant, evidence.

Example: There are numerous educational curricula disseminated at conferences and in workshops across the country with noted experts, some from prestigious institutions, endorsing the curricula. Such endorsement may well be compelling, but it is not evidence. In some cases, it may even represent a financial conflict of interest that would need to be disclosed.

2.3.2.6 Absence of Connectivity

Science is generally cumulative, with new research adding incrementally to existing knowledge in the same way that a bricklayer builds the foundation of a house one brick at a time. Pseudoscience is poorly connected to extant knowledge and frequently offers purportedly revolutionary discoveries that introduce a new paradigm. I am not discounting the possibility of scientific breakthroughs, but I would note that they are exceptionally rare and are the exception rather than the rule.

Example: One example of a supposedly revolutionary treatment with an absence of connectivity to prior treatment research was recently promoted by a company that claims to have discovered a breakthrough cure for cancer. This stance was almost unanimously rebuked by the scientific cancer research community and the American Cancer Society as irresponsible, misleading, and lacking a connection to prior cancer treatment research (see https://www.forbes.com/sites/victoriaforster/2019/01/30/an-israeli-company-claims-that-they-will-have-a-cure-for-cancer-in-a-year-dont-believe-them/#259d0c3b28a8).

2.3.2.7 Extraordinary Claims

This warning sign from Lilienfeld suggests that a hallmark of pseudoscience is the extraordinary claim. This could entail a special treatment that is a cure-all for a diagnosis, or an ability possessed by a child that is unequaled. Sagan (1995) notes that extraordinary claims require extraordinary evidence.

Example: One example of extraordinary claims involved the promotion of the Feingold diet for the treatment of ADHD. The Feingold diet was hailed as revolutionary, but subsequent research suggested that the diet did little to promote behavioral improvement (Mattes, 1983). As a second example, within the technical manual of the Woodcock-Johnson Tests of Cognitive Abilities and Achievement, the test author made the claim that the CHC model was the most empirically supported framework for understanding human abilities. This is an extraordinary claim and must be supported by evidence. For instance, how do the analyses reported in the WJ III Technical Manual compare to the 457 studies across nearly 75 years undertaken by John Carroll when developing his three-stratum theory of cognitive abilities? Although the attempt to empirically validate CHC within a single instrument across cognitive abilities and achievement is laudable, it is but a single study. Consequently, the statement that CHC theory is the most empirically valid theory of cognitive abilities appears premature. This is even more salient when one considers that the fit of CHC theory to our measures of cognitive ability has come under criticism in subsequent independent factor analytic research and in a special journal issue devoted to examining CHC theory (Canivez & Youngstrom, 2019; Geisinger, 2019; McGill & Dombrowski, 2019; Wasserman, 2019).

2.3.2.8 Ad Antequitem Fallacy

This refers to the continued use of a practice simply because it has been in existence for a long period of time. Lilienfeld et al. noted that such a practice may endure not because it has demonstrable evidence behind it but merely because it has been around for some time. Often, these practices fall prey to confirmation bias and illusory correlation.

Example: The practice of incorporating a Bender or VMI within a psychoeducational assessment battery may fall under this category. Beyond ascertaining whether a child can copy figures or has poor handwriting, the results from the Bender or VMI offer little in relation to academic progress. The same can be said for use of the sentence completion test. It might be commonly administered, but does it offer any incremental validity beyond interviews and ratings from multiple informants? Generally, I would contend the answer to this question is "no." However, one may make the case that a sentence completion task serves a purpose. First, it is easy to complete and allows the examiner an avenue to develop rapport with the examinee. Second, it can be a defense against outside experts who rely on projective devices: they cannot claim that your assessment was inadequate because it failed to include a projective device (even though this argument can be countered by the empirical evidence on the incremental validity of projectives).

2.3.2.9 Use of Hypertechnical Language

It is occasionally necessary in scientific endeavors to introduce more technical language to differentiate among concepts. However, this practice can be abused, and it is a technique sometimes used to bolster the legitimacy of a practice that otherwise lacks an empirical foundation (Lilienfeld & Landfield, 2008).

Example: In the assessment domain, one sees the use of highly technical language as a way to establish the legitimacy of an assessment practice or conclusion. This may sometimes be seen when an individual attempts to link a cognitive ability subtest or index score with a brain-based region and further associate it with SLD or some other condition. Lilienfeld et al. (2015) specifically list 50 psychological and psychiatric terms that should be avoided by the field and that would be considered hypertechnical language.

2.3.2.10 Reversed Burden of Proof

Those who promote a particular assessment or treatment method are sometimes guilty of insisting that critics disprove the approach rather than accepting the criticism or attempting to empirically establish or defend the approach in the first place. Although it is indeed easier to criticize than to propose, the burden of establishing the empirical veracity of a proposal should rest with the proposers rather than with critics.

Example: Researchers who question an SLD evaluation technique or what an IQ test measures (based upon factor analysis) may be criticized for having an anti-scientific bias. Rather than demonstrating how an instrument or assessment method has an evidentiary basis, the authors of the instrument or method reject evidence that criticizes the practice, and sometimes even resort to ad hominem attacks, including the contention that those who would dare to critique a method are behaving unethically.

2.3.2.11 Extensive Appeals to Authority or Gurus (i.e., Eminence-Based Assessment)

There is no doubt that certain individuals in child psychology garner reputations as experts. In the realm of assessment, these individuals frequently have created popular assessment instruments or interpretation approaches. Such individuals also frequently give conference presentations, keynote addresses, and paid workshops. Subsequently, they are cited as experts, and their status is used to justify a particular practice. However, one needs to be cautious about uncritically accepting such an endorsement merely because of the eminent status of the expert. Otherwise, we may be accused of engaging in eminence-based assessment, which fails to live up to scientific standards.

Example: Numerous times on listservs I have read psychologists asking questions about how to interpret a profile, with subsequent responses pointing to a text or chapter written by a well-known test author. The individual's reputation is the justification that is offered rather than the evidence base. Nonetheless, we feel good because we trust the expertise of the individual promoting the practice and say to ourselves, "If Dr. Z is promoting it, and he is a perceived leader in my field, then the practice must be valid." This approach to empirical justification fails to live up to scientific standards. If the expert has a reputation for expertise established on the basis of legitimate peer-reviewed research productivity, then we can be more confident in the expert's opinion about the empirical veracity of the instrument or procedure (provided there is no conflict of interest).

2.3.2.12 Heavy Reliance on Endorsements from Presumed Experts

This is similar to extensive appeals to authority and refers to expert endorsement of an assessment approach. It is not uncommon for advertisements for a new assessment instrument or diagnostic method to contain endorsements from those who are perceived as experts, often holding a legitimate or a titular university position. An endorsement from someone within the academy holds cachet and ostensibly confers a degree of empirical validation. Typically, someone would not offer an endorsement unless he or she believed in the product. However, be sure to check the empirical evidence rather than relying on the endorsement of an expert prior to adopting an assessment procedure.

Example: An expert endorses an assessment instrument, book, or classification algorithm, and the field subsequently suspends critical appraisal of the qualities of the instrument, book, or algorithm and adopts the procedure. This happens in many fields, not just psychology.

2.3.2.13 Extensive Use of Neurobabble or Psychobabble

This may include the use of highly technical language linking an assessment of a cognitive ability with a particular pathological outcome or neurological substrate. Intuitively, such a discussion may make sense, but in reality the linkage is spurious and based upon an illusory correlation.

Example: An expert in the field conducts a neuropsychological or neurocognitive evaluation. The expert subsequently writes a report explaining the child's struggles with reading via the neuropsychological results, citing a linkage between the child's working memory deficits (as assessed on the neuropsychological battery) and specific neurological substrates presumed to underlie reading and working memory. Unfortunately, there is limited research substantiating the use of cognitive tests to determine neurological deficits. Yet the discussion seems very convincing, as the expert cites the literature using neuroanatomical terms, points to how certain brain areas light up when an individual is engaged in a particular task, and invokes linkages in the allied field of cognitive neuroscience. The use of IQ tests for such purposes has failed to live up to evidentiary standards.

2.3.2.14 Tendency of Advocates to Insulate Themselves from Criticism

At times, psychologists may have devoted a considerable amount of time to learning a particular assessment technique that they use in their practice. As a result, they become invested in the practice and may inadvertently overlook its shortcomings, particularly as new evidence emerges. Indeed, this may even represent an existential crisis! What am I to do, asks the psychologist, if I do not administer numerous cognitive tests and offer my interpretive insights?


Example: An example might be those who use subtest analysis or PSW as a means to classify learning disabilities. Such individuals may become defensive when the practice is criticized, cite supportive workshops and presentations, and even align themselves with those who attack researchers critical of PSW. Nonetheless, cognitive profile analysis has not withstood empirical scrutiny spanning more than three decades (McGill et al., 2018).

2.3.2.15 Limited or Non-existent Reporting of Contradictory Findings

It is important to be vigilant about practices that profess empirical support but offer only one-sided support. This is pseudoscientific and non-academic.

Example: A classic example appears in the technical manual of the Woodcock-Johnson Tests of Cognitive Ability and Achievement. The manual discusses factor analytic results, but only those that confirm the structure proposed by the test publisher. Nowhere within the manual do we see the numerous articles about the prior edition (e.g., Dombrowski, 2013a; Dombrowski & Watkins, 2013) that offered a different perspective regarding the overall factor and theoretical structure of the instrument. A one-sided review of the scientific literature degrades the trust that one may place in the scientific foundation of the instrument. I am not claiming that the test publisher or author is being dishonest. I am contending that scientific rigor would be enhanced by discussing opposing evidence.

Conclusion The greater the number of the above warning signs attached to an instrument or assessment procedure, the greater the likelihood of engaging in pseudoscientific practice (unconsciously or otherwise). Thus, professionals will need a skeptical eye in the move toward scientifically supported practice. Frequently, we see monikers of self-congratulation (e.g., "scientifically proven," "evidence-based," "empirically guided") or testimonials from authorities in the field. These are effective marketing tools but offer little to advance the scientific practice of assessment.

2.4 The Second Pillar of Evidence-Based Psychoeducational Assessment: Evidence-Based Instrument Selection, Metrics of Interpretability Evaluation, and Consideration of Diagnostic Utility

With a better understanding of scientific decision-making and the warning signs of pseudoscience, we can now examine a key facet of the evidence-based assessment process: evidence-based instrument selection. It is important to keep in mind that research on assessment instruments is not synonymous with research on the process of psychological assessment (or, in this case, psychoeducational assessment). Psychoeducational assessment is a broader, more comprehensive, and more complex endeavor. So, what is the second pillar of evidence-based psychoeducational assessment? It entails the use of assessment instruments that are psychometrically sound and useful for diagnostic decision-making and treatment planning. This means that we must consider not only the underlying psychometric properties of the instruments we select, but also their diagnostic/treatment utility when used either singularly or within an interpretive approach such as cross-battery assessment or the various processing strengths and weaknesses (PSW) models.
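Diagnostic utility lends itself to direct computation. The sketch below uses invented counts purely for illustration: sensitivity, specificity, and the overall hit rate are derived from a confusion matrix, and an approach whose hit rate hovers near 0.50 classifies at roughly the level of a coin flip.

```python
# A minimal sketch of a diagnostic utility check; all counts are
# hypothetical. Out of 100 children, 50 have a confirmed LD.
true_pos, false_neg = 28, 22   # LD cases: correctly flagged vs. missed
false_pos, true_neg = 24, 26   # non-LD cases: wrongly flagged vs. cleared

sensitivity = true_pos / (true_pos + false_neg)   # 0.56
specificity = true_neg / (true_neg + false_pos)   # 0.52
hit_rate = (true_pos + true_neg) / 100            # 0.54, near coin-flip level

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, hit rate={hit_rate:.2f}")
```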

2.4.1 Instrument Selection: Psychometric Considerations

Hunsley and Mash (2007) offer a framework for the review and subsequent selection of assessment instruments. This framework is shown in Table 2.3 and described more fully below. These characteristics may be referenced to determine whether an instrument is appropriate for a given evaluation. Following the presentation and discussion of this table, you will find a discussion of metrics of interpretability and diagnostic/treatment utility. The consideration of metrics of interpretability determines how much emphasis should be placed upon a particular scale, while the consideration of diagnostic utility is important in establishing whether an assessment approach actually works.

Table 2.3 Psychometric considerations for the adoption of a new assessment instrument

Norms
Excellent: Measures of central tendency and distribution for the total score (and subscores if relevant), based on several large, representative samples (must include data from both clinical and nonclinical samples), are available
Good: Measures of central tendency and distribution for the total score (and subscores if relevant), based on several large, representative samples (must include data from both clinical and nonclinical samples), are available
Adequate: Measures of central tendency and distribution for the total score (and subscores if relevant), based on a large, relevant clinical sample, are available

Reliability: Internal consistency
Excellent: Preponderance of evidence indicates α values ≥ 0.90; omega estimates ≥ 0.75
Good: Preponderance of evidence indicates α values of 0.80–0.89; omega estimates ≥ 0.75
Adequate: Preponderance of evidence indicates α values of 0.70–0.79; omega estimates ≥ 0.50

Reliability: Inter-rater
Excellent: Preponderance of evidence indicates κ values ≥ 0.85; preponderance of evidence indicates Pearson correlation or intraclass correlation values ≥ 0.90
Good: Preponderance of evidence indicates κ values of 0.75–0.84; preponderance of evidence indicates Pearson correlation or intraclass correlation values of 0.80–0.89
Adequate: Preponderance of evidence indicates κ values of 0.60–0.74; preponderance of evidence indicates Pearson correlation or intraclass correlation values of 0.70–0.79

Reliability: Test-retest
Excellent: Preponderance of evidence indicates test-retest correlations of at least 0.70 over a period of a year or longer
Good: Preponderance of evidence indicates test-retest correlations of at least 0.70 over a period of several months
Adequate: Preponderance of evidence indicates test-retest correlations of at least 0.70 over a period of several days to weeks

Validity: Structural/factorial
Excellent: Evidence of both exploratory and confirmatory factorial validity within the technical manual, in addition to independent research, all converging to support the structure proposed in the technical manual. All factor analytic results disclosed in the technical manual, including eigenvalues, variance partitioning, factor loadings, and communality estimates
Good: Evidence of both exploratory and confirmatory factorial validity within the technical manual converging to support the same factor structure. All factor analytic results disclosed in the technical manual, including eigenvalues, variance partitioning, factor loadings, and communality estimates
Adequate: Evidence of both exploratory and confirmatory factorial validity within the technical manual converging to support the same factor structure. Partial disclosure of factor analytic results provided; must include at least the correlation matrix and factor loadings, permitting independent replication of the structure, to be considered tentatively adequate

Validity: Content
Excellent: The test developers clearly defined the domain of the construct being assessed and ensured that selected items were representative of the entire set of facets included in the domain. All elements of the instrument (e.g., instructions, items) were evaluated by judges (e.g., by experts or by pilot research participants). Multiple groups of judges were employed and quantitative ratings were used by the judges
Good: The test developers clearly defined the domain of the construct being assessed and ensured that selected items were representative of the entire set of facets included in the domain. All elements of the instrument (e.g., instructions, items) were evaluated by judges (e.g., by experts or by pilot research participants)
Adequate: The test developers clearly defined the domain of the construct being assessed and ensured that selected items were representative of the entire set of facets included in the domain

Validity: Construct
Excellent: Preponderance of independently replicated evidence, across multiple types of validity (e.g., predictive validity, concurrent validity, convergent/discriminant validity), is supportive of construct validity. Additionally, there is evidence of incremental validity with respect to clinical data
Good: Preponderance of independently replicated evidence, across multiple types of validity (e.g., predictive validity, concurrent validity, convergent/discriminant validity), is supportive of construct validity
Adequate: Some independently replicated evidence of construct validity (e.g., predictive validity, concurrent validity, and convergent/discriminant validity)

Validity generalization
Excellent: Preponderance of evidence supports the use of the instrument with more than one specific group (based on sociodemographic characteristics such as age, gender, and ethnicity) and across multiple contexts (e.g., home, school, primary care setting, inpatient setting)
Good: Preponderance of evidence supports the use of the instrument with either (a) more than one specific group (based on sociodemographic characteristics such as age, gender, and ethnicity) or (b) in multiple settings (e.g., home, school, primary care setting, inpatient setting)
Adequate: Some evidence supports the use of the instrument with either (a) more than one specific group (based on sociodemographic characteristics such as age, gender, and ethnicity) or (b) in multiple contexts (e.g., home, school, primary care setting, inpatient setting)

Treatment sensitivity
Excellent: Preponderance of independently replicated evidence indicates sensitivity to change over the course of treatment. Additionally, there is evidence of sensitivity to change across different types of treatments
Good: Preponderance of independently replicated evidence indicates sensitivity to change over the course of treatment
Adequate: Some evidence of sensitivity to change over the course of treatment

Clinical utility
Excellent: Taking into account practical considerations (e.g., costs, ease of administration, availability of administration and scoring instructions, duration of assessment, availability of relevant cutoff scores, acceptability to patients), the resulting assessment data are likely to be clinically useful. Additionally, there is published evidence that the use of the resulting assessment data confers a demonstrable clinical benefit (e.g., better treatment outcome, lower treatment attrition rates, greater patient satisfaction with services), and there is independently replicated published evidence that use of the data confers such a benefit
Good: Taking into account practical considerations (e.g., costs, ease of administration, availability of administration and scoring instructions, duration of assessment, availability of relevant cutoff scores, acceptability to patients), the resulting assessment data are likely to be clinically useful. Additionally, there is some published evidence that the use of the resulting assessment data confers a demonstrable clinical benefit (e.g., better treatment outcome, lower treatment attrition rates, greater patient satisfaction with services)
Adequate: Taking into account practical considerations (e.g., costs, ease of administration, availability of administration and scoring instructions, duration of assessment, availability of relevant cutoff scores, acceptability to patients), the resulting assessment data are likely to be clinically useful

Adapted and reproduced with permission of the Licensor through PLSclear from Hunsley, J., & Mash, E. J. (Eds.). (2008). Oxford Series in Clinical Psychology. A guide to assessments that work. New York, NY: Oxford University Press

34

S. C. Dombrowski
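As a concrete illustration of the inter-rater thresholds in Table 2.3, Cohen's kappa can be computed directly from two raters' categorical judgments. The following is a minimal sketch in Python; the rater data are hypothetical and chosen only to show the mechanics.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over categories of the product of marginals
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical clinical/non-clinical judgments from two raters
a = ["clinical", "clinical", "normal", "normal", "clinical",
     "normal", "normal", "clinical", "normal", "normal"]
b = ["clinical", "normal", "normal", "normal", "clinical",
     "normal", "normal", "clinical", "normal", "clinical"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.58
```

With these hypothetical ratings, raw agreement is 0.80 but chance agreement is 0.52, yielding κ ≈ 0.58, which falls just short of the adequate range (0.60–0.74) in Table 2.3 despite 80% raw agreement.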

It falls to the editor of review compendiums to select individuals with a baseline of expertise to review an instrument. An uncritical analysis of an instrument leaves trainers, researchers, and practitioners to accept a test publisher's claims about the validity, reliability, and subsequent utility of the instrument for an advertised purpose. Unfortunately, other than ethical and test standards, safeguards are not sufficiently in place to ensure that the public is protected from claims about the utility of an instrument for an intended purpose. The field of assessment psychology does not have an oversight committee like the FDA to vet the psychometric properties of an instrument or assessment procedure. Again, let us consider the WJ IV tests of cognitive ability. The test publisher claims that the instrument measures seven distinct CHC ability areas, but subsequent, independent research (e.g., Dombrowski et al., 2017, 2018; Dombrowski, McGill, & Morgan, 2019) indicated that it measures four constructs, with only three in common with CHC theory. Trainers and researchers overlooking these independent analyses may well continue interpreting and using the instrument in interpretive algorithms as promoted in the commercial literature, but they run the risk of drawing inappropriate conclusions.

Norms: Standardized, norm-referenced instruments have certain basic requirements. In an ideal world, norms for both the general population and a specific clinical population would be provided. These norms should be representative of the population of interest with respect to a number of characteristics, including gender, socioeconomic status, age, and ethnicity, among others. Some of the best-normed instruments are IQ tests: they tend to be developed to reflect the population of the country of interest. For instance, in the USA, the Wechsler scales are stratified according to the US census and evaluated for match thereto. Scores for interpretation are then stratified according to age to account for developmental differences, an important feature for child and adolescent scale development. I cringe when I see an instrument, typically a narrow band instrument, standardized on a sample of youth ranging in age from 2 to 21 without adequate sampling of the various age ranges. Second, and something that is often created arbitrarily in narrow band instruments, are the cut scores for at-risk status and psychopathology. How were those cut scores created? How was the clinical sample for norming purposes ascertained? Conversely, did the narrow band instrument collect normative data on the broader population, so that an individual suspected of a disability has a metric for understanding where he or she falls relative to the population at large? If a review of the norms reveals that the instrument did not align with the population of interest or did not draw from the majority of contacted respondents, then it likely should not be considered norm-referenced, should be viewed as inadequate, and should be used as a qualitative, not a quantitative, measure.

Reliability: Reliability refers to the consistency of the obtained scores within a measure, whether that consistency is within the instrument itself (internal consistency), across a second or multiple administrations (test-retest), or between different clinicians administering the same instrument (inter-rater).


Coefficient alpha is the predominantly used reliability coefficient for internal consistency, although omega coefficients have also been suggested as a means to determine whether scales have sufficient variance to be reliably interpreted. This distinguishes alpha from omega: alpha may be used to determine whether a scale is reliable, whereas omega may be used to determine whether a scale is reliable within a group for a particular purpose (i.e., interpretation of subscales). Because omega estimates are a function of apportioned variance, they have often been used as a metric of validity; they are discussed further in Sect. 2.4.2. Generally accepted thresholds, applicable to all three forms of reliability, are as follows: excellent (≥ 0.90); good (≥ 0.80); adequate (≥ 0.70); and poor (< 0.70).

Keep in mind that test-retest reliability coefficients are interpreted relative to the construct being assessed. They can be influenced by situational variables and by the time frame between assessments. Thus, for instruments that measure traits, rather than situational states, we would like to see reliability of at least 0.70 across a longer duration between assessments. Additionally, there may be varying standards based on the seriousness of the decision: a coefficient of 0.90–0.95 may be required for high-stakes individual decisions, whereas 0.70 may be sufficient for group decisions. Finally, keep in mind that reliability constrains validity; a scale cannot be valid if it is not reliable. For instance, if a scale has a reliability coefficient of 0.80, the upper bound of its validity coefficient is the square root of 0.80, or approximately 0.89 (see the correction for attenuation at the end of this subsection).

Validity: Without validity evidence, we cannot be confident that an instrument measures what it claims to measure. Although all types of validity may be characterized as construct validity, validity is often broken out according to "type," and there are numerous types of validity evidence. In the classical model of test validity there are three types: content, construct, and criterion validity (predictive and concurrent). Modern conceptions of validity also include discriminant validity, convergent validity, face validity, and incremental validity. Messick's notion of validity includes six basic considerations: (a) consequential; (b) content; (c) substantive; (d) structural; (e) external; and (f) generalizability. Each of these types of validity evidence is discussed briefly below.

Construct validity is broadly defined as the degree to which a test measures what it purports to measure. Criterion validity refers to the extent to which a measure is related to an outcome (or criterion) and is divided into concurrent and predictive validity. Both are generally measured as correlations between a test and some criterion measure. With predictive validity, one measure precedes the other in time. For instance, the validity of the SAT in predicting grade point average (GPA) in college is reflected in the correlation between the two variables; the SAT is thought to have predictive validity for performance in college. With concurrent validity, both instruments are measured at the same time (or concurrently). An example of concurrent validity would be the correlation of performance on the New York State Regents exam in chemistry with classroom GPA in chemistry. (The Regents exam in New York is a standardized test administered to certain students in public school districts in New York.)
Keep in mind that the latest version of the Test Standards no longer uses the term predictive validity; rather, the standards use "Evidence Based on Relationships [between the test scores and] Other Variables."

Convergent validity refers to the degree to which constructs that are purported to be theoretically related are indeed related. For instance, we would expect the UNIT-2 to exhibit a degree of convergent validity with the WISC-V, since the UNIT-2 measures aspects of cognitive ability in common with the WISC-V (i.e., visual-spatial and fluid reasoning abilities). When calculating a convergent validity correlation, it is necessary to correct for attenuation in the correlation due to measurement error; the same applies to the discriminant validity correlation and other types of validity (a worked version of the correction appears at the end of this subsection).

Discriminant validity, on the other hand, measures whether constructs thought to be dissimilar are in fact unrelated. An example would be a test of reading and a test of mathematics: if the two measures demonstrate a low correlation with one another, then this is evidence of discriminant validity.

Face validity is the extent to which a test is perceived to measure the construct it purports to measure. In other words, a test is considered to have face validity if it looks like it measures the construct it claims to measure. Consider a self-concept scale: although self-concept scores correlate with depression scores nearly as strongly as one depression measure correlates with another, a self-concept scale does not look like a measure of depression and thus has less face validity for that construct than another depression measure would.

Structural validity. Structural validity, also known as internal structure, uses factor analysis to determine whether an instrument measures the scales it purports to measure. It is often considered one of the most important validity characteristics: without validation of a test's internal structure, one cannot be confident in the constructs measured by the test. Structural validity uses exploratory and confirmatory factor analysis to establish an instrument's internal structure. The resulting factor structure is used to determine whether the instrument fits the theory upon which it was created and determines the various indices of the instrument (i.e., in the case of the WISC-V, the FSIQ and five index scores). In other words, the factor structure determines how the instrument should be scored and interpreted.

Incremental validity. Incremental validity refers to whether an additional instrument increases predictive power beyond that provided by existing measures. An example may be the use of an additional measure of autism spectrum disorder, beyond observations, interviews, and an existing autism measure, to assist with classification decisions. If the additional measure does not increase predictive power, then it is appropriate to use a more parsimonious approach to assessment that does not include the measure.

Content validity. Content validity reflects an instrument's capacity to measure all dimensions of a particular construct. Typically, it is reflected in test content that measures all aspects of a particular attribute. For instance, if it is agreed that an IQ test measures not only verbal but also nonverbal abilities, yet the instrument contains only verbal test items, then the instrument may not evidence content validity.


Consequential validity. Consequential validity refers to either the positive or negative social consequences of scores derived from a particular instrument. For instance, the common core curriculum has both positive and negative consequences. On the positive side, it ensures that children are exposed to a breadth of content and are making gains. On the negative side, it could morph into a philosophy where school districts teach to the test to appear higher performing without necessarily accomplishing this goal. Consequential validity, a notion first coined by Messick, also requires test developers to be circumspect when labeling factors within an instrument. For instance, one should not label a test as a test of nonverbal IQ if it measures other aspects of cognitive ability beyond nonverbal abilities.
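Because the correction for attenuation is invoked several times above, the standard classical test theory formula is worth setting out; the numbers in the example are assumed purely for illustration.

$$ r_{T_x T_y} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}} $$

Here $r_{xy}$ is the observed correlation between the two measures, and $r_{xx}$ and $r_{yy}$ are their reliabilities. For instance, an observed convergent correlation of 0.60 between instruments with reliabilities of 0.85 and 0.80 disattenuates to $0.60/\sqrt{0.85 \times 0.80} \approx 0.73$. Setting the true-score correlation to its maximum of 1.0 also yields the bound noted in the reliability discussion: $r_{xy} \le \sqrt{r_{xx}\, r_{yy}}$.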

2.4.2 Metrics of Interpretability Evaluation

Beyond investigating the psychometric characteristics discussed above, there are several statistical metrics that should be considered, including explained common variance (ECV), explained total variance (ETV), coefficients omega-hierarchical (ωH) and omega-hierarchical subscale (ωHS), the percentage of uncontaminated correlations (PUC), and H. These metrics are calculated following a factor analysis, which in its own right is important for determining what indices should be created and whether the instrument fits the theoretical structure proposed for it. Although metrics of interpretability are generally not available in instruments' technical manuals, these metrics are important, should perhaps be inserted into technical manuals, and can assist clinicians in deciding whether they should interpret scales and how much emphasis should be placed upon them.

Within intelligence and other tests (e.g., behavior, personality) that assume either a higher order or bifactor structure, the variance in the instrument may be apportioned between the hierarchical general factor and the subscales. Once this occurs, ECV and ETV may be calculated. ECV reflects the sum of the variance of the indicators of a factor (general or subscale) divided by the sum of the communalities of the indicators (i.e., subtests). ETV reflects the sum of the variance of a factor (general or subscale) divided by the number of indicators (i.e., subtests). Once these are calculated for each scale, a decision may be made in terms of how much emphasis should be placed upon a particular scale.

Consider Table 2.4. We see that the general factor's ETV of 0.372 is more than 10 times higher than the ETVs of the Gc, perceptual reasoning, and Gwm factors, and roughly six times that of the Gs factor. Similarly, the ECV of the general factor is more than 10 times the magnitude of the Gc, perceptual reasoning, and Gwm factors, and approximately six times the size of the Gs factor. These results suggest that greater emphasis should be placed upon interpretation of the general factor, as the variance apportioned to it significantly exceeds the variance apportioned to the subscales. There is not a specific rule that may be applied when looking at ECV or ETV, but when the variance of the general factor exceeds that of the subscales by a significant margin, we can make a case that limited, if any, emphasis should be placed on the interpretation of subscales (i.e., the WISC-V indices of Gc, Gwm, etc.).
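Expressed compactly, and using the notation of Table 2.4 (where each S2 entry equals the squared standardized loading, b²), the two indices for a factor $f$ measured by $n$ total indicators are

$$ \mathrm{ECV}_f = \frac{\sum_i \lambda_{if}^2}{\sum_i h_i^2}, \qquad \mathrm{ETV}_f = \frac{\sum_i \lambda_{if}^2}{n}, $$

where $\lambda_{if}$ is the standardized loading of indicator $i$ on factor $f$ and $h_i^2$ is the communality of indicator $i$.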

Table 2.4 Sources of variance for the WISC-V according to a confirmatory factor analysis bifactor model

Subtest                     General      Gc           Gf/Gv        Gsm          Gs           h2    u2
                            b    S2      b    S2      b    S2      b    S2      b    S2
Similarities (Gc)           0.72 0.52    0.35 0.12                                           0.64  0.36
Vocabulary (Gc)             0.73 0.53    0.47 0.22                                           0.74  0.26
Information (Gc)            0.72 0.52    0.38 0.15                                           0.67  0.33
Comprehension (Gc)          0.63 0.39    0.32 0.10                                           0.50  0.50
Block design (Gv)           0.64 0.41                 0.38 0.14                              0.56  0.45
Visual puzzles (Gv)         0.65 0.42                 0.50 0.25                              0.69  0.32
Matrix reasoning (Gf)       0.64 0.41                 0.13 0.02                              0.43  0.57
Figure weights (Gf)         0.65 0.42                 0.16 0.03                              0.45  0.55
Picture concepts (Gf)       0.53 0.28                 0.06 0.00                              0.29  0.71
Arithmetic (Gsm/Gf/Gc)      0.74 0.54                              0.13 0.02                 0.56  0.44
Digit span (Gsm)            0.66 0.44                              0.49 0.24                 0.68  0.32
Picture span (Gsm)          0.55 0.30                              0.30 0.09                 0.39  0.61
Letter–Number seq. (Gsm)    0.65 0.42                              0.45 0.20                 0.63  0.37
Coding (Gs)                 0.37 0.13                                           0.63 0.39    0.53  0.47
Symbol search (Gs)          0.43 0.18                                           0.68 0.46    0.64  0.36
Cancellation (Gs)           0.19 0.04                                           0.37 0.14    0.17  0.83
Total variance (ETV)             0.372        0.037        0.027        0.035        0.062
Common variance (ECV)            0.696        0.069        0.051        0.065        0.115
ωH/ωHS                           0.844        0.302        0.110        0.181        0.518
H                                0.915        0.567        0.354        0.407        0.626
PUC                              0.792

Note: b = standardized loading of the subtest on the factor; S2 = variance explained in the subtest; h2 = communality; u2 = uniqueness; ωH = omega-hierarchical (general factor); ωHS = omega-hierarchical subscale (group factors); H = construct reliability or replicability index; PUC = percentage of uncontaminated correlations. Group factors: Gc = verbal comprehension; Gf/Gv = perceptual reasoning; Gsm = working memory; Gs = processing speed. The publisher-posited group factor alignment is given in parentheses. The analysis reflects the average application of bifactor CFA to a Monte Carlo resampling of the publisher's correlation matrices over 1,000 runs. Reprinted with permission of Sage Publishing from Dombrowski, S. C., McGill, R. J., & Morgan, G. B. (2019). Monte Carlo modeling of contemporary intelligence test (IQ) factor structure: Implications for IQ assessment, interpretation, and theory. Assessment. https://doi.org/10.1177/1073191119869828


Coefficients ωH and ωHS are another metric that ought to be considered when the construct being evaluated is presumed to have a dominant or pervasively influential general factor (e.g., cognitive ability). They were initially conceptualized as model-based reliabilities and estimates of the reliability of unit-weighted scores produced by a given set of indicators (Reise, 2012; Rodriguez, Reise, & Haviland, 2016; Watkins, 2017); however, they may also be conceptualized as a metric of the construct interpretability of the resulting latent factors (Gustafsson & Åberg-Bengtsson, 2010). For most conventional IQ tests, the ωH coefficient represents the estimate of the general intelligence factor variance with variability from the group factors removed, and the ωHS coefficient represents the group factor estimate with variability from all other group and general factors removed (Brunner, Nagy, & Wilhelm, 2012; Reise, 2012). In general, the ωH/ωHS threshold needed for confident clinical interpretation of general and group factor-based scales is a value of 0.50, although 0.75 is generally preferred (Reise, 2012; Reise et al., 2013). Omega coefficients started to be incorporated into EFA and CFA studies of cognitive ability instruments around 2014. The thresholds established by Reise et al. (2013) are somewhat subjective, and further research is needed to establish their appropriateness.

From Table 2.4, we see results that parallel those from our analyses of ECV and ETV. This is not surprising, as ECV/ETV and omega coefficients both reflect ratios of the variance ascribed to the general factor and the subscale factors. Omega estimates suggest that the index derived from the general factor (i.e., the FSIQ) may be confidently interpreted, since it is above the minimum established threshold of 0.50. However, three of the four subscales of the WISC-V (i.e., Gc, Gwm, and Gf/Gv) should not necessarily be interpreted. In other words, omega estimates suggest that we should consider placing less emphasis on the Gc, Gf/Gv, and Gwm subscales, although the Gs subscale may be independently interpreted as a meaningful scale.

Hancock and Mueller's (2001) coefficient H may also be a consideration, with the caveat that H is the replicability index of an optimally weighted score. Consequently, it may be redundant with omega estimates in the case of unit-weighted instruments such as the WISC-V. The H coefficient provides an estimate of how well a posited latent construct is (a) represented by a given set of indicators and (b) likely to be replicated across similar factor analytic studies. A value of 0.70 or higher is preferred for interpretation of factor indices (Hancock & Mueller, 2001; Rodriguez et al., 2016). In this case, the H coefficients would suggest that we should interpret only the index derived from the general factor, as its H value is the only one above 0.70.

Finally, the percentage of uncontaminated correlations (PUC) should be considered to obtain a sense of how many of the correlations in the model inform directly on the general factor. A PUC value of 0.70 or higher (along with an explained common variance of 0.70 or higher) would suggest the measure is primarily unidimensional. As we can see, the PUC was 0.792, while the ECV was 0.696.


This would suggest that the WISC-V is predominantly unidimensional and that interpretation beyond the general factor (i.e., the FSIQ) may be psychometrically unwise. Thus, ECV, ETV, omega coefficients, PUC, and H all converge to suggest that the WISC-V, like most IQ tests available on the marketplace, may best be interpreted at the FSIQ level [e.g., WJ IV full test battery and WJ IV Cognitive (Dombrowski, McGill, & Canivez, 2017, 2018); WJ III full test battery and WJ III Cognitive (Dombrowski, 2013b, 2014a, 2014b; Dombrowski & Watkins, 2013); WISC-V (Canivez, Watkins, & Dombrowski, 2016, 2017; Canivez et al., 2020; Dombrowski, Canivez, & Watkins, 2017; Dombrowski, Canivez, Watkins, & Beaujean, 2015; Dombrowski, Beaujean, McGill, Benson, & Schneider, 2019); Stanford-Binet, Fifth Edition (Canivez, 2008; DiStefano & Dombrowski, 2006); DAS-II (Canivez & McGill, 2016; Dombrowski, Golay, McGill, & Canivez, 2018; Dombrowski, McGill, Canivez, & Peterson, 2019); KABC-2 (McGill & Dombrowski, 2018; McGill & Spurgin, 2017); and RIAS (Nelson, Canivez, Lindstrom, & Hatt, 2007; Dombrowski, Watkins, & Brogan, 2009)].

In the end, I would posit that it would behoove test publishing companies to provide metrics of interpretability for their cognitive ability instruments, such as those presented above. At minimum, ECV and ETV should be presented, but it is also suggested that omega estimates and H be furnished. This will permit users of those assessment instruments greater capacity to determine where interpretive emphasis should be placed (e.g., on the composite or an index in the case of IQ tests). At present, the only place where I have seen these metrics is in independent research articles (see Dombrowski, McGill, & Morgan, 2019).
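For readers who wish to verify or apply these metrics, the sketch below derives ETV, ECV, ωH, and PUC directly from the rounded loadings and uniquenesses in Table 2.4. It is a minimal illustration in Python, not a reproduction of the full Monte Carlo analysis in Dombrowski, McGill, and Morgan (2019); small discrepancies from the tabled values reflect rounding.

```python
from math import comb

# Standardized general-factor loadings from Table 2.4 (16 subtests)
general = [0.72, 0.73, 0.72, 0.63, 0.64, 0.65, 0.64, 0.65,
           0.53, 0.74, 0.66, 0.55, 0.65, 0.37, 0.43, 0.19]
# Group-factor loadings from Table 2.4, keyed by factor
groups = {
    "Gc":    [0.35, 0.47, 0.38, 0.32],
    "Gf/Gv": [0.38, 0.50, 0.13, 0.16, 0.06],
    "Gsm":   [0.13, 0.49, 0.30, 0.45],  # Arithmetic, Digit span, Picture span, L-N seq.
    "Gs":    [0.63, 0.68, 0.37],
}
uniquenesses = [0.36, 0.26, 0.33, 0.50, 0.45, 0.32, 0.57, 0.55,
                0.71, 0.44, 0.32, 0.61, 0.37, 0.47, 0.36, 0.83]

n = len(general)
gen_var = sum(l ** 2 for l in general)                 # variance due to g
grp_var = sum(sum(l ** 2 for l in v) for v in groups.values())
common = gen_var + grp_var                             # sum of communalities

etv_g = gen_var / n       # explained total variance for g
ecv_g = gen_var / common  # explained common variance for g

# Omega-hierarchical: squared sum of g loadings over total modeled variance
num = sum(general) ** 2
den = num + sum(sum(v) ** 2 for v in groups.values()) + sum(uniquenesses)
omega_h = num / den

# PUC: proportion of subtest correlations that cross group factors
total_pairs = comb(n, 2)
within_pairs = sum(comb(len(v), 2) for v in groups.values())
puc = (total_pairs - within_pairs) / total_pairs

print(f"ETV(g) = {etv_g:.3f}")    # ~0.374
print(f"ECV(g) = {ecv_g:.3f}")    # ~0.699
print(f"omegaH = {omega_h:.3f}")  # ~0.850
print(f"PUC    = {puc:.3f}")      # 0.792
```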

2.4.3 Diagnostic and Treatment Validity/Utility Consideration

Although metrics of interpretability and the various psychometric features of an instrument are critically important, it is equally important to consider how an instrument might be used to make diagnostic and treatment decisions. This is labeled diagnostic or treatment validity (or utility). Diagnostic utility studies focus on whether the assessment procedure actually works. Consider for a moment an approach to SLD assessment using PSW. Some individuals (e.g., Schultz & Stephens, 2019) provide a theoretical rationale that makes intuitive sense. Other PSW researchers (e.g., Feifer, Nader, Flanagan, Fitzer, & Hicks, 2014) go one step further and classify research participants into five different SLD groups using regression methods. Some (e.g., Feifer et al., 2014) found different PSW patterns of statistically significant relationships among the cognitive and achievement test scores and subsequently concluded that PSW profiles were diagnostic. Although the finding of group differences is more evidence-based than simply relying upon theoretical/intuitive considerations, it is not sufficient for determining whether PSW may actually be used to make accurate SLD diagnostic decisions (Benson et al., 2018).

One approach that is more useful and empirically sound for determining whether an assessment algorithm has diagnostic utility is receiver operating characteristic (ROC) analysis. Studies that have used ROC analyses have concluded the following with respect to PSW: (a) cognitive scatter does not matter and identifies broader academic dysfunction at no better than chance levels (i.e., a coin flip; McGill, 2018); (b) specific cognitive weaknesses have low positive predictive values in identifying academic weaknesses (Kranzler, Floyd, Benson, Zaboski, & Thibodaux, 2016); and (c) PSW methods have low to moderate sensitivity and low positive predictive values for discerning SLD (Miciak, Taylor, Stuebing, & Fletcher, 2018; Stuebing, Fletcher, Branum-Martin, & Francis, 2012). It is emphasized that specific cognitive profiles confirming or disconfirming the presence of a learning disorder have yet to be identified (Mather & Schneider, 2015).

Nonetheless, it is often claimed in the professional and commercial literature that the use of profile analytic methods (e.g., PSW) may be useful for diagnosis and treatment planning for individuals with academic weaknesses. A countering body of literature has emerged over the past few years that identifies a number of psychometric and conceptual concerns about these profile analytic methods (e.g., Miciak, Fletcher, Stuebing, Vaughn, & Tolar, 2014; Miciak, Taylor, Denton, & Fletcher, 2015; Miciak, Taylor, Stuebing, & Fletcher, 2018; Taylor, Miciak, Fletcher, & Francis, 2017; Williams & Miciak, 2018). Burns et al. (2016) summarized this research and found that the effect sizes associated with academic interventions guided by cognitive data (i.e., aptitude-by-treatment interaction [ATI]) were mostly small, with the exception of phonological awareness-based interventions, which had a modest effect size. Burns et al. concluded that "measures of cognitive abilities have little to no utility in screening or planning interventions for reading and mathematics" (p. 37).

Still, supporters of profile analytic methods like PSW often point to the extant literature, where one can find numerous citations that espouse the utility of these methods. However, one must make a critical examination of these studies, as they have the appearance of a scientific foundation but actually lack overall empirical support. In one of the more compelling quotes about the extant evidentiary basis of cognitive profile analytic methods, Schneider and Kaufman (2017) concluded the following: "After rereading dozens of papers defending such assertions [about the utility of profile analytic methods], including our own, we can say that this position is mostly backed by rhetoric in which assertions are backed by citations of other scholars making assertions backed by citations of still other scholars making assertions" (p. 8).
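The low positive predictive values reported in these ROC studies are easier to appreciate with a worked example. The sensitivity, specificity, and base rate below are hypothetical values chosen only for illustration; they are not drawn from any of the cited studies.

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """PPV via Bayes' theorem: P(disorder | positive test result)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical decision rule with moderate accuracy applied where
# 10% of referred children actually meet SLD criteria
ppv = positive_predictive_value(sensitivity=0.70,
                                specificity=0.80,
                                base_rate=0.10)
print(f"PPV = {ppv:.2f}")  # 0.28: most positive identifications are false
```

Even a rule with respectable sensitivity and specificity yields a PPV of roughly 0.28 at a 10% base rate; that is, most children flagged by the rule would not actually meet SLD criteria, consistent with the base rate neglect error described earlier in this chapter.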


Conclusion

The considerations above about the psychometric foundations of an instrument and the diagnostic utility of an assessment method are a good starting point for determining whether an instrument and/or assessment method should be used as part of the overall assessment process. However, it will also be important to be mindful of additional considerations. Despite empirical evidence against selected practices such as the SLD discrepancy approach, subtest analysis, and PSW approaches, our field continues to use them. Why do non-empirically supported assessment practices continue to be used?

2.4.4 Reasons for Continued Popularity of Non-empirically Supported Assessment Practices

Lilienfeld, Wood, and Garb (2006) presented a framework for determining why practices perpetuate despite a lack of scientific support. They discussed their framework primarily in relation to projective instruments such as the Rorschach, the Thematic Apperception Test (TAT), and the sentence completion test, but the considerations they offered are appropriate for other measures and interpretive algorithms (e.g., subtest analysis, PSW, and cross-battery assessment) as well. Each of the reasons for the continued popularity of tests that lack empirical support is discussed below (Table 2.5).

2.4.4.1 Illusory Correlation

This is defined as the perception of an association between two variables, statistical or observational, when there is none, or of a stronger association between two variables than actually exists. Chapman and Chapman (1967) presented an empirically based critique of the interpretation of figure drawings, a practice that had remained popular since the 1920s, when Florence Goodenough (1926) first created her system for the interpretation of human figure drawings. Chapman and Chapman asked experienced practicing clinicians to list the signs that they felt were associated with clinical characteristics. The clinicians reported that Draw-A-Person (DAP) signs such as large eyes were associated with suspiciousness, despite the dearth of support for this finding. To conduct their study, Chapman and Chapman presented undergraduate students with various DAP drawings that the students were told were from psychiatric populations with various mental health issues. Each drawing was randomly paired with two psychological problems, and participants were asked to inspect the drawings and estimate the degree to which the DAP signs co-occurred with the problems. Chapman and Chapman found that participants located the same DAP correlates that had been previously reported by clinicians, despite the entirely random pairing of drawings with psychological problems. In other words, participants believed that large eyes and broad shoulders in the drawings were associated with suspiciousness and manliness, respectively, even though there was no association between these drawing indicators and the psychological problems. The findings support the conclusion that the perceived relationship between DAP signs and suspected psychological problems was attributable to illusory correlation (Cressen, 1975; Thomas & Jolley, 1998). Illusory correlation has been demonstrated with other projective measures, including the Rorschach and the Sentence Completion Test (Starr & Katkin, 1969). It is also likely that illusory correlation is the reason that the practice of astrology, a pseudoscientific practice, has persisted for more than 5,000 years across many cultures. As Lilienfeld, Wood, and Garb (2006) noted, "The durability of a belief is a poor barometer of its validity, an error enshrined in the logician's ad antiquitem fallacy" (p. 9).

Table 2.5 Reasons for continued popularity of tests without empirical support
1. Illusory correlation
2. The P.T. Barnum Effect
3. Overperception of psychopathology
4. The Alchemist's Fantasy
5. Clinical tradition and educational inertia
6. Confirmation bias
7. Hindsight bias
8. Overconfidence
9. Suboptimal psychometric training and education
Adapted from Lilienfeld, S. O., Wood, J. M., & Garb, H. N. (2006). Why questionable psychological tests remain popular. The Scientific Review of Alternative Medicine, 10, 6–15.

2.4.4.2 The P.T. Barnum Effect

Named after the showman and circus founder, Phineas Taylor Barnum, the P.T. Barnum effect involves statements so general that nearly anyone would agree with them. These might include statements such as "sometimes I get worried about my future" or "sometimes I prefer to be by myself," which have a high base rate (i.e., are attributable to most people) yet are offered as if descriptive of a particular child or individual. Such a statement is applicable to many but appears to be a profound insight into an individual's functioning. Lilienfeld, Wood, and Garb (2006) described this effect in relation to projective measures, but it might equally be ascribed to the attempt to use a variety of cognitive profiles as a means to describe an individual. Additionally, individual and teacher acceptance of an interpretive explanation derived from an extended, and often contradictory, battery of neurocognitive measures may fall under the P.T. Barnum effect. In other words, when a clinician offers an extensive description of the linkage between neurocognitive assessments and underlying neurological substrates as the locus of a child's struggle with academic progress, acceptance of that account perpetuates such practice under this effect.

2.4.4.3 Overperception of Psychopathology (or Disability)

This refers to the tendency to perceive psychological disorder quite frequently. It may well even be a psychological disorder that is en vogue and within the public's sphere of awareness (e.g., ADHD or autism). Certain assessment instruments that leave much to a practitioner's subjective perception may perpetuate the overdiagnosis of a disability. Projectives such as the Rorschach or human figure drawings are two examples of instruments that have poor predictive validity for diagnosing psychopathology but have been used in the past for such practice (Murstein & Mathes, 1996; Weiner, 1996). From another angle, let us consider a child referred for SLD. We decide to use the XBA assessment system to diagnose SLD. We find that indeed this child experiences reading difficulties, ascribe a putative cause, and feel good that our clinical acumen was on point in diagnosing a child with SLD. We fail to recognize that the child was referred for an evaluation in the first place because of reading difficulties, and so the technique we used may not be as sensitive as we think. If one were to use one of the SLD XBA approaches to classification prospectively, we may find that children are misclassified as SLD (when they are not) and missed (when they are) at the level of a coin toss (McGill, 2018).

2.4.4.4 The Alchemist's Fantasy

This is the notion that an assessment instrument can perform poorly in research studies but offer a wellspring of clinical insights in a clinic-based setting (Lilienfeld, Wood, & Garb, 2006). Thus, the clinician is believed able to transform disparate pieces of data into useful clinical gold. The argument has been proffered by many a clinical specialist over the decades, including those who promote the use of projectives and cognitive subtest analysis. The argument goes like this:
1. Clinicians evaluate test scores in the context of numerous other forms of assessment data.
2. The addition of the other sources of data can transform an instrument that is invalid when standalone into a rich reservoir of clinical insight.
3. This ability is referred to by Lilienfeld, Wood, and Garb (2006) as "the alchemist's fantasy" because there is the belief that clinicians can transform less than useful, if not worthless, information into clinical gold.
The practice of subtest analysis as discussed by Sattler (2018) in his clinical assessment text, and endorsed by Alan Kaufman via his "intelligent testing" approach, encourages clinicians to engage in clinically astute detective work by gathering multiple assessment sources and crafting together a clinical profile for a given child. At this point in time, science, and therefore history, seems to be on the side of eschewing such an approach to IQ test interpretation. I suspect the practice of subtest analysis, and maybe even cognitive profile analysis, will go the way of projective measure interpretation in school and clinical child psychology. Nonetheless, for reasons I discuss next, these practices are resistant to change.

2.4.4.5 Clinical Tradition and Educational Inertia

The ubiquitous use of an assessment instrument should not supplant the need for empirical validation of the nature described in this chapter. Human beings are social animals and may well acquiesce to communal reinforcement. In other words, students, psychologists, and professors may well derive reinforcement for using an instrument when they see many others using it. One has to be on alert for the ad populum fallacy: the incorrect belief that an instrument that is widely used must be valid and useful. For instance, PSW approaches are used widely, and there are many chapters, books, and workshop presentations that endorse the practice. This may give the perception that the techniques are empirically sound. Nonetheless, the accumulated evidence suggests that this is not the case. Thus, it is incumbent upon those who promote these techniques to be circumspect in their statements of effectiveness, as unwarranted assertions may unwittingly encourage practitioners to use PSW approaches of questionable validity and to derive improper clinical inferences on the basis of dubious research support.

2.4.4.6 Confirmation Bias

This is the tendency to seek out evidence consistent with one's beliefs and to deny, overlook, or distort evidence that is not. For instance, researchers (e.g., Beaujean, Benson, Canivez, Dombrowski, McGill, Watkins) have independently investigated the factor structure of several IQ tests and found that, in some cases, the publisher did not use exploratory methods or did not investigate rival models and sought only to substantiate the theory it set out to confirm. This is not necessarily undertaken intentionally, but it can affect the final form of the instrument. This was discussed more fully in Sects. 2.3.1.2 and 2.3.2.3.

2.4.4.7 Hindsight Bias

This is a phenomenon whereby we ascribe an outcome or rationale post hoc, after we know about a child's background. Watkins (2000) demonstrated this in the context of the subtest interpretation approach promoted by Sattler (2018). Watkins showed that, when approached prospectively, the interpretations ascribed to cognitive subtest profiles on the WISC performed no better than chance, although one might make a judgment retrospectively (i.e., in hindsight) and believe that one has been able to use the various profiles to understand etiology or assist with diagnosis.

2.4.4.8 Overconfidence

Lilienfeld et al. (2017) discuss this in relation to one's knowledge base, not necessarily an interpersonal demeanor. Its antidote, epistemic humility, is needed in psychology to bridge the science-practitioner gap and, at a minimum, avoid doing harm to clients. An example may well be a practitioner who has been conducting assessments for decades, has amassed a reservoir of experience, and is quite confident in his or her clinical judgment, but who relies on techniques for the interpretation of IQ tests and personality measures that were scientifically debunked decades ago.

2.4.4.9 Suboptimal Psychometric Training and Education

This is a tremendous problem in school psychology, particularly at the masters/Ed.S. level, but even in doctoral programs that eschew the teaching of measurement, thinking the topic unnecessary (even though most school psychologists in practice engage in the assessment process). It may come in the form of a school psychology program dismissing the utility of standardized cognitive and achievement assessment and farming out the teaching of these courses to individuals with less expertise in psychometrics and the assessment process, or it may be reflected in the outright absence of a course in psychometrics from a program's curriculum. A consequence might be the uncritical acceptance of how an instrument is promoted.

Conclusion

It is strongly suggested that psychologists examine the influence of the above factors to avoid the use of assessment instruments of questionable reliability and validity. Assessment, and therefore psychoeducational assessment, is complex, iterative, and in need of integration and conceptualization. As Hunsley and Mash (2011) noted, "…a truly evidence-based approach to assessment involves an evaluation of the accuracy and usefulness of this complex decision-making task in light of potential errors and biases in data synthesis and interpretation, the costs associated with the assessment process and, ultimately, the impact the assessment had on clinical outcomes for the person(s) being assessed" (p. 77). Thus, evidence-based assessment ultimately demands not only scientific decision-making and critical thinking, which recognize that we are biased in our observational capacities and judgment, but also expert clinical judgment. This is discussed next and is considered the third pillar of evidence-based psychoeducational assessment and report writing.

2.5 The Third Pillar of Evidence-Based Psychoeducational Assessment and Report Writing: Expert Clinical Judgment

School and child clinical psychologists are tasked with making critical decisions about children, ranging from diagnosis/classification to placement and intervention decisions. Frequently these decisions occur within complex environments and without complete data (Watkins, 2009). Clinicians must therefore rely upon their training and expertise to properly diagnose, treat, and ultimately benefit the children they serve.

2.5.1 The Dimensions of Expertise

Expertise, or adept clinical acumen, is an essential facet of the psychoeducational assessment and report writing process. The process of becoming an expert has been researched and discussed in numerous disciplines, including psychology. The literature suggests that experts in a specific discipline are more adroit than novices at distilling necessary data from the superfluous, formulating a problem-solving approach, and enacting a strategy. Whether expertise is innate or acquired over time has been debated. The broader literature suggests that it takes approximately ten years to become an expert in a particular domain (Ericsson, 1996), so long as there is (a) deliberate practice of at least 3.5 h per day and (b) access to a more experienced mentor who can offer expert advice (Byrnes, 2008) and evaluative feedback that is taken to heart. Additionally, expertise in one domain (e.g., medicine) does not translate to another (e.g., finance). Byrnes (2008) surveyed the expertise literature (e.g., Anderson, 1990; Ericsson, 2003; Glaser & Chi, 1988) and found seven main dimensions of expertise, to which I have added two additional ways in which experts differ from novices. The nine dimensions of expertise are depicted below in graphical and tabular form, followed by an explanation of each dimension (Fig. 2.2, Table 2.6).

Fig. 2.2 The nine dimensions of expertise (domain-specificity; greater knowledge and experience; meaningful perception; reflective, qualitative problem analysis; principled problem representation; effective strategy construction; postanalysis speed and accuracy; better use of working memory; greater self-monitoring skills)

Table 2.6 The dimensions of expertise
1. Domain-specificity
2. Greater knowledge and experience
3. Meaningful perception
4. Reflective, qualitative problem analysis
5. Principled problem representation
6. Effective strategy construction
7. Postanalysis speed and accuracy
8. Better use of working memory
9. Greater self-monitoring skills
Adapted from pages 151–153 of Byrnes, J. P. (2008). Cognitive development and learning in instructional contexts (3rd ed.). New York, NY: Allyn and Bacon.

2.5.1.1 Domain-Specificity

Although an expert in one domain, such as a business-related field, may be perceived as having high ability in an altogether different field, the literature does not support this stance. For instance, consider an eminent medical professor who discovered a cure for a rare disease. This individual works in a university setting and is quite capable in the medical trade. However, to consult with this individual regarding financial matters (unless the physician runs a business or hospital) would be a mistake. The physician's knowledge is domain specific and relegated to medicine. This is likely why, back in the 1990s and 2000s, business leaders fared worse than or no better than educational leaders when attempting to run schools. Recall that there was a push to improve school functioning, and it was thought that a business professional would do a superior job. Although successful business leaders may have the traits to learn how to run school systems successfully, and there are doubtless overlaps in personnel management, goal setting, budgeting, and other areas, a learning curve is still required to come up to speed with the many successful educational leaders who are presently and proficiently stewarding some of our high-quality educational systems.

2.5.1.2 Greater Knowledge and Experience

The magnitude of knowledge and experience is likely the biggest difference between novices and experts. Experts have accrued considerably more knowledge and skill in their particular domain than novices. This permits automaticity in decision-making and the processing of information in more elaborate, complex, and abstract ways, offering much deeper conceptual insight into their domain (Chi, Hutchinson, & Robin, 1989). Experts are also more adept at distilling the essential from the superfluous.

2.5.1.3 Meaningful Perception

Byrnes (2008) notes that when experts perceive a situation, they view it holistically, whereas novices see it as a collection of disparate parts. For instance, a novice, junior soccer player may see only an adjacent teammate or opposing player, whereas an experienced, expert soccer player anticipates multiple plays beyond the next one.

2.5.1.4 Reflective, Qualitative Problem Analysis

Novices tend to jump right into a situation and begin to problem-solve. Experts take the time to reflectively gather information and mentally represent the problem. Novices use stereotyped and simplistic approaches to problem-solving, whereas experts mentally represent the problem and analyze it from multiple angles. Although novices initiate the problem-solving process sooner, experts, once they start, complete it much more expediently.

2.5.1.5 Effective Strategy Construction

Experts tend to construct and deploy more effective problem-solving strategies than do novices.

2.5.1.6 Postanalysis Speed and Accuracy

Experts spend considerably more time on the analysis phase of problem-solving than do novices. However, they are much faster in their problem-solving execution. This is likely linked to their greater experience with the problem-solving process in a particular domain. Byrnes (2008) notes that, when required to speed up their execution of a problem-solving approach, novices make many more errors than experts.

2.5.1.7 Working Memory

On account of their greater depth of knowledge, as well as the speed and accuracy of their problem-solving, experts have greater working memory capacity in their particular discipline than do novices. This frees up mental capacity for more in-depth elaboration and problem-solving.

2.5.1.8 Self-monitoring Skills

Experts are much more likely than novices to recognize when they are confused or have committed an error. This is likely related to their capacity to mentally represent problems in a more elaborate way than novices do.

2.5.2 How to Develop Expert Clinical Judgment

Now that we know about the characteristics of experts versus novices, it will be important to move to a discussion of how expertise is developed (Fig. 2.3).

Fig. 2.3 Components of expertise development in psychoeducational assessment and report writing (proper initial education and training in the discipline; 10 years of experience with at least 3.5 hours of daily engagement; access to mentoring and consultation post graduation; continuous renewal of the knowledge base; scientific decision-making and critical thinking, including recognition of signs of pseudoscience; humility in the decision-making process that assumes a preponderance of the evidence standard)


One of the key components of expertise is experience within a particular domain. The literature indicates that it takes approximately a decade of experience in a particular profession, so long as the trade is engaged in for at least 3.5 h per day with guidance from a more experienced mentor. Evaluative feedback is critically important for the development of expertise. I am not aware of a formal feedback mechanism for newly minted psychologists, but expertise development, like winning at chess or hitting a marksman's target, requires not only engagement in the task but also feedback regarding performance. New psychologists should deliberately collect feedback on their cases and honestly evaluate the results.

Second, there needs to be adequate training in a particular domain. The use of this book is a very good start for learning the important aspects of evidence-based psychoeducational assessment and report writing. It will furnish you with the needed tools and basic skill set to put you on the proper path toward expertise development. Your graduate training program and internship experience should offer additional training.

Third, it is necessary to receive mentorship from a seasoned practitioner once you complete your graduate training. If one does not have access to an expert with whom to consult, then one's clinical understanding will likely stagnate. Even seasoned experts frequently consult on tough cases prior to arriving at important decisions regarding the children they serve.

Fourth, it will be important to recognize the errors in thinking, biases in judgment, and markers of pseudoscience that can afflict decision-making capacity. This chapter offers a useful initial safeguard against such problems.

Fifth, it is necessary to be open to, yet skeptical about, new ideas and to continuously renew one's knowledge base. This requires reflection upon one's practice and awareness of trends in the literature within one's field. It also suggests a more humble, tentative style that promotes a preponderance of the evidence standard, rather than an overly confident style that suggests a more dichotomous, simplistic approach to arriving at conclusions within the psychoeducational assessment and report writing process.

Why is the latter characteristic important? Some of the literature in our field suggests that the clinical judgment of an experienced psychologist is no better than that of a novice psychologist (Garb, 1998, 2005). This is not confined to psychology: errors in clinical judgment have been observed among physicians, psychiatrists, social workers, and other disciplines, in areas ranging from psychotropic medication management (Pappadopulos et al., 2006) and medical intervention to the determination of whether a child has been sexually abused (Herman, 2005). Thus, consideration of the factors discussed (and depicted) above will start one on a path toward expertise, with the recognition that we should be ever vigilant about the need for continuous growth.

2.6 The Fourth Pillar of Evidence-Based Psychoeducational Assessment: A Well-Written, Organized, and Aesthetically Appealing Report that Is Properly Conveyed to Stakeholders

Much of the remainder of this book focuses on this particular pillar. Psychoeducational assessment and report writing entails collecting assessment data, integrating often disparate information, conceptualizing a clinical situation, arriving at a classification decision, and making appropriate recommendations, all synthesized together in a written document. The report should be aesthetically appealing, well-organized, and well-written, and then properly conveyed to stakeholders via an oral feedback conference. A well-written report that is properly conveyed to stakeholders is predicated upon a proper assessment and data collection process. The report itself is the end product of an assessment process that is based on a detailed and organized assessment approach. Chapter 3 provides an overview of the fourth pillar, while the remainder of the book is nested within this fourth pillar and in the context of the evidence-based psychoeducational assessment and report writing framework offered in this chapter.

Conclusion

Evidence-based psychoeducational assessment requires integrating individual clinical expertise, the use of current best evidence, and scientific decision-making when making decisions about youth. Neither clinical expertise nor the best available external evidence is singularly sufficient. Without clinical expertise, practice risks becoming tyrannized by evidence, for even excellent external evidence may be inapplicable or inappropriate for an individual child; without current best evidence, practice risks becoming rapidly out of date. Evidence-based psychoeducational assessment is not something relegated to those in the ivory tower; rather, practicing psychologists should consider the discussion offered in this chapter when engaging in the assessment and report writing process.

The concepts offered in this chapter establish the foundation not only for every topic in this book, but also for each component of the report, from the reason for referral and presentation of assessment results to conceptualization, classification, and recommendations. Each IDEA classification category and approach to classification will be reviewed for adherence to an evidence-based assessment framework, with the understanding that this process is ever evolving, and hopefully improving, as the field accumulates research and greater perspective on the practice of psychoeducational assessment and report writing.


References

Aaron, P. G. (1997). The impending demise of the discrepancy formula. Review of Educational Research, 67(4), 461–502.
Anderson, J. R. (1990). Cognitive psychology and its implications (2nd ed.). New York: Freeman.
Benson, N. F., Beaujean, A. A., McGill, R. J., & Dombrowski, S. C. (2018). Critique of the core-selective evaluation process. The DiaLog, 47(2), 14–18.
Benson, N. F., Beaujean, A. A., McGill, R. J., & Dombrowski, S. C. (2019). Rising to the challenge of SLD identification: A rejoinder. The DiaLog, 48(1), 17–18.
Bradley-Johnson, S., & Dean, V. J. (2000). Role change for school psychology: The challenge continues in the new millennium. Psychology in the Schools, 37, 1–5. https://doi.org/10.1002/(sici)1520-6807(200001)37:1%3c1:aid-pits1%3e3.0.co;2-q
Brunner, M., Nagy, G., & Wilhelm, O. (2012). A tutorial on hierarchically structured constructs. Journal of Personality, 80, 796–846. https://doi.org/10.1111/j.1467-6494.2011.00749.x
Burns, M. K., et al. (2016). Meta-analysis of academic interventions derived from neuropsychological data. School Psychology Quarterly, 31(1), 28–42.
Burns, M. K., Riley-Tillman, T. C., & Rathvon, N. (2017). Effective school interventions: Evidence-based strategies for improving student outcomes (3rd ed.). New York: Guilford Press.
Byrnes, J. P. (2008). Cognitive development and learning in instructional contexts (3rd ed.). New York, NY: Allyn and Bacon.
Canivez, G. L. (2017). Review of the Woodcock–Johnson IV. In J. F. Carlson, K. F. Geisinger, & J. L. Jonson (Eds.), The twentieth mental measurements yearbook (pp. 875–882). Lincoln, NE: Buros Center for Testing. http://marketplace.unl.edu/buros/
Canivez, G. L. (2008). Orthogonal higher order factor structure of the Stanford–Binet Intelligence Scales—Fifth Edition for children and adolescents. School Psychology Quarterly, 23(4), 533–541.
Canivez, G. L. (2019). Evidence-based assessment for school psychology: Research, training, and clinical practice. Contemporary School Psychology, 23(2), 194–200. https://doi.org/10.1007/s40688-019-00238-z
Canivez, G. L., Watkins, M. W., & Dombrowski, S. C. (2016). Factor structure of the Wechsler Intelligence Scale for Children—Fifth Edition: Exploratory factor analyses with the 16 primary and secondary subtests. Psychological Assessment, 28(8), 975–986.
Canivez, G. L., Watkins, M. W., & Dombrowski, S. C. (2017). Structural validity of the Wechsler Intelligence Scale for Children—Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests. Psychological Assessment, 29(4), 458–472.
Canivez, G. L., & Youngstrom, E. A. (2019). Challenges to the Cattell–Horn–Carroll theory: Empirical, clinical, and policy implications. Applied Measurement in Education, 32(3), 232–248. https://doi.org/10.1080/08957347.2019.1619562
Canivez, G. L., Dombrowski, S. C., & Watkins, M. W. (2018). Factor structure of the WISC–V for four standardization age groups: Exploratory and hierarchical factor analyses with the 16 primary and secondary subtests. Psychology in the Schools, 55, 741–769. https://doi.org/10.1002/pits.22138
Canivez, G. L., & McGill, R. J. (2016). Factor structure of the Differential Ability Scales–Second Edition: Exploratory and hierarchical factor analyses with the core subtests. Psychological Assessment, 28(11), 1475–1488.
Canivez, G. L., McGill, R. J., Dombrowski, S. C., Watkins, M. W., Pritchard, A. E., & Jacobson, L. A. (2020). Construct validity of the WISC-V in clinical cases: Exploratory and confirmatory factor analyses of the 10 primary subtests. Assessment, 27(2), 274–296.
Chapman, L. J., & Chapman, J. P. (1967). Genesis of popular but erroneous psychodiagnostic observations. Journal of Abnormal Psychology, 72, 193–204. https://doi.org/10.1037/h0024670


Chapman, L. J., & Chapman, J. P. (1969). Illusory correlation as an obstacle to the use of valid psychodiagnostic observations. Journal of Abnormal Psychology, 74(3), 271–280.
Chi, M. T. H., Hutchinson, J. E., & Robin, A. F. (1989). How inferences about novel domain-related concepts can be constructed by structured knowledge. Merrill-Palmer Quarterly, 35, 27–62.
Cressen, R. (1975). Artistic quality of drawings and judges' evaluations of the DAP. Journal of Personality Assessment, 39(2), 132–137. https://doi.org/10.1207/s15327752jpa3902_7
DiStefano, C., & Dombrowski, S. C. (2006). Investigating the theoretical structure of the Stanford–Binet (5th ed.). Journal of Psychoeducational Assessment, 24(2), 123–136.
Dombrowski, S. C. (2013a). Investigating the structure of the WJ-III Cognitive at school age. School Psychology Quarterly, 28(2), 154–169.
Dombrowski, S. C. (2013b). Exploratory bifactor analysis of the WJ-III Cognitive in adulthood via the Schmid–Leiman procedure. Journal of Psychoeducational Assessment, 32(4), 330–341.
Dombrowski, S. C. (2014a). Child Development Questionnaire (CDQ). Unpublished document.
Dombrowski, S. C. (2014b). Investigating the structure of the WJ-III Cognitive in early school age through two exploratory bifactor analysis procedures. Journal of Psychoeducational Assessment, 32(6), 483–494.
Dombrowski, S. C., Beaujean, A. A., McGill, R. J., Benson, N. F., & Schneider, W. J. (2019). Using exploratory bifactor analysis to understand the latent structure of multidimensional psychological measures: An example featuring the WISC-V. Structural Equation Modeling: A Multidisciplinary Journal, 26(6), 847–860.
Dombrowski, S. C., Canivez, G. L., Watkins, M. W., & Beaujean, A. A. (2015). Exploratory bifactor analysis of the Wechsler Intelligence Scale for Children—Fifth Edition with the 16 primary and secondary subtests. Intelligence, 53, 194–201.
Dombrowski, S. C., Canivez, G. L., & Watkins, M. W. (2017). Reliability and factorial validity of the Canadian Wechsler Intelligence Scale for Children—Fifth Edition. International Journal of School & Educational Psychology, 6(4), 252–265.
Dombrowski, S. C., McGill, R. J., & Canivez, G. L. (2018). An alternative conceptualization of the theoretical structure of the Woodcock-Johnson IV Tests of Cognitive Abilities at school age: A confirmatory factor analytic investigation. Archives of Scientific Psychology, 6, 1–13. https://doi.org/10.1037/arc0000039
Dombrowski, S. C., Gischlar, K., & Mrazik, M. (2011). Assessing and treating low incidence/high severity psychological disorders of childhood. New York: Springer Science.
Dombrowski, S. C., Golay, P., McGill, R. J., & Canivez, G. L. (2018). Investigating the theoretical structure of the DAS-II core battery at school age using Bayesian structural equation modeling. Psychology in the Schools, 55(2), 190–207.
Dombrowski, S. C., McGill, R. J., & Canivez, G. L. (2017). Exploratory and hierarchical factor analysis of the WJ IV Cognitive at school age. Psychological Assessment, 29, 394–407. https://doi.org/10.1037/pas0000350
Dombrowski, S. C., McGill, R. J., & Morgan, G. B. (2019). Monte Carlo modeling of contemporary intelligence test (IQ) factor structure: Implications for IQ assessment, interpretation, and theory. Assessment. https://doi.org/10.1177/1073191119869828
Dombrowski, S. C., McGill, R. J., Canivez, G. L., & Peterson, C. H. (2019). Investigating the theoretical structure of the Differential Ability Scales—Second Edition through hierarchical exploratory factor analysis. Journal of Psychoeducational Assessment, 37(1), 91–104.
Dombrowski, S. C., Canivez, G. L., & Watkins, M. W. (2018). Factor structure of the 10 WISC–V primary subtests across four standardization age groups. Contemporary School Psychology, 22, 90–104. https://doi.org/10.1007/s40688-017-0125-2
Dombrowski, S. C., & Watkins, M. W. (2013). Exploratory and higher order factor analysis of the WJ-III full test battery: A school-aged analysis. Psychological Assessment, 25(2), 442–455.
Dombrowski, S. C., Watkins, M. W., & Brogan, M. J. (2009). An exploratory investigation of the factor structure of the Reynolds Intellectual Assessment Scales (RIAS). Journal of Psychoeducational Assessment, 27, 279–286.


Ericsson, K. A. (1996). The road to excellence: The acquisition of expert performance in the arts, science, sports, and games. Mahwah, NJ: Erlbaum.
Ericsson, K. A. (2003). The acquisition of expert performance as problem solving: Construction and modification of mediating mechanisms through deliberate practice. In J. E. Davidson & R. J. Sternberg (Eds.), The psychology of problem solving (pp. 31–83). New York: Cambridge University Press.
Etzioni, A., & Kahneman, D. (2012). The end of rationality? Contemporary Sociology, 41(5), 594–597.
Feifer, S. G., Nader, R. G., Flanagan, D. P., Fitzer, K. R., & Hicks, K. (2014). Identifying specific reading disability subtypes for effective educational remediation. Learning Disabilities: A Multidisciplinary Journal, 20(1), 18–30.
Garb, H. N. (1998). Studying the clinician: Judgment research and psychological assessment. Washington, DC: American Psychological Association.
Garb, H. N. (2005). Clinical judgment and decision-making. Annual Review of Clinical Psychology, 1, 67–89.
Geisinger, K. F. (2019). Empirical considerations on intelligence testing and models of intelligence: Updates for educational measurement professionals. Applied Measurement in Education, 32(3), 193–197. https://doi.org/10.1080/08957347.2019.1619564
Glaser, R., & Chi, M. T. H. (1988). Overview. In M. T. H. Chi, R. Glaser, & M. Farr (Eds.), The nature of expertise. Hillsdale, NJ: Erlbaum.
Glutting, J. J., McDermott, P. A., Watkins, M. W., Kush, J. C., & Konold, T. R. (1997). The base rate problem and its consequences for interpreting children's ability profiles. School Psychology Review, 26, 176–188.
Goodenough, F. (1926). Measurement of intelligence by drawings. New York: World Book Co.
Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical–statistical controversy. Psychology, Public Policy, and Law, 2(2), 293–323. https://doi.org/10.1037/1076-8971.2.2.293
Gustafsson, J. E., & Åberg-Bengtsson, L. (2010). Unidimensionality and interpretability of psychological instruments. In S. Embretson (Ed.), Measuring psychological constructs: Advances in model-based approaches (pp. 97–121). Washington, DC: American Psychological Association.
Gutkin, T. B., & Nemeth, C. (1997). Selected factors impacting decision making in prereferral intervention and other school-based teams: Exploring the intersection between school and social psychology. Journal of School Psychology, 35(2), 195–216. https://doi.org/10.1016/s0022-4405(97)00005-8
Hancock, G. R., & Mueller, R. O. (2001). Rethinking construct reliability within latent variable systems. In R. Cudeck, S. Du Toit, & D. Sörbom (Eds.), Structural equation modeling: Present and future (pp. 195–216). Lincolnwood, IL: Scientific Software International.
Herman, S. (2005). Improving decision making in forensic child sexual abuse evaluations. Law and Human Behavior, 29(1), 87–120. https://doi.org/10.1007/s10979-005-1400-8
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41, 3–21.
Hunsley, J., & Mash, E. J. (2007). Evidence-based assessment. Annual Review of Clinical Psychology, 3, 29–51.
Hunsley, J., & Mash, E. J. (Eds.). (2008). A guide to assessments that work. New York, NY: Oxford University Press. https://doi.org/10.1093/9780195310641.001.0001
Hunsley, J., & Mash, E. J. (2011). Evidence-based assessment. In D. H. Barlow (Ed.), The Oxford handbook of clinical psychology (pp. 76–97). New York, NY: Oxford University Press.
Hunter, L. (2003). School psychology: A public health framework III. Managing disruptive behavior in schools: The value of a public health and evidence-based perspective. Journal of School Psychology, 41, 39–59. https://doi.org/10.1016/s0022-4405(02)00143-7


Kahneman, D. (2011). Thinking, fast and slow. London: Penguin Books.
Kranzler, J. H., Floyd, R. G., Benson, N., Zaboski, B., & Thibodaux, L. (2016). Classification agreement analysis of cross-battery assessment in the identification of specific learning disorders in children and youth. International Journal of School & Educational Psychology, 4, 124–136. https://doi.org/10.1080/21683603.2016.1155515
Kratochwill, T. R. (2006). Evidence-based interventions and practices in school psychology: The scientific basis of the profession. In R. F. Subotnik & H. J. Walberg (Eds.), The scientific basis of educational productivity (pp. 229–267). Greenwich, CT: Information Age.
Kratochwill, T. R. (2007). Preparing psychologists for evidence-based school practice: Lessons learned and challenges ahead. American Psychologist, 62, 829–843. https://doi.org/10.1037/0003-066x.62.8.829
Kratochwill, T. R., & Shernoff, E. (2003). Evidence-based practice: Promoting evidence-based interventions in school psychology. School Psychology Quarterly, 18, 389–408. https://doi.org/10.1521/scpq.18.4.389.27000
Kratochwill, T. R., & Stoiber, K. (2000). Uncovering critical research agendas for school psychology: Conceptual dimensions and future directions. School Psychology Review, 29, 591–603.
Lee, C. M., & Hunsley, J. (2015). Evidence-based practice: Separating science from pseudoscience. The Canadian Journal of Psychiatry, 60(12), 534–540.
Lilienfeld, S. O., & Landfield, K. (2008). Science and pseudoscience in law enforcement: A user-friendly primer. Criminal Justice and Behavior, 35(10), 1215–1230. https://doi.org/10.1177/0093854808321526
Lilienfeld, S. O., Ammirati, R., & David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50(1), 7–36. https://doi.org/10.1016/j.jsp.2011.09.006
Lilienfeld, S. O., Lynn, S. J., & Lohr, J. M. (Eds.). (2003). Science and pseudoscience in clinical psychology. New York, NY: Guilford Press.
Lilienfeld, S. O., Lynn, S. J., Namy, L., & Woolf, N. (2011). Psychology: From inquiry to understanding (2nd ed.). Boston: Allyn & Bacon.
Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: A list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.01100
Lilienfeld, S. O., Wood, J. M., & Garb, H. N. (2006). Why questionable psychological tests remain popular. The Scientific Review of Alternative Medicine, 10, 6–15.
Lilienfeld, S. O., Lynn, S. J., O'Donohue, W. T., & Latzman, R. D. (2017). Epistemic humility: An overarching educational philosophy for clinical psychology programs. The Clinical Psychologist, 70, 6–14.
Mather, N., & Schneider, D. (2015). The use of intelligence tests in the diagnosis of specific reading disability. In S. Goldstein, D. Princiotta, & J. A. Naglieri (Eds.), Handbook of intelligence: Evolutionary theory, historical perspective, and current concepts (pp. 415–434). New York: Springer.
Mattes, J. A. (1983). The Feingold diet: A current reappraisal. Journal of Learning Disabilities, 16(6), 319–323. https://doi.org/10.1177/002221948301600602
McDermott, P. A., Fantuzzo, J. W., & Glutting, J. J. (1990). Just say no to subtest analysis: A critique on Wechsler theory and practice. Journal of Psychoeducational Assessment, 8, 290–302. https://doi.org/10.1177/073428299000800307
McGill, R. J. (2018). Confronting the base rate problem: More ups and downs for cognitive scatter analysis. Contemporary School Psychology, 22, 384–393. https://doi.org/10.1007/s40688-017-0168-4
McGill, R. J., & Dombrowski, S. C. (2018). Factor structure of the CHC model for the KABC-II: Exploratory factor analyses with the 16 core and supplementary subtests. Contemporary School Psychology, 22(3), 279–293.


McGill, R. J., & Spurgin, A. R. (2017). Exploratory higher order analysis of the Luria interpretive model on the Kaufman Assessment Battery for Children—Second Edition (KABC-II) school-age battery. Assessment, 24(4), 540–552.
McGill, R. J., Dombrowski, S. C., & Canivez, G. L. (2018). Cognitive profile analysis in school psychology: History, issues, and continued concerns. Journal of School Psychology, 71, 108–121. https://doi.org/10.1016/j.jsp.2018.10.007
McGill, R. J., & Dombrowski, S. C. (2019). Critically reflecting on the origins, evolution, and impact of the Cattell-Horn-Carroll (CHC) model. Applied Measurement in Education, 32(3), 216–231. https://doi.org/10.1080/08957347.2019.1619561
Meichenbaum, D., & Lilienfeld, S. O. (2018). How to spot hype in the field of psychotherapy: A 19-item checklist. Professional Psychology: Research and Practice, 49(1), 22–30.
Miciak, J., Fletcher, J. M., Stuebing, K. K., Vaughn, S., & Tolar, T. D. (2014). Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. School Psychology Quarterly, 29, 21–37. https://doi.org/10.1037/spq0000037
Miciak, J., Taylor, W. P., Denton, C. A., & Fletcher, J. M. (2015). The effect of achievement test selection on identification of learning disabilities within a patterns of strengths and weaknesses framework. School Psychology Quarterly, 30, 321–334. https://doi.org/10.1037/spq0000091
Miciak, J., Taylor, W. P., Stuebing, K. K., & Fletcher, J. M. (2018). Simulation of LD identification accuracy using a pattern of processing strengths and weaknesses method with multiple measures. Journal of Psychoeducational Assessment, 36, 21–33. https://doi.org/10.1177/0734282916683287
Miller, D. N., & Nickerson, A. B. (2006). Projective assessment and school psychology: Contemporary validity issues and implications for practice. The California School Psychologist, 11, 73–84.
Murstein, B. I., & Mathes, S. (1996). Projection on projective techniques = pathology: The problem that is not being addressed. Journal of Personality Assessment, 66(2), 337–349. https://doi.org/10.1207/s15327752jpa6602_11
Nathan, P. E., & Gorman, J. M. (Eds.). (2015). A guide to treatments that work (4th ed.). New York, NY: Oxford University Press.
Nelson, J. M., Canivez, G. L., Lindstrom, W., & Hatt, C. V. (2007). Higher-order exploratory factor analysis of the Reynolds Intellectual Assessment Scales with a referred sample. Journal of School Psychology, 45(4), 439–456.
Pappadopulos, E., Woolston, S., Chait, A., Perkins, M., Connor, D. F., & Jensen, P. S. (2006). Pharmacotherapy of aggression in children and adolescents: Efficacy and effect size. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 15(1), 27–39.
Popper, K. R. (1959). The logic of scientific discovery. Oxford, England: Basic Books.
Reynolds, C. R. (2011). Perspectives on specialization in school psychology training and practice. Psychology in the Schools, 48, 922–930.
Reise, S. P. (2012). The rediscovery of bifactor measurement models. Multivariate Behavioral Research, 47, 667–696. https://doi.org/10.1080/00273171.2012.715555
Reise, S. P., Bonifay, W. E., & Haviland, M. G. (2013). Scoring and modeling psychological measures in the presence of multidimensionality. Journal of Personality Assessment, 95, 129–140. https://doi.org/10.1080/00223891.2012.725437
Rodriguez, A., Reise, S. P., & Haviland, M. G. (2016). Evaluating bifactor models: Calculating and interpreting statistical indices. Psychological Methods, 21, 137–150. https://doi.org/10.1037/met0000045
Sagan, C. (1995). Wonder and skepticism. Skeptical Inquirer, 19, 24–30.
Sattler, J. M. (2018). Assessment of children: Cognitive foundations and applications (6th ed.). La Mesa, CA: Jerome M. Sattler.
Schneider, W. J., & Kaufman, A. S. (2017). Let's not do away with comprehensive cognitive assessments just yet. Archives of Clinical Neuropsychology, 32, 8–20. https://doi.org/10.1093/arclin/acw104


Schultz, E. K., & Stephens-Pisecco, T. L. (2019). Exposing educational propaganda: A response to Benson et al. (2018) "Critique" of C-SEP. The DiaLog, 48, 10–16.
Shinn, M. R., & Walker, H. M. (2010). Interventions for achievement and behavior problems in a three-tier model including RTI. Bethesda, MD: National Association of School Psychologists.
Starr, B. J., & Katkin, E. S. (1969). The clinician as an aberrant actuary: Illusory correlation and the incomplete sentences blank. Journal of Abnormal Psychology, 74(6), 670–675. https://doi.org/10.1037/h0028466
Sloman, S. (2012). The battle between intuition and deliberation. American Scientist, 100, 73–75.
Stein, A. (2013). Are people probabilistically challenged? Michigan Law Review, 111, 855–875.
Stoiber, K., & Kratochwill, T. R. (2000). Empirically supported interventions and school psychology: Rationale and methodological issues—Part I. School Psychology Quarterly, 15, 75–105. https://doi.org/10.1037/h0088780
Stuebing, K. K., Fletcher, J. M., Branum-Martin, L., & Francis, D. J. (2012). Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41, 3–22.
Tavris, C., & Aronson, E. (2007). Mistakes were made (but not by me): Why we justify foolish beliefs, bad decisions, and hurtful acts. Orlando, FL: Harcourt.
Taylor, W. P., Miciak, J., Fletcher, J. M., & Francis, D. J. (2017). Cognitive discrepancy models for specific learning disabilities identification: Simulations of psychometric limitations. Psychological Assessment, 29, 446–457. https://doi.org/10.1037/pas0000356
Thomas, G. V., & Jolley, R. P. (1998). Drawing conclusions: A re-examination of empirical and conceptual bases for psychological evaluations of children from their drawings. British Journal of Clinical Psychology, 37(2), 127–139.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232. https://doi.org/10.1016/0010-0285(73)90033-9
Walker, H. M. (2004). Commentary: Use of evidence-based intervention in schools: Where we've been, where we are, and where we need to go. School Psychology Review, 33(3), 398–407.
Wasserman, J. D. (2019). Deconstructing CHC. Applied Measurement in Education, 32(3), 249–268. https://doi.org/10.1080/08957347.2019.1619563
Watkins, M. W. (2000). Cognitive profile analysis: A shared professional myth. School Psychology Quarterly, 15, 465–479. https://doi.org/10.1037/h0088802
Watkins, M. W. (2003). IQ subtest analysis: Clinical acumen or clinical illusion? Scientific Review of Mental Health Practice, 2, 118–141.
Watkins, M. W. (2009). Errors in diagnostic decision making and clinical judgment. In T. B. Gutkin & C. R. Reynolds (Eds.), Handbook of school psychology (4th ed., pp. 210–229). New York, NY: Wiley.
Watkins, M. W., Glutting, J. J., & Youngstrom, E. A. (2005). Issues in subtest profile analysis. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 251–268). New York: Guilford.
Weiner, I. B. (1996). Some observations on the validity of the Rorschach Inkblot Method. Psychological Assessment, 8(2), 206–213. https://doi.org/10.1037/1040-3590.8.2.206
Williams, J., & Miciak, J. (2018). Adopting costs associated with processing strengths and weaknesses methods for learning disabilities identification. School Psychology Forum, 12, 17–29.
Youngstrom, E. A., & Van Meter, A. (2016). Empirically supported assessment of children and adolescents. Clinical Psychology: Science and Practice, 23(4), 327–347.

Chapter 3

The Psychoeducational Assessment and Report Writing Process: A General Overview

Stefan C. Dombrowski

3.1 Overview

Explicating the fourth pillar of psychoeducational assessment and report writing, what follows is an overview of the mechanics of the psychoeducational assessment and report writing process for children and adolescents. The assessment of children and adolescents requires a specialized skill set that can be introduced through reading a book on the topic, but actual experience with the process is needed to become an expert in it (and expertise is elusive, if ever fully attained). As you progress through your practica, clinic, internship, and career, you will become increasingly competent and self-assured in the psychoeducational assessment process. Within this chapter, an overarching framework of the psychoeducational assessment and report writing process will be presented. The comprehensive psychoeducational assessment process requires the use of data-based decision-making and the gathering of multiple sources of data via multiple methods of assessment. In some respects, you are undertaking psychological detective work, attempting to uncover as much information as possible about a child so that you can confidently make an informed, data-based decision using evidence-based procedures.

3.2 Steps in the Psychoeducational Assessment and Report Writing Process

There are generalized steps that should be undertaken in the psychoeducational assessment process. Figure 3.1 furnishes a visual overview, with each step described in general below.


Fig. 3.1 Steps in the psychoeducational assessment and report writing process: (1) obtain signed consent prior to starting the assessment; (2) gather and review relevant educational, medical, and psychological records; (3) observe the child, conduct interviews, and gather rating forms and questionnaires; (4) engage in norm-referenced, informal, and curriculum-based assessment; (5) integrate, conceptualize, classify, and recommend; (6) write the report and furnish feedback during an in-person meeting

1. Obtain signed consent prior to starting the assessment. This is a critically important step that should not be overlooked. Do not begin the assessment process until you have signed consent from a legal guardian, even if your supervisor, agency, or school district indicates that they "have the signature" or that the legal guardian "will sign" the consent form. When both parents are considered legal guardians in the case of divorce or separation, it is generally legally permissible to begin the process with just one party's consent, particularly if doing so serves the best interests of the child. Of course, good practice would be to obtain signed consent from both legal guardians.

2. Gather and review relevant educational, medical, and psychological records. Consistent with the requirement to gather information from multiple sources of data, you should gather relevant educational, medical, and psychological records from the caregiver. This may include information from the following sources:

• Child's therapist
• Pediatrician's medical records
• Report cards
• Prior psychological reports
• Functional behavioral assessments
• Behavior intervention plans
• Behavioral write-ups
• Section 504 plans
• IEP documents.

Once these documents are collected, you will need to review them and then reconcile them with additional information that will be, or has been, collected during your interviews with parents, teachers, and other caregivers involved with the child. For instance, perhaps your collection and review of medical records will reveal a medical condition that better accounts for the child's symptoms of autism or intellectual disability. Or, perhaps your review of behavioral write-ups identifies a particular environment or pattern wherein the child is experiencing difficulties (e.g., the transition between recess and lunch). This type of information (among others) will need to be sorted out.


3. Observe the child, conduct interviews, and gather rating forms and questionnaires. You will need to observe the child in the classroom and school setting. You will also need to observe the child during the testing session, as discussed further in Chap. 4. If a developmental history questionnaire (i.e., the Child Development Questionnaire (CDQ); Dombrowski, 2014; see Chap. 4) or behavior rating forms (e.g., BASC-3) are distributed, then these should be collected and reviewed. Additionally, it will be important to interview legal guardians and other collateral contacts (i.e., teachers, support staff, behavioral specialists) to ascertain their perspective regarding the child's functioning across all domain areas (cognitive, academic, behavioral, social, emotional, communication, and adaptive). This process of interviewing is discussed further in Chap. 4.

4. Engage in norm-referenced, informal, and curriculum-based assessment. Evaluate the child's cognitive, academic, behavioral, social-emotional, and adaptive functioning using a variety of norm-referenced and informal measures (e.g., curriculum-based assessment). You also need to observe the child during the testing session and interview the child; the interview of the child may occur either before or after the testing session. Testing demands a specific skill set, including knowledge of the evidence-based testing framework discussed in Chap. 2 and basic knowledge of the psychometric properties of an instrument, including whether it measures the constructs it purports to measure.

5. Integrate, conceptualize, classify, and recommend (ICCR). Following the scoring of standardized and informal assessment instruments, and the gathering of relevant background, interview, and observational data from multiple techniques and informants (e.g., child, parents, teachers), you must integrate the information collected. Integration of data sources is a necessary prerequisite to writing the conceptualization, classification, and recommendations sections of a psychoeducational report. In short, I like to use an acronym—ICCR—to refer to this process (Fig. 3.2). This is arguably the most difficult aspect of the assessment and report writing process. Detailed guidelines for integrating salient information and writing the conceptualization, classification, and recommendation sections of the report are offered in Chap. 10.

6. Write the report and furnish feedback during an in-person meeting. You are now in the position to write the report and then report the findings to caregivers. This is often called a feedback session or report conference. It may be held with the parent alone, which is common in a clinic-based approach (and sometimes in a school setting), or, more likely in the schools, in the presence of the multidisciplinary team. Please see Chap. 21 for the discussion regarding the provision of oral feedback.

Now that you have been introduced to a general framework for psychoeducational assessment, I would like to discuss additional aspects regarding the process of


Fig. 3.2 Approach to conceptualization and classification: Integrate → Conceptualize → Classify → Recommend (ICCR)

conducting a comprehensive assessment. This includes how to work with children, a brief overview of how to observe the child, and the nuances of administering standardized tests.

3.3 Working with Children

It is likely that you are pursuing a degree in school or clinical child psychology because you enjoy working with children (or you think you do). These dispositional traits are necessary but insufficient. The next step will be for you to gain experience working directly with children or indirectly by observing how others work with them. If you are in your first year of a graduate program in school or clinical child psychology, then you would be well served by observing other experienced practitioners who work with children. I recommend shadowing an experienced psychologist who engages in the assessment of children. I also recommend observing the classroom of a veteran elementary or middle school teacher (i.e., one with greater than ten years of experience). These individuals have an extensive repertoire of skills for effective management and support of children's behavior. These observational experiences will get you only so far. Equally valuable would be for you to gain direct work experience with children and adolescents. Some of you may already have had this experience, whether through coaching a sport, working at a summer camp, teaching in the classroom, working at a daycare, counseling


children, working as a babysitter, having your own children, or even being an older sibling. These experiences will give you greater insight into, and practice with, working with children, but that is still not enough. The assessment of children requires complex clinical skills and a hefty dose of practice. Research suggests that you will not fully develop your expertise until seven to ten years into your role as a school or clinical child psychologist (Ericsson & Smith, 1991). I would like to make one final comment. The first year you enter the schools, typically during your internship year, you will likely contract more illnesses than you have experienced in your adult life. I do not have data to support this assertion; it is just my experience and my observation of psychology students over the past two decades. Do not despair. Your immune system will strengthen and become more robust in subsequent years!

3.4 Observing the Child

The classroom and school observation is an important component of the psychoeducational evaluation. Research supports the incremental validity of classroom and test session observations (McConaughy et al., 2010). Chapter 5 discusses a framework for various observations in the classroom and school setting. An observation should be conducted prior to meeting the child to avoid the Hawthorne Effect. The Hawthorne Effect (Landsberger, 1958) is a phenomenon whereby individuals being observed behave differently, often more favorably, than when not being observed. A second observation occurs during the testing session itself. Be mindful, again, that within-session observational validity is limited (Glutting, Oakland, & McDermott, 1989). Children often behave in a more compliant way upon first meeting clinicians (i.e., the "honeymoon effect"; LaBarbera & Dozier, 1985) or may even act out when they normally are well behaved in another setting.

3.5 The Testing Environment and the Test Session

A large amount of your time will be spent in direct contact with the child collecting assessment data via formal standardized testing or informal assessment. This will require the consideration of several issues.


3.5.1 Establish a Working, Not a Therapeutic, Relationship with the Child

The assessment process requires a working relationship, not a therapeutic relationship. Nonetheless, the same processes and skills involved in establishing a therapeutic relationship will come to bear. Your initial contact with the child is important. You should greet the child warmly, sincerely, and with a smile. If the child is younger or of smaller stature, then you might consider squatting down to meet the child at their eye level. Keep in mind cultural considerations when using this approach, as some children from diverse cultural backgrounds may be apprehensive about returning the eye contact of the examiner. Ask the child's name. Be cautious about using affected, high-pitched speech. It is reasonable to use an elevated tone of voice with very young children, but children who are of school age may respond better to enthusiasm (e.g., "It is very nice to meet you, Jacob. I've been looking forward to working with you" as opposed to "Hi! How are you? Are you ready for some funsie onesie?"). This affected style with children over age seven will seem just as odd to the child as it does to the adult. Additionally, you will need to be cautious about being overly loud and assertive. This may serve to intimidate a timorous child. Your body language is also important. Generally, you should not overcrowd a child's space or tower over a child. This can be intimidating to anyone, let alone a child. While considering the above recommendations, you ought to avoid spending a protracted time attempting to establish rapport with the child. Your goal is a working relationship through which you can quickly, efficiently, validly, and reliably ascertain the information you need from the child. You are less concerned about rapport building than you would be in a counseling relationship with the child.

3.5.2 Take Advantage of the Honeymoon Effect

Generally speaking, you have a window of opportunity when first meeting children where they will be on their best behavior. This is known as the “honeymoon effect” (LaBarbera & Dozier, 1985). The chances are good that even children with significant behavioral difficulties will behave well upon first meeting you. Take advantage of this and begin the testing process quickly.

3.5.3 The Room Layout

The room where you evaluate the child should be free from distractions such as noise, smells, toys, and other tangibles that may distract or be of interest to the


child. It is also important to have an appropriately sized table and chairs. Some school districts may not have adequate space. For example, I have been asked to evaluate children in two locations that are worth noting: (1) next to the music room with thin walls and (2) in the janitor's office next to the cafeteria. I will first describe the music room's poor conditions. The room where I was conducting the testing was situated adjacent to the music room. It was spacious and had comfortable chairs and a table. For the first 30 min, the room seemed ideal. It was away from the noise of the main hallway and well lit and ventilated. The problems began approximately midway through a working memory test on a measure of cognitive ability. The music teacher played the piano as the rest of the class sang "I'm a Little Teapot." Unfortunately, the walls did not block the music, which spoiled the memory test. We had to relocate to another room. This location was clearly inappropriate, as it was not free from noisy distractions. The second location was not ideal, but it was acceptable. The testing was conducted during off-cafeteria hours in a janitorial alcove. There were not any distractions and the location was quiet, so the location, while not aesthetically appealing, sufficed, as it did not suffer from foul smells. Of course, the ideal environment would be a separate office with a table, chairs, and the ability to close the door to block out external distractions. You may not always have that ideal location, but you do need a quiet location free from noise and distractions. If not, you may jeopardize the validity of the testing session.

3.5.4 How to Start the Testing Session

You will need to be brief, direct, and honest. Discuss the rationale for testing and then move directly to administration. Many tests of cognitive ability have their own suggested introductions. I like to use the following to introduce the overall process as I walk the child down the hall to the testing room (or when I take the child to the clinic office). After providing this introduction and briefly addressing any questions the child might have, it is time to start the testing session.

3.5.5 Examiner Anxiety

Neophyte examiners will often feel a degree of anxiety about their skills in administering standardized assessment instruments. In fact, I have observed occasions where a child attains a low score on an instrument and the examiner attributes this score to an error in his or her administration. While it is accurate that new examiners make scoring errors (McDermott, Watkins, & Rhoad, 2014; Mrazik, Janzen, Dombrowski, Barford, & Krawchuk, 2012), examiners should not reflexively attribute examinee errors to administration problems. When standardized procedures are assiduously followed and protocol scoring errors are


minimized, then it is more likely that the obtained score reveals the level of the examinee’s ability rather than a presumed error in the examiner’s administration.

3.5.6 Be Well Prepared and Adhere to Standardized Directions

Psychologists in training (and any psychologist acquiring skills with a new instrument) need to rehearse thoroughly prior to administering an instrument. The time to practice a new assessment instrument is not when you administer it to a child who needs an evaluation. Doing so would violate the Test Standards and could jeopardize the validity of the scores. For this reason, psychologists should thoroughly practice the instruments that they will be using. Similarly, you will need to adhere rigorously to standardized directions. This eliminates much of the construct-irrelevant variance and assessor bias (i.e., drift from standardized procedures; Hoyt & Kerns, 1999; McDermott et al., 2014; Raudenbush & Sadoff, 2008) associated with the test. Departure from either of these strictures is contraindicated and suggests that a norm-referenced score should not be computed.
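In classical test theory terms (a standard psychometric illustration, not an analysis drawn from this chapter), the logic can be sketched briefly. An observed score decomposes into a true score plus error,

$$X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2},$$

and, to the extent that drift from standardized administration varies unsystematically across examinees, it behaves like an added error component $\sigma_D^2$:

$$\rho' = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2 + \sigma_D^2} < \rho_{XX'}.$$

This is one way to see why nonstandard administration attenuates reliability and undermines the meaning of a norm-referenced score.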

3.5.7 Triple-Check Protocol Scoring

I cannot tell you how many times I have encountered scoring errors in the protocols of examiners. A red flag may sometimes be an unusually high or low score on a subtest or an index area. However, variability in subtest scores should not be construed to mean that the administrator made an error, as subtest scores are typically less reliable than composite scores (Watkins & Canivez, 2004). But it should trigger the need to thoroughly scrutinize your scoring, as the literature is clear that practitioners and graduate students alike make mistakes when administering and scoring protocols (Mrazik, Janzen, Dombrowski, Barford, & Krawchuk, 2012).
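The psychometric point about subtest versus composite reliability can be made concrete with a short worked example (standard composite-reliability arithmetic, not a computation taken from this chapter). For a composite formed by summing subtests, reliability is

$$\rho_{cc} = 1 - \frac{\sum_i \sigma_i^2\,(1 - \rho_{ii})}{\sigma_c^2}.$$

With two standardized subtests ($\sigma_1 = \sigma_2 = 1$), each with reliability $\rho_{ii} = .80$ and an intercorrelation of $r = .60$, the composite variance is $\sigma_c^2 = 1 + 1 + 2(.60) = 3.2$, the summed error variance is $.20 + .20 = .40$, and thus $\rho_{cc} = 1 - .40/3.2 = .875$, which exceeds the reliability of either subtest alone. An aberrant single subtest score is therefore weaker evidence of a scoring error (or of anything else) than an aberrant composite.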

3.5.8 Breaks, Encouragement, and Questions

Younger children may need a break to stretch and get a drink of water. You will need to use your judgment when offering breaks. I would not ask a child every fifteen minutes whether he or she requires a break; otherwise, your testing session may extend for hours if not days. More reticent children may be acquiescent and require a predetermined break. Younger children may also require a bathroom break and may need to be prompted about this. More assertive and verbally impulsive


children may frequently ask for a break. During these situations, you may need to redirect the child back to the testing session. Some more active and garrulous children may ask a significant number of questions and get drawn off task. You would be wise to avoid responding frequently to the child's questions and instead prompt the child to remain on task. Much of standardized testing requires that the psychologist avoid praising the correctness of responses. This may feel awkward to the child and is different from what is experienced within the classroom, where teachers tend to offer effusive feedback and praise. As a result, a younger child may become discouraged or find the transaction somewhat strange. (It actually is a bit awkward to put on a poker face and avoid indicating whether the child answered correctly.) When I get the sense that a child is beginning to feel uneasy or to lack confidence from providing answers and not receiving feedback, I am reminded to offer effusive praise of effort. Another option that I sometimes use to circumvent frustration and upset is the following statement:

I really appreciate how hard you are working. I can see that you are wondering how you are doing on this test. This is not important. What is important is that you just try your best. I cannot give you answers or tell you whether your answer is correct. You only need to try your best.

The combination of this statement and frequent praise of effort often serves to alleviate any sense of frustration or anxiety felt by the child.

3.5.9 Debrief the Testing Process

At the end of the testing process, you should praise the child for working hard. For younger children, it is a good idea to offer a selection of stickers from which the child may choose. Most children appreciate this end-of-testing reward. You should also mention to the child the next steps in the process, which will include the production of a written report and a discussion of the report with his or her caregivers. This brings closure to the testing process for the child.

3.6 Conclusion

Within this chapter, you were introduced to the nuances of the psychoeducational assessment process. This chapter serves as a general framework, and there are additional skills that will need to be attained. Chapter 4 discusses interviewing and the gathering of additional sources of data, including rating forms, records, and background information. Chapter 5 furnishes an overview of how to conduct an observation. The remaining chapters in the book discuss report writing, including integrating information, writing a report, and providing oral feedback.

References

Dombrowski, S. C. (2014). Child Development Questionnaire (CDQ). Unpublished document.
Ericsson, K. A., & Smith, J. (1991). Toward a general theory of expertise: Prospects and limits. Cambridge, MA: Cambridge University Press.
Glutting, J. J., Oakland, T., & McDermott, P. A. (1989). Observing child behavior during testing: Constructs, validity, and situation generality. Journal of School Psychology, 27, 155–164.
Hoyt, W. T., & Kerns, M.-D. (1999). Magnitude and moderators of bias in observer ratings: A meta-analysis. Psychological Methods, 4, 403–424. https://doi.org/10.1037/1082-989X.4.4.403
LaBarbera, J. D., & Dozier, J. E. (1985). A honeymoon effect in child psychiatric hospitalization: A research note. Child Psychology & Psychiatry & Allied Disciplines, 26(3), 479–483. https://doi.org/10.1111/j.1469-7610.1985.tb01948.x
Landsberger, H. A. (1958). Hawthorne revisited. Ithaca, NY: Cornell University Press.
McConaughy, S. H., Harder, V. S., Antshel, K. M., Gordon, M., Eiraldi, R., & Dumenci, L. (2010). Incremental validity of test session and classroom observations in a multimethod assessment of attention deficit/hyperactivity disorder. Journal of Clinical Child & Adolescent Psychology, 39, 650–666. https://doi.org/10.1080/15374416.2010.501287
McDermott, P. A., Watkins, M. W., & Rhoad, A. (2014). Whose IQ is it? Assessor bias variance in high-stakes psychological assessment. Psychological Assessment, 26, 207–214.
Mrazik, M., Janzen, T. M., Dombrowski, S. C., Barford, S. W., & Krawchuk, L. L. (2012). Administration and scoring errors of graduate students learning the WISC-IV: Issues and controversies. Canadian Journal of School Psychology, 27(4), 279–290. https://doi.org/10.1177/0829573512454106
Raudenbush, S. W., & Sadoff, S. (2008). Statistical inference when classroom quality is measured with error. Journal of Research on Educational Effectiveness, 1, 138–154. https://doi.org/10.1080/19345740801982104
Watkins, M. W., & Canivez, G. L. (2004). Temporal stability of WISC-III subtest composite: Strengths and weaknesses. Psychological Assessment, 16, 133–138.

Chapter 4

Interviewing and Gathering Data

Stefan C. Dombrowski

4.1 Introduction

This chapter contains two sections. Presented first will be a discussion of the process of interviewing key stakeholders in the child's life, including teachers, paraprofessionals, and caregivers. This will be followed by a discussion of the process of collecting relevant background information, including prior psychological reports, medical records, IEP documents, grade reports, and any additional pertinent sources. A comprehensive child development questionnaire (CDQ; Dombrowski, 2014) will be presented and is a useful option for gathering information regarding children's background and development. Keep in mind that the interview and the collection of relevant background and developmental history data serve to augment the assessment process, whose ultimate goal is to understand a child's functioning, offer a classification decision, and then make intervention and other recommendations that support and improve the child's functioning.

4.2 Interviewing

The main purpose of conducting an interview with a child, parent, teacher, or other individual is to gather information about the child to help better understand the child's functioning (Dombrowski, 2015). Although the ultimate goal of conducting an assessment is to collect data to conceptualize an individual's functioning, make a classification decision, and recommend strategies to improve overall functioning—and the interview serves as an important source of information—there is a second


and perhaps equally important purpose of interviewing. The interview may well be the very first contact you establish with a caregiver.

4.2.1 Do Not Overlook the Need to Connect with Caregivers

A secondary, and perhaps equally important, purpose is to begin building rapport with caregivers. I have nothing but anecdotal evidence to support this contention, but I have found that psychologists who do not reach out to caregivers at the onset of the evaluation process and offer to answer questions about that process are at increased risk of having their reports contested, either during a feedback session or within a due process hearing. I cannot sufficiently emphasize that all the technical capability in the world cannot overcome the human element: caregivers need to feel heard and understood and to sense that the psychologist is concerned about the progress of the child. Thus, in addition to serving as an essential part of the data gathering process, the interview serves to connect with caregivers and establish rapport with them. Again, this should not be construed to mean that the psychologist should approach the assessment process as one would the therapeutic process. However, the conveyance of openness and concern for the child is critically important. The initial contact with caregivers via the interview may also serve an additional purpose. It may well be a time when caregivers ask questions of the clinician, including questions about the clinician's training, experience, and background. Thus, taking time to speak with caregivers may also serve as a form of credibility establishment, establishing one's expertise (Meyers, Parsons, & Martin, 1979) and ultimately allaying fears that the child's needs are not being properly addressed. It is therefore essential that the psychologist be respectful, patient, and open to answering questions that the caregiver may have. However, the psychologist is cautioned against offering any sort of classification or diagnosis during the interview stage of the assessment. This is not only premature but also unethical, as you have not completed all sources of data collection. Along with other methods of assessment and sources of information, data gathered from interviews serve to assist in conceptualizing a child's functioning, making a classification decision, and recommending intervention strategies and accommodations that will improve the child's functioning. Interviewing, like other methods of assessment, serves to furnish data regarding a child.


4.2.2 The Three Main Interviewing Approaches

There are three main approaches to interviewing: structured, semi-structured, and informal. The informal approach is constructivist in nature and allows the process to unfold based on the responses provided by the interviewee. It might begin with a question such as "Tell me why this assessment is being conducted." Follow-up questions are then offered depending upon the responses provided. The benefit of an informal approach is the ability to branch off on lines of inquiry based on the response offered. Structured interviewing, by contrast, requires a series of detailed questions. A semi-structured interview has a theme of questions that need to be addressed, but permits the clinician to deviate and explore a topic more fully with the caregivers. Structured and semi-structured interview formats designed specifically for psychoeducational assessment are unavailable (Loeber et al., 1991; Frick et al., 2005). The structured interview has the highest degree of validity and reliability, but requires the greatest time commitment (60–90 min) (Silverman & Ollendick, 2005). A structured interviewing style is straightforward (and perhaps easiest for an inexperienced clinician), but it can be quite time consuming depending upon the structured interview selected. Structured interviewing requires the clinician to ask an exhaustive series of questions of stakeholders (e.g., parents, teachers, and counselors) regardless of each question's relevance to the referral question or diagnostic issue. For instance, a clinician may ask questions about a child's ability to participate in activities of daily living even though the child has solid cognitive abilities and the referral question was about the child's mathematical understanding. This approach casts a wide net, seeking to be as comprehensive as possible and to leave no stone unturned. The structured interview process often uses instruments such as the DISC-IV (Shaffer et al., 2000), which is linked to DSM diagnostic criteria and is intensely comprehensive. Both structured and semi-structured interview formats are available, but they are geared toward clinical (e.g., DSM) diagnoses. Structured and semi-structured interview formats based on DSM diagnostic categories can certainly be helpful for the exploration of emotional and behavioral conditions that may not be assessed on behavior rating scales, but keep in mind that psychoeducational classifications within the school are not based on the DSM, but rather on IDEA classification categories. A structured or semi-structured clinical interview may have its place if one is considering an OHI (via ADHD) or autism spectrum classification, or when attempting to distinguish social maladjustment from emotional disturbance, but IDEA/state special education classification criteria supersede DSM diagnostic criteria when determining special education eligibility (Zirkel, 2011).


4.3 The Psychoeducational Interview Format

Because there is a lack of structured or semi-structured interview formats for the psychoeducational evaluation of children, I would like to offer the following semi-structured format for interviewing caregivers and teachers regarding a child's functioning. A format for interviewing students is also presented. Please note that the semi-structured psychoeducational interview at the end of this chapter may be printed and used in your evaluation process.

4.3.1 Caregiver/Parent Format

Some caregivers have tremendous insight into their children's functioning. Others offer fewer details about their children's functioning. My suggestion would be to send along a developmental history questionnaire, along with the permission to evaluate, so that the caregiver can complete it to the best of his or her ability. The next section of this chapter offers a detailed child development questionnaire that is available to all purchasers of this book. Receipt of a structured developmental history questionnaire will also permit the clinician to tailor the interview to specific topics of concern that require additional elaboration following review of the questionnaire. If one is working in a school-based setting, then the interview process may occur in person or via telephone. Again, it is important to connect with caregivers rather than merely emailing interview questions or surveys to complete, or having no contact at all. This serves the twofold purpose of establishing rapport and gathering data regarding the child's functioning. Keep in mind that some caregivers may be anxious about the assessment process, may have questions about specific steps, and may even have misperceptions about why the child is being evaluated or what will transpire if the child is found eligible for services. Thus, the interviewing of caregivers is much more than a simple data gathering process. It is transactional and serves as a source of information for both the psychologist and the caregiver. There is one caveat that I would like to mention. Never, ever make a prediction about whether a child will receive a diagnosis or qualify for services. This is inappropriate, unethical, and premature. Even if you are fairly certain that a child will be found eligible, do not make such a prediction. Instead, discuss how the assessment process will proceed and explain that when it is complete the psychologist will


review the report with the caregiver. You can certainly discuss the steps and components of the assessment process, but do not make predictions about diagnosis or eligibility. The following is a possible introductory statement when you first connect with a caregiver.

Introductory statement to caregiver
I am conducting the evaluation of your child. As part of this process, I always reach out to caregivers to get their perspective on their child's functioning in the academic, behavioral, and social-emotional arenas, as well as the child's areas of strength and areas of need. First, I would like to ask: Why is your child being evaluated? What do you think is the purpose of the evaluation?

Academic Functioning
In this next section, you want to get a sense from the caregivers of their child's academic progress in school, particularly in the areas of reading, writing, and mathematics (i.e., the three 'Rs'), as these are connected to specific learning disability eligibility. The following general statement could be posed to the caregiver. After that, additional, more fine-grained questions could be asked if so desired.

Academic Functioning
I would next like to inquire further about your child's academic progress. Could you comment on his/her progress in reading, writing, and mathematics?

The above query will begin the process of ascertaining information from the caregiver. After that, you can consider asking several more specific questions:

Please describe his or her grades in each class.
Does your child struggle with any academic subject? Please explain.
Do you know your child's guided reading level?
What is your child's handwriting like?
Please describe your child's progress in spelling.
Does your child find any topic particularly difficult or easy?
What are your child's areas of academic strength and areas of academic need?

Behavioral and Social-Emotional Functioning
The approach to the interview will depend upon the child's referral issue. Additional, more specific questioning may be necessary if the child has a disorder or condition that might make the child eligible under autism, emotional disturbance, or other health impairment. Otherwise, the following list of questions is a good start


when interviewing caregivers about their children's behavioral and social-emotional functioning. Keep in mind that you will not likely need to ask all of these questions of caregivers.

Social-Emotional and Behavioral Functioning
I would next like to discuss your child's behavioral and social functioning at school. We will cover many topics, including what your child likes about school, his or her friendships, and whether there are any behavioral or social difficulties.

The following list of questions is meant to serve as a guide for possible questions to ask of a caregiver within each category. It is illustrative rather than exhaustive.

Strengths/Interests
What are your child’s strengths and areas of interest? Does your child participate in any activities or sports? What are your child’s hobbies?

Rule Compliance
Does your child follow classroom and teacher rules? Does your child get into trouble at school? If so, for what? How does your child respond to teacher and staff requests to follow rules?

Peer Relations
How does your child get along with peers at school? Does your child have friends at school?

Mood/Affect
Describe your child’s mood. Does your child seem anxious, depressed, or worried? Does your child have anger issues? Is your child aggressive with other children or adults? Does your child have a bad temper or throw temper tantrums?

Reality Testing/Psychosis
Does your child ever indicate that he or she hears voices or sees things that other people do not? Does your child believe that he or she is someone or something he or she is not? Does your child talk about odd or unusual things? If so, what?


Dangerousness to Self or Others
Does your child ever state a desire to hurt him or herself? Does your child ever state that he or she wishes he or she were dead? Does your child try to hurt others?

Prior Psychiatric History
Does your child have an outside classification from a doctor or a psychologist? Is your child taking medication? If so, what type of medication (name, dosage, when started)? Has your child ever been hospitalized for a mental health condition?

Adaptive Functioning
Questioning a parent about adaptive functioning is particularly relevant if the child is suspected of struggling with an intellectual disability or an autism spectrum disorder. Of course, a norm-referenced instrument such as the Vineland-3 or ABAS-3 may also be worthwhile, if not necessary, in the case of suspected ID. When interviewing a parent, caregiver, or teacher, you will need to inquire about the child’s functioning in the conceptual, communication, and social domains. Also inquire about sensory responses, gross and fine motor skills, and stereotyped behaviors. Because of the range of adaptive tasks across the developmental spectrum, I would recommend that you place primary reliance on a norm-referenced adaptive behavior scale to gain insight into a child’s adaptive functioning. There are also exceptionally detailed curriculum-based behavioral and adaptive functioning measures that may be referenced and that are linked directly with intervention. Nonetheless, the data from these informal measures may be useful as a diagnostic tool when part of a comprehensive assessment. The following brief questions, along with a broad-band behavior rating scale, could serve as a screener to determine whether you should move to a comprehensive adaptive functioning evaluation.

Self-Care
Does your child struggle with everyday activities such as brushing teeth, tying shoelaces, getting dressed, toileting, and eating? Does your child struggle with keeping clean and bathing?

Daily Living
Can your child find his or her way around school without assistance? Does your child understand the difference between dangerous and safe situations?

Communication
Does your child have difficulty communicating with other children and adults?

Social
Does your child struggle when playing with other children? Does he or she show interest in playing with other children? When your child plays with other children, does he or she interact with them or just play alongside them? Does your child display any repetitive activities, interests, or actions?


Supplemental Caregiver Interview Questions
Prior to the interview, it would be optimal for caregivers to complete a structured developmental history questionnaire such as the child development questionnaire (CDQ; Dombrowski, 2014). However, this may not be practical in all cases. The questionnaire may not be returned, or time constraints may make it more practical to simply go through sections of the CDQ with caregivers. If time constraints preclude the completion of each component of the CDQ, then it will be important, at a minimum, to collect information regarding a child’s familial, prenatal, perinatal, early developmental, medical, recreational, behavioral, and educational history. The caregiver interview will not necessarily be as detailed as the child development questionnaire, but the following may suffice if a questionnaire cannot be completed or is not returned by the caregiver. This information will be added to the caregiver interview described at the beginning of this chapter.

Parental and Family History
1. Who provides primary caregiving for the child?
2. Is there a family history of learning or mental health issues?
3. Are there family medical issues?

Prenatal, Perinatal, and Early Childhood History
1. Were there any issues during pregnancy with your child?
2. Were there any complications during delivery?
3. Was the child born early or with low birthweight?
4. Did the child stay in the neonatal intensive care unit (NICU) following delivery?
5. Was the pregnancy healthy or were there illnesses during pregnancy?
6. Did your child experience colic?

Early Childhood Development
1. Were there any issues with your child’s early development (i.e., crawling, walking, talking, and learning to count or read)?
2. Did your child attend daycare or preschool, and were there any concerns expressed while in attendance?

Medical History
1. Are your child’s vision and hearing intact?
2. Does your child have a history of head trauma or concussions?
3. Does your child have a history of illness?
4. Does your child take any medications?
5. Has your child experienced any infections or other illnesses?

Strength-Based Questions
What are some activities your child likes to participate in? What is your child good at? What hobbies does your child enjoy? Does your child play any sports or musical instruments? What are some positives about your child?

4.3.2 Teacher Format

Ascertaining information from a teacher is critically important and is required by IDEA. In the early elementary school years, there is frequently only one teacher to interview. In the later grades, particularly middle school, there will be more (e.g., four academic teachers). I recommend ascertaining information from all of the academic teachers and perhaps from additional teachers, depending upon the referral. For instance, if the referral is for a behavioral issue, then it may be appropriate to ascertain behavioral and social-emotional information from the physical education or art teacher.

In-person versus emailed questionnaire interview
I recommend taking the time to get to know all teachers in your building if you work in a K-12 school. For the first few evaluations you conduct involving a given teacher, it is a good idea to speak with the teacher directly to ascertain his or her perspective regarding the student. If you are working in a clinic setting, then it will be important to speak with the teacher in lieu of, or after, the provision of a questionnaire. For psychologists working in a K-12 setting, I would posit that the first few interviews should be conducted in person or via telephone. After that, you may choose to conduct an online interview in which you send a series of questions for the teacher to address. Depending upon the depth of the response, you may or may not need to follow up with the teacher. For more complex cases, I recommend an in-person interview in which you speak with the teacher to clarify issues.

Email Interview Approach
To reiterate, an email approach to interviewing should not be used when you have never taken the time to meet a teacher. Also, new psychologists should not use this approach, as it takes time to become proficient at interviewing and at understanding the nuances of a student’s clinical situation. Additionally, the teacher or other professional with whom you are engaging should understand that what he or she incorporates into the email interview will be reproduced in the report. Finally, procedural safeguards must be put in place to ensure that the information transmitted online is secure and the student’s privacy is protected. Once these considerations are taken into account, an email interview may be used as an approach to gathering data regarding a child.


Email Interview Approach
Introductory statement to Teacher
I am conducting an evaluation of your student, [name]. As part of this process, I would like to ascertain your perspective about [name’s] progress at school in the academic, behavioral, social, emotional, and adaptive arenas, as well as the student’s areas of strength and areas of need. Please comment on the student’s progress in each of the following areas:
Academics (reading, writing, and mathematics; other subjects)
Behavior
Social-Emotional
Adaptive (if applicable)
Areas of Strength
Areas of Need

In-Person Interview Approach
The following structure may be worthwhile when conducting an interview with teachers. This represents a general structural approach and serves as a guide to be supplemented by additional questions as needed.

Introductory statement to Teacher
I am conducting an evaluation of [student’s name]. As part of this process, I would like to speak with you about the student’s progress at school in the academic, behavioral, social, emotional, and adaptive arenas, as well as the student’s areas of strength and areas of need.

Academic Functioning
In this next section, your goal is to get a sense from teachers of the student’s academic progress in school, particularly in the areas of reading, writing, and mathematics (i.e., the three ‘Rs’), as these are connected to specific learning disability eligibility. The following general statement could be posed to the teacher. After that, additional, more fine-grained questions could be asked if so desired.


Academic Functioning
I would next like to inquire further about [student’s name]’s academic progress. Could you comment on his/her progress in reading, writing, and mathematics?

Please describe his or her grades in your class.
Does this student struggle with any academic subject? Please explain.
What is the student’s guided reading level?
What is the student’s handwriting like?
What are the student’s areas of academic strength and areas of academic need?

Behavioral and Social-Emotional Functioning
The approach to the interview will depend upon the child’s referral issue. Additional, more specific questioning may be necessary if the child has a disorder or condition that might make the child eligible under autism, emotional disturbance, or other health impaired. Otherwise, the following list of questions is a good start when interviewing teachers about their students’ behavioral and social-emotional functioning. Keep in mind that you will not likely need to ask all questions of teachers.

Social-Emotional and Behavioral Functioning
I would next like to discuss [student’s name’s] behavioral and social functioning at school. We will cover many topics, including what the student likes about school, his or her friendships, and whether there are any behavioral or social difficulties.

The following list of questions is meant to serve as a guide for possible questions to ask of a teacher within each category. It is illustrative rather than exhaustive. More specific questioning may be necessary if the child has a specific disorder or condition that might make the child eligible under autism, emotional disturbance, or other health impaired. In the meantime, here is a list of questions that are appropriate for ascertaining a sense of the student’s social, emotional, and behavioral functioning.

General Strengths/Interests
What are the student’s strengths and areas of interest? Does the student participate in any activities or sports? What are the student’s hobbies?


Rule Compliance
Does the student follow classroom and teacher rules? Does the student get into trouble at school? If so, for what? How does the student respond to teacher and staff requests to follow rules?

Work Habits
Describe this student’s work habits. Describe the student’s homework completion and quality.

Peer Relations
How does the student get along with peers at school? Does the student have friends at school?

Mood/Affect
Describe the student’s mood. Does the student seem anxious, depressed, or worried? Does the student have anger issues? Is the student aggressive with other children or adults? Does the student have a bad temper or throw temper tantrums?

Reality Testing/Psychosis
Does the student ever indicate that he or she hears voices or sees things that other people do not? Does the student believe that he or she is someone or something he or she is not? Does the student talk about odd or unusual things? If so, what?

Dangerousness to Self or Others
Does the student ever state a desire to hurt him or herself? Does the student ever state that he or she wishes he or she were dead? Does the student try to hurt others?

Prior Psychiatric History
Does the student have an outside classification from a doctor or a psychologist? Is the student taking medication? If so, what type of medication (name, dosage, when started)? Has the student ever been hospitalized for a mental health condition?


Adaptive Functioning
Questioning a teacher about adaptive functioning is particularly relevant if the child is suspected of struggling with an intellectual disability or an autism spectrum disorder. Of course, a norm-referenced instrument such as the Vineland-3 or ABAS-3 may also be worthwhile, if not necessary, in the case of suspected ID. When interviewing a parent, caregiver, or teacher, you will need to inquire about the child’s functioning in the conceptual, communication, and social domains. Also inquire about sensory responses, gross and fine motor skills, and stereotyped behaviors. Because of the range of adaptive tasks across the developmental spectrum, I would recommend that you place emphasis on a norm-referenced adaptive behavior scale to gain insight into a child’s adaptive functioning. There are also exceptionally detailed curriculum-based behavioral and adaptive functioning measures that may be referenced and that are linked directly with intervention. Nonetheless, the data from these informal measures may be useful as a diagnostic tool when part of a comprehensive assessment. The following brief questions, along with a broad-band behavior rating scale, could serve as a screener to determine whether you should move to a comprehensive adaptive functioning evaluation.

Self-Care
Does the student struggle with everyday activities such as brushing teeth, tying shoelaces, getting dressed, toileting, and eating? Does the student struggle with keeping clean and bathing?

Daily Living
Can the student find his or her way around school without assistance? Does the student understand the difference between dangerous and safe situations?

Communication
Does the student have difficulty communicating with other children and adults?

Social
Does the student struggle when playing with other children? Does he or she show interest in playing with other children? When the student plays with other children, does he or she interact with them or just play alongside them? Does the student display any repetitive activities, interests, or actions?


Strength-Based Questions
What are some activities the student likes to participate in? What is the student good at? What hobbies does the student have? Does the student play any sports or musical instruments? What are some positives about the student?

4.3.3 Student Interview Format

When conducting a psychoeducational assessment, it is critically important to interview the child or adolescent. Younger children may be acquiescent (i.e., agreeable and reluctant to offer anything they think you would disagree with) or less capable of insight into their own functioning. In such cases, the younger child may perceive his or her progress favorably regardless of actual performance. Starting at around fourth grade, children gain greater insight into their internal functioning and are able to compare their abilities with those of other children. From that age forward, you may be able to gain greater insight into their self-perception of academic, behavioral, and social-emotional functioning. Still, the question remains as to how much emphasis should be placed upon an interview with a child, since some children report positive feelings despite facing severe struggles.

Introductory Statement
I would like to talk to you about how you like school, how you think you are doing in school, and about your thoughts and feelings.

General Questions about School
How do you like attending the Smith Public School? What is your favorite part of school? What do you like the least? How are you doing in school [or in each of your classes, for older students]? Do you have any difficulty with any subjects at school?


Academic Functioning
I would next like to inquire about your progress at school. Do you struggle with any academic subject? Please explain. How are you doing in reading? Do you know your guided reading level? How are you doing in writing? How are you doing in spelling? How are you doing in mathematics? Do you need help with any of your subjects? Describe your reading, writing, and mathematics skills. Have you ever received extra support for difficulties? What are your grades in each of your classes? [Question for older students]

Behavioral and Social-Emotional Functioning
I would now like to talk about your behavior and friendships at school. How are you with following classroom and teacher rules? Do you get into trouble at school? If so, for what? Do you have friends at school? If so, do you get along with them? What do you like to do with friends? Do you have any fears or worries? Do you ever feel down or very sad? If so, about what? Do you have temper or anger control problems? Do you ever hear voices or see things that other people do not? Are you taking any medication? If so, for what?

Strength-Based Questions
What are some activities you like to do? What are you good at? What do people say you are good at? Do you have any hobbies, or do you play any sports or musical instruments?

4.3.4 Interview Format for Mental Health Conditions that Might Impact Educational Functioning

If the child has received an outside diagnosis of autism, ADHD, schizophrenia, or a mood disorder, or if there is suspicion that symptoms associated with these diagnoses may have an adverse impact upon educational functioning, then the evaluation may result in an ED, OHI, or autism classification. Consequently, questions from either a structured interview or directly from the DSM-5 may assist with a special education classification decision. Keep in mind, however, that if you are working within a US public school, you are not arriving at a DSM classification but rather a special education classification, so be cautious about placing primary emphasis on instruments or interview procedures with specific linkage to the DSM. There is not necessarily a one-to-one correspondence between a special education classification and a DSM classification. Nonetheless, questions from either the DSM or a structured interview may be used to assist you in arriving at your special education classification decision. In Canada and other countries, practitioners do not have to concern themselves with IDEA classification categories and should instead reference the DSM. In Canada, or in a US clinic-based setting that offers a DSM or ICD classification, the use of a structured or semi-structured interview format is recommended, so long as the evaluation also considers pertinent IDEA classification categories where applicable. The following presents commonly experienced disorders of childhood—ADHD, ODD, disruptive mood dysregulation disorder, and autism spectrum disorder—with their associated DSM diagnostic criteria rephrased in the form of interview questions for the caregiver or teacher. Endorsement of sufficient items may be indicative of eligibility for special education so long as educational impairment can be documented. This approach to interviewing may be used with other possible disorders of childhood, including schizophrenia, bipolar disorder, and depression. Not all relevant psychological disorders of childhood discussed within the DSM-5 are listed below. Instead, I have listed a sampling of commonly experienced psychological disorders of youth and demonstrate how to implement a structured/semi-structured interview with direct linkage to the DSM. This approach can be used with any of the DSM disorders with which a child might struggle.


Suspected ADHD
The questions below are directly linked to ADHD symptoms from the DSM-5 and rephrased for readability as interview questions.
Introductory statement: “I’m now going to ask you some detailed questions about [Name’s] activity, attention, and focus.”
Inattention
a. Does [Name] struggle with giving close attention to details or make careless mistakes in schoolwork, work, or other activities? In other words, does [Name] ignore or miss details?
b. Does [Name] have difficulty maintaining attention when playing or doing other tasks? Does [Name] have difficulty staying focused during classroom activities, conversations, or when reading?
c. Does [Name] not seem to listen when spoken to directly? Does [Name] seem absent-minded, even without a clear distraction?
d. Does [Name] often struggle to follow through on directions and fail to finish schoolwork or chores? Does [Name] begin a task but quickly lose focus or become easily distracted?
e. Does [Name] often have difficulty organizing tasks and activities? Does [Name] have difficulty keeping materials and personal items organized? Does [Name’s] work appear sloppy and disorganized? Does [Name] have poor time management? Does [Name] fail to meet deadlines?
f. Does [Name] avoid tasks that require continuous mental effort, such as reading a book or writing an essay?
g. Does [Name] often lose things needed for tasks and activities (e.g., school materials, eyeglasses, pencils)?
h. Is [Name] often easily distracted by the external environment and stimuli unrelated to the activity with which he/she is engaged (or by unrelated thoughts, for older adolescents)?
i. Is [Name] often forgetful in daily activities (e.g., chores, running errands, returning calls, remembering appointments)?
Hyperactivity and Impulsivity
a. Does [Name] often fidget with hands or feet, or is he/she unable to sit still?
b. Does [Name] often leave his or her seat in situations when remaining seated is required?
c. Does [Name] often move around in situations where it is inappropriate? (This may be a feeling of restlessness in adolescents.)
d. Is [Name] often unable to engage in play activities quietly?
e. Is [Name] often energetic across contexts? Is [Name] unable or uncomfortable being still for an extended time, such as during classroom lessons or in restaurants?
f. Does [Name] often speak excessively?
g. Does [Name] often complete others’ sentences or not wait his/her turn to speak?
h. Does [Name] often have trouble waiting his or her turn (e.g., while waiting in the cafeteria line to buy lunch)?
Source: Adapted from the DSM-5, American Psychiatric Association (2013).


Suspected Autism
The questions below are aligned with DSM-5 symptoms of autism spectrum disorder and rephrased for readability as interview questions.
Introductory statement: “I’m now going to ask you some detailed questions about [Name’s] socialization, communication, and behavior.”
Social Communication and Social Interaction
The diagnostic criteria require persistent deficits in social communication and social interaction across multiple contexts, as manifested by the following, currently or by history (examples are illustrative, not exhaustive):
1. Does [Name] experience deficits in social-emotional exchanges? In other words, does [Name] struggle with typical back-and-forth conversation or fail to share interests and emotions with others? Does [Name] struggle with entering into and maintaining social situations? Does [Name] fail to mirror the emotions of others? Does [Name] struggle with initiating or responding to social situations?
2. Does [Name] experience deficits in nonverbal communicative behaviors used for social interaction, ranging, for example, from deficits in the understanding and use of gestures and a lack of appropriate eye contact and body language, to a complete lack of affect and nonverbal communication?
3. Does [Name] experience difficulty with developing and maintaining relationships, or display a lack of interest in peers? Does [Name] experience difficulties with adjusting his/her behavior to fit social contexts (e.g., sharing during play or in making friends)?
Restricted, Repetitive Patterns of Behavior, Interests, or Activities
Requires at least two of the following, currently or by history (examples are illustrative, not exhaustive):
1. Does [Name] display stereotypic or repetitive motor movements, use of objects, or speech (e.g., simple motor stereotypies, echolalia, lining up objects)?
2. Does [Name] insist on sameness or demonstrate a rigid adherence to routines or ritualized patterns of verbal/nonverbal behavior (e.g., extreme distress at minor changes, difficulties with transitions, need to eat the same food every day)?
3. Does [Name] have fixated interests that are abnormal in intensity or focus (e.g., a strong attachment to or obsession with unusual objects, or a perseverative interest)?
4. Is [Name] hyper- or hyporeactive to sensory input, or does he/she possess unusual interests in sensory aspects of the environment (e.g., apparent lack of sensation to pain, negative response to specific sounds or textures, excessive smelling or manipulation of objects, visual attraction to lights or movement)?
Source: Adapted from the DSM-5, American Psychiatric Association (2013).


Oppositional Defiant Disorder
The questions below are aligned with DSM-5 symptoms of oppositional defiant disorder and rephrased for readability as interview questions.
Introductory statement: “I’m now going to ask you some detailed questions about [Name’s] behavior and ability to listen to those in authority, including parents and teachers.”
Angry/Irritable Mood
1. Does [Name] frequently lose his or her temper?
2. Is [Name] often easily offended or annoyed?
3. Is [Name] often angry and hostile?
Argumentative/Defiant Behavior
4. Does [Name] often argue with authority figures or with adults?
5. Does [Name] often refuse to adhere to rules or requests from authority figures?
6. Does [Name] purposely bother others?
7. Does [Name] often blame others for his/her misbehavior?
Vindictiveness
8. Has [Name] been malicious or vengeful toward others at least twice within the past six months?
Source: Adapted from the DSM-5, American Psychiatric Association (2013).


Disruptive Mood Dysregulation Disorder
The questions below are aligned with DSM-5 symptoms of disruptive mood dysregulation disorder and rephrased for readability as interview questions.
Introductory statement: “I’m now going to ask you some detailed questions about the child’s behavior and ability to modulate his or her anger.”
1. Does [Name] experience frequent, severe verbal and/or behavioral temper outbursts that are grossly out of proportion in intensity or duration to the situation or provocation?
2. Are the temper outbursts inconsistent with his/her developmental level?
3. Do the temper outbursts occur, on average, three or more times per week?
4. Is [Name’s] mood between temper outbursts consistently irritable or angry on a daily basis, and is it observable by others (e.g., parents, teachers, peers)?
Decision for Psychologists:
5. Are criteria 1 to 4 present in at least two of three settings (i.e., at home, at school, with peers)? Is the behavior severe in at least one of these contexts?
6. The diagnosis should not be made for the first time before age 6 years or after age 18 years.
7. By history or observation, the age of onset of criteria 1 to 5 is before age 10 years.
8. There has not been a distinct period lasting more than 1 day during which the full symptom criteria, except for duration, for a manic or hypomanic episode have been met. Note: A developmentally appropriate mood elevation, such as occurs during the anticipation or experience of a highly positive event, should not be classified as a symptom of mania or hypomania.
9. The behaviors do not occur exclusively during an episode of major depressive disorder and are not better explained by another mental disorder (e.g., autism spectrum disorder, posttraumatic stress disorder, separation anxiety disorder, persistent depressive disorder [dysthymia]).
10. The symptoms are not attributable to the physiological effects of a substance or to another medical or neurological condition.
Source: Adapted from the DSM-5, American Psychiatric Association (2013).

There are additional childhood psychological disorders that could contribute to a child’s eligibility for a special education classification, including major depressive disorder. Not all are presented in this chapter; rather, the sampling above is meant to be illustrative and to serve as a guide for interviewing caregivers and teachers.

4.3.5 A Semi-structured Psychoeducational Interview

The following document may be used as a semi-structured approach to interviewing caregivers and teachers. It compiles all of the discussion above and is available to purchasers of this book. Please email me at [email protected] for a copy.

Appendix


Psychoeducational Interview – Caregiver/Teacher Form
Stefan C. Dombrowski, Ph.D.

Introductory statement to caregiver
I am conducting the evaluation of your child. As part of this process, I always reach out to caregivers to get their perspective on their child’s functioning in the academic, behavioral, and social-emotional arenas, as well as your child’s areas of strength and areas of need. First, I would like to ask you: why is your child being evaluated? What do you think is the purpose of the evaluation?

Introductory statement to Teacher
I am conducting an evaluation of your student, [name]. As part of this process, I would like to speak to you about [name’s] progress at school in the academic, behavioral, social, emotional, and adaptive arenas, as well as the student’s areas of strength and areas of need.

Academic Functioning
I would like to inquire about [Name’s] academic progress.

Caregivers
Could you comment on your child’s progress in reading, writing, and mathematics? What are the child’s areas of academic need? What are the child’s areas of academic strength?

Teachers
Could you comment on [name’s] progress in reading, writing, and mathematics? What are the child’s areas of academic need? What are the child’s areas of academic strength? Please describe his or her grades in your class. Does this child struggle with any academic subject? Please explain. What is the child’s guided reading level? What is the child’s handwriting like?

Behavioral and Social-Emotional Functioning
The approach to this aspect of the interview will depend upon the child’s referral issue. Additional, more specific questioning may be necessary if the child has a disorder or condition that might make the child eligible under autism, emotional disturbance, or other health impaired. Otherwise, the following list of questions is a good start when interviewing caregivers/teachers about a child’s behavioral and social-emotional functioning. Keep in mind that you will not likely need to ask all questions of teachers/caregivers.

Rule Compliance
Does the child follow classroom and teacher rules? Does the child get into trouble at school? If so, for what? How does the child respond to teacher and staff requests to follow rules?

Psychoeducational Interview – Caregiver/Teacher Form (page 2)
Stefan C. Dombrowski, Ph.D.

Behavioral and Social-Emotional Functioning (continued)

Peer Relations
How does the child get along with peers at school? Does the child have friends at school?

Mood/Affect
Describe the child’s mood. Does the child seem anxious, depressed, or worried? Does the child have anger issues? Is the child aggressive with other children or adults? Does the child have a bad temper or throw temper tantrums?

Strength-Based Questions
What are some activities the child likes to participate in? What is the child good at? What hobbies does the child enjoy? Does the child play any sports or musical instruments? What are some positives about the child?

Reality Testing/Psychosis
Does the child ever indicate that he or she hears voices or sees things that other people do not? Does the child believe that he or she is someone or something he or she is not? Does the child talk about odd or unusual things? If so, what?

Dangerousness to Self or Others
Does the child ever state a desire to hurt him or herself? Does the child ever state that he or she wishes he or she were dead? Does the child try to hurt others?

Prior Psychiatric History
Does the child have an outside classification from a doctor or a psychologist? Is the child taking medication? If so, what type of medication (name, dosage, when started)? Has the child ever been hospitalized for a mental health condition?

Adaptive Functioning (if there are suspected delays or suspected ID or ASD)

Self-Care
Does the child struggle with everyday activities such as brushing teeth, tying shoelaces, getting dressed, toileting, and eating? Does the child struggle with keeping clean and bathing?

Daily Living
Can the child find his or her way around school without assistance? Does the child understand the difference between dangerous and safe situations?


Psychoeducational Interview – Caregiver/Teacher Form (page 3)
Stefan C. Dombrowski, Ph.D.

Adaptive Functioning (continued)

Communication
Does the child have difficulty communicating with other children and adults?

Social
Does the child struggle when playing with other children? Does he or she show interest in playing with other children? When the child plays with other children, does he or she interact with them or just play alongside them? Does the child display any repetitive activities, interests, or actions?

Supplemental Caregiver Interview Questions (Caregivers only. All of this information is covered in the CDQ; complete this section only if the CDQ or another questionnaire has not been completed.)

Parental and Family History
1. Who provides primary caregiving for your child?
2. Is there a family history of learning or mental health issues?
3. Are there family medical issues?

Prenatal, Perinatal, and Early Childhood History
1. Were there any issues during pregnancy with your child?
2. Were there any complications during delivery?
3. Was the child born early or with low birthweight?
4. Did the child stay in the neonatal intensive care unit (NICU) following delivery?
5. Was the pregnancy healthy or were there illnesses during pregnancy?
6. Did your child experience colic?

Early Childhood Development
1. Were there any issues with your child’s early development (i.e., crawling, walking, talking, learning to count or read)?
2. Did your child attend daycare or preschool, and were there any concerns expressed while in attendance?

Medical History
1. Are your child’s vision and hearing intact?
2. Does your child have a history of head trauma or concussions?
3. Does your child have a history of illness?
4. Does your child take any medications?
5. Has your child experienced any infections or other illnesses?


Psychoeducational Interview – Caregiver/Teacher Form (page 4)
Specific Psychiatric Diagnoses (May be helpful when ruling in or out OHI or ASD)

Suspected ADHD for Classification of OHI
Introductory statement: “I’m now going to ask you some detailed questions about [Name’s] activity, attention, and focus.”
Inattention
a. Does [Name] struggle with giving close attention to details or make careless errors in schoolwork, work, or other activities? In other words, does the child ignore or miss details? Is [Name’s] work accurate when completed?
b. Does [Name] have difficulty maintaining attention in tasks or play? Does [Name] have difficulty staying focused during classroom activities, conversations with others, or when doing homework or reading?
c. Does [Name] not seem to listen when spoken to directly? Does [Name] seem absent-minded, even without a clear distraction?
d. Does [Name] often not follow through on directions and fail to finish schoolwork or chores? Does [Name] begin a task but quickly lose focus and become easily distracted?
e. Does [Name] often have difficulty with organizing and sequencing tasks and activities? Does [Name] have difficulty keeping materials and personal items organized? Does [Name’s] work appear disorganized and sloppy? Does [Name] have poor time management? Does [Name] fail to meet deadlines?
f. Does [Name] avoid, or is he/she unwilling to engage in, tasks that require sustained mental effort, such as reading a book or writing an essay?
g. Does [Name] often lose things needed for tasks and activities (e.g., school materials, eyeglasses, pencils)?
h. Is [Name] often easily distracted?
i. Is [Name] often forgetful in daily activities (e.g., chores; leaving water bottles and other items at school)?


Psychoeducational Interview – Caregiver/Teacher Form (page 5)

Suspected ADHD for Classification of OHI (continued)
Hyperactivity and Impulsivity
a. Does [Name] often fidget with hands or feet, or is he/she unable to sit still?
b. Does [Name] often leave his or her seat in situations when remaining seated is required?
c. Does [Name] often move around in situations where it is inappropriate?
d. Does [Name] have difficulty playing quietly?
e. Is [Name] often energetic across contexts? Is [Name] unable or uncomfortable being still for an extended time, such as during classroom lessons or in restaurants?
f. Does [Name] frequently speak excessively?
g. Does [Name] often complete others’ sentences or not wait his/her turn to speak?
h. Does [Name] often have difficulty waiting his or her turn (e.g., while waiting in the cafeteria line to buy lunch)?

Source: Adapted from the DSM-5, American Psychiatric Association (2013).


Psychoeducational Interview – Caregiver/Teacher Form (page 6)

Suspected Autism
Introductory statement: “I’m now going to ask you some detailed questions about [Name’s] socialization, communication, and behavior.”
Social Communication and Social Interaction
1. Does [Name] experience difficulties with socialization? In other words, does [Name] struggle with back-and-forth conversation or fail to share interests? Does [Name] struggle with entering into and maintaining social situations?
2. Does [Name] fail to mirror the emotions of others?
3. Does [Name] experience difficulties in nonverbal communicative behaviors used for social interaction, ranging from deficits in the understanding and use of gestures and a lack of appropriate eye contact and body language, to a complete lack of nonverbal communication?
4. Does [Name] display a lack of interest in peers?
5. Does [Name] experience difficulties with adjusting his/her behavior to fit various social contexts?
6. Does [Name] have difficulty making or playing with friends?
Restricted, Repetitive Patterns of Behavior, Interests, or Activities
1. Does [Name] display hand flapping, spinning of objects, echolalia, or lining up of objects?
2. Does [Name] insist on sameness or demonstrate a rigid adherence to routines and become distressed by minor changes to routines? Does [Name] experience difficulties with transitions?
3. Does [Name] have fixated interests that are abnormal in intensity or focus? Does [Name] have a strong attachment to unusual objects?
4. Is [Name] hyper- or hyporeactive to sensory stimuli?
5. Does [Name] have unusual interests in sensory aspects of the environment, such as a negative response to specific sounds or textures, excessive smelling or touching of objects, or an interest in lights or movement?

Source: DSM-5, American Psychiatric Association (2013).

4.4 Gathering Background and Additional Data

Information about a child may be collected via structured developmental history questionnaires, direct questioning of caregivers, teachers, and related personnel, review of records, and distribution and collection of rating forms. This section discusses these additional data sources.

4.4.1 Structured Developmental History Questionnaires

For the graduate student in school and clinical child psychology, it is recommended that a structured developmental history questionnaire be used to gather information about a child’s prenatal, perinatal, early childhood, medical, educational, and family history. Several questionnaires are appropriate for such purposes, including the BASC-3 Structured Developmental History (Reynolds & Kamphaus, 2015) and the recently created, very comprehensive Child Development Questionnaire (CDQ; Dombrowski, 2014). The CDQ is available from Dr. Dombrowski free of charge to purchasers of this book; please contact Dr. Dombrowski at [email protected] for a copy. When completed, structured developmental history questionnaires can provide valuable information about a child’s past and present functioning.

4.4.2 Child Development Questionnaire (CDQ)

The CDQ is presented below and is available to purchasers of this book, who may contact the author via email for a copy. It is one of the more comprehensive structured child developmental history questionnaires available in the marketplace.


Child Development Questionnaire (CDQ)
Stefan C. Dombrowski, Ph.D.

Demographic Information
Child’s Name: _____ Date: _____ Date of Birth: _____ Grade: _____
Sex Assigned at Birth: Male / Female   Current Gender Identity: _____
Race/Ethnicity (please select all that apply): White; Black or African-American; Native Hawaiian or Pacific Islander; Asian; Hispanic or Latino; Native American or Native Alaskan; Other: _____
Name of Person Completing Form: _____ Relationship to Child: Biological parent / Stepparent / Grandparent / Foster parent / Aunt or Uncle / Other: _____
Address: _____ Mobile Phone: _____ Home Phone: _____ Work Phone: _____
Primary Language(s) spoken in home: _____ Preferred language: _____
English a Second Language: Yes / No  Describe: _____
Reason for Evaluation: _____
At what age did you first become concerned? _____
What are your concerns about your child? (Check all that apply): Language/Speech/Communication; Cognitive/learning development; Emotional development; Medical; Motor development; Behavior problems; School performance

Parental Information
Mother’s Name: _____ Age: _____   Father’s Name: _____ Age: _____
Highest Grade Level Completed (each parent): 9th grade / High School / Some College / Associate’s Degree / Bachelor’s / Master’s or Juris Doctor / Doctorate (M.D., Ph.D., DDS, DVM)
Schools Attended: _____ Occupation: _____ Primary Language: _____ Secondary Language: _____

Family Constellation
Where and with whom does the child reside? _____
Are parents: Together / Separated
Individuals living with child (Name, Age, Sex, Relationship to Child): _____

Child Strengths, Interests and Hobbies
What are the child’s strengths? _____
What are the child’s interests and hobbies? _____
With what activities is the child involved? _____

Prenatal History
Number of prior pregnancies: _____
Was Mom under doctor’s care during pregnancy? _____ If so, how many visits? _____
Was the pregnancy: Planned / Planned with preconception counseling / Unplanned
Hospitalization during pregnancy: Yes / No  How long? _____
Difficulty getting pregnant with this child? Fertility treatment? Yes / No  Describe: _____
Multiple fetuses during pregnancy? Yes / No  Describe: _____
Infection(s) during pregnancy (please select all that apply): Influenza; Fever; Measles; Chicken pox; Herpes; HIV; Other sexually transmitted infections; Other infection
Maternal injury during pregnancy? Yes / No
Premature rupture of membranes? Yes / No  Why? _____
Substance use during pregnancy: Alcohol (frequency): _____; Cocaine (frequency): _____; Marijuana (frequency): _____; Heroin (frequency): _____; Other (frequency): _____
Cigarettes used during pregnancy? Yes / No  Frequency: _____
Prescription medication(s) taken during pregnancy? _____
Need for bedrest? Yes / No
Abnormal tests during pregnancy: Yes / No  Describe: _____
During pregnancy, mother had: High blood pressure; Diabetes; Sexually transmitted infection; Measles
Stress during pregnancy: Yes / No  Explain: _____
Abnormal ultrasound: Yes / No  Explain: _____
Other pregnancy problems: Yes / No  Explain: _____

Perinatal History
Age of mother at birth of child: _____  Age of father at birth of child: _____
Length of pregnancy (weeks): _____  Birthweight: ____ lbs ____ oz  Length at birth: ____ inches
Labor length (hours): _____
Delivery type: Vaginal / C-section / Breech / Forceps / Vacuum used
Antibiotics given during labor? Yes / No
Any placental issues? _____
Did the baby experience any of the following: Jaundice; Apnea; Bilirubin lights; Reflux; NICU stay; Blood problems; Breathing problems; Tube feeding; Need for ventilator; Low oxygen; Infection; Feeding/sucking problems; Intraventricular hemorrhage (bleeding in the brain)
Medication(s) given to baby: _____

Early Developmental History
Breastfed: Yes / No  Until what age? _____
Overall, baby was: Easy / Moderate / Difficult
Early developmental milestones (check Normal or Abnormal and include comments, if appropriate):
Gross motor: Turn over; Sit alone; Crawl; Pull to stand; Walk alone; Climb up stairs; Walk; Run; Hop; Pedal tricycle; Ride two-wheel bike; Throwing; Catching
Fine motor: Reach for objects; Zippers and buttons; Pincer grasp; Uses spoon; Removes clothing; Feeds with fingers; Holds cup; Prints name; Draws picture; Ties shoes
Language/Social: Smiles at others; Coos; Laughs; Babbles; Waves bye-bye; Says mama/dada; Says first word; Points to object; Follows command with gesture; States name; Uses complete sentences; Holds conversation; Loss of language; Shares emotion
Toilet trained: Yes / No  Age: _____  Urine: daytime ____ at night ____  Stool: daytime ____ at night ____
Problems toileting? Explain: _____

Behavioral and Temperamental History
Temperament: Activity level (Normal / High / Low); Mood (Normal / Happy / Sad); Task persistence (High / Medium / Low); Irritability (Low / Average / High); Fearfulness (Low / Average / High); Shyness (Low / Average / High); Adaptable/Flexible (Low / Average / High)
Sociability and play—does or did your child (check all that apply): Ignore children; Initiate play; Join play; Intrude on play; Observe them; Parallel play; Prefer adults; Play alone
Sports/Hobbies/Interests: Describe your child’s hobbies, sports, and interests: _____
Has there been a decrease in interest in participating in these activities recently? _____
Behavior—does your child have difficulty with any of the following (currently or in the past): Colic; Obsessions/compulsions; Sleeping (too little or too much); Eating; Unusual interests; Temper tantrums; Head banging; Unusual body movements; Hitting others; Biting; Forgetfulness; Distractibility; Attention span; Concentration; Hyperactivity; Task completion; Trouble with peers; Fears; Stealing; Destructiveness; Fighting; Anxiety; Need for same routine; Involvement with law; Irritability; Rituals; Trouble with siblings; Impulsivity; Fire setting; Self-stimulation; Excessive crying; Failure to thrive; Separating from parents; Sensory issues; Overreactivity to problems; Meeting new people; Calming down; Unhappiness; Needs parental supervision; Smoking; Drinking alcohol; Using illicit substances
Comments: _____

Parenting Style
How do you deal with behavioral issues? (Check all that apply): Ignoring; Explaining; Scolding; Spanking; Send child to room/time out; Removal of privileges; Reinforcement; Other: _____

Educational History
Attend daycare/preschool? Yes / No  At what age? _____
Attend kindergarten? Yes / No  At what age? _____
Any problems in daycare/preschool or kindergarten? Yes / No  Explain: _____
Does or did your child receive any of the following services (note when, frequency, and service provider): Early intervention; Feeding therapy; Physical therapy; Occupational therapy; Speech therapy; Behavior therapy; Other
Elementary/Middle School/High School (check if applicable and explain): Ever change schools? Repeated a grade? Skipped a grade? Difficulty with reading? Difficulty with math? Difficulty with writing? Receives poor grades? Been tested for special education? Frequent absences?

Medical History
Check Normal or Abnormal and comment: Head, eyes, ears, nose, throat; Vision screening (date: _____); Hearing screening (date: _____); Heart; Lungs; Kidney; Stomach/intestinal/constipation; Reflux; Asthma; Feeding issues; Eczema/skin issues; Seizures; Muscles; Cerebral palsy; Joints/bones; Nervous system; Exposure to toxins (lead; cigarette smoke; mold); Sleeping/snoring; Nutrition/diet; Celiac (Yes / No); Other: _____
Immunizations up to date? Yes / No
Allergies? Yes / No  Explain: _____
Diagnosed with any medical or genetic conditions? Yes / No  Explain: _____
Is the child a picky eater? Explain: _____
Hospitalizations (date and reason): _____
Medications (name, dose, frequency): _____

Family History
Indicate whether each condition is present on the biological father’s side, the biological mother’s side, or in a sibling: ADHD; Learning disabilities; Speech problems; Autism spectrum disorder; Cognitive delays; Suicide; Bipolar disorder; Anxiety; Depression; Birth defects; Genetic disorders; Seizures; Substance abuse; Obsessive-compulsive disorder; Tics/Tourette’s; Thyroid disorders; Behavior problems; Contact with law; Medications for mental health; Other
Additional Remarks/Comments: _____

4.4.3 Ascertaining Additional Background Information

In addition to the interview questions and developmental history noted above, there are additional sources of data that will need to be collected about a child. Information from rating forms and records is particularly important.

1. Educational and Behavioral Records at School
• Report cards
• State-administered standardized test results (e.g., Iowa Test of Basic Skills; New Jersey ASK)
• Curriculum-based assessments
• School discipline records
• Guided reading evaluations
• Writing samples and other portfolios of the student’s work
2. Prior Psychological and Functional Behavioral Assessment Reports—These reports contain important data, such as previous norm-referenced and functional evaluation data, that will be relevant to your report, serving as a source of baseline data and furnishing additional insight into the child’s functioning.
3. Other Professionals’ Reports—These may include reports from the speech-language pathologist, the counselor, the physical and occupational therapists, and the behavior analyst.
4. IEP Document—If available, the prior IEP should be reviewed to determine which domains are targeted for intervention and whether the child has been making progress toward the goals and objectives stated within the IEP.
5. Medical Records—These may contain information not furnished by caregivers, including data regarding a child’s hearing, vision, and infection history; whether the child has experienced any head trauma; and information on the child’s early developmental history.
6. Rating Forms—These are often of a standardized nature, including broad-band (e.g., BASC-3) and narrow-band (e.g., Beck Depression Inventory) rating forms. This is not the place to discuss how to administer and score such instruments, but only to indicate that they are vitally important to the psychoeducational report. Rating forms should be distributed to parents, teachers, and the student.

4.5 Conclusion

After collection of the numerous sources of information described in this chapter, you will be one step closer to integrating this information with other data sources. Review and integration of the sources of data is a necessary process when attempting to conceptualize a child’s cognitive, academic, behavioral, social, emotional, and adaptive functioning. These important processes are explained in forthcoming chapters of this book.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.


Dombrowski, S. C. (2014). Child Development Questionnaire. Cherry Hill, NJ: Stefan C. Dombrowski Publishers.
Dombrowski, S. C. (2015). Psychoeducational assessment and report writing. New York: Springer Science.
Frick, P. J., Barry, C. T., & Kamphaus, R. W. (2005). Clinical assessment of child and adolescent personality and behavior (3rd ed.). New York: Springer.
Loeber, R., Green, S. M., Lahey, B. B., & Stouthamer-Loeber, M. (1991). Differences and similarities between children, mothers, and teachers as informants on disruptive child behavior. Journal of Abnormal Child Psychology, 19, 75–95.
Meyers, J., Parsons, R., & Martin, R. P. (1979). Consultation in schools. San Francisco, CA: Jossey-Bass.
Reynolds, C. R., & Kamphaus, R. W. (2015). Behavior Assessment System for Children (3rd ed.; BASC-3) [Assessment instrument]. Bloomington, MN: Pearson.
Shaffer, D., Fisher, P., Lucas, C. P., Dulcan, M. K., & Schwab-Stone, M. E. (2000). NIMH Diagnostic Interview Schedule for Children Version IV (DISC-IV): Description, differences from previous versions, and reliability of some common diagnoses. Journal of the American Academy of Child and Adolescent Psychiatry, 39, 28–38.
Silverman, W. K., & Ollendick, T. H. (2005). Evidence-based assessment of anxiety and its disorders in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 380–411.
Zirkel, P. (2011). The role of DSM in case law. Communiqué, 39. Retrieved on May 20, 2019 from http://www.nasponline.org/publications/cq/39/5/RoleofDSM.aspx.

Chapter 5
Observing the Child

Karen L. Gischlar

5.1 Introduction

According to the Individuals with Disabilities Education Improvement Act of 2004, when a child has been referred for a special education evaluation, the child must be observed by a member of the evaluation team in the general education setting. This mandate is reflected in practice: a survey of school psychologists revealed that observation of students is the most frequently used assessment method, with respondents reporting more than 15 observations per month (Wilson & Reschly, 1996). Direct observation of the student's performance in the classroom is used to screen students, assess emotional and behavioral problems, evaluate the classroom environment in the design of interventions, and monitor student performance and progress (Volpe, DiPerna, Hintze, & Shapiro, 2005). Observational data may be either qualitative (e.g., anecdotal recording) or quantitative (e.g., frequency count) in nature (National Joint Committee on Learning Disabilities [NJCLD], 2011) and aid in verifying teacher report of an academic or behavioral problem (Shapiro & Clemens, 2005). Observation of the student should occur after the teacher interview, when the referral problem has been identified. This enables the observer to determine which instructional periods and settings are most relevant to the referral (Shapiro, 2011a). For example, if the teacher reported during the interview that the student was having difficulty with reading, the practitioner should observe during the reading instructional period and during other times when the student has the opportunity to read. Observations should also be conducted across different instructional arrangements, such as large and small group settings, to determine discrepancies in performance (Shapiro, 2011a). Ideally, multiple observations of the student across

K. L. Gischlar, Rider University, Lawrenceville, NJ, USA. e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
S. C. Dombrowski (ed.), Psychoeducational Assessment and Report Writing, https://doi.org/10.1007/978-3-030-44641-3_5


settings and time should be conducted (NJCLD, 2011), as a child's behavior is apt to vary from day to day (McConaughy & Ritter, 2002). Generally, there are two approaches to direct observation of student behavior: naturalistic and systematic. Naturalistic observation involves the anecdotal recording of all behaviors occurring, with no behavioral target predefined (Shapiro, Benson, Clemens, & Gischlar, 2011). Systematic direct observation (SDO), on the other hand, is conducted under standardized procedures and entails the recording and measurement of specific behaviors that are operationally defined prior to observation (Shapiro et al., 2011). These two approaches are described in greater detail within this chapter, which also includes a review of commercially available structured observation codes. Further, this chapter offers recommendations for reporting, along with sample narratives and tables.

5.2 Types of Observation

5.2.1 Naturalistic Observation

Naturalistic observation is the most frequently used type of direct observation in the schools, most likely due to its minimal training requirements and ease of use (Hintze, Volpe, & Shapiro, 2008). When conducting a naturalistic observation, the practitioner chronologically records behavioral events in the natural setting (e.g., classroom and playground) as they occur. The following section describes two types of naturalistic observation commonly used in the schools: the anecdotal recording and the antecedent-behavior-consequence chart.

5.2.1.1 Anecdotal Recording

Oftentimes during observation an anecdotal record is kept, which includes a description of the student's behaviors, discriminative stimuli, and the context in which the behaviors occurred. Interpretation is limited to a descriptive account, with little inference drawn (Hintze et al., 2008). In fact, the practitioner is cautioned not to "over-interpret" data from anecdotal observations, as they are not standardized and generally are not evaluated with respect to the psychometric properties (i.e., reliability and validity) that other assessment methods are (Hintze, 2005). As such, these sorts of observations should not be used in high-stakes decision making (Hintze et al., 2008), but rather to confirm the existence of a problem, operationally define target behaviors, develop future observation procedures, and identify antecedents and consequences of behavior (Skinner, Rhymer, & McDaniel, 2000). Following is a sample anecdotal report for a naturalistic observation of a third-grade student, Chloe, who engages in problematic behaviors in her classroom, as reported by her teacher, Mrs. Dancer.


Chloe was observed by the school psychologist in Mrs. Dancer’s general education 3rd grade classroom on January 20, 2014 from 1:00 until 2:00 pm during the social studies lesson. There were 16 children and one teacher present in the room. The desks were arranged in four groups of four and there was a carpeted area at the front of the room with an easel and a large chair. The room was colorfully decorated with posters that included both academic material (e.g., parts of speech, multiplication table) and behavioral expectations (e.g., listen when others are speaking). The lesson observed focused on maps and was the first in a unit. At the start of the observation, students had just entered the room from recess and the teacher allowed them to get drinks from the fountain at the back of the classroom before taking a seat on the carpet. As students waited for all to be seated, they spoke to one another. Chloe sat in between two other girls and was laughing and talking with them until the teacher sat in the large chair. The teacher signaled for quiet and the students turned their attention toward her. At 1:05 pm, the teacher began to read the children’s book “Mapping Penny’s World”, which is a component of the social studies curriculum. As the teacher read, Chloe was observed whispering in the ear of the girl to the right of her. When this student did not respond, Chloe ceased talking and looked toward the teacher. She sat quietly and still for the remainder of the story. After the story, the teacher asked questions of the students. Chloe was observed to call out three times during this component of the lesson, which lasted from approximately 1:20 pm until 1:26 pm. After the first two call-outs, the teacher put her finger to her lips and called upon other students. After the third call-out, the teacher reminded Chloe of the classroom rule for raising a hand. Chloe smiled and raised her hand. The teacher called upon her and allowed Chloe to provide a response to the question. After discussion of the book, the teacher explained that the students would be working with their table mates to “find treasure” with a map. Chloe appeared excited and began to call out questions about the “treasure.” The teacher ignored her first three questions and continued to speak. Upon the fourth question, the teacher stopped her instruction to speak with Chloe individually. After this discussion, the teacher resumed giving directions and Chloe sat quietly looking at the carpet. At 1:31, the children were dismissed by groups with their maps to hunt for the “treasure.” Each group was assigned a certain colored box to find. They were instructed to return to the carpet when they had found the appropriate box. By 1:37 pm, all groups were quietly reseated on the carpet. For the next 3 min, the teacher allowed the groups to share where they had found their boxes and what was inside— stickers. After the students had the opportunity to share, the teacher called their attention to the book that she had read earlier. They reviewed the maps, then the teacher explained that each group would work at their desks to make a map of the classroom. At 1:42 pm, she dismissed the students to their desks, where each group found a large piece of paper and markers that the teacher had left while students searched for “treasure.” During the group work period, Chloe called the teacher twice to her desk to look at her group’s work. She was also observed on one other occasion out of her seat speaking with another group. 
When directed by the teacher, Chloe promptly returned to her seat to resume work. At 1:58, the teacher announced that students needed to put away materials in preparation for the science lesson and that they would be allotted time to finish on the following day. The observation ended as students were cleaning up materials.

5.2.1.2 A-B-C Recording

Antecedent-behavior-consequence (A-B-C) recording is a second type of naturalistic observation that frequently occurs in the schools, especially as a component of functional behavioral assessment (FBA). A-B-C observation involves the recording of antecedents, behaviors, and consequences that are subsequently analyzed to determine the relationships between behaviors and the environment (Eckert, Martens, & DiGennaro, 2005). Antecedents are the conditions that precede a behavior, whereas consequent events occur contingent upon production of the behavior (Gresham, Watson, & Skinner, 2001). A popular method for completing an A-B-C observation is to use a recording schedule that includes three columns, one for each of the three conditions—antecedents, behaviors, and consequences. Typically, the observer records the behavior in the middle column first, followed by the antecedents and consequences for each of the behaviors. Because many behaviors will occur during an observation, only those behaviors of clinical importance are usually recorded (Hintze et al., 2008). As with anecdotal recordings of behavior, data collected via the A-B-C system are descriptive in nature and, thus, causality between events should not be assumed. That is, it is impossible to differentiate events that are contiguous with, contingent upon, or dependent on behavior (Eckert et al., 2005). To increase the accuracy of A-B-C data in the FBA process, Eckert et al. (2005) suggest computing conditional probabilities for target behaviors that occur with moderate to high frequency, then constructing graphs. Such diagrams illustrate whether antecedents and consequences are more likely to occur with a behavior, or in its absence (Eckert et al., 2005), and bolster the functional hypothesis statement. The chart below provides a sample A-B-C recording for Chloe, Mrs. Dancer's third-grade student. The observation took place during a social studies lesson on maps.

Student: Chloe   Date: January 20, 2014   Subject: Social Studies   Time: 1:00–2:00   Observer: Mr. Singer, School Psychologist

Time    | Antecedent                                    | Behavior                               | Consequence
1:22 pm | Circle, teacher reading story                 | Calls out answer to teacher question   | Teacher places her finger to her lips and calls upon another student
1:25 pm | Circle, teacher reading story                 | Calls out answer to teacher question   | Teacher stops lesson and addresses Chloe individually to remind her of classroom rules
1:28 pm | Circle, teacher giving direction for assignment | Calls out question about the assignment | Teacher ignores and continues to speak
1:31 pm | Circle, teacher giving direction for assignment | Calls out question about the assignment | Teacher stops instructing to address Chloe individually and remind her of classroom rules
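The conditional-probability analysis that Eckert et al. (2005) recommend reduces to simple counting over the logged rows. The sketch below (Python) is purely illustrative—the records are simplified from the chart above and the function is ours, not part of any published FBA tool:

```python
# Minimal sketch: conditional probabilities from A-B-C records.
# Each record is (antecedent, behavior, consequence), simplified from the chart above.
records = [
    ("teacher reading story",     "calls out", "finger to lips, calls on another"),
    ("teacher reading story",     "calls out", "individual reminder of rules"),
    ("teacher giving directions", "calls out", "ignores and continues"),
    ("teacher giving directions", "calls out", "individual reminder of rules"),
]

def p_consequence_given_behavior(records, behavior, consequence):
    """Estimate P(consequence | behavior) from logged A-B-C rows."""
    with_behavior = [r for r in records if r[1] == behavior]
    if not with_behavior:
        return 0.0
    return sum(r[2] == consequence for r in with_behavior) / len(with_behavior)

p = p_consequence_given_behavior(records, "calls out", "individual reminder of rules")
print(f"P(individual reminder | calls out) = {p:.2f}")  # 0.50 in this tiny sample
```

Comparing such estimates across antecedent or consequence conditions, and against the base rate of the behavior, is what the graphs described above would display.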

5.2.2 Systematic Direct Observation

Systematic direct observation (SDO) involves the recording and measurement of specific, operationalized behaviors. An operational definition is based on the shape or topography and describes the behavior in narrow terms, so that occurrence can be verified (Skinner et al., 2000). For example, the term “aggressive behavior” is broad and may lead to inconsistencies in recording. The operational definition “hitting or kicking other students” helps to clarify when an occurrence of the target behavior should be recorded (Skinner et al., 2000). SDO of predefined behaviors is conducted under standardized conditions; scoring and summary of data also are standardized, which limits variability among observers (Shapiro et al., 2011) and improves reliability of the recordings. Further, the use of SDO enables replication, unlike anecdotal recording, which is filtered through the perceptual lens of the observer. Because SDO is standardized, it can be used across time and even across observers. Replication allows for goal setting and the tracking of progress toward goals (Shapiro & Clemens, 2005). The following section describes different forms of SDO, example situations of when each system might be used, and sample data collection forms.

5.2.2.1 Event Recording

One method that can be used to quantify behavior is event recording, which involves drawing a tally mark each time the predefined behavior is exhibited. Event recording is best used for discrete behaviors, which are those that have a clearly discernible beginning and end (Shapiro & Clemens, 2005). For example, this system might lend itself well to Mrs. Dancer's student, Chloe, as our anecdotal and A-B-C recordings suggested that she primarily engages in two problematic behaviors: calling out and extraneous talking. For purposes of this example, "calling out" is defined as "responding without raising a hand and/or waiting to be asked for a response." The chart below represents frequency data for Chloe. Each hash mark represents one occurrence of "calling out" as previously defined.

Student: Chloe   Date: Week of January 20–24, 2014   Observer: Mr. Singer, School Psychologist
Behavior: Calling out—responding without raising a hand and/or waiting to be asked for a response

Subject         | Time        | January 20 | January 21 | January 22 | January 23 | January 24
Morning meeting | 9:00–9:15   | ////       | //         | ///// ///  | ////       | ///
Language arts   | 9:15–10:45  | //         |            | //         |            | /
Mathematics     | 10:45–12:00 | ////       | /          | ///        | ///        |
Social studies  | 1:00–2:00   | //         | //         | ///// ///  | ////       |
Science         | 2:00–3:00   | /          | /          | //         | //         |
Health          | 3:00–3:30   | /          | /          | //         | /          |

Although the frequency count above indicates that Chloe is calling out in class, greater information can be gained by calculating the behavior's rate. The rate takes into account the amount of time during which the behavior was observed. Without knowing the timeframe, the data are less meaningful (Shapiro & Clemens, 2005). For example, the chart indicates that Chloe called out eight times during the morning meeting period on January 22 and eight times during the social studies period on the same day. However, these data cannot be directly compared because the morning meeting period was 15 min long, whereas the social studies period was 60 min. The rate is calculated by dividing the number of behavioral occurrences by the number of minutes observed (Shapiro & Clemens, 2005). Thus, in our example, on January 22, the rate of Chloe's behavior was 0.53 per minute (approximately once every 2 min) during morning meeting and 0.13 per minute (approximately once every 7.5 min) during social studies. Although the frequency was the same, the rate indicates that calling out was a bigger problem for Chloe during morning meeting on January 22 than it was during social studies. The data can be further analyzed for trends, including subject areas and days of the week during which the highest and lowest rates of the behavior tend to occur.
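A minimal sketch of the rate computation just described, using the example's assumed values:

```python
# Rate = occurrences / minutes observed (values taken from the example above).
def rate_per_minute(occurrences: int, minutes_observed: float) -> float:
    return occurrences / minutes_observed

print(f"morning meeting: {rate_per_minute(8, 15):.2f} per min")  # ~0.53, once every ~2 min
print(f"social studies:  {rate_per_minute(8, 60):.2f} per min")  # ~0.13, once every ~7.5 min
```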

5.2.2.2 Duration Recording

In some cases, the length of time a behavior lasts may be more important than the frequency with which it occurs (Skinner et al., 2000). For example, a student may get out of his seat only twice during the 45 min instructional period, but if those instances occur for 7 and 8 consecutive minutes, respectively, one-third of the period is lost. To collect duration data, the observer needs a timing device, such as a stopwatch. During the observation, the timing device is started as soon as the student begins the behavior and is stopped when he ceases to engage (Skinner et al., 2000). Without resetting the device, the observer begins timing again on the next occurrence and continues this pattern until the observation session is complete; the


total time is then recorded. To determine the percentage of time the student engaged in the behavior during the observation period, divide the number of minutes the behavior was observed by the number of minutes in the observation, then multiply by 100 (Shapiro & Clemens, 2005). As an example, Mrs. Dancer is concerned with the amount of time that Chloe spends talking with her classmates, rather than engaged in the social studies lesson. The problem, "extraneous talking," is operationally defined as "speaking with classmates about topics not related to instruction and without permission." The following chart represents duration data collected during the social studies period.

Student: Chloe   Date: Week of January 20–24, 2014   Subject: Social Studies   Time: 1:00–2:00   Observer: Mr. Singer, School Psychologist
Behavior: Extraneous talking—speaking with classmates about topics not related to instruction and without permission

Date       | Duration of behavior (min) | Percentage of time (%)
January 20 | 7.5                        | 13
January 21 | 3                          | 5
January 22 | 9.5                        | 16
January 23 | 12                         | 20
January 24 | 4                          | 7
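The percentage column of the chart reduces to the formula given above. A minimal sketch, assuming each social studies session runs the full 60 min:

```python
# Percentage of the session spent in the behavior:
# (minutes of behavior / minutes observed) * 100. Durations from the chart above.
def percent_of_session(behavior_minutes: float, session_minutes: float) -> float:
    return 100 * behavior_minutes / session_minutes

durations = {"Jan 20": 7.5, "Jan 21": 3, "Jan 22": 9.5, "Jan 23": 12, "Jan 24": 4}
for day, minutes in durations.items():
    print(f"{day}: {percent_of_session(minutes, 60):.1f}%")
```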

5.2.2.3 Latency Recording

Latency recording involves measuring the amount of time that elapses between the onset of a stimulus or signal, such as a verbal directive, and the start of the desired behavior (Hintze et al., 2008). To employ latency recording, both the stimulus and the target behavior must have discrete beginnings. As with duration recording, the observer uses a stopwatch. Timing begins immediately after the signal or stimulus is elicited and stops at the instant the behavior of interest is initiated (Hintze et al., 2008). In Chloe's case, Mrs. Dancer might be concerned with how long it takes the student to cease talking and begin work after teacher direction is given. The observer collecting latency data would begin timing upon the teacher direction, "complete page 56 in your social studies book," and stop timing the instant that Chloe complied and began to work on the assignment.
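A minimal sketch of the latency computation, with assumed timestamps (in practice, the observer's stopwatch supplies these):

```python
# Latency = behavior onset - stimulus onset, in seconds from the start of the
# observation. The timestamps below are assumed, illustrative values.
trials = [
    {"stimulus": 130.0, "onset": 171.0},   # directive given at 130 s; work begun at 171 s
    {"stimulus": 610.0, "onset": 625.5},
]
latencies = [t["onset"] - t["stimulus"] for t in trials]
print(f"latencies: {latencies}; mean = {sum(latencies) / len(latencies):.1f} s")
```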

5.2.3 Time Sampling Procedures

Event, duration, and latency data recording all require the observer to record every instance of the behavior, which can be difficult for behaviors that occur with high


frequency (Shapiro & Clemens, 2005) or those that are more continuous than discrete (Skinner et al., 2000). Furthermore, these systems prevent the observer from recording other pertinent behaviors that may be occurring during the session (Shapiro & Clemens, 2005). To address these issues, the practitioner may want to employ a time sampling procedure. Within a time sampling system, the observation session is divided into smaller, equal units of time, and behavior is recorded during specific intervals (Skinner et al., 2000). Because every instance of the behavior is not recorded, these systems provide an estimate of the behavior (Whitcomb & Merrell, 2013). There are three methods for collecting time sampling data: momentary time sampling, whole interval time sampling, and partial interval time sampling. For each of these three procedures, the observer will need a recording sheet or device with intervals marked and a method for cuing the intervals (Skinner et al., 2000). The recording sheet could contain a number for each interval. When observing, the practitioner would put a slash through a number to indicate that the behavior was observed during that interval and leave it blank if it was not (Skinner et al., 2000). To cue the intervals, a special timing device should be used. Use of a clock or watch with a second hand is discouraged because it will likely decrease the reliability and validity of the observation, as the practitioner attempts to observe both the child and the timing device. Rather, it is more efficient to use a timing device such as a vibrating watch or an audio-cued system that alerts the observer to record behavior (Whitcomb & Merrell, 2013). Audio-cue files are freely available on the Internet from Web sites such as Intervention Central (http://www.interventioncentral.org/free-audio-monitoring-tapes).

5.2.3.1 Momentary Time Sampling

Momentary time sampling involves recording the target behavior only if it occurs at a specified moment (Skinner et al., 2000). Prior to the observation, the practitioner should have operationally defined the behavior and developed an interval recording system. During the session, the observer would look at the student on the cue (e.g., an audio or vibrating watch cue) and record presence of the behavior only if it occurred at the time of the cue. During the remainder of the interval, the observer could record other behaviors or events, such as the antecedents and consequences (Skinner et al., 2000). However, if the target behavior occurs at any time other than the start of the interval, it should not be recorded in this system (Hintze et al., 2008). As an example, a momentary time sampling system with an audio cue could be used to record the calling out behavior of our demonstration student Chloe. The observer might divide a 15-min observation period into 5-s intervals, for a total of 180 intervals. On the cue every 5 s, the observer would look to see if Chloe were calling out, as defined previously. If Chloe were to call out on the audio tone, the observer would mark an occurrence. However, if Chloe called out at any other time during the interval, no occurrence would be recorded. In other words, the behavior would need to occur as the tone is sounding.


Because the behavior is not recorded continuously when using momentary time sampling, it should be used for behaviors that occur at a high frequency (Skinner et al., 2000). Otherwise, this type of system could result in behaviors being underreported. Furthermore, the observer should somehow indicate on the recording form if there was a missed opportunity to observe. For example, if the teacher or another student walked between the target student and the observer on the cue, occluding the observer's vision, this should be noted on the recording sheet (Skinner et al., 2000). It is important to distinguish the lack of opportunity to observe from nonoccurrence of the behavior so that the behavior is described as accurately as possible.

5.2.3.2 Partial Interval Time Sampling

Partial interval time sampling works well for behaviors that occur at moderate to high rates or that are of inconsistent duration (Shapiro & Clemens, 2005). As with momentary time sampling, the target behavior should be operationally defined and an interval recording system designed prior to observation. Within this system, a behavior is recorded as occurring if it is observed during any part of the interval. Therefore, whether a behavior begins before the interval is cued and continues, or begins after the interval is cued, an occurrence is recorded. If multiple occurrences of the behavior fall within the same interval, the interval is still scored as a single occurrence (Hintze et al., 2008). A partial-interval time sampling system could be employed with our demonstration student Chloe for her "extraneous talking" behavior. For example, the observer could divide a 15-min observation period into 90 ten-second intervals. He or she would then watch to see if Chloe engaged in the predefined target behavior at any time during each 10-s interval. An occurrence would be recorded by slashing the interval number, whereas a nonoccurrence would result in no mark. Again, it would be important to indicate somehow if an opportunity to observe was missed, perhaps by circling the number. A concern with partial-interval recording systems is that they tend to overestimate the actual occurrence of the behavior, especially if it is somewhat continuous (Hintze et al., 2008; Shapiro & Clemens, 2005). If we consider Chloe, she might be engaged in the "extraneous talking" target behavior for only 3 s of the 10-s interval, but occurrence of the behavior would be recorded for the full interval. Or, if Chloe began to speak at the end of one interval and continued briefly into the next, both intervals would be scored for occurrence. However, a benefit of using a partial-interval system is that it enables the observer to monitor and score multiple behaviors during the same interval (Shapiro & Clemens, 2005). For example, we could observe both of Chloe's target behaviors with this system. If she were to speak with a classmate and/or call out at any time during a given interval, both behaviors could easily be recorded when employing partial-interval recording.

5.2.3.3 Whole Interval Time Sampling

Whole interval time sampling is best used for behaviors that are continuous or with intervals of short duration because it requires a behavior to be present for an entire interval (Hintze et al., 2008). In other words, the target behavior must occur continuously for the full interval in order to be marked as an occurrence. For example, the observer might divide a 15-min observation period into 180 intervals of 5 s each. During each interval in which the behavior presented continuously for 5 s, an occurrence would be recorded. A system such as this could be used to record the "extraneous talking" behaviors of our demonstration student Chloe. However, it is important to note that whole interval time sampling can underestimate the occurrence of behaviors (Hintze et al., 2008). Consider the use of a 5-s interval when observing Chloe. If she were to talk for the full 5 s, an occurrence would be marked. However, if she talked for 3 s, occurrence during that interval would not be recorded. Thus, it is possible for her to have engaged in many more instances of the target behavior than will be reflected in the data. For this reason, it is suggested that the observer keep the interval duration brief (Skinner et al., 2000).
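Because the three time-sampling rules score the same behavior stream differently, a short simulation makes the over- and underestimation concrete. The sketch below is hypothetical—the behavior record is randomly generated rather than drawn from a real observation:

```python
import random

# Contrast the three time-sampling rules on one second-by-second record of
# behavior (True = behavior occurring during that second; data are assumed).
def score(record, interval_s, rule):
    """Percentage of intervals scored as an occurrence under a given rule."""
    intervals = [record[i:i + interval_s] for i in range(0, len(record), interval_s)]
    if rule == "momentary":           # behavior present at the moment of the cue
        hits = sum(iv[0] for iv in intervals)
    elif rule == "partial":           # behavior present at any point in the interval
        hits = sum(any(iv) for iv in intervals)
    else:                             # "whole": behavior present throughout
        hits = sum(all(iv) for iv in intervals)
    return 100 * hits / len(intervals)

random.seed(1)
record = [random.random() < 0.30 for _ in range(900)]   # 15 min; behavior ~30% of seconds

for rule in ("momentary", "partial", "whole"):
    print(f"{rule:>9}: {score(record, 5, rule):5.1f}% of 5-s intervals scored")
# Partial-interval recording runs high and whole-interval recording runs low,
# mirroring the over- and underestimation described in the text.
```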

5.2.4 Observation of Comparison Students

Comparison peer data provide information about the degree of discrepancy between the target child's behavior and the expected levels of behavior for students in the classroom, which aids in problem verification (Shapiro, 2011a) and the establishment of empirical goals (Skinner et al., 2000). Peer data can be collected through either simultaneous or discontinuous recording (Skinner et al., 2000). Simultaneous data can be collected on one or more peer comparisons when a momentary time sampling system is employed. Because momentary time sampling requires the observer to record the behavior for the target student on the interval, the rest of the interval can be devoted to observation of peer behavior. This type of system is best utilized when students are in close proximity to one another in the physical setting (Skinner et al., 2000). If the observer is using a whole-interval system to record the behavior of the referred student, simultaneous recording of peers is not recommended; whole-interval recording requires the practitioner to observe the target student for the entire interval. Observing more than one child at the same time might result in data that are less than accurate (Shapiro, 2011a). In this case, discontinuous recording of peers would be appropriate. In a discontinuous system, the practitioner would designate certain intervals during which to observe the peer (Skinner et al., 2000). For example, the observer could record behavior of a peer comparison student, rather than the target student, every fifth interval (Shapiro, 2011a). In a system such as this, the observer could record data for comparison student 1 during the 5th interval, for comparison student 2 during the 10th interval, and for


comparison student 3 during the 15th interval; the cycle would then be repeated by observing comparison student 1 during the 20th interval (Skinner et al., 2000). Prior to conducting a peer comparison observation, the practitioner will need to decide which peers to select. One approach is to ask the teacher to identify a few students whose behavior is "average" or "typical" for her classroom (Shapiro & Clemens, 2005). If using this method, the observer should be careful to ensure that the teacher has not selected the best behaved or most poorly behaved students. The goal of peer observation is to supply a normative benchmark against which to compare the target student's behavior during problem verification (Shapiro & Clemens, 2005). To avoid bias in teacher judgment of students, the observer could randomly select peers to observe, such as those sitting in close proximity to the student of interest (Shapiro, 2011a). However, this method of selection also may present problems. The practitioner may not be aware of the characteristics of the selected peers and may inadvertently choose students who are not considered "typical" in the classroom. Whichever selection method is used, peer comparison supplies a means of problem verification and goal establishment and is a recommended practice.
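The every-fifth-interval rotation just described can be expressed as a small scheduling rule. The helper below is hypothetical (not drawn from Shapiro or Skinner et al.), but it shows the bookkeeping:

```python
# Discontinuous peer-comparison recording: every fifth interval is devoted to
# one of three comparison students in turn; all other intervals go to the target.
def observed_student(interval: int) -> str:
    if interval % 5 != 0:
        return "target student"
    peers = ("comparison student 1", "comparison student 2", "comparison student 3")
    return peers[(interval // 5 - 1) % len(peers)]

for i in (4, 5, 10, 15, 20):
    print(f"interval {i:2d}: {observed_student(i)}")
# Intervals 5, 10, and 15 map to peers 1, 2, and 3; interval 20 restarts the cycle.
```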

5.2.5 Analog Observation

Analog observation involves the use of controlled situations that simulate particular environments or circumstances of concern (Knoff, 2002) and reflect how a student might behave in the natural environment (Hintze, Stoner, & Bull, 2000). Typically, a hypothetical situation is designed to mimic the real-life environment in which the target behavior usually occurs, with the assumption that there will be some degree of similarity between the student's behavior in the contrived setting and his behavior in the natural environment. Analog observation has utility for tracking multiple behaviors simultaneously and accommodating variability across behavioral domains (Mori & Armendariz, 2001). School-based applications involve enactment, role play, paper-and-pencil, audiotape, or videotape procedures (Hintze et al., 2000). Enactment exercises require the student to respond to contrived situations that are artificially arranged to mirror the natural setting (Hintze et al., 2000). For example, for a student having problems on the bus, the practitioner may set up lines of chairs to mimic the seating arrangement on a bus and observe children as they naturally interact. Conversely, role playing involves scripted behavior (Hintze et al., 2000). For example, the practitioner may want to assess the social skills of a child by asking him to approach another child to play during recess. During an enactment, the student's behavior is unscripted, whereas during role play it is clearly defined. The third analog method, paper and pencil, requires the student to respond to a stimulus presented in written format, either through writing, speaking, or displaying a physical behavior (Hintze et al., 2000). For example, the practitioner may present a


situation in which a child is experiencing a problem with a friend and ask the target student to give a written or oral response. Finally, audio- and videotape analogs require the student to listen to or view a situation and respond. Although analog observations may have utility for behaviors that are situation specific, they are not without limitation. Their application requires expertise and time (Stichter, 2001). Certainly, analog assessment should not be employed without proper training and supervision.

5.2.6 Observation of Permanent Products

A permanent product is the tangible outcome of a student behavior (Steege & Watson, 2009). A block pattern, the number of math problems completed, and the number of words written on a page all can be considered permanent products. An advantage of permanent products is that the practitioner need not be present to observe the behavior, yet still has a record of its occurrence. This can save time and circumvent issues with scheduling. A drawback to this procedure, however, lies in certifying authenticity. For example, it can be difficult to determine whether a student completed a task alone or with support (Steege & Watson, 2009). Observation of permanent products is perhaps best used along with direct observation of student behavior.

5.2.7 Self and Peer Observation of Student Behavior

The literature has suggested self-monitoring of academic and nonacademic behavior difficulties in the classroom as a successful intervention component for both typically developing children and those with disabilities across grade levels. Self-monitoring refers to the systematic observation and recording of one's own behavior, with the goal of producing change in the behavior of interest (Lam & Cole, 1994). As an example of an academic behavior, a student could monitor his performance on daily math sheets by checking an answer key and recording the number of problems he solved correctly each day. A nonacademic behavior that could be self-monitored is the number of call-outs; a student would record an instance every time she shouted a response or comment without first raising her hand and awaiting teacher prompting. Of course, before implementing a system that involves observation of one's own behavior, it is important to discern whether or not the student has the ability to identify when he or she has engaged in the target behavior (Rafferty, 2010). Although self-monitoring data are subject to issues of reactivity and accuracy (Akande, 1997), they may have utility in the evaluation process relative to changes in the target behavior, intervention effectiveness, and the self-regulatory skills of the student.


Another source of data that may be of value in decision-making is that obtained through the administration of peer-mediated strategies. Peer-mediated interventions (PMIs) require a peer of the student who is exhibiting difficulties to prompt and/or provide consequences and can be used for both academic and nonacademic problems. Unlike group contingencies, which often involve social pressure to drive behavior change in the target student, PMIs require the peer to observe and interact with the student on a consistent basis (Dart, Collins, Klingbeil, & McKinley, 2014). Thus, the peer interventionist may have information about the target student that the teacher does not. Caution should be used, however, in utilizing information gleaned from a student's peer, given the role of the peer relationship and the peer's lack of professional training.

5.2.8 Observation in the Home

At times, the evaluator may want to observe in the home setting, particularly when the child is young or has a developmental disability, such as autism. When a child is young or has a diagnosis that impacts daily life skills, parents are usually highly involved in caregiving and engage in frequent interaction with the child. Observations in the home environment, where interactions are witnessed in the naturalistic setting, can provide insight into the child's behavior that cannot be gleaned from other sources; direct observations typically evidence less systematic bias than parent or teacher reports (Gardner, 2000). Children are embedded within multiple layers of context that include the family, school, and community (Woodman, Smith, Greenberg, & Mailick, 2016), and information gained from observing the child outside of the classroom can inform both problem definition and subsequent intervention. For example, if parents are observed to negatively reinforce a child by removing task demands in response to tantrum behaviors, the child may generalize this behavior to the classroom setting. In this case, intervention would likely include parent training and involvement in administration of treatment. Research suggests that, for children with autism in particular, family involvement in intervention enhances outcomes (Bindlish, Kumar, Mehta, & Dubey, 2018).

5.2.9 Observation Systems

5.2.9.1 Behavior Assessment System for Children (BASC-3)

The BASC-3 (Reynolds & Kamphaus, 2015) is a multidimensional system used to evaluate the behavior and self-perceptions of children and young adults between the ages of 2 and 25 years. It includes the following components, which can be used individually or in any combination: (a) parent and teacher rating scales; (b) a self-report scale; (c) a structured developmental history recording form; and (d) a


form for recording and classifying behavior from direct classroom observations. The student observation system (SOS) is designed for use by the practitioner in diagnosis, treatment planning, and monitoring the effects of intervention and includes the following behavior domains: (a) response to teacher/lesson; (b) peer interaction; (c) work on school subjects; and (d) transition movement (Reynolds & Kamphaus, 2015). The SOS is designed to be used across multiple 15-min observations. To use the system, the observer needs both the recording form and a timing device. The form is divided into three sections. Part A includes a list of 65 specific behaviors across 13 categories. This list provides a reference for the observation session, but also serves as a checklist that is to be completed after the session. Part B is used to document the 15-min observation. During the observation, the observer records student behavior across thirty intervals. Specifically, each observation occurs for 3 s at the end of each 30-s interval. During the 3 s, the observer should place a checkmark in the corresponding box for each class of behaviors exhibited by the student; specific examples for the classes are included on Part A of the form. At the end of the observation period, the observer tallies occurrences across classes in Part B, then retrospectively completes Part A by noting whether specific behaviors were not observed, sometimes observed, or frequently observed. Additionally, within Part A, the observer should indicate whether the behavior was disruptive to instruction. Part C includes space for the observer to record the teacher's interaction with the student during the observation. This section also is completed retrospectively and requires the observer to record information regarding the teacher's physical position in the classroom and attempts to change the student's behavior (Reynolds & Kamphaus, 2015). The SOS is designed to provide information about the types of behavior being displayed and their frequency and level of disruption. To aid in defining the problem, the system can also be used to collect peer comparison data or to develop local norms (Reynolds & Kamphaus, 2015). The SOS is available in both paper-and-pencil and digital formats. Although the SOS appears easy to use and requires minimal training, it should be noted that the manual provides limited information about the reliability and validity of the system. It is unknown to what extent the SOS code categories correlate with clinical criteria in the identification of disorders. Furthermore, no norms are offered to allow for norm-referenced interpretation of scores (Frick, Barry, & Kamphaus, 2005). These limitations suggest that although the SOS may have utility in defining and monitoring behavior problems, caution should be used in the interpretation of results for high-stakes decisions, such as diagnoses and classification for special education services.

5.2.9.2 Behavioral Observation of Students in Schools (BOSS)

The BOSS (Shapiro, 2011a) is an observation code that can be used to confirm or disconfirm teacher judgment of a problem, determine the severity of a behavior problem, and provide a baseline against which to measure the success of an


intervention (Shapiro, 2011a). The system measures two types of engaged time, active and passive, and three classes of off-task behavior: verbal, motor, and passive. In addition to collecting these data on the target student, the observer also records data on peer-comparison students and the teacher's instruction. To conduct an observation with the BOSS, the practitioner needs a recording form (see Shapiro, 2011b) and a cuing device, such as a digital recorder or vibrating watch, to mark intervals. Use of a stopwatch is not recommended because it would require the observer to look frequently at the watch, which could result in inaccurate data recording (Shapiro, 2011a). The recording form for the BOSS (Shapiro, 2011a) includes a section for identifying information. In addition to personal information for the student, the practitioner should note the academic subject, type of activity (e.g., worksheets and silent reading), and instructional situation observed. Within this system, there are codes for the four most common types of instructional situations: (a) ISW:TPsnt (individual seatwork, teacher present), which indicates that the student was observed engaged in an individual seatwork activity as the teacher circulated the room checking assignments or worked with individual students; (b) ISW:TSmGp (individual seatwork, teacher working with a small group of students), which indicates that the teacher worked with a small group of students as the target student engaged in individual seatwork apart from the group; (c) SmGp:Tled (small group led by the teacher), which indicates that the target student was part of the small group directly taught by the teacher; and (d) LgGp:Tled (large group led by the teacher), which indicates that the target student was part of a large group during which at least half of the class was taught by the teacher (Shapiro, 2011a). If the instructional situation does not fit any of these codes (e.g., learning centers or cooperative groups), then the observer should indicate this with a note on the recording form. Furthermore, during an observation, the instructional arrangement may change. If a change occurs during the session, this should be indicated on the form at the interval during which it occurred; the observer can circle the interval number and make a notation (Shapiro, 2011a). Generally, an observation with the BOSS is conducted for at least a 15-min period (Shapiro, 2011a). If 15-s intervals are used, each 15-min session requires one recording form. Below the information section previously described is the section of the form wherein behaviors are recorded. The left-hand column lists the behaviors to be observed, while the top row lists the intervals. Every fifth interval is shaded gray, indicating that it is designated for peer comparison and teacher-directed instruction data. Momentary time sampling is employed while observing the student for active and passive engaged time, whereas partial interval recording is used while observing off-task motor, verbal, and passive behaviors. Likewise, teacher-directed instruction is observed via a partial interval system (Shapiro, 2011a). The BOSS manual clearly defines all classes of behaviors (Shapiro, 2011b). For example, at the instant each interval is cued (momentary time sampling), the observer is to score the occurrence or non-occurrence of active engaged time (AET) or passive engaged time (PET). Both AET and PET require the student to be


on task. Examples of AET include writing, responding to a question, and reading aloud. Examples of PET include reading silently, listening to the teacher or a peer speak, and looking at the blackboard during instruction. During the remainder of the interval, the observer notes whether the student engaged in any off-task behaviors, including motor (OFT-M; e.g., rocking in chair, out of seat, or playing with pencil), verbal (OFT-V; e.g., making noise or talking to a peer off-topic), and passive (OFT-P; e.g., staring out the window). This same system is employed for observation of a peer student every fifth interval. Additionally, during that same interval, the teacher is observed for direct instruction, which includes lecturing to the class or modeling a skill (Shapiro, 2011b). Once data are collected, the manual instructs the user how to calculate percentages and interpret the data. It should be noted that there is also an electronic version of the BOSS available for handheld devices (Pearson, Inc., 2013). The BOSS is relatively easy to learn (Hintze et al., 2008), and use of the electronic version simplifies the process even further by eliminating the need to manage the recording form, timing device, and clipboard during the observation and by performing the calculations afterward (Shapiro, 2011a). Following is a suggested format for reporting BOSS (Shapiro, 2011a) data for our example student, Chloe, for one observation. In practice, it would prove beneficial to conduct multiple observations with the BOSS, which would enable the observer to detect patterns in student behavior.

Chloe was observed during the social studies period on January 21, 2014 with the Behavioral Observation of Students in Schools (BOSS), a structured observation code. The BOSS enables the observer to record occurrences of both on-task and off-task behavior for the referred student, as well as peer comparison and teacher-directed instruction data, which aids in problem definition. During the 30-min observation, there were 16 children and one teacher present in the room. Student desks were arranged in four groups of four and there was a carpeted area at the front of the room with an easel and a large chair. The room was colorfully decorated with posters that included both academic material (e.g., parts of speech, multiplication table) and behavioral expectations (e.g., listen when others are speaking). During the portion of the lesson observed, students were seated at their desks completing a small group task that required them to take turns reading aloud from the social studies text. Once finished reading within their small groups, students worked together to answer the questions at the end of the selection and then made a poster, as directed by the teacher. During the activity, the teacher circulated from group to group and offered assistance when necessary.

Percentage of observed intervals

Behavior                     | Chloe (96 intervals) (%) | Peer comparison (24 intervals) (%)
Active engagement            | 39                       | 76
Passive engagement           | 22                       | 20
Off-task motor               | 19                       | 3
Off-task verbal              | 23                       | 5
Off-task passive             | 9                        | –
Teacher-directed instruction | 79                       | –

During the observation session, Chloe was observed across 96 fifteen-second intervals, whereas peer comparison students were observed across 24 intervals, as was teacher instruction. During observed intervals, Chloe was either actively (39%) or passively (22%) engaged, for a total engagement time of 61%. Active engaged time (AET) includes tasks such as writing in a workbook or reading aloud, whereas passive engaged time (PET) refers to activities such as reading silently or listening to a lecture. Chloe was either actively or passively engaged during fewer observed intervals than her peers, for whom AET was recorded during 76% and PET during 20% of observed intervals (total engaged time = 96%). It should be noted that Chloe was observed to be on task more often during those intervals when students were reading than when they were working on the poster. Chloe was observed to be engaged in off-task motor (OFT-M) behaviors, such as getting out of her seat and playing with materials, during 19% of observed intervals, whereas peer comparison students engaged in OFT-M behaviors for only 3% of observed intervals. Likewise, Chloe engaged in more off-task verbal (OFT-V) behavior, such as talking off topic and humming, than did her peers. OFT-V behaviors were recorded during 23% of the observed intervals for Chloe and during 5% of observed intervals for her peers. Finally, Chloe engaged in off-task passive (OFT-P) behaviors such as staring out the window or looking around the room during 9% of the intervals, whereas her peers were not observed engaging in OFT-P behaviors. The BOSS data suggest that Chloe engaged in higher levels of off-task behavior and was less engaged than her peers during this social studies lesson, despite the high level of teacher-directed instruction (TDI = 79%).
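The percentage summaries in a narrative like the one above come from straightforward counting over intervals. The sketch below is not the published BOSS software; the records are toy data that merely mirror the 96/24 target/peer interval split of the example:

```python
from collections import Counter

# Summarize BOSS-style interval data: each interval holds the set of codes
# checked, and every fifth interval belongs to a peer-comparison student.
CODES = ("AET", "PET", "OFT-M", "OFT-V", "OFT-P")

intervals = [
    {"who": "peer" if i % 5 == 0 else "target",   # every fifth interval -> peer
     "codes": {"AET"} if i % 3 else {"OFT-V"}}    # assumed toy codings
    for i in range(1, 121)                        # 30 min of 15-s intervals
]

for who in ("target", "peer"):
    observed = [iv for iv in intervals if iv["who"] == who]
    counts = Counter(code for iv in observed for code in iv["codes"])
    summary = {c: round(100 * counts[c] / len(observed)) for c in CODES}
    print(who, f"({len(observed)} intervals):", summary)
```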

5.3 How Many Observations Are Enough?

Typically, classroom observations are conducted for brief periods of time with the hope that a representative sample of behavior will be captured. From this sampling, generalities about the student’s behavior are made (Tiger, Miller, Mevers, Mintz, Scheithauer, & Alvarez, 2013). However, there currently is no standard regarding the frequency and duration of observation sessions necessary to obtain a representative sample of behavior. Although the questions of frequency and length of observation have been examined in the literature, research studies may be difficult to translate into a heuristic for school-based practitioners. To address this issue, Tiger et al. (2013) designed a study to evaluate more closely the nature of observations conducted in the natural classroom environment. Tiger et al. (2013) utilized duration recording methods in consecutive 10-min sessions to observe the behavior of three referred students. The authors then culled


10-, 20-, 30-, and 60-min observation sessions from the complete record and compared those data to the daily mean of behavioral occurrences to assess their accuracy. The findings suggested that when there was low behavioral variability, the briefer observations were more representative of the overall levels than when there was more variability in behavior. In fact, even 60-min observations did not capture the true levels of behavior when variability was high (Tiger et al., 2013). Tiger et al. (2013) concluded that the variability in the behavior is more relevant than its absolute level and suggested that the practitioner might attempt to control for variability in behavior by identifying the environmental variables that impact performance and then holding those variables constant. Implementing this type of structure might help to minimize the number of observation sessions necessary to achieve stability. Related to the behavioral representativeness of direct observation is discussion of its psychometric properties. Other evaluation methods, such as intelligence and achievement tests and behavior rating scales, are expected to meet standards for reliability and validity, but the same is not true of direct observation as utilized in practice (Hintze, 2005). To address this shortcoming, Hintze (2005) has proposed a series of quality indicators that the practitioner can apply when developing an observation system or when selecting a commercially available system for use. These seven indicators include: (a) internal consistency reliability, which indicates that the observation is of sufficient length to sample occurrences and nonoccurrences of the behavior accurately; (b) test-retest reliability, which means that multiple observations are conducted to ensure consistency of behavior over time; (c) interobserver agreement, which pertains to the extent to which independent observers agree on the occurrence and nonoccurrence of behavior within and across observation sessions; (d) content validity, which indicates that operational and response definitions accurately describe the behavior of interest; (e) concurrent and convergent validity, which speaks to the level of agreement between data garnered from direct observations and other assessment information (e.g., rating scales and interviews); (f) predictive validity, which indicates how accurately observational data predict behavior in future situations; and (g) sensitivity to change, which pertains to how observational data change as a function of environmental manipulations and/or developmental changes in the child over time (Hintze, 2005). Use of indicators such as these helps to ensure that the direct observation system employed is psychometrically sound and produces reliable, useful outcomes (Hintze, 2005).
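Several of these indicators are directly computable from interval records. For instance, a minimal sketch of interval-by-interval interobserver agreement, with assumed records from two independent observers:

```python
# Interval-by-interval interobserver agreement:
# agreement = matching intervals / total intervals * 100 (records are assumed).
observer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # occurrence (1) / nonoccurrence (0)
observer_b = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
print(f"interobserver agreement: {100 * agreements / len(observer_a):.0f}%")  # 80%
```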

5.4 Observation of Student Behavior During Administration of Standardized Assessments

In addition to the naturalistic environment, the practitioner also may have the opportunity to observe the student during the administration of standardized assessments, such as intelligence and achievement tests. During this time, the


psychologist will want to note any individual or situational variables that might influence performance (McGrew & Flanagan, 1998). For example, if the student was highly distractible and inattentive during the testing session, his or her score may not accurately reflect the level of the trait being measured (i.e., performance would be underestimated). It is important in the interpretation of scores to note when an administration might not truly portray the individual's abilities (McGrew & Flanagan, 1998). Furthermore, observation during the administration of tests can provide additional information about the student in terms of the referral problem. There are opportunities for the examiner to observe areas such as language abilities, motor coordination, problem solving, persistence to task, response to feedback, frustration, and impulsiveness (Flanagan & Kaufman, 2009). These observations may aid in interpretation of results, but might also suggest further areas for evaluation, and are important to the assessment process.

5.5 Hawthorne and Halo Effects

Over 80 years ago, the results of the Hawthorne studies demonstrated that measurements of behavior in controlled studies were altered by the participants’ knowledge of being in an experiment. This phenomenon came to be known as the Hawthorne Effect and it has been suggested that it may extend beyond experimental behavior (Adair, 1984). For example, it is possible that a student’s behavior could be altered in the classroom or during an assessment session by the very fact that he is being observed. Furthermore, in the classroom setting, the presence of an observer might also impact the nature of the student and teacher relationship (Mercatoris & Craighead, 1974), or even the level of integrity at which a teacher implements instruction (Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000). One way to limit this reactivity is to observe the student prior to meeting him during the evaluation process and to conduct multiple observation sessions to increase the validity of findings. Another phenomenon that may impact observations of the child is known as the halo effect. The halo effect, or error, occurs when the rater’s overall impression strongly influences the rating of a specific attribute (Murphy, Jako, & Anhalt, 1993). For example, it has been demonstrated that when a child is identified as being “impulsive,” the teacher tends to rate other problematic behaviors such as restlessness, poor concentration, and poor sociability as present, even if such cannot be verified by other sources (Abikoff, Courtney, Pelham, & Koplewicz, 1993). The halo effect is important for the practitioner to keep in mind, as previous behavioral reports or knowledge of a diagnosis (e.g., attention deficit hyperactivity disorder) may impact the observation and overall impressions of a student.

5.6 Summary

School psychologists and other education professionals frequently observe children in the classroom and other school settings when a behavior or learning problem is suspected. Observation of the student is an integral component of a multimethod evaluation and can be used in the screening, assessment, and monitoring of student performance. There are two primary types of observation: naturalistic and systematic. Naturalistic observation entails the recording of behavioral events in the natural setting, whereas systematic observation involves the recording and measurement of specific, operationalized behaviors. Practitioners are encouraged to employ both methods in conducting multiple observations of student behavior and to evaluate their systems for reliability and validity, which enhances their utility in the decision-making process.

References

Abikoff, H., Courtney, M., Pelham, W. E., Jr., & Koplewicz, H. S. (1993). Teachers' ratings of disruptive behaviors: The influence of halo effects. Journal of Abnormal Child Psychology, 21, 519–533.
Adair, J. G. (1984). The Hawthorne Effect: A reconsideration of the methodological artifact. Journal of Applied Psychology, 69, 334–345.
Akande, A. (1997). The role of reinforcement in self-monitoring. Education, 118, 275–281.
Bindlish, N., Kumar, R., Mehta, M., & Dubey, K. T. (2018). Effectiveness of parent-led interventions for autism and other developmental disorders. Indian Journal of Health & Well-being, 9, 303–307.
Dart, E. H., Collins, T. A., Klingbeil, D. A., & McKinley, L. E. (2014). Peer management interventions: A meta-analytic review of single-case research. School Psychology Review, 43, 367–384.
Eckert, T. L., Martens, B. K., & Di Gennaro, F. D. (2005). Describing antecedent-behavior-consequence relations using conditional probabilities and the general operant contingency space: A preliminary investigation. School Psychology Review, 34, 520–528.
Flanagan, D. P., & Kaufman, A. S. (2009). Essentials of WISC-IV assessment (2nd ed.). Hoboken, NJ: Wiley.
Frick, P. J., Barry, C. T., & Kamphaus, R. W. (2005). Clinical assessment of child and adolescent personality and behavior (3rd ed.). New York, NY: Springer.
Gardner, F. (2000). Methodological issues in the direct observation of parent-child interaction: Do observational findings reflect the natural behavior of participants? Clinical Child & Family Psychology Review, 3, 185–198.
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15, 198–205.
Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional behavioral assessment: Principles, procedures, and future directions. School Psychology Review, 30, 156–172.
Hintze, J. M. (2005). Psychometrics of direct observation. School Psychology Review, 34, 507–519.
Hintze, J. M., Stoner, G., & Bull, M. H. (2000). Analogue assessment: Emotional/behavioral problems. In E. S. Shapiro & T. R. Kratochwill (Eds.), Conducting school-based assessments of child and adolescent behavior. New York, NY: The Guilford Press.


Hintze, J. M., Volpe, R. J., & Shapiro, E. S. (2008). Best practices in the systematic direct observation of student behavior. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., Vol. 2, pp. 319–335). Bethesda, MD: National Association of School Psychologists.
Individuals With Disabilities Education Act, 20 U.S.C. § 1400 (2004).
Knoff, H. M. (2002). Best practices in personality assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., Vol. 2, pp. 1303–1320). Washington, DC: National Association of School Psychologists.
Lam, A. L., & Cole, C. L. (1994). Relative effects of self-monitoring on-task behavior, academic accuracy, and disruptive behavior in students with behavior disorders. School Psychology Review, 23, 44–58.
McConaughy, S. H., Ritter, D. R., & Shapiro, E. S. (2002). Best practices in multidimensional assessment of emotional or behavioral disorders. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., Vol. 2, pp. 1303–1320). Bethesda, MD: National Association of School Psychologists.
McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference. Needham Heights, MA: Allyn & Bacon.
Mercatoris, M., & Craighead, W. E. (1974). Effects of nonparticipant observation on teacher and pupil classroom behavior. Journal of Educational Psychology, 66, 512–519.
Mori, L. T., & Armendariz, G. M. (2001). Analogue assessment of child behavior problems. Psychological Assessment, 13, 36–45.
Murphy, K. R., Jako, R. A., & Anhalt, R. L. (1993). Nature and consequences of halo error: A critical analysis. Journal of Applied Psychology, 78, 218–225.
National Joint Committee on Learning Disabilities. (2011). Comprehensive assessment and evaluation of students with learning disabilities. Learning Disability Quarterly, 34, 3–16.
Pearson, Inc. (2013). BOSS user's guide. Bloomington, MN: Author. Retrieved February 7, 2014, from http://images.pearsonclinical.com/images/Assets/BOSS/BOSS_UsersGuide.pdf.
Rafferty, L. A. (2010). Step-by-step: Teaching students to self-monitor. Teaching Exceptional Children, 43, 50–58.
Reynolds, C. R., & Kamphaus, R. W. (2004). The behavior assessment system for children manual (2nd ed.). Circle Pines, MN: AGS Publishing.
Shapiro, E. S. (2011a). Academic skills problems (4th ed.). New York, NY: The Guilford Press.
Shapiro, E. S. (2011b). Academic skills problems workbook (4th ed.). New York, NY: The Guilford Press.
Shapiro, E. S., Benson, J. L., Clemens, N. H., & Gischlar, K. L. (2011). Academic assessment. In M. Bray & T. Kehle (Eds.), Oxford handbook of school psychology. New York, NY: Oxford University Press.
Shapiro, E. S., & Clemens, N. H. (2005). Conducting systematic direct classroom observations to define school-related problems. In R. Brown-Chidsey (Ed.), Assessment for intervention: A problem-solving approach. New York, NY: The Guilford Press.
Skinner, C. H., Rhymer, K. N., & McDaniel, E. C. (2000). Naturalistic direct observation in educational settings. In E. S. Shapiro & T. R. Kratochwill (Eds.), Conducting school-based assessments of child and adolescent behavior. New York, NY: The Guilford Press.
Steege, M. W., & Watson, T. S. (2009). Conducting school-based functional behavioral assessments: A practitioner's guide (2nd ed.). New York, NY: The Guilford Press.
Stichter, J. P. (2001). Functional analysis: The use of analogues in applied settings. Focus on Autism and Other Developmental Disabilities, 16, 232–239.
Tiger, J. H., Miller, S. J., Mevers, J. L., Mintz, J. C., Scheithauer, M. C., & Alvarez, J. (2013). On the representativeness of behavior observation samples in classrooms. Journal of Applied Behavior Analysis, 46, 424–435.
Volpe, R. J., DiPerna, J. C., Hintze, J. M., & Shapiro, E. S. (2005). Observing students in classroom settings: A review of seven coding schemes. School Psychology Review, 34, 454–474.


Whitcomb, S., & Merrell, K. W. (2013). Behavioral, social, and emotional assessment of children and adolescents (4th ed.). New York, NY: Routledge.
Wilson, M. S., & Reschly, D. J. (1996). Assessment in school psychology training and practice. School Psychology Review, 25, 9–23.
Woodman, A. C., Smith, L. E., Greenberg, J. S., & Mailick, M. R. (2016). Contextual factors predict patterns of change in functioning over 10 years among adolescents and adults with autism spectrum disorders. Journal of Autism and Developmental Disorders, 46, 176–189.

Chapter 6

General Guidelines on Report Writing

Stefan C. Dombrowski

6.1 Overview

This chapter offers a general discussion of psychoeducational assessment report writing guidelines from a panoramic perspective, with more specific section-by-section guidance provided in the chapters that follow. Specific conceptual issues to improve report writing will be presented.

6.1.1 Make the Report Aesthetically Appealing

First impressions are extremely important in report writing. One of the more important aspects of a report is how it appears. The report should look balanced, organized, and structured, with appropriate headings and subheadings. It should not contain so much information that it overwhelms the reader. Conversely, it should contain sufficient information to convey the results to caregivers, other psychologists, and other stakeholders. For these reasons, I emphasize the need to make the report look attractive.

6.2 Structure of the Psychoeducational Report

A psychoeducational report has several components. Figure 6.1 presents a broad overview of the contents including the generalized structure of a report.


Fig. 6.1 Structure of psychoeducational assessment report:

Psychoeducational Report
CONFIDENTIAL

- Identifying Information
- Reason for Referral
- Assessment Methods and Sources of Data
- Background Information and Early Developmental History
- Assessment Results
  - Cognitive and Academic Functioning
  - Social, Emotional, and Behavioral Functioning
  - Interviews (Student, Caregiver, Teacher, Other)
  - Observations (Classroom-based, School-based, Testing-based)
- Conceptualization and Classification (or Diagnostic Impressions)
- Summary and Recommendations

Forthcoming chapters will discuss each of these report components in greater detail. As discussed in the first chapter of this book, the purpose of psychoeducational assessment and report writing is to gather information about a child and convey that information so that caregivers, teachers, and other professionals will be able to help the child succeed (Blau, 1991). A second purpose of the report is to determine whether the child qualifies for specially designed instruction (i.e., special education support) and to provide guidance as to how those supports are to be furnished. The report should be structured properly to convey this information appropriately.

6.3 Conceptual Issues in Psychoeducational Report Writing

Graduate students in school and clinical child psychology need to be mindful of selected report writing pitfalls and best practices. Some of these are addressed below. This listing is by no means exhaustive, but it will furnish the reader with a pathway for addressing commonly encountered report writing issues.

6.3.1 Address Referral Questions

The exclusion of referral questions from a report is considered by most texts and articles on report writing to be poor practice (Ownby, 1997; Tallent, 1993; Umaña, Khosraviyani, & Castro-Villarreal, 2020; Weiner, 1985, 1987). All reports, whether psychological or psychoeducational, should contain referral questions that need to be addressed by the report (Brenner, 2003). This will help to focus the report and ensure that issues of concern are addressed by the psychologist. The referral question may be general, suggesting the need for an evaluation to better understand the child's functioning and whether the child may qualify for additional support; or it may be very targeted, pinpointing precisely the direction of the evaluation and the questions to be addressed. In no situation should a referral question be omitted. A recent study examined teacher preferences in psychoeducational assessment reports and found that reports with specific referral questions were favored by teachers (Umaña et al., 2020).

6.3.2 Avoid Making Predictive and Etiological Statements

Research within psychology is often quasi-experimental and therefore associative, not causative. It is probabilistic, not deterministic. This makes the provision of predictive and etiological statements extremely difficult. The public and even the courts may seek simple, categorical explanations of behavior. These are sometimes offered by health care providers, so there may be an expectation of a similar response style from psychologists. For instance, when a child is taken to the pediatrician for a persistent sore throat, the caregiver may wish to have strep ruled out. The medical community has tools for such a purpose and can determine etiology with precision through the use of a rapid strep test and a culture-based strep test. In these situations, the physician offers a direct cause for the sore throat (e.g., streptococcal bacteria) and a specific intervention (e.g., antibiotics) to resolve the issue; a definitive test is able to determine etiology. Within psychology, such definitive tools are not as readily available, and the psychologist is left to infer possible causation. This should be done extremely cautiously and only after exhaustive research and consideration, if it is done at all. As an example, suppose a child scores at the seventh percentile on a measure of cognitive ability. This same child was born at 31 weeks gestation. Although there is an association between reduced cognitive ability scores and prematurity (Dombrowski, Noonan, & Martin, 2007; Martin & Dombrowski, 2008), it would be inappropriate to conclude that the child's prematurity caused the reduced cognitive ability scores. There may be other, similarly plausible explanations for the IQ test score decrement.

As with the provision of etiological statements, it is inappropriate to make predictive statements about a child's functioning. One cannot state with certainty that a child who scores in the gifted range on an IQ test is destined for an Ivy League education and a prosperous career on Wall Street. Nor can we conclude that the child with severe learning disabilities will never be able to attend college. Psychologists must be genuine in their statements about children's abilities and disabilities, but must make such statements tentatively, based upon the empirical evidence (Dombrowski, 2015). Of course, when we make such tentative statements, we are talking probabilities. There are always individuals who defy group-level statistics. Do not dismiss group-level statistics in a Pollyanna fashion (Matlin & Gawron, 1979), but also do not be overly pessimistic in your conceptualization of a child.

We may encounter children who experience more moderate intellectual disabilities or more severe autism spectrum disorder. If a caregiver maintains unrealistic expectations about such a child's future, then we may have to gently temper those expectations. Caregivers may have the false impression that the child will overcome these difficulties and will be able to pursue a college degree and a successful career in medicine or law. In this circumstance, and because this outcome is highly improbable, a delicate discussion of expectations for educational and vocational trajectories would be in order. This may include introducing a back-up plan to the caregivers in the event that their desired pathway for the child does not come to fruition. The psychologist should start the process of a soft landing, gently easing the caregiver's expectations. The passing of time will also contribute to this process, so a brusque approach that confronts the caregiver's perceptions would be inappropriate.

6.3.3 Make a Classification Decision and Stand by It

Do not be timid when making a classification decision. You need to offer a classification, discuss your rationale for it, and stand by your decision. Of course, your decision should be predicated upon solid data and sound clinical judgment. Similarly, you need to rule out other classifications and state why you ruled them out. Express your classification decision clearly, using data-based decision-making and an evidence-based approach.

6.3.4 Rule Out Other Classifications and State Why You Ruled Them Out

A corollary to making a classification decision and standing by it is to discuss the other classifications you considered and why they are not as appropriate as the one you decided upon. The classification decision is not always as straightforward as it might appear at the outset. There will be times when you weigh the data and decide upon one classification over another. In these circumstances, you should state why you arrived at your decision and ruled out the others.

6.3.5 Use Multiple Sources of Data and Methods of Assessment to Support Decision-Making

Gone are the days of using a single method of assessment (e.g., the WISC, Bender, or Draw-A-Person) to infer that a child has an emotional disturbance, a learning disability, or a behavioral issue. Here are the days that require multiple sources of data to inform decision-making via an iterative problem-solving process. When multiple sources of information converge, we can be confident in a decision made about a child. These sources of data may include norm-referenced assessment, functional assessment, interview results, observations, and review of records, as supported by clinical judgment (Dombrowski, 2015).

6.3.6 Eisegesis

Eisegesis is the interpretation of data in such a way that it introduces one's own presuppositions, agendas, or biases into the report (Kamphaus, 2005; Tallent, 1993). Generally, this entails overlooking the data and research evidence and instead applying one's own interpretive schema to the evaluation of the data. Tallent (1993) suggests that selected clinicians' reports can even be identified by the type of judgments that they superimpose upon the data. This is poor practice and should be eschewed.

6.3.7 Be Wary of Using Computer-Generated Reports

Errors abound when computer-generated reports are relied upon uncritically. Be cautious about the practice of cutting and pasting from computer-generated reports; this is poor practice and should be avoided. Butcher et al. (2004) suggest that nearly half of all computer-generated narratives may be inaccurate. For this reason, the apparently sophisticated and well-written computer-generated interpretation and narrative printout should not supplant clinical judgment. It might be tempting to incorporate narrative from computer-generated reports, as many programs present computer scoring and interpretive statements in an organized format similar to what can be found in a psychological report. However, Michaels (2006) notes that simply cutting and pasting may violate ethical principles. While acknowledging some of these limitations, Lichtenberger (2006) points out benefits of using computer-generated reports. She notes that they have the potential to reduce errors in clinical judgment (e.g., overweighting data collected early or late in the evaluation; judgments influenced by information already collected; attending to the information most readily available or recalled), so there may certainly be a place for computer-assisted assessment and report writing. However, it is important to use clinical judgment regarding how such output should be interpreted and presented.

6.3.8 Sparingly Use Pedantic Psychobabble

Your report should be written at a level accessible to most parents/caregivers. You should avoid psychology jargon and overly ornate language and terminology (Dombrowski, 2015; Harvey, 1997, 2006) whenever possible. Instead, write concisely and in terms understandable to most readers. For example, you should avoid a sentence like the following: Jack's display of anhedonic traits likely emanates from ego dystonic features related to a chaotic home life and rearing by a borderline caregiver.

Instead, indicate the following: Jack reported feeling sad and depressed. He explained that his parents are divorcing and this has been quite upsetting for him.


This second sentence is superior in that it uses more accessible language and avoids making etiological statements. Although you are to use psychology jargon sparingly, keep in mind that this guideline is not to be rigidly adhered to in all circumstances. Certain words or phrases cannot be expressed parsimoniously without the use of a psychological term, and in some situations it will be necessary to use psychological terminology. For instance, when describing a child with autism spectrum disorder who repeats a word or phrase over and over, it may be more parsimonious to indicate that the child displays echolalia (i.e., repeats the phrase "Hi Jack" after hearing it). Of course, you should provide an example of what you mean by echolalia. But it may be necessary to use this term as part of explaining how a child meets criteria for an autism spectrum classification. I also understand that this raises the possibility that neophyte psychologists will misinterpret what I am trying to convey and be more apt to write using complex psychological terms. That would be inappropriate.

6.3.9 Avoid Big Words and Write Parsimoniously

This is similar to the above admonition about avoiding psychology jargon. Do not write to impress. Unusually ornate language will obfuscate meaning and obscure the message in your report. Write clearly and succinctly and use accessible language (Shively & Smith, 1969). If a word conveys meaning more precisely and parsimoniously, then by all means use that word. However, be cautious about using an extravagant word unless it enhances meaning or clarity. Harvey (1997, 2006) and others (e.g., Wedding, 1984) conducted studies regarding the readability of psychological reports and found that they were written beyond the level of most parents, the majority of whom read at about an 11th grade level. Harvey noted that the reports were rated at a readability level of anywhere from the 15th to the 17th grade! When these reports were revised to a more readable 6th grade level, they were viewed much more positively and were better understood, even among highly educated parents. It is not entirely known why psychologists write using jargon and highly technical terms. It is suspected that this occurs because we tend to write for professional colleagues or our supervisors (Shectman, 1979). However, it is important to write at a level that is accessible to parents and other stakeholders in the school.
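A rough, automated check of this guideline is possible. The sketch below (Python; a minimal illustration, not a validated readability tool) applies the published Flesch-Kincaid grade-level formula with a crude vowel-run syllable estimate, so its output should be treated as approximate.

    import re

    def syllables(word: str) -> int:
        # Crude estimate: count runs of vowels, with a minimum of one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text: str) -> float:
        # Flesch-Kincaid grade level:
        # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
        n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        n_syllables = sum(syllables(w) for w in words)
        return (0.39 * (len(words) / n_sentences)
                + 11.8 * (n_syllables / len(words)) - 15.59)

    jargon = ("Jack's display of anhedonic traits likely emanates from ego "
              "dystonic features related to a chaotic home life.")
    plain = "Jack reported feeling sad. His parents are divorcing and this upsets him."

    print(fk_grade(jargon))   # roughly grade 16 with this crude estimate
    print(fk_grade(plain))    # roughly grade 6

Consistent with Harvey's findings, the jargon-laden sentence lands in the graduate-reading range while the plain revision lands near the 6th grade level.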

6.3.10 Address the Positive

Much of psychological and psychoeducational report writing is focused on identifying areas of deficit and psychopathology (Brenner & Holzberg, 2000; Tallent, 1993). This is the nature of report writing where access to services is predicated upon a classification of psychopathology. When writing psychoeducational reports, it is important to emphasize positive features of a child's background and functioning (Michaels, 2006; Rhee, Furlong, Turner, & Harari, 2001; Snyder et al., 2006). The inclusion of positive aspects of a child's functioning and a discussion of the child's resiliency is important to caregivers, who intuitively understand that their children are not defined by a simplistic label.

6.3.11 Write Useful, Concrete Recommendations

The recommendations section of a psychoeducational report is arguably its most important section (Brenner, 2003). It offers parents, teachers, and others a way forward for the child. A recommendation that merely reiterates the concerns posed in the referral questions is generally less meaningful. For instance, if a child were referred for reading difficulties, and is found to struggle in this area, then resist a generic recommendation that merely restates the need to address those difficulties. Instead, provide detailed, concrete, and empirically based guidance regarding how the specific deficit in reading will be remediated. In studies asking parents, teachers, and other personnel what they wished to gain from a report, the response is invariably specific recommendations on how to intervene or treat the difficulty (Mussman, 1964; Salvagno & Teglasi, 1987; Tidwell & Wetter, 1978; Umaña et al., 2020; Witt, Moe, Gutkin, & Andrews, 1984).

6.4 Stylistic Issues in Psychoeducational Assessment Report Writing

6.4.1 Report Length

The literature offers no specific guidance about optimal report length (Groth-Marnat & Horvath, 2006; Tallent, 1993), and there is no general threshold for report length that is considered appropriate. More is not always better. Emphasize the presentation of important information that helps the reader conceptualize, classify, and understand the child. Do not include information just to fill space. Clarity is more important than report length (Wiener & Kohler, 1986). Favor parsimony over superfluity.

6.4.2 Revise Once, Revise Twice, Revise Thrice and then Revise Once More

The importance of revising your report cannot be emphasized enough. Keep in mind that a psychoeducational report written for public school districts in the USA is a legal document that becomes part of a student's educational records. This means that the report may be read by numerous stakeholders, including other psychological professionals, teachers, physicians, caregivers, and even attorneys and judges. For this reason, an error-free report is critical. One suggestion would be to write your report, put it down for three days, and then return to it on the fourth day. Obviously, time may be of the essence, particularly with the IDEA-mandated 60-day threshold, but the pressure to meet the 60-day deadline (or that prescribed by your state) should be balanced with the need for an error-free report. In the long run, an error-laden report can pose greater problems than being out of compliance with the 60-day deadline. Save yourself considerable embarrassment and revise your report repeatedly.

6.4.3 Avoid Pronoun Mistakes

A rookie mistake that continues to plague even veteran report writers involves mismatched pronouns. This often emanates from the use of the search-and-replace function in word processing programs. For instance, suppose you use a prior report as a template from which to write a new report. The prior report was written about a female student, but you miss changing /she/ to /he/ in a few places. This is embarrassing, and the psychologist needs to be vigilant about this type of flagrant error. For example, the following passage is problematic: Rick scored at the 50th percentile on a measure of cognitive ability. This instrument evaluates her ability to reason and solve problems.

This error not only degrades the quality of the report, but also undermines the credibility of the psychologist writing it.
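A simple mechanical check can supplement careful proofreading here. The sketch below (Python; a hypothetical helper, and deliberately naive, since it will also flag legitimate references to other people, such as a mother) lists gendered pronouns that do not match the pronoun set recorded for the student.

    import re

    PRONOUN_SETS = {
        "he":  {"he", "him", "his", "himself"},
        "she": {"she", "her", "hers", "herself"},
    }

    def flag_mismatched_pronouns(text: str, student_set: str):
        # Return pronouns in the text that belong to the *other* set.
        unexpected = set().union(*PRONOUN_SETS.values()) - PRONOUN_SETS[student_set]
        words = re.findall(r"[a-z']+", text.lower())
        return sorted({w for w in words if w in unexpected})

    draft = ("Rick scored at the 50th percentile on a measure of cognitive "
             "ability. This instrument evaluates her ability to reason and "
             "solve problems.")
    print(flag_mismatched_pronouns(draft, "he"))   # ['her'] -> leftover template pronoun

Each flagged word still requires human review; the point is only to surface candidates for the kind of error described above before the report leaves your desk.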

6.4.4 Use Headings and Subheadings Freely

The liberal use of headings and subheadings, with underlined, bolded, and italicized formatting, is an effective way to organize your report. Without such an organizational approach, the report may feel overwhelming, cumbersome, and disorganized, and the reader will feel as if he or she is plodding through the document. A disorganized report loses its influence, utility, and credibility.

6.4.5 Provide a Brief Description of Assessment Instruments

Prior to introducing test results, it is important to provide a narrative summary of the instrument whose results are being presented. This description should include the following:

a. A description of the instrument and the construct being evaluated, including the age range and scale (e.g., mean = 100; standard deviation = 15) being used;
b. A description of index and subtest scores;
c. A chart of results to follow the above description; and
d. A narrative description of results.

Over the years, my graduate students have consistently asked me for verbiage regarding a new instrument that they used during the evaluation process. The framework offered above may be useful for this purpose.
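Where the scale is described (e.g., mean = 100, standard deviation = 15), the percentile statements that accompany scores follow directly from the normal curve. A short sketch of that arithmetic (Python, standard library only):

    from math import erf, sqrt

    def standard_score_to_percentile(score, mean=100.0, sd=15.0):
        # Percentile rank implied by a normally distributed standard score:
        # convert to z, then evaluate the normal cumulative distribution.
        z = (score - mean) / sd
        return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

    print(round(standard_score_to_percentile(110)))   # 75
    print(round(standard_score_to_percentile(85)))    # 16

Thus a statement such as "standard score = 110; 75th percentile" is simply the normal CDF evaluated at z = 10/15, or about 0.67.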

6.4.6 Use Tables and Charts to Present Results

Most psychoeducational reports contain tables of test results either within the body of the report or as an addendum to the report. These charts are preceded by an introduction that describes the instrument. This is followed by a chart of test results, which, in turn, is followed by a brief description of the results. The use of a chart to present test results is good practice. On the other hand, reports that merely use a descriptive approach to the presentation of test results are less efficient and more cumbersome. This style of reporting test results is not recommended.

6.4.7 Put Selected Statements Within Quotations to Emphasize a Point

During the course of your interviews with the child, caregivers, or collateral contacts, the interviewees will sometimes make statements that most efficiently and clearly capture the point they are trying to make. In such cases, it is good practice to enclose those statements within quotation marks. For example, we could paraphrase a child who complains of sadness as follows: Jill notes that she is frequently sad and does not wish to participate in activities that used to interest her. She described suicidal ideation.


However, when we enclose direct remarks from Jill, it emphasizes the struggles that she is facing: Jill explained that she "…no longer wants to play soccer, baseball or ride her bike." Jill stated that she does not understand why she does not want to participate: "I just lost interest and feel like crying all the time." Jill also described recent feelings of suicidality: "There are times when I wonder whether it would be better to be dead. I just feel so bad."

As noted above, the child's own statements reveal an additional layer of perspective that does not come through without the direct quotations.

6.4.8 Improve Your Writing Style

By this stage in your education, you should have a decent command of standard written English. If not, you have some work to do. I suggest a thorough reading of the APA Publication Manual and The Elements of Style by Strunk and White. You may also wish to review reports written by experienced psychologists, including those offered in this book. You may have the best analytical skills the world has ever seen in a psychologist, but if your report is not clearly written, then your analytical prowess is wasted.

6.5 Conclusion

The above report writing guidelines, combined with the specific section-by-section guidance that follows, should help the beginning graduate student in school and clinical child psychology avoid many of the report writing pitfalls frequently encountered in the field.

References

Blau, T. H. (1991). The psychological examination of the child. New York, NY: Wiley.
Brenner, E. (2003). Consumer-focused psychological assessment. Professional Psychology: Research and Practice, 34, 240–247.
Brenner, E., & Holzberg, M. (2000). Psychological testing in child welfare: Creating a statewide psychology consultation program. Consulting Psychology Journal: Practice and Research, 52, 162–177.
Butcher, J. N., Perry, J., & Hahn, J. (2004). Computers in clinical assessment: Historical developments, present status, and future challenges. Journal of Clinical Psychology, 60(3), 331–345.
Dombrowski, S. C. (2015). Psychoeducational assessment and report writing. New York, NY: Springer Science.


Dombrowski, S. C., Noonan, K., & Martin, R. P. (2007). Birth weight and cognitive outcomes: Evidence for a gradient relationship in an urban poor African-American birth cohort. School Psychology Quarterly, 22, 26–43. https://doi.org/10.1037/1045-3830.22.1.26
Groth-Marnat, G., & Horvath, L. S. (2006). The psychological report: A review of current controversies. Journal of Clinical Psychology, 62(1), 73–81.
Harvey, V. S. (1997). Improving readability of psychological reports. Professional Psychology: Research and Practice, 28, 271–274.
Harvey, V. S. (2006). Variables affecting the clarity of psychological reports. Journal of Clinical Psychology, 62, 5–18.
Kamphaus, R. W. (2005). Clinical assessment of child and adolescent intelligence. New York, NY: Springer.
Lichtenberger, E. O. (2006). Computer utilization and clinical judgment in psychological assessment reports. Journal of Clinical Psychology, 64, 19–32. https://doi.org/10.1002/jclp.20197
Martin, R. P., & Dombrowski, S. C. (2008). Prenatal exposures: Psychological and educational consequences for children. New York, NY: Springer Science.
Matlin, M. W., & Gawron, V. J. (1979). Individual differences in pollyannaism. Journal of Personality Assessment, 43(4), 411–412. https://doi.org/10.1207/s15327752jpa4304_14
Michaels, M. H. (2006). Ethical considerations in writing psychological assessment reports. Journal of Clinical Psychology, 62, 47–58.
Mussman, M. C. (1964). Teachers' evaluations of psychological reports. Journal of School Psychology, 3, 35–37.
Ownby, R. L. (1997). Psychological reports: A guide to report writing in professional psychology (3rd ed.). New York, NY: Wiley.
Rhee, S., Furlong, M. J., Turner, J. A., & Harari, I. (2001). Integrating strength-based perspectives in psychoeducational evaluations. California School Psychologist, 6, 5–17.
Salvagno, M., & Teglasi, H. (1987). Teacher perceptions of different types of information in psychological reports. Journal of School Psychology, 25, 415–424.
Shectman, F. (1979). Problems in communicating psychological understanding: Why won't they listen to me? American Psychologist, 34, 781–790.
Shively, J., & Smith, D. (1969). Understanding the psychological report. Psychology in the Schools, 6, 272–273.
Snyder, C. R., Ritschel, L. A., Rand, K. L., & Berg, C. J. (2006). Balancing psychological assessments: Including strengths and hope in client reports. Journal of Clinical Psychology, 62, 33–46. https://doi.org/10.1002/jclp.20198
Tallent, N. (1993). Psychological report writing (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Tidwell, R., & Wetter, J. (1978). Parental evaluations of psychoeducational reports: A case study. Psychology in the Schools, 15, 209–215.
Umaña, I., Khosraviyani, A., & Castro-Villarreal, F. (2020). Teachers' preferences and perceptions of the psychological report: A systematic review. Psychology in the Schools, 1–20. https://doi.org/10.1002/pits.22332
Wedding, R. R. (1984). Parental interpretation of psychoeducational reports. Psychology in the Schools, 21, 477–481.
Weiner, J. (1985). Teachers' comprehension of psychological reports. Psychology in the Schools, 22, 60–64.
Weiner, J. (1987). Factors affecting educators' comprehension of psychological reports. Psychology in the Schools, 24, 116–126.
Wiener, J., & Kohler, S. (1986). Parents' comprehension of psychological reports. Psychology in the Schools, 23(3), 265–270.
Witt, J. C., Moe, G., Gutkin, T. B., & Andrews, L. (1984). The effect of saying the same thing in different ways: The problem of language and jargon in school-based consultation. Journal of School Psychology, 22, 361–367.

Part II

Section-by-Section Report Writing Guidelines

Chapter 7

Identifying Information and Reason for Referral

Stefan C. Dombrowski

7.1 Introduction

This chapter offers a discussion of how to present the identifying information and reason for referral sections of a report. Both sections are among the first a reader will encounter in a psychoeducational report. They present important contextual information that sets the stage for the rest of the report.

7.2 Identifying Information

The identifying information section is the initial section of a report and provides demographic information about the child. It will include the full name of the child, the caregivers' names and address, the caregivers' contact information, the child's grade, the child's date of birth, the date of report completion, and the date the report was sent to the caregivers. At the top of the report is the title, often "Psychoeducational Report" or "Psychological Report," with the word "Confidential" placed underneath it. Beneath the word "Confidential" is the identifying information. The following example will furnish the reader with an idea of how this section is formatted. Your school district or university clinic may have a slightly different format for the title and identifying section, but much of what is presented below is fairly standard.


EXAMPLE IDENTIFYING SECTION

Psychoeducational Report
Confidential

Name: Jane Doe                       Date of Birth: 1/20/2013
Grade: 5th                           Age of Child: 10 years 4 months
Date of Report: May 10, 2023         Date Sent: May 11, 2023
Parents: John and Ruby Doe           Examiner's Name: Sean Parker, B.A.
Address: 124 Main Street             Supervisor's Name: Stefan C. Dombrowski, Ph.D.
  Glastonbury, CT
  (203) 555-1212

Local Education Agency: Glastonbury Public Schools

Depending upon the school district, clinic, or psychologist, additional information may be included in the identifying section. This section of the report should be presented in an aesthetically appealing fashion, with a visual balance between the left and right columns noted above, and the title of the report should be centered. Remember, one of the key recommendations in report writing is to make the report aesthetically appealing.
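The "Age of Child" entry is derived from the date of birth and the report date. The sketch below (Python, standard library only; a hypothetical helper using a completed-months convention, which can differ by a month from conventions that round up, as the example above appears to) computes a chronological age string.

    from datetime import date

    def chronological_age(dob: date, on: date) -> str:
        # Completed months between the two dates, expressed as years + months.
        months = (on.year - dob.year) * 12 + (on.month - dob.month)
        if on.day < dob.day:      # the current month is not yet complete
            months -= 1
        years, months = divmod(months, 12)
        return f"{years} years {months} months"

    # Jane Doe from the example above: born 1/20/2013, report dated 5/10/2023.
    print(chronological_age(date(2013, 1, 20), date(2023, 5, 10)))
    # -> "10 years 3 months" under this convention (20 days short of 4 months)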

7.3 Reason for Referral

The reason for referral is an important section and should contain questions that need to be addressed by the report (Brenner, 2003). The omission of specific referral questions is considered by most texts and articles on report writing to be poor practice (Ownby, 1997; Tallent, 1993; Weiner, 1985, 1987). This may well be the case within a broader psychological report, but I am not in complete agreement with the notion that referral questions within a psychoeducational report need to explicitly state all of the issues under investigation by the psychologist. I will say more about this later, but it is often difficult at the outset of an evaluation to know all of the issues faced by a child, and the individual(s) responsible for the referral may not have insight into each and every concern the child faces. Second, the presumption of a US-based psychoeducational report is that the child is being referred to determine eligibility for special education. Implicit in this presumption is that a classification decision will be made, with associated recommendations/accommodations.

The reason for referral helps to focus the report and ensures that issues of concern are addressed by the psychologist. The reason for referral section is generally about a paragraph long and should be written concisely. It is inappropriate to incorporate so much information that this section becomes a mini-background section. Instead, the reason for referral section should discuss specific referral concerns but leave open the possibility that additional issues will be uncovered. Most referrals for psychoeducational reports stem from concerns about a student's cognitive, academic, social, emotional, behavioral, communicative, or adaptive functioning. Often, but not always, the reason for referral will include a referral source. In the case of psychoeducational reports, the source is usually the parents/caregivers or the multidisciplinary team. There are generally two approaches to writing the reason for referral section: one is to include a generic reason for referral, and the other is to list specific referral concerns a priori that need to be addressed. As mentioned, the latter is the approach preferred by most texts and articles on report writing. However, when focusing a report through the provision of referral questions, it is important for the psychologist to be mindful not to overlook additional issues that may need to be addressed.

7.3.1 Generic Referral

The generic referral question is sometimes used for psychoeducational evaluations conducted within a school setting, particularly when the focus of the report is on classification eligibility. There are both advantages and disadvantages to this approach.

Advantages: It permits the psychologist to retain a panoramic perspective and to be less constrained in targeting additional areas of concern once the evaluation has commenced. For instance, at the outset of the evaluation, perhaps there was the suspicion that the child solely had an issue with reading and attention. A specifically targeted referral may overlook the fact that the child also has issues with anxiety and possibly even depression.

Disadvantages: The psychologist may not sufficiently focus the evaluation on specific areas of need or may miss areas that should have been targeted. This could render the evaluation less useful to parents and teachers and potentially off target.

7.3.2 Hybrid Referral

I contend that a listing of specific concerns followed by a more generic rationale for the referral is most appropriate. This permits the psychologist to focus the report on areas of concern while remaining vigilant for other difficulties. The following are three examples of a reason for referral section that I generally find effective. The first example in the box below is more general than the other two but still might be an option for psychoeducational assessment reports.


Example 1: Johnny was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.

Example 2: Jack faces difficulty with low progress in reading. Specifically, he struggles with automatically decoding words and with comprehension of text. Jack also struggles with remaining seated for long periods of time and cannot focus on his classwork. Jack was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.

Example 3: Jack is struggling with understanding social nuance, which tends to hamper his capacity to get along with other children at school. Jack also struggles when his routines at school are changed. Jack was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.

Advantages: This approach focuses the report by providing evaluation goals for the psychologist. It permits the psychologist to thoroughly evaluate the referral questions to be addressed and to offer targeted feedback, while also allowing more latitude to capture additional difficulties not contained in the referral.

7.3.3 Examples of Hybrid Referrals for IDEA Categories

Referral for Suspected Learning Disabilities

Joquim struggles with sounding out words and understanding written text. His teachers have furnished him with additional intervention in this area, but he continues to struggle. Ms. Smith, Joquim's mother, noted that Joquim has had a tutor over the past year but still experiences difficulty with reading. Joquim was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.

Referral for Suspected Emotional Disturbance

Matthew struggles to get along with other children in the classroom. He misperceives other children's intent and views their words and actions toward him as hostile even when this is not the case. When this occurs, Matthew will argue with and often hit, kick, or punch other children. Teacher reports indicate that Matthew disregards teacher redirection and has sometimes cussed out the teacher or thrown his books across the room when frustrated or upset. Matthew was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.


Referral for Suspected Autism

Nick struggles with communicating and socializing at an age-expected level. He rarely makes eye contact when being addressed by or speaking with others. He vocalizes sounds ("eeeaweee aw") repetitively. Nick becomes distressed when his routines are changed and seems lost during times of transition. Nick was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.

Referral for Suspected Intellectual Disability

Tina faces considerable difficulty with all kindergarten tasks. She still cannot distinguish between numbers and letters and cannot identify colors. Tina can recognize only the letters in her name and the letters /a/, /b/, and /c/. Tina enjoys playing with other students and she is generally well liked. However, she sometimes cannot communicate her needs to them, and so she will grab and hit when things do not go her way. Tina was referred for a comprehensive evaluation to (a) gain insight into her present level of functioning, (b) ascertain diagnostic impressions and whether she might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support her learning and behavior in school.

Referral for Suspected Other Health Impairment

Aaron has an outside diagnosis of attention-deficit/hyperactivity disorder (ADHD) from his pediatrician. In class, he struggles with staying seated, remaining on task, and following directions, and he calls out at inappropriate times. With redirection, structure, and support, Aaron is able to persist and complete his classwork. Aaron is presently reading above grade level and is in one of the higher math groups. Ms. Jones, Aaron's mother, sought an evaluation to determine what supports are available to Aaron. Aaron was referred for a comprehensive evaluation to (a) gain insight into his present level of functioning, (b) ascertain diagnostic impressions and whether he might qualify for special education support, and (c) determine treatment recommendations and accommodations that might be appropriate to support his learning and behavior in school.

Referral for Giftedness

Lukas reads several grades above level and has extremely high mathematics abilities. Lukas is interested in a wide variety of subjects and requires differentiated instruction because of his advanced academic skills. Lukas was referred for a comprehensive psychoeducational evaluation to determine whether he qualifies for the district's gifted program.

A combination of the specific and general referral—the hybrid referral—may be the best referral question option to consider. I have found that this combined approach is useful and serves to guide the reader as well as focus the report on salient concerns that led to the referral.

7.4 Conclusion

It is understood that the field of school psychology has moved away from a singular focus on being the gatekeeper of special education, but an important purpose of the report remains determining eligibility for special education services. Because of this, the reason for referral should include a generic statement indicating that the purpose of the report is to determine whether the child qualifies for special education support. The reason for referral should be sufficiently detailed to offer the reader a panoramic perspective—an executive summary of sorts—of what the comprehensive psychoeducational report will entail, but not so detailed as to become a mini-background section.

References

Brenner, E. (2003). Consumer-focused psychological assessment. Professional Psychology: Research and Practice, 34, 240–247.
Ownby, R. L. (1997). Psychological reports: A guide to report writing in professional psychology (3rd ed.). New York, NY: Wiley.
Tallent, N. (1993). Psychological report writing (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Weiner, J. (1985). Teachers' comprehension of psychological reports. Psychology in the Schools, 22, 60–64.
Weiner, J. (1987). Factors affecting educators' comprehension of psychological reports. Psychology in the Schools, 24, 116–126.

Chapter 8

Assessment Methods and Background Information

Stefan C. Dombrowski

8.1 Introduction

This chapter has two parts. The first part discusses the assessment methods section while the second part discusses the background section.

8.2 Assessment Methods

The assessment methods section is important: it provides the reader with a detailed listing of the sources of data used to understand and evaluate the child. It also demonstrates that the clinician has engaged in due diligence and suggests whether the evaluation was indeed comprehensive. If this section is sparse, it might detract from the credibility of the report. The Assessment Methods (or Methods of Assessment) section also serves as a type of table of contents, albeit without page numbering.

The Assessment Methods section should be formatted in a particular way. Generally, it is a good idea to list broadband measures of cognitive ability, achievement, and adaptive behavior, each followed by narrowband measures of the same domain. For instance, a cognitive ability test such as the Stanford-Binet should be listed first, followed by the Woodcock-Johnson Tests of Achievement. These, in turn, are followed by narrowband measures such as the Peabody Picture Vocabulary Test, Fourth Edition and the Comprehensive Test of Phonological Awareness. Following the presentation of cognitive ability and academic achievement measures, broadband measures of behavior (e.g., BASC-3) and then narrowband measures of behavior (e.g., Beck Depression Inventory) are presented. The general framework presented below, along with a specific example, will be useful in elucidating the approach to, and order of, this section.

• Cognitive Ability
  – Narrowband measures
• Academic Achievement (including norm-referenced and curriculum-based measures)
  – Narrowband measures
• Adaptive Behavior
• Broadband behavior tests
  – Narrowband behavior tests
• Listing of Interviewees
• Observations
• Review of Records

Assessment Methods

Wechsler Intelligence Scale for Children—Fifth Edition (WISC-V)
Wechsler Individual Achievement Test—Fourth Edition (WIAT-IV)
Behavior Assessment System for Children, Third Edition (BASC-3)
  -Ms. Jonna Smith (Teacher Rating Scale)
  -Mr. Bill McFly (Teacher Rating Scale)
  -Ms. Jean Gonzalez (Parent Rating Scale)
BASC-3 Student Observation System
  -Joan Baez, Ph.D.
Child Development Questionnaire
  -Ms. Jean Gonzalez
Vineland-II Adaptive Behavior Scales
  -Ms. Jean Gonzalez (Parent/Caregiver Rating Form)
Childhood Autism Rating Scale, Second Edition (CARS-2)
  -John H. Smith, Ph.D.
Student Interview
  -Matthew Gonzalez
Teacher Interviews
  -Jennifer Cramer (Resource Room Language Arts Teacher; 10/29/24)
  -Lauren Crane (Resource Room Math Teacher; 10/22/24)
  -William McMan (In-Class Support Special Education Teacher; 10/17/24)
Speech-Language Pathologist Interview
  -Mary Ann Lares (Speech-Language Pathologist; 10/18/24)
Parent Interview
  -Jean Gonzalez (Mother; 10/10/24 & 10/18/24)
Classroom Observations (10/10/24, 10/12/24 & 10/22/24)
Review of Educational and Psychological Records
  -Psychoeducational Report (Dr. Barbara West; completed 9/9/24)
Review of Pediatrician's Records
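If you maintain report templates programmatically, the broadband-before-narrowband ordering can be encoded directly. The sketch below (Python; the data structure and its contents are illustrative only, drawn from the example above) renders an Assessment Methods listing in the prescribed order.

    # Each entry: (measure or data source, [narrowband measures / informants]).
    # Entries are kept in the prescribed order: broadband cognitive,
    # achievement, and behavior measures first, then interviews,
    # observations, and record review.
    methods = [
        ("Wechsler Intelligence Scale for Children—Fifth Edition (WISC-V)", []),
        ("Wechsler Individual Achievement Test—Fourth Edition (WIAT-IV)", []),
        ("Behavior Assessment System for Children, Third Edition (BASC-3)",
         ["Ms. Jonna Smith (Teacher Rating Scale)",
          "Ms. Jean Gonzalez (Parent Rating Scale)"]),
        ("Classroom Observations (10/10/24, 10/12/24 & 10/22/24)", []),
        ("Review of Educational and Psychological Records",
         ["Psychoeducational Report (Dr. Barbara West; completed 9/9/24)"]),
    ]

    def render_methods(methods):
        lines = ["Assessment Methods"]
        for source, subitems in methods:
            lines.append(source)
            lines.extend(f"  -{item}" for item in subitems)
        return "\n".join(lines)

    print(render_methods(methods))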

8.3 Background Information and Early Developmental History

Within the background and early developmental history section, you will summarize much of the information gathered via observations, questionnaire forms, interview results, and review of educational, psychological, and medical records. Do not, however, incorporate present cognitive, academic, behavioral, and social-emotional assessment results in this section. This section of the report has several components, as noted below:

• Introduction
• Prenatal, Perinatal, and Early Developmental History
• Medical and Health
• Cognitive, Academic, and Language Functioning
• Social, Emotional, Behavioral, and Adaptive Functioning
• Strengths/Interests
• Conclusion

8.3.1 Introduction

Within this component of the background section, you will introduce the child by presenting the child's age, grade, and the salient issues the child faces. It should not be longer than a paragraph. For example, the background section might begin with an introductory statement as follows:

Matthew Osbourne is a six-year-old child in the first grade at the Hopewell Public School (HPS). Matthew faces difficulty with paying attention, organization, and remaining on task. Background information revealed difficulty with reading comprehension. Ms. Osbourne indicates that Matthew has been diagnosed with attention-deficit/hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD). Matthew is not presently taking any medication for the management of his symptoms, but Ms. Osbourne reports that he is scheduled for a psychotropic medication evaluation in mid-October. Matthew is reported to be an even-tempered child, but one who struggles with symptoms of inattention, distractibility, impulsivity, and loss of focus. His behavioral difficulties are having an impact on his educational functioning at school.

8.3.2 Prenatal, Perinatal, and Early Developmental History

This section of the background requires a discussion of pertinent factors that may impact a child's developmental functioning. It is well established that disruptions to, or complications during, prenatal and perinatal development are associated with a host of adverse developmental outcomes (see Dombrowski & Martin, 2009; Dombrowski, Martin, & Huttunen, 2003; Dombrowski, Martin, & Huttunen, 2005; Dombrowski, Noonan, & Martin, 2007; Martin & Dombrowski, 2008; Martin, Dombrowski, Mullis, & Huttunen, 2006). Similarly, children who are delayed in their early development often face later difficulties. For these reasons, a discussion of prenatal, perinatal, and early developmental history is especially important.

Example Prenatal, Perinatal, and Early Developmental History section:

Ms. Jones reports that her pregnancy with Matthew was complicated by premature rupture of membranes at approximately 33 weeks gestation. Ms. Jones indicated that she was prescribed magnesium sulfate to delay labor for several days and was given a second medication (steroids) to enhance Matthew's lung development. Matthew was born at 33 weeks gestation weighing 4 pounds, 8 ounces. His Apgar scores were 7 at one minute and 10 at five minutes. He had a 13-day stay in the neonatal intensive care unit (NICU), during which time he was placed under bilirubin lights for jaundice. Upon release from the NICU, Matthew was given an injection to protect against respiratory syncytial virus (RSV). Ms. Jones noted that Matthew's early developmental milestones were roughly on target. She explained that Matthew rolled over, sat up, and crawled within age-expected limits. Ms. Jones noted that Matthew walked at 13 months of age. She explained that all other milestones were accomplished within normal limits with the exception of babbling; Matthew was not much of a babbler, although he was a very social baby. Ms. Jones noted that Matthew experienced extreme colic until about 6 months of age. She described Matthew as a shy child who experienced distress during his first year of preschool. Ms. Jones expressed regret about putting Matthew in preschool at age 2½ and noted that she should have delayed his preschool entry a year. Otherwise, Ms. Jones described Matthew as a happy and healthy child.

8.3.3 Medical and Health

Psychologists view the world through a psychological lens and tend to focus their gaze on psychological factors. However, there are numerous medical and health conditions that need to be ruled out because they could present as a symptom of one of the IDEA/DSM categories yet have their origins in a health or medical condition. There are myriad medical and genetic conditions, some of which are widely known (e.g., Down syndrome or fetal alcohol syndrome) and others less well known (e.g., Chiari malformation). Suffice it to say that information from pediatric and other medical providers should be ascertained so that important medical or health conditions are investigated.

Example Medical and Health Section: Ms. Winkler noted that Isabella was born with Tetralogy of Fallot, a congenital heart defect. Inspection of records from Isabella’s pediatrician also revealed a food allergy to tree nuts, asthma, and sickle cell anemia. Ms. Winkler indicated that Isabella carries an inhaler and noted that she had to expend considerable effort to document with Hometown Public Schools that it was a medically necessary medication, since the school rarely permits students to carry inhalers. Isabella’s hearing and vision are intact. Isabella’s physical health is otherwise intact and age appropriate.


8.3.4 Cognitive, Academic, and Language Functioning

A large percentage of psychoeducational evaluations will have as their primary focus a child’s cognitive, academic, and language functioning. The child’s functioning in these areas, therefore, will need to be thoroughly discussed. Within this component of the background section, you will not present results from the norm-referenced measures you have just administered; those will be discussed later in the report within a separate section. You will only discuss results from outside evaluations or from a prior report. You will, however, summarize the child’s progress within the classroom as noted by parents, teachers, and grade reports.

Example Cognitive, Academic, and Language Functioning Section: Juan presently experiences significant difficulty with reading and writing in the third grade. He struggles with word decoding, spelling, and reading comprehension. He also struggles with expressive language. English is Juan’s second language and is not spoken in the home; at home, Juan speaks only Spanish. He has received English language learner (ELL) services since his arrival at Newfield Public School in first grade, but his teachers do not feel that he is making appropriate progress in the third grade curriculum and wonder whether he might have a learning disability. Additionally, Juan struggles with pronouncing words that begin with /r/ and /fr/. He is presently receiving speech-language support. His expressive language functioning was found to be in the below average range (see Speech-Language evaluation dated 2/23/24). Juan’s cognitive ability was previously evaluated in first grade after moving from Guatemala. His performance on the UNIT (Standard Score = 110; 75th percentile) was high average. His academic achievement was not assessed at that time because he had moved to the USA only six months earlier.

8.3.5 Social, Emotional, Behavioral, and Adaptive Functioning

Within this area of the background section, you will discuss the child’s functioning in the social, emotional, behavioral, and adaptive domains from early in development through the present. You should not include a detailed developmental history within this section, but you may indicate the continuity of a difficulty or strength from earlier phases of development. For instance, if the child has struggled with social skills since preschool and continues to struggle with such difficulties, then it is appropriate to discuss this information. Likewise, if the child has always had difficulty with overactivity, task persistence, and organization, then it is appropriate to mention those characteristics.


Example Social-Emotional and Behavioral Functioning Section: Jayden has always struggled in his interactions with peers. He is primarily nonverbal and will only occasionally use simple language to communicate his needs. Jayden rarely participates in group activities. Jayden’s social and behavioral functioning at school has improved over the past year. He less frequently engages in behaviors that annoy other children and has learned to follow the basic classroom routines in his first grade room. Ms. Wong, Jayden’s teacher, reports that he unpacks every morning, is able to transition to different activities, and usually stays in his "spot" whether at a table or sitting on the rug. Most academic work is too difficult for Jayden, but he will pretend to do what everyone else is doing; he looks around at what others are doing and tries to copy them. Ms. Wong indicates that Jayden very much wants to feel part of the classroom and wants to be able to participate on his own (without an adult sitting with him). Jayden’s interest in affiliating with and emulating peers is a significant strength and suggests a continued need for access to age-typical peers.

8.3.6 Strengths and Interests

It is good practice to conduct a strength-based assessment of children’s skills. This is required by IDEA and is also consistent with the positive psychology literature. As part of this process, the child’s hobbies and interests should be ascertained. Much of report writing emphasizes the difficulties faced by children, which serves to make the case for the classification decision that will be made. However, it is difficult for caregivers to hear the negative aspects of their children discussed and emphasized. Imagine spending twenty or more minutes listening to a team of individuals detail where and how your child is struggling; this would be fairly disconcerting. A strengths-based assessment of the child’s functioning, along with the child’s hobbies and interests, is therefore an important component of the psychoeducational report.

Example Strengths and Interests Section: Mike is a child who is polite, helpful, and gets along well with others. He is good at drawing and enjoys most sports, particularly baseball and soccer. Mike has several close friends with whom he plays Minecraft and builds model airplanes. Ms. Jones explains that Mike is family-oriented and helps her out with his younger sister. She explained that he is a compassionate child with many friends.

8.3.7 Conclusion

Within the conclusion to the background section, you will provide an overarching statement of the problem faced by the child that supports the case for the psychoeducational evaluation.


Example Conclusion Section: Jayden has made progress in his behavioral and social functioning since his last evaluation. Background information suggests continued difficulties in these areas and a continued need for accommodation of social, behavioral, communication, and academic difficulties.

As noted above, the conclusion is generally a brief statement that ties all aspects of the background and developmental history together and leaves the reader with the understanding that an evaluation is necessary.

8.4 Conclusion

The assessment methods section and the background and developmental history section contain critically important information. The assessment methods section serves as a table of contents for the sources you used to conduct your assessment; everything reviewed or administered should be listed there. The background and developmental history section furnishes an overview of the child’s functioning across multiple domains. Generally, the results from the assessment instruments just administered should not be incorporated into the background and developmental history section; they will be presented in a different section of the report.

References

Dombrowski, S. C., & Martin, R. P. (2009). Maternal fever during pregnancy: Association with infant, preschool and child temperament. Saarbrucken, Germany: VDM Verlag.
Dombrowski, S. C., Martin, R. P., & Huttunen, M. O. (2003). Association between maternal fever and psychological/behavioral outcomes: An hypothesis. Birth Defects Research Part A: Clinical and Molecular Teratology, 67, 905–910.
Dombrowski, S. C., Martin, R. P., & Huttunen, M. O. (2005). Gestational smoking imperils the long term mental and physical health of offspring. Birth Defects Research Part A: Clinical and Molecular Teratology, 73, 170–176.
Dombrowski, S. C., Noonan, K., & Martin, R. P. (2007). Birth weight and cognitive outcomes: Evidence for a gradient relationship in an urban poor African-American birth cohort. School Psychology Quarterly, 22(1), 26–43.
Martin, R. P., & Dombrowski, S. C. (2008). Prenatal exposures: Psychological and educational consequences for children. New York: Springer Science.
Martin, R. P., Dombrowski, S. C., Mullis, C., & Huttunen, M. O. (2006). Maternal smoking during pregnancy: Association with temperament, behavioral, and academics. Journal of Pediatric Psychology, 31(5), 490–500.

Chapter 9
Assessment Results

Stefan C. Dombrowski

9.1 Introduction

The assessment results section contains an organized presentation of all the norm-referenced (e.g., standardized test results) and informal (e.g., curriculum-based measures, observations, interview results) assessment results that have been gathered during the assessment process. This section is structured with headings and subheadings to create an organized flow and ease of reading. The end user of the report should be able to easily locate sections of the report such as full-scale IQ scores or ratings on behavior assessment instruments.

9.2 Organization of Assessment Results Section

The assessment results section has a generalized organizational format with four major headings:

1. Cognitive ability and academic achievement
2. Social, emotional, behavioral, and adaptive functioning
3. Observations
4. Interview results

The first two major sections will be organized such that broad-band, norm-referenced measures are presented first, followed by narrow-band, norm-referenced measures. After all norm-referenced measures are presented within these first two sections, informal measures such as CBA/CBM and any additional instruments are presented. Following this presentation, observations and interview results are presented in separate subsections. The formatting of the assessment results section will look like the following (see the sketch after this outline):

Assessment Results

Cognitive Ability and Academic Achievement
• Full-Scale IQ Measures (e.g., WISC-V; WJ-IV Cognitive)
  – Narrow-band cognitive ability measures (e.g., CTOPP; memory, executive functioning, visual-motor functioning)
• Broad-Band Achievement Measures
  – Narrow-band, full-scale achievement measures (e.g., KeyMath; WRMT)

Social, Emotional, Behavioral, and Adaptive Functioning
• Broad-band behavior rating scales (e.g., BASC-3)
  – Narrow-band behavior rating scales (e.g., Beck Depression Inventory)
• Broad-band adaptive behavior scales (e.g., Vineland)

Interview Results
• Student Interview
• Parent Interviews
• Teacher Interviews
• Other Interviews

Observations
• Classroom Observations
• Assessment Observations
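For practitioners who maintain report templates electronically, this ordering can be captured in a simple data structure. The following Python sketch is purely illustrative; the variable name and heading strings are assumptions drawn from the outline above, not a prescribed schema:

# Illustrative only: the outline above, expressed as a nested structure
# that a report template or generator could iterate over in order.
ASSESSMENT_RESULTS_OUTLINE = {
    "Cognitive Ability and Academic Achievement": [
        "Full-scale IQ measures (e.g., WISC-V; WJ-IV Cognitive)",
        "Narrow-band cognitive ability measures (e.g., CTOPP)",
        "Broad-band achievement measures",
        "Narrow-band achievement measures (e.g., KeyMath; WRMT)",
    ],
    "Social, Emotional, Behavioral, and Adaptive Functioning": [
        "Broad-band behavior rating scales (e.g., BASC-3)",
        "Narrow-band behavior rating scales (e.g., Beck Depression Inventory)",
        "Broad-band adaptive behavior scales (e.g., Vineland)",
    ],
    "Interview Results": [
        "Student Interview", "Parent Interviews",
        "Teacher Interviews", "Other Interviews",
    ],
    "Observations": ["Classroom Observations", "Assessment Observations"],
}

# Print the skeleton in presentation order.
for heading, subsections in ASSESSMENT_RESULTS_OUTLINE.items():
    print(heading)
    for subsection in subsections:
        print("  -", subsection)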

9.3 Format for Presentation of Assessment Instruments

When presenting any assessment instrument, whether norm-referenced or informal, it is important to provide the instrument’s title along with a narrative description of the instrument, followed by a chart detailing the results. This presentation has three main characteristics:


(1) An underlined title for the test, which makes for easy reference to the instrument when a parent or professional wishes to locate that section in the report.
(2) A description of the instrument that includes what the instrument measures, the age range, and the scaling (e.g., a mean of 100 with a standard deviation of 15, or a mean of 50 with a standard deviation of 10), along with a confidence interval. I use the 95% confidence interval. The test manual may be referenced for the appropriate verbiage describing the instrument.
(3) Following the general description, a chart that incorporates the names of the indexes and subtests in addition to their respective scores, percentiles, confidence intervals, and descriptive classifications.

Test of Cognitive Ability
The Test of Cognitive Ability is an individually administered measure of intellectual functioning normed for individuals between the ages of 3 and 94 years. The Test of Cognitive Ability contains several individual tests of problem solving and reasoning ability that are combined to form a Verbal Intelligence Index (VIQ) and a Nonverbal Intelligence Index (NIQ). The subtests that compose the VIQ assess verbal reasoning ability and vocabulary and are a reasonable approximation of crystallized intelligence. The NIQ comprises subtests that assess nonverbal reasoning and spatial ability and is a reasonable approximation of fluid intelligence and spatial ability. These two indexes of intellectual functioning are then combined to form a full-scale IQ.

                 Score   Percentile   95% conf. interval   Descriptive classification
Full-scale IQ    100     50           96–104               Average
Verbal IQ        97      48           92–103               Average
Nonverbal IQ     103     53           98–108               Average

Jackie scored in the average range on the Test of Cognitive Ability (FSIQ = 100; 50th %ile), with similarly average scores on the Verbal IQ (Standard Score = 97; 48th %ile) and Nonverbal IQ (Standard Score = 103; 53rd %ile).

The inclusion of a chart is critically important and should not be omitted. Parents, and especially other professionals, can expediently make their own determination of a child’s functioning from viewing the numbers presented within the charts. The chart formatting is fairly straightforward and offers information on the names of the tests, the standard scores, the percentiles, the confidence intervals, and the descriptive classifications.
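Where a manual reports a score’s reliability rather than ready-made intervals, the 95% confidence interval can be approximated from the standard error of measurement. The Python sketch below is a minimal illustration of that textbook formula, not a replacement for the intervals printed in a test manual (manuals often compute intervals around estimated true scores instead); the reliability value shown is hypothetical:

from math import sqrt

def confidence_interval(score, sd, reliability, z=1.96):
    """Approximate 95% CI around an obtained score.
    SEM = SD * sqrt(1 - reliability); CI = score +/- z * SEM."""
    sem = sd * sqrt(1.0 - reliability)
    return round(score - z * sem), round(score + z * sem)

# Hypothetical composite on a mean-100/SD-15 metric with reliability of .95
print(confidence_interval(100, sd=15, reliability=0.95))  # -> (93, 107)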


One needs only to glance at a test publisher’s catalog or the Mental Measurements Yearbook to understand that descriptive verbiage for the universe of assessment instruments cannot be provided in this text. However, the approach to presentation offered in this chapter may be readily adopted and used with any assessment instrument you decide to use. The description of any norm-referenced instrument can be adapted directly from the test publisher’s manual. As noted in the report writing chapter, each test should have a title in bold formatting to set it apart from the rest of the document and make it easier for the reader to locate the discussion of the instrument within the body of the report.

This is not the section in which to integrate and synthesize information. This is the section where scores are simply presented. The integration, synthesis, and conceptualization of collected data, including test results, will occur in another section of the report. For example, when a child scores a 100 on the WISC-V Full-Scale IQ and a 65 on the Working Memory Index, you are to report the following: Sarah scored in the average range on the WISC-V FSIQ (Standard Score = 100; 50th percentile) and in the below average range on the Working Memory Index (Standard Score = 65; 2nd percentile). Do not interpret by discussing the child’s memory abilities and how the child’s low memory abilities might contribute to difficulties with reading comprehension and acquisition of number facts. This would move into the realm of conceptualization, which is reserved for a later section of the report.

As another example, consider a child who scored a 65 on the BASC-3 Internalizing Composite and a 73 on the Depression clinical scale. For the BASC-3 performance, you should only report that the student scored in the at-risk range on the Internalizing Composite and in the clinically significant range on the Depression scale. Do not, at this point in the report, launch into a discussion of how the child’s findings on the BASC-3 are consistent with a clinically significant elevation on the Beck Depression Inventory, Second Edition and with teacher impressions that the child is sad relative to other fourth grade children. Instead, simply state where the child scored (average, at-risk, or clinically significant range). There will be time within the conceptualization and classification section to synthesize and integrate information to discuss the possibility of depression and to rule out the need for additional evaluation in this area, and to offer recommendations for treatment within the recommendation section.


There is another important point that must be made: be wary of excluding numbers from your report and reporting only descriptive classification information. I understand the temptation to take this approach to spare the feelings of the parent, or because the practitioner may think the scores are invalid, but this practice degrades the value of the report and potentially harms the credibility of the psychologist. If you feel that an instrument’s results are invalid, then express that sentiment with supporting evidence. The clinical aptitude of psychologists who exclude numbers from the report without furnishing a valid clinical or psychometric reason for doing so could be called into question. Let us consider the following examples using the Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2) and the Behavior Assessment System for Children, Third Edition (BASC-3).

Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2)
Malayiah was administered the Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2). The RIAS-2 is an individually administered measure of intellectual functioning normed for individuals between the ages of 3 and 94 years. The RIAS-2 contains several individual tests of intellectual problem solving and reasoning ability that are combined to form a Verbal Intelligence Index (VIX) and a Nonverbal Intelligence Index (NIX). The subtests that compose the VIX assess verbal reasoning ability along with the ability to access and apply prior learning in solving language-related tasks. Although labeled the Verbal Intelligence Index, the VIX is also a reasonable approximation of crystallized intelligence. The NIX comprises subtests that assess nonverbal reasoning and spatial ability. Although labeled the Nonverbal Intelligence Index, the NIX also provides a reasonable approximation of fluid intelligence and spatial ability. These two indexes of intellectual functioning are then combined to form an overall Composite Intelligence Index (CIX). By combining the VIX and the NIX into the CIX, a strong, reliable assessment of general intelligence (g) is obtained. The CIX measures the two most important aspects of general intelligence according to recent theories and research findings: reasoning or fluid abilities and verbal or crystallized abilities.

The RIAS-2 also contains subtests designed to assess verbal memory and nonverbal memory. Depending upon the age of the individual being evaluated, the verbal memory subtest consists of a series of sentences, age-appropriate stories, or both, read aloud to the examinee. The examinee is then asked to recall these sentences or stories as precisely as possible. The nonverbal memory subtest consists of the presentation of pictures of various objects or abstract designs for a period of 5 seconds. The examinee is then shown a page containing six similar objects or figures and must discern which object or figure was previously shown. The scores from the verbal memory and nonverbal memory subtests are combined to form a Composite Memory Index (CMX), which provides a strong, reliable assessment of working memory and may also provide indications as to whether or not a more detailed assessment of memory functions is required. In addition, the high reliability of the verbal and nonverbal memory subtests allows them to be compared directly to each other. Each of these indexes is expressed as an age-corrected standard score that is scaled to a mean of 100 and a standard deviation of 15. These scores are normally distributed and can be converted to a variety of other metrics if desired. Following are the results of Malayiah’s performance on the RIAS-2.

RIAS-2 index          Score   Percentile   95% conf. interval   Descriptive classification
Composite IQ (CIX)    89      23           84–95                Low average
Verbal IQ (VIX)       82      12           76–90                Low average
Nonverbal IQ (NIX)    101     53           95–107               Average
Memory index (CMX)    86      18           80–93                Low average

On testing with the RIAS-2, Malayiah earned a Composite Intelligence Index (CIX) of 89. On the RIAS-2, this level of performance falls within the range of scores designated as low average and exceeded the performance of 23% of individuals Malayiah’s age. Her Verbal IQ (Standard Score = 82; 12th %ile) was in the low average range and exceeded 12% of individuals Malayiah’s age. Her Nonverbal IQ (Standard Score = 101; 53rd %ile) was in the average range, exceeding 53% of individuals Malayiah’s age. Malayiah earned a Composite Memory Index (CMX) of 86, which falls within the low average range of working memory skills and exceeds the performance of 18 out of 100 individuals Malayiah’s age.

Behavior Assessment System for Children, Third Edition (BASC-3)
The Behavior Assessment System for Children, Third Edition (BASC-3) is an integrated system designed to facilitate the differential diagnosis and classification of a variety of emotional and behavioral conditions in children. It possesses validity scales and several clinical scales, which reflect different dimensions of a child’s personality. On the clinical scales, scores in the clinically significant range (T-score of 70 or above) suggest a high level of difficulty, while scores in the at-risk range (T-score of 60–69) identify either a significant problem that may not be severe enough to require formal treatment or the potential for developing a problem that needs careful monitoring. On the adaptive scales, scores of 30 or below are considered clinically significant while scores between 31 and 40 are considered at-risk.

Scale                        Ms. Smith T-score   Percentile
Hyperactivity                85**                99
Aggression                   72**                95
Conduct problems             85**                99
Anxiety                      43                  26
Depression                   47                  51
Somatization                 46                  46
Attention problems           73**                99
Learning problems            73**                99
Atypicality                  43                  19
Withdrawal                   55                  74
Adaptability                 30**                3
Social skills                41                  22
Leadership                   47                  43
Study skills                 38                  15
Functional communication     43                  25
Externalizing problems       83**                99
Internalizing problems       44                  30
Behavioral symptoms index    66*                 93
Adaptive skills              38                  12
School problems              67*                 94

Clinical scales: *At-risk rating (T = 60–69); **Clinically significant rating (T = 70+)
Adaptive scales: *At-risk rating (T = 31–40); **Clinically significant rating (T ≤ 30)

The above results indicate a clinically significant elevation on the externalizing problems composite, with at-risk scores on the behavioral symptoms index and school problems composites. The results also indicate clinically significant elevations on the hyperactivity, aggression, conduct problems, attention problems, and learning problems clinical scales, along with a clinically significant low score on the adaptability adaptive scale. All other composites and scales were in the average range.
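Because high scores are problematic on the clinical scales while low scores are problematic on the adaptive scales, it is easy to misclassify a rating when summarizing a protocol by hand. The following minimal Python sketch encodes the cut points described above; the function and parameter names are illustrative, not part of the BASC-3 materials:

def basc_range(t_score, adaptive=False):
    """Map a BASC-3 T-score to the descriptive ranges described above."""
    if adaptive:
        # Adaptive scales: low scores signal difficulty.
        if t_score <= 30:
            return "clinically significant"
        if t_score <= 40:
            return "at-risk"
        return "average"
    # Clinical scales: high scores signal difficulty.
    if t_score >= 70:
        return "clinically significant"
    if t_score >= 60:
        return "at-risk"
    return "average"

print(basc_range(85))                 # clinically significant (e.g., hyperactivity)
print(basc_range(66))                 # at-risk (e.g., behavioral symptoms index)
print(basc_range(30, adaptive=True))  # clinically significant (e.g., adaptability)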


As noted in the above examples, each of these tests is interpreted in a hierarchical fashion, starting with the global composite and then moving to the index level. Interpretation at the subtest level within tests of cognitive ability is a practice that remains hotly debated. Several researchers have cautioned the field about such practice (Dombrowski, 2013, 2014a, 2014b; Dombrowski & Watkins, 2013; McDermott, Fantuzzo, & Glutting, 1990; McDermott & Glutting, 1997; McGill, Dombrowski, & Canivez, 2018; Watkins & Canivez, 2004). Still, others suggest it may be an acceptable practice when placed in the context of a well-grounded theory (e.g., Fiorello et al., 2007; Keith & Reynolds, 2010). The debate is sure to continue, but the scientific evidence clearly suggests that subtest-level interpretation of cognitive ability tests should be eschewed (Canivez, Dombrowski, & Watkins, 2018; Canivez et al., 2020; Canivez, Watkins, & Dombrowski, 2016, 2017; Dombrowski, Canivez, & Watkins, 2018; Dombrowski, McGill, & Canivez, 2017, 2018a, 2018b; Dombrowski, McGill, Canivez, & Peterson, 2017; Dombrowski, Golay, McGill, & Canivez, 2018; Dombrowski, McGill, & Morgan, 2019; McGill & Dombrowski, 2018).

9.4 Understanding Standard and Scaled Scores

Your basic measurement class should have furnished you with the background to understand the relationship among standard scores, scaled scores, and percentile ranks. These scores are all based on the normal curve. You should be able to quickly and efficiently approximate the standard score when given a percentile rank or a scaled score. If not, I recommend spending time revisiting the normal curve and reviewing the chart below. Most standardized assessment instruments furnish standard scores with a mean of 100 and a standard deviation of 15. Some of these tests, primarily tests of intellectual abilities, also include scaled subtest scores, which have a mean of 10 and a standard deviation of 3. Behavior rating scales such as the BASC-3 furnish T-scores with a mean of 50 and a standard deviation of 10. However a norm-referenced score is scaled, it can be converted to a percentile rank for ease of interpretation. The chart below serves as a guide for approximating the percentile rank of various scores; a short conversion sketch follows it. The test manual should be consulted for the specific relationship between the scale and the percentile rank, as they can differ slightly from the chart below depending upon the age of the child and the norming process of the instrument.

Standard score   Scaled score   T-score   Percentile
145              19             80        99.9
140              18                       99.6
135              17                       99
125              15                       95
120              14                       91
115              13             60        84
110              12                       75
105              11                       63
100              10             50        50
95               9                        37
90               8                        25
85               7              40        16
80               6                        9
75               5                        5
65               3                        1
60               2                        0.5
55               1              20        0.1

*Interquartile range = 90–110; threshold for intellectual disability = 70 (two standard deviations below the mean); threshold for giftedness = 130 (two standard deviations above the mean)
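For readers who want to move between these metrics without memorizing the chart, the conversions all pass through the z-score on the normal curve. The Python sketch below uses only the standard library; it approximates the chart’s values and, like the chart, is no substitute for a test manual’s norm tables:

from statistics import NormalDist

# Each metric is a normal distribution with its own mean and SD.
SCALES = {
    "standard": NormalDist(mu=100, sigma=15),  # e.g., composite IQ scores
    "scaled":   NormalDist(mu=10, sigma=3),    # e.g., IQ subtest scores
    "t":        NormalDist(mu=50, sigma=10),   # e.g., BASC-3 T-scores
}

def to_percentile(score, scale):
    """Approximate the percentile rank of a norm-referenced score."""
    return SCALES[scale].cdf(score) * 100

def convert(score, from_scale, to_scale):
    """Convert between metrics via the shared z-score."""
    z = SCALES[from_scale].zscore(score)
    target = SCALES[to_scale]
    return target.mean + z * target.stdev

print(round(to_percentile(90, "standard")))   # 25 -- better than 25% of agemates
print(round(to_percentile(115, "standard")))  # 84
print(convert(115, "standard", "t"))          # 60.0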

You should know fluently several points on the normal curve (see Fig. 9.1), including the mean, the interquartile range, and one, two, and three standard deviations above and below the mean, along with the corresponding percentile ranks.

There is one important point to keep in mind and perhaps convey to caregivers: the percentile rank is not akin to the percentage correct on a given test. Rather, the percentile rank provides a comparison of how well a child did relative to his or her agemates. Take, for instance, a standard score of 90. This equates to a percentile rank of 25 and indicates that the child did better than 25% of the normative group (or 25 out of 100 students of the same age taking the test). This is quite different from a score of 90% on an exam. Some caregivers may confuse the two scales, so this will need to be explained to them.

In terms of classification, I find the following descriptive classification framework to be useful. With this framework, the average range falls between a standard score of 90 and 110 (25th to 75th percentile), which reflects the interquartile range. A gifted/superior score is 130 and above, while a score in the significantly below average/delayed range is 70 and below.
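A classification framework like this is straightforward to encode. The Python sketch below is illustrative only: the average, gifted/superior, and significantly below average/delayed bands follow the framework just described, while the intermediate labels are common conventions I have assumed rather than the framework’s exact wording:

def descriptive_classification(standard_score):
    """Map a standard score (mean 100, SD 15) to a descriptive label."""
    if standard_score >= 130:
        return "gifted/superior"
    if standard_score > 110:
        return "above average"   # assumed intermediate label
    if standard_score >= 90:
        return "average"         # interquartile range (25th-75th percentile)
    if standard_score > 70:
        return "below average"   # assumed intermediate label
    return "significantly below average/delayed"

print(descriptive_classification(105))  # average
print(descriptive_classification(132))  # gifted/superior
print(descriptive_classification(68))   # significantly below average/delayed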