Children With Hearing Loss: Developing Listening and Talking, Birth to Six, Fourth Edition. ISBN 9781635501544, 1635501547


English, 429 pages, 2020


Table of Contents
Preface
Acknowledgments
Part I. Audiological and Technological Foundations of Auditory Brain Development
1. Neurological Foundations of Listening and Talking: We Hear With the Brain
Introduction
Begin Conversations with the Critical Question: What Is the Family’s Desired Outcome?
Typical Infants: Listening and Language Development
Auditory Neural Development
New Context for the Word Deaf
Hearing Versus Listening
A Model of Hearing Loss: The Invisible Acoustic Filter Effect
Putting It All Together in a Counseling Narrative: Think About Hearing Loss as a Doorway Problem
Summary
Next Steps: What Will It Take to Optimize the Probability of Attaining a Listening and Spoken Language Outcome
2. The Audiovestibular System
The Nature of Sound
Subconscious Function
Signal Warning Function
Spoken Communication Function
Acoustics
Audibility Versus Intelligibility of Speech
The Ling 6-7 Sound Test: Acoustic Basis and Description
Audiovestibular Structures
Data Input Analogy
Outer and Middle Ear
Inner Ear to the Brain
The Vestibular System: The Sensory Organs of Balance
3. Hearing and Hearing Loss in Infants and Children
Introduction
Classifications
Degree (Severity): Minimal to Profound
Timing: Congenital or Acquired
General Causes: Endogenous, Exogenous, or Multifactorial
Genetics, Syndromes, and Dysplasias
Connexin 26
Genetic Testing
Syndromes
Inner Ear Dysplasias
Medical Aspects of Hearing Loss
Conductive Pathologies and Hearing Loss
Sensorineural Pathologies and Hearing Loss
Mixed, Progressive, Functional, and Central Hearing Losses
Synergistic and Multifactorial Effects
Auditory Neuropathy Spectrum Disorder (ANSD)
Vestibular Issues
Summary
4. Diagnosing Hearing Loss
Introduction
Newborn Hearing Screening and EHDI Programs
Test Equipment and Test Environment
Audiologic Diagnostic Assessment of Infants and Children
Test Protocols
Pediatric Behavioral Tests: BOA, VRA, CPA, Speech Perception Testing
Electrophysiologic Tests: OAE, ABR/ASSR, and Immittance
The Audiogram
Configuration (Pattern) of Thresholds on the Audiogram
Formulating a Differential Diagnosis
Sensory Deprivation
Ambiguity of Hearing Loss
Measuring Distance Hearing
Summary
5. Hearing Aids, Cochlear Implants, and Remote Microphone (RM) Systems
Introduction
For Intervention, First Things First: Optimize Detection of the Complete Acoustic Spectrum
Listening and Learning Environments
Distance Hearing/Incidental Learning and S/N Ratio
ANSI/ASA S12.60-2010: Acoustical Guidelines for Classroom Noise and Reverberation
Talker and Listener Physical Positioning
Amplification for Infants and Children
Hearing Aids
Bone Anchored Implants for Children (Also Called Osseointegrated [Osseo] Implants) or Bone Conduction Hearing Devices
Wireless Connectivity
HATs for Infants and Children: Personal-Worn RM and Sound-Field FM and IR (Classroom Amplification) Systems
Cochlear Implants
Auditory Brainstem Implant (ABI)
Measuring Efficacy of Fitting and Use of Technology
Equipment Efficacy for the School System
Conclusion
Part II. Developmental, Family-Focused Instruction for Listening and Spoken Language Enrichment
6. Intervention Issues
Basic Premises
Differentiating Dimensions Among Intervention Programs
Challenges to the Process of Learning Spoken Language
Late to Full-Time Wearing of Appropriate Amplification or Cochlear Implant(s)
Disabilities in Addition to the Child’s Hearing Loss
Ongoing, Persistent Noise in the Child’s Learning Environment
Multilingual Environment
Educational Options for Children with Hearing Loss, Ages 3 to 6
7. Auditory “Work”
Introduction
The Primacy of Audition
The Acoustics-Speech Connection
Intensity/Loudness
Frequency/Pitch
Duration
The Effect of Hearing Loss on the Reception of Speech
A Historical Look at the Use of Residual Hearing
The Concept of Listening Age
Auditory Skills and Auditory Processing Models
Theory of Mind and Executive Functions
How to Help a Child Learn to Listen in Ordinary, Everyday Ways
Two Examples of Auditory Teaching and Learning
Scene I: Tony
Scene II: Tamara
Targets for Auditory/Linguistic Learning
A Last Word
8. Spoken Language Learning
Introduction
What’s Involved in Talking?
Intentionality/Speech Acts
Presuppositional Knowledge
Discourse/Conversational Conventions
Other Essential Rule Systems in English
How Does a Child Learn to Talk?
Relevance for Intervention Decisions
How Should Intervention Be Organized?
9. Constructing Meaningful Communication
Introduction
The Affective Relationship
The Child’s Development of Interactional Abilities
Joint Reference, or Joint Attention
Turn-Taking Conventions
Signaling of Intention
Characteristics of Caregiver Talk
1. Content: What Gets Talked About?
2. Prosody: What Does Motherese Sound Like?
3. Semantics and Syntax: What About Complexity?
4. Repetition: Say It or Play It Again
5. Negotiation of Meaning: Huh?
6. Participation-Elicitors: Let’s (Keep) Talk(ing)
7. Responsiveness
Issues About Motherese
How Long Is Motherese Used?
Motherese: Why Do We Use It?
Motherese: Is It Immaterial or Facilitative?
10. Interacting in Ways That Promote Listening and Talking
Introduction
The Emotional Impact of a Child’s Hearing Loss on the Family
Adult Learning
What Parents Need to Learn
Role of the LSL Practitioner
Components of Intervention for Babies and Young Children with Hearing Loss
When to Talk with Your Child and What to Talk About
A Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children with Hearing Loss
Background and Rationale
Structure of the Framework
Getting a Representative Sample of Interacting
Discussing the Framework with Parents
Ways of Addressing Parent-Chosen Interactional Targets
Determining and Sequencing Targets Specific to the Child’s Development of Auditory, Language, and Speech Development
Relationship Between Family and LSL Practitioner
Teaching Through Incidental and Embellished Interacting
Teaching Through Incidental Interacting
Embellishing an Incidental Interaction
Teaching Spoken Language Through Embellished Interacting
Teaching Listening (Audition) Through Embellished Interacting
Teaching Speech Through Embellished Interacting
Preplanned Parent Guidance Sessions or Auditory-Verbal Therapy/Instructional Sessions
Where Should the Auditory-Verbal Therapy (LSL)/Instructional Sessions Occur?
What Happens in an Auditory-Verbal Therapy/Instructional Session to Address Child Targets?
Components to Be Accomplished in a Typical Preplanned Session to Address Child Targets
Sample Preplanned Scenario
Substructure
About the Benefits and Limitations of Preplanned Teaching
What Does the Research Say?
Appendix 1: How to Grow Your Baby’s or Child’s Brain Through Daily Routines
Appendix 2: Application and Instructions for the Ling 6-7 Sound Test for Distance Hearing
Appendix 3: Targets for Auditory/Verbal Learning
Appendix 4: Explanation for Items on the Framework
Appendix 5: Checklist for Evaluating Preschool Group Settings for Children With Hearing Loss Who Are Learning Spoken Language
Appendix 6: Selected Resources
Appendix 7: Description and Practice of Listening and Spoken Language Specialists: LSLS Cert. AVT and LSLS Cert. AVEd
Appendix 8: Principles of Certified LSL Specialists
Appendix 9: Knowledge and Competencies Needed by Listening and Spoken Language Specialists (LSLS)
Appendix 10: Listening and Spoken Language Domains Addressed in This Book
Glossary
References
Index


Children With Hearing Loss: Developing Listening and Talking, Birth to Six

Fourth Edition

Editor-in-Chief for Audiology Brad A. Stach, PhD

Children With Hearing Loss: Developing Listening and Talking, Birth to Six, Fourth Edition

Elizabeth B. Cole, EdD, CCC-A, LSLS Cert. AVT Carol Flexer, PhD, CCC-A, LSLS Cert. AVT

5521 Ruffin Road
San Diego, CA 92123
Email: [email protected]
Website: https://www.pluralpublishing.com

Copyright © 2020 by Plural Publishing, Inc.

Typeset in 10.5/13 Garamond by Flanagan’s Publishing Services, Inc. Printed in the United States of America by McNaughton & Gunn, Inc. With select illustrations by Maury Aaseng.

All rights, including that of translation, reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, including photocopying, recording, taping, Web distribution, or information storage and retrieval systems without the prior written consent of the publisher.

For permission to use material from this text, contact us by:
Telephone: (866) 758-7251
Fax: (888) 758-7255
Email: [email protected]

Every attempt has been made to contact the copyright holders for material originally printed in another source. If any have been inadvertently overlooked, the publishers will gladly make the necessary arrangements at the first opportunity.

Disclaimer: Please note that ancillary content (such as documents, audio, and video, etc.) may not be included as published in the original print version of this book.

Library of Congress Cataloging-in-Publication Data:
Names: Cole, Elizabeth Bingham, author. | Flexer, Carol Ann, author.
Title: Children with hearing loss : developing listening and talking, birth to six / Elizabeth B. Cole, Carol Flexer.
Description: Fourth edition. | San Diego, CA : Plural Publishing, [2020] | Includes bibliographical references and index.
Identifiers: LCCN 2019017800 | ISBN 9781635501544 (alk. paper) | ISBN 1635501547 (alk. paper)
Subjects: | MESH: Hearing Loss — therapy | Child, Preschool | Infant | Correction of Hearing Impairment — methods | Hearing Aids | Language Disorders — rehabilitation | Speech Disorders — rehabilitation
Classification: LCC RF291.5.C45 | NLM WV 271 | DDC 617.8/9--dc23
LC record available at https://lccn.loc.gov/2019017800


Preface

Childhood hearing loss is a serious and relatively prevalent condition. About 12,000 new babies with hearing loss are identified every year in the United States, according to the National Institute on Deafness and Other Communication Disorders. In addition, estimates are that in the same year, another 4,000 to 6,000 infants and young children between birth and 3 years of age who passed the newborn hearing screening test acquire late-onset hearing loss. The total population of new babies and toddlers identified with hearing loss, therefore, is about 16,000 to 18,000 per year.

The peril is that hearing loss caused by damage or blockage in the outer, middle, or inner ear keeps auditory/linguistic information from reaching the child’s brain, where actual hearing occurs. Over the decades, numerous studies have demonstrated that, when hearing loss of any degree is not adequately diagnosed and addressed, insufficient auditory information reaches the brain. As a result, the child’s speech, language, academic, emotional, and psychosocial development is at least compromised if not sabotaged.

In recent years, there has been a veritable explosion of information and technology about testing for and managing hearing loss in infants and children, thereby enhancing their opportunities for the auditory brain access that creates neural pathways for spoken language and for literacy. The vanguard of this explosion has been newborn hearing screening. As a result, in this day and age, we are dealing with a vastly different population of children with hearing loss, a population we have never had before. With this new population whose hearing loss is identified at birth, we can facilitate access of enriched auditory/linguistic information to the baby’s brain. The miracle is that we can prevent the negative developmental and communicative effects of hearing loss that were so common just a few years ago. With these babies and young children, we can now work from a neurological, developmental, and preventative perspective rather than from a remedial, corrective one. What has happened in the field of hearing loss is truly revolutionary as we implement brain-based science. How do we understand and manage this new population of babies and their families? How do we revise our early intervention systems to respond to the desired outcomes of listening and talking that today’s parents have a right to expect?

The fourth edition of Children With Hearing Loss: Developing Listening and Talking, Birth to Six is an exciting and dynamic compilation of crucially important information for the facilitation of brain-based spoken language for infants and young children with hearing loss. This text is intended for undergraduate and graduate-level training programs for professionals who work with children who have hearing loss and their families. This fourth edition is also directly relevant for parents, listening and spoken language specialists (LSLS Cert. AVT and LSLS Cert. AVEd), speech-language pathologists, audiologists, early childhood instructors, and teachers. In addition, much of the information in Chapters 1 through 5, and Chapter 7, can be helpful to individuals of all ages who experience hearing loss, especially to newly diagnosed adults, as a practical “owner’s manual.”

This fourth edition covers current and up-to-date information about auditory brain development, listening scenarios, auditory technologies, spoken language development, and intervention for young children with hearing loss whose parents have chosen to have them learn to listen and talk. There is new artwork throughout the book that illustrates key concepts of family-focused listening and spoken language intervention. All technology information has been updated, as has information about neurophysiology. The reference list is exhaustive, with the addition of the newest studies while maintaining seminal works about neurophysiology, technology, and listening and spoken language development.

Parents who are considering preschool placement for their children are referred to Chapter 6, and to the newly revised Appendix 5 for a guide to elements that need to be present in any group setting to address the unique needs of the child with hearing loss. Also, the companion website contains user-friendly versions of the “Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children Who Have Hearing Loss” (Chapter 10), and of the “Targets for Auditory/Verbal Learning” (Appendix 3).

Appendices 7, 8, 9, and 10 provide important and useful information and tools for professionals who are interested in AG Bell Academy’s Listening and Spoken Language Specialist Certification Program (LSLS) — LSLS Cert. AVT and LSLS Cert. AVEd. These appendices define and list the competencies required for the LSLS credential, and reference each chapter of the book with regard to those requirements.

This book is unique in its scholarly yet thoroughly readable style. Numerous new illustrations, tables, charts, and graphs illuminate key ideas. The fourth edition should be the foundation of the personal and professional libraries of students, clinicians, and parents who are interested in listening and spoken language outcomes for children with hearing loss.

This book is divided into two parts. Part I, Audiological and Technological Foundations of Auditory Brain Development, consists of the first five chapters that lay the foundation for brain-based listening and talking. These five chapters include neurological development and discussions of ear anatomy and physiology, pathologies that cause hearing loss, audiologic testing of infants and children, and the latest in amplification technologies. Part II is titled Developmental, Family-Focused Instruction for Listening and Spoken Language Enrichment. These second five chapters focus on intervention — listening, talking, and communicating through the utilization of a developmental and preventative model that focuses on enriching the child’s auditory brain centers.

Acknowledgments

The authors would like to acknowledge the vision and work of professionals who have influenced and inspired them: Helen Beebe, Marion Downs, James Jerger, Richard and Laura Kretschmer, Dan Ling, Jane Madell, Winifred Northcott, Marietta Paterson, Doreen Pollack, Mark Ross, Joe Smaldino, and Jacqueline St. Clair-Stokes, to name just a few. They also want to thank their publisher, Plural Publishing, and editors, Valerie Johns and Kalie Koscielak who have supported them and helped them find their way through four editions of this book. Special thanks to all who assisted with drawings and photography, especially Thea Zimnicki, Maura McGuire, and Maury Aaseng.

This fourth edition is enthusiastically dedicated to our children, grandchildren, and great-grandchildren (present and future), and to all who access and develop a child’s auditory brain for listening and talking.

Part I

Audiological and Technological Foundations of Auditory Brain Development

1 Neurological Foundations of Listening and Talking: We Hear With the Brain

Key Points Presented in the Chapter

• About 95% of children with hearing loss are born to hearing and speaking families. Therefore, listening and talking likely will be desired outcomes for the vast majority of families we serve.
• We hear with the brain — the ears are just a way in; ears are the “doorway” to the brain for auditory information. Therefore, the problem with hearing loss is that it keeps sound/auditory information from reaching the brain.
• Because they have had 20 weeks of auditory neural experience in utero, at birth, typical infants prefer their mother’s speech and songs and stories heard before birth.
• For auditory pathways to mature, acoustic stimulation of the brain must occur early (in infancy) and often, because full maturation of central auditory pathways is a precondition for the normal development of speech and language skills in children, whether or not they have a hearing loss.
• Neuroplasticity is greatest during the first 3.5 years of life; the younger the infant, the greater the neuroplasticity.
• The highest auditory neural centers (called the secondary auditory association areas in the cortex) are not fully developed until a child is about 15 years of age. The less “intrinsic” redundancy, the more redundant the “extrinsic” signal must be.
• Skills mastered as close as possible to the time that a child is biologically intended to do so, result in developmental synchrony.
• There is a distinction between hearing and listening.
• Hearing loss can be described as an invisible acoustic filter that distorts, smears, or eliminates incoming sounds from reaching the brain, especially sounds from a distance — even a short distance.

Introduction

Our biology is that we hear with the brain (Kral, Kronenberger, Pisoni, & O’Donoghue, 2016; Kral & Lenarz, 2015; Kraus & White-Schwoch, 2018). Because of brain research, newborn hearing screening, and very early use of modern hearing technologies that direct auditory information through the “doorway” to the brain to alleviate sensory deprivation and to develop multilevel neural connections, there is a new population of children with hearing loss. This new population has the benefit of brain science, language development research, and family systems research that can lead to spoken language and literacy outcomes consistent with hearing peers, if we do what it takes. What it takes is systemwide attention to all links in the Logic Chain. As detailed in the white paper “Start with the Brain and Connect the Dots” on the Hearing First website, the Logic Chain is a model that summarizes what we know, at this point in time, about the foundational ingredients necessary to create a listening, speaking, and reading brain (Flexer, 2017).

The Logic Chain represents a system of foundational structures that must ALL be in place to optimize the attainment of a listening, spoken language and literacy outcome, if that is the family’s desired outcome. No link can be skipped (Flexer, 2017). Components of the Logic Chain: Brain Development > General Infant/Child Language Development in the Family’s Home Language > Early and Consistent Use of Hearing Technologies > Family-Focused LSL Early Intervention > LSL Early Intervention for Literacy Development.

In Part One of this book, the first five chapters, we are highlighting audiological and technological foundations of auditory brain development. Part Two, the second five chapters, will feature family-focused instruction for listening and spoken language (LSL) enrichment, designed for guiding and coaching families who have chosen a listening and spoken language outcome for their children with hearing loss. LSL also is referred to as auditory-verbal practice and auditory-verbal intervention in the literature. Auditory-verbal practice encompasses both auditory-verbal therapy (AVT) and auditory-verbal education (AVEd) and is inclusive of a child’s trajectory from birth through the educational system. Auditory-verbal practice is “the application and management of hearing technology, in conjunction with specific strategies, techniques, and conditions, which promote optimal acquisition of spoken language primarily through individuals listening to the sounds of their own voices, the voices of others, and all sounds of life” (Estabrooks, 2012, p. 2).




One important goal of LSL intervention is for the children learning LSL to be on a trajectory toward achieving age-appropriate spoken language and literacy skills by third grade along with their hearing friends. This chapter begins with the essential question for families to answer to determine the course of the child’s intervention: what is the family’s desired outcome for the child? All intervention protocols and strategies are designed to optimize the attainment of the families’ desired outcome, beginning with the counseling narrative of the doorway. Next will be a discussion of two lines of research that have direct relevance regarding the crucially important connection between listening abilities and speech and language development. Following that discussion, auditory neural development is explored along with a brief explanation of neuroplasticity. An additional topic is the acoustic filter model of hearing loss. In view of all of the information in this chapter, a new context for describing deafness is posited.

Begin Conversations with the Critical Question: What Is the Family’s Desired Outcome?

The bottom-line essential question for families to answer is: What is your desired outcome for your child? The family’s desired outcome guides the ethical and legal provision of intervention strategies and technological recommendations. Guiding questions for families include:

• What is your long-term goal for your child?
• How do you want to communicate with your child? What language(s) do you know and what language(s) do you want your child to know?
• Where do you want your child to be at ages 3, 5, 14, 20?

Approximately 95% of children with hearing loss are born into hearing and speaking families (Mitchell & Karchmer, 2004); these families are interested in having their child talk. In addition, many families use a main language at home other than the school language, so they likely are interested in their child speaking several languages. The coaching and intervention strategies offered by professionals are driven by the family’s desired outcome.

It is suggested that practitioners refer repeatedly to the family’s stated desired outcome when explaining the reason for recommendations. For example, one can explain to the family that to increase the probability of attaining the family’s desired outcome of listening and spoken language for their child, the child’s technology (e.g., hearing aids, cochlear implants) must be worn at least 10 hours per day.

Typical Infants: Listening and Language Development

To begin with, what is language? As reviewed in the Logic Chain white paper on the Hearing First website (Flexer, 2017), language is an organized system of communication used to share information. Spoken language consists of sounds, words, and grammar used to express inner thoughts and emotions. Language includes facial expressions, gestures, and body movements. Language is the platform for the acquisition and sharing of knowledge. The language environment at home is the basis of an infant’s brain growth and best predicts the child’s language, reading, and IQ outcomes (Suskind, 2015).

Language learning and knowledge acquisition begins in infancy. Because language and information are learned best in a social interaction with the people who love the baby, the parents generally become their child’s first teacher and teach the child the language and knowledge of the home (Hirsh-Pasek et al., 2015; Rhoades & Duncan, 2017). Thus, all families are advised to speak the language they know best right from the beginning, whether that language is English, Spanish, Russian, sign language, etc. Science tells us that parents should speak the language where they have the most knowledge, experience, words, and information to pass to their child to grow their child’s brain with knowledge (Chen, Kennedy, & Zhou, 2012; Rhoades & Duncan, 2017). Therefore, based on the science of general early language acquisition, families of children who are deaf or hard of hearing can best provide early brain and literacy development experiences by immersing their children in the family’s home language.

Research on speech perception capabilities of typical neonates, using a nonnutritive sucking paradigm, confirms that infants acquire native languages by listening; they start life being prepared to speak (Werker, 2012, 2018; Winegert & Brant, 2005). At birth, infants prefer their mother’s speech, and they even prefer songs and stories heard before birth. How can this be? The fact is that infants are born with 20 weeks of listening experience because their cochleas are formed and functional by the 20th week of gestation (Gordon & Harrison, 2005). There is a great deal of auditory information available to the fetus in utero (Moon, Lagercrantz, & Kuhl, 2013). That is why early identification and amplification and enriched auditory-linguistic environments are essential. Newborns with hearing loss may have already missed 20 weeks of auditory brain development.

May et al. (2017) found that at the time human infants emerge from the womb, the neural preparation for language is specialized to speech, due to spoken language being heard in utero. At birth, the human brain responds uniquely to speech.

In the first months of life, babies can discriminate many speech sounds, even those not heard in their home-spoken language(s). However, within a few months, the brain becomes a more efficient analyzer of speech. By the end of the first year, there is a functional reorganization of the brain to distinguish phonemes specific to language(s) heard daily (Choi, Black, & Werker, 2018). This neural reorganization improves and tunes the phonetic categories required for the infant’s language and attenuates those phonemic distinctions not required for the infant’s mother tongue (Vouloumanos & Werker, 2007; Werker, 2012, 2018). At 6 months of age, babies can remember words they hear in short passages, if those words follow their own names and not someone else’s. They can also recognize words that come after “Mommy,” showing that babies are processing the speech stream from the top down, using words they know (Golinkoff, 2013). Babies are born pattern seekers, eager to learn and highly social because they learn best with people present (Caskey, Stephens, Tucker, & Vohr, 2011). Within the first year, infants have some sense of causality, gravity, paths of motion, and spatial aspects of objects (Pulverman, Song, Golinkoff, & Hirsh-Pasek, 2013). By 17 months of age, phonetic distinctions guide new word learning as infants use their phonetic categories as the foundation for learning new words (Werker, 2012, 2018). Thus, listening experience in infancy is critical for the development of both speech and language in young children, and a strong auditory language base is essential for reading and learning (Robertson, 2014; Sloutsky & Napolitano, 2003; Zupan & Sussman, 2009). Figure 1–1 displays this progression.

When the word “listening” is used, what is meant is purposeful attention by the child to auditory information as evidenced by activation of the prefrontal cortex (Musiek, 2009).

Given that listening is the core event, how much listening experience is necessary for adequate language development? Hart and Risley (1999) explored the issue of language experience in their extensive, longitudinal study of person-spoken words heard by children ages birth to 4 years. Electronic words (e.g., from TV, books on tape, computer, etc.) were not counted. Some of their results are described below.

Figure 1–1.  Listening experience in infancy is critical for adequate language development, and adequate language development is essential for reading.

The average number of words per hour addressed to children by parents (Hart & Risley, 1999, p. 169) is as follows:

• 2,100 in a professional family
• 1,200 in a working-class family
• 600 in a family receiving welfare

Hart and Risley (1999) noted that, “The extra talk of parents in the professional families and that of the most talkative parents in the working-class families contained more of the varied vocabulary, complex ideas, subtle guidance, and positive feedback thought to be important to cognitive development” (p. 170). They further explained that, “Parents who talked a lot about such things [ideas, feelings, or impressions] or only a little ended up with 3-year-olds who also talked a lot, or talked only a little” (Hart & Risley, 1999, p. xii). Hart and Risley concluded that their data “show that the first 3 years of experience put in place a trajectory of vocabulary growth and the foundations of analytic and symbolic competencies that will make a lasting difference to how children perform in later years” (Hart & Risley, 1999, p. 193).

Human beings are organically designed to listen and talk if we do what it takes to access the brain with auditory information (through technology for children with hearing loss, within the first weeks of life) and practice, practice, practice listening and interacting using spoken language and reading.

The bottom line is that all infants and children require a great deal of listening experience — about 20,000 hours in the first 5 years of life — to create the neural infrastructure for literacy (Dehaene, 2009; Suskind, 2015). These data collected on typical infants and children have significant implications for children with hearing loss.
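To put that 20,000-hour figure in perspective, here is a rough back-of-envelope calculation; the waking-hours estimate is an illustrative assumption, not a figure from Hart and Risley or from the authors:

11 hours of listening per day × 365 days per year × 5 years ≈ 20,000 hours before about age 5.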

Auditory Neural Development

The problem with hearing loss is that it keeps sound from reaching the brain (Figure 1–2). The purpose of a cochlear implant (or a hearing aid) is to access, stimulate, and grow auditory neural connections throughout the brain as the foundation for spoken language, reading, and academics (Gordon, Papsin, & Harrison, 2004; Kral et al., 2016; Sharma & Nash, 2009). Due to the limited time period of optimal neural plasticity, age at implantation is critical — younger is better (Dettman et al., 2016; Ching et al., 2018; Fitzpatrick & Doucet, 2013; Geers et al., 2011; Northern & Downs, 2014; Sharma et al., 2005; Sharma & Nash, 2009).

The brain, unlike any other organ, is essentially unformed when one is born, and brain development is completely dependent on environmental experience. So that’s why, in the first 3 years of life, the foundation for all thinking and learning is being built through parent talk and conversational interaction.

Studies in brain development show that sensory stimulation of the auditory centers of the brain is critically important and influences the actual organization of auditory brain pathways (Berlin & Weyand, 2003; Boothroyd, 1997; Chermak, Bellis, & Musiek, 2014; Clinard & Tremblay, 2008; Sharma & Glick, 2018).




Figure 1–2.  We hear with the brain — the ears are just a way in. Hearing loss is not about the ears; it is about the brain.  Dreamstime.com

Furthermore, neural imaging has shown that the same brain areas — the primary and secondary auditory areas — are most active when a child listens and when a child reads. That is, phonological or phonemic awareness, which is the explicit awareness of the speech sound structure of language units, forms the basis for the development of literacy skills (Pugh, 2005; Robertson, 2014; Strickland & Shanahan, 2004; Tallal, 2004). Clearly, anything we can do to access and “program” those critical and powerful auditory centers of the brain with acoustic detail will expand children’s abilities to listen and learn spoken language. As Ching, Zhang, and Hou (2019) contend, early and ongoing auditory intervention is essential. Important neural deficits have been identified in the higher auditory centers of the brain due to prolonged lack of auditory stimulation and, furthermore, the auditory cortex is directly involved in speech perception and language processing in humans (Chermak & Musiek, 2014a; Kretzmer, Meltzer, Haenggeli, & Ryugo, 2004; Kral et al., 2016; Sharma & Glick, 2018; Sharma & Nash, 2009; Shaywitz & Shaywitz, 2004). For auditory pathways to mature, acoustic stimulation must occur early and often because normal maturation of central auditory pathways is a precondition for the normal development of speech and language skills in children.

Sound processing is one of the most computationally demanding tasks the nervous system has to perform (Kraus & White-Schwoch, 2019). The task relies on the exquisite timing of the auditory system, which responds to input more than 1,000 times faster than the photoreceptors in the visual system. Humans can hear faster than they can see, taste, smell, or feel.

Research also suggests that children who receive implants very early (in the first year of life) benefit more from the relatively greater plasticity of the auditory pathways than do children who are implanted later within the developmentally sensitive period (Ching et al., 2018; Kral et al., 2016; Sharma, Dorman, & Kral, 2005). These results suggest that rapid changes in P1 latencies and changes in response morphology are not unique to electrical stimulation but rather reflect the response of a deprived sensory system to new stimulation. Gordon and colleagues (2003, 2004) concurred and reported that activity in the auditory pathways to the level of the midbrain can be provoked by stimulation from a cochlear implant (CI). The hypothesis that early implantation promotes changes in central auditory pathways was supported by evidence provided by Gordon and colleagues (2003, 2004) and by Kral et al. (2016).

Robbins et al. (2004) and Suskind (2015) found that skills mastered as close as possible to the time that a child is biologically intended to do so result in developmental synchrony. As human beings, we are preprogrammed to develop specific skills during certain periods of development. If those skills can be triggered at the intended time, we will be operating under a developmental and not a remedial paradigm. That is, we will be working harmoniously within the organic design of the human structure. When we intervene later in a child’s life, out of harmony with the typical developmental process, we are forced to work within a remedial model. A brain can only organize itself around the sensory stimulation that it receives. Remedial intervention means that we need to undo the neural organization that the brain has initially acquired and reorganize the brain around different stimuli. A remedial model takes longer and typically has reduced outcomes because the child is now neurologically and psychosocially out of synchrony with the typical developmental process. Think of how long it takes an adult to learn to ski. It is a very long and effortful time, compared with the amount of time that it takes a child to learn to zoom down the hill.

Acting in harmony with a structure can be illustrated by a child learning to pump on a swing. When the child first begins to try to make the swing move without being pushed, the child expends a great deal of energy but the swing moves only a little bit. Then something happens, and the child learns how to move in harmony with the swing. As a result of child and swing synchrony, a little bit of energy input by the child causes large movement by the swing (Figure 1–3). In an analogous fashion, a developmental model allows for synchrony of intervention and development, promoting large and rapid gains.

Mastery of any developmental skill depends on cumulative practice; each practice opportunity builds on the last one. Therefore, the more delayed the age of acquisition of a skill, the further behind children are in the amount of cumulative practice they have had to perfect that skill relative to their age peers. The same concept holds true for cumulative auditory practice. Delayed auditory development leads to delayed language skills, which will necessitate using a remedial rather than a developmental paradigm.

Figure 1–3.  When the child learns how to act in harmony with the swing, a little bit of energy input leads to startling effects.

No skill is acquired without practice. Children learn to walk by practicing walking. They learn to listen by practicing listening. They learn to talk by practicing talking. A great deal of practice is necessary for the acquisition of any skill. Learning following a developmental model allows for the necessary, substantial practice.

To summarize, neuroplasticity is greatest during the first 3.5 years of life (Ching et al., 2018; Kral, 2013; Kral et al., 2016; Sharma, 2013; Sharma & Nash, 2009). The younger the infant, the greater is the neuroplasticity (Sharma, 2013; Sharma et al., 2002, 2004; Sharma, Dorman, & Kral, 2005). Rapid infant brain growth requires prompt intervention, typically including amplification and a program to promote auditory skill development. In the absence of sound, the brain reorganizes itself to receive input from other senses, primarily vision; this process is called cross-modal reorganization, and it reduces auditory neural capacity. Early amplification or implantation stimulates a brain that is in the initial process of organizing itself and is therefore more receptive to auditory information, resulting in greater auditory capacity (Sharma & Glick, 2018; Sharma & Nash, 2009). Furthermore, early implantation synchronizes activity in the cortical layers. Therefore, identification of newborn hearing loss should be considered a neurodevelopmental emergency.


The popular book about change, Who Moved My Cheese? by Spencer Johnson, is particularly meaningful in the world of hearing loss. Technology and early hearing detection and intervention (EHDI) programs have allowed outcomes of listening and talking only dreamed of a few years ago. It is important to realize that the new outcomes available today do not invalidate the decisions that we made for intervention in the past. We did what we did when that is what there was to be done. Because we now know more, we can offer better services. We do the best that we can in today’s world. Tomorrow’s world will bring new possibilities, and we need to “move with the cheese.” Our job is to prepare today’s babies to be the take-charge adults in the world of 2030, 2040, and 2050 . . . not in the world of 1990, 2000, or even 2020. Because information and knowledge are the currencies of today’s cultures, listening, speaking, reading, writing, and the use of electronic and digital technologies must be made available to our babies and children to the fullest degree possible.

New Context for the Word Deaf

Today, hearing aids and/or cochlear implants and wireless microphone technologies can allow the brains of infants and children with even the most profound hearing loss/deafness access to the entire speech spectrum. There is no degree of hearing loss that prohibits the brain’s access to auditory information if cochlear implants are available. Degree of hearing loss as a limiting factor in auditory acuity is now an “old” acoustic conversation. That is, when one uses the word deaf, the implication is that one’s brain has no access to sound/auditory information, period. The word deaf in 1970 or even in 1990 occurred in a very different context than the word deaf as used today.

Today’s child who is deaf (closed doorway) but who is given appropriate hearing aids or cochlear implants early in life can actually hear well enough to perceive and understand spoken language — if the device is programmed well, if it is worn at least 10 hours per day (12 hours is better), and if the child is in an auditory/language enriched environment. This means that his or her brain can be developed with meaningful sound — that is, with auditory-linguistic information. That child is deaf without his or her equipment to breach the doorway but certainly does not function as deaf in the 1970 or even in the 1990 sense, because they have developed a hearing brain. Perhaps we need some new words because the context has changed.

So, how do today’s listening and talking children describe their hearing loss? Rhoades (2014), in her work with children, suggests using the term “person with hearing differences,” rather than “person with a hearing loss.” Addressing the same issue, Kemmery and Compton (2014) conducted an in-depth exploration of four teenagers’ perceptions of their identity related to hearing loss. The researchers identified a fluid continuum of identity types that included (a) hearing, (b) person with hearing loss, (c) hard of hearing, (d) hearing impaired, and (e) deaf (medical definition). Although the teenagers in the study chose the “hearing identity type” as the one sought after, three key factors influenced which identity type was selected at any point in time: interactions with others, environment, and life experiences. That is, a person may choose one identity type in one context and a different identity type in another context. Finally, parents, practitioners, and teachers need to realize that the ways they view a child may differ from the child’s own view of themselves and their hearing loss (their hearing differences) at any point in time.

Hearing Versus Listening

There is a distinction between hearing and listening. Hearing is acoustic access of auditory information to the brain. For children with hearing loss, hearing includes improving the signal-to-noise ratio by managing the environment and utilizing hearing technologies. Listening, on the other hand, is when the individual attends to acoustic events with intentionality. Sequencing is important. Hearing must be made available before listening can be learned. That is, parents and interventionists can focus on developing the child’s listening skills, strategies, and choices only after the audiologist channels acoustic information to the brain by fitting and programming technologies — not before.

A Model of Hearing Loss: The Invisible Acoustic Filter Effect

Hearing loss of any type or degree that occurs in infancy or childhood can interfere with the development of a child’s spoken language, reading and writing skills, and academic performance (Davis, 1990; Ling, 2002; Madell, Flexer, Wolfe, & Schafer, 2019b). That is, hearing loss can be described as an invisible acoustic filter that distorts, smears, or eliminates incoming sounds, especially sounds from a distance — even a short distance. The negative effects of a hearing loss may be apparent, but the hearing loss itself is invisible and easily ignored or underestimated.

Lack of high-fidelity acoustic information to the brain is the biggest challenge worldwide. It is imperative to have very high expectations for today’s technologies to make soft sounds available to the brain at a distance and in the presence of noise. Children must use technologies that enable them to overhear conversations and to benefit from incidental learning to maximize their auditory exposure to knowledge for social and cognitive development.

It is critical to note that as human beings we are neurologically “wired” to develop spoken language (speech) and reading skills through the central auditory system (Figure 1–4). Most people think reading is a visual skill, but research on brain mapping shows that primary reading centers of the brain are located in the auditory cortex, in the auditory portions of the brain (Chermak et al., 2014; Kraus & White-Schwoch, 2019; Pugh et al., 2006; Tallal, 2005). That is why many children who are born with hearing losses and who do not have brain access to auditory input when they are very young (through hearing aids or cochlear implants and auditory teaching) tend to have a great deal of difficulty reading even though their vision is fine (Robertson, 2014). Therefore, the earlier and more efficiently we can allow a child access to meaningful sound with subsequent direction of the child’s attention to auditory information, the better opportunity that child will have to develop spoken language, literacy, and academic skills. With the technology and early auditory intervention available today, a child with a hearing loss can have the same opportunity as a typically hearing child to develop spoken language, reading, and academic skills even though that child hears differently — if we do what it takes.

Figure 1–4.  The design of human beings is such that hearing (auditory neural growth) is a first-order event for the development of language, reading, and academic skills, not an isolated activity.

Putting It All Together in a Counseling Narrative: Think About Hearing Loss as a Doorway Problem Here is a conversation that can assist in developing a context for hearing loss. This “big picture” narrative could start with a general discussion of how information gets into the brain; the brain is an organ entirely encased in bone with no direct access to the environment. Information gets into the brain through five senses



1.  Neurological Foundations of Listening and Talking:  We Hear With the Brain

(hearing, sight, smell, taste, and touch). Each sense captures different types of raw environmental information and transforms that information into neural impulses that can be read by the brain. For example, the nose is the doorway to the brain for olfactory molecules, for the sense of smell. But we smell with the brain, not with the nose. The nose doesn’t know whether a particular smell means “cookie” or “smoke”; the brain knows the meaning of the smell through exposure, experience, and language. Adding another example, the eye is the portal to the brain for optic wavelengths, but knowing the meaning of what we see occurs in the brain. Knowing the visual image is a dog or a chair doesn’t occur in the eye; the brain knows and learns the meaning of the visual event through exposure, experience, and language. Now, we come to the ear. The ear is the “doorway” to the brain for sound ​ — ​spoken language and information such as talking and reading. But hearing occurs in the brain. The ear doesn’t know that particular vibrations mean “mommy” or “doorbell.” Learning, and thus knowing, the meaning of auditory events occurs in the brain, not in the ear.

Organs that control the five senses — hearing, sight, smell, taste, and touch — are doorways to the brain for unique types of raw environmental information. The meaning of that information occurs in the brain, not in the sense organs.

So, what is hearing loss? Because we hear with the brain, we can counsel families to think about hearing loss as a doorway problem (Figure 1–5A). That is, the hearing loss (damage in the outer, middle, and/or inner ear) prevents auditory information from passing through the doorway to the brain. If sound and information do not reach the brain in sufficient quantity and quality, neural connections will not develop well, and the child's spoken language, literacy, and knowledge will be limited. Technologies (e.g., hearing aids, cochlear implants, and remote microphone [RM] systems) are devices designed to transmit auditory information through the doorway to the brain, where actual hearing occurs (Figure 1–5B).

Figure 1–5.  The ear is the "doorway to the brain" for sound/auditory information. A. Hearing loss obstructs that doorway, preventing auditory information from reaching the brain. B. Hearing aids and cochlear implants break through the doorway to allow access, stimulation, and development of auditory neural pathways.

Summary

Actual hearing can be defined as brain perception of auditory information (Flexer, 2017). Children who are born deaf or hard of hearing or who acquire hearing loss at a young age do not have the same opportunities, biologically, for their auditory neural development to progress at the same rate as that of their hearing friends. So, children with hearing loss whose parents want a spoken language outcome must be appropriately fitted with hearing technology (e.g., hearing aids or a cochlear implant) by a pediatric audiologist as early as possible (in the first weeks of life for newborns) to mitigate any periods of auditory neural deprivation and to provide brain access to a rich and robust model of intelligible speech and fluent language information. The early fitting of technology will nourish the auditory cortex and promote synaptic connections between the primary and secondary auditory cortex, as well as establish functional neural networks between the secondary auditory cortex and the rest of the brain. Through all of these neural connections, the child's brain will learn to make incoming sound possess higher-order meaning, which is critical to language, knowledge, and literacy development.

Next Steps: What Will It Take to Optimize the Probability of Attaining a Listening and Spoken Language Outcome?

The second half of this book contains detailed instruction in the development of listening and talking. Here is the answer, in a nutshell. Developing LSL requires a systematic approach with cohesive attention to all links in the Logic Chain (Flexer, 2017): Brain Development > General Infant/Child Language Development in the Family's Home Language > Early and Consistent Use of Hearing Technologies > Family-Focused LSL Early Intervention > LSL Early Intervention for Literacy Development. Another way of describing what it takes to develop LSL includes:

• Early identification and intervention to take advantage of neuroplasticity and developmental synchrony.
• Vigilant, ongoing, and kind audiologic management.
• Immediate and consistent brain access of auditory/linguistic information via technology — hearing aid loaner banks — to preserve and develop auditory neural capacity.
• Guidance from a professional who is highly qualified in the development of LSL through techniques of parent coaching.
• Following the professional's coaching with daily and ongoing formal and informal auditory, language, cognitive, literacy, and music enrichment.
• Integrating and using auditory strategies in all-day, every-day, routine interactions with the child to "grow the baby's brain." See Appendix 1 for a handout of How to Grow Your Baby's or Child's Brain.

What Will It Take?

Because early intervention is a family-focused strategy, it is necessary that families assist in formulating their role. A "what will it take" conversation with families is important to clarify the necessary steps to attaining the family's desired outcome of listening and talking. One strategy is to use the analogy of driving a car. What are the steps necessary to acquire the desired outcome of a driver's license? Well, to begin with, one needs to have access to a car. Then, a coach or driving instructor is sought. Most states then require up to 150 hours of supervised driving practice with a licensed adult driver. Next is the necessity of passing a written exam and a vision exam, followed by a practical exam — the driving test. If all that is completed satisfactorily, one has a driver's license. If any requirement is deleted, the driver's license is not achieved, no matter how badly one wants it or how sympathetic the excuses.


2 The Audiovestibular System

Key Points Presented in the Chapter

• A child has an audiovestibular system, not just an auditory system. In fact, the primary survival function of the inner ear is balance.
• The subtle and often ambiguous functions of hearing can be divided into three levels: subconscious (primitive), signal warning, and spoken language.
• The Ling 6-7 Sound Test allows parents, professionals, and teachers to know the child's distance hearing limits, that is, their "earshot" distance.
• The human ear is an extremely sensitive and delicate yet amazing structure for sound and balance perception; therefore, it is important to note the significance of each function of the ear's many parts and the effects that environmental, physiological, and psychological factors have on the ability to hear and to maintain equilibrium.
• Sounds, in the form of vibrations, are received through the peripheral auditory system: the outer, middle, and inner ear.
• Ossicles, the smallest bones in the body, are made up of the malleus, incus, and stapes, which are attached to two membranes: the tympanic membrane and the oval window.
• The peripheral and central auditory systems provide two distinct hearing processes: (a) the transmission of raw environmental vibrations — a stream of acoustic energy — to the brain through the outer, middle, and inner ear; and (b) analyzing and making sense of those sounds, which occurs in the brain. Therefore, auditory perception in the brain requires the sensory and vibratory evidence provided by the peripheral system in the form of auditory information.
• The vestibular system, which specializes in the detection, coding, and processing of movement such as gait, stance, locomotion, balance, and spatial orientation, is composed of three semicircular canals and the utricle and saccule.

The Nature of Sound

To appreciate the impact of hearing loss (the "doorway" problem) and the benefits of hearing management strategies designed to get auditory information to the brain, it is useful first to understand the vital and often subtle role that sound and auditory information play in our physical and psychological lives. The functions of hearing (defined as brain perception of auditory information) can be divided, roughly, into three levels: subconscious (primitive), signal warning (environmental monitoring), and spoken language learning for communication (Northern & Downs, 2014).

Subconscious Function

The subconscious level distinguishes the most primitive function of hearing by carrying the auditory background and auditory information that serve to identify a location. A hospital sounds different from a school, which sounds different from a department store. Typically, one does not notice these sounds; however, if a location does not produce the expected auditory information, one becomes uneasy. For example, entering an empty hospital or deserted school may evoke feelings of anxiety. Also heard at the primitive level are one's own biological sounds such as breathing, swallowing, chewing, heartbeat, and pulse. All furnish proof that we are indeed alive and functioning. Some people who suddenly lose their hearing may experience acute psychosis due to feelings of disconnectedness from the environment, time, and their own bodies.

A new hearing aid, cochlear implant, or assistive listening device changes the wearer’s auditory background because sounds not previously heard become amplified or altered. A wearer’s anxiety may reflect this alteration in auditory background, although he or she may be unable to understand why. Certainly, a baby or young child cannot explain in words that he or she has noticed a different auditory background, but may express it with a big smile, or by stilling suddenly and looking around, or by fussing or crying. Yet, as professionals, we should be mindful that there is an adjustment period to amplification as the wearer’s brain reorganizes itself to the altered auditory background. One could also speculate about the confusing effects of a constantly fluctuating hearing loss (e.g., from chronic ear infections) on a child’s auditory background — effects that could impact the child’s emotional state and also could begin to set up habits of auditory inattention.



Signal Warning Function

The second level of hearing function, signal warning, is less subtle and pertains to monitoring the environment. Classified as distance senses, hearing and vision allow us to know what happens away from our bodies. However, we cannot always actually see events to know what occurs around us. A baby in bed at night can hear people talking, the television prattle, a computer's clatter, dogs barking, cars roaring, and so forth. Knowledge of what happens around us promotes feelings of security, whereas being uninformed about the environment may induce anxiety. Unfortunately, persons with hearing loss of any degree, even while wearing suitable hearing aids, lack the ability to hear well over distances. Distance hearing becomes problematic because the speech signal loses both intensity and critical speech elements as the signal travels away from its sound source (Boothroyd, 2019; Leavitt & Flexer, 1991; Smaldino & Flexer, 2012). The greater the hearing loss, the greater is the reduction in earshot or distance hearing (Ling, 2002; Moeller, 2007). Said another way, the greater the hearing loss, the closer the listener must be to the talker for speech to be heard at levels loud enough and with sufficient redundancy to be understood. A child with an unaided hearing loss, whether mild or unilateral (hearing loss in only one ear), may not casually hear what people are saying or the events that are occurring (McCreery et al., 2015). Often, children with normal hearing seem to absorb information passively from the environment; they can learn from overhearing the speech of others. In contrast, children with untreated and unmanaged hearing problems may seem distractible or oblivious to environmental events; they often experience a disconnection from the environment. These reductions in signal intensity and integrity that occur with distance from the speaker cannot be completely overcome, even with today's fantastic technology. Consequently, children experiencing hearing problems have limitations in their brain's access to auditory information in the environment. Therefore, these children may need to be taught directly many skills that other children learn incidentally. Lack of brain access to social cues and to the redundancy of spoken information that occurs in day-to-day transactions may also result from reduction of distance hearing. Tangential information not directed to the child is, nevertheless, important for learning. Listening in to others' conversations teaches children how to start a conversation, make a request, solve a problem, negotiate, compromise, joke, tease, and use sarcasm. Any degree of hearing loss presents a barrier to the casual acquisition of information from the environment.

Following is one example of how distance hearing and incidental learning impact day-to-day family life. A young child, later discovered to have a moderate hearing loss, would repeatedly take food from the kitchen without asking permission, an activity the family viewed as disruptive and rude. The family later recognized that while the child could observe his siblings taking food from the kitchen, because of his hearing loss he could not overhear them first ask the parent's permission. The child's behavior improved dramatically once the family realized that the child needed to wear his hearing aids all the time, not just at school. In addition, he required direct instruction in the family's social expectations, even though the other children learned social cues passively.
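To make the loss of distance hearing concrete, here is a minimal sketch (not from the text) of how quickly speech level falls with distance, assuming simple free-field inverse-square spreading; real rooms add reverberation and noise, so this is only a rough model. The starting level is an assumed round number near the conversational-speech values given later in this chapter.

# Illustrative sketch: speech level versus distance under free-field spreading.
import math

def level_at_distance(level_at_ref_db, ref_distance_m, distance_m):
    """Sound level (dB) at `distance_m`, given the level at `ref_distance_m`.

    Free-field spreading loses about 6 dB per doubling of distance:
    L(d) = L(d_ref) - 20 * log10(d / d_ref)
    """
    return level_at_ref_db - 20.0 * math.log10(distance_m / ref_distance_m)

# Conversational speech of roughly 50 dB at 1 meter from the talker:
for d in (1, 2, 4, 8):
    print(f"{d} m: about {level_at_distance(50, 1, d):.0f} dB")
# 1 m: 50 dB, 2 m: 44 dB, 4 m: 38 dB, 8 m: 32 dB -- each doubling of distance
# costs about 6 dB, which is one reason "earshot" shrinks so quickly for a
# child with hearing loss and why overhearing is so easily lost.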

Spoken Communication Function

Children learn to talk through listening — through paying attention to auditory information (Golinkoff, 2013; Hirsh-Pasek et al., 2015; Werker, 2006). Not surprisingly, spoken communication does not develop naturally or completely for an infant or child whose brain does not receive complete auditory information, unless technology and specific auditory strategies are used to improve the quantity and quality of their auditory brain access. Verbal language takes form after a great deal of auditory information reaches the brain, augmented by active listening experience (Estabrooks, 2006; Golinkoff, 2013; Hayes et al., 2009; Lew et al., 2014; Ling, 2002). Because the inner ear is fully developed by the fifth month of gestation, an infant with normal hearing potentially has 4 months of in utero auditory brain stimulation prior to birth (Moon et al., 2013; Simmons, 2003). At approximately 1 year of age, after 12 months (or maybe 16 months if we count before-birth listening) of meaningful and interactive listening, a normal hearing child begins to produce words. The point is that listening time cannot be skipped, and a child who misses months of brain access due to hearing loss must make them up (Estabrooks, 2012; Estabrooks et al., 2016). The brain requires extensive listening experience to properly organize itself around the speech signal. Importantly, infants also must hear their own vocalizations, creating an auditory feedback loop that is a critical component motivating early vocalization frequency (Fagan, 2014).

The brain benefits from practice listening to music as well as speech. For babies and young children, we mean adult-directed, in-person singing as a social experience — not turning on a recording. Studies provide biological evidence that listening to music boosts the brain's ability to distinguish emotional aspects of speech by focusing on the paralinguistic (nonverbal) elements of speech (Sheridan & Cunningham, 2009). In addition, music training in children activates the cognitive functions of attention and memory and improves speech-in-noise performance (Kraus & Anderson, 2014).

Acoustics

To understand the causes of hearing loss, it is first important to establish an appreciation of the human ear. Though this intricate and complex mechanism may not appear very impressive from the outside, the inner structures of the ear enable a person to access a 10,000,000-to-1 range of sensitivity, from the softest sound that can barely be detected to one that is loud enough to cause pain (Hall, 2014). The typical human ear has such amazing sensitivity that it can nearly detect random molecular movement. When hearing sound, the brain interprets a pattern of vibrations initiated from a source in the environment. Vibrational energy produces the sound waves we can hear. Therefore, hearing is an event (Boothroyd, 2019). That is, we don't "hear mommy." We hear mommy talking, walking, eating, and so on. We hear the vibrations created by an action. Vibration requires two properties in the vibrating medium: mass and elasticity. Sound waves are similar to the ring patterns in water produced when a pebble is thrown into a pond, only with sound, the waves are formed by the vibration of air molecules. Much like the effect of water ripples, sound waves originate from a point where the sound is first generated and spread out in circles of waves. The speed at which sound travels through the air is called velocity. Sound travels through the air as fast as 770 miles per hour at sea level, or 1,130 feet per second. Water waves travel much slower, only a few miles an hour. Graphed as a circle on a flat plane plot, a sine wave is the simplest form of sound, called simple harmonic motion; it is heard as a pure tone (Figure 2–1).
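The relationships among frequency, period, and wavelength for a pure tone follow directly from the speed of sound quoted above. The short sketch below is illustrative only; it uses the chapter's sea-level figure (about 1,130 ft/s, roughly 344 m/s), and the printed values are approximate.

# Illustrative sketch: frequency, period, and wavelength of a pure tone.
SPEED_OF_SOUND_M_PER_S = 344.0   # ~1,130 ft/s at sea level

def describe_pure_tone(frequency_hz):
    period_s = 1.0 / frequency_hz                          # time for one cycle
    wavelength_m = SPEED_OF_SOUND_M_PER_S / frequency_hz   # distance per cycle
    return period_s, wavelength_m

for f in (250, 1000, 4000):
    period, wavelength = describe_pure_tone(f)
    print(f"{f} Hz: period {period * 1000:.2f} ms, wavelength {wavelength * 100:.1f} cm")
# 250 Hz -> ~138 cm, 1000 Hz -> ~34 cm, 4000 Hz -> ~8.6 cm: higher frequencies
# mean more cycles per second and correspondingly shorter wavelengths.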

The riddle, "If a tree fell in the forest and no one was around, would a sound still be made by the falling tree?" serves as a good example of the difference between describing sound on a physical basis and describing it on a psychological basis; the answer depends on which basis one chooses. Existing independently of the observer are the physical parameters of intensity, frequency, and duration; these features can be measured by equipment. However, the psychological perceptions of loudness, pitch, and time require the presence of a person for interpretation.

Figure 2–1.  Graphed as a circle on a flat plane plot, a sine wave is the simplest form of sound.

Our ability to hear sound is defined by the three characteristics of frequency, intensity, and duration. A hearing loss may affect one's ability to receive, and thus to perceive, one or all of these features. The psychological attributes of sound are pitch and loudness. The physical parameter of frequency corresponds to the psychological attribute of pitch. The listener hears high-frequency sounds as high in pitch, whereas the perception of low-frequency sounds is that of a low pitch. The physical parameter of intensity, or sound pressure, is the psychological correlate of loudness. So, when a sound has more intensity — more sound pressure — the listener perceives it as louder.

The number of back-and-forth oscillations produced by a sound source in a given time period is called frequency. The terms hertz (Hz) and cycles per second (cps) are measures of frequency. By counting the number of cycles per second of a sound wave, one can measure the frequency of sound. The distance between the top (crest) of one sound wave and the top of the next sound wave is one cycle. To have a frequency of 1000 Hz, a sound source would need to complete 1,000 back-and-forth cycles in 1 second. Low frequencies are characterized by fewer cycles per second, whereas many cycles per second indicate a higher-frequency sound. In Figure 2–2 the frequency of the sound increases as the wavelengths become shorter and the number of cycles per second increases.

Figure 2–2.  Low frequencies are characterized by fewer cycles per second, whereas many cycles per second indicate a higher-frequency sound.

Interestingly, frequency is independent of distance (Chasin, 2018). For example, the vowel /a/ will still sound like /a/ whether you are close to the speaker or at the other end of the room. Although humans can hear frequencies ranging from 20 to 20,000 Hz, frequencies between 250 and 8000 Hz are of particular importance for the perception of speech. Middle C on the piano corresponds to 262 Hz. Frequencies of 500 Hz and below are identified as low-pitched sounds, and they have a bass quality. Relative to individual speech sounds (phonemes), the low-pitched sounds carry the melodies of speech, vowel sounds, and most environmental sounds (Kramer & Brown, 2019). On the other hand, high-pitched sounds are 1500 Hz and above, and they have a tenor quality. The high frequencies are important for understanding speech because they carry the energy that helps distinguish among the consonants. Put simplistically, the meaning of speech is carried by the high frequencies (mostly consonants), whereas the low frequencies carry the melody. About 70% of all English speech sounds are above 1000 Hz, making speech discrimination more of a high-frequency event. In contrast, music is more of a low-frequency event because about 70% of all fundamental frequencies are below 1000 Hz. The softest and loudest sounds in conversational speech are separated by approximately a 30 dB range. The th sound, as in thin, is the least intense speech sound. The most intense speech sound is /a/ as in law. Table 2–1 has a more detailed description of the speech information carried by each relevant frequency.

A sound's intensity can be defined as pressure or power. Graphed as the amplitude of a sine wave caused by displacement of air molecules, intensity is measured in decibels (dB) and perceived as loudness (Figure 2–3). The degree of air particle displacement that occurs while a sound is made determines a sound's intensity. The abbreviation dB stands for decibel; the "B" is capitalized in honor of Alexander Graham Bell. The hearing level reference, 0 dB HL, is the softest sound a young adult with normal hearing can just barely detect. The loudness of a person whispering a few feet away is about 25 to 30 dB HL. Typical conversational speech in a quiet environment, when the speaker is a few feet away, is heard at 45 to 50 dB HL. City traffic is about 70 dB HL, an alarm clock is about 80 dB HL, a food blender is about 90 dB HL, a rock concert can be louder than 110 dB HL, a jet airport can produce sounds reaching an intensity as high as 120 dB HL, and a firecracker can produce a bang at approximately 140 dB HL (Hall, 2014; Northern & Downs, 2014). Because the decibel scale is logarithmic and not linear, every dB counts a great deal. For example, if a sound increases or decreases 10 times in intensity, the change is only 10 dB. A 20 dB change means a sound is 10 × 10, or 100 times, the intensity of the original sound. The average hearing loss caused by ear infections is about 20 dB — usually described as a "slight" or "minimal" hearing loss. Understanding the dB scale as logarithmic, one recognizes that, in reality, the child's slight hearing loss means that the auditory information available to the brain is approximately 100 times weaker when the fluid is present than when the fluid is cleared. Most environmental sounds are complex in structure. Complex sounds are composed of many frequencies with varying intensities occurring at the same time.
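The decibel arithmetic described above can be checked with a few lines of code. This is an illustrative sketch of the standard logarithmic formulas, not material from the text; the final line reads the chapter's 10,000,000-to-1 sensitivity figure as a pressure ratio, which is an assumption on my part.

# Illustrative sketch of decibel arithmetic.
import math

def db_from_intensity_ratio(ratio):
    # dB = 10 * log10(intensity ratio)
    return 10.0 * math.log10(ratio)

def db_from_pressure_ratio(ratio):
    # dB = 20 * log10(pressure ratio)
    return 20.0 * math.log10(ratio)

print(db_from_intensity_ratio(10))     # 10.0 -> a 10x change in intensity is 10 dB
print(db_from_intensity_ratio(100))    # 20.0 -> 10 x 10 = 100x intensity is 20 dB
# So a ~20 dB "slight" loss from middle ear fluid means the auditory information
# reaching the brain is roughly 100 times weaker than when the fluid is cleared.
print(db_from_pressure_ratio(10_000_000))   # 140.0 -> read as a pressure ratio,
# the ear's 10,000,000-to-1 range spans about 140 dB, roughly from detection
# threshold up to the level of a painfully loud sound.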


Table 2–1.  Speech Information Carried by the Key Speech Frequencies of 250 to 4000 Hz (± one-half octave)

250 Hz
• Fundamental frequency of females' and children's voices
• Male voice harmonics
• Nasal murmur associated with the phonemes /m/, /n/, and /ng/
• First formant of vowels /u/ and /i/
• Prosody
• Suprasegmental patterns (stress, rate, inflection, intonation)
• Voicing cues

500 Hz
• Harmonics of all voices (male, female, child)
• First formants of most vowels
• Nasality cues
• Some plosive bursts associated with /b/ and /d/
• Suprasegmentals
• Voicing cues

1000 Hz
• Second formants of back and central vowels
• The important acoustic cues for manner of articulation
• Consonant-vowel and vowel-consonant transition information
• Some plosive bursts
• Unstressed morphemes
• Suprasegmentals
• Voicing cues

2000 Hz
• The key frequency for speech intelligibility
• The important acoustic cues for place of articulation
• Second and third formant information for front vowels
• Consonant-vowel and vowel-consonant transition information
• Acoustic information for the liquids /r/ and /l/
• Affricate bursts
• Plosive bursts

4000 Hz
• The key frequency for /s/ and /z/ audibility that is critical for language learning:
  – plurals
  – possessives
  – third-person singular verb forms
  – auxiliaries
  – copulas
  – past perfect
  – questions
  – idioms
• Fricative turbulence
• Consonant quality

Source:  Adapted from Ling, D. (2002). Speech and the Hearing-Impaired Child (2nd ed.). Washington, DC: The Alexander Graham Bell Association for the Deaf and Hard of Hearing (http://www.listeningandspokenlanguage.org). Reprinted with permission. All rights reserved.




Figure 2–3.  Graphed as the amplitude of a sine wave caused by displacement of air molecules, intensity (sound pressure) is measured in decibels (dB) and perceived as loudness.

Speech is one example of a complex sound. The spectrum of a speech wave specifies the frequencies, amplitudes, and phases of each of the wave's sinusoidal components. In all learning domains — home, school, and community settings — there are a variety of sounds from many sources. Undesirable sounds typically are referred to as noise. Noise levels tend to be more intense in the low frequencies and less powerful at the high frequencies. The weaker consonant sounds (i.e., th, s, sh, f, p) are more affected by noise than are the louder vowel sounds. In noisy situations, the intelligibility of speech (the ability to hear word-sound distinctions clearly) is compromised even though the sounds might still be audible.

Audibility Versus Intelligibility of Speech

There is a big difference between an audible signal and an intelligible signal. Speech is audible if the person is able simply to detect its presence. However, for speech to be intelligible, the person must be able to discriminate the word-sound distinctions of individual phonemes and speech sounds. Consequently, speech might be very audible but not consistently intelligible to a child with even a minimal hearing loss, causing the child to hear, for example, words such as walked, walking, walker, and walks, all as "ah." Vowel sounds (e.g., o, u, ee) have strong low-frequency energy, about 250 to 500 Hz. They are the most powerful sounds in English. Vowels carry 90% of the energy of speech. On the other hand, consonant sounds (like f and s) are very weak high-frequency sounds, with energy focused at 2000 and 4000 Hz and above (see Table 2–1). Consonants carry only 10% of the energy of speech but 90% of the information needed to perceive the differences among the sounds. The octave band at 2000 Hz carries the greatest amount of speech information. For speech to be perceived clearly, both vowels and consonants must be acoustically available to the brain. Persons with hearing losses typically have the most difficulty hearing the weak, unvoiced, high-frequency consonant sounds.

There is a science called speech acoustics that informs our analysis of the relationship between the child’s brain perception of auditory information and their motor production of speech; children speak what and how they hear.

The Ling 6-7 Sound Test: Acoustic Basis and Description

The Ling sound test allows a quick and easy way to verify that a child detects the vowel and consonant sounds of spoken language (Ling, 2002; Perigoe & Paterson, 2013). The sounds used for this test were selected because they encompass the entire speech spectrum, as noted below:

• /m/ corresponds to 250 Hz, plus-minus one-half octave.
• /u/ is like a narrow band of noise corresponding to 500 Hz on the audiogram, plus-minus one-half octave.
• /a/ corresponds to 1000 Hz, plus-minus one-half octave.
• /sh/ is a band of noise corresponding to 2000 Hz, plus-minus one-half octave.
• /s/ is a band of noise corresponding to 4000 Hz and higher.
• /i/ has a first formant (resonance of the vocal tract) around 500 Hz, and a second formant around 2000 Hz; the second formant must be heard for the listener to be able to distinguish between front and back vowels.
• The silent interval is really a seventh "sound" that is necessary to track false-positive responses.

The Ling 6-7 Sound Test allows parents, professionals, and teachers to know the child's distance hearing or earshot. Knowing a child's distance hearing has vital instructional ramifications if one intends to use audition as a viable modality for the reception of information. Sounds must first be detected before the brain can be stimulated. Frequent administration of this test can help parents, professionals, and teachers monitor for hearing aid malfunction and/or cochlear implant failure, changes in the child's hearing, or onset of middle ear conductive involvement that would be reflected by reduced earshot. Refer to Chapter 4 for a further discussion of distance hearing, and to Appendix 2 for specific Ling 6-7 Sound Test application and instructions.
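As a simple illustration of why each sound samples a different part of the speech spectrum, the sketch below records the frequency regions listed above and flags any sound a child did not respond to during a quick listening check. The data structure and helper function are mine and are illustrative only; they are not the clinical protocol (see Appendix 2 for actual administration instructions).

# Illustrative sketch only: Ling 6-7 sounds and the frequency regions they sample.
LING_SOUNDS_HZ = {
    "m":  "about 250 Hz (plus-minus one-half octave)",
    "u":  "about 500 Hz",
    "a":  "about 1000 Hz",
    "sh": "about 2000 Hz",
    "s":  "4000 Hz and higher",
    "i":  "first formant ~500 Hz, second formant ~2000 Hz",
    "(silence)": "seventh 'sound' used to track false-positive responses",
}

def missed_sounds(detected):
    """Return the presented sounds the child did not respond to.

    `detected` maps each presented sound to True/False from a listening check
    at a known distance; repeated misses in one frequency region may suggest a
    technology problem or a hearing change worth investigating.
    """
    return [sound for sound, heard in detected.items() if not heard]

example = {"m": True, "u": True, "a": True, "sh": True, "s": False, "i": True}
print(missed_sounds(example))   # ['s'] -- e.g., reduced high-frequency audibility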

Audiovestibular Structures

The ear develops from the same embryonic layer as the central nervous system (Northern & Downs, 2014). This means that normal development of the auditory mechanism including neural innervation begins in the early weeks of embryological development. At the time of birth, the auditory system of a baby with normal hearing has been sufficiently developed to have allowed acoustic brain stimulation beginning in the 20th week of gestation. Middle ear structures are developed and functional by the 37th week of prenatal development. Interestingly, the external ear and auditory canal continue to grow as the child grows, up to the age of about 9 years (Hall, 2014; Simmons, 2003). The auditory system is an incredible structure. Figure 2–4 shows a cross section of the auditory system. The peripheral portion can be subdivided into three general areas: the outer ear, the middle ear, and the inner ear; the central portion includes the brainstem and cortex. To understand how the auditory system breaks down, resulting in hearing loss, it is first necessary to discuss how the ear is put together and how it normally functions.

Data Input Analogy

Together, the outer, middle, and inner ear are known as the peripheral auditory system and function to receive sounds. The brainstem and auditory cortex, called the central auditory system, take the sounds that are received through the peripheral system and code them for more complex, higher-level auditory processes. Therefore, two general processes of hearing exist: (a) transmission of sounds to the brain through the outer, middle, and inner ear; and (b) understanding the meaning of those sounds once they have been transmitted to the brain. Before understanding can take place, sounds (auditory information) must first be heard, similar to how computer data input precedes data manipulation. There can be no auditory perception without the sensory evidence of sound input (Berlin & Weyand, 2003; Boothroyd, 2019). One way to illustrate the potentially negative effects of any type or degree of hearing loss on a child's language and overall development, and to further explain the peripheral-central distinction, is to use a computer analogy. The primary concept is that data input precedes data processing.

It is important to recognize that when we talk about hearing “sounds,” what is meant is the reception and transmission of auditory information and knowledge from the environment through the outer, middle, and inner ear to auditory brain centers where the child learns, through exposure and practice, the meaning of the auditory event.

Figure 2–4.  The peripheral portion of the ear can be subdivided into three general areas — the outer ear, the middle ear, and the inner ear; the central portion includes the brainstem and cortex. (Labeled structures include the pinna, external auditory meatus [ear canal], tympanic membrane, malleus, incus, stapes, eustachian tube, oval window, vestibule, semicircular canals, cochlea, facial nerve, vestibular nerve, and cochlear nerve; sound is perceived in the brain.)

An infant or toddler (or anyone) must have information or data to learn. A primary avenue for entering information into the brain is through the ears, via hearing. So, the ears can be thought of as analogous to a computer keyboard, and the brain could be compared to a computer hard drive. As human beings we are neurologically wired to code, and hence to develop, spoken language and reading skills through the auditory centers of the brain, the hard drive (Pugh, Sandak, Frost, Moore, & Mencl, 2006; Sharma et al., 2004; Sharma & Nash, 2009). Therefore, auditory data input is critical, and it is worth our time and effort to make detailed auditory information available to a child with any degree of hearing loss. If data are entered inaccurately, incompletely, or inconsistently, analogous to using a malfunctioning computer keyboard or to having one's fingers on the wrong keys of a computer keyboard, the child's brain or hard drive will have incorrect or incomplete information to process. How can a child be expected to learn when the information that reaches his or her brain is deficient? The brain can organize itself only around the information that it receives. Amplification technologies, such as hearing aids, personal remote microphone (RM) systems, and sound-field systems, and biomedical devices such as cochlear implants, can all be thought of as keyboards — as a means of entering acoustic information into the child's hard drive. So, technology is really a more efficient keyboard. Unfortunately, technology is not a perfect keyboard, and it does not have a life of its own, any more than a car has a life of its own. Technology is only as effective as the use to which it is put, and only as efficient as the people who use it. Conversely, without the technology, without acoustic data input, auditory brain access is not possible for persons with hearing loss. To continue the computer analogy, once the keyboard is repaired or the figurative "fingers" are placed on the correct keys of the keyboard — allowing data to be entered accurately, analogous to using amplification technology that enables a child to detect word-sound distinctions — what happens to all of the previously entered inaccurate and incomplete information? Is there a magic button that automatically converts inaccurate data stored in the brain to complete and correct information? Unfortunately, all of the corrected data need to be reentered. Thus, the longer a child's hearing problem remains unrecognized and unmanaged, the more destructive and far-reaching are the snowballing effects of hearing loss.


Early intervention is critical — the earlier the better. Hearing is only the first step in the intervention chain. Once the auditory brain has been accessed as much as possible through appropriate amplification or biomedical technology, the child will have an opportunity to discriminate word-sound distinctions as a basis for learning language, which in turn provides the child with an opportunity to communicate and acquire knowledge of the world. All levels of the acoustic filter effect of hearing loss discussed previously need to be understood and managed. In other words, just medically managing/correcting a conductive hearing loss, or simply wearing hearing aids or a cochlear implant, does not ensure development of an effective language base. The longer a child's peripheral auditory system remains unmanaged, the longer data entry to the central auditory system (the brain) will be incomplete and inaccurate, causing the snowballing effects of the acoustic filter to negatively impact the child's overall knowledge and life development. Conversely, the more intelligible and complete the data entered are, the better opportunity the infant or toddler will have to learn language that serves as a foundation for later reading and academic skills. It cannot be said enough: the peripheral auditory system (the doorway) must be managed and accessed early to provide the best acoustic access of information to the central auditory network — the brain. The point is, from the inception of early intervention programming, comprehensive audiologic and hearing management is an absolutely necessary first step for a child of any age and with any type of hearing or listening difficulty to have an opportunity to learn.


Outer and Middle Ear

Except for the pinna portion of the outer ear, the structure of the human ear is located entirely inside the head and is well protected by the temporal bone, which is the hardest bone in the body (Hall, 2014; Kramer & Brown, 2019) (refer to Figure 2–4). The conductive mechanism is composed of the outer ear (pinna and ear canal) and the middle ear, which includes the eardrum and a tiny air-filled cavity containing the three smallest bones in the body — the malleus, incus, and stapes. The portion of each ear that is the most external and can be easily seen is a flap of skin and cartilage called the pinna. The pinnae are located on each side of the head, and they function to collect sound waves and focus them into the ear canal. In turn, the ear canal resonates critical high-frequency components of speech sounds. The resonant frequency of the typical ear canal is 2700 Hz; it is a crucial frequency for consonant intelligibility. Small hairs and earwax within the ear canal protect the delicate tympanic membrane (eardrum) from debris. Located at the end of the inside portion of the ear canal (refer to Figure 2–4), the eardrum has a near-horizontal position within the ear canal at birth, making the eardrum difficult to view during infancy. The eardrum does not reach its more vertical position of 50 to 60 degrees from horizontal until the third year of life (Northern & Downs, 2014). The eardrum marks the boundary between the outer and middle portions of the ear. The middle ear is a very small cavity filled with air. Efficiently transmitting the airborne sound that hits the eardrum to the fluid-filled inner ear is the middle ear's primary function. Sound waves enter the ear canal and strike the eardrum, causing it to vibrate. Next, vibrations travel from the middle to the inner ear through the three smallest bones in the body, known as the ossicles. The malleus, incus, and stapes are interconnected and attached to the tympanic membrane laterally and the oval window medially. The ossicular chain pumps this vibrational information to the oval window and then to the inner ear. The eustachian tube passes air to the middle ear and runs from the back of the throat to the anterior (front) wall of the middle ear cavity. When the eustachian tube functions appropriately, the air that is used by the middle ear structures for metabolism is constantly replenished. If and when the eustachian tube does not open often enough to allow air into the middle ear, disease and fluid can develop from lack of sufficient oxygen. These diseases are discussed in the next chapter.
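The 2700 Hz resonance mentioned above can be approximated with a common textbook simplification: treating the ear canal as a tube closed at one end by the eardrum, whose resonance is the speed of sound divided by four times the tube length. The sketch below is illustrative only; the canal length used is an assumed round number, not a measurement from the text.

# Illustrative sketch: ear canal modeled as a quarter-wave resonator (a tube
# closed at one end by the eardrum). Resonant frequency ~ c / (4 * L).
SPEED_OF_SOUND_M_PER_S = 344.0

def quarter_wave_resonance_hz(tube_length_m):
    return SPEED_OF_SOUND_M_PER_S / (4.0 * tube_length_m)

def implied_length_m(resonance_hz):
    return SPEED_OF_SOUND_M_PER_S / (4.0 * resonance_hz)

print(round(quarter_wave_resonance_hz(0.032)))   # ~2688 Hz for an assumed ~3.2 cm canal
print(round(implied_length_m(2700) * 100, 1))    # ~3.2 -> the chapter's 2700 Hz figure
# corresponds to an effective canal length of roughly 3.2 cm under this simple model,
# which is why the canal boosts exactly the high-frequency consonant region.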

Inner Ear to the Brain

The organs of hearing (cochlea) and balance and equilibrium (vestibular system — semicircular canals) are housed together deep within the temporal bone of the skull. Refer to Figure 2–4. It is not uncommon for an infant or toddler to have both a hearing and a balance disorder, because the organs for vestibular and auditory detection are located together within the inner ear and have similar embryologic origins. The sensorineural mechanism is composed of the cochlea (sensory portion) and the eighth cranial nerve (neural portion). The highly specialized sense organ of the ear is called the organ of Corti, which acts as an acoustic intensity and frequency analyzer, all in a volume the size of a small grape.



The organ of Corti is contained within the cochlea, which is a small, snail-shell-shaped, coiled tube that has approximately 2.75 turns and is about 3.5 cm long. The cochlea is a fluid-filled structure. Hair cells found in the organ of Corti contain stereocilia that project into a gelatinous structure that covers the specialized hair cells. The hair cells are the sensory receptors for sound. Vibrational movements generated from the middle ear ossicles instigate a wave complex in the cochlear fluids that displaces the inner ear fluid and hair cells. The hair cells transform the hydraulic movement of the inner ear fluids into electrical impulses. The electrical impulses generated from these hair cells stimulate neural impulses in the auditory nerve. In turn, the auditory nerve transmits these neural impulses to the many auditory centers of the brainstem and brain. The central nervous system is organized hierarchically. Neural structures become more complex as signals are transmitted from the brainstem to the cortex. Some scientists section the auditory cortex into primary and association areas. Others refer to the auditory cortex as those areas of the brain that are exceedingly receptive to various acoustic stimuli and have similar cell types (Musiek, 2009). Sound enters the auditory neural system through approximately 16,000 hair cells in the cochlea; about 3,500 are the inner hair cells that are key for transmitting sound to the brain. From the hair cells, impulses are sent to the eighth cranial nerve, which has both auditory and vestibular branches as it reaches the brainstem. There are approximately 25,000 afferent nerve fibers. Ultimately, auditory signals are processed by more than 200 million neurons in the brain.


The vestibulocochlear nerve (also called the auditory vestibular nerve and labeled as the eighth cranial nerve) transmits sound and equilibrium (balance) information from the inner ear to the brain. This eighth cranial nerve can be identified as a "bottleneck" in the auditory pathway because all acoustic and vestibular information passes through these neurons on the way to the brain. Note that the seventh cranial nerve, the facial nerve, also exits the temporal bone through the internal auditory canal along with the eighth cranial nerve.

The Vestibular System: The Sensory Organs of Balance

Equilibrium is maintained through a complex integration of the vestibular system, vision, somatosensory-proprioception, and the central nervous system. The vestibular system is the primary sensory modality, contributing approximately two-thirds of the critical data about where we are in space, including our sense of motion, speed, and direction. The vestibular system is an internal reference, whereas vision and somatosensation are external reference systems, telling the brain about the status of the outside world. The vestibular system is composed of three semicircular canals (anterior, posterior, and horizontal) and two otolith organs (the saccule and utricle). The semicircular canals detect information about rotational and angular accelerations (such as nodding or shaking the head), while the otolith organs provide information about linear acceleration and gravitational changes, such as moving in a car (Kramer & Brown, 2019). The vestibular system is responsible for three critical reflexes for postural control, head and neck stability, and steady vision during head movement. Without a fully operational vestibular system, a child may be unable to sense gravity, so swimming in the dark or in murky water may not be safe (Handelsman, 2018). All newborns and infants with identified congenital or acquired sensorineural hearing loss should be considered at risk for comorbidity of vestibular dysfunction (Gans, 2019). Jankey et al. (2018) recommend that all children whose hearing loss is greater than 65 dB HL and who sit later than 7.5 months or walk later than 14.5 months should be referred for a vestibular evaluation. Parental concerns about gross motor development also should be viewed as a red flag for vestibular issues; practitioners should ask about and screen the motor development of all children with hearing loss. Because physical therapy is very beneficial, children identified with vestibular loss should be referred to a physical therapist for further evaluation and treatment. This chapter has offered a brief overview of acoustics and the structure and function of the audiovestibular system. The next chapter discusses how these structures can break down.
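Before moving on, the referral guideline quoted above can be summarized in a few lines of code. This sketch captures one reading of that recommendation; the function name and parameters are mine, and it is not a clinical decision tool.

# Illustrative sketch of the vestibular referral guideline cited in this chapter.
def refer_for_vestibular_evaluation(hearing_loss_db_hl,
                                    sat_independently_months,
                                    walked_months,
                                    parental_motor_concern=False):
    """Flag a vestibular referral per the criteria cited above.

    Criteria: hearing loss greater than 65 dB HL AND sitting later than
    7.5 months or walking later than 14.5 months. Parental concern about
    gross motor development is treated as an additional red flag.
    """
    delayed_motor = (sat_independently_months > 7.5) or (walked_months > 14.5)
    return (hearing_loss_db_hl > 65 and delayed_motor) or parental_motor_concern

print(refer_for_vestibular_evaluation(80, sat_independently_months=9, walked_months=13))
# True -- hearing loss greater than 65 dB HL and sat later than 7.5 months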

The primary survival function of the inner ear structure is equilibrium as managed by the vestibular system. One can live without hearing, but actual survival is possible only if the organism can resist the pull of gravity.

3 Hearing and Hearing Loss in Infants and Children

Key Points Presented in the Chapter

• Hearing losses are classified as congenital or acquired depending on when they first occur in a child's life.
• Endogenous (genetic) or exogenous (environmental) factors may cause either congenital (early-onset) or acquired (later-onset) hearing losses.
• Genes may predispose a baby to develop a hearing loss when triggered by an environmental event; this is called a multifactorial situation.
• In addition to cause and time of onset, classification of hearing loss includes severity — from minimal to profound, depending on how much sound reception and auditory information is blocked from reaching the brain due to a "doorway problem" (see Chapter 1).
• Every childhood hearing loss, no matter how slight, warrants auditory management.
• Today, the degree of hearing loss does not determine functional outcomes if we provide early use of technology and auditory habilitative services; there is no degree of hearing loss that precludes the primary use of audition for the development of spoken language and literacy if we do what it takes.
• Hearing losses also can be classified as conductive, sensorineural, or mixed, depending on where they occur in the auditory system.
• Every hearing loss, whether or not it has a treatable medical cause, requires audiologic management to provide acoustic access of auditory information to the child's brain.
• Progressive sensorineural hearing losses, which occur about 50% of the time, may result from large vestibular aqueduct syndrome (LVAS), perilymphatic fistula (PLF), cytomegalovirus (CMV), meningitis, congenital syphilis, ototoxicity, endolymphatic hydrops, autoimmune disorders, delayed hereditary hearing losses, and other unknown causes.
• Auditory neuropathy spectrum disorder (ANSD) is a complicated disorder in which sound enters the inner ear normally, but the transmission of auditory signals from the inner ear to the brain is impaired.
• Among children with hearing difficulties, tinnitus affects as many as 24% to 29% to some degree.
• Studies show that up to 90% of children with sensorineural hearing loss also exhibit vestibular/balance dysfunction.

Introduction

Infants and children require brain access to detailed sound for auditory learning to occur. A primary problem with hearing loss is that it interferes with sound and information reaching the brain, precluding or diluting auditory capabilities (Fitzpatrick & Doucet, 2013; Kral & Lenarz, 2015; Musiek, 2009). Neural development and organization of the auditory brain centers require sensory input and extensive auditory experience (Chermak & Musiek, 2014a; Estabrooks, 2012; Kilgard et al., 2007; Kral et al., 2016; Sharma et al., 2004; Sharma, Dorman, & Kral, 2005). Three general types of peripheral hearing loss interfere with delivery of auditory information to the brain. Location of the damage in the auditory system, also called site of lesion, determines the classification of hearing loss as conductive, sensorineural, or mixed (Figure 3–1) (Hall, 2014; Kramer & Brown, 2019). The purpose of this chapter is to provide an overview of the general classifications of hearing impairment, and then to detail some of the specific pathologies that can cause hearing loss.

Classifications

Degree (severity), timing of the onset, and overall cause are three general classification schemes for hearing loss.

Figure 3–1.  Location of the damage in the auditory system, also called site of lesion, determines the classification of hearing loss as conductive, sensorineural, or mixed. A. Conductive hearing loss: outer ear and/or middle ear. B. Sensorineural hearing loss: inner ear.

Degree (Severity): Minimal to Profound

An important classification of hearing loss is severity: how much sound reception is blocked from reaching the brain (Northern & Downs, 2014). Normal hearing sensitivity in children is 15 dB HL or better at all frequencies and in both ears, with normal middle ear function. Anything less places the child at risk for spoken language and literacy problems and academic failure if we do not do what it takes to access and develop the child's auditory brain centers with enriched auditory information. Audiometric degree of hearing loss used to be a primary variable in functional outcomes — whether a child would likely be auditory or visual in orientation. Now, degree of hearing loss is not a variable if we do what it takes, technologically and habilitatively, to access and develop auditory brain centers. That is, a child with any degree of hearing loss, no matter how profound, can function like a well-managed child with a mild to moderate hearing loss if we access the brain using appropriate technology early on (hearing aids or cochlear implants) and provide enriched auditory language information as discussed in the second part of this book (Houston et al., 2012).
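The degree labels used in the subsections that follow can be summarized as a simple lookup. This sketch maps a child's pure-tone average to the ranges named in this chapter; the severe (70 to 90 dB HL) and profound (greater than 90 dB HL) ranges are the conventional continuation of the "minimal to profound" scale rather than values stated in the visible text.

# Illustrative sketch: degree-of-hearing-loss categories by pure-tone average (dB HL).
def degree_of_hearing_loss(pta_db_hl):
    if pta_db_hl <= 15:
        return "normal hearing sensitivity (for children)"
    if pta_db_hl <= 25:
        return "minimal or slight"
    if pta_db_hl <= 40:
        return "mild"
    if pta_db_hl <= 55:
        return "moderate"
    if pta_db_hl <= 70:
        return "moderately severe"
    if pta_db_hl <= 90:
        return "severe"        # conventional extension of the scale
    return "profound"          # conventional extension of the scale

print(degree_of_hearing_loss(20))   # minimal or slight
print(degree_of_hearing_loss(50))   # moderate
# Keep the chapter's caveat in mind: these labels describe audibility, not outcomes;
# with early technology and auditory intervention, degree need not determine outcome.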

Normal Hearing Sensitivity: 0 to 15 dB HL for Children

An adult with normal hearing sensitivity has a hearing threshold equal to or better than 25 dB HL. A child, in the unique situation of developing the crucial word-sound distinctions of language, requires better hearing sensitivity. To receive the complete speech signal for even soft conversation, a child's hearing sensitivity must be 15 dB HL or better at all frequencies in both ears (Dobie & Berlin, 1979; Hall, 2014; Northern & Downs, 2014; Tharpe, 2006). Good hearing sensitivity does not always indicate effective auditory brain access of spoken information in different auditory environments. A baby or child may hear well in the perfect quiet of the audiologic test room or the quiet of a close, one-on-one conversation, but he or she will understand significantly less in the background noise of more public settings such as daycare or classrooms.

Minimal or Slight Hearing Loss: 15 to 25 dB HL for Children

A minimal, borderline, or slight hearing loss may cause difficulty for children in the following areas (Northern & Downs, 2014; Tharpe, 2006): distinguishing distant or soft speech (the child may miss at least 10% of classroom learning), responding appropriately to subtle conversational cues, keeping up to speed in rapid-paced interchanges, overhearing social exchanges, and detecting the subtle phonetic markers in grammar, such as those indicating plurality, possessives, or regular past tense. The greater effort required for the child to hear may cause fatigue and create the appearance of immaturity; in fact, many children with hearing loss experience significant fatigue that can compromise their classroom learning (Anderson, 2004; Bess & Hornsby, 2014). Complete brain access to clear speech is essential, so a minimal hearing loss can have major negative consequences. Due to inadequate hearing screening environments and lack of information about the cornerstone role of hearing in the development of literacy and academics, many of these slight hearing losses fall under the radar. Every childhood hearing loss, no matter how slight, warrants auditory management (Moeller & McCreery, 2017). A mild hearing loss (greater than 15 dB HL but less than 40 dB HL) or a unilateral hearing loss (UHL) may not be problematic for a linguistically mature person who has sophisticated and disciplined attending skills, but even a "minimal" hearing impairment can seriously affect the overall development of an infant or child who is in the process of learning language and developing spoken communication skills as the basis for literacy and acquiring knowledge and social skills (Kerns & Whitelaw, 2014).

Unilateral Hearing Loss

A person with normal hearing in one ear and at least a mild permanent hearing loss in the other has a UHL. There also is a category called single-sided deafness (SSD), which is nonfunctioning hearing in one ear, with normal hearing in the other.

A UHL can cause significant trouble hearing, contrary to popular opinion. In fact, children with UHLs are at 10 times greater risk for academic failure than are children with normal hearing in both ears (Bess, Dodd-Murphy, & Parker, 1998; Fitzpatrick, Rhoades, Dornan, Thomas, & Goldberg, 2014; Tharpe, 2006). Identifying speech in noise, localizing sound sources, and other important skills for classroom learning are difficult for the unilateral listener. A child with UHL has more difficulty discerning speech, even when directed into the good ear, than a child with two fully functional ears (McCreery, 2014). There are differences in behavior between children with normal hearing and children with UHL (Bess et al., 1998; Tharpe, 2006). When compared to peers with normal hearing, children with UHL are described as more distractible, more frustrated, more dependent, less attentive, and appearing less confident in the classroom. Often, behavioral difficulties are more obvious than hearing difficulties in a child with UHL. The connection is rarely made between behavior and hearing because the child has one perfectly good ear and is thought to be able to hear when he or she wants to. Due to newborn hearing screening programs, UHLs are being identified in neonates for the first time, and professionals are wrestling with appropriate management strategies for this new population (Fitzpatrick et al., 2013; Scholl, 2006). Below are some summary points (Cui, 2014; Dillon, 2012; McCreery, 2014; McKay, 2008; Smith & Wolfe, 2018).

Unilateral Hearing Loss in Infants: A Summary

• Definition of UHL in babies: good ear has a pure-tone average (PTA) of 15 dB HL or better; poor ear has a PTA of 20 dB or worse, or air conduction thresholds poorer than 25 dB HL at two frequencies above 2000 Hz.
• Definition of SSD: nonfunctioning hearing in one ear, with normal hearing in the other. About 25% of people with UHL have SSD.
• Newborn hearing screenings may not identify mild hearing losses, as screenings are usually conducted at 35 dB.
• The biggest problems posed by UHLs are difficulty localizing the sound source; poor understanding of speech in noisy environments, especially with multiple speakers; and a reduction of distance hearing that interferes with incidental learning and the acquisition of information, including social cues.
• There is not yet research to determine whether hearing aid fitting before 12 months of age is better than waiting until that time, because it is possible to create a good signal-to-noise ratio (S/N ratio) by physically positioning the baby close to the talker and by being vigilant about reducing noise in the environment, particularly on the side of the baby's good ear. On the other hand, there is an argument that neural pathways from both ears should be stimulated and integrated right away; therefore, some professionals suggest amplifying earlier.
• After 12 months of age, amplify a mild to moderately severe UHL after obtaining the ear- and frequency-specific air and bone conduction thresholds that are required for accurate amplification. Use a hearing aid loaner bank initially to determine if a hearing aid is helpful before requiring parents to purchase a hearing aid.
• The goal of amplification for a UHL is to stimulate the neural pathways of both ears, thus creating more equal binaural hearing.
• There is growing evidence that some children with SSD can benefit from a cochlear implant (Gifford, 2017).
• Extra and enriched auditory, language, and preliteracy stimulation must be provided to the infant from the beginning; early intervention is critical. Also, the parent ought to receive counseling about UHL issues.
• Data show that the more severe the UHL, the greater the likelihood of academic failure if the child does not receive management.
• Right ear UHLs seem to be more problematic than left ear UHLs.
• When retesting the hearing of the infant with UHL, test and monitor both ears often — UHLs can progress to bilateral hearing losses.
• Possible causes of unilateral to bilateral hearing loss progression are CMV, enlarged vestibular aqueduct, and genetics.

See the website http://www.cdc.gov/ncbddd/hearingloss/conference.html for more information.
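The infant UHL definition summarized in the first bullet above can be expressed as a simple predicate. This is an illustrative sketch only; the function name and parameters are mine, thresholds would come from ear-specific audiometry, and it is not a diagnostic tool.

# Illustrative sketch of the infant UHL definition summarized above.
def meets_uhl_definition(good_ear_pta_db_hl,
                         poor_ear_pta_db_hl,
                         poor_ear_thresholds_above_2khz_db_hl=()):
    """True if the quoted infant UHL definition is met.

    Good ear: PTA of 15 dB HL or better. Poor ear: PTA of 20 dB HL or worse,
    OR air conduction thresholds poorer than 25 dB HL at two or more
    frequencies above 2000 Hz. (In dB HL, larger numbers mean poorer hearing.)
    """
    good_ear_ok = good_ear_pta_db_hl <= 15
    poor_by_pta = poor_ear_pta_db_hl >= 20
    poor_by_highs = sum(t > 25 for t in poor_ear_thresholds_above_2khz_db_hl) >= 2
    return good_ear_ok and (poor_by_pta or poor_by_highs)

print(meets_uhl_definition(10, 45))              # True
print(meets_uhl_definition(10, 15, (30, 40)))    # True (high-frequency drop in poor ear)
print(meets_uhl_definition(10, 10, (20, 20)))    # False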

Mild Hearing Loss: 25 to 40 dB HL

Mild hearing loss can be thought of as an audibility deficit (Moeller & McCreery, 2017). Depending on environmental noise level, distance from talker, and the configuration (pattern) of the hearing loss, a child who experiences a 30-dB hearing loss can miss 25% to 40% of the speech signal if he or she does not receive audiologic management, because soft speech, word endings, and unstressed words will not be audible (Killion & Mueller, 2010; Martin, 2008; Northern & Downs, 2014).

The child likely also misses passive learning opportunities by being unable to overhear conversations. A child with a 35- to 40-dB hearing loss without hearing technology can miss up to 50% of class discussions, especially with far-off or soft voices (Anderson & Arnoldi, 2011). Accusations of “daydreaming,” “hearing when he or she wants to,” or “not trying” may foster a negative self-image in a child (Estabrooks, 2012; Moeller & McCreery, 2017; Tharpe, 2006). The extra effort exerted to hear often leaves the child more fatigued or irritable than peers with normal hearing (Anderson, 2004; Bess & Hornsby, 2014). Though this degree of hearing loss is labeled “mild,” the repercussions are great in the life of the baby or child, especially the impact of not hearing clear speech and thus missing information.

Missing auditory information, caused by an unmanaged hearing loss — even a mild hearing loss — results in the child experiencing a cumulative loss of knowledge over time. Appropriately fitted hearing aids boost audibility and therefore facilitate brain access to information; technology should be fitted early and worn for at least 10 to 12 hours per day (McCreery et al., 2015; Moeller & McCreery, 2017).

With a mild, unmanaged hearing loss, a child may be at least one grade level behind his or her peers (Johnson & Seaton, 2012; Northern & Downs, 2014; Tharpe, 2006). However, appropriate hearing aids/remote microphone (RM) systems, auditory-based intervention, and auditory information enrichment can overcome the secondary negative effects of hearing loss.

Moderate Hearing Loss: 40 to 55 dB HL

If the content and vocabulary of the message are known, a child with a moderate, unmanaged hearing loss may understand face-to-face conversational speech from 3 to 5 feet away in a quiet room, causing the parent or teacher to vastly overestimate this child’s auditory brain access of information. However, in typical classroom situations, with a 40- to 50-dB hearing loss, 50% to 75% of speech information may be missed, and with a 50-dB hearing loss, 80% to 100% might be missed (Anderson & Arnoldi, 2011; Killion & Mueller, 2010). Without intervention, the child often has impaired speech production, delayed or defective syntax, and limited vocabulary and knowledge. Additional negative effects may include deficits in maturity, communication, and social interaction (Meyer, 2003; Westby, 2017). By fourth grade, children without appropriate early (and often continuing) intervention for their moderate hearing losses often fall at least two grade levels behind (Johnson & Seaton, 2012; Northern & Downs, 2014; Ross, Brackett, & Maxon, 1991). Therefore, it is critical to use appropriate hearing aids, RM systems, and auditory-based intervention for auditory information enrichment to overcome the secondary negative effects of hearing loss.

Moderately Severe Hearing Loss: 55 to 70 dB HL

One hundred percent of classroom content can be missed with an unamplified 55-dB hearing loss (Killion & Mueller, 2010), and spoken communication must be very close and loud to be minimally understood if amplification technologies are not worn. Without appropriate early and continual assistance, most children with this degree of hearing loss have significant difficulty in school and show evidence of delayed language and knowledge, delayed syntax and reading, reduced speech intelligibility, and perhaps an atonal voice quality. Social interactions are, of course, likely difficult. Appropriate use of hearing aids and RM systems and, in some cases, cochlear implants, along with auditory-based intervention for auditory information enrichment, can overcome the secondary negative effects of hearing loss.

Severe Hearing Loss: 70 to 90 dB HL

With appropriate amplification (hearing aids or, more likely, cochlear implants), a child with a severe hearing loss should be able to detect all speech sounds as well as environmental sounds, even though he or she cannot hear conversational speech at all without using technology (Dillon, 2012; Ling, 2002; Madell, Flexer, Wolfe, & Schafer, 2019b). Spoken language will not develop well or at all without appropriate early use of technology for auditory brain access, followed by auditory language intervention and auditory information enrichment. A child with appropriate auditory technology and auditory-based intervention can be functionally hard of hearing and thus learn and live in a mainstreamed environment, perhaps with the assistance of some support services (Fitzpatrick & Doucet, 2013; Geers et al., 2011; Ling, 2002).

Profound Hearing Loss: 90 dB HL or Greater

A baby or child with a profound hearing loss cannot hear speech or environmental sounds without amplification. However, in this day and age, the degree of hearing loss does not determine functional outcomes when there is early use of technology to get auditory information to the child’s brain, coupled with family-focused auditory habilitative services. Audiometric deafness does not preclude auditory brain access and auditory neural development when we use cochlear implants. That is, if the family has spoken language as a desired outcome for their baby who is born with profound deafness, that outcome is likely if we do what it takes, as detailed in this book.
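
As a quick cross-reference for the degree categories described above, here is a small, hypothetical Python sketch (not part of the text) that maps an unaided pure-tone average to this chapter's degree labels. How boundary values are assigned (for example, exactly 40 dB HL) is an assumption, because the stated ranges meet at their edges.

```python
# Hypothetical sketch: mapping a pure-tone average (dB HL) to the degree labels
# used in this chapter. Assigning a boundary value to the higher category is an
# assumption; the chapter's ranges share their edge values.

def degree_of_hearing_loss(pta_db_hl):
    if pta_db_hl < 25:
        return "normal to minimal"   # below the mild range described in this section
    if pta_db_hl < 40:
        return "mild"                # 25 to 40 dB HL
    if pta_db_hl < 55:
        return "moderate"            # 40 to 55 dB HL
    if pta_db_hl < 70:
        return "moderately severe"   # 55 to 70 dB HL
    if pta_db_hl < 90:
        return "severe"              # 70 to 90 dB HL
    return "profound"                # 90 dB HL or greater

for pta in (30, 50, 65, 80, 95):
    print(pta, degree_of_hearing_loss(pta))
```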

Timing: Congenital or Acquired

The first occurrence of hearing loss in a child’s life determines the classification of that hearing loss as congenital or acquired. Congenital hearing losses typically occur before, at, or shortly after birth but prior to the learning of speech and language, usually before age 3 (Hall, 2014; Northern & Downs, 2014). In contrast, acquired hearing losses happen after speech and language have developed. Because neural programming for language and spoken communication already had occurred in auditory brain centers, the negative effects of an acquired hearing loss tend to be less severe. The earlier the hearing impairment occurs, the more the hearing loss interferes with language, learning, and development of auditory brain function, unless the child receives effective and intensive technological support and auditory/linguistic intervention (Kral et al., 2016). Though an acquired hearing loss remains subject to the acoustic filter phenomenon discussed in Chapter 1, the consequences of an acquired hearing loss are not as pervasive if the child had already established a complex linguistic system and mature neural connections in the auditory cortex. Still, with a severe later-onset hearing loss, the child’s speech can degenerate if amplification and auditory interventions are not provided.

General Causes: Endogenous, Exogenous, or Multifactorial

Endogenous or exogenous factors may cause either congenital (early-onset) or acquired (later-onset) hearing losses. Endogenous hearing losses have hereditary or genetic sources and different likelihoods of occurrence (Gravel et al., 2019; Van Camp, 2006). In the case of a hearing loss that is genetically dominant, even if only one parent carries the gene, each child has a 50% chance of receiving the gene and the hearing loss. When the hearing loss is recessive, both parents must carry the gene, and there is a 25% probability of each child receiving both genes and the resulting hearing loss. For an X-linked recessive hearing loss, each son has a 50% chance of inheriting the gene from his mother; the daughters will be unaffected. In a mitochondrial mutation, all children will receive the gene from the mother, because the father’s sperm does not contribute mitochondria to the developing child (Gravel et al., 2019; Lafferty, Hodges, & Rehm, 2014). In a multifactorial inheritance, the additive effects of several minor gene-pair abnormalities combine with nongenetic environmental factors or “triggers” (Lafferty et al., 2014). When this happens, hearing loss may manifest in a child with a genetic predisposition for hearing impairment who experiences an environmental event such as a virus or lack of oxygen. Because of the complexity of genetic transmission, families requiring information about endogenous hearing losses should consult a genetic counselor. Of the 50% of congenital hearing losses that have no identifiable cause, it is speculated that 90% are of genetic origin (Northern & Downs, 2014; Van Camp, 2006).

There is increasing interest in a condition called delayed hereditary sensorineural hearing loss. Though the child has normal hearing at birth and passes a newborn hearing screening, a hearing loss develops in the first 12 months of life and may continue to progress in severity until about six years of age. The parents may experience their baby responding to their voice and saying some words, only to have the child later stop hearing and talking altogether. Even if the hearing loss ultimately progresses to profound in degree, because of the early listening experience, the child typically responds well to auditory-based intervention because the brain was initially organized around auditory input. Delayed hereditary sensorineural hearing loss may affect more than one child per family, and the genetic transmission seems to be recessive in nature (Van Camp, 2006).

In contrast, when external events cause hearing loss, the classification is exogenous. Because exogenous hearing loss comes from environmental events, such as lack of oxygen, bacterial or viral infection (such as meningitis, measles, chickenpox, influenza, encephalitis, or mumps), noise exposure, head injury, and medications that damage the inner ear, this kind of hearing loss is not transmitted from parents to offspring in the genetic code.
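
The per-child transmission probabilities described above can be summarized in a brief, hypothetical sketch (Python, not from the text); it simply encodes the percentages stated in this section and is no substitute for consultation with a genetic counselor.

```python
# Hypothetical sketch: per-child chance of inheriting a hearing loss gene,
# encoding the probabilities stated in this section. Names are illustrative.

def risk_per_child(pattern, child_sex=None):
    """Return the probability that a child inherits the hearing loss.

    pattern: 'dominant' (one carrier parent), 'recessive' (both parents carriers),
             'x_linked_recessive' (carrier mother), or 'mitochondrial' (affected mother).
    child_sex: 'son' or 'daughter', needed only for the X-linked case.
    """
    if pattern == "dominant":
        return 0.50                      # one parent carries the gene
    if pattern == "recessive":
        return 0.25                      # both parents must carry the gene
    if pattern == "x_linked_recessive":
        return 0.50 if child_sex == "son" else 0.0   # daughters unaffected
    if pattern == "mitochondrial":
        return 1.0                       # all children receive it from the mother
    raise ValueError(f"unknown pattern: {pattern}")

print(risk_per_child("recessive"))                    # 0.25
print(risk_per_child("x_linked_recessive", "son"))    # 0.5
```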




Genetics, Syndromes, and Dysplasias

Connexin 26

In the past few years, many genes have been identified as causing hearing loss (Lin, 2014). The most common one is connexin 26. Mutations in the connexin 26 gene, on chromosome 13, are believed to be responsible for up to half of recessive nonsyndromic hearing losses. Connexin 26 is a protein that is a critical part of the potassium exchange that occurs in the organ of Corti (inner ear). Hearing loss is caused when a connexin 26 protein cannot be produced during embryological development. Connexin 26 hearing loss is most often seen in a baby or child with a mild, moderate, severe, or profound congenital hearing loss without any other medical problems and with no identified cause for the hearing loss; more than 50% will experience some hearing loss progression, usually gradually but sometimes precipitously (Kenna et al., 2010). Because the hearing loss is recessive (both parents must carry the gene), there may be no other family members with hearing loss. Certain connexin 26 mutations are more common in specific groups, such as Caucasian and Ashkenazi Jewish populations (Lafferty et al., 2014).

Genetic Testing

Genetic testing has been successful in identifying connexin 26 as the cause of a baby’s hearing loss. Genetic testing is the process of comparing the sequence of a child’s gene with that of the regularly occurring gene sequence; the comparison might detect mutations that could make the gene stop functioning. There are several comprehensive genetic tests for hereditary hearing loss currently on the market. Examples of commonly used comprehensive panels include the OtoGenome Test (performed at Harvard Partners Laboratory for Molecular Medicine) and the OtoSCOPE v8 Test (performed at the Molecular Otolaryngology and Renal Research Laboratories at the University of Iowa). These panels efficiently test, at one time, more than 100 genes known to cause hearing loss rather than performing sequential testing of single genes (Pandya, 2018). Typically, the patient provides one blood sample taken at their physician’s office; there is a fast turn-around time with 99% diagnostic specificity. See the following websites for more information:

1. https://morl.lab.uiowa.edu/clinical-diagnostics/deafness
2. http://personalizedmedicine.partners.org/Laboratory-ForMolecular-Medicine/Tests/HearingLoss/OtoGenome.aspx

The value of genetic testing is that knowing the cause of a baby’s hearing loss can lead to earlier and better treatment and to targeted management decisions. For example, children with severe to profound hearing losses caused by connexin 26 tend to do very well with cochlear implants because the underlying auditory neural tissue is intact.

Syndromes

A set of symptoms that together characterize a disease or disorder makes up a syndrome. Phenotype is the word used to identify the outward, physical manifestation of the organism — anything that is part of the observable structure, function, or behavior of a person (Gravel et al., 2019). On the other hand, genotype is the internally coded, inheritable information found in almost all cells, which stores the blueprint for building and maintaining a living organism.

All syndromes can be described in terms of their phenotype (how they look) and genotype (how they are coded inside cells).

An associated hearing loss accompanies more than 200 syndromes; many of them are rare or affect only single families. For a detailed listing and discussion of syndromes that often include hearing loss, please refer to Anne, Lieu, and Kenna (2018), Gravel et al. (2019), and to numerous websites, including those of the National Institutes of Health, MedlinePlus, the Harvard Medical School Center for Hereditary Deafness, and Boys Town National Research Hospital in Omaha, NE. A single structural abnormality raises the probability of additional abnormalities. Therefore, the presence of any structural abnormality in a child warrants investigation of the child’s hearing. Conversely, a child diagnosed with a hearing loss should be monitored for additional abnormalities (Schein & Miller, 2010). In about 30% of babies with hearing loss, the hearing loss is part of a syndrome; the baby has other problems. Pigmentation abnormalities, such as a white forelock or eyes of different colors, for example, often accompany sensorineural hearing losses (Waardenburg syndrome). Middle ear malformation caused by skeletal abnormalities can cause a congenital, conductive hearing loss. Sensorineural hearing loss may accompany significant visual deficits.

In one family, the grandmother had a unilateral, profound, sensorineural hearing loss and no other anomalies. The father had a white forelock and one blue eye and one brown eye, but normal hearing. The baby had a bilateral, profound sensorineural hearing loss, but no other identifiable physical features. Their phenotypes differed, but genetic testing revealed that all had variable expressions of Waardenburg syndrome.

The most common disorders that coincide with hearing loss include asthma, vision disturbance, disorders of executive function, neuropsychiatric problems, arthritis, heart trouble, intellectual disability, cerebral palsy, and cleft palate (Kramer & Brown, 2019). Often the number and severity of symptoms vary from person to person and from family member to family member; this is called variable expression. Below are four of the most common dominantly transmitted syndromes; only one parent has to carry the gene for the traits to be expressed, often in a variable fashion (Anne, Lieu, & Kenna, 2018; Lafferty et al., 2014).

• Alport’s syndrome. Hearing and vision problems are associated with this renal (kidney) disorder. Females tend to be less severely affected than males, and the accompanying sensorineural hearing loss is usually bilateral, high frequency, ranges from mild to severe, and appears in 40% to 60% of the cases. The hearing loss usually begins in adolescence and is progressive.
• Crouzon’s syndrome. Primary diagnostic features include an abnormally shaped head characterized by a central prominence in the frontal region and a nose resembling a beak. The premature closing of the cranial sutures causes the abnormal shape of the skull. The pinnae may be low set, and in one-third of the cases, structural abnormalities occur in the middle ear, causing conductive hearing impairment.
• Treacher Collins syndrome. Prominent diagnostic features include facial bone abnormalities such as deformed pinna, receding chin, depressed cheek bones, and large fishlike mouth with dental abnormalities. Atresia (absence or closure) of the ear canal, malformation of the middle ear ossicles, and cleft palate commonly appear. The manifestation of these features in persons with Treacher Collins syndrome varies in severity and appearance. A sensorineural component may be present, but the hearing loss is primarily conductive.
• Waardenburg syndrome. Primary diagnostic characteristics include a white forelock, lateral displacement of the medial canthi, prominence of the root of the nose, and occasionally cleft palate. Fifty percent of the children have congenital sensorineural hearing impairment ranging from mild to profound, and the hearing loss may be unilateral, bilateral, and/or progressive.

Below are three of the most common recessively transmitted syndromes. Both parents need to carry the gene for the traits to be expressed in offspring:

• Jervell and Lange-Nielsen syndrome. Also known as long Q-T syndrome (LQTS), this cardiovascular disorder correlates with severe to profound, symmetrical, congenital hearing loss. The child may experience fainting attacks or seizures when no blood leaves the heart. The health of the child appears normal except for two abnormalities in electrocardiogram readings. Because it is often mistaken for a seizure disorder, LQTS frequently is treated improperly. Death can result without treatment. Therefore, an electrocardiogram is an important test to perform for a child with profound, congenital sensorineural hearing loss who experiences any fainting spells or seizures. Cardiac events may be precipitated by exercise or emotion, and vestibular dysfunction often is a part of this syndrome (Siem et al., 2008).
• Pendred syndrome. This endocrine-metabolic disorder includes goiter (an enlarged thyroid gland). Moderate to profound hearing loss usually appears in the first 2 years of life. A large vestibular aqueduct often is part of this syndrome.
• Usher syndrome. This major syndrome involves both hearing and vision disorders. Vision deteriorates beginning in the early teens or twenties, narrowing the visual field until blindness occurs. This sensorineural hearing loss varies in age of onset, severity, and speed of progression, and is typically bilateral and moderate to severe in degree. There are three clinical types of Usher syndrome: Type I, Type II, and Type III, distinguished primarily by severity of hearing loss, presence or absence of vestibular function, and time of onset of vision loss (Lafferty et al., 2014).

Another syndrome that refers to children with a specific set of birth defects is CHARGE syndrome. Babies with CHARGE syndrome are often born with life-threatening birth defects, including complex heart defects and breathing problems. They usually spend many months in the hospital and undergo many surgeries and other treatments. The name originally came from the first letter of some of the most common features seen in these children, although a gene has since been identified:

• C = coloboma of the eye.
• H = heart defects.
• A = atresia of the choanae, passages that go from the back of the nose to the throat.
• R = retardation of growth and development.
• G = genital and urinary abnormalities.
• E = ear abnormalities and/or hearing loss.

The diagnosis of CHARGE is based on finding several of these and possibly other features in a child. The diagnosis should be made by a medical geneticist who has ruled out other disorders that may have similar phenotypes.

Inner Ear Dysplasias

Malformation or incomplete development of the inner ear/cochlea is called a dysplasia. Many anatomic abnormalities of the inner ear are associated with hearing loss. The three most common include the following (Djalilian, 2018; Stach & Ramachandran, 2019):

• Scheibe. Scheibe dysplasia is incomplete development, malformation, or degeneration of the membranous cochlea, and is the most common dysplasia. A computed tomography (CT) scan will not identify this dysplasia since the osseous cochlea is normal.
• Mondini. A child with Mondini dysplasia has incomplete development or malformation of the inner ear, including the osseous cochlea, and has increased susceptibility to a perilymphatic fistula (PLF), large vestibular aqueduct syndrome (LVAS), and endolymphatic hydrops (Holden & Linthicum, 2005). The hearing loss may be fluctuating or progressive, and the malformed cochlea may cause some difficulty inserting a cochlear implant. A CT scan can identify this dysplasia.
• Michel aplasia. The term aplasia means an absence of an organ or tissue. This relatively rare aplasia is the total absence of differentiated inner ear structures, leaving absolutely no residual hearing. Michel aplasia also may be described as a common cavity inner ear deformity and is the result of an insult to the cochlea early in the fourth week of embryonic development. A CT scan can identify a Michel aplasia.




Medical Aspects of Hearing Loss

Conductive Pathologies and Hearing Loss

The outer and/or middle ear is the location of damage (also called the site of lesion) that causes conductive hearing loss. The conductive mechanism transmits sound to the inner ear and does not contain irreplaceable nerve endings; thus, this part of the auditory structure can respond to treatment by medical or surgical interventions. Both medical and educational issues and treatments need to be considered for a baby or child with a conductive hearing loss; one treatment does not substitute for the other. The pathology that causes the hearing loss may affect the child’s general health, and medical treatment often restores the hearing lost as a result of disease in the outer and/or middle ear. Any potential impact on language acquisition and educational performance is also crucial to address when treating conductive hearing loss. That is, physicians are trained to deal with medical and surgical management of disease, and audiologists are skilled in the diagnosis and treatment of the resultant hearing loss, which is caused by this doorway problem (see Chapter 1). Both professionals must be involved in managing conductive hearing losses. The most common causes of conductive hearing loss in infants and toddlers are otitis media (middle ear infection), collapsed ear canals, abnormalities of the middle ear ossicles, atresia, stenosis, cerumen impaction, otitis externa, perforated tympanic membrane, objects in the ear canal, cholesteatoma, and mastoiditis.

Baby Alice was identified at birth with a moderate, bilateral sensorineural hearing loss. Subsequently, she was fitted with hearing aids that gave her brain detailed access to the complete speech spectrum. At 6 months of age, she began experiencing upper respiratory infections followed by ongoing bouts of otitis media. The added conductive hearing loss caused by the otitis media caused her hearing to intermittently progress from a moderate sensorineural hearing loss to a severe mixed hearing loss. The initial fit of her hearing aids was no longer adequate to provide access of auditory information to her brain; the hearing aids needed to be readjusted (made stronger) if her early intervention program was to be effective. At the very least, her hearing aids needed an external volume control or multiple programs to allow more or less gain (amplification) depending on the status of her middle ears. In addition, more aggressive medical management was needed.

Otitis Media (Commonly Called Middle Ear Infection)

For children, ear infections are the most common cause of conductive hearing loss and subsequent hearing problems; the leading cause of pediatric health care visits; and the most frequent reason children are prescribed antibiotics or undergo surgery (American Academy of Pediatrics, 2013; Einhorn, 2017). Approximately 90% of infants have at least one episode of middle ear disease within the first 2 years of life, and almost 1 in 3 children experience chronic or recurrent problems.


Otitis media needs to be considered from two perspectives. The first is when otitis media occurs in the absence of other disabling conditions. The second is when otitis media is added to a pre-existing sensorineural hearing loss, worsening the degree of hearing loss and the difficulty of appropriately managing the hearing loss. For children with coexisting sensorineural hearing losses, therefore, more aggressive medical management for ear disease is warranted to preclude adding to the child’s hearing problem and perhaps sabotaging intervention (Walker, Horn, & Rubinstein, 2019).

Otitis media is an inflammation of the middle ear. Otitis media with effusion (OME) is an inflammation of the middle ear that includes fluid in the normally air-filled middle ear space (Stach & Ramachandran, 2019). Although this fluid can occur with or without the presence of an active infection, it almost always causes hearing loss. Acute otitis media (AOM) indicates a middle ear infection of recent onset, characterized by symptoms that include fever, pain, and irritability that can last as long as 2 to 3 weeks. Subacute otitis media indicates a chronic ear infection that fails to clear and can last anywhere from 3 weeks to 3 months, usually with effusion (fluid). Chronic otitis media indicates otitis media of more than 3 months’ duration with or without perforation of the tympanic membrane (hole in the eardrum) and drainage. Approximately half of the episodes of OME are classified as “silent” and are undetected by parents because the child often does not appear to be sick; noninfectious fluid, one cause of hearing difficulty, gathers in the middle ear. Consequently, when a baby visits the pediatrician for a general checkup, it may come as a shock that OME is discovered nearly 50% of the time (Northern & Downs, 2014). These unrecognized bouts of OME often indicate that parents are not necessarily accurate reporters of a child’s history of middle ear dysfunction. Unfortunately, if the first bout of OME occurs in babyhood, the likelihood of recurrent and severe episodes increases (American Academy of Pediatrics, 2013), because the disease process of OME alters the structure of the middle ear lining. Recovery occurs more slowly with younger age and with each additional ear infection.

Ear Infections: Causes and Effects

Malfunctioning eustachian tubes are the cause of OME, and viral upper respiratory infections are the typical precursors of that malfunction. After the onset of a cold or other upper respiratory infection, the usual pattern is for an ear infection to occur a few days later because the eustachian tube becomes obstructed by infectious mucus. The prevalence of OME increases during the winter months (Kouwen & DeJonckere, 2007). Composed of cartilage and muscle along with a flutter valve opening, the eustachian tube is a tiny structure (1 cm long) that connects the back of the throat to the middle ear. The three main functions of the eustachian tube are equalizing air pressure differences between the middle ear and the environment, protecting the delicate middle ear from nose and throat secretions, and draining normal secretions from the middle ear cavity into the throat (Stach & Ramachandran, 2019). The most important function is pressure regulation. When the eustachian tube fails, normal atmospheric pressure cannot be maintained within the middle ear, resulting in negative pressure that can often lead to fluid buildup from the near vacuum “sucking” fluid out of the mucous lining of the middle ear cavity. Fluid can exist “silently,” resulting in decreased hearing, or it can contain infection from bacteria and/or viruses. To reiterate, conductive hearing loss typically will accompany an ear infection. It is possible for hearing loss to occur even when fluid is not present but the child has negative middle ear pressure. On average, the hearing loss is 25 dB, but it can increase up to 50 dB (Northern & Downs, 2014). Hearing loss may be more severe at some frequencies than others. Fluid and/or negative pressure in the middle ear, coupled with coexisting hearing loss, may persist from 2 weeks to 3 months following a single bout of OME (NIDCD, 2019). If and when a child is prone to otitis (having recurring ear infections), the child may experience the effects of constant middle ear fluid. Therefore, some children with OME may suffer temporary, fluctuating hearing impairment, whereas others will endure long-term hearing losses (Stach & Ramachandran, 2019).

High Incidence Populations

The peak prevalence of otitis media is between 6 and 24 months of age, and there is a second, lower peak between 4 and 5 years of age corresponding to entering school (Einhorn, 2017). One in 4 children with learning disabilities (twice the rate in the non-learning-disabled population) has recurrent OME. The following conditions and situations predispose a baby or child to otitis media (NIDCD, 2019):

• Congenital abnormalities such as cleft palate (even after repair), congenital CMV, perinatally acquired HIV infection, and other immune deficiencies.
• Down syndrome and other genetic factors.
• Specific infections, especially those of the upper respiratory tract (e.g., incompletely treated sinusitis and purulent conjunctivitis). Both bacteria and viruses are seemingly involved.
• Environmental factors such as attendance at day care centers and exposure to passive smoking.
• Allergies are a factor in 30% to 40% of recurring OME cases.
• Immune deficiency that does not permit the body to fight the resultant bacterial or viral infection.
• Children in neonatal intensive care units (NICUs) as infants have an incidence of OME of around 80%.

Even temporary hearing loss during the first year of life can cause long-term attention problems, figure-ground deficits, spatial processing disorders, and language learning problems (Graydon et al., 2017; Rance, 2014; Tomlin & Rance, 2014). According to Nozza (2006), children who experience OME during their first year may experience difficulty learning to discern individual phonemes because the presence of even a slight hearing loss necessitates a more favorable speech-to-noise ratio than exists in typical communication environments. These findings emphasize the need for early intervention and an awareness of the potential for listening difficulties even after hearing has returned to normal.

Medical Diagnosis and Treatment of OME

The use of pneumatic otoscopy — an otoscope with a bulb and tube attached that blows small puffs of air through the ear canal to cause eardrums to move — is the most reliable method to diagnose ear infections (NIDCD, 2019). Medical personnel may diagnose acute otitis media in children who present with mild bulging of the tympanic membrane (TM) and recent (less than 48 hours) onset of ear pain that includes holding, tugging, and rubbing of the ear in a nonverbal child (American Academy of Pediatrics, 2013). Compared to adult eardrums, those of infants are more horizontal and thus more difficult to see (Northern & Downs, 2014). Behavioral symptoms and other case history information also aid in diagnosis (American Academy of Pediatrics, 2013). Irritability (the most common symptom), listlessness, and distractibility are some general symptoms, as well as pulling the ears, head banging, and rolling the head from side to side.

The recommended medical treatment for OME is no treatment — watchful waiting to avoid antibiotic resistance, because in otherwise healthy children, OME will resolve itself (American Academy of Pediatrics, 2013; Finkelstein, Stille, Rifas-Shiman, & Goldmann, 2005). If a decision is made to treat with an antibacterial agent, amoxicillin is prescribed for most children. Studies of other adjunctive therapy for AOM and OME have shown that decongestants and antihistamines provide no obvious benefits. In an effort to prevent occurrences of OME, the American Academy of Pediatrics recommends that all children:

• receive the pneumococcal conjugate vaccine and an annual influenza vaccine according to the schedule of the Advisory Committee on Immunization Practices of the Centers for Disease Control and Prevention;
• receive exclusive breastfeeding for at least 6 months; and
• be in environments with no tobacco smoke exposure.

The surgical intervention, called tympanostomy, entails making an incision through the eardrum, suctioning the fluid out of the middle ear, then inserting tympanostomy tubes (also called ventilation tubes or pressure equalization [PE] tubes) through the tympanic membrane (Oomen et al., 2005; Walker, Horn, & Rubinstein, 2019). These tubes provide air to the middle ear space by functioning as a substitute for the malfunctioning eustachian tubes. The procedure is regarded as very safe, takes about 10 minutes, and requires a general anesthetic. The possibility of scarring exists with repeated tube insertions. The tubes require careful management to prevent infection, and one should keep the ear canal clean and dry. After 6 to 18 months, the tympanostomy tubes usually fall out naturally. If infections continue, the tubes may need to be replaced. Visit the following websites for additional and updated medical information about OME: NIDCD, the American Academy of Pediatrics, and MedlinePlus.

Audiologic Diagnosis and Treatment of Otitis Media

Every pediatric audiologic evaluation has three main goals: (a) to evaluate hearing sensitivity, (b) to evaluate auditory behaviors (Does the child’s response to sound correspond to her particular developmental age?), and (c) to make sure that necessary treatment is provided to allow auditory information to reach the child’s brain. Since the prerogative to diagnose middle ear disease is medical and not audiologic, the audiologic diagnosis of hearing loss caused by otitis media does not differ from the diagnosis of hearing loss from any other pathology. The audiologist deals with audiologic treatment of the hearing loss, not medical treatment of the ear disease. The audiologist does, however, make sure that appropriate medical referrals are made and that medical management is received.

Children with chronic otitis media are at risk for developing permanent damage to the cochlea due to the passage of toxins from the middle ear to the inner ear through the round window membrane. Therefore, audiologists should be aware of and test for high-frequency sensorineural hearing loss secondary to chronic otitis media (Tiido & Stiles, 2013).

Though the outcomes of continuing, persistent, and current hearing losses are more definite, the negative repercussions of early and fluctuating hearing loss stem from many factors. Some factors are irregular medical management, low socioeconomic level, large family size, multiple languages spoken to the child, general malaise caused by diseases, effectiveness of the child’s cognitive style, and reduced parental linguistic interaction due to nonresponsiveness of the child (Northern & Downs, 2014). Accurate prediction of how devastating a fluctuating hearing loss will be on the child’s later verbal and academic performance becomes difficult because of these factors. Golz et al. (2005) found that children with recurrent AOM during the critical early years are at greater risk for delayed reading than same-age peers with no history of middle ear problems. Therefore, rather than wait to see whether difficulties develop and then attempt to remediate, it seems prudent to always initiate early intervention for any hearing loss to prevent later speech, language, literacy, and academic deficits.

Following are some additional causes of conductive hearing losses in infants and children (Stach & Ramachandran, 2019; Walker, Horn, & Rubinstein, 2019).

Collapsed Ear Canals

If external earphones are used during hearing tests, the pressure caused by their placement can close or collapse the ear canal if the infant’s or child’s ear canal is very soft and small. An erroneous conductive hearing loss results from a collapsed ear canal; the hearing loss is a testing artifact, not a medical condition, and needs to be identified as such. Therefore, ear-insert earphones are recommended for all audiologic testing of infants and children (see Figure 4–3 in Chapter 4).

Ossicular Abnormalities

Ossicular abnormalities mean that the tiny middle ear bones did not develop appropriately during embryologic development. The resultant conductive hearing loss is usually treated very successfully with hearing aids rather than with surgery. The nature of conductive hearing loss is that the sound needs only to be made louder, without having to deal with the distortion caused by a sensorineural hearing loss.

Atresia and Microtia

Atresia is a type of birth defect that causes a complete closure of the ear canal. Obviously, sound cannot travel through an absent or blocked ear canal. Atresia is often accompanied by microtia, an abnormally small external ear/pinna. Through CT scans, Mayer, Brueckmann, Siegert, Witt, and Weerda (1997) identified a variety of external, middle, and, less frequently, inner ear changes in connection with microtia. Because cases of atresia feature an unknown position of the facial nerve, fear of facial paralysis typically precludes surgical opening of the ear canal until the child becomes an adult and an informed decision of risk can be made. Another potential problem with atresiaplasty is that the ear canal tends to become smaller over time, reducing auditory brain access, and usually requires lifelong medical cleaning (Dodson, 2015). Bone conduction amplification, including the bone anchored hearing aid implant (BAI), is typically used. Chapter 5 discusses amplification possibilities.

Stenotic Ear Canal

The stenotic ear canal is abnormally small and narrow. Hearing loss can result from small amounts of cerumen or earwax plugging the small ear canal, acting like an earplug. Comfortable fitting of earmolds for hearing aids and physical examination of the eardrum are very difficult. This seemingly minor problem can thus become a source of constant discomfort for the child and a cause of fluctuating or even continual conductive hearing loss.

Cerumen, or Earwax, Impaction

Earwax is a normal secretion of the body that functions to protect and cleanse the delicate ear canal (Kramer & Brown, 2019). Its color and consistency may vary and look like blood clots, black tar, yellow crayon, white rocks, or amber liquid. The appearance may be dark due to the oxidation that comes with age, or gold if the cerumen is relatively new. Normally, the cells lining the ear canal move the earwax down the ear canal to the opening in the pinna, where it dries and flakes away.

Using Q-tips and other foreign objects to remove wax is potentially dangerous because the objects interfere with the ear canal’s natural cleansing process by forcing earwax back down the ear canal.

Wax production varies from person to person, and a large producer may make enough to block the ear canal. This earwax needs to be removed, often by a physician, although the audiology scope of practice does include routine cerumen removal. Genetic makeup, emotional states, and medications can influence the amount of secretion due to cerumen glands’ similarity to sweat glands. High-frequency hearing loss may result from a partial cerumen impaction, where the mass of the earwax narrows the ear canal diameter but does not totally block the opening. Complete cerumen impaction or blockage may result in a 30- to 50-dB HL hearing loss across all frequencies (Northern & Downs, 2014). Approximately 10% of children and about 30% of persons with developmental disabilities have cerumen problems that cause hearing loss until the cerumen is removed. Impacted cerumen may also result in tinnitus (ringing in the ears), dizziness, itching, earache, otitis externa (infection in the ear canal), cardiac depression, and chronic cough. Hearing aids can interfere with the natural cleansing process of the ear canal, and earmolds may even push the wax deeper into the ear canal. Because cerumen can block the ear canal or clog the earmold, infants and toddlers who wear hearing aids should be monitored carefully for excess cerumen.

Otitis Externa

An infection of the ear canal is called otitis externa; one example is swimmer’s ear. Bacterial or fungal infections may flourish due to collected moisture in the ear canal and result in pain, swelling, and drainage. Excessive moisture is a problem because it displaces cerumen and increases the pH of the ear canal, providing a favorable medium for bacterial growth. The infection requires medical treatment, typically with topical medication applied directly to the ear canal. Earmolds can cause discomfort and impede healing of otitis externa, making the infection a big problem for wearers of hearing aids. The child misses valuable educational time when amplification is discontinued to allow air into the ear canal until the infection is cleared. Prevention is the key: keep the ear canals clean and dry, and wash the earmolds regularly in mild, soapy detergent.

Because the pinna and ear canal have very thin skin with little subcutaneous tissue, skin lesions can be quite painful; most are easily treated by a physician (Garvey, Garvey, & Hendi, 2008).

Perforated Tympanic Membrane

A hole in the eardrum, often caused by ear infection or trauma, is called a tympanic membrane perforation. The eardrum may rupture from a sharp blow to the ear, and the ossicles behind it may also receive damage. Natural healing may occur, or surgical repair may be necessary.

The perforation causes varying degrees of hearing loss depending on its size and location (Stach & Ramachandran, 2019).

Object in the Ear Canal

Small children have been known to push objects into their ear canals, such as hairpins, cereal, marbles, rocks, earrings, small hearing aid batteries, bugs, beads, and other food and toys. The objects can cause discomfort, infection, and/or hearing loss. Usually, a physician removes them.

Cholesteatoma

A nonmalignant tumor that grows from a perforated eardrum is called a cholesteatoma. The tumor usually results from chronic ear infections, though a child may also be born with it. Cholesteatomas often take the form of a cyst or pouch that sheds layers of old skin that build up inside the middle ear. Over time, it can increase in size and destroy the tiny ossicles of the middle ear. A cholesteatoma poses a threat to health and can cause hearing loss, dizziness, and facial muscle paralysis; it must be surgically removed. Desire to prevent such a tumor should reinforce diligent medical management of ear infections. Children with cholesteatomas typically require temporary amplification while they undergo a series of middle ear surgeries.

Mastoiditis

An infection of the mastoid process, the bony projection behind the pinna, is called mastoiditis. Ear infection spreading from the middle ear space to the mastoid was common before the discovery of antibiotics. Mastoidectomy, a surgery that literally cleared out the entire middle ear space and caused significant conductive hearing loss, was often a necessary treatment. Mastoiditis is easily avoided with good medical management of otitis media.

Sensorineural Pathologies and Hearing Loss

The inner ear/cochlea is the part of the ear that houses thousands of tiny sensory receptors called hair cells, which currently cannot be repaired once they are damaged or destroyed. Thus, injury or pathology in this area causes a sensorineural hearing loss that is permanent. To mitigate the effects of sensorineural hearing loss, hearing aids, cochlear implants, or assistive listening devices are employed. Amplification does not correct damage to the inner ear; rather, sound is amplified and shaped to make auditory information audible to the brain. The only purpose of amplification technologies is to get auditory information through the damaged doorway to the brain. Table 3–1 lists the main causes of acquired sensorineural hearing losses in children.

Table 3–1. Main Causes of Acquired Sensorineural Hearing Loss

Medications: aminoglycosides, antineoplastic drugs, loop diuretics, vancomycin, salicylates, antimalarials, macrolides.

Infectious: CMV, toxoplasmosis, syphilis, bacterial meningitis, measles, mumps, herpes simplex encephalitis, HIV.

Neonatal: anoxia and hypoxia, prematurity, low birth weight, sepsis, mechanical ventilation, hyperbilirubinemia, ECMO, CDH, extended NICU stay, TORCH, ototoxic medications.

Trauma: barotrauma, temporal bone fracture, labyrinthine fistula, iatrogenic/surgery.

Noise exposure.

Miscellaneous: tumor (e.g., vestibular schwannoma), sudden hearing loss, autoimmune, heavy metals.

Note. CMV: cytomegalovirus; HIV: human immunodeficiency virus; ECMO: extracorporeal membrane oxygenation; NICU: neonatal intensive care unit; CDH: congenital diaphragmatic hernia; TORCH stands for a group of infectious diseases: toxoplasmosis, other agents, rubella, cytomegalovirus, herpes simplex. Source: Reprinted with permission from Anne, S., Lieu, J. E. C., & Kenna, M. A. (2018). Pediatric sensorineural hearing loss: Clinical diagnosis and management (p. 148). San Diego, CA: Plural Publishing, Inc.




Genetic, endogenous, and syndromic hearing losses were discussed earlier in this chapter.

Noise-Induced Hearing Loss

Noise-induced hearing loss, also called acoustic trauma, is permanent inner ear damage caused by prolonged exposure to loud sounds. It can occur in infants and children (Figure 3–2). In addition to loud music, parents need to be aware that many toys are louder than a chainsaw when held close to a baby’s ear, and can potentially cause or worsen hearing loss (Hanin, 2018). Parents should listen to the toy before they buy it, put masking or packing tape over the speaker of the toy to reduce its volume if necessary, or remove the batteries if the volume cannot be controlled. Some studies have found that antioxidants have been effective in reducing hearing loss caused by noise, but other studies have found no benefit (Kramer et al., 2006). Research exploring antioxidant use in different noise conditions is ongoing and shows promise in preventing noise-induced hearing loss and tinnitus. For more information about noise-induced hearing loss in children and about school hearing conservation programs, refer to the “Wise Ears!” website of the National Institute on Deafness and Other Communication Disorders. Noise exposure at young ages can negatively affect the auditory cortex. Moreover, diet, stress, and genetics all influence the child’s susceptibility to drug- or noise-induced hearing loss (Campbell, 2009).

Figure 3–2. Noise can be a problem, especially for children. (Dreamstime.com)

Viral and Bacterial Infections

Some infections are transferred from the mother through the placenta to the fetus. Whether or not the mother feels ill herself, the infection can affect the fetus in various ways. Infections may cause mild to profound hearing losses and minimal to serious additional abnormalities. Blood tests can usually identify exposure to infections such as cytomegalovirus, rubella, herpes, toxoplasmosis, and syphilis. Infections that occur in the child after birth also might cause hearing loss, with bacterial meningitis being the worst. Refer to the Centers for Disease Control and Prevention (CDC) website for more information about bacterial and viral infections.

Bacterial Meningitis

Meningitis is primarily a disease of the central nervous system, causing inflammation of the coverings (meninges) of the brain and its fluids. Historically, bacterial meningitis has been the most common postnatal cause of hearing loss in infants and children (Stach & Ramachandran, 2019). Symptoms include high fever, headache, and stiff neck in anyone over the age of 2 years. These symptoms can develop over several hours, or they may take 1 to 2 days. Other symptoms may include nausea, vomiting, discomfort looking into bright lights, confusion, and sleepiness. In newborns and small infants, the classic symptoms of fever, headache, and neck stiffness may be absent or difficult to detect. The infant might appear lethargic, inactive or irritable, and vomit or feed poorly. As the disease progresses, patients of any age may have seizures. A definitive diagnosis should be made quickly and requires a lumbar puncture (spinal tap). The hearing loss that results from bacterial meningitis is caused by inflammation of the membranous structures of the cochlea from the spreading of the infection from the meninges (Anne, Lieu, & Kenna, 2018). After the child has recovered from the acute symptoms of the meningitis, complete hearing loss may occur if the cochlea ossifies over time, as revealed by a CT scan. Therefore, if listening and talking are desired outcomes for the baby or child, a cochlear implant must be inserted quickly, before cochlear ossification occurs (Alexiades & Hoffman, 2014). Chapter 5 discusses cochlear implants.

Postmeningitic children have varying amounts, symmetries, and configurations of hearing loss. Commonly, patients possess fluctuating but progressive hearing loss (Northern & Downs, 2014). All children who experience bacterial meningitis, especially that caused by Haemophilus influenzae, require careful and ongoing audiologic monitoring and management. There are several vaccines against meningitis that are safe and highly effective. They are recommended for all children, and especially for children who are going to receive a cochlear implant (Cohen, Roland, & Marrinan, 2004; Walker, Horn, & Rubinstein, 2019). Postimplant meningitis has been found to be related to patient, surgical, and (implant) device factors, but by adhering to sound surgical principles, vaccinating patients, and eliminating potentially traumatic electrode arrays, the incidence of meningitis has been significantly diminished in this population (Cohen et al., 2004).

Cytomegalovirus (CMV)

Congenital CMV is the primary cause of viral-type sensorineural hearing losses in the United States (Pallarito, 2013). Hearing losses range from stable to fluctuating and progressive, and bilateral to unilateral, and vary in severity from mild to profound (Fowler & Ross, 2017). Consistent audiologic evaluations and flexible amplification fittings are a must for a child suspected of having CMV. When severe to profound hearing loss exists, cochlear implants can provide useful auditory input even though the child with CMV may have additional medical and cognitive deficits (Ramirez & Nikolopoulos, 2004). Along with herpes simplex virus (cold sores), varicella-zoster virus (chicken pox and shingles), and Epstein-Barr virus (mononucleosis), CMV is a member of the herpes family and thus can cause latent infections by persisting in the body indefinitely (Hall, 2014). CMV infects almost everybody at some point without negative consequences. However, 1 in 10 cases of acute CMV during pregnancy are estimated to result in congenital CMV disease.

CMV accounts for about 15% to 20% of newborn sensorineural hearing loss (Anne, Lieu, & Kenna, 2018). Therefore, infants who do not pass their hearing screening should be screened for CMV immediately.

Only 10% of infants with CMV show symptoms at birth, such as central nervous system disabilities, hearing loss, visual disorders, developmental delay, psychomotor retardation, enlarged liver and spleen, decreased platelet count in the blood, inflammation of the retina, cerebral palsy, and language or learning disorders. CMV appears in the saliva, urine, blood, tears, stool, cervical secretions, and semen of an infected person for many months or years with or without obvious symptoms. Though an infant appears healthy, he or she can still secrete CMV. Collection and testing for CMV in saliva and/or urine within the first 3 weeks of life is the only way to distinguish congenital CMV from postnatal CMV infection. When working in clinical and educational settings, infection control procedures such as regular hand washing are very important. Antiviral drugs, such as Ganciclovir or Valganciclovir, may help newborns born with symptomatic congenital CMV (Fowler & Ross, 2017). These antiviral treatments may prevent or lessen the severity of hearing loss and may improve head and brain growth. However, these antiviral drugs can have serious side effects and require careful medical monitoring. See the National CMV Foundation website for more information (https://www.nationalcmv.org/).

Anoxia

Anoxia, an absence of oxygen, can occur before, during, or shortly after birth. Respiratory problems are often associated with low birth weight (less than 1,500 grams). Congenital hearing loss due to anoxia has a unique audiometric configuration, with essentially normal low-frequency hearing but more severe hearing loss in the high frequencies. The infant or toddler can thus hear sounds (mostly vowels), but individual speech sounds like consonants remain difficult or impossible to distinguish. Because the child responds to low-frequency sounds, the hearing loss may be overlooked. This severe to profound high-frequency sensorineural hearing loss may be difficult to fit with hearing aids. That is, if the high-frequency hearing loss is beyond the reach of conventional amplification, cochlear implantation ought to be considered to allow the baby or child’s brain access to the critical consonant sounds.

Any child with a history of anoxia, prematurity, or low birth weight needs repeated hearing tests focusing on high-frequency hearing. Be sure to start audiologic testing and conditioning the child in the low frequencies (e.g., 250 Hz) where hearing sensitivity is likely to be better.

Ototoxicity

High doses of certain drugs can cause ototoxicity, which is a poisoning of the delicate inner ear. These drugs can also cause vestibulotoxicity, poisoning of the vestibular/balance system, with symptoms of imbalance, unsteadiness, and severe fatigue. Problem drugs include mycin drugs (kanamycin, gentamicin, tobramycin, amikacin, and neomycin), some diuretics, quinine, cisplatin (a chemotherapeutic agent effective in the treatment of cancer), and aspirin (Walker, Horn, & Rubinstein, 2019) (Table 3–2). Mycin drugs in conjunction with loop diuretics appear especially problematic. The term high dose is relative; a normal dose for a 130-pound woman is very high for a 1-pound fetus. A baby or child can receive ototoxic drugs directly, usually as a life-saving measure, or the fetus can receive them in utero through medications the mother takes. Interestingly, mutations in a mitochondrial gene can cause hearing loss that begins after antibiotic therapy in an infant (Campbell, 2009; Lafferty et al., 2014). Ototoxic hearing losses are usually high frequency and permanent, but some fluctuate, such as those caused by aspirin.

Table 3–2.  Ototoxic Drugs and Main Manifestations

Drugs             Deafness   Vestibular or        Tinnitus   Comments
                             Vestibular-Like
Aminoglycosides   P          X                    X          Neomycin and gentamicin most cochleotoxic
Antimalarial      R          X                    X          Cinchonism when overdose
Platinum drugs    P          X                    X          Cisplatin is more frequent
Salicylate        R          X                    X          Worse in overdose
Loop diuretics    R          X                    X          Furosemide is more frequent
Macrolides        R          X                    X          Not well established
Minocycline       R          X                    X          Teenage population
Vancomycin        P                                          Controversial

Note.  P: permanent; R: reversible; X: symptom present. Source:  Reprinted with permission from Anne, S., Lieu, J. E. C., & Kenna, M. A. (2018). Pediatric sensorineural hearing loss: Clinical diagnosis and management (p. 149). San Diego, CA: Plural Publishing, Inc.




Most important, delayed-onset hearing loss frequently occurs with ototoxicity, so monitoring of hearing sensitivity must continue even after the medications are discontinued. Specifically, children treated for malignancies with cisplatin require long-term surveillance to avoid missing hearing deficits (Anne, Lieu, & Kenna, 2018; Bertolini et al., 2004).

Large Vestibular Aqueduct Syndrome (LVAS)

A vestibular aqueduct diameter larger than 1.5 to 2 mm (identified on CT scan) usually identifies this congenital malformation of the temporal bone, which predisposes the affected person to early-onset, high-frequency, fluctuating, and/or progressive sensorineural hearing loss as well as vestibular dysfunction (Wu, Chen, Chen, & Hsu, 2005). Arjmand and Webber (2004) found that almost 50% of the children they studied who had unilateral LVAS also had bilateral, asymmetrical sensorineural hearing loss. LVAS also is called “enlarged vestibular aqueduct” (EVA) (Dabrowski, Myers, & Danilova, 2009; Walker, Horn, & Rubinstein, 2019). Clark and Roeser (2005) noted that a conductive component may co-occur in some people, perhaps due to decreased mobility of the stapes or incomplete bone formation around the inner ear. Note that a medical misdiagnosis of otitis media with effusion can easily delay the eventual finding of LVAS. Several different theories have been proposed to explain the progressive hearing loss associated with LVAS; most are related to architectural changes in the cochlea (Clark & Roeser, 2005). Reportedly, events such as minor head trauma can trigger decreases in hearing. Positive medical diagnosis requires a CT scan, and because CT scans are not routine, LVAS may be more common than initially expected. Children with Pendred syndrome often have LVAS as the cause of their hearing loss (Anne, Lieu, & Kenna, 2018).

LVAS is the most common radiographically identified cause of congenital sensorineural hearing loss, and it is estimated that approximately 15% of pediatric sensorineural hearing loss is caused by this malformation (Walker, Horn, & Rubinstein, 2019).

Audiologists should be aware that children or adolescents who have a sensorineural hearing loss of unknown cause, with or without accompanying dizziness, may have LVAS, especially if their hearing loss is progressive. A decrease in sensitivity at 500 Hz is a prognostic factor for progression of the hearing loss (Lai & Shiao, 2004). Hearing loss requires close monitoring. If hearing loss progresses to the severe to profound range, a cochlear implant should be considered sooner rather than later to advance listening, talking, and literacy skills (Clark & Roeser, 2005).

Perilymphatic Fistula (PLF)

A PLF is an abnormal communication between the fluid-filled perilymphatic space of the inner ear and the air-filled middle ear cavity, usually through the round or oval windows, resulting in sensorineural hearing loss and/or vestibular symptoms. The existence of PLF was proposed more than a century ago, yet it remains a topic of controversy, especially regarding the existence of “spontaneous” etiologies. The diagnosis is made by clinical history and otoneurological tests, with a definitive diagnosis being made at surgery, although the use of surgical findings as a criterion standard for PLF has been questioned.


In the absence of prior surgery or a definite traumatic event, it may be difficult to distinguish a PLF from Ménière’s disease. Fitzgerald (2001) wrote that the issue is not whether the patient has PLF or Ménière’s disease, but whether the Ménière’s disease is primary, or the PLF is primary with secondary Ménière’s disease. Both conditions have various presenting symptoms of fluctuating hearing loss, often spinning vertigo, feelings of fullness, and tinnitus. PLF can have many causes, such as a congenital weakness or defect in the bony partition between the perilymphatic compartment of the inner ear and the middle ear, or a traumatic event such as a head injury or sudden barometric change. Sudden and dramatic hearing loss, or an increase in the severity of an existing hearing loss, can result from a fistula (Alexiades & Hoffman, 2014). To preserve hearing, diagnosis and surgical repair of a PLF must be virtually immediate. Weber, Bluestone, and Perez (2003) found that surgical repair, including packing the oval and round windows even if a leak is not visualized, may prevent further deterioration of hearing, may alleviate symptoms, and does not result in significant risk for postoperative hearing loss or additional complaints of vertigo.

Acoustic Neuroma

An acoustic neuroma, or eighth nerve tumor, is a nonmalignant tumor that grows on the main nerve trunk carrying auditory and balance information from the inner ear to the brain. If not diagnosed and surgically removed, a neuroma can grow to press on the brainstem, causing hearing loss, and could ultimately be fatal.

The hearing loss is usually unilateral, high frequency, and progressive, and may have accompanying tinnitus, facial nerve involvement, and dizziness. The tumor is usually slow growing, diagnosed best with magnetic resonance imaging (MRI), and surgically removed if it becomes large. Sometimes, surgery is not recommended for a small, slow-growing tumor. Monitoring is done using periodic MRIs.

Rh Incompatibility

Rh incompatibility occurs when antibodies from the mother’s Rh-negative blood destroy the Rh-positive blood cells of the fetus. Hearing loss, jaundice, and possible brain damage can result. Thanks to medical advances, Rh incompatibility, like rubella, is no longer a primary cause of sensorineural hearing loss in children.

Tinnitus

Tinnitus, often associated with sensorineural hearing loss, involves hearing ringing, buzzing, roaring, or chirping sounds not related to an external auditory event. Tinnitus can occur in children. The sounds range from being hardly perceptible to very distracting, and from infrequent to constant. The onset of tinnitus or an increase in its severity has been associated with progressive sensorineural hearing loss. In addition, tinnitus may co-occur with hyperacusis, a condition characterized by an abnormally strong reaction or reduced tolerance to typical environmental sounds (Sun, 2009). Baguley and McFerran (2002) have summarized the following points about childhood tinnitus:

• Between 6% and 13% of children with normal hearing experience tinnitus from time to time.




• Tinnitus may be caused by ear infections, noise exposure, and head injuries in children, just like in adults.
• Among children with hearing difficulties, tinnitus affects as many as 24% to 29% to some degree.
• There appears to be greater prevalence and severity of tinnitus in children with moderate to severe sensorineural hearing losses than in children with profound sensorineural hearing losses.
• An acquired (later-onset) sensorineural hearing loss is more likely to be associated with tinnitus than a congenital hearing loss.
• A child with sensorineural hearing loss rarely complains of tinnitus if not asked about it.

Mixed, Progressive, Functional, and Central Hearing Losses

Mixed Hearing Loss

A mixed hearing loss occurs when two or more ear pathologies, occurring at the same time, cause both conductive and sensorineural hearing losses. For example, a child with a congenital, genetic sensorineural hearing loss could also have ear infections. Another child might have stenotic ear canals blocked by wax, a cholesteatoma (tumor in the middle ear), and a sensorineural hearing loss from anoxia (lack of oxygen at birth). The hearing loss becomes the sum of each individual component, and all pathologies and sites of lesion need to be identified and managed. Unfortunately, conductive hearing losses often obscure coexisting sensorineural hearing losses (Clark & Roeser, 2005; Hall, 2014). For example, once otitis media is diagnosed, the ear infection is often considered the cause of the child’s entire hearing problem. A careful audiologic assessment should identify all locations in the auditory system where pathologies occur, and appropriate medical management (for the conductive component) and audiologic management (for the hearing loss) must follow (Northern & Downs, 2014).

Walker, Horn, and Rubinstein (2019) emphasize that a coexisting conductive hearing loss can make the evaluation and management of sensorineural hearing loss more complex, and sensorineural hearing loss can complicate the management of the conductive component.
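As a rough numerical illustration of how the components of a mixed loss add up, the sketch below assumes hypothetical air-conduction and bone-conduction thresholds at a single frequency: the bone-conduction threshold reflects the sensorineural component, and the air–bone gap estimates the conductive overlay. It is a teaching sketch only; real interpretation requires the full test battery.

```python
# Minimal sketch: decomposing a mixed loss at one frequency into its
# sensorineural and conductive components. Values are hypothetical.

def describe_mixed_loss(air_db_hl, bone_db_hl):
    """Air- and bone-conduction thresholds in dB HL at the same frequency."""
    air_bone_gap = air_db_hl - bone_db_hl          # conductive component
    return {
        "sensorineural_component_db": bone_db_hl,  # bone conduction bypasses the middle ear
        "conductive_component_db": air_bone_gap,
        "total_air_conduction_db": air_db_hl,
    }

# Example: a 40 dB HL sensorineural loss plus a 25 dB conductive overlay
print(describe_mixed_loss(air_db_hl=65, bone_db_hl=40))
```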

Progressive Hearing Losses

A hearing loss that gets worse over time is said to be progressive. The way to identify whether a hearing loss is stable or progressive is to compare all previous audiograms (graphs of hearing sensitivity), not just the most recent two. Progressive sensorineural hearing losses may result from large vestibular aqueduct, PLF, CMV, meningitis, congenital syphilis, ototoxicity, endolymphatic hydrops, autoimmune disorders, delayed hereditary hearing losses, and unknown causes (Northern & Downs, 2014; Stach & Ramachandran, 2019). The child needs an immediate medical referral to an otologist whenever the hearing loss appears to be progressive. A CT scan often aids medical diagnosis. In some cases, medical treatment may halt or slow progressive hearing losses if the cause is identified.


Children with progressive hearing losses may be candidates for cochlear implants sooner rather than later; each case needs to be individually evaluated (Sweetow, Rosbe, Philliposian, & Miller, 2005).

It is likely that about 50% of childhood hearing losses will be progressive; therefore, careful monitoring of hearing following early hearing loss identification is essential to ensure optimal amplification and intervention (Barreira-Nielsen et al., 2016).
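Because progression is judged against all previous audiograms rather than only the most recent one, the longitudinal comparison can be expressed very simply. The sketch below is illustrative only; the 10 dB worsening criterion is an assumption chosen for the example, not a clinical standard.

```python
# Illustrative sketch: flag possible progression by comparing the newest
# audiogram against every earlier audiogram on file, not just the last one.
# The 10 dB worsening criterion is an assumption for this example.

def shows_progression(history, shift_db=10):
    """history is a chronological list of audiograms ({frequency Hz: dB HL})."""
    if len(history) < 2:
        return False
    newest = history[-1]
    for earlier in history[:-1]:
        for freq, old_threshold in earlier.items():
            if freq in newest and newest[freq] - old_threshold >= shift_db:
                return True
    return False

visits = [
    {500: 30, 1000: 35, 2000: 40},
    {500: 30, 1000: 35, 2000: 45},
    {500: 35, 1000: 45, 2000: 55},  # worsened by 10 dB or more at 1000 and 2000 Hz
]
print(shows_progression(visits))  # True
```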

Functional Hearing Loss

If a child claims to have a nonexistent hearing loss or exaggerates an existing one, the child is said to be experiencing a functional auditory disorder. The child may be using the fabricated hearing loss as a call for help, a bid for attention, or an excuse for poor performance at home or at school. The true extent of the hearing problem needs to be carefully identified and managed by an audiologist along with an appropriate supportive team of professionals and family members.

Central Auditory Processing Disorder (CAPD)

Central auditory processing disorder (CAPD) impairs the understanding of the meaning of incoming sounds, as opposed to a peripheral hearing loss, which impairs auditory sensation. As early as 1954, Myklebust identified a child experiencing a central auditory problem as one who could “hear” but lacked the ability to structure the auditory world and select immediately relevant sounds. Some causes of CAPD include congenital brain damage, head trauma, and stroke, which often manifest as problems understanding speech.

When a child fails to respond to sounds, an audiologist must determine whether the cause is (a) a function of not detecting the sound (peripheral hearing loss), (b) not being able to interpret the meaning of the sound that is “heard” (CAPD), or (c) a combination of these problems because all aspects of the auditory system are interdependent and influence each other. Please refer to Chermak and Musiek (2014a, 2014b) for detailed information about CAPD.
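The three-way decision just described can be sketched as a simple triage, purely for illustration; real differential diagnosis rests on the full behavioral and electrophysiologic battery, and the 20 dB HL cutoff and category labels below are assumptions for the example.

```python
# Illustrative triage of the (a)/(b)/(c) decision described above.
# The 20 dB HL "normal peripheral hearing" cutoff and the wording of the
# categories are assumptions for this sketch, not diagnostic criteria.

def triage(pure_tone_average_db_hl, poor_speech_processing):
    peripheral_loss = pure_tone_average_db_hl > 20
    if peripheral_loss and poor_speech_processing:
        return "possible combination: peripheral loss plus processing difficulty"
    if peripheral_loss:
        return "peripheral hearing loss: sound is not being detected"
    if poor_speech_processing:
        return "possible CAPD: sound is detected but not interpreted (rule out ANSD)"
    return "no concern indicated by these two measures alone"

print(triage(pure_tone_average_db_hl=10, poor_speech_processing=True))
```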

What Are the Behaviors of Children With CAPD?

Children who have auditory processing disorders may behave as if they have a hearing loss.

Central auditory processing is fundamental to listening, and listening is a crucial process underlying communication and learning (Abramson et al., 2018).

Following are some examples of behaviors displayed by children with CAPD. Note that not all children present with the same behaviors, or the same severity of behaviors, and it is important to rule out an ANSD when evaluating a child for CAPD:

• Normal audiogram but difficulty on tests of speech perception.
• Child does not appear to be listening.
• Frequent requests for repetition.
• Parent, clinician, or teacher reports that the child does not seem to hear consistently.
• Making mistakes in sound vocalization.




• Appearing to become confused when several people are talking or in the presence of noise.
• Appearing to have more problems understanding speech directed to one ear than the other.
• Weak short-term auditory memory.
• Difficulty localizing the sound source.
• Poor expressive and receptive language abilities.
• Poor reading, writing, and spelling skills.
• Poor phonics and speech sound discrimination.
• Difficulty taking notes.
• Difficulty learning foreign languages.

Note that all of these behaviors and characteristics can also be displayed by a child with a peripheral hearing loss, especially if that child did not have early and effective amplification and auditory intervention to strengthen and program the auditory neural centers. A child whose brain does not receive clear and complete sounds will certainly have limited information to process — muddled information into the brain, muddled information out (Doidge, 2007). Therefore, it is extremely difficult to separate a peripheral hearing loss from a (central/neurological) auditory processing disorder, which is why normal hearing needs to be confirmed before testing for CAPD.

Ahmmed et al. (2014) performed a factor analysis of outcomes from 110 children with suspected auditory processing disorders and identified a general auditory processing factor in addition to two other cognitive factors: working memory and executive attention, and processing speed and alerting attention. The authors further reported that children with suspected auditory processing difficulties need multiprofessional assessment to rule out comorbid conditions because impairment localized entirely in the auditory domain was rare.

Regarding treatment for auditory processing problems, several studies conducted to date support the effectiveness of auditory/listening training and personal RM systems (e.g., the Roger Focus) for children with CAPD in obtaining short-term and long-term objectives, including measurable changes in brain function (Abramson et al., 2018; Hall, 2011; Hornickel, Zecker, Bradlow, & Kraus, 2012). Smart, Purdy, and Kelly (2018) found that personal RM systems produced immediate speech perception benefits and enhancement of speech-evoked cortical responses. Teacher and parent questionnaires also revealed positive outcomes.

Studies strongly suggest that personal RM systems, designed to improve the signal-to-noise ratio, could be considered a viable treatment option for children with CAPD, with or without additional accommodations.

Synergistic and Multifactorial Effects

Two or more conditions occurring together may increase or magnify the effect of each condition. For example, a low-birth-weight baby with some difficulty breathing may be placed in an incubator with safe noise levels, but with some noise and impulse sounds. The baby also may receive medication, such as a mycin drug for possible infection, but in small, safe doses not associated with ototoxicity.


Though each condition by itself has minimal to no risk for hearing loss, together they can work synergistically to cause hearing impairment. Added to the previous scenario is the current knowledge that genes may predispose a baby to develop a hearing loss when triggered by an environmental event; this is called a multifactorial situation (Lafferty et al., 2014). Predicting or proving synergistic effects is difficult. It is important to recognize when taking a case history that risk factors can act together and cause a result that seems unlikely when each factor is considered individually.

Auditory Neuropathy Spectrum Disorder (ANSD)

A neuropathy is a disease of the nervous system, specifically, damage to the peripheral nerves. Auditory neuropathy, then, is a complicated disease or disorder of the auditory neural system, including the inner hair cells, eighth cranial nerve (auditory nerve), spiral ganglion, brainstem, and ribbon synapses, in which cochlear amplification (outer hair cell) function is normal but afferent neural conduction in the auditory pathway is disordered (Gardner-Berry, Hou, & Ching, 2019; Kraus, 2014; Neault, 2014). Berlin et al. (2010) state that the condition is better named auditory dyssynchrony because it is a disorder of the timing of the auditory nerve. Currently, the term ANSD is used due to the broad and variable nature of presenting symptoms (Lindsey, 2014). There are many possible causes of ANSD, including genetic ones (the most common genetic cause is mutation of the gene encoding otoferlin), but in newborns the most common is a mild hyperbilirubinemia (12 to 16 mg/dL). ANSD is indicated when the auditory brainstem response (ABR) looks absent or abnormal but the otoacoustic emissions (sounds that come from the ear) are normal if not robust, and acoustic reflexes are absent.

Auditory neuropathy was first identified in the 1980s, when advanced testing procedures became available to measure the action of the hair cells in the cochlea. About 40% of auditory neuropathies are caused by a genetic disorder, and the majority of children with ANSD have spent time in the NICU and have medically related risk factors. The hearing loss associated with ANSD may be present at birth or have a later onset, and it may progress, improve, or fluctuate. The ability to recognize words generally is poorer than would be expected from the child’s pure-tone thresholds. The child typically has normal otoacoustic emissions (OAEs), but ABRs are absent or abnormal and acoustic reflexes are elevated or absent (Berlin et al., 2010; Neault, 2014). Regarding management of ANSD, current evidence from the Australian Hearing Services Longitudinal Outcomes of Children with Hearing Loss (LOCHI) study indicates that positive advantages for language development are associated with early intervention, including hearing aids or cochlear implants, for both children with sensorineural hearing loss and children with ANSD (Ching, Dillon, Leigh, & Cupples, 2018; Dillon, Cowan, & Ching, 2013; Gardner-Berry, Purdy, Ching, & Dillon, 2015). To facilitate an early diagnosis and to guide management over the first year of life, cortical auditory evoked potential (CAEP) assessment is being used as an alternate measurement of auditory function, together with the Parent’s Evaluation of Aural/Oral Performance of Children (PEACH) questionnaire (Ching & Hill, 2007).
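The test pattern described above (OAEs present, ABR absent or grossly abnormal, acoustic reflexes absent or elevated) can be expressed as a simple flag. This is a sketch of the pattern only, with boolean inputs standing in for full test interpretation.

```python
# Minimal sketch of the ANSD-consistent test pattern described above:
# outer hair cell responses (OAEs) present, but ABR absent/abnormal and
# acoustic reflexes absent or elevated. Inputs are simplified booleans.

def ansd_pattern(oae_present, abr_normal, reflexes_present):
    return oae_present and not abr_normal and not reflexes_present

print(ansd_pattern(oae_present=True, abr_normal=False, reflexes_present=False))   # True
print(ansd_pattern(oae_present=False, abr_normal=False, reflexes_present=False))  # False: pattern suggests cochlear loss instead
```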

Summary of ANSD

The following are the facts as we know them so far; information is changing with ongoing research (Neault, 2014; Smith, Wolfe, & Neumann, 2017).




Possible mechanisms that produce ANSD are as follows:

• Inner hair cell damage, loss, or misfiring.
• Pre- or postsynapse disorder.
• Auditory nerve disorder.
• Myelin or axonal disorder.
• Neuropathy of the spiral ganglion.

Characteristics of ANSD are variable across patients and over time:

• Approximately 7% to 10% of children with moderate or greater hearing losses also have ANSD, and 50% have spent time in the NICU.
• Fifty percent of children with ANSD have at least a moderate hearing loss.
• ANSD is characterized by absence or gross abnormality of short-latency evoked potentials (CAP and ABR).
• There is an elevation or absence of acoustic reflexes (an important indicator).
• OAEs may be present, showing normal outer hair cell function.
• There is poorer speech discrimination than expected, which is inconsistent with the degree of hearing loss.
• There is difficulty understanding speech in noise.
• Hearing loss may appear to fluctuate from day to day or from hour to hour because these may be “pseudo thresholds” that do not represent the degree of the child’s hearing problem.
• Additional peripheral neuropathies (in some children) may affect coordination for activities such as walking, talking, and writing.
• Screening with OAEs alone will miss ANSD; ABR is needed. It is recommended to screen NICU babies with ABR since they are at greater risk for ANSD due to the higher incidence of (mild) hyperbilirubinemia (12 to 16 mg/dL); OAEs will not identify potential ANSD.
• As many as 50% of infants with ANSD may experience improvement in their hearing thresholds over time (12 to 18 months), especially if their ANSD is associated with hyperbilirubinemia/jaundice.

What can be done about ANSD?

• Early identification is critical.
• Provide family-focused intervention.
• The use of CAEPs together with functional auditory assessments, such as the PEACH, enables early decisions for fitting hearing aids and cochlear implants.
• Carefully and frequently monitor functional hearing and emerging language and speech perception abilities.
• Regarding communication mode, evaluate the child’s progress over time and be flexible. Emphasize auditory enrichment.
• Cochlear implantation (CI) of children with ANSD has resulted in synchronous neural firing and similar benefits to those received by age-matched children with sensorineural hearing loss who have received a CI. Cardon and Sharma (2013) found that for children with ANSD in their study, cochlear implantation resulted in measurable cortical auditory development. They further identified that children fitted with cochlear implants younger than age 2 years were more likely to show age-appropriate cortical responses within 6 months after implantation, suggesting a possible sensitive period for cortical auditory development in ANSD.


Please refer to the website of the National Institute on Deafness and Other Communication Disorders (NIDCD) for further, up-to-date information.

Vestibular Issues

Any endogenous or exogenous agent that causes hearing loss can also cause vestibular dysfunction. Therefore, all newborns and infants with identified congenital or acquired sensorineural hearing loss should be considered at risk for comorbid vestibular dysfunction; many experience undiagnosed problems (Valente, 2011). It is likely that 90% of children with congenital sensorineural hearing loss have vestibular abnormalities (Janky, 2013). Do not assume children with “one good ear” are immune from bilateral vestibular labyrinthine dysfunction. An infant’s balance function may be evaluated as early as 3 months of age, and evaluation protocols include motor milestones (Gans, 2019). In fact, the majority of equilibrium problems that occur in infants and children manifest as delayed gross motor and balance problems, not as vertigo or dizziness. Low muscle tone is another important aspect of an infant/child vestibular evaluation because it is closely associated with the integrity of the vestibular system. Delayed maturational motor milestones may be the earliest signs of a vestibular dysfunction. Vestibular system dysfunction can present as hypersensitivity or hyposensitivity conditions. Hypersensitivity is intolerance to movement and gravitational insecurity; the child may be worried about falling and cannot tolerate spinning.

On the other hand, a child with hyposensitivity shows increased tolerance for movement; the child may appear to be a thrill seeker, jumping and falling a great deal. Overall, children with vestibular dysfunction have difficulty with movement and balance; they may exhibit low muscle tone, poor bilateral skills, poor auditory–language processing, poor visual–spatial processing, poor motor planning, and emotional insecurity — not feeling “grounded” (Gans, 2019). Because equilibrium is a complex integration of the vestibular system, vision, somatosensory proprioception, and the central nervous system, if a child has abnormal or no vestibular function in either ear, intervention needs to focus on compensatory strategies. Such strategies may include physical therapy to strengthen core muscles and lower extremities (for balance); occupational therapy to enhance eye coordination; and, above all, attention to safety, because a child without vestibular function cannot function in the dark, since vision is a primary compensatory sense (Handelsman, 2018).

Janky et al. (2018) recommend that all children whose hearing loss is greater than 65 dB HL and who sit later than 7.5 months or walk later than 14.5 months be referred for a vestibular evaluation.
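The referral criteria quoted above translate directly into a simple rule. The sketch below merely restates them for illustration, with ages in months and hearing loss in dB HL; the example values are hypothetical.

```python
# Sketch of the referral rule quoted above: hearing loss greater than 65 dB HL
# AND sitting later than 7.5 months or walking later than 14.5 months.
# Example values are hypothetical.

def refer_for_vestibular_evaluation(hearing_loss_db_hl, sat_at_months, walked_at_months):
    if hearing_loss_db_hl <= 65:
        return False
    return sat_at_months > 7.5 or walked_at_months > 14.5

print(refer_for_vestibular_evaluation(80, sat_at_months=9.0, walked_at_months=13.0))  # True
```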

Summary

The purpose of this chapter has been to describe and discuss the types and degrees of hearing loss along with various auditory pathologies.




If listening is to be a vital part of intervention, we need first to understand and manage all hearing problems that prevent full acoustic access of auditory information to the brain. Hearing loss happens for a reason: there is disease or damage somewhere in the auditory pathway that causes the problem. All aspects of the hearing loss need to be diagnosed and understood before effective treatments can be initiated.


4 Diagnosing Hearing Loss

Key Points Presented in the Chapter

• Infants of any age can have their hearing screened, assessed, and managed with technology and auditory intervention protocols.
• By providing data about the value of early identification and intervention, newborn hearing screening programs have reinforced everything that we have believed to be true about hearing loss.
• Early hearing detection and intervention (EHDI) programs have made possible new paradigms in service delivery and expectations.
• Today, degree of hearing loss is no longer a factor in determining the auditory functional outcome for infants and children who are young enough to have maximum neural plasticity; these children’s auditory brain centers can be accessed, stimulated, and developed with auditory information through the early use of amplification (e.g., hearing aids or cochlear implants).
• The primary job of a pediatric audiologist is to diagnose a baby’s or child’s peripheral hearing loss and to evaluate the maturity of his or her auditory behaviors. These tasks typically require several test sessions.
• Making a differential diagnosis, which is the separation of hearing loss from other problems that display symptoms similar to hearing loss, requires collaboration between the audiologist and other professionals such as listening and spoken language (LSL) specialists, teachers, early interventionists, physicians, and psychologists.
• An accurate audiologic diagnosis requires a test battery approach (using several tests) that consists of both behavioral and objective (also called electrophysiologic) tests appropriately selected on the basis of a child’s developmental age.
• Parents should be included in all hearing test sessions (especially behavioral tests) and participate in the diagnosis. Inclusion promotes parents’ acceptance of hearing loss and helps make them partners in family-focused intervention procedures from the very beginning.
• Even though objective tests such as otoacoustic emissions (OAEs), auditory brainstem response (ABR), and immittance procedures provide important data about physiological function, these tests do not provide information about how the child actually will process speech; behavioral speech perception tests are necessary for this information.


Introduction

A hearing loss must be identified before intervention can take place. Hearing screening programs indicate that a hearing loss (“doorway” problem) is likely present, and an audiologic diagnostic assessment specifies the nature, type, and degree of the hearing loss. The purpose of this chapter is to discuss issues and procedures for audiologic screening and diagnosis.

Newborn Hearing Screening and EHDI Programs

Universal newborn hearing screening has reinforced everything that we had believed to be true about hearing loss. We believed that the earlier we diagnosed hearing loss the better, and we believed that early intervention was important, but we had no real data to substantiate those assumptions — now we do (Ching, Zhang, & Hou, 2019; Geers et al., 2011; Halpin, Smith, Widen, & Chertoff, 2010; Nicholas & Geers, 2006; Robbins, Green, & Waltzman, 2004; Sharma et al., 2004; Sharma, Dorman, & Kral, 2005; Sininger et al., 2009; Stredler-Brown, 2014; Yoshinaga-Itano, 2004). The key issue concerns neuroplasticity. The earlier the brain is stimulated with sound/auditory information, the more complete will be the auditory brain development.

The National Institute on Deafness and Other Communication Disorders estimates that as many as 12,000 new babies with hearing loss are identified every year. In addition, estimates are that in the same length of time, another 4,000 to 6,000 infants and young children between birth and 3 years of age who passed the newborn screening test acquired late-onset hearing loss. The total population of new babies and toddlers identified each year with hearing loss, therefore, is about 16,000 to 18,000; approximately 85% have hearing loss of 70 dB HL or better. Of the approximately 15% who have 71 to 110 dB hearing loss, about half receive cochlear implants. Finally, based on one state’s 2013 data (North Carolina), of the families who chose a communication option, 92% chose spoken language for their children.
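The annual estimates above can be restated with simple arithmetic; the figures below are not new data, only the chapter’s numbers combined for illustration.

```python
# Arithmetic restating the estimates above (not new data).
congenital = 12_000                              # identified at birth each year
late_onset_low, late_onset_high = 4_000, 6_000   # acquired by age 3

total_low = congenital + late_onset_low          # 16,000
total_high = congenital + late_onset_high        # 18,000

# Roughly 15% of this total has hearing loss of 71-110 dB HL,
# and about half of that group receives cochlear implants.
midpoint = (total_low + total_high) / 2          # ~17,000
severe_profound = 0.15 * midpoint                # ~2,550
implanted = severe_profound / 2                  # ~1,275
print(total_low, total_high, round(severe_profound), round(implanted))
```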



Statewide programs to screen every newborn were prompted by the research of Marion Downs in the 1960s (Northern & Downs, 2014). Then, in March of 1993, the National Institutes of Health (NIH) Consensus Development Conference recommended that all babies be screened for hearing loss before being discharged from the hospital. More recently, the Joint Committee on Infant Hearing’s Year 2007 position statement, reinforced by its 2013 supplement, continued to emphasize that all infants should be screened for hearing loss prior to leaving the hospital (JCIH, 2007, 2013). Further, the degree, type, and configuration of the hearing loss should be identified by 3 months of age, and intervention begun by 6 months of age. All states require establishment of state-based systems, with various levels of funding, for early identification of newborns with hearing loss (White & Munoz, 2019). We now screen 95% to 98% of all infants. Unfortunately, even though states are becoming more efficient with their screening paradigms, many locations continue to lag far behind in follow-up diagnostic evaluations, fitting of amplification, and provision and documentation of early intervention services (Krishnan, 2009; Krishnan & Van Hyfte, 2014; Tharpe, 2009; Windmill & Windmill, 2006). The greatest challenge EHDI programs now face is loss to follow-up (Thomson & Yoshinaga-Itano, 2018).

An effective and complete EHDI program should have three basic components: newborn hearing screening, audiologic diagnosis, and early intervention. Each of these three components should contain culturally competent family support, a medical home plan, data management, legislative mandates, and program evaluation tools. See the websites of the Centers for Disease Control and Prevention (CDC) and http://www.infanthearing.org for more and updated information about EHDI programs.

Current hearing screening tools include automated ABR testing and OAE testing (Figure 4–1). Both assessment tools are objective, safe, and noninvasive, and have a relatively high degree of accuracy. A description of these tests is provided later in this chapter.
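The screening, diagnosis, and intervention timeline just described (screening before hospital discharge, diagnosis by 3 months of age, intervention by 6 months) can be checked against an individual child’s record. The sketch below is a simplified illustration of that check; the field names and example record are hypothetical.

```python
# Simplified sketch of checking one record against the timeline described above:
# screened as a newborn, diagnosed by 3 months, intervention started by 6 months.
# Field names and the example record are hypothetical.

def ehdi_benchmarks(record):
    return {
        "screened_as_newborn": record["screened_age_months"] < 1,
        "diagnosed_by_3_months": record["diagnosed_age_months"] <= 3,
        "intervention_by_6_months": record["intervention_age_months"] <= 6,
    }

example_record = {"screened_age_months": 0.1,
                  "diagnosed_age_months": 2.5,
                  "intervention_age_months": 5.0}
print(ehdi_benchmarks(example_record))
```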


Below are some facts about infant hearing:

• Hearing loss is the highest-incidence birth defect/difference.
• The incidence is 1 to 3 per 1,000 births; each well baby has a 1 in 600 chance of having a hearing loss.
• There is a 1 in 20 chance of having a hearing loss if a baby fails even one newborn hearing screening test.
• Ninety-five percent of babies with hearing loss are born to hearing and speaking families.
• ABR testing should be used for neonatal intensive care nursery (NICU) screening to identify auditory neuropathy.
• OAE testing can be used for well babies, who have a much lower risk of auditory neuropathy.
• Instead of spending 30 minutes with the baby during an initial screening attempt, it is much more efficient to make a quick first attempt, followed by a second or even third attempt a few hours later. These second efforts before discharge can substantially reduce the number of babies who need to come in for outpatient screens or diagnostic procedures.
• Only 10% of babies with hearing loss have profound hearing losses, and only 10% of these babies are born to parents who are profoundly deaf.

Following are some interesting discoveries about infant hearing:

• Unilateral hearing loss is not benign; a unilateral hearing loss can cause later language, learning, psychosocial, and attention problems. See Chapter 3 for more information about unilateral hearing losses.


Figure 4–1.  A. Automated ABR (AABR) being performed on a newborn. In this technique, the electrodes are attached to the forehead and nape of the neck. Clicks are presented to the ear through earphones coupled to the ear. B. Newborn hearing screening using an otoacoustic emission screener. The infant needs to remain quiet or asleep during the test to make the testing run faster. Source: Reprinted with permission from Kramer, S., and Brown, D. K. (2019). Audiology: Science to practice (p. 310). San Diego, CA: Plural Publishing, Inc.


• A mild hearing loss is not mild relative to impact. See Chapter 3 for a discussion of the potential implications of an untreated mild hearing loss in infants and children.
• Screening at 35 dB misses a 30 dB hearing loss; normal hearing for children is 15 dB HL, but we do not screen infant hearing at this level. Consequently, mild hearing losses are usually not identified at birth.
• All babies, at risk or not at risk, can have later-onset hearing losses.
• Early rescreening is better than later rescreening; it is easier to screen a younger baby. So, do not wait to confirm a possible hearing loss.

Key risk factors for later-onset hearing loss include a family history of childhood hearing loss, cytomegalovirus (CMV) infection, unilateral hearing loss (UHL), and extracorporeal membrane oxygenation (ECMO) (Glantz, 2018). Extracorporeal life support (ECLS) is used when a baby or child has a condition that prevents the lungs from working properly; that is, transferring oxygen into the blood and removing carbon dioxide. Many illnesses may result in lung failure.

EHDI programs have made possible the following new paradigms in service delivery and expectations (Fitzpatrick, Angus, Durieux-Smith, Graham, & Coyle, 2008; Sininger et al., 2009; White & Munoz, 2019):

• We can now implement a developmental rather than a remedial model of intervention.
• As a result of newborn hearing screening and early intervention, 90% of children with hearing loss can be going into general education rather than into special education classrooms by kindergarten.
• The emphasis is on family and child involvement with the interventionist as coach, rather than a teacher–child dyad. Early intervention can and should focus on adult education.
• Degree of hearing loss is no longer a factor in outcome; there is no degree of hearing loss that precludes auditory brain access.
• The audiologist is a key player on the intervention team and is the professional who makes auditory brain access possible through close communication with parents and the LSL specialist about the child’s auditory perceptual abilities.
• An accessible auditory world (rather than a visual world) is now possible for children with all degrees of hearing loss — if we do what it takes.

To conclude, current research confirms several facts for families who desire an LSL outcome for their infant or toddler who experiences profound deafness. Families need to know that very early use of hearing aids or insertion of a cochlear implant for severe to profound degrees of hearing loss (to access, stimulate, and grow auditory centers of the brain with auditory information during times of critical neuroplasticity), followed by thoughtful, intense, and ongoing auditory skill development activities (to take advantage of developmental synchrony and cumulative practice), offers a high probability of reaching their desired outcome.


Thanks to EHDI programs, we live in a time of exciting new possibilities for infants and children with hearing loss (White & Munoz, 2019).

Test Equipment and Test Environment

To ensure accurate results, hearing testing must be performed in a sound-isolated booth. Therefore, an audiologic assessment cannot be conducted in a classroom, therapy room, or physician’s office unless a sound-isolated booth is used, because any noise can interfere with a child’s ability to detect very soft sounds. An audiometer is a specialized piece of test equipment used to present controlled and calibrated sound stimuli to the child (Figure 4–2).

If the child will tolerate them, earphones also are used to test each ear separately. Although supra-aural earphones that fit over the ear were used in the past, ear-insert earphones are the standard today (Figure 4–3). Insert earphones are desirable because they eliminate the possibility of collapsed ear canals caused by the weight and placement of supra-aural earphones. Additionally, insert earphones are more comfortable; they reduce the need for masking and improve the stability of sound relayed to the ear. Conditions preventing the use of insert earphones include atresia (an absent or closed ear canal), stenosis (an abnormally small ear canal), and discharge from the ear canal as a result of an infected outer or middle ear. Sound-field results are obtained when earphones are not or cannot be used due to the child’s young age or resistance to having something in the ears.

Figure 4–2.  An audiometer is used to present controlled and calibrated sound stimuli to the child in a sound-isolated booth.




Figure 4–3.  Supra-aural earphones that fit over the ear were used in the past; however, ear-insert earphones are the standard today even when obtaining ear-specific responses from babies.

Test stimuli are presented through the loudspeakers located in the sound room (Figure 4–4 shows outside [4–4A] and inside [4–4B] views of sound rooms). Testing in a sound field can measure only the response of the better ear; a difference in sensitivity between the ears cannot be identified unless earphones (supra-aural or insert) are used to test each ear separately.

Audiologic Diagnostic Assessment of Infants and Children

Guidelines for testing the hearing of infants and children have been developed by the Alexander Graham Bell Association for the Deaf and Hard of Hearing (AG Bell, 2014) and the American Academy of Audiology (AAA, 2012).

All infants who have failed a newborn hearing screening should be seen for a complete audiologic diagnostic evaluation as soon as possible, ideally within the first month of life.

According to the AG Bell guidelines (2014), after hearing loss is diagnosed, routine evaluation should occur at three-month intervals through age 3 years. Assessment at six-month intervals from age 4 years is appropriate if progress is satisfactory and if there are no concerns about changes in hearing. Immediate audiological evaluation should be performed if parent or caretaker concern is expressed or if behavioral observation by parent, therapist, or teacher suggests a change in hearing or device function.


Figure 4–4.  A. A single-room audiometric test environment. The patient sits inside the room and the tester and equipment are outside the room. The tester’s equipment is connected to the room through a connector (jack) panel, and there is a window to maintain visual contact. In many clinics, the tester is also in another similarly treated test room that is connected to the patient’s test booth, called a double-room test environment. Source: Reprinted with permission from Kramer, S., and Brown, D. K. (2019). Audiology: Science to practice (p. 118). San Diego, CA: Plural Publishing, Inc. B. Sound-field testing is performed when test stimuli are presented through the loudspeakers located in the sound room; earphones are not used.




It is recommended that audiologic assessments of infants and young children include a thorough case history, otoscopy (performed by looking in the child’s ear with an otoscope — see Figure 4–5), and both behavioral and physiologic measures. A description of specific pediatric behavioral and electrophysiologic tests is presented in a following section. The recommended test protocol will vary according to the chronological and developmental age of the child.

Test Protocols

There are four main purposes for a pediatric audiologic assessment: (a) to obtain a measure of peripheral hearing sensitivity that rules out or confirms hearing loss as a cause of the baby’s or child’s problem; (b) to confirm the status of the baby’s middle ear on the day of the test; (c) to obtain ear-specific and frequency-specific information by testing the pure tones that compose speech sounds (Jerger, 2018); and (d) to observe and interpret the baby’s auditory behaviors. To this end, the use of a test battery approach is standard (Hall, 2018). Several appropriate behavioral and electrophysiologic tests are utilized to determine the extent of a child’s auditory function (Baldwin, Gajewski, & Widen, 2010; Hall, 2014). A test battery approach furnishes detailed information, avoids drawing conclusions from a single test, allows for the identification of multiple pathologies, and provides a comprehensive foundation for observing a child’s auditory behaviors (Madell, Flexer, Wolfe, & Schafer, 2019a).


Figure 4–5.  Otoscopes used in visualizing the ear canal and tympanic membrane. The otoscope on the left is a hand-held unit with a rechargeable battery pack in the handle and uses disposable specula to place in the ear canal. On the right, is a video otoscope with a disposable speculum. There is a small camera in the unit that displays the image on a monitor and can record the video feed for off-line viewing or printing. Source: Reprinted with permission from Kramer, S., and Brown, D. K. (2019). Audiology: Science to practice (p. 212). San Diego, CA: Plural Publishing, Inc.


Tables 4–1, 4–2, and 4–3 at the end of this chapter summarize audiologic diagnostic information and pediatric audiology test procedures. AAA (2012) and AG Bell (2014) recommend the following test protocols according to the chronological/developmental age of the child:

1. Birth Through 4 Months of Age (age is adjusted for prematurity).  When children are very young or experiencing severe developmental disabilities, testing should rely primarily on physiologic measures of auditory function, such as ABR using frequency-specific stimuli to estimate the audiogram. In addition, OAEs and acoustic immittance measures should be used to supplement ABR results. Case history, parent/caregiver report, behavioral observation of the infant’s responses to a variety of sounds, developmental screening, and functional auditory assessments (see Chapter 5) should also be performed. Refer to Madell et al. (2019b) for detailed information about how to behaviorally assess an infant using a sucking paradigm.

2. Five Through 24 Months of Age.  At these ages, behavioral assessments should be performed first, with VRA (visual reinforcement audiometry) being the behavioral test of choice. OAEs and ABRs should be assessed only when behavioral audiometric tests are unreliable, ear-specific thresholds cannot be obtained, behavioral results are inconclusive, or auditory neuropathy is suspected.

Developmental screening and functional auditory assessments also should be performed (see Chapter 5).

3. Twenty-five Through 60 Months of Age.  Behavioral tests (VRA or conditioned play audiometry [CPA]), acoustic immittance tests, and OAE tests are usually sufficient. Speech perception tests also should be performed, in combination with developmental screening and functional auditory assessments.
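The age-based protocols above can be summarized in a small lookup, purely as an illustration; in practice the choice depends on developmental rather than strictly chronological age and on the individual child, so the bands below are a simplification of the guidance, not a replacement for it.

```python
# Illustrative summary of the age-based test protocols above. Chronological
# age is adjusted for prematurity; developmental age can override these bands.

def recommended_protocol(age_months):
    if age_months <= 4:
        return ("Physiologic measures first: frequency-specific ABR, with OAEs and "
                "acoustic immittance; plus case history, behavioral observation, "
                "developmental screening, and functional auditory assessment")
    if age_months <= 24:
        return ("Behavioral assessment first (VRA); OAE/ABR only if behavioral "
                "results are unreliable or inconclusive, ear-specific thresholds "
                "cannot be obtained, or auditory neuropathy is suspected")
    if age_months <= 60:
        return ("Behavioral tests (VRA or CPA), acoustic immittance, and OAEs; "
                "add speech perception testing, developmental screening, and "
                "functional auditory assessment")
    raise ValueError("outside the 0-60 month range covered by these protocols")

print(recommended_protocol(18))
```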

Observe and ask parents and teachers how the child responds to sounds outside of the audiologic test suite. Our audiological results need to be reconciled with the baby’s real-world behaviors. For example, if the child is audiometrically hard of hearing, some responses to unamplified sound ought to have been observed at home. Conversely, if the child is audiometrically deaf, there would be no brain access to unamplified sound. The baby’s behaviors should reflect that fact.

The expected outcomes of audiologic protocols include:

• identification of hearing loss, auditory neuropathy, or a potential central auditory processing disorder;
• quantification of hearing status based on electrophysiologic tests;
• development of a comprehensive report of history, physical and audiologic findings, and recommendations for treatment and management;
• implementation of a plan for monitoring, surveillance, and habilitation of the hearing loss; and




• provision of family-centered counseling and education.

The next section contains a brief summary of pediatric test procedures, both behavioral and objective. For detailed discussions of administration, methodology, and interpretation of pediatric assessments, refer to Baldwin, Gajewski, and Widen (2010); Madell, Flexer, Wolfe, and Schafer (2019a, 2019b); Northern and Downs (2014); and Roeser and Downs (2004).

Begin the audiological diagnostic assessment by using appropriate speech material rather than pure tones. If the child stops responding to stimuli during the test session, return to a stimulus type and intensity where you are certain that the child responded. If the child responds again, we can infer that his or her later lack of response occurred because the stimuli were not heard. However, if the child does not respond to a stimulus that he or she previously gave a response to, we can infer that the child may have habituated (become bored with the procedure).

Use warble tones instead of pure tones to obtain frequency-specific data because they are easier to hear and more interesting to babies and children. Narrowband noise, the Ling 6-7 Sound Test, or both can be used to obtain some frequency specificity if the child does not respond well to warble tones. Begin testing at 250 Hz (low frequencies) rather than at 1000 Hz when testing with warble tones or narrowband noise, especially if the child has a history of anoxia.

Pediatric Behavioral Tests: BOA, VRA, CPA, Speech Perception Testing

Test procedures in which a response to sound is elicited and measured, and the function of the auditory system is subsequently inferred, are known as behavioral audiometric tests. The auditory system is not measured directly. For example, a child’s localization behavior to a sound source might be observed, with the subsequent inference that the child “heard” the sound. The actual function of the organ of Corti was not measured. Selecting the behavioral test to be used depends upon the ability of the infant or toddler to perform the required tasks. Pediatric behavioral tests include, in order of task complexity from least to most complicated: behavioral observation audiometry (BOA), visual reinforcement audiometry (VRA), and conditioned play audiometry (CPA). Other behavioral tests, such as tangible reinforcement operant conditioning audiometry (TROCA) and puppet in window illuminating (PIWI), are not commonly used (Madell et al., 2019a; Northern & Downs, 2014). It is advisable to use the most complicated developmental pediatric test that a child is capable of performing because of the potential to obtain more precise results through strong stimulus–response conditioning bonds. Additionally, the tester must be mindful of appropriate positioning of the baby or child during the test session to allow the child to display a variety of response behaviors. Appropriate positioning is especially important for a child who experiences motoric disabilities. Parents should be included in all test sessions. Pediatric behavioral tests also require the help of a test assistant.


Before entering the sound room, begin the test session with noisemakers or the Ling 6-7 Sound Test. Observe the infant or child’s auditory behaviors, first unaided, and then with hearing aids. The baby’s or child’s responses will give an indication of the level of behavioral test to be used (i.e., can the child localize a sound?) and the loudness level of where to begin testing (i.e., does the child respond to softer noisemakers, or must the noisemakers be very loud and very close to the child?).

The test assistant is a trained observer who is in the test room with the baby or child and the parent. The test assistant’s job is to observe and interpret auditory behaviors; to keep the baby or young child quiet, alert, and focused; and to reinforce responses when appropriate (Madell et al., 2019b).

Behavioral Observation Audiometry (BOA)

The simplest procedure in terms of the child’s task requirements is BOA, in which a selected test stimulus is presented through loudspeakers or through insert earphones in a sound-isolated room, and a baby’s unconditioned response behaviors are observed. BOA is used from birth until a child can be conditioned to use a lighted toy reinforcer (VRA) at about 5 months of age. A baby’s responses to BOA are not obtained at threshold — the point where the individual can just barely hear a sound 50% of the time — but rather at what is called the minimum response level (MRL). The MRL is the minimum or softest level at which a baby displays identifiable behavioral changes to sound.

In general, these behavioral changes are observed at levels that may be louder than the baby’s actual thresholds. There are several concerns, as discussed below, in interpreting BOA results. These include the infant’s state, the methodology of stimulus presentation, stimulus parameters, the specific responses displayed by the baby, and control of observer bias. When BOA is used as the test procedure, each of these areas should be addressed in the audiometric report. Refer to Madell et al. (2019a) for information about the use of a sucking paradigm to obtain BOA results.

An infant’s or toddler’s behavioral state is a key variable that influences the responses they will make to sound (Bench, Hoffman, & Wilson, 1974; Northern & Downs, 2014). Behavioral state is the physical level of awareness of the child. Examples of these varying states include deeply sleeping, lightly sleeping, quietly awake, actively awake, fussy, crying, and so forth. If a baby is deeply asleep, he or she can respond only to very loud sounds, followed by reflexive behaviors such as a startle or limb movement. This of course does not mean that he or she cannot respond to softer signals. If a baby is quietly awake and alert, then there exists the likelihood of obtaining meaningful, attentive-type behavioral responses to softer sounds. There is a general belief that neonates are reflexive responders. However, if one reads carefully, reports typically note that very intense stimuli were used while the infant was fast asleep. If the same neonate had been awake and alert and had essentially normal hearing, it is likely that one would notice some attentive-type behaviors to softer sounds. Thus, BOA responses become a function of an infant’s or child’s state, which should be described in the audiological report.




of an infant’s behavioral changes to sound (Bench et al., 1974). LIV means that the auditory behaviors displayed in response to a sound are in the opposite direction of the infant’s prestimulus behavior. For example, if the baby was active before the presentation of the stimulus, his or her poststimulus behavior would be in the opposite direction — the baby would quiet or stop sucking if the sound was heard. If the baby did in fact hear the sound and was very quiet before its presentation, then afterward, his or her activity (e.g., sucking) would increase.

Attentive and Reflexive Auditory Behaviors

Provided below is a listing of behaviors that an infant or child could exhibit in response to speech and/or environmental sounds. Take note of the specific speech and environmental sounds that appear to elicit responses. Additionally, observe the sounds that the child can detect without amplification, compared to sounds that are detected only when the child wears amplification or a cochlear implant. Consider the influence that background noise, distance from the source, competing sensory stimuli, ability to move, behavioral state, and attention all can have on a child’s auditory behaviors. Because distance, environment, and infant positioning are controlled during audiologic tests, the audiologist may elicit auditory behaviors that are not apparent to parents or teachers in other learning situations.

The following attentive-type behaviors often suggest learning:

• When sleeping, the child awakens to sudden noises.
• Mother’s voice soothes the child (quieting).
• The child’s eyes widen, expressing a “What is it?” type of response.
• The infant’s or child’s eyes are observed to search for the sound, but the head is held still.
• The eyes appear to localize directly to the sound.
• The child exhibits head searching, sometimes a rudimentary head turn toward the sound.
• The child localizes directly when sound is presented to the side, below, and/or above.
• The infant or child responds to the sound by smiling.

The following reflexive behaviors, typically elicited at the level of the lower brainstem, do not imply learning: the infant or child startles when sound is presented in a quiet room; the child’s arms or legs move when sound is presented; the eyes blink when sound is presented in a quiet environment; the baby starts or stops sucking when sound is presented; the child’s breathing rate changes when sound is presented; facial twitches, frowns, or grimaces are noted when sound is presented; or the child starts or stops crying to loud sounds. Notice that arousal and quieting behaviors may be interpreted as either reflexive or attentive behaviors. Although an infant or child initially may respond to sound with reflexive behaviors, he or she deserves an opportunity to learn to respond more meaningfully. Current auditory function does not always predict future auditory potential. Even though conditioned audiometric procedures are preferred, conditioning is not always a possibility.


BOA can provide a wide range of information about a baby’s auditory ability, information that can be used in planning effective habilitative strategies. MRLs can be obtained to different types of stimuli, including the Ling 6-7 Sound Test and narrowband signals, to provide some frequency-specific information. Behavioral responses, therefore, can serve as a gauge for estimating auditory function.

Visual Reinforcement Audiometry (VRA)

VRA (Liden & Kankkunen, 1969) and conditioned orientation reflex (COR) audiometry (Suzuki & Ogiba, 1961) are the two visual reinforcement tasks most commonly used. A child’s head-turning response is rewarded with a visual display, usually a lighted, moving mechanical toy such as a dancing bear or a running dog, or a video of a cartoon. The more complex the reinforcer, the more appealing it likely will be for the child. Typically, a baby is conditioned to localize to a toy every time a sound is presented (Figure 4–6).

the baby is now older, in addition to being awake and alert, auditory responses are elicited to softer sounds than those obtainable with BOA — usually within standard limits for babies with normal hearing. VRA procedures are used until about 2.5 to 3.5 years of age, when play audiometry can be introduced. A test assistant is essential for the effective implementation of VRA.

There are substantial issues with government and insurance reimbursement for most audiologic tests. For example, the average Medicaid reimbursement for VRA, a test that requires about $60,000 worth of equipment, a highly trained pediatric audiologist, a test assistant, and 1 to 2 hours of test time per session, is only $39.69, and many insurance plans pay less and will allow only one hearing test per year.

Figure 4–6.  In VRA, a baby is conditioned to localize to a toy or video every time a sound is presented in the sound room. Source: Reprinted with permission from Wolfe, J. (2020). Cochlear implants: Audiologic management and considerations for implantable hearing devices (p. 159). San Diego, CA: Plural Publishing, Inc.




Conditioned Play Audiometry

Conditioned play audiometry, also known as the "listen and drop" task (Lowell, Rushford, Hoversten, & Stoner, 1956), is described as the most sophisticated pediatric task. CPA demands the active cooperation of the child in dropping a block in a bucket or putting a ring on a peg (Figure 4–7). Have a variety of fun and interesting toys available (Neumann, Smith, & Wolfe, 2016). Some children might enjoy dropping plastic spiders, driving cars in a tube, arranging fairy wings, or putting stickers on a page. Most children are near 2 or 2.5 years of age before they are able to participate in play audiometry consistently, although some 18-month-olds are quite proficient. Conditioning may not be possible for some children with disabilities, or for children who have not had auditory brain access, until an even older age. Threshold information can be obtained when the child is mature enough to employ active listening strategies and to maintain attention to task. The hand-raising or button-pushing response used by older children and adults is a more abstract task. Some 3-year-olds and almost every child 5 years and older can raise his or her hand. However, in a relatively long test session in which each ear is tested twice (by air and bone conduction), operant and social reinforcement can help maintain the child's attention throughout the procedure. A test assistant is also extremely helpful in varying operant reinforcers and in keeping the child conditioned and focused.

Figure 4–7. CPA, the "listen and drop task," demands the active cooperation of the young child in, for example, dropping a plastic toy in a bottle of water or putting a peg in a hole.

Speech Perception Testing

The goal of pediatric audiologic practice is to assist infants and children in being able to hear typical and soft conversational-level speech in both quiet and noisy situations. Speech perception measures must be obtained regularly to demonstrate hearing aid and cochlear implant benefit; demonstrate improvement and progress over time; identify equipment and child development problems; identify the child's speech perception errors; and assist in selecting the most appropriate educational environment (Madell & Schafer, 2019). Speech perception testing should be performed when hearing loss is first identified, at re-evaluations, and when selecting or altering technology. Test materials should be linguistically appropriate and should be presented at levels that can assess daily functioning (AG Bell, 2014). Functional presentation levels include the average level for conversational speech (50 dB HL) and also soft conversational levels (35 dB HL), in quiet, and in noise (+5 S/N).
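These functional presentation levels can be kept as a small test matrix so that results are recorded against each condition. The sketch below is only an organizational convenience; the labels and data structure are illustrative assumptions, not a published protocol.

```python
# Functional presentation levels described above; labels and structure are
# illustrative assumptions, not a published protocol.
SPEECH_PERCEPTION_CONDITIONS = [
    {"label": "average conversational speech, quiet", "level_dB_HL": 50, "snr_dB": None},
    {"label": "soft conversational speech, quiet",    "level_dB_HL": 35, "snr_dB": None},
    {"label": "average conversational speech, noise", "level_dB_HL": 50, "snr_dB": 5},
]

for condition in SPEECH_PERCEPTION_CONDITIONS:
    noise = "quiet" if condition["snr_dB"] is None else f'+{condition["snr_dB"]} dB S/N'
    print(f'{condition["label"]}: {condition["level_dB_HL"]} dB HL ({noise})')
```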

Following are some factors that can affect a child's speech perception capabilities:

• Degree of hearing loss.
• Length of time that the child has had the hearing loss.
• Cause of the hearing loss.
• The child's language level.
• The child's experience with his technology including length of use.
• The appropriateness of technology fit and settings.
• Experience of the audiology or implant team.
• The nature and quality of the auditory intervention and education the child has been receiving.

There are many closed- and open-set speech perception tests that can be used. For very young children, informal speech perception measures can be obtained by using body parts, familiar toys or objects, and Mr. Potato Head. For detailed information about speech perception tests, please see Estabrooks, MacIver-Lux, and Rhoades (2016), Madell and Schafer (2019), Northern and Downs (2014), and Roeser and Downs (2004). An additional reason for audiologists to perform speech tests is to document acoustic accessibility issues to assist the child in qualifying for services under the new regulations for Part B of the Individuals with Disabilities Education Act (IDEA). That is, if we perform early intervention correctly, the child may enter school without language and preacademic deficiencies that can be quantified using existing measures. Keep in mind that the special education system operates using a failure model (Ackerhalt & Wright, 2003). The child must demonstrate deficits to qualify for services. If we cannot qualify the child for services under Part B due to measurable academic deficits, then we need to show eligibility under Section 504 of the Rehabilitation Act by focusing on acoustic accessibility (Rehabilitation Act, 1973; Rehabilitation Act Amendment, 1992). Unfortunately, Section 504 may be a poor substitute for obtaining needed services because that law is not funded by state educational agencies; local districts must assume the cost, and parent and child rights are restricted relative to the rights that are guaranteed under Part B of IDEA (Johnson & Seaton, 2012).

Children speak what they hear, and how they hear. Therefore, while performing speech perception tests, one should also listen carefully to the child's speech or prespeech utterances. Based on how the child sounds, an audiologist ought to be able to plot the child's audiogram from knowledge of speech acoustics.

Electrophysiologic Tests:  OAE, ABR/ASSR, and Immittance

Outer hair cell function, neural responses in the lower brainstem, and eardrum and middle ear mobility are examined through objective/electrophysiologic tests that provide some direct measurement of the auditory system. It is important to note that objective tests do not evaluate hearing, cognitive function, or the central processing of sound. Therefore, even though they provide important data about physiological function, these tests do not provide information about how the child will actually process speech; behavioral speech perception tests are necessary for this information. Three main objective/electrophysiologic tests used with infants and young children are OAE, ABR and auditory steady-state response (ASSR), and immittance — middle ear assessment. Please see Hall (2014), Hall and Swanepoel (2010), and Madell et al. (2019b) for more information about objective assessments.

Otoacoustic Emissions

OAEs evaluate outer hair cell integrity. The cochlea (inner ear) actively produces energy during the hearing process (Abdula, 2018). These emissions are a normal by-product of micromechanical actions of the cochlear amplifier, which is thought to be located in the outer hair cells. OAEs, therefore, assist in the differentiation between sensory and neural components of a sensorineural hearing loss. Because they are noninvasive and easy to set up, and can be obtained quickly, OAEs have been used extensively in screening for the very beginning stages of hearing loss, especially in newborns and in other difficult-to-test patients (White & Munoz, 2019). OAEs are separated into two general categories: spontaneous and evoked. Spontaneous emissions are detected in approximately half of the persons with normal hearing sensitivity, and they occur in the absence of deliberate acoustic stimulation of the ear. Conversely, evoked OAEs necessitate acoustic stimulation of the ear. If a child has normal OAEs but lacks ABRs and acoustic reflexes, auditory neuropathy may exist (Smith & Wolfe, 2019). A small probe is inserted in the ear canal, sounds are presented, and response tracings are recorded to measure evoked OAEs (Figure 4–8). The child feels no discomfort and is not required to cooperate beyond holding still. Evoked emissions are present in the ears of persons possessing normal hearing sensitivity and are systematically reduced or absent in the ears of persons who have a sensorineural hearing loss (Hall, 2014). The procedure also has limitations. For example, the middle ear must be essentially normal for OAEs to be measured.

ABR and ASSR

ABR is a safe, noninvasive test that does not require a child's voluntary cooperation. Several electrodes, attached to the top of the head as well as on or near each ear, measure the very tiny electrical signal produced when the nerves fire in response to click or chirp stimuli presented through earphones or a bone oscillator (refer back to Figure 4–1) (Lightfoot & Brennan, 2019; Wolfe & Smith, 2017). Although not a hearing test per se, an ABR's tracing represents the synchronous discharge of first- through sixth-order neurons in the eighth cranial nerve (auditory nerve) and brainstem. If appropriately performed, the procedure provides some information about the auditory sensitivity of each ear that can be inferred from the resultant tracing (Smith & Wolfe, 2019).

Figure 4–8.  A small probe is inserted in the ear canal, sounds are presented, and response tracings are recorded to measure evoked OAEs.



Patients undergoing ABR must be very still because the slightest movement can invalidate the test. Therefore, in the case of young children, a mild sedation can be administered by a physician just before the test. Determining ABR thresholds requires skill on the part of the observer because responses are small and are embedded in a background of ongoing brain activity and muscle activity (Gorga et al., 2004). In addition, middle ear pathology complicates results and damage to the brainstem can affect or obscure ABR responses. If a child has normal bilateral ABR tracings, one can be confident that the child has normal peripheral hearing. However, if the ABR responses are abnormal, the interpretation of hearing loss remains more ambiguous. Because of these problems with ABR, an alternative, more robust technique is often used to predict behavioral hearing thresholds — the ASSR (Cone & Garinis, 2009). The ASSR response is elicited by a modulated pure tone and it does not rely so much on the experience and expertise of the observer. Even though ASSR can better differentiate between severe and profound hearing losses than can ABR, Gorga et al. (2004) caution that reliable ASSR measurements cannot be made for stimulus levels at and above 100 dB HL.

Immittance (Impedance) Testing

Immittance, also called aural immittance measures, consists of an estimation of external ear canal volume, documentation of the integrity of the eardrum, and a description of mechanical properties of a normal or abnormal middle ear (Sanford & Feeney, 2019). Aural immittance testing also typically includes acoustic reflex testing.


It is important to recognize that immittance is not a test of hearing. A child can have normal immittance results and still have a significant sensorineural hearing loss.

Immittance measures are predicated on the principle that when acoustic energy (sound) is presented to the ear, a certain quantity of the sound energy is reflected back from the eardrum. The amount of energy that is reflected is based on the stiffness or flaccidity of the middle ear system. To take immittance measures, a small rubber plug is inserted in the child's ear canal (Figure 4–9). Small air pressure changes that cause the eardrum to move accompany the continuous tone relayed to the ear. Patterns of eardrum movement as well as protective muscle reflexes in the ear are measured. If the child remains still so that test results are not compromised, immittance measures may take only 1 to 2 minutes to obtain. Because it often provides primary evidence of middle ear pathology (mostly fluid or ear infection), immittance audiometry is an important part of the audiologic test battery (Hall & Swanepoel, 2010). Tympanometry shows the eardrum's range of mobility from normal, to abnormally stiff, to abnormally compliant (Kramer & Brown, 2019). In regard to otitis media, middle ear function may not always be stable. A child may experience negative pressure in the middle ear one day and fluid the next. Therefore, interpretation of immittance results applies only to the day of test. Figure 4–10 shows tympanometric tracings associated with different middle ear conditions. Interpretation of immittance results from babies under approximately 7 months of age can be difficult unless multifrequency tympanometry is employed. Specifically, a probe-tone frequency of 1000 Hz is recommended for tympanometry in neonates and young infants, and ear canal volume measurements should be conducted with a low-frequency probe tone such as 226 Hz (Joint Committee on Infant Hearing, 2007). Especially for the otitis-prone child, immittance is most meaningful when multiple measures are performed over time, providing a long-range picture of the child's middle ear function (Mazlan, Kei, & Hickson, 2009).

Figure 4–9.  To take aural immittance measures, a small rubber plug is inserted in the baby's or child's ear canal to evaluate middle ear function; fluid in the middle ear is the most common reason for abnormal immittance results.

Figure 4–10. Three primary tympanogram classifications for a 226 Hz probe tone. Type A (Panel A) is consistent with normal middle ear function. Type B (Panel B) is consistent with middle ear dysfunction, such as the presence of middle ear fluid. Type C (Panel C) suggests the presence of negative middle ear pressure. Source: Reprinted with permission from McCreery, R. W., and Walker, E. A. (2017). Pediatric amplification: Enhancing auditory access (p. 31). San Diego, CA: Plural Publishing, Inc.

Because some babies and young children resist having the probe inserted in their ear canal, conduct immittance testing as the last procedure in the test session unless the child is older or you know that the child will not be frightened or annoyed by the procedure.
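The Type A/B/C scheme illustrated in Figure 4–10 can be expressed as a simple decision rule. The sketch below classifies a 226 Hz tympanogram from its peak pressure and peak static admittance; the numeric cut-offs (0.2 mmho and −100 daPa) are illustrative assumptions, not norms given in this chapter, and real interpretation also considers ear canal volume, the child's age, and the clinical picture.

```python
def classify_tympanogram(peak_pressure_daPa, static_admittance_mmho,
                         flat_admittance_cutoff=0.2, negative_pressure_cutoff=-100):
    """Rough Type A/B/C label for a 226 Hz tympanogram.

    Cut-off values are illustrative assumptions only.
    """
    if static_admittance_mmho < flat_admittance_cutoff:
        return "Type B: flat tracing, consistent with middle ear dysfunction (e.g., fluid)"
    if peak_pressure_daPa < negative_pressure_cutoff:
        return "Type C: peak at significantly negative pressure"
    return "Type A: peak near atmospheric pressure, consistent with normal middle ear function"

print(classify_tympanogram(peak_pressure_daPa=-20, static_admittance_mmho=0.6))   # Type A
print(classify_tympanogram(peak_pressure_daPa=-250, static_admittance_mmho=0.5))  # Type C
print(classify_tympanogram(peak_pressure_daPa=0, static_admittance_mmho=0.05))    # Type B
```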

The Audiogram

An audiogram is a simple graph that charts the softest sounds that a person is actually able to hear. The audiogram is obtained by testing a person in a sound-isolated booth using pure tones that are presented to each ear through earphones. The audiogram shows the type of hearing loss, whether it is conductive, sensorineural, or mixed; the degree of hearing loss, whether it ranges from minimal to profound; and the pattern of the hearing loss, how much hearing loss exists at different frequencies (Kramer & Brown, 2019). Frequencies (pitches) from 250 Hz through 8000 Hz are shown along the

horizontal dimension. Displayed along the vertical dimension is intensity (loudness) in dB HL, showing the amount of the hearing loss (Figure 4–11). The higher the number of decibels, the louder is the sound and the greater the hearing loss. Threshold is defined as the softest sound that a person can just barely hear. All sounds greater than threshold are audible and located toward the bottom of the graph. Sounds softer than threshold are inaudible and are found toward the top of the audiogram. Measuring a child’s response to an array of distinct frequencies allows us to study the relationship between speech sounds and test frequencies. The typical tones presented in hearing tests are spaced in octave intervals from 250 Hz, which is about middle C on the piano, through 8000 Hz. These particular frequencies are used because together they compose speech sounds much like individual threads weave a cloth (Ling, 2002; McCreery & Walker, 2017). One is not consciously aware of the individual pure tones contained in someone’s speech, any more than one is aware of each individual thread contained in a dress. However, each individual tone makes a precise and important contribution to speech detection and understanding, as discussed in Chapter 2 of this book. The frequencies of 500, 1000, and 2000 Hz are considered to be the main speech frequencies because the majority of vowel and consonant energy is concentrated in this region. The 100 dots from Killion and Mueller’s Count the Dot audiogram (2010), shown in Figure 4–11, reflect the speech spectrum and the relative contribution of the different frequencies and intensities to the perception of speech. The greater the number of dots displayed at a particular frequency, the


Figure 4–11.  Pure tone audiogram — Count the Dot audiogram and degrees of hearing loss. Source: Reprinted with permission from Estabrooks, W., MacIver-Lux, K., and Rhoades, E. A. (2016). Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (p. 98). San Diego, CA: Plural Publishing, Inc.





more important that frequency is for speech perception. Note there are also many audible dots above 4000 Hz in the speech spectrum. The threshold of sensitivity for each ear is displayed on an audiogram using the following symbols:

1. O = the softest sound a person can hear with his or her right ear under headphones.
2. X = the softest sound a person can hear with his or her left ear under headphones.
3. ^ = the softest sound a person can hear while being tested by bone conduction; the test ear is not specified. Figure 4–12 shows a photograph of a bone conduction transducer, also called a bone oscillator, as placed on a child's head.

All the above audiometric symbols specify that the nontest ear was not

masked or removed from the test situation by having a static noise “keep it busy.” Different symbols designate when masking has been used. For example, a triangle is used when testing the right ear by air conduction while masking the left ear. When the left ear is being tested by air conduction and the right ear is masked, an open box is used. A left bracket ( [ ) means the right ear was tested by bone conduction while the left ear is masked. A right bracket ( ] ) means the left ear was tested by bone conduction while the right ear was masked. Masking is used when there is a possibility that the ear not being tested responds instead of the test ear, thereby contaminating the test results. Masking is then used to ensure that the ear being tested is indeed the ear responding to the test signal. A legend that defines all symbols used should be included on the audiogram. Additional audiometric symbols include S, which represents sound-field

Figure 4–12.  A bone oscillator is a piece of audiometric equipment that looks like a small black box attached to a headband, as seen on this child. It is used to measure bone conduction thresholds.


thresholds — the tones are presented through loudspeakers directly into the sound room. Sound-field thresholds are often obtained when a baby or young child will not wear earphones or when a comparison is needed between hearing-aided and unaided stimuli. A or B (binaural hearing aids) represent hearing-aided sound-field thresholds, which are the softest sounds that a person can hear while wearing their hearing aids. C represents cochlear implant sound-field thresholds, which are the softest sounds that a person can hear while wearing a cochlear implant. Whenever a child wears hearing aids or a cochlear implant, aided or cochlear implant measurements must be made to determine how the child functions with amplification. A child's ability to detect soft tones while wearing technology cannot be determined without obtaining data. Therefore, it is important to include aided or implant measurements as well as unaided sound-field thresholds on a child's audiogram to compare, according to acoustic phonetics, speech sounds that are audible to the brain with and without amplification. Note the example in Figure 4–13 and the relationship shown between thresholds obtained with and without the hearing aids. The young child depicted on the audiogram has a moderately severe to severe hearing loss as measured in the sound field. Average conversational speech would not be audible to this child, though some environmental sounds could be heard without amplification. When wearing her hearing aids, all conversational speech and all speech sounds would be acoustically available to her brain if the talker is close to the child, the environment is relatively quiet, the child is attending, and many meaningful listening opportunities are presented.

The child’s amplification (see Figure 4–13) provides a wonderful “keyboard” through which data can be entered into her brain. What if this child was brought to your early intervention program without wearing her hearing aids? What if the battery died during therapy? What if the parents put the hearing aids on her for only a few hours each day? This child possesses a great deal of very usable residual hearing; however, residual hearing is of no significant value unless and until it can be accessed via appropriate amplification that maximizes the brain’s opportunity to receive auditory information.

Plot all unaided, aided, RM, and cochlear implant thresholds on a Familiar Sounds audiogram or on a count-the-dots audiogram to assist in parent and teacher counseling.
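The aided-versus-unaided comparison can be made concrete with a small detection check. The sketch below compares hypothetical unaided and aided sound-field thresholds against the average (50 dB HL) and soft (35 dB HL) conversational levels named earlier; the threshold values are invented for illustration and are not the child shown in Figure 4–13, and detection is not the same thing as intelligibility.

```python
# Hypothetical sound-field thresholds in dB HL (illustration only).
unaided = {250: 60, 500: 65, 1000: 70, 2000: 75, 4000: 80}
aided   = {250: 20, 500: 25, 1000: 25, 2000: 30, 4000: 35}

def audible_frequencies(thresholds_dB_HL, speech_level_dB_HL):
    """Return frequencies at which a signal at speech_level_dB_HL would be at or
    above the listener's threshold (a detection check only, not intelligibility)."""
    return [hz for hz, threshold in sorted(thresholds_dB_HL.items())
            if speech_level_dB_HL >= threshold]

for label, thresholds in (("unaided", unaided), ("aided", aided)):
    print(label, "audible at 50 dB HL:", audible_frequencies(thresholds, 50))
    print(label, "audible at 35 dB HL:", audible_frequencies(thresholds, 35))
```

With these hypothetical values, nothing is detectable unaided at either conversational level, whereas all test frequencies are detectable aided, which mirrors the comparison described in the text.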

Figure 4–13. This audiogram (graph of a person's hearing sensitivity) shows binaural aided thresholds (B), which are the softest tones that the child can hear while wearing both of her hearing aids, compared to unaided sound-field thresholds (S), which show the softest sounds that the child can hear with both ears when not wearing hearing aids.

Configuration (Pattern) of Thresholds on the Audiogram

The pattern formed when pure-tone thresholds are plotted and connected on an audiogram can indicate the etiology or cause of the hearing loss. The shape formed by the pattern is called the configuration or profile of the hearing loss (Baldwin, Gajewski, & Widen, 2010; Hall, 2014; Kramer & Brown, 2019). Configurations can be described as follows:

• Bilateral:  Hearing loss is present in both ears.
• Unilateral:  One ear has normal hearing and the other ear has at least a mild hearing loss.
• Symmetrical:  The degree (severity) and configuration of the hearing loss are about the same for both ears.
• Asymmetrical:  The degree and/or the configuration is different for each ear.
• Notched:  There is a specific frequency or frequencies on the audiogram where hearing loss exists; all other frequencies show better hearing.
• Stable:  Hearing loss is about the same over time; it does not seem to be getting worse.
• Progressive:  Hearing loss becomes worse over time.
• Fluctuating:  Hearing loss varies over time.
• Sloping hearing loss:  Hearing loss is worse in the high frequencies; low frequencies show better hearing sensitivity.
• Falling:  Hearing loss is substantially worse in the high frequencies.
• Rising:  Hearing loss is worse in the low frequencies, becoming better as the frequencies get higher.
• Cookie-bite (also called saucer-shaped):  Hearing sensitivity is poorest in the middle frequencies, with better hearing in the low- and high-frequency regions.
• Reverse cookie-bite:  Significant hearing loss in the lower and higher frequencies and near-normal thresholds in the middle frequencies.
• Corner audiogram:  Measurable hearing occurs only in the lowest frequencies, and those thresholds typically are in the severe to profound hearing loss range.

Some configurations or audiometric profiles of hearing loss are suggestive of particular etiologies. Said another way, some patterns are rarely associated with certain etiologies. For example, large vestibular aqueduct syndrome (LVAS) (see Chapter 3) almost always presents as an asymmetrical, fluctuating, and progressive sensorineural hearing loss. It rarely, if ever, presents as a symmetrical, stable hearing loss. A cookie-bite configuration is usually symmetrical, often progressive, and suggestive of a recessive genetic etiology. A bilateral, symmetrical, rising, sensorineural hearing loss typically is genetic and may be progressive. A hearing loss caused by anoxia typically shows as symmetrical, sharply falling (with relatively good low-frequency hearing), and stable. A hearing loss caused by noise exposure often presents with a "noise notch" at 4000 Hz. That is, the worst hearing is shown at 4000 Hz with all other frequencies having better hearing sensitivity. The conductive hearing loss caused by otitis media (ear infections) may be asymmetrical, and most often presents with a rising pattern. Please see Madell, Flexer, Wolfe, and Schafer (Pediatric Audiology Casebook, 2nd Edition, 2019c) for detailed information about pediatric audiometric profiles. The audiogram depicted in Figure 4–13 presents as a moderately severe to severe cookie-bite-shaped hearing loss, suggestive of a genetic etiology. Past audiograms need to be evaluated carefully for evidence of progressive hearing loss, and future monitoring of the hearing loss must be responsibly maintained because this configuration usually is not associated with a stable hearing loss.

An example of configuration interpretation occurred in the following scenario. A young mother brought her 22-month-old to the clinic for hearing aids after the toddler had been diagnosed with a hearing loss. The mother sobbed that the hearing loss was her fault because the baby had fallen out of his high chair while the mother was talking on the phone with her back to the child. The child had suffered a bump to the head and a slight concussion. The mother stated that she had been told by the physician that the hearing loss was probably caused by the fall because it was identified after the fall, and the baby had passed his newborn hearing screening. Following testing using a combination of visual reinforcement audiometry and conditioned play audiometry with insert earphones, a bilateral, symmetrical, mild hearing loss in the low frequencies, sloping to moderate hearing loss in the middle frequencies, and rising again to a mild hearing loss in the highest frequencies was noted. Tympanometry revealed normal middle ear function on the day of test, ruling out a conductive component to the hearing loss. This symmetrical, cookie-bite-shaped, bilateral hearing loss had a very low chance of being caused by a fall. If the hearing loss was caused by a concussion, it likely would not be symmetrical and saucer shaped. It is far more likely that the hearing loss was progressive and genetic; it probably was not present at birth. The fall did not cause the hearing loss. Rather, the fall caused the physician and family to test the child to make sure that he was okay. When the hearing loss was revealed, a cause-effect relationship was inferred, when really the relationship likely was coincidental.
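For readers who want to see the shape labels above applied mechanically, the sketch below classifies a set of pure-tone thresholds as flat, sloping/falling, rising, or cookie-bite by comparing low-, mid-, and high-frequency averages. The 15 dB decision cut-off and the frequency groupings are illustrative assumptions, not criteria given in this chapter; real interpretation also weighs symmetry, stability over time, and the case history.

```python
def classify_configuration(thresholds_dB_HL, cutoff_dB=15):
    """Rough audiogram-shape label from pure-tone thresholds in dB HL.

    Frequency groupings and the cutoff are illustrative assumptions only.
    """
    low = sum(thresholds_dB_HL[f] for f in (250, 500)) / 2
    mid = sum(thresholds_dB_HL[f] for f in (1000, 2000)) / 2
    high = sum(thresholds_dB_HL[f] for f in (4000, 8000)) / 2

    if mid >= low + cutoff_dB and mid >= high + cutoff_dB:
        return "cookie-bite (poorest hearing in the mid frequencies)"
    if high >= low + cutoff_dB:
        return "sloping/falling (worse in the high frequencies)"
    if low >= high + cutoff_dB:
        return "rising (worse in the low frequencies)"
    return "relatively flat"

# Hypothetical thresholds resembling a cookie-bite configuration.
example = {250: 25, 500: 40, 1000: 60, 2000: 65, 4000: 40, 8000: 25}
print(classify_configuration(example))
```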

Formulating a Differential Diagnosis

The separation of a hearing loss from other problems that have effects similar to those of hearing loss is known as a differential diagnosis. Certainly, a peripheral hearing loss should always be ruled out first as a primary or contributing cause of a child's lack of or inconsistent auditory responses and/or difficulty developing spoken communication skills. Conditions such as general developmental delay, autism, childhood psychosis, auditory neuropathy spectrum disorder (ANSD), and central auditory processing disorders (CAPDs) may resemble hearing loss by showing a similar lack of response to sound and/or lack of speech and language development. ANSD must always be ruled out when a child displays inconsistent auditory behaviors (see Chapter 3). An auditory processing problem may be present when sounds get into the auditory system, but the brain is unable to interpret efficiently, or at all, the meaning of the sounds due to some degree of brain damage. In an extreme case, meaningful sounds cannot be differentiated from nonmeaningful sounds. See Chapter 3 for more information about auditory processing issues, including how these issues relate to peripheral hearing loss. With general developmental delay, a child experiences delay in many developmental areas including auditory behaviors. A child with autistic spectrum disorder (ASD) may appear unresponsive to most sensory stimuli. Childhood psychosis may elicit reactions in the child that did not appear to be present at birth, such as deviant responses to sounds. All of these conditions can occur separately or in combination. Therefore, it is not always easy to identify a hearing loss. Careful case histories, observations, and multiple hearing tests are often required. Making a differential diagnosis requires collaboration between the audiologist and other professionals such as teachers, speech-language pathologists, early interventionists, physicians, and psychologists.

Sensory Deprivation

The phenomenon of sensory deprivation is another factor that may complicate the diagnosis of hearing loss in children. Sensory deprivation is characterized by a lack of sensory stimulation to the brain as a result of a hearing loss that can cause delayed and/or deviant behaviors. As mentioned in Chapter 2, hearing functions on three levels: the unconscious, signal warning, and spoken language levels. A peripheral hearing loss can eliminate the environmental auditory background and deny a child's brain access to his own biological sounds. The resultant delayed and/or atypical behaviors displayed by the child may resemble other developmental problems due to this lack of sensory input to the auditory centers of the brain. An example of sensory deprivation was seen in a 16-month-old girl who was referred for a hearing test. She did not yet walk or make any speech sounds, would not make eye contact, and wanted only to nurse and rub her mother's clothing. Completely unresponsive to sound, the baby also had a family history of childhood hearing loss. Through multiple behavioral and electrophysiologic tests, a severe to profound sensorineural hearing loss was revealed. Following the fitting of her hearing aids and meaningful auditory brain stimulation, the child's unusual behaviors declined. Professionals initially were uncertain as to the co-occurrence of other developmental problems. Two months after powerful hearing aids and auditory enhancement had alleviated the child's sensory deprivation, the auditory centers of her brain were stimulated and she was walking, listening, interacting, and finally beginning to vocalize. Subsequently, she received bilateral cochlear implants at 24 months of age with excellent results. This particular case serves as one example of how hearing loss does not necessarily manifest itself in a straightforward manner.

Due to sensory deprivation, an infant's or young child's development can look very limited at the first visit to an audiologist, but that same child could end up functioning at a much higher level than we would initially predict. Therefore, beware of allowing our initial impression of the child to limit our prognosis or to lower our expectations.

Ambiguity of Hearing Loss

The very ambiguity of hearing loss poses yet another factor that complicates a differential diagnosis. The notion of "all or none" cannot be applied to hearing loss because a child might hear some sounds but not others. The audibility and intelligibility of speech for a child with a hearing loss are influenced by many factors such as the distance from a talker, background noise, room reverberation, and attention. One uninformed therapist, upon removing a toddler's hearing aids, found this ambiguous nature of hearing loss to be confusing. The therapist stated that for a period when the hearing aids were removed, the child still turned his head when his name was called. From this observation the therapist believed that if the toddler could hear his name, he must be able to hear everything, so the hearing aids were probably a mistake. From there, the conclusion was drawn that the toddler's speech and language delay must be linked to causes other than hearing loss. Although this particular child might have multiple difficulties, his moderate degree of hearing loss allowed his brain some low-frequency audibility, even without amplification, but not intelligibility of consonant sounds. Without hearing aids, the child certainly was not "deaf." However, he could not hear well enough to identify the word-sound distinctions necessary for the establishment of competent spoken communication. His brain did not receive phonemic details. Therefore, making an accurate differential diagnosis of hearing loss in children is not a simple matter. Other problems can arise with symptoms that are similar to hearing loss, or that can coexist with the hearing loss. In addition, sensory deprivation can cause the hearing loss to appear to be more than a simple lack of response to sound. Finally, the ambiguity of hearing loss relative to type and degree, stability, noise, and distance can cause one to misinterpret a child's auditory responsiveness.

Measuring Distance Hearing

Subjective measures of distance hearing, using the Ling 6-7 Sound Test, are a vital part of audiologic assessments and validation of technology function (Hung & Ma, 2016). Children with hearing losses, even minimal ones, cannot receive intelligible speech well over distances. This reduction in "earshot" has tremendous negative consequences for life and classroom performance because distance hearing is linked to passive, casual, and incidental listening and learning (Ling, 2002; Perigoe & Paterson, 2013). Research in the field of developmental psychology tells us that very young children learn incidentally about 90% of what they know about spoken language and the world. That is, young children learn a great deal of information unintentionally because they have access to overhearing conversations that occur at distances (Akhtar, Jipson, & Callanan, 2001). Thus, any type and degree of hearing loss can present a significant barrier to an infant's or child's ability to receive information from the environment.

Because of the reduction in acoustic signal intensity and integrity with distance, a child with a hearing problem has a limited range of distance hearing; that child may need to be taught directly many skills that other children learn incidentally. Refer to Chapter 2 for a discussion of the acoustic basis of the Ling 6-7 Sound Test, and to Appendix 2 for instructions for the administration and application of the test.
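The shrinking of earshot with distance can be illustrated with the common free-field approximation that sound pressure level drops about 6 dB for every doubling of distance from the talker. The figures below are a hypothetical illustration (speech of roughly 60 dB SPL at 1 meter), not measured data from this chapter, and real rooms with reverberation and noise behave less predictably.

```python
import math

def estimated_level_dB(level_at_1m_dB, distance_m):
    """Free-field (inverse-square) approximation: roughly a 6 dB drop in sound
    pressure level for every doubling of distance from the talker."""
    return level_at_1m_dB - 20 * math.log10(distance_m)

# Hypothetical example: conversational speech of about 60 dB SPL at 1 meter.
for distance in (1, 2, 4, 8):
    print(f"{distance} m: approximately {estimated_level_dB(60, distance):.0f} dB SPL")
```

Even under this idealized assumption, speech that is comfortably audible at 1 meter has lost roughly 18 dB by 8 meters, which is why remote microphone technology is used to restore the signal at the child's ear.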

Because of reduced bandwidth of acoustic stimuli caused by hearing loss, children with hearing loss need about 3 times the exposure to new words and concepts to learn them (Pittman, 2008). Extension of acoustic bandwidth and distance hearing (by using amplification and remote microphone technology), is critical for maximizing the child’s exposure to linguistic and social information in the home, playground, and classroom.

Summary

The purpose of this chapter has been to discuss issues, descriptions, and procedures involved in the audiologic assessment of infants and young children with hearing loss. Test protocols were presented, and behavioral and objective tests were described. The next chapter will discuss the technological management of identified hearing loss. Refer to Table 4–1 for a summary of audiologic tests included in a basic audiologic test battery. Refer to Table 4–2 for a summary of specific pediatric audiology test procedures. Refer to Table 4–3 for a listing of common abbreviations used by audiologists on reports.


Table 4–1.  Summary of Audiologic Tests in a Basic Test Battery

Pure-tone thresholds on an audiogram
  Measurement: Hearing sensitivity; symbols are plotted in dB HL from 250 Hz through 8000 Hz on the audiogram.
  Purpose: degree of hearing loss; type of loss (conductive, sensorineural, or mixed); shape of loss (i.e., falling, rising, saucer shaped); ear symmetry.

Aided or CI thresholds
  Measurement: Access to sound while wearing hearing aids or CI; obtained in the sound field; recorded as dB HL on the audiogram.
  Purpose: access to speech sounds when amplified; access to environmental sounds when amplified; benefit of technology.

Speech awareness threshold, also known as speech detection threshold (SAT or SDT)
  Measurement: Hearing sensitivity for speech; noted in dB HL.
  Purpose: the softest level that speech can be detected — not necessarily understood; provides mostly low-frequency cues.

Speech reception threshold (SRT)
  Measurement: Hearing sensitivity for speech recognition using spondee words such as "baseball" or "hot dog"; noted in dB HL.
  Purpose: the softest level that speech can just barely be recognized; focuses on vowels; closed-set task; SRT should agree with low- and mid-frequency pure-tone thresholds.

Word discrimination score
  Measurement: Identification of speech by distinguishing single-syllable words; recorded as percent correct as a function of presentation level.
  Purpose: tests the child's ability to identify both vowels and consonants of test words (open-set task); identifies differences in word recognition ability between ears; words are presented at an average conversational level (50 dB HL) and again at a level loud enough to overcome the hearing loss.

Immittance testing
  Measurement: Objective measure of middle ear function.
  Purpose: identifies a need for medical intervention; abnormal immittance could explain a temporary shift in hearing sensitivity; abnormal results could signify that the hearing aid volume should be increased.

Table 4–2.  Pediatric Audiology: Test Procedures

Behavioral observation audiometry (BOA)
  Expected infant/child response: Expected is a change in sucking in response to auditory stimulus; other behavioral changes are not accepted for the test procedure, because they usually indicate suprathreshold responses. Reflexive and attentive responses also can be noted as indicators of the infant's state and auditory development. Refer to Madell and Schafer (2019) for details about BOA testing.
  Cognitive age range necessary for the child to engage in the test: Birth to 6 months.
  Benefits: This enables audiologists to obtain valuable behavioral responses in infants; part of a complete diagnostic assessment. Testing can reinforce accurate fitting of technology because minimal response levels (MRLs) can be obtained. Testing can be conducted in sound field with earphones, bone oscillator, hearing aids, or cochlear implants.
  Challenges: This requires the audiologist to carefully observe the infant sucking, and to note other auditory developmental behaviors. BOA testing can be performed only when the infant is in a calm awake or light sleep state. BOA has not been generally accepted in the audiology community as a reliable way to obtain MRLs because audiologists typically have not been trained to use a sucking response pattern.

Visual reinforcement audiometry (VRA)
  Expected infant/child response: Expected response is a conditioned head turn to a visual reinforcer, usually a lighted, animated toy.
  Cognitive age range necessary for the child to engage in the test: 5 months to 3 years.
  Benefits: This enables the audiologist to obtain valuable behavioral responses in infants and young children. Because responses are conditioned, more responses can be obtained during one test session. Testing enables accurate fitting of technology because MRLs can be obtained. Testing can be conducted in a sound field with earphones, bone oscillator, hearing aids, or cochlear implants.
  Challenges: Some children will not accept earphones, so obtaining individual ear information can be challenging.

Conditioned play audiometry (CPA)
  Expected infant/child response: Child performs a motor act in response to hearing a sound (e.g., listen and drop task).
  Cognitive age range necessary for the child to engage in the test: 30 months to 5 years.
  Benefits: The state of the infant or child is less problematic because the child can be more easily engaged. Accurate responses can be obtained at threshold levels. Testing can be conducted in a sound field with earphones, bone oscillator, hearing aids, or cochlear implants.
  Challenges: Keeping the child entertained and involved long enough to obtain all the necessary information can be challenging. Multiple reinforcers, computer programs, and so on, are often needed.

Immittance
  Expected infant/child response: None.
  Cognitive age range necessary for the child to engage in the test: All.
  Benefits: This provides information about middle ear functioning and about the intactness of the auditory system (acoustic reflex arc).
  Challenges: The child must sit still and not speak during the test battery.

Otoacoustic emissions (OAEs)
  Expected infant/child response: None.
  Cognitive age range necessary for the child to engage in the test: All.
  Benefits: This measures cochlear outer hair cell function. If OAEs are present, they suggest no greater than a mild to moderate hearing loss. This contributes to the evaluation of the overall function of the auditory system.
  Challenges: The infant or child must sit still, not speaking during testing. Testing cannot rule out a mild-moderate hearing loss.

Auditory brainstem response testing (ABR)
  Expected infant/child response: None.
  Cognitive age range necessary for the child to engage in the test: All.
  Benefits: Tonal ABR provides frequency-specific threshold information. Click ABR provides information about the intactness of the auditory pathways, including measures contributing to the diagnosis of auditory neuropathy.
  Challenges: The infant or child must be asleep, sedated, or very still for the duration of testing. ABR testing is not a direct measure of hearing and is not a substitute for behavioral audiological testing.

Source:  Adapted from Madell, J. R., Flexer, C., Wolfe, J., & Schafer, E. C. (2019b). Pediatric audiology: Diagnosis, technology, and management (3rd ed.). New York, NY: Thieme.
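The cognitive age ranges in Table 4–2 can be read as a simple decision rule for choosing a behavioral procedure. The sketch below only restates those ranges; the month boundaries come from the table, and, as in practice, the ranges overlap, so more than one procedure may apply to a given child.

```python
def candidate_behavioral_tests(cognitive_age_months):
    """Suggest behavioral procedures from the age ranges listed in Table 4-2.
    Ranges overlap, so more than one procedure may be returned."""
    tests = []
    if 0 <= cognitive_age_months <= 6:
        tests.append("Behavioral observation audiometry (BOA)")
    if 5 <= cognitive_age_months <= 36:
        tests.append("Visual reinforcement audiometry (VRA)")
    if 30 <= cognitive_age_months <= 60:
        tests.append("Conditioned play audiometry (CPA)")
    return tests

print(candidate_behavioral_tests(4))   # BOA only
print(candidate_behavioral_tests(32))  # VRA and CPA both possible
```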

Table 4–3.  Common Abbreviations Used by Audiologists on Reports

BOA:  Behavioral observation audiometry; method of assessment used with babies where a sound is presented and the baby's unconditioned behavioral responses are observed.

CNT:  Could not test (tried, but the audiologist could not obtain information; child was unable or unwilling to perform the task).

CPA:  Conditioned play audiometry; method of assessment used with children with cognitive ages of 2 to 5 years where a sound is presented and child is taught to perform some play activity each time he or she hears the sound; also called the listen and drop game.

DNT:  Did not test — did not even attempt a test or a specific procedure.

MCL:  Most comfortable loudness level.

MRL:  Minimal response level — the softest level that a baby responded to the test signal; usually louder than actual threshold.

NBN:  Narrowband noise; noise bands centered at frequencies of 250 to 8000 Hz used in masking and in obtaining some frequency specificity for unaided and aided sound-field testing.

NR:  No response; no measurable hearing when the test signal was presented at the loudest limits of the audiometer.

PTA:  Pure-tone average; average of the pure-tone thresholds at 500, 1000, and 2000 Hz in each ear. These are viewed as the key speech frequencies that give an indication of hearing sensitivity in the mid range; a high-frequency hearing loss is not identified by the PTA.
  PTA = −10 to 15 dB:  Normal PTA for children
  PTA = 16 to 25 dB:  Minimal hearing loss
  PTA = 26 to 40 dB:  Mild hearing loss
  PTA = 41 to 55 dB:  Moderate hearing loss
  PTA = 56 to 70 dB:  Moderately severe hearing loss
  PTA = 71 to 90 dB:  Severe hearing loss
  PTA ≥ 91 dB:  Profound hearing loss
  PTA ≥ 15 dB in one ear:  Unilateral hearing loss

SAT:  Speech awareness threshold, sometimes called speech detection threshold; the faintest level where the child can just detect the presence of speech 50% of the time but does not indicate that the speech was understood.

S:  Sound-field testing; refers to the way sound is presented to the child (e.g., through loudspeakers rather than earphones in a sound treated suite) in unaided and aided cochlear implant testing; results always reflect hearing in the better ear.

SRT:  Speech recognition threshold; the faintest level where the child can repeat familiar two-syllable words, like "baseball," "cowboy," etc. (or point to pictures of them), 50% of the time.

VRA:  Visual reinforcement audiometry; method of assessment where the child is conditioned to localize to a toy each time she hears the sound; used with children with cognitive ages of 5 to 8 months up to 3 years.

WDS:  Word discrimination score; the percentage score obtained by repeating open-set, one-syllable words (or pointing to pictures of them) in a 25- or 50-item list; interpretations of the scores vary; e.g.:
  90% to 100%:  Excellent word discrimination ability
  75% to 89%:  Slight difficulty
  61% to 74%:  Moderate difficulty
  50% to 60%:  Poor word discrimination ability

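The PTA entry in Table 4–3 lends itself to a small worked example. The sketch below averages the 500, 1000, and 2000 Hz thresholds and applies the degree labels listed in the table; the thresholds in the example are hypothetical, and the code does not capture the unilateral-loss case, which depends on comparing the two ears.

```python
def pure_tone_average(thresholds_dB_HL):
    """PTA = mean of the pure-tone thresholds at 500, 1000, and 2000 Hz."""
    return sum(thresholds_dB_HL[f] for f in (500, 1000, 2000)) / 3

def degree_of_hearing_loss(pta_dB_HL):
    """Degree labels taken from the PTA ranges listed in Table 4-3."""
    if pta_dB_HL <= 15:
        return "Normal PTA for children"
    if pta_dB_HL <= 25:
        return "Minimal hearing loss"
    if pta_dB_HL <= 40:
        return "Mild hearing loss"
    if pta_dB_HL <= 55:
        return "Moderate hearing loss"
    if pta_dB_HL <= 70:
        return "Moderately severe hearing loss"
    if pta_dB_HL <= 90:
        return "Severe hearing loss"
    return "Profound hearing loss"

# Hypothetical right-ear thresholds in dB HL.
right_ear = {250: 30, 500: 40, 1000: 45, 2000: 55, 4000: 60, 8000: 65}
pta = pure_tone_average(right_ear)
print(f"PTA = {pta:.1f} dB HL -> {degree_of_hearing_loss(pta)}")
```

With these hypothetical thresholds the PTA is about 47 dB HL, which falls in the moderate range; as the table notes, a high-frequency loss (here 60 to 65 dB HL) is not reflected in the PTA itself.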

5 Hearing Aids, Cochlear Implants, and Remote Microphone (RM) Systems

Key Points Presented in the Chapter

• The purpose of all environmental and technological management strategies is to enhance the reception of clear and intact acoustic information to access, develop, and organize the auditory centers of the brain.
• Speech intelligibility is based on the science of signal-to-noise ratio (S/N ratio) — the relationship of the desired signal to all background/competing sounds. Children need the desired signal to be 10 times, or approximately 15 to 20 dB, louder than background noise to clearly discriminate words.
• One strategy for improving the S/N ratio is to move closer to the infant's or child's ear so that the talker's voice will take precedence over all background sounds.
• Hearing aids are typically the initial technology used to deliver auditory information to the brain of a baby or child who experiences a hearing loss, and the hearing aid fit needs to be validated functionally (speech perception tests) as well as verified electroacoustically (real-ear measurements).
• Bone anchored implants (also called osseointegrated implants), an alternative to traditional bone conduction hearing aids, are suitable for children with a conductive or mixed hearing loss or single-sided deafness (SSD).
• Traditionally, personal-worn remote microphone (RM) systems coupled to hearing aids have been used for school-aged children in school settings, but growing evidence suggests that infants and toddlers also can benefit from personal RM systems used at home to increase auditory exposure to direct and overheard conversations.
• Sound-field amplification technology (classroom audio distribution [CAD] system) is an effective educational tool that allows control of the acoustic environment in a classroom, thereby facilitating acoustic accessibility of teacher instruction for all children in the room.
• The cochlear implant is very different from a hearing aid; hearing aids amplify sound, but cochlear implants compensate for damaged or nonworking parts of the inner ear.
• Efficacy of technology use may be measured through educational performance, behavioral speech perception tests, direct measures of changes in brain development, and functional assessments.
• If the infant or child is not performing or progressing as expected, suspect equipment problems first.

Introduction

As detailed in the white paper "Start with the Brain and Connect the Dots" on the Hearing First website, our biology is that we hear with the brain (Flexer, 2017). The ear is the structure that captures raw, vibratory sound from the environment and directs it to the brain, but it is the brain that processes and gives meaning to that auditory information (Kral, 2013; Kral et al., 2016). Our ears are merely "doorways" to our auditory neural centers. Authentic "hearing" occurs in the brain, and not in the ears, just like "seeing" occurs in the brain and not in the eyes. Hearing loss, therefore, can be viewed as an obstruction in the doorway that prevents a little (could be called "hard of hearing"), a lot (could be called "more hard of hearing"), or all (could be called "deaf") auditory information from reaching the brain. To oversimplify, hearing loss is a "doorway problem," and spoken language development depends on overcoming the doorway problem and getting information to the brain. Hearing technologies (e.g., hearing aids, cochlear implants, bone anchored devices and RM systems) are engineered to break through the ear/doorway to allow access, activation, stimulation, and development of auditory neural pathways with auditory information, including spoken language.

The only point of wearing hearing technologies is to get auditory information through the doorway to the brain; there is no other purpose for wearing these devices.

With the availability of newborn hearing screening, we can identify a doorway problem at birth, so we can — and must — fit auditory technologies in the first weeks of life to activate and grow auditory neural connections as a foundation for language and knowledge development (Ching et al., 2018). It should be noted that while hearing aids can be fitted within days of birth, cochlear implants, currently appropriate for babies with severe to profound hearing loss (closed doorways), are not able to be surgically implanted until 6 to 12 months of age, depending on the protocols of a particular country. The components of technological management for hearing and hearing problems are discussed in this chapter.

For Intervention, First Things First:  Optimize Detection of the Complete Acoustic Spectrum

To facilitate the reception of clear and intact spoken language and environmental sounds, the following must occur: control and management of the listening environment; favorable positioning of talkers, whether parent, teacher, or clinician, so they are always within earshot of the baby or child; and consistent use of the appropriate amplification — hearing aids, cochlear implants, and/or RM systems, such as frequency modulated (FM) systems (Boothroyd, 2019). Sounds reach the brain by traveling through the environment, the auditory system, and any technology worn by the child. An unmanaged hearing loss (doorway problem) of any degree prevents certain sounds from having access to the brain, thereby interfering with the child's acquisition of language, literacy, and academic competencies.

Listening and Learning Environments

The integrity of the learning environment is the first item to be considered (Kraus, 2017; Latham & Blumsack, 2008). In schools and in many other learning domains, including the home and clinic, children often are placed in demanding, degraded, and constantly changing listening situations due to noise and distance from the talker (Kraus & White-Schwoch, 2015; Smaldino & Flexer, 2012). The farther an infant or child is from the sound source, the greater the background noise and the poorer the speech-to-noise (S/N) ratio (Anderson, 2001; ANSI, 2010).

Distance Hearing/Incidental Learning and S/N Ratio

Distance Hearing/Incidental Learning

Incidental learning through "overhearing" refers to times when the child listens to speech that is not directly addressed to him or her and learns from it. Akhtar, Jipson, and Callanan (2001) found that typical children as young as 2 years of age can acquire new words from overheard speech, showing the active role played by children in acquiring language. Furthermore, Knightly, Jun, Oh, and Au (2003) found that childhood overhearing helped improve speech perception and phonologic production of all languages heard. Unfortunately, children with hearing losses, even minimal ones, have reduced overhearing potential because they cannot receive intelligible speech well over distances (Moeller, Hoover, Peterson, & Stelmachowicz, 2009). This reduction in distance hearing, also called earshot, can pose substantial problems for life and classroom performance because distance hearing is necessary for the passive, casual, or incidental acquisition and use of spoken language and pragmatic skills (Moeller, 2007). Therefore, a child's distance hearing needs to be extended as much as possible through the use of technology (Dillon, 2012; Ling, 2002; Madell & Flexer, 2018; Peters, 2017). Moeller, Donaghy, Beauchaine, Lewis, and Stelmachowicz (1996) found that more consistent auditory input can be obtained if RM systems are used at home as well as in school settings, and Benitez-Barrera, Angley, and Tharpe (2018) found that children with hearing loss have increased access to caregivers' speech from a distance when the parent uses a RM system in the home environment.

Signal-to-Noise Ratio, Also Called Speech-to-Noise Ratio (S/N Ratio) S/N ratio is the relationship between a primary signal, such as the parent’s speech, and background noise. Noise is everything that conflicts with the auditory signal of choice and may include other talkers; heating, ventilating, or cooling systems (HVAC); classroom or hall noise; playground sounds; computer noise; and wind; among others. The quieter the room and the more favorable the S/N ratio, the clearer the auditory signal will be for the brain (Flexer & Rollow, 2009; Leavitt & Flexer, 1991). The farther the listener is from the desired sound source and the noisier the environment, the poorer the S/N ratio and the more garbled the signal will be for the brain. As stated previously, all children need a quieter environment and louder signals than adults do to learn (Anderson, 2004; Madell & Flexer, 2018; Smaldino & Flexer, 2012). Adults with normal hearing and intact cognitive and language skills require a consistent S/N ratio of approximately +6 dB. This means that the desired signal needs to be about twice as loud as

Because overhearing conversations is a critical way that children develop social and cognitive skills such as Theory of Mind and acquire and use spoken language, a child’s distance hearing in various environmental settings must be known and extended. The Ling 6-7 Sound Test as a means of estimating distance hearing can assist in this regard. See Appendix 2 for directions about how to use the test. Furthermore, RM systems should be fitted at the same time or very shortly after hearing aids are fitted on babies and toddlers to extend earshot. Families require coaching in the effective use of the RM as a means of promoting overhearing.

background noise for the reception of intelligible speech (Crandell, Kreisman, Smaldino, & Kreisman, 2004). Children, on the other hand, need a much more favorable S/N ratio because their neurological immaturity and lack of life and language experience reduce their ability to perform auditory and cognitive closure. That is, children with normal hearing require a better S/N ratio than adults, and children with hearing loss require even more favorable acoustic environments. In fact, even adults with hearing loss need a very favorable S/N ratio. These populations need the desired signal, including their own voice, to be about 10 times louder than competing sounds, which would be a S/N ratio of about +15 to +20 dB (remember that the decibel scale is logarithmic). Due to noise, reverberation, and variations in teacher position, the S/N ratio (the intensity of the teacher’s voice relative to normal classroom noise) in a typical classroom is unstable and averages only about +4 dB and may be 0 dB — often less than



5.  Hearing Aids, Cochlear Implants, and Remote Microphone (RM) Systems

ideal even for adults with normal hearing (Peters, 2017; Smaldino, 2004). Auditory brain access of the speech signal in a typical classroom environment is more problematic than initially thought. Leavitt and Flexer conducted a key study in 1991 to measure the effects of a classroom environment on a speechlike signal. They used the Brüel and Kjaer Rapid Speech Transmission Index (RASTI), which emits an amplitude-modulated broadband noise centered at 500 and 2000 Hz. This signal was transmitted from the RASTI transmitter to the RASTI receiver that was placed at 17 different locations around a typical occupied classroom. The RASTI score is a measure of the integrity of signals as they are propagated across the physical space. A perfect reproduction of the RASTI signal at the receiver has a score of 1. Results showed that sound degradation occurred as the RASTI receiver was moved away from the RASTI transmitter, as reflected by a decrease in RASTI scores (Leavitt & Flexer, 1991). In the front row center seat, the most preferred seat, the RASTI score dropped to 0.83. In the back row, the RASTI score was only 0.55, reflecting a loss of 45% of equivalent speech intelligibility in a quiet, occupied classroom. Only at the 6-inch reference position could a perfect RASTI score of 1 be obtained. Note that the RASTI score represents only the loss of speech fidelity that might be expected at the student’s ear or hearing aid microphone in a quiet classroom. The RASTI score does not measure the additional negative effects of a child’s hearing loss, weak auditory or language base, or attention problems. Even in a front-row center seat, the loss of critical speech information is noteworthy for a child who needs accurate data entry to their brain to learn. The most sophisticated of hearing aids or cochlear

implants cannot recreate those components of the speech signal that have been lost in transmission across the physical space of the classroom.
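
Because the decibel scale is logarithmic, the S/N ratio is simply the difference, in dB, between the level of the desired speech and the level of the competing noise. The short Python sketch below illustrates that arithmetic; the function name and the example levels are assumptions chosen for illustration, not values drawn from the studies cited above.

```python
def snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """S/N ratio in dB is the speech level minus the noise level."""
    return speech_level_db - noise_level_db

# Illustrative (assumed) levels at the child's ear, not measured values.
teacher_voice_db = 60.0    # average conversational speech from across the room
classroom_noise_db = 56.0  # steady classroom noise

snr = snr_db(teacher_voice_db, classroom_noise_db)
print(f"S/N ratio: {snr:+.0f} dB")          # about +4 dB, as in a typical classroom
print(f"Meets +15 dB target: {snr >= 15}")  # False: some means of improving S/N is needed
```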

ANSI/ASA S12.60-2010: Acoustical Guidelines for Classroom Noise and Reverberation

There is increasing awareness about the importance of managing classroom acoustics. In 2002, the American National Standards Institute (ANSI) issued a landmark document titled Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools. This 36-page booklet defined the acoustic criteria for classrooms and provided a great deal of information about how to attain a favorable background noise level of 35 dBA and a reverberation time of 0.6 seconds. A-weighted decibels, abbreviated dBA, or dBa, or dB(a), are an expression of the relative loudness of sounds in air as perceived by the human ear. In the A-weighted system, the decibel values of sounds at low frequencies are reduced, compared with unweighted decibels, in which no correction is made for audio frequency. A second review of the standard resulted in some minor modifications and a separate part devoted to issues unique to portable classrooms. The first part of the revised standard, American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, Part 1: Permanent Schools (ANSI/ASA S12.60-2010), is a refined version of the 2002 standard. The primary performance requirement for furnished but unoccupied classrooms is basically unchanged from the 2002 standard. Specifically, the
1-hour average A-weighted background noise level cannot exceed 35 dB (55 dB if C-weighting is used — C-weighting does not attenuate low frequencies), and for average-sized classrooms the reverberation time (RT60) cannot exceed 0.6 seconds (35/55 dBA/C and 0.7 seconds if the volume is greater than 10,000 but less than or equal to 20,000 cubic feet). Among other changes are improvements of the requirements for exterior walls and roofs in noisy areas, consideration of activities close to classrooms, clarification of the definition of a “core learning space,” addition of the limit of 45 dBA for sound in hallways, clarification and simplification of measurement procedures, and addition of the requirement that if a classroom audio distribution system (CADS) is believed appropriate it should provide even coverage and be adjustable so as not to disturb adjacent classes (Smaldino & Flexer, 2012). These minimally acceptable background noise levels are specified with the intent of producing a classroom S/N ratio of +15 dB; the desired speech signal ought to be 15 dB louder than background noise. This S/N ratio is believed to be necessary to provide children with access to intelligible classroom instruction. Note that acoustical measurements are taken in unoccupied facilities. Currently, compliance with the ANSI standard is voluntary, depending on individual state guidelines, and few classrooms meet the criteria.

Steps can be taken to influence the level of noise in a school classroom (Smaldino & Crandell, 2004). To begin with, the greatest sources of noise in a room are the number of children in the room and the number of simultaneous auditory activities occurring in the room (Flexer & Rollow, 2009). Thus, classroom noise levels are largely determined by classroom activity; the more children and the more active learning centers there are, the noisier the room (Nelson, Smaldino, Erler, & Garstecki, 2008). In addition, note the construction of the floor and ceiling in the room. The floor and ceiling areas compose 60% of the total surface area of a school classroom and have considerable impact on the noise levels in the room. Rooms with hard floors and high ceilings will be much more reverberant (echo) and noisy than a carpeted room with acoustical tile and lower ceilings (Adrian, 2009). HVAC systems are also substantial sources of noise that need to be managed (ASHA, 2005). For additional details about ANSI/ASA S12.60 acoustical guidelines, which were reaffirmed on April 1, 2015, please refer to: https://global.ihs.com/search_res.cfm?&rid=ASA&input_doc_number=ANSI%2FASA%20S12%2E60&input_doc_title=%20
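
For readers who want to translate the standard's numbers into a quick screening check, the following Python sketch encodes the unoccupied-classroom figures quoted above (35 dBA background noise; an RT60 of 0.6 seconds for average-sized rooms or 0.7 seconds for rooms of 10,000 to 20,000 cubic feet; 55 dB if C-weighting is used). It is a minimal illustration under those stated assumptions, not a substitute for the standard itself, and the function and variable names are invented.

```python
def check_classroom_acoustics(noise_dba: float,
                              rt60_seconds: float,
                              room_volume_cubic_feet: float,
                              noise_dbc: float | None = None) -> dict:
    """Screen unoccupied-classroom measurements against the ANSI/ASA
    S12.60-2010 figures quoted in the text (illustrative sketch only)."""
    if room_volume_cubic_feet <= 10_000:
        rt60_limit = 0.6
    elif room_volume_cubic_feet <= 20_000:
        rt60_limit = 0.7
    else:
        rt60_limit = None  # larger learning spaces fall under separate requirements

    results = {
        "background_noise_ok": noise_dba <= 35.0,
        "reverberation_ok": rt60_limit is not None and rt60_seconds <= rt60_limit,
    }
    if noise_dbc is not None:
        results["c_weighted_noise_ok"] = noise_dbc <= 55.0
    return results

# Example: a hard-floored, high-ceilinged room of about 9,000 cubic feet.
print(check_classroom_acoustics(noise_dba=41.0, rt60_seconds=0.9,
                                room_volume_cubic_feet=9_000))
# {'background_noise_ok': False, 'reverberation_ok': False}
```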

Home and School Environmental Noise Measurements Using Smartphone and Tablet Apps

Because parents, teachers, and practitioners who work with children with hearing loss are very focused on acoustic accessibility of spoken information, all should have sound-level-meter apps on smartphones and tablets to obtain a general idea of the noise levels in all environments. A recent search of sound-level-meter apps available for Apple and Android platforms identified several that could be useful for the measurement of classroom and home acoustics. These apps can be extremely informative, functional, and convenient (Smaldino & Ostergren, 2012). A few appeared to have been designed by audio professionals and could possibly be considered equivalent to stand-alone,
Type II sound-level meters (SLMs). The other apps seem to be designed more for estimation purposes, lacking features such as A-weighting, spectral analysis, and measures of reverberation time.

Many popular sound-level-meter apps can be used to screen the acoustic environments of home, school, and car, and to draw awareness to surrounding noise. Be aware that the smart device's internal microphone may limit accurate noise measures; if precise measures are desired, an external microphone and calibration capabilities should be considered (Serpanos et al., 2018).
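
Because decibels are logarithmic, readings from a sound-level-meter app cannot simply be averaged arithmetically to estimate an average level over time. The sketch below, which assumes the app provides a list of dBA readings taken at regular intervals, averages the underlying sound energy and converts back to decibels (the standard equivalent continuous level, or Leq, calculation).

```python
import math

def leq_dba(samples_dba: list[float]) -> float:
    """Equivalent continuous A-weighted level: convert each dBA reading to
    relative intensity, average the intensities, and convert back to dB."""
    intensities = [10 ** (level / 10) for level in samples_dba]
    return 10 * math.log10(sum(intensities) / len(intensities))

# Illustrative (invented) readings, e.g., one per minute in a living room:
readings = [38, 40, 39, 52, 61, 41, 39]  # the 61 dBA spike might be a dishwasher
print(f"Leq is about {leq_dba(readings):.1f} dBA")  # loud moments dominate the average
```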

Talker and Listener Physical Positioning

One strategy for improving the S/N ratio is to move closer to the infant's or child's ear so the talker's voice takes precedence over all background sounds (Figure 5–1). Yelling from across the room may offer partial audibility (hearing the presence of speech), but not intelligibility (hearing differences among speech sounds), whereas an average loudness of speech very close to the ear facilitates intelligibility of the entire speech signal for the listener. Sitting across from a child does not produce as favorable an S/N ratio as does sitting right next to a child or holding a baby and speaking close to the ear or microphone.

Figure 5–1.  One strategy for improving the S/N ratio is to move closer to the infant’s or child’s ear so that the talker’s voice will take precedence over all background sounds. This child is wearing hearing aids as he listens to his mother who is seated closely beside him, rather than across from him.
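
The benefit of moving closer can be estimated with the standard free-field (inverse square) relationship: the received speech level changes by 20 times the base-10 logarithm of the distance ratio, about 6 dB for every halving of distance, while steady background noise stays roughly constant. The sketch below uses assumed distances for illustration; in a real, reverberant room the improvement is smaller, but closeness still yields a markedly better S/N ratio.

```python
import math

def level_change_db(original_distance_m: float, new_distance_m: float) -> float:
    """Change in received speech level when the talker-to-ear distance changes,
    assuming idealized free-field (inverse square) propagation."""
    return 20 * math.log10(original_distance_m / new_distance_m)

# Moving from across the room (2 m) to right beside the child (0.15 m):
boost = level_change_db(2.0, 0.15)
print(f"Speech level increases by about {boost:.0f} dB")  # roughly +22 dB
```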


A noisy room cannot possibly have an effective S/N ratio. In fact, typical home and building noises (HVAC systems, televisions, computers, printers, lawnmowers, hallway noise, and lawn sprinklers) can interfere with the child’s capacity to receive a clear speech signal.

Understanding speech in noise is one of the hardest tasks for the brain (Kraus, 2017).

If speech is being used as the primary communication vehicle, then educators and family members must be mindful of the potential intelligibility of the speech signal. The environment needs to be as quiet as possible. Turn off televisions, computers, videos, food processors, dishwashers, popcorn makers, electric mixers and blenders, lawn mowers, and radios. Close windows next to a noisy area such as a busy street. Position yourself close to the child’s ear or use an RM. The acoustic characteristics of the learning environment, whether the environment is in the home, car, school, store, or zoo, are learning variables that must be managed to provide the child’s brain with access to clear “data input.” An additional problem caused by noise in the learning environment is listening fatigue experienced by all children, and by children with hearing loss in particular. In fact, cognitive fatigue from mental exertion during listening tasks is very problematic for children with hearing loss in classrooms (Davis et al., 2018; McCreery, 2015). Because the child has to allocate more of their cognitive resources to listening tasks, fewer brain resources are available for higher-level processes such as problem solving and cognitive integration of new information.

Amplification for Infants and Children

Hearing Aids

Hearing aids (also called hearing instruments) are FDA-cleared medical devices that amplify sound for persons with hearing loss, and usually are the initial technologies used to access auditory information for a baby or child who experiences a doorway problem (hearing loss). Hearing aids do not actually correct the hearing loss. Rather, they amplify and shape the incoming sounds to make them fully audible and intelligible to the child's brain without being uncomfortably loud. The hearing aid fit needs to be validated functionally (speech perception) as well as verified electroacoustically (real-ear measurements). There are many excellent resources for detailed descriptions and discussions of both fitting and management strategies for hearing aids, cochlear implants, and RM technologies. The reader is referred to Dillon (2012); McCreery and Walker (2017); Madell, Flexer, Wolfe, and Schafer (2019b); Ricketts, Bentler, and Mueller (2019); Schaub (2009); Wolfe (2019); and Wolfe and Schafer (2015). Other first-rate resources are the Pediatric Amplification Protocol and the Pediatric Amplification Guideline (American Academy of Audiology [AAA], 2013), which address issues such as candidacy, preselection and procedures, hearing aid circuitry and signal processing, electroacoustic verification and functional validation of the hearing aid fitting, hearing instrument orientation and training, and follow-up and referral. Refer to the AAA website for continual updating of pediatric protocols and standards (http://www.audiology.org/publications/guidelines-and-standards).




Also, in 2014, the Alexander Graham Bell Association developed guidelines titled Recommended Protocol for Audiological Assessment, Hearing Aid and Cochlear Implant Evaluation, and Follow-up. This document recommends procedures to assess amplification and to manage children with cochlear implants. See the following website for specifics: https://www.agbell.org/Advocacy/Alexander-Graham-Bell-Associations-Recommended-Protocol-for-Audiological-Assessment-Hearing-Aid-and-Cochlear-Implant-Evaluation-and-Follow-up

New Context for Hearing Instruments There is a new context for hearing instruments in today’s world. In the past, hearing aids may have had a negative connotation — a visible declaration that something was wrong. The small size of the hearing instrument was emphasized to parents with the subtext that the hearing loss should be concealed. In today’s world, “earwear” is a common and popular sight. Children and adults are observed with streamers, cords, buttons, flashing blue devices, cell phones, iPods, and so on dangling from their ears. Hearing aids and cochlear implants attached to ears and heads are no longer noteworthy — no longer a unique proclamation that “something is wrong.” This is the electronic and technological age where gadgets and devices are the rule rather than the exception. Using electronic technology (such as hearing aids, RM systems, and cochlear implants) for information access makes a positive statement in this era.

New Amplification Products — NOT for Children

There are new products on the market that can be purchased directly by the consumer from a store/retailer or online (DiFrancesco & Tufts, 2018); these new devices are NOT to be worn by children! Examples of these new, direct-to-the-consumer devices are over-the-counter (OTC) hearing aids designed for adults with mild to moderate hearing losses and personal sound amplification products (PSAPs) that are marketed for people with normal hearing, even though they look like a hearing aid.

All children with any type and degree of hearing loss must be audiologically evaluated and have their hearing aids fitted by a pediatric audiologist; children cannot wear amplification devices off the shelf. The child’s auditory brain development is at stake!

Hearing Aid Styles and Use Hearing aid styles include the following: n Behind-the-ear (BTE), also called

ear level (the most appropriate for children, Figure 5–2), hearing aids are fitted behind or on top of the ear. An ear hook is attached to a tube that runs from the hearing aid to the entrance of the ear canal, and an earpiece (earmold) or ear tip attaches to keep it in place. There are two types of BTEs: traditional and slim-tubing. Traditional BTE is more robust; it has thicker tubing and more stability and power. Slim tubing is also known as a mini BTE, an open BTE, or an open-ear fit hearing aid, and is most often used for highfrequency hearing losses. The slim tubing is more fragile and cannot handle as much amplification as the larger tubes, so it is more appropriate


Figure 5–2.  Ear-level hearing aids are the most appropriate style for infants and children.

for mild through moderate hearing losses and for older children. n Receiver-in-the-canal (RIC), which is a variation of receiver-in-theear (RITE), fittings look similar to slim-tube open fittings; however, instead of resting behind the ear, the receiver is placed within the ear canal. Because the receiver is not located in the BTE unit but rather is in the ear canal (connected to the BTE component via thin tubing), the BTE unit is particularly small, light,

and inconspicuous and allows a better sound quality at the eardrum. However, RICs are more fragile than traditional BTEs, and they typically are not recommended for infants and young children (Figure 5–3). n In-the-ear (ITE) hearing aids were the first custom hearing aids available. They are larger, more secure because the top of the ear helps hold them in place, and can provide more power. n In-the-canal (ITC) hearing aids are smaller than the ITE version, because




Figure 5–3.  A receiver-in-the-ear (RITE) hearing aid with a noncustom dome coupling (left ) and a RITE hearing aid with a micro-earmold coupling (right ). Neither coupling offers an effective solution for infants and young children, though adolescents and teenagers may be able to use these coupling options as they get older. Source: Reprinted with permission from McCreery, R. W., and Walker, E. A. (2017). Pediatric amplification: Enhancing auditory access (p. 57). San Diego, CA: Plural Publishing, Inc.

they fit in the concha, or bowl of the ear. n Completely-in-the-canal (CIC) hearing aids are made to fit fully in the ear canal and can be minimally visible. The last three hearing aid styles are usually not suitable for children due in part to the inability of these styles to connect to personal RM systems, and also to continual ear canal growth. A fourth style is body-worn, once the rule for children, but now rarely used due to today’s small and powerful BTE hearing aids. However, for infants, sometimes a body-worn hearing aid can best accommodate the usual infant behaviors of lying down and drooling, as well as their small and soft ears and small ear canals. Once sounds are made audible and available to the brain by the hearing aid, the baby or child must then have rich, meaningful, daily listening experiences to learn to make sense of the new incoming auditory information. Sounds must be detected before higher-level auditory

Hearing aids for infants and young children should: (a) provide an amplified signal that can be shaped to accommodate the configuration of the hearing loss and make every speech sound audible and intelligible to the child’s brain, (b) provide amplitude compression to make soft sounds sufficiently audible to the brain while limiting the maximum output of loud sounds to be safe and comfortable, and (c) be free of distortion.

skills and development can occur. Therefore, amplification is an initial step in the educational management of hearing loss, but it is not the only step, and it certainly is not the last step (Bradham & Houston, 2015; Ching et al., 2018; Dettman, Wall, Constantinescu, & Dowell, 2013; Estabrooks, 2006, 2012; Fitzpatrick & Doucet, 2013; Ling, 2002; Madell, Flexer, Wolfe, & Schafer, 2019b; Sininger et al., 2009).


Fitting Hearing Aids on Infants Hearing aids can and must be fitted on infants of any age who have any degree of hearing loss. The earlier an infant’s auditory brain centers are accessed, the better, because of the need to take advantage of the time of maximal neural plasticity and to avoid loss of information; therefore, every effort should be made to obtain a good fitting on very young children. However, obtaining a successful hearing aid fitting on infants less than 6 months of age is far from easy. Following are some practical challenges that need to be considered when fitting hearing aids on infants (Kuk & Marcoux, 2002; McCreery & Walker, 2017; Powers, 2010): n Consider parental acceptance of

their infant’s hearing loss (Moeller, Hoover, Peterson, & Stelmachowicz, 2009; Roush & Kamo, 2019; Sjoblad, Harrison, & Roush, 2004) and their commitment to keeping the hearing aids on the baby all waking hours. Infant temperament, developmental stage, and life situations all influence the ease or difficulty of keeping hearing aids on the infant. n The child may have small ear canals and small overall ear size that can cause the earmold fit to be difficult. When choosing a material for the earmold, vinyl is recommended because it is soft enough for comfort but rigid enough to keep the sound bore open when it is inserted into the canal. Also, the shape of a vinyl earmold can be modified easily and the material accepts adhesive readily to firmly hold the tubing in place. We have yet to obtain ergonomi­ cally appropriate earmolds and hearing aids for infants (Wolfe & Scholl, 2009).

For infants who may need earmolds made every 2 weeks due to ear canal growth, wrapping a Comply strip around standard size #13 tubing can create a functional earmold for a temporary fix. As 3-D scanning and printing of earmolds become readily available in pediatric clinics, making perfectly fitting earmolds will happen easily during an office visit (Chen, Chen, Chiu, Cheng, & Tsui, 2012). n Soft, pliable infant ears can make it difficult to keep the hearing aids from “flopping.” Figure 5–4 shows a variety of hearing aid retention devices that can help keep the hearing aid on the ear. n Consider acoustic feedback; infants tend to be lying down or held close to the parent’s body, commonly resulting in feedback from the hearing aid (Kolpe, 2003), although the feedback reduction feature in digital hearing aids helps in some instances. n Consider money issues and funding availability; hearing aid loaner banks must be established to allow rapid access to appropriate technology. n There may be additional child health concerns. n Consider the possibility of a conductive component to the hearing loss that can change and add to the degree of the child’s overall hearing loss on a daily basis.

Loaner hearing aids, RM systems, and specialized devices to help with hearing aid retention such as kids’ ear hooks, double-sided tape (wig tape), ear gear, headbands, and bonnets can be used to address some of these aforementioned practical issues (Madell & Anderson, 2014). See Figure 5–4.

Figure 5–4.  Examples of hearing aid retention devices (Top L–R: caps and bonnets, oto clips, wig/toupee tape; Bottom L–R: ear gear, critter clips, and headbands). Source: Reprinted with permission from McCreery, R. W., and Walker, E. A. (2017). Pediatric amplification: Enhancing auditory access (p. 118). San Diego, CA: Plural Publishing, Inc.


Digital Hearing Instruments: Description and Terminology There is much discussion about digital equipment for children. The advantage of digital instruments is their flexibility and more precise fit. Digital hearing instruments have a computer-controlled sound processor inside the hearing aid. Traditional analog hearing instruments, which are mostly unavailable today, have a limited number of control options or have preset options that cannot be manipulated. The following terms are often used when discussing and describing digital hearing aids: n Advanced signal processing

automatically adjusts the amount of amplification (gain) provided by the hearing aid according to the loudness of the sound reaching its microphone. So, softer sounds are amplified more than louder sounds and the compression component constantly adjusts the amount of gain available so that softer sounds can be heard more easily, while avoiding discomfort from louder sounds. Different-signal processing configurations can vary the amount of compression, as well as the loudness level at which compression is activated, providing some benefit in noisy situations. Wide dynamic range compression (WDRC) may help children hear conversations at different listening distances. However, it should be noted that some persons, such as those with severe to profound hearing losses, auditory neuropathy spectrum disorder, and central auditory processing disorders (CAPDs) may benefit more from linear, “analog-like” hearing

aids because WDRC does not allow enough temporal resolution of the speech signal (Spirakis, 2011). n The amount of gain (amplification) a hearing instrument provides at each frequency is called its frequency response. Digital hearing instruments with multichannel capability divide the frequency response into two or more channels of control. Each channel can be adjusted independently so that different advanced signal-processing schemes can be applied to each frequency region. n Many digital hearing instruments have memory, allowing them to store more than one frequency response or program, called multimemory capability. Multiple memories allow the child, parent, teacher, or therapist to choose from different frequency responses and signal-processing options by using a remote control or a special app on a smartphone that is available for some hearing aids, or by pressing a button on the hearing instrument — or the hearing aid may have the ability to switch programs automatically. Multiple memories can be useful for children who communicate in many different listening situations or have fluctuating hearing losses. For example, for a child who often has otitis media (fluid in the middle ear), one program in the hearing aid might have added gain (amplification) to allow the child’s brain better access to sound when her hearing loss has been made worse by the addition of middle ear fluid. n Some digital hearing instruments have separate microphone settings (multimicrophone capability) that allow the child to pick up




sound either from a broad area (omnidirectional) or from a narrower listening range (directional microphone), analogous to a camera having a wide-angle and zoom lens. In noisy listening situations, the directional microphone can somewhat suppress sounds that come from behind (usually competing noise), which may improve the child’s ability to hear speech that comes from the front. The assumption is that the most desirable speech comes from the front and the child likely will look at the intended auditory source, but such is not always the case. Some instruments will automatically switch between microphone modes depending on the listening environment.

Infants and children need to be able to overhear social conversations that occur around them. This overhearing or distance hearing is essential for learning the social cognitive skill of Theory of Mind (see Chapter 7). The most important acoustic information may not be in front of the child — it might be to the side or behind. Therefore, if a directional microphone setting is used, it is important that some access remains to sounds from the side and back.

n Some digital instruments are

designed to identify noise and automatically reduce the amplification in the frequency regions where it is detected. This noise reduction capability may provide the child with some improved ease of listening in background noise, but an RM system is still necessary.

n Real-ear measurement is a term used

to describe electroacoustic verification of the hearing aid fit that is determined by using prescriptive targets; it is a measurement of amplified sound in the ear canal through the use of a probe microphone. Real-ear measurements must be obtained to verify the prescriptive hearing aid fitting for all infants and children (Figure 5–5). The terms verification and validation often are confused when used in the context of the hearing aid fitting process. Hearing aid verification techniques primarily focus on ways to confirm that the gain in the hearing aid matches the prescribed targets, typically by using real-ear measurements. On the other hand, hearing aid validation refers to outcome measures designed to assess treatment efficacy (e.g., whether the hearing aids are beneficial). Speech perception testing often is performed to validate the fitting to determine what auditory information is getting to the child’s brain. n Prescriptive targets are formulas or fitting procedures for prescribing gain (amplification) at each frequency for hearing aids (Moodie et al., 2017). There are two main formulas used for determining prescriptive targets: desired sensation level (DSL) and National Acoustic Laboratories (NAL) in Sydney, Australia. DSL is most often used with children because this formula has higher prescriptive gain that allows better detection and discrimination of soft speech sounds (Marriage et al., 2018). n Cortical auditory-evoked potential (CAEP) can be used as a measure of aided functional performance by presenting speech stimuli through


Figure 5–5.  Real-ear measurements are an essential part of every hearing aid fitting (Kahley & Dybka, 2016). Source: Photo provided courtesy of Oticon.

loudspeakers, and recording an electrophysiological response from the cortical region of the child’s brain through electrodes placed on the infant’s head (Krishnamurti & Wise, 2018). The infant is awake, positioned on the parent’s lap, and distracted by quiet toys. Research suggests that the recording of CAEPs can assist in verifying the hearing aid fitting in infants and young children who cannot respond behaviorally (Hall, Zayat, & Aghamolaei, 2019). Families often are overjoyed to view the recorded CAEPs to speech stimuli in their aided infant because they now

have visible evidence that the stimuli are detected at the cortex, showing that conversational speech is indeed audible to the infant through his or her amplification technology. n Articulation index (AI) is an expression of the proportion of the average speech signal that is audible to a person with hearing loss. Typically, the AI is compared in the unaided and then in the aided condition as one way of determining the extent of hearing aid benefit in making the speech spectrum audible (Killion & Mueller, 2010). The AI is calculated by dividing the average




speech signal into several bands and obtaining an importance weighting for each band; a process first described by French and Steinberg (1947). One way to compute the articulation index is to use a “count the dots” audiogram (Killion & Mueller, 2010) that represents the weighted speech spectrum depicted as 100 dots, representing 100% of the

speech spectrum on an audiogram (Figure 5–6). Note that not all speech sounds carry equal audibility importance. High frequencies carry more importance for speech intelligibility; however, children must also have low-frequency access for the critical vowel sounds that underlie the phonological process of reading (Robertson, 2014).

O = unaided right ear thresholds obtained with insert earphones R = Right ear aided sound-field thresholds Figure 5–6.  Plotting the child’s unaided (O) and aided (R) right ear thresholds and counting the dots on this audiogram clearly displays the necessity of wearing the hearing aid during all waking hours. Without the hearing aid, this child has access to only 55% of the speech spectrum (AI = 55% or 0.55). Whereas, when wearing his hearing aid, he has access to almost all of the speech information (AI = 97% or 0.97). Of course, distance from the sound source and noise in the environment will still cause difficulty and will necessitate the added use of an RM system. Babies and children, because they are young, require a high-fidelity speech signal due to their immature neural development and shallow language and knowledge foundation. Source: Reprinted with permission from Killion, M. C., and Mueller, H. G. (2010). Twenty years later: A NEW Count-The-Dots method. The Hearing Journal, 63(1), 10–15.


An audiologist can plot the child’s unaided and aided (or cochlear implant) thresholds on a count-the-dots audiogram and use that graph as a counseling tool to explain to the family the proportion of the speech signal that is audible with and without technology. Viewing and counting the dots provide evidence that their child with a mild to moderate hearing loss, who seems to hear so well unamplified, is really missing a great deal of auditory information when not wearing his or her hearing aid as shown in Figure 5–6. The AI is a very important tool to use in determining the effectiveness of a hearing aid, cochlear implant,

n Frequency lowering is a general term

that describes current technologies that take high-frequency input signals, usually consonant speech sounds, and shift these sounds to a lowerfrequency region where the person has more residual hearing (Bentler, 2010; Smith & Wolfe, 2018b). The idea is that this processing scheme will allow better speech understanding by making high frequencies audible. The concept of frequency lowering has been around for a long time, but better outcomes are now obtainable due to the digital processing chip that is part of most current hearing aids; this chip allows real-time manipulation of the input signal. The two methods most used to control input frequencies are called frequency transposition and frequency compression (Glista et al., 2009). Some children with moderate to profound high-frequency hearing losses benefit from this

and RM system in making the speech signal audible (McCreery & Walker, 2017). McCreery, Bentler, and Roush (2013) reported that children with mild to moderate hearing losses demonstrated greater delays in vocabulary development and experienced poorer outcomes when their aided Speech Intelligibility Index (SII) for conversational-level speech was lower than 0.80; that poor SII did not allow the brain to receive sufficient auditory information. Strategies for improving the SII must be employed for these children (e.g., changing hearing aids or earmolds or transitioning to a cochlear implant if hearing loss progresses).
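
The importance-weighting idea behind the AI and SII can be illustrated with a small calculation. The sketch below is a simplified illustration only, not the clinical SII procedure: the bands, importance weights, and audibility proportions are invented for the example, and real audibility must be derived from the child's aided access to the speech spectrum in each band.

```python
def audibility_index(band_importance: dict[str, float],
                     band_audibility: dict[str, float]) -> float:
    """Sum of (importance weight x proportion of speech audible) across bands.
    The importance weights are assumed to sum to 1.0."""
    return sum(weight * band_audibility[band]
               for band, weight in band_importance.items())

# Invented example values: high-frequency bands carry more weight for
# consonant intelligibility, as described in the text.
importance = {"250 Hz": 0.10, "500 Hz": 0.15, "1000 Hz": 0.20,
              "2000 Hz": 0.25, "4000 Hz": 0.30}
aided_audibility = {"250 Hz": 1.0, "500 Hz": 1.0, "1000 Hz": 0.9,
                    "2000 Hz": 0.8, "4000 Hz": 0.5}

sii = audibility_index(importance, aided_audibility)
print(f"Aided audibility index is about {sii:.2f}")  # 0.78 in this example
print("Below the 0.80 benchmark" if sii < 0.80 else "At or above the 0.80 benchmark")
```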

fitting protocol, and some will do better with a cochlear implant. The effective programming of frequencylowering technologies can be very tricky; not all children benefit, and real-ear measurements must be made to verify the appropriateness of the fitting. Positive outcomes are possible; however, more evidence is needed (Alexander, Kopun, & Stelmachowicz, 2014).

The audiologist should always conduct a simple listening check to evaluate the effect of frequency-lowering processing on sound quality. If the audiologist has normal auditory function and cannot distinguish between “s” and “sh,” it is unlikely the child with hearing loss will be able to do so (Smith & Wolfe, 2018b).

• Some hearing aids and cochlear implants offer a very useful feature called data logging. Data logging allows the hearing aid or cochlear implant to retain relevant usage data that lets the audiologist know exactly how long the hearing aid or implant has been worn, and sometimes, the types of signals received (McCreery, 2013). As part of a family-centered intervention paradigm, children and parents should be made aware of data logging as a way of tracking and promoting device usage. Insufficient wear time is one of the primary reasons for limited progress. Children should wear their hearing devices at least 10 to 12 hours per day (McCreery & Walker, 2017).
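
Data-logging reports differ across manufacturers, but the arithmetic families and clinicians care about is simple: average daily wear time and the number of days that fall short of the goal. The sketch below uses invented daily-hours values for illustration.

```python
def wear_time_summary(daily_hours: list[float], target_hours: float = 10.0) -> dict:
    """Average daily device use and count of days below the wear-time target."""
    average = sum(daily_hours) / len(daily_hours)
    short_days = sum(1 for hours in daily_hours if hours < target_hours)
    return {"average_hours": round(average, 1),
            "days_below_target": short_days,
            "meets_target_on_average": average >= target_hours}

# An invented week of logged use for illustration:
week = [11.5, 10.2, 6.0, 12.1, 9.5, 4.5, 11.0]
print(wear_time_summary(week))
# {'average_hours': 9.3, 'days_below_target': 3, 'meets_target_on_average': False}
```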

Hearing Aids for Infants and Children: Points to Ponder

Following are some issues about hearing aids that parents, early interventionists, clinicians, and teachers should consider. The hearing aid fit needs to be validated functionally (aided thresholds and

The degree of success a child experiences from a hearing aid results from several interactive factors such as the child’s residual or remaining hearing; the quality of the hearing aid fitting; the amount of time each day the hearing aids are worn; the baby’s age at amplification; the quality and quantity of auditory-based therapy; the ability of family, therapists, and teachers to create an “auditory world” for the child; and the use of necessary additional technologies such as an RM system to facilitate the child’s brain access to soft speech at a distance.

speech perception measures) as well as verified electroacoustically (real-ear measurements). That is, one needs to know how the hearing aid actually functions to deliver the speech spectrum to the brain of the baby or child (Madell & Schafer, 2019). Comparisons should be made between amplified and unamplified speech perception capabilities, and also between unaided and aided audiograms for detection data. Although initially it was believed that aided thresholds were useful only for linear hearing aids, aided thresholds can still be very effective for estimating potential hearing for soft sounds, even with wide dynamic range compression hearing aids (Cox, 2009; Dillon, 2012). Please refer to the efficacy discussion at the end of this chapter for more information about how to obtain functional speech perception information. Children require more gain (amplification) for their hearing loss than do adults, and hearing aid prescriptions should reflect that fact. That is, children and adults need different hearing aid prescriptions (Dillon, 2012; Marriage et al., 2018). Our goal for hearing aid fittings is to have the brains of children with hearing loss receive every speech sound at soft as well as at average conversational levels. One tool that has been used for years to help demonstrate the frequencies at which different speech sounds or phonemes are found is the “speech banana” (Figure 5–7). By looking at the audiogram in Figure 5–7 that has a drawing of the speech banana, one can see that the phoneme /m/ has acoustic energy around 300 Hz, and the phoneme /s/ has acoustic energy at about 5000 Hz and above. Note that phonemes are located both at different frequencies and at different intensities. By plotting a child’s unaided and aided thresholds on


Figure 5–7.  This “speech banana” audiogram shows degrees of hearing loss, and the general frequency and intensity placement of English phonemes/speech sounds and environmental sounds. Source: Reprinted with permission of the Alexander Graham Bell Association for the Deaf and Hard of Hearing.

the speech banana, one could compare which phonemes could be detected and which ones will still be missed. The only way to be confident that the child can detect every speech sound at soft conversational levels is to fit technology so that aided thresholds are at the top of the speech banana, at approximately 20 dB HL. Therefore, Madell (2016) recommends an adjustment to the speech banana, called the “speech string bean”

(Figure 5–8). The green string bean seen at the top of the speech banana is the goal for aided thresholds, not somewhere inside of the larger speech banana. Aided sound-field thresholds are one way that the hearing aid fitting can be validated, after real-ear measures are obtained for verification of the prescriptive formula. Aided sound-field thresholds plus speech perception testing are very important validation measures.




Figure 5–8.  This audiogram has a drawing of linked string beans at the top of the shaded area of the general speech spectrum. Aided thresholds should be in the speech string bean range for maximum audibility of the speech spectrum. Source: Reprinted with permission of Jane R. Madell, PhD.
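
Thinking of the speech string bean as a per-frequency target also lends itself to a simple screening check: are the aided thresholds at or better than roughly 20 dB HL across the audiometric frequencies? The sketch below uses invented aided sound-field thresholds for illustration and is not a substitute for real-ear verification or speech perception testing.

```python
def string_bean_check(aided_thresholds_dbhl: dict[int, int],
                      target_dbhl: int = 20) -> dict[int, bool]:
    """Flag, for each audiometric frequency (Hz), whether the aided threshold
    reaches the ~20 dB HL 'speech string bean' goal (lower values are better)."""
    return {freq: level <= target_dbhl
            for freq, level in sorted(aided_thresholds_dbhl.items())}

# Invented aided sound-field thresholds (dB HL) for illustration:
aided = {250: 20, 500: 15, 1000: 20, 2000: 25, 4000: 35, 6000: 40}
print(string_bean_check(aided))
# The high-frequency thresholds here miss the goal, so the fitting (earmold,
# tubing, programming, or candidacy for other technology) would need another look.
```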

Hearing aids fitted on infants require vigilant audiologic monitoring. For infants from birth to 3 months of age, audiologic hearing aid checks are necessary every 1 to 2 weeks. From ages 3 to 6 months, audiologic hearing aid checks should occur every 3 to 4 weeks. Hearing losses can change (typically the change is a progression), as will ear canal size, and families need support for hearing aid management.

If a hearing loss exists in both ears, two hearing aids should be fitted, one for each ear. Binaural fitting is the rule, not the exception. Hearing aids should be worn during all waking moments (eyes open, technology on), at least 10 hours per day (McCreery et al., 2015), within several days of fitting, augmented by the necessary assistive listening devices. It is important to access the brain with clear and relevant auditory information during the early months
and years of maximum brain neuroplasticity and language learning. The latest generation of hearing aids now offer wireless connectivity to RM systems or personal devices such as tablets, computers, or cellular phones. A telecoil could be useful when connecting to a loop system but could also cause distortion due to head orientation and interference. A telecoil is an internal hearing aid component that is activated by an external switch, labeled “T,” or by a remote control to access the telecoil program. The telecoil picks up magnetic leakage from a telephone or an assistive listening device such as a loop system. All of a child’s listening, learning, and communicative environments must be considered when fitting hearing instruments. Hearing aids should be programmable for flexibility regarding frequency response and power requirements. Sometimes initial fittings need to be adjusted as the child becomes a more reliable test taker and listener. It also is possible that a child’s hearing loss could change over time. In addition, fluctuating hearing losses demand adjustable hearing aids. Water-resistant hearing aids (and cochlear implants) should be recommended for all children. They can be very useful at the beach and pool, in the bathtub, in wet conditions, and can be worn during surface swimming; some can even be worn for underwater swimming or diving. In addition, there are hearing aid socks or sleeves made out of absorbent cotton or rubber that can assist in absorbing or repelling moisture. Some children wear these during hot, strenuous activities when they perspire a great deal. A listening check must be performed on hearing instruments at least once a day by parents and/or teachers who have normal hearing (Estabrooks, 2006, 2012). A hearing aid stethoscope should be read-

ily available for this purpose. Hearing aids break, and a malfunctioning hearing aid is of no value to anyone, especially to a baby with a hearing loss. About 50% of amplification devices checked in school programs are found to be malfunctioning at any given time (Edwards & Feun, 2005; Sjoblad, Harrison, & Roush, 2004). Families and teachers need to be coached in performing this daily troubleshooting of hearing aids and other technologies. Troubleshooting means performing various visual and listening inspections to determine whether an amplification unit is malfunctioning and, if so, evaluating the nature and severity of the malfunction. Some hearing aids, through wireless connectivity, can report their status to a mobile (Android or iPhone) app designed for that purpose. Hearing aids are electronic devices, so moisture can cause a hearing aid to malfunction. Therefore, hearing aids need to be kept as dry as possible in a specialized device called a dry and store kit that works in three ways: gentle heating, silica gel dehumidifying, and disinfecting with a UV sterilizer lamp that kills bacteria that can cause earmold itching and ear canal infections. This kit should be used to store hearing aids whenever they are not on the child’s ear. Parents also should order a hearing aid care kit when they purchase their child’s first hearing aids if the kit isn’t already included as part of the hearing aid package. This kit contains earmold and hearing aid cleaning tools, a hearing aid stethoscope to listen to the hearing aid, a battery tester, and OtoFirm (a cream to put on the earmold that can help seal it in the ear canal and prevent feedback — glycerin and mineral oil, but not baby oil, which has fragrance, also work as lubricants). The audiologist should provide informa-




tion, literature, and demonstration of all items in the kit. Battery problems cause many hearing aid malfunctions. Unlike watch batteries, the tiny hearing aid batteries last only a few days to a week in a high-power hearing aid. Battery life depends on the power output of the hearing aid, the added battery drain of engaging the wireless hearing aid features, and how often the hearing aid actually is worn. Currently, there are “mercury free” zinc-air cells. A zinc-air cell is actually a mini fuel cell such as the ones powering spacecraft, except instead of carting big oxygen tanks around, the zinc-air cell pulls oxygen out of the atmosphere. With these new zinc-air cells, one needs to pull off the battery tab and then wait a minute or two for the atmospheric oxygen to permeate the electrolyte. Earmolds are custom-made earpieces that direct the sound from the hearing aid into the ear canal. They are a crucial aspect of an appropriate hearing aid fit (Dillon, 2012) (Figure 5–9). The rule-ofthumb for earmold material selection is hard acrylic material for users who have soft ear texture, such as elderly and longtime earmold users to keep from “pulling” the skin; and soft material for the firm texture of a young ear or new hearing aid user for comfort and sealing. The two main types of earmold materials are rigid dental acrylic (methacrylate) and silicone (vinyl polysiloxane), which is also used for ear impression material. The third main material is vinyl or plasticized polyvinyl chloride (PVC), which, although less expensive, shrinks with usage and absorbs skin oils that may cause the material to smell after a couple weeks of use. Some hospitals and clinics are now using 3-D ear scanners for earmold impressions, and some are even using 3-D printers to manufacture the earmold in the clinic. Some children’s skin may develop an allergic

reaction to the specific earmold material or to the prolonged occlusion of the ear canal. Solutions can include, in addition to medical consultation and management, trying a different earmold material without any pigmentation in the earmold, trying a more open earmold to allow air into the ear canal, or alternating hearing aid use between the ears. Earmolds and tubing can be modified to allow better sound transmission to the ear canal because the earmold and tubing shape the sound coming from the hearing aid — this is basic acoustic science (Pirzanski, 2006). For example, a horn-shaped earmold or flared tubing will enhance high-frequency transmission.

Chasin (2013) explains that the use of flared tubing can add 6 dB to both the gain and output for sounds above about 2200 Hz (closer to 3000 Hz for children) even when using a slim tube fitting. Specifically, the last 20 to 22 mm of the slim tube can be trimmed off and a piece of #13 tubing glued on the end. A RITE tip can fit snugly over the piece of #13 tubing. Therefore, flared tubing should be considered for any hearing aid fitting where high-frequency amplification is desired.

An “open” earmold or a vented earmold allows some of the low frequencies to filter back out of the ear canal as well as letting air in, an important consideration if the child has ear infections or ventilating tubes in their ears. If earmolds do not fit well, feedback can occur. Feedback is that high-pitched annoying squeal that comes from hearing aids. Finally, children’s earmolds may need to be remade often, sometimes monthly, to accommodate growing ears and to obtain a comfortable


Figure 5–9.  Advantages and limitations of a few common earmolds. This information is particularly useful when deciding on an earmold style within the constraints of individual differences, cosmetics, and gain requirements. Source: Reprinted with permission from Ricketts, T. A., Bentler, R., and Mueller, H. G. (2019). Essentials of modern hearing aids: Selection, fitting, and verification (p. 285). San Diego, CA: Plural Publishing, Inc.




fit. A suggestion is that anytime new shoes are needed for growing feet, new earmolds likely will be needed for growing ear canals. Amplification typically should emphasize the high frequencies if the child has any high-frequency residual (remaining) hearing. The higher frequencies carry the acoustic energy necessary for discrimination of consonant sounds, and consonants are necessary for intelligibility of speech.

If a baby or child resists wearing his or her hearing aids, it may be because the child has somehow sensed that the parent is not yet committed to having the child wear the hearing aids as they are a clear and constant reminder of the child’s hearing loss. Professionals need to know that the parents’ ability to cope effectively with the technology may be affected by the parents’ raw emotional state. Sometimes parent groups or professional counseling are helpful — but sometimes just getting the hearing aids on and having the child begin to respond is the best therapy possible. However, sometimes the problem is an audiologically correctable one. The first step is to investigate the following issues and then work with the audiologist to rectify any identified problems: n Earmold fit:  The earmold could be

too small or too large, or it could rub and irritate specific parts of the ear or ear canal. n Hearing aid fit:  The hearing aid could rub or irritate the child’s ear or flop around annoyingly when the child moves. n Hearing sensitivity change:  The child’s hearing loss may have progressed so that current hearing

Low frequencies, however, are also important for suprasegmentals and vowels and should not be deleted. The relationship between low- and high-frequency amplification needs to be evaluated carefully in the applied hearing aid fitting formula (Dillon et al., 2014; Killion & Mueller, 2010; Ling, 2002). Also, always consider flared tubing and horn-shaped earmolds, as mentioned in the previous section, for an additional high-frequency boost.

aid settings no longer provide sufficient amplification. n Ear canal or middle ear infection: The child may have an infection that hurts when the hearing aid is put on. Medical evaluation and treatment would then be necessary. n Hearing aid settings:  The hearing aid could be shorting out; it may present sound that is too loud, too soft, or too distorted; or it may provide inappropriate amplification of the speech spectrum. For example, a 3-year-old with a mild to moderately severe hearing loss did not want to wear his newly fitted hearing aids. He cried and ripped them off his ears about 10 minutes after they were put on. The parents were afraid the hearing aids were too loud. However, a second opinion revealed that the hearing aid settings were too soft, so that, when inserted, the earmolds and hearing aids functioned like earplugs. The child heard better without the hearing aids than he did with them because he was underamplified. When the hearing aids were adjusted to allow more amplification, he not only kept his hearing aids on, he was reluctant to take them off.


Visual deviancy and stigma issues should be considered when fitting amplification devices (Madell & Anderson, 2014; Sjoblad, Harrison, & Roush, 2004). Equipment can be fitted in an attractive, colorful, and interesting manner. In fact, many children use charms, stickers, tube riders, nail foils, shoe charms, and other hearing aid bling to accessorize their earmolds, hearing aids, and cochlear implants for holidays or just for exciting everyday wear. Technology should facilitate, not intimidate, communicative interactions.

Bone Anchored Implants for Children (Also Called Osseointegrated [Osseo] Implants) or Bone Conduction Hearing Devices

The bone anchored implant, also called an osseointegrated (osseo) implant, is an alternative to a traditional bone conduction hearing aid. Terms such as osseointegrated implants, bone conduction hearing devices, and bone anchored implants have all been used to describe this type of implantable device (Wolfe, 2019). Currently, the descriptive term osseointegrated (osseo) implant is most often used. Osseo implants are suitable for children with conductive or mixed hearing losses or SSD (single-sided deafness; complete sensorineural hearing loss in one ear) (Christensen, 2019) (Figure 5–10). The conductive hearing loss could have been caused by a congenital middle or external ear malformation, or could occur as part of a syndrome such as Treacher Collins syndrome (Dillon, 2012). The traditional bone conduction hearing aid is held on the head with a steel spring headband and may be uncomfortable due to the constant pressure of the headband. The osseo implant is different. The osseo sound processor is attached to a small titanium fixture that is surgically

Figure 5–10. This is a schematic showing the osseo sound processor attached to a small titanium fixture that is surgically inserted in the bone behind the ear. Source: Reprinted with permission from Estabrooks, W., MacIver-Lux, K., and Rhoades, E. A. (2016). Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (p. 186). San Diego, CA: Plural Publishing, Inc.




Currently, there are several companies that offer variations of osseo implant technology. Some models are designed with an abutment protruding through the skin that offers a direct connection between implant and sound processor (see Figure 5–10). Another model uses internal and external magnets that attract one to another to offer an invisible link between the implant and sound processor with no intrusion through the skin.

inserted in the bone behind the ear (Christensen, 2019; Tjellstrom, 2005). A titanium sleeve connects the fixture through the skin; therefore, thorough daily hygiene using a special cleaning brush and soap and water is necessary. Through a process called osseointegration, which takes 10 to 12 weeks, the fixture bonds with the surrounding bone (Christensen, 2019). Osseointegration is a direct structural and functional connection between liv-

ing bone and the titanium surface of the implant that provides secure retention of the sound processor (Tjellstrom, 2005). Sounds therefore bypass the outer and middle ear and stimulate the inner ear directly, a process known as direct bone conduction (Dillon, 2012). At this point in time, the minimum age for osseo implantation in the United States is 5 years; however, some children receive the osseo implant at 2 to 3 years of age. For infants, a softband can be used until the child is old enough for the surgery. The softband is a transcutaneous (across the skin) application of the osseo implant. The softband arrangement consists of an adjustable headband and a snap coupler that holds the sound processor to the band and holds the snap coupler to the skin (Figure 5–11). Previous softbands were composed of an elastic band with a Velcro fastener. However, both manufacturers recently changed to a latex-free and Velcro-free type of softband to prevent the allergic reactions and skin irritations that were sometimes seen with previous

Figure 5–11.  An example of how a bone conduction sound processor may be worn on the head with a soft headband connector. Source: Reprinted with permission from Estabrooks, W., MacIver-Lux, K., and Rhoades, E. A. (2016). Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (p. 188). San Diego, CA: Plural Publishing, Inc.


versions of the softband (Beck, 2010; Christensen, 2019). There is also an option that enables the wearing of two sound processors on one softband for a bilateral fit. The softbands come in a variety of color options to appeal to children and their families. Bone conduction devices are verified by attaining aided sound-field thresholds and comparing unaided to aided results. Christensen et al. (2010) demonstrated the benefits of both the softband and osseo implanted systems over traditional bone conduction hearing aids and recommended the osseo implant system as the first choice for intervention of inoperable conductive hearing losses. Moreover, Dun, Agterberg, Cremers, Hol, and Snik (2013) found that children with bilateral, severe conductive hearing losses demonstrated a clear benefit from bilateral osseo implants over unilateral osseo implant in localization and hearing in noise abilities. In addition, the osseo implants can and should be used with an RM system in both home and school learning situations (Christensen, 2019).

Wireless Connectivity

Wireless connectivity is the ability to connect hearing aids to other audio devices such as microphones, phones, MP3 players, personal computers and tablets, televisions, and so on (Nyffeler & Dechant, 2010; Schum, 2010; Wolfe & Schafer, 2015). Some hearing aid manufacturers use “streamers” (devices worn around the neck to allow an interface between the hearing aids and the accessory), and many manufacturers have developed hearing aids that allow a wireless connection without an interface device (Atcherson, Franklin, & Smith-Olinde, 2015). Some

hearing aids also offer a wireless microphone accessory — a small microphone worn by the parent that can pick up the parent’s voice, for example, and send it directly to the child’s hearing aids (Neumann & Wolfe, 2013). In addition, there are now apps on smartphones that can allow control of hearing aids (e.g., volume control) and can indicate when there is a hearing aid malfunction or a low battery. Wireless communication can also allow a pair of hearing aids to stream a full audio signal between them to preserve the natural differences that exist between sounds arriving at the two ears for directionality and spatial awareness, and some hearing aids can also allow the user to hear the voice from a telephone in both ears at the same time (Atcherson et al., 2015; Timmer, 2013).

Connectivity Issues

There are many educational technologies used in today’s classrooms, such as tablets, iPods, smart boards, e-learning, e-books, computers, laptops, streaming of videos, YouTube videos, apps for educational learning, and standardized tests on computers, and every kind of technology will have different connectivity issues and problems (Atcherson et al., 2015; Spangler, 2014). Connectivity, in this instance, means how we couple or connect the child’s hearing aids and/or personal hearing assistance technologies (HATs) to the various educational technologies. Connectivity problems involve finding the appropriate connecting cords, ensuring adequate volume and quality of sound for each device (e.g., the child’s hearing aids or personal RM system) when coupled to an educational device, and eliminating distortion of the signal. Parents, teachers and, of course, the child must have knowledge of connectivity issues and employ troubleshooting strategies when connectivity is problematic.

HATs for Infants and Children: Personal-Worn RM and Sound-Field FM and IR (Classroom Amplification) Systems

Even though hearing instruments are the initial form of amplification for infants and children, hearing aids are not designed to deal with all listening needs. Their biggest limitation is their inability to make the details of spoken communication available when there is competing noise, when the room is reverberant, when the listener cannot be physically close to the speaker, and especially when several conditions exist together. Because a clear and complete speech signal greatly facilitates the development of spoken language and reading skills, some means of improving the speech-to-noise ratio (S/N ratio) must be provided in all of a child’s learning domains (Anderson, 2004; Estabrooks, 2006; Madell & Flexer, 2018; Madell, Flexer, Wolfe, & Schafer, 2019b; Smaldino & Flexer, 2012). HAT is a term used to describe a range of products designed to solve the problems of noise, distance from the speaker, and room reverberation or echo that cannot be solved with a hearing aid alone (AAA, 2011). The term assistive listening device (ALD) is also used to describe these products. HATs enhance the S/N ratio to improve the intelligibility of speech and expand the person’s distance hearing and incidental learning. There are several modes of wireless transmission from the transmitter to the receiver: FM; digital radio frequency (RF); electromagnetic induction; and infrared transmission (IR).

The key component in HAT effectiveness is the fact that the microphone has been moved closer to the talker or other sound source, which results in the improved S/N ratio.

The types of HATs most relevant to the population addressed in this book include personal-worn FM/(digital) RF systems, and sound-field IR and FM (classroom) amplification systems. All HATs enhance the S/N ratio by improving the audibility and intelligibility of the speaker’s voice, and are referred to by the general term of RM technologies. Note that a pediatric or educational audiologist must be involved in the recommendation, fitting, evaluation, and management of all hearing aids and hearing assistance technologies.

Personal-Worn RM Unit

A personal-worn RM unit is a wireless personal listening device that includes a remote microphone placed near the desired sound source (usually the speaker’s mouth, but it could also be a TV or other audio device) and a receiver for the listener, who can be situated anywhere within approximately 30 to 50 feet of the talker. No wires are required to connect the talker and listener because the unit is a small radio that transmits and receives on a single frequency (Figure 5–12). FM and other types of wireless systems improve signal quality and intelligibility far more than any signal processing scheme, such as noise reduction and directional microphones, located entirely within the hearing aid (Dillon, 2012).


Figure 5–12.  Personal RM in the classroom. To facilitate development of the child’s “auditory feedback loop,” the child is speaking into the RM to highlight her own speech, thus allowing her to auditorily monitor and control her spoken language production.

Because the talker wears the RM within 6 inches of his or her mouth, the personal RM unit creates a listening situation that is comparable to a parent or teacher being within 6 inches of the child’s ear at all times, thereby allowing a positive and constant S/N ratio (Nguyen & Bentler, 2011; Smaldino & Flexer, 2012). The amount of S/N ratio benefit will vary depending on a number of factors such as FM ratio (the relationship of the loudness of the wireless signal compared to the loudness of the cochlear implant [CI] or hearing aid microphone signal), whether or not the hearing aid or CI microphone is activated, how well the personal technology is programmed, how appropriately the remote microphone is physically placed on the talker, and how effectively the talker uses the RM transmitter. S/N ratio improvement using the RM can be as much as +20 dB (American Academy of Audiology (HAT), 2011).
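To see why moving the microphone matters so much, the following minimal sketch applies the free-field rule of thumb that speech level drops roughly 6 dB for every doubling of distance. The talker level, noise level, and distances are illustrative assumptions rather than measurements, and real rooms add reverberation, so the actual advantage varies:

    import math

    def speech_level_db(level_at_1m_db, distance_m):
        # Free-field estimate: level falls about 6 dB per doubling of distance.
        return level_at_1m_db - 20 * math.log10(distance_m / 1.0)

    TALKER_AT_1M = 65.0   # assumed average conversational speech, dB SPL at 1 meter
    NOISE_FLOOR = 55.0    # assumed steady classroom noise, dB SPL

    for label, meters in [("hearing aid microphone at 2 m", 2.0),
                          ("remote microphone at 0.15 m (about 6 inches)", 0.15)]:
        speech = speech_level_db(TALKER_AT_1M, meters)
        print(f"{label}: speech ≈ {speech:.0f} dB SPL, "
              f"S/N ≈ {speech - NOISE_FLOOR:+.0f} dB")

In this toy calculation the close microphone turns a marginal +4 dB situation into a strongly positive S/N ratio; in practice the RM signal is mixed with the local hearing aid or CI microphone, so the delivered advantage is typically on the order of the +20 dB cited above.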

Types of Personal RM Systems

Personal RM systems, historically, have employed an analog FM RF signal to transmit the desired signal (Figure 5–13). Many manufacturers now use digitally modulated RF systems, which are similar to Bluetooth® technologies often used in recreational and business products. Currently, the most inclusive terminology for all digital systems is digital RM system.


Figure 5–13.  Types of personal FM and digital RF systems for hearing aids and cochlear implant sound processors. Source: Reprinted with permission from Schafer, E. C., and Wolfe, J. (2019). Remote microphone technologies. In J. R. Madell, C. Flexer, J. Wolfe, and E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 257–266). New York, NY: Thieme Medical Publishers.


Digital RM systems provide a higher level of analysis and control over the signal received by the microphone and then delivered to the listener, thereby improving signal quality in noisier environments (Thibodeau & Schaper, 2014). Furthermore, these digital RM systems typically use a carrier frequency that “hops” from frequency to frequency hundreds of times per second, a feature that makes these systems less susceptible to interference from nearby RF devices (Wolfe et al., 2013). Studies have shown that hearing aid and cochlear implant users receive great benefit from dynamic and adaptive digital RM systems when compared to traditional/classic and dynamic FM systems, especially in higher levels of background noise (Thibodeau & Schaper, 2014; Wolfe et al., 2013).
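The hopping idea can be pictured with a toy model. The channel spacing, band edges, and hop rate below are assumptions chosen only for illustration; commercial digital RM protocols are proprietary and differ in detail:

    import random

    # Hypothetical 2 MHz-wide channels in the 2.4 GHz band (illustrative values).
    CHANNELS_MHZ = [2402 + 2 * k for k in range(40)]

    def hop_sequence(shared_seed, hops_per_second=500, seconds=1, noisy_channels=()):
        # Transmitter and receiver derive the same pseudo-random sequence from a
        # shared seed, so both sit on the same channel for each brief hop.
        rng = random.Random(shared_seed)
        usable = [c for c in CHANNELS_MHZ if c not in noisy_channels]
        return [rng.choice(usable) for _ in range(hops_per_second * seconds)]

    # Interference parked on one frequency (e.g., 2440 MHz) is simply skipped,
    # and any collision on a remaining channel lasts only a few milliseconds.
    print(hop_sequence(shared_seed=42, noisy_channels={2440, 2442})[:10])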

Personal RM Use in School

Personal RM units are essential for a child with any type and degree of hearing loss, from minimal to profound, who is in any classroom or group learning situation (Anderson, Goldstein, Colodzin, & Inglehart, 2005; ASHA, 2000; Flynn, Flynn, & Gregory, 2005; Madell & Flexer, 2018). Listening in a classroom is difficult, even for a child with a mild hearing loss. Parents often report that children with hearing loss arrive home exhausted, because even if the child appears to be doing relatively well in school, listening requires a great deal of work, causing cognitive fatigue (Davis et al., 2018; Hornsby, 2013). An RM system significantly reduces the level of effort the child expends just receiving instructional information, leaving the child with more cognitive capacity for thinking about the new concepts offered by the teacher (Anderson, 2004). There are four basic wireless delivery modes used in classrooms: (a) FM (radio frequency modulated on the 72-MHz band); (b) Roger, a digital RM system (on the 2.4 GHz band); (c) IR (infrared light waves — used for large area classroom systems); and (d) induction loop (magnetic). The most common styles of personal RM technology currently used include one in which the RM receiver is integrated into the ear-level hearing-aid case, another in which a small RM receiver boot or audio shoe is attached directly to the bottom of the ear-level hearing aid or to a cochlear implant speech processor, a third in which the receiver is attached to a streamer, and a fourth in which a neck loop is used with an internal receiver.

Personal RM Use Outside of School

Traditionally, personal RM systems coupled to hearing aids have been used for school-aged children in the classroom setting, but growing evidence suggests infants and toddlers can also benefit from personal RM system use outside of school. Acoustic accessibility is critical in all of a child’s environments. That is, auditory information must reach the child’s brain in all situations for psychosocial, academic, and neural development to occur. A personal-worn RM is therefore critical outside of school, too, such as at sports practices, dance lessons, Brownies, etc. The work of Hart and Risley (1999) clearly demonstrated that the number of words children speak is directly related to the number of words spoken to the child. For a child with hearing loss, hearing enough words is clearly an issue. When the primary speaker is not in close proximity to the child, the child hears fewer words. Parents report that wearing the microphone is a reminder for them to speak more, so an RM system can significantly increase the number of words actually heard by a child with hearing loss (Moeller et al., 1996). Parents also reported that children felt an increased sense of security when they could hear their parents from a distance. RM use in the car, on the playground, or in difficult communication situations provides additional listening and language enrichment time. Nguyen and Bentler (2011) found that children are more likely to practice their emerging spoken language with caregivers when using an RM system, and when the infant or preschooler is not close by, parents still can communicate easily. Benitez-Barrera, Angley, and Tharpe (2018) found that children with hearing loss have increased access to caregiver talk at a distance when using an RM in the home environment. Friedman and Margulis (2017) also found better quantity and quality of parental speech and language when the RM was used at home with toddlers and preschoolers. Parents uniformly report that the systems are easy to manage and that the children seem more attentive and more alert to sound when the RM is being used. Mulla and McCracken (2014) conducted a longitudinal study of the benefits of out-of-school use of RM technology with preschool children who used hearing aids. The families in the study established regular RM use in a range of settings such as at home, in the car, at nursery school, during shopping, and when outdoors. Improvements in auditory language, listening in noise, and distance hearing were identified over time. As children grow older and become involved in other activities, more opportunities for use develop. For example, children who play sports can have a great deal of difficulty hearing on a playing field (Hopkins, Kyzer, Matta, & Dunkelberger, 2014). If the coach uses an RM system, the child will have good acoustic access to directions.

Participating in family conversations at the dinner table and in the kitchen also can be an acoustic challenge, and an RM system with “conference microphone” capabilities could be beneficial. The transmitter will also be useful in ballet class, religious school, Scouts, and on the playground (Hopkins et al., 2014). Friends, as well as adults, can learn to use the transmitter to assist in communication.

In light of the advantages of RM technology use in home environments, and because babies and children require distance hearing for incidental learning, personal RM systems ought to be attached to hearing aids soon after the initial hearing aid fitting. Families require coaching about when and how to use the personal RM unit outside of school environments. Most families use the RM system about 60% to 70% of the baby’s or toddler’s day; the hearing aids are worn every waking moment. The wearer of the RM transmitter could be the mother, father, grandmother, sibling, playmate, caretaker, and so on. Typical RM use situations include the car, stroller, playground, store, dance class, Sunday school, soccer field (baseball, etc.), kitchen, dining room, and so on. Think of the RM as the baby’s “third ear.”

Suggestions for RM Use: To Mute the Remote Microphone, or Not to Mute

When the primary talker is not speaking to the child wearing the RM receiver, the microphone could be set on “mute” to avoid having the child listen to information that is not relevant, or to conversations that may interfere with the child’s current task.


This information conflict would be especially true in a school setting if the child is working on one project and the teacher is speaking about something else with another child. On the other hand, there could be value in purposely letting the child listen in to the back-and-forth exchange between family members and perhaps between the teacher and another student. Overhearing conversation of others is a valuable way that typical children develop Theory of Mind (capacity to understand that others are separate individuals with their own thoughts and emotions) and gain an understanding of the emotional exchanges between people. The challenge for parents and teachers is to determine how to enable listening in to other conversations without compromising the child’s independent work. The point is, by always turning off the RM when the child is not the direct recipient of the conversation, we may be limiting the child’s opportunity to learn from overhearing conversations of others.

Microphone Options: Technology and Style for RM Systems

The primary microphone technologies include the following (Dillon, 2012):

• Directional, where noise/sound coming from some directions is suppressed (usually from the back and sides of the listener in different “polar sensitivity patterns”), whereas good sensitivity to sounds arriving from one direction (usually from the front, where the listener is looking) is retained (an idealized directional pattern is sketched after this list).
• Omnidirectional or nondirectional, where no sounds are suppressed from any direction.

Microphones also come in different styles: lapel/lavaliere, clip-on, cheek, boom, (Roger) pen, and conference. Users of RM systems need to be instructed about the technology and style of the mic and about microphone placement. Ideally, the microphone should be a boom mic worn on the head and always in the correct position in front of the talker’s mouth. If a clip-on or lavaliere microphone is used, it should be clipped in the middle of the shirt a few inches below the talker’s mouth. Clipping the microphone to the side likely will result in a signal that is significantly reduced when the talker turns their head to the opposite side. Care should be taken to clip the mic in such a way that it does not slip under clothes or become tangled in jewelry.
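As a rough way to picture what a “polar sensitivity pattern” means, the sketch below evaluates an idealized textbook cardioid pattern, whose relative sensitivity is (1 + cos θ)/2; real directional microphones only approximate this, so the numbers are illustrative rather than specifications:

    import math

    def cardioid_gain_db(angle_deg):
        # Idealized cardioid sensitivity relative to straight ahead (0 degrees).
        s = (1 + math.cos(math.radians(angle_deg))) / 2
        return 20 * math.log10(s) if s > 0 else float("-inf")

    for angle in (0, 45, 90, 135, 180):
        print(f"{angle:>3} degrees off-axis: {cardioid_gain_db(angle):6.1f} dB re front")

Sound arriving from the sides is attenuated by about 6 dB and sound from directly behind is, in the ideal case, rejected entirely, which is one reason orientation of the microphone toward the talker matters.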

Auditory Feedback Loop: A Concept Facilitated by Using a Remote Microphone

The auditory feedback loop is the process of self-monitoring (input) and correcting one’s own speech (output). Auditory feedback is crucial for the attainment of auditory goals and fluent speech, and is a critical component of motivating early vocalization frequency (Fagan, 2014; Perkell, 2008). That is, a child must be able to hear his or her own speech clearly to produce clear speech sounds (Kraus & White-Schwoch, 2018; Laugesen, Nielsen, Maas, & Jensen, 2009). Improving the signal-to-noise ratio of the child’s own speech can boost the salience of the speech signal (refer back to Figure 5–12). To facilitate development of the child’s auditory feedback loop, place the microphone of the RM system within 6 inches of the child’s mouth when he or she is speaking or reading aloud.




Because speaking and reading are interrelated, speaking into the remote microphone will highlight the child’s speech, thus allowing the child to monitor and control his or her speaking and reading fluency. The small lapel microphone accessory of a cochlear implant can be used in the same way.

CADS, Also Called Sound-Field FM or IR for a Large Area

Sound-field technology is an exciting educational tool that, when installed and used appropriately, allows control of the acoustic environment in a classroom, thereby facilitating acoustic accessibility of teacher instruction for all children in the room (Smaldino & Flexer, 2012). A sound-field system looks like a wireless public address system, but it is designed specifically to ensure that the entire speech signal, including the weak high-frequency consonants, reaches every child in the room. By using this technology, an entire classroom can be amplified through the use of one, two, three, or four wall- or ceiling-mounted loudspeakers (Figure 5–14). The teacher wears a wireless remote microphone transmitter, and his or her voice is sent via radio waves (FM or digital RM) or light waves (infrared — IR) to an amplifier that is connected to the loudspeakers. There are no wires connecting the teacher with the equipment. The radio or light wave link allows the teacher to move about freely, unrestricted by wires.

Figure 5–14.  Sound field or CAD systems. Source: Reprinted with permission from Schafer, E. C., & Wolfe, J. (2019). Remote microphone technologies. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 257–266). New York, NY: Thieme Medical Publishers.


Why Is CADS a More Descriptive Term?

CADS is more accurately descriptive of sound-field function than the terms sound-field amplification or classroom amplification, which some teachers, parents, and acoustical engineers interpret to mean that all sounds in the classroom are made louder. This misunderstanding may give the impression that sound is blasted into a room, causing rising noise levels, interfering with instruction in adjacent rooms, and provoking anxiety in pupils. In actuality, when the equipment is installed and used appropriately, the reverse is true. The amplified teacher’s voice can sound soothing as it is evenly distributed throughout the room, easily reaching every child. The room quiets as students attend to spoken instruction. In fact, the listener is aware of the sound distribution and ease of listening only when the equipment is turned off. The overall purpose of the equipment is to improve detection of all audio signals (teacher and media), by having the details of auditory information continually reach the brains of all pupils.

The use of CADS in a general education arena is a critical part of the goal of Universal Design for Learning (UDL). UDL is an educational framework based on research in the learning sciences, including cognitive neuroscience, that guides the development of flexible learning environments that can accommodate individual learning differences. UDL is intended to increase access to learning by reducing physical, cognitive, intellectual, and organizational barriers to learning, as well as other obstacles. UDL principles also lend themselves to implementing inclusionary practices in the classroom (Gargiulo & Metcalf, 2013).

Who Might Benefit From CADS?

It could be argued that virtually all children could benefit from CADS because the improved S/N ratio creates a more favorable learning environment. Hornsby, Camarata, and Bess (2014) found that all children — those with typical hearing and those with hearing loss — experience fatigue when they have to listen in less-than-ideal listening environments. So, if children could hear better, more clearly, and more consistently, they would have an opportunity to learn more efficiently, with less cognitive effort and less fatigue (Anderson & Arnoldi, 2011; Edwards & Feun, 2005; Flexer & Rollow, 2009; Madell & Flexer, 2018; Smaldino & Flexer, 2012). No one disputes the necessity of creating a favorable visual field in a classroom. A school building would never be constructed without lights in every classroom. However, because hearing is invisible and ambiguous, school personnel may question the necessity of creating a favorable auditory field. Nevertheless, studies continue to show that sound distribution systems facilitate opportunities for improved academic performance (Flexer & Long, 2003; Mendel, Roberts, & Walton, 2003; Smaldino & Flexer, 2012). The populations that seem to be especially in need of S/N ratio enhancing technology include children with fluctuating conductive hearing losses (ear infections), unilateral hearing losses, “minimal” permanent hearing losses, auditory processing problems, cochlear implants, cognitive disorders, learning disabilities, attention problems, articulation (speech) disorders, and behavior problems.




In fact, Vickers et al. (2013) found that CADS leads to improved speech perception, especially when the speech signal has been degraded because of poor classroom acoustics or background noise, and CADS has a particularly large and positive effect for children with lower vocabulary ages. Teachers who use CADS report they also benefit. Many state they need to use less energy projecting their voices; they have less vocal abuse and are less tired by the end of the school day (Blair, 2006). Teachers also report that the unit increases their efficiency as teachers, requiring fewer repetitions and allowing for more actual teaching time. With more and more schools incorporating principles of inclusion where children who would have been in self-contained placements are in the mainstream classroom, CADS offers a way of enhancing the classroom learning environment for the benefit of all children. It is a win-win situation. Keep in mind that about 90% of today’s population of children who are identified at birth with hearing loss will likely go directly into general education classrooms by 5 years of age. Those classrooms must be acoustically ready for them.

A primary value of CADS is that it can focus the pupils and facilitate attention to relevant auditory information. To that end, the effective use of the sound system’s microphone can be a powerful teaching tool. Teachers need to be trained in how to use the microphone to create a listening attitude in the room; the purpose of the microphone is to quiet and focus the room — to guide the children in “where” to listen — not to excite or distract them.

To summarize, CADS facilitates the reception of consistently more intact signals than those received in an unamplified classroom, but signals are not as complete as those provided by using a personal RM unit (Smaldino & Flexer, 2012). In addition, the equipment, especially the loudspeakers, must be installed appropriately, and teachers must be trained in the rationale and effective use of the technology.

Choosing a System

Should an audiologist recommend a personal RM system or a sound-field system or both for a given child? Once it is determined that a child has a hearing problem that interferes with acoustic accessibility, the next step involves recommending, fitting, and using some type of S/N ratio enhancing technology. One thing is certain: we cannot manage hearing by not managing hearing. Preferential seating does not control the background noise and reverberation in the classroom, stabilize teacher or pupil position, or provide for an even and consistent S/N ratio everywhere in the room. In many instances the best listening and learning environment can be created by using both a CADS IR and a personal-worn RM system at the same time. The sound-field IR unit, appropriately installed and used in a mainstreamed classroom, improves acoustic access for all pupils and creates a “listening” environment in the room. The individual personal-worn RM system allows the particular child with hearing loss to have the most favorable S/N ratio. In several parts of the country, many general education classrooms already have infrared CADS in them with two microphones: a teacher-worn microphone and a pass-around microphone for pupils.


Teachers have been in-serviced about how to use the classroom system as a teaching tool to manage the class, create an auditory focus, and deliver direct and incidental instructional information. Of course, children with hearing loss must use a personal RM system for a superior signal-to-noise ratio. Due to the continuous advancements in HAT, it is advisable to consult with the educational or clinical audiologist to determine how the child’s personal RM unit and the CADS best can be used together. That is, it would appear that the simplest solution would be to couple the child’s personal RM transmitter to the designated FM port of the CADS, using an appropriate patch cord. In this instance, the teacher needs to wear only the CADS microphone, and his or her voice will be transmitted to both the child’s personal RM receiver and the classroom loudspeakers. Unfortunately, for some technologies, this coupling causes a poorer signal to be transmitted to the child’s personal RM. Therefore, in the absence of data to the contrary, it is advisable for the teacher to wear two microphones — one for the personal RM and one for the CADS; electroacoustic verification of personal RM benefit should be performed for each arrangement according to HAT guidelines (American Academy of Audiology (HAT), 2011, p. 10).

Small Group Instruction Sound System for a Targeted Area

There are three main types of CADS. The first is a large area system (the type discussed so far in this chapter), where the speech signal is sent to one or more strategically positioned loudspeakers to distribute the signal throughout the room. The second type is a system designed for a targeted area, where the speech signal is sent to a single or to multiple small loudspeakers placed close to the listener or to a specified table or group. The third type is an induction loop system where the speech signal is delivered to the telecoil of the personal hearing aid, cochlear implant, or other hearing device via a magnetic signal generated by a loop of wire or other inductor. One type of targeted CADS uses between two and six speaker/microphone units that allow two-way communication between the table of students and the teacher (see Figure 5–14C). Anderson et al. (2005) found either the personal RM system or the small, targeted sound-field system to be particularly useful for children with cochlear implants. Following recognition and resolution of the equipment-related issues mentioned above comes the realization that, at best, technology is simply a tool, a means to an end. The purpose of HAT, or any S/N ratio enhancing technology, is to facilitate the reception of the primary speech signal. Once children can detect word-sound differences clearly, they will have an opportunity to develop and improve their language skills and to acquire knowledge of the world.

Managing Personal RM Systems and CADS Using the American Academy of Audiology (AAA) Guidelines for Remote Microphone Hearing Assistance Technologies for Children and Youth From Birth to 21 Years (2011)

To ensure the child’s personal amplification devices and HATs have been fitted and are functioning appropriately, the child’s audiologist and intervention team must collaborate.




The AAA’s Clinical Practice Guidelines for Remote Microphone Hearing Assistance Technologies for Children and Youth From Birth to 21 Years (2011) can be very helpful in the collaboration process. These guidelines provide a rationale and comprehensive protocol (selection, fitting, and management of HAT) for devices (hearing aids and cochlear implants) that use RMs of personal-worn systems, CADS, and loop systems. The guidelines also discuss regulatory considerations and qualifications of personnel. Strategies for monitoring and managing equipment are discussed in detail including procedures for checking systems to be sure they are working. Policies for implementing guidelines in the schools are offered. The following information in the HAT Guidelines is particularly useful:

• HATS Table 5.3.3 — Listening environment considerations and school, p. 14.
• HATs Table 5.3.4 — Listening considerations for home and community.
• HATS Appendix B — Remote microphone HAT implementation worksheet; there is an in-school form and out-of-school form.
• HATS Appendix D — Common functional outcome measures used to assess amplification benefit, pp. 44–46.

For Internet access to the full document, please refer to the AAA, 2011 (https://www.audiology.org/publications-resources/document-library/hearing-assistancetechnologies).

Whichever type(s) of HAT are selected (personal-worn, large area CADS, or targeted area small speaker units), the following common-sense tips can facilitate use and function of the technologies:

• Try the equipment. People must experience HATS for themselves; they cannot speculate about benefit.
• Be mindful of appropriate wearing and placement of the RM on the teacher or parent. Microphone placement dramatically affects the output speech spectrum. Specifically, high frequencies are weaker in off-axis positions. A head-worn boom microphone provides the best, most complete, and most consistent signal. A collar microphone, worn around the teacher’s neck, also allows some level of control of microphone distance. If a lapel or lavaliere microphone is worn, it should be situated midline on the chest about 6 inches from the mouth.
• Check the batteries first if any malfunction occurs. Weak battery charge can cause interference, static, and intermittent signals in all RM technologies.
• Audiologists should write clear recommendations for all HATS, specifying the rationale, type or types of S/N ratio enhancing technology needed, equipment characteristics, coupling arrangement chosen, parent and teacher training needed, and follow-up visits, and make sure this information is specified in the child’s individualized education program (IEP) or individual family service plan (IFSP) or 504 plan.


Cochlear Implants

A cochlear implant is a surgically inserted biomedical device designed to provide sound/auditory information to the brains of children and adults who experience severe to profound hearing loss (ASHA, 2004b; Dillon, 2012; Wolfe, 2019). The cochlear implant is very different from a hearing aid. Hearing aids amplify sound, but cochlear implants compensate for damaged or nonworking parts of the inner ear (Niparko, 2004). In fact, the cochlear implant bypasses some of the damaged parts (typically the hair cells) of the inner ear; coded electrical signals stimulate different hearing nerve fibers, which then send information to the brain. Hearing through an implant might sound different than typical hearing, but a properly inserted and programmed cochlear implant can allow many children to develop complete spoken communication skills if they receive appropriate early intervention (Geers et al., 2011; Geers et al., 2017; Madell, Flexer, Wolfe, & Schafer, 2019b; Smith, Wolfe, & Dettman, 2018). The main components of a cochlear implant are the surgically implanted electrode array and the external speech processor that can be worn on the body, behind the ear like a hearing aid, or directly on the head (Wolfe & Schafer, 2015) (Figure 5–15). Wolfe (2019) provides comprehensive information about cochlear implant development, design, insertion, programming, and management with detailed descriptions of each cochlear implant device currently on the market. Auditory information in the wearer’s environment is processed by the components of the cochlear implant as follows:

• First, at the level of the speech processor, the microphone picks up sounds/auditory information from the environment and transmits them to the circuitry inside the speech processor.
• The sophisticated digital circuitry in the speech processor then changes the input from the microphone into an electrical signal that is sent to the transmitter.
• The signal from the processor’s transmitter is then sent, via an FM signal, through the skin behind the ear to the internal electrode array that has been surgically inserted into the cochlea.
• The internal electrode array stimulates auditory nerve fibers, and the signal is sent to the brain.
• Finally, with auditory experience, listening practice, and auditory language learning, the signal sent by the cochlear implant can be interpreted by the brain as meaningful auditory information.

See the following company websites for more specific and updated product information:

• http://www.cochlear.com
• http://www.advancedbionics.com
• http://www.medel.com
• https://www.oticonmedical.com/cochlear-implants

Once implanted, the speech processor is linked to a computer and a custom MAP program is developed for the individual child over several months (Wolfe & Schafer, 2015; Zeng, Popper, & Fay, 2004) (Figure 5–16). The term “MAP” refers to the setting of the electrical stimulation limits necessary for the cochlear implant user to perceive soft and comfortably loud sounds.
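The signal chain described above can be caricatured in a few lines of code. The sketch below is a simplified channel-vocoder-style analysis (band-split, envelope extraction, and compression of each band into an electrode’s threshold-to-comfort range); the sample rate, band edges, and T/C units are illustrative assumptions and do not correspond to any manufacturer’s actual processing strategy or to clinical MAP values:

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 16_000                                        # sample rate in Hz (assumed)
    BAND_EDGES = [250, 500, 1000, 2000, 4000, 7000]    # hypothetical analysis bands

    def toy_ci_analysis(signal, t_level=100.0, c_level=200.0):
        """Band-split the input, extract each band's envelope, and map it onto an
        electrode's T (threshold) to C (comfort) range -- the role of the MAP."""
        reference = np.max(np.abs(signal)) + 1e-12
        channels = []
        for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
            envelope = np.abs(sosfilt(sos, signal))            # rectify the band
            smoothed = np.convolve(envelope, np.ones(64) / 64, mode="same")
            norm = np.clip(smoothed / reference, 0.0, 1.0)     # 0 = silence, 1 = loud
            channels.append(t_level + norm * (c_level - t_level))
        return np.array(channels)   # one row of stimulation levels per electrode band

    # A 1500 Hz tone mostly drives the electrode covering the 1000-2000 Hz band.
    t = np.arange(FS) / FS
    levels = toy_ci_analysis(np.sin(2 * np.pi * 1500 * t))
    print(levels.max(axis=1).round(1))

The point of the sketch is simply that the processor converts the acoustic envelope in each frequency region into a current level that always stays between the child’s measured threshold and comfort limits, which is why careful, repeated programming matters.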

Figure 5–15.  A cochlear implant depicting the following: 1. External speech processor captures sound and converts it into digital signals. 2. Processor sends digital signals to the internal implant. 3. Internal implant converts signals into electrical energy, sending it to an electrode array inside the cochlea. 4. Electrodes stimulate the auditory nerve, bypassing damaged hair cells, and the brain perceives signals. With exposure and practice, the brain learns the meaning of the auditory information that has been transmitted via the implant. Source: Picture provided courtesy of Cochlear Americas, © 2015 Cochlear Americas.


Figure 5–16.  Once implanted, the child’s speech processor is custom programmed and linked to a computer, either at a cochlear implant center, school, or home. Multiple and repeated programming sessions need to be conducted to obtain and maintain an optimal program, also called a MAP, for the child.

All current devices on the market have multiple program capacities that facilitate the MAPPING process and reduce the amount of time spent on programming at the implant center. Children who use cochlear implants are now able to enjoy waterproof or water-resistant processors, wireless streaming, adaptive processing, digital noise reduction, and interaural processing. Please note that if a child is not progressing as expected, or if behavior and performance change in a negative direction, contact your implant center and check the integrity of the device’s MAP program immediately (Wolfe, 2019; Wolfe & Schafer, 2015).

Implant Candidacy

As cochlear implant technology improves and as more outcome data are available, candidacy criteria are changing and becoming more liberal (Grimes, 2017; Tobey, 2010; Zwolan & Sorkin, 2017). General trends are that children are being implanted at increasingly younger ages and they have more residual hearing (Fitzpatrick et al., 2009; Waltzman, 2005; Wolfe, 2019). Why? Because data continue to show that the younger a child is when implanted, the better are the desired outcomes of meaningful spoken language and literacy development due to stimulation of critical auditory brain centers during the early years of neural plasticity (Ching et al., 2018; Dettman et al., 2016; Geers et al., 2017; Nicholas & Geers, 2006; Robbins et al., 2004; Smith, Wolfe, & Dettman, 2018).




Identifying a baby or child as a candidate for a cochlear implant means that he or she should obtain significantly better auditory brain access from a cochlear implant than he or she would from the most optimally fitted hearing aids. Following are generally accepted candidate criteria. There might be some variability between countries and implant centers — and it is the implant team that ultimately determines who is or is not a candidate (ASHA, 2004b; Wolfe, 2019):

• Age is 12 months and older, although more children are being implanted even younger than 12 months.
• Severe to profound bilateral sensorineural hearing loss is present.
• The auditory nerve is relatively intact.
• There is little benefit from hearing aids — although the term benefit is subject to interpretation.
• Prior to approval for cochlear implantation, a 3- to 6-month trial with well-fitted hearing aids typically is recommended in conjunction with intensive auditory therapy. These criteria may be waived or compressed under certain circumstances, such as a history of meningitis with subsequent cochlear ossification.
• There are no medical contraindications to undergoing surgery. The surgery is typically performed on an outpatient basis and takes 2 to 3 hours.
• There is no active middle ear disease.
• An educational setting that emphasizes auditory therapy and spoken communication is provided.
• The family and, if old enough, the child should be highly motivated and have realistic expectations.

Think of the initial use of a hearing aid in infancy (prior to receiving the cochlear implant at about 12 months of age), not as a trial, but rather as a brain activation opportunity. Even if the hearing aid allows only low-frequency information to reach the brain, that baby’s brain will be sparked and the parents will have the satisfaction of talking, singing, and reading with their baby as they begin the process of their baby’s neural and language development.

View the cochlear implant as a value-added technology rather than as a “failed with hearing aids” technology. Even if the child is doing okay with hearing aids, can the implant add to the child’s auditory brain access of spoken information and distance hearing? That is, can the implant facilitate the child’s exposure to conversations and information, both directly and indirectly conveyed? Obtain audiological evidence to answer these questions.

The following criteria are often added to the above in determining who to refer to an implant center for evaluation:

• The family’s desired outcome for the baby or child is spoken language and reading skills consistent with hearing peers.
• Mainstream classroom placement is desired.


• When wearing well-fitted hearing aids, the child is unable to understand speech at normal conversational levels without lip reading. The child can score up to 30% on open-set single-syllable words and still be referred to an implant center.
• Unaided thresholds of 70 to 90 dB HL or worse from 1000 through 8000 Hz are obtained.
• An aided speech intelligibility index (SII) is lower than 0.65.
• An aided speech recognition threshold (the softest level that known speech material can just barely be recognized using a closed-set task; this task focuses on vowels) is 35 dB HL or worse. There is a distinction between hearing aid gain and sound-quality benefit.
• Aided distance hearing is less than 10 feet for /s/ and less than 20 feet for /sh/. Distance hearing with a cochlear implant can often exceed 40 feet in a quiet environment, even for /s/. The greater the distance hearing, the better opportunity the baby or child will have for incidental learning.
• There is a lack of progress on auditory skills development and language — the benchmark is at least month-for-month auditory and speech-language developmental progress with appropriately fitted hearing aids and intervention (Bradham & Houston, 2015).

A child with a severe to profound hearing loss who demonstrates some benefit from hearing aids typically makes rapid and substantial progress with a cochlear implant. Why? Some benefit from hearing aids means that the auditory centers of the brain have been activated and stimulated so that when the superior “keyboard” of the cochlear implant is provided, the child already has a neurological network in place for sound utilization (Chiossi & Hyppolito, 2017).
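For readers who like to see the referral indicators above gathered in one place, the following minimal sketch restates them as yes/no flags. It is purely illustrative — the numbers simply echo the list above, the function name is hypothetical, and candidacy is always determined by the implant team, never by a checklist:

    def referral_flags(unaided_thresholds_dbhl, aided_sii, aided_srt_dbhl,
                       distance_s_ft, distance_sh_ft, month_for_month_progress):
        # Each flag mirrors one indicator from the list above (illustrative only).
        return {
            "unaided 1000-8000 Hz thresholds 70 dB HL or worse":
                all(t >= 70 for t in unaided_thresholds_dbhl.values()),
            "aided SII lower than 0.65": aided_sii < 0.65,
            "aided speech recognition threshold 35 dB HL or worse": aided_srt_dbhl >= 35,
            "aided distance hearing under 10 ft for /s/": distance_s_ft < 10,
            "aided distance hearing under 20 ft for /sh/": distance_sh_ft < 20,
            "not making month-for-month progress": not month_for_month_progress,
        }

    # Hypothetical child, for illustration only:
    flags = referral_flags(
        unaided_thresholds_dbhl={1000: 85, 2000: 90, 4000: 95, 8000: 100},
        aided_sii=0.45, aided_srt_dbhl=40,
        distance_s_ft=6, distance_sh_ft=15, month_for_month_progress=False)
    for indicator, met in flags.items():
        print(("consider referral: " if met else "not flagged:       ") + indicator)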

In this era of constant refinement of medical technologies and treatments and rapidly evolving electronic and communications technologies, consumers expect the most current advice and interventions. One does not want the hip replacement that the mother had 10 years ago, or the cardiac treatment that the uncle had 5 years ago, or the knee surgery that the sister had just last year. Up-to-date information, recommendations, and interventions are required.

Therefore, if a family previously was told (even in the recent past) that their child was not a candidate at that time — because he or she had too much residual hearing, or was getting too much gain from the hearing aid, or had some open-set discrimination, or was too young or too old — they should ask the implant center for an updated opinion (Tobey, 2010; Zwolan & Sorkin, 2017). Neither medicine nor technology is static. As we learn more about this wonderful device and about the possibilities it holds, different decisions will be made about candidacy.

Benefit of Cochlear Implants

Data are very exciting and continue to show that children with multichannel cochlear implants demonstrate significant improvements in all areas of speech perception and production as compared to their preoperative performance with hearing aids if they receive the necessary auditory intervention (Bradham & Houston, 2015; Ching et al., 2018; Geers et al., 2017; Lovett, Elizabeth, Vickers, & Summerfield, 2015; Percy-Smith, 2018; Waltzman, 2005; Zwolan et al., 2004). Benefits range from improved detection of sounds to understanding speech without lip reading and often include easy telephone use. In general, children who benefit the most from cochlear implantation are younger (implanted at 12 months of age or earlier) and/or have received the implant after a shorter duration of deafness. They are in good auditory-based intervention and educational programs and have families who are involved and firmly committed to the training process (Percy-Smith, 2018). An additional key factor is the length of time the child has used the implant. Even children who have used their devices all day, every day for more than 3 years continue to show substantial improvement in listening abilities. In general, poorer outcomes following implantation tend to be associated with the presence of additional disabilities, late implantation, poor device programming, limited wear time, cochlear nerve aplasia, bacterial meningitis, and severe cochlear malformations (Wolfe, 2019). More research needs to be conducted to identify additional predictive factors of poor outcomes and to propose treatments for improving cognition and memory for integrated spoken language functions (Pisoni et al., 2017). Like the hearing aid and HAT technologies discussed previously in this chapter, a cochlear implant is only a tool, analogous to a computer keyboard that improves “data entry” to the brain.

The ultimate effectiveness of the tool, however, is determined largely by the type and degree of aural rehabilitation and education that follow implantation (Bradham & Houston, 2015; Geers et al., 2017; Smith, Wolfe, & Dettman, 2018; Tobey, 2010; Waltzman, 2005; Zwolan et al., 2004). Is the child placed in an environment that emphasizes spoken communication? Are listening skills systematically developed? What are the expectations for the child’s ultimate use of the cochlear implant? Dettman et al. (2016) and Ching et al. (2018), among others, found that a key variable is the age of the baby or child at implantation. Those children who were implanted at the youngest ages, when the brain has the most neuroplasticity, developed the highest skills if they received appropriate auditory-based intervention. Data entry and auditory-linguistic skill building take time, often years. The child who has an implant needs to be taught to interpret and derive meaning from the auditory information that now enters his or her brain via the cochlear implant. The child whose brain has access to an appropriate, auditory-based language program has the best opportunity to benefit from a cochlear implant. The remainder of this book details the necessary intervention strategies to attain a listening and spoken language outcome.

There is now enough evidence that every month that the brain has access to auditory information makes a difference. Because of that, the decision to implant should be made by the time the infant is 6 to 9 months of age so that implantation can occur no later than 12 months of age, and perhaps by 9 months of age.


Bilateral Implants

Just as with bilateral hearing aids, bilateral cochlear implantation offers assistance in difficult listening situations and subsequently will optimize speech, language, educational, and psychosocial development. Research has shown that bilateral cochlear implantation offers substantial benefits to the child (Dorman & Gifford, 2017; Grimes, 2017; Sarant, Harris, Bennet, & Bant, 2014) (Figure 5–17). The value of bilateral implants is based on the value of bilateral hearing in general. Lovett et al. (2015) found that children with an unaided four-frequency pure tone average (PTA) of 80 dB HL or poorer in both ears should be considered candidates for bilateral cochlear implantation. In cases where a four-frequency PTA cannot be measured, the criterion of candidacy should be a two-frequency PTA of 85 dB HL or poorer in both ears. Research on bilateral implants has shown advantages for many children in the following areas (Litovsky, 2010; Litovsky, Parkinson, & Arcaroli, 2009; Sarant et al., 2014; Wolfe, 2019): improved speech perception in noise through the binaural squelch; improved localization of speech and environmental sounds (also called spatial hearing); better access to sound originating from the second CI side; access to binaural summation; opportunities to capture the better ear; and better vocabulary and language outcomes. Team collaboration is critical in determining candidacy for a second implant and in arranging for the appropriate follow-up to maximize functional outcomes for each child.
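A PTA is simply the arithmetic mean of pure tone thresholds at a set of audiometric frequencies. The short sketch below computes one and checks it against the 80 dB HL figure cited above; the audiogram values are invented, and the use of 500, 1000, 2000, and 4000 Hz for the four-frequency average is an assumption based on common audiologic practice rather than a restatement of Lovett et al.:

    def pure_tone_average(thresholds_dbhl, frequencies_hz):
        # Mean of one ear's thresholds (dB HL) at the chosen audiometric frequencies.
        return sum(thresholds_dbhl[f] for f in frequencies_hz) / len(frequencies_hz)

    # Invented audiogram for one ear (dB HL).
    right_ear = {500: 75, 1000: 85, 2000: 90, 4000: 95}
    pta4 = pure_tone_average(right_ear, (500, 1000, 2000, 4000))
    print(f"Four-frequency PTA = {pta4:.1f} dB HL; "
          f"{'meets' if pta4 >= 80 else 'does not meet'} the 80 dB HL criterion for this ear")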

Figure 5–17.  This child is wearing bilateral cochlear implants with ear-level speech processors.




The wear-time schedule for first and second cochlear implants, therapy activities, and therapy structure are dependent upon a number of individual variables, including the following:

• age of the child,
• auditory skill development with the first CI or with hearing aids,
• time elapsed between the first and second implant, and
• family commitment to structured wear time and activities for the second implant.

Following are some observations about binaural implants that have been made so far. The better the child’s auditory abilities (with either a hearing aid or the first cochlear implant) prior to implantation, the faster the child will learn to utilize the new electroacoustic input of the second implant. Presumably, the central cortex that is accustomed to dealing with auditory information more quickly recognizes and makes sense of the new source of auditorily coded input. The older the child is at the second implantation, the longer the adjustment might be, and the longer that therapy tasks focusing on the second implant alone must be used. It appears that the younger the child is at the time of the first and second implants, the shorter the time needed for the second device to “catch up” (Litovsky, 2010). In addition, the shorter the time between implants, the less structured therapy is needed in the second implant-only condition. The degree to which children tolerate wearing the second implant alone varies, and parents should encourage that activity at home.

Bimodal Fittings

Bimodal hearing means the child is wearing a cochlear implant in one ear and a hearing aid in the other. The hearing aid provides acoustic stimulation to one ear, and the cochlear implant provides electrical stimulation to the other ear, hence the term bimodal. The cochlear implant offers the advantage of high-frequency audibility, and the hearing aid provides low-frequency brain access because residual hearing typically occurs in the low frequencies (Shpak, Most, & Luntz, 2013; Wolfe, 2019). When cochlear implants were first approved for children age 2 years and older in 1998, most candidates had very profound hearing losses and received virtually no benefit from continuing to wear a hearing aid in the unimplanted ear (Tobey, 2010). Subsequently, children with more residual hearing have been implanted and questions arose as to the value of wearing a hearing aid in the unimplanted ear. That is, for children who are not using bilateral implants and who have more residual hearing, is there benefit to wearing a hearing aid? Research has demonstrated that there can be substantial value to wearing a hearing aid in the ear contralateral to an implant if that ear has some low-frequency residual hearing, and such use should be encouraged (Cullington & Zeng, 2010; Shpak et al., 2013). It seems that the low-frequency acoustic information below 250 Hz can provide speech perception benefit, assist when listening to multiple talkers, and add value to music appreciation (Cullington & Zeng, 2010; Shpak et al., 2013; Zhang, Spahr, & Dorman, 2010).

Children generally achieve better brain access to auditory information with the use of bilateral cochlear implants, or the use of a hearing aid and a cochlear implant (bimodal use), compared with the use of a single cochlear implant alone.


Electric-Acoustic Stimulation (EAS) and Hybrid Cochlear Implants

EAS occurs by delivering both electric (cochlear implant) and acoustic (hearing aid) stimulation to the same ear via a single device called a hybrid cochlear implant (Fabry, 2008; Wolfe & Schafer, 2015; Zwolan & Sorkin, 2017). The primary purpose of a hybrid cochlear implant is to provide electrical stimulation to the cochlear nerve for mid- to high-frequency auditory information via a shorter electrode array, thus potentially preserving low-frequency acoustic hearing. An additional objective is to amplify the child’s low-frequency residual hearing by using an acoustic component (like a hearing aid) that is built into the cochlear implant’s sound processor.

It should be noted that some people who receive a hybrid device lose their low-frequency acoustic hearing.

Persons with precipitously sloping severe to profound high-frequency hearing losses with good low-frequency residual hearing are likely to benefit from the use of bilateral EAS (Zwolan & Sorkin, 2017). Programming hybrid implants can be complex because the audiologist needs to consider the frequency range for the electrical stimulation of the cochlear implant component and the prescriptive fitting strategy for the hearing aid component (Wolfe, 2019).

SSD

SSD (single-sided deafness) means one ear has normal or near-normal hearing, and the other ear has a severe to profound hearing loss. SSD impairs speech recognition in noise (and often in quiet, too) and spatial hearing abilities. In children, SSD also can negatively affect language, academic, and social development (Bess & Hornsby, 2014; Tharpe, 2008). Refer to Chapter 3 of this book for more information about unilateral hearing loss. There have been multiple technological options for children with SSD, including CROS hearing aids (contralateral routing of the signal), osseo-integrated devices, and remote microphone systems fit to the better ear (Grimes, 2017). Recently, cochlear implantation of the poorer ear has shown promise, especially when the duration of deafness is short and the child is implanted before age 4 (Arndt et al., 2016; Chen et al., 2016; Gifford, 2017). Cochlear nerve aplasia should be ruled out as a cause of SSD before a CI is considered (Wolfe, 2019). More research is needed; however, expanding CI criteria to include the consideration of SSD offers exciting auditory opportunities for children with unilateral hearing loss.

Cochlear Implants and RM Systems

Cochlear implants, like hearing aids, perform better in quiet than in noise. In fact, real-world, typically noisy listening environments may slow language processing in young children with cochlear implants (Grieco-Calub, Saffran, & Litovsky, 2009). Because children must have access to soft speech at a distance in less-than-ideal environments, some sort of RM arrangement should be used. Schafer and Thibodeau (2003) and Anderson et al. (2005) evaluated the benefit of RM use with cochlear implants. They found that either a personal-worn RM coupled (attached) to the implant receiver or a desktop/targeted sound-field system resulted in significantly improved performance in noise when compared to the implant-alone condition (Figure 5–18). Dorman and Gifford (2017) evaluated speech understanding in complex listening environments by listeners fit with cochlear implants and found that an RM provided the largest improvement in speech understanding.

Research suggests that personal digital RF systems result in significantly better speech-recognition performance in high levels of background noise than traditional FM systems and should be used routinely in difficult listening situations (Wolfe & Schafer, 2015).

The personal-worn RM must be programmed to the CI speech processor by the implant audiologist to avoid electrical interference, and to verify the relationship between the level of the RM signal and the environmental signal.

Remote Programming of Cochlear Implants

Telepractice, also called e-health, teleaudiology, telemedicine, or telehealth, is the use of a video conferencing platform to provide audiology services (Steuerwald, Windmill, Scott, Evans, & Kramer, 2018). Because of long distances from implant centers, long wait times for appointments, and complicated family needs, telepractice can be used to provide some audiology services that result in improved access to technology management without increased costs of equipment and staff.

Figure 5–18.  This child has an FM receiver boot coupled to his cochlear implant, and his teacher is wearing an RM transmitter to improve the signal-to-noise ratio in a classroom environment.


Hughes, Sevier, and Choi (2018) found that distance technology can be used successfully to program CI sound processors for young children. Issues to be considered for an effective telepractice include the navigation of legal, regulatory, and insurance protocols; Internet and equipment integrity at multiple sites, including homes; training of practitioners; and identification of outcome measures to validate the treatment paradigm (Steuerwald et al., 2018). The exciting news is that telepractice can facilitate the timely management of cochlear implant function for children all over the world.

Auditory Brainstem Implant (ABI)

An ABI is a surgically implanted electronic device that can deliver auditory information to the brain of a person who is profoundly deaf due to illness or injury damaging the cochlea or auditory nerve, or to a congenital, genetic condition such as cochlear or eighth nerve aplasia. A cochlear implant will not be effective under these circumstances (Gifford, 2013; Wolfe, 2019). The auditory brainstem implant uses technology similar to that of the cochlear implant, but instead of electrical stimulation being used to stimulate the cochlea, the ABI electrode “paddle” bypasses the cochlea and is placed directly on the cochlear nucleus of the auditory brainstem (Figure 5–19). In the United States, ABIs were previously approved only for adults (18 and over) and only for patients with neurofibromatosis type II (NF2); however, in January 2013, the U.S. FDA approved a clinical trial of ABIs for children (Gifford, 2013). ABI surgery has been found to be safe in young children without NF2 (Teagle et al., 2018). In Europe, ABIs have been used in children and adults and in patients with NF2 as well as other auditory complications such as auditory nerve aplasia and cochlear ossification (Eisenberg, Wilkinson, & Winter, 2013).

Figure 5–19.  An example of an auditory brainstem implant. Source: Reprinted with permission from Estabrooks, W., MacIver-Lux, K., and Rhoades, E. A. (2016). Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (p. 184). San Diego, CA: Plural Publishing, Inc.




Some studies report that children using the ABI rely heavily on visual cues, but other practitioners have observed high levels of open-set speech recognition without visual cues in some children who lack an eighth nerve from causes other than NF2 (Cohen & Brill, 2018; Eisenberg et al., 2013). Children must be reliable reporters of acoustic information to manage electrodes and current levels eliciting auditory and nonauditory information (Gifford, 2013). In addition, children with ABIs need to be closely monitored for potential changes in auditory perception and unfavorable nonauditory sensations (He et al., 2018). Research is ongoing with high expectations for brain access of auditory information with this new and evolving technology.

Measuring Efficacy of Fitting and Use of Technology

The fact that a child wears technology does not guarantee that the child is hearing well with the technology. Efficacy can be defined as the extrinsic and intrinsic value of a treatment (Crandell et al., 2005). Even though technological treatments for hearing loss clearly have value, measurements need to be made to show that the individual baby or child in question is obtaining benefit. Benefit may be measured and validated through educational performance, behavioral speech perception tests, direct measures of changes in brain development, and/or functional assessments. If a child is using hearing aids or a cochlear implant, the first step is to know that the primary technology is providing the expected benefit. Even though an RM system will reduce the problems associated with distance and noise, an RM cannot compensate for poor hearing aid settings. Assessing speech perception in a quiet sound room at a comfortable loudness level may provide sufficient information when checking technology in the audiology clinic; however, sound room testing does not provide sufficient information about acoustic accessibility at home, in school, and in after-school activities. To measure auditory accessibility in critical situations, it is important to test hearing aids and cochlear implants in the sound room, at normal and soft conversational levels, in quiet, and in competing noise (Madell & Schafer, 2019). For RM systems to work appropriately, hearing aids first must be correctly selected and programmed (Dillon, 2012; McCreery & Walker, 2017). The hearing aid needs to meet the needs of the specific hearing loss. Benefit needs to be validated using aided thresholds to demonstrate hearing at sufficiently soft levels to be certain that, with technology, children have access to speech information throughout the frequency range, especially high frequencies (Anderson, 2011). Such testing will provide information about modification of hearing aid settings to permit children to hear what they need to hear (Madell & Schafer, 2019). Functional auditory assessments are another important and easily administered efficacy measure (Tharpe, 2004; Wolfe, 2019). Table 5–1 has an overview of functional auditory assessment tools.
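To make the idea of checking access to soft speech across the frequency range concrete, here is a minimal illustrative sketch in Python. It compares hypothetical aided sound-field thresholds with an assumed 35 dB HL soft-speech criterion and lists the frequencies that fail that check. The threshold values, the criterion, and the function name are assumptions for illustration only, not a published fitting or verification protocol.

    # Illustrative only: flag frequencies where aided thresholds suggest that
    # soft speech (assumed criterion: 35 dB HL) may not be fully audible.

    SOFT_SPEECH_CRITERION_DB_HL = 35  # assumed criterion, not a clinical standard

    def flag_inaudible_frequencies(aided_thresholds_db_hl, criterion=SOFT_SPEECH_CRITERION_DB_HL):
        """Return the frequencies (Hz) whose aided threshold is poorer (higher) than the criterion."""
        return [freq for freq, threshold in sorted(aided_thresholds_db_hl.items())
                if threshold > criterion]

    # Hypothetical aided sound-field thresholds for one child (dB HL).
    example_child = {250: 25, 500: 25, 1000: 30, 2000: 35, 4000: 45, 6000: 50}

    print(flag_inaudible_frequencies(example_child))  # -> [4000, 6000]

Flags like these would point toward reprogramming or further testing by the audiologist; they are not a pass/fail result in themselves.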

Equipment Efficacy for the School System

The key to securing technology from the school system is to obtain data documenting the student's need (Ackerhalt & Wright, 2003; Johnson & Seaton, 2012). A multifactored comprehensive evaluation, which is a thorough evaluation by a multidisciplinary team, is necessary to document the need for a child to receive special services. The last category on most multifactored comprehensive evaluation forms is assistive technology needs. For assistive technology to be recommended within any legislative framework, some type of evaluation must be conducted. That is, even though we know that a personal RM helps every child with a hearing loss deal with the problems of distance and noise, we typically need to document that the child in question cannot obtain an appropriate education unless a personal-worn RM system or a CADS is utilized. Specifically, can the child's access to and performance in the general education environment be linked to hearing difficulties in the classroom? What tests can be administered to document acoustic access and listening difficulties in the classroom? Functional assessments can assist in this regard.

Table 5–1.  Functional Auditory Assessment Tools for Infants and Young Children (measurement tool; authors; age range; purpose)

Auditory Behavior in Everyday Life (ABEL) (2002); Purdy et al.; 2 to 12 years. Twenty-four-item questionnaire with three subscales (aural-oral, auditory awareness, social/conversational skills) that evaluates auditory behavior in everyday life.

Children's Home Inventory for Listening Difficulties (CHILD) (2000); Anderson & Smaldino; 3 to 12 years. Parent- and child-reported versions that assess listening skills in 15 natural situations.

Children's Outcome Worksheet (COW) (2003); Whitelaw, Wynne, & Williams; 4 to 12 years. Teacher, parent, and child rating scales of classroom and home listening situations with the amplification device; used to specify five situations where improved hearing is desired.

Early Listening Function (ELF) (2000); Anderson; infants and toddlers, 5 months to 3 years. Parent observational rating scale of structured listening activities conducted over time to record distance hearing.

Functional Auditory Performance Indicators (FAPI) (2003); Stredler-Brown & Johnson; infants through school age. Parent or interventionist assessment of functional auditory skills over time.

Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) (1997); Robbins, Renshaw, & Berry; infant-toddler and older child versions. Structured parent interview scale designed to assess spontaneous auditory behaviors in everyday listening situations.

Listening Inventories for Education (LIFE) (1998); Anderson & Smaldino; 6 years and above. Student and teacher rating scales designed to assess listening difficulty in the classroom.

Little Ears (2003); Kuhn-Inacker et al.; birth and up. Questionnaire for the parent with 35 age-dependent questions that assess auditory development.

Meaningful Auditory Integration Scale (MAIS) (1991); Robbins et al.; 3 to 4 years and up. Parental interview with 10 questions that evaluates meaningful use of sound in everyday situations: attachment with the hearing instrument, ability to alert to sound, and ability to attach meaning to sound.

Parent's Evaluation of Aural/Oral Performance of Children (PEACH) (2000); Ching et al.; preschool to 7 years. Interview with the parent with 15 questions targeting the child's everyday environment; includes scoring for five subscales (use, quiet, noise, telephone, environment).

Preschool Screening Instrument for Targeting Educational Risk (Preschool SIFTER) (1996); Anderson & Matkin; 3 to 6 years. Questionnaire with 15 items completed by the teacher that identifies children at risk for educational failure; has five subscales (academics, attention, communication, class participation, behavior).

Screening Inventory for Targeting Educational Risk (SIFTER) (1989); Anderson; 6 years through secondary school. Teacher questionnaire designed to target academic risk behaviors in children with hearing problems; has five subscales (academics, attention, communication, class participation, behavior).

Teacher's Evaluation of Aural/Oral Performance of Children (TEACH) (2000); Ching et al.; preschool to 7 years. Interview with the teacher having 13 questions targeting the child's everyday environment; includes scoring for five subscales (use, quiet, noise, telephone, environment).


One way an audiologist can document the need for technology and measure efficacy of technology fitting is to perform speech-in-noise testing in the clinical sound room. Typically, audiologists test children's speech perception in quiet at advantageous intensity levels. If phonetically balanced (PB) words are presented only at 40-dB sensation level (SL) under earphones, functional information is not provided about accessibility to typical classroom instruction. Words presented at a loud level in perfect quiet always overestimate a child's real-life auditory discrimination ability. By adding two speech assessments to the basic audiologic test battery, an audiologist in a school or clinical setting can document efficacy by providing functional information about the child's acoustic accessibility to the speech spectrum. Using appropriate speech material, such as the WIPI, BKB-SIN, and QuickSIN, word identification testing should be performed in the sound field in the unaided condition. Then, while wearing hearing aids or cochlear implant(s), these same tests also should be performed in the aided condition, in quiet, and in noise. Appropriate speech stimuli for the child's language level should be presented at the average loudness level at which a child in a favorable classroom environment receives speech, 50 dB HL. Because the child must also hear soft speech, another list of words should be presented at 35 dB HL.
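The aided/unaided, quiet/noise, and 50/35 dB HL combinations described above amount to a small checklist of test conditions. The sketch below simply enumerates that checklist; the data structure, the unaided presentation level shown, and the labels are illustrative assumptions, and the word lists named in the chapter (e.g., WIPI, BKB-SIN, QuickSIN) would be chosen by the audiologist for the child's language level.

    # Illustrative checklist of the sound-field conditions suggested above
    # (not a standardized battery); a score would be recorded per condition.
    from itertools import product

    conditions = [("unaided", "quiet", 50)]  # baseline word identification; level here is an assumption
    conditions += [
        ("aided", background, level)
        for background, level in product(["quiet", "competing noise"], [50, 35])
    ]

    for worn, background, level in conditions:
        print(f"{worn:>7} | {background:>15} | {level} dB HL")

Printing the list as a worksheet like this makes it easy to confirm that no condition (especially soft speech in noise) was skipped during the session.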

Conclusion

The purpose of technology is to efficiently, effectively, and consistently breach the doorway and channel auditory information to the brain. The purpose of acoustic accessibility is to open a pathway or doorway to the brain — all sounds must pass through an environment to reach the child's technologies. Even the best educational programming cannot compensate for poor acoustic access if listening, spoken language, and literacy skills are the desired outcomes. The purpose of auditory intervention is to use the technology and favorable environments to provide the child with redundant and meaningful auditory-linguistic communicative experiences to allow the brain to develop a substantial auditory neurolinguistic infrastructure. The development and programming of the baby's or child's auditory neural centers serve as the basis for spoken language, literacy, academic, and social-emotional growth. Therefore, the fitting and use of hearing aids, cochlear implants, and RM technologies are critical first steps in attaining the desired outcomes of listening, talking, reading, and learning. There needs to be systemwide attention to all links in the Logic Chain — no link can be skipped (Flexer, 2017):

• Brain development >
• General infant/child language development in the family's home language >
• Early and consistent use of hearing technologies >
• Family-focused LSL early intervention >
• LSL early intervention for literacy development.

Part II

Developmental, Family-Focused Instruction for Listening and Spoken Language Enrichment

6 Intervention Issues

Key Points Presented in the Chapter

• The framework for intervention presented in this book is based on the premise that the most effective ingredients of intervention for young children who have hearing loss include beginning intervention (both technological and instructional) when the child is very young, following a normal developmental sequence, and having parents be the primary teachers of their children.
• For children with hearing loss, challenges to the process of learning spoken language can arise from factors such as late wearing of hearing aids or cochlear implants, from severe disabilities in addition to the hearing loss, or from the child living in an environment where the television is constantly on and little interactive talking is happening with the child. However, each of those challenges can be overcome through guided, sustained effort.
• Educational programs for children who have hearing loss differ on several important dimensions. Parents and professionals need to know what those dimensions are so they can ask questions to find out where each program stands on each of the dimensions.
• Preschoolers with hearing loss have widely varying abilities and needs, and their educational programs need to be carefully and individually planned. A full continuum of options needs to be available to provide a program that most appropriately fits the child's needs.


Basic Premises

The goal of this text is to provide a framework for professionals and families who want to help promote spoken language development in their young children who have hearing loss. Ideas about intervention in this book are grounded on the following four premises, which are illustrated in Figure 6–1:

Figure 6–1. This is a conceptual drawing of the four premises underlying the auditory and spoken language intervention described in this book. Given each of the elements on the inner circle, each of the components on the outer circle can be realized.

• The first premise is that it makes sense to begin intervention with children who have hearing loss when they are very young because of the research on particular sensitivity of the brain's neural pathways to auditory input prior to the age of 3 (Ching et al., 2018; Kral et al., 2016; Sharma, Dorman, & Spahr, 2002; Sharma et al., 2004; Sharma, Dorman, & Kral, 2005; Sharma et al., 2005) as well as because of the verbal and academic deficits often seen in children whose audiological and educational management begins later (Geers et al., 2017; Geers & Moog, 1989; Nicholas & Geers, 2006; see Chapters 1, 2, and 7).
• The second premise is that it makes sense to provide the best possible opportunity for young children with hearing loss to learn to listen and talk to keep as much of the world open and available to them as possible (see Chapters 1 and 7). Auditory brain stimulation, specifically auditory stimulation from spoken language, lays the most effective foundation for reading and higher cognitive abilities needed for academics, as well as for social connections and economic well-being after secondary school (Geers et al., 2011; Hayes, Geers, Treiman, & Moog, 2009; Robertson, 2014; Zupan & Sussman, 2009).
• The third premise is that when the goal is spoken language, it makes the most sense to help the parent help the young child learn spoken language through listening. The idea is to maximize the child's development through optimizing the family's capacity to address the child's needs (Rhoades & Duncan, 2017; Suskind, 2015). Parents and professionals working in partnership should result in the most effective intervention with very young children (Bromwich, 1981; Girolametto, Weitzman, McCauley, & Fey, 2006; Law, Garrett, & Nye, 2004; Rini & Hindenlang, 2006; Tannock & Girolametto, 1992; Yoder & Warren, 1998, 2002; see Chapters 9 and 10).
• The fourth premise is that in acquiring spoken language through listening, a child with a hearing loss will generally follow a normal developmental path. The pace may be somewhat slower (or not!), and there may be some asynchronies in learning related to difficulty in hearing particular aspects of language (Ertmer, Strong, & Sadagopan, 2003; Estabrooks, 2006; Estabrooks et al., 2016; Kretschmer & Kretschmer, 2001; Schauwers, Gillis, & Govaerts, 2005; Tye-Murray, 2003; Uchanski & Geers, 2003). Much of the intervention described in this book is based on the child following the normal developmental sequence of spoken language learning. However, carefully preplanned and structured teacher- or parent-directed strategies will need to be used to address asynchronies or missing speech sounds or linguistic items as they are uncovered. The longer the child is without hearing (auditory brain access of spoken information), the more the need for structured, adult-directed instruction (see Chapters 7 and 10).

The primary focus of much of this book is on work by parents who are helping their children learn spoken language through listening. Certainly, by the age of 3 years, when parents and school districts may be thinking of preschool, each child presents unique educational needs to be considered and met. No one single method or educational setting can meet the needs of all children who have hearing loss and their families. The section in this chapter entitled "Educational Options for Children With Hearing Loss, Ages 3 to 6" discusses some of the possible educational choices that could be considered by parents and school districts as they are trying to make decisions. When a preschool setting is being considered, Appendix 5 may be helpful in evaluating its appropriateness. Although the book is primarily directed toward aural habilitationists who are working with parents of young children who have hearing loss, much of the information in this book could be applied to work in small group preschool settings that are specifically designed to meet the needs of children with hearing loss, particularly those that include frequent parent guidance sessions. Appendix 5 lists needs that are common to all children with hearing loss who are learning in group settings. What will be different for each child will be the amount of intensive individual, family-focused, listening and spoken language (LSL) therapy required. That amount should be based on what is needed to get (or keep) the child's auditory and spoken language learning at an appropriate level for realizing the child's potential for cognitive, social, and academic achievement with grade-level peers.

Not every child with severe or profound hearing loss learns fluent spoken language, sometimes in spite of the best efforts of parents and professionals. Reasons for a lack of satisfactory progress in spoken language can include late diagnosis, poorly fitting and/or poorly maintained hearing aids, limited wearing of hearing aids or cochlear implants, an impoverished educational program, lack of appropriate assistive technology for those with no measurable hearing and rare ear anomalies, additional disabilities, or additional general or specific problems in learning. None of these factors by itself precludes the possibility of the child learning to talk, but a combination of several of them can hinder progress. The section in this chapter entitled "Challenges to the Process of Learning Spoken Language" provides more detail on this issue. It should also be noted that insufficient or ineffective parental involvement and/or unknowledgeable or unskilled educators can also present major challenges to the child's learning to listen and talk.

Differentiating Dimensions Among Intervention Programs

There are hundreds of programs, both publicly and privately supported, that provide services based on spoken language development for children who have hearing loss in the birth to 6-year-old age range. These programs may describe themselves as "oral," "auditory oral," "auditory-verbal," or more recently, "LSL." In increasing numbers, professionals in these programs may have acquired the relatively new certification by the AG Bell Academy as Listening and Spoken Language Specialists (either LSLS Educators [LSLS Cert. AVEd] or LSLS Therapists [LSLS Cert. AVT]). This LSLS certification is specialty training beyond a graduate degree in deaf education, speech-language pathology, or audiology and requires many hours of professional development, several years of supervised LSL therapy or teaching, and a passing score on the national examination. See the following website for more information about the LSLS certification program: https://agbellacademy.org/certification/forms-fees-and-faqs/

Although the primary aim of all these early intervention programs and individuals is to help children who have hearing loss learn to listen and talk, close inspection reveals that such programs can vary on several theoretical and methodological dimensions. Parents need to be aware of these distinctions to understand the nature of the program in which their child is enrolled. Additional information for parents who need to make decisions about group placement for their children at age 3 or older can be found in Appendix 5.

1. Programs can differ in the degree to which the parents are considered to be central to the child's intervention.  This can be reflected in important details such as the number of intervention sessions per week; whether or not the session is occurring in the family's home or in a family-friendly therapy room or some other locale; whether the parent is present and participating in the session or observing from outside of the room; whether it is primarily the parent or the therapist who interacts with the child during the sessions; whether most of the intervention strategies are applicable to child and family routines and everyday life outside of the sessions; and whether there are parent group meetings and activities offered outside of the sessions. If the program truly is providing intervention aimed at having the parents be the child's primary language teachers, then there are likely to be only one or two parent guidance sessions per week. The parent is expected to integrate auditory and language-promoting strategies into all that they do with the child during all of the child's waking hours. The teacher or therapist is the guide and the coach during the intervention sessions. What actually causes the child to learn to listen and talk is all of the work and enrichment that the parents and family do in the remaining 83 or so hours of the week that the child is awake and alert.

2. Programs vary in the emphasis placed on normal everyday interactive events as the context within which the child will learn language, versus an emphasis on the child learning language from participating in adult-directed teaching activities.  The contrast is between a normal developmental approach and a more remedial approach. Helping very young children who have hearing loss learn to listen and talk can be most effectively accomplished primarily through a normal developmental process, with "embellishments" that start with full-time wearing of hearing aids and cochlear implants (at least 10 hours per day), and parents employing strategies to help children integrate listening and talking into their way of interacting with the world. However, if the child's hearing loss is detected late and/or if there are other interfering problems that have resulted in the child being very far behind, then the approach will need to include more adult-directed and preplanned activities. Programs differ on the extent to which they employ either approach, both in general and with regard to their work with individual children at different points in their development. Part C of the Individuals with Disabilities Education Act (IDEA, 2004) regarding the provision of services for children in the birth to age 3 range requires that "to the maximum extent appropriate to the needs of the child, early intervention service must be provided in natural environments, including the home and community settings in which children without disabilities participate" (34 CFR 303.12[b]). From this came the concept of using "routines-based" interviews (McWilliam, 2010; McWilliam, Casey, & Sims, 2009) as the basis of determining what should be the focus of the intervention. All of this was an effort to get away from medical/clinical intervention models from the past and make the intervention directly relevant and meaningful to the child and family. On the surface, this seems completely logical. However, the law was written with broad strokes for all children with all types of disabilities. For children with hearing loss, it is crucial that intervention occur in a quiet place if the child is to hear and respond optimally, so that the parent can learn to expect that level of response in other similar situations and will make every effort to keep the child's auditory environment as conducive to listening and learning as possible in the course of normal everyday play and routines. For children with hearing loss, carrying out every session in the child's home may or may not be in the child's or the parent's best interests in terms of their ability to listen and learn.

3. Programs and practitioners also vary in their use of sense modalities in providing spoken language input, both in normal everyday conversations and in more adult-directed activities.  There are three likely avenues of sensory input for spoken language: auditory, visual, and tactile.

   a. A program that emphasizes audition will treat the child's need for stimulation of the auditory centers of the brain as a neurological emergency (see Chapter 1). The provider will have the parent and child come in to fit hearing aids within 2 weeks of an initial contact, teach the parent how to manage the equipment, and provide guidance on how to keep the equipment on the child all waking hours from the very beginning. Professionals in an auditory program will also talk with the parents about ways in which they can help the child learn to listen in the course of everyday interactions, right from the beginning. For example, audition can be maximized in normal interactions by speaking at a relatively close distance to the child, by sitting behind or beside him, or by having the child's visual focus on the toys or activity at hand rather than on the speaker's face.

   b. The visual sense modality in this context refers to whether or not the child is frequently encouraged to watch the speaker's face for cues provided by speech reading — that is, watching the speaker's lips and facial expressions when the speaker is talking. In addition, it is also possible to consciously teach differentiations among or between particular lip positions and their associated sounds (e.g., the difference in lip position and movement of a /fa/ versus a /ba/), although this is now only rarely done with young children.

   c. The tactile sense modality can be employed in a global way through the use of vibrotactile devices that are intended to be worn by the child to aid in perception of speech in normal conversational situations. Vibrotactile devices are also sometimes used in a more limited way for speech and auditory training exercises. With or without a vibrotactile device, touch can be employed in demonstrating features of particular speech sounds where tactile sensation is salient, such as the breath stream of fricatives and plosives.

Some of the possible permutations of the use of sense modalities include programs that advocate the use of audition only; programs that include the use of vision and touch as supplements to audition when needed; programs that always encourage the child to watch the face when someone is speaking (and are thus purposely presenting the input through audition and vision at the same time), with the use of touch also, as needed; and programs that might do any of the above, depending upon the needs and abilities of a particular child.

To the uninitiated, these differences may seem to be insignificant, but within the field of education of children who are deaf or hard of hearing, they have, with some justification, been hotly contested issues. There are strong research- and logic-based reasons to rely primarily on audition as the major means of helping a young child with a hearing loss learn spoken language. The key program elements seem to rest most squarely on the knowledge, persistence, and enthusiasm of the educator and the parents.

Challenges to the Process of Learning Spoken Language

Providing the conditions for a young child with hearing loss to acquire spoken language through listening requires knowledge-driven effort on the part of all the adults in the child's life. Necessary conditions (whether the child is at home, at daycare, or in a preschool setting) include the following:

• Consistent, appropriately excellent auditory brain access to speech all waking hours from a very young age.  The child needs auditory brain access by means of appropriately fit and maintained technology (e.g., hearing aids, cochlear implants, and remote microphone [RM] systems) all waking hours, which means at least 10 to 12 hours of wearing per day (Tomblin et al., 2018). That access needs to be verified through constant adult vigilance about the functioning of the child's equipment and about the acoustic conditions of the child's environment, as well as through appropriate use of personal RM technology in any situation where noise or distance from the speaker is likely to reduce the child's access to the speaker's message.
• Multiple and varied repetitions of linguistic input and linguistic practice in meaningful, everyday interactions with fluent speakers of the target language.  We do not have research that tells us exactly how much talking and interacting is enough. However, there is research on children with normal hearing and with hearing loss who experienced widely varying amounts of verbal interacting with their caregivers, which clearly demonstrates the advantages to the child's own language and literacy growth from being exposed to a rich quantity and variety of adult talk and interaction in at least the first 3 or 4 years of life (Ambrose, VanDam, & Moeller, 2014; Hart & Risley, 1999; Hoff, 2003; Suskind, 2015; Szagun & Stumper, 2012; VanDam, Ambrose, & Moeller, 2012). For the child with hearing loss, access to the verbal messages to and around him is necessarily at least somewhat limited by the very presence of the hearing loss. How much more important must it be to have adults in his or her life who consistently go out of their way to play and interact verbally, providing a rich and varied linguistic experience for the child?

The following circumstances can present additional challenges for the child with hearing loss as he or she learns spoken language:

1. The child is late to full-time wearing of appropriate amplification, either due to late diagnosis or to the parent not being able to keep the aids or implant(s) on the child.
2. The child has disabilities in addition to the hearing loss.
3. The child's environment is often filled with noise (e.g., TV, iPad, computer), and little verbal interaction is directed toward the child.
4. The child's family does not speak the language used by the LSL practitioner, by teachers and students in the child's school, and by the "mainstream" community.

Each of these challenges is addressed in turn.

Late to Full-Time Wearing of Appropriate Amplification or Cochlear Implant(s)

Research has not yet provided a precise age after which the child is too old to acquire spoken language. However, there is research showing that the earlier the child begins wearing appropriate audiological technology, the more quickly the child will progress (Ching et al., 2018; Dettman et al., 2016; Szagun & Stumper, 2012), and the further along the child will be by school age (Dillon et al., 2013; Nicholas & Geers, 2006, 2007; Sharma, Dorman, & Kral, 2005; Yoshinaga-Itano, 2004). Time before the child is wearing hearing aids is lost listening time, and it delays the child's immersion in listening, which is a necessary first step for the child's brain to begin sorting out all of the auditory "noise" into meaning.

A child who is 18 months old before his hearing loss is identified and hearing aids are worn full time has lost 18 months of listening and auditory brain development time, and has missed the time period during which all of his hearing peers were listening and learning spoken language. Now, to begin at 18 months, the child is out of synchrony, out of sequence — his nonverbal cognitive abilities may be those of an 18-month-old, but his ability to understand and express himself using words is not. In addition, to catch up to his hearing peers before kindergarten, the child has to learn at a faster-than-normal rate. The normally hearing 18-month-old child will proceed to do 3.5 years of learning before age 5, whereas the newly aided 18-month-old will have to do 5 years of learning in those 3.5 years to be on an equal footing with his normal-hearing friends. Children with hearing loss have caught up with their normally hearing peers, but it does take committed and consistent effort on the part of all involved.
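A back-of-the-envelope restatement of the catch-up arithmetic in the paragraph above (the numbers come from the text; the "rate" framing is only illustrative):

    \[
    \text{required learning rate} \;=\; \frac{5\ \text{years of language growth}}{3.5\ \text{calendar years}} \;\approx\; 1.4\ \text{times the typical rate}
    \]

That figure is in line with the observation below that preschoolers with appropriate amplification and input often make 18 months to 2 years of measured progress in a single year.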

Unfortunately, the child wearing hearing aids is often working at the further disadvantage of not being able to hear well in even low-level noise, or at a distance of more than about 12 feet from the speaker, which makes normal overhearing of others' language difficult or impossible. (Notes: [1] Noise and distance problems can be addressed, although not totally eliminated, by parent use of personal-worn RM systems, and [2] children with cochlear implants are likely to have better distance hearing than this, when the processor is set for distance listening.) Children with hearing loss are able to learn at amazing rates once they wear amplification technology on a full-time basis (at least 10 hours per day), and once all of the adults in the child's life are providing appropriate, meaningful spoken language in interactions with the child. Achieving 18 months to 2 years of progress in 1 year on standardized language measures is not uncommon for preschoolers with hearing loss who have appropriate amplification and input. Given full-time brain access to sound, parents and other adults in the child's life providing auditory-linguistic input that is rich in vocabulary and structural complexity becomes the most important factor for the child's language development (Szagun & Stumper, 2012). Even children with mild or unilateral hearing loss are known to have speech and language-learning problems with greater incidence than children with normal hearing (Delage & Tuller, 2007; Lieu, 2004; Lieu, Tye-Murray, & Fu, 2012; McCreery et al., 2015; Tharpe, 2008). Because today's technology can nearly always bring the speech signal to the child's brain more intensely and with great fidelity, it is imperative that parents explore the audiological technology that may be of help to their child. Parents (and all significant people in the child's life) then need to insist that the child wear the amplification all waking hours as soon as it is prescribed and fit (Figure 6–2).

The challenges to keeping hearing aids or cochlear implants on young children vary, depending on the age at which the child is just beginning to wear the technology. Unlike some adults, children do not need gradual acclimation to having access to sound — they need auditory information to their brains as soon as possible, at least 10 hours per day.


Figure 6–2. There can be challenges in keeping hearing aids on a child’s head at any age, but the greatest challenge has been surmounted when all of the adults in the child’s life are convinced that the child needs to wear the equipment to learn to listen and talk.

The parents' job is to make sure the technology goes on as soon as the child wakes up and only comes off when the child is taking a bath (unless the technology is water-resistant, which is highly recommended) or sleeping (although some children like to wear their technology when they are sleeping, too). It is essential that the child has access to all of the talk around them from the moment they wake up until the moment they go to sleep. Just as parents would not allow the child to go around without a diaper or to run in the street, they make sure that the child is not running around without his or her "ears" — without the technology that will provide the child with auditory access to develop the auditory centers of the brain during all of the experiences of childhood.

With tiny babies, the problems usually are those of large (adult) fingers getting used to putting earmolds into tiny ears, and knowing that the pinna (external ear) is pliable and can be pulled a bit without causing any harm to the child. The audiologist and/or LSL practitioner will demonstrate techniques for identifying which earmold is which (right versus left), holding the earmold in a certain way, and then inserting it while gently and firmly pulling the pinna back and up and then making sure that the earmold is seated properly. It is awkward at first, but with practice, inserting an earmold becomes second nature. Another challenge can arise from the fact that the child frequently is held close to the body (e.g., during feeding), and one ear and earmold can be covered or can emit feedback simply from the positioning. Sometimes shifting position slightly can take care of the problem. In addition, most current digital hearing aids have feedback suppression circuits that help reduce feedback. (See Chapter 5 for additional information about putting hearing aids on very young babies.)

Parental belief that the audiological equipment is helping the child has key importance to this process. If at all possible, the audiologist and LSL practitioner need to demonstrate the child's improved hearing when the child is wearing the equipment at the first appointment. But it is one thing to know intellectually that the technology helps your child, and another to be able to be "happy" about having your child wear it all the time. That is, it is important to remember that most new parents are likely to be much less enthusiastic about the wonders of today's technology than the professionals. Hearing loss and hearing equipment are new to the parents, and when the child is wearing the equipment, it is proof that their child has a hearing problem. The equipment is likely to bring on questions from family members and from strangers that the parent may only be beginning to be able to answer for herself or himself.

With children who have the dexterity to take off hearing aids or cochlear implants, parents may need to cheerfully keep putting them back on time after time for the first week or so. Distracting the child with toys or with another child or adult interacting with them, as well as putting something else into the child's hands, are additional strategies that often help. However, if the child learns that taking off the equipment brings mommy or daddy running and upset, it may increase the frequency with which the child takes off the equipment. For that reason, simply saying, "Uh-oh. We need to put this back on! Here you go," and smiling calmly is likely to be a better strategy. That may be easier said than done the 900th time, so be easy on yourself. It is better to leave the hearing aids off for an hour and give yourself a break until you can be that cheery mom or dad and have someone else distract the child while the equipment is being put back on.

With children who are 18 months or older, the ease with which the child adjusts to the equipment is likely to be similar to the ease with which he or she adjusts to any change in daily routines. Parental strategies for managing the child's behavior also will come into play. Again, maintaining a cheerful, matter-of-fact demeanor is likely to be helpful, as is distracting the child's attention with favorite toys or videos. The stipulation may need to be that the toys or videos only get played with (or only work) when the equipment is on the child's head.

Nearly all parents will want their children to have "alligator clips" with one end looped around the child's ear hook and the other clipped to the back of the child's shirt. If and when the child removes the equipment, it will dangle, rather than fall and be lost. Dental floss and safety pins can provide the same function. In addition, there are other means of keeping hearing aids in small ears, which may be helpful, including "huggies" (small plastic tubing that goes around the child's ear to provide more stability), headbands, and hats. See Figure 5–4 in Chapter 5 for a photo of various types of "ear gear." The following websites may be helpful in locating devices for helping to keep the technology on the child's head: https://www.gearforears.com, https://www.etsy.com/shop/hearinghenry, www.silkawear.com, or https://www.etsy.com/market/hearing_aid_headband

Disabilities in Addition to the Child's Hearing Loss

A child with a hearing loss may have hearing loss alone, or may have disabilities in addition to the hearing loss. According to Borders (2014), the percentage of deaf or hard of hearing students with additional disabilities, of varying severity, ranges from 40% to 50% across the years from 1999 to 2012 in data reported by the Gallaudet Research Institute. The other conditions include problems such as cerebral palsy; visual problems (including blindness); learning disabilities; developmental delays; cognitive problems; orthopedic disabilities; attention deficit disorder (ADD)/attention-deficit hyperactivity disorder (ADHD); traumatic brain injury; behavioral problems; problems with the heart, kidney, or liver; or other conditions. The data from Gallaudet reported in Borders (2014) do not indicate the severity of the additional conditions, which can vary from mild to very severe.

Having clear auditory brain access to speech, and having all of the adults in the child's life provide appropriately enriched linguistic input, is a positive situation for any child. For a child with multiple, very severe disabilities, having auditory brain access to sound may play a pivotal role in connecting the child to the environment. The child may begin to alert to some of the sounds that surround him, and may begin to smile or be more animated when adults talk with him — which further connects the adults with the child, and creates a cycle of positive interactions for both the child and others in the child's life. Sometimes the child's hearing and talking abilities become strengths relative to the other disabilities.

The LSL practitioner and audiologist will need to work closely with the other professionals involved in the child's life and intervention. Each of the professionals needs to bring his or her information and expertise to the table, and to collaborate in providing a cohesive program for the child and family. LSL practitioners have much to offer with regard to ways of promoting an optimal auditory environment for the child, auditory strategies to integrate into any other intervention that is being carried out, and appropriate spoken language to use in particular situations. This means, for example, that the occupational therapist, physical therapist, and nurse need to wear the RM, be comfortable in reinserting the child's hearing aids if needed, know that it is incredibly important to talk to the child about what they are doing, know what the materials or toys are that they are using, and pause for the child's responses. Similarly, the LSL practitioner needs to be aware of positioning the child appropriately for play and interacting, to understand the importance of a sensory diet for some children, and to have snipping scissors and pencil grips for activities where they are needed by other children. Protecting perceived professional turf is unlikely to be productive, whereas respectful collaboration is likely to result in a more cohesive picture for the family of what needs to be done and how.

The presence of other disabilities may create additional challenges to practicalities such as keeping the hearing aids on in the presence of other equipment, the existence of ever-present noise from other life-saving equipment, and the difficulty of continuing to persist with encouraging listening and spoken language when progress is very slow. LSL practitioners need to be especially sensitive to these issues and ready to address them creatively and with compassion, along with other members of the child's team. Regular cotreating with other professionals is an excellent way for cross-fertilization of ideas and techniques that enrich the practice of all of the adults working with the child and family, and keep everyone using similar language and similar strategies, which is much less confusing for everyone involved.

Ongoing, Persistent Noise in the Child's Learning Environment

Children who are listening through technology need a quiet environment around them to have appropriate auditory brain access to the spoken language around them. To begin with, it is important to recognize that many homes and preschools present extremely challenging listening environments, in which the speaker's message is very difficult to access because of noise in the environment (whether the noise is from ongoing TV, music, and/or multiple simultaneous speakers) or because the speaker is not speaking close by to the child. This creates a very poor speaker-to-noise situation, which is known as a poor signal-to-noise ratio (S/N ratio) (Anderson, 2001; Smaldino, 2004; Smaldino & Flexer, 2012).

There is now technology to help parents and professionals measure the amounts of noise in the child's listening and learning environments throughout the day. This is the Language Environment Analysis (LENA) technology, which was developed to measure the amount of adult and child vocalizing and talking, but which also includes measures of electronic and other kinds of noise (including talk at a distance or overlapping speakers, which are both likely to be experienced as noise by the child). Researchers using the LENA have demonstrated that there are clear positive effects on children's language development in households where the amount of electronic noise (such as that from television and electronic toys) is low (Ambrose, VanDam, & Moeller, 2014).

As stated in Chapter 5, children with normal hearing need a quieter environment and speech that is louder than what is needed for adults to understand and learn, and individuals (both children and adults) with hearing loss need an even better S/N ratio. Normally hearing adults need the signal to be about twice as loud as the noise (an S/N ratio of +6 dB) to understand, whereas children with hearing loss need the signal to be about 10 times louder than the surrounding noise (an S/N ratio of +15 to +20 dB) (Anderson, 2004; Crandell, Smaldino, & Flexer, 2005; Smaldino, 2004; Smaldino & Flexer, 2012). Without this kind of positive S/N ratio, a child may know there is talking going on, but will not be able to understand what is said because the speech will be blurred or "covered up" by the noise.

Since the goal is to stimulate the auditory centers of the child's brain during crucial times of greatest neuroplasticity, the signal must be as clear as possible in all of the young child's potential learning environments. Furthermore, since the highest auditory neural centers of the brain (the secondary auditory association centers of the cortex) are not fully developed until a normally hearing child is about 15 years of age, the period when the child with hearing loss needs an excellent S/N ratio (i.e., a quiet learning environment) extends throughout the child's primary and secondary school years. See Chapter 5 for further discussion of signal-to-noise needs, as well as room acoustics. The essential message is that the environment needs to be as quiet as possible, and the speaker close to the child, preferably using RM technology so that the child has appropriate auditory brain access to the speech signal (Thibodeau & Schaper, 2014; Wolfe et al., 2013). Within the context of the research cited above, statements occasionally heard such as "the world is a noisy place, so [he or she] needs to learn to listen in noise" reveal their absurdity. Expecting the child with hearing loss to learn to listen and understand in noise, day after day, is like expecting a child to learn to read in the dark.
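For readers who want to see where the "twice as loud" and "10 times louder" figures come from, the standard decibel-to-pressure-ratio conversion (general acoustics, not a formula from this chapter) gives:

    \[
    \text{S/N (dB)} = 20 \log_{10}\!\frac{p_{\text{signal}}}{p_{\text{noise}}}, \qquad
    +6\ \text{dB} \Rightarrow \frac{p_{\text{signal}}}{p_{\text{noise}}} = 10^{6/20} \approx 2, \qquad
    +15\ \text{to}\ +20\ \text{dB} \Rightarrow 10^{15/20}\ \text{to}\ 10^{20/20} \approx 5.6\ \text{to}\ 10.
    \]

Here "louder" refers to the sound pressure ratio between the talker's voice and the background noise at the child's ears, which is what an S/N measurement captures.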

Multilingual Environment

Being multilingual in today's world can be an important asset, and in some families is essential for communication with all of the members of the family. Individuals with hearing loss certainly are capable of becoming fluent in more than one spoken language (Robbins, Green, & Waltzman, 2004; Thomas, El-Kashlan, & Zwolan, 2008). Not surprisingly, consistent auditory brain access to both languages from a young age seems to be a key factor. With today's technology, early consistent auditory access is a reachable goal for all children with hearing loss, so bilingualism also is a possibility. The route toward bilingualism may be different for different children, however. There certainly are some exceptional children with profound hearing loss who, at the age of 3 or 4 years, have made significant gains in learning more than one language. However, expecting the child to learn two languages simultaneously may be less realistic if the child's hearing loss is identified late, or if there are multiple disabilities or other challenges.

When the goal is for the child to learn a first language, it is logical that first-language learning is going to be faster and more efficient if all energies are devoted toward helping the child learn one language to a complex level before introducing a second one. There are only about 12 to 14 waking hours in most young children's lives, and most hearing children are using all of those hours to learn only one spoken language. When the quality and quantity of listening are reduced because of the child's hearing loss, it would seem to become even more important for that reduced input to the child to be consistently in one language. The situation may be analogous to a situation where a child is being taught to play both the flute and the guitar at the same time. It is possible to do, of course, but the amount of practice time for each will be reduced by the simple fact that there are only 12 to 14 waking hours in the child's daily life, with many other activities also occurring during that time.




The choice of which language will be the child's first language should be entirely up to the child's parents when the child is a baby or toddler. Factors that can weigh into the decision include the following:

• Fluency and knowledge of the adults in the child's home in the languages being considered. The adults will want to be able to provide both simple and expanded ideas in the language selected.
• Comfort of the adults in using each of the languages being considered. Emotional/love words are important for establishing adult–child bonding and connections.
• Language of instruction in the child's school environment, and number of years before the child will need to be learning in the school language.

None of the challenges mentioned here is insurmountable, but when multiple languages are being learned, the process is likely to require additional time and sustained effort on the part of the child and all the adults in the child’s life.

Educational Options for Children With Hearing Loss, Ages 3 to 6

In the United States, each state now has early intervention programs under Part C of IDEA that provide services to children with hearing loss and their families from birth until the age of 3. Starting at age 3, children with special education needs become the financial responsibility of the school district for the town in which the child resides. This change in funding agency necessitates a transition process from one agency (Birth to Three or early intervention) to another (the school system).

Early intervention programs are set up to educate and support families in their efforts to address their children's developmental needs. For children with hearing loss, early intervention programs address the child's hearing needs and the child's language-learning needs, in addition to other needs as they are identified. Many, if not all, of the sessions occur in the child's home. In contrast, school systems are set up to educate children in classrooms in schools, and with very few exceptions, are set up to do so without much involvement from parents in the actual daytime educational process. In school, teachers teach the children rather than instruct the parents in how to address their child's developmental needs, as is done in Birth to Three early intervention programs. School systems generally are not equipped, nor are they funded, to provide educational services in ways other than through classroom instruction. Not surprisingly, the expectation often is that most children with special education needs will be educated in the district's existing special education preschool classes, which may have some typical peers, but are primarily classes for all preschool children with special needs, regardless of disability. Readers in other countries may not have to deal with this transition from one agency to the other when the child turns 3 years old, but they are very likely to have similar issues to consider when trying to develop an appropriate educational program for a preschool-aged child with hearing loss.


According to the U.S. Department of Education, children with hearing loss have unique communication needs (Federal Register, 1992; Individuals With Disabilities Act, 2004). Those unique communication needs are to be the central focus in decision making regarding each child’s educational placement, and central to the determination of what constitutes the least restrictive educational environment for a child who has hearing loss. For a child with hearing loss who is learning spoken language, the fundamental needs at school age (age 3 years and older) are related to the child’s need for auditory brain access to spoken language, as well as to the child’s need for individualized

At the age of 3 years, children with hearing loss have widely varying profiles with regard to hearing loss and communication needs. The decision to provide intensive specialized support for a child whose language lags far behind that of same-age peers may be easier to make than determining the level of support for a child whose language scores on standardized tests appear to be within the normal range (which is occurring with increasing frequency due to infant hearing screening and the availability of excellent audiological technology and improved pedagogy). However, for children who appear to be doing very well, it must be considered that the reason that the child’s language scores are within the normal range may well be because of all of the professional and parent work that has taken place during the birth-to-three period — and the amount of learning in the 3- to 6-year-old period is huge. For

spoken language stimulation. The preschooler with hearing loss needs a professional with the appropriate knowledge and experience to provide appropriate auditory/linguistic stimulation as well as to guide the parents in providing appropriate auditory/linguistic stimulation during all waking hours outside school. Even by the age of 3, children with hearing loss vary widely in factors such as the age of identification of the hearing loss, the degree of hearing loss, the age at which they began full-time wearing of appropriate hearing technology, and the quantity and quality of spoken language exposure. Each child’s ongoing educational needs must be individually considered in determining services, as dictated by

the child with hearing loss to continue to make the same gains in the two remaining years before kindergarten, it is crucially important that appropriate supports continue to be provided. Language learning is exponential between age 3 and 6 years for children with normal brain access to sound 24 hours of the day. Relative to their normally hearing peers, 3-year-old children with any degree of hearing loss continue to be at a marked disadvantage for spoken language learning because their brain access to auditory/linguistic input in the environment is significantly reduced in both quantity and quality. Removing all supports until the language of the child with hearing loss shows statistically significant deficits follows a failure-based model. Withholding educational supports cannot be justified in terms of the ongoing effects of the permanent sensory deficit on the child’s learning.



federal law. Clearly, there needs to be an array of options available, not just the existing special-education preschool. Some of the educational options that may be appropriate for individual children with hearing loss at age 3 include the following: n Child and family receive auditory-

verbal therapy (which includes audiological support services), with the parent deciding whether or not to send the child to a regular nursery or preschool for socialization experiences. If the child goes to regular nursery or preschool, classroom educational audiology support services are needed, as well as instruction and support for the regular preschool staff. n Child with appropriately aided hearing, excellent listening abilities, and language abilities apparently on age level, attends regular nursery or preschool setting, with RM equipment provided and regularly monitored by an educational audiologist, and with the regular preschool staff receiving consistent and ongoing consultation from knowledgeable professionals regarding the equipment and the child’s listening and language-learning needs. That same professional will monitor the child’s language learning throughout the year to be certain that the child’s language learning is progressing normally. n Child attends a specialized inclusive LSL preschool for children with hearing loss, which has small groups of other children with hearing loss and at least an equal number of typically developing peers, with


a knowledgeable LSL practitioner (teacher of the hearing impaired or speech-language pathologist) all day long, as well as an early childhood instructor, weekly parent guidance sessions, and audiological support services. When parents cannot or do not choose to engage in auditory-verbal therapy, this may be the most appropriate option for children with hearing loss whose spoken language abilities at age 3 are more than a year behind those of their typically developing peers. This inclusive preschool offers intensive instruction specialized for children with hearing loss, which focuses on helping the child learn to listen, and on helping the child increase his or her rate of spoken language learning. The LSL preschool constitutes the least restrictive environment for these children, because the specialized instruction makes the curriculum available to the child, and because play and other regular preschool activities are carried out with typically developing peers and facilitated by the teacher of the hearing impaired and other staff.
• Child with appropriately aided hearing, excellent listening abilities, and language abilities apparently on age level, attends a preschool class for children with typical hearing who are learning English as a second language, with full educational audiological support services. The advantage to this class is that it is entirely devoted to language stimulation; the disadvantage is that there may not be peers who are fluent speakers of English. The child will need full educational audiological support services in this setting, as well as ongoing direct service and consultation from a teacher of the hearing impaired or speech-language pathologist who is knowledgeable and experienced in promoting listening and spoken language development in children who have hearing loss.
• Child with severe multiple disabilities including hearing loss attends special education preschool class where there are typical peers, class size is small, and the disabilities of the other children do not compromise the child's auditory brain access. Full educational audiological support services need to be provided in this setting, as well as frequent consultation and/or direct service from a teacher of the hearing impaired or speech-language pathologist who is knowledgeable and experienced in promoting listening and spoken language development in young children who have hearing loss. The advantage to

this setting is that the child's needs in addition to hearing loss will be addressed by professionals with expertise in those areas.

The disadvantages of general special education preschool settings can include the following:
• Excessive noise levels due to the number of professionals and paraprofessionals required to attend to all of the varying needs of the children in the classroom and/or due to other children who vocalize nonstop.
• Special educators, paraprofessionals, and administrators whose expectations for children with hearing loss may be low and inappropriately similar to expectations for children with intellectual disabilities.
• Special educators, paraprofessionals, and administrators who are resistant or unable to achieve the levels of vigilance about noise and about the use of audiological technology that are needed to provide appropriate auditory brain access for the child with hearing loss.
• Lack of at least a 1:1 ratio of children with disabilities and typically developing children to provide a sufficient number of models of normal sociolinguistic behavior.

Appendix 5 contains a checklist for evaluating preschool group settings for children with hearing loss, with items listed according to the child's needs. These items could apply to any preschool group setting that parents and educators may be considering for the child. What would vary according to the child's language abilities would be the level of support needed from a teacher of the hearing impaired, certified auditory-verbal therapist/educator, or other professional with appropriate expertise for the child's language to continue to grow exponentially at ages 3, 4, and 5. It is difficult to imagine a more effective language teacher for a 3-year-old than a parent who is home with the child all day, who loves everyday play and interacting with a baby or young child, and who thoroughly understands the importance of integrating auditory strategies throughout the child's day. Much of this




book is intended to provide information for professionals and supporting parents who are doing exactly that, throughout the preschool (birth to 6) years. However, by the time the child is 3 years old, not every parent can or wants to continue to take on that role. In addition, many parents believe young children need to be exposed to group activities with their age peers for social-emotional development and/or to get ready for school.

Second only to being toilet trained, probably the most important aspect of getting ready for school for any child is acquiring as much spoken language ability as possible (Ambrose, VanDam, & Moeller, 2014; Hart & Risley, 1999; VanDam, Ambrose, & Moeller, 2012).

As a result of incredible advances in technology for children with hearing loss, in addition to early identification of hearing loss through newborn hearing screening programs, the needs of 3-year-old children with hearing loss are very different now from what they were even 8 or 10 years ago. Expectations of the general public and of many school systems have not necessarily kept up with those advances. Today, because of technological and pedagogical advances, there is no limit to what a child with early identified hearing loss can achieve, other than the limits we create by denying them auditory brain access or by denying them (and their parents) the specialized support that will help the children learn to listen, talk, and achieve. Given appropriate auditory brain access and appropriate specialized early educational experiences, there is every reason to expect that the children will learn spoken language well enough to achieve on grade level, go to college or other postsecondary education if they so choose, and eventually lead independent, productive lives. This expectation is what drives the field today, beginning with viewing the infant's hearing loss as a neurological emergency, which requires speedy provision of appropriate hearing technology to provide auditory access from as young an age as possible, so that the child's brain loses as little time as possible without auditory and linguistic stimulation. Unfortunately, this expectation can also drive parents into conflict with school personnel who are unaware of the new world of opportunities for children with hearing loss, and of what it takes from the school system to permit the child to access those opportunities.

The child who is staying home with the parent described above may be in an optimal situation for language learning. A child with very low language abilities due to hearing loss is likely to learn the most language from adults who keep the environment relatively quiet, and who take the time to try to help him or her understand and be understood. Even though the child may learn other important things from a preschool group experience, the child with low language abilities and hearing loss is unlikely to learn much language from his or her preschool peers, as the language level and the pace of teacher talk and instruction are likely to be well above his or her zone of proximal development (Vygotsky, 1978). Of course, the child may learn language from the adults in the preschool, particularly if those adults are vigilant in providing


auditory brain access and in interacting in language-promoting ways. However, it should be noted that even with an excellent teacher-to-child ratio, that ratio cannot compete with a ratio of one parent to one or two children. So why should preschoolers with hearing loss go to preschool? What children learn from preschool is, above all, how to feel comfortable away from family, how to play and learn in a group, and how to conform to school expectations for group social behavior. Children also may be exposed to readiness skills (e.g., colors, shapes, letters, numbers), although they could easily also be exposed to that information at home. However, because children are likely to be taught in groups in kindergarten, there may be a rationale for exposing them to group learning situations before that time. When should a child with hearing loss go to preschool, and for how much of the child’s day? Ultimately, that should be a parental decision. Considerations that could be part of that decision include the parents’ ability and desire to be at home with the child for language stimulation, whether or not older siblings went to preschool, and other practical factors. What is appropriate can range from no preschool experience to a full-time, five-days-a-week preschool experience. Whatever decision the parents make, they should remember that even a full-time preschool experience is only for about 25% of the child’s waking hours, and they (the parents) remain their child’s primary sources of language stimulation and his or her primary advocates for maintaining their child’s auditory brain access to spoken language for the other 75% of the child’s waking hours outside of school. Preschool special educators are often wonderful, dedicated teachers and human

beings. They love young children, and they want to help. However, preschool special-education classes are almost never appropriate settings for children with hearing loss. As noted above, the reasons are related to the preschool special educators’ lack of experience and knowledge of hearing loss, the noise level in a classroom that has children with all kinds of disabilities with attending paraprofessionals, and the ratios of children with disabilities to children without disabilities. Many children in special education classes are children with developmental and/or cognitive delays. The child with hearing loss (who has a cochlear or neural problem, not an intellectual deficit) is likely to “look great”— and the teacher is unlikely to have appropriately high expectations for or knowledge of the child’s auditory and linguistic learning. This creates a less than optimal situation for the child with hearing loss, for whom the potential — given an appropriate setting  —  is unlimited. Securing an appropriate educational setting for a preschooler with hearing loss is sometimes complicated. Even with the best intentions, some school systems have a tendency to think in terms of placing children in existing classes rather than considering the individual child’s needs first, and creating a program that is appropriate for that individual child (as was the intent of the U.S. federal legislation entitled IDEA). Another complication is that school districts often do not have a teacher of the deaf and hard of hearing on their staff — or if they do, it is quite likely to be a teacher of the deaf and hard of hearing who does not have background or expertise in helping children with hearing loss learn to listen and talk. Wherever they live, it is essential for parents to become informed about the rights and responsibilities guaranteed




under special education law, as well as to become informed about the services that are available in their town and region long before their child will need them. For families in the United States, parent advocacy groups (such as AG Bell chapters or Hands and Voices) and websites exist to assist with that endeavor (e.g., http://www.wrightslaw.com; see also Anderson & Arnoldi, 2011, pp. 423–427, and Johnson & Seaton, 2012). In 1992, specific guidance was provided in the Federal Register on how IDEA is to be interpreted with regard to children with hearing loss (Federal Register, 1992). This document can be used to assist decision makers in educational placement team meetings in focusing attention on the unique needs of the child who has hearing loss. In addition, many states have a Deaf Child Bill of Rights or a language and communication plan that, when used as intended, may also be helpful in securing appropriate services for the child with hearing loss who is learning spoken language through listening.

A 3-year-old child with hearing loss, even when his or her language scores suggest normal receptive and expressive language, is at a very vulnerable stage. For typically developing children, the period between 2 and 5 years of age corresponds to Brown's Stages II through V, when the mean length of utterance goes from two words per utterance to five words per utterance. This is "the most explosive stage of language development," the period in which children move from telegraphic utterances to the mastery of basic sentence structures (Paul, 2007, p. 318). Normally hearing children make this progress as a result of a combination of their own innate abilities with 15,000 to 20,000 waking hours of auditory brain access to copious spoken language input from the people in their environment, input that is almost entirely auditorily based. Children with hearing loss, even with the most current, appropriate audiological technology, still miss chunks of spoken language input, particularly if they cannot (or have not yet learned to) overhear (i.e., listen to language that is not directly addressed to them), and particularly if they are in noisy environments where the noise blocks out (masks) the spoken language around them. Due to the decreased quantity and quality of access and exposure imposed by the hearing loss, even the most "successful" 3-year-old with hearing loss still needs more guided experience in two areas:
• Learning to overhear, that is, to auditorily scan the environment for spoken language that is not addressed to them, and to learn from it.
• Continuing to learn all of the pragmatic, semantic, syntactic, and phonological elements and items that typically developing children learn during this early time of exponential learning.
Providing the supports and resources in these early years offers the best chance of reducing the levels of support needed for all of the child's academic learning to come.
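The mean length of utterance (MLU) range mentioned in the box above can be made concrete with a small calculation. The sketch below is illustrative only and is not taken from this book: it counts words per utterance to match the description here ("two words per utterance to five words per utterance"), whereas clinical MLU is usually counted in morphemes over a sample of 50 to 100 utterances.

```python
# Illustrative only: a words-per-utterance version of mean length of utterance (MLU).
# Clinical practice typically counts morphemes from a 50- to 100-utterance language sample.

def mean_length_of_utterance(utterances):
    """Average number of words per utterance in a small language sample."""
    if not utterances:
        return 0.0
    return sum(len(u.split()) for u in utterances) / len(utterances)

sample = ["more juice", "mommy go car", "I want the big ball"]
print(round(mean_length_of_utterance(sample), 2))  # (2 + 3 + 5) / 3 = 3.33
```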


7 Auditory “Work”

Key Points Presented in the Chapter

• Hearing is the only sense that can perceive acoustic subtleties. Because spoken language is acoustically based, the ear is the most useful and efficient sensory receptor for speech.
• Speech characteristics can be described in terms of the three primary acoustic dimensions: intensity (loudness), frequency (pitch), and time (duration).
• Advances in technology such as miniaturization of powerful hearing aids; improvements in fidelity of signal transmission; digital hearing aids; small, personal, frequency- or digitally-modulated receiver boots with remote microphones (RM systems); classroom audio distribution systems (CADS); and of course cochlear implants have all contributed to making learning spoken language through listening (stimulation of auditory neural centers) a strong and viable outcome for the vast majority of children with hearing loss.
• Early auditory stimulation causes measurable differences in brain organization and neural activity.
• Auditory skills are the components of theoretical models that researchers have developed to explain how language is processed auditorily. Although practitioners and parents may achieve results by working on individual auditory skills, the real-life tasks of understanding spoken language and listening to one's own spoken language are dependent on a synergy of linguistic, cognitive, neurophysiological, and auditory abilities. As they are typically described in teaching materials for children with hearing loss, auditory skills overlap, interrelate with each other and with linguistic abilities, and are not necessarily hierarchical. Given early detection, appropriate and consistent auditory brain access, and enriched spoken language interactions with fluent and loving adults, children with hearing loss are likely to develop auditory skills quite naturally. When those "givens" are not present, the child may very well need specific practice on discrete listening tasks. The models are then quite useful for assessing the child's auditory abilities, and then determining targets and a teaching order.
• In essence, auditory/linguistic learning consists of the child becoming aware of sound, then connecting sound with meaning, and then understanding more and more complex language, first in quiet circumstances and then in a variety of more difficult listening conditions.
• Children with hearing loss can learn to listen in the course of ordinary, everyday routines and interactive play, with the adults being vigilant about the viability of the child's amplification, the listening conditions, the distance between speaker and the child's microphone, and the use of auditory strategies.

Introduction

This chapter begins with a rationale for teaching spoken communication auditorily, including specific examples of the acoustic dimensions of speech and how these acoustic dimensions relate to major aspects of language. The concept of listening age is introduced, followed by discussion of auditory development, auditory "skills," and auditory processing models as well as of the importance of the child being able to overhear the conversations of others for his or her development of Theory of Mind (ToM) and executive functions. These sections are then followed by two explicated scenarios. The scenarios illustrate how normal everyday interactions between caregivers and children with hearing loss can be slightly altered or "embellished" to emphasize and make more salient particular acoustic aspects. Consistent use of auditory embellishments and auditory strategies by caregivers in

varied contexts throughout the child’s waking hours is one of the central keys to helping the young child who has hearing loss learn to listen and talk through natural play interactions. Let us begin with a discussion of the reasons behind using audition as the primary sense modality for helping a baby and child learn spoken language.

The Primacy of Audition

Chapters 1 and 6 have both made the case that auditory brain stimulation from spoken language is the most effective route toward spoken language acquisition, which in turn lays the most effective foundation for reading and the higher cognitive abilities needed for academics. An explanation of the relationship between acoustics and speech will be provided here to further highlight the fact that listening (audition) is the primary sense




modality for learning spoken language. The easiest way to understand the primacy of audition for speech reception is to consider what happens every time you turn on the radio or use the telephone. If you have normal hearing, you are able to understand the speaker based on listening alone, without using any other sense modality. This is because talking, or spoken language, is primarily an acoustic event (Chermak, Bellis, & Musiek, 2014; Erber, 1982; Fry, 1978; Ling, 2002). The other senses (smell, taste, touch, vision) are of very limited use in understanding others’ spoken messages. Consider each of the other sense modalities in turn. n Smell.  The sense of smell has no use

for speech reception. n Taste.  The sense of taste has no use

for speech reception. n Touch.  The sense of touch has some limited use for the reception of individual speech sounds. For example, one can feel the vibration or breathiness of sounds. Specifically, you can feel breathy sounds on your hand, or feel the difference in the temperature of the air for the /s/ versus /sh/ sounds when produced next to a wet hand, or feel vibrations in the bones of your face when producing nasal sounds. However, those tactile cues cannot be used by themselves to identify the sounds. The tactile sense is useful only for speech reception (or teaching) when the sounds are produced in isolation, not in running speech. Body-worn tactile devices can supply gross information about the duration and loudness of spoken language that reaches the microphone. But again, the prosodic features in normal conversational speech occur too rapidly, and in

such complexity, that tactile devices cannot adequately capture them in such a way that they can be used by themselves with a child learning a first language. An additional problem with tactile devices is that they respond to nonspeech noise as well as speech, further muddying the signal that the child receives. n Vision.  Vision has limited use for speech reception in that the listener can watch the speaker’s lips (e.g., lip rounding, lip spreading, lip closure), frontal tongue movements, teeth positioning, and larynx movements regarding sounds. If you turn off the sound when you are watching the news, you will see how little information is available from simply watching a person talk. Vision has very little use for speech reception in running speech but can be useful in discriminating between speech sounds produced in isolation. However, about 70% of sounds in English look like other sounds on the lips (i.e., they are homophonous, such as ba, pa, and ma), so vision is only potentially useful for about 30% of speech. However, when you are already listening, being able to see the speaker’s face often makes it easier to understand because the visual cues provide redundancy with the auditory cues, as a kind of underlining. n Hearing.  None of the other senses can deliver clear information to the brain about the speaker’s prosodic features (e.g., intensity, pitch/ intonation, rhythm/duration) or about specific speech sounds in the way that the ear can. Much of speech perception is based on perception and discrimination of timing-related


events in the speech signal. Of the sensory organs available, only the ear has the ability to respond to the speed of the temporal events that occur in speech, and the ability to discriminate fine differences among them. However, it is important to remember that it is the auditory brain that interprets, through exposure and practice, the meaning of the specific vibratory elements that pass through the peripheral auditory mechanism.

Who is doing the "work"? For babies and young children, the auditory work is primarily done by the parent, who consciously employs strategies to help the child learn to have confidence in his or her ability to understand based on listening alone throughout the course of the day's routines and in play with the child. This is in contrast to the situation that starts at about age 3 years, for children whose auditory abilities may need to be improved through the same all day, every day adult auditory strategies, but who also benefit from short periods of more adult-directed and -planned auditory-linguistic practice to fill in gaps in the child's knowledge. Having a hearing loss means that the individual misses or only partially hears some auditory events. The parts of spoken language that are missed will need to be brought directly to the child's attention through preplanned activities that will become part of the child's auditory work. In any case, hearing, listening, and learning will ultimately be the responsibility of the individual with the hearing loss. Therefore, whatever responsibility for the auditory work that the child is able to assume, he or she should, from as young an age as possible.

In that hearing is the sense that can most effectively distinguish the subtleties of acoustic events, it seems the ear was "built" to capture and deal with the vibratory elements of speech, and transduce those elements through the brainstem to cortical auditory brain centers where making sense of those subtle acoustic speech events can occur. Clearly, some parts of a spoken message have redundant representation in the visual and tactile modalities and occasionally may need to be used as a supplement to audition to teach the sounds to some children. But since spoken language is acoustically based, the ear provides the most useful and efficient sense receptors for speech.

The Acoustics-Speech Connection

This section provides explicated examples describing how each of the three primary acoustic dimensions (intensity/loudness, frequency/pitch, and timing/duration) is realized in speech. The section is based on information from Baart (2010), Cole (1994), Johnson (2012), Kent and Read (2001), Ling (2002), and Pickett (1998).

Intensity/Loudness

• The average intensity of normal

conversational speech is used as a general guideline in fitting hearing aids and in mapping cochlear implants, which is not surprising as the goal of wearing audiological




technology such as hearing aids and cochlear implants is to be able to hear the entire speech signal. Dr. Daniel Ling (1981, 2012) coined the term speech banana as a means of describing the area on the audiogram that includes the essential intensity and frequency characteristics of speech (Figure 7–1). Generally speaking, audiologists try to program hearing aids and cochlear implants so that the child’s thresholds fall near the top or just above the speech banana on the audiogram so that even soft speech can be heard. As

noted in Chapter 5 and Figure 5–8, Madell (2016) coined the term speech string bean to indicate the range on the audiogram for the targeted area in which the child’s aided responses should fall, at approximately 20 dB HL. n Differences in typical loudness can assist the listener in identifying the speaker. n Speakers use differences in loudness to express particular emotional states. A very loud voice can often be an angry one, whereas a soft and quiet voice is often used to express affection or soothing.

Figure 7–1. The banana shape drawn on this audiogram illustrates the approximate area (intensity range and frequency range) within which speech would fall if it were drawn on an audiogram.


• Differences in the loudness of particular words in sentences are used to emphasize the important information in the sentence ("It's mine!"), or to mark phrase boundaries or accompany intonational changes to enhance the listener's understanding.
• Differences in the loudness of particular syllables occur within words, and using typical syllable stress will assist the listener's comprehension. Syllable stress can also signal differences in meaning such as the meanings intended by OBject versus obJECT, or by PREsent versus preSENT.
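To make the aided-audibility target mentioned above (the "speech string bean" at approximately 20 dB HL) concrete, here is a minimal sketch of the kind of check that can be run against an aided audiogram. The example thresholds, the frequency list, and the function name are illustrative assumptions, not a clinical protocol.

```python
# Illustrative sketch: flag audiogram frequencies at which hypothetical aided
# thresholds are poorer than the ~20 dB HL "speech string bean" target,
# i.e., where soft speech may not be audible.

TARGET_DB_HL = 20  # approximate aided-response goal described in the text

def outside_target(aided_thresholds_db_hl, target=TARGET_DB_HL):
    """Return the frequencies (Hz) where the aided threshold exceeds the target."""
    return [f for f, db in sorted(aided_thresholds_db_hl.items()) if db > target]

# Hypothetical aided audiogram (dB HL) for illustration only.
aided = {250: 15, 500: 15, 1000: 20, 2000: 25, 4000: 35, 6000: 40}
print(outside_target(aided))  # -> [2000, 4000, 6000]
```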

Frequency/Pitch

• The pitch of a speaker's voice varies

with the speaker’s age and sex and is a very important characteristic of an individual’s speech. Men’s voices tend to be the lowest in pitch, children’s the highest, and women’s in the middle. Most children with hearing loss can easily learn to discriminate pitch differences among male, female, and child voices. n Every language has its own intonational melody that goes up and down in pitch. Intonational changes can signal whether the utterance is a question or a statement, the speaker’s emotional state, or the end of a thought group. These intonational cues are within acoustic reach of nearly all children with hearing loss, although they may need to be brought to their attention. Emphasizing intonational cues with very young children is a normal part of the way that most adults talk with young children and is probably one

of the cues that babies use to know that they are being spoken to. n Each speech sound has particular frequencies associated with its production that listeners automatically use to know immediately that sound has been produced. For example, the words father and feather are heard as different words because of the acoustic differences between the two vowel sounds in the initial syllable of each word. The vowel /a/ as in father has strong energy produced at about 800 and 1200 hertz (Hz) (a term used to describe frequencies); and the vowel /ε/ as in feather has strong energy produced at about 600 and 2200 Hz. The ear (auditory brain centers), with exposure and practice, can easily discriminate between the two differing patterns of formants for these two vowels. Using knowledge of the typical characteristics of speech sounds, it should be possible to predict which speech sounds a child with a particular aided audiogram is likely to be able to learn to detect and produce using hearing alone. For example, a child with no usable hearing at about 1500 Hz and above could not be expected to imitate the /ε/ sound based on listening alone, but the child would certainly be able to imitate the /a/ based on listening alone. Knowledge of the formants of speech sounds can also be used by audiologists in setting hearing aids or mapping cochlear implants, based on the fact that a child talks the way that he or she hears. If a child is not producing an /s/ sound, for example, the audiologist may well believe that the child’s brain is not receiving the /s/ band of energy that spreads from about 4000 Hz




upward, and make adjustments in the child’s equipment to provide access to the /s/. n The frequencies of music are well within the hearing range of most children who have hearing loss. Middle C on the piano is at approximately 262 Hz, and the highest note on the piano is at 4186 Hz. Children with appropriate audiological technology are typically able to hear from at least 250 through 6000 Hz, which gives them easy auditory brain access to music. Singing with your child has multiple benefits in that the excursions of your voice (how far up and down your voice goes as you sing) are wider than in speech, and thus more salient to your child. In addition, intensity and rhythmic cues are also likely to be enhanced in the singing voice, which should make the child attend to those acoustic cues — all while enjoying the closeness and fun of singing with others.
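The kind of prediction described in the bullets above (which speech sounds a particular aided audiogram should make detectable through listening alone) can be sketched very roughly in code. The formant values for /a/ and /ε/ are the ones given in the text; the 30 dB HL cutoff, the nearest-frequency lookup, and the example thresholds are simplifying assumptions rather than clinical rules.

```python
# Rough sketch: will a child with a given aided audiogram detect these vowels
# by listening alone? Formant values are those cited in the text.
FORMANTS_HZ = {
    "/a/ as in father":   (800, 1200),
    "/eh/ as in feather": (600, 2200),
}

def nearest_threshold(aided, freq_hz):
    """Aided threshold (dB HL) at the audiogram frequency closest to freq_hz."""
    closest = min(aided, key=lambda f: abs(f - freq_hz))
    return aided[closest]

def likely_detectable(aided, formants, cutoff_db_hl=30):
    """Flag a vowel as detectable only if every formant falls where aided hearing is usable."""
    return all(nearest_threshold(aided, f) <= cutoff_db_hl for f in formants)

# Hypothetical child with no usable aided hearing above about 1500 Hz.
aided = {250: 25, 500: 25, 1000: 30, 2000: 95, 4000: 110}
for vowel, formants in FORMANTS_HZ.items():
    print(vowel, "->", likely_detectable(aided, formants))  # /a/ True, /eh/ False
```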

Duration

• Durational differences are usually one

of the first cues that children with hearing loss learn to listen for and produce. For example, if the parent says a long “Mmmmmmmm.” sound in connection with eating or taking bites, it will not be long before the child will be repeating or imitating a similarly prolonged sound. If the parent says, “Hop! Hop! Hop!” in short staccato syllables in making a bunny hop, then the child is likely to make similarly short sounds in play with the bunny. Approximating the length of the speech segment

in imitation is a highly desirable listening and producing behavior. Precision in the young child's speech will come later, but imitating the correct number of syllables is clear evidence that the child is listening carefully and is an enormously important early listening milestone. This is one of the reasons that adults need to speak to children who have hearing loss using complete phrases and sentences of a length and content consistent with what they would use for any young child. Modeling for the child using an atypical durational pattern, such as "I . . . want . . . juice," is highly inappropriate from an acoustic point of view, not to mention a linguistic one.
• The overall rate of speaking is another duration-related item. Obviously, very fast speech is difficult for anyone to understand, but what is to be avoided in speech to children with hearing loss is speech that is abnormally slow and consequently overarticulated. This kind of extraordinary slowing down (as in "I . . . want . . . juice" or in saying the word "bell" very slowly) distorts both the auditory and visual cues to the message. Abnormally slow speech makes the message harder to understand, rather than easier. Because most people do not talk that way, the child runs the risk of being able to understand only when spoken to in that same slow and overarticulated manner. In the past, it was often possible to spot teachers of the deaf because of their overarticulated speech, mistakenly used to "help" the children.
• In addition to the intensity and frequency characteristics mentioned


above, individual speech sounds have characteristic durational features. Sounds can be short or long in their usual duration, such as a /k/ versus a /ʃ/ at the end of “walk” versus “wash.” They can have gradual or sudden onset, such as an /r/ versus a /g/ as in “run” versus “gun,” and they can stay relatively stable throughout their production, such as an /s/, or can change throughout their production, such as a /w/. These differences in duration of parts of speech sounds are only milliseconds of difference — so short that they can only be perceived by hearing, not by any of the other senses. n Individual speech sounds can have variations in timing as one of their primary features. For example, the unvoiced and voiced sound pairs such as /p/ and /b/ are differentiated by the presence or absence, respectively, of a 20-millisecond “airy space” (silence) between the production of the consonant and the following vowel. A grossly simplified drawing of the sounds is presented in Figure 7–2. This timing difference between the consonant and vowel makes these two sounds different. Interestingly, even though this is only a 20-millisecond difference, it is

usually fairly easy for children with hearing loss to learn to hear as well as to produce. In summary, the defining characteristics of speech are those that can be described in terms of their intensity, pitch, and duration. The human ear is the most efficient sense organ for receiving these vibratory acoustic patterns, and the auditory brain is designed for perceiving and analyzing, with exposure and practice, these central acoustic aspects of speech. Information provided by the tactile or visual senses offers only secondary correlates of the speech sounds.
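The voiced/unvoiced timing contrast described above and drawn in Figure 7–2 can also be expressed as a toy calculation. The only cue modeled here is the roughly 20-millisecond "airy space" between consonant release and vowel onset; real perception of voicing is far more complex, and the boundary value is simply the figure used in the text, not a universal constant.

```python
# Toy illustration of the /ba/ versus /pa/ timing cue: classify a syllable-initial
# bilabial by the duration of the aspirated gap between release and vowel onset.

def classify_bilabial(aspiration_gap_ms, boundary_ms=20):
    return "/pa/ (unvoiced)" if aspiration_gap_ms >= boundary_ms else "/ba/ (voiced)"

print(classify_bilabial(0))   # voicing begins with the release -> /ba/
print(classify_bilabial(25))  # aspiration precedes the vowel   -> /pa/
```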

Figure 7–2.  This is a theoretical drawing of the difference between the syllables /ba/ and /pa/, which are identical except for the initial voiced or unvoiced bilabial. The unvoiced /p/ is followed by 20 milliseconds of aspiration (the "airy space") before the vowel, whereas that aspiration is absent in the /ba/ syllable.

The Effect of Hearing Loss on the Reception of Speech

The effect of hearing loss is to reduce the amount and quality of acoustic information or speech that is available to the brain of the listener. Depending on its severity, hearing loss will distort, reduce, or eliminate a number of acoustic cues that reach the brain. However, hearing loss is rarely total. At least since the 1960s and 1970s, it has been technologically possible to provide amplification with the potential of permitting the vast majority of children with hearing loss to learn to use their amplified residual hearing to access their auditory brain centers to learn to talk. With recent technological advances in cochlear implants, in the power and clarity of digital hearing aids, and in the precision of instrument fitting, the use of audition in the reception of spoken language is within reach of the vast majority of children, even those with profound hearing loss. Naturally, the more hearing the child has (whether through normal hearing or through wearing audiological technology), the more acoustic cues are available to the brain. Prior to the widespread use of cochlear implants, we used to say that even with minimal residual hearing (e.g., aided hearing within the speech range at only 250 and 500 Hz), the child could learn to perceive and monitor durational, rhythmic, and intonation cues. The result would be improved understanding by the child, as well as improved intelligibility and predictability in the child's own speech. However, even though the preceding is accurate, current best practice would dictate that the parents of a child with such minimal residual hearing be encouraged to consider a cochlear implant, which could provide the child's brain with access to all of the acoustic aspects of speech at closer to normal intensity levels.

A Historical Look at the Use of Residual Hearing Markides (1986) has eloquently described the history of the use of residual hearing for educating individuals with hearing loss. The first recorded (and, as Markides says, rather drastic) attempts were those of Archigene (98 to 117 ad), who tried to

stimulate residual hearing by blowing a trumpet in the deaf person’s ear. More recently, Duncan and Rhoades (2017) and Estabrooks, MacIver-Lux, Rhoades, and Lim (2016) have detailed the history of auditory-verbal practice, beginning with the physicians who were convinced that residual hearing could be stimulated as a path toward spoken language. French physician Jean Marc Gaspard (1774 to 1838) is credited with using an ear trumpet to help children with hearing loss learn to listen. Then, in the late 1800s and early 1900s, Austrian otologist Viktor Urbantschitsch (1847 to 1921) argued for stimulating the child’s residual hearing as part of helping the child develop spoken language. He, in turn, influenced other physicians such as Max Goldstein, Emil Froeschels, Henk Huizing, and Erik Wedenberg. These physicians’ professional careers occurred during the advent of vacuum tube hearing aid technology (1920s), and then transistorized hearing aids (1950s). Several of these physicians and early audiologists influenced and worked with the early pioneers of educational practice aimed at helping children use their residual hearing and learn to talk. Those pioneers include Helen Beebe, Ciwa Griffiths, Doreen Pollack, Edith Whetnall, and John Wright. There has long been supportive evidence for the importance of early auditory stimulation in studies of sensory deprivation using animals (Riesen, 1974; Tees, 1967). One of the first attempts to explore systematically the effects of early auditory brain stimulation for children with hearing loss was that of Griffiths (1955). This study compared the speech of a group of children aided at a mean age of 3 years, 8 months with a group aided at a mean age of 5 years, 3 months. The speech of the earlier-aided children was rated as more


normal and more intelligible by four separate panels of judges. Of course, even the earlier-aided children in this study would be considered very late aided by today’s standards. Since Griffiths’ study in 1955, a great deal of other research has been done that strongly supports the contention that early auditory brain stimulation has long-term beneficial effects on speech reception, speech intelligibility, language development, and ultimately academic achievement. Some of those studies include Geers et al. (2017); Kirk et al. (2002); Kral (2013); Kral et al. (2016); Kral and Sharma (2012); Moog (2002); Moog and Geers (2003); Nicholas and Geers (2006); Niparko (2004); Robbins, Koch, Osberger, Zimmerman-Philips, and Kishon-Rabin (2004); and Sharma et al. (2005). This research all resoundingly supports the contention that early auditory brain stimulation is essential for normal development of the auditory system at the cortical level. Children whose brains are deprived of sound, due to unidentified hearing loss or to very profound hearing loss that does not respond to traditional amplification, demonstrate clear differences in brain waves. A number of these studies suggest that if the children begin wearing hearing aids or cochlear implants by 3.5 years of age, their brain waves (specifically, the P1 cortical auditory evoked potential) begin to reflect a normal response to sound within 3 to 6 months of the child having access to sound. In one study (Sharma et al., 2004) of two children who received cochlear implants at 13 months and 14 months of age, the latency of their P1 waves was within normal limits by 3 months post implant. The two children also demonstrated significant increases in the proportion of

more mature vocalizing (reduplicated babbling) by 3 months post implant. Although the Sharma et al. (2004) study examined the development of only two children, the results parallel other studies reporting significantly higher speech perception and language skills in children whose brains had access to auditory information through cochlear implants from a young age (by age 3 or 4 years), compared with those whose brains were deprived of appropriate and consistent access to sound until 6 to 7 years of age. The evidence clearly supports the concept that auditory brain stimulation from as young an age as possible provides the easiest route to spoken language development (Ching et al., 2018; Ching, Zhang, & Hou, 2019; Dettman et al., 2016; Dillon, Ching, & Golding, 2014; Fitzpatrick & Doucet, 2013; Geers et al., 2011, 2016; Gifford, 2014; Hayes et al., 2009; Kirk et al., 2002; Nicholas & Geers, 2006; Sharma et al., 2004; Sharma, Dorman, & Kral, 2005; Sharma et al., 2005; Sharma & Nash, 2009; Fitzpatrick & Doucet, 2013.) Figure 7–3 is a whimsical, yet serious, depiction of the need to act swiftly at the most opportune time.

The Concept of Listening Age Children’s brains need time to listen and process acoustic speech information, whether the child has normal hearing or a hearing loss. During that listening time, the child begins to understand language long before they begin to talk. Around 6 to 9 months of age, the child shows they understand an increasing number of words, beginning with often-repeated names of familiar people such as “mommy”




Figure 7–3.  Make sure that your child does not miss the boat of learning to listen and talk by providing auditory access at the youngest age possible.

and “daddy,” and other words and phrases that occur frequently in routines, such as “go bye bye,” “no-no,” and “you want up?” A child with normal hearing generally listens to spoken language for at least 10 to 12 months before he or she starts to produce a few single words, and it is not until another 10 to 12 months later that the child begins combining those words into two-word phrases. During all of this time, the child is vocalizing and babbling in a more and more speechlike fashion. A child with hearing loss may need a similar length of time to listen to spoken language before

he or she uses words singly or in combinations. Whether or not the child can do that, and do it intelligibly (so that others can understand him or her), will depend on whether or not the child’s technology provides brain access to all of the frequencies of speech (particularly the high frequencies) and allows the child to hear soft or quiet speech. It also depends on whether or not the child is wearing the technology during all waking hours (at least 10 to 12 hours per day), the auditory environment is supportive, and there are caregivers providing copious verbal interacting.


Listening Age The day that the child begins wearing appropriately selected and fit technology during all waking hours is Day Zero for the child’s listening age. What does “all waking hours” mean? Important results of longitudinal research by Tomblin et al. (2015) show the number of hours of wearing audiological technology needed for optimal language growth is more than 10 hours a day. In their study, the children were followed from age 2 to 6 years. The children who wore hearing aids more than 12.5 hours per day caught up to their normally hearing peers by age 6 years. That is, they closed the gap in their language development. In contrast, the children who wore their equipment less than 10 hours per day showed a nearly flat language growth trajectory, with an increasing gap in their language scores relative to their normally hearing peers.
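The hours-of-use findings summarized above lend themselves to a simple rule-of-thumb check. The sketch below only restates the thresholds reported by Tomblin et al. (2015) as cited here; the labels are shorthand for the study's group-level findings, not diagnostic categories for an individual child.

```python
# Shorthand for the cited Tomblin et al. (2015) findings on daily device use.
def wear_time_outlook(hours_per_day):
    if hours_per_day > 12.5:
        return "group that closed the language gap by age 6"
    if hours_per_day < 10:
        return "group with a nearly flat language-growth trajectory"
    return "in between; the goal remains all waking hours"

for hours in (4, 11, 13):
    print(hours, "hours/day:", wear_time_outlook(hours))
```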

Beginning with even younger children, the child who begins full-time wearing of appropriate technology at 3 months of age has lost only 3 months of listening time, and will have a listening age of 9 months at her first birthday, assuming that the child is wearing her technology all waking hours. Chances are excellent that this child will make up for those lost 3 months and have language development that closely parallels her hearing peers at least by the time she is 2 years old, if not before. However, a child with hearing loss who only begins full-time wearing of technology at 2 years of age has lost 2 years of listening time, and will have a listening age of 2 years when the child’s chronological

age is 4 years. Closing that 2-year gap in listening time and spoken language learning is not impossible but is much more difficult than closing the 3-month gap for the previous child. The study by Tomblin et al. (2015) suggests it could easily take until the child is 6 years old, assuming that the child is wearing audiological technology 12.5 hours a day or more. It should be noted that there is anecdotal evidence and at least one research study suggesting that sometimes older children, when given brain access to sound, can greatly abbreviate listening and learning time. The 15-month-old with a severe hearing loss in Paterson’s study (1992), given amplification, moved in 15 days from immature vocalizations to vocalizations typical of children of a year of age. More studies of this nature are needed to document the timing of changes in vocal behaviors of much larger numbers of children, just after they are first given brain access to sound. In any case, the listening age concept is useful for emphasizing the child’s need for time to listen and process the new sensory input to grow and integrate auditory neural pathways. Listening age is not a new concept. Pollack (1970) and Northcott (1972) both discussed the concept, calling it the child’s hearing age, and used it to help parents have an idea of what to expect regarding the child’s progress after the child began wearing hearing aids. Calculating the child’s listening age can be complicated by periods of time when the child’s brain access to spoken language is compromised by events such as chronic otitis media, behavioral issues, poorly fitting earmolds, equipment malfunction, or progression of the child’s hearing loss. Listening age also can be complicated by a situation where a child was wearing powerful hearing aids prior to getting a cochlear implant, but




was gaining very little from the hearing aids. Any time the child does not have appropriate brain access to quiet speech, he or she is not able to listen to spoken language in the manner that is needed for language development. Consequently, that time should not be counted as part of the child’s listening age, even though in the case of the child who ends up getting a cochlear implant, he or she may gain some valuable environmental awareness and be able to perceive prosodic features of speech (especially intensity and rhythmic features) from the hearing aids. Some centers specify a cochlear implant listening age, just to be clear how long the child has had good access to auditory information. Patton, Hackett, and Rodriguez of Sunshine Cottage School for the Deaf (2004) developed a formula for taking a number of those issues into account in an attempt to determine a more accurate listening age. Whether or not an actual formula is used for calculating the child’s listening age, the concept of listening age is a good one to use in understanding the crucial impact of a child’s full-time auditory brain access to speech through his technology, and about reasonable expectations to have for the child’s spoken language development. Flexer and Madell (2009) identified the following factors that may advance or delay children’s auditory function and, consequently, the interpretation of their listening age: n Age at which hearing technology was

acquired. n The audibility of the speech signal

made available by the technology (including high frequencies and soft speech). n The amount of time each day that the child actually wears the technology

(e.g., 2 hours/day; 4 hours/day; 12 hours/day; all waking hours). (Reminder: typically hearing persons have auditory access to the brain 24 hours per day.) n The impact and skill of the family and therapist in providing auditory language enrichment. The listening age concept is based on the fact that high-integrity auditory brain access of the speech spectrum is absolutely essential for the child’s listening and spoken language development. This cannot be emphasized enough.
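Because the text describes listening age in terms of simple subtraction, a minimal sketch can make the arithmetic explicit. This is not the Sunshine Cottage formula (which is not reproduced in this chapter); it simply subtracts the months before full-time device use and any later months without adequate auditory brain access, in line with the factors listed above.

```python
# Minimal sketch of listening-age arithmetic (illustrative; not a published formula).
def listening_age_months(chronological_age_months,
                         age_at_full_time_device_use_months,
                         months_without_adequate_access=0):
    """Months of genuine auditory brain access to the speech spectrum."""
    listening = (chronological_age_months
                 - age_at_full_time_device_use_months
                 - months_without_adequate_access)
    return max(listening, 0)

print(listening_age_months(12, 3))   # aided full time at 3 months -> listening age 9 at age 1
print(listening_age_months(48, 24))  # aided full time at 2 years  -> listening age 24 at age 4
```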

Auditory Skills and Auditory Processing Models Auditory training work done with children with hearing loss to develop auditory skills or behaviors is based on theoretical frameworks partly developed by scientists working on models of auditory processing or central auditory processing disorders, and partly developed by practitioners (e.g., audiologists, psychologists, speechlanguage pathologists, physicians, and educators of children with hearing loss) who need to understand and help individuals on their caseloads. The fact that so many disciplines have been involved in describing how we process auditory-linguistic information has resulted in some confusion about terminology, as well as widespread acceptance of the component behaviors as “real,” and not as theoretical constructs. As a result of advances in the technology used to study brain processes (such as functional neuroimaging and more advanced electrophysiologic and topographical mapping techniques), knowledge has increased exponentially


regarding what seems to happen where in the brain as people demonstrate particular auditory behaviors. However, there is still much to learn about how the individual gains those skills or behaviors, which would seem to be essential information to have for developing programs of intervention. Based on research and empirical evidence, plausible conceptual frameworks have been developed that integrate existing pieces of knowledge and thinking. The researcher’s intention may have been that the models’ integrity and usefulness would then be experimentally tested and accordingly revised — not that the component units are necessarily ready to be treated as realities upon which to base intervention practice. However, the teacher or therapist cannot wait for sci-

entific knowledge to be complete, and sometimes the practitioner's zeal for certainty imbues the theoretical constructs with more reality than they deserve. Perhaps the essential thought to bear in mind is that the auditory processing whole is greater than the sum of current knowledge of its hypothesized parts. In an effort to describe how the process of understanding auditory-linguistic information may work, the following discussion will put in order auditory behaviors from commonly accepted conceptual frameworks (e.g., Anderson & Arnoldi, 2011; Bellis, 2003; Perigoe & Paterson, 2013; and this volume, Appendix 3). A little later in the chapter ("Two Examples of Auditory Teaching and Learning"), there are two explicated scenarios to further illustrate the auditory behaviors that are part of the conceptual framework as they can occur in normal, everyday events.

For an overview of the sequence involved in processing auditory-linguistic information, let us assume that the child's peripheral ear has received a bit of speech/acoustic information and that the child was motivated to attend to it (or to detect it). According to the auditory processing models cited above, there is first some sort of frequency, intensity, and time analysis done by the peripheral ear, with transformation of the input into representative neural patterns. Brief sensory memory storage must be occurring at this stage for the analysis and transformation to take place. Next, it is known that some preliminary auditory processing occurs in the brainstem related to localization, and separation of competing messages (i.e., selective listening or selective attention). Finally, when the auditory cortex receives the input, higher levels of analysis occur about which we have the least certain knowledge and, consequently, the most theorizing. This processing presumably includes phonetic, phonological, syntactic, semantic, and pragmatic or contextual processing. It may very well require hypothesized cortical activities such as the following: discriminating from close cognates, identifying (categorizing or associating) with other similar items, and comprehending (integrating or interpreting) as meaning is derived. And clearly, higher-level processing of auditory/linguistic messages is dependent on both short-term and long-term memory and retrieval, as well as the ability to understand speech that is degraded by noise or by electronics (listening over the phone or through earphones).

The short discussion in the box above is undoubtedly overly simplistic in describing the exquisitely complex process of listening and understanding; however, it is an attempt to show how the skills all may fit together. (Please note that the underlined words are common descriptors of auditory behaviors.) The preceding sequence in the box has underlined a number of items that have been featured for years in most texts and curricula as the content or goals of auditory work with children who have hearing loss (e.g., Anderson & Arnoldi, 2011; Cottage Acquisition Scales for Listening, Language, and Speech [CASLLS], 2003; Cole, Carroll, Coyne, Gill, & Paterson, 2005; Erber, 1982; Estabrooks, 2006, 2012; Henderson, 2008; Hirsch, 1970; Northcott, 1978; Perigoe & Paterson, 2013; Pollack, Goldberg, & Caleffe-Schenk, 1997; Rossi, 2003; Sindrey, 2002) as well as in online resources such as "Hearing First" (http://www.hearingfirst.org), "The Listening Room" sponsored by Advanced Bionics (http://www.thelisteningroom.com), and Cochlear America's HOPE materials and courses (http://www.cochlearamericas.com), especially those under the heading of Sound Foundation (for babies or toddlers). It seems reasonable to assume that each of these components is involved in some manner or at some level in processing spoken language. It is certainly beneficial (with a particular child, or at given points in time, or as part of a specific game or activity) to focus on one auditory dimension alone. But these auditory components overlap and interrelate in both natural and structured activities to such a degree that they cannot seriously be considered to be discrete or necessarily hierarchical, as portrayed in Figure 7–4 (Bellis, 2003; Ling, 1986; Perigoe & Paterson, 2013). Curricula that attempt to train these as isolated auditory skills may play a useful role for the teacher of older children because the curricula systematize instructional objectives and activities related to improving the child's use of residual hearing. But isolated rote training on hypothetical auditory subskills is generally unnecessary for children with hearing loss under 3 years of age. With young children, the auditory subskill will be in the adult's mind, but in most cases, the child will be interacting, playing, or solving a problem in a way that just happens to require that auditory subskill. However, the adult may have preplanned the activity to have the child demonstrate or practice a particular auditory ability, as described in Chapter 10. For the young child, the ability to demonstrate capability concerning any of the highlighted auditory components is inextricably linked to the child's cognitive and spoken language growth. That is, for the toddler to be motivated to attend selectively; to localize; to discriminate; to identify, categorize, and associate; and to integrate, interpret, and comprehend, there must be meaning attached to the event. And, to paraphrase Winnie the Pooh (Milne, 1926), the main place that meaning in acoustic input comes from is from gradual acquisition of spoken language. And the main place spoken language comes from is from normal, everyday interacting and play with caregivers using spoken language. Informality does not mean there is a lack of planning and structure in the intervention and interacting processes. For example, simply being aware of the


Figure 7–4.  The auditory components involved in listening and talking are as interrelated as the lines in a Celtic knot.

Certainly, parents of normally hearing children also notice sounds and notice their children’s reactions to them. But parents of children who have hearing loss become especially conscious of the importance of these particular events for encouraging the child’s use of his or her residual hearing, and consequently they attempt to do both kinds of noticing consistently, and perhaps with a bit more deliberate interest and enthusiasm.

Theory of Mind and Executive Functions

Special note needs to be made of the importance of being able to overhear others’ conversations. This ability requires appropriate settings on audiologic equipment so that the child has access to quiet speech at a distance, and it may not be technologically possible for every child. Overhearing also requires motivation and practice in having one’s attention primarily focused in one direction or on an activity, but still being aware of quiet conversation going on in the vicinity, without being directly addressed. Overhearing is essential for natural learning of particular grammatical items, such as first- and second-person pronouns (e.g., “I,” “me,” “you”) and deictic terms (e.g., “here,” “there”). As we know, a child’s language development depends on exposure to both direct and overheard spoken conversations and on development of the auditory centers of the brain. However, overhearing has crucial importance for stimulating development of the higher-level social and cognitive abilities and knowledge that compose executive functions and ToM, both of which are essential for becoming a competent and socially appropriate member of society.

A Theory of Mind (often abbreviated as ToM) is a specific cognitive ability that enables the understanding of others as independent, intentional agents; essentially, it means understanding that other people have minds that can be perceiving and thinking in ways that are totally different from one’s own mind.

Having ToM means one is able to maintain, simultaneously, different representations of the world, and know there are many ways of looking at things. An example of a child who has acquired ToM would be a child who, looking at the 12 inches of snow that fell during the night, would know that he and his teenage sister would be happy about it (because of making a snowman and going skiing), but that his parents might be unhappy about it because they would have to shovel the driveway and drive on slippery roads. This child would know that others could have different feelings, perceptions, or beliefs about the same event. Not surprisingly, the development of ToM is dependent on one’s language abilities and social experiences, and on one’s ability to overhear conversations that are not necessarily directly experienced. ToM seems to be an innate cognitive potential in humans that requires social and linguistic experiences over many years for full development. With social experience and conversations, both direct and overheard, a child learns to gauge others’ beliefs, desires, perspectives, and intentions, and then to understand, predict, and make sense of their behavior (Moeller, 2007).

Having a ToM is critical for understanding many aspects of human social life such as surprises, secrets, tricks, mistakes, and lies. As children age and achieve more social and language skills, ToM forms the basis for inference, perspective taking, social reasoning, and empathy. Having a ToM is also critical for academic development, especially in collaborative educational and work environments, where listening to and understanding other points of view and engaging in joint mental activity are not only essential to the process of working together but actually are the basis of the creative problem-solving process. According to Mercer, “Language provides us with a means for thinking together, for jointly creating knowledge and understanding” (2007, p. 15).

Although 2- and 3-year-old children can pretend and do know something about others’ mental states (e.g., “The baby is sad”), a working ToM for tasks such as the false belief task typically does not develop before the age of 4 years. By that age, a child should be able to distinguish between what is so and what people believe is so. One of the most important milestones in ToM development is gaining the ability to attribute false belief; that is, to recognize that others can have beliefs about the world that are wrong (Sabbagh & Moses, 2006). The language skills of children with hearing loss are highly correlated with many of their ToM abilities (Schneider, Schumann-Hengsteler, & Sodian, 2005). The child needs to be at a fairly complex linguistic level to understand and express two opposing thoughts about the same thing. For example, the child could demonstrate ToM knowledge through being able to combine two clauses (ideas) in one sentence using coordinating or subordinating conjunctions (e.g., “I am happy about the snow, and/but Mommy is sad about the snow”); or through using “that” to combine clauses (e.g., “She believes that snow is a big bother”); or both (e.g., “I know that the cake is in the cupboard, but he thinks that the cake is still in the refrigerator”). ToM also requires rather specific vocabulary skills, such as knowing words for describing internal processes such as thinking, feeling, imagining, knowing, and believing. That is, if a child can understand and also produce sentences such as, “He thought his cake was in the refrigerator,” he is more likely to understand and predict behavior that stems from a false belief. Sometimes children with hearing loss have difficulty with social and academic situations related to ToM, not because of language deficits, but because they may have missed incidental information or social cues directed toward them that would have helped them to interpret others’ thoughts or feelings (Westby, 2017).

Siblings in the home facilitate the development of ToM because there tend to be discussions of mental and emotional states that lead to differences in behaviors. Those discussions must be made available acoustically (through technology) to be overheard by the child with hearing loss, and the child needs to communicate in the same language as others in his or her environment. Therefore, wearing hearing aids and/or cochlear implants at home, and also using remote microphones (FM and Bluetooth technologies) in home environments, are critical for the development and maturation of social and emotional skills.

The child learns these linguistic structures and the vocabulary to describe mental activities (thoughts and feelings) after multiple exposures in varied contexts, just as he or she learns any language. Overhearing “self-talk” (of parents, or others in the environment) such as “Where are those car keys?” or “I forgot the doctor’s name” also contributes to the child’s understanding that others have a state of mind that is different from their own. One important way children gain an understanding of others’ thoughts is by attending to the back-and-forth viewpoint exchange of family members. Therefore, the child must be able to track multitalker conversations, a skill that demands the maximum possible auditory brain access to speech at a distance, and in the same language as other talkers in the environment. See MacIver-Lux et al. (2016, pp. 252–253) for a well-documented table of the stages of the development of ToM, listed according to age.

Executive functions are a collection of processes that are responsible for guiding, directing, and managing cognitive, emotional, and behavioral functions, especially for novel problem solving (Sabbagh & Moses, 2006). The primary behaviors that comprise this collection of regulatory and management functions include the ability to initiate behavior while inhibiting competing actions, choosing relevant tasks and goals, planning and organizing problem-solving strategies, shifting attention, behaving flexibly, and monitoring and evaluating behavior (Meltzer, 2007). A very important feature for preschoolers is the inhibition of ineffective behaviors. Many executive functions continue to mature well into the third decade of life, a process that parallels the prolonged neurodevelopmental growth of the prefrontal regions of the brain.

According to Schneider et al. (2005, p. 2), “An interesting question is whether executive functions represent a precursor of ToM, or whether ToM understanding predicts the development of executive functions. It is reasonable to assume that individual differences in vocabulary and verbal understanding are particularly important for predicting performance on executive functioning and ToM tasks in samples of young children (e.g., 3 to 4 years old). For older children, individual differences in working memory and executive functioning, rather than verbal abilities, may be better predictors of ToM performance.” In any case, though, it seems clear that in the early years, developing a ToM and developing rudimentary executive functions both depend on the child’s language development, which in turn depends on the child’s auditory brain access.

Even for normally hearing children, noise in the environment disrupts the ability to access executive functions because auditory focus is fragmented. The detrimental effects of noise on the speech perception of a child with hearing loss are known to be even greater than for a child with normal hearing. Thus, the consequences of noise for the child’s cognitive development (including ToM and executive functions) can be even more detrimental. Acoustic management of the classroom and use of personal RM technologies are crucial.

How to Help a Child Learn to Listen in Ordinary, Everyday Ways

In essence, auditory/linguistic learning consists of the child becoming aware of sound, then connecting sound with meaning, and then understanding more and more complex language — first in quiet circumstances, and then in a variety of more difficult listening conditions. However, several steps need to be taken to provide a facilitative auditory environment for the child to learn to listen and talk (Figure 7–5).

Step 1. The most important step that parents can take to help the process along is to be sure that the child is wearing appropriate, well-maintained amplification (hearing aids and/or cochlear implants) during all waking hours — at least 10 to 12 hours per day.


Figure 7–5.  This mom is speaking close to the microphone on the child’s hearing aid while his visual attention is focused on the toys in front of them. Through her actions in this ordinary play situation, the mother is providing an environment that is highly facilitative of auditory brain access and development, and of linguistic growth.

The importance of keeping the hearing aids or cochlear implants on the child all waking hours cannot be overemphasized. The child’s brain access to appropriate, consistent sound/auditory information is fundamental to learning to listen and talk. Accessing sound means that the child must be wearing the equipment, and it must be functioning well for auditory information to reach the child’s brain. With young children, it is the parents’ responsibility to learn what they need to know to be sure that the child’s technology is functioning in top-notch condition every day through daily listening checks, troubleshooting, having backup accessories on hand, and keeping the audiologist’s phone number on speed dial.

Step 2. The next step that parents can take is to be very conscious of the listening conditions around the child in all communication situations. These conditions include the amount of background noise in the environment and how far away the child is from the person speaking. Background noise includes any noise that could interfere with the child’s ability to hear a person speaking to him or her, or ability to hear himself or herself vocalizing or talking. Background noise includes the computer, TV, radio, cars or trucks going by outside, fan noise, or other people talking — any of those absolutely normal noises can mask a message the child was supposed to hear. Unfortunately, noise is amplified and processed by the hearing aid or cochlear implant right along with the speech signal. Current digital hearing aids and speech processors attempt to pull the speech signal out of the surrounding nonspeech noise, but they are far from perfect. It is important to note that the auditory centers of the brain are being stimulated 24 hours a day for children with normal hearing. For children with hearing loss, it is essential that we optimize the daily 14 hours or so that the child is awake and alert, because every second is valuable listening and learning time. Background noise needs to be absent or minimal to provide the child with the best chance to hear, and/or an RM system should be used to improve the signal-to-noise ratio at home and in school (see Chapter 5).

Step 3. The other part of the listening environment that needs to be managed is the distance between the speaker and the child’s microphone.


Amplification is usually set so that the child can easily hear about 2 yards from a speaker. However, the best distance between the speaker and the microphone for a child wearing hearing aids is still within about 6 to 8 inches, and for a child with a cochlear implant, about 3 feet. Luckily, when they are intentionally interacting, adults are usually talking with babies and young children from very close by. Once the child is mobile, parents need to make sure that they move close to the child and get down on the child’s level so that the child has optimal brain access to the sound of what the parent is saying. Speaking louder will not solve the problem, because although the speech may be louder, it will not be clearer.
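As a rough, simplified illustration of why distance matters so much (this is an idealized free-field sketch, not a measurement from any particular fitting; real rooms add reverberation and reflections), the level of speech reaching the child’s microphone falls off with distance approximately as

\[
L_2 \approx L_1 - 20\log_{10}\!\left(\frac{d_2}{d_1}\right)
\]

so each doubling of the talker-to-microphone distance costs about 6 dB of speech level, while steady background noise (a dishwasher, a TV across the room) stays roughly constant, which means the signal-to-noise ratio at the microphone worsens by about the same amount. Moving from 6 feet away to 6 inches away — a 12-fold decrease in distance — recovers on the order of 20 log10(12) ≈ 22 dB of speech level. This is why moving close to the child, or placing a remote microphone near the talker’s mouth, does far more for clarity than simply raising one’s voice.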

All of this discussion has focused on creating optimal conditions for intentional interacting and communication. As just discussed in the Theory of Mind and Executive Function section, it also is important that the child is provided with the opportunity to learn from overhearing the conversations of others. That is, the child needs to be able to learn from conversations between other people that are not intentionally directed toward him or her. This may require adjustments on the child’s hearing aids or cochlear implants to provide the ability of the brain to receive quiet speech at a distance.

RM technology (discussed in Chapter 5) can help overcome the negative effects of noise and distance by making the speaker’s voice louder than the surrounding noise. RMs can be used in the home, outside on the playground, or in the classroom. Personal RM systems work like a small radio station, in which the speaker wears a microphone and transmitter that sends the signal directly into the receiver attached to the child’s hearing aids or cochlear implant.

Step 4. Last, here are some strategies that parents can adopt to promote the child’s listening abilities in the course of any communicative interaction:

• Hold the child on your lap facing away from you, or sit beside him or her, with the child’s attention directed toward toys or activities that you are both engaged in playing with. Talk close to the child’s microphone without the child constantly looking at you.
• Expect the child to understand without watching your face. If the hearing aids or cochlear implants are properly set, the child will definitely be able to hear you. What the child needs is experience listening in the course of repeated, routine events and play with toys or activities that are of interest.
• If the child does not seem to understand an auditory-only message, try saying it again, closer to the child’s microphone. If the child still does not understand, then help him or her to understand by watching or demonstrating, and then say it again, auditory only. The idea is to help the child build up confidence in his or her ability to understand auditory-only messages.

Two Examples of Auditory Teaching and Learning

What follows are two scenes that illustrate how to create a good listening environment by utilizing auditory strategies. The scenes are each followed by explanations.

Scene I:  Tony

The Context

A father and his 14-month-old son are building with blocks on the floor in the living room after supper. The child has a severe to profound sensorineural hearing loss and has been wearing cochlear implants consistently for 2 months. The mother has been in another room, but at the beginning of this scene appears in the doorway, which is about 10 feet away from the child. The child is playing beside his father, with his back to the doorway where the mother is now standing.

The Action (Takes About 15 Seconds)

The mother waits for a break in the conversation between father and son, and then calls, “Tony!” The father ceases block building, leans toward Tony, puts his head to one side as his face takes on an interested, curious sort of “listening” expression. He says, “Listen! Did you hear that? I heard something” (Figure 7–6). Tony had been looking at the father’s face from the time the father stopped playing with the blocks and leaned forward. When his father continues to maintain the listening expression without saying or doing more, Tony looks back at the blocks and picks one up to start playing again. The mother pauses a second or two, and then calls again, “Tony!” Tony looks up, alertly, at the father. The father says nothing, but still looks as if he is listening to something.




Figure 7–6.  Tony and his father building blocks.

His mother takes a step forward, and calls once more, “Tony! Time for your bath!” This time Tony turns around and sees his mother. She smiles and says, “Terrific! You heard me. It’s time for your bath.” Tony smiles impishly, and toddles off, full speed, in the opposite direction.

The Explanation

The mother waits for a break in the conversation between father and son, and then calls, “Tony!” The mother’s voice is loud enough to be well within this particular child’s range of amplified hearing, so it is reasonable for the child’s brain to detect it. Since the mother is 10 feet away, which
is much farther than a normal conversational distance, this could be viewed as an instance of a distance listening event. She has waited for a break in their conversation, partly because it is conversationally appropriate to do so, and partly because Tony stands a better chance of hearing her if her calling voice occurs in quiet. This is an effort to enhance the figure (signal) relative to the ground (noise). Some children, perhaps at a more advanced stage, may be expected to hear a calling voice in this sort of situation even when there is other talking going on. In any normal, everyday situation in the home, there is quite likely to be other noise in the room (e.g., from the toys at hand, from the street outside, from the dishwasher, from an
older sibling turning pages in a book) that will reach the microphone of the child’s hearing aid or cochlear implant. The signal — for example, the calling voice — must be louder than the surrounding noise for the child to detect it. The degree to which it is louder will be the degree to which it is easier for the child to hear. This simple fact is often easily forgotten in the course of normal activities where, for example, the radio or TV may be left on for long periods of time. In a sense, nearly all normal everyday listening involves selective attention or selective listening abilities. The father ceases block building, leans toward Tony, puts his head to one side as his face takes on an interested, curious sort of “listening” expression. He says, “Listen! Did you hear that? I heard something.” Tony had been looking at the father’s face from the time the father stopped playing with the blocks and leaned forward. When his father continues to maintain the listening expression without saying or doing more, Tony looks back at the blocks and picks one up to start playing again. The father’s behaviors are an attempt to alert Tony and motivate him to attend. Depending on the degree of hearing loss, the technology, absorption in a task, and backlog of positive listening experiences, some children may respond immediately to a calling voice from 10 feet away. Others will need to be alerted to listen. As for the listening expression, it is impossible to actually present an appearance of listening that is unmistakable as listening. That is, one person cannot see another person hearing or listening. But it is possible to see the listener’s reaction to sound, interest, or curiosity about it, which is what the father is demonstrating here.

The mother pauses a second, and then calls again, “Tony!” In this particular instance, a certain number of repetitions of the mother’s calling seems reasonable. However, the real goal is clearly for the child to respond after only one instance of the mother calling. If parents and professionals always quickly give the child multiple repetitions of messages or task demands, the child may come to expect that a second or third repetition is coming, and may not bother to respond initially. One simple-minded but important strategy, which may lead toward the child’s responding after being called once (or after being asked to perform any kind of task, for that matter), is to provide ample time for the child to respond before calling again. (Of course, waiting time is also important from a conversational, turn-taking perspective.) It is equally important to recognize that there are times when the child is not going to respond, and to stop repeating oneself long before it becomes a pointlessly frustrating exercise. Before that, however, there are several steps described below that one can take when it is necessary to enhance the audibility of the acoustic signal. Tony looks up, alertly, at the father. The father says nothing, but still looks as if he is listening to something. This suggests that Tony detected a voice from the surrounding background noise (selective attention) but has incorrectly (but not surprisingly) assumed it was his father speaking to him. He has thus localized the sound incorrectly in that he thought it came from in front rather than in back of him. He is also not attending sufficiently to the higher fundamental frequency and harmonic cues that, one day, will tell him that it was his mother
speaking and not his father. This could be viewed as a discrimination task, with the child being required to discriminate between his mother’s and his father’s voices. In situations where a large number of people could ostensibly be calling the child, the task may be more one of identification. His mother takes a step forward, and calls once more, “Tony! Time for your bath.” This time, Tony turns around and sees his mother. She smiles and says, “Terrific! You heard me. It’s time for your bath.” Tony smiles impishly, and toddles off, full speed, in the opposite direction. The mother’s behaviors are intended to make her acoustic input more salient (i.e., more easily detected) to the child. By stepping forward, she has somewhat decreased the distance from the sound source (her mouth) to the microphone on the child’s hearing aid. This will make the amplified information a bit louder for the child’s brain. By also adding the sentence “Time for your bath!” she has provided a longer piece of acoustic input. The child may not be able to understand this bit of information auditorily, but the mother’s primary purpose here is simply to provide a lengthier clue to the child to help in his localizing (the source of the calling voice) and associating (that voice = mother) tasks. Naturally, it is important that she has used a contextually and prosodically appropriate, semantically meaningful, and syntactically correct sentence. These aspects of her input will eventually lead to Tony’s understanding. By doing all of these things in all of the hundreds of small events that occur in a day, the mother is building up Tony’s belief that he is expected to understand spoken messages, with or (often) without looking at the speaker’s face. In addition, Tony’s
mother has rewarded his turning by smiling and by using words, which describe exactly what it was that he did that was pleasing. These parent responses are intended to be part of motivating Tony to want to listen the next time(s). One interpretation of Tony’s behavior in this scene could be that Tony soon detected the sound, localized it to a source behind him, identified the mother as the speaker, and comprehended the verbal message. Another interpretation could be that Tony detected the voice, saw that it was not his father speaking, began searching visually for the sound source, and happened to find his mother in the process. Then he saw the pajamas in her hand and, knowing what that meant, toddled off. In a research study, it might be important to try to determine which of these interpretations is correct. But in a real-life situation, it does not much matter. The parents are not testing his ability to perform listening tasks. In this normal, everyday event, they are providing all the conditions that are likely to encourage and promote Tony’s learning to listen and talk. These include the provision of appropriate input in terms of the prosody, intensity, syntax, semantics, and conversational context. The conditions also include provision of motivation and social or circumstantial rewards. If, this time, Tony saw his mother in the course of visual searching for “a voice — not father’s,” then this experience will simply be part of the learning process. If he did actually localize the source and identify the speaker auditorily, then this experience is an example of his having learned. Either way, at every future opportunity, the parents will continue to make the same effort to set up events in such a way as to allow, encourage, and expect Tony to listen.


Scene II:  Tamara

This next scene is an example of a normal, everyday interactive event between a mother and her slightly older child. The mother smoothly and automatically employs a number of strategies to encourage the child to listen.

The Context

This child is 24 months of age, has a profound sensorineural hearing loss, and wore hearing aids consistently from 8 to 12 months of age but received very little benefit from them. At 12 months of age, she received bilateral cochlear implants. Consequently, she now has a cochlear implant listening age of 12 months. The mother and child are playing on the kitchen floor with blocks, cars, and little people dolls. They have just built a very tall tower with the blocks. The event took about 30 seconds in all.

The Action

Tamara says, “Where block?” as she looks around for another block on the floor. She is standing up since the tower they are building is so tall. Guessing what Tamara meant, the mother says, “Where’s another block?” Tamara is not watching the mother’s face since she is still searching for blocks. Tamara says, “’Nother block?” The mother says, “Hmmm — well, let’s see.” The mother looks for more blocks; Tamara turns away. The mother says, “Oh — here’s one.” Tamara turns back, and the mother hands her a block. The mother then says, “Thank you.” Tamara says, “What?” and looks at her mother. The mother rubs her nose, partially obscuring her mouth, and says, “You can say ‘Thank you.’” Tamara again says,
“What?” This time the mother lets Tamara look at her face as she says, “You’re supposed to say ‘Thank you.’” Tamara repeats, “Thank you.” Tamara puts the block on the top of the teetering tower, which then topples. She steps back (right on the mother’s finger), and says, “Uh oh!” looking at the blocks that fell. The mother says, “Ouch! My finger!” (Figure 7–7). Tamara looks at her mother who is shaking her finger. The mother says, “You stepped on my finger! You gave Mommy a boo-boo.” Tamara looks stricken, and softly says, “Boo-boo,” looking at the finger. The mother says, “What are we going to do about it?” Tamara says, “Kiss boo-boo,” and she does. The mother hugs Tamara, and says “Oh thank you. It’s all better now.” Tamara repeats, “All better now.”

The Explanation

Tamara says, “Where block?” as she looks around for another block on the floor. She is standing since the tower they are building is so tall. Guessing what Tamara meant, the mother says, “Where’s another block?” Tamara is not watching the mother’s face since she is still searching for blocks. Tamara says, “’Nother block?” The mother is expanding Tamara’s utterance in her question as she attempts to clarify her meaning and confirm her guess at what Tamara meant. This type of adult response/question is an especially important one to use in connection with expecting that the child will understand without having to look at the speaker’s face. The child is likely to be motivated to attend to the mother’s utterance since the mother is not introducing new concepts or topics.




Figure 7–7.  “Ouch! My finger!” says the mom as Tamara steps back onto it, oblivious in her glee about the tower toppling.

Instead, she is using the child’s topic, showing her interest in clarifying her meaning, and adding only a small amount of relevant information to her original utterance. Clarification questions and expansions are strategies parents use that are likely to promote the child’s language growth. They are also likely to promote the child’s listening. In this instance, Tamara was not looking at the mother’s face when the mother spoke. The fact that she then imitated the mother’s question, employing the mother’s word “another,” suggests that Tamara did get the mother’s message through listening alone. The mother says, “Hmmm — well, let’s see.” The mother looks for more blocks; Tamara turns away. The mother says, “Oh — here’s one.” Tamara turns back, and the mother hands her a block. The mother actually contrived this listening opportunity. That is, she waited until Tamara had turned away to say, “Oh — here’s one.”

The mother then says, “Thank you.” Tamara says, “What?” and looks at her mother. Here, Tamara detected the voice, but did not understand what was said. The mother rubs her nose, partially obscuring her mouth, and says, “You can say ‘Thank you.’” Tamara again says, “What?” This time the mother lets Tamara look at her face as she says, “You’re supposed to say ‘Thank you.’” Tamara repeats, “Thank you.” Tamara puts the block on the top of the teetering tower, which then topples. She steps back (right on the mother’s finger), and says, “Uh oh!” looking at the blocks that fell. In instances where it is desirable to insist on auditory-only input, and the child is looking directly at the adult, there are a number of strategies one can employ. The most obvious is for the speaker to
cover his or her own mouth or to otherwise obscure the child’s visual access to the mouth. Less obvious is directing the child’s visual attention away from the face. Here the mother could have gestured to the block and said, “Look, Mommy gave you a block.” When the child’s gaze went to the block, she would then say, “You’re supposed to say ‘Thank you.’” In many instances, it is possible to simply sit behind or beside the child in such a way as to discourage the child from constantly watching the face. After two attempts to give the message auditorily only, the mother let Tamara look at her face as she spoke. There is controversy regarding how long one should “stay auditory,” but in everyday situations, the most reasonable course may be to stay auditory as long as it is possible without destroying the conversational flow or unduly frustrating either adult or child. The mother says, “Ouch! My finger!” Tamara looks at her mother who is shaking her finger. The mother says, “You stepped on my finger! You gave Mommy a boo-boo.” Tamara looks stricken, and softly says, “Boo-boo,” looking at the finger. The mother says, “What are we going to do about it?” Tamara says, “Kiss boo-boo,” and she does. The mother hugs Tamara, and says, “Oh thank you. It’s all better now.” Tamara repeats, “All better now.” Tamara turned to the mother’s loud “Ouch!” The mother shook her finger because it hurt, but that also meant Tamara looked at the finger while the mother said, “You gave Mommy a boo-boo.” Thus, Tamara’s repeating of “boo-boo” is likely to be based on her having actually heard what her mother said. In hugging Tamara, the mother deprived her of visual access to her face. Tamara’s imitation of “All bet-
ter now” confirms that she heard and listened to her mother’s utterance. In a normal, everyday event such as the second one illustrated here, a general motivation to listen is assumed to have been established already, although every instance of listening and understanding is likely to build upon that motivation. Selective attention is operating because the child is listening in a normally noisy environment (block play); localizing is not much of an issue at the moment since the mother is the only other speaker. Each instance of the child’s having understood when she was given access only to the auditory part of the message (through the techniques described above) is an instance of the child using her abilities to discriminate, identify (i.e., categorize, associate), as well as integrate (i.e., interpret, comprehend). This example will have been successful if it also illustrates why and how these higher-level processes are all part of real-life instances, as well as why and how they are inseparable from the child’s overall cognitive, social, and linguistic development and abilities.

Targets for Auditory/Linguistic Learning

Targets for auditory/linguistic learning can be found in Appendix 3. The targets are divided into four phases of auditory development:

• Phase I:  Being Aware of Sound
• Phase II:  Connecting Sound With Meaning
• Phase III:  Understanding Simple Language Through Listening
• Phase IV:  Understanding Complex Language Through Listening

Across the phases, the individual targets indicate abilities the child demonstrates as he or she becomes aware of sound, then connects various sounds with meaning, and then gradually acquires the ability to understand more and more complex language. The chart can be used to understand the general progression of auditory/linguistic learning demonstrated by the child who is learning spoken language through listening. It also can be used as a record-keeping device, with dates noted for the child’s increasing abilities as described in Appendix 3, and could be used as the basis for developing instructional objectives related to specific abilities. The four phases represent sequential milestones in the auditory/linguistic development of the child who has hearing loss and is being taught spoken language through listening. However, individual items within each phase are not necessarily sequential, although some are logically so. This is not intended to be used as a rigid curriculum. Sometimes activities and abilities at higher levels are appropriate to try, even though the child may not have completed all of the targets at a lower level. In addition, an appropriate game or activity could easily tap multiple auditory/linguistic targets simultaneously. Examples or indicators have been included to clarify what is intended by particular targets, but the intention is that parents and interventionists will take advantage of all kinds of normal everyday events and routines where these targets will be easy to “fall over” in ordinary ways. When there is a need for more adult direction and focused structuring of preplanned activities, then the parent’s or practitioner’s creativity will need to be used to develop activities that provide sufficient practice while remaining playful.

A Last Word

The structure, and the knowledge of the importance of the child’s ability to demonstrate the auditory abilities, need to be more in the mind of the teacher, practitioner, or parent, and generally not particularly obvious in the activities occurring with the child (Cole et al., 2005). Pollack (1970) said long ago that the child’s auditory abilities develop “because emphasis is placed on listening throughout [all the child’s] waking hours so that hearing becomes an integral part of [his or her] personality” (p. 159). Similarly, Ling (1986) said that “learning to listen occurs only when children seek to extract meaning from the acoustic events surrounding them all day and every day” (p. 24). At the same time, as mentioned above, for certain children with hearing loss, some structured practice on component parts may be useful as a supplement to the emphasis on audition in normal, everyday activities. With young children, this structured work will normally be done through playful games that are meaningful and enjoyable for the child. Excellent activities of this sort are detailed in a number of sources that the reader is strongly encouraged to consult, including both print materials (Estabrooks, 2014; Estabrooks et al., 2016; Rossi, 2003; Sindrey, 2002) and online resources such as The Listening Room (http://www.thelisteningroom.com), the HOPE materials and courses (http://www.cochlearamericas.com/support), and Hearing First (http://www.hearingfirst.org). With regard to these more structured activities, however, it is important to close the chapter by reiterating two themes:

1. The skills and abilities being worked upon are theoretical constructs; consequently, the specific exercises may be somewhat arbitrarily derived.

2. Whenever the child learns to perform any task in a structured activity or game, the task must then be integrated into real-life situations to be functional for him or her. That is, provision must be made for allowing, encouraging, and expecting the child to demonstrate the same ability in the course of normal everyday routines and activities. The ideal situation occurs when the task or skill practice can be integrated into a daily routine or interactive play. Clearly, the more similar to everyday life an adult-planned activity is, the greater the likelihood that the child will automatically apply what has been taught to everyday life events.

8 Spoken Language Learning

Key Points Presented in the Chapter

• Communicative competence includes knowledge of pragmatics, semantics, syntax, morphology, and phonology.
• By the age of 3, most typically developing children have basic communicative competencies in all areas; by the age of 6, sophisticated communicative competencies are well established.
• There are a number of theories regarding how language develops, although we still do not know precisely what causes a child to move from one developmental stage to the next.
• If appropriate auditory brain access (through technology) and linguistic experiences are provided to most children who are deaf or hard of hearing from an early age, then cognitive and linguistic functioning can be expected to follow the normal course of development.


Introduction

Because this book is so centrally concerned with spoken language learning through listening by infants and young children, it is appropriate to begin with a discussion about all that is involved in communicating verbally, and about how much of that learning is accomplished at a very young age. Theories about how children learn to talk are also addressed in this chapter, and are followed by consideration of the relevance of the nature versus nurture controversy for intervention. The final section in this chapter concerns an overall perspective of listening and spoken language intervention for children with hearing loss, based on current informed views of the effects of hearing loss on psychological and cognitive processing.

What’s Involved in Talking?

The most important thing to know about language is that it is a social tool. Language is a shared code that lets people send and receive messages; that is its purpose. Hymes (1972) was the first to use the term “communicative competence” in talking about people effectively and appropriately exchanging messages. He describes communicative competence as the speaker’s knowing not only what to say, but also how to say it, when to say it, where to say it, and to whom to say it. Knowledge of the interpersonal, cultural, and linguistic conventions that govern appropriate use of language is what is known as the area of pragmatics. To use language appropriately, the child gradually acquires three interwoven domains of communicative knowledge or pragmatics: intentionality (using language to express things, to respond, or to accomplish things), presuppositions (what to say to whom, how, and when), and discourse conventions (the rules for how to carry on a conversation with another person). These will each be briefly discussed.

Intentionality/Speech Acts

As was said at the beginning of this section, the purpose of language is to send messages back and forth between two (or more) people. The adult speaker sends a message with the intention of expressing or accomplishing something. A newborn baby attends to and responds to stimuli, but probably does not communicate with intentionality for some time. However, most parents don’t really care whether or not the child actually intends to express particular things. The adults tend to interpret most noises and utterances from the baby as communicative, which probably helps move the baby toward actually being intentional. For example, even when the child does something such as burp or sneeze, the parent responds as if it were meaningful by saying, “Oh, do you feel better now?” or “Gesundheit!” And the baby may respond by looking at the parent or smiling, which can encourage the parent to continue the interaction, thus contributing toward building the foundation for language. By about 7 to 9 months of age, the baby starts to become more consistent in the use of particular vocalizations for expressing particular things, and accompanies the vocalizations with increasingly consistent gestures. The appearance of these protowords (consistent vocalizations used in specific contexts) is extremely
important, as it is one of the clear early signs of communicative intentionality. An example would be a child who consistently says “ae-ae” and raises his arms to be picked up (Menn, 2013). By 8 to 12 months of age, the baby does intentionally gesture and vocalize to demand, ask, label things, notice, and protest. The child’s repertoire of speech acts increases so that by age 3, the number of functions has expanded to at least 50 speech acts, including more sophisticated intentions such as teasing, acknowledging others, correcting others, and joking. By age 5, the child’s speech act repertoire is ever more adultlike, with the child doing things like hinting, promising, and insulting others (Dore, 1986).

Presuppositional Knowledge

Presuppositional knowledge involves the speaker’s automatic guesses about the general format or “register” that is appropriate, as well as the level of detail and explicitness that is needed for expressing oneself in a particular social context (Grice, 1975). Examples of presuppositional knowledge influencing the register used by the speaker would be the changes one makes in one’s language in speaking to a friend about a film, versus speaking to a senator in advocating for an issue, versus greeting a cousin’s baby for the first time. The child must learn that it is important to attend to and interpret particular features of the listener (e.g., age, sex, social and educational status) and the setting to determine the degrees of informativeness and politeness that are appropriate to a particular situation. This knowledge contributes to the types of sentence structures used, the directness or indirectness of reference, and a host of other gram-
matical choices such as how much to use pronouns, subordinate clauses, and adjective embeddings. Although presuppositional knowledge is added even in adulthood, during the years between ages 3 and 5, preschoolers provide evidence of their increasing presuppositional knowledge when they begin doing some of the following:

• adjusting the amount of information they provide depending on whether or not the topic of the conversation is in the immediate present and known to the listener;
• appropriately using definite versus indefinite articles, pronouns, and demonstratives; and
• adjusting their syntax and vocabulary to be simpler in communicating with babies and toddlers (Hulit, Fahey, & Howard, 2014; Owens, 2015).

Children typically gain presuppositional knowledge through observation of others (overhearing), as well as through explicit teaching (e.g., “We don’t use words like that at home, especially not at the dinner table.” Or: “Mr. Shepard doesn’t know who Taffy is — you’ll need to tell him.”).

Discourse/Conversational Conventions

The third of these interrelated domains of communicative knowledge concerns the conventions regarding the social organization of discourse — or the mostly unspoken rules for maintaining a conversation with another person. These include conventions about taking turns talking in conversations,
about how to initiate a topic or maintain it, and about what to do when there is a breakdown in the communication.

Examples of Discourse/Conversational Conventions

All of these conversational conventions are typically acquired by children between the ages of 3 and 5 years.

• Turn-taking, with each person taking turns at talking and not overlapping each other.
• Following the Gricean maxims (1975) by providing appropriate quantity, quality, relevance, and clarity for the listener.
• Learning the following:
  – How to initiate a topic, as well as how to maintain a topic already established.
  – How to change a topic.
  – How to close a conversation.
• Seeking clarification of the speaker’s message through strategies such as asking the speaker to repeat themselves, or providing all of the sentence except what you missed (e.g., “What did you say?” “I didn’t get that.” “You said you went to the _____?”).

Other Essential Rule Systems in English

The area of pragmatics has been addressed in a bit of detail because it is often not well known to professionals and parents, and because it is important to know how many of the normal things adults do in interacting with infants and young children are
actually building important aspects of language. In addition to pragmatics, other rule systems involved in learning to talk are the semantic, syntactic, morphologic, and phonologic systems. Semantics refers to knowledge of networks or hierarchies of meaning for individual vocabulary words (i.e., lexicon), as well as knowledge of meaning relationships of words in sentences (i.e., case grammar). The semantic/syntactic rule system is concerned with the permissible ways of ordering words within sentences to express particular meanings, and with ways of transforming basic sentences into more complex structures. Morphologic rules are concerned with the permissible combinations of free morphemes with affixes (such as “un” in “unhappy” or “ness” in “happiness”), and with inflectional morphemes, which indicate grammatical features such as tense (“The dog walked.”), person (“The dog walks.”), or number (“The dogs walk.”). The phonological system includes all of the speech sounds in the language, the permissible ways of combining those sounds into words, and the conventional ways in which the language uses prosodic features such as intonation, stress, and rhythm to express meaning. Rules for semantic, syntactic, morphological, and phonologic usage are all influenced by the pragmatic conditions of the communicative event, such as the speaker’s guess about how explicit, detailed, or complex it is appropriate to be for a particular listener. Figure 8–1 is intended to illustrate the overriding importance of the pragmatics of each communicative situation in influencing the semantic, syntactic, and phonologic choices made by the speaker. Extensive discussion of all of these areas of language is outside the scope of this book.




Figure 8–1.  By the age of 3 years, typically developing children have a solid foundation in all of the interrelated areas of language.

The reader is referred to texts such as Owens (2016) or Paul, Norbury, and Gosse (2018) for detailed and well-referenced discussions of each area. Clearly, learning all the conventions for the pragmatics, semantics, syntax, morphology, and phonology of a language is no small task. The wonder is that by the age of 3 years, most normally developing children have knowledge in all of those areas, and by age 5, their knowledge of the conventions of spoken language allows them to communicate easily and appropriately with a wide variety of speakers; this language knowledge has also laid the foundation for learning to read. It also is a fact that, given early and appropriate auditory and linguistic experience, the language development of many children with hearing loss can be
remarkably similar to that of their same-age peers (e.g., Dornan, Hickson, Murdoch, Houston, & Constantinescu, 2010; Eriks-Brophy, Durieux-Smith, Olds, Fitzpatrick, Duquette, & Whittingham, 2012; Fulcher, Purcell, Baker, & Monro, 2012; Geers & Moog, 1989; Geers, Moog, Biedenstein, Brenner, & Hayes, 2009; Geers et al., 2011; Hayes, Geers, Treiman, & Moog, 2009; Nicholas & Geers, 2006; Nittrouer & Burton, 2001; Nott, Cowan, Brown, & Wiggelsworth, 2009; Sininger, Grimes, & Christensen, 2010; Wang, Shafto, & Houston, 2018; Yoshinaga-Itano, Sedey, Coulter, & Mehl, 1998). Children whose hearing losses have been identified early, with appropriate amplification worn full time (at least 12 hours per day) and copious verbal interactions
provided, can be expected to follow a normal development sequence in their learning to listen and talk. This means normal developmental checklists can be used to describe the child’s approximate level of development and to anticipate what language milestones are likely to be achieved next. The professional and parent can then provide copious meaningful verbal interacting, which will support the child’s moving on to those next achievements. A normal developmental checklist can also be used to see if any of the targets at the child’s level (or below it) happen to be missing or only partly acquired, and to provide experiences that will help the child fully acquire all of the targets that are appropriate for his developmental level.

How Does a Child Learn to Talk?

The fact is that despite many theories and a great deal of research, we still do not really know just how and/or why language develops. Even though “development” implies change, present developmental research and theories seem to be best at describing what goes on within the “plateaus of nonchange,” as Snow and Gilbreath described in 1983. In recent years, great advances have been made in brain imaging and mapping through technology such as computed axial tomography (CAT) and positron-emission tomography (PET) scans, magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and diffusion image scanning. However, although we know which regions of the
brain are activated with particular kinds of stimulation, we still do not know precisely how neurons in the brain represent human knowledge of any kind. We also don’t know how or why the child moves from one kind of knowledge or one developmental stage to the next. This is not unlike the situation in other areas of human development, where the motivation or the mechanisms for the transitions from stage to stage are still basically unexplained through research findings. Theoretical explanations of language development have traditionally fallen into either “nature” or “nurture” categories at opposite ends of a continuum. The nativist point of view is that language acquisition can be accounted for by innate, internally generated processes (a view usually attributed to nativists such as Chomsky, 1993). The extreme opposite point of view is that of behaviorism, where all learning is considered to occur through externally controlled events (the perspective of behaviorists such as Skinner, 1957). However, over the years, empirical evidence has not provided singular support for either the extreme nativist or behaviorist explanations for language acquisition in typically developing children. Enduring alternatives to the nature or nativist explanations are the interactionist (Braine, 1994) or constructivist explanations. In essence, the interactionist perspective is that both innate and environmental factors determine development. Innate, internal, developmental abilities or propensities (some of which may or may not be specifically linguistic) interact with environmental factors and experiences in ultimately building up the child’s knowledge of language. The child’s social
context interacts with his or her natural, genetic endowment in determining and promoting the acquisition of linguistic behaviors. Examples of the internal factors could include processing biases such as a capacity for symbolic representation, selective auditory and visual attention and processing, skill at “chunking” speech segments, skill at analyzing sequential information, categorical perception, a propensity for attempting to search for order and/or causality in events, and memory (Bates, Benigni, Bretherton, Camaioni, & Volterra, 1979; Braine, 1988). As Bates and Goodman (1999) put it, the question is no longer one of nature versus nurture, but one of exploring the “nature of nature” (p. 33). The debate concerning nature is now about whether language acquisition is based on language-specific abilities, or based on general-purpose cognitive abilities that apply to language learning as well as to other kinds of learning. Consider for a moment the fascinating time when the child first begins to use words to communicate. To support the appearance of those first words, a number of hitherto apparently separate threads of development must converge. The Bloom (1983), Bloom and Lahey (1978), and Lahey (1988) division of language development into form, content, and use components is a useful construct for this brief discussion. Form development during the first 12 months of life includes multiple and dramatic changes occurring in the child’s vocalizations as they gradually become more and more speechlike (Oller, 1995, 2000). Content development includes the child’s gradual acquisition of differentiated repertoires of behaviors for using, pursuing, and/or dealing with objects and with people (Sugarman, 1984). Early use development includes the child’s
integration into the basic back-and-forth rhythm of human interaction (turn-taking), of being responsive and having others respond to him, such as in Figure 8–2, where the mom says “Push!” and waits for the child to respond in some way. Bloom (1983) suggests that the transition to the use of words comes about when these separate threads of development begin to intersect. That is, by about 12 months of age, the child knows how to behave interactionally with others, the child’s vocal productions are speechlike, and he or she knows, for example, that a person can be used to get a hard-to-reach toy. Words will appear when all of these types of knowledge begin to converge. Clearly, this is a very brief presentation of an extremely complex and multifaceted set of developments. However, there is intuitive validity to Bloom’s concept of developmental transitions occurring when separately developing threads of abilities begin to converge. To return to the nature versus nurture issue, the foregoing discussion is an excellent example of the fact that we can describe what is acquired as the child develops, but we cannot account for why the changes in ability or behavior occur. Bloom’s (1983) view is that the transitions to new levels of functioning require insight or intuitive understanding on the child’s part. Insight and intuitive understanding are unarguably internal to the child, as is the child’s propensity to begin searching for understanding of the objects and people around him. But these innate abilities would have little value without the environment providing the objects upon which to act or the people with whom to interact. Both nature and nurture seem to be inextricably implicated as mediators in the acquisition of language.


Figure 8–2.  Daily routines such as putting on shoes provide opportunities for varied, but repeated use of words. The child is putting on shoes; Mom models the form (e.g., “Let’s put on your shoes! Push in your foot.”), and waits for the child to respond in some way as part of the interaction.

Relevance for Intervention Decisions

For the educator or therapist with children and families needing immediate attention, the nature versus nurture debate may be interesting but immaterial. With regard to the child’s nature, some aspects of the child’s presenting repertoire are undoubtedly genetically endowed and/or internally generated. This may include aspects such as information processing rate, learning style (e.g., analytic, holistic, or some combination thereof), temperament, and the child’s facility with language learning strategies such as the elicitation, entry, and expansion operations proposed by Shatz (1987). The child’s genetic endowment clearly cannot be ignored. But for the interventionist, ethical practice demands that all of the child’s abilities and inabilities must be considered amenable to influence or teaching until demonstrated otherwise.

With regard to the nurturing of the child, there is merit in the notion that language learning is a highly buffered or protected process, and that “the environment needs only to be sufficiently rich that the learner can at any point in time discover what [he] needs . . . ” (Shatz, 1987, p. 4). This may be so for normally developing and normally hearing children. But for children with language-learning challenges (including those imposed by hearing loss), there is evidence that differences in the environment can have considerable impact on the child’s language development. Even for normally developing children, the defining characteristics of the “sufficiently rich” environment for language learning have yet to be unequivocally articulated. In any case, interventionists think more in terms of attempting to determine the components of an optimally rich environment, rather than a sufficiently rich one. Some would say that the sufficiently rich environment is the optimal one, in fact. The environment, even with all of its individual and cultural variability, remains an extremely likely candidate for profitable examination and guidance toward becoming as able to facilitate language as present scientific knowledge permits.

How Should Intervention Be Organized?

The question that then arises for the interventionist is one of how to organize all of the kinds of learning and knowledge to assess the status of the child with hearing loss, and then implement a program intended to guide and promote the child’s progress. To answer that question, one needs to consider two contrasting views regarding the effects of hearing loss on psychological and cognitive processing. Studies that have set out to define the effects of deafness on psychological and cognitive processing tend to be unavoidably fraught with weaknesses in experimental design. Sources of experimental weakness include the relatively low incidence of congenital sensorineural deafness, which results in a small subject pool, complicated by the myriad of factors relating to individual differences that can affect the individual’s use of residual hearing and learning of language. Research planning is often further limited by ethical questions regarding the withholding of habilitative treatment to establish control groups. The result is that, for nearly every observation or research finding in this field, there is a counterfinding. Consequently, one is left with several points of view and many questions still open to research endeavors. A traditional view is that due to hearing loss, the child’s psychological and cognitive abilities and strategies are somehow different, or at least reorganized. The result of this reorganization is language usage and understanding that are different from those of people with normal hearing (Furth, 1966; Myklebust, 1960). Proponents of this view tend to base their arguments on observations of


people who are culturally Deaf and whose primary mode of communication is sign language. It seems quite logical that one’s language usage and understanding would require a different sort of cognitive processing for a language whose semantic and grammatical subtleties are encoded visually, rather than acoustically. But these differences would seem to be more related to the modality or base (visual versus acoustic) through which the language is learned and used — rather than due to the sensory or neural damage or differences in the child’s cochlea or eighth nerve. In any case, espousing the view of cognitive and psychological difference resulting from deafness might logically lead one to the circular belief that visually based language is the “natural” language of the deaf. A diametrically opposed view to the traditional one is that if adequate auditory input (through use of technology such as hearing aids and cochlear implants) and linguistic experience are provided to most children who are deaf or hard of hearing from an early age, then cognitive and linguistic functioning can be expected to follow the normal course of development (Dornan, Hickson, Murdoch, Houston, & Constantinescu, 2010; Eriks-Brophy, Durieux-Smith, Olds, Fitzpatrick, Duquette, & Whittingham, 2012; Fulcher, Purcell, Baker, & Monro, 2012; Geers & Moog, 1989; Geers, Moog, Biedenstein, Brenner, & Hayes, 2009; Geers et al., 2011; Hayes, Geers, Treiman, & Moog, 2009; Nicholas & Geers, 2006; Nittrouer & Burton, 2001; Nott, Cowan, Brown, & Wigglesworth, 2009; Sininger, Grimes, & Christensen, 2010; Wang, Shafto, & Houston, 2018; Yoshinaga-Itano, Sedey, Coulter, & Mehl, 1998). Differences in the child’s language ability are seen as due to lack of experience and exposure to auditory/linguistic

information; as delay rather than as deviance. The sequence of language learning is expected to include normal processes such as the intertwining of linguistic and cognitive activity, which is why it is important to share a variety of interesting activities with the child — and talk about every aspect of the experiences — as the mother in Figure 8–3 is doing with her baby. That is, assuming early and appropriate auditory, communicative, and linguistic stimulation, the early talking of the child with hearing loss is likely to mirror that of his or her hearing peers. For example, early semantic and syntactic performance will focus on objects and people in the immediate environment; the presence, recurrence, and/or disappearance of objects; actions related to the child; possession; basic characteristics of objects; locations in space; and cause and effect (Bloom, 1973; Nelson, 1973; Schauwers et al., 2005). The child is also likely to exhibit normal “errors” such as overgeneralizing or over-restricting syntactic and semantic rules during the learning process (Kretschmer & Kretschmer, 2001).

Assuming audiologic and technological management is rigorous and appropriate, providing enriched verbal interaction and conversation becomes the most important priority to promote the child’s linguistic and cognitive development. The most reasonable course to follow in carrying out intervention for children with hearing loss is thus one of replicating (or reestablishing) the overall framework of a normal language-learning environment. The child is expected to develop normal psychological and cognitive function and to follow the usual developmental sequences.




Figure 8–3.  Activities such as cooking together provide ways to enrich the child’s exposure to new experiences and language such as by talking about pouring the water, mixing up the batter, rolling it out, cutting out stars or other shapes, putting the cookies in the oven, knowing the oven is hot, being careful, taking out the cookies, blowing on them to cool them off, taking a bite; and by saying, “Yummy!”

However, the provision of adequate auditory and linguistic stimulation requires embellishments of the normal situation, which undeniably results in it being abnormal to some degree. One such embellishment would be the presence of the child’s hearing aids, cochlear implant, and/or remote microphone system. Other embellishments would include the parent consciously employing specific techniques to help the child learn to use his residual hearing, such as sitting close by (either behind or beside) the child when speaking to him, and using an interesting and animated voice when speaking. Another necessary embellishment might be increasing the frequency of caregiver–child interacting events throughout the day, specifically as an attempt to remedy the child’s decreased exposure to spoken language. That is, the child who is deaf or hard of hearing may have a lessened ability to profit from each instance of interaction, as well as a lessened (or at least initially, nonexistent) ability to overhear others’ interactions with each other (Ling, 2002). Consequently, to achieve a desirable level of support, it may be necessary to establish an environment where interacting occurs more consistently and frequently than it otherwise might (Estabrooks, 2006; Estabrooks et al., 2016; Snow, 1984).

The overall goal of the parents and of the intervention program is to acknowledge and cope with the differences related to the hearing loss while attempting to put these differences into perspective in a scene where most other aspects of interacting with their child will be normal.

9 Constructing Meaningful Communication

Key Points Presented in the Chapter

• The essence of fluent communication between parent and child consists of sending and receiving linguistically encoded messages.
• Language is typically acquired in the course of normal everyday routines and play.
• The affective relationship between the caregiver and child has fundamental importance for the child’s optimal acquisition of language.
• Early learning about communication that the young child acquires includes knowledge about joint reference, turn-taking conventions, and the signaling of intentions.
• Research on caregivers in mainstream middle-class North American, British, Australian, and other cultures demonstrates the use of a number of characteristic behaviors in adult talk with young children. These characteristics include content, prosodic features, reductions in semantic and syntactic complexity, increased repetition of their own utterances and those of the child, employment of strategies to clarify meaning or to keep the child talking, and increased responsiveness to the child.
• There are a number of theories about why caregivers engage in motherese or parentese, as well as about whether or not such infant-directed talk actually is necessary or contributory to the child’s language development.
• A child with no identified disabilities may develop language in the midst of a number of different language learning conditions. However, for a child with a hearing loss, it makes sense to strive for what research indicates is occurring in optimally supportive language learning environments. Professionals working with children and families from diverse cultures may find that some aspects are applicable to their own situations, and some are not.

Note to the reader.  Throughout this chapter, the reader will note that although the citations are updated, I have also deliberately retained citations, even prior to 2000, that represent seminal work in communication and spoken language development.

Introduction

This chapter focuses on the communication and spoken language developments that occur for normally hearing children between birth and 6 years of age. This period begins with a time where the child is not actually using words — the time often referred to as prelinguistic. It continues to a time when the child is interacting in a broad variety of conversational contexts with different people, nearly always using complex, grammatically correct, mostly intelligible, multiple-word utterances. The present discussion concerns the normal (presumably language facilitating) interactive environment whose basic features remain important throughout early childhood in mainstream English-speaking, middle-class families in North America, the United Kingdom, and Australia. This provides the theoretical background and rationale for the Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children with Hearing Loss in Chapter 10, with item explanations in Appendix 4. The information in this chapter is heavily based on research studies that are the foundation of theory and practice in the

fields of speech-language pathology and deaf education today. Let us begin by considering what happens when a parent and 3-year-old have a conversational exchange.

Language develops in the course of interactions between parent and child during routine, everyday play and caregiving activities, the more the better (Ambrose, VanDam, & Moeller, 2014; Szagun & Stumper, 2012).

The essence of fluent communication between the parent and child consists of sending and receiving linguistically encoded messages. Early interactions may be as simple as the mother and child gazing into each other’s eyes for several moments as the baby is feeding. By the time the child is 3 years of age, the caregiver or child can seek out the other and verbally interact for any of the following purposes: to enjoy, share, request, assist, inform, and/or learn about the world (Dore, Gearhart, & Newman, 1978). For any communicative attempt by either interactant to be successfully received, there are two necessary conditions: first, a shared focus or topic must be established; and second, the message must be relevant to and interpretable by the listener (Bruner, 1975; Grice, 1975). That is, initially the parent and the child need to be jointly engaged in some activity, or looking at each other or an object. Furthermore, the comment or question




that is made needs to be of interest to the listener, as well as understandable. When the initiating attempt is successful, the receiver and sender quickly exchange roles, and a linked communicative chain ensues; the two are thus engaged in an intentional communicative interaction. The communicative aspect relates to the cycle of shared understanding (messages) sent and received, multiple times. The interactive aspect refers partly to the back-and-forth nature of the event (taking turns in a conversation), and partly to the fact that each partner actively derives meaning about the message from details of the participation of the other in the event. But how do the parent and child arrive at this felicitous level of fluent conversation and interacting? This discussion of the development of communicative interacting will address three related areas, which are illustrated in Figure 9–1:

• the affective relationship between parent and child,
• the child’s development of interactional abilities, and
• features of caregiver talk that may facilitate the child’s early development of verbal communication.

Figure 9–1.  Fluent conversation and interacting depend on the development of all three related areas going into the hopper in the drawing.

The Affective Relationship

It is widely accepted now that the affective relationship between parent and child has fundamental importance for all areas of the child’s development, including the child’s optimal acquisition of language (Cassidy & Shaver, 2008; Dirks & Rieffe, 2018; Goldberg, 2013; Thoman, 1981). The question then arises regarding the nature of the affective relationship that would be most supportive of language learning (Barnett, Gustafsson, Deng, Mills-Koonce, & Cox, 2012). Both the parent and the child play essential roles in the early relationship. It is interesting to note the adjectives scattered throughout one scholar’s discussion of “personhood” and the interpersonal motivation for learning language. The environment provided by the parent during at least the early interactions was variously referred to as loving, rational, caring, thoughtful, caressing, and joyous (Dore, 1986). This nurturing environment is not unlike the environment Clarke-Stewart’s (1973) seminal research found to be most facilitative throughout infancy for social-emotional, intellectual, and linguistic development: an environment that was “not only warm, loving, and nonrejecting, [but also] stimulating and enriching visually, verbally, and with appropriate materials . . . and . . . as well, immediately and contingently responsive” (p. 47). On the other hand, the socially competent infant gives clear,

readable clues about what is wanted, is responsive when others approach him or her, and captures the parent’s ongoing attention by making the parent feel as if he or she is competent. The importance and mutuality of influence of a positive relationship between parent and child for all aspects of the child’s development, including language, were thoroughly demonstrated through Bromwich’s longitudinal study in 1981. Bromwich describes interactions as a matter of “reciprocal reading and responding to each other’s cues,” which begins with bonding and attaching in the early months of life (p. 9). From a very young age, the normally developing infant’s behaviors are seen as signals or cues by the parent who “reads” them and responds to them. The parent’s behaviors eventually also become signals that the infant learns to read. It is through these early reciprocal transactions that the emotional ties or bonds are established that are crucial to the child’s social-emotional, intellectual, and linguistic development. Bromwich says the child is developing a rudimentary level of trust in the predictability of the human and physical environment, and in his ability to have some influence on it. The parent, in turn, is being “captured” by the infant as a storehouse of effective interactions is built up. For example, after the parent responds to the infant, the infant settles down, smiles, or babbles. These positive experiences can only serve to encourage both parent and infant to interact frequently, as both gain feelings of efficacy from it. As the infant develops socially and cognitively, the interactions become more complex. Eventually, the child is motivated to use verbal language as a more effective means of dealing with the complexity and sophistication of the interactions.




The success of the communicative process is entirely dependent on the readability of each interactant’s cues, as well as sensitivity to each other’s cues (Figure 9–2). When the cues of either are not easily read, or one seems to be unresponsive to the other, it may begin a downward spiral in the affective and interactive relationship between parent and child.

The parent becomes the focus of the intervention in this situation, as it is the parent who has the potential of conscious control of his or her actions toward the child. The intention is to help the parent perceive the child’s difficult-to-read cues and to modify parental responses in such

a way that the child’s cues become more predictable, and the likelihood is increased that the child will respond to the adult’s behavior. Thus, the downward spiral in the relationship would be reversed as the quality and mutuality of the interactions are enhanced — all of which favors optimal social-emotional, intellectual, and linguistic development of the child.

Figure 9–2.  Interactions are a matter of reciprocal reading and responding to each other’s cues. This child’s cues may be expressed by direction of gaze, pointing, touching, and/or vocalizing.

The Child’s Development of Interactional Abilities

Some of the much-debated issues regarding language development focus on whether or not there is a continuity between preverbal and verbal behaviors (whether or not preverbal behaviors are precursors, prerequisites, or simply antecedents for verbal behaviors) (Golinkoff, 1983); and once again, the degree of importance that should be attributed to biological development and caregiver and environmental influences. It seems likely that child behaviors that occur prior to the child’s use of spoken language share at least some features with spoken language behaviors (Bloom, 1983). For example, the shared affective environment and timing of the “dance” of interactive behaviors in the very early months have features in common with the notion of reciprocal synchronization that occurs in verbal conversations. The child’s first words tend to be about characteristics, movement, and locations of objects — the very concepts about which the prelinguistic child gradually acquires knowledge. But this relatedness or similarity should not be construed as implying that the linguistic behaviors necessarily develop out of the prelinguistic behaviors. For example, the child does not use speech because he or she has learned to babble; the child does not talk about objects and their relationships because he or she has developed knowledge of them; and the child does not communicate verbally because he or she can communicate nonverbally. Something else must be operating to make those advances occur. On the other hand, as Sugarman (1984) said,

“Some preverbal experiences and acquisitions may nonetheless be critical to some aspects of language development. For example, it may be that unless children have learned something about communication prior to speaking they would have little motivation to look for a language to learn . . .” (p. 136)

To the careful observer, early infant behavior and learning (birth to 12 months)

seem to be “all of a piece.” Attempts to describe it seem to impose artificial structure, uniformity, and order on what is actually a very complex, dynamic, and interwoven phenomenon. Added to that are the possible erroneous inferences resulting from the widespread application of linguistic terminology to preverbal behaviors. The reader should be aware that, lacking a better way, the following discussion is guilty on both counts: it divides early communication learning into three (untidy) areas for consideration, and it uses the same widely accepted terminology. It appears that the preverbal learning having the clearest relationship to later verbal learning has to do primarily with the establishment of the social structure and communicative groundwork, rather than with semantic or syntactic groundwork (Kaye & Charney, 1980).

Three essential features of discourse structure that are learned preverbally include joint reference to objects and events, turn-taking conventions, and the signaling of intention.

Joint Reference, or Joint Attention

One of the fundamental rules of fluent communication requires that the interactants share a focus or topic about which the message(s) is/are sent and received. For very young children, gaze and gesture are two of the means through which mutual attention is established and regulated. It is generally believed that it is through gaze and gesture that the child eventually acquires an understanding that part of communicating involves the



establishment of topics and the means by which to establish them (Tomasello & Farrar, 1986). Initially, a joint focus may be getting established as the mother and newborn engage in patterns of mutual gazing and mother vocalizing or talking to the baby. Then, as the child begins to notice objects, colors, and movement, the mother frequently follows the child’s direction of regard and comments on what the child is presumably observing. By about 4 to 6 months of age, the child can follow the mother’s gaze and will do so even more readily if the mother has just said something such as “Oh, look!” The consistency of a mother’s encouragement of the child attending to a topic during the first 6 months of life correlates with later language development. Over time, as the child begins to understand more and more spoken language (albeit initially within much-practiced routines), the mother can increasingly establish topics using verbal means alone. Infant gestures, specifically pointing and showing, are an important part of early communication development. At about 9 months of age, the child sometimes points at objects in the process of inspecting or examining them. This is generally not considered to be communicative. However, the child also now exhibits a “showing” gesture, whereby the child holds out an object for someone else to look at. This showing gesture is considered to be a step toward being communicative in that another person’s involvement is essential, which is often signified by the child’s looking toward the adult’s face. This showing may or may not be accompanied by vocalizing. Soon after, the child not only shows the object to others, but actually allows the other to take it (i.e., the “giving” gesture). By about 11 months, the child points to objects in communicative


fashion — not necessarily to obtain them, but just to point them out and presumably share the experience with another person (signaled by the child looking toward the other person). Similar to the previous gestures, the child’s pointing is most likely to be effective in obtaining and maintaining the other person’s attention if it is accompanied by vocalizing.

Pointing out objects for others to look at, while vocalizing, is an extremely important communicative achievement (Goldin-Meadow, 2007; Tomasello, 2008). The importance of the achievement of joint reference or joint attention as a crucial developmental step becomes clear when one considers that so much of human experience is grounded in its shared nature (Moore & Dunham, 1995). From a learning point of view, times of joint attention provide an optimal situation for the parent to provide contingent responses to the child, as well as an optimal situation for the child to learn new words (Scofield & Behrend, 2011).

Turn-Taking Conventions

Fluent communicators know their culture’s conventions regarding aspects of turn-taking such as when, in the flow of the other person’s talk, it is expected that one should take a turn. They also know how to signal that they are finished, and that it is the other person’s turn to talk. This is acquired knowledge, presumably learned in the course of the multitudinous interactive exchanges that the child initially begins experiencing at birth.


Kaye and Wells (1980) were the first to note that even the “jiggle-suck” sequence that occurs as the infant is nursed may be an initial kind of turn-taking prototype. The adult orchestrates the turns, with clear indications to the child when it is time to take a turn. With the nursing infant, when the infant stops nursing, the mother jiggles the baby slightly, which seems to make the child begin nursing again for a time. When the infant stops nursing, the parent jiggles again — and so on, back and forth. The clear openings for the child to take a turn continue in early childhood. Snow (1977) famously observed that, in conversations with a young child, the adult’s primary purpose seems to be to try to get the child to take a turn, unlike the situation in adult-to-adult conversation where the purpose often seems to be to get to take as many turns as possible. Kaye and Charney (1980) called the back-and-forth exchanges “turnabouts,” and talked about the parent initially doing most of the conversational catching (responding) and throwing back (trying to keep the child in the conversation). This back-and-forth exchange also is referred to as a “serve and return” interaction that shapes the development of the child’s brain architecture. See the Center on the Developing Child at Harvard University for more information, and for videos that display serve and return interactions (https://developingchild.harvard.edu/science/key-concepts/serve-and-return/). These interactions are seen as building the foundation for virtually all areas of human development. There are a number of characteristics of adult behaviors in interactions with young children that could be interpreted as part of an effort to get the child to take a turn. For example, adults respond to the child with great consistency. This could

be seen as simple reinforcement of the child’s behavior in the hope that the adult response will cause the child’s behavior to be repeated, or even as a kind of modeling of responsive behavior in the hope that it eventually will be emulated by the child. An infant can return a “vocal volley” by about 2 months of age, and can initiate vocally by 3 to 4 months of age (Bates, O’Connell, & Shore, 1987). Recognizable vocal imitation begins at about 5 months but is not frequent or systematic until about 10 months of age. During all of this time, the adult consistently initiates and responds to the child’s behaviors (including a number of nonvocal behaviors), and thus frames the exchanges so they seem to be conversational. Adults also make frequent attempts to engage the child in conversation, and adult speech to young children includes a large proportion of speech acts (e.g., expansions, questions, and requests) intended to elicit responses from the child (Brinton & Fujiki, 1982; Kaye, 1982; Scherer & Olswang, 1984). In fact, all of the motherese alterations adults make in speaking to young children can be viewed as at least partly related to the adult’s efforts to keep the child involved in the turn-taking process. Perhaps it is not surprising that research on conversational exchanges between parents and their 2-year-old children with hearing loss demonstrates a clear correlation between the numbers of conversational exchanges and the child’s receptive language ability (VanDam, Ambrose, & Moeller, 2012). It is probably through such repeated interactive experiences that the child gradually acquires knowledge of when to take a turn, as well as how to signal that the other person can take a turn. Children have mastered terminal gaze by 24 months of age. Terminal gaze is the common practice (in



speakers in mainstream North American culture) of looking up at the listener at the end of a message to signal that the floor is about to be offered. Once this practice has become part of the child’s repertoire (apparently at 24 months of age), the turn-taking proceeds smoothly, with few interruptions of the other partner. However, the adult continues to take a leading (albeit gradually lessening) role in maintaining conversations with young children at least through 3 years of age.

Signaling of Intention

The third preverbal development that seems critical to the acquisition of language is the intention to convey an idea or message to someone else. What follows is a description of the hypothesized major milestones in the child’s development of intentionality, where the involvement of both partners is required (Bates et al., 1987; Harding, 1983; Sugarman, 1984). In the first few months of life, the child’s actions are oriented in simple unitary fashion toward either a person or an object. What the child does is grasp, drop, mouth, manipulate, or look at either a person or an object. It is widely agreed that these very early infant behaviors are not intentionally communicative. However, in that they quite consistently elicit reactions from the parent, they are viewed as having communicative effect. Soon, by about 4 months of age, some behaviors occur repetitiously or with persistence and the parent begins to infer that the behaviors are communicative. The simple unitary actions become more complex as they are used in tandem with one another to achieve a goal (e.g., child looks at a toy, reaches for it, grasps it, and finally pulls it into his mouth).


Adult interpretation of this goal-directed behavior as communicative is increasingly frequent between 4 and 8 months of age (“Oh — you want your toy!”). During this time, also, the parent consistently responds to the child’s vocalizations as if they were meaningful conversation. Gradually, the child begins to expect certain behaviors of the adult to follow particular behaviors of his. He might be viewed as beginning to make inferences of his own about which of his behaviors cause particular adult behaviors to follow. Eventually, by about 8 to 10 months, the child combines an action toward an object with an action toward a person, such as looking at the mother while reaching for a toy. The child is now using people to get objects, as well as using objects to get attention from people, rather than acting on either objects or people separately. That is, rather than simply persisting in reaching toward a hard-to-attain toy, the child now reaches as well as looks at the mother, or looks back and forth between the two. This is considered to be a major milestone in the child’s communicative development — the beginning of the child’s encoding of intent for someone else and the first visible demonstration of the child purposefully conveying a message to another. The reaching and looking (Figure 9–3) are soon combined with simultaneous vocalizing and more conventionalized gesturing such as pointing. In her research, Sugarman (1984) observed that spontaneous use of words began just after coordinated object and person behaviors were being used with some frequency. From then on, to oversimplify, the developmental process basically consists of an increasing proportion of verbal (rather than nonverbal) communications, and increasing length and complexity of communicative episodes.


Figure 9–3.  This child appears to have a firm grasp on joint attention, turn-taking, and signaling intentionality as she shares suppertime with her dad, all within a warmly responsive and nurturing relationship.

This description of the development of the discourse structures of joint reference, turn-taking, and communicative intentionality provides a complementary framework for the notion that separately developing threads of abilities converge at transition points. Add to this the warm, responsive, nurturing affective environment, and the early language learning scene is nearly complete. The missing piece is a description of the nature of parent talk in responding to the child, and/or in initiating interaction with him or her. Because caregiver interactive behaviors may be some of the most important elements of the language-learning environment for a child with a hearing loss (as well as one of the most eligible for modification), they are described in some detail.

Characteristics of Caregiver Talk

It should be noted that the discussion that follows is based on research where the subjects were mainstream American, British, or Australian mothers and children, although motherese or parentese (also called “child-directed talk”) has been studied in a wide variety of languages and cultures, including French, Italian, German, Japanese, Hebrew, Korean, and Mandarin (Saint-Georges et al., 2013). The degree to which any or all of the specific features discussed below are present in the talk addressed to young children in other cultures is



outside the scope of this book. Clearly, children with normal hearing in all cultures learn to talk, and not all cultures use the specific motherese characteristics described here. However, the motherese findings are robust for the populations represented in the studies, which provides validity to their likely importance when the goal is to create an optimal environment for a child with hearing loss who is learning spoken language in one of the cultures included.

Mothers, fathers, and other caregivers have a special manner or register that they use when talking to infants and young children, at least from birth to age 3, which has been shown to promote infants’ affect, attention, and language learning (Saint-Georges et al., 2013). Features of this “baby talk,” motherese, or parentese register are described below, as they appear in the talk of mainstream North American mothers to their children. To simplify, the term motherese is used here. Not surprisingly, the degree of the mother’s incorporation of the various features in her talk changes as the child moves from birth through childhood. Specific changes are mentioned within the discussion of particular features.

1. Content:  What Gets Talked About?

Topics in adult talk to babies change over time.

• Birth to 3 months.  The vast majority of topics in the mother’s talk from birth to 3 months tend to be about the child’s feelings and experiences (e.g., the infant’s feeling happy, sad, sleepy, or hungry; and getting fed, getting diapered, getting picked up) (Snow, 1977). Many of these utterances are in the form of single-word greetings such as “hi,” “OK,” “yes,” “oh,” “aw,” “sure,” “what,” “well,” and “hm” (Kaye, 1980).
• 3 to 7 months.  The topics in the mother’s talk begin to shift after about 3 months of age, so that by 7 months of age, topics tend to be more evenly divided between the child’s internal experiences and objects or events in the immediately surrounding environment.
• 7 to 12 months.  After 7 months, there is a steady increase in the proportion of the mother’s utterances about immediately present objects and people, actions presently occurring, or actions just recently completed (Newport et al., 1977; Snow, 1977).
• 12 to 24 months.  With infants between 11 or 12 and 24 months, more than three-quarters of the mother’s talk is about toys and objects that are being manipulated by either mother or child, things in the “here and now.” This means, for example, that when the child is about 1 to 2 years of age, measures of motherese lexicon tend to yield sets of words reflecting the child’s world. This is likely to include names of family members and pets, terms of endearment, words for body parts and functions, words for basic qualities and conditions (e.g., “pretty,” “dirty,” “good”), and names of toys and games (Collis, 1977; Collis & Schaefer, 1975).

The contingency between the topics in the mother’s talk and the child’s likely interests decreases sharply when the child begins using multiple-word utterances. It has been hypothesized that at that time, the adult senses that the child’s cognitive and linguistic abilities are capable of supporting discussion of less immediate topics and events in the past and future (Bellinger, 1980; Cross, 1977; Lasky & Kopp, 1982; Newport et al., 1977). See Table 10–3 for examples of routines in the child’s life where the language used reflects this gradual shift to topics that are less and less in the here and now. The above discussion has been devoted to referential redundancy, or the parallel between the mother’s verbal utterance and the child’s interests in a particular context (sometimes also referred to as verbal contextual redundancy). A related but somewhat different notion is that of the caregiver’s semantic contingency. This refers to the tendency for the mother’s utterances to be based on what the child is interested in or talking about. Research on parental semantic contingency shows that it correlates positively with children’s progress in acquiring a number of language features. In fact, this is one of the most consistent and robust findings regarding motherese and its effects (Barnes, Gutfreund, Satterly, & Wells, 1983; Cross, 1978; Nelson, 1973; Newport et al., 1977; Rocissano & Yatchmink, 1983). The key to a child’s very early language acquisition (nonverbal or verbal) may lie in the many situations where the immediately present focus of attention is encoded linguistically in the adult’s utterance. The adult utterance thus provides syntactic and semantic information within a conversational frame focused directly on whatever the child is interested in. This concept is closely related to Vygotsky’s (1978) notion that higher mental functions, presumably including language,

emerge in the child as a result of social mediation.

The Takeaway Message: Talk about what the child is focused upon or interested in. Or, alternatively, attract the child’s attention to something, and then talk about it.

2. Prosody:  What Does Motherese Sound Like?

Relative to adult–adult speech, characteristic delivery in motherese tends to have higher overall pitch, more varied intonational contours, a slower pace, more rhythmic phrasing (it has been described as “singsong-y”), longer pauses between utterances, and clearer enunciation. Typical motherese also includes phonological simplification processes such as reduplication (e.g., the mother referring to the child’s father as “Dada” in speaking to the child); and lengthened vowels (e.g., a prolonged “Hiiiiii!” or “You want some mooore?”) (Newport et al., 1977; Phillips, 1973; Sachs, 1977; Snow, 1972, 1977; Stern, Spicker, Barnett, & MacKain, 1983). These phonological changes seem to be more frequent in the mother’s speech when the baby is between 2 and 4 months of age. This is a time of rather intense face-to-face, playful interactions, where the mother is busy trying to keep the infant alert, interested, and happy. To carry out this role, she uses speech features that are most likely to get and hold attention such as a greater range of intonational contours, higher overall pitch, and greater terminal and transitional pitch contrasts. Then, as the child begins to be



increasingly interested and able to explore objects and communicate about them, the mother’s role shifts to facilitating those interests and abilities. For this new task, the pitch variation and sound repetition no longer need to be quite as exaggerated as they were when the mother was trying to get and keep the child’s attention for face-to-face interactions. However, the mother’s prosodic features are still significantly different from adult–adult speech, even when the child reaches 2 years of age. This may be related to a continuing fairly frequent need to catch the child’s attention, and/or to cue the child that he or she is being addressed. It may also be part of an attempt to cue the child to take a turn, similar to pausing, as discussed in the next paragraph. Maternal pausing between sequential utterances is greatest at birth and declines somewhat by 2 years of age. Initially, the mother may use elongated pauses to provide for the infant’s slower processing time, as well as to arouse the infant, heighten affect, or to cue the child to take a turn (Arco & McCluskey, 1981; Stern et al., 1983). The need for long, invitational pauses decreases somewhat after the basic turn-taking rules have been established, but the post-maternal-utterance pauses at 2 years of age are still approximately twice as long as they are in adult–adult conversation. In contrast to that, however, most maternal responses to infants occur within a 1-second interval, and infants also generally respond to their mothers’ utterances within a 1-second interval (Beebe, Jaffe, Mays, & Alston, 1985; Beebe & Stern, 1977). It may be that when the child is not responding for some reason, the mother simply increases the pause time before producing another utterance herself.

9.  Constructing Meaningful Communication

The Takeaway Message: Motherese has a great range of intonational contours, rhythmic phrases, slower pace, clear enunciation, and relatively long invitational pauses after the adult utterance, all of which probably catch children’s attention and cue them that they are being addressed.

3. Semantics and Syntax: What About Complexity?

Compared with adult–adult talk, motherese is generally shorter in length (Soderstrom, Blossom, Foygel, & Morgan, 2008). The mean length of the mother’s utterances to infants between 2 days and 2 years of age varies from about three to four and a half morphemes. This is in contrast to a mean length of utterance in adult-to-adult talk of about eight morphemes (Phillips, 1973; Stern et al., 1983). In addition, motherese exhibits a restricted number of sentence types, as well as simplifications in both semantics (fewer semantic relations expressed in each utterance) and in syntax (fewer subordinate and coordinate clauses; fewer transformations, embeddings, conjoinings, function words). Changes do occur in the complexity of the mother’s semantics and syntax throughout childhood (Snow, 1972). However, there is marked disagreement among researchers concerning exactly which aspects change, when, in response to what, and in what manner. Some of the conflict may be related to the manner in which individual mothers adjust the complexity of their talk. For example, some mothers seem to match the child’s mean length of utterance, and some stay four or


five morphemes in advance of the child. It may also be that semantic and syntactic adjustments are made at one or more stages in the child’s development, and not at others. Keeping in mind that this topic continues to need clarification, there does seem to be some consistency in two findings. The first is that the mother’s syntax demonstrates the most change (increasing mean length of utterance) when the child is moving from 18 to 24 months of age. The second consistent finding is that the complexity of the semantic relations in the mother’s talk does not increase during that same time period.

The Takeaway Message: Talk to very young children (birth to 2 years of age) in short, simple phrases, about three or four words in length (not in single words). The adult’s utterance length should increase as the child’s linguistic complexity increases, staying slightly ahead of the child’s abilities.

4. Repetition:  Say It or Play It Again

Two types of repetition have been observed in mother’s talk to young children. First, mothers immediately repeat their own utterances (in full) much more frequently in motherese than they do in talk to other adults. This is especially true in the first months of life, with an apparent peak in repetition at about 4 months of age. This repetitiveness gradually declines over the next 2 years to become more and more like that found in adult-to-adult conversation (Kaye, 1980; Snow, 1972; Stern et al., 1983). The other type of repetition has to do with the repeated occurrence of well-practiced sequences of exchanges or routines in interactions between mothers and young children. Research has shown the value of repeated patterned events that help the child discover how language is used appropriately in different contexts (Conti-Ramsden & Friel-Patti, 1987; Hsu et al., 2014; Peters & Boggs, 1986; Snow et al., 1987). In essence, these are ordinary interactions where the adult and child do things with each other with a great deal of regularity, or routines. This includes both the routines of everyday life (such as the routines for getting dressed, for eating, and for bedtime) and the routines that occur within play. The responses of each interactant are contingent on the other’s behaviors. In the North American context, these routines can be established around eating, bathtime, bedtime, or diaper-changing time, or can be connected to nursery rhymes, songs, tickling games, peek-a-boo games, book reading, and puzzle play. The same (or nearly the same) words and phrases are used each time, which makes it easy for the child to predict the next move or utterance in the sequence. After several repetitions, the adult begins to expect child participation at appropriate moments. Less-structured routines occur around a wide variety of daily events, where there can be subroutines embedded in longer term routines.

The Takeaway Messages: (a) Adults frequently repeat themselves with young children when they are trying to get a message across and they are not sure of the child’s attention, focus, or understanding. (b) The routines of everyday life and play provide a multitude of opportunities for varied repetition that promote the child’s language development.



Here also, the patterned nature of the event helps the child discover how language is used appropriately within each event.

5.  Negotiation of Meaning:  Huh?

Much of adult talk with young children can be described as attempts to negotiate meaning. In these instances, the adult is most often attempting to resolve the child’s unclear, imprecise, incorrect, or incomplete utterance(s). At the same time, it is often the case that the adult’s attempt to understand maintains the child’s participation in the conversation (Snow, 1984). Two specific strategies that adults can use in negotiating meaning with young children are expansions (see next section) and seeking clarification. Here are some of the ways adults can seek clarification of meaning in a conversation with a child (Brinton & Fujiki, 1982; Gallagher, 1981; Garvey, 1977; Olsen-Fulero & Conforti, 1983; Peterson, Donner, & Flavell, 1972):

1. Indicate lack of understanding through a puzzled facial expression and/or through gestures such as lifting the shoulders and hands in a questioning manner. This can be done by itself, or in combination with any of the other clarification questions.
2. Use a neutral request for repetition such as, “What?” “Pardon?” “Huh?” “What did you say?” or “I didn’t understand you.”
3. Use a yes/no (or “choice”) question to solicit confirmation that the adult’s interpretation is correct. Several varieties of these requests for confirmation occur:
   a. Adult simply repeats child’s utterance with rising intonation.
      Example:
      Child says:  “Look! Teapot turn up!”
      Adult says:  “The teapot turned upside down?”
   b. Adult repeats part of the child’s utterance with rising intonation.
      Example:
      Child says:  “Look! Teapot turn up!”
      Adult says:  “Up?”

6. Participation-Elicitors: Let’s (Keep) Talk(ing)

Snow (1977) said that one of the primary goals in adult–child conversation seems to be to get the child to take a turn. Clearly, the effect of the requests for clarification (described above) is to do just that. But, in fact, any questioning could be expected to include that same participation-eliciting intent (McDonald & Pien, 1982). Naturally, other intents are also likely to be there, such as truly soliciting information. However, any adult utterance that is followed by certain nonverbal cues such as an expectant pause, adult facial expression, and/or slight lean toward the child could be viewed as participation-eliciting in intention. This may, in fact, describe nearly all of the prosodically highly marked adult utterances in face-to-face interaction with infants during at least the first 6 months of life. Two types of adult utterances to children after they begin talking bear particular mention in this regard, because they have received much research and intervention attention. These are acknowledgments and expansions. Acknowledgments are utterances that acknowledge and/or accept the child’s previous utterance without adding anything to it. This may take the form of simple confirmatory repetition of the child’s


utterance or it may be short phrases such as “Right,” “Oh,” or “Um-hmm.” In certain instances, the effect of an acknowledgment may be to give the speaker (child) positive feedback regarding the utterance’s truth value or clarity, and/or the child’s success in communicating his or her idea. In any case, acknowledgments tend to serve as an encouragement to the child’s continued participation in the conversation (Cross, 1984; Ellis & Wells, 1980; Furrow, Nelson, & Benedict, 1979; Snow et al., 1984). Expansions (or recasts) and extensions are adult utterances that incorporate part or all of the child’s previous utterance in a syntactically and/or semantically better-formed sentence. Expansions provide an improved or corrected alternative way of saying whatever the child said. Extensions also provide the improved or corrected alternative, but in addition, amplify the topic in some way. Extensions can request or provide new information about the same topic, or can apply the same information to a new topic. This kind of adult response to a child’s utterance has been shown to correlate well with the child’s language development (Szagun & Stumper, 2012).

Example:
Child says:  “Zak” (tapping dog’s bowl with his foot).
Expansion:  “Yes, that’s Zak’s.”
Note:  Expansions tend to primarily be syntactic corrections.
Extension:  “Yes, it’s Zak’s bowl, and it’s empty.” Or, “Yes, it’s Zak’s bowl, and this is Zak’s toy.”
Note:  Extensions provide both syntactic and semantic additions.

In any case, expansions of child utterances may be one of the most frequently suggested and employed strategies in language intervention settings. They are likely to be more effective in eliciting improved responses from children who are at least at the two-word utterance level of syntactic-semantic development (Fey, Krulik, Loeb, & Proctor-Williams, 1999; McNeil & Fowler, 1999). Their use is based on observations that expansions occur frequently in normal caregiver talk, as well as on evidence that expansions and extensions have been associated with increases in grammatical development and in sentence length. Research has also shown that children are most likely to imitate expansions over any other type of adult utterance (Scherer & Olswang, 1984). It may be that an expansion has particular salience and motivating value for a child since it is based directly on the child’s utterance, and simply introduces an element of relevant, moderate novelty to it.

7. Responsiveness

In face-to-face interactive play, mothers tend to be very consistently responsive to their children’s behaviors (Franklin et al., 2014; Snow, 1977; Stern, 1974; Topping, Dekhinet, & Zeedyk, 2013; Trevarthen, 1977). Whatever the infant does will be interpreted as if he or she has taken a turn in a conversation, including burps and sneezes.

Example:
Context:  Mother holding 2-month-old baby on shoulder after child has nursed; mother is patting baby on back gently.
Mother:  “OK . . . Time for burpies. Ok now . . . Yeah! Sooooo full . . . Yes, you are. Yeah . . . What a baby! Um-hmmmm . . . ”
Baby:  [burps]
Mother:  “Ooooh! Wow! What a big burp! Good job! Yes . . . Now you feel better?”

Even in very early encounters, the parent incorporates the child’s behaviors in such a way that a semblance of dialogue is created. Kaye and Charney (1980) describe it as follows, “the rule seems to be that if an infant gives his mother any behavior which can be interpreted as if he has taken a turn in conversation, it will be; if he does not, she will pretend he has” (p. 227). Starting when the child is about 7 months of age, the mother begins to require a bit more from the child in terms of vocalizations and motoric behaviors before she will accept them as conversational turns (Newson, 1977; Snow, 1977; Snow, de Blauw, & van Roosmalen, 1979). She will still often respond to smiles, laughs, burps, and crying as if they were turns, but the vocalizations now tend to be ignored unless they include some consonantal or vocalic babble. The mother is also now more likely to ignore simple kicking or arm flailing. She will respond more selectively to more advanced motoric behaviors such as taking a bite of food, looking toward an object after it has moved or been talked about, or reaching for an object. These requirements for more sophisticated behaviors clearly reflect the mother’s recognition of the child’s increasing abilities. Through the 7-month to 36-month period, the mother continues to “up the ante” in terms of requiring communicative behaviors from the child. The child’s developing abilities and the maternal requirements run parallel to each other, so that the mother continues

to be quite consistently responsive to a large proportion of the child’s behaviors throughout the birth to 3-year-old age range. Interestingly, starting at about 3 months of age, babies are twice as likely to repeat a vocalization if the parent has responded vocally to the initial vocalization by the child. The back-and-forth cycles of responsiveness in conversation are similar to a game of catch (Kaye & Charney, 1980, 1981) or “serve and return” (National Scientific Council on the Developing Child, 2004). A competent participant both catches and throws with ease. The type of interactional turn that both (a) responds to the previous turn by the partner, and (b) attempts to elicit a response from the partner has been called a turnabout or a verbal reflective. Turnabouts seem to be more frequent in adult talk to children than to other adults. Turnabouts are considered to be very powerful language learning devices, because a turnabout by the mother has the properties of relating directly to the child’s previous topic, of being responsive to it, and also of positing an expectation of further response from the child. It should be noted that turnabouts may take the form of any of the

Mothers respond initially to very minimal child behaviors as if they were conversationally meaningful. In catching such minimal behaviors and tossing the conversational ball back, the mother is creating a situation where the child appears to be much more of an active participant than he or she may actually be. Interestingly, of course, the child eventually does become an active participant. This process certainly has all the markings of a self-fulfilling prophecy.

241

242

Children With Hearing Loss:  Developing Listening and Talking

utterance types described above as being used for negotiation of meaning and/or for eliciting the child’s participation.

Issues About Motherese

How Long Is Motherese Used?

Motherese alterations continue to be used throughout the preschool years, although changes occur as the child's abilities emerge (Girolametto, Weitzman, Wiigs, & Pearce, 1999; McNeil & Fowler, 1999). Caregivers continue to provide a nurturing environment, to engage in topics of mutual interest (and attention) within their mutual experience, to encourage the child to take turns and to talk more through generous use of acknowledgments (e.g., "Oh!" "Good.") and praise, to repeat one's own or the child's utterances with expansions, and to use turnabouts, often with yes/no questions. Prosodic alterations are also often present.

Motherese:  Why Do We Use It?

The period from birth to 3 years of age is marked by consistent maternal responsiveness and the framing of interactive events toward the appearance of more and more sophisticated conversations. This creates a situation of some contrast between adult–adult and adult–child conversation. This seems to fit with Snow's (1977) contention that the purpose of adult–adult conversation appears to be to get to take a turn; in adult–child conversation, the purpose is to get the child to take a turn. There is a special significance to this differentiation, as it has become one of the primary explanations for why adults "do" motherese. The adult's accepting minimally meaningful behaviors from the child as communicative may be part of an effort to provide a conversational frame to the encounter, and to facilitate the child's participation in the dialogue. The adult's frequent use of imitations of the child's productions, continuates, invitations to vocalize, and turnabouts also seems aimed at guaranteeing the conversational flow (Bruner, 1977; Kaye & Charney, 1980, 1981; Newport et al., 1977; Snow, 1977; Wanska & Bedrosian, 1985).

Another reason why adults use the motherese register is that the modifications are fine-tuned to meet the child's developing linguistic and attention levels. The mother's talk is continually adjusted to maintain an optimal fit with what the child can understand. The mother is using motherese primarily to ensure the effectiveness of her utterances in terms of the child's understanding and learning. Naturally, this should also enhance the chances of the child's participating in the conversation, which suggests that the feedback and conversation-promoting motivations for motherese are ultimately not so dissimilar (Bellinger, 1980; Cross, 1977, 1978; Saint-Georges et al., 2013; Snow, 1972). Other explanations for the motherese register are that it is used to convey a warm, loving, nurturing affect (Sherrer, 1974); to make oneself and one's talk interesting to the child to attract and maintain the child's attention (Kaye, 1980; Snow, 1972); and to help the child learn specific linguistic items and vocabulary (Gleitman, Newport, & Gleitman, 1984). All of these explanations have theoretical explanatory merit for at least some features of motherese at some steps in the developmental process; the explanations seem to be complementary, rather than in conflict.



Motherese:  Is It Immaterial or Facilitative?

Researchers have demonstrated that the child's receptive and expressive language benefits from having a parent who talks frequently to him or her and who uses a variety of vocabulary (Fey, Krulik, Loeb, & Proctor-Williams, 1999; Hart & Risley, 1999; Hirsh-Pasek et al., 2015; Hoff, 2003; McNeil & Fowler, 1999; Saint-Georges et al., 2013; Suskind, 2015; Szagun & Stumper, 2012; Weizman & Snow, 2001). In addition, children's language is positively affected by parents talking about what the children are attending to, rather than the parents redirecting the child's attention elsewhere and talking about that other topic (Kaiser & Hancock, 2003; Yoder, McCathen, Warren, & Watson, 2001). Although no one disputes the fact that absent or greatly impoverished adult input has a devastating effect on language learning, there is much disagreement about the nature of the adult input that is necessary and/or facilitative of language development in normally developing children. Research by Szagun and Stumper (2012) suggests that not only does maternal talk with a rich vocabulary result in the child learning more vocabulary, but maternal talk with more grammatical complexity also results in the child using longer utterances that are grammatically more complex. Another study (Ramirez-Esparza, Garcia-Sierra, & Kuhl, 2014) shows positive correlations between parents' use of parentese with their babies at 11 and 14 months of age and their child's concurrent speech utterances, as well as their word production at 24 months.

One of the sources of confusion is the fact that there is a great deal of individual variability in normal maternal interactional styles (Clarke-Stewart, 1973; Kaye, 1980; Kaye & Charney, 1980, 1981; Newport et al., 1977; Snow et al., 1987). Some of the variability in the amount and nature of maternal talk appears to be related to maternal educational level (Hoff, 2003; Szagun & Stumper, 2012). Other sources of variability are related to the variety of contexts being observed in the various studies of interacting, and to changes identified in characteristics of the mother's style at different developmental stages of the child (Conti-Ramsden & Friel-Patti, 1987; Snow et al., 1987). Differences in the children's cognitive, learning, and/or interactive styles and temperaments could also easily influence the mothers' styles (Bates et al., 1987; Spiker, Boyce, & Boyce, 2002). As if this were not enough confusion, cross-cultural studies demonstrate that children can become competent language users without experiencing the same adult adjustments typical of middle-class motherese in English (Crago, 1988; Genesee, Paradis, & Crago, 2004). However, it is difficult to entertain the notion that all of those well-documented motherese adjustments are irrelevant, at least for children similar to the research subjects who are learning language in middle-class, English-speaking culture. It also seems possible that the language learning process for the normally developing child is well cushioned by the child's intact "intake" and "output" systems, by redundancy in spoken language, and perhaps by the strength of hypothesized drives toward being like his or her parents or toward communicating in the most efficacious manner possible. Many children with normal hearing will probably develop language in the face of a number of adverse conditions. On the other hand, recent research suggests that even for children with normal hearing, there is a direct relationship between the amount and quality of early infant-directed speech in parent–infant interactions and the vocabularies and word-processing abilities of the children later at 24 months of age (Hoff, 2003; Szagun & Stumper, 2012; Weisleder & Fernald, 2013).

For the child with hearing loss (similar to any child with language learning problems), however, there is no “cushioning” (Snow, 1994). To counteract that, the adults in the child’s life can provide an enriched quantity and quality of verbal interacting so that the auditory parts of the child’s brain are appropriately fed the auditory-linguistic input needed for the child to learn to listen, understand, and use spoken language. To assist the child to reach a maximal level of achievement, it makes sense to try to create an optimal set of interactive conditions for facilitating the growth of the child’s language. An optimal set of caregiver behaviors is detailed in the preceding sections, which provide the research and theoretical base for the discussion and framework in Chapter 10, Table 10–5.

10 Interacting in Ways That Promote Listening and Talking

Key Points Presented in the Chapter

• The term listening and spoken language (LSL) practice refers to intervention aimed at helping young children with hearing loss learn spoken language through listening. The child's parents, guided by the LSL practitioner, are the primary agents of change as they do what it takes to make sure that the child's brain is receiving copious auditory/linguistic stimulation to help the child learn.

• The parents need to absorb a great deal of information about hearing loss and its implications, as well as internalize ways of interacting that will promote the child's use of audition and the child's acquisition of intelligible spoken language.

• For many parents, discovery of the child's hearing loss is a shock and is accompanied by a whole host of feelings that need to be discussed for the parents to be able to cope with all of the demands on their time, energy, thinking, and patience.

• The LSL practitioner will feel pressure to move quickly to minimize any additional loss of hearing or listening time for the child. However, the intervention will be most effective when the professional works with sensitivity and respect for the degree to which the parents' emotional well-being and feelings of efficacy may have been impacted by the discovery of their child's hearing loss, as well as by the ongoing requirements of addressing the child's unanticipated needs. In short, the parents need to be able to focus on the child's needs to be on board with regard to the technology and the auditory/linguistic stimulation that the child's brain needs. Until this happens, progress may be limited.

• The most important intervention tool is properly selected and maintained auditory assistive technology, since it provides the child's brain with access to sound.

• Putting on auditory assistive technology (hearing aids or cochlear implants) is not enough. The child also needs the parent to provide an abundance of developmentally appropriate, auditorily based verbal interacting.

• Routine, normal, everyday caregiving and play events are perfect as natural contexts for learning language through listening.

• The "Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children with Hearing Loss" provides research-based guidelines for desirable caregiver behaviors.

• Normal, everyday interacting needs to be "embellished" to help the child learn spoken language most effectively, as well as to teach the child specific auditory/linguistic targets. What the parent does is perfectly normal interacting — just more deliberately and more frequently.

• Preplanned parent guidance or auditory-verbal sessions are typically structured to provide confirmation of the child's hearing status, diagnostic practice of selected auditory/linguistic targets or objectives, discussion between parent and practitioner, homework for the coming week, and a closing routine.

Introduction

In this chapter, intervention aimed at helping young children learn to listen and talk is referred to as listening and spoken language (LSL) practice. The assumption here is that this work will be carried out by a parent, who is being guided and supported by a certified auditory-verbal therapist, or by a knowledgeable and skillful teacher of children who are hearing impaired, pediatric audiologist, or speech-language pathologist. The professional's title matters much less than the professional's expertise.

In its most enlightened and efficacious form, listening and spoken language (LSL) intervention is a process between parents and professionals. Parents and practitioners are natural collaborators as they both are striving for the same thing: a better-communicating child. The efficacy of intervention that views the parent as the primary client has long been supported by research, as well as by logical arguments regarding the amount of parent contact with the child, and consequent opportunities for continuity and generalization (Caraway & Horvath, 2012b; Clarke-Stewart, 1973; Dinnebeil & Hale, 2003; Peterson, Carta, & Greenwood, 2005; Simser & Estabrooks, 2012). The discussion that follows is about intervention for young children with hearing loss whose parents or caregivers are considered to be the primary agents of change, with the LSL practitioner as a collaborator in that process.

Discovery of a child's hearing loss is likely to evoke a tangle of feelings in the parent that can be long-lasting and recurring as the parent comes to terms with life experiences that are different from those he or she had expected. However, parents will be required to act, even though they are in turmoil, to minimize or avoid any additional loss of listening and learning time. The practitioner has a great deal of information and many skills to impart to the parents or caregivers. This must be accomplished with sensitivity and respect for the fact that the learning and skills are not things the parents were always just dying to acquire. They are learning out of love and hope for their child's present and future life. The fortunate thing is that often, the child's progress helps the parents' emotional state begin to lift.

The Emotional Impact of a Child's Hearing Loss on the Family

Because parents are expected to take on major roles in the child's learning to listen and talk, the parents' ability to do that needs to be carefully considered. Teaching a child to listen and talk may be easier for parents than teaching the child to sign; from one perspective, at least, the hearing/speaking parent does not need to learn a new language to teach the child, as they do to teach the child to sign fluently. However, the ability to function well in both the parenting and listening-and-language-facilitating roles requires at least a moderate sense of well-being. For many of those parents, discovery of the child's hearing loss is a shock and is accompanied by a whole host of feelings that need to be addressed for the parent to be able to cope with all of the new demands on their time, energy, thinking, and patience. In many countries, infants have their hearing screened at birth, so that early identification and early intervention both are a reality for many of today's children who have hearing loss. Although research suggests that parents feel positively about having discovered the hearing loss when the baby was very young, early identification brings with it psychological complexities for parents as they experience waves of grief, reassurance, hope, and distress (Young & Tattersall, 2007).

Approximately 95% of babies with hearing loss are born to parents who have normal hearing (Mitchell & Karchmer, 2004).

Not surprisingly, for the vast majority of those parents, the discovery of the child's hearing loss is a crisis, or creates one, for the parents and family. The child is just as he or she was the day before, but much is changed for the parents. For some parents, it is a relatively small crisis, where the parents rapidly and relatively smoothly accept the child's hearing loss and become involved in the listening and spoken language development process. For many others, however, the child's hearing loss is initially experienced as a catastrophe.


Later, the presence and implications of the hearing loss become a simmering crisis that can be coped with, but that periodically returns to an acutely painful stage. Resumptions of the emotional crisis often coincide with events such as audiological testing; educational transitions into preschool, kindergarten, grade school, or high school; graduations; puberty; young adulthood; and/or the parent's retirement. These events sometimes precipitate a reworking of the feelings surrounding the continuing implications of the child's hearing loss (Luterman, 2006). The emotional turmoil surrounding the discovery and presence of hearing loss in a child has been likened to grieving, not unlike the process described some time ago by Kubler-Ross (1969). Moses (1985) and Moses and Van Hecke-Wulatin (1981) were some of the first authors to apply grieving process theory to hearing loss, but a number of authors have continued to make similar observations about parental needs (Gilliver, Ching, & Sjahalam-King, 2013; Zaidman-Zait & Curle, 2015).

What is being grieved by the parents is the loss of their dreams and fantasies about what the child would be like. Both prior to and after the child's birth, parents have dreams and expectations for the child's future, most of which include an unimpaired child. When those dreams are shattered by the diagnosis of hearing loss, the parent needs to have a chance to grieve about the situation, to separate from those dreams, and to let them go. In the process, grieving may stimulate a reevaluation of a number of aspects of one's existence, including social, emotional, environmental, and philosophical attitudes and beliefs. This may include unconscious biases such as fear, pity, or revulsion that the parent has toward people who have disabilities. Alternatively, it may involve a strong desire for things to remain as they were — a very normal fear of change. According to Luterman (1984), this process may be an existential "boundary experience" that may force the parent into a more intense awareness of reality and of the transitory nature of existence. The consequence may ultimately be a rearrangement of priorities, an abandonment of trivialities and timidity, and the selection of a different set of meanings for one's existence.

Rearranging one’s priorities is not step one on a linear journey through suffering to a final nirvana of acceptance, coping, and purpose. Being able to accept the situation enough to cope with it and move on does not signal a magical end to the recurrence of painful feelings or to the need to work and rework the surrounding issues.

The following emotional states are generally accepted as part of the parental grieving process. These occur in no specific order, nor even necessarily by themselves, even though writing about them requires that they appear to do so:

• Denial:  The denial may be denial of the accuracy of the diagnosis, of the permanence of the hearing loss, and/or of the impact of the hearing loss on the child's life or on the lives of those around him or her.

• Guilt:  The parents may feel guilt because they believe they caused the hearing loss. Or, they may believe they are being punished for being "bad" or for something they did or did not do in the past.

• Depression:  The parents may feel impotent at not being able to cure or take away the hearing loss. They may be enraged at themselves for not having been able to prevent the loss to start with. They may also feel incompetent to deal with what are seen as the impossible demands imposed by the hearing loss, such as dealing with the technology and having to do different things with this child than they had expected to do.

• Anger:  The parents may feel anger at the injustice of it all. They may be angry at the unasked-for demands on their time, energy, finances, and emotions. The demands all emanate from the child with hearing loss, but the parents' anger is often displaced to other family members and to professionals.

• Anxiety:  Parents may feel overwhelmed at the added pressures and responsibilities of having a child with a hearing loss. They may feel anxious about the need to juggle the added responsibilities with their own needs to have an independent life. They may also feel anxious about the child's safety, acceptance by others, and future life.

These feelings are natural, normal, and perhaps even necessary for the parent to deal with the situation and to move on. Each of the emotional states serves a function that eventually allows parents to separate from their shattered dreams for their child to be "perfect," and for themselves to be the parents of that perfect child. Then new dreams can be generated, incorporating the child's hearing loss. The parents begin to feel that coping is, at least occasionally, possible. The reader is strongly urged to seek out Flasher and Fogle's excellent book (2012) for further explanation and guidance on counseling skills for practitioners who are not professional counselors.

So how is the practitioner supposed to respond to emotional turmoil that may be occurring in the parent? On a practical level, the turmoil may occasionally get expressed in a "pure" form, with a parent so upset that he or she cannot function at a given moment. Or it may get expressed in a different form, such as combinations of anxiety, sadness, disbelief, frustration — or anger at the professionals — about the baby having to wear technology or receive therapy to learn things other babies learn naturally, about having to learn a lot about something the parent never wanted to learn about in the first place, about having to participate actively in sessions, about having too many or too few sessions, and/or about the difficulty of finding a parking space near the audiology clinic. According to Flasher and Fogle (2012), Luterman (2008), Moses (1985), Moses and Van Hecke-Wulatin (1981), and Rogers (1951), the most facilitative thing the professional can do, difficult as it might be, is to convey an attitude of acceptance of whatever the parents' emotional states happen to be. This is one obvious time when the professional's interpersonal skills will be drawn upon. The idea is not to become a psychotherapist. Indeed, one of the essential aspects of intervention with families of children who are deaf or hard of hearing is to know when the parents' emotional needs exceed the practitioner's skills, and to refer the parents elsewhere for professional counseling. However, it is not practical, possible, or desirable to refer all upset parents for psychiatric care. The practitioner must accept responding empathically as a part of his or her professional responsibilities (Flasher & Fogle, 2012; Luterman, 2008) (Figure 10–1).

In the vast majority of instances, what is required is direct, honest, and personal confirmation of the other person’s feelings and thoughts as normal. This means being genuinely empathic and conveying that intent through active empathic listening.

Despite widespread popular acceptance (Bernstein, 2015) and a great deal of descriptive conceptual work published on active listening, there has been relatively little research attention directed toward operationalizing it or developing measures of it (Gearhart & Bodie, 2011). Drollinger et al. (2006) developed a self-measure for business, entitled the Active-Empathic Listening Scale, which has been adapted for other contexts and validated as a measure of the individual's sensitivity, attentiveness, appropriate responsiveness, and body language (Bodie, 2011). A timeless description of active empathic listening that has direct relevance for professionals working with families of children with hearing loss is explained by Pickering (1987). Pickering calls active listening "listening as a receiver rather than as a critic" — accepting rather than evaluating and trying to control the other person (p. 217).

Based on Pickering (1987), Tables 10–1 and 10–2 list and illustrate two general types of responses that the practitioner can make in interactions with parents. Responses that reflect empathy are those that convey respect and acknowledge and attempt to understand the feelings, perceptions, and interpretations of the parent. These responses increase the parent's feelings of autonomy and self-efficacy. They ultimately have a "freeing" effect. The others, the controlling responses, have the opposite effect of diminishing the parent's feelings of autonomy and self-worth. The outcome of a professional's (even unconscious) attempts to control parents in this way may result in either parental resistance or an overdependence upon the practitioner. The preferred manner of interacting is obvious. It bears mentioning that the listener's tone of voice, eye contact, and other nonverbal behaviors need to be congruent with the empathic messages the listener is trying to convey (Flasher & Fogle, 2012).

Figure 10–1.  The parent has a feeling! Quick — call the psychiatrist!

Table 10–1.  Empathic Responses

• Reflecting back what was heard, restating, paraphrasing (e.g., If I heard you correctly, you said . . . ?)
• Extending, clarifying what the other said (e.g., Then? And? Tell me more about that. Give me an example. I didn't understand that. What did you mean when you said . . . ?)
• Using supportive, open-ended, exploratory questions — as nonthreatening as possible (e.g., What happened to make you feel that way? What do you think of when you hear . . . ?)
• Summarizing, synthesizing (e.g., It must be very overwhelming/frustrating/frightening to . . . )
• Perception checking (e.g., You seem to be . . . Does this fit with the way you see things?)
• Acknowledging, without taking the conversation in a different direction (e.g., Um-hmmm. I see what you mean. I can appreciate what you went through.)
• Encouraging expression of an idea or feeling (e.g., How did you feel about that?)
• Being quiet

Source:  Cole, E. (1992). Listening and Talking: A Guide to Promoting Language in Young Hearing-Impaired Children. Washington, DC: The Alexander Graham Bell Association for the Deaf and Hard of Hearing (http://www.listeningandspokenlanguage.org). Reprinted with permission. All rights reserved.

Often, early indications or clues that the child is progressing are the most effective and immediate catalysts for giant leaps in the parents' ability to cope with it all. For that to happen, intervention tasks such as hearing aid selection, fitting, and wearing clearly need to be carried out — and parents need to be guided in what to look for as early indicators of the child listening. These indicators include alerting to sound, turning to sound, quieting to sound, increased vocalizing, decreased vocalizing (while listening), and/or an increased variety in vocalized sounds.


Table 10–2.  Responses That Control

• Changing or diverting the subject without explanation, particularly to avoid discussing the other person's feelings
• Interpreting, explaining, or diagnosing the other person's behavior (e.g., You do that because . . . )
• Giving advice, insisting, or trying to persuade (e.g., What you should do is . . . )
• Agreeing vigorously. This binds the other person to his present position.
• Expressing own expectations for the other person's behavior. This type of response can bind the other to the past (e.g., You never did that before!) or to the future (e.g., I'm sure you will . . . )
• Denying the existence or importance of the other's feelings (e.g., You don't really mean that! You have no reason to feel that way. You shouldn't feel that way.)
• Praising the other for thinking, feeling, or acting in ways you want him or her to; approving on your own personal grounds
• Judging, disapproving, admonishing on your own personal grounds; blaming, censuring the other for thinking, feeling in ways you do not approve of; imputing unworthy motives
• Commanding, ordering (This includes, "Tell me what you want me to do!")
• Labeling, generalizing (e.g., She's always shy. He's aggressive and hyperactive.)
• Controlling through arousing feelings of shame and inferiority (e.g., How can you do this to me when I have done so much for you?)
• Giving false hope or inappropriate reassurance (e.g., It's not as bad as it seems.)
• Focusing inappropriately on yourself (e.g., When my dog died, I felt terrible, too.)
• Probing aggressively, interrogating, asking threatening questions (e.g., Tell me how you think a child learns language.)

Source:  Cole, E. (1992). Listening and Talking: A Guide to Promoting Language in Young Hearing-Impaired Children. Washington, DC: The Alexander Graham Bell Association for the Deaf and Hard of Hearing (http://www.listeningandspokenlanguage.org). Reprinted with permission. All rights reserved.

Intense feelings will unavoidably come up in the family-focused LSL sessions with individual parents and their children, so the practitioner must be prepared to address them. It is through the acceptance and personal confirmation from significant others that the parent may be able to constructively cope with the emotional impact of the child's hearing loss. The professionals involved can have a powerful impact on the parents in this regard, but this support may also come from friends, religious groups, and community and/or parent groups. Parent networks or parent groups can be extremely valuable and supportive in a way that the practitioner cannot. The parents may gain hope, relief, and a feeling of cohesiveness with other parents who are having some of the same experiences. The reader is referred to Flasher and Fogle (2012), Luterman (2008), and Luterman and Maxon (2002) for additional treatment of this topic. As the parent feels more supported and is better able to cope, the parent's feelings of competence will strengthen and, in turn, the child's development will be positively affected.

Intervention for the child cannot be delayed until the parent reaches some mythical level of perfect coping. Hearing aids must be immediately selected, fitted, worn, and maintained. Children must be exposed to spoken language, expected to listen, and to vocalize or verbalize — as well as to conform to behavioral expectations. The point is that as the intervention agenda unfolds, the LSL practitioner must interact with the parent sensitively and empathically, without rejecting, denying, or criticizing how the parents are feeling or behaving. When roadblocks occur (such as parents not keeping technology on the child, continuing to keep a loud television turned on throughout the day, or not carrying out intervention strategies), it may be helpful to try to look at the underlying reasons for what the parent is doing. This could occur through the practitioner examining what he or she may be doing, if anything, to contribute to the roadblock; through a conversation with the parent where the practitioner deliberately engages in active listening; and/or through a referral of the parent to another family with a child who has hearing loss, or for counseling. When an adult persists in behaving in ways that are not congruent with what they understand and know to do, it bears examination. In other cases, what the parent has learned from the practitioner may not fit with the reality of the parent's everyday life. The child's equipment, or interacting frequently, may have to take a back seat when the parent is struggling to keep food on the table and keep the family safe, or when the extended family's culture does not address the needs of children with hearing loss in ways that fit with the family's intended intervention and desired outcome. Whatever the reason, intervention goals are unlikely to move forward until the roadblock is addressed.

Adult Learning

In addition to the need to be prepared to address the parents' feelings around the child's hearing loss, the practitioner needs to teach both information and skills to the parents and other adults in the child's life. This is very unlike teaching children in a classroom, and also unlike doing direct therapy with a child. Much has been written about adult learning principles and learning styles, dating back at least to Malcolm S. Knowles, whose publications span a period of 1950 to 2005 (a sixth edition of a book, published posthumously). A summary of his general contentions as they apply to the present context is that:

• Adults want to know the reason that they need to learn something, and they want the information to be applicable and task oriented.

• Adults bring experience to learning and need that experience to be respected and valued, as they are in charge of their own learning.

The practitioner needs to take into account the fact that the parents may not yet be emotionally ready to learn information, even though they know they need to learn it for their child's benefit. This is not learning that the parent would have sought out if the child with hearing loss had not come into their lives — and generally, adults prefer to choose the learning that they become engaged in. However, many parents will be ready and anxious to learn everything they can to help their child. These parents will have read everything they could find on the internet, and will have questions, confusion, and preconceived ideas about a number of things, since there is so much information (both accurate and inaccurate) available on the internet and so much conflict and disagreement in the field. The practitioner needs to be prepared for parents of children with hearing loss to come with their own educational experiences, expectations, and learning styles (Caraway & Horvath, 2012a).

Learning styles are generally divided into visual, auditory, and kinesthetic learning styles.

• Visual learners prefer to learn from watching others demonstrate or from reading or watching videos.
• Auditory learners prefer to learn from listening to explanations and/or from discussing information with others.
• Kinesthetic learners prefer to learn from experiencing things, from moving, or from doing.

All learners learn from combinations of visual, auditory, and kinesthetic experiences. Some resources say that adult learners retain 10% of what is presented visually alone, 30 to 40% of what is presented visually and auditorily, and 90% of what is presented visually, auditorily, and kinesthetically. Source:  Retrieved from https://www.nhi.fhwa.dot.gov/downloads/freebies/172/pr%20precourse%20reading%20assignment.pdf

For the parent, being involved in LSL family-focused intervention will automatically bring the parents' own educational history and their own learning style into play. They may have loved or hated school; they may or may not have done their homework or had feelings associated with being given homework; and they may or may not have absorbed information in the way that it was presented to them. In addition, as noted above, adults come with well-established learning styles and differ in important aspects such as whether or not they like to jump in and get started; ask lots of questions; read articles and/or research; understand why before acting; learn from demonstration; and/or listen, consult, and share with others — as well as combinations of all of the above. To work most effectively with each parent, the practitioner may find it useful to address these issues early on, in some of the discussions about "what it takes" for the child to learn to listen and talk and how the LSL intervention itself will occur. In her talk at the AG Bell Symposium in Washington, DC, in 2011, Caraway (2011) provided an ingenious way of broaching the topic of adult learning styles by using an analogy with different adult approaches to learning how to operate a convection oven. This can be an amusing (and instructive) means of having parents reflect on their own learning styles. Questions may be needed to get the discussion going, such as, "Would you just start pushing buttons and assume you could make it work?" or "Would you read the manual cover to cover, and then begin step by step to unwrap the oven and set it up?" or "Would you call a friend or two to see how theirs works?" Knowing how the parents learn best will help with practicalities such as determining the amount of talking to do before demonstrating, knowing whether or not the parents are likely to want articles and books to read, whether or not the parent would like to videotape themselves interacting with the child and study the tape to see what was working or not working, whether or not being coached in real time would be useful or whether it might be better to let the parent finish and have a discussion afterward, and determining the format for activities for the parents to carry out until the next session. Talking about these issues early on will promote clear communication and effectiveness of the guidance provided to the parent.

What Parents Need to Learn

Parents here is used in a generic sense and includes any and all persons associated with the child's well-being or caregiving. This can mean babysitters, grandparents, older siblings, and other relatives and friends. The words parent and caregiver are used interchangeably here.

There is a lot for the parent to learn. Parents need to absorb information about topics such as the nature and implications of hearing loss; the functioning, benefits, and maintenance of hearing aids and other amplification systems; educational choices and child advocacy; and the development of intelligible spoken language. They also need to internalize ways of interacting that will promote the child’s use of audition and the child’s acquisition of intelligible spoken language. The emphasis here is on what can be done to assist parents or other caregivers to accomplish what is needed for the child.

Role of the LSL Practitioner

The role of the LSL practitioner is that of a supportive coach (Friedman & Woods, 2012). The intent is to help the parents learn the information and skills that will allow them to be competent and confident in supporting their children and helping them grow from babyhood to adulthood. In the early years (and perhaps well beyond!), the parents are the child's most important teacher and set the stage for all that is to come. Parents learn most effectively when the coach assumes the parents are already the most knowledgeable about the child, and are fully capable of acquiring the additional knowledge and skills they will need related to the child's hearing loss.

Talking with the parents to learn about the child is the best way to begin the relationship with the family. In fact, starting every session by talking about how the week before went will continue that effort at having the parents be the ones who know the child best, who know what was important in the child's life during the week before, and who know what worked and did not work from the suggestions the practitioner made in the previous session. In the early conversations, it is useful to learn about who is important in the child's life, what the child's day is typically like, what the child likes to do, what the child is good at, what the parents' hopes and dreams are for the child, and what the parents find most challenging right now (McWilliam, 2009, 2010). With appropriate guidance from the practitioner, goals or outcomes for the therapy can be developed with the parent, based on all of the above, that will fit into the parent/child daily routines and play. Throughout the duration of therapy, the LSL practitioner frequently checks in with the parents, asking whether the information was clear and useful, as well as asking questions about how hard an activity was for the child or for the parent, whether or not it seemed to be beneficial, how they might repeat the same activity during the week to come, or what they would like to do in the next session.

The entire relationship is intended to be collaborative. However, there will be instances where the practitioner needs to demonstrate a particular skill or activity. The danger of demonstration is that it can either make the activity look so difficult that the parent is discouraged, or make it look so simple that the parents miss the essential point of the activity. As quickly as possible, the practitioner hands the activity over to the parents to carry out, and provides support as needed so that the parents are able to do it and can see what the goal was. After the parents carry out the activity, it is important to talk about how the parents felt they did in carrying it out and to help them reflect on what the child's responses were to what they were doing. Another excellent way of providing immediate and powerful feedback to parents with regard to their interactions with the child and to their employment of language-promoting strategies is to use videotape (Juffer & Steele, 2014; Yagmur et al., 2014). Videotaping is now readily achievable any place and at any time on smartphones and tablets. Table 10–5 later in this chapter contains the Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children Who Have Hearing Loss. Use of the framework with parents is also discussed in detail.

Components of Intervention for Babies and Young Children with Hearing Loss

Intervention for a child with a hearing loss has two components of paramount importance:

1. Providing the child's brain with access to auditory information through proper selection and maintenance of technology for the child.
2. Providing the child with an abundance of developmentally appropriate, auditorily based, verbal interacting.

The first five chapters of this book focused on what needs to be understood and done to address the first component. The child's hearing technology is the most important intervention tool. Assuming the child's hearing technology has been properly selected and is being carefully monitored and maintained, most of the professional and parent effort can then be directed toward delivering the other component: providing abundant, developmentally appropriate, and auditorily based communicative experience.

Providing abundant, auditorily based communicative experience is not an especially easy task. There are no universal guidebooks or surefire formulas, or even particular language, auditory, or speech-training exercises or activities, to be uniformly applied to guarantee an outcome of a perfectly intelligible, independent, socialized, and happy child. In fact, in intervention with very young children with hearing loss, there is no clear separation among language, auditory, and speech work. Any interactive event necessarily taps and exercises all three areas to some degree — which is a wonderful thing. The adult's job is to be aware of the existence and importance of the synergism of linguistic, auditory, and speech elements to which the child is being exposed in any routine event, and to emphasize them, provide another example, push it all just one more step, make certain the event occurs again, and most of all reinforce and encourage the child by keeping it all meaningful and fun.

The next part of this chapter begins with a discussion of the types of routine, normal, everyday events that provide natural contexts for language learning. The chapter then turns to possible effects of the hearing loss on adult–child interactions, and sets out clear procedures for examining those normal, everyday interactive events to determine areas in the parent's usual interactive behavior that may facilitate or hinder the child's language growth. This is followed by discussion of ways of emphasizing particular language, auditory, or speech elements in the course of normal, but somewhat "embellished," interacting and preplanned play.

When to Talk with Your Child and What to Talk About

Babies and very young children with hearing loss who have appropriate audiological technology can typically learn language in the course of normal, everyday events, as do normally hearing children. The essential difference is that for children who have hearing loss, it is important that the surrounding environment be as quiet as possible, and that the verbal interaction contribute to maintaining an optimally rich or embellished auditory/linguistic environment. That is, verbal interaction must occur frequently and consistently and be framed within the use of auditory strategies to establish a desirable level of support. However, as the child grows older, omissions or gaps in the child's linguistic knowledge may appear that the child will need to learn from the other kind of intervention, which is preplanned instruction. Preplanned instruction with very young children is most effective when it is focused on something of interest to the child, and when it is playful, brief, and highly rewarded. Preplanned instruction is discussed in more detail a little later in this chapter.

In this regard, then, much of the LSL practitioner's job is to heighten the parents' awareness of how much they are already naturally doing and to encourage them to do more of it. Simple, everyday routines and play can form the basis for the provision of abundant, auditorily based communicative experience. Particularly in the birth to 2-year-old period, the necessities of household functioning (including especially baby care) support the establishment of regular routines. Since these are frequently repeated, the child is provided with nearly the same meaningful experiences with nearly the same language over and over. Examples of some of those ordinary, everyday routines and playful activities can be found in Table 10–3.

Table 10–3.  Examples of Ordinary, Everyday Routines That Can Be the Scene for Providing Abundant, Auditorily Based Communicative Experience for Children

Birth to 6 Months

Feeding, getting diapers changed, having baths, getting dressed; getting put to sleep; manipulating simple toys and objects; watching parents, siblings, pets; being fastened into strollers, car seats, infant seats; being engaged in visual/ vocal games with adults and siblings (including peek-a-boo games); reading books together

6 to 12 Months

Feeding, getting diapers changed, having baths, getting dressed; being put to sleep; being engaged in simple nursery rhymes involving manipulation of the child and, often, anticipation of tickling, such as “This Little Piggy,” “Round and Round the Garden,” “Pattycake”; reading books together; being watched and helped as gross motor skills improve and the child crawls (both on the floor and up the stairs), stands, takes a few steps; being fastened into strollers, car seats, infant seats; taking walks and observing the world; increasingly active play with objects and toys (banging, dropping, throwing, turning, pulling, squeezing); verbal greeting routines, animal sound imitation

12 to 18 Months

Feeding, getting diapers changed, having baths, getting dressed; being put to bed; playing with balls, active motoric play (climbing, bouncing, walking, running); container play (filling and pouring, especially), block play, picture books and stories; drawing with crayons

18 to 36 Months

Feeding, getting diapers changed, using the potty (some), having baths, getting dressed, going to bed; active motoric play; play with balls, water, sand, blocks, play dough, crayons; reading picture books and stories together or alone; “helping” with making the bed, dusting, setting the table, sweeping, filling and emptying the dishwasher, cooking, shopping; going to the park; playing on playground toys; nursery rhymes and songs

3 to 5 Years

Eating snacks and meals; taking trips in the car; doing errands; putting dishes in the dishwasher, folding laundry, setting the table, cooking, making beds, sweeping; shopping; going to the park; picking up brothers and sisters, watching siblings’ sports practices; cleaning up; getting dressed, toileting, bathing; going to bed; active play outside; play with balls, water, sand, blocks, play dough; drawing, painting; reading picture books and stories, nursery rhymes and songs; being polite, making friends with peers




Any time the caregiver is with the child is a possible time for normal, everyday interaction and language “input.” The basic idea is to respond to the child’s communicative attempts and to talk about whatever interests the child about the events or materials at hand. The idea is not to intrude into the child’s self-absorbed exploratory play or to engage him or her in talk every waking minute, but to select (or create) opportune moments for verbal interaction. This may be through observing the child carefully and joining in when the child looks up and appears interested in having the parent’s involvement. Or, it may be at a moment when the parent has something genuinely exciting, interesting, or important to tell the child. For some parents, these embellishments are already natural behaviors. For others, the notions are easily internalized and expanded upon. For still others, who may be less experienced or less talkative with young children, more guidance and conscious behavior change may be required.

As mentioned in Chapter 9, sometimes children without identified disabilities learn language in all kinds of situations, including adverse conditions. However, at least in mainstream middle-class North American, British, Western European, and Australian culture, children develop spoken language in the course of interactions with fluent-speaking adults. Because the child with hearing loss is operating with reduced quality, quantity, and intensity of auditory/linguistic input to his or her brain, it makes sense to provide an environment that has enriched facilitative conditions.

Once the child has appropriate audiologic technology and noise in the environment has been managed, what can caregivers do to provide an enriched auditory/linguistic environment? The Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children with Hearing Loss (in the next section) was developed to provide concrete information about exactly that. The framework clearly lays out interactive language-facilitating behaviors for caregivers (including teachers and therapists) to use from birth to age 6. The intention is that the framework will be used to reinforce or affirm language-promoting behaviors that caregivers are already using, as well as to encourage the addition of ones of which they may not have been aware. The framework is not intended to be used to correct "mistakes" that parents are making, but to help them provide an interactive environment with enriched levels of auditory/linguistic facilitation in an attempt to compensate for the disadvantages imposed by the hearing loss.

A Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children with Hearing Loss

Background and Rationale

Sometimes a child's hearing loss seems to have little or no effect on the interactive relationship with his or her parents. But the intactness of those interactions can be threatened in several ways. First, when a child has an undetected hearing loss, he or she is not receiving auditory/linguistic brain stimulation in the same manner as his or her normally hearing peers. And even after a sensory or neural loss is detected and appropriate hearing aids or cochlear implants are worn consistently, the auditory input is reduced in quality, quantity, and intensity. That is, the hearing aids and the cochlear implants both provide brain access to speech but are unlike corrective lenses for eyes: they do not restore listening abilities to normal threshold levels and make speech clear. Speech still lacks the clarity and intensity of the speech received by an infant or young child with normal hearing. Furthermore, the young child with a severe to profound hearing loss needs to learn that he or she can benefit from "overhearing" conversations directed toward others in the surrounding environment; the child with hearing loss may not automatically overhear the speech of others, and may need to be taught to do so to the best of his or her ability.

The child with a hearing loss may not respond in expected ways to parent messages or cues, and may not send out expected cues. This mismatch of expectations can disrupt interactional events from the very beginning. In addition, as mentioned earlier in this chapter, the caregiver's ability to interact "normally" may also be influenced by emotional turmoil that can surround the discovery, as well as the continuing presence, of the hearing loss. In the course of this process, the parents experience a number of powerful emotions that may interfere with their normal ability to be sensitive to the child's communicative cues, to respond to them, and to send appropriate cues to the child.

To summarize, the lack of expected audition-based responses from the child, the slowness of the process due to the reductions in quality and quantity of the auditory/linguistic input to the child's brain, and the parent's emotional turmoil all may contribute toward limiting the child's access to precisely the kind of auditory, linguistic, and social interaction he or she needs.

There are frustrations and problems inherent in trying to understand and be understood by any young child who is only just beginning to learn that adult signals (e.g., words) have consistency and significance. If the child is the physical size of a 2-year-old with normal learning abilities, but is understanding and producing language at a much younger age level, a developmental dyssynchrony is created, which sets up the interactive scene for potential frustration and problems.

Furthermore, these factors may explain some of the traditional research findings regarding caregiver talk to children who are deaf or hard of hearing, which suggest that parent speech to children with hearing loss varies considerably from parent speech to normally hearing children, as shown in Table 10–4. However, all of the studies cited were limited in the amount of data they were able to collect and analyze. Recent technological advances now allow automated analyses of full-day recordings of the language environment of young children. As more studies of this nature are carried out with large samples of children with varying degrees of hearing loss, and varying histories of technology use and engagement in intervention, the traditional observations in Table 10–4 may be refined. One study using the automated analyses of full-day recordings (VanDam, Ambrose, & Moeller, 2012) found that, looking at the groups in their entirety, both the normal hearing and hard of hearing groups of children were exposed to similar numbers of adult words and conversational turns. However, for the children with hearing loss, both the number of words and the number of turns were correlated with the child's hearing levels. More research with additional analysis of the content of the adult utterances is needed.

Table 10–4.  Summary of Research Findings Regarding Adult Talk to Young Children With Hearing Loss

• Reduced amount of talk directed toward child
• More exact self-repetitions
• Fewer expansions
• Shorter sentences
• Simpler grammatical constructions
• More frequent rejecting, critical, or ignoring responses
• More direct imperatives (attempts to control child's behavior)
• More adult topic changes away from child's focus of attention, activity, or previous utterance
• Speech that is faster, less fluent, less intelligible, or less audible

Note. See review in Cole, E. (1992). Listening and Talking: A Guide to Promoting Spoken Language in Young Hearing-Impaired Children (pp. 42–46). Washington, DC: Alexander Graham Bell Association for the Deaf.

All of the findings in Table 10–4 may be normal adult responses to the child's not understanding or responding in expected ways, to the parent not being able to understand the child's utterances, and to the parent's feelings of frustration, sadness, or hopelessness. But however normal these responses may be, they are of concern since many of them are not interactive behaviors that promote listening and language. Although physiologically intact children may learn fluent spoken language in less-than-optimal conditions, children with sensory impairments such as hearing loss require more supportive conditions to achieve in accordance with their potential. The child with hearing loss is likely to progress most effectively with more frequent and direct access to social and verbal interactions, more consistent adult responsiveness, more repetition in varied ways, and sometimes more consciously planned input. As a consequence, it becomes imperative that the parent and practitioner understand the features of optimal, growth-promoting interactions between caregivers and their young babies and children in order to foster them. Chapters 8 and 9 in this book provide important theoretical background regarding language learning. The framework shown in Table 10–5 is offered as an attempt to augment that understanding.

261

Table 10–5.  A Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children Who Have Hearing Loss Behavior

Comments

I.  Sensitivity to Child 1. Handles child in a positive manner 2. Paces play and talk in accordance with child’s tempo 3. Follows child’s interests much of the time 4. Provides appropriate stimulation, activities, and play for the child’s age and stage 5. Encourages and facilitates child’s play with objects and materials II.  Conversational Behaviors A.  In responding to the child: 6. Recognizes the child’s communicative attempts 7. Responds to child’s communicative attempts 8. Responds with a response that includes a question or comment requiring a further response from the child 9. Imitates child’s productions 10. Provides child with the words appropriate to what he or she apparently wants to express 11. Expands child’s productions semantically and/or grammatically B.  In establishing shared attention: 12. Attempts to engage child 13. Talks about what the child is experiencing, looking at, doing 14. Uses voice (first) to attract child’s attention to objects, events, self 15. Uses body movement, gestures, touch appropriately in attracting child’s attention to objects, events, self 16. Uses phrases and sentences of appropriate length and complexity

262



10.  Interacting in Ways That Promote Listening and Talking

Table 10–5.  continued Behavior

Comments

C. In general: 17. Pauses expectantly after speaking to encourage child to respond 18. Speaks to child with appropriate rate, intensity, and pitch 19. Uses interesting, animated voice 20. Uses normal, unexaggerated mouth movements 21. Uses audition-maximizing techniques 22. Uses appropriate gesture Source:  Adapted from Cole, E. (1992). Listening and Talking: A Guide to Promoting Spoken Language in Young Hearing-Impaired Children. Washington, DC: Alexander Graham Bell Association for the Deaf.

original version of this framework goes to Jacqueline St. Clair-Stokes (Cole & St. Clair-Stokes, 1984).

Structure of the Framework

Items on the framework are intended to provide a guide to the major components of optimal caregiver communication-promoting behaviors, with several additions specific to interacting with a child who is deaf or hard of hearing. All of the items in Section I are related to being sensitive to the child in some way. This section is intended to highlight specific adult behaviors that can promote a positive affective environment. Section II of the framework outlines the interactive mechanics of the situation: the “how” of establishing a shared focus, responding, and talking to the child. This section includes items from the language intervention literature that are believed to facilitate language development in children with language-learning problems. Detailed explanations for all components can be found in Appendix 4.

Getting a Representative Sample of Interacting

This framework is intended to be used in the study of videotaped samples of the parent and child interacting in a normal, everyday fashion. Smartphones and tablets can make videotaping easy and available in all situations, and it can be done by the parent or by the practitioner. Videotaping is the method of choice for collecting the sample, as it provides a retrievable record of both the auditory and visual components of the events. Because much early communicating occurs through nonvocal, nonauditory means such as changes in gaze direction, body orientation, and gesture, it is especially important to have the visual record available. Videotaping also allows for multiple reviews of the interactional sample. This is necessary to account for social, physical, and linguistic dimensions of the context that contribute to the interactants’ co-construction of meaning in a given instance. The usefulness of the sample depends very heavily on the degree to which it is representative of normal interacting between this particular parent and child. It is not unusual for the parent to feel a bit uncomfortable (knowing that the tape will be studied), or for either the parent or the child to “perform,” sometimes unconsciously. Some of the measures listed in the box may be helpful in counteracting these possibilities and attempting to set everyone at ease so that the sample will be as representative as possible.

Steps to Consider in Trying to Get a Representative Sample of Interactive Behaviors When Videotaping
• Videotaping can be done by the parent at any time, or by the practitioner during a session. The setting should be normal and familiar to the child.
• The practitioner should discuss the purpose and planned use of the tape with the caregiver. The purpose is to get a good sample of how the caregiver and child play and talk together, which will then be studied by both the parent and the practitioner to be certain the interactions are as language promoting as they can be.
• The caregiver should think about which toys or activities to use so the baby or toddler will be likely to interact a great deal (e.g., feeding or lunchtime; diaper changing; bathing; playing in a swing; playing with play dough, tea set, dolls and clothes, building blocks, doctor kit, etc.).
• If the taping occurs during a session, the practitioner should mention to the parent that he or she will try not to interact with either participant during the taping. The usefulness of the tape will be greatly compromised if much of the parent’s talk is directed to the practitioner rather than to the child. It will also be compromised if the child is constantly looking off-camera at the practitioner, or walking over to him or her.
• The videotaping should be at least 5 to 10 minutes in duration to get a good sample of interacting.

Discussing the Framework with Parents

The practitioner’s interpersonal skills are enormously important in the process of discussing something as personal and sensitive as the way a parent interacts with his or her child. The practitioner will need to guard carefully against any possibility of conveying unspoken censure of the parent, since collaboration will be impossible if the parent feels judged or blamed (refer back to Tables 10–1 and 10–2). Some practical suggestions for addressing the items with sensitivity include the following:

1. The caregiver can simply view the tape afterwards with the framework items in mind and talk about his or her observations with the practitioner.
2. In viewing the tape with the caregiver, the practitioner can point out examples of instances on the videotape where the parent demonstrates desired behaviors. Together, the practitioner and the parent note the effect that the parent’s behavior has on the child.
3. The practitioner can check his or her perceptions with the parents’. “I don’t see a great deal of X in this little segment we videotaped. Is that what happens other times too?” or, “It looks to me as if you are working very hard to get Johnny’s attention. Does it seem that way to you, too?”
4. As part of discussing a particular event or item, the practitioner can ask open-ended, exploratory questions concerning that item. Examples might be, “How do you think you (or the child) were feeling at that point?” or “What occurred to make that happen right there?” or “How well does Johnny seem to imitate you when you sit beside him and speak close to his microphone?” These questions must be carefully phrased so they are not perceived as threatening or critical.

The framework provides a guideline for desirable caregiver behaviors. It can be used to help parents become aware of how much they are already doing. It can also be used to identify behaviors the caregiver wishes to change. In this regard, Luterman’s wise words are relevant when he wrote, “People will learn just what they are ready to learn and absorb. The best indication of readiness is a parent-asked question: gratuitous information usually serves to increase confusion and is rarely helpful” (1984, p. 65). Behaviors the parents identify as problematic are most likely to be the ones about which they will be ready to absorb suggestions. This does not mean that the practitioner has to wait until the parent discovers every problem before bringing it up, but it does mean the targets will not meet with much success if they are summarily selected and imposed by the practitioner. The practitioner needs to find a way to help the parent recognize the problematic areas in such a manner that they can “hear” the suggestions for change.

Ways of Addressing Parent-Chosen Interactional Targets

As mentioned above, the framework can be used to show caregivers how much they are doing that is helping the child learn to listen and talk. However, if interactive targets for change have been identified, the next step is to determine how the targets will be worked on. Clearly, this is another instance of something that is most likely to be effective if it comes from the parent rather than being imposed by the practitioner. However, the practitioner needs to be ready with suggestions to offer for the parent’s consideration if necessary. Methods of achieving adult behavior change vary widely depending on the behaviors, as well as on the personality and learning styles of the individuals involved (Caraway & Horvath, 2012a). Simply becoming aware of the importance of a particular behavior (perhaps as a result of using the framework) may be enough to cause a change in some behaviors. For example, a parent may notice that her voice was very high when she spoke to the child. To change it, the parent may just decide to try to keep her voice pitched lower, and be able to make that change simply from having decided to do it. Or, the parent may have a means for helping him or herself to make the change. For example, the parent may decide to leave an audiotape recorder running as she is playing with the child at home to remind herself to keep her pitch down, and to provide a check of how well she is doing in that regard. Another idea is for the parent to ask someone at home to provide reminders or feedback regarding the behavior — including instances where the change has occurred.

Other behaviors initially may be changed more easily if there is a period of focused feedback from the practitioner. One way to do this is to examine in detail one or more instances where the behavior was problematic on the videotape. This examination might include consideration of what was happening just before the parent’s behavior occurred, exactly what the parent did, what the parent intended to accomplish by the action, and what effect the parent’s behavior had on the child. Then a number of possible alternate behaviors for that instance could be generated. This focused feedback can be extended into an on-the-spot coaching event. This occurs when the parent and child play at some normal activity, and the practitioner quietly interjects a suggestion at the exact moment the behavior either is or is not occurring. Using a Remote Microphone (RM) system for this activity to connect parent and practitioner may be a good idea with some parents, assuming there is a one-way mirror arrangement so the practitioner can coach from outside of the room. This kind of intense feedback may be useful for many of the behaviors in the framework, but to be effective it must be done with the parent’s full agreement. The possible ways of helping a parent change behaviors are limited only by the particular behaviors involved and by the creativity of the parent and practitioner. The key element in all of this, however, is that the effectiveness of any behavior change plan is directly dependent on the parent’s commitment to it. Adults respond most favorably when they are actively involved in designing and implementing their own learning. This is why it is optimal for the parent to be the one who recognizes the problematic nature of a particular behavior, and who generates and enacts the plans for change. It is also why the practitioner needs to interact with the parent in ways that contribute to the parent’s feeling of autonomy, confidence, and competence (DesJardin & Eisenberg, 2007).

Probably one of the most common strategies used in intervention is that of “expert” demonstration. The parent is expected to learn how to behave in a certain way from observing the expert do it. In this case, the practitioner would interact with the child while being videotaped, and the parent and practitioner would use the framework to analyze what just occurred on the taped segment. Demonstration can be quite effective (and, in this context, it certainly makes you put your money where your mouth is), but it does have several inherent dangers. For example, the demonstration may leave the parent feeling overwhelmed and inadequate. It may also have looked so smooth and easy, and occurred so quickly, that the parent may have missed the essential point of the demonstration. To counteract these possibilities, it may be useful to have the parent take over in interacting with the child immediately after the demonstration, and attempt to replicate the desired behavior or to perform it at the next opportunity, with the practitioner right there to adjust or confirm.

Determining and Sequencing Targets Specific to the Child’s Auditory, Language, and Speech Development

Targets related to the parent’s ways of interacting with the child to promote the child’s listening and language development have already been addressed. But what about targets for the child’s auditory, language, and speech development? That is, how does one know what to work on with the child? Simply put, assessment in all three areas is needed so that the child’s strengths and needs can be determined, instructional targets selected, and a sequence for teaching those targets established. Appendix 6 lists selected resources for assessing auditory and spoken language abilities. Some of the assessments provide standardized scores, some are criterion-referenced scales, and some are hierarchies of skills. Standardized scores are essential for determining how a child’s performance on that test compares with his or her same-age peers, but may not yield usable targets for intervention. Criterion-referenced scales and hierarchies of skills, on the other hand, may provide specific skills or knowledge that need to be targeted in instruction. Scales such as the Carolina Curriculum for Infants and Toddlers and the Cottage Acquisition Scales for Listening, Language, and Speech are primarily based on the normal developmental sequence of learning in each of the assessment areas and can be used broadly as a way of ordering the child’s targets into an instructional sequence. Using the normal developmental sequence as a guide for the child’s spoken language development makes the most sense because the expectation is that, given brain access to sound, the child’s language learning will follow a normal developmental pattern. For auditory targets, the hierarchies for early listening development (including the one in Appendix 3) are based on logically gross-to-fine abilities and simple-to-complex skills. Normal development does follow those general “rules,” but many of the specific items are theoretical constructs, as explained in Chapter 7. This treatment of the extensive topic of assessment, target selection, and target ordering is quite brief. The reader is referred to Appendix 6 for additional resources, as well as to Eskridge and Wilson (2015); Houston, Bradham, and Bell (2015); Paterson and Perigoe (2015); and Perigoe, Allen, and Dodson (2012) for more comprehensive and detailed information. In addition, Bradham and Houston (2015) have included the Option Schools Assessment Reference Guide in Appendix A, which contains 87 pages of detailed annotations on assessments commonly used with children who have hearing loss.

conversations; and that the responding is generally related to the semantic content of the previous speaker’s utterance. She is teaching him through repeated natural exposure that the mover (the worm) can perform an (intransitive) action; that the mover/subject is mentioned first, and then the action/verb; that the presently occurring action is coded by the use of a present progressive form of “be” + verb “-ing”; and that the subject is preceded by a definite article, “the,” that signals the noun “worm” that follows it is known to both conversational partners. The event




number of everyday contexts (worms, strings, spaghetti, the child — all wiggle).

Scene II:  Embellished Interacting

So, in the next breath, the parent says: “Wiggle, wiggle, wiggle. The worm is still wiggling!” (Pause for the child to respond.) “Oh look! There’s another worm that’s wiggling.” (Pause — the child finds a third worm.) “Oh — you found another one! Is that worm wiggling?” (Pause.) “Yes, it is! Oh, that’s funny. Can you wiggle like a worm?” (And so on.) In the parent’s mind, “wiggle” has become a vocabulary goal. The parent is intentionally providing varied, repetitive exposure to that goal, and attempting to elicit it. The goal is specific and deliberately sought after, and the adult’s actions are intended to reduce the time the child would need to learn the word if learning depended entirely on the vagaries of totally natural exposure. However, the teaching is incidental in that it occurs while the child and parent are focused on a normal, everyday activity, and the adult is employing ordinary language appropriate to that event (taking a walk and finding a worm). This kind of incidental seizing of the teachable moment is embellished interacting or embellished teaching by the adult.

Teaching Spoken Language Through Embellished Interacting

The Framework for Maximizing Caregiver Effectiveness in Promoting Auditory/Linguistic Development in Children with Hearing Loss (see Table 10–5) identifies the components of how to demonstrate interactional sensitivity toward the child, and how to carry on interactive conversations with the child in ways that will promote the child’s spoken language growth. The specific conversational behaviors listed are some of the strategies to use in embellishing incidental events to teach spoken language, such as deliberately responding to as many of the child’s communicative attempts as possible, imitating the child, giving the child the words that are appropriate to what the child is trying to express, pausing expectantly, and using an animated voice. The embellishing comes from the adult’s decision to focus carefully on the child’s communicative attempts in order to respond consistently to them, or to watch the child very carefully to supply the right words at the right time, or to imitate the child with or without an expansion — with an expectant pause afterward for the child to fill in with a response. The embellishing is in the adult’s mind, not the child’s. The embellishments are similar to what an adult would do in any interaction with a child, but they are done with more deliberateness and more consistency.

There is a time-honored language-learning principle that may be useful to keep in mind in using a number of the strategies from the framework, and that is the informativeness principle of Greenfield and Smith (1976). The idea stems from an observation that, at the one-word stage of linguistic development, when the child is going to talk, he or she is most likely to verbally encode the most informative (interesting, novel, uncertain, dynamic) element of the situation. This is important because the adult will want to be sure to use words that describe the most informative element of the situation, since that is what the child will be paying attention to. This can involve some guesswork. So, for example, if the child’s cookie falls off the table, the most likely utterance will concern the “fell down” business, not the “cookie” part. And if the child says nothing, the preferred adult utterance might be, “It fell down,” rather than, “Oh! The cookie!” Of course, the aspect that is most informative to the child may not always be obvious and predictable. For example, if the cookie falls off the table and the dog rapidly gobbles it up, one is faced with the quandary of whether “fell down,” “all gone,” or “doggie!” is the most interesting aspect of the situation to the child. This means that, to provide the child with the words appropriate to what he or she wants to express, the parent needs to observe keenly to encode (talk about) the most informative aspect of the situation, which is presumably what the child would be trying to express (Paul, 2007) (see Framework items 2, 3, 6, 7, 10, and 13).

The informativeness principle also has implications for the use of expansions. Extended expansions, which provide related and new ideas to the child’s original utterance, fit well with the concept of informativeness. Minimal syntactic expansions fit less well, since the addition of a plural -s or an -ing probably has low informativeness value to the child (see Framework item 11). Responding with a response that includes a question or comment for the child (Framework item 8) will also be most effective if the question or comment is purposefully centered on some salient, novel, or uncertain aspect. If one stretches the principle to include being informative, dynamic, and interesting, it could also include speaking in phrases and sentences of appropriate length and complexity, as well as speaking in an interesting and animated voice (see Framework items 16 and 19).

Several other language intervention techniques that are not in the framework merit particular mention. These techniques are excellent examples of ways to carry out embellished incidental teaching, and they also adhere to the informativeness principle. They can be used to emphasize any language feature(s) that fit with the expectations for that child’s age and stage.

Multiple exemplars:  One of the techniques is the use of multiple examples, or multiple exemplars. When an event occurs where the child has produced a target item or an approximation to it, the parent recreates and repeats the event, paralleling what the child has done. For example, if the child is pretending to make a stuffed elephant jump off a chair and says, “Jump!” the adult (immediately thinking either of extended expansions or of mover-action, intransitive sentence frames) could then say: “Yes, the elephant jumped right off the chair!” (picking up a stuffed dog). “And then the doggie jumped” (making the dog jump). “And the kitty cat jumped” (making the cat toy jump off). “And then the bear said, ‘I want a turn!’” (getting the bear ready to jump, and then pausing). “So then . . .” (And, one hopes, the child would say at least some part of, “The bear jumped.”) (Figure 10–3). It should be noted that this is an auditory activity, in that the child is sitting on the parent’s lap and is focused on the animals that are jumping, not watching the parent’s face all of the time. This “jumping” scenario could also be used with a child who is not yet using words. The adult would similarly observe the child’s play, provide the words, and vary one aspect of the play in an interesting and fun manner. The scenario could be simplified to be “Uh-oh — the (animal) fell down!” each time, with the expectation that the child might imitate the “Uh-oh” or the “fell down.” Another variation on this multiple example theme would be to provide several instances of exactly the same event, followed by one different one. Here, the adult would participate in the toy bear jumping off of the chair several times (while talking about it), and then have the bear climb up the chair to create a related, but new and remark-worthy, event.

Figure 10–3.  Father and child are having a great time making all the animals jump off the chair.

Sabotage:  Other intervention techniques that conform to the informativeness principle require an already established routine or script (Weismer, 2000) between the adult and the child regarding what should normally be happening. Consequently, this technique would not be appropriate with a very small infant until the child has enough experience with the event to have built up an expectation regarding the way things should go. These strategies are based on Fey, Long, and Finestack (2003); Owens (2004); and Warren and Yoder’s (1998) ideas of the adult being a “saboteur” of routine events to stimulate communication or entice the child into communicating. One way to do this is to violate the order of the steps of a task or to omit one. For example, in making orange juice with the child, the parent could have the child pour the frozen juice into a container and add the cans of water. Then the adult would act as if he or she was going to pour it into glasses (skipping the step of stirring). Or, object function can be violated through getting ready to pour the juice into a shoe or a hat. As mentioned above, the success of these techniques hinges on the child’s knowledge that there is a “right” way for these events to occur. Otherwise, nothing will be said, as the child will innocently accept that what the adult is doing is normal behavior. But when sabotage works, it can be great fun, since any harmless violation of expectations of this sort is likely to evoke laughter.

Other strategies are perhaps more obvious, such as hiding objects to stimulate “Where?” questions, or to stimulate guesses as to the object’s location. (Note: It is not necessary to fake this situation in most households!) Another strategy, long recommended by teachers of children who are deaf or hard of hearing, is to require the child to vocalize or speak to get something. This operant strategy is probably acceptable for phrases and situations well known to the child, similar to the common practice of saying, “Say ‘please’ and then I’ll give it to you.” Some children quickly comply with the expected words or phrase, but the adult needs to be aware that the inherent danger in using this strategy is that it can turn into a power struggle, which the parent never wins no matter what the outcome.

The parents’ job is to provide an abundance of appropriate verbal interaction in the course of normal, everyday events. Within the course of this normal-but-embellished interacting, there are some strategies that the parent can use to emphasize particular language elements that are appropriate for the child’s age and stage. These include providing the child with the words to describe what the child is likely to think is the most interesting aspect of the situation; providing expansions and extensions; using various kinds of multiple example techniques; violating the order of events or the function of objects; and purposefully setting up the environment so a target is likely to be used (e.g., hiding objects). The notion underlying all of these strategies is that language is used for communicating — and all of these strategies create a meaningful reason to communicate. The possibilities are limited only by one’s ingenuity.

Teaching Listening (Audition) Through Embellished Interacting

The explicated examples in Chapter 7 clearly illustrate the ways in which normal, everyday activities can be embellished to provide the child with opportunities to auditorily attend selectively; localize; discriminate; identify, categorize, and associate; and integrate, interpret, and comprehend. The suggestion was made that parents need to be aware that these “listening events” are occurring and are important. Then they will be able and motivated to create similar listening opportunities whenever possible to enhance the child’s automatic and full use of residual hearing. Caregiver and interventionist auditory strategies need to be internalized and used in all interactions with the child. These strategies are listed in Table 10–6. The reader is referred to Cole, Carroll, Coyne, Gill, and Paterson (2005); Estabrooks (2006); Estabrooks et al. (2016); and Rossi (2003) for additional examples of auditory strategies being used in the course of everyday activities. One caveat is that there will be occasions when it is appropriate to smile and interact when your baby or young child is watching your face, and it is important to have fun and enjoy these moments without feeling guilty. The idea is to become more and more conscious of times when you can employ auditory strategies, and to do so as often as possible. Each time the child relies on hearing alone to listen and understand, he or she builds confidence in his or her auditory abilities.

Table 10–6.  Auditory Strategies for Everyday Use
• Reduce or eliminate all extraneous noise. (Turn off the TV!)
• Get down on the child’s level
• Place yourself behind or beside the child near the microphone of the child’s equipment
• Talk, and expect the child to understand even when he or she is not watching your face
• Keep the child’s visual attention on the toys or objects in front of you both
• If you do allow the child to watch your face, look for an opportunity to say it again right afterward without allowing the child to watch
• Present your message auditory-only the first time, and every time that you possibly can
• Remember that by employing these strategies you are helping the child to gain confidence in his or her ability to understand based on listening alone
• Be a parent who says “Listen!” (not “Look at me!”)

Teaching Speech Through Embellished Interacting

Phonating/Vocalizing

With some exceptions, very young children who are deaf or hard of hearing are usually vocalizing to some extent at the time that the hearing loss is detected. The task of the parents is to encourage an abundance of vocalizing by the child through consistently and positively responding to it (Brady, Marquis, Fleming, & McLean, 2004; Gazdag & Warren, 2000; Ling, 1989, 2002; Paul, 2007; Poehlman & Fiese, 2001; Yoder & Warren, 1998, 2002). The encouragement often takes the form of the parent smiling, looking interested, moving closer to the child, and imitating the child’s vocalization. The child may also be encouraged to vocalize as a result of hearing and experiencing others talking around him or her. This would include the interactive nonsense babbling that parents sometimes initiate with babies, as well as humming and singing near the child. In fact, simply hearing language spoken to and around him or her may be a motivation for the child to vocalize.

How Important Is Imitation?

Not all children are natural imitators, but having a child who imitates speech sounds can be very useful for a variety of reasons.
• One reason imitation by the child is useful is that the child is extremely likely to imitate only the sounds that he can hear. If the child cannot imitate a particular sound at all, or does it incorrectly, then it may be time to look at the child’s hearing aids, earmold system, or cochlear implant programming to improve auditory brain access to that sound or set of sounds. If the child can normally hear and imitate a particular sound, and then suddenly is unable to, then it is likely that the child no longer has adequate auditory access to the acoustic information necessary for perceiving it. In this case, improvements to the audiological technology are likely to result in immediate restoration of the child’s ability to hear and imitate the target sound(s).
• If the child has previously never been able to hear or produce the sound, when auditory brain access is provided the child is unlikely to suddenly produce it correctly (although that sometimes does happen). It is possible that even though the child can hear the sound(s), he may not have learned to listen for the sound’s essential features. This is similar to what an English speaker experiences in learning that there are multiple “d” sounds in Hindi, all of which initially sound similar to the English speaker. For another example, the child might be confusing the b and the m sounds. These are two sounds that he can hear, but they sound the same to him or her because he has not learned to listen for their distinguishing features. The child’s family or LSL practitioner can use auditory-based techniques to help him perceive the salient differences and to produce the sounds differently.
• Assuming the child has auditory brain access, imitation can be used to assess which phonemes should become the next targets for deliberate stimulation, and whether they are phonemes (speech sounds) or prosodic features (the features that make up the melody of speech: intonation, pitch, loudness, rhythmic patterns).

How to Encourage a Child to Imitate

If the child is 2 to 5 months old and is wearing hearing instruments that provide him or her with appropriate brain access to speech, then engaging in playful speech games is usually easy. Infants at this age will often babble back and forth with an adult, if the adult has playfully imitated a sound that the infant just produced. Mealtimes and diaper-changing times are great times to get a vocal volley of this nature going. Research supports the contention that when the adult imitates the baby’s sounds, the baby is likely to do even more vocalizing (Folger & Chapman, 1978). This is important because increases in imitating and vocalizing have been shown to be associated with accelerations in language development (Carpenter, Tomasello, & Striano, 2005).

If the child is no longer an infant, then playful speech activities can become one of the ways in which caregivers and toddlers interact with each other. The child vocalizes; the adult smiles, leans toward the child, imitates the vocalization, and waits expectantly for the child to vocalize again; the child vocalizes; the adult imitates whatever the child said; and so on. If the child does not seem to have the idea of this kind of back-and-forth babbling game, then try nonverbal imitative games. These imitative games can be “speechy,” such as imitating mouth movements (raspberries, sticking out the tongue in different goofy directions, pursing the lips, kissing), or imitating hand motions and facial expressions for “Mmmmm” or “Yuck!” or imitative singing games such as “Eensy Weensy Spider.” Another playful imitative game is taking turns kissing small plastic animals one at a time and tossing them in a fishbowl (no fish in it) or a glass carafe filled with water. After the plastic toys are in the water, the water can be stirred to make a whirlpool bath. The idea is to catch the child’s fancy, keep it playful, and have the child imitate without thinking about it. Another way to elicit imitation is by using a toy that can be operated with a switch or a squeeze ball that is out of sight, such as a hopping frog or Oscar the Grouch coming out of his garbage can. The adult models the motion or words (“Hop hop hop” or knocking on Oscar’s garbage can and saying, “Oscar, come out!”). Then the toy is moved toward the child, and the child is encouraged to imitate the adult’s model. Nothing happens until the child does what is needed, or unless the adult repeats the model. Then the toy needs to be put away so the child does not discover (for now!) what was actually making the frog hop or Oscar come out.

Once the child will play imitative turn-taking games, start to build specific speech sounds into the play. It is important to vary the length of the speech sound strings, as well as to vary the intonation and loudness. Listening for and imitating increasingly longer strings of reduplicated and alternated syllables has the added bonuses of helping build the child’s auditory memory (listening and remembering) and practicing the automaticity needed for articulation in running speech.

Sound-Toy Associations

One frequently suggested early intervention strategy is to associate a deliberately selected sound with a particular toy or with a frequently occurring event (Cole et al., 2005; Estabrooks, 2006; Estabrooks et al., 2016; Ling, 2002; Rhoades et al., 2016). Sound-toy association has several benefits, including enhancing the child’s perception and production of speech sounds. It also teaches a consistent sound/word for each of the toys or activities. Examples follow of sound-toy and sound-event associations that can be used in many daily situations, but the caregiver can make up others that will recur frequently in their particular home routines.

To Highlight Intonational Variation

Use dramatic expressions such as “Uh-oh” (for things falling down), “All gone” (for food being finished or toys rolling under furniture), “Oh no!” (for toys colliding or breaking or accidents of all kinds), “Wow!” (for delightful events), “Mmmm” (for anything smelling or tasting good), “Aaaaaaa” or “Aaaaaaaeeeeeee” or “Aaaeeeaaaeee” of long duration (to signify flying of airplanes or birds). Produce the phrases with very marked high/low contrasts of the intonational contours.

To Highlight Variations in Intensity

Use “Shhhh!” and a whisper for any quiet events such as a baby or doll sleeping. Play the “wake up” game where one person closes his eyes and pretends to sleep (“Shhhh! X is sleeping.”). The other one shouts (“Boo!” or the person’s name) to wake him or her up.

To Highlight Variations in Rhythmic Patterns (Duration)

Use expressions that have interesting contrasts of rhythmic patterns such as “So . . . big!”; or “Drip drip drip” for a faucet or for rain falling off of a flower versus “Shhhhhhhh” (sound of water coming out of faucet quickly) or “Pitter-patter” for ongoing rain; or “Mmmm” for something smelling or tasting good, versus “Yuck!” for the reverse; or “Up up up up — down . . . ” for a small doll climbing up a slide and then coming down; “Choo-choooo . . . ” for the train’s whistle, and “Ch-ch-ch-ch” for the train moving (pretend they still sound like that).

To Highlight Specific Vowel Sounds

For /a/  “Hop hop hop” (a rabbit or a frog) or “Hot!”
For /i/  “Peek-a-boo!” or “Peek!” or “Go to sleep!” or “Whee!”
For /u/  “Boo!” or “Moooooooo” or “Ooooooo” (train whistle)
For /au/  “Meow” or “Bow wow” or “Ow!” or “Round and round”
For /ai/  “Bye-bye” or “eye” or “Hi!”
For /ε/  “bed” or “wet”
For /ʊ/  “push” or “woof woof”
For /æ/  “quack quack” or “Daddy”
For /ʌ/  “up” or “cut”
For /o/  “nose” or “No no!” or “It broke.” “It’s broken.”

To Highlight Specific Consonant Sounds

For /b/  “Bye-bye” or “Boo!”
For /p/  “Pppppppp” (boat sound) or “Pop!” (bubbles)
For /w/  “Walk, walk, walk” (any toy or person walking)
For /m/  “Mmmmmmm!” (for something smelling or tasting good)
For /h/  “Hi!” or “Ho ho ho!” (Santa Claus)
For /ʃ/  “Shhhhh!”
For /f/  “Off!” or “foot”
For /tʃ/  “Ch-ch-ch” (train sound)

To be effective, the sound should be used consistently whenever the child plays with the toy or whenever the event occurs. The child can thus learn in a very natural manner to associate the toy or event with both listening for and producing the chosen sound. Once the child can, using audition alone, select or indicate the toy or activity for a set of 6 to 8 items, he has demonstrated that he can associate a sound with a toy, and understands how to do auditory selection using sounds, words, or phrases. It is important always to surround the targeted sound with talk about the toy or activity (“I hear a boat that’s going ppppp very quietly.”), so that the child doesn’t begin to think that a “pppppppp” is a boat. When the child has demonstrated that he or she can say “ppppp,” it is time to drop that sound as a label for the boat, and time to use the real word for boat, pairing it with the “ppppp” but soon insisting on usage of the word, not just the sound. The idea is to use the sound-toy associations to establish labels associated with toys or activities, but then to move on to more advanced expectations and activities using the real names of the objects or events.

Preplanned Parent Guidance Sessions or Auditory-Verbal Therapy/Instructional Sessions

For children whose hearing losses were identified in infancy and who then quickly achieved full-time wearing of technology that provides them with appropriate and consistent auditory brain access, what is needed will be an abundance of incidental learning and teaching events throughout the child’s day, with adult embellishing of those events to provide emphasis, varied repetition, and optimal learning. They may or may not ever need preplanned therapy sessions, although most parents find such sessions helpful as a way of making sure that the child’s listening and language abilities are progressing appropriately and that they are doing all that they can to support the child’s learning. For other children who are late to listening and learning to talk, that same provision of auditory/linguistic abundance throughout the child’s day will be essential. In addition, sessions of direct teaching and learning will be needed, with preplanned targets within playful activities for the child that focus and accelerate the child’s learning. The intent is that the parent then carries out similar playful but targeted activities with the child in the time until the next session.

Where Should the Auditory-Verbal Therapy (LSL)/Instructional Sessions Occur?

These sessions may be carried out in the child’s home, in a daycare center, in a playroom, or in some other quiet spot (e.g., a library). The principle of providing intervention in comfortable, natural settings is valid for children with many types of disabilities. However, for children with hearing loss who are learning spoken language through listening, the home or daycare can sometimes present so much noise that the child cannot hear anything but the noise. If the noise cannot be reduced or eliminated, a different, quiet setting must be found in order to provide an appropriate listening and learning environment that takes into account the unique listening needs of a child with a hearing loss. This is the only way that the parent will be able to know that the child really can learn through listening. The same concern about noise exists for children in the 3- to 6-year-old age range who are in preschool group settings, as described in Chapter 6 and Appendix 5. Individual sessions for children in the 3- to 6-year-old range also need to take place in a quiet setting. For children in group preschool settings, individual sessions with the LSL practitioner may take place on a daily basis. However, parents of preschoolers need to be involved in the sessions at least once a week so that all of the child’s team is working on the same goals and objectives in a similar manner, and so that the parent can engage in ongoing discussion about the child’s equipment, instructional program, and progress. This is very important for most parents of preschoolers with hearing loss, and crucial for parents whose child’s hearing loss was discovered late.

What Happens in an Auditory-Verbal Therapy/Instructional Session to Address Child Targets?

The sessions are carefully preplanned by the professional to include time to talk with the parent about how the child is doing (most likely at the beginning of the session), as well as time for diagnostic therapy with the child for checking progress, for teaching new auditory or linguistic targets, for demonstrating techniques and activities to teach targets, and for determining what’s next in stretching the child’s abilities to the next level. Targets are drawn from assessments of the child’s abilities, and are taught generally in accordance with normal development. Appendix 6 contains references for assessment and for guidelines for normal development of auditory/linguistic abilities. Parents are present and participating for all of each session, since the parent is expected to implement suggestions and activities from the session at home. Here is a list of the kinds of things that need to be accomplished in a typical preplanned session.

Components to Be Accomplished in a Typical Preplanned Session to Address Child Targets
• Confirmation that the child’s brain has access to auditory/linguistic information through checking the child’s equipment. Troubleshooting equipment can be done by the parent with the practitioner coaching new families, or it can be done by the professional. However, an equipment check is a crucial step in making sure that all of the talking that will take place in the session is accessible to the child. Monitoring equipment also repeatedly demonstrates to the family how incredibly seriously the practitioner is taking the child’s auditory brain access; LSL work all starts with appropriate auditory brain access.
• Checking with the parent about the child’s progress (“What changes have you noticed in the time since we last met?”), and with how the parent is feeling about recent suggestions and activities (“Did you have a chance to try/practice ______ and how did it go?”).
• Diagnostic practice through play on specific auditory/linguistic targets, with the practitioner and parent changing roles throughout as the person interacting with the child; the practitioner coaches as appropriate. Activities are playful; geared to the child’s needs; paced to the child’s interest and stamina level; and interspersed with songs, books, gross motor movement, and play, as appropriate. Depending on the child’s age and attention span, there could be as many as 10 or 12 toys played with in a preplanned fashion, or as few as three or four toys or preplanned activities.
• Parent and practitioner discuss any of the following:
  – Issues of concern to parent
  – Information that needs to be discussed, such as regarding the child’s hearing loss, audiological equipment, or school choices
  – What will be done at home before the next session by parent and child (“What do you think you would like to focus on from what we did today?”)
• Closing routine with child; could include turning off lights, picking up toys, saying good-bye, walking to door.

The practitioner’s job is to work with the parent to select and carry out activities that will both teach and test the child’s ability to meet the target(s) that are on his agenda. The activities can include any of the activities of daily life such as eating, changing diapers, taking a walk, getting dressed, or playing with toys. Experienced practitioners guide the discussion and approach that task from two different directions, depending on the targets. One is to start from the direction of the target or goal. In considering the target, what playful and enticing activity could be done that would cause the child to “fall over” the target multiple times? The other way of approaching the planning of the session is to consider the daily routines or toys the child likes to play with. In considering the routines or toys, how can the routine or toy be used to provide playful practice on the child’s targets? The challenge of solving this puzzle for assisting each child to achieve multiple auditory, language, and speech targets playfully is at the heart of preplanned intervention for children who have hearing loss. Excellent additional discussions of goals and selected strategies used in auditory-verbal therapy sessions, parent coaching, and the structure of auditory-verbal sessions can be found in Rhoades et al. (2016), Rhoades and MacIver-Lux (2016), and Estabrooks et al. (2016), respectively. Moeller and Cole (2016) also contains a thorough discussion of many of the same issues about parent-centered early intervention for children who have hearing loss focused on the development of spoken language through listening.

Not surprisingly, general principles for intervention include many of the same strategies discussed above for embellished interacting. A list follows:
• Using motherese (except not using a higher vocal pitch)
• Providing reasons to talk
• Keeping the communication meaningful and informative
• Using multiple exemplars
• Enticing the child to communicate
• Using routines or scripts for certain activities (includes songs and rhymes)
• Providing choices for the child to make so that communication is meaningful
• Expanding the child’s utterances
• Extending the child’s utterances
• Encouraging initiations and self-generated language from the child
• Always pushing on to the next step (if the child can say “big,” teach him “huge,” “gigantic,” “mammoth,” and even “ginormous”)
• Using literature
• Keeping it playful

Sample Preplanned Scenario

An example of a preplanned activity with a 20-month-old and his mother follows. The event itself took approximately 15 minutes from beginning to end, although there were about 10 minutes before this activity where the parent talked about the child’s progress in the past week, and about how well the homework from the previous lesson had gone. This activity is an example of diagnostic practice addressing the child’s specific auditory and linguistic targets.

The Context and the Targets

The LSL practitioner, the mother, and the child are sitting at a small table in the child’s home. This particular child has a severe hearing loss that was identified only 4 months before this session. He is just beginning to produce vowels with some variety and on demand. The practitioner has three small toys hidden in a box: a rabbit (for /a/ in “hop hop hop”); a cow (for /u/ in “mooo”); and a dog (for /au/ in “bow wow”). The specific speech goals in the practitioner’s mind are for the child to imitate /a/, /u/, and /au/ in the course of the activity; the auditory goals are to participate in the routines for listening and to select correctly each toy when its sound is produced. The language goals include learning “hop,” “moo,” and “bow wow” as lexical items of a sort, as well as more informal goals of being optimally exposed to language and communication.

The choice of “hop hop hop,” “moooo,” and “bow wow” as three words using the three target phonemes was not random. These three words provide salient contrasts in duration, which may be the acoustic parameter that the child first utilizes in discriminating among them. Further, they are all mid- to low-frequency, intense sounds, which enhances the likelihood of the child easily detecting them and attending to them. And finally, /au/ is a diphthong requiring rapid tongue movement (always a desirable target) from the /a/ part to the /u/ part of the diphthong.

The Action

The practitioner shakes the box with the three toys hidden inside, looks intrigued, and says, “I hear something. I think there’s something in the box . . . Listen, Mommy!” The practitioner then hands the box to the child’s mother. The mother also shakes the box, looks interested and curious, and says, “I hear something too! Here, Alan — you want to listen too?” She hands the box to the child. Alan shakes the box with vigor, vocalizing with a central vowel sound he frequently uses. The mother smiles and says, “You hear that too, don’t you!”

Explanation

The box-shaking routine is done to encourage the child to listen and to be curious about sounds he hears. By having the mother listen and then react to hearing the sound, the practitioner has provided the child with a model of listening and of being interested and excited about listening. Also, by having the parent immediately take over and perform the same activity, the practitioner demonstrated an embellished informal strategy for the parent that could be used in other everyday instances. If there was some problem in the parent’s ability to carry out the activity, the practitioner could gently coach the parent right at that moment.




The Action

Alan starts to try to open the box. The box is purposely fastened in such a way that a child cannot open it. Alan hands the box to the practitioner, looks her in the eye, and vocalizes.

Explanation

The box is child-proofed so that the child will be required to ask for help to get it open. This creates a need for the child to express a request for an action, and allows exposure to an “Open the _____” routine.

The Action

The practitioner accepts the box and says, “Oh, okay. You want me to open the box. Here, I’ll open it for you.” The practitioner quickly removes the rabbit from the box and conceals it in her hand. She says, “I found a little bunny rabbit! This rabbit goes hop hop hop.” As she says “hop hop hop,” she makes the bunny hop on the table. She then reaches her hand out (still holding the rabbit) toward the mother. The mother recognizes this as a cue to repeat what the practitioner said and did. The mother immediately takes the bunny and says, “The rabbit goes hop hop hop,” making the rabbit hop on the table. The child’s eyes are on the rabbit because of the movement. The practitioner then says, “Yes! It goes hop hop hop,” looking pleased. She then reaches it toward the child as if to hand the rabbit (still hidden in her hand) to him, as she had just done with the mother, as a cue for the child to produce (imitate) the sounds. The child says “A a a,” and the practitioner smiles and says, “Yes, that’s right. The rabbit goes hop hop hop right over to you!” She gives the rabbit to the child to play with for a few minutes. (The action to this point has taken about 60 seconds at most; the child plays with the rabbit for another minute or two.) (Figure 10–4).

Explanation

The practitioner’s actions are intended to encourage the child to listen to and imitate the “hop hop hop.” She conceals and then reveals the rabbit to add an element of mystery. She speaks in phrases and sentences of normal length, and uses the “hop hop hop” target in a sentence, not just in isolation. Moving the hand with the rabbit in it toward the adult or the child is a cue which is consciously used to signal the moment when the other person is supposed to imitate. The mother is used as a model for the child’s behavior. If the child had not imitated, the mother could have demonstrated “hop hop hop” again; then the child could be given one more opportunity to produce the target prior to being given the toy. The child needs to be given the toy as a reward for staying involved in the activity, even if he does not produce the sound.

The Action

Then the practitioner brings out a toy barn and places it on the table in front of the child. The practitioner says, “Alan, the rabbit is so sleepy. He wants to go to sleep. He wants to go to sleep in the barn.” She opens the barn door and puts the rabbit inside. The practitioner says, “Night night, rabbit. Go to sleep. Shhhh!” She shuts the barn door, pushes the barn to the far side of the table out of the child’s reach, while saying, “Night night, rabbit. Shhhh!” again. (The going-to-sleep episode takes 10 to 20 seconds. It could be more if the child participated a great deal, or if he objected to giving up the toy.)

Figure 10–4.  Shhhhh. The rabbit is sooo sleepy! He wants to go to sleep in the barn.

Explanation

This same routine is repeated for the other two toys (the cow and the dog), with the parent being the one who controls the toys, exposes the child to the targets, and attempts to elicit them.

The Action After all three animals are safely sleeping in the barn, the practitioner glides right into an auditory selection activity. While Alan is looking at the barn where the last animal just disappeared, the practitioner can say, “Uh-oh. Time to wake up, cow! Can you wake up the cow, Alan? I want the one that goes ‘Moooo!’ Can you wake up the one that goes ‘Moooo’?” Alan looks blankly at the practitioner. The practitioner says, “Let’s see if Mommy can help. Mommy, can you get me the one that goes ‘Moooo’?” The mother does, saying “Come on, cow. Wake up. Time to go! Moooo!” The mother puts the cow back in the box the practitioner is holding and says, “Bye-bye, cow! Moooo!” The exit of the other animals is similarly handled, with the child being given every opportunity to select the correct toy based on the auditory message alone, as well as to produce the sound. The activity is over when the last animal is inside the box. The practitioner and the mother then discuss all the ways in which “hop hop hop” and “shhhhh” might come up during the next week, and how the parent will try to emphasize these two words through daily play and interacting.




Explanation This part of the activity allows the toys to be left on the table but out of sight, and out of immediate consideration. It also allows for exposure to the routines for “Go to sleep” (for the /i/ sound) and “Shhhh!” Most of the basic principles for effective language facilitation are there. The context is play; the language is normal; the targets are generally incorporated into complete sentences; much repetitive but varied exposure to the targets is provided; at the practitioner’s request, the mother demonstrates auditory and speech behaviors for the child; the practitioner demonstrates strategies for the parent to use to encourage the child to listen and to imitate; the parent attempts the activities immediately after the practitioner; and the parent, rather than the practitioner, does the majority of the interacting with the child.

Substructure Preplanned teaching events generally have three parts: (a) getting the toys or materials out in front of the participants, (b) playing with the toys or materials, and (c) getting the toys or materials put away again. Each part of the event has auditory, speech, language, and management routines associated with it, which, once learned by all the participants, make the event flow smoothly. In this example, the speech and language routines are centered on consistent use of the sound-toy and sound-event associations. Examples from the session include the box-opening episode (“Open the X”) and the episode where the animals were put to sleep in the barn (“Go to sleep. Shhhh!”). One auditory routine in the illustration occurs at the outset where the box is shaken, making a noise, and the practitioner and the mother model listening behaviors while talking about the box. Moving the hand with the toy inside toward the adult or child is a management routine, whereby the child is being cued to imitate or respond. Using the mother to model correct responding behavior probably also provides the same sort of management cue (“Be alert — you are next!”). It should be noted that these are not routines until the child and parent have participated in them a number of times.

About the Benefits and Limitations of Preplanned Teaching For the child who cooperates easily, there is likely to be one major benefit: The time required for the child to acquire the targets through natural exposure may be reduced. This may happen because of the greater amount of repetition provided, and because of the increased demands for imitative practice that occur in a teaching event of this kind, which is surely instructionally intense. The practitioner who employs this kind of activity may accrue a feeling that tangible progress is being carefully crafted and that “real” teaching is going on. The parent may also feel comfortable with preplanned activities for those reasons. However, one concern about direct teaching of specific targets is that the child’s learning may not generalize to other situations, or in the worst case, the targets learned may not even be appropriate for use in situations other than in therapy. This is clearly something to guard against in planning.


In addition, the adult is very much in charge of this kind of activity, and the child’s primary role is to cooperate by listening, imitating, and doing what he or she is told. For some children, even at 5 years of age, these are very exacting demands. However, although the period from 18 months to 3 years (and older) is notorious as a time for children to be asserting their independence through the full variety of noncooperative tactics, a number of young children really enjoy short periods of adult-directed play. The success of this type of activity seems to hinge on the child’s desire to comply, and on the teacher’s or parent’s ability to keep the child’s interest and enthusiasm high. Strategies for maintaining the child’s interest and enthusiasm promote the activity’s naturalness and informality. These include playing with vigor and enthusiasm; smiling, patting, and hugging the child; joking; using age-appropriate toys; creating mystery and curiosity (hiding the toys); letting the child be as active as possible (letting the child shake the box and eventually play with the toys); giving the child choices to make; judging the pace of the steps appropriately (moving quickly from one step to the next, but at the same time pausing to allow the child sufficient time to participate); and recognizing and responding in a positive manner to all of the child’s attempts to communicate. These strategies mask the adult-directiveness and make the preplanned teaching event communicative and fun.

The most important thing to remember in implementing intervention intended to help the parent help the child learn to talk is to listen to the parent and to the child. Plans are meant to be carefully prepared, and then tossed rapidly out the window if competencies or interests were improperly estimated, or if other needs arise that are more pressing, such as a grandma coming to visit, or the child not responding auditorily as well as several days ago, or the child having learned how to pump on the swing, or the parent having a meltdown from trying to keep hearing aids on very tiny pliable ears, or a family of turkeys walking by the window. Do not ever let lessons get in the way of life.

What Does the Research Say?

Several efforts have been made recently, through meta-analyses of studies, to assess the effectiveness of the kinds of techniques and strategies that are described in this book, which are those of auditory-verbal therapy and auditory-verbal education, designed to optimize a listening and spoken language outcome. The meta-analyses have looked at studies that compared methodologies, as well as studies that compared the performance of children in auditory-verbal therapy with the performance of typically developing same-age peers (Brennan-Jones et al., 2014; Eriks-Brophy, 2004; Eriks-Brophy et al., 2016; Fitzpatrick et al., 2012; Rhoades, 2006). One of the problems in carrying out comparative research has been the fact that traditional criteria for the highest level of scientific evidence have required subjects to be assigned to randomized control groups in prospective studies. It is difficult to imagine a researcher being able to find the parents in a pool of subjects who would willingly allow their children to be randomly assigned to groups with various kinds of therapeutic approaches, as well as a group where therapy would be withheld entirely. Other problems with doing traditional research with children with hearing loss are the problems inherent in trying to gather large enough groups of subjects who are truly homogeneous in terms of all of the possible variables that could influence the outcomes, such as etiology of the hearing loss, age of identification, age of access to amplification, age at beginning intervention, length and frequency of intervention, degree of parental involvement in the intervention, and presence of additional learning challenges. The final variable that is difficult to control concerns the exact nature of the intervention. Even with the LSLS certification, centers and individuals vary in the approaches used, simply because the individuals carrying out the work vary in their knowledge, skills, and beliefs. That would be equally true of the practitioners who espoused any therapeutic intervention approach for human beings.

However, other researchers have compared the performance of children receiving auditory-verbal therapy against the performance of children with typical hearing, and/or have collected data regarding the children’s rate of development before and during intervention. Overwhelmingly, the results of these studies show:

1. that the speech/language, reading, and academic performance outcomes of the children with hearing loss are similar to those of children who have normal hearing, and/or
2. that, as a group, the children with hearing loss in the studies demonstrated at least 12 months of language growth in a 12-month period, similar to what is expected for typically developing children.

The body of clinical research evidence supporting the efficacy of auditory-verbal practice is now quite substantial. These research results absolutely support the contention that the development of the auditory brain can be positively influenced through intensive auditory and spoken language intervention, such as that provided by auditory-verbal therapy. Factors associated with the likelihood of positive results include early detection of the hearing loss, early and consistent full-time wearing of hearing aids or cochlear implants, early intensive intervention aimed at listening and spoken language development, parental commitment, age at cochlear implantation, no additional disabilities, gender, and maternal education level. Some of the studies since 2001 are listed here: Dornan, Hickson, Murdoch, Houston, and Constantinescu, 2010; Eriks-Brophy et al., 2012; Fitzpatrick, Crawford, Ni, and Durieux-Smith, 2011; Fitzpatrick, Durieux-Smith, Eriks-Brophy, Olds, and Gaines, 2007; Fulcher, Purcell, Baker, and Munro, 2010; Goldberg, Weber, and Manz, 2011; Hogan, Stokes, White, Tyskiewicz, and Woolgar, 2008; Lim, Goldberg, and Flexer, 2018; Lim and Hogan, 2017; Percy-Smith et al., 2018; Rhoades, 2001; Rhoades and Chisholm, 2001; Sahli and Belgin, 2011. It should be noted that some of the subjects in the studies had hearing losses that were identified before the implementation of newborn hearing screening, some had hearing aids, and some had cochlear implants. The studies cited above took place in Australia, Canada, the United Kingdom, and the United States. In any case, these results are an impressive confirmation of the efficacy of early intervention aimed at listening and spoken language development.


Appendix 1

How to Grow Your Baby’s or Child’s Brain Through Daily Routines

This is a handout that can be given to all families who have infants or children with any type and degree of hearing problem. Above all, love, play, and have fun with your child!

1. The quieter the room and the closer you are to your child, the better you will be heard. The child may have difficulty overhearing conversations and hearing you from a distance. You need to be close to your child when you speak. Keep the TV, computer/tablet, and music off when not actively listening to them.
2. Your child must wear his or her hearing aids or cochlear implants every waking moment, at least 10 to 12 hours per day, every day of the week (even when bathing or swimming — use technology that is water-resistant or waterproof). The brain needs constant, detailed auditory information to develop integrated neural connections. Knowingly depriving your child of this access to information is a form of neglect. The technology is your contact with your child’s auditory brain, and your child’s access to full knowledge of the world around him. If your child pulls off the devices, promptly, persistently, and calmly replace them.
3. Check your child’s technology regularly. Equipment malfunctions often. Become proficient at troubleshooting.
4. Use a remote microphone (RM) system at home to facilitate distance hearing and incidental learning. An RM system can be used during reading, too, to improve the signal-to-noise ratio and to facilitate the development of auditory self-monitoring. Place the RM microphone on the child so she can clearly hear her own speech, thereby facilitating the development of the “auditory feedback loop.”
5. Focus on listening, not just seeing. Call attention to sounds and to conversations in the room. Point to your ear and smile, and talk about the sounds you just heard and what they mean. Use listening words such as, “You heard that,” “You were listening,” and “I heard you.”
6. Maintain a joint focus of attention when reading and when engaged in activities. That is, the child looks at the book or at the activity while listening to you.
7. Speak in sentences, not single words, with clear speech, correct grammar, and lots of melody. Speak a bit slower to allow the child time to process the words. Many adults speak faster than most children can listen.
8. Read aloud to your child daily. Even infants can be read to, as can older children. Try to read at least 10 books to your baby or child each day. You should be reading chapter books to them by preschool. Have the child tell the story back to you.
9. Sing and read nursery rhymes to your baby or young child every day. Fill her days with all kinds of music, songs, and rhythm to promote interhemispheric transfer, and to serve as a foundation for reading.
10. Name objects in the environment as you encounter them during daily routines. Constantly be mindful of expanding vocabulary.
11. Talk about and describe how things sound, look, and feel.
12. Talk about where objects are located. You will use many prepositions such as in, on, under, behind, beside, next to, and between. Prepositions are the bridge between concrete and abstract thinking.
13. Compare how objects or actions are similar and different in size, shape, quantity, smell, color, and texture.
14. Describe sequences. Talk about the steps involved in activities as you are doing the activity. Sequencing is necessary for organization and for the successful completion of any task.
15. Tell familiar stories or stories about events from your day or from your past. Keep narratives simpler for younger children, and increase complexity as your child grows.

Appendix 2

Application and Instructions for the Ling 6-7 Sound Test for Distance Hearing

The Ling 6-7 sounds are /oo/, /ah/, /ee/, /sh/, /s/, and /m/. The seventh is a silent interval.

For a child 18 months of age and older, start out sitting next to the child about 3 inches away, and encourage the child to drop a toy or block into a container when he or she detects each sound. You may need to have parents and/or older siblings model this behavior or provide a “hand-over-hand” facilitation for the child before he or she begins to listen and drop independently. Continue saying the sounds and gradually, one step at a time, move away from the child. Note the distance cutoffs. Once you know the child’s distance hearing, administer the test only at that distance. There is no need to start close because the issue is the monitoring of distance hearing. Distance hearing depends on the hearing loss, hearing for that day, ambient room noise, and hearing aid or cochlear implant efficiency. All consonants and vowels should be detected out to 40 feet in quiet with cochlear implants. For children who wear hearing aids, the /sh/ and /s/, due to their weak acoustic energy and high frequencies, may need much closer distances for detection than vowels.

For children between the ages of 6 months and 18 months, have the child sit in a high chair. A test assistant will need to have “quiet toys” for the child to play with or look at to keep the child occupied. With the presenter standing behind the child and out of the child’s visual range, present each of the Ling-6 sounds individually. Note when the child turns to look for the sound presented. The assistant should then regain the child’s attention with the quiet toys, so the next Ling sound can be presented.

For children less than 6 months of age, hold the child close to you with the child’s best ear or hearing aid nearest you. Sing the Ling-6 song to the tune of “Wheels on the Bus” while rocking the child back and forth. Include siblings and other family members by using “hand cuing” or “toy cuing” to encourage them to repeat the Ling-6 sounds as presented in the song. This will act as a model for the child and will help to acoustically highlight the Ling-6 sounds for the infant.

Take care to produce all sounds at a normal conversational loudness. Do not increase your intensity or sound duration as you increase your distance from the child.

Appendix 3

Targets for Auditory/Verbal Learning

Prerequisite for Auditory Brain Development

• Child/student wears hearing aid/implant all waking hours.
• For all distance listening activities, the teacher needs to be sure to know the range of hearing that the child’s technology allows (e.g., can the child detect /s/ at 8 inches, 30 feet, or some other distance).

Directions: These are targets for auditory/linguistic learning that, over time, should be demonstrated by a child with hearing loss who is learning spoken language through listening (auditory brain access, stimulation and development). See Chapter 7 for a more complete explanation of the usage of this chart. A suggested way of tracking development of the behaviors is to write the observation dates in the corresponding column. “Rarely Observed” refers to behaviors that are observed less than 50% of the time, “Emerging” behaviors are those that occur approximately 50% to 75% of the time when they could have occurred, and behaviors that are “Appropriately Demonstrated” are those that occur approximately 80% or more of the time when they could have occurred.


Phase I: Being Aware of Sound
(Rating columns: Rarely Observed / Emerging / Appropriately Demonstrated)

1. Responds reflexively to sound (auditory attention)
   Young child: May blink or widen eyes; startle; cry; change rhythm of sucking or breathing
2. Searches for sound/attempts to lateralize or localize (auditory attention)
   Young child: May move eyes and/or head in seeking the sound source; look toward a nearby adult or speaker; perform head turning responses for visual reinforcement audiometry
   Older child: Looks for sources of sounds
3. Sustained attention to sound (auditory attention)
   Young child: Moves body rhythmically, smiles and coos to talking or music
   Older child: May clap, sing, dance in response to music, nursery rhymes, poems, or clapping patterns; attends to nearby conversations
4. Demonstrates learned responses to having heard something without seeing the sound source (auditory association)
   Young child: Points to ear, smiles, looks to adult, claps when sound is heard, participates in Visual Reinforcement Audiometry reliably
   Older child: Reliably performs the Conditioned Play Task by raising hand, pushing button, or saying “I heard that” for audiologic testing; takes off equipment and gives/shows to adult
5. Indicates that hearing aid/cochlear implant is not working (auditory attention)
   Young child: Takes off equipment and gives/shows to adult
   Older child: Tells adult when equipment is not working

Phase II: Connecting Sound With Meaning
(Rating columns: Not Present / Emerging / Appropriately Demonstrated)

1. Responds to speech (auditory association)
   Young child: Smiles or coos responsively
   Older child: Turns toward speaker(s)
2. Responds to novel sounds (auditory association)
   Young child: Quiets or excites
   Older child: Searches for sound sources, or may ask what made a novel sound
3. Responds to loud, unexpected noises (auditory association)
   Young child: May cry or indicate displeasure, or simply widen eyes or startle; if sound is repeated without meaning, child may no longer respond
4. Responds to hearing aid/cochlear implant being turned on (auditory association)
   Young child: May quiet, change vocalizations
   Older child: May smile, or point to ear, or say “It’s working”
5. Speechlike vocalizations (early auditory feedback)
   Young child: Uses some clear vowels/consonants, intonational and rhythmic patterns — child’s vocalizing sounds “speechy”
6. Responds to calling voice (without seeing the speaker)
   Searches, turns, stops activity
   a. In quiet (close by)
   b. In noise (close by) (auditory figure-ground)
      N.B. This can be a difficult, if not impossible, task for some children; practice in controlled conditions may improve the child’s ability, but FM should be used in all instances at home and at school where it is important that the speaker’s message be heard to reduce the detrimental effects of even “small” amounts of noise and of distance on the child’s ability to hear and understand.
   c. From another room (distance listening)
      This is a very important milestone as it suggests some level of auditory scanning of the environment, and attentiveness to messages that are not directed toward the child within his or her visual field (rudimentary overhearing). The child’s technology may or may not limit ability to do this; surrounding noise may also make it difficult.
7. Anticipates what comes next in simple nursery rhymes, finger plays, songs, stories (auditory closure)
   Young child: Expresses what comes next using body movements, vocalizations, words
   Older child: Expresses what comes next using words
   a. Live voice
   b. Recorded voice (somewhat degraded signal)
8. Engages in vocal turn-taking
   Young child: Raspberries, vowel sounds, repeats consonants, intonational patterns
   a. One or two exchanges
   b. Three or more exchanges

Phase III: Understanding Simple Language Through Listening
(Rating columns: Not Present / Emerging / Appropriately Demonstrated)

1. Performs appropriate actions when common, everyday words are used without watching the speaker’s face
   Young child: Stops activity (words such as “no”), waves (words such as “bye-bye”), looks toward the person (words such as “mama”), points (words such as “uh-oh,” “all gone”)
   Older child: Responds appropriately to common, everyday phrases or requests (e.g., “Get your shoes,” “Time to go!,” “Daddy’s/Mommy’s home!”)
2. Responds to Learning to Listen sounds (up to seven) without watching the speaker’s face
   Young child: Reaches for toy associated with sound
3. Imitates an increasing number and variety of speech sounds with prosodic features varied (auditory feedback)
   Young child: Imitates a variety of sounds in playful elicited speech activities
   Older child: Imitates an increasing number of speech sounds with accuracy and ease of production in an increasing number of coarticulated syllable contexts
4. Responds to Ling-6 sounds
   a. Detection: Young child: Puts rings on a post, claps, throws soft toys in container, raises hand
   b. Discrimination/identification: Imitates sounds
5. Responds appropriately to simple questions or requests without seeing the speaker’s face
   a. In a closed set of up to seven items (identification): Chooses the correct object or picture from a set of known objects or pictures
   b. In normal, everyday situations (comprehension): Follows a few one-step requests or responds to simple questions without watching the speaker
6. Engages in brief, spontaneous, pragmatically correct conversations on simple, everyday topics with 1 or 2 exchanges without watching the speaker’s face
   Participates in conversation with an adult or peer without watching the other person’s face

Phase IV: Understanding Increasingly Complex Language Through Listening
(Rating columns: Not Present / Emerging / Appropriately Demonstrated)

1. Responds appropriately to a variety of familiar, everyday phrases, expressions, and questions without watching the speaker’s face
   Examples: “Time for lunch,” “What do you want for lunch, ____ or ____?” “Where’s your ____?” “Get the ____ ____.” (Adjective + Noun)
   a. In quiet
   b. In noise (auditory figure-ground)
   c. At a distance when engaged in play (an important milestone, suggesting some level of auditory scanning and overhearing)
2. Spontaneously imitates familiar, everyday phrases or other peoples’ speech, with or without fully understanding what they are saying (auditory memory)
   N.B. Can be immediate or delayed imitation (sometimes occurs to the amusement or chagrin of the adults); an important auditory/linguistic/cognitive milestone, since the child is imitating from overhearing conversation rather than direct instruction
3. Recognizes recordings of familiar songs and rhymes
   Sings or fills in familiar parts
4. Imitates an increasing number of random digits: 4, 5, 6, 7, 8, 9 digits (auditory memory)
   Repeats an increasing number of random digits; try both forward and backward
5. Imitates increasingly longer utterances in accordance with language abilities (auditory tracking and auditory memory)
   Imitates words, phrases, sentences
   a. Two to three words in length
   b. Four to five words in length
   c. Seven to 10 words in length
6. Begins using words or expressions that have not been directly taught
   Uses novel words/expressions that have not been directly taught — a very important auditory/linguistic/cognitive milestone as it is clear evidence that the child is learning from overhearing
7. Without watching the speaker’s face, responds appropriately to utterances that increase in grammatical complexity and length from two-word utterances right through to complex language (eight- to 12-word utterances)
   Points to pictures or objects, chooses picture or object, manipulates toys or objects in accordance with the spoken message, or responds verbally to statements or questions
   a. Live voice at normal conversational distance
   b. Recorded voice
   c. Live voice at a distance
   d. Over the telephone
   e. At a distance, spontaneously (distance listening, overhearing): Responds to familiar speech that is or is not directed toward him or her; a very important auditory/linguistic/cognitive milestone, since it requires distance listening, auditory scanning and overhearing
8. Without watching the speaker’s face, responds appropriately to requests that have an increasing number of steps indicated by verb changes
   Follows commands, complies with requests
   a. Two-step requests (for example, “Please get the cups and put them on the table”)
   b. Three-step requests (for example, “Wash your hands, brush your teeth, and pick out a book”)
9. Identifies an object or activity based on clues (riddles) without watching the speaker’s face
   Names the object or activity
   a. Simple sentences
   b. Complex language
10. Answers personal questions with the topic given without watching the speaker’s face
   Appropriately answers questions
   a. Simple language
   b. Complex language
11. Answers questions with no topic given without watching the speaker’s face
   Appropriately answers questions
   a. Simple language
   b. Complex language
12. Retells stories or events with two, three, or four events/ideas/concepts in the appropriate order (auditory-only presentation)
13. Understands and supplies whole word or message when part is missing (auditory closure)
14. Listens to a familiar paragraph or story of increasing length and answers questions regarding the main idea and supporting details
   Appropriately answers questions
   a. Live voice
   b. Recorded voice (degraded signal)
15. Carries out a telephone task for a specific purpose
   Examples: Calls the bookstore to ask about a specific book; calls a friend to ask about an assignment; calls parent to find out what time he or she will be picked up
16. Engages in increasingly longer, spontaneous, pragmatically correct conversations without having to watch the speaker’s face
   a. With the topic stated ahead of time
   b. Without the topic stated ahead of time
   c. While engaged in another task or activity
17. While engaged in an activity of their own choosing, child demonstrates an awareness of other conversations that have been occurring around him or her or can spontaneously tune in to conversations that have been occurring around him or her (distance listening, overhearing, listening set, auditory multitasking)
   Requirements: Technology that permits listening at a distance, and curiosity
   This ability to tune in and out of conversations occurring around them will give the child the ability to learn from the world around him or her without having to be directly taught. This ability underlies the child’s ability to use the surrounding world continuously to acquire vocabulary, grammar, pragmatics and sociolinguistic appropriateness, and information, thus building the foundation for maturing social-emotional development, theory of mind, and executive function.

Sources: Caleffe-Schenck, N. S., & Stredler-Brown, A. (1992). Early Steps Auditory Skills Checklist; Cole, E. (2004). Early Spoken Language Through Audition Checklist, SKI*HI Manual (Vol. I, pp. 137–141); Stredler-Brown, A., & Johnson, D. C. (2003). Functional Auditory Performance Indicators (FAPI).

Appendix 4

Explanation for Items on the Framework

I. Sensitivity to Child

Items in this section all require that the caregiver demonstrate a level of awareness of the child’s way of being and a desire to adjust to it in a supportive manner that promotes the child’s social/emotional, cognitive, and linguistic development. Although researchers have observed this for years (Bromwich, 1981; Clarke-Stewart, 1973), the importance of the caregiver’s sensitivity and positive affective relationship has been reiterated in the results from a number of recent studies (Cassidy & Shaver, 2008; Goldberg, 2013; Quittner et al., 2013). The items can be grouped as follows.

Item 1: Affect

Handles child in positive manner. The parent-child affective relationship is considered to have fundamental importance for a child’s language development, as well as for his or her social and cognitive development (Bromwich, 1981, 1997; Clarke-Stewart, 1973; Dore, 1986; Owens, 2005; Paul, 2007). “Handling the child in a positive manner” could include behaviors such as caressing, hugging, smiling, and behaving in a warm, loving, accepting, and joyous fashion with the child. It could also include simply being near the child, watching him or her thoughtfully, and being openly available for interaction. The key question here is, “Does the parent behave in ways that tell the child that the parent loves him and enjoys being with him?” Sometimes, even if many other areas on the framework are problematic, this is one area that is not, and the interventionist can emphasize its importance. Sometimes it is a question of caregivers not knowing how to behave in ways that send the warm, positive message they want to send. Sometimes the caregiver is unaware of the difference that this can make in the child’s behavior. And sometimes the caregiver actually is feeling primarily frustrated, angry, or hopeless toward the child, rather than warm and loving. Obviously, this can be a temporary or long-term phenomenon.


The interventionist must be aware of the limits of his or her expertise in addressing this particular area, and must know when to draw on the assistance of a social worker or clinical psychologist.

Items 2 and 3: Pacing and Focus

Paces play and talk in accordance with child’s tempo. Follows child’s interests much of the time. These items relate to the caregiver’s ability to read the child’s cues quickly regarding such dimensions as the amount of information and stimulation the child can absorb, the kinds of things the child is interested in, and how he or she indicates that interest. What needs to happen is for the caregiver to observe the child frequently and carefully, to follow his or her interests and tempo, and to fit in with both (Bromwich, 1997). With a young child, the cues may lie in such things as rate of body movement, orientation and/or tension of the body, facial expressions, gaze direction, gaze shifts, gestures, and vocalizations (Paul, 2007). The interventionist and the caregiver may find it useful to study the videotapes of the child interacting to determine what the child’s more salient as well as subtle cues are, and to spot moments where the adult responded or interjected comments in synchrony with the child’s tempo and focus.

Items 4 and 5: Appropriate Stimulation

Provides copious amounts of appropriate stimulation, activities, and play for the child’s age and stage. Encourages and facilitates child’s play with objects and materials. According to Clarke-Stewart’s seminal work (1973), the most growth-facilitating environment will be not only warm and loving, but also will be stimulating in visual, verbal, and material input. Naturally, what is appropriate will vary throughout the early childhood period. The point is that the caregiver needs to be aware of the child’s changing needs and abilities with regard to toys, books, and activities, and needs to provide for them (Gazdag & Warren, 2000; Thal & Clancy, 2001). In addition to the importance of the quality of the stimulation, recent research highlights the fact that the quantity of interacting also matters (Ambrose, VanDam, & Moeller, 2014; Szagun & Stumper, 2012). In essence, what is needed is large amounts of high-quality interactions.

II. Conversational Behaviors

Items in this section relate to what the caregiver does in the course of communicative interactions with the child. This section begins with behaviors associated with responding to the child’s communicative attempts, because parental responsiveness is highly associated with the child’s language development (Brady, Marquis, Fleming, & McLean, 2004; Gazdag & Warren, 2000; Poehlmann & Fiese, 2001). But clearly, the caregiver cannot (and would not want to) always follow the child and respond to him or her. Consequently, features of efficaciously establishing a shared focus, or initiating interactions, are next in the framework. The final group of items is those that apply in an overall sense to all caregiver talk in the course of communicative interactions with their child who experiences hearing loss.




A. In Responding to the Child

Items 6, 7, and 8: Recognizing Cues and Responding

Recognizes the child’s communicative attempts. Responds to child’s communicative attempts. Response includes a question or comment requiring a further response from the child. Adult responsiveness to the child’s attempts to communicate is considered to be one of the most important facilitative elements for optimal development of language in young children (Cook, Tessier, Klein, & Armbruster, 2000; Nittrouer & Burton, 2010). To respond to the child, the caregiver must be able to recognize the child’s communicative behaviors as such (item 6). For the older child, these may be words, with or without gestures. For the younger child, as mentioned above, communicative behaviors may be cued by rate of body movement, orientation and/or tension of the body, facial expressions, gaze direction, gaze shifts, gestures, and vocalizations (Bromwich, 1997; MacDonald, 1989; Rossetti, 2001). The caregiver can respond to the child’s communicative attempts in words and/or by smiling, nodding, gesturing, and moving closer (item 7). One of the most effective ways of continuing the conversational exchange is to respond and include a question or comment that encourages the child to respond also (item 8). For example, an infant might reach toward an object and look back and forth between the object and the adult. The adult’s response could be to smile, move the object close to the infant, and say, “Oh you want the dolly, don’t you?” The expected child response would be to take the dolly and relax body tension, and perhaps put it in his or her mouth or manipulate it. For an older child, the response to the child’s “Uh-oh!” might be the adult’s visual attention, moving closer, and “What happened?” The child’s expected response might be to point at an object and say, “Broke” or “Fell down.” These conversational turnabouts or verbal reflectives are considered to be very powerful learning devices (Kaye & Charney, 1980, 1981; Proctor-Williams, Fey, & Loeb, 2001). Caregivers of young children with hearing impairments may need some guidance in recognizing and effectively responding to child behaviors that are potentially communicative. Particularly with intervention that begins with children who are over the age of 12 months, caregivers tend to focus on the child’s production of words as evidence of progress. Long before words appear, there are a multitude of potential (and actual) communicative behaviors that the caregiver can advantageously promote.

Item 9: Imitation

Imitates child’s productions. Obviously, conversation consists of more than simply two people imitating each other’s contributions. But imitation is a normal and important part of caregiver–child interaction, particularly during the first 12 months. To imitate the child, the parent is automatically required to recognize and respond to the child’s cues, to follow the child’s interests rather than impose the parent’s own agenda, and to sense the child’s tempo. By imitating the child, the parent has an excellent chance of successfully establishing a mutual focus of attention and of engaging in turn-taking. Furthermore, it can be playful and fun.


Item 10: Providing the Words

Provides child with the words appropriate to what he or she apparently wants to express. Providing the child with the words appropriate to what he or she apparently wants to express has some of the same benefits as imitating the child’s utterance (Capone & McGregor, 2004). It requires recognizing and interpreting the child’s cues, as well as following the child’s lead. Snow, Midkiff-Borunda, Small, and Proctor (1984) and Vygotsky (1978) have all stated that, for cognitive development and language learning, it is vital to have a correspondence between what the adult says and the objects and events in the immediately surrounding environment. If the topic and message expressed by the adult are the child’s own, the adult’s utterance seems likely to have even more significance and impact. There are two dangers inherent to this activity, however. The first is that the caregiver must be a keen observer of the child’s cues, and as accurate an interpreter of the child’s intentions as possible. If not, and there is a mismatch between the caregiver’s words and the child’s real message, it could be at best meaningless, and at worst detrimental (Duchan, 1986). The other danger to guard against is also related to the caregiver’s ability to determine which of the child’s behaviors are potentially communicative. Providing words to describe every gesture or sound the child makes could clearly become a mindless narration of ongoing events. The idea is to focus on the times when the child is actually trying to express something, and to give him the words he would use in that situation.

Examples

Context: The child peers over the side of the high chair, points downward, looks sad, and vocalizes.
Adult: “Uh-oh! The cookie fell down.”

Or

Context: The child points out of the car window and vocalizes excitedly.
Adult: “Oh look! A great big fire truck!”

Item 11: Expansions

Expands child’s productions semantically and/or grammatically. Expansions are parental responses that repeat part or all of the child’s preceding utterance and add to it either semantically or syntactically or both. They can be minimal replies that basically correct grammar in the child’s utterance, or they can be more extended expansions that provide related new ideas or information (Proctor-Williams, Fey, & Loeb, 2001). In both of the following instances, the mother’s utterances are extended expansions of the child’s preceding utterances that provide the child with both a syntactic alternative and a semantic extension.

Examples

Child: “Broke truck.”
Mother: “Tommy broke the truck. Too bad.”

Or

Child: “Doggie pee-pee!”
Mother: “Oh no! The doggie went pee-pee on the floor!”




Expansions are considered to be of great importance in intervention settings since they occur with great regularity in normal caregiver talk, because they model correctly for the child how to add syntactic or semantic elements to an utterance, and because they are positively correlated with measures of language development (Saxton, 2005; Szagun & Stumper, 2012). Interestingly, it has been shown that children are more likely to imitate expansions spontaneously than any other type of adult utterance (Folger & Chapman, 1978; Gazdag & Warren, 2000; Nittrouer, 2010; Scherer & Olswang, 1984; Seitz & Stewart, 1975). This may be because an expansion is directly based on the child’s utterance, and thus holds some automatic interest to him. Also, the adult expansion often only adds a moderately novel element to the original utterance, and often supplies words the child probably understands but did not use in his original utterance. When the expanded adult’s utterance also contains a “hook” that intends to clarify the child’s message and/or expects a response from the child, it may be even more powerful as a conversation-promoting strategy. In the examples above, the caregiver could respond with the following expansions that also contain hooks.

Examples

“Tommy broke the truck?” (an expansion; also a yes/no clarification question)

Or

“The doggie went pee-pee where?” (an expansion; also a request for specific additional information)

The examples then become not just expanded responses, but turnabouts as discussed by Kaye and Charney (1980, 1981) and Proctor-Williams, Fey, and Loeb (2001). The benefits of these responses are that the caregiver is responding to the child, is on the child’s topic, is modeling a semantically and syntactically expanded and correct utterance, is engaged in negotiating communicative meaning, and has attempted to elicit continued participation from the child. And, in most cases, none of this requires conscious effort beyond wanting to understand and communicate with the child.

B. In Establishing Shared Attention

Items 12 and 13: Engaging the Child

Attempts to engage child. Talks about what the child is experiencing, looking at, or doing. Item 12 is listed separately to refocus attention on two important elements of interaction already mentioned above as part of other items. One element relates to the items concerning affect, similar to item 1. Whether the caregiver does or does not occasionally attempt to engage the child in playful interaction or conversation can be a clue to the parent’s level of coping and/or commitment to the process (see Chapter 10). If the caregiver does not attempt to engage the child, that could be a bit of behavioral evidence that there are problems in their affective relationship or that the parent is simply feeling too hopeless or frustrated to bother. Or, a lack of attempting to engage the child may be related to the other element upon which item 12 is intended to refocus attention. That element is that with all the emphasis on responding to the child (items 3, 6 to 11), the caregiver may begin to get the impression that one is never supposed to initiate interaction, only to respond to the child’s apparent communicative attempts. This simply is not the case. To begin with, it is impractical in terms of the daily events occurring in the child’s life: Try dressing or feeding a child in a purely responsive manner. Taken to the extreme, if the adults rarely initiate communication with the child, it could deprive the child of normal opportunities to learn to respond to others’ communicative attempts (including others’ interests and agendas). That said, item 13 reiterates that any child-directed utterance is likely to be most successful at capturing and maintaining the child’s attention, as well as at eliciting a response, if the adult utterance has some connection with what the child is experiencing, looking at, or doing. That is, the caregiver certainly can initiate at will and on any topic but can expect the most rewarding child response if the parent’s utterance fits in with the child’s state and interests at that moment. This requires observation, sensitivity, and insight into the child’s behavioral cues to his or her state and interest (see items 2, 3, and 4).

Items 14 and 15: Sense Modalities

Uses voice (first) to attract child’s attention to objects, events, and self. Uses body movement, gesture, and touch appropriately in attracting child’s attention to objects, events, and self. Items 14 and 15 concern the caregiver’s use of sense modalities in attracting the child’s attention. The items here refer to adult strategies used in the course of normal interactive events. Because spoken language is primarily an acoustic event, it can most efficaciously be learned by means of acoustic input (i.e., through listening). Thus, as part of helping the child learn to listen, the caregiver should always attempt to attract the child’s attention using voice alone (e.g., through calling the child’s name). When the child responds based on listening alone, do make sure that the child knows that that was a great thing to do! If the child does not respond, get closer to the child’s microphone and call again. If the child still does not respond after three or so tries, then catch the child’s visual attention and let him know that you were calling him, and that he was supposed to listen and respond. One of the important aspects to consider is: “To what does one attract the child’s attention?” There should be a meaningful reason for the child to turn, such as to look at an interesting toy or event. Otherwise, why would anyone continue turning when called?

Item 16: Sentence Complexity

Uses phrases and sentences of appropriate length and complexity. This item needs to be interpreted with caution and some latitude. Research fairly consistently reports that adults use sentences of approximately eight morphemes when speaking with each other, and of about three to five morphemes when speaking with children from birth through toddlerhood (Fey, Long, & Finestack, 2003; Owens, 2005; Phillips, 1973; Sheng, McGregor, & Xu, 2005; Stern, Spieker, Barnett, & MacKain, 1983). However, research results are markedly inconsistent (see Snow, Perlman, & Nathan, 1987, for a review) regarding the exact nature of the semantic and syntactic changes that occur, and when and why they occur during the birth to 3-year-old age period. For example, with very young infants (i.e., birth to 2 months), some mothers use utterances of adult length as they croon and talk to them. Then, perhaps as the child shows an increasing capability of participating in face-to-face play interactions, some mothers reduce the length and complexity of their utterances to fit more closely with the child’s early smiling and babbling exchanges. Other mothers use short utterances from the beginning. And after the child begins using words, some mothers match the child’s utterance length, whereas some stay several morphemes in advance. Lacking whatever certainty research could provide for this item, the interventionist can rely on what seems sensible from a communication and learning point of view. An argument could be made that if an adult is using very long and complex sentences with a young child, it is possible that the adult has no expectation that the child will understand what is being said. With the neonate, the mother’s long crooning sentences are likely to be more a part of the mother’s efforts to provide a safe and comforting environment for the child, rather than a part of attempting to communicate specific messages to the child. When an adult uses very long sentences with an older child, it can sometimes actually sound as if the adult is talking to himself or herself, rather than to the child. In intervention settings, the intent generally is to communicate with the child. One is striving for the child’s understanding. Consequently, the three- to five-morpheme length is probably a reasonable guideline for most caregiver utterances in most situations.

As for whether the caregiver’s utterances should match or stay slightly in advance of the child’s utterance length as the child’s language grows, it seems sensible for the caregiver’s utterance length and complexity to be generally a bit in advance of the child’s. Certainly, this will automatically be the case if the caregiver is speaking primarily in phrases and sentences (rather than in single words) to provide more complete “acoustic envelopes,” and if the caregiver is responding to the child’s utterances using expansions and/or turnabouts (see Chapter 9).

In General

Items 17, 18, 19, and 20: Manner of Speaking

Pauses expectantly after speaking to encourage child to respond. Speaks to child with appropriate rate, intensity, and pitch. Uses interesting, animated voice. Uses normal, unexaggerated mouth movements. Items 17 through 20 relate to the manner in which the caregiver delivers the message to the child. They are interrelated, but it is possible for one item to be problematic and not the others. Pausing after speaking to encourage the child to respond (item 17) is a fundamental part of turn-taking. In practice, it can mean giving the child the space (time) to “get a word in edgewise,” or it can mean waiting long enough for the child to process the adult message and to respond to it. According to Stern et al. (1983), maternal pausing between utterances is greatest at birth and declines somewhat after basic turn-taking rules are established. However, between-utterance pauses even at 2 years of age are still approximately twice as long as they are in adult–adult utterances.

The caregiver’s rate of speaking (item 18) also tends to be a bit slower in adult–child speech than in adult–adult speech, with enunciation somewhat clearer. Clarifying the message in these ways could be important in many instances as an aid to the child’s understanding (Sheng, McGregor, & Xu, 2005). But it would be most reasonable to maintain a normal rate of about three to five syllables per second (Ling, 2002) to avoid distorting the message. At slower rates, there can be a tendency to overenunciate — to mouth the articulatory movements to an excessive extent. This can create a situation where the child can only understand people who make exaggerated mouth movements. (Not surprisingly, these are also often the children who overenunciate.) If the rate is careful but normal, the mouth movements are also likely to be normal and unexaggerated (item 20). Intensity (item 18) should be normal (not shouting and not murmuring) to avoid inadvertently distorting the acoustic signal, and must take into account the child’s hearing range (Ling, 1981). The closer the speaker’s mouth is to the child’s hearing aid microphone, the louder it will be for the child. The generally accepted optimal distance is 4 to 6 inches from the microphone. However, in normal conversational situations the optimum speaking distance is entirely dependent on the degree of the child’s hearing loss, the level of background noise, the social context, and the presence or absence of remote microphone devices. In normal motherese, the overall pitch is generally higher and the intonational contours more varied than in adult–adult talk. If the overall pitch is too high (item 18), it may give a less-than-optimal signal to a child with a high-frequency hearing loss. However, varied intonational contours and an interesting, animated voice (item 19) are likely to be well within the hearing capabilities of nearly all children with hearing loss. As these voice features are used to get and maintain the child’s attention (Stern et al., 1983), as well as to cue constituent boundaries, sentence types, and speech acts, it is especially important that caregivers incorporate them when talking to children with hearing loss.

Items 21 and 22: Sense Modalities Again

Uses audition-maximizing techniques. Uses appropriate gesture. Items 21 and 22 concern the caregiver’s use of sense modalities in situations other than attracting the child’s attention (see items 14 and 15). The principles are the same in that auditory input is the preferred choice for receiving an acoustically based communication modality (i.e., spoken language). Strategies for maximizing the child’s listening abilities include speaking well within the child’s hearing range; using a normal overall pitch and rate of speaking; using an animated, interesting voice with varied intonational contours; speaking while sitting behind or beside the child; obscuring the child’s view of the lips using the hand or a piece of paper; and attracting the child’s visual focus away from the face and to the toys or activities at hand while speaking. The amount of gesture that is appropriate (item 22) will need to be interpreted with some flexibility according to the individual case and contexts observed. Research suggests that most child communications include gestures throughout the birth to 3-year-old time period (Carpenter, Mastergeorge, & Coggins, 1983; Lasky & Klopp, 1982). When the goal is for the child to learn spoken language through listening, natural gestures on the adult’s part are fine as long as they are not excessive and it is still acoustics (spoken language) that is conveying the message, not gestures. The item has been included on the framework to encourage discussion in case the caregiver’s gestures seem to be the major mode of conveying messages.


Appendix 5

Checklist for Evaluating Preschool Group Settings for Children With Hearing Loss Who Are Learning Spoken Language

Federal law (Part B, IDEA 2004) requires that children with disabilities receive a free and appropriate education in the least restrictive environment. The child’s placement in a particular educational environment is a decision to be made by the child’s individualized educational plan (IEP) team (which includes the parents), only after an IEP has been developed, which is based on assessments that yield information about the child’s current abilities and needs. The child’s individual, unique learning needs are used to determine what goals and objectives, services, modifications, and accommodations the child needs, and only then can an appropriate educational placement be determined. The least restrictive environment (LRE) for one child can be different from the LRE for another child, and the school system cannot use a “one-size-fits-all” approach for determining placement of children with disabilities. The placement decision cannot be based on location of staff, available funds, or school district convenience (Rebhorn & Smith, 2008).

In many school districts, preschoolers with disabilities who qualify for special education services attend preschool special education classes on a part-time or full-time basis. However, the question is whether the district special education preschools serving children with the full range of disabilities can meet the unique needs of a preschooler who has hearing loss and whose major goals are listening, understanding, and learning spoken language through audiological technology.


needs (Wright & Wright, 2006, 2007; Wright, Wright, & O’Connor, 2010). For one child, a preschool setting may not be the most appropriate educational setting if, for example, the parents want to continue to be the child’s primary auditory and spoken language teachers, guided by a certified Auditory-Verbal Listening and Spoken Language Specialist (LSLS). For another child, the child’s IEP team may decide that a district special education preschool or some other specialized preschool program is the most appropriate educational setting for addressing the needs of the child — or that the child will be most appropriately served with individual intervention sessions and optional placement in a regular preschool. Much hinges on the assessment of the child’s unique needs as a child with a hearing loss, and on the knowledge of the members of the IEP team in determining what services and supports are needed to address those needs most appropriately. Since hearing loss is a low-incidence disability for school-age children, the unique needs of children with hearing loss may not be well known to school district special education staff. It behooves parents to be as well informed as possible to advocate effectively for needs of their children. In some states, there is a language and communication plan, which should be filled out following discussion in the IEP meeting. That plan may help to focus the team’s efforts on meeting the unique listening and learning needs of a child with hearing loss who is learning spoken language. The checklist that follows is offered as a way of assisting with the process of evaluating the appropriateness of preschool settings under consideration, and of assuring that the necessary elements are present so the child will have appro-

priate auditory brain access to the curriculum and will have instruction that addresses his or her unique auditory and language-learning needs as an individual with hearing loss. To reiterate, children with hearing loss have listening and language-learning needs that are unique, and that make educational placement a different process than it is for a child with a different disability (Alexander, 1992; Joint Committee on Infant Hearing, 2007). When considering educational placement for any child with hearing loss who is learning to listen and talk, the child’s team (parents and school personnel) needs to take into account the elements of that setting that will appropriately address those unique listening and language-learning needs. Without these elements, the placement is likely to restrict the child’s access to the curriculum and to the instruction from which he or she is intended to benefit, and will not fit the federal requirement of educational placement in the LRE. If the goal is for the child to (continue to) develop spoken language through audition (for auditory brain access, stimulation, and development), the following elements need to be present in any preschool group learning setting to address the unique listening and language-learning needs of a child with hearing loss.

Listening Needs

1. The child with hearing loss needs appropriate auditory brain access to spoken communication and instruction.
• Through appropriate educational activities and classroom management techniques, the teacher generally keeps the classroom quiet enough that children can hear whoever is speaking.
• All preschool staff members are vigilant about maintaining as quiet a learning environment as is possible.
• The classroom has acoustic treatments, such as the following:
   • Acoustic tiles in the ceiling
   • Carpeting or chair slippers
   • Sound-absorbent surfaces on at least some walls (e.g., bulletin boards) so that reverberation is reduced
   • Sound-absorbent window coverings
   • Very quiet (unnoticeable) heating, ventilating, and air-conditioning (HVAC) system
• The classroom is located in a quiet spot away from city traffic, hallway noise, the cafeteria, the band room, and the gym.
• The child has properly selected and fit personal remote microphone (RM) equipment for use in all large and small group activities in which it is important for the child to hear the person who is speaking.
• The teacher is committed to wearing the RM transmitter with appropriate microphone placement and uses it in all appropriate settings, being vigilant about turning it off in inappropriate settings.
• Training for all school staff is given prior to the child’s arrival at school by an educational audiologist, teacher of children with hearing loss, or other knowledgeable professional regarding the child’s RM equipment and regarding the child’s needs related to the hearing loss.
• One adult in the child’s school takes responsibility for daily checks of the child’s personal audiological technology and RM equipment, as well as for troubleshooting and requesting additional technical help when needed.
• In case the child’s school RM malfunctions, the school system has provisions for loaner RM equipment to be available within 24 to 48 hours.

2. The child with hearing loss needs to have his or her hearing status closely monitored, with systematic changes in the child’s technology made in response to changes in thresholds and/or changes in the child’s auditory perceptual abilities as reported by parents or professionals who work with the child.
• The educational audiologist is an integral part of the child’s team. An audiologist is the only professional with the expertise and licensure to select, fit, and monitor the child’s audiological technology, including personal RM technology. Typically, preschoolers need to have their hearing evaluated at least two to four times during the school year, and more frequently when parents or teachers are concerned about changes in the child’s responses.


Spoken Language Learning Needs

1. The child with hearing loss needs to continue to develop spoken language through listening — that is, through stimulating the auditory centers of the child’s brain with spoken information.
• Intervention services are available from qualified professionals who have expertise in working with children who have hearing loss and who are learning spoken language through listening. This expertise includes knowledge of current audiological technology, as well as knowledge and skills in aural habilitation and auditory verbal therapy. Ideally, the professional with the expertise needed to provide appropriate services for children with hearing loss will have certification as an Auditory-Verbal Listening and Spoken Language Specialist (LSLS Cert. AVT, or LSLS Cert. AVEd).
• A quiet space needs to be available for intervention by the aural habilitationist (hopefully a certified AVT or AVEd) that is focused on the child’s individual auditory and linguistic targets. The amount of individual work will depend on the degree of delay in the child’s auditory and linguistic abilities and the amount of parent involvement.
• School staff expect to work as a team with parents to continue to develop the child’s spoken language and listening abilities by means of, for example, weekly parent guidance sessions, regular team meetings, regular communication with parents so that parents can assist with pre- and post-teaching, as well as with vocabulary expansion and practice on new linguistic structures. This has crucial importance in view of the fact that even when the child attends a preschool setting for a full school day, that represents only about 25% of the child’s waking hours.

2. The child with hearing loss needs a linguistically enriched environment.
• The teacher frequently encourages all children to interact verbally in class.
• The teacher bases most of his or her activities and conversation around developmentally appropriate activities that capture the children’s interest and curiosity.
• Play centers, tables, chairs are arranged in a way that encourages verbal interaction in small groups.
• The teacher repeats and/or summarizes other children’s comments and questions.
• The teacher checks for comprehension, without asking, “Do you understand?”
• The teacher’s pace reflects student understanding.

3. The child with hearing loss needs excellent models of spoken language.
• The teacher’s usual rate, pitch, vocal intensity, and articulation make the teachers’ speech easy to hear and understand.
• The teacher uses facial expression and natural gestures to aid comprehension in talking to preschoolers.
• At least half of the children in the class are typically developing children whose behavior and spoken language are within normal limits for their ages.

4. The child with hearing loss needs to learn appropriate ways of interacting with typically developing peers (e.g., how to start, carry on, and close a conversation or play; how to negotiate ownership of toys or materials; how to take turns; how to initiate play with a peer).
• The preschool offers frequent opportunities for play and verbal interaction with typically developing peers.
• Adults are available to expedite verbal interactions among the children but step back when appropriate.


Appendix 6

Selected Resources

Normal Development of Spoken Language: Textbooks

Gleason, J. B., & Ratner, N. B. (2012). The development of language (8th ed.). Boston, MA: Pearson Education.
Hoff, E. (2014). Language development (5th ed.). Belmont, CA: Thomson Wadsworth.
Hulit, L. M., Fahey, K. P., & Howard, M. R. (2014). Born to talk: An introduction to speech and language development (5th ed.). Boston, MA: Pearson Education.
Otto, B. W. (2009). Language development in early childhood education (3rd ed.). Boston, MA: Merrill.
Owens, R. E. (2015). Language development: An introduction (9th ed.). Boston, MA: Allyn & Bacon.
Paul, R., Norbury, C., & Gosse, C. (2018). Language disorders from infancy through adolescence (5th ed.). St. Louis, MO: Mosby.

Assessment of Auditory Abilities

• See Table 5–1 for list of functional auditory assessment tools
• Targets for auditory/verbal learning (see Appendix 3)
• Alexander Graham Bell Association for the Deaf and Hard of Hearing (AG Bell, 2014). Recommended protocol for audiological assessment, hearing aid and cochlear implant evaluation, and follow-up (see more at: https://www.agbell.org/Advocacy/Alexander-Graham-BellAssociations-Recommended-Protocolfor-Audiological-Assessment-HearingAid-and-Cochlear-Implant-Evaluationand-Follow-up)
• Perigoe, C., & Paterson, M. M. (2013). Understanding auditory development and the child with hearing loss. In Deborah R. Welling and Carol A. Ukstins (Eds.), Fundamentals of Audiology for the Speech-Language Pathologist (pp. 173–204). Burlington, MA: Jones & Bartlett Learning.

Language Assessment Tools

Boehm 3–Preschool. Boehm, A. E. (2001). Austin, TX: Pro-Ed.
• Assesses knowledge of concepts, receptive only; provides percentiles, T-scores, age equivalents.

Brigance Early Childhood Development Inventory. Brigance, A. H. (2010). North Billerica, MA: Curriculum Associates.
• Curriculum-based assessment from birth to 7 years in six developmental areas.

Carolina Curriculum for Infants and Toddlers (3rd ed.). Johnson-Martin, N., Hacker, B., & Attermeier, S. (2004). Baltimore, MD: Paul H. Brookes.
• Provides assessment and intervention for children birth to 24 months; includes prelinguistic and verbal communication, as well as cognition, social skills, and fine and gross motor abilities.

Clinical Evaluation of Language Fundamentals — Preschool (2nd ed.) (CELF-Preschool). Wiig, E. H., Secord, W., & Semel, E. (2004). San Antonio, TX: Harcourt.
• Assesses receptive and expressive language; provides standard scores, percentile scores.

Cottage Acquisition Scales for Listening, Language, and Speech (CASLLS). Wilkes, E. (2003). San Antonio, TX: Sunshine Cottage.
• Criterion-referenced assessment of listening, language, speech, and cognition; based on normal development of children age birth to 8 years.

Peabody Picture Vocabulary Test (4th ed.) (PPVT-4). Dunn, L., & Dunn, L. (2006). Circle Pines, MN: American Guidance Service.
• Assesses receptive vocabulary in children ages 2.5 years to adult; provides standard scores, percentiles, age equivalents, stanine; widely recognized.

Preschool Language Scale (5th ed.). Zimmerman, I., Steiner, V., & Pond, R. (2011). San Antonio, TX: Psychological Corporation/Pearson.
• Assesses receptive and expressive language abilities; provides standard scores, percentile ranks, and age equivalents.

MacArthur-Bates Communicative Development Inventories (2nd ed.). Marchman, V. A. et al. (2006). Baltimore, MD: Paul H. Brookes.
• Assesses receptive and expressive vocabulary and early language from nonverbal gestures through use of word combinations; parent report.

Receptive-Expressive Emergent Language Scale (REEL) (3rd ed.). Bzoch, K., League, R., & Brown, V. (2003). Austin, TX: Pro-Ed.
• Assesses receptive and expressive language; provides standard scores.

Reynell Developmental Language Scales. Reynell, J. K., & Gruber, C. P. (1999). Los Angeles, CA: Western Psychological Services.
• Assesses receptive and expressive language abilities; provides standard scores.

Test for Auditory Comprehension of Language (TACL-4). Carrow-Woolfolk, E. (2014). Austin, TX: Pro-Ed.
• Comprehension of word classes and relations, grammatical morphemes, complex sentences; provides percentiles, standard scores, age equivalents, ages 3 to 10 years.

Books and Materials of Interest to Parents and Interventionists

Apel, K., & Masterson, J. (2001). Beyond baby talk. New York, NY: Prima.
• Book for parents about speech and language development.
BabyBeats is a free app available in the App Store and Google Play, by Advanced Bionics.
• The app displays songs, videos, and musical activities for infants and preschoolers to facilitate early communication, listening, and preliteracy skills.
Bradham, T. S., & Houston, K. T. (2015). Assessing listening and spoken language in children with hearing loss. San Diego, CA: Plural Publishing.
Clark, M. (2006). A practical guide to quality interaction with children who have hearing loss. San Diego, CA: Plural Publishing.
The SKI-HI Curriculum
• See the section entitled Early Spoken Language Through Audition by Cole, E. B., Carroll, E., Coyne, J., Gill, E., & Paterson, M. (2005). Vol. I, pp. 137–141; Vol. II, pp. 1279–1394. Logan, UT: Hope.
Estabrooks, W. (Ed.). (2012). 101 Frequently asked questions about auditory-verbal practice. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing.
Estabrooks, W., MacIver-Lux, K., & Rhoades, E. A. (Eds.). (2016). Auditory-verbal therapy. San Diego, CA: Plural Publishing.
Estabrooks, W., & Samson, A. B. (2003). Songs for listening! Songs for life! Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing.
Fitzpatrick, E. M., & Doucet, S. P. (2013). Pediatric audiologic rehabilitation: From infancy to adolescence. New York, NY: Thieme.
Hollinger, P., & Doner, K. (2003). What babies say before they can talk. New York, NY: Simon & Schuster.
Ling, D. (2002). Speech and the hearing-impaired child: Theory and practice (2nd ed.). Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing.
Madell, J. R., Flexer, C., Wolfe, J., & Schafer, E. C. (2019). Pediatric audiology: Diagnosis, technology and management (3rd ed.). New York, NY: Thieme.


Madell, J. R., Flexer, C., Wolfe, J., & Schafer, E. C. (2019). Pediatric audiology casebook (2nd ed.). New York, NY: Thieme.
McCreery, R. W., & Walker, E. A. (2017). Pediatric amplification: Enhancing auditory access. San Diego, CA: Plural Publishing.
Pepper, J., & Weitzman, E. (2004). It takes two to talk: A practical guide for parents of children with language delays (3rd ed.). Toronto, Ontario: Hanen Center (http://www.hanen.org/Shop/Products/It-Takes-Two-to-Talk-Guidebook-and-DVD-ComboPack.aspx).
Pollack, D., Goldberg, D., & Caleffe-Schenck, N. (1997). Educational audiology for the limited-hearing infant and preschooler: An auditory-verbal program. Springfield, IL: Charles C. Thomas.
Rhoades, E. A., & Duncan, J. (2017). Auditory-verbal practice: Family-centered early intervention (2nd ed.). Springfield, IL: Charles C. Thomas.
Robertson, L. (2014). Literacy and deafness: Listening and spoken language (2nd ed.). San Diego, CA: Plural Publishing.
Rossi, K. (2005). Let’s learn around the clock: A professional’s early intervention toolbox. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing.
• Comprehensive intervention program for use through everyday activities, birth to 3 years of age.
Smaldino, J. J., & Flexer, C. (2012). Handbook of acoustic accessibility: Best practices for listening, learning, and literacy in the classroom. New York, NY: Thieme.
Suskind, D. (2015). Thirty million words: Building a child’s brain. New York, NY: Penguin Random House.
Wolfe, J. (2019). Cochlear implants: Audiologic management and considerations for implantable hearing devices. San Diego, CA: Plural Publishing.

A Few Favorite Websites for Parents and Interventionists

https://community.hearingfirst.org/:  For families and professionals seeking resources and support for listening and spoken language outcomes for children with all degrees of hearing loss
http://trelease-on-reading.com:  How and why to read aloud to all children
http://www.infanthearing.org:  National Center for Hearing Assessment and Management for improvement of early hearing detection and intervention systems (EHDI)
http://www.jtc.org:  John Tracy Clinic
http://magickeys.com:  Fast Phonics
http://www.cochlear.com/wps/wcm/connect/us/home
https://advancedbionics.com/us/en/home/support/tools-for-schools.html
https://www.medel.com/us/children-kids-corner/
https://www.learntotalkaroundtheclock.com:  Look for the place to sign up for the free monthly “Talking Tips” newsletter
http://oberkotterfoundation.org:  Support for families who choose listening and spoken language outcomes
http://www.successforkidswithhearingloss.com:  Karen Anderson’s website with assessment and intervention materials for infants and children with hearing loss of all ages
http://www.wrightslaw.com:  Special education law
https://global.ihs.com/search_res.cfm?&rid=ASA&input_doc_number=ANSI%2FASA%20S12%2E60&input_doc_title=%20:  Acoustical guidelines


Appendix 7

Description and Practice of Listening and Spoken Language Specialists: LSLS Cert. AVT and LSLS Cert. AVEd

Listening and Spoken Language Specialists

Listening and spoken language (LSL) specialists help children who are deaf or hard of hearing develop spoken language and literacy primarily through listening — through auditory brain access, stimulation, and development. LSL specialists focus on education, guidance, advocacy, family support, and the rigorous application of techniques, strategies, and procedures that promote optimal acquisition of spoken language through listening (auditory brain development) by newborns, infants, toddlers, and children who are deaf or hard of hearing. LSL specialists guide parents in helping their children develop intelligible spoken language through listening and coach them in advocating their children’s inclusion in the mainstream school. Ultimately, parents gain confidence that their children will have access to the full range of educational, social, and vocational choices in life.


The Alexander Graham Bell Association for the Deaf and Hard of Hearing (https://www.agbell.org/ Families/Listening-and-Spoken-Language). Reprinted with permission. All rights reserved.


Listening and Spoken Language Approaches

The two main listening and spoken language approaches, historically, have been the auditory-verbal (AV) approach and the auditory-oral (A-O) approach. Today, as a result of advances in newborn hearing screening, hearing technologies, early intervention programs, and the knowledge and skills of professionals, these two approaches have more similarities than differences and they can lead to similar outcomes. The AG Bell Academy for Listening and Spoken Language certifies LSL specialists. Currently, the designations of the LSLS certification program are LSLS Cert. AVT (Certified Auditory-Verbal Therapist) and LSLS Cert. AVEd (Certified Auditory-Verbal Educator). The LSL specialist must provide services in adherence to the AG Bell Academy Code of Ethics and the Principles of Auditory-Verbal Therapy or the Principles of Auditory-Verbal Education (available in the LSLS candidate handbooks and online at: https://agbellacademy.org/certification/forms-fees-and-faqs/).

Listening and Spoken Language Practice

Both types of LSL specialists have similar knowledge and skills and work on behalf of the child and family.
• The LSLS Cert. AVT works one on one with the child and family in all intervention sessions.
• The LSLS Cert. AVEd involves the family and also works directly with the child in individual or group/classroom settings.
• The LSLS Cert. AVT and the LSLS Cert. AVEd both follow developmental models of audition, speech, language, cognition, and communication.
• The LSLS Cert. AVT and the LSLS Cert. AVEd both use evidence-based practices.
• The LSLS Cert. AVT and the LSLS Cert. AVEd both strive for excellent outcomes in listening, spoken language, literacy, and independence for children who are deaf or hard of hearing.

Appendix 8

Principles of Certified LSL Specialists

Each certified LSL Specialist designation has specific principles professionals must uphold. These principles are the basis of the auditory-verbal practitioner profession.

Principles of Certified LSLS Auditory-Verbal Therapists (LSLS Cert. AVT)

1. Promote early diagnosis of hearing loss in newborns, infants, toddlers, and young children, followed by immediate audiologic management and auditory-verbal therapy.
2. Recommend immediate assessment and use of appropriate, state-of-the-art hearing technology to obtain maximum benefits of auditory stimulation.

3. Guide and coach parents to help their child use hearing as the primary sensory modality in developing listening and spoken language.
4. Guide and coach parents to become the primary facilitators of their child’s listening and spoken language development through active consistent participation in individualized auditory-verbal therapy.
5. Guide and coach parents to create environments that support listening for the acquisition of spoken language throughout the child’s daily activities.
6. Guide and coach parents to help their child integrate listening and spoken language into all aspects of the child’s life.

An auditory-verbal practice requires all 10 principles. Adapted from the principles originally developed by Doreen Pollack (1970). Adopted by the AG Bell Academy for Listening and Spoken Language®, July 26, 2007. The term “parents” also includes grandparents, relatives, guardians, and any caregivers who interact with the child.


7. Guide and coach parents to use natural developmental patterns of audition, speech, language, cognition, and communication.
8. Guide and coach parents to help their child self-monitor spoken language through listening.
9. Administer ongoing formal and informal diagnostic assessments to develop individualized auditory-verbal treatment plans, to monitor progress, and to evaluate the effectiveness of the plans for the child and family.
10. Promote education in regular schools with peers who have typical hearing and with appropriate services from early childhood onwards.

Principles of LSL Specialist Auditory-Verbal Education (LSLS Cert. AVEd)

A Listening and Spoken Language Educator (LSLS Cert. AVEd) teaches children with hearing loss to listen and talk exclusively through listening and spoken language instruction.

1. Promote early diagnosis of hearing loss in infants, toddlers, and young children, followed by immediate audiologic assessment and use of appropriate state-of-the-art hearing technology to ensure maximum benefits of auditory stimulation.
2. Promote immediate audiologic management and development of listening and spoken language for children as their primary mode of communication.
3. Create and maintain acoustically controlled environments that support listening and talking for the acquisition of spoken language throughout the child’s daily activities.
4. Guide and coach parents to become effective facilitators of their child’s listening and spoken language development in all aspects of the child’s life.
5. Provide effective teaching with families and children in settings such as homes, classrooms, therapy rooms, hospitals, or clinics.
6. Provide focused and individualized instruction to the child through lesson plans and classroom activities while maximizing listening and spoken language.
7. Collaborate with parents and professionals to develop goals, objectives, and strategies for achieving the natural developmental patterns of audition, speech, language, cognition, and communication.
8. Promote each child’s ability to self-monitor spoken language through listening.
9. Use diagnostic assessments to develop individualized objectives, to monitor progress, and to evaluate the effectiveness of the teaching activities.
10. Promote education in regular classrooms with peers who have typical hearing, as early as possible, when the child has the skills to do so successfully.

The Alexander Graham Bell Association for the Deaf and Hard of Hearing (https://agbellacademy​ .org/certification/principles-of-lsl-specialists/). Reprinted with permission. All rights reserved.

Appendix 9

Knowledge and Competencies Needed by Listening and Spoken Language Specialists (LSLS)

AG BELL ACADEMY INTERNATIONAL CERTIFICATION PROGRAM FOR LSLS
Certified Auditory-Verbal Educator® — LSLS Cert. AVEd®
Certified Auditory-Verbal Therapist® — LSLS Cert. AVT®

Core Competencies, Content Areas, and Test Domains for the LSLS

This is a list of the core competencies, content areas, and test domains and their body of knowledge that a professional must have to qualify for and pass a test to earn the LSLS certification. The nine content areas and domains of knowledge covered in the LSLS test include:

• Domain 1.  Hearing and hearing technology
• Domain 2.  Auditory functioning
• Domain 3.  Spoken language communication
• Domain 4.  Child development
• Domain 5.  Parent guidance, education, and support
• Domain 6.  Strategies for listening and spoken language development
• Domain 7.  History, philosophy, and professional issues
• Domain 8.  Education (this domain focuses on the development and expansion of the auditory and language skills that underlie and support the child’s progress in the general education curriculum.)
• Domain 9.  Emergent literacy (this domain focuses on the development of the auditory and language skills that underlie and support the acquisition and advancement of literacy.)

The Alexander Graham Bell Association for the Deaf and Hard of Hearing (https://agbellacademy.org/certification/lsls-domains-of-knowledge/). Reprinted with permission. All rights reserved.

Following is a list of the nine test domains and content areas with the addition of their competencies to classify the LSLS body of knowledge. Questions on the LSLS test will address these competencies (https://agbellacademy.org/wp-content/uploads/2018/12/LSLS-Certification-Exam-Blueprint_FINAL.pdf).

Domain 1.  Hearing and Hearing Technology

A.  Hearing Science/Audiology
1. Anatomy of the ear and neural pathways
2. Physiology of hearing
3. Physics of sound (e.g., decibel, frequency, sound waves)
4. Psychoacoustics (e.g., HL, SPL, SL)
5. Auditory perception (e.g., masking, localization, binaural hearing)
6. Speech acoustics
7. Environmental acoustics
   a. Signal-to-noise
   b. Distance
   c. Noise
   d. Reverberation
8. Causes of hearing impairment
9. Types of hearing impairment and disorders (e.g., site of lesion, age of onset)
10. Early identification and high-risk factors
11. Audiogram, audiogram interpretation, and implications to speech perception
12. Audiologic assessments
   a. Behavioral
   b. Speech perception testing
   c. Electrophysiologic (e.g., OAE, ABR, ASSR, acoustic immittance)
   d. Hearing aid evaluation (e.g., real-ear/probe microphone, electroacoustic analysis)
   e. Cochlear implant candidacy, surgery, activation, and functional application of programs

B.  Hearing Technology
1. Sensory devices (e.g., hearing aids, cochlear implants, vibrotactile aids, transposition aids)
2. Assistive listening devices (e.g., Hearing assistive technology (HAT), such as personal remote microphone (RM) systems, sound-field FM and infrared [IR] systems)
3. Earmold acoustics (e.g., impact of the earmold characteristics on the transmission of sound)
4. Hearing technology troubleshooting strategies

Domain 2.  Auditory Functioning
1. Auditory skill development
2. Infant auditory development (e.g., neural development, neuroplasticity)
3. Functional listening skill assessments and evaluations, both formal and informal




4. Acoustic phonetics as related to speech perception and production
5. Functional use of audition

Domain 3.  Spoken Language Communication

A.  Speech
1. Anatomy of speech/voice mechanism
2. Physiology of speech/voice mechanism
3. Suprasegmental, segmental, and coarticulation aspects of speech production
4. Sequences of typical speech development (e.g., preverbal, articulation, phonology, intelligibility)
5. Sequence of speech development in clients with various sensory devices (e.g., hearing aids, cochlear implants, vibrotactile aids, transposition aids)
6. Speech production assessment measures, both formal and informal
7. Teaching techniques in speech production
   a. Prerequisite skills for phoneme production
   b. Developmental (habilitative) and remedial (rehabilitative) speech development
   c. Suprasegmental and segmental aspects of speech facilitation
   d. Auditory strategies for speech facilitation
   e. Visual and tactile strategies for speech facilitation
   f. Integration of speech targets into spoken language
8. Speech characteristics of children without auditory access to the full speech spectrum
9. International Phonetic Alphabet (IPA)
10. Impact of auditory access on speech production

B.  Language
1. Impact of auditory access on language development
2. Aspects of language (e.g., phonology, pragmatics, morphology, syntax, semantics)
3. Sequence of typical language development (e.g., prelinguistic, communicative intent, linguistic)
4. Language assessment measures, both formal and informal
5. Teaching techniques in receptive and expressive language
6. Impact of speech acoustics on choice of language targets (e.g., inside/beside, he/she)
7. Development of complex conversational competence
8. Development of divergent and convergent thinking
9. Figurative language and higher-level semantic usage

Domain 4.  Child Development
1. Sequence of typical child development
   a. Cognitive
   b. Gross and fine motor
   c. Self-help
   d. Play
2. Influence of associated factors on child development (e.g., cultural, community, family)
3. Conditions that are present in addition to hearing impairment (e.g., sensory integration deficits, visual challenges, autism spectrum disorders, neurological disorders, learning disabilities)

Domain 5.  Parent Guidance, Education, and Support
1. Family systems (e.g., boundaries, roles, extended family, siblings)
2. Impact of hearing impairment on family (e.g., coping mechanisms, family functioning, stages of grief)
3. Family counseling techniques (e.g., active listening, reflective listening, questioning, open-ended statements)
4. Family coaching and guidance techniques (e.g., demonstration, modeling, turning over the task, providing feedback, coteaching)
5. Impact of associated factors on parent guidance (e.g., cultural, language in the home, economic, lifestyle, community)
6. Behavior management techniques
7. Adult learning styles

Domain 6.  Strategies for Listening and Spoken Language Development
1. Learning to listen strategies (e.g., creating optimal listening environment, positioning to maximize auditory input)
2. Pausing (wait time) appropriately
3. Language facilitation techniques (e.g., expansion and modeling)
4. Prompting techniques (e.g., linguistic, phonological, acoustic, physical, printed/written prompts)
5. Responsive teaching (e.g., listening to the client and modifying according to the client’s language and speech production)
6. Creating a need for the child to talk
7. Acoustic highlighting techniques
8. Auditory presentation prior to visual presentation (e.g., say before seeing)
9. Spoken language modeling
10. Meaningful, interactive conversation
11. Experience-based, naturalistic language activities
12. Experience and personalized books

Domain 7.  History, Philosophy, and Professional Issues

A.  History and Philosophy
1. History of education of individuals who are deaf or hard of hearing
2. Historical perspective of communication approaches
3. Current communication approaches and principles for individuals who are deaf or hard of hearing

B.  Professional Issues
1. Ethical requirements and issues
2. Professional development requirements and opportunities
3. Evidence-based practice and research findings

Domain 8.  Education

This domain focuses on the development and expansion of the auditory and language skills that underlie and support the child’s progress in the general education curriculum.

1. Continuum of educational and community placements (e.g., child care, respite care)
2. Curricular objectives that meet local standards in areas of instruction
3. Strategies for preteaching and reteaching (post-teaching) the academic curriculum
4. Strategies for preteaching and reteaching (post-teaching) language needed for academics
5. Strategies to integrate auditory speech-language goals with curriculum
6. Cognitive and academic assessments
7. Process for developing individualized educational plans
8. Collaborative strategies with school professionals

Domain 9.  Emergent Literacy

This domain focuses on the development of the auditory and language skills that underlie and support the acquisition and advancement of literacy.

1. The learning sequence and pedagogy related to teaching the following skills in accordance with the child’s level of language development:
   a. Reciting finger plays and nursery rhymes
   b. Telling and/or retelling stories
   c. Activity and story sequencing
   d. Singing songs and engaging in musical activities
   e. Creating experience stories/experience books
   f. Organization of books (e.g., cover, back, title, author page)
   g. Directionality and orientation of print
   h. Distinguishing letters, words, sentences, spaces, and punctuation that mark text
   i. Phonics (e.g., sound–symbol correspondences and letter–sound correspondences)
   j. Phonemic awareness (e.g., sound matching, isolating, substituting, adding, blending, segmenting, deleting)
   k. Sight word recognition
   l. Strategies for the development of listening, speaking, reading, and writing vocabulary
   m. Contextual clues to decode meaning
   n. Oral reading fluency development
   o. Text comprehension strategies (e.g., direct explanation, modeling, guided practice, and application)
   p. Abstract and figurative language (e.g., similes, metaphors)
   q. Divergent question comprehension (e.g., inferential questions, predictions)


Appendix 10

Listening and Spoken Language Domains Addressed in This Book

Domain

Chapter

1

1

Competencies A.  1, 2, 7, 10 B. 1

2

A.  1, 2, 3, 4, 6, 7

3

A.  1, 2, 3, 4, 5, 8, 9, 10

4

A.  1, 2, 3, 4, 5, 6, 7, 10, 11, 12

5

A.  3, 4, 5, 6, 7, 10, 11, 12 B.  1, 2, 3, 4

7

A.  6, 7

Appendix 1

A. 7 B. 2

2

Appendix 2

A. 6

Appendix 5

A. 7

1

1, 2, 5

2

4

5

3

7

1, 3, 4, 5

10

5

Appendix 1

1, 5

Appendix 2

3, 4, 5

Appendix 3

1, 3, 4, 5

Appendix 10.  continued Domain

Chapter

3

7

Competencies A.  3, 4, 10 B. 1

8

B.  2, 3, 7

9

A. 3 B.  5, 7

10

A. 3 B.  5, 7

Appendix 1

B. 5

Appendix 3

A.  3, 10 B.  5, 7

4

5

Appendix 4

B.  3, 5, 7

Appendix 5

B. 1

Appendix 6

B.  3, 4, 5

3

3

6

2, 3

10

1

Appendix 4

1

10

2, 3, 4, 5

Appendix 4 6

7

4

7

1, 2, 7, 8

9

2, 3, 5, 6, 9, 10, 11

10

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

Appendix 1

3, 10, 11

Appendix 4

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11

Appendix 6

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

1

B. 3

5

B. 3

6

A. 3

Appendix 7

A. 3 B. 2

Appendix 8

A. 3 B. 2

Appendix 9

A. 3 B. 2

Appendix 10

B. 2


Domain 8 9

Chapter

Competencies

6

1

Appendix 5

1

10

1.  a, b, d, e, l

Appendix 1

1.  b, c, d

Appendix 5

1.  a, b, d, e, f, g, h, i, j, k

Appendix 6

1. l


Glossary

Accommodations.  Services or equipment to which the student with a disability is legally entitled for the purpose of providing an adequate and equitable education. Acoustic.  Sound, its physical nature. Acoustic accessibility.  Refers to the behavior of sound in a room (any room — home, school, or community based), its impact on children, and its effect on the ability of a child to extract meaning from sound. Acoustic filter effect of hearing loss. Hearing loss acts like an invisible acoustic filter that distorts, smears, or eliminates incoming sounds; the negative impact of this filter effect on spoken communication, reading, academics, and independent function causes the problems associated with hearing loss. Acoustic phonetics.  The study of speech sounds as they are perceived by the listener’s ear; objective study and measurement of the sound waves produced when speech sounds are spoken. Acoustic saliency.  An acoustically salient phoneme (speech sound)

or word is one that is obvious and prominent in an utterance. In a sentence context, acoustically nonsalient morphemes are shorter in duration and softer than louder phonemes in adjacent portions of the utterance. Acoustical treatment.  Using a material or process to change the absorptive or reflective characteristics of a surface in a room. Acoustically modified earmolds. ​ Specifically shaped earmolds that change the output of the hearing aid (e.g., horn-shaped earmold that improves high-frequency amplification). Acquired hearing loss.  Hearing losses that occur after speech and language have been developed. Affect.  Pertaining to feelings, emotions. Aided thresholds.  Represented by the symbol A on an audiogram, they are the softest tones that a person can hear while wearing hearing aids. Air conduction.  Sound travels through the air from a source, reaches the ear, enters the auditory system through the ear canal, and progresses through 343


the eardrum, middle ear, inner ear, and then to the brain. Air conduction thresholds are obtained by using the earphones (or ear inserts) of an audiometer, with the symbol O representing right ear sensitivity and the symbol X representing left ear sensitivity on an audiogram. Alveolar.  A speech sound produced with involvement of the ridge in the mouth just behind the upper teeth, such as /t/ or /d/. Ambient noise.  Sounds in the environment that are not part of the desired acoustic signal. Amplification.  To make sounds louder; may also refer to a piece of equipment used to make sounds louder, such as a hearing aid. Amplifier.  A device that increases the intensity of an electrical signal. Amplitude.  The intensity of a sound. Articulation index.  A measure of speech perception that quantifies the audibility of speech. Assistive listening device (ALD). Any of a number of pieces of equipment used to augment hearing or to assist the hearing aid in difficult listening situations; through the use of a remote microphone, assistive listening devices provide a superior signal-tonoise ratio that enhances the clarity (intelligibility) of the speech signal. Atresia.  Absence or complete closure of the ear canal, causing a conductivetype hearing loss. Attenuation.  A reduction or decrease in magnitude; a sound made softer. Audibility.  Being able to hear but not necessarily distinguish among speech sounds. Audiogram.  A graph of a person’s peripheral hearing sensitivity with

frequency (pitch) on one axis and intensity (loudness) on the other. Audiologist.  An individual who holds a doctoral degree, state license, and optional professional certification for the diagnosis and treatment of hearing and balance problems. See the American Academy of Audiology website for more information (http:// www.audiology.org). Audiometer.  An instrument that delivers calibrated pure-tone or speech stimuli for the measurement of sound detection, speech detection, speech recognition, and word identification. Auditory background.  A subconscious (primitive) function of hearing that allows identification of sounds that are consistent with specific locations (e.g., schools, grocery stores, hospitals) and biological functions. Auditory behaviors. Behaviors displayed by infants and children in response to acoustic stimulation; may be categorized as being attentive or reflexive in nature. Auditory brainstem implant (ABI).  A surgically implanted electronic device that provides acoustic stimulation to a person who is profoundly deaf due to illness or injury damaging the cochlea or auditory nerve; but instead of electrical stimulation being used to stimulate the cochlea, the ABI electrode “paddle” bypasses the cochlea and is placed directly on the cochlear nucleus of the auditory brainstem. Auditory brainstem response test (ABR). ​An objective test that measures the tiny electrical potentials produced in response to sound stimuli; the synchronous discharge of the first- through sixth-order neurons in the auditory nerve and brainstem.


Auditory feedback loop.  The process of self-monitoring (input) and correcting one’s own speech (output); it is crucial for the attainment of auditory goals and fluent speech and is a critical component motivating early vocalization frequency. Auditory neuropathy spectrum disorder (ANSD).  A complex collection of auditory disorders that interfere with hearing and especially with speech perception in quiet and in noise; possible sites of lesion include the inner hair cells, the auditory nerve, and the synapse between the inner hair cells and nerve fibers. Auditory system.  A term used to describe the entire structure and function of the ear. Autoimmune.  This term refers to a disordered immunologic response in which the body produces antibodies against its own tissues; may occur in the inner ear. Background noise.  Any unwanted sound that may or may not interfere with listening depending on the speech-to-noise ratio in the environment. Behavioral observation audiometry (BOA).  The “lowest” developmentallevel test procedure whereby a controlled, calibrated sound stimulus is presented, and an infant’s or child’s unconditioned response behaviors are observed. Behavioral tests.  Hearing tests that elicit a behavior, measure that behavior, and infer function of the auditory system. Bilabial.  A speech sound produced with involvement of the lips, such as /b/ or /m/.

Bilateral.  A disorder or hearing loss that involves both ears. Bimodal fitting.  Bimodal hearing means the child is wearing a cochlear implant in one ear and a hearing aid in the other. Binaural.  Hearing with both ears; wearing a separate hearing aid on each ear. Bone anchored hearing aid implant (BAI).  Also called a bone anchored hearing system (BAHS), a bone anchored hearing aid (BAHA or Baha), or an osseointegrated implant (“osseo implant” is often used) —  it is a surgically implantable device that is an alternative to a traditional bone conduction hearing aid and is suitable for children with conductive or mixed hearing losses or single-sided deafness (complete sensorineural hearing loss in one ear). Bone conduction.  A pathway by which sounds directly reach the inner ear primarily through skull vibration, thereby bypassing the outer and middle ears. Bone oscillator.  The piece of audiometric equipment that looks like a small black box attached to a headband and is used to obtain bone conduction thresholds. Brain plasticity.  The ability of the central nervous system, especially the brain, to change and reorganize with stimulation. Calibrate.  To check or adjust a piece of equipment until it accurately conforms to a predetermined, standard measure. Caregiver.  Any adult who takes care of a child.


Central auditory mechanism. The part of the ear structure that includes the brainstem and cortex. Central auditory processing disorder (CAPD).  Not really a hearing loss relative to a reduction of sound reception, CAPD causes difficulty with perception or deriving meaning from incoming sounds. The source of the problem is in the central auditory nervous system (brainstem or cortex), not in the peripheral hearing mechanism (outer, middle, or inner ear). Certified auditory-verbal therapist (Cert. AVT).  See Listening and Spoken Language Specialist (LSLS). Cerumen.  Also called earwax, cerumen is a normal, protective secretion of the ear canal. Approximately 30% of infants and young children in early intervention programs have so much earwax that it causes a hearing loss that interferes with the learning of language (spoken communication). Classroom acoustics.  The noise and reverberation (echo) characteristics of a classroom as determined by sound sources inside and outside the classroom, including classroom size, shape, surface, material, furniture, persons, and other physical characteristics. Classroom audio distribution systems (CADS) (also called sound-field systems).  Similar to a wireless public address system, but it is designed specifically to ensure that the entire speech signal, including the weak high-frequency consonants, reaches every child in the room using infrared or FM transmission. Clear speech.  Speech that is spoken at a moderately loud conversational level; characterized by precise but not exaggerated articulation, pausing at

appropriate places, and speaking at a somewhat slower rate; clear speech is easier for people with hearing loss to understand. Cochlea.  The inner ear where the reception of sound actually occurs. The organ of Corti (end organ for hearing) and hair cells (sensory receptors for sound) are located in the cochlea. Cochlear implant.  A biomedical device that delivers electrical stimulation to the eighth cranial nerve (auditory nerve) via an electrode array that is surgically implanted in the cochlea. Complementation.  A grammatical term for a particular way of inserting new verbal relationships into sentences. Complex sound.  A sound made up of two or more frequencies. Compression (in hearing aid circuitry).  Nonlinear amplifier gain used to determine the output of the signal from the hearing aid as a function of the input signal. Conductive hearing loss. Hearing loss caused by damage or disease (pathology) located in the outer or middle ear that interferes with the efficient transmission of sound into the inner ear where sound reception occurs. Conductive mechanism.  The part of the auditory system that includes the outer and middle ear. Congenital hearing loss.  A hearing loss that occurs prior to the development of speech and language, usually before or at birth. Conjoining.  A grammatical term for putting together or combining. Consonant.  A speech sound formed by restricting, channeling, or directing airflow with the tongue, teeth, and/or lips (e.g., th, s, p, f, m, and so on).


Continuates.  Caregiver talk that narrates the obvious. Coupled.  The connecting or attaching of one object to another — for example, a hearing aid to an assistive listening device. CV syllable.  A consonant-vowel syllable. CVC syllable.  A consonant-vowelconsonant syllable. Decibel (dB).  The standard unit for measuring and describing the intensity of a sound; the logarithmic unit of sound intensity or sound pressure; one-tenth of a Bel. Diagnostic audiologic assessment (hearing test).  The collection of tests given to an individual who wants his or her hearing evaluated, typically including case history, speech tests (threshold and word identification), pure-tone air and bone conduction testing, immittance, and counseling. Differential diagnosis. Separating peripheral hearing loss from other problems that present with symptoms similar to hearing loss (e.g., lack of appropriate spoken language development and/or lack of appropriate and observable auditory behaviors). Digital RF systems (also called DM [digital modulation] systems). Are able to provide a higher level of analysis and control over the signal received by the microphone, and then deliver the signal to the listener using a carrier frequency that “hops” — a feature that makes these systems less susceptible to interference from nearby RF devices. Direct audio input (DAI). Direct transmission of a sound signal into a hearing aid without being changed from one form of energy to another.
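A worked equation may help make the logarithmic decibel scale described in the Decibel entry above more concrete (this note is an illustrative addition, not part of the original glossary). Sound pressure level is conventionally expressed relative to a 20 micropascal reference pressure:

L_p = 20\log_{10}\left(\frac{p}{p_0}\right)\ \text{dB SPL}, \qquad p_0 = 20\ \mu\text{Pa}

so doubling the sound pressure adds about 6 dB, and a tenfold increase in pressure adds 20 dB.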

Disability.  Impairment or loss of function of whole or parts of body systems. Discrimination.  Ability to differentiate among sounds of different frequencies, durations, or intensities. Distance hearing.  The ability to monitor environmental events and to overhear conversations, an ability that is severely reduced by hearing loss of any type or degree. Distortion.  Adding to or taking away from the original form or composition of the acoustic signal. Dynamic range compression circuits. Circuits in hearing aids designed to provide more amplification for soft sounds than for loud sounds to meet the needs of many hearing aid wearers who primarily need more amplification of soft sounds. Dysplasia (of the inner ear). Incomplete development or malformations of the inner ear (cochlea). Ear canal.  The canal between the pinna and eardrum, also called the external auditory meatus. Earmold.  The part of a hearing aid or ALD that fits into the ear and functions to conduct sound from the hearing aid into the ear. Earshot.  The distance over which (speech) sounds are intelligible; this distance is reduced when someone has a hearing loss of any type or degree. Echo.  A reflected sound that is perceived after the direct sound that initiated it. Educational accessibility.  The child with hearing loss should have access to curriculum and instruction in the classroom at the same level and rate as typically hearing peers.


Educational streaming.  Refers to the mode of delivery of videos from YouTube, websites, and other online educational programming to the child’s and/or classroom’s computer system. EduLink.  This personal FM system was designed specifically for listeners with normal hearing sensitivity. It is a miniaturized FM receiver that can be used with a variety of FM transmitters. Its primary purpose is to improve listening and attention skills, and perhaps the academic abilities of children with attention and processing difficulties. Efficacy.  Efficacy can be defined as the extrinsic and intrinsic value of a treatment. Every technology or treatment that we institute for a baby or child should have some associated efficacy measure to determine if the treatment is effective. EHDI.  Early hearing detection and intervention: A term used to describe formal programs within the United States for newborn hearing screening, the diagnosis of infant hearing loss, and early intervention for infant hearing loss. Electroacoustic.  Pertains to the electronic processing of sound. Electrophysiologic tests.  Tests that measure the electrical activity of the brainstem and brain in response to acoustic stimulation. Embedded.  A grammatical term for inserting additional information into a sentence. Endogenous hearing loss.  A hearing loss caused by genetic factors that have various probabilities of being passed on to children or to grandchildren. Environmental sounds. Nonspeech sounds such as a fire engine, a siren,

a telephone ringing, the garage door opening, or a doorbell ringing. Etiology.  Cause of the hearing loss. Exogenous hearing loss.  A hearing loss caused by environmental factors such as a virus, medications, or lack of oxygen; it cannot be passed on to offspring. Feedback (acoustic).  A high-pitched squeal produced by a hearing aid and caused most often by a poorly fitting earmold. Feedback also can be caused by cracked earmold tubing, earwax in the earmold, or a crack in the ear hook or hearing-aid case. Frequency.  The number of times that any regularly repeated event, such as sound vibration, occurs in a specified period of time, usually measured in cycles per second. Also called pitch and measured in hertz (Hz) along the top of an audiogram from low- to high-frequency sounds; low-frequency sounds are most commonly associated with vowel sounds and high-frequency sounds with consonants. Frequency modulation (FM). The frequency of transmitted waves is alternated in accordance with the sounds being sent; radio waves. Frequency response.  The way that each hearing aid or ALD shapes the incoming (speech) sounds to best reach the person’s hearing loss; the amount of amplification provided by the hearing aid at any frequency. Fricatives.  Speech sounds such as /f/ and /th/ that have turbulent breath flow. Functional assessment. Functional measures of efficacy are typically accomplished by having the teacher, student, or parent complete a


questionnaire before and after use of a hearing aid, cochlear implant, personal FM, or sound-field system. Functional hearing loss.  A hearing problem with no physiological basis. Fundamental frequency.  The rate at which an individual speaker’s vocal cords vibrate in speaking. Gain.  The amount by which the output level of the hearing aid or ALD exceeds the input level; how much louder sounds are made by an amplification device. Gestation (gestational age).  The time in weeks after conception; a full-term birth is 40 weeks. HAT.  Hearing assistance technology; also called assistive listening devices (ALDs) — for example, personal remote microphone RM systems. Hearing age (also called listening age).  The relationship between the age of the child when he or she first receives amplification and that child’s chronological age; for example, a 3-year-old child is 1 day old relative to listening, language, and information learning when his or her hearing aids or cochlear implants are first fit. Hearing aid (also called hearing instrument).  FDA-cleared electronic device that amplifies and shapes incoming sounds to make them audible to an ear that would not otherwise detect them; the first step in an early intervention treatment paradigm to improve access of auditory information to the brain of persons with hearing loss. Hearing aid adjustment. Varying the program of a hearing aid or the acoustics of an earmold to improve its

capability to assist a specific person with a particular hearing loss. Hearing aid fitting.  Trying a hearing aid and earmold on a person using specific formula, programs, and procedures, until it is suitable for the person acoustically, physically, and cosmetically. Hearing aid prescriptive targets. These are formulas or fitting procedures for prescribing gain (amplification) for hearing aids; the two main prescriptive targets are DSL (desired sensation level) and NAL (National Acoustic Laboratories in Sydney, Australia). Hearing aid programmability. Refers to a hearing aid’s adjustment capability. If a hearing aid is programmable, its amplification characteristics can be fine-tuned via a special computer program to the hearing aid wearer’s needs. If the hearing aid user’s hearing changes, the hearing aid can be easily and quickly reprogrammed. Hearing aid stethoscope.  An instrument that allows one to listen to the output of a hearing aid or an ALD to detect a malfunction. Hearing loss.  Lessened or loss of hearing sensitivity caused by disease or damage to one or more parts of one of both ears. Hearing protection device (HPD). ​ This is a device that attenuates the loudness of sound that reaches the ear, such as foam earplugs and earmuffs. Hertz (Hz).  The standard unit for measuring and describing the frequency of a sound; a unit of frequency measurement equal to 1 cycle per second. Horn, acoustic (of an earmold). ​ Progressive increase of the internal
diameter of the earmold sound channel resulting in a “horn effect” that enhances certain sounds. A common use is to restore some of the high-frequency sound energy associated with ear canal resonance that may be lost when a conventional earmold is inserted into the ear. Idiopathic.  No known cause (of the hearing loss). Impedance (immittance) testing. An objective measure of middle ear function, not hearing sensitivity; measures how well the eardrum moves. Inner ear.  Composed of the cochlea, which contains hair cells (the thousands of tiny receptors of sound), the vestibular or balance system, and the acoustic nerve that transmits nerve impulses from the inner ear to the brain. Intelligibility.  The ability to detect differences among speech sounds (e.g., to hear words such as “vacation” and “invitation” as separate and distinct words). Intensity.  Intensity also is referred to as loudness and is measured in dB. Internal noise.  Noise generated within a sound system such as a hearing aid or assistive listening device. Intonation.  Variations in pitch patterns (melody) and stress in an utterance that add meaning to the message; the melody carried in the voice when speaking. Inverse square law.  The intensity or loudness of a sound decreases by 6 dB as the distance between the sound source and the listener is doubled. Jack.  An electrical device that can receive a plug to make a connection
in a circuit (e.g., an amplifier that has a specific place where earphones can be attached to the unit). Labiodental.  A speech sound produced with involvement of the lips and teeth, such as /f/ or /v/. Labiolingual.  A speech sound produced with involvement of the lips and tongue, such as children’s “raspberries.” Language.  A structured symbolic system used to communicate ideas in the form of words, with rules for combining and sequencing these words into thoughts, experiences, or feelings. An oral language system is composed of a sound system, a vocabulary or concept system, a word ordering and grammar system, and rules for effectively using this symbolic system. Learning disability.  The lack of a skill in one or more areas of learning that is inconsistent with the child’s overall intellectual capacity. Lexicon.  Vocabulary; understanding the meaning(s) of words. Listening.  Understanding speech and environmental sounds by attending to auditory clues; detection, discrimination, recognition, identification, and comprehension of speech and environmental sounds; being able to determine where a sound is coming from. Listening age.  See Hearing age. Listening and Spoken Language Specialists (LSLS Cert. AVT and LSLS Cert. AVEd). Audiologists, speech-language pathologists, or teachers of children with hearing loss who have obtained additional education and supervised training beyond their graduate degrees and who have
passed a special certification examination for auditory-verbal therapists (AVT) and auditory-verbal educators (AVEd); a registry of LSLS certified professionals may be obtained from: https://agbellacademy.org/ Live signal.  An unrecorded sound or voice. Localization.  The ability to determine where a sound is coming from; to associate a sound with its source by finding the source. Loudness.  The perception of sound intensity; marked by high volume. Loudspeaker.  An electroacoustic transducer that changes electrical energy into acoustical energy at the output stage of a sound system so that the acoustic event can be heard by many people at the same time. Mainstreaming.  The process that promotes integration into a general educational setting, using a typical curriculum for a child with a disability, to the maximum extent possible in accordance with federal laws. Integration with typical children ranges from full-time placement to integration in some classes such as music, art, or gym. Masking.  A procedure often used in hearing testing where a static-like noise is presented to the nontest ear through headphones to keep it from responding to the test stimuli. Microphone/transmitter.  An electroacoustic transducer that changes a sound stimulus to electrical energy. Middle ear.  The part of the auditory mechanism that functions to conduct sound efficiently into the inner ear; made up of the eardrum, a small air-filled space, and the three smallest bones in the body.

Minimum response level (MRL). The lowest or softest dB hearing level (dB HL) where a baby displays identifiable behavioral change to sound, usually considerably above or louder than his or her actual threshold — may also be called minimal response level (MRL). Mixed hearing loss.  A hearing loss caused by the presence of more than one pathology in different parts of the ear at the same time; it can be thought of as having two hearing losses in the same ear, usually conductive and sensorineural. Modality.  Sensory channel such as audition, vision, or touch. Motherese (also called child-directed speech).  The way many mothers (and other caregivers) talk with small children. Multifactorial inheritance.  This means that there are additive effects of several minor gene-pair abnormalities in association with nongenetic environment interactive factors or “triggers” that cause hearing loss in a child. Multiple-memory hearing aids. These hearing aids offer different settings or memories for various listening environments. One memory may be used for quiet listening and another for listening in a noisy restaurant; a remote control is usually used to access the various memories. Ideally, each memory is adjusted to provide the hearing aid wearer with optimum hearing in that particular environment. Note that children must also have a personal remote microphone system when in a classroom learning situation. Nasals.  Speech sounds produced with air emitted from the nose, such as /m/ or /n/.
Noise.  Any unwanted or unintended sound or electrical signal that interferes with communication or transmission of energy; it may be disturbing and/or hazardous to hearing health. Noise reduction.  A process of acoustical treatments and structural modifications whereby the intensity in dB of unwanted sound is reduced in a room. Objective tests.  Objective tests provide some direct measurement of the auditory system (e.g., eardrum and middle ear mobility, middle ear muscle reflex action, outer hair cell function, or transmission of sound up the auditory pathways of the lower brainstem). Occupied classroom.  A classroom when the teacher and students are present. Orosensory.  Tactile sensation in the mouth. Otitis media.  Also known as middle ear infection, otitis media is the most common cause of conductive hearing loss in children. It is an inflammation of the middle ear, typically with fluid present in the normally air-filled middle ear space. Otoacoustic emissions (OAEs). Soft, inaudible sounds produced by vibratory motion of the (outer) hair cells in the cochlea. To measure evoked otoacoustic emissions, a small probe is inserted in the ear canal, sounds are presented, and response tracings are recorded. Otologist.  A physician who specializes in the diagnosis and treatment of diseases of the ear; also known as an otolaryngologist or an ear-nose-throat (ENT) specialist.

Ototoxicity.  A “poisoning” of the delicate inner ear by high doses of certain drugs and medications. Outer ear.  The part of the hearing mechanism made up of the pinna and external auditory meatus (ear canal) that functions to protect and channel sounds into the middle ear. Over-the-counter (OTC) hearing aid. ​ A hearing aid intended for adults with mild to moderate hearing loss that can be purchased directly by consumers from a retailer. OTC hearing aids are NOT for children; children must have their hearing aids fit by a pediatric audiologist! Pediatric audiologic test battery. The use, with infants and children, of several, developmentally appropriate behavioral and objective hearing tests to avoid drawing conclusions from a single test, allow detection of multiple pathologies, and provide a framework for observation of auditory behaviors. Perception.  Understanding the meaning of incoming sounds; occurs in the central auditory system (brainstem and cortex). Perilymphatic fistula.  A leak in the oval or round windows, both being points of communication between the middle and inner ear; causes a sudden or degenerative hearing loss. Peripheral auditory mechanism. The part of the ear that includes the outer, middle, and inner ear; it functions to receive incoming sounds. Peripheral ear.  External, middle, and inner ear (not the auditory cortex). Personal sound amplification product (PSAP).  A device that amplifies sound for a person with normal hearing and can be purchased
directly from a retailer. PSAPs can look and function like a hearing aid, but they cannot be marketed for people with hearing loss — and they must NOT be used with children! Personal-worn FM systems. They are like individual, private radio stations where the talker (typically the teacher or parent) wears the wireless microphone transmitter, and the child wears the radio-like receiver coupled to the ears via headphones, earbuds, or personal hearing aids. Phonation. Voicing, vocalizing. Phoneme. The smallest meaningful unit of sound in a language. Phonemic. Pertaining to speech sounds, usually in isolation or in nonsense syllables. Phonetic. Having to do with phonemes; the distinctive sounds that make up the words of a language. Phonologic. Relating to the production and use of speech sounds in meaningful language. Phonological awareness (phonemic awareness). The explicit awareness of the speech sound structure of language units that forms the basis for reading. It is demonstrated by a variety of skills that include generating rhyming words, segmenting words into syllables and discrete sounds, and categorizing groups of words based on a similarity of sound segments. Pinna. The auricle or outer flap of the ear; one located on each side of the head. Pitch. The overall highness or lowness of a person's voice or of a speech sound; it can be measured, but in this context, it is usually a perceptual estimation.

Play audiometry. The highest level of pediatric task in hearing testing, play audiometry involves the active cooperation of the child while an auditory stimulus is paired with an operant task, such as dropping a block in a bucket. It is also called the "listen and drop" test. Plosives. Speech sounds such as /p/, /t/, or /k/, which are produced with a sudden burst of air. Pragmatics. The study of how language is used. Prognosis. A prediction of the course or outcome of a disease or treatment. Programmable hearing aid. See Hearing aid programmability. Pronominal. Pertaining to pronouns. Propagation. Act of extending, projecting, or traveling through space. Prosodic. Stress, intonation, or melodic features of spoken language. Prosody. The melody and rhythm of speaking; features of prosody include pitch, intonation, intensity, and duration. Pure tone. A tone that has energy at only one frequency. Pure tones are useful for evaluating hearing sensitivity because they permit measurement of the contour or configuration of the hearing loss. Rapid speech transmission index (RASTI). A number between zero and one that quantifies speech intelligibility. RASTI is derived from measurement of the modulation transfer function of the two octave bands of pink noise modulations that mimic the long-term speech spectrum. Real-ear measurements. Measurement of amplified sound in the ear
canal through the use of a probe microphone. Reception. The detection of sound that occurs in the inner ear. Receptive language. Skills involving the ability to receive and comprehend the language one hears in the environment. Redundancy. The part of a message that can be eliminated without loss of information. Reflected sound. Propagated sound after it has struck one or more objects or surfaces in a room. Remote microphone. A microphone that can be placed close to the desired sound source, thereby improving the speech-to-noise ratio (S/N ratio); a key feature of personal-worn FM systems, sound-field FM or IR systems, and some hardwired assistive listening devices. Residual hearing. The hearing that remains after damage or disease in the auditory mechanism; there is almost always some residual hearing, even with the most profound hearing losses. Residual hearing can be accessed through the use of amplification technology. Resonant voice. A voice that is strong and rich in overtones. Reverberant sound. Sound that reaches a given location in a room only after being reflected from one or more barriers or partitions within a room. Reverberation. The amount of echo in a room; the more reverberant the room, the poorer the speech-to-noise ratio and the less intelligible the speech. Reverberation time (RT). Refers to the amount of time it takes for a steady-state sound to decrease 60 dB from its peak amplitude.

Secondary language.  Skills that involve higher-level language functions such as reading and writing versus listening and speaking. Segmental(s).  Pertaining to the sounds of speech; consonants and vowels. Semantic.  Related to the meaning of words or groups of words in a language system. Sensation.  The conscious awareness of the arrival of the auditory signal. Sensorineural hearing loss. Often called a nerve loss, a sensorineural hearing loss results from disease or damage located in the inner ear; usually a permanent hearing loss. Sensorineural mechanism.  The part of the ear structure that includes the inner ear and acoustic nerve. Sensory deprivation.  Lack of auditory sensory input to the brain caused by a hearing loss, which can cause delayed and/or deviant behaviors. Signal.  Sounds that carry information to listeners. Signal-to-noise ratio (S/N ratio; also called speech-to-noise ratio). S/N ratio is the relationship between a primary signal such as the teacher’s or parent’s speech and background noise. The quieter the room and the more favorable the S/N ratio, the clearer the auditory signal will be for the brain. Signal warning.  The function of hearing that enables one to monitor the environment, including other people’s interactions. Sine wave.  A periodic wave related to simple harmonic motion. Single-sided deafness (SSD).  SSD is when one ear has normal hearing and the other ear does not have usable hearing; usually a profound hearing loss.
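The two glossary entries that involve arithmetic, the inverse square law (about a 6 dB drop in level each time the distance from the talker doubles) and the signal-to-noise (S/N) ratio defined just above, can be combined in a short worked sketch. The sketch below is illustrative only: the 65 dB speech level at 1 meter, the 50 dB noise level, and the distances are assumed example values, not figures taken from this glossary.

import math

# Illustrative sketch only: the speech and noise levels below are assumed example values.
def level_at_distance(level_at_1m_db: float, distance_m: float) -> float:
    """Inverse square law: the level drops about 6 dB each time the distance doubles."""
    doublings = math.log2(distance_m)   # how many times 1 meter has been doubled
    return level_at_1m_db - 6.0 * doublings

speech_at_1m_db = 65.0   # assumed level of the talker's speech at 1 meter
noise_db = 50.0          # assumed steady background noise in the room

for distance_m in (1, 2, 4, 8):
    speech_db = level_at_distance(speech_at_1m_db, distance_m)
    s_n_db = speech_db - noise_db        # S/N ratio: speech level minus noise level
    print(f"{distance_m} m: speech {speech_db:.0f} dB, S/N {s_n_db:+.0f} dB")

With these assumed values, the S/N ratio falls from +15 dB at 1 meter to +3 dB at 4 meters and to -3 dB at 8 meters, which is why placing a remote microphone close to the talker improves the S/N ratio far more than any adjustment made at the listener's end.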
Site of lesion. The location in the auditory system where disease or damage occurs, causing hearing loss. Sound-field. A space where sound is propagated. Sound-field systems (also called classroom amplification systems or classroom audio distribution systems [CADS]). The teacher wears a wireless microphone transmitter, just like the one worn for a personal FM unit, and his or her voice is sent via radio waves (FM) or light waves (infrared — IR) to an amplifier that is connected to loudspeakers. It looks like a wireless public address system, but it is designed specifically to ensure that the entire speech signal, including the weak high-frequency consonants, reaches every child in the room no matter where the children and the teacher are located. Sound-field testing. Calibrated auditory signals are presented through loudspeakers into a sound-isolated room rather than through headphones to test hearing; represented by the symbol S on an audiogram. Sound-isolated room. An acoustically treated room where hearing tests should be performed to obtain accurate results. Sound level. The intensity of sound in decibels. Spectrogram. A graphic illustration of speech sounds displaying intensity, frequency, and duration. Speech intelligibility. The percentage of a talker's speech that is understood by listeners. Speech-language pathologist. A specialist who holds a graduate degree in the nature and treatment of speech and language disorders, as well as a state license and certification
from the American Speech-Language-Hearing Association (ASHA). Speech recognition or reception threshold (SRT). An audiologic test that determines the softest level at which a person can just barely understand speech 50% of the time. Speech sound improvement. Therapy directed at improving the production of specific sounds such as r, s, l, sh, and ch. Static. Electrical discharges in a hearing aid, cochlear implant, radio, and so forth that cause crackling sounds, interfere with signal reception, and can damage the electronic device. Stenosis. An abnormally small ear canal. Stimulus. Something, such as a sound, that can evoke or elicit a response. Support services. Ancillary services that are provided to assist a child in achieving academic success; services can include speech-language therapy, occupational therapy, aural habilitation, and tutoring. Suprasegmentals. Prosodic aspects of speech; variations in the pitch, loudness, rate, and duration of speech patterns. Syndrome. A collection of anomalies that co-occur; hearing loss often accompanies other disabilities or abnormalities, such as skeletal malformations or endocrine disorders. Telecoil. A series of interconnected wire loops in a hearing aid that respond electrically to a magnetic signal. Telecoil switch. An external control (switch or program) on a hearing aid that turns off the microphone and activates a telecoil in the hearing aid, which picks up magnetic leakage from a telephone or the loop of an
ALD; hearing aids for children need to come equipped with a telecoil. Threshold. The intensity at which an individual can just barely hear a sound 50% of the time; all sounds louder than threshold can be heard, but softer sounds cannot be detected. Tinnitus. Any of a number of internal head noises that can accompany hearing loss; also called ringing in the ears. Topicalize. Establish topics in conversations. Transducer. A device such as a microphone or loudspeaker that changes sound into electricity (microphone) or changes electricity into sound (loudspeaker). Transformer. A two-coil induction device for increasing or decreasing the voltage of an alternating current. Transition time. The time required to move from a consonant to a vowel or from a vowel to a consonant in speaking, usually measured in milliseconds. Transmitter. An apparatus that sends out electromagnetic waves; the FM system component that modulates the frequency of the radio signal in accordance with an audio frequency signal and sends the radio waves through the air to the antenna of the amplifier/receiver. Treatment paradigm for recommending (sound-field) technology. If viewed as a treatment, technology is recommended for a particular child and managed through the special education system. Troubleshooting (hearing aids or ALDs). Performing various visual and listening inspections to determine whether an amplification unit is malfunctioning and, if so, evaluating the nature and severity of the malfunction.

Tympanostomy tubes.  Tiny ventilating tubes that are surgically inserted through the eardrum to replace a malfunctioning eustachian tube in allowing ventilation of the middle ear space. Unilateral hearing loss.  Hearing loss in one ear; the other ear has normal hearing sensitivity. A unilateral hearing loss can have a negative impact on a child’s behavior, social skills, and development of spoken communication (language), due in large part to the reduction of incidental learning caused by the hearing loss. Universal design paradigm for recommending classroom audio distribution systems (CADS). This concept means that the assistive technology is not specially designed for an individual student but rather for a wide range of students. Universally designed approaches are implemented by general education teachers rather than by special education teachers. Validation.  In hearing aid fitting, the process of determining that a person is getting as much benefit as possible from amplification in daily communication; often determined by audiologic speech perception testing. Velar.  Speech sounds made with the tongue elevated near or touching the velum. Vent.  A small hole or opening that serves as an outlet for sound in an earmold. Verification.  In hearing aid fitting, the process for making sure that a hearing aid is properly amplifying
sound for a specific person; usually determined by real-ear measurements. Vertigo.  Dizziness that includes a sensation of spinning or whirling. Vestibular system.  Composed of the saccule, utricle, and semicircular canals, the vestibular system is located in the inner ear and functions to regulate balance. Specifically, it coordinates changes in head position, acceleration and deceleration, and gravitational effects. Vibrotactile.  Pertains to the detection of vibrations through the sense of touch; sounds that are felt rather than heard. Vibrotactile hearing aid.  An assistive listening device that converts acoustic energy into vibratory patterns that are delivered to the skin. Visual reinforcement audiometry (VRA).  A pediatric behavioral hearing test in which a child’s auditory behaviors (usually localization) are conditioned to and rewarded by a visual display. Vocalizations.  Babbling sounds produced by infants.

Volume control.  A wheel or knob for adjusting the intensity of sound produced by a sound system. Vowel.  A speech sound identified by its unrestricted voice flow. Wavelength.  The distance between one peak of a sine wave and the next peak of the same wave. Wireless connectivity. Wireless connection of hearing aids to other audio devices such as phones, MP3 players, personal computers and tablets, televisions, and so on. Wireless system.  A remote microphone transmitter and receiver; radio waves, not wires, connect the talker and listener. Word discrimination testing (also called word identification testing). ​ Part of a diagnostic audiologic assessment that determines how well words can be distinguished when they are presented at a typical conversational level of loudness and at an additional level loud enough to overcome the individual’s hearing loss.

References

Abdala, C. (2018). Otoacoustic emissions: Toward an updated approach. Audiology Today, 30(1), 43–49. Abramson, M., Moncrieff, D., Chermak, G. D., Musiek, F. E., Geffner, D., & Guillory, L. (2018). Six points of audiological consensus on central auditory processing disorders (CAPD). Hearing Review, 71(4), 38–40. Ackerhalt, A. H., & Wright, E. R. (2003). Do you know your child’s special education rights? Volta Voices, 10(3), 4–6. Adrian, M. (2009). Acoustical retrofitting for learning spaces. Volta Voices, 16(5), 16–21. Ahmmed, A. U., Ahmmed, A. A., Bath, J. R., Ferguson, M. A., Plack, C. J., & Moore, D. R. (2014). Assessment of children with suspected auditory processing disorder: A factor analysis study. Ear and Hearing, 35(3), 295–305. Akhtar, N., Jipson, J., & Callanan, M. A. (2001). Learning words through overhearing. Child Development, 72(2), 416–430. Alexander Graham Bell Association for the Deaf and Hard of Hearing (AG Bell). (2014). Recommended protocol for audiological assessment, hearing aid and cochlear implant evaluation, and follow-up. Retrieved from https://www.agbell.org/Advocacy/Alex​ ander-Graham-Bell-Associations-Recom​ mended-Protocol-for-Audiological-Assess​ ment-Hearing-Aid-and-Cochlear-ImplantEvaluation-and-Follow-up

Alexander, J. M., Kopun, J. G., & Stelmachowicz, P. G. (2014). Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear and Hearing, 35(5), 519–532. Alexander, L. (1992). Notice of policy guidance, deaf students education services. Federal Register, 57, 49274–01. Alexiades, G., & Hoffman, R. A. (2014). Medical evaluation and medical management of hearing loss in children. In J. R. Madell & C. Flexer (Eds.), Pediatric audiology: Diagnosis, technology, and management (2nd ed., pp. 36–43). New York, NY: Thieme Medical. Allegretti, C. M. (2002). The effects of a cochlear implant on the family of a hearing-impaired child. Pediatric Nursing, 28, 614–620. Ambrose, S. E., VanDam, M., & Moeller, M. P. (2014). Linguistic input, electronic media, and communication outcomes of toddlers with hearing loss. Ear and Hearing, 35(2), 139–147. American Academy of Audiology. (2013). Clinical practice guidelines: Pediatric amplification. Retrieved from http://audiology-web​ .s3.amazonaws.com/migrated/PediatricAm​ plificationGuidelines.pdf_539975b3e7e9f1​ .74471798.pdf American Academy of Audiology. (2012). Audiologic guidelines for the assessment

359

360

Children With Hearing Loss:  Developing Listening and Talking

of hearing in infants and young children. Retrieved from https://www.audiology.org/ publications-resources/document-library/ pediatric-diagnostics American Academy of Audiology (HAT). (2011). Clinical practice guidelines for remote microphone hearing assistance technologies for children and youth birth to 21 years (Suppl. A). Retrieved from https://www​ .audiology.org/publications-resources/​ document-library/hearing-assistance-tech​ nologies American Academy of Audiology. (2011). Position statement: Classroom acoustics. Retrieved from https://audiology-web.s3.ama​ zonaws.com/migrated/ClassroomAcoustics​ PosStatem.pdf_539953c8796547.05480496​ .pdf American Academy of Pediatrics. (2013). Clinical practice guideline: The diagnosis and management of acute otitis media. Retrieved from http://pediatrics.aappublications.org/ content/early/2013/02/20/peds.2012-3488 .full.pdf+html ANSI/ASA. (2002). American National Standard acoustical performance criteria, design requirements, and guidelines for schools (S12.60). New York, NY: American National Standards Institute. ANSI/ASA. (2010). American National Standard acoustical performance criteria, design requirements, and guidelines for schools, part 1: Permanent schools, and part 2: Relocatable classrooms factors (S12.60). New York, NY: American National Standards Institute. Retrieved from https://global.ihs​ .com/search_res.cfm?&rid=ASA&input_doc _number=ANSI%2FASA%20S12%2E60&​ input_doc_title=%20 American Speech-Language-Hearing Association. (2005). Guidelines for addressing acoustics in educational settings. Retrieved from http://www.asha.org/members/desk​ ref-journals/deskref/default American Speech-Language-Hearing Association. (2006). Roles, knowledge, and skills: Audiologists providing clinical services to infants and young children birth to 5 years

of age. Retrieved from http://www.asha​ .org/members/deskref-journals/deskref/ default Anderson, K. (1989). Screening Instrument for Targeting Educational Risk (SIFTER). Tampa, FL: Educational Audiology Association. Anderson, K. (2000). Early Listening Function (ELF). Retrieved from http://www.hear2​ learn.com Anderson, K. (2004). The problem of classroom acoustics: The typical classroom soundscape is a barrier to learning. Seminars in Hearing, 25(2), 117–129. Anderson, K. (2011). Predicting speech audibility from the audiogram to advocate for listening and learning needs. Hearing Review, 18(10), 20–23. Anderson, K., & Arnoldi, K. A. (2011). Building skills for success in the fast-paced classroom. Hillsboro, OR: Butte Publications. Anderson, K., & Matkin, N. (1996). Screening Instrument for Targeting Educational Risk in preschool children (Age 3–kindergarten) (Preschool SIFTER). Retrieved from http:// www.hear2learn.com Anderson, K., & Smaldino, J. (1998). The Listening Inventory forEducation: An efficacy tool. (LIFE). Retrieved from http://www​ .hear2learn.com Anderson, K., & Smaldino, J. (2000). Children’s Home Inventory for Listening difficulties. (CHILD). Retrieved from http://www.hear​ 2learn.com Anderson, K. L. (2001). Voicing concern about noisy classrooms. Educational Leadership, 77–79. Anderson, K. L., Goldstein, H., Colodzin, L., & Inglehart, F. (2005). Benefit of S/N enhancing devices to speech perception of children listening in a typical classroom with hearing aids or a cochlear implant. Journal of Educational Audiology, 12, 14–28. Anne, S., Lieu, J. E. C., & Kenna, M. A. (2018). Pediatric sensorineural hearing loss: Clinical diagnosis and management. San Diego, CA: Plural Publishing. Arco, C. M. B., & McCluskey, K. A. (1981). “A change of pace:” An investigation of the

References 361

salience of maternal temporal style in mother-infant play. Child Development, 52, 941–949. Arjmand, E. M., & Webber, A. (2004). Audiometric findings in children with a large vestibular aqueduct. Archives of Otolaryngology– Head and Neck Surgery, 130, 1169–1174. Arndt, S., Prosse, S., Laszig, R., Wesarg, T., Aschendorff, A., & Hassepass, F. (2015). Cochlear implantation in children with single-sided deafness: Does aetiology and duration of deafness matter? Audiology and Neurotology, 20(Suppl. 1), 21–30. Atcherson, S. R., Franklin, C. A., & Smith-Olinde, L. (2015). Hearing assistive and access technology. San Diego, CA: Plural Publishing. Baart, J. L. G. (2010). A field manual of acoustic phonetics. Dallas, TX: SIL International. Baguley, D. M., & McFerran, D. J. (2002). Current perspectives on tinnitus. Archives of Disease in Childhood, 86, 141–143. Baldwin, S. M., Gajewski, B. J., & Widen, J. E. (2010). An evaluation of the cross-check principle using visual reinforcement audiometry, otoacoustic emissions, and tympanometry. Journal of the American Academy of Audiology, 21(3), 187–196. Barnes, S., Gutfreund, M., Satterly, D., & Wells, C. (1983). Characteristics of adult speech which predict children’s language development. Journal of Child Language, 10, 65–84. Barnett, D. W., VanDerHeyden, A. M., & Witt, J. C. (2007). Achieving science-based practice through response to intervention: What it might look like in preschools. Journal of Educational and Psychological Consultation, 17(1), 31–54. Barreira-Nielsen, C., Fitzpatrick, E., Hashem, S., Whittingham, J., Barrowman, N., & Agli­pay, N. (2016). Progressive hearing loss in early childhood. Ear and Hearing, 37(5), 311–321. Bates, E., Benigni, L., Bretherton, I., Camaioni, L., & Volterra, V. (1979). The emergence of symbols: Cognition and communication in infancy. New York, NY: Academic Press. Bates, E., & Goodman, J. (1999). On the emergence of grammar from the lexicon. In B. MacWhinney (Ed.), The emergence of lan-

guage (pp. 29–80). Mahwah, NJ: Lawrence Erlbaum. Bates, E., O’Connell, B., & Shore, C. (1987). Language and communication in infancy. In J. D. Osofsky (Ed.), Handbook of infant development (pp. 149–203). New York, NY: John Wiley & Sons. Bateson, M. C. (1979). The epigenesist of conversational interaction: A personal account of research development. In M. Bullowa (Ed.), Before speech: The beginning of interpersonal communication (pp. 63–77). Cambridge, UK: Cambridge University Press. Beck, D. L. (2010). Shifting paradigms: Boneanchored hearing systems. Hearing Review, 17(1), 22–26. Beebe, B., Jaffe, J., Feldstein, S., Mays, K., & Alson, D. (1985). Matching of timing: The application of an adult dialogue model to mother-infant vocal and kinesthetic interactions. In T. Field (Ed.), Infant social perceptions (pp. 217–241). Norwood, NJ: Ablex. Beebe, B., & Stern, D. (1977). Engagementdisengagement and early object experience. In N. Freedman & S. Grant (Eds.), Communicative structures and psychic structures (pp. 35–55). New York, NY: Plenum Press. Bellinger, D. (1980). Changes in the explicitness of mother’s directives as children age. Journal of Child Language, 6, 443–455. Bellis, T. J. (2003). Assessment and management of central auditory processing disorders in the educational setting: From science to practice (2nd ed.). Clifton Park, NY: Thomson Delmar Learning. Bench, J., Hoffman, E., & Wilson, I. (1974). A comparison of live and videorecord viewing of infant behavior under sound stimulation: I. Neonates. Developmental Psychology, 7, 455–464. Benitez-Barrera, C. R., Angley, G. P., & Tharpe, A. M. (2018). Remote microphone system use at home: Impact on caregiver talk. Journal of Speech, Language and Hearing Research, 61, 399–409. Bentler, R. (2010). Frequency-lowering hearing aids: Verification tools and research needs. ASHA Leader, 15(4), 14–16.

362

Children With Hearing Loss:  Developing Listening and Talking

Berlin, C. I., Hood, L. J., Morlet, T., Wilensky, D., Li, L., Mattingly, K. R., . . . Frisch, S. A. (2010). Multi-site diagnosis and management of 260 patients with auditory neuropathy/dys-synchrony (auditory neuropathy spectrum disorder). International Journal of Audiology, 49(1), 30–43. Berlin, C. I., & Weyand, T. G. (2003). The brain and sensory plasticity: Language acquisition and hearing. Clifton Park, NY: Thomson Delmar Learning. Bertolini, P., Lassalle, M., Mercier, G., Raquin, M. A., Izzi, G., Corradini, N., & Hartmann, O. (2004). Platinum compound-related ototoxicity in children: Longterm follow-up reveals continuous worsening of hearing loss. Journal of Pediatric Hematology and Oncology, 26(10), 649–655. Bess, F. H., Dodd-Murphy, J., & Parker, R. A. (1998). Children with minimal sensorineural hearing loss: Prevalence, educational performance, and functional status. Ear and Hearing, 19, 339–354. Bess, F. H., & Hornsby, B. W. Y. (2014). Commentary: Listening can be exhausting — Fatigue in children and adults with hearing loss. Ear and Hearing, 35(6), 592–598. Blair, J. C. (2006). Teachers’ impressions of classroom amplification. Educational Audiology Review, 23(1), 12–13. Blaiser, K. M., Behl, D., Callow-Heusser, C., & White, K. R. (2013). Measuring costs and outcomes of teleintervention when serving families of children who are deaf and hard of hearing. International Journal of Telerehabilitation, Fall, 5(2), 3–10. Bloom, L. (1973). One word at a time: The use of single word utterances before syntax. The Hague, The Netherlands: Mouton. Bloom, L. (1983). Of continuity, discontinuity, and the magic of language development. In R. M. Golinkoff (Ed.), The transition from prelinguistic to linguistic communication (pp. 79–92). Hillsdale, NJ: Lawrence Erlbaum. Bloom, L., & Lahey, M. (1978). Language development and language disorders. New York, NY: John Wiley & Sons.

Bloom, L., & Lo, E., (1990). Adult perception of vocalizing infants. Infant Behavior and Development, 13, 209–213. Boothroyd, A. (2004). Room acoustics and speech perception. Seminars in Hearing, 25(2), 155–166. Boothroyd, A. (2019). The acoustic speech signal. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 207–214). New York, NY: Thieme Medical. Borders, C. M. (2014). Deaf and hard of hearing students with disabilities. Retrieved from https://www.infanthearing.org/ebook-edu​ cating-childrendhh/chapters/14%20Chapter​ %2014%202017.pdf Bortfeld, H., Morgan, J. L., Golinkoff, R. M., & Rathbun, K. (2005). Mommy and me: Familiar names help launch babies into speech steam segmentation. Psychological Science, 4, 298–304. Bradham, T. S., & Houston, K. T. (2015). Assessing listening and spoken language in children with hearing loss. San Diego, CA: Plural Publishing. Brady, N., Marquis, J., Fleming, K., & McLean, L. (2004). Prelinguistic predictor of language growth in children with developmental disabilities. Journal of Speech, Language, and Hearing Research, 47, 663–677. Braine, M. D. S. (1988). Modeling the acquisition of linguistic structure. In Y. Levy, I. M. Schlesinger, & M. D. S. Braine (Eds.), Categories and processes in language acquisition (pp. 217–259). Hillsdale, NJ: Erlbaum. Braine, M. D. S. (1994). Is nativism sufficient? Journal of Child Language, 21, 9–32. Brinton, B., & Fujiki, M. (1982). A comparison of request-response sequences in the discourse of normal and language disordered children. Journal of Speech and Hearing Disorders, 47, 57–63. Bromwich, R. (1981). Working with parents and infants: An interactional approach. Baltimore, MD: University Park Press. Bromwich, R. (1997). Working with families and their infants at risk. Austin, TX: Pro-Ed.

References 363

Bruner, J. (1975). The ontogenesis of speech acts. Journal of Child Language, 2, 1–19. Bruner, J. (1977). Early social interaction and language acquisition. In H. R. Schaffer (Ed.), Studies in mother-infant interaction (pp. 271–289). London, UK: Academic Press. Bruner, J. (1983). The acquisition of pragmatic commitments. In R. M. Golinkoff (Ed.), The transition from prelinguistic to linguistic communication (pp. 27–42). Hillsdale, NJ: Lawrence Erlbaum. Caleffe-Schenck, N. S., & Stredler-Brown, A. (1992). Auditory skills checklist. Available from the Colorado School for the Deaf and Blind, 33 N. Institute St., Colorado Springs, CO 80903. Camarata, S., & Nelson, K. (2006). Conversational recast intervention with preschool and older children. In R. McCauley & M. Fey (Eds.), Treatment of language disorders in children (237–264). Baltimore, MD: Paul H. Brookes. Campbell, K. C. M. (2009). Emerging pharmacologic treatments for hearing loss and tinnitus. ASHA Leader, 14(6), 14–17. Capone, N., & McGregor, K. (2004). Gesture development: A review for clinical and research practices. Journal of Speech, Language, and Hearing Research, 47, 173–187. Cardon, G., & Sharma, A. (2013). Central auditory maturation and behavioral outcome in children with auditory neuropathy spectrum disorder who use cochlear implants. International Journal of Audiology, 52(9), 577–586. Carpenter, M., Tomasello, M., & Striano, T. (2005). Role reversal, imitation and language in typically developing infants and children with autism. Infancy, 8, 253–278. Carpenter, R. L., Mastergeorge, A. M., & Coggins, T. E. (1983). The acquisition of communicative intentions in infants eight to fifteen months of age. Language and Speech, 26, 101–116. Caskey, M., Stephens, B., Tucker, R., & Vohr, B. (2011). Importance of parent talk on the development of preterm infant vocalizations. Pediatrics, 128, 910–916.

Chasin, M. (2013). Did we throw out the baby with the bathwater? The limitations and benefits of flared tubing. Hearing Review, 20(10), 12. Chasin, M. (2018). Frequency and intensity: A historical perspective. Hearing Review, 25(5), 24–25. Chen, C. K., Chen, K. T. P., Chiu, W. T., Cheng, W. D., & Tsui, P. H. (2012). Comparison of high-resolution computed tomography with conventional injection fitting method for fabricating hearing aid shells. Otolaryngology– Head and Neck Surgery, 147(1), 170–172. Chen, M., Wang, Z., Zhang, Z., Li, X., Wu, W., Xie, D., & Xiao, Z. A. (2016). Intelligence development of prelingual deaf children with unilateral cochlear implantation. International Journal of Pediatric Otorhinolaryngology, 90, 264–269. Chen, S. H., Kennedy, M., & Zhou, Q. (2012). Parents’ expression and discussion of emotion in the multilingual family: Does language matter? Perspectives on Psychological Science, 7(4), 365–383. Chermak, G. D., Bellis, J. B., & Musiek, F. E. (2014). Neurobiology, cognitive science, and intervention. In G. D. Chermak & F. E. Musiek (Eds.), Handbook of central auditory processing disorder: Comprehensive intervention (Vol. II, pp. 3–38). San Diego, CA: Plural Publishing. Chermak, G. D., & Musiek, F. E. (Eds.). (2014a). Handbook of central auditory processing disorder: Auditory neuroscience and diagnosis (Vol. I). San Diego, CA: Plural Publishing. Chermak, G. D., & Musiek, F. E. (Eds.). (2014b). Handbook of central auditory processing disorder: Comprehensive intervention (Vol. II). San Diego, CA: Plural Publishing. Ching, T. Y. C., Dillon, H., Leigh, G., & Cupples. L. (2018). Learning from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study: Summary of 5-year findings and implications. International Journal of Audiology, 57(S-2), S-105–S-111. Ching, T. Y., & Hill, M. (2007). The parents’ evaluation of aural/oral performance of

364

Children With Hearing Loss:  Developing Listening and Talking

children (PEACH) scale: Normative data. Journal of the American Academy of Audiology,18(3), 220–235. Ching, T. Y. C., Zhang, V. W., & Hou, S. Y. L. (2019). The importance of early intervention for infants and children with hearing loss. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 293–304). New York, NY: Thieme Medical. Chiossi, J. S. C., & Hyppolito, M. A. (2017). Effects of residual hearing on cochlear implant outcomes in children: A systematicreview. International Journal of Pediatric Otorhinolaryngology, 100, 119–127. Choi, D., Black, A., & Werker, J. F. (2018). Cascading and multisensory influences on speech perception development. Mind, Brain, & Education. Advance online publication. Chomsky, N. (1993). On the nature, use and acquisition of language. In A. J. Goldman (Ed.), Reading in philosophy and cognitive science (Vol. C, pp. 511–534). Cambridge, MA: MIT Press. Christensen, L.V. (2019). Osseointegrated implants for children. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 225–234). New York, NY: Thieme Medical. Christensen, L., Smith-Olinde, L., Kimberlain, J., Richter, G. T., & Dornhoffer, J. L. (2010). Comparison of traditional bone-conduction hearing aids with the Baha system. Journal of the American Academy of Audiology, 21(4), 267–273. Clark, J. L., & Roeser, R. J. (2005). Large vestibular aqueduct syndrome: A case study. Journal of the American Academy of Audiology, 16, 822–828. Clark, M. (2006). A practical guide to quality interaction with children who have a hearing loss. San Diego, CA: Plural Publishing. Clarke-Stewart, K. A. (1973). Interactions between mothers and their young children: Characteristics and consequences. Monographs of the Society for Research in Child Development, 38(6, 7, Serial No. 149).

Clinard, C., & Tremblay, K. (2008). Auditory training: What improves perception and how? Audiology Today, 20, 68–69. Cohen, M., & Brill, G. (2018). Can auditoryverbal intervention be effective for a child with an auditory brainstem implant (ABI)? Volta Voices, 25(1), 38–42. Cohen, N., Roland, J. T., & Marrinan, M. (2004). Meningitis in cochlear implant recipients: The North American experience. Otology and Neurotology, 25, 275–281. Cole, E. B. (1994). Encouraging intelligible spoken language development in infants and toddlers with hearing loss. Infant-Toddler Intervention, 4(4), 263–284. Cole, E. B., Carroll, E., Coyne, J., Gill, E., & Paterson, M. (2005). Early spoken language through audition. In The SKI-HI Curriculum (Vol. I, pp. 137–141; Vol. II, pp. 1279–1394). Logan, UT: Hope. Collis, G. (1977). Visual co-orientation and maternal speech. In H. R. Schaffer (Ed.), Studies in mother-infant interaction (pp. 355– 375). London, UK: Academic Press. Collis, G., & Schaffer, H. (1975). Synchronization of visual attention in mother-infant pairs. Journal of Child Psychological Psychiatry, 16, 315–320. Cone, B., & Garinis, A. (2009). Auditory steadystate responses and speech feature discrimination in infants. Journal of the American Academy of Audiology, 20(10), 629–643. Conti-Ramsden, G., & Friel-Patti, S. (1987). Situational variability in mother-child conversations. In K. E. Nelson & A. Van Kleeck (Eds.), Children’s language (Vol. 6, pp. 43–63). Hillsdale, NJ: Lawrence Erlbaum. Cook, R. E., Tessier, A., Klein, M., & Armbruster, V. (2000). Nurturing communication skills. In R. E. Cook, A. Tessier, & M. D. Klein (Eds.), Adapting early childhood curricula for children in inclusive settings (5th ed., pp. 290–399). Englewood Cliffs, NY: Merrill. Cox, R. M. (2009). Verification and what to do until your probe-mic system arrives. Hearing Journal, 62(10), 10–14. Crago, M. (1988). Cultural context in communicative interaction of young Inuit children

References 365

(Unpublished doctoral dissertation). McGill University, Montreal, Quebec. Crandell, C. C., Kreisman, B. M., Smaldino, J. J., & Kreisman, N. V. (2004). Room acoustics intervention efficacy measures. Seminars in Hearing, 25(2), 201–206. Cross, T. G. (1977). Mother’s speech adjustments: The contribution of selected child listener variables. In C. E. Snow & C. A. Ferguson (Eds.), Talking to children: Language input and acquisition (pp. 151–188). Cambridge, UK: Cambridge University Press. Cross, T. G. (1978). Mother’s speech and its association with rate of linguistic development in young children. In N. Waterson & C. E. Snow (Eds.), Talking to children: Language input and acquisition (pp. 199–216). Cambridge, UK: Cambridge University Press. Cross, T. G. (1984). Habilitating the languageimpaired child: Ideas from studies of parentinfant interaction. Topics in Language Disorders, 4, 1–14. Cui, T. (2014). Monaural hearing aid fitting considerations for patients with unilateral hearing loss. Hearing Review, 21(7), 32–34. Cullington, H. E., & Zeng, F. G. (2010). Bimodal hearing benefit for speech recognition with competing voice in cochlear implant subject with normal hearing in the contralateral ear. Ear and Hearing, 31(1), 70–73. Dabrowski, T., Myers, B., & Danilova R. (2009). Identifying enlarged vestibular aqueduct syndrome. Advance for Audiologists, 11(6), 46–49. Davis, H., Hornsby, B., Camarata, S., & Bess, F. (2018). My ears are tired: Listening fatigue in children with hearing loss. Educational Audiology Review, Spring, 6. Davis, J. (Ed.). (1990). Our forgotten children: Hard-of-hearing pupils in the schools. Bethesda, MD: Self-Help for Hard of Hearing People. Dehaene, S. (2009). Reading in the brain: The science and evolution of a human invention. New York, NY: Penguin Group. Delage, H., & Tuller, L. (2007). Language development and mild-to-moderate hearing loss: Does language normalize with age?

Journal of Speech, Language, and Hearing Research, 50, 1300–1313. DesJardin, J. L., & Eisenberg, L. S. (2007). Maternal contributions: Supporting language development in young children with cochlear implants. Ear and Hearing, 28, 456–469. Dettman, S. J., Dowell, R. C., Choo, D., Arnott, W., Abrahams, Y., Davis, A., . . . Briggs, R. S. (2016). Long term communication outcomes for children receiving cochlear implants younger than 12 months: A multi-centre study. Otology and Neurotology, 37(2), e82–e95. Dettman, S., Wall, E., Constantinescu, G., & Dowell, R. (2013). Communication outcomes for groups of children using cochlear implants enrolled in auditory-verbal, aural-oral, and bilingual-bicultural early intervention programs. Otology & Neurotology, 34, 451–459. DiFrancesco, J., & Tufts, J. (2018). Today’s hearing technologies: Definitions, regulations, overlaps. The Hearing Journal, 71(10). 46–51. Dillon, H. (2006). What’s new from NAL in hearing aid prescriptions? The Hearing Journal, 59(10), 10–16. Dillon, H. (2012). Hearing aids (2nd ed, p. 117). New York, NY: Thieme Medical. Dillon, H., Cowan, R., & Ching, T. Y. (2013). Longitudinal Outcomes of Children with Hearing Impairment (LOCHI). International Journal of Audiology, 52(Suppl. 2), S2–S3. doi:10.3109/14992027.2013.866448 Dinnebeil, L., & Hale, L. (2003). Incorporating principles of family-centered practice in early intervention program evaluation. Zero to Three, 23, 24–27. Dirks, E., & Rieffe, C. (2018). Are you there for me? Joint engagement and emotional availability in parent-child interactions for toddlers with moderate hearing loss. Ear and Hearing, 40(1), 18–26. Djalilian, H. R. (2018). Diagnosis: Cochlear aplasia and common cavity. The Hearing Journal, 71(8), 38–43. Dobie, R. A., & Berlin, C. I. (1979). Influence of otitis media on hearing and development.

366

Children With Hearing Loss:  Developing Listening and Talking

Annals of Otology, Rhinology, and Laryngology, 88, 46–53. Dodson, K. M. (2015). Embryologic origins of hearing loss. The ASHA Leader, 20(1). 24–25. Doidge, N. (2007). The BRAIN that changes itself. London, UK: Penguin Books. Dore, J. (1986). The development of conversational competence. In R. L. Schiefelbusch (Ed.), Language competence: Assessment and intervention (pp. 3–60). San Diego, CA: College-Hill Press. Dore, J., Gearhart, M., & Newman, D. (1978). The structure of nursery school conversation. In K. E. Nelson (Ed.), Children’s language (Vol. 1, pp. 337–395). New York, NY: Gardner Press. Dorman, M. F., & Gifford, R. H. (2017). Speech understanding in complex listening environments by listeners fit with cochlear implants. Journal of Speech, Language, and Hearing Research, 60, 3019–3026. Duchan, J. F. (1986). Language intervention through sense making and fine tuning. In R. L. Schiefelbusch (Ed.), Language competence: Assessment and intervention (pp. 187–212). San Diego, CA: College-Hill Press. Dun, C. A. J., Agterberg, M. J. H., Cremers, C. W. R. J., Hol, M. K. S., & Snik, A. F. M. (2013). Bilateral cone conduction devices: Improved hearing ability in children with bilateral conductive hearing loss. Ear and Hearing, 34, (6), 806–808. Duncan, J., & Rhoades, E. A. (2017). Introduction to auditory-verbal practice. In E. A. Rhoades & J. Duncan (Eds.), Auditoryverbal practice (pp. 5–19). Springfield, IL: Charles C Thomas. Dunst, C., Boyd, K., Trivette, C., & Hamby, D. (2002). Family-oriented program models and professional help-giving practices. Family Relations: Interdisciplinary Journal of Applied Family Studies, 51, 221–229. Edwards, D., & Feun, L. (2005). A formative evaluation of sound-field amplification systems across several grade levels in four schools. Journal of Educational Audiology, 12, 57–64.

Einhorn, K. (2017). Ear infections over the ages. The Hearing Review, 24(2), 18. Eisenberg, L. S., Wilkinson, E. P., & Winter, M. (2013). New trial opens door to auditory brainstem implant research in children. The Hearing Journal, 66(10), 4–10. Ellis, R., & Well, G. (1980). Enabling factors in adult-child discourse. First Language, 1, 46–82. English, K., Walker, E., Farah, K., Munoz, K., Pelosi, A., Scarinci, N. . . . Jones, C. (2017). Implementing family-centered care in early intervention for children with hearing loss: Engaging parents with a Question Prompt List (QPL). The Hearing Review, November, 12–18. Erber, N. P. (1982). Auditory training. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing. Ertmer, E., Strong, L., & Sadogopan, N. (2003). Beginning to communicate after cochlear implantation: Oral language development in a young child. Journal of Speech, Language, and Hearing Research, 46, 328–340. Estabrooks, W. (Ed.). (2006). Auditory-verbal therapy and practice. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing. Estabrooks, W. (Ed). (2012). 101 Frequently asked questions about auditory-verbal practice. Washington DC: The Alexander Graham Bell Association for the Deaf and Hard of Hearing. Estabrooks, W., MacIver-Lux, K., Honck, L., & Quayle, R. (2016). Blueprint of an auditory-verbal session. In W. Estabrooks, K. MacIver-Lux, & E. A. Rhoades (Eds.), Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (pp. 341–349). San Diego, CA: Plural Publishing. Estabrooks, W., MacIver-Lux, K., & Rhoades, E. A. (2016). Auditory-verbal therapy. San Diego, CA: Plural Publishing. Estabrooks, W., MacIver-Lux, K., Rhoades, E. A., & Lim, S. R. (2016). Auditory-verbal therapy: An overview. In W. Estabrooks, K. MacIver-Lux, & E. A. Rhoades (Eds.), Audi-

References 367

tory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (pp. 1–21). San Diego, CA: Plural Publishing. Fabry, D. (2008). Cochlear implants and hearing aids: Converging/colliding technologies. The Hearing Journal, 61(7), 10–17. Fagan, M. K. (2014). Frequency of vocalization before and after cochlear implantation: Dynamic effect of auditory feedback on infant behavior. Journal of Experimental Child Psychology, 126, 328–338. Fey, M., Krulik, T., Loeb, D., & Proctor-Williams, K. (1999). Sentence recast use by parents of children with typical language and children with specific language impairment. American Journal of Speech-Language Pathology, 8, 273–286. Fey, M., & Loeb, D. (2002). An evaluation of the facilitative effects of inverted yes-no questions on the acquisition of auxiliary verbs. Journal of Speech, Language, and Hearing Research, 45, 160–174. Fey, M., Long, S., & Finestack, L. (2003). Ten principles of grammar facilitation for children with specific language impairments. American Journal of Speech-Language Pathology, 12, 3–15. Finkelstein, J. A., Stille, C. J., Rifas-Shiman, S. L., & Goldmann, D. (2005, June). Watchful waiting for acute otitis media: Are parents and physicians ready? Pediatrics, 115(6), 1466–1473. Fitzgerald, D. C. (2001). Perilymphatic fistula and Ménière’s disease: Clinical series and literature review. Annals of Otology, Rhinology, and Laryngology, 110, 430–436. Fitzpatrick, E., Angus, D., Durieux-Smith, A., Graham, I. D., & Coyle, D. (2008). Parents’ needs following identification of childhood hearing loss. American Journal of Audiology, 17, 38–49. Fitzpatrick, E. M., & Doucet, S. P. (2013). Pediatric audiologic rehabilitation: From infancy to adolescence. New York, NY: Thieme Medical. Fitzpatrick, E., Olds, J., Durieux-Smith, A., McCrae, R., Schramm, D., & Gaboury, I.

(2009). Pediatric cochlear implantation: How much hearing is too much? International Journal of Audiology, 48, 91–97. Fitzpatrick, E. M., Whittingham, J., & DurieuxSmith, A. (2013). Mild bilateral and unilateral hearing loss in childhood: A 20-year view of hearing characteristics, and audiologic practices before and after newborn hearing screening. Ear and Hearing, 35(1), 10–18. Flasher, L. V., & Fogel, P. T. (2004). Counseling skills for speech-language pathologists and audiologists. Clifton Park, NY: Thomson Delmar Learning. Flexer, C. (2004). The impact of classroom acoustics: Listening, learning, and literacy. Seminars in Hearing, 25(2), 131–140. Flexer, C. (2017). Start with the brain and connect the dots. Retrieved from https://hear​ ingfirst.org/downloadables/logic-chain Flexer, C., & Long, S. (2003). Sound-field amplification: Preliminary information regarding special education referrals. Communication Disorders Quarterly, 25(1), 29–34. Flexer, C., & Madell, J. (2009). The concept of listening age for audiologic management of pediatric hearing loss. Audiology Today, 21, 31–35. Flexer, C., & Rollow, J. (2009). Classroom acoustic accessibility: A brain-based perspective. Volta Voices, 16(5), 16–18. Flynn, T. S., Flynn, M. C., & Gregory, M. (2005). The FM advantage in the real classroom. Journal of Educational Audiology, 12, 35–42. Foder, J. (1997). Do we have it in us? [Review of the book Rethinking innateness], Times Literary Supplement, 3–4. Folger, J. P., & Chapman, R. S. (1978). A pragmatic analysis of spontaneous imitations. Journal of Child Language, 5, 25–38. Fowler, K. B., & Ross, S. A. (2017). What to know about advances in CMV detection, prevention. The ASHA Leader, 22(1), 18–20. Franklin, B., Warlaumont, A. S., Messinger, D., Bene, E., Iyer, S. M., Lee, C. C., & Oller, D. K. (2014). Effects of parental interaction on infant vocalization rate, variability and vocal

368

Children With Hearing Loss:  Developing Listening and Talking

type. Language Learning and Development, 10(3), 279–296. French, N., & Steinberg, J. (1947). Factors governing the intelligibility of speech sounds. Journal of the Acoustical Society of America, 19, 90–119. Friedman, C., & Margulis, D. (2017). Getting an early start: FM systems for toddlers and preschoolers with hearing loss. Volta Voices, 24(2), 24–27. Friedman, M., & Woods, J. (2012). Caregiver coaching strategies for early intervention providers: Moving toward operational definitions. Infants and Young Children, 25, 62–82. Fry, D. B. (1966). The development of the phonological system in the normal and the deaf child. In F. Smith & G. A. Miller (Eds.), The genesis of language (pp. 187–206). Cambridge, MA: MIT Press. Fry, D. B. (1978). The role and primacy of the auditory channel in speech and language development. In M. Ross & T. G. Giolas (Eds.), Auditory management of hearingimpaired children (pp. 15–43). Baltimore, MD: University Park Press. Fulcher, A., Purcell, A., Baker, E., & Munro, N. (2012). Listen up: Children with early identified hearing loss achieve age-appropriate speech/language outcomes by 3 years-ofage. International Journal of Pediatric Otorhinolaryngology, 76, 1785–1794. Furrow, D., Nelson, K. E., & Benedict, H. (1979). Mothers’ speech to children and syntactic development: Some simple relationships. Journal of Child Language, 6, 423–442. Furth, H. (1966). Thinking without language. New York, NY: Free Press. Gabbard, S. A. (2003). The use of FM technology for infants and young children. Paper presented at ACCESS Conference, Chicago, IL. Gallagher, T. (1977). Revision behaviors in the speech of normal children developing language. Journal of Speech and Hearing Research, 70, 303–318. Gallagher, T. (1981). Contingent query sentences within adult-child discourse. Journal of Child Language, 8, 51–62.

Gans, R. E. (2019). Evaluation and management of vestibular function in infants and children with hearing loss. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 189–196). New York, NY: Thieme Medical. Gardner-Berry, K., Hou, S. Y. L., & Ching, T. Y. C. (2019). Managing infants and children with auditory neuropathy spectrum disorder. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 347–356). New York, NY: Thieme Medical. Gardner-Berry, K., Purdy, S. C., Ching, T. Y., & Dillon, H. (2015). The audiological journey and early outcomes of twelve infants with auditory neuropathy spectrum disorder from birth to two years of age. International Journal of Audiology, 54(8), 524–535. Gargiulo, R. M., & Metcalf, D. (2013). Teaching in today’s inclusive classrooms: A universal design for learning approach (2nd ed). Belmont, CA: Cengage Learning. Garvey, C. (1977). The contingent query: A dependent act in conversation. In M. L. Lewis & L. A. Rosenblum (Eds.), Interaction, conversation, and the development of language (pp. 63–93). New York, NY: John Wiley and Sons. Garvey, C., Garvey, K., & Hendi, A. (2008). A review of common dermatologic disorders of the external ear. Journal of the American Academy of Audiology, 19, 226–232. Gazdag, G., & Warren, S. F. (2000). Effects of adult contingent imitation on development of young children’s vocal imitation. Journal of Early Intervention, 23(1), 24–35. Geers, A. (2004). Speech, language and reading skills after early cochlear implantation. Archives of Otolaryngology–Head and Neck Surgery, 130, 634–638. Geers, A. E., Mitchell, C. M., Warner-Czyz, A., Wang, N. Y., Eisenberg, L. S., & the CDaCI Investigative Team. (2017). Early sign language exposure and cochlear implantation benefits. Pediatrics, 140(1), e20163489. Geers, A., & Moog, J. (1989). Factors predictive of the development of literacy in

profoundly hearing-impaired adolescents. Volta Review, 91, 69–86. Geers, A. E., Moog, J. S., Biedenstein, J., Brenner, C., & Hayes, H. (2009). Spoken language scores of children using cochlear implants compared to hearing age-mates at school entry. Journal of Deaf Studies and Deaf Education, 14, 371–385. Geers, A., Nicholas, J., & Sedey, A. (2003). Language skills of children with early cochlear implantation. Ear and Hearing, 24, 46–58. Geers, A. E., Strube, M. J., Tobey, E. A., Pisoni, D. B., & Moog, J. S. (2011). Epilogue: Factors contributing to long-term outcomes of cochlear implantation in early childhood. Ear and Hearing, 32(1), 84S–92S. Genesee, F., Paradis, J., & Crago, M. B. (2004). Dual language development and disorders: A handbook on bilingualism and second language learning. Baltimore, MD: Paul H. Brookes. Gifford, R. H. (2013). Pediatric ABI research in the U.S. — Why the wait? The Hearing Journal, 66(10), 1. Gifford, R. H. (2017). CI updates for single-sided deafness. The Hearing Journal, 70(12), 8–11. Girolametto, L., & Weitzman, E. (2002). Responsiveness of child care providers in interactions with toddlers and preschoolers. Language, Speech and Hearing Services in Schools, 33, 268–281. Girolametto, L., Weitzman, E., Wiigs, M., & Pearce, P. (1999). The relationship between maternal language measures and language development in toddlers with expressive vocabulary delays. American Journal of Speech-Language Pathology, 8, 364–374. Glantz, G. (2018). Protecting preemies: Audiology’s role in early intervention. The Hearing Journal, 71(11), 18–23. Gleitman, L. R., Newport, E. L., & Gleitman, H. (1984). The current status of the motherese hypothesis. Journal of Child Language, 11, 43–79. Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V., & Johnson, A. (2009). Evaluation of nonlinear frequency compression:

Clinical outcomes. International Journal of Audiology, 48(9), 632–644. Goldin-Meadow, S. (2007). Pointing sets the stage for learning language — and creating language. Child Development, 78(3), 741–745. Golinkoff, R. M. (Ed.). (1983). The transition from prelinguistic to linguistic communication. Hillsdale, NJ: Lawrence Erlbaum. Golinkoff, R. (2013). Development of infant language. Session presented at the American Academy of Audiology 2013 Conference, Anaheim, CA. Golz, A., Netzer A., Westerman, S. T., Westerman, L. M., Gilbert, D. A., Joachims, H. Z., & Goldenberg, D. (2005). Reading performance in children with otitis media. Otolaryngology–Head and Neck Surgery, 132, 495–499. Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2003). Activity-dependent developmental plasticity of the auditory brain stem in children who use cochlear implants. Ear and Hearing, 24(6), 485–500. Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2004). Thalamocortical activity and plasticity in children using cochlear implants. International Congress Series, 1273, 76–79. Gorga, M. P., Neely, S. T., Hoover, B. M., Dierking, D. M., Beauchaine, K. L., & Manning, C. (2004). Determining the upper limits of stimulation for auditory steady-state response measurements. Ear and Hearing, 25(3), 302–307. Gravel, K. E., Rao, A., Steyermark, J., & Schimmenti, L. A. (2019). Genetics of hearing loss. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 27–40). New York, NY: Thieme Medical. Graydon, K., Rance, G., Dowell, R., & Van Dun, B. (2017). Consequences of early conductive hearing loss on long-term binaural processing. Ear and Hearing, 38(5), 621–627. Greenfield, P., & Smith, J. (1976). The structure of communication in early language development. New York, NY: Academic Press.

Grice, H. (1975). Logic and conversation. In M. Cole & J. Morgan (Eds.), Syntax and semantics (Vol. 3, pp. 41–58). New York, NY: Academic Press. Grieco-Calub, T. M., Saffran, J. R., & Litovsky, R. Y. (2009). Spoken word recognition in toddlers who use cochlear implants. Journal of Speech, Language, and Hearing Research, 52, 1390–1400. Griffiths, C. (1955). The utilization of individual hearing aids on young deaf children (Unpublished doctoral dissertation). University of Southern California, Los Angeles. Griffiths, C. (1964). The auditory approach for preschool deaf children. Volta Review, 66(7), 387–397. Grimes, A. (2017). Kids need two ears. Audiology Today, 29(1), 54–58. Hall, J. W., III. (2011). Auditory processing disorder: Application of FM technology. In J. R. Madell & C. Flexer (Eds), Pediatric audiology casebook (pp. 32–36). New York, NY: Thieme Medical. Hall, J. W., III. (2014). Introduction to audiology today. New York, NY: Pearson. Hall, J. W., III. (2018). What is evidence-based audiology? Audiology Today, 30(1), 74–75. Hall, J. W., III, & Swanepoel, D. (2010). Objective assessment of hearing. San Diego, CA: Plural Publishing. Hall, J. W., III., Zayat, Z., & Aghamolaei, M. (2019). Clinical measurement and application of auditory evoked responses. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 145–162). New York, NY: Thieme Medical. Halliday, L. F., Tuomainen, O., & Rosen, S. (2017). Language development and impairment in children with mild to moderate sensorineural hearing loss. Journal of Speech, Language, and Hearing Research, 60, 1551– 1567. Halpin, K. S., Smith, K. Y., Widen, J. E., & Chertoff, M. E. (2010). Effects of universal newborn hearing screening on an early intervention program for children with hearing loss, birth to 3 years of age. Journal of the

American Academy of Audiology, 21(3), 169–175. Handelsman, J. A. (2018). Maintaining the balance: Vestibular help for children with hearing loss. The ASHA Leader, 23(8), 18–19. Hanin, L. (2018). Noise and toys: Dos and don’ts. The Hearing Journal, 71(8), 32. Harding, C. G. (1983). Setting the stage for language acquisition: Communication development in the first year. In R. M. Golinkoff (Ed.), The transition from prelinguistic to linguistic communication (pp. 93–113). Hillsdale, NJ: Lawrence Erlbaum. Harrison, R. (2006). Factors that shape the development and plasticity of the central auditory system: The balance of “nature and nurture.” Paper presented at the Fourth Widex Congress of Paediatric Audiology, Ottawa, Canada. Hart, B. (2004). What toddlers talk about. First Language, 24, 91–106. Hart, B., & Risley, T. R. (1999). The social world of children learning to talk. Baltimore, MD: Brookes. Hayes, H., Geers, A. E., Treiman, R., & Moog, J. S. (2009). Receptive vocabulary development in deaf children with cochlear implants: Achievement in an intensive auditory-oral educational setting. Ear and Hearing, 30(1), 128–135. He, S., Teagle, H. F. B., McFayden, T. C., Ewend, M., Henderson, L., He, N., & Buchman, C. A. (2018). Longitudinal changes in electrically evoked auditory event-related potentials in children with auditory brainstem implants: Preliminary results recorded over 3 years. Ear and Hearing, 39(2), 318–325. Hirsch, I. (1970). Auditory training. In H. Davis & S. Silverman (Eds.), Hearing and deafness (pp. 346–359). Hirsh-Pasek, K., Adamson, L. B., Bakeman, R., Owens, M. T., Golinkoff, R. M., Pace, A., . . . Suma, K. (2015). The contribution of early communication quality to low-income children’s language success. Psychological Science, 26(7), 1071–1083. Hoff-Ginsberg, E. (1990). Maternal speech and the child’s development of syntax: A fur-

ther look. Journal of Child Language, 17, 85–100. Holden, P. K., & Linthicum, F. H. (2005). Mondini dysplasia of the bony and membranous labyrinth. Otology and Neurotology, 26, 133. Hopkins, M., Kyzer, J., Matta, K., & Dunkelberger, L. (2014). Game on! Playing sports with cochlear implants and hearing aids. Volta Voices, 21(6), 28–29. Hornickel, J., Zecker, S. G., Bradlow, A. R., & Kraus, N. (2012). Assistive listening devices drive neuroplasticity in children with dyslexia. Proceedings of the National Academy of Scientists, 109(41), 16731–16736. Hornsby, B. W. Y. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing, 34(5), 523–534. Hornsby, B. W. Y., Camarata, S., & Bess, F. (2014). Subjective fatigue in children with hearing loss: Some preliminary findings. American Journal of Audiology, 23, 129–134. Horton, K. (1974). Infant intervention and language learning. In R. L. Schiefelbusch & L. L. Lloyd (Eds.), Language perspectives: Acquisition, retardation, and intervention (pp. 469–491). Baltimore, MD: University Park Press. Houston, D. M., Stewart, J., Moberly, A., Hollich, G., & Miyamoto, R. (2012). Word learning in deaf children with cochlear implants: Effects of early auditory experience. Developmental Science, 1–14. Houston, K. T., & Stredler-Brown, A. (2012). A model of early intervention for children with hearing loss provided through telepractice. The Volta Review, 112 (3), 283–296. Hsu, H. C., Iyer, S. N., & Fogel, A. (2014). Effects of social games on infant vocalizations. Journal of Child Language, 41(1), 132–154. Hughes, M. L., Seivier, J. D., & Choi, S. (2018). Techniques for remotely programming children with cochlear implants using pediatric audiological methods via telepractice. American Journal of Audiology, 27, 385–390. Hulit, L. M., Fahey, K. P., & Howard, M. R. (2014). Born to talk: An introduction to

speech and language development (5th ed.). Boston, MA: Pearson Education. Hung, Y. C., & Ma, Y. C. J. (2016). Effective use of the six sound test. The Hearing Journal, 69(7), 16–17. Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics (pp. 269–293). New York, NY: Penguin Books. Janky, K. (2013). Unspun. The ASHA Leader, 18(121), 49–53. Janky, K. L., Thomas, M. L. A., High, R. R., Schmid, K. K., & Ogun, O. A. (2018). Predictive factors for vestibular loss in children with hearing loss. American Journal of Audiology, 27, 137–146. Jerger, J. (2018). The evolution of the audiometric pure-tone technique. Hearing Review, 25(9), 12–18. Johnson, C. D., & Seaton, J. (2012). Educational audiology handbook (2nd ed.). Clifton Park, NY: Delmar-Cengage Learning. Johnson, S. (1998). Who moved my cheese? New York, NY: Putnam’s Sons. Joint Committee on Infant Hearing (JCIH). (2007). Year 2007 position statement: Principles and guidelines for early hearing detection and intervention programs. Pediatrics, 120(4), 898–921. Retrieved from http://www.listeningandspokenlanguage.org/Protocol.Audiological.Assessment/ Joint Committee on Infant Hearing (JCIH). (2013). Year 2013 supplement: Principles and guidelines for early intervention after confirmation that a child is deaf or hard of hearing. Pediatrics, 131(4), e1324–e1349. Retrieved from http://www.audiology.org/publications-resources/document-library/infant-identification Juffer, F., & Steele, M. (2014). What words cannot say: The telling story of video in attachment-based interventions. Attachment and Human Development, 16, 307–314. Kahley, S., & Dybka, M. (2016). Real-ear measurements in children: Compliance meets best practice guidelines. Audiology Today, 28(2), 78–80.

Kaiser, A., & Hancock, T. (2003). Teaching parents new skills to support their young children’s development. Infants and Young Children, 16, 9–21. Karass, J., Braungart-Reiker, J. M., Mullins, G., & Lefever, J. B. (2002). Process in language acquisition: The roles of gender, attention, and maternal encouragement of attention over time. Journal of Child Language, 29, 529–543. Kaye, K. (1980). Why we don’t talk “baby talk” to babies. Journal of Child Language, 7, 489–507. Kaye, K. (1982). The mental and social life of babies: How parents create persons. Chicago, IL: Chicago University Press. Kaye, K., & Charney, R. (1980). How mothers maintain “dialogue” with two-year-olds. In D. Olson (Ed.), The social foundations of language and thought (pp. 211–230). New York, NY: Norton. Kaye, K., & Charney, R. (1981). Conversation asymmetry between mothers and children. Journal of Child Language, 8, 35–50. Kaye, K., & Wells, A. (1980). Mothers’ jiggling and the burst-pause pattern in neonatal sucking. Infant Behavior and Development, 3, 29–46. Kemmery, M. A., & Compton, M. V. (2014). Are you deaf or hard of hearing? Which do you go by: Perceptions of identity in families of students with hearing loss. The Volta Review, 114(2), 157–192. Kenna, M. A., Feldman, H. A., Neault, M. W., Frangulov, A., Wu, B. L., Fligor, B., & Rehm, H. L. (2010). Audiologic phenotype and progression in GJB2 (Connexin 26) hearing loss. Archives of Otolaryngology–Head and Neck Surgery, 136(1), 81–87. Kent, R. D., & Read, C. (2001). Acoustic analysis of speech (2nd ed.). San Diego, CA: Singular Publishing Group. Kerns, K., & Whitelaw, G. M. (2014). Mild hearing loss? Says who? Audiology Today, 21(4), 50–54. Kilgard, M. P., Vazquez, J. L., Engineer, N. D., & Pandya, P. K. (2007). Experience dependent plasticity alters cortical synchronization. Hearing Research, 229, 171–179.

Killion, M. C., & Mueller, H. G. (2010). Twenty years later: A NEW count the-dots method. Hearing Journal, 63(1), 10–17. Kirk, K. I., Miyamoto, R. T., Lento, C. L., Ying, E., O’Neill, T., & Fears, B. (2002). Effects of age at implantation in young children. Annals of Otology, Rhinology, and Laryngology, 189(Suppl.), 69–73. Knightly, L. M., Jun, S. A., Oh, J. S., & Au, T. K. (2003). Production benefits of childhood overhearing. Journal of the Acoustical Society of America, 114(1), 465–474. Kolpe, V. (2003). Earmold evolution. Hearing Health, 19(2), 39–41. Kral, A. (2013). Auditory critical periods: A review from system’s perspective. Neuroscience, 247, 117–133. Kral, A., Kronenberger, W. G., Pisoni, D. B., & O’Donoghue, G. M. (2016). Neurocognitive factors in sensory restoration of early deafness: A connectome model. The Lancet Neurology, 15(6), 610–621. Kral, A., & Lenarz, T. (2015). How the brain learns to listen: Deafness and the bionic ear. E-Neuroforum, 6(1), 21–28. Kramer, S., & Brown, D. K. (2019). Audiology: Science to practice (3rd ed). San Diego, CA: Plural Publishing. Kramer, S., Dreisbach, L., Lockwood, J., Baldwin, K., Kopke, R., Scranton, S., & O’Leary, M. (2006). Efficacy of the antioxidant N-acetylcysteine (NAC) in protecting ears exposed to loud music. Journal of the American Academy of Audiology, 17, 265–278. Kraus, N. (2014). Neural synchrony: Opening an umbrella. The Hearing Journal, 67(6), 1. Kraus, N. (2017). Hearing in noise: The brain health connection. The Hearing Journal, 70(7), 6. Kraus, N. (2018). Promoting sound health. The Hearing Journal, 71(11), 5. Kraus, N., & Anderson, S. (2014). Music benefits across the lifespan: Enhanced processing of speech in noise. Hearing Review, 21(8), 18–21. Kraus, N., & White-Schwoch, T. (2015). Listening in the din: A factor in learning disabilities? The Hearing Journal, 68(9), 38–40.

Kraus, N., & White-Schwoch, T. (2017). Neural markers of listening in noise. The Hearing Journal, 70(7), 44–45. Kraus, N., & White-Schwoch, T. (2018). How feedback synchronizes the auditory brain. The Hearing Journal, 71(9), 44–46. Kraus, N., & White-Schwoch, T. (2019). Auditory neurophysiology of reading impairment: Theory and management. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 163–172). New York, NY: Thieme Medical. Kretschmer, L., & Kretschmer, R. (2001). Children with hearing impairment. In T. Layton, E. Crais, & L. Watson (Eds.), Handbook of early language impairment in children: Nature (pp. 56–84). Albany, NY: Delmar. Kretzmer, E. A., Meltzer, N. E., Haenggeli, C. A., & Ryugo, D. K. (2004). An animal model for cochlear implants. Archives of Otolaryngology–Head and Neck Surgery, 130(5) 499–508. Krishnamurti, S., & Wise, L. (2018). Effects of amplification on cortical electrophysiological function. Hearing Review, 28(7), 26–32. Krishnan, L. A. (2009). Universal newborn hearing screening follow-up: A university clinic perspective. American Journal of Audiology, 18, 89–98. Krishnan, L. A., & Van Hyfte, S. (2014). Effects of policy changes to universal newborn hearing screening follow-up in a university clinic. American Journal of Audiology, 23, 282–292. Kubler-Ross, E. (1969). On death and dying. New York, NY: Macmillan. Kuhl, P. K. (1987). Perception of speech and sound in early infancy. In P. Salapatek & L. Cohen (Eds.), Handbook of infant perceptions: From perception to cognition (Vol. 2, pp. 275–381). New York, NY: Academic Press. Kuhn-Inacker, H., Weichbold, V., Tsiakpini, L., Coninx, S., & D’Haese, P. (2003). Little ears: Auditory questionnaire. Innsbruck, Austria: MED-EL. Kuk, F., & Marcoux, A. (2002). Factors ensuring consistent audibility in pediatric hearing aid

fitting. Journal of the American Academy of Audiology, 13, 503–520. Lafferty, K. A., Hodges, R., & Rehm, H. L. (2014). Genetics of hearing loss. In J. R. Madell & C. Flexer (Eds.), Pediatric audiology: Diagnosis, technology, and management (2nd ed., pp. 22–35). New York, NY: Thieme Medical. Lahey, M. (1988). Language disorders and language development. New York, NY: Macmillan. Lai, C. C., & Shiao, A. S. (2004). Chronological changes of hearing in pediatric patients with large vestibular aqueduct syndrome. Laryngoscope, 114, 832–838. Lasky, E. Z., & Klopp, K. (1982). Parent-child interactions in normal and language-disordered children. Journal of Speech and Hearing Disorders, 47, 7–18. Latham, N. M., & Blumsack, J. T. (2008). Classroom acoustics: A survey of educational audiologists. Journal of Educational Audiology, 14, 58–68. Laugesen, S., Nielsen, C., Maas, P., & Jensen, N. S. (2009). Observations on hearing aid users’ strategies for controlling the level of their own voice. Journal of the American Academy of Audiology, 20(8), 503–513. Law, J., Garrett, Z., & Nye, C. (2004). The efficacy of treatment for children with developmental speech and language delay/disorder: A meta-analysis. Journal of Speech, Language, and Hearing Research, 47, 924–943. Leavitt, R. J., & Flexer, C. (1991). Speech degradation as measured by the Rapid Speech Transmission Index (RASTI). Ear and Hearing, 12, 115–118. Leigh, J. R., Dettman, S. J., & Dowell, R. C. (2016). Evidence-based guidelines for recommending cochlear implantation for young children: Audiological criteria and optimizing age at implantation. International Journal of Audiology, S55, S9–S18. Levitt, H. (2004). Assistive listening technology: What does the future hold? Volta Voices, 11(1), 18–21. Lew, J., Purcell, A., Doble, M., & Lim, L. (2014). Hear here: Children with hearing loss learn words by listening. International Journal

of Pediatric Otorhinolaryngology, 78(10), 1716–1725. Liden, G., & Kankkunen, A. (1969). Visual reinforcement audiometry. Acta Otolaryngologica, 67, 281–292. Lieu, J. E. C. (2004). Speech-language and educational consequences of unilateral hearing loss in children. Archives of Otolaryngology–Head and Neck Surgery, 130, 524–530. Lieu, J. E. C., Tye-Murray, N., & Fu, Q. (2012). Longitudinal study of children with unilateral hearing loss. Laryngoscope, 122(9), 2088–2095. Lightfoot, G., & Brennan, S. (2019). Auditory evoked response testing in children. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 135–144). New York, NY: Thieme Medical. Lim, S. R., Goldberg, D. M., & Flexer, C. (2018). Auditory-verbal graduates — 25 years later: Outcome survey of the clinical effectiveness of the listening and spoken language approach for young children with hearing loss. The Volta Review, 118(2), 5–40. Lim, S. R., & Hogan, S. C. (2017). Research findings for AV practice. In E. A. Rhoades & J. Duncan (Eds.), Auditory-verbal practice: Family centered early intervention (2nd ed., pp. 52–64). Springfield, IL: Charles C. Thomas. Lin, X. (2014). New hope for the treatment of genetic deafness. The Hearing Journal, 66(5), 1. Lindsey, H. (2014). Making sense of auditory neuropathy spectrum disorder. The Hearing Journal, 67(6), 8–12. Ling, D. (1981). Keep your hearing-impaired child within earshot. Newsounds, 6, 5–6. Ling, D. (2002). Speech and the hearing-impaired child: Theory and practice. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing. Ling, D. (1986). Devices and procedures for auditory learning. In E. B. Cole & H. Gregory (Eds.), Auditory learning (pp. 19–28). Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing.

Ling, D. (2002). Speech and the hearing impaired child (2nd ed.). Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing. Litovsky, R. (2010). Bilateral cochlear implants: Are two ears better than one? ASHA Leader, 12(2), 14–17. Litovsky, R. Y., Johnstone, P. M., Godar, S., Agrawal, S., Parkinson, A., Peters, R., & Lake, J. (2006). Bilateral cochlear implants in children: Localization acuity measured with minimum audible angle. Ear and Hearing, 27(1), 43–59. Litovsky, R. Y., Parkinson, A., & Arcaroli, J. (2009). Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear and Hearing, 30(4), 419–431. Lovett, R., Elizabeth, S., Vickers, D. A., & Summerfield, A. Q. (2015). Bilateral cochlear implantation for hearing-impaired children: Criterion of candidacy derived from an observational study. Ear and Hearing, 36(1), 14–23. Lowell, E., Rushford, G., Hoversten, G., & Stoner, M. (1956). Evaluation of pure-tone audiometry with pre-school age children. Journal of Speech and Hearing Disorders, 21, 292–302. Lowy, L. (1983). Social work supervision: From models toward theory. Journal of Education for Social Work, 19, 55–62. Luterman, D. (1984). Counseling the communicatively disordered and their families. Boston, MA: Little, Brown. Luterman, D. M. (2001). Counseling persons with communication disorders and their families (4th ed.). Austin, TX: Pro-Ed. Luterman, D. (Ed.). (2006). Children with hearing loss: A family guide. Sedona, AZ: Auricle Ink. Luterman, D. M., & Maxon, A. (2002). When your child is deaf: A guide for parents (2nd ed.). Austin, TX: Pro-Ed. MacDonald, J. (1989). Becoming partners with children: From play to conversation. San Antonio, TX: Special Press. MacIver-Lux, K., Lim, S. T., Rhoades, E. A., Robertson, L., Quayle, R., & Honck, L. (2016).

Milestones in auditory-verbal development. In W. Estabrooks, K. MacIver-Lux, & E. A. Rhoades (Eds.), Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (pp. 219–261). San Diego, CA: Plural Publishing. Madell, J. R. (2016). The speech string bean. Volta Voices 23(1), 29–31. Madell, J. R., & Anderson, K. (2014). Hearing aid retention for infants and young children. Volta Voices, 21(5), 23–25. Madell, J. R., & Flexer, C. (2018). Maximize children’s school outcomes: The audiologist’s responsibility. Audiology Today, 30(1), 18–25. Madell, J. R., Flexer, C., Wolfe, J., & Schafer, E. C. (2019a). Hearing test protocols for children. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 65–74). New York, NY: Thieme Medical. Madell, J. R., Flexer, C., Wolfe, J., & Schafer, E. C. (2019b). Pediatric audiology: Diagnosis, technology and management (3rd ed.). New York, NY: Thieme Medical. Madell, J. R., Flexer, C., Wolfe, J., & Schafer, E. C. (2019c). Pediatric audiology casebook (2nd ed.). New York, NY: Thieme Medical. Madell, J. R., & Schafer, E. C. (2019). Evaluation of speech perception in infants and children. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 97–108). New York, NY: Thieme Medical. Markides, A. (1986). The use of residual hearing in the education of hearing impaired children: A historical perspective. In E. B. Cole & H. Gregory (Eds.), Auditory learning (pp. 57–66). Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing. Marriage, J. E., Vickers, D. A., Baer, T., Glasberg, B. R., & Moore, B. C. J. (2018). Comparison of different hearing aid prescriptions for children. Ear and Hearing, 39(1), 20–31.

Martin, R. L. (2008). More 2K. The Hearing Journal, 61(3), 64–65. May, L., Gervain, J., Carreiras, M., & Werker, J. F. (2017). The specificity of the neural response to speech at birth. Developmental Science, 21(3), e12564. Mayer, T. E., Brueckmann, H., Siegert, R., Witt, A., & Weerda, H. (1997). High resolution CT of the temporal bone in dysplasia of the auricle and external auditory canal. American Journal of Neuroradiology, 18(1), 53–65. Mazlan, R., Kei, J., & Hickson, L. (2009). Testretest reliability of the acoustic stapedial reflex test in healthy neonates. Ear and Hearing, 30(3), 295–301. McCreery, R. (2013). Data logging and hearing aid use: Focus on the forest, not the trees. The Hearing Journal, 66(12), 18–19. McCreery, R. (2014). Approaching unilateral hearing loss from both sides. The Hearing Journal, 67(6), 28–29. McCreery, R. (2015). For children with hearing loss, listening can be exhausting work. The Hearing Journal, 68(5), 26–28. McCreery, R. W., Bentler, R. A., & Roush, P. A. (2013). Characteristics of hearing aid fittings in infants and young children. Ear and Hearing, 34(6), 701–710. McCreery, R. W., & Walker, E. A. (2017). Pediatric amplification: Enhancing auditory access. San Diego, CA: Plural Publishing. McCreery, R. W., Walker, E. A., Spratford, M., Bentler, R., Holte, L., Roush, P. . . . Moeller, M. P. (2015). Longitudinal predictors of aided speech audibility in infants and children. Ear and Hearing, 36, 24S–37S. McDonald, L., & Pien, D. (1982). Mother conversational behaviors as a function of conversational intent. Journal of Child Language, 9, 337–358. McKay, S. (2008). Managing children with mild and unilateral hearing loss. In J. R. Madell & C. Flexer (Eds.), Pediatric audiology: Diagnosis, technology, and management (pp. 291–298). New York, NY: Thieme Medical. McNeil, J., & Fowler, S. (1999). Let’s talk: Encouraging mother-child conversations

during story reading. Journal of Early Intervention, 22, 51–69. McWilliam, R. A. (Ed.) (2010). Working with families of young children with special needs. New York, NY: The Guilford Press. McWilliam, R. A., Casey, A. M., & Sims, J. (2009). The routines-based interview: A method for gathering information and assessing needs. Infants and Young Children, 22(3), 224–233. Meltzer, L. (2007). Executive function in education: From theory to practice. New York, NY: Guilford Press. Mendel, L. L., Roberts, R. A., & Walton, J. H. (2003). Speech perception benefits from sound field FM amplification. American Journal of Audiology, 12, 114–124. Menn, L. (2013). Development of articulatory, phonetic, and phonological capabilities. In M. M. Vihman & T. Keren-Portnoy (Eds.), The emergence of phonology: Whole word approaches to cross-linguistic evidence (pp. 170–172). New York, NY: Cambridge University Press. Mercer, N. (2007). Words & minds: How we use language to think together. New York, NY: Routledge. Meyer, K. (2003). Mild? Moderate? Any hearing loss is a big deal. Educational Audiology Review, 20(4), 22–24. Milne, A. A. (1926). Winnie-the-Pooh. New York, NY: Dutton. Mitchell, R. E., & Karchmer, M. A. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163. Mlot, S., Buss, E., & Hall, J. W., III. (2010). Spectral integration and bandwidth effects on speech recognition in school-aged children and adults. Ear and Hearing, 31(1), 56–62. Moeller, M. P. (2007). Current state of knowledge: Psychosocial development in children with hearing impairment. Ear and Hearing, 28(6), 729–739. Moeller, M. P., & Cole, E. B. (2016). Familycentered early intervention. In M. P. Moeller, D. J. Ertmer, & C. Stoel-Gammon (Eds.), Pro-

moting language & literacy in children who are deaf or hard of hearing (pp. 107–148). Baltimore, MD: Brookes. Moeller, M. P., Donaghy, K. F., Beauchaine, K. L., Lewis, D. E., & Stelmachowicz, P. G. (1996). Longitudinal study of FM system use in nonacademic settings: Effects on language development. Ear and Hearing, 17(1), 28–41. Moeller, M. P., Hoover, B., Peterson, B., & Stelmachowicz, P. (2009). Consistency of hearing aid use in infants with early-identified hearing loss. American Journal of Audiology, 18(1), 14–23. Moeller, M. P., & McCreery, R. (2017). Children who are hard of hearing: Still forgotten? The ASHA Leader, 22(6), 16–17. Moeller, M. P., & Mixan, K. (2016). Family-centered early intervention. In M. P. Moeller, D. J. Ertmer, & C. Stoel-Gammon (Eds.), Promoting language and literacy in children who are deaf or hard of hearing (pp. 77–106). Baltimore, MD: Brookes. Moodie, S. T. F., The Network of Pediatric Audiologists of Canada, Scollie, S. D., Bagatto, M. P., & Keene, K. (2017). Fit-to-targets for the desired sensation level version 5.0a hearing aid prescription method for children. American Journal of Audiology, 26(9), 251–258. Moog, J. S. (2002). Changing expectations for children with cochlear implants. Annals of Otology, Rhinology, and Laryngology, 111, 138–142. Moog, J. S., & Geers, A. E. (2003). Epilogue: Major findings, conclusions and implications for deaf education. Ear and Hearing, 24(1S), 121S–125S. Moon, C., Lagercrantz, H., & Kuhl, P. K. (2013) Language experienced in utero affects vowel perception after birth: A two-country study. Acta Pædiatrica, 102,156–160. Moses, K. (1985). Infant deafness and parental grief: Psychosocial early intervention. In F. Powell, T. Finitzo-Hieber, S. Friel-Patti, & D. Henderson (Eds.), Education of the hearing-impaired child (pp. 86–102). San Diego, CA: College-Hill Press. Moses, K., & Van Hecke-Wulatin, M. (1981). The socio-emotional impact of infant deaf-

ness: A counselling model. In G. T. Mencher & S. E. Gerber (Eds.), Early management of hearing loss (pp. 243–278). New York, NY: Grune & Stratton. Mulla, I., & McCracken, W. (2014). Frequency modulation for preschoolers with hearing loss. Seminars in Hearing, 35(3), 206–216. Musiek, F. E. (2009). The human auditory cortex: Interesting anatomical and clinical perspectives. Audiology Today, 21(4), 26–37. Myklebust, H. (1954). Auditory disorders in children. New York, NY: Grune & Stratton. Myklebust, H. (1960). The psychology of deafness. New York, NY: Grune & Stratton. Neault, M. W. (2014). Managing infants and children with auditory neuropathy spectrum disorder (ANSD). In J. R. Madell & C. Flexer (Eds.), Pediatric audiology: Diagnosis, technology, and management (2nd ed., pp. 356–364). New York, NY: Thieme Medical. Nelson, E. L., Smaldino, J., Erler, S., & Garstecki, D. (2008). Background noise levels and reverberation times in old and new elementary school classrooms. Journal of Educational Audiology, 14, 12–18. Nelson, K. E. (1973). Structure and strategy in learning to talk. Monographs of the Society for Research in Child Development, 38 (Serial No. 149). Newport, E. L. (1977). Motherese: The speech of mothers to young children. In N. Castellan, D. Pisoni, & G. Potts (Eds.), Cognitive theory (Vol. II, pp. 69–84). Hillsdale, NJ: Lawrence Erlbaum. Neumann, S., Smith, J., & Wolfe, J. (2016). Conditioned play audiometry: It should be all fun and games. The Hearing Journal, 69(4), 34–37. Neumann, S., & Wolfe, J. (2013). What’s new and notable in hearing aids: A friendly guide for parents and hearing aid wearers. Volta Voices, 20(3), 24–29. Newport, E. L., Gleitman, H., & Gleitman, L. R. (1977). Mother, I’d rather do it myself: Some effects and non-effects of maternal speech style. In C. E. Snow & C. A. Ferguson (Eds.),

Talking to children: Language input and acquisition (pp. 109–149). Cambridge, UK: Cambridge University Press. Newson, J. (1977). The growth of shared understanding between infant and caregiver. In M. Bullowa (Ed.), Before speech: The beginning of interpersonal communication (pp. 207–222). Cambridge, UK: Cambridge University Press. Nguyen, H., & Bentler, R. (2011). Optimizing FM systems. The ASHA Leader, 16(12), 5–6. Nicholas, J. G., & Geers, A. E. (2006). Effects of early auditory experience on the spoken language of deaf children at 3 years of age. Ear and Hearing, 27(3), 286–298. Nicholas, J. G., & Geers, A. (2007). Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. Journal of Speech, Language, and Hearing Research, 50, 1048–1062. NIDCD. (2019). Ear infections in children. Retrieved from http://www.nidcd.nih.gov/ health/hearing/pages/earinfections.aspx#1 Niparko, J. K. (2004). Cochlear implants: Clinical application. In F. G. Zeng, A. Popper, & R. Fay (Eds.), Cochlear implants: Auditory prosthesis and electric hearing (pp. 53–100). New York, NY: Springer-Verlag. Nittrouer, S., & Burton, L. T. (2001). The role of early language experience in the development of speech perception and language processing abilities in children with hearing loss. Volta Review, 103(1), 5–37. Nohara, M., McKay, S., & Trehub, S. (1995). Analyzing conversations between mothers and their hearing impaired and deaf adolescents. Volta Review, 97, 23–134. Northcott, W. (1972). Curriculum guide: Hearing impaired children — birth to three years. Washington, DC: The Alexander Graham Bell Association for the Deaf and Hard of Hearing. Northcott, W. H. (Ed.). (1978). I heard that! A development sequence of listening activities for the young child. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing.

Northern, J. L., & Downs, M. P. (2014). Hearing in children (6th ed.). San Diego, CA: Plural Publishing. Nott, P., Cowan, R., Brown, P. M., & Wigglesworth, G. (2009). Early language development in children with profound hearing loss fitted with a device at a young age: Part II. Content of the first lexicon. Ear and Hearing, 30(5), 541–551. Nozza, R. (2006). Developmental psychoacoustics: Auditory function in infants and children. Paper presented at the Fourth Widex Congress of Paediatric Audiology, Ottawa, Canada. Nyffeler, M., & Dechant, S. (2010). The impact of new technology on mobile phone use. Hearing Review, 17(3), 42–49. Oller, D. K. (1995). The development of vocalizations in infancy. In H. Winitz (Ed.), Human communication and its disorders: A review (Vol. IV, pp. 1–30). Timonium, MD: York Press. Oller, D. K. (2000). Emergence of the speech capacity. Mahwah, NJ: Lawrence Erlbaum. Olsen-Fulero, L., & Conforti, J. (1983). Child responsiveness to mother questions of varying type and presentation. Journal of Child Language, 10, 495–520. Oomen, K. P., Rovers, M. M., van den Akker, E. H., vanStaaij, B. K., Hoes, A. W., & Schilder, A. G. (2005). Effect of adenotonsillectomy on middle ear status in children. Laryngoscope, 115(4), 731–734. Owens, R. (2004). Language disorders: A functional approach to assessment and intervention (4th ed.). Boston, MA: Allyn & Bacon. Pallarito, K. (2013). CMV susceptibility varies with neural cell stage. The Hearing Journal, 66(6), 23–24. Palmer, C. V., & Grimes, A. M. (2005). Effectiveness of signal processing strategies for the pediatric population: A systematic review of the evidence. Journal of the American Academy of Audiology, 16, 505–514. Pandya, A. (2018). Genetic testing and counseling for hearing loss in era of precision medicine. Audiology Today, 30(4), 56–57.

Patton, J. B., Hackett, L., & Rodriguez, L. (2004). The listening age formula. San Antonio, TX: Sunshine Cottage School for the Deaf. Paul, R. (2007). Language disorders from infancy through adolescence: Assessment and intervention (3rd ed.). St. Louis, MO: Mosby. Paul, R., Norbury, C., & Gosse, C., (2018). Language disorders from infancy through adolescence (5th ed.). St. Louis, MO: Mosby. Pepper, J., & Weitzman, E. (2004). It takes two to talk: A practical guide for parents of children with language delays. Toronto, Canada: The Hanen Center. Percy-Smith, L., Tonning, T. L., Josvassen, J. L., Mikkelsen, J. H., Nissen, L., Dieleman, E., . . . Caye-Thomasen, P. (2018). Auditory-verbal habilitation is associated with improved outcome for children with cochlear implant. Cochlear Implants International, 19(1), 38–45. Perigoe, C., & Paterson, M. M. (2013). Understanding auditory development and the child with hearing loss. In D. R. Welling and C. A. Ukstins (Eds.), Fundamentals of Audiology for the Speech-Language Pathologist. Burlington, MA: Jones & Bartlett Learning. Perkell, J. S. (2008). Auditory feedback and speech production in cochlear implant users and speakers with typical hearing. Paper presented at the 2008 Research Symposium of the Alexander Graham Bell Association International Convention, Milwaukee, WI. Peters, A. M., & Boggs, S. T. (1986). Interactional routines as cultural references upon language acquisition. In B. B. Schiefflelin & B. Ochs (Eds.), Language socialization across cultures. Cambridge, UK: Cambridge University Press. Peters, K. (2017). Considering the classroom: Educational access for children fitted with hearing assistive technology. Audiology Today, 29(1), 20–29. Peterson, C. F., Donner, F., & Flavell, J. (1972). Developmental changes in children’s responses to three indications of communication failure. Child Development, 43, 1463–1468.

Peterson, P. (2004). Naturalistic language teaching procedures for children at risk for language delays. The Behavior Analyst Today, 5, 404–424. Peterson, P., Carta, J., & Greenwood, C. (2005). Teaching enhanced milieu language teaching skills to parents of multiple risk families. Journal of Early Intervention, 27, 94–109. Phillips, J. (1973). Syntax and vocabulary of mother’s speech to young children: Age and sex comparisons. Child Development, 44, 182–185. Pickering, M. (1987). Interpersonal communication and the supervisory process: A search for Ariadne’s thread. In M. Crago & M. Pickering (Eds.), Supervision in human communication disorders: Perspectives on a process (pp. 203–225). Boston, MA: Little, Brown. Pickett, J. M. (1998). The acoustic analysis of speech communication: Fundamentals, speech perception theory, and technology. Needham, MA: Allyn & Bacon. Pirzanski, C. (2006). Earmolds and hearing aid shells: A tutorial — pediatric impressions. Hearing Review, 13(6), 50–55. Pisoni, D. B., Kronenberger, W. G., Harris, M. S., & Moberly, A. C. (2017). Three challenges for future research on cochlear implants. World Journal of OtorhinolaryngologyHead and Neck Surgery, 3, 240–254. Pittman, A. L. (2008). Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. Journal of Speech-Language and Hearing Research, 51(3), 785–797. Pittman, A., & Beauchaine, K. L. (2019). Hearing aids for infants, children and adolescents. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 215–224). New York, NY: Thieme Medical. Poehlmann, J., & Fiese, B. (2001). Parent-infant interaction as a mediator of the relations between neonatal risk stats and 12-month cognitive development. Infant Behavior and Development, 24, 171–188.

Pollack, D. (1970). Educational audiology for the limited-hearing infant. Springfield, IL: Charles C. Thomas. Pollack, D. (1985). Educational audiology for the limited-hearing infant and preschooler (2nd ed.). Springfield, IL: Charles C. Thomas. Pollack, D., Goldberg, D., & Caleffe-Schenck, N. (1997). Educational audiology for the limited-hearing infant and preschooler: An auditory-verbal program. Springfield, IL: Charles C. Thomas. Powers, T. (2010). The pediatric device. Advance for Audiologists, 12(1), 24–27. Proctor-Williams, K., Fey, M., & Loeb, D. (2001). Parental recasts and production of copulas and articles by children with specific language impairment and typical development. American Journal of Speech Language Pathology, 10, 155–168. Pugh, K. (2005). Neuroimaging studies of reading and reading disability: Establishing brain/behavior relations. Presentation at the Literacy and Language Conference of the Speech, Language and Learning Center, New York, NY. Pugh, K. R., Sandak, R., Frost, S. J., Moore, D. L., & Menci, W. E. (2006). Neurobiological investigations of skilled and impaired reading. In D. Dickinson & S. Neuman (Eds.), Handbook of early literacy research (Vol. 2, pp. 64–76). New York, NY: Guilford. Pulverman, R., Song, L., Golinkoff, R. M., & Hirsh-Pasek, K. (2013). The development of English- and Spanish-learning infants’ attention to manner and path: Foundations for relational term learning. Child Development, 84, 241–252. Purdy, S. C., Farrington, D. R., Moran, C. A., Chard, L. L., & Hodgson, S. A. (2002). A parental questionnaire to evaluate children’s auditory behavior in everyday life (ABEL). American Journal of Audiology, 11, 72–82. Purdy, S. C., & Kelly, A. (2014). Auditory evoked response testing in infants and children. In J. R. Madell & C. Flexer (Eds.), Pediatric audiology: Diagnosis, technology, and

management (2nd ed., pp. 148–163). New York, NY: Thieme Medical. Ramirez-Esparza, N., Garcia-Sierra, A., & Kuhl, P. K. (2014). Look who’s talking: Speech style and social context in language input to infants are linked to concurrent and future speech development. Developmental Science, 17(6), 880–891. Ramirez Inscoe, J. M., & Nikolopoulos, T. P. (2004). Cochlear implantation in children deafened by cytomegalovirus: Speech perception and speech intelligibility outcomes. Otology & Neurotology, 25, 479–482. Rance, G. (2014). In children with chronic otitis media, auditory training is a challenge. The Hearing Journal, 67(7), 18–20. Rebhorn, T., & Smith, A. (2008). LRE decision making (Module 15). Building the legacy: IDEA 2004 training curriculum. Washington, DC: National Dissemination Center for Children with Disabilities. Available from http://www. nichcy.org/training/contents .asp Red Book. (2006). 2006 report of the committee on infectious diseases. Elk Grove Village, IL: American Academy of Pediatrics. Regional and National Summary Report of Data from 2005–2006 Annual Survey of Deaf and Hard of Hearing Children and Youth. Washington, DC: Gallaudet University. Rehabilitation Act Amendments of 1992, P. L. 102–569. (1992). United States Statutes at Large, 106, 4344–4488. Rehabilitation Act of 1973, P. L. 93–112. (1973). United States Statutes at Large, 87, 355–394. Retherford, K. S., Schwartz, B. C., & Chapman, R. S. (1981). Semantic roles and residual grammatical categories in mother and child speech: Who tunes in to whom? Journal of Child Language, 8, 583–608. Rhoades, E. A. (2014). Adolescence through the lens of an auditory-verbal therapist with hearing differences. Volta Voices, 21(6), 10–13. Rhoades, E. A., & Duncan, J. (2017). Auditoryverbal practice: Family-centered early intervention (2nd ed). Springfield, IL: Charles C. Thomas. Rhoades, E. A., Estabrooks, W., Lim, S. R., & MacIver-Lux, K. (2016). Strategies for listen-

ing, talking, and thinking in auditory-verbal therapy. In W. Estabrooks, K. MacIver-Lux, & E. A. Rhoades (Eds.), Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (pp. 285–326). San Diego, CA: Plural Publishing. Rhoades, E. A., & MacIver-Lux, K. (2016). Parent coaching strategies in auditory-verbal therapy. In W. Estabrooks, K. MacIver-Lux, & E. A. Rhoades (Eds.), Auditory-verbal therapy: For young children with hearing loss and their families, and the practitioners who guide them (pp. 327–340). San Diego, CA: Plural Publishing. Rice, M. (2004). Growth models of developmental language disorders. In M. Rice & F. Warren (Eds.), Developmental language disorders: From phenotypes to etiologies (pp. 207–240). Mahwah, NH: Lawrence Erlbaum. Ricketts, T. A., Bentler, R., & Mueller, H. G. (2019). Essentials of modern hearing aids. San Diego, CA: Plural Publishing. Riesen, A. H. (1974). Studies of early sensory deprivation in animals. In C. Griffiths (Ed.), Proceedings of the International Conference on Auditory Techniques (pp. 33–38). Springfield, IL: Charles C Thomas. Rini, D., & Hindenlang, J. (2006). Family-centered practice for children with communication disorders. In R. Paul & P. Casaella (Eds.), Introduction to clinical methods in communication disorders (pp. 317–336). Baltimore, MD: Paul H. Brookes. Robbins, A. M., Green, J., & Waltzman, S. B. (2004). Bilingual oral language proficiency in children with cochlear implants. Archives of Otolaryngology–Head and Neck Surgery, 130, 644–647. Robbins, A. M., Koch, D. B., Osberger, M. J., Zimmerman-Philips, S., & Kishon-Rabin, L. (2004). Effect of age at cochlear implantation on auditory skill development in infants and toddlers. Archives of Otolaryngology–Head and Neck Surgery, 130(5), 570– 574. Robbins, A. M., Renshaw, J. J., & Berry, S. W. (1991). Evaluating meaningful integration in profoundly hearing impaired children

(MAIS). American Journal of Otolaryngology, 12(Suppl.), 144–150. Robertson, L. (2014). Literacy and deafness: Listening and spoken language (2nd ed.). San Diego, CA: Plural Publishing. Rocissano, L., & Yatchmink, Y. (1983). Language skill and interactive patterns in prematurely born toddlers. Child Development, 54, 1229–1241. Roeser, R. J., & Downs, M. P. (2004). Auditory disorders in school children: The law, identification, remediation (4th ed.). New York, NY: Thieme Medical. Rogers, C. (1951). Client-centered therapy. Boston, MA: Houghton Mifflin. Rosenberg, G. G., Blake-Rahter, P., Heavner, J., Allen, L., Redmond, B. M., Phillips, J., & Stigers, K. (1999). Improving classroom acoustics (ICA): A three-year FM sound field classroom amplification study. Journal of Educational Audiology, 7, 8–28. Ross, M. (2005). Fitting hearing aids and evidence-based audiology. Hearing Loss, 26(4), 30–33. Ross, M., Brackett, D., & Maxon, A. (1991). Assessment and management of mainstreamed hearing-impaired children. Austin, TX: Pro-Ed. Rossetti, L. (2001). Communication intervention: Birth to three (2nd ed.). San Diego, CA: Singular Publishing Group. Rossi, K. (2003). Learn around the clock: A professional’s early intervention toolbox. Washington, DC: Alexander Graham Bell Association for the Deaf and Hard of Hearing. Roush, J., & Kamo, G. (2019). Counseling and collaboration with parents of children with hearing loss. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 365–372). New York, NY: Thieme Medical. Sabbagh, M. A., & Moses L. J. (2006). Executive functioning and preschoolers’ understanding of false beliefs, false photographs, and false signs. Child Development, 77(4), 1034–1049. Sachs, J. (1983). The adaptive significance of linguistic input to prelinguistic infants. In

C. E. Snow & C. A. Ferguson (Eds.), Talking to children: Language input and acquisition. Cambridge, UK: Cambridge University Press. Sanford, C.A., & Feeney, M.P. (2019). Middle ear measurement in infants and children. In J. R. Madell, C. Flexer, J. Wolfe, & E.C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management 3rd ed. (pp. 109–119). New York, NY: Thieme Medical. Sarant, J., Harris, D., Bennet, L., & Bant, S. (2014). Bilateral versus unilateral cochlear implants in children: A study of spoken language outcomes. Ear and Hearing, 35(4), 396–409. Saxton, M. (2005). ‘Recasts’ in a new light: Insights for practice from typical language studies. Child Language Teaching and Therapy, 21, 23–38. Schafer, E. C., & Thibodeau, L. M. (2003). Speech-recognition performance in children using cochlear implants and FM systems. Journal of Educational Audiology, 11, 15–26. Schafer, E. C., & Wolfe, J. (2019). Remote microphone technologies. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 257–266). New York, NY: Thieme Medical. Schaub, A. (2009). Digital hearing aids. New York, NY: Thieme Medical. Schauwers, K., Gillis, S., & Govaerts, P. (2005). Language acquisition in children with a cochlear implant. In P. Fletcher & J. Miller (Eds.). Developmental theory and language disorders (pp. 95–120). Philadelphia, PA: John Benjamin. Schein, J. D., & Miller, M. H. (2010). Genetics and deafness: Implications for education and life care of students with hearing loss. Hearing Review, 17(4), 38–42. Scherer, N., & Olswang, L. (1984). Role of mothers’ expansions in stimulating children’s language production. Journal of Speech and Hearing Research, 27, 387–396. Schneider, W., Schumann-Hengsteler, R., & Sodian, B. (2005). Young children’s cognitive development: Interrelationships among

executive functioning, working memory, verbal ability, and theory of mind. Mahwah, NJ: Lawrence Erlbaum. Scholl, J. R. (2006). Unilateral hearing loss and babies: Finding the right mix. The Hearing Journal, 59(7), 47. Schum, D. J. (2010). Wireless connectivity for hearing aids. Advance for Audiologists, 12(2), 24–26. Scollie, S. (2006). Current issues in paediatric amplification: Roles for technology and for prescriptive methods. Paper presented at the Fourth Widex Congress of Paediatric Audiology, Ottawa, Canada. Seitz, S., & Stewart, C. (1975). Imitation and expansion: Some developmental aspects of mother-child conversation. Developmental Psychology, 11, 763–768. Serpanos, Y. C., Renne, B., Schoepflin, J. R., & Davis, D. (2018). The accuracy of smartphone sound level meter applications with and without calibration. American Journal of Speech-Language Pathology, 27, 1319– 1328. Sharma, A. (2013). Post-implantation developmental neuroplasticity. Plenary session presented at the 11th European Symposium on Paediatric Cochlear Implantation (ESPCI 2013), Istanbul, Turkey. Sharma, A., Dorman, M. F., & Kral, A. (2005). The influence of a sensitive period on central auditory development in children with unilateral and bilateral cochlear implants. Hearing Research, 203, 134–143. Sharma, A., Dorman, M. F., & Spahr, A. J. (2002). A sensitive period for the development of the central auditory system in children with cochlear implants: Implications for age of implantation. Ear and Hearing, 23(6), 532–539. Sharma, A., & Glick, H. (2018). Cortical neuroplasticity in hearing loss: Why it matters in clinical decision-making for children and adults. The Hearing Review, 25(7), 24. Sharma, A., Martin, K., Roland, P., Bauer, P., Sweeney, M. H., Gilley, P., & Dorman, M. (2005). P1 latency as a biomarker for central auditory development in children with

hearing impairment. Journal of the American Academy of Audiology, 16, 564–573. Sharma, A., & Nash, A. (2009). Brain maturation in children with cochlear implants. ASHA Leader, 14(5), 14–17. Sharma, A., Tobey, E., Dorman, M., Bharadwaj, S., Martin, K., & Gilley, P. (2004). Central auditory maturation and babbling development in infants with cochlear implants. Archives of Otolaryngology–Head and Neck Surgery, 130(5), 511–516. Shaywitz, S. E., & Shaywitz, B. A. (2004). Disability and the brain. Educational Leadership, 61(6), 7–11. Sheng, L., McGregor, K., & Xu, Y. (2005). Prosodic and lexical syntactic aspects of the therapeutic register. Clinical Linguistics and Phonetics, 17, 355–363. Sherrer, K. R. (1974). Acoustic concomitants of emotional dimensions: Judging affect from synthesized tone sequences. In J. Weitz (Ed.), Nonverbal communication (pp. 145– 180). Oxford, UK: Oxford University Press. Sheridan, D. J., & Cunningham, L. L. (2009). Music to my ears: Enhanced speech perception through musical experience. Audiology Today, 21(6), 56–57. Shpak, T., Most, T., & Luntz, M. (2013). Fundamental frequency information for speech recognition via bimodal stimulation: Cochlear implant in one ear and hearing aid in the other. Ear and Hearing, 35(1), 97–109. Siem, G., Fruh, A., Leren, T. P., Heimdal, K., Teig, E., & Harris, S. (2008). Jervell and Lange-Nielsen syndrome in Norwegian children: Aspects around cochlear implantation, hearing and balance. Ear and Hearing, 29(2), 261–269. Simmons, D. D. (2003). The ear in utero: An engineering masterpiece. Hearing Health, 19(2), 10–14. Sindrey, D. (2002). Listening games for littles (2nd ed.). London, Canada: WordPlay. Sininger, Y. S., Grimes, A., & Christensen, E. (2010). Auditory development in early amplified children: Factors influencing auditory-based communication outcomes in

children with hearing loss. Ear and Hearing, 31(2), 166–185. Sininger, Y. S., Martinez, A., Eisenberg, L., Christensen, E., Grimes, A., & Hu, J. (2009). Newborn hearing screening speeds diagnosis and access to intervention by 20–25 months. Journal of the American Academy of Audiology, 20(1), 49–57. Sjoblad, S., Harrison, J., & Roush, J. (2004). Parents’ experiences and perceptions regarding early hearing aid use. Volta Voices, 11(4), 8–9. Skinner, B. F. (1957). Verbal behavior. New York, NY: Appleton-Century-Crofts. Skinner, B. F. (1964). New methods and new dimensions in teaching. New Scientist, 22(392), 483–484. Sloutsky, V. M., & Napolitano, A. C. (2003). Is a picture worth a thousand words? Preference for auditory modality in young children. Child Development, 74, 822–833. Smaldino, J. (2004). Barriers to listening and learning in the classroom. Volta Voices, 11(4), 24–26. Smaldino, J. J., & Crandell, C. C. (Eds.). (2004). Classroom acoustics. Seminars in Hearing, 25(4), 113–206. Smaldino, J. J., & Flexer, C. (2012). Handbook of acoustic accessibility: Best practices for listening, learning, and literacy in the classroom. New York, NY: Thieme Medical. Smaldino, J., & Ostergren, D. (2012). Classroom acoustic measurements. In J. Smaldino & C. Flexer (Eds.), Handbook of Acoustic Accessibility: Best Practices for Listening, Learning and Literacy in the Classroom (pp. 34–54). New York, NY: Thieme Medical. Smart, J. L, Purdy, S. C., & Kelly, A. S. (2018). Impact of frequency modulation systems on behavioral and cortical auditory evoked potential measures of auditory processing and classroom listening in school-aged children with auditory processing disorder. Journal of the American Academy of Audiology, 29(7), 568–586. Smith, J. T., & Wolfe, J. (2014). Using contemporary ABR protocols to get accurate results. The Hearing Journal, 67(5), 36–40.

Smith, J. T., & Wolfe, J. (2018a). Updates on unilateral hearing loss. The Hearing Journal, 71(2), 18–20.
Smith, J. T., & Wolfe, J. (2018b). Ins, outs of frequency-lowering processing. The Hearing Journal, 71(10), 16–21.
Smith, J. T., & Wolfe, J. (2019). Highlights in 2018 pediatric audiology research. The Hearing Journal, 72(1), 14–17.
Smith, J. T., Wolfe, J., & Dettman, S. (2018). Compelling evidence supports early implantation. The Hearing Journal, 71(6), 36–38.
Smith, J., Wolfe, J., & Neumann, S. (2017). Maximizing research, audiology fundamentals in ANSD management. The Hearing Journal, 70(8), 32–39.
Snow, C. E. (1972). Mothers’ speech to children learning language. Child Development, 43, 549–565.
Snow, C. E. (1977). The development of conversation between mothers and babies. Journal of Child Language, 4, 1–22.
Snow, C. E. (1984). Parent-child interaction and the development of communicative ability. In R. L. Schiefelbusch & J. P. Pickar (Eds.), The acquisition of communicative competence (pp. 69–107). Baltimore, MD: University Park Press.
Snow, C. E., de Blauw, S., & van Roosmalen, G. (1979). Talking and playing with babies: The role of ideologies of childrearing. In M. Bullowa (Ed.), Before speech: The beginning of interpersonal communication (pp. 269–288). Cambridge, UK: Cambridge University Press.
Snow, C. E., & Gilbreath, B. J. (1983). Explaining transitions. In R. M. Golinkoff (Ed.), The transition from prelinguistic to linguistic communication (pp. 281–296). Hillsdale, NJ: Lawrence Erlbaum.
Snow, C. E., Midkiff-Borunda, S., Small, A., & Proctor, A. (1984). Therapy as social interaction: Analyzing the context of language remediation. Topics in Language Disorders, 4, 72–85.
Snow, C. E., Perlman, R., & Nathan, D. (1987). Why routines are different: Toward a multiple-factors model of the relation between input and language acquisition. In K. E. Nelson & A. Van Kleeck (Eds.), Children’s language (Vol. 6, pp. 65–97). Hillsdale, NJ: Lawrence Erlbaum.


Snow, C. E., Tabors, P., & Dickinson, D. (2001). Language development in the preschool years. In D. Dickinson & P. Tabors (Eds.), Beginning literacy with language. Baltimore, MD: Brookes.
Spangler, C. (2014). Positive “disruptions” for a new school year. Volta Voices, 21(4), 12–17.
Spiker, D., Boyce, G., & Boyce, L. (2002). Parent-child interactions when young children have disabilities. International Review of Research in Mental Retardation, 25, 35–70.
Spirakis, S. E. (2011). Auditory neuropathy spectrum disorder and hearing aids: Rethinking fitting strategies. Hearing Review, 18(11), 28–33.
Stach, B. A. (2008). Reporting audiometric results. Hearing Journal, 61(9), 10–16.
Stach, B. A., & Ramachandran, V. S. (2019). Hearing disorders in children. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 17–26). New York, NY: Thieme Medical.
Stredler-Brown, A. (2014). The importance of early intervention for infants and children with hearing loss. In J. R. Madell & C. Flexer (Eds.), Pediatric audiology: Diagnosis, technology, and management (2nd ed., pp. 297–307). New York, NY: Thieme Medical.
Stredler-Brown, A., & Johnson, D. C. (2003). Functional auditory performance indicators: An integrated approach to auditory development. Retrieved from http://www.csdb.org/chip/resources/docs/fapi6_23.pdk
Stern, D. N. (1974). Mother and infant at play: The dyadic interaction involving facial, vocal, and gaze behaviors. In M. L. Lewis & L. A. Rosenblum (Eds.), The effects of the infant on its caregiver (pp. 187–213). New York, NY: John Wiley and Sons.
Stern, D. N., Spieker, S., Barnett, R. K., & MacKain, K. (1983). The prosody of maternal speech: Infant age and context-related changes. Journal of Child Language, 10, 1–16.

Steuerwald, W., Windmill, I., Scott, M., Evans, T., & Kramer, K. (2018). Stories from the webcams: Cincinnati Children’s Hospital Medical Center audiology telehealth and pediatric auditory device services. American Journal of Audiology, 27, 391–402.
Strickland, D. S., & Shanahan, T. (2004). Laying the groundwork for literacy. Educational Leadership, 61(6), 74–77.
Strom, K. (2010). Why we should get looped. Hearing Review, 17(2), 8.
Sugarman, S. (1984). The development of communication: Its contribution and limits in promoting the development of language. In R. L. Schiefelbusch & J. P. Pickar (Eds.), The acquisition of communicative competence (pp. 23–67). Baltimore, MD: University Park Press.
Sun, W. (2009). The biological mechanisms of hyperacusis. ASHA Leader, 14(11), 5–6.
Suskind, D. (2015). Thirty million words: Building a child’s brain. New York, NY: Penguin Random House.
Suzuki, T., & Ogiba, Y. (1961). Conditioned orientation reflex audiometry. Archives of Otolaryngology, 74, 192–198.
Sweetow, R. W., Rosbe, K. W., Philliposian, C., & Miller, M. T. (2005). Considerations for cochlear implantation of children with sudden fluctuating hearing loss. Journal of the American Academy of Audiology, 16, 770–780.
Tallal, P. (2004). Improving language and literacy is a matter of time. Nature Reviews Neuroscience, 5, 721–728.
Tallal, P. (2005). Improving language and literacy. Paper presented at the Literacy and Language Conference of the Speech, Language, and Learning Center, New York, NY.
Tannock, R., & Girolametto, L. (1992). Reassessing parent-focused language intervention programs. In S. F. Warren & J. Reichle (Eds.), Causes and effects in communication and language intervention (pp. 49–80). Baltimore, MD: Paul H. Brookes.


Teagle, H. F. B., Henderson, L., He, S., Ewend, M. G., & Buchman, C. A. (2018). Pediatric auditory brainstem implantation: Surgical, electrophysiologic and behavioral outcomes. Ear and Hearing, 39(2), 326–336.
Tees, R. C. (1967). Effects of early auditory restrictions in the rat on adult pattern discrimination. Journal of Comparative and Physiological Psychology, 63, 389–393.
Tervoort, B. (1964). Development of language and “the critical period” in the young deaf child: Identification and management. Acta Otolaryngologica, 206(Suppl.), 247–251.
Thal, D., & Clancy, B. (2001). Brain development and language learning: Implications for nonbiologically based language learning disorders. Journal of Speech-Language Pathology and Audiology, 25, 52–76.
Tharpe, A. M. (2004). Who has time for functional auditory assessments? We all do! Volta Voices, 11(4), 10–12.
Tharpe, A. M. (2006). The impact of minimal and mild hearing loss on children. Paper presented at the Fourth Widex Congress of Paediatric Audiology, Ottawa, Canada.
Tharpe, A. M. (2008). Unilateral and mild bilateral hearing loss in children: Past and current perspectives. Trends in Amplification, 12(1), 7–15.
Tharpe, A. M. (2009). Closing the gap in EHDI follow-up. ASHA Leader, 14(4), 12–14.
Thibodeau, L. M., & Schaper, L. (2014). Benefits of digital wireless technology for persons with hearing aids. Seminars in Hearing, 35(3), 168–176.
Thoman, E. B. (1981). Affective communication as the prelude and context for language learning. In R. L. Schiefelbusch & D. D. Bricker (Eds.), Early language: Acquisition and intervention (pp. 181–200). Baltimore, MD: University Park Press.
Thomas, E., El-Kashlan, H., & Zwolan, T. A. (2008). Children with cochlear implants who live in monolingual and bilingual homes. Otology and Neurotology, 29(2), 230–234.
Thomson, V., & Yoshinaga-Itano, C. (2018). Audiologists key to EHDI programs. The Hearing Journal, 71(11), 8–9.

Tiido, M., & Stiles, D. (2013). Cochlear damage and otitis media. Audiology Today, 25(1), 62–64.
Timmer, B. (2013). It’s sync or stream! The differences between wireless hearing aid features. Hearing Review, 20(6), 20–22.
Tincoff, R., & Jusczyk, P. W. (2012). Six-month-olds comprehend words that refer to parts of the body. Infancy, 17, 432–444.
Tjellstrom, A. (2005). Baha in children: An overview. Bone-anchored applications. Goteborg, Sweden: Entific Medical Systems.
Tobey, E. (2010). The changing landscape of pediatric cochlear implantation: Outcomes influence eligibility criteria. ASHA Leader, 12(2), 10–13.
Tobey, E. (2013). Parental involvement. Session presented at the 11th European Symposium on Paediatric Cochlear Implantation (ESPCI 2013), Istanbul, Turkey.
Tomasello, M. (2008). The origins of human communication (pp. 57–72). Cambridge, MA: MIT Press.
Tomblin, J. B., Harrison, M., Ambrose, S. E., Walker, E. A., Oleson, J. J., & Moeller, M. P. (2015). Language outcomes in children with mild to severe hearing loss. Ear and Hearing, 36, 76S–91S.
Tomlin, D., & Rance, G. (2014). Long-term hearing deficits after childhood middle ear disease. Ear and Hearing, 35(6).
Topping, K., Dekhinet, R., & Zeedyk, S. (2013). Parent-infant interaction and children’s language development. Educational Psychology, 33(4), 391–426.
Trevarthen, C. (1977). Descriptive analyses of infant communicative behavior. In H. R. Schaffer (Ed.), Studies in mother-infant interaction (pp. 227–270). London, UK: Academic Press.
Tye-Murray, N. (2003). Conversational fluency of children who use cochlear implants. Ear and Hearing, 24(Suppl. 2), 82S–89S.
Tyler, R. S., Parkinson, A. J., Wilson, B. S., Witt, S., Preece, J. P., & Noble, W. (2002). Patients utilizing a hearing aid and a cochlear implant: Speech perception and localization. Ear and Hearing, 23, 98–105.


Uchanski, R. M., & Geers, A. (2003). Acoustic characteristics of the speech of young cochlear implant users: A comparison with normal hearing age mates. Ear and Hearing, 24(Suppl. 1), 90S–105S.
Valente, L. M. (2011). Vestibular issues with profound hearing loss. In J. R. Madell & C. Flexer (Eds.), Pediatric audiology casebook (pp. 150–152). New York, NY: Thieme Medical.
Van Camp, G. (2006). Deafness in children: The role of genes unraveled. Paper presented at the Fourth Widex Congress of Paediatric Audiology, Ottawa, Canada.
VanDam, M., Ambrose, S. E., & Moeller, M. P. (2012). Quantity of parental language in the home environments of hard-of-hearing 2-year-olds. Journal of Deaf Studies and Deaf Education, 17(4), 402–420.
Vickers, D. A., Backus, B. C., Macdonald, N. K., Rostamzadeh, N. K., Mason, N. K., Pandya, R., . . . Mahon, M. H. (2013). Using personal response systems to assess speech perception within the classroom: An approach to determine the efficacy of sound field amplification in primary school classrooms. Ear and Hearing, 34(4), 491–502.
Vouloumanos, A., & Werker, J. F. (2007). Listening to language at birth: Evidence for a bias for speech in neonates. Developmental Science, 10(2), 159–164.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Wallber, J. (2009). Audiologists’ role in early diagnosis of Usher syndrome. ASHA Leader, 14(16), 5–6.
Walker, B. A., Horn, D. L., & Rubinstein, J. T. (2019). Medical management of hearing loss in children. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 41–54). New York, NY: Thieme Medical.
Waltzman, S. (2005). Expanding patient criteria for cochlear implantation. Audiology Today, 17(5), 20–21.
Wang, Y., Shafto, C. L., & Houston, D. M. (2018). Attention to speech and spoken language in deaf children with cochlear implants: A 10-year longitudinal study. Developmental Science, 21, 1–12. Retrieved from https://onlinelibrary.wiley.com/doi/epdf/10.1111/desc.12677

Wanska, S. K., & Bedrosian, J. L. (1985). Conversational structure and topic performance in mother-child interaction. Journal of Speech and Hearing Research, 28, 579–584.
Warren, S., & Yoder, D. (1998). Facilitating the transition from preintentional to intentional communication. In A. Wetherby, S. Warren, & J. Reichle (Eds.), Transitions in prelinguistic communication (pp. 365–384). Baltimore, MD: Paul H. Brookes.
Weber, P., Bluestone, C. D., & Perez, B. (2003). Outcome of hearing and vertigo after surgery for congenital perilymphatic fistula in children. American Journal of Otolaryngology, 24(3), 138–142.
Weismer, S. (2000). Intervention for children with developmental language delay. In D. Bishop & L. Leonard (Eds.), Speech and language impairments in children: Causes, characteristics, intervention and outcome (pp. 157–176). New York, NY: Psychology Press.
Weizman, Z., & Snow, C. (2001). Lexical input as related to children’s vocabulary acquisition: Effects of sophisticated exposure and support for meaning. Developmental Psychology, 37, 265–279.
Werker, J. (2006). Infant speech perception and early language acquisition. Paper presented at the Fourth Widex Congress of Paediatric Audiology, Ottawa, Canada.
Werker, J. (2012). Perceptual foundations of bilingual acquisition in infancy. Annals of the New York Academy of Sciences, 1251, 50–61.
Werker, J. F. (2018). Perceptual beginnings to language acquisition. [Target article with peer commentaries]. Applied Psycholinguistics, 39(4), 703–728.
Werker, J. F., & Byers-Heinlein, K. (2008). Bilingualism in infancy: First steps in perception and comprehension. Trends in Cognitive Sciences, 12(4), 144–151.
Westby, C. (2017). Keep this theory in mind. The ASHA Leader, 22(4), 18–20.


White, K. R., & Munoz, K. (2019). Newborn hearing screening. In J. R. Madell, C. Flexer, J. Wolfe, & E. C. Schafer (Eds.), Pediatric audiology: Diagnosis, technology, and management (3rd ed., pp. 57–64). New York, NY: Thieme Medical.
Williams, C. (2003). The children’s outcome worksheets (COW): An outcome measure focusing on children’s needs (ages 4–12). News from Oticon. Retrieved from http://www.oticon.com
Windmill, S., & Windmill, I. M. (2006). The status of diagnostic testing following referral from universal newborn hearing screening. Journal of the American Academy of Audiology, 17, 367–378.
Wingert, P., & Brant, M. (2005). Reading your baby’s mind. Newsweek, 15, 33–39.
Wolfe, J. (2019). Cochlear implants: Audiologic management and considerations for implantable hearing devices. San Diego, CA: Plural Publishing.
Wolfe, J., Morais, M., Schafer, E., Millis, E., Mulder, H. E., Goldbeck, F., & Lianos, L. (2013). Evaluation of speech recognition of cochlear implant recipients using a personal digital adaptive radio frequency system. Journal of the American Academy of Audiology, 24(8), 714–724.
Wolfe, J., & Schafer, E. (2015). Programming cochlear implants (2nd ed.). San Diego, CA: Plural Publishing.
Wolfe, J., & Scholl, J. R. (2009). Ensuring well-fitted earmolds for infants and toddlers. The Hearing Journal, 62(10), 56.
Wolfe, J., & Smith, J. (2017). Aspiring for new heights from a sound foundation. The Hearing Journal, 70(2), 12–18.
Wright, P. W. D., & Wright, P. D. (2006). Wrightslaw: From emotions to advocacy. The special education survival guide. Hartfield, VA: Harbor House Law Press.
Wright, P. W. D., & Wright, P. D. (2007). Wrightslaw: Special education law (2nd ed.). Hartfield, VA: Harbor House Law Press.
Wright, P. W. D., Wright, P. D., & O’Connor, S. W. (2010). Wrightslaw: All about IEPs. Hartfield, VA: Harbor House Law Press.

Wu, C. C., Chen, Y. S., Chen, P. J., & Hsu, C. J. (2005). Common clinical features of children with enlarged vestibular aqueduct and Mondini dysplasia. Laryngoscope, 115, 132–137.
Yoder, P., McCathren, R., Warren, S., & Watson, A. (2001). Important distinctions in measuring maternal responses to communication in prelinguistic children with disabilities. Communication Disorders Quarterly, 22, 135–147.
Yoder, P., & Warren, S. (1998). Maternal responsivity predicts the prelinguistic communication intervention that facilitates generalized intentional communication. Journal of Speech, Language, and Hearing Research, 41, 1207–1219.
Yoder, P., & Warren, S. (2001). Prelinguistic milieu teaching. In H. Goldstein, L. Kaczmarek, & K. English (Eds.), Promoting social communication: Children with developmental disabilities from birth to adolescence (pp. 98–135). Baltimore, MD: Paul H. Brookes.
Yoder, P., & Warren, S. (2002). Effects of prelinguistic milieu teaching and parent responsivity education on dyads involving children with intellectual disabilities. Journal of Speech, Language, and Hearing Research, 45, 1158–1175.
Yoshinaga-Itano, C. (1998). Early identification and intervention: It does make a difference. Audiology Today, 10(Suppl. 11), 20–22.
Yoshinaga-Itano, C. (2004). Issues and outcomes of early intervention. Presentation at the Consensus Conference on Early Intervention, Washington, DC.
Yoshinaga-Itano, C., Sedey, A. L., Coulter, D. K., & Mehl, A. L. (1998). Language of early- and later-identified children with hearing loss. Pediatrics, 102, 1161–1171.
Young, A., & Tattersall, H. (2007). Universal newborn hearing screening and early identification of deafness: Parents’ responses to knowing early and their expectations of child communication development. Journal of Deaf Studies and Deaf Education, 12(2), 209–220.


Zeng, F. G., Popper, A., & Fay, R. (Eds.). (2004). Cochlear implants: Auditory prosthesis and electric hearing. New York, NY: Springer-Verlag.
Zhang, T., Spahr, A. J., & Dorman, M. F. (2010). Frequency overlap between electric and acoustic stimulation and speech-perception benefit in patients with combined electric and acoustic stimulation. Ear and Hearing, 31(2), 195–201.
Zimmerman-Phillips, S., Osberger, M. F., & Robbins, A. M. (1997). Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS). Sylmar, CA: Advanced Bionics.

Zupan, B., & Sussman, J. E. (2009). Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli. Journal of Communication Disorders, 42, 381–396.
Zwolan, T. A., Ashbaugh, C. M., Alarfaj, A., Kileny, P. R., Arts, H. A., El-Kashlan, H. K., & Telian, S. A. (2004). Pediatric cochlear implant patient performance as a function of age at implantation. Otology and Neurotology, 25(2), 112–120.
Zwolan, T. A., & Sorkin, D. L. (2017). Expanded cochlear implant candidacy. The ASHA Leader, 22(3), 14–15.

Index

Note: Page numbers in bold reference non-text material.

A AAA (American Academy of Audiology), 75 Abbreviations, used by audiologists, 102–103 ABEL (Auditory Behavior in Everyday Life), 156 ABI (Auditory brain implant), 154–155 Abnormalities, ossicular, 51 ABR (Auditory brainstem response) test, 86–87, 101 Acknowledgments, 239–240 Acoustic feedback, infant hearing aids and, 116 filter, 31 neuroma, 60 spectrum, detection of, 107–112 trauma, 55 Acoustical Guidelines for Classroom Noise and Reverberation, 109 Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, 109–110 Acoustics, 22–29 speech connection, 186–190 Acquired hearing loss, 41 causes of, 42 Acquisition of language behaviorist view of, 218 components of, 6 development of, target determining/ sequencing, 267 domains, 339–341 English as a second language, children with hearing loss and, 177–181

rule system of, 216–218 home environment and, 6 intonational melody of, 188 listening time and, 192 nativist view of, 218 skills in children, theory of mind and, 200 Active-Empathic Listening Scale, 250 Acute otitis media (AOM), 48, 50 ADD (Attention deficit disorder), 172 ADHD (Attention-deficit hyperactivity disorder), 172 Adults AVT (Auditory-verbal therapy), guidance sessions, 279–287 child communication, 226–228 turn-taking conventions, 231–233 with hearing loss, emotional impact of, 247–253 listening abilities 204 helping children listen, 201–204 intervention and, 163 learning and, 253–255 motherese and, 235 immaterial or facilitative, 243–244 length of use, 242 reasons for, 242 relationship with LSL practitioner, 268 talk characteristics and, 234–242 content, 235–236 negotiation of meaning, 239 participation-elicitors, 239–240 prosody, 236–237 repetition, 238–239 responsiveness, 240–241


Affective relationship, 228–229 AI (Articulation index), 120–121 Aided or CI thresholds test, 98 Aided sound-field thresholds, 124 ALD (Assistive listening device), 133 Alexander Graham Bell Academy, Listening and Spoken Language Specialists (LSLS), 164 Association for the Deaf and Hard of Hearing, 75, 113, 181 Allergies, otitis media and, 49 Alport’s syndrome, 44 American Academy of Audiology (AAA), 75, 143–144 American Academy of Pediatrics, 50 recommendations on avoiding OME, 50 American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools, 109 American National Standards Institute (ANSI), 109 Aminoglycosides, as ototoxic drug, 58 Amplification classroom, 140 for infants/children, 112–155 hearing aids, 112–130 not for infants/children, 113 technologies, 31 Anger, grief process and, 249 Anoxia, 57 ANSD (Auditory neuropathy spectrum disorder), 64–66 ANSI/ASA S12.60-2010, 109 Anterior semicircular system, 33 Antimalarial, as ototoxic drug, 58 Antioxidants, hearing loss and, 55 Anxiety, grief process and, 249 AOM (Acute otitis media), 48, 50 Aplasia auditory nerve, 154 cochlear nerve, 149, 152 Michel, 46 Articulation index (AI), 120–121 ASD (Autistic spectrum disorder), 95 Aspirin, ototoxic hearing loss and, 58–59 Assessment audiologic diagnostic, 75–89 evaluation, goals of, 50

of infants/children, 75–89 test protocols, 77–79 auditory tools, 156–157 cortical audiology evoked potential (CAEP), 64 Assistive listening device (ALD), 133 ASSR (Auditory steady-state response) test, 86–87 Atresia, 51–52 ear inserts and, 74 Attention joint, 230–231 selective, 210 Attention-deficit disorder (ADD), 172 Attention-deficit hyperactivity disorder (ADHD), 172 Attentive auditory behaviors, 81–82 Audibility vs. intelligibility in speech, 27 Audio brainstem response (ABR) test, 86–87 Audiogram, 89–95 configuration (pattern) of thresholds on, 92–95 pure-tone, 90 speech spectrum on, 121 Audiologic diagnostic assessment evaluation, goals of, 50 of infants/children, 75–89 test protocols, 77–79 Audiologic tests, 98 pediatric, 99 summary of, 98 Audiometer, 74 Audiometric test environment room, 76 Audio-verbal therapy, location of, 279–280 Audiovestibular, structures, 29 Audition primacy of, 184–186 programs that emphasize, 166 Auditory assessment tools, 156–157 brain development, newborns, 6–7 processing models, 195–198 skills, 195–198 behaviors, attentive/reflective, 81–82 brain access to speech signal, 109 brain implant (ABI), 154–155 brainstem response testing (ABR), 101


centers, brain, sensory stimulation of, 8 development delayed, 10–11 target determining/sequencing, 267 dyssynchrony, 64 feedback loop, 138–139 learning, targets for, 293–305 nerve, aplasia, 154 neural development, 8–11 neuropathy, 64 strategies, for everyday use, 275 system, 29 location of damage in, 37 teaching/learning examples of, 204–210 vestibular nerve, 33 Auditory Behavior in Everyday Life (ABEL), 156 Auditory-linguistic information information, 197 processing, 196 learning targets, 210–211 Auditory neuropathy spectrum disorder (ANSD), 64–66 digital vs. linear hearing aids and, 118 Auditory steady-state response (ASSR) test, 86–87 Auditory-verbal intervention child with hearing loss and, 217 direct access to, 261 noise and, 168, 257 play centers and, 320 preschool and, 321 practice, described, 4 Auditory-verbal education (AVEd), 4, 329–330 Auditory-verbal therapy (AVT), 4, 177 preplanned guidance sessions, 279–287 agenda, ` goals of, 280–281 limits of preplanned teaching, 285–287 location of, 279–280 sample scenario, 282–285 Aural habilitationists, 164 Australian Hearing Services Longitudinal Outcomes of Children with Hearing Loss (LOCHI), 64

Autistic spectrum disorder (ASD), 95 AVEd (Auditory-verbal education), 4, 329–330 AVT (Auditory-verbal therapy), 4, 177 preplanned guidance sessions, 279–287 agenda, 279–280 goals of, 280–281 limits of preplanned teaching, 285–287 location of, 279–280 sample scenario, 282–285

B Background noise, 203 distance hearing/incidental learning and, 108–109 Bacterial infections, hearing loss and, 55–57 meningitis, 42, 56, 149 BAI (Bone anchored hearing aid implant), 52 Balance evaluation of, 66 vestibular system and, 33–34 Battery basic test, 98 life, 127 tester for hearing aids, 126 Beebe, Helen, 191 Behavioral observation audiometry (BOA), 79–82, 99, 102 Behavioral state, 80 Behaviorist, view of language acquisition, 218 Behind-the-ear (BTE) hearing aid, 113–114 Belief, false, 200 Bilateral cochlear implants, 150–151 vestibular labyrinthine dysfunction, 66 Bimodal fittings, cochlear implants and, 151 Blindness, 172 Blood tests, infections and, 56 Bluetooth®, 134 BOA (Behavioral observation audiometry), 79–82, 99, 102 Body-worn hearing aids, 115


Bone anchored hearing aid implants (BAI), 52, 130–132 conduction hearing devices, 52 Bonnets, hearing aids and, 117 Boystown National Hospital, 44 Brain auditory centers, sensory stimulation of, 8 development auditory, newborns, 6–7 through daily routines, 289–290 ear as doorway to, 16 imaging, advances in, 218 Brüel and Kjaer Rapid Speech Transmission Index (RASTI), 109 BTE (Behind-the-ear) hearing aid, 113–114

C CADS (Classroom audio distribution system), 110, 139–142 benefits from, 140–141 small group area, 143 system selection, 141–142 CAEP (Cortical auditory-evoked potential), 119–120 assessment, 64 Canal earmold, 128 CAPD (Central auditory processing disorder), 62–64 digital vs. linear hearing aids and, 118 Caps, hearing aids and, 117 Care kits, for hearing aids, 126 Caregiver effectiveness, 262–263 maximizing effectiveness of, 259–267 background/rationale for, 259–263 discussing with parents, 264–265 interactional targets, 265–267 interactive sample, 263–264 structure of, 263 motherese and, 235 immaterial or facilitative, 243–244 length of use, 242 reasons for use, 242 talk characteristics and, 234–242 content, 235–236 negotiation of meaning, 239

participation-elicitors, 239–240 prosody, 236–237 responsiveness, 240–241 semantics/syntax, 237–238 Carolina Curriculum for Infants and Toddlers, 267 CAT (Computer axial tomography), advances in, 218 Cavity, tympanic, 32 abnormalities, conductive hearing loss and, 44 structures, 29 ear infection audiologic diagnosis/treatment of, 50–51 hearing loss and, 47–51 high incidence populations, 49 Centers for Disease Control and Prevention, immunization practices of the, 50 Central auditory processing disorder (CAPD), 62–64 digital vs. linear hearing aids and, 118 Central auditory system, 29 peripheral auditory and, 31 Central hearing loss, 61 nervous system, 33 Cerebral palsy, 172 Cerumen, impaction, 52–53 CHARGE syndrome, 46 Checklist, preschool group settings, 317–321 Chicken pox, 57 teaching to imitate, 276–277 CHILD (Children’s Home Inventory for Listening Difficulties), 156 Children ABI and, 154 amplification for, 112–155 audiologic diagnostic assessment, 75–89 behavioral tests, 79–80 test protocols, 77–79 development of, target determining/ sequencing, 267 educational options for, with hearing loss, 175–181 everyday routines of, 258 hearing aids, 112–130


behind-the-ear, 113–114 body-worn, 115 brain waves and, 192 in-the-canal, 114–115 completely-in-the-canal, 115 in-the-ear (ITE) hearing aid, 114 new context for, 113 points to consider, 123–130 receiver-in-the-canal, 114 receiver-in-the-ear, 114 with hearing loss emotional impact of, 247–253 intervention, 256–257 preschool and, 177–181 helping to listen, 201–204 interactional abilities development, 229–234 intervention, 256–257 language skills, theory of mind and, 200 learning to talk, 218–219 listening abilities, parents and, 204 time and, 192 multilingual environment, 174–175 multiple handicapped, 172–173 parent communication, 226–228 turn-taking conventions, 231–233 parents and, affective relationship, 228–229 persistent noise and, 173–174 singing with, 189 teaching to imitate, 276–277 Children’s Home Inventory for Listening Difficulties (CHILD), 156 Children’s Outcome Worksheet (COW), 156 Cholesteatoma, 53 Chromosome 13, 43 CIC (Completely-in-the-canal) hearing aid, 115 Classroom amplification, 140 Classroom audio distribution system (CADS), 110, 139–142 benefits from, 140–141 small group area, 143 system selection, 141–142 Cleft palate, otitis media and, 49 CMV (Cytomegalovirus) average number of words spoken to, 8

blood tests and, 56 with CAPD, 62–63 congenital, 49 later-onset hearing loss and, 73 postmeningitic, 56 CNT (Could not test), 102 Cochlea, 32 Cochlea ossification, ABI and, 154 Cochlear America, HOPE, 197 Cochlear implants, 12, 144–154 ABI and, 154 ANSD and, 65 benefits of, 148–149 bilateral implants, 150–151 bimodal fittings, 151 candidacy for, 146–148 EAS and, 152 hybrid, 152 illustrated, 145 late to full-time wearing of, 168–172 listening time and, 195 purpose of, 8 reception of speech and, 191 remote microphone (RM) and, 152–153 remote programming of, 153–154 SSD (Single-sided deafness), 152 Cochlear nerve aplasia, 152 Cognitive problems, 172 Cold sores, 57 Collapsed ear canals, 51 Communication, parent and child, 226–228 Completely-in-the-canal (CIC) hearing aid, 115 Compression, frequency, 122 Computer axial tomography (CAT), advances in, 218 Conditioned play audiometry (CPA), 79–80, 83–84, 100, 102 Conductive hearing loss, 37 ear infection and, 49 middle ear abnormalities and, 44 pathologies of hearing loss, 47 Congenital abnormalities, otitis media and, 49 CMV (Cytomegalovirus), 49 hearing loss, 41 causes of, 42


Connectivity issues of, 132–133 wireless, 132 Connexin 26, 43 Consonants sounds of, 27–28, 57, 96, 278 high frequencies and, 129 Ling 6-7 Sound Test and, 28 Content development, language, 219 Contralateral routing of signal hearing aids, 152 Conversations, conventions of, 215–216 Cortical auditory evoked potential (CAEP), 119–120 assessment, 64 Cottage Acquisition Scales for Listening, Language, and Speech, 267 Could not test (CNT), 102 COW (Children’s Outcome Worksheet), 156 CPA (Conditioned play audiometry), 79–80, 83–84, 100, 102 Cranial nerve, 33 Critical questions, 5 Critter clips, hearing aids and, 117 CROS hearing aids, 152 Cross-modal reorganization, 11 Crouzon’s syndrome, 45 Cumulative practice, 10 Custom mini-BTE earmold, 128 open earmold, 128 Cycles per second, 24 low frequency, 24 Cytomegalovirus (CMV), 56–57 average number of words spoken to, 8 blood tests and, 56 with CAPD, 62–63 congenital, 49 later-onset hearing loss and, 73 postmeningitic, 56

D Daily routines, brain development through, 289–290 Data input analogy, 29–31

logging, 123 Day care centers, otitis media and, 49 Deaf, new context for, 12–13 Deaf Child Bill of Rights, 181 Deafness ambiguity of, 96–97 average, 25 central, 61 cerumen impaction and, 52 children, intervention, 256–257 classifications of, 36–42 minimal, 36–37 profound, 36–37 conductive, 37 ear infection and, 49 middle ear abnormalities and, 44 diagnosing audiogram, 89–95 audiologic diagnosis assessment, 75–89 formulating a differential diagnosis, 95–97 measuring hearing distance, 97 newborn screening, 70–74 test equipment/environment, 74–75 disabilities in addition to, 172–173 EDHI programs, 70–74 effect of, on reception of speech, 190–191 environmental causes of, 42 functional, 62 general causes of, 42 genetic testing and, 43 hearing screening, 70–74 infants, intervention, 256–257 as invisible acoustic filter, 13 medical aspects, 47–66 acoustic neuroma, 61 anoxia, 57 atresia, 51–52 bacterial meningitis, 56 cerumen, 52–53 cholesteatoma, 53 conductive pathologies of, 47 cytomegalovirus, 56–57 ear canal collapse, 51 ear infections, 48–49 functional, 62


LVAS, 59 mixed, 61 noise-induced, 55 objects in ear canal, 53 otitis media, 47–48, 50–51 OME medical diagnosis, 49–50 ossicular abnormalities, 51 otitis externa, 53 ototoxic, 58–59 PLF, 59–60 progressive, 61 Rh incompatibility, 60 sensorineural pathologies, 54–55 tinnitus, 60–61 tympanic membrane perforation, 53 viral/bacterial infections, 55–56 mild, 39–40 minimal, 37–38 mixed, 61 model of, 13–14 moderately severe, 40–41 noise-induced, 55 ototoxic, 58–59 profound, 36–37, 41 progressive, 61–62 sensorineural, 37 causes of, 54 pigmentation abnormalities and, 44 pathologies and, 54–55 severe, 41 types of peripheral, 36 unilateral in infants, 38 in newborns, 38–39 late-onset, 73 viral/bacterial infections and, 55–57 Decibel, 27 described, 25 Decisions, intervention revelance for, 220–221 Delayed hereditary sensorineural hearing loss, 42 Denial, grief process and, 248 Depression, grief process and, 249 Desired sensation level (DSL), 119 Detection, of acoustic spectrum, 107–112 Developmental disabilities, 172 Developmental synchrony, 10

Did not test (DNT), 102 Differential diagnosis, formulating, 95–97 Diffusion image scanning, 218 Digital hearing aids, 118–123 background noise and, 203 RM systems, 134–135 Direct bone conduction, 131 Directional microphones, 138 Direct-to-the consumer, hearing aids, 113 Discourse, conventions of, 215–216 Discrimination task, 207 Distance, frequency and, 25 Distance hearing incidental learning and, 107–108 measuring, 97 learning, 107 listening, 205 DNT (Did not test), 102 Dose, high, described, 58 Down syndrome, otitis media and, 49 Drugs, ototoxic, 58 Dry and store kit, for hearing aids, 126 DSL (Desired sensation level), 119 Duration of speech, 189–190 Dysplasias inner ear, 46 Michel, 46 Mondini, 46 Scheibe, 46

E Ear, 29–30 canals, 32 collapsed, 51 objects in, 53 stenotic, 52 as doorway to the brain, 16 drum, 32 gear, hearing aids and, 117 infections, 48–51 audiologic diagnosis/treatment of, 50–51 hearing loss and, 47–51 high incidence populations, 49 with effusion (OME), 48–50


Ear  (continued) inner ear, 32–33 dysplasias, 46 middle, 32 abnormalities, conductive hearing loss and, 44 structures, 29 outer, 32 abnormalities, conductive hearing loss and, 44 ear infection, 47–51 structures, 29 ear infection, 47–51 peripheral portion of, 30 pulling on, otitis media and, 50 Early hearing detection and intervention (EHDI), programs, 12, 70–74 Early intervention, importance of, 31 Early Listening Function (ELF), 156 Earmolds described, 127–129 illustrated, 128 infant hearing aids and, 116 otitis externa and, 53 Earshot, 107 Earwax, impaction, 52–53 EAS (Electric-acoustic stimulation), cochlear implants and, 152 ECMO (extracorporeal membrane oxygenation), later-onset hearing loss and, 73 EDHI (Early hearing detection and intervention), programs, 12, 70–74 Educational opportunities, for children, 175–181 Efficacy of intervention, 246–247 measuring fittings, 155–158 for school systems, 155 EHDI (Early hearing detection and intervention), 12 E-health, 153 Eighth nerve tumor, 60 Electric-acoustic stimulation (EAS), cochlear implants and, 152 Electrophysiologic tests, 85–87 ELF (Early Listening Function), 156 Embellished interacting, 271 incidental, 270–271

listening, speech, 275 teaching listening, 274–275 speech, 275 teaching through, 271–274 listening, 274–275 spoken language, 271–274 Embellishing, Empathic responses, 251 Endogenous hearing loss, 42 English as a second language, children with hearing loss and, 177–181 rule system of, 216–218 Environmental causes of hearing loss, 42 Environments background noise, 203 listening/learning, 107 multilingual, 174–175 persistent noise in, 173–174 Epstein-Barr virus, 57 Equilibrium, 66 vestibular system and, 33–34 Equipment, efficacy for school system, 155, 157 Eustachian tubes, functions of, 48 Everyday routines, of children, 258 Examination room, audiometric test, 76 Executive functions described, 201 theory of mind (ToM) and, 198–201 Exogenous of hearing loss, 42 Expansions, 239–240 of adult utterances, 240 Extracorporeal membrane oxygenation (ECMO), later-onset hearing loss and, 73

F False belief, 200 Family average number of words spoken to children, 8 child with hearing loss, emotional impact of, 247–253 desired outcome, 5 FAPIs (Functional Auditory Performance Indicators), 156


Feedback auditory loop, 138–139 described, 127 Fittings bimodal, cochlear implants and, 151 hearing aids validated functionally, 123 verified electroacoustically, 123 measuring efficacy of, 155–158 Flexibility, programmable hearing aids for, 126 Fluent conversation and interaction, 226–228 FM (radio frequency modulated), wireless delivery by, 136 fMRI (Functional magnetic resonance imaging), advances in, 218 Form development, language, 219 Framework explanation for items on, 307–315 maximizing caregiver effectiveness, 259–267 background/rationale for, 259–263 discussing with parents, 264–265 interactional targets, 265–267 interactive sample, 263–264 structure of, 263 Frequencies, key speech, 26 Frequency compression, 122 defined, 24 distance and, 25 lowering, 122 physical parameter of, 23–24 response, hearing aids and, 118 speech and, 188 transposition, 122 Froeschels, Emil, 191 Full shell earmold, 128 Functional Auditory Performance Indicators (FAPIs), 156 Functional hearing loss, 62 Functional magnetic resonance imaging (fMRI), advances in, 218

G Gallaudet Research Institute, 172 Ganciclovir, 57

Gaspard, Jean Marc, 191 Gaze, terminal, 232 Genetics, testing, hearing loss and, 43 Genotype, described, 44 Goldstein, Max, 191 Grief, child with hearing loss and, 247–253 Grieving process theory, 248 steps in, 248–249 Griffiths, Ciwa, 191–192 Gross motor development, vestibular issues and, 34 Group settings, preschool, checklist for, 317–321 Guidelines for Remote Microphone Hearing Assistance Technologies for Children and Youth From Birth to 21 Years, 143–144 Guilt, grief process and, 248

H Haemophilus influenzae, 56 Hands and Voices, 181 Harmonic motion, 23 Hard of hearing ambiguity of, 96–97 average, 25 central, 61 cerumen impaction and, 52 children, intervention, 256–257 classifications of, 36–42 minimal, 36–37 profound, 36–37 conductive, 37 ear infection and, 49 middle ear abnormalities and, 44 diagnosing audiogram, 89–95 audiologic diagnosis assessment, 75–89 formulating a differential diagnosis, 95–97 measuring hearing distance, 97 newborn screening, 70–74 test equipment/environment, 74–75 disabilities in addition to, 172–173 EDHI programs, 70–74 effect of, on reception of speech, 190–191


Hard of hearing  (continued) environmental causes of, 42 functional, 62 general causes of, 42 genetic testing and, 43 hearing screening, 70–74 infants, intervention, 256–257 as invisible acoustic filter, 13 medical aspects, 47–66 acoustic neuroma, 61 anoxia, 57 atresia, 51–52 bacterial meningitis, 56 cerumen, 52–53 cholesteatoma, 53 conductive pathologies of, 47 cytomegalovirus, 56–57 ear canal collapse, 51 ear infections, 48–49 functional, 62 LVAS, 59 mixed, 61 noise-induced, 55 objects in ear canal, 53 otitis media, 47–48, 50–51 OME medical diagnosis, 49–50 ossicular abnormalities, 51 otitis externa, 53 ototoxic, 58–59 PLF, 59–60 progressive, 61 Rh incompatibility, 60 sensorineural pathologies, 54–55 tinnitus, 60–61 tympanic membrane perforation, 53 viral/bacterial infections, 55–56 mild, 39–40 minimal, 37–38 mixed, 61 model of, 13–14 moderately severe, 40–41 noise-induced, 55 ototoxic, 58–59 profound, 36–37, 41 progressive, 61–62 sensorineural, 37 causes of, 54 pigmentation abnormalities and, 44 pathologies and, 54–55

severe, 41 types of peripheral, 36 unilateral in infants, 38 in newborns, 38–39 late-onset, 73 viral/bacterial infections and, 55–57 Harvard Medical School Center for Hereditary Deafness, 44 HATs, for infants/children, 133 Head, rolling, otitis media and, 50 Headbands, hearing aids and, 117 Hearing age, 192–195 defined, 15 distance, measuring, 97 functions of, 20 language development and, 14 vs. listening, 13 measuring distance, 97 normal, 36 processes of, 29 speech and, 185–186 use of residual, 191–192 Hearing aids, 112–130 background noise and, 203 behind-the-ear, 113–114 body-worn, 115 care kits for, 126 characteristics of, 23 children brain waves and, 192 points to consider, 123–130 CROS, 152 digital, 118–123 dry and store kit for, 126 ear canal cleansing and, 52 earmolds, 116 fitting validated functionally, 123 verified electroacoustically, 123 infants fitting, 116–117 full-time wearing, 168–172 in-the-canal, 114 in-the-ear (ITE), 114 late to full-time wearing of, 168–172 moisture and, 126 new context for, 113


otitis externa and, 53 programmable for flexibility, 126 receive-in-the-canal, 114 reception of speech and, 191 retention devices, 117 RM systems and, 126 stethoscope, 126 styles/use of, 113–115 water resistant, 126 Hearing First, 197 Hearing First website, Logic chain white paper, 4 Hearing instruments. See Hearing aids Hearing loss ambiguity of, 96–97 average, 25 central, 61 cerumen impaction and, 52 children, intervention, 256–257 classifications of, 36–42 minimal, 36–37 profound, 36–37 conductive, 37 ear infection and, 49 middle ear abnormalities and, 44 diagnosing audiogram, 89–95 audiologic diagnosis assessment, 75–89 formulating a differential diagnosis, 95–97 measuring hearing distance, 97 newborn screening, 70–74 test equipment/environment, 74–75 disabilities in addition to, 172–173 EDHI programs, 70–74 effect of, on reception of speech, 190–191 environmental causes of, 42 functional, 62 general causes of, 42 genetic testing and, 43 hearing screening, 70–74 infants, intervention, 256–257 as invisible acoustic filter, 13 medical aspects, 47–66 acoustic neuroma, 61 anoxia, 57 atresia, 51–52 bacterial meningitis, 56

cerumen, 52–53 cholesteatoma, 53 conductive pathologies of, 47 cytomegalovirus, 56–57 ear canal collapse, 51 ear infections, 48–49 functional, 62 LVAS, 59 mixed, 61 noise-induced, 55 objects in ear canal, 53 otitis media, 47–48, 50–51 OME medical diagnosis, 49–50 ossicular abnormalities, 51 otitis externa, 53 ototoxic, 58–59 PLF, 59–60 progressive, 61 Rh incompatibility, 60 sensorineural pathologies, 54–55 tinnitus, 60–61 tympanic membrane perforation, 53 viral/bacterial infections, 55–56 mild, 39–40 minimal, 37–38 mixed, 61 model of, 13–14 moderately severe, 40–41 noise-induced, 55 ototoxic, 58–59 profound, 36–37, 41 progressive, 61–62 sensorineural, 37 causes of, 54 pigmentation abnormalities and, 44 pathologies and, 54–55 severe, 41 types of peripheral, 36 unilateral in infants, 38 in newborns, 38–39 late-onset, 73 viral/bacterial infections and, 55–57 Herpes simplex, 57 blood tests and, 56 Hertz (Hz), 24 High dose, described, 58 pitched sounds, 25


HIV, otitis media and, 49 HIV infection, congenital abnormalities and, 49 HOPE, 197 Horizontal semicircular system, 33 Huizing, Henk, 191 Hybrid cochlear implant, 152 Hypertension, vestibular system dysfunction and, 66 Hz (Hertz), 24

I IDEA (Individuals with Disabilities Education Act), 84–85, 165–166, 181 Imaging, brain, 218 Imitation, importance of, 276 Immittance (Impedance) testing, 87, 89, 98 Immune deficiencies, otitis media and, 49 Impaction, earwax, 52–53 Impedance testing, 87, 89, 98 Implants auditory brainstem, 154–155 benefits from early, 10 bone anchored, 52, 130–132 cochlear, 12, 144–154 ABI and, 154 ANSD and, 65 benefits of, 148–149 bilateral implants, 150–151 bimodal fittings, 151 candidacy for, 146–148 EAS and, 152 hybrid, 152 illustrated, 145 late to full-time wearing of, 168–172 listening time and, 195 purpose of, 8 reception of speech and, 191 remote microphone (RM) and, 152–153 remote programming of, 153–154 SSD (Single-sided deafness), 152 Incidental interacting, 269–270 embellishing, 270–271 learning, 107 distance hearing and, 107–108 Incus, 32

Individuals with Disabilities Education Act (IDEA), 84–85, 165–166, 181 Induction loop, wireless delivery by, 136 Infancy, listening in, 7 Infants amplification for, 112–155 audiologic diagnostic assessment, 75–89 behavioral tests, 79–80 test protocols, 77–79 CMV and, 57 facts about hearing of, 71, 73 hearing aids, 112–130 body-worn, 115 in-the-canal, 114–115 completely-in-the-canal, 115 in-the-ear (ITE) hearing aid, 114 fitting, 116–117 new context for, 113 points to consider, 123–130 receiver-in-the-canal, 114 receiver-in-the-ear, 114 interactional abilities development, 229–234 intervention, 256–257 language development of, 5–8 listening experience of, 7 listening of, 5–8 listening time and, 192 parents affective relationship, 228–229 turn-taking conventions, 231–233 prelinguistic., 226 Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), 156 Infections, 48–49 bacterial, 55–57 blood tests and, 56 ear, 48–49 viral, 55–56 Influenza vaccine, 50 Infrared transmission (IR), 133 for large field, 110, 139–142 wireless delivery by, 136 Inner ear, 32–33 dysplasias, 46 Intensity sound, 24 of speech, 186–188 variations of, 278


Intentionality, talking and, 214 Interacting, embellished, 271 listening, speech, 275 teaching through, 271–274 listening, 274–275 speech, 275 incidental, 269–270 embellishing, 270–271 Interaction, verbal child with hearing loss and, 217 access to, 261 noise and, 168, 257 play centers and, 320 preschool and, 321 Interactional abilities, development of, 229–234 Interactionist, view of language acquisition, 218 Intervention adult learning and, 253–255 auditory-verbal, intervention child with hearing loss and, 217 direct access to, 261 noise and, 168, 257 play centers and, 320 preschool and, 321 children, 256–257 conceptual drawing of, 162 differentiation dimensions among programs of, 164–167 early, importance of, 31 efficacy of, 246–247 infants, 256–257 Listening and spoken language (LSL), 246–247 organization of, 221–224 parents and, 253 premises of, 162–164 acquire speaking through listening, 163 early, 162–163 include parents, 163 teaching listening/speaking, 163 relevance for decisions, 220–221 Interviews, routines-based, 166 In-the-canal (ITC) hearing aid, 114–115 In-the-ear (ITE) hearing aid, 114 Intonational melody of language, 188

variation, 277 Invisible acoustic filter, hearing loss as, 13 IR (Infrared transmission), 133 for large field, 110, 139–142 wireless delivery by, 136 Irritability, otitis media and, 50 ITE (In-the-ear) hearing aid, 114 IT-MAIS (Infant-Toddler Meaningful Auditory Integration Scale), 156

J JCIH ( Joint Committee on Infant Hearing), 71 Jervell and Lange-Nielsen syndrome, 45 Johnson, Spencer, 12 Joint attention, 230–231 reference, 230–231 Joint Committee on Infant Hearing ( JCIH), 71

K Key frequencies, speech information carried by, 26 Knowledge, presuppositional, 215 Knowles, Malcolm S., 253

L Language acquisition behaviorist view of, 218 components of, 6 development of, target determining/ sequencing, 267 domains, 339–341 English, as a second language, children with hearing loss and, 177–181 English rule system of, 216–218 home environment and, 6 intonational melody of, 188 listening time and, 192 nativist view of, 218 skills in children, theory of mind and, 200 components of, 6


Language  (continued) development of, target determining/ sequencing, 267 domains, 339–341 English as a second language, children with hearing loss and, 177–181 rule system of, 216–218 home environment and, 6 intonational melody of, 188 listening time and, 192 skills in children, theory of mind and, 200 spoken, challenges to the process of learning, 167–175 intentionality/speech acts, 214–218 intervention decisions, 220–221 intervention organization, 220–221 talking, 214–218 learning, 213–225 listening and, 4, 164 teaching, through embellished interacting, 271–274 Language Environment Analysis technology (LENA), 173 Large vestibular aqueduct syndrome (LVAS), 59, 94 Later-onset hearing loss, risk factors for, 73 Law of Initial Value (LIV), 80–81 Learning adult, 253–255 audio-linguistic, targets, 210–211 auditory, 293–305 examples of, 204–210 disabilities, 172 distance, 107 environment, 107 incidental, 107, 269–270 spoken language, 213–225 intervention decisions, 220–221 intervention organization, 220–221 talking, 214–218 verbal, 293–305 LENA (Language Environment Analysis technology), 173 LIFE (Listening Inventories for Education), 156

Ling 6-7 Sound Test, 19, 28–29, 82 application/instructions for, 291–292 measuring distance hearing, 97 Linguistic development maximizing caregiver effectiveness, 259–267 discussing with parents, 264–265 interactional targets, 265–267 interactive sample, 263–264 structure of, 263 input/practice, 168 maximizing effectiveness of, background/rationale for, 259–263 Listening check, on hearing instruments, 126 defined, 7 distance, 205 environment, 107 experience in infancy, 7 vs. hearing, 13 helping children with, 201–204 in infancy, 7 microphone and, 203 as a receiver, 250 teaching, through embellished interacting, 274–275 time, language and, 192 Listening and spoken language (LSL), 4, 164, 172 age, 192–195 intervention, 246–247 knowledge/competencies needed, 333–337 practitioner, relationship with parents, 268 preschool and, 177 principles of certified, 331–332 role of practitioner, 255–256 systematic approach to, 17 Listening and Spoken Language Specialists (LSLS), 164 description/practice of, 329–330 knowledge/competencies needed, 333–337 Listening environment, 107 Listening Inventories for Education (LIFE), 156 The Listening Room, 197, 211


Little Ears, 156 LIV (Law of Initial Value), 80–81 Location, audio-verbal therapy, 279–280 LOCHI (Australian Hearing Services Longitudinal Outcomes of Children with Hearing Loss), 64 Logic chain components of, 4 white paper, 4 Loop diuretics, as ototoxic drug, 58 Loudness, 23 of speech, 186–188 Low frequency, cycles per second, 24 Lowering, frequency, 122 LQTS syndrome, 45 LSL (Listening and spoken language), 4, 164, 172 age, 192–195 intervention, 246–247 knowledge/competencies needed, 333–337 practitioner, relationship with parents, 268 preschool, 177 principles of certified, 331–332 role of practitioner, 255–256 systematic approach to, 17 LSLS (Listening and Spoken Language Specialists), 164 description/practice of, 329–330 knowledge/competencies needed, 333–337 LVAS (Large vestibular aqueduct syndrome), 59, 94

M Macrolides, as ototoxic drug, 58 Magnetic resonance imaging (MRI), advances in, 218 Magnetoencephalography (MEG), advances in, 218 MAIS (Meaningful Auditory Integration Scale), 156 Malleus, 32 Mastoiditis, 53–54 MCL (Most comfortable loudness level), 102 Meaning, negotiation of, 239

Meaningful Auditory Integration Scale (MAIS), 156 Measurement, rear-ear, hearing aids and, 119 Medical aspects, 47–66 of hearing loss acoustic neuroma, 61 anoxia, 57 atresia, 51–52 bacterial meningitis, 56 CAPD, 62–63 cerumen, 52–53 cholesteatoma, 53 conductive pathologies of, 47 cytomegalovirus, 56–57 ear canal collapse, 51 ear infections, 48–49 functional, 62 LVAS, 59 mastoiditis, 53–54 microtia, 51–52 mixed, 61 noise-induced, 55 objects in ear canal, 53 otitis externa, 53 otitis media, 47–48, 50–51 OME medical diagnosis, 49–50 ossicular abnormalities, 51 ototoxic, 58–59 PLF, 59–60 progressive, 61 Rh incompatibility, 60 sensorineural pathologies, 54–55 stenotic ear canal, 52 synergistic/multifactorial effects, 63–64 tinnitus, 60–61 tinnitus, 60–61 tympanic membrane perforation, 53 vestibular issues, 66 viral/bacterial infections, 55–56 microtia of hearing loss, 51–52 stenotic ear canal, hearing loss, 52 Medline Plus, 44, 50 MEG (Magnetoencephalography), advances in, 218 Membrane, tympanic (TM) otitis media and, 50 perforated, 53


Meningitis bacterial, 42, 56, 149 vaccine, 56 Michel aplasia, 46 Microphone directional, 138 listening and, 203 omnidirectional, 138 remote cochlear implants and, 152–153 managing personal, 143–144 mute/not to mute, 137–138 noise and, 203–204 options, 138 personal-worn units, 133 systems, 126, 134–138 use in school, 136 use of outside of schools, 136–137 Microtia, 51–52 Middle ear, 32 abnormalities, conductive hearing loss and, 44 structures, 29 ear infection audiologic diagnosis/treatment of, 50–51 hearing loss and, 47–51 high incidence populations, 49 with effusion (OME), 48–50 Mild hearing loss, 39–40 Mind, theory of, executive functions and, 198–201 Minimal hearing loss, 36–38 Minimum response level (MRL), 80 Minocycline, as ototoxic drug, 58 Mixed hearing loss, 61 Modality primary sense, 33, 185 tactile sense, 167 visual sense, 166 Moderately severe hearing loss, 40–41 Moisture hearing aids and, 126 otitis externa and, 53 Mondini dysplasia, 46 Mononucleosis, 57 Most comfortable loudness level (MCL), 102

Motherese, 235 immaterial or facilitative, 243–244 length of use, 242 reasons for, 242 Motor development, gross, vestibular dysfunction and, 34 Mr. Potato Head, 84 MRI (Magnetic resonance imaging), advances in, 218 MRL (Minimum response level), 80 Multichannel capability, hearing aids and, 118 Multifactorial effects, 63–64 hearing loss, 42 Multilingual environment, 174–175 Multimemory capability, hearing aids and, 118 Multimicrophone capability, hearing aids, 118–119 Multiple exemplars, 272 Music, frequencies of, 189

N NAL (National Acoustic Laboratories), 119 Narrowband noise (NBN), 102 National Acoustic Laboratories (NAL), 119 National Institute on Deafness and Other Communication Disorders, 55, 70 congenital abnormalities (NIDCD) and, 49–50, 66 National Institutes of Health, 44 Nativist, view of language acquisition, 218 NBN (Narrowband noise), 102 Negotiation, of meaning, 239 Neonatal intensive care units (NICUs), otitis media and, 49 Neural development, auditory, 8–11 Neurofibromatosis type II, ABI and, 154 Neuroplasticity, 11 Newborns, auditory brain development of, 6–7 NICUs (Neonatal intensive care units), otitis media and, 49


NIDCD (National Institute on Deafness and Other Communication Disorders), 66 congenital abnormalities and, 49–50 No response (NR), 102 Noise background, 203 distance hearing/incidental learning and, 108–109 induced hearing loss, 55 levels, 27 measurements, using smartphone/tablet apps, 110–111 persistent, 173–174 reduction capability, hearing aids, 119 Non-custom mini-BTE earmold, 128 Normal hearing, for children, 37 NR (No response), 102

O OAE (Otoacoustic emissions) test, 85–86, 101 OME (Otitis media with effusion), 48–50 treatment of, 49–50 Omnidirectional microphone, 138 “Open” earmolds, 127 Organ of Corti, 32–33, 43, 79 Orthopedic disabilities, 172 Osseointegrated (Osseo) hearing instruments, 130–132 Osseointegration, 131 Ossicular abnormalities, 51 Ossification, cochlea, 154 OTC (Over the counter), hearing aids, 113 Otitis externa, 53 Otitis media audiologic diagnosis/treatment of, 50–51 hearing loss and, 47–51 high incidence populations, 49 Otitis media with effusion (OME), 48–50 treatment of, 49–50 Oto clips, hearing aids and, 117 Otoacoustic emissions (OAE) test, 85–86, 101 OtoFirm, 126 OtoGenome Test, 43 Otolith organs, 33

OtoSCOPE v8 Test, 43 Otoscopes, 77 Otoscopy, pneumatic, OME diagnosis ande, 49–50 Ototoxic drugs, 58 hearing losses, 58–59 Ototoxicity, 58 Outer ear, 32 abnormalities, conductive hearing loss and, 44 structures, 29 ear infection audiologic diagnosis/treatment of, 50–51 hearing loss and, 47–51 high incidence populations, 49 with effusion (OME), 48–50 Over the counter (OTC), hearing aids, 113

P Parent, child, affective relationship, 228–229 Parents adult learning and, 253–255 AVT (Auditory-verbal therapy), guidance sessions, 279–287 child communication, 226–228 turn-taking conventions, 231–233 child with hearing loss, emotional impact of, 247–253 child’s listening abilities and, 204 helping children listen, 201–204 intervention and, 163 motherese and, 235 immaterial or facilitative, 243–244 length of use, 242 reasons for, 242 relationship with LSL practitioner, 268 talk characteristics and, 234–242 content, 235–236 negotiation of meaning, 239 participation-elicitors, 239–240 prosody, 236–237 repetition, 238–239 responsiveness, 240–241


Parent’s Evaluation of Aural/Oral Performance of Children (PEACH), 156 questionnaire, 64 Participation-elicitors, 239–240 Patterns, rhythmic, highlight variations in, 278 PE (Pressure equalization) tubes, 50 PEACH (Parent’s Evaluation of Aural/ Oral Performance of Children) questionnaire, 64, 156 Pediatric audiologic evaluation, goals of, 50 audiological tests, 99–101 behavioral tests, 79–80 Pediatric Amplification Protocol and the Pediatric Amplification Guideline, 112 Pendred syndrome, 45 Perilymphatic Fistula (PLF), 59–60 Peripheral auditory system, 29 central auditory system and, 31 Persistent noise, in child’s environment, 173–174 Personal sound amplification products (PSAP), 113 Personal-worn RM units, 133–134 PET (Positron-emission tomography), advances in, 218 Phenotype, described, 44 Phonation, 275 Phonemes, 25 Phonemic awareness, 9 Phonological awareness, 9 Pigmentation abnormalities, sensorineural hearing loss and, 44 Pinna, 32 Pitch, 23–24 speech and, 188–189 Platinum drugs, as ototoxic drug, 58 PLF (Perilymphatic Fistula), 59–60 Pneumatic otoscopy, OME diagnosis ande, 49–50 Pneumococcal conjugate vaccine, 50 Pollack, Doreen, 191 Positioning, talker/listener, 111 Positive feedback, 240 Positron-emission tomography (PET), advances in, 218

Posterior semicircular system, 33
Postimplant, meningitis, 56
Practice, cumulative, 10
Prelinguistic age, 226
Preplanned teaching, limits of, 285–287
Preschooler, auditory/linguistic stimulation and, 176
Preschool
    group settings, checklist for, 317–321
    LSL, 177
    special education, disadvantages of, 178
Preschool Screening Instrument for Targeting Educational Risk (Preschool SIFTER), 156
Preschool SIFTER (Preschool Screening Instrument for Targeting Educational Risk), 156
Prescriptive targets, hearing aids, 119
Pressure, sound, 24
Pressure equalization (PE) tubes, 50
Presuppositional knowledge, talking and, 215
Primary sense modality, 33, 185
Processing models, auditory, 195–198
Profound hearing loss, 36–37, 41
    digital vs. linear hearing aids and, 118
Programmable for flexibility, hearing aids, 126
Progressive hearing loss, 61–62
Prosody, 236–237
Protowords, 214–215
PSAP (Personal sound amplification products), 113
PTA (Pure-tone average), 102
Pure-tone
    audiogram, 90
    average (PTA), 102
    thresholds on an audiogram test, 98

Q

Q-T syndrome, 45
Questionnaire, PEACH (Parent’s Evaluation of Aural/Oral Performance of Children), 64, 156

R

Radio frequency modulated (FM), wireless delivery by, 136
RASTI (Brüel and Kjaer Rapid Speech Transmission Index), 109
Real-ear measurement, hearing aids and, 119
Recasts, of adult utterances, 240
Receiver-in-the-canal (RIC) hearing aid, 114
Receiver-in-the-ear (RITE) hearing aid, 114
Recommended Protocol for Audiological Assessment, Hearing Aid and Cochlear Implant Evaluation, and Followup, 113
Reference, joint, 230–231
Referential redundancy, 236
Reflective
    verbal, 241
    auditory behaviors, 81–82
Relationship, affective, 228–229
Remote microphone (RM)
    cochlear implants and, 152–153
    hearing aids and, 126
    managing personal, 143–144
    noise and, 203–204
    personal-worn units, 133
    systems, 126, 134–136
        mute/not to mute, 137–138
        options, 138
        use in school, 136
        use of outside of schools, 136–137
Repetition, 238–239
Residual hearing, use of, 191–192
Responses
    empathic, 251
    that control, 252
Responsiveness, 240–242
Retention devices, hearing aids, 117
Rh incompatibility, 60
Rhythmic patterns, highlight variations in, 278
RIC (Receiver-in-the-canal) hearing aid, 114
RITE (Receiver-in-the-ear) hearing aid, 114
RM (Remote microphone)
    cochlear implants and, 152–153
    hearing aids and, 126
    managing personal, 143–144
    noise and, 203–204
    personal-worn units, 133–134
    systems, 126, 134–136
        mute/not to mute, 137–138
        options, 138
        use in school, 136
        use of outside of schools, 136–137
Roger, wireless delivery by, 136
Room, audiometric test environment, 76
Routines
    brain development through, 289–290
    everyday, of children, 258
Routines-based interviews, 166
Rubella, blood tests and, 56

S

S (Sound field testing), 102
Sabotage, 272
Saccule, 33
Salicylate, as ototoxic drug, 58
SAT (Speech awareness threshold), 102
Scheibe dysplasia, 46
Screening Inventory for Targeting Educational Risk (SIFTER), 156
Selective attention, 210
Semantics, 237–238
Semicircular system, 33
Sensorineural
    hearing loss, 37
        causes of, 54
        pigmentation abnormalities and, 44
    mechanism, 32
    pathologies, hearing loss and, 54–55
Sensory
    deprivation, 95–96
    stimulation, brain auditory centers and, 8
Sequencing, 13
Severe hearing loss, 41
    digital vs. linear hearing aids and, 118
Shingles, 57
Siblings, theory of mind and, 200
SIFTER (Screening Inventory for Targeting Educational Risk), 156
Signal
    processing, advanced, 118
    warning hearing function, 21–22

Signal-to-noise ratio
    distance hearing/incidental learning and, 108–109
    hearing loss and, 173
    positioning talker/listener and, 111
Sine wave, 23
Singing, with child, 189
Single-sided deafness (SSD), 38
    cochlear implants and, 152
    defined, 39
    identification of in newborn, 38
    in infants, 39
Skeleton earmold, 128
Slight hearing loss, 37–38
SLMs (Sound-level meters), 111
Smell, speech and, 185
Smartphones, measuring noise with, 110–111
Smoke, passive, otitis media and, 49, 50
S/N ratio
    distance hearing/incidental learning and, 108–109
    HATs and, 133–134
    hearing loss and, 173
    positioning talker/listener and, 111
Softbands, 131–132
Sound
    field
        amplification, 140
        FM, 110, 139–142
        testing (S), 102
        thresholds, 92
    intensity, 24
    nature of, 20–22
    pressure, 24
    psychological attributes of, 23
    thresholds, aided, 124
Sound processing, 9
Sound-level meters (SLMs), 111
Sounds, complex, 25
Sound-toy associations, 277
Speak, children learning to, 218–219
Speaking, overall rate of, 189
Special education, preschool, disadvantages of, 178
Speech
    acoustics connection, 186–190
    audibility vs. intelligibility, 27
    banana, 123–124, 187
    development of, target determining/sequencing, 267
    discourse, 215–216
    duration of, 189–190
    frequencies of key, 26
    frequency and, 188–189
    infants learning of, 6–7
    information carried by key frequencies, 26
    intensity of, 186–188
    intentionality and, 214–218
    loudness of, 186–188
    perception testing, 84–85
    pitch and, 188–189
    presuppositional knowledge and, 215
    primacy of audition and, 185
    reception of, 190–191
    softest to loudest in dB, 25
    string bean, 124, 125, 187
    teaching, through embellished interacting, 275
Speech awareness threshold, also known as speech detection threshold (SAT or SDT), 98, 102
Speech reception threshold (SRT), 98, 102
Speech-to-noise ratio
    distance hearing/incidental learning and, 108–109
    HATs and, 133–134
Spoken communication hearing function, 22
Spoken language
    challenges to the process of learning, 167–175
    learning, 213–225
        intentionality/speech acts, 214–218
        intervention decisions, 220–221
        intervention organization, 220–221
        talking, 214–218
    listening
        check, on hearing instruments, 126
        defined, 7
        distance, 205
        environment, 107
        experience in infancy, 7
        vs. hearing, 13
        helping children with, 201–204
        in infancy, 7
        microphone and, 203
        as a receiver, 250
        teaching, through embellished interacting, 274–275
        time, language and, 192
    teaching, through embellished interacting, 271–274
SRT (Speech reception threshold), 98, 102
SSD (Single-sided deafness), 38
    cochlear implants and, 152
    defined, 39
    identification of in newborn, 38
    in infants, 39
Stapes, 32
“Start with the Brain and Connect the Dots,” 4, 106
Stenotic ear canal, 52
Stereocilia, 33
Stethoscope, hearing aids, 126
Streamers, 132
Subconscious
    hearing function, 20
    level, hearing and, 20
Supra-aural earphones, 74, 75
Swimmer’s ear, 53
Syndrome
    Alport’s, 44–45
    CHARGE, 46
    Crouzon’s, 45
    defined, 44
    Down, otitis media and, 49
    Jervell and Lange-Nielsen, 45
    LQTS, 45
    Pendred, 45
    Q-T, 45
    Treacher Collins, 45, 130
    Usher, 45–46
    Waardenburg, 45
Synergistic effects, 63–64
Syntax, 237–238
Syphilis, blood tests and, 56

T

Tablets, measuring noise with, 110–111
Tactile, sense modality, 167

Talk, children learning to, 218–219
Talk characteristics
    parents and
        negotiation of meaning, 239
        participation-elicitors, 239–240
        repetition, 238–239
        responsiveness, 240–241
        semantics/syntax, 237–238
Talking, 214–218
    acoustics connection, 186–190
    audibility vs. intelligibility, 27
    banana, 123–124, 187
    development of, target determining/sequencing, 267
    discourse, 215–216
    duration of, 189–190
    frequencies of key, 26
    frequency and, 188–189
    infants learning of, 6–7
    information carried by key frequencies, 26
    intensity of, 186–188
    intentionality and, 214–218
    loudness of, 186–188
    perception testing, 84–85
    pitch and, 188–189
    presuppositional knowledge and, 215
    primacy of audition and, 185
    reception of, 190–191
    softest to loudest in dB, 25
    string bean, 124, 125, 187
    teaching, through embellished interacting, 275
Tangible reinforcement operant conditioning audiometry (TROCA), 79–80
Targets
    audio-linguistic learning, 210–211
    determining/sequencing, 267
Task, discrimination, 207
Taste, speech and, 185
TEACH (Teacher’s Evaluation of Aural/Oral Performance of Children), 156
Teacher’s Evaluation of Aural/Oral Performance of Children (TEACH), 156

Teaching
    auditory, examples of, 204–210
    limits of preplanned, 285–287
    listening, through embellished interacting, 274–276
    spoken language, through embellished interacting, 271–274
    through incidental interacting, 269–270
Teleaudiology, 153
Telecoil, described, 126
Telehealth, 153
Teleintervention, 268
Telemedicine, 153
Telepractice, 153
Temporal bone, outer ear and, 32
Terminal gaze, 232
Test battery, basic, 98
Test protocols, infants/children, 77–79
Theory, grieving process, 248
    steps in, 248–249
Theory of mind (ToM), executive functions and, 198–201
Therapy, auditory-verbal, location of, 279–280
Threshold
    aided sound-field, 124
    configuration (pattern) on, 92–95
    defined, 89
    of sensitivity, 91
    sound-field, 92
    test, 98
Tinnitus, 60–61
TM (Tympanic membrane)
    otitis media and, 50
    perforated, 53
ToM (Theory of mind), executive functions and, 198–201
Touch, speech and, 185
Toupee tape, hearing aids and, 117
Toxoplasmosis, blood tests and, 56
Transposition, frequency, 122
Treacher Collins syndrome, 45, 130
TROCA (Tangible reinforcement operant conditioning audiometry), 79–80
Troubleshooting, described, 126
Tumor, eighth nerve, 60
Turnabout, 241
Turn-taking conventions, 231–233

Tympanic cavity, 32
    abnormalities, conductive hearing loss and, 44
    structures, 29
    ear infection
        audiologic diagnosis/treatment of, 50–51
        hearing loss and, 47–51
        high incidence populations, 49
        with effusion (OME), 48–50
Tympanic membrane (TM)
    otitis media and, 50
    perforated, 53
Tympanometry, 87
Tympanostomy, 50
Tympanostomy tubes, 50

U

UHL (Unilateral hearing loss), 38–39
    identification of in newborn, 38
    in infants, 38–39
    later-onset hearing loss and, 73
Unilateral hearing loss (UHL), 38–39
    identification of in newborn, 38
    in infants, 38–39
    later-onset hearing loss and, 73
Urbantschitsch, Viktor, 191
Use development, language, 219
Usher syndrome, 45–46
Utricle, 33

V

Vaccines
    influenza, 50
    meningitis, 56
    pneumococcal conjugate, 50
Valganciclovir, 57
Validation, hearing aids, 119
Vancomycin, as ototoxic drug, 58
Variable expression, 44
Variations
    in intensity, 278
    in rhythmic patterns, 278
Varicella-zoster virus, 57
Velocity, defined, 23
Vented earmolds, 127
Ventilation tubes, 50

Verbal
    language, 22
    learning, targets for, 293–305
    reflective, 241
Verbal interaction
    child with hearing loss and, 217
        access to, 261
    noise and, 168, 257
    play centers and, 320
    preschool and, 321
Verbalization
    acoustics connection, 186–190
    audibility vs. intelligibility, 27
    banana, 123–124, 187
    development of, target determining/sequencing, 267
    discourse, 215–216
    duration of, 189–190
    frequencies of key, 26
    frequency and, 188–189
    infants learning of, 6–7
    information carried by key frequencies, 26
    intensity of, 186–188
    intentionality and, 214–218
    loudness of, 186–188
    perception testing, 84–85
    pitch and, 188–189
    presuppositional knowledge and, 215
    primacy of audition and, 185
    reception of, 190–191
    softest to loudest in dB, 25
    string bean, 124, 125, 187
    teaching, through embellished interacting, 275
Verification techniques, hearing aids, 119
Vestibular
    comorbidity and, 34
    dysfunction, 66
    gross motor development and, 34
    issues of, 66
    system, 20, 33–34
        balance, 33–34
        dysfunction, 66

Vestibulocochlear nerve, 33
Vestibulotoxicity, 58
Vibrotactile devices, 167
Video tape, sample collection, 263, 264
Viral infections, hearing loss and, 55–57
Virtual home visits, 268
Vision, speech and, 185
Visual problems, 172
Visual Reinforcement Audiometry (VRA), 79–80, 82, 99, 102
Visual sense modality, 166
Vocalizing, 275
Vowel sounds, 25–28, 188, 278
    low-frequency access to, 121
VRA (Visual Reinforcement Audiometry), 79–80, 82, 99–100, 102

W

Waardenburg syndrome, 45
Water-resistant hearing aids, 126
WDRC (Wide dynamic range compression), 118
WDS (Word discrimination score), 98, 102
Website
    Hearing First, Logic chain white paper, 4, 5
Wedenberg, Erik, 191
Whetnall, Edith, 191
Who Moved My Cheese?, 12
Wide dynamic range compression (WDRC), 118
Wig tape, hearing aids and, 117
Winnie the Pooh, 197
Wireless connectivity, 132
“Wise Ears!” website, 55
Word discrimination score (WDS), 98, 102
Words, average number of spoken to children, 8
Wright, John, 191

Z

Zinc-air cells, 127

A Must-Have Resource with 7,000+ Audiology Terms!
By Brad A. Stach, PhD

The Comprehensive Dictionary of Audiology: Illustrated, Third Edition is a must-have resource for anyone involved in the field of audiology. It defines over 7,000 terms integral to the profession, practice, and science of audiology, covering both current and historical terms. Practical illustrations enrich the definitions throughout. The text also includes an appendix of acronyms, abbreviations, and symbols; an appendix of auditory system disorders; and a user’s guide to the dictionary.

Concise, current, and accessible, this edition meets the needs of audiologists today, with updates that respond to developments in practice and technology in the field. Approximately 200 terms have been added to the third edition, and it is now available in both print and electronic formats for the first time. The Comprehensive Dictionary of Audiology: Illustrated, Third Edition is invaluable for audiologists and professionals in the communication sciences.

© 2019 | 349 pages | B&W, Softcover, 7" x 10" | ISBN-13: 978-1-94488-389-8

This is a first-rate dictionary that is valuable and easy to use. It belongs on the shelf of every student and professional in the field. It is the only dictionary of audiology on the market and the only one needed.

—Lori J. Newport, BA, MA, AuD, Biola University

ORDER YOUR COPY TODAY!

Tel: 1-866-PLURAL-1 (1-866-758-7251) E-mail: [email protected] www.pluralpublishing.com