
Music and Hearing Aids
A Clinical Approach

Marshall Chasin, AuD, MSc

Editor-in-Chief for Audiology: Brad A. Stach, PhD

Plural Publishing, Inc.
5521 Ruffin Road
San Diego, CA 92123
e-mail: [email protected]
Website: https://www.pluralpublishing.com

Copyright © 2022 by Plural Publishing, Inc.

Typeset in 11/13 Garamond by Flanagan's Publishing Services, Inc.
Printed in the United States of America by Integrated Books International

All rights, including that of translation, reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, including photocopying, recording, taping, Web distribution, or information storage and retrieval systems without the prior written consent of the publisher.

For permission to use material from this text, contact us by
Telephone: (866) 758-7251
Fax: (888) 758-7255
e-mail: [email protected]

Every attempt has been made to contact the copyright holders for material originally printed in another source. If any have been inadvertently overlooked, the publisher will gladly make the necessary arrangements at the first opportunity.

Library of Congress Cataloging-in-Publication Data

Names: Chasin, Marshall, author.
Title: Music and hearing aids : a clinical approach / Marshall Chasin.
Description: San Diego, CA : Plural Publishing, Inc., [2022] | Includes bibliographical references and index.
Identifiers: LCCN 2021054266 (print) | LCCN 2021054267 (ebook) | ISBN 9781635503951 (paperback) | ISBN 1635503957 (paperback) | ISBN 9781635503968 (ebook)
Subjects: MESH: Hearing Aids | Music | Correction of Hearing Impairment | Persons With Hearing Impairments | Acoustics
Classification: LCC RF300 (print) | LCC RF300 (ebook) | NLM WV 274 | DDC 617.8/9--dc23/eng/20211124
LC record available at https://lccn.loc.gov/2021054266
LC ebook record available at https://lccn.loc.gov/2021054267

Contents

Foreword by Mead Killion, PhD, DSc   vii
Preface   ix
Reviewers   xiii
1  A Primer on Wavelength Acoustics for Musical Instruments   1
2  Music (and Speech) for the Audiologist   15
3  Hearing Aids and Music: What the Literature Says   41
4  Clinical Approaches to Fitting Hearing Aids for Music   87
5  A Return to Older Technology?   117
Appendix A. Conversion Chart of Musical Notes and Their Fundamental Frequencies   123
Appendix B. Research Projects That Would Contribute Significantly to Clinical Knowledge   125
Appendix C. 15 Audio File Descriptions   133
Index   137

Foreword

Marshall Chasin is not only highly regarded as an audiologist, but has extensive experience in music and musical instrument acoustics, hearing-aid-fitting methods for musicians, and an extensive knowledge of the literature in those fields, not to mention his having made significant contributions to those areas. Perhaps most important, he himself is a musician. Not surprisingly, Chasin is considered the go-to audiologist by numerous practicing musicians.

In an unusual approach, the first chapter of this book provides a good explanation of the wavelength-associated acoustics of musical instruments. A clear explanation of the reasons for this as applied to musical instruments is a delightful beginning for this treatise.

Bringing it closer to home, Marshall's second chapter summarizes the differences and similarities between music and speech in understandable terms, providing the basic understanding that can stand in good stead when the client is a musician. For this purpose, the mp3 audio files accompanying this and other chapters help bring understanding to the words.

A major feature of this book is the extensive review of the literature in the third chapter. In each case, the implications of the research findings for hearing aid design and fitting are emphasized. Topics include frequency-response shaping and the effect of defects in the frequency response; wide-dynamic-range compression characteristics, good and bad; and the danger of using Standards measurements that provide for quality control but often do not provide the information required for intelligent hearing aid adjustments. An extensive discussion follows on the advantages and challenges of digital signal processing, frequency lowering for those with cochlear dead regions and when it may be expected to fail, peaks in the frequency response, and possibly excessive delay times in digital hearing aid circuits. A wealth of research is summarized with an eye to the clinical approach and fitting of hearing aids.


The fourth chapter concentrates on applying the previous information to the clinical environment, for avoiding and/or solving several challenging problems with some current approaches to hearing aid design. Starting with input overload and its avoidance, this chapter discusses how to find cochlear dead regions simply by using a piano keyboard. The chapter ends with a discussion of MEMS microphones and digital delay.

The final chapter contains an interesting summary of some "lost" earlier circuits, and an eleven-item Wish List compiled from well-known musicians, two of whom are audiologists.

Marshall Chasin is a musician, a clinician, and a good teacher. He brings that background to this book, intended to provide additional solutions to difficult fitting challenges with musicians and nonmusicians.

—Mead Killion, PhD, DSc

Preface

I do play several instruments, but I am not a musician. I read the music and try to convey the music in an artistic manner, but my art is a learned and practiced one. I use memorized and technical knowledge to generate what may appear to be acceptable. But I am an audiologist.

I do feel that any audiologist, regardless of their musical abilities or talents, can work with musicians or those people for whom music is important in their lives. You don't need to know that the note A on the second space of the treble clef has its fundamental frequency at 440 Hz, but it can be useful to learn this when communicating with musicians. Even when communicating with nonmusicians, this knowledge is important so that at least one can understand why frequency lowering, for example, is not a useful idea for music, despite its success for amplified speech. One does not need to be a musician in order to work with professional musicians or with someone who simply wants to appreciate the music that they hear. One needs to be an audiologist.

An audiologist is an ideal person to work with amplified music, and this is true whether a client requires hearing aids or cochlear implants. Audiologists know about earmold acoustics, room acoustics, speech acoustics, and psychoacoustics. Audiologists know about digital processing and technologies that can assist a hard-of-hearing person to play, or listen to, amplified music. In addition, audiologists know about the verification, counseling, and follow-up required for the fine tuning of a "music program" in a hearing aid and when the use of additional accessories is warranted. Throughout this book, each of these fundamental areas of audiology will be touched upon en route to establishing the optimal series of settings and technologies that will constitute a "music program" in a hearing aid.

This book is a clinically based resource that covers the "recent" post-1988 history of research concerning how music can, and should, be processed through modern hearing aids. The book will include a series of clinically based strategies to


optimize the sound of amplified music for hard-of-hearing people, along with a "wish list" of technologies either yet to be invented, or already invented but no longer used, that would optimize a "music program" in a hearing aid.

After a short primer on wavelength acoustics, this book features an overview entitled "Music (and Speech) for the Audiologist" that provides the reader with some basic knowledge about music. The reader will quickly realize that their own audiology training has already provided much of this information, perhaps using differing terminologies.

As in many other areas of audiology, audiologists need to make clinical decisions where the knowledge is not yet complete, or where the research data appear to be contradictory or perhaps absent. Music as an input to hearing aids is one such area. In cases such as these, one needs to fall back on general principles, and where appropriate, these guiding principles are discussed to at least "get us going in the right direction."

Throughout this book, a series of audio files are available, either embedded in the e-book version or, for the print version, housed on a dedicated portion of a server on the Plural Publishing site. These audio files are used to bolster the information in the text and provide the interested reader with the opportunity to perform their own spectral analyses. They are marked with the "audio icon" in the margin as shown here. Along with being adjacent to the discussion at hand, a listing of all 15 audio files also appears in Appendix C. And, similar to the many areas of audiology where studies are waiting to be done, many of the open questions raised here can be addressed within the scope of a student research project, such as an AuD Capstone study. Twelve of these are noted throughout the text and are marked with the "study icon" in the margin as shown here, referencing the associated description in Appendix B of this book.

Dr. Mead Killion has graciously agreed to write the Foreword for this book. He is a mathematician, an audiologist, a researcher, an inventor, founder of Etymotic Research (https://www.etymotic.com), and most recently founder of MCK Audio — and one of the great thinkers in the field of music and hearing aids. He is the "father" of the K-AMP, which is a 1988 analog technology


hearing aid that was, and still is, one of the best hearing aids for amplified music. As an industry, we have been trying to play catch-up to the 1988 K-AMP, and only in the last several years do I think we are back to where we were, and should be.

A big thank you goes to Shaun Chasin (http://www.chasin.ca), a composer who has kept me on the "straight and narrow" when discussing music issues. Shaun also supplied all of the audio files and many of the spectra used in this book.

I would also like to acknowledge, and thank, the many hard-of-hearing musicians and other hard-of-hearing clients over the years for their honesty and perseverance in not just putting up with the sound of their amplified music, but articulating how it might be improved. We wouldn't be where we are without their feedback. Specifically, I would like to thank and acknowledge the insights of Charles Mokotoff, Larry Revit, Richard Einhorn, Stu Nunnery, Phil Nimmons, Wendy Cheng, and Rick Ledbetter.

I would also like to acknowledge some people whose work provided the foundation for some of the technologies used today that allow our hard-of-hearing clients to have the best fidelity for music listening and playing. Among the many, I would like to single out Ed Villchur, the father of multichannel compression; Harry Teder, the inventor of adaptive compression in hearing aids; and Elmer Carlson, who was instrumental in the development of many of the miniature microphones we use today.

And, finally, I have had the chance to work with some delightful and knowledgeable colleagues in this field and would like to thank Mead Killion, John Chong, Wm. A. Cole, Steve Armstrong, Neil Hockley, Frank Russo, Mark Schmidt, Francis Kuk, Jim Kates, and Brian Moore for the many discussions over the years about music and hearing aids, as well as the peer reviewers who provided constructive comments in the writing of this book.

Marshall Chasin, AuD
Musicians' Clinics of Canada
http://www.MusiciansClinics.com

All royalties from the sale of this book go to support educational activities of the Musicians' Clinics of Canada.

Reviewers

Plural Publishing and the author would like to thank the following reviewers for taking the time to provide their valuable feedback during the manuscript development process.

Shelly Boelter, AuD, F-AAA
Assistant Professor
Pacific University School of Audiology
Forest Grove, Oregon

Margaret Brac, BSc (Hon), MCISc, Reg. CASLPO, AuD(C)
Audiologist
London Audiology Consultants
London, Ontario, Canada

Gail B. Brenner, AuD
Doctor of Audiology
Hearing Technology Associates, LLC
The Tinnitus and Sound Sensitivity Treatment Center of Philadelphia, PC
Bala Cynwyd, Pennsylvania

Kathy Landau Goodman, AuD
Founder and President
Main Line Audiology
Narberth, Pennsylvania

Neil S. Hockley, MSc, Aud(C)
Head of Audiology
Bernafon AG
Bern, Switzerland

Robert L. B. Stevenson, AuD
Owner and President
Bridgerland Audiology and Hearing Aids LLC
Logan, Utah

To all the hard-of-hearing musicians who have been my best teachers.

1
A Primer on Wavelength Acoustics for Musical Instruments

SOURCE-FILTER-RADIATION MODEL

Most musical instruments are made up of a type of tube or a hollow body that acts as an acoustic amplifier or resonator. This is not the case for some percussion instruments, where a trapped (or semi-trapped) volume of air is caused to resonate as a result of a strike to a drum head or to a solid structure that is hit, as with a glockenspiel. This chapter deals with the wavelength-associated resonances only. There are a number of references about percussive acoustics for the interested reader, such as Morrison and Rossing (2018).

Although this is a chapter on the relevant acoustics of musical instruments, it solidifies the basis for many of the acoustic properties observed in related fields of audiology, including speech acoustics, earmold acoustics, and even the acoustic behavior of cerumen occlusion of the outer ear canal.

Gunnar Fant was among the first to apply the model of "source-filter-radiation" to acoustical systems (Fant, 1960). His work focused on the vocal tract and speech acoustics, but it can be used in a wide range of applications including hearing aid acoustics, car muffler systems, and the study of musical instruments. What has colloquially become known as "Fant's model of the vocal tract" is shown in Figure 1–1.

The Source

This block schematic shows that sound begins with a "source," which can be one's vocal cords during speech phonation, or the vibration of a reed, or the use of a bow in musical instruments. The exact rigidity of the reed for reeded woodwind instruments and the stiffness and material makeup of the bow for stringed instruments will act as an input (or source) to some type of resonator. Although the block schematic in Figure 1–1 shows a progression of sound energy from the source, through a filter, and finally the radiation properties into a room, it is simplistic, especially for musical instruments. In reality, in order to fully understand the acoustics of some musical instruments, a feedback loop needs to be considered between the "filter" stage and the initial "source." More on this is discussed in Chapter 2, and the interested reader is referred to the Springer Handbook of Systematic Musicology edited by Rolf Bader (2018) for more information.

Figure 1–1.  This is a box diagram showing the three components of Gunnar Fant’s 1960 description of a “source-filter-radiation” model. Although this was conceived to study the vocal tract, this approach has been quite useful in the study of all resonator systems, including musical instruments. In reality, there is a feedback loop (dotted line) from the filter to the source, which generates some nonlinear behavior as the playing level increases.


The Filter

The "filter" (or "filter response") refers to any resonant chamber or tube (or series of chambers and tubes). In the realm of speech acoustics, the filter refers to the oral and nasal cavities with their own unique resonating properties as the tongue, soft palate, and other structures move, creating differing resonances and turbulent action. In the creation of music, the filter is the hollow body of stringed instruments or the variable lengths and diameters in woodwind and brass instruments. For example, a clarinet playing the note C [262 Hz] just below the treble clef has a well-defined length of the resonating tube, as shown in Figure 1–2 (Chasin, 2009). In this case, most of the sound emanates or radiates out of the first non-covered hole, and the lower part of the clarinet does not contribute significantly to the sound generation.

Radiation

The "radiation" portion refers to the output conditions of the resonant system, such as the size and shape of the output apertures (e.g., bell). This can be thought of as a change in the amplitudes of (typically higher frequency) energy as the mouth is opened wider in speech acoustics, or as a musical instrument having an acoustic flare or bell. In hearing aid and earmold acoustics, this would refer to any changes in the frequency response due to changes of the inner diameter of hearing aid tubing, such as the use of a flared bore or Libby horn. The size of the bell or flare of the musical instrument will define the increase or decrease of the higher frequency acoustic output depending on whether the bell of the instrument is left unobstructed or whether a mute (or hand) is situated in the bell portion of the instrument. The amplitude enhancement related to an acoustic bell (whether it's in a hearing aid with a Libby horn, a vocal tract with the mouth wide open, or a musical instrument with a bell as part of its design) is length-dependent: the boost of higher frequency sound energy begins for all frequencies above F = v/2L. In the hearing aid example, this is typically for sound energy above 2500 Hz; for the adult vocal tract with an open mouth, for example, it is for all energy above 1000 Hz; and for the longer lengths found with many brass instruments, the enhancement can be for all energy above 100 Hz — all being inversely related to the resonator length (L).

Figure 1–2.  Stylized clarinet showing that the "acoustic length" is governed primarily by the length to the first noncovered air hole. The longer the length L, the lower its resonant frequency(ies) will be. From Hearing Loss in Musicians: Prevention and Management (p. 130) by Marshall Chasin. Copyright © 2009 Plural Publishing. All rights reserved.

WAVELENGTH RESONATORS OF THE FILTER

In speech acoustics there can be both wavelength-associated and "Helmholtz" (volume/constriction)-associated resonances. Mid and high vowels have primarily Helmholtz-related formant structures, at least for their first two formants. However, in music, there are very few situations with constrictions (and adjacent volumes) that would generate Helmholtz resonances. Most of the acoustic behavior can be described by wavelength properties and is primarily related to the length (L) of the resonating filter and its "boundary conditions." In all acoustic formulae, the length factor (L) is always found in the denominator; longer length instruments will always have a lower frequency resonant behavior than their shorter length cousins. Boundary conditions are related to several issues in acoustics, but we can restrict our discussions to (i) whether the two ends of the resonating tube have similar impedances or different ones (i.e., are they both "closed" or is one end "open" and the other end "closed"?), and (ii) the shape of the tube (e.g., cylindrical like a clarinet, or conical like a saxophone).

Quarter-Wavelength Resonators

In speech acoustics, the unconstricted vowel [ə], also known as the reduced vowel "schwa," as in the final vowel of 'feeder' ([fidər]), is a quarter-wavelength resonator because the vocal tract is "closed" (has a high impedance) at the vocal cords or source, and "open" (has a low impedance) at the open mouth. The vocal tract is relaxed and is almost a straight cylindrical tube during the articulation of this vowel. The [ə] vowel is characterized by a series of resonances (or formants) that occur at odd numbered multiples of the first resonance.


The same is true of the clarinet, trumpet, French horn, and a few other musical instruments. These musical instruments are "closed" at the mouthpiece and "open" at the other end. In the case of the clarinet, shown in Figure 1–2, the resonating length is from the mouthpiece to the first open hole. In the case of brass instruments, the length is a function of which valves are opened or closed, thereby lengthening or shortening the resonating tube.

Figure 1–3 shows the formula that predicts the resonant frequencies of the vowel [ə], as well as the resonant frequencies of any quarter-wavelength resonator instrument. In Figure 1–3, the length of the resonating portion of the tube is given by L (and for the purposes of this discussion we will use centimeters, or cm), and the speed of sound is given by v (in cm/sec). Although the speed of sound can vary slightly based on temperature, pressure, altitude, and even latitude, we will assume, for ease of calculation, that the speed of sound (v) is 34,000 cm/sec (or 340 m/sec). The first resonant frequency is given by F = v/4L, and subsequent resonances are at odd numbered multiples of v/4L. The "2k-1" term is just a multiplier that has solutions of 1, 3, 5, 7, and so on.

Figure 1–3.  The quarter-wavelength resonator formula where F is the resonant (or formant) frequency (Hz), v is the speed of sound (assume 34,000 cm/sec), and L is the length of the resonator (in cm). The units of the solution are 1/sec, which is the same as Hz. The (2k-1) is a multiplier that generates odd numbered multiples (modes) of the first resonance. (See text for more information.)

In the adult vocal tract, with a length of 17 cm, the first resonance (or formant) of the unconstricted vowel [ə] is F = 34,000/(4 × 17) = 500 Hz. The vowel [ə] also has formants at 3 × 500 Hz (= 1500 Hz) and 5 × 500 Hz (= 2500 Hz), and so on. A clarinet, with a length of roughly 32.5 cm down to the first uncovered hole, is shown in Figure 1–2. Its first resonance for that note is F = 34,000/(4 × 32.5) ≈ 262 Hz. And being a quarter-wavelength resonator instrument, when that same note is played there are additional higher frequency resonances at 3 × 262 Hz (= 786 Hz), 5 × 262 Hz (= 1310 Hz), and so on. This array of higher frequency resonant energy characterizes all quarter-wavelength resonator instruments. The amplitudes, as well as some dynamic attack/release features, help the listener to identify one instrument from another. The clarinet, being a quarter-wavelength resonator instrument, has a "register" key (rather than an octave key) that triples the playing frequency, which is an octave and a half higher (also known as a twelfth).
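As a minimal sketch of these worked examples (not from the book itself), the following Python fragment evaluates the quarter-wavelength formula of Figure 1–3; the function name and the rounded 34,000 cm/sec speed of sound are illustrative assumptions.

```python
# Quarter-wavelength resonator (closed at one end, open at the other):
# F_k = (2k - 1) * v / (4L), for k = 1, 2, 3, ...
SPEED_OF_SOUND_CM_PER_S = 34_000.0  # assumed value, as in the text


def quarter_wave_resonances(length_cm, n_modes=3):
    """Return the first n_modes resonant frequencies (Hz) of a
    quarter-wavelength resonator of the given length (cm)."""
    return [(2 * k - 1) * SPEED_OF_SOUND_CM_PER_S / (4.0 * length_cm)
            for k in range(1, n_modes + 1)]


# 17 cm adult vocal tract (schwa): 500, 1500, 2500 Hz
print(quarter_wave_resonances(17.0))
# 32.5 cm clarinet bore to the first open hole: about 262, 785, 1308 Hz
print(quarter_wave_resonances(32.5))
```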

Half-Wavelength Resonators

There are a number of musical instruments that have acoustical properties that are derived from having a similar impedance at both ends (i.e., both ends are "closed" or both are "open"). Some of these are listed in Table 1–1. These include all of the stringed instruments, where the strings are held tightly at both ends of the instrument and only the middle portions of the strings are allowed to move. As a boundary condition issue, this also includes all musical instruments with a conical flare rather than a cylindrical tube (e.g., clarinet) or an exponential flare (e.g., French horn).

The formula to calculate the resonant properties of a half-wavelength resonator instrument is shown in Figure 1–4. This is similar to that found in Figure 1–3 but differs in that the multiplier at the top is merely "k," meaning integer numbered multiples such as 1, 2, 3, and so on. And the denominator of the equation (where L = length) has a 2 rather than a 4. The implications are that for any given length (L) of the instrument, the first resonant frequency is twice as high as for a quarter-wavelength resonator instrument, but each resonance is twice as close as for a quarter-wavelength resonator instrument.


Figure 1–4.  The half-wavelength resonator formula where F is the resonant (or formant) frequency (Hz), v is the speed of sound (assume 34,000 cm/sec), and L is the length of the resonator (in cm). The units of the solution are 1/sec, which is the same as Hz. The k is a multiplier that generates integer numbered multiples (modes) of the first resonance. (See text for more information.)

For example, a clarinet and a soprano saxophone have almost identical lengths, yet the saxophone is a half-wavelength resonator. Its more tightly packed resonant structure can be seen in Figure 2–2 in Chapter 2. Half-wavelength resonator instruments, such as the saxophone, have an "octave" key (rather than a register key, as on the clarinet). The octave key doubles the playing frequency, with the new note being exactly one octave (or double the frequency) higher.
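A companion sketch for the half-wavelength formula of Figure 1–4 is shown below (again, the function name and speed-of-sound value are assumptions for illustration); for the same resonator length it returns a first mode twice as high as the quarter-wavelength case above.

```python
# Half-wavelength resonator (similar impedance at both ends, or a conical bore):
# F_k = k * v / (2L), for k = 1, 2, 3, ...
SPEED_OF_SOUND_CM_PER_S = 34_000.0  # assumed value, as in the text


def half_wave_resonances(length_cm, n_modes=3):
    """Return the first n_modes resonant frequencies (Hz) of a
    half-wavelength resonator of the given length (cm)."""
    return [k * SPEED_OF_SOUND_CM_PER_S / (2.0 * length_cm)
            for k in range(1, n_modes + 1)]


# A conical tube of the same ~32.5 cm length as the clarinet example:
# about 523, 1046, 1569 Hz (first mode an octave above the clarinet's ~262 Hz)
print(half_wave_resonances(32.5))
```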

Shape of the Tube

In addition to whether a resonating tube is equally open or closed at both ends, or whether the ends differ significantly in impedance, the shape of the tube can be another factor in defining the resonant nature of the tube. Both the clarinet and the soprano saxophone are almost identical in their lengths, and both have a mouthpiece and reed at one end and are "open" at the other end (i.e., at the first open finger hole), yet the clarinet is a quarter-wavelength resonator and the saxophone is a half-wavelength resonator. The difference lies in the fact that one is a cylinder with a constant internal diameter (i.e., clarinet), and the other has a gradually increasing internal diameter and is conical or cone-like in shape (i.e., saxophone). Although the mathematics of the difference is not too complicated (and only requires basic trigonometry), we will not delve into that here. Suffice it to say that if we had a rubber semi-deformable clarinet, we would be able to gradually change its shape from a cylinder with quarter-wavelength properties to a conical shape with half-wavelength properties. All conical instruments possess half-wavelength characteristics.

A "trick" is to be able to verify that the shape of the instrument is indeed conical. Although this may be obvious with a saxophone, oboe, and bassoon, this determination may be more difficult for some of the brass instruments. If you can place a straight edge, such as a ruler, against the wall of the flare of the instrument and it fits perfectly, then the instrument is most likely a half-wavelength resonator. If the wall of the bell is flaring, as in a French horn or trumpet, then it is a quarter-wavelength resonator instrument.

This can also be verified clinically using a real ear measurement system by disabling the reference microphone and loudspeaker. After calibrating the real ear measurement system in the normal fashion, and depending on the manufacturer, set the stimulus level to "0 dB" or "off." This disables the reference microphone and the loudspeaker, allowing the musician to play their instrument (and maintain the note being played) throughout the warbled sinusoidal sweep, or spectral analysis. With this modification, the resulting display (i.e., the Real Ear Aided Response or REAR) will be a spectral analysis of the musical instrument as played by that individual. This can be clinically replicated for soft, medium, and loud playing and for a number of different representative notes on that instrument, providing the audiologist with information regarding the sound levels that are generated for that individual client. This has implications for both hearing loss prevention and hearing aid fittings.

A partial listing of some quarter-wavelength resonator and some half-wavelength resonator instruments is found in Table 1–1.

Table 1–1.  Examples of Musical Instruments That Behave Primarily as Either a Quarter-Wavelength or a Half-Wavelength Resonator

Quarter-Wavelength Resonators     Half-Wavelength Resonators
clarinet*                         saxophone
trumpet                           oboe
trombone                          guitar
tuba                              violin
French horn                       flute

*The clarinet functions as a quarter-wavelength resonator only in the lower register, but as a half-wavelength resonator in the upper register.

This wavelength-related analysis can also be useful when the human ear canal is considered. The human ear canal is not a straight tube, but a tube whose internal diameter decreases slightly as one moves more medially toward the tympanic membrane. One can argue that the resonant character of the ear canal has properties of both a quarter-wavelength resonator (with an associated resonance of 2700 Hz) as well as a half-wavelength resonator due to its conical shape (with an associated resonance of 5400 Hz; Chasin, 2005). More on the specifics and the implications of these discussions is found in Chapter 2.
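To connect the two models to the ear canal figures quoted above, the short sketch below back-calculates the canal length implied by a 2700 Hz quarter-wavelength resonance and then shows that the same length, treated as a half-wavelength (conical) resonator, lands near 5400 Hz; the variable names and the rounded speed of sound are assumptions for illustration.

```python
# Ear canal as a resonator: back-calculate the length from the 2700 Hz
# quarter-wave resonance, then evaluate the half-wave mode of that length.
SPEED_OF_SOUND_CM_PER_S = 34_000.0  # assumed value, as in the text

quarter_wave_resonance_hz = 2700.0
canal_length_cm = SPEED_OF_SOUND_CM_PER_S / (4.0 * quarter_wave_resonance_hz)
half_wave_resonance_hz = SPEED_OF_SOUND_CM_PER_S / (2.0 * canal_length_cm)

print(round(canal_length_cm, 2))      # about 3.15 cm
print(round(half_wave_resonance_hz))  # about 5400 Hz
```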

MUSICAL INSTRUMENT MUTES AND THE VOLUME VELOCITY

A characteristic of all wavelength-associated musical instruments is that the instrument itself functions as a "waveguide" where standing waves can be set up. Associated with the standing waves are the resonances that we have been discussing. And depending on where in the tube or instrument waveguide you are, there may be a maximum velocity of the air molecules (a volume velocity maximum) or a minimum, or anything in between. Because of the nature of wavelength-associated resonances, all brass instruments have a volume velocity maximum at the end of the bell or tube. This is also the case with the human vocal tract during the articulation of the reduced mid vowel schwa [ə]. Obstruction of the musical instrument (or the oral cavity) at its point of a volume velocity maximum will have the greatest effect. Placing a mute or physical attenuator, such as the hand in the case of the French horn, at the end of the brass instrument has the greatest effect because it is at that musical instrument's maximum volume velocity — an ideal place for a mute.

Mutes will not work on woodwinds because the sound emanates out of the first uncovered hole and will only make it to the end of the instrument when all of the holes are covered up — usually just two notes for that instrument. In the case of the clarinet, the notes would be E [165 Hz] and B [494 Hz], which is an octave and one-half (i.e., a twelfth) above it. It would not be practical to use a mute just for these two notes.

Audiologists also use mutes when fitting behind-the-ear hearing aids in the earmold coupling system. An acoustic resistor (or filter) is frequently placed near the earhook nub in order to smooth out some of the wavelength resonances, but an optimal position acoustically would be at the end of the earmold in the ear canal. This position is similar to the location of musical instrument mutes and would be optimal in the sense that all wavelength-related resonances would be minimized with an acoustic resistor at this location. That is, all of the standing waves associated with their resonances would have a volume velocity maximum at this location. It would, however, be suboptimal, as the acoustic resistor would frequently become occluded with earwax and other debris.

A final example is related to the location of occluding cerumen in the outer ear canal. If the cerumen is medial, near the tympanic membrane, the effect would be minimal. But the attenuation would be much greater if the cerumen were more lateral, nearer the meatal opening, where there would be a volume velocity maximum. Figure 1–5 is a schematic of the outer ear. If cerumen or other debris were located medially (B), there would be minimal effect; however, if the blockage were located more laterally (A), there would be a much greater effect, as the cerumen would function as a "mute" in this position. This has been verified and demonstrated by Narayanan et al. (2019).




Figure 1–5. Schematic of the outer ear canal that functions primarily as a quarter-wavelength resonator. The standing wave associated with this resonator type shows a volume velocity minimum adjacent to the tympanic membrane (B) and a volume velocity maximum at the lateral side (A). Cerumen or other debris near (B) would have a minimal effect but would have a much greater effect if the cerumen was located more laterally near (A), with an associated greater conductive hearing loss.


SUMMARY AND CLINICAL RAMIFICATIONS

The acoustic behavior of the vast majority of musical instruments can be described by either a quarter-wavelength resonator model or a half-wavelength resonator model. In both cases, the length (L) of the resonating portion of the instrument is a primary element, with longer musical instruments resonating at a lower frequency. The quarter-wavelength resonator instruments have odd numbered multiples of their fundamental, whereas the half-wavelength resonator instruments have integer multiples of their fundamental. In some cases, musical instruments possess both quarter- and half-wavelength resonant properties. This can be verified using clinical real ear measurement systems where the reference microphone and the loudspeaker have been disabled. The acoustics of musical instruments are similar in many ways to those of the human vocal tract, a car muffler system, or even the outer ear. A study of the acoustics of any of these systems can improve our understanding of other, perhaps more obscure, systems.

REFERENCES

Bader, R. (Ed.). (2018). Springer handbook of systematic musicology. Springer-Verlag.
Chasin, M. (2005). A re-examination of the etiology of the REUG — Did we get it completely right? The Hearing Journal, 58(12), 22–24.
Chasin, M. (Ed.). (2009). Hearing loss in musicians: Prevention and management. Plural Publishing.
Fant, G. (1960). Acoustic theory of speech production. Mouton.
Morrison, A. C., & Rossing, T. D. (2018). Percussion musical instruments. In R. Bader (Ed.), Springer handbook of systematic musicology (pp. 157–170). Springer-Verlag.
Narayanan, D. A., Raman, R., & Chong, A. W. (2019). The role of occlusion of the external ear canal in hearing loss. Turkish Archives of Otorhinolaryngology, 57(3), 122–126.

2
Music (and Speech) for the Audiologist

Without explicitly recognizing it, audiologists have all of the tools needed for the understanding and analysis of music. In some cases, any limitation can be traced to a lack of extrapolation of a concept and, in other cases, it is merely terminology. A discussion of musical notes versus fundamental frequencies (in Hz) is one such area. A full list of the various musical notes and the values of their fundamental frequencies is found in Appendix A of this book. Beginning with this chapter (and continuing in Chapters 3 and 4), there will be icons delineating where more research needs to be performed; many of these questions can be addressed within the confines of a student research project, such as an AuD Capstone study. All 12 of these research studies are collected in Appendix B of this book.

LETTERS AND FREQUENCIES

Musicians use the letters A, Bb, and C, whereas audiologists would say 440 Hz, 466 Hz, and 524 Hz. Depending on the situation, both descriptions may be correct, but each can still lead to an oversimplification.


Figure 2–1 shows several musical notes along with their fundamental frequencies. Depending on the musical instrument being played, the note A may indeed have its fundamental energy at 440 Hz, but it also may have higher frequency harmonics whose exact amplitudes vary depending on the quality of the instrument and on which instrument is being played. For example, a flute and a violin are both "half-wavelength resonators" (as are the human vocal cords) and, as such, would have energy at A [440 Hz], but also at integer multiples of 440 Hz (namely 880 Hz, 1320 Hz, 1760 Hz, and so on). The amplitudes of the higher frequency harmonics define the timbre, but they also serve to help distinguish between various instruments, such as a flute and a violin. To say that "middle C" on a piano keyboard is exactly at 262 Hz is both true and false. Its fundamental frequency is at 262 Hz, but a piano note is made up of higher frequency harmonics at integer multiples of 262 Hz. And the amplitudes of these harmonics assist a listener in distinguishing a piano from the flute or violin, or any other "half-wavelength" resonator instrument. Other musical instruments, such as the clarinet, trumpet, and trombone, are "quarter-wavelength" instruments and have odd numbered harmonics of the fundamental. An A [440 Hz] on a trumpet would have harmonics at 1320 Hz (440 × 3), 2200 Hz (440 × 5), and so on.

Figure 2–1.  Some musical notes and their fundamental frequencies (in Hz) on a treble clef.

1. I wish to thank and to acknowledge composer Shaun Chasin (http://www.chasin.ca) for many of the spectra and all of the audio files used in this book.
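As a hedged illustration of the note-letter versus frequency discussion (the book's Appendix A is the authoritative chart; this sketch simply uses the equal-tempered relationship with A4 = 440 Hz, and the function names are made up for the example), the fragment below converts a few note names to fundamental frequencies and lists the first harmonics for a half-wavelength (integer multiples) and a quarter-wavelength (odd multiples) instrument.

```python
# Equal-tempered note frequencies relative to A4 = 440 Hz, plus harmonic series
# for the two resonator families discussed in the text. Illustrative only; the
# book's Appendix A chart (and its rounding) is the reference.
A4_HZ = 440.0
NOTE_OFFSETS = {"C": -9, "C#": -8, "D": -7, "Eb": -6, "E": -5, "F": -4,
                "F#": -3, "G": -2, "Ab": -1, "A": 0, "Bb": 1, "B": 2}


def note_to_hz(name, octave):
    """Fundamental frequency (Hz), e.g. ('A', 4) = 440, ('C', 4) ~ 261.6."""
    semitones_from_a4 = NOTE_OFFSETS[name] + 12 * (octave - 4)
    return A4_HZ * 2.0 ** (semitones_from_a4 / 12.0)


def harmonics(f0, resonator="half", n=3):
    """First n harmonics above f0: integer multiples for a half-wavelength
    instrument, odd multiples only for a quarter-wavelength instrument."""
    multipliers = range(2, n + 2) if resonator == "half" else range(3, 3 + 2 * n, 2)
    return [m * f0 for m in multipliers]


print(round(note_to_hz("C", 4)))        # ~262 Hz (middle C)
print(harmonics(440.0, "half", 3))      # flute/violin: 880, 1320, 1760 Hz
print(harmonics(440.0, "quarter", 2))   # trumpet: 1320, 2200 Hz
```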


Many adult males, including myself, have a speech fundamental frequency of around 125 Hz — roughly an octave below middle C. And, the vocal cords being a "half-wavelength resonator," there are harmonics at integer multiples of 125 Hz, similar to the piano, flute, or violin (at 250 Hz, 375 Hz, 500 Hz, and so on). A listing of some examples of quarter- and half-wavelength resonator instruments can be found in Table 1–1 in Chapter 1.

A resulting feature of any wavelength-associated musical instrument is that a half-wavelength resonator musical instrument (e.g., saxophone) has twice as many harmonics as a quarter-wavelength resonator musical instrument (e.g., clarinet) of the same length for any given frequency range. Despite the increased density of harmonics in a saxophone, does this translate into twice as many auditory cues? A hypothesis is that a half-wavelength resonator instrument may be better for a hard-of-hearing person (or child) to hear (Figure 2–2). This may depend on the severity, and even the audiometric configuration, of the hearing loss, and may have ramifications for the answer to the question, "What musical instrument should my hard-of-hearing child learn to play?" If it turns out that an instrument with a denser, more tightly packed harmonic structure does provide a better sound or timbre for a hard-of-hearing person, then a saxophone, being a half-wavelength resonator instrument, may be better than a clarinet. Of course, other factors do come into play acoustically, in that larger (or longer) musical instruments have a lower set of resonant frequencies, such that more of the music may be within a healthier region of hearing for that person. To my knowledge, this has never been formally studied and more research will be required (see Study 2–1).

Figure 2–2.  The Bb clarinet and Bb saxophone playing the same note. The clarinet is a quarter-wavelength instrument with odd-numbered harmonics of the fundamental, whereas the saxophone is a half-wavelength instrument with harmonics at integer multiples of the fundamental. The more densely packed harmonic structure curve on the top is the saxophone. The arrow indicates a significant harmonic of the saxophone that would not be seen in the clarinet. There is significantly more harmonic energy in the saxophone than the clarinet.

SPEECH VERSUS MUSIC

Frequency of Harmonics

A salient feature of speech is that all speech sounds can be divided into two linguistic categories: sonorants and obstruents. Sonorants have most of their energy in the lower frequency region and are (typically) voiced sounds that are made up of continuous airflows and have characteristic resonant or formant frequencies. In English, this includes all vowels, nasals, and liquids ([l] and [r]). In contrast, the obstruents do not have resonant or formant frequencies but do have broad bands of energy, typically in the higher frequency regions. This includes stops, affricates, and fricatives. Obstruents can have a secondary effect on the formant pattern (or transition) of the adjacent vowel or nasal, and this is certainly an important element in the perception of the speech sound. However, speech in general has low frequency sonorants with well-defined harmonics underlying a well-defined resonant pattern, and also high frequency obstruents with no well-defined resonant pattern and little, if any, high frequency harmonic information.

With the exception of most percussion instruments, music, similar to the sonorants in speech, is composed of well-defined harmonics and resonant patterns. For music, this is true of the lower frequency sounds as well as the higher frequency sounds. Even the top note on the piano keyboard (C [4186 Hz]) has its first harmonic just above 8000 Hz and its second harmonic at 12,558 Hz (3 × 4186 Hz). Changing the frequency location of a harmonic by only one-half of one semitone can be problematic and would significantly degrade the quality (and acceptability) of music (see Chapters 3 and 4 for more information, and Audio Files 3–4 and 3–5). Having said this, the main reason why the various forms of hearing aid frequency lowering work so well with speech is that only the higher frequency obstruents are affected, which has little or no effect on the harmonic structure (Chasin, 2016, 2020). This limits the usage of (nonlinear) frequency lowering for most forms of music.
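To put the "one-half of one semitone" figure into Hz, the small sketch below (an illustration, not a calculation from the book) computes how far a harmonic moves when shifted by half a semitone in equal temperament.

```python
# A semitone is a frequency ratio of 2**(1/12); half a semitone is 2**(1/24),
# roughly a 2.9% change. The same ratio is a larger shift in Hz for higher
# harmonics, which is one reason harmonic misplacement is audible in music.
HALF_SEMITONE = 2.0 ** (1.0 / 24.0)

for harmonic_hz in (440.0, 1320.0, 4186.0):
    shifted = harmonic_hz * HALF_SEMITONE
    print(f"{harmonic_hz:7.0f} Hz -> {shifted:7.0f} Hz "
          f"(shift of {shifted - harmonic_hz:5.0f} Hz)")
```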

Sound Levels of Harmonics

All musical instruments (and even the human vocal tract) have a mechanical shape and volume that create their own regions of greater harmonic energy. These are called formants and can create a series of resonances where the underlying harmonics can be "amplified" or enhanced in certain frequency regions. Despite having identical frequency components (i.e., harmonic frequency locations) for a given musical note, a violin and a flute differ in the amplitudes of their harmonics because of the different shapes of the two instruments. (And, of course, the human vocal tract can change its shape, generating different speech sounds and consequently different resonances or formants.)

Musicians sometimes refer to this localized increase in harmonic amplitude as the "fat" part of an instrument. For example, a flute has its "fat" part at 880 Hz, with higher amplitude harmonics in this region. It is simply impossible to play a flute at A [880 Hz] softly. This relatively higher level of harmonic energy defines the sound quality of the flute and helps to distinguish it from the violin, whose "fat" part may be in a different frequency region. This can become problematic when a multi-channel compression hearing aid system is used and is programmed improperly. Multi-channel compression inherently treats the lower frequency fundamental energy differently from higher frequency harmonic information, thereby altering the amplitude balance between the fundamental and its harmonics — a flute may begin to sound more like a violin, or an oboe (see Chapter 3 and Audio File 3–10). Of course, even though the exact frequencies of the harmonics and their associated amplitudes are quite important in the identification of a musical instrument, other dynamic factors, such as attack and decay parameters of the note, as well as vibrato, are also important. A further examination of exactly how poor the settings of multi-channel compression need to be before instrument identification becomes difficult, or the change in timbre becomes problematic, would provide important clinical (and hearing aid design) knowledge (see Study 2–2).

LOUD SPEECH AND LOUD MUSIC

Modern hearing aid technology has the capability to be responsive to varying levels of inputs, namely, level-dependent compression. A hearing aid will generate significant gain for soft level inputs, less gain for medium level inputs, and sometimes no (or even negative) gain for louder level inputs. Many people simply do not need a lot of hearing aid gain for the higher-level components of speech (and music). Understandably, there has been a significant amount of research in this area for speech, but relatively little for music. What exactly is loud music, and is it the same thing for low frequency sounds and higher frequency sounds?

Figure 2–3A shows the spectrum of the low-back vowel [ɑ] at a quieter level and at a higher sound level. Figure 2–3B shows the same thing for the voiceless alveolar consonant [s]. As can be seen, as a person speaks at a higher level, it is primarily the lower frequency vowels and nasals (i.e., the sonorants, Figure 2–3A) that increase in level, whereas the higher frequency obstruent consonants (Figure 2–3B) are only at a slightly higher level. In conversational speech, one simply cannot utter a loud [s] as one can utter a loud vowel [ɑ]. For speech, as the speaking level increases, there is a relative low frequency boost as compared with the higher frequency region. This is all verifiable in any clinical setting using real ear measurement tools, with the modifications described in Chapter 1 to disable the reference microphone and the loudspeaker. These are comments about the relative sound levels of these sounds. The absolute level will depend on the speaking level, the syntactic category, and the location of the speech sound within a word and within the sentence. Nouns and sentence-initial words tend to be spoken at a higher level than "helper words" such as adjectives and prepositions in conversational speech, especially if they occur nearer to the end of a sentence or phrase.

Music is a different type of input for hearing aids than is speech. Where speech can be soft (55 dB SPL), medium (65 dB SPL), or loud (80 dB SPL), especially for the lower frequency region, music can be soft (65 dB SPL), medium (80 dB SPL), or loud (95 dB SPL); music, especially live music, tends to be shifted up one "loudness" category as compared with speech. There are other "statistical" differences between speech and music, such as the crest factor, as well as spectral and temporal (attack/release) features; however, a major difference is that music is played, and frequently listened to, at a higher sound level than speech. Table 2–1 shows examples of the sound levels of some instruments as measured from 3 meters away at a mezzo forte (mf), or average, playing level. This varies significantly from musician to musician and also depends on some of the acoustic characteristics of the musical instrument (based on Chasin, 2006).


Figure 2–3.  A. Spectrum of the low-back vowel [ɑ] spoken quietly and then at a louder level (top gray) showing the increase in harmonic energy. Note that the fundamental frequency also increases slightly for the louder-spoken case because the laryngeal muscles are more contracted, thereby increasing their tension.



Figure 2–3.  B. Spectrum of the high frequency obstruent [s] spoken quietly and then at a louder level (top gray) showing a relatively slight increase (as compared with the vowel [ɑ] in Figure 2–3A.)


Table 2–1.  Average Sound Levels of a Number of Musical Instruments (From Over 300 Musicians) Measured From 3 Meters on the Horizontal Plane

Musical Instrument          dB(A) Range Measured From 3 Meters
Cello                       80 to 104
Clarinet                    68 to 82
Flute                       92 to 105
Trombone                    90 to 106
Violin                      80 to 90
Violin (near left ear)*     85 to 105
Trumpet                     88 to 108

Note. *Also given is the sound level for the violin measured near the left ear of the player. (Chasin, 2006).

So far, these adjustments seem rather straightforward: for a music program, adjust "quiet" music to be similar to medium speech; adjust "medium" music to be similar to loud speech; and adjust "loud" music to be similar to very loud speech. Thus, overall, perhaps subtract 5 to 10 dB from the amplification for loud music in a music program relative to what would be programmed for loud speech. And in some cases, depending on the fitting formula, if the sensorineural hearing loss does not exceed a moderate level, then 0 dB of amplification may be required for loud inputs (of 90 to 95 dB SPL). Simply removing the hearing aid may optimize the listening to the music, especially live music (Chasin, 2012).

Can we definitively say that music has well-defined features for soft, medium, and loud levels? Figure 2–4 is a spectrum for a French horn being played at a quiet level (pp, pianissimo) and at a higher level (ff, fortissimo). The file, Audio 2–1, demonstrates this difference. This relatively high frequency bias, as the playing level is increased, is the case for all brass instruments and also for reeded woodwind instruments, such as the clarinet and saxophone (Figure 2–5). Audio File 2–2 also demonstrates this high frequency cue for reeded woodwinds. That is, unlike louder speech, an increased playing level for brass and woodwind instruments shows up primarily as a high frequency cue.


Figure 2–4.  Spectrum of a French horn played quietly (pp) and again at a louder level (ff) (top gray) showing the relative increase in the higher frequency region where the fundamental only exhibits a slight increase in sound level.


Figure 2–5.  Spectrum of a clarinet played quietly (pp) and again at a louder level (ff) (top gray) showing the relative increase in the higher frequency region where the fundamental only exhibits a slight increase in sound level.


Figure 2–6 is a spectrum for stringed instruments, such as the violin and cello, where there is minimal bias: as the playing level increases, the relative balance of low frequency and high frequency sound energy is maintained. Audio File 2–3 demonstrates this relative balance for stringed instruments as the playing level is increased. With stringed instruments, one can simply reduce the playing level and the balance between the low and high frequency regions will be maintained. The interested reader will be able to perform their own spectral analyses of these files to verify the relative high frequency increase in output for loud playing, as compared with the low frequency output.

As can be seen, different instruments have different properties as the playing level becomes louder, and these can be grouped into "stringed instruments and the rest." Stringed instruments, such as the violin, appear to have similar increases in sound level for both the low frequency and higher frequency regions as the instruments are played at a higher level. With reeded instruments, as one blows harder to create a louder sound, the reed distorts and its vibration pattern becomes nonlinear. This creates additional high frequency harmonic energy with almost no increase in the lower frequency fundamental energy — a high frequency combinatorial distortion artifact related to the mechanical properties of all reeds (see Figure 2–5). As discussed above, the same is true of brass instruments, but the acoustics are different: nonlinear behavior is created by an interplay between the mouthpiece and the impedance of the air column "downwind" (see Figure 2–4). In contrast, for stringed instruments, playing louder is rather straightforward — the sound levels for all frequencies are increased similarly, with a maintenance of the spectral shape (see Figure 2–6). The resonant pattern (and the dynamic characteristics) can be affected by the quality of the instrument and even the bow that is being used.

Questions such as "What is the spectral shape of a trumpet or any other instrument?" are misleading and ignore the playing level–dependent spectral shapes of many of the contributors to music. A summary of the general effects is shown in Table 2–2 as the singing or playing level is increased from an average (or mezzo forte) level to a louder (fortissimo) level.


Figure 2–6.  Spectrum of a violin played quietly (pp) and again at a louder level (ff) (top gray) showing a balanced increase between the lower frequency fundamental energy and the higher frequency harmonic energy.


Table 2–2.  Summary Chart of the Differences in Singing or Playing Level Between Average (Mezzo Forte, or mf) and Loud (Fortissimo, or ff)

                          Low Frequency Region     High Frequency Region
Vocal                     11 to 15 dB              0 to 10 dB
Stringed instruments      11 to 15 dB              11 to 15 dB
Brass instruments         minimal                  30 dB
Reeded woodwinds          minimal                  30 dB

Note.  The spectral level increases are shown for both the low frequency region and the high frequency region as the voice increases its sound level or the instrument is played louder. For vocals, the low frequency region increases slightly more than the higher frequency region. For stringed instruments, both frequency regions increase by roughly the same amount. And for brass and reeded woodwind instruments, there is a large high frequency region increase despite minimal, if any, increase near the lower frequency fundamental frequency region. All numbers are increases from mf to ff, in relative decibels.

More information about the nonlinear acoustical behavior of many musical instruments can be found in two chapters in the Springer Handbook of Systematic Musicology, edited by Dr. Rolf Bader (2018): specifically, the chapters by Nicholas Giordano, "Some Observations on the Physics of Stringed Instruments," and by Benoit Fabre and colleagues, "Modelling of Wind Instruments." Another resource is Moore (2016), who discusses the acoustics of brass instruments. Finally, many acoustical associations, such as the Acoustical Society of America and the Canadian Acoustical Association, have ongoing committees that study and develop standards in the realm of musical acoustics.

As is found in many proprietary hearing aid fitting software programs, for louder speech inputs one can prescribe less gain for low- and mid-frequency sounds than for softer speech. The high frequency gain should be left pretty much the same for all speaking levels.


However, for reeded woodwind and brass music, at loud playing levels the higher frequency gain can be reduced relative to softer playing levels. And for the stringed instruments, there can be a gain reduction across all frequency regions as the playing level increases. This provides a fitting strategy for the hard-of-hearing client. This can (and should) be verified with real ear measures performed in the clinic. Clinically, when using a real ear measurement system, in order to perform verification using music (or speech) stimuli, the following procedure needs to be performed:

• Calibrate the real ear measurement device in the normal fashion, with an appropriate "unaided" (e.g., REUR) response being performed.

• Disable the reference microphone and the equipment speaker; this is usually accomplished by setting the stimulus level to "0 dB" or turning the stimulus to "Off." This step creates an "in situ" sound level meter.

• Perform an "aided" (e.g., REAR) response with music (or speech) as the input stimulus.

For listening to music that is primarily of one type (e.g., strings, whether amplified or not), the above fitting strategies may point you in the correct direction, but for more varied instrumental (and perhaps mixed vocal) music, such as orchestral or operatic music, it would be more of a trial-and-error approach. One could conceivably work out a fitting formula (such as a "weighted average" or dot product) for orchestral or jazz music based on the various energy contributions of each of the musical sections. To my knowledge, such a calculation has never been performed, but the results would provide important clinical input regarding how a "music program" can function and how this may be different from a "speech" program. Until more research becomes available, these should just be thought of as guiding principles rather than hard and fast rules (see Study 2–3).
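The guiding principles above can be collected into a toy lookup, sketched below strictly for illustration; the instrument families, level categories, and decibel offsets are simplifications of the discussion in this chapter, not a validated fitting formula, and every name in the code is invented for the example.

```python
# Toy sketch of the guiding principles above: relative gain adjustments (dB)
# for a "music program" as the playing level increases, by instrument family.
# Values are illustrative placeholders, not a prescriptive fitting rule.

def music_gain_offset_db(family, level, band):
    """Relative gain change (dB) versus the program's medium-level setting.

    family: 'strings', 'brass_or_reeds', or 'vocal'
    level:  'soft', 'medium', or 'loud'
    band:   'low' or 'high' frequency region
    """
    if level in ("soft", "medium"):
        return 0  # leave the softer settings alone in this sketch
    # For loud playing: strings rise broadly, so trim gain in both bands;
    # brass and reeded woodwinds rise mainly in the highs, so trim the highs.
    offsets = {
        "strings": {"low": -6, "high": -6},
        "brass_or_reeds": {"low": 0, "high": -6},
        "vocal": {"low": -6, "high": -3},
    }
    return offsets[family][band]


print(music_gain_offset_db("brass_or_reeds", "loud", "high"))  # -6 dB (example)
print(music_gain_offset_db("strings", "loud", "low"))          # -6 dB (example)
```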


THE CREST FACTOR

Another difference between the acoustics of speech and the acoustics of music is the crest factor. This is the difference in decibels (or the ratio of intensities) between the instantaneous peak and the long-term average RMS (root mean square) of the signal. The RMS of a signal is frequently chosen as one of its primary characteristics as it correlates highly with perceived loudness, and is also independent of phase (Randall, 1977). For speech, the crest factor is on the order of 12 to 15 dB but for music, it can easily be 18 to 25 dB. That is, because of the lower level of damping of musical instruments (as they are hard-walled) as compared to that of the human vocal tract (as it is soft-walled and has many constrictions), the spectrum of music has higher-level peaks than a similar frequency musical note being sung or spoken by a person. This has ramifications both for setting the output characteristics of a hearing aid and for selecting the appropriate technology in order to ensure that amplified music neither exceeds the tolerance level of the listener nor is distorted (see Chapter 4). Figure 2–7 shows the different shapes of a spectrum for speech (vowel [ɑ] with F0 of 91 Hz) and for music (baritone saxophone F# with its fundamental close to 91 Hz), with the music having a substantially (6–8 dB) higher crest factor than speech. Another element in the study of crest factors is that measurement artefacts can affect the results. Historically, crest factor measures used windows of analysis of 125 msec based on the research of Dunn and White (1940). Using a 120 msec window, this has been replicated by Cox and her colleagues (1988) and is part of the ANSI S 3.22 hearing aid measurement standard. This time window (120–125 msec) was chosen due to its relationship with the time constants related to the temporal integration in the human cochlea; however, this discussion is about the input to the front end or A/D converter for all digital hearing aids prior to even reaching the ear. Integration times that are less than 50 msec correspond with what actually reaches the hearing aid microphone, and this has nothing to do with the cochlea or cochlear integration, merely


Figure 2–7.  First spectral peaks (the fundamental frequency and four harmonics) of a spoken vowel [ɑ] with a fundamental frequency of 91 Hz and a baritone saxophone playing the same fundamental (close to F#). Note that the magnitudes of the peaks are greater (marked by an arrow on the top curve) for the hard-walled saxophone than the soft-walled human vocal tract. The instantaneous peak to long-term RMS ratio is called the crest factor. (See text for more information.)


the front-end hardware of modern hearing aids. Figure 2–8 shows differing crest factors (for speech and for music) as the window of analysis is decreased from 500 msec to 10 msec showing that the crest factor can actually be over 20 dB. Even the most recent implementation of the calculation of the Speech Intelligibility Index (SII) uses a crest factor that has been increased from 12 to 15 dB from its previous version (ANSI, S 3.5, 1997). The choice of 125 msec as an analyzing window can be problematic and would not be appropriate for the calculation of the crest factor to examine music-related inputs to hearing aids. (Chasin, 2017; Rhebergen et al., 2009).

Figure 2–8.  Based on Chasin and Hockley (2014), crest factors for both speech (black) and music (gray) are shown as the analyzing window is reduced from 500 msec to 10 msec.
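As a rough numerical illustration of the windowing effect plotted in Figure 2–8, the Python sketch below computes a crest factor as the maximum short-term RMS level (over windows of a given length) relative to the long-term RMS level. The two synthetic signals, the sampling rate, and this particular way of operationalizing the “peak” are illustrative assumptions only; they are not the stimuli or the exact analysis of Chasin and Hockley (2014).

import numpy as np

fs = 16000                                   # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)

# A "speech-like" toy signal: noise with slow, dense amplitude modulation,
# so its envelope never strays far from its long-term level.
speech_like = rng.standard_normal(t.size) * (0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 3 * t)))

# A "music-like" toy signal: a steady low-level tone with brief, much more
# intense "attacks" every half second.
music_like = 0.2 * np.sin(2 * np.pi * 220 * t)
for start in range(0, t.size, 8000):
    music_like[start:start + 160] += 2.0 * np.sin(2 * np.pi * 880 * t[:160])

def crest_factor_db(x, window_ms):
    """Maximum short-term RMS level (over windows of window_ms) minus the
    long-term RMS level, in dB."""
    n = max(1, int(fs * window_ms / 1000))
    nwin = x.size // n
    frames = x[: nwin * n].reshape(nwin, n)
    short_term_rms = np.sqrt(np.mean(frames ** 2, axis=1))
    long_term_rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(short_term_rms.max() / long_term_rms)

for w in (500, 250, 125, 50, 10):
    print(f"{w:>3} msec window: "
          f"speech-like {crest_factor_db(speech_like, w):4.1f} dB, "
          f"music-like {crest_factor_db(music_like, w):4.1f} dB")

The trend is the point: as the analysis window shrinks from 500 msec toward 10 msec, the measured crest factor grows, and it grows far more for the impulsive, music-like signal than for the more continuous, speech-like one, in keeping with Figure 2–8.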


HOWEVER . . .

The hearing aid (or cochlear implant algorithm) fitting goal for amplified speech has some similarities to that for amplified music, but also some important differences. Depending on the language, there can be different, but well-defined, speech intelligibility indices such as the SII that can assist in the programming of a hearing aid for speech. There is, however, no “music intelligibility index” or even a “music preference index,” although there have been some strides toward such a goal. Watson and Knudsen (1940), during the very early years of hearing aid fittings, noted that “paradoxically” many of their subjects preferred a frequency response that possessed a narrow band and provided the worst speech discrimination ability, and disliked the frequency response that was broadband in nature and optimized their speech communication. Optimizing preference is a very complex aspect of fitting a hearing aid with a music program, and whereas the subjects in the Watson and Knudsen study did gradually adapt to the sound, it sometimes took quite some time, if it happened at all. The clinician should be aware that a goal of preference is not necessarily an optimal hearing aid fitting for music and that there needs to be an interplay and give-and-take relationship with their clients for any music program. Although this discussion would depend on many factors, including the exact instrument(s) being played or listened to, the acoustic properties of amplified music and adjustments to improve the preference, like those for amplified speech, can all be verified clinically using real ear gain measurement tools. In addition, some “music preference scales” are already starting to be published in the literature. Having said all of this, as will be discussed in Chapter 4, a “first fit” music program will not be that dissimilar to a “speech in quiet” program, following a “less is more” philosophy. Mead Killion (personal communication, 1987) told a story about Dr. Harvey Fletcher — one of the fathers of psychoacoustics — who purchased a new “high fidelity” radio for his wife. This new technology had an extended high frequency range but Mrs. Fletcher hated the new sound. Dr. Fletcher soldered some 20 pF capacitors in parallel across the loudspeaker wires, which acted as a high-cut filter.


Every week he would surreptitiously snip one capacitor off, thereby sneaking in more high frequency output. Eventually all capacitors had been removed and Mrs. Fletcher was quite happy with the sound. In a similar way, sound quality (and amplification characteristics for a new hearing aid user) can be something that changes over time. Many hearing aid manufacturers even have a “first fit for new hearing aid users” program that initially undershoots the target in hopes that adaptation over time will allow the hard-of-hearing person to accept more gain.

A FINAL ACOUSTIC ELEMENT

A significant difference between music and speech is that music is truly broadband both at the synchronic level and the diachronic level, whereas speech is always narrowband in the short term. At any one point in time, music is made up of lower frequency fundamental energy AND also higher frequency harmonic energy, and always both. However, at any one point in time, speech is either made up of low frequency sonorant energy OR high frequency obstruent energy, but never both. With speech, one can never have both substantial low and high frequency energy at the same time, at least when only one person is speaking (Figure 2–9). Although it is true that the “long term” speech spectrum is wide band, at any one point in time, its bandwidth is quite restricted as compared with music. This has yet-to-be-determined ramifications for whether a single hearing aid receiver, or multiple receivers — each specializing in its own frequency bandwidth — would be better or worse for speech versus music. A hypothesis is that amplified music should be transduced through one receiver (to respond to its concurrent wideband nature) and speech through two receivers (with low frequency and high frequency information treated separately). This hypothesis also has implications for in-ear monitor design. This is an important area of study but one in which there is little current information. More study needs to be performed (see Study 2–4).


Figure 2–9.  Spectrogram showing that sounds are typically either low frequency emphasis sonorants such as vowels, nasals, or liquids, or higher frequency obstruents such as aspiration, fricatives, and affricates, but not both low and high frequency at the same point in time. The utterance is “Children are happy.” (See text for more information.)
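One way to see the synchronic difference illustrated by the spectrogram in Figure 2–9 is to compare, frame by frame, the energy below and above some crossover frequency. The Python sketch below does this for two synthetic toy signals; the signals, the 2 kHz crossover, and the frame length are illustrative assumptions only, not measurements from this book’s audio files.

import numpy as np

fs = 16000
t = np.arange(0, 2.0, 1 / fs)

# Toy "speech": alternating low frequency vowel-like tones and high frequency
# fricative-like tones, but never both at the same time.
vowel = (t % 0.4) < 0.25
speech = np.where(vowel,
                  np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t),
                  0.3 * (np.sin(2 * np.pi * 4000 * t) + np.sin(2 * np.pi * 6000 * t)))

# Toy "music": a low fundamental with its higher harmonics sounding at once.
music = sum(np.sin(2 * np.pi * 110 * k * t) / k for k in (1, 2, 4, 8, 16, 32))

def band_energies(x, frame=512, crossover_hz=2000):
    """Per-frame spectral energy below and above the crossover frequency."""
    nwin = x.size // frame
    frames = x[: nwin * frame].reshape(nwin, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frame, 1 / fs)
    return (spec[:, freqs < crossover_hz].sum(axis=1),
            spec[:, freqs >= crossover_hz].sum(axis=1))

for name, x in (("speech-like", speech), ("music-like", music)):
    low, high = band_energies(x)
    both = np.mean((low > 0.1 * low.max()) & (high > 0.1 * high.max()))
    print(name, "fraction of frames with simultaneous low AND high energy:",
          round(float(both), 2))

In this toy example the music-like signal carries substantial energy in both bands in essentially every frame, whereas the speech-like signal almost never does, which is the pattern the receiver hypothesis above would need to exploit.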


OUR CLIENT’S STORY When switching gears from acoustics to our individual clients, it is imperative to listen to their story. Audiology, similar to all other clinical sciences, begins with a case history. The term “case history” tends to be rather simplistic and narrow because it tends to elicit information about medically based issues such as pain, vertigo, tinnitus, and asymmetrical complaints. Although this is important and should be within the scope of all first-contact client interviews, this is only the beginning. One needs to understand what it is about music that is important to the client. Are they instrument players? Does their income and livelihood depend on musical performance? If they are only consumers of music, what type(s) and in what situation? Do they have a passion for going out to see a concert or a performance, or are they just looking for an improvement listening at home? Does the musician have a sense of the sound level of their live performances and are there any known “noisy culprits”? Depending on the answers to these questions, audiological intervention and hearing aid prescription may be quite different. Assistive devices may be useful, as are certain Personal Sound Amplification Products (PSAPs), or perhaps no amplification would be required. Whereas this is important for all listening situations, including speech, this is doubly important when listening to, or playing music. For more information, refer to the Clinical Consensus Document Audiological Services for Musicians and Music Industry Personnel by the American Academy of Audiology (2020).

SUMMARY AND CLINICAL RAMIFICATIONS Several clinical questions arise out of the comparison of speech and music. Depending on the musical instrument, there may be twice as many harmonics in any one frequency range as another instrument and this increased harmonic density may translate to improved audibility for hard-of-hearing players. Questions relating to “what instrument should my hard-of-hearing child


learn” become very important and need to be addressed. Bass instruments would generally be better, but perhaps less heavy, half-wavelength resonance instruments such as the soprano saxophone can also be considered. Acoustically, music and speech as an input to a hearing aid are quite different, as are the differences between quiet level inputs and higher-level inputs. As shown in Table 2–2, brass and reeded woodwinds tend to have a relative high-frequency increase as the playing level is increased, whereas there is minimal increase in the low-frequency levels during louder playing. Stringed instruments have a more balanced increase in playing level between the low frequency region and the high frequency region, and speech tends to be biased more toward the lower frequency region increase as the vocal or singing level is increased. Acoustically, music has a higher crest factor than a similar level of speech, and caution needs to be taken to ensure that the peaks of amplified music do not exceed one’s tolerance level or significantly contribute to distortion of the sound. And finally, unlike speech, there is no “music intelligibility index” as there is an SII for speech. “Preference” does not always correspond with what is optimal, and preference is something that can (and usually does) change over time. Some strides have been made toward indices for music preferences, but these need to be used in conjunction with other measures for amplified music, especially real ear verification. A rather blunt, but effective measure, is whether the client who was just fit with hearing aids calls the very next day and wants a refund — clearly something went wrong.

REFERENCES American Academy of Audiology. (2020). Clinical consensus document audiological services for musicians and music industry personnel. http://www.Audiology.org American National Standards Institute. (1997). American National Standard methods for calculation of the Speech Intelligibility Index. ANSI S 3.5-1997. Chasin, M. (2006). Hearing aids for musicians. Hearing Review, 13(3), 1–16.


Chasin, M. (2012). Okay, I’ll say it: Maybe people should just remove their hearing aids when listening to music! Hearing Review, 19(3), 74. Chasin, M. (2016). Back to basics: Frequency compression is for speech, not music. Hearing Review, 23(6), 12. Chasin, M. (2017). Use of a novel technique to improve amplified sound quality for both music and speech. Hearing Review, 24(8), 32–36. Chasin, M. (2020). The problem with frequency transposition and music, Part 1. Hearing Review. https://www.hearingreview.com Chasin, M., & Hockley, N. S. (2014). Some characteristics of amplified music through hearing aids. Hearing Research, 308, 2–12. Cox, R. M., Mateisch, J. N., & Moore, J. N. (1988). Distribution of short-term RMS levels in conversational speech. Journal of the Acoustical Society of America, 84, 1100–1104. Dunn, H. K., & White, S. D. (1940). Statistical measurements on conversational speech. Journal of the Acoustical Society of America, 11, 278–288. Fabre, B., Gilbert, J., & Hirschberg, A. (2018). Modeling of wind instruments. In R. Bader (Ed.), Springer handbook of systematic musicology (pp. 121–137). Springer-Verlag. Giordano, N. (2018). Some observations on the physics of stringed instruments. In R. Bader (Ed.), Springer handbook of systematic musicology (pp. 105–118). Springer-Verlag. Moore, T. R. (2016). The acoustics of brass musical instruments. Acoustics Today, 4(12), 30–37. Randall, R. B. (1977). Application of B&K equipment to frequency analysis (2nd ed.). Bruel & Kjaer, Naerum. Rhebergen, K. S., Versfeld, N. K., & Dreschler, W. A. (2009). The dynamic range of speech, compression, and its effect on the speech reception threshold in stationary and interrupted noise. Journal of the Acoustical Society of America, 126(6), 3236–3245. Watson, N. A., & Knudsen, V. D. (1940). Selective amplification in hearing aids. Journal of the Acoustical Society of America, 11, 406–419.

3  Hearing Aids and Music:  What the Literature Says

Even well-controlled laboratory experiments may not be representative of how a person may be using their hearing aids to play or listen to music. Many of the music stimuli that are used in some of these experiments are at a relatively quiet level and are more representative of speech sound levels than of music. Although it is true that some forms of music listening, especially if the environment is quiet, are not too dissimilar from the levels of average speech, most music tends to be much louder; this is especially the case if the music is portable, where the listener walks or jogs past a noisy construction site, or if the music is live. Experiments that use lower-level stimuli cannot always be generalized to the clinical environment where an audiologist is required to make fitting decisions on the spot. Another example of how some well-controlled laboratory experiments may have gone awry would be those that ignore the “acoustics” of the hearing aid fitting, and only focus on the digital algorithms or hardware technologies. As we shall see, the research is relatively clear on both an extended low frequency and high frequency emphasis (as long as the individual can tolerate it), but with proper venting in the earmold or nonoccluding Receiver In the Canal (RIC) coupling tips, significant


low frequency (unamplified) sound can reach the listener’s ear despite the hearing aid “specification data” sheet stating that the hearing aid has a low frequency cutoff of 200 Hz. And yet, a third characteristic of some hearing aid research laboratories is that they only deal with what is actually in the marketplace and not what can be. This is typically stated clearly and is never intended to mislead the reader, but this feature can be problematic. For example, if the only hearing aids in the marketplace have three channels or more, and if the best setting for some aspects of the experiment is achieved with only three channels, one should not conclude that for some elements of an optimal hearing aid fitting, three channels would be best. It could be two, or perhaps even a single channel hearing aid that would be the best (despite no longer being commercially available, at least as a hearing aid). Despite this, given the strengths of some of the ingenious research, but given the challenges of a sometimes-hurried clinical environment, we cannot always translate research into clinical interventions. We sometimes need to view the research only as a “directional pointer” to assist the clinician in making appropriate adjustments to their fitting approaches.

PEAK INPUT LIMITING LEVEL One of the misconceptions in the field of audiology is for the clinician to restrict oneself to the various electro-acoustic and fitting parameters that are delineated in ANSI S 3.22, or the equivalent IEC 60118-0 standard. ANSI S 3.22 —“Specification of Hearing Aid Characteristics” — is merely a reporting standard that specifies, along with tolerances, the method to assess a hearing aid for a number of electro-acoustic parameters in a properly designed hearing aid test box. These may include measures of frequency response, OSPL90, gain, distortion, equivalent input noise, and up until recently, the attack and release times for amplitude compression. There are no standards for how a hearing aid should function for a number of differing inputs. ANSI S 3.22 makes no recommendations for whether a hearing aid is optimal or not,


for speech, for music, and for non-English languages. It is merely a reporting standard. In the most recent implementation of ANSI S 3.22, attack and release times will have been relegated to an optional annex. This does not mean that attack and release times are not important for speech and music, but that modern hearing aid technology has surpassed the need to measure these parameters that were based on the function of a capacitor that is no longer used with digital technology. Attack and release times can be very important for setting modern digital hearing aids for a client, but they no longer appear as relevant for the ANSI S 3.22 standard. And there are some parameters that have never been part of the ANSI S 3.22 standard that are quite important, especially in the realm of hearing aids and music. One such measure includes the ability of the analog-to-digital (A/D) converter and its associated circuitry to handle inputs in excess of 90 to 95 dB SPL without appreciable distortion. The literature has referred to this parameter by several names, such as the “Upper Input Level Limit” (UILL; Oeding & Valente, 2015), “Extended Input Dynamic Range” (EIDR; Plyer et al., 2019) or the “peak input limiting level.” (Chasin & Russo, 2004; Chasin & Hockley, 2014, 2018). This refers to the highest “input” sound level that can be transmitted through the A/D converter and associated circuity (such as analog compressors), to the software digital algorithm portion of the hearing aid without significant distortion. The primary element of digital signal processing for hearing aids that has been problematic is the 16-bit architecture that, until recently, has been the mainstay of the hearing aid industry. The selection of a 16-bit architecture is quite adequate for speech signals as an input to a hearing aid. With 16 bits, the quantization error noise floor is below that which normal hearing people can hear, and the associated dynamic range, in practice, is on the order of 85 to 90 dB. With the highest input sound levels of human speech being far less than 90 dB SPL, a 16-bit architecture is more than adequate. However, instrumental music is not constrained by the limits of the human vocal mechanism. Peak levels in excess of 100 dB SPL are routinely measured even for very quiet music, and sound levels in excess of 120 dB SPL are not unheard of. A 16-bit system cannot handle these higher input levels unless some alterations


are made to this architecture in the way of engineering design decisions. Because the “peak input limiting level” is associated with the A/D converter and other “front end” technology, this is prior to any digital software algorithms that may be applied. This is, therefore, considered a hardware issue and not something that can be resolved with software programming. Once an input signal is distorted, no amount of software manipulation can be used to resolve this front-end distortion. This is shown graphically in Figure 3–1A (undistorted input) and Figure 3–1B (distorted input). For the interested reader, two audio files (Audio File 3–1 and Audio File 3–2) are available to listen to the degraded sound of music as the peak input limiting level is altered from 115 dB SPL, to 105 dB SPL, to 95 dB SPL, and back to 115 dB SPL again. This is demonstrated for music (Audio File 3–1) and for speech

Figure 3–1.  A. Recording with the threshold of the compressor set to 110 dB before the A/D converter; input level 110 dB SPL. The black waveform is the original input file. Reprinted from Chasin, M., and Hockley, N. (2014). Some characteristics of amplified music through hearing aids, Hearing Research, 308, 2–12, Elsevier Limited.


(Audio File 3–2). Note that the highest levels of speech are typically well within the operating characteristic of the hearing aid such that no degradation would be noticed. The same, however, cannot be said for music. There are some reports in the literature, such as Plyer, Easterday, and Behrens (2019) that did not find that having a low peak input limiting level is a detriment to listening to music. In the Plyer et al. study, although they were able to demonstrate an issue in the laboratory, they were not able to find an equivalent problem in their real-world field trials, especially for speech. However, this is as expected as the levels of speech, and some forms of music listening, are at a relatively low sound level and their quality would not be compromised by an excessively low peak input limiting level.

Figure 3–1.  B. Recording with the threshold of the compressor set to 95 dB before the A/D converter; input level 110 dB SPL. The black waveform is the original input file. Reprinted from Chasin, M., and Hockley, N. (2014). Some characteristics of amplified music through hearing aids, Hearing Research, 308, 2–12, Elsevier Limited.
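The effect demonstrated in Figure 3–1 and Audio Files 3–1 and 3–2 can be sketched numerically: hard-limiting a signal whose peaks exceed the front end’s ceiling flattens the waveform toward a square wave and generates odd-order harmonic distortion. In the Python sketch below, the mapping of dB SPL to digital amplitude, the 105 dB SPL “ceiling,” and the pure-tone inputs are illustrative assumptions only, not any manufacturer’s actual values.

import numpy as np

fs = 32000
t = np.arange(0, 0.5, 1 / fs)

def tone_at_spl(freq_hz, level_db_spl):
    # Arbitrary mapping chosen only for this illustration: 100 dB SPL maps to
    # an amplitude of 1.0, so each 20 dB is a factor of 10 in amplitude.
    return 10 ** ((level_db_spl - 100) / 20) * np.sin(2 * np.pi * freq_hz * t)

def front_end(x, ceiling_db_spl):
    # Hard limiting at the "peak input limiting level" of the front end.
    ceiling = 10 ** ((ceiling_db_spl - 100) / 20)
    return np.clip(x, -ceiling, ceiling)

def thd_percent(x, f0):
    # Energy at harmonics 2*f0..7*f0 relative to the energy at f0.
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    level = lambda f: spec[np.argmin(np.abs(freqs - f))]
    harmonics = np.sqrt(sum(level(k * f0) ** 2 for k in range(2, 8)))
    return 100 * harmonics / level(f0)

speech_level_input = tone_at_spl(1000, 75)    # roughly conversational speech
music_level_input = tone_at_spl(1000, 110)    # roughly live-music peaks
for name, x in (("75 dB SPL input", speech_level_input),
                ("110 dB SPL input", music_level_input)):
    out = front_end(x, ceiling_db_spl=105)
    print(name, "-> distortion after a 105 dB SPL front end:",
          round(float(thd_percent(out, 1000)), 1), "%")

The speech-level input passes through essentially untouched, while the music-level input is clipped and acquires substantial harmonic distortion; because this happens before any digital processing, no later software stage can undo it, consistent with the discussion above.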


Strictly speaking, it is the digital word-length that is “16 bits” (or depending on the architecture, 20 bits or 24 bits). Although this is improving with each new generation of IC innovation, for practical current consumption issues, hearing aids historically could not get past the 14-bit barrier with an associated dynamic range of 85 dB (Steve Armstrong, personal communication, 2021). Hearing aid microphones have a 95 dB dynamic range, so in order to take full advantage of the capabilities of modern microphones, this extra 10 dB of dynamic range needs to be added, either in the analog domain, where some form of compression is instituted prior to the A/D converter, or by a form of semi-automatic digital algorithm that continuously provides the digital input stages with sound that is within the optimal operating characteristic of the converter. (Both of these front-end assisting technologies have been on the market for at least a decade and continue to be available even with many hearing aids that use post-16-bit architecture.) However, with the most recent innovations in IC technology, A/D converters, depending on how they are implemented by a manufacturer, can now easily provide a dynamic range of 109 dB while using the same current consumption as the older 14-bit technologies. The option exists to provide even larger dynamic ranges (up to 112 dB), but with a greater current consumption (Jim Ryan, personal communication, 2021). Front end limiting due to a poorly configured A/D conversion process is similar to an output that is set too low for the input and the hearing aid gain. There will be peak clipping with the creation of a square wave in the time domain. This corresponds to the creation of odd numbered harmonics in the frequency domain. This lays the basis for a number of quick tests to determine whether the front end of any hearing aid can handle the higher level inputs associated with music. One can use a pure tone (e.g., 1000 Hz) at a sufficiently high input level and measure the output, either in a person’s ear with real ear measurement or in a 2-cc coupler and hearing aid test box. If the hearing aid gain is set to be low at this frequency, and the OSPL90 output is set to be high (where input + gain remains below the OSPL90), then any distortion that appears in the measured output points to the front end rather than to output limiting.

<55 dB Hearing Loss        >55 dB Hearing Loss        Steeply Sloping Audiogram
Broad bandwidth            Narrow bandwidth           Narrow bandwidth

Based on the work of Moore et al. (2000), Moore (2001), Aazh and Moore (2007), and Ricketts et al. (2008). Note that these rules are similar for speech, as well as music.


SMOOTHNESS OF THE FREQUENCY RESPONSE

There are many factors that may contribute to the lack of a smooth frequency response. And similar to the amplitude compression and frequency bandwidth characteristics of hearing aids, these results are equally applicable to speech and to music as inputs to a hearing aid.

Peaks in the Frequency Response Clinically, one generally views a smooth frequency response as one of several hearing aid fitting goals. This derives from not only clinical experience but from publications in the 1980s, demonstrating that a smoother frequency response was better. (See for example, Libby, 1981.) As far back as the 1980s, acoustic filters were included in hearing aid earhooks to smooth the 1000 Hz resonance, and depending on where the filter was in the earhook or tubing, also the resonant energy in the 3000 Hz region. Clinical comments such as “that sounds much better” once a filter was introduced were commonplace occurrences. There are even current advertisements in the hearing aid industry showing “improvements” because a frequency response looks smoother. But just because everyone, from the field of audio to our hard-of-hearing clients agree that smoother is better, it doesn’t mean that this is correct. There are two areas where having a smoother or even flat frequency response is certainly better. One is the field of transducers (receivers and microphones) and the other is the maximum output that we need to control with hearing aid fittings. In the case of the transducer field, a flat frequency response makes perfect sense. A flat frequency response means that the media on which the music or speech is played, or involved in the transduction of, does not alter the speech or music. A microphone or receiver that has large peaks at certain frequency regions would alter the recorded or transduced speech or music and would impart large peaks at the corresponding frequencies in the output. The same can be said about the media themselves. Although it is true that .mp3 and .wav audio files are perfectly


flat, the tape cassettes and vinyl records of the past had peculiar bumps or peaks that many listeners of the era heard as part of the music; either positively or negatively. And even today, many performing vocalists cup their hands around the microphone as if they are “kissing it,” and this adds an additional (5–10 dB) output around 1500 Hz. However, barring the “flatness limitations” of cassette tapes of several-generations past, whether intentional or not, a design agenda was to replicate the music and speech as accurately as possible, and this required a perfectly flat frequency response, or as close to that as was possible. In the case of hearing aids, removing unwanted peaks in the frequency response was primarily to ensure that rogue and/or unexpected peaks did not exceed one’s tolerance levels as well as to minimize acoustic feedback. In the case of using acoustic filters in the 1980s and 1990s’ hearing aids, the acoustic filter not only dampened the frequency response, but more importantly, the output (i.e., OSPL90) of the hearing aids. With acoustic filtering, the hard-of-hearing users’ volume controls could be increased, thereby providing more usable gain, without encountering tolerance issues. In many cases, hearing aid users were able to obtain significantly more high-frequency amplification than the unfiltered hearing aid specification would suggest. And it was this tolerance-related “smoothing” that many hard-ofhearing people found so beneficial. However, Chasin (2021) presented audio files for both music and speech and found no significant effects for a non-smooth frequency response, for peaks up to 10 dB. Cox, Alexander, and Gilmore (1987) found similar results. Spectral peaks in excess of 10 dB were noticeable by the vast majority of listeners. Figure 3–6 shows the spectrum of a series of artificially created peaks in the frequency response above 1000 Hz, one for each octave. Each peak could be 5 dB, 10 dB, or 15 dB and can be heard in the associated Audio File 3–11 in an A-B-C-D-A format with music; A is a flat response, B has 5 dB peaks, C has 10 dB peaks, and D has 15 dB peaks. Audio File 3–12 is the same thing in an A-B-C-D-A format, but for speech. A difference is noted for peaks in excess of 10 dB or when two adjacent peak conditions are presented side by side in a well-controlled environment, but not necessarily noticeable in a real live listening situation.


Figure 3–6.  Spectrum of a flat frequency response with one artificial peak created each octave above 1000 Hz. The peaks can be 5 dB, 10 dB, or 15 dB above a flat curve. The associated Audio Files 3–11 and 3–12 sounds are given in an A-B-C-D-A format, where A is a flat spectrum, and B, C, and D have peaks of 5 dB, 10 dB, and 15 dB, respectively.
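A stimulus like the one plotted in Figure 3–6 can be approximated by imposing narrow boosts on an otherwise flat (white-noise) spectrum. The Python sketch below does this in the frequency domain; the peak width, the triangular bump shape, and the use of white noise are illustrative choices and are not taken from the original audio files.

import numpy as np

fs = 16000
n = 2 ** 16
rng = np.random.default_rng(0)
flat = rng.standard_normal(n)             # carrier with a flat ("white") spectrum

def add_octave_peaks(x, peak_db, start_hz=1000):
    """Boost the spectrum by peak_db at each octave at and above start_hz,
    using a narrow triangular bump (about 1/6 of the center frequency wide)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    gain_db = np.zeros_like(freqs)
    f = start_hz
    while f < fs / 2:
        bump = np.clip(1 - np.abs(freqs - f) / (f / 6), 0, None)
        gain_db = np.maximum(gain_db, peak_db * bump)
        f *= 2
    return np.fft.irfft(spec * 10 ** (gain_db / 20), n=x.size)

freqs = np.fft.rfftfreq(n, 1 / fs)
idx_2k = np.argmin(np.abs(freqs - 2000))
before = np.abs(np.fft.rfft(flat))
for db in (5, 10, 15):
    after = np.abs(np.fft.rfft(add_octave_peaks(flat, db)))
    print(db, "dB requested; boost measured at 2 kHz:",
          round(float(20 * np.log10(after[idx_2k] / before[idx_2k])), 1), "dB")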


Time Delay in Digital Circuits

Another element that may result in a non-smooth, or “rippled,” frequency response is related to the time delay inherent in digital circuitry. Each element of processing (except inductive and analog), and each digital algorithm, has an associated delay between the input and the processed signal. The hearing aid industry has been especially vigilant in minimizing the digital time delay in their respective products. A time delay in the temporal domain can translate into unexpected, and perhaps undesired, peaks in the spectral domain. Digital delay may cause a phenomenon colloquially referred to as “comb filtering,” where the spectrum has many “comb-teeth” spectral bumps embedded in the frequency response, caused by uncontrolled (and unwanted) constructive and destructive interference. An example of “comb filtering” is shown in Figure 3–7 (Balling et al., 2020). In order to understand the background, one needs to examine five articles in Ear and Hearing by Stone, Moore, and colleagues that spanned almost a decade from 1999 to 2008 (Stone & Moore, 1999, 2002, 2003, 2005; Stone et al., 2008). Stone and Moore (1999) outlined the issues that are created by significantly long digital delays on a person’s own voice. Unexpected constructive and destructive interactions occurred between the hearing aid processed sound and the unprocessed sound entering through the hearing aid vent or, if the low frequency hearing was quite good, then through bone conduction. They categorized the effects as self-monitoring, audio-visual communication disruption, and the greater reliance on amplified sound as the hearing loss became greater. Digital delays of up to 20 msec were found to be acceptable, with even longer ones for those with greater sensorineural involvement. Stone and Moore (2002) examined the acceptable level of delay on the hearing aid wearer’s own voice, and this turned out to be related to some properties of the listening environment. Results showed that delays needed to exceed 15 msec before becoming disturbing in an acoustically “dry” environment (e.g., with short reverberation times) and more than 20 msec in an acoustically “live” environment (e.g., with reverberation times in excess of 1 second), and that speech production was not affected until the delay was greater than 30 msec.


Figure 3–7.  An example of “comb filtering” showing the creation of spectral peaks (gray) as well as a smoother frequency response (black) with minimal digital delay. Figure from Balling et al. (2020) and used with permission of Hearing Review and WS Audiology, Denmark. (For more information on how the measurements were done see http:// webfiles.widex.com/WebFiles/WidexPress-44.pdf).

Stone and Moore (2003) examined the frequency-dependent delay on subjective and objective measures of speech production and perception, but with hard-of-hearing subjects. They found that the delay was at its maximum between 700 and 1400 Hz, and almost minimal above 2000 Hz. Digital delays on the order of 9 to 15 msec created significant changes in speech identification (place and manner information), as well as a subjective disturbance — the authors suggested that an overall delay of less than 10 msec was preferable across the frequency range.


With modern technology, digital delays can be less than 1  msec (with an inherent theoretical current minimum of 50 microseconds [50 µsec] for just the A/D and D/A conversion process without any processing [Steve Armstrong, personal communication, 2021]). Figure 3–8 shows some typical digital delays for modern hearing aids with one being less than 1 msec over most of the frequency range. Stone and Moore (2005) looked at the effects of time delay on the perception of the hard-of-hearing person’s own voice and

Figure 3–8.  Digital delays as a function of frequency of some commercially available hearing aids with some delays being less than 1 msec (black). Figure from Balling et al. (2020) and used with permission of Hearing Review and WS Audiology, Denmark. (For more information on how the measurements were done see http://webfiles. widex.com/WebFiles/WidexPress-44.pdf).


reading rates. They found that with increasing delay, there was an increase in the disturbance created by the delay. They also found that individuals with a low-frequency hearing loss that exceeded 50 dB HL were much less disturbed by the delays than by subjects with a lesser degree of hearing loss. Interestingly, the authors found that disturbance ratings decreased with increasing experience over the course of a short period of time. This led the authors to suggest some type of acclimatization feature could be used in hearing aids for delay. The delay also required increasing effort, as reported by the subjects, but there was no change in their rate of speaking. Although the adjustment of digital delay is not yet available to the clinical audiologist (or the hard-of-hearing musician), this study has implications for such an algorithm to be developed in the future. The final paper in the series estimated the limits of delay that would be acceptable for open canal fittings (Stone et al., 2008). A series of three experiments with a simulated hearing aid system used normal hearing subjects with delays varying between 1 and 15 msec. The use of 1 msec delay was below what was possible (at that time) with digital hearing aids, while 15 msec was used as this was judged to be longer than what was commercially available in hearing aids at that time. The authors concluded that hearing aid delays of about 5 to 6 msec are likely to be acceptable for open-fit hearing aids, such as the RIC style with a non-occluding earmold. Like amplitude compression and the hearing aid frequency response, these numbers are not necessarily specific to speech or music as stimuli. More work needs to be performed to determine whether the digital time delay results of Stone and his colleagues can be translated to music.
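The comb-filtering mechanism behind Figures 3–7 and 3–8 can be sketched directly: when the hearing-aid-processed sound (delayed relative to the input) sums with the unprocessed sound leaking through the vent or arriving by bone conduction, the combined magnitude response ripples, with successive notches spaced at 1/delay. In the Python sketch below, the delays, the relative level of the leaked path, and the frequency grid are illustrative assumptions only.

import numpy as np

def combined_response_db(delay_ms, leak_gain=0.5, f=None):
    """Magnitude response of (unprocessed vent/bone-conducted path) +
    (hearing-aid-processed path delayed by delay_ms), modeled as
    |leak_gain + exp(-j*2*pi*f*tau)| with both paths otherwise flat.
    leak_gain is the relative level of the unprocessed sound."""
    if f is None:
        f = np.linspace(100, 8000, 4000)
    tau = delay_ms / 1000.0
    h = leak_gain + np.exp(-2j * np.pi * f * tau)
    return f, 20 * np.log10(np.abs(h))

for d in (0.5, 5.0):                     # sub-millisecond versus 5 msec delay
    f, mag = combined_response_db(d)
    print(f"{d} msec delay: peak-to-notch ripple about "
          f"{mag.max() - mag.min():.1f} dB, ripple spacing about {1000 / d:.0f} Hz")

With a 5 msec delay the ripples are only about 200 Hz apart, producing the dense “comb-teeth” pattern of Figure 3–7; with a sub-millisecond delay the ripples are spaced 2000 Hz or more apart, which is why the shortest-delay curves in Figures 3–7 and 3–8 look so much smoother.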

QUALITY AND PREFERENCE Compared with speech, it is a much more difficult task to provide objective results to determine optimal hearing aid parameters for music. There is no long-term music spectrum and there is no equivalent of the Speech Intelligibility Index (SII) for music. Some musical instruments have spectral and temporal features


similar to speech, and other musical instruments are quite different, both spectrally and temporally. Researchers and clinicians have turned to a number of subjective scales that typically include elements of “sharpness/hardness,” “clearness/distinctness,” “feeling of space,” and “disturbing sounds,” in addition to scales of “preference,” “loudness,” “perceived quality,” “timbre,” and “balance” (Gabrielsson & Sjogren, 1979; Gabrielsson et al., 1990). Many scales have been derived from these sources. Cox and Alexander (1983) and Croghan, Arehart, and Kates (2012) updated the scales by adding additional attributes such as “overall loudness” and “dynamic range.” However, each of these subjective properties has its own level of complexity. For example, although people can judge whether the timbre is good, or at least “as good as I recall it,” there are no well-defined electro-acoustic correlates (Leek et al., 2008). Cox and colleagues (2014) developed the Device Oriented Subjective Outcome (DOSO) scale, which is both easy to administer for clinical research and reportedly less sensitive to the personality-related variability that has plagued earlier scales. It is useful to be able to examine differences between two hearing aids or two processing schemes, and the differences (but not necessarily the absolute values) have been shown to have a high degree of statistical reliability. A difference measure allows for much of the variability to be minimized as the subject is their own reference. Wu et al. (2017) demonstrated that the DOSO index was not totally independent of personality, but nevertheless, the DOSO was a significant improvement in terms of reliability over previous subjective measures. Figure 3–9 shows a typical outcome of this scale when used to assess the differences between a music processing strategy turned off and turned on (from Chasin, 2017). Kates and Arehart (2016) developed an index, the Hearing-Aid Audio Quality Index (HAAQI), that uses an auditory model as part of its calculations. This was designed in order to provide more objective measures for the interaction between hearing aid processing parameters and music than previous indices that either were not designed for hard-of-hearing people using hearing aids (Thiede et al., 2000) or have limited use in the study of hearing aid processing (Huber & Kollmeier, 2006). The HAAQI could be used with a wide range of hearing losses


Figure 3–9.  An outcome of the DOSO scale when used to assess the difference between a music processing strategy turned off and turned on. (Chasin, 2017). (Used with permission Hearing Review). The DOSO test can be obtained through https://harlmemphis.org

as well as the various processing strategies typically used in hearing aids. Patil et al. (2012), also using an auditory model, studied the auditory cortex in animal models and found that the . . . study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by


human listeners, as well as recognition of musical instruments. (p. 1)

Sound quality and preference scales continue to evolve, and each has its strengths and weaknesses. Some can be implemented in the clinic, whereas other scales and measures are primarily for research to inform engineering design decisions.

SUMMARY AND CLINICAL RAMIFICATIONS Laboratory-based research has some challenges with respect to the results being able to be applied clinically. Many well-controlled studies are “too well controlled” and may use stimuli that are not representative of real-world music, may use simulations that do not fully represent any of the commercialized technologies, or are based only on technologies that are currently available. The strengths, however, are that many of these studies can vary only one parameter at a time to at least provide some clinical direction. The peak input limiting level has long been an issue with digitizing the higher-level components of music without appreciable distortion. Manufacturers have implemented some ingenious technologies to overcome this issue, such as the use of microphone modifications, analog compression prior to the A/D stage, and auto-ranging A/D converters. Recent post-16 bit architecture technology has become available but these still require other associated approaches to minimize distortion. A beneficial side effect of resolving this front-end problem is that the hearing aid consumer’s own voice should also sound better to them because this high-level input would be transduced with minimal distortion. Hearing aids are still limited by the current consumption, which increases dramatically for 15 bit and higher architectures. Depending on the technology used by the hearing aid manufacturer, hearing aids may no longer be limited by their current consumption to a dynamic range governed by 14-bit implementation, but can use newer ICs that can provide up to


109 dB of dynamic range without any compromises and the use of other technologies. Currently other technologies are still required to increase that range, unless a trade-off is made with having poorer current consumption. As a result of cochlear dead regions, technologies, such as frequency lowering, have become the staple of the field, but that only works well for speech. Even a 1/2 of one semitone alteration in the harmonic structure of music can be quite deleterious. Clinically, it is best to apply a mild high frequency decrease of gain and output to avoid overamplifying these frequency regions while still maintaining good sound quality. Also, suggesting that hard-of-hearing musicians consider playing in a lower musical key or a more bass-oriented musical instrument (e.g., change from a violin to a viola) can be quite useful. And for schools of musical instruction where ear training is a required course, having the student do their exams in a lower frequency octave can be quite useful, or even being allowed to do their exam for rhythm instead of pitch. Listening to prerecorded music, such as from a mp3 player or from the radio, can be difficult. Such music has often undergone compression limiting (CL) once during the production stage and as such, a special music program should be implemented for prerecorded (or perhaps even radio music) that is more linear, or perhaps more vented (Kuk & Ludvigsen, 2003), than a live performance listening or playing program. In my city, there is a Jazz radio station that is a great fan of CL and, as such, any additional compression in the hearing aids can be problematic for optimal listening. However, the local classical music stations, and many of the pop music radio stations, do not use compression limiting in their transmission, so this is less problematic. To date, I have been unsuccessful in convincing the local Jazz radio station to back off on their use of compression limiting, but I do consider this to be an important role that an audiologist may undertake when working with hard-of-hearing clients where music is a significant part of their lives. Compression and frequency response are limited more by the damage to the cochlea rather than the nature of the input stimuli per se, and the settings of both of these parameters should be similar for music as a speech program. It is argued that there should be several music programs such as one with a


linear response for listening to prerecorded music where there may already be quite a bit of compression limiting; and another one for performing and listening to live music that is similar to speech. Similarly, the frequency response should be as wide as is possible (given the limitations of cochlear dead regions in the higher frequency region and venting and other masking issues in the lower frequency region). Importantly there should be a balance between the low frequency amplification and the high frequency amplification, and this is probably equally true for both speech and music. It is not clear that a reduction in digital delay for amplified music would be any different than for amplified speech. Although the technology has improved to the point where digital delay can now be on the order of 1 to 2 msec (with an associated smoothing of the frequency response), it is not yet clear that this is an advantage or a disadvantage for music.

REFERENCES Aazh, H., & Moore, B. C. (2007). Dead regions in the cochlea at 4kHz in elderly adults: Relation to absolute threshold, steepness of audiogram, and pure-tone average. Journal of the American Academy of Audiology, 18(2), 97–106. Akinseye, G. A., Dickinson, A. M., & Munro, K. J. (2018). Is non-linear frequency compression amplification beneficial to adults and children with hearing loss? A systematic review. International Journal of Audiology, 57(4), 262–273. American National Standards Institute. (2020). American National Standard: Specification of hearing aid characteristics. ANSI S3.222014 (R2020). Baekgaard L., Rose S., & Andersen H. (2013) Designing hearing aid technology to support benefits in demanding situations, Part 2. Hearing Review, 20(6), 30–33. Balling, L. W., Townend, O., Stiefenhofer, G., & Switalski, W. (2020). Reducing hearing aid delay for optimal sound quality: A new paradigm in processing. Hearing Review, 27(4), 20–26. Chasin, M. (2006). Can your hearing aid handle loud music? A quick test will tell you. The Hearing Journal, 59(12), 22–24. Chasin, M. (2014). A hearing aid solution for music, Hearing Review, 21(1), 28–31.

Chasin, M. (2016). Back to basics: Frequency compression is for speech, not music. Hearing Review, 23(6), 12. Chasin, M. (2017). Use of a novel technique to improve amplified sound quality for both music and speech, Hearing Review, 24(8), 32–36. Chasin, M. (2020a). The problem with frequency transposition and music, Part 1. Hearing Review. https://www.hearingreview.com Chasin, M. (2020b). The problem with frequency transposition and music: Part 2. The one octave example. Hearing Review. https://www.hearingreview.com Chasin, M. (2021). A smooth frequency response may not necessarily be a golden rule in sound quality. Hearing Review, 28(2), 12. Chasin, M., & Hockley, N. S. (2014). Some characteristics of amplified music through hearing aids. Hearing Research, 308, 2–12. Chasin, M., & Hockley, N. S. (2018). Hearing aids and music: Some theoretical and practical issues. In R. Bader (Ed.), Springer handbook of systematic musicology (pp. 841–853). Springer-Verlag. Chasin, M., & Russo, F. (2004). Hearing aids and music. Trends in Amplification, 8(2), 35–48. Chasin, M., & Schmidt, M. (2009). The use of a high frequency emphasis microphone for musicians, Hearing Review, 16(2), 32–37. Cheng, W. (2018). Notes from an early-deafened musician. Hearing Review, 25(10), 26. Cornelisse, L. E., Gagne, J. P., & Seewald, R. C. (1991). Ear level recordings of the long-term average spectrum of speech, Ear and Hearing, 12(1), 47–54. Cox, R. M. (1979). Acoustic aspects of hearing aid-ear canal coupling systems. Monographs in Contemporary Audiology, 1, 3. Cox, R. M., & Alexander, G. C. (1983). Acoustic versus electronic modifications of hearing aid low-frequency output. Ear and Hearing, 4(4), 190–196. Cox, R. M., Alexander, G. C., & Gilmore, C. (1987). Intelligibility of average talkers in typical listening environments, Journal of the Acoustical Society of America, 81(5), 1598. Cox, R. M., Alexander, G. C., & Xu, J. (2014). Development of the device oriented subjective outcome (DOSO) scale. Journal of the American Academy of Audiology, 25(8), 727–736. Croghan, N. B. H., Arehart, K. H., & Kates, J. M. (2012). Quality and loudness judgments for music subjected to compression limiting. Journal of the Acoustical Society of America, 132, 1177–1188. Croghan, N. B., Arehart, K. H., & Kates, J. M. (2014). Music preferences with hearing aids: Effects of signal properties, compression settings, and listener characteristics. Ear and Hearing, 35, e170–e184.


Davies-Venn, E., Souza, P., & Fabry, D. (2007). Speech and music quality ratings for linear and nonlinear hearing aid circuitry, Journal of the American Academy of Audiology, 18( 8), 688–699. Davis, H., Morgan, C.T., Hawkins, J.E., Galambos, R., & Smith, F.W. (1950). Temporary deafness following exposure to loud tones and noise. Acta Otolaryngologica, 88(Suppl.), 1–57. Franks, J. R. (1982). Judgments of hearing aid processed music. Ear and Hearing. 3, 18–23. Gabrielsson, A., Hagerman, B., Bech-Kristensen, T., & Lundberg, G. (1990). Perceived sound quality of reproductions with different frequency responses and sound levels. Journal of the Acoustical Society of America, 88(3), 1359–1366. Gabrielsson, A., & Sjogren. H. (1979). Perceived sound quality of hearing aids. Scandinavian Audiology, 4, 159–169. Hirsch, I. J., & Bowman, W. D. (1953). Masking of speech by bands of noise. Journal of the Acoustical Association of America, 25, 1175–1180. Hockley, N. S., Bahlmann, F., & Chasin, M. (2010). Programming hearing instruments to make live music more enjoyable. The Hearing Journal, 3(9), 30–38. Hockley, N. S., Bahlmann, F., & Fulton, B. (2012). Analog to digital conversion to accommodate the dynamics of live music in hearing Instruments. Trends in Amplification, 16(3), 146–158. Huber, R., & Kollmeier, B. (2006). “PEMO-Q — A new method for objective audio quality assessment using a model of auditory perception. IEEE Transactions on Audio, Speech, and Language Processing, 14(6), 1902–1911. Kates, J. M., & Arehart, K. H. (2016). The Hearing-Aid Audio Quality Index (HAAQI). IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(2), 354–365. Kuk, F., Korhonen, P., Peeters, H., Keenan, D.A., & Jessen, A. (2006). Linear frequency transposition: extending the audibility of highfrequency information. Hearing Review. https://www.hearingreview​ .com Kuk, F., & Ludvigsen, C. (2003). Reconsidering the concept of the aided threshold for nonlinear hearing aids. Trends in Amplification, 7(3) 77–97. Leek, M. R., Molis, M. R., Kubli, L. R., & Tufts, J. B. (2008). Enjoyment of music by elderly hearing-impaired listeners. Journal of the American Academy of Audiology, 19, 519–526. Libby, E. R. (1981). Achieving a transparent, smooth, wideband hearing aid response. Hearing Instruments, 32, 9–12.

Madsen, S. M. K., Stone, M. A., McKinney, M. F., Fitz, K., & Moore, B. C. J. (2015). Effects of wide dynamic-range compression on the perceived clarity of individual musical instruments. Journal of the Acoustical Society of America, 137(4), 1867–1876. Moore, B. C. J. (1996). Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids. Ear and Hearing, 17(2), 133–161. Moore, B. C. J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences, and implications for the fitting of hearing aids. Trends in Amplification, 5(1), 1–34. Moore, B. C. J. (2012). Effects of bandwidth, compression speed, and gain at high frequencies on preferences for amplified music. Trends in Amplification, 16(3), 159–172. Moore, B. C. J., Füllgrabe, C., & Stone, M. A. (2011). Determination of preferred parameters for multichannel compression using individually fitted simulated hearing aids and paired comparisons. Ear and Hearing, 32(5), 556–568. Moore, B. C. J., Huss, M., Vickers, D. A., Glasberg, B. R., & Alcantara, J. I. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology, 34(4), 205–224. Moore, B. C. J., & Tan, C-T. (2003). Perceived naturalness of spectrally distorted speech and music. Journal of the Acoustical Society of America, 114(1), 408–419. Oeding, K., & Valente, M. (2015). The effect of a high upper input limiting level on word recognition in noise, sound quality preferences, and subjective ratings of real-world performance. Journal of the American Academy of Audiology, 26(6), 547–562. Patil, K., Pressnitzer, D., Shamma, S., & Elhilali, M. (2012). Music in our ears: The biological bases of musical timbre perception. PLoS Computational Biology, 8(11), 1–16. Plyer, P., Easterday, M., & Behrens, T. (2019). The effect of extended input dynamic range on laboratory and field-trial evaluations in adult hearing aid users. Journal of the American Academy of Audiology, 30(7), 634–648. Punch, J. L. (1978). Quality judgments of hearing aid-processed speech and music by normal and otopathologic listeners. Journal of the American Audiological Society, 3, 179–188. Ricketts, T. A., Dittberner, A. B., & Johnson, E. E. (2008). High frequency amplification and sound quality in listeners with normal through moderate hearing loss. Journal of Speech-Language-Hearing Research, 51, 160–172. Salorio-Corbetto, M., Baer, T., & Moore, B. C. J. (2017). Evaluation of a frequency-lowering algorithm for adults with high-frequency hearing


loss. Trends in Hearing, 21. https://doi.org/10.1177/233121651773​ 4455 Salorio-Corbetto, M., Baer, T., & Moore, B. C. J. (2019). Comparison of frequency transposition and frequency compression for people with extensive dead regions in the cochlea. Trends in Hearing, 23. https://doi.org/10.1177/2331216518822206 Salorio-Corbetto, M., Baer, T., Stone, M. A., & Moore, B. C. J. (2020). Effect of the number of amplitude-compression channels and compression speed on speech recognition by listeners with mild to moderate sensorineural hearing loss. Journal of the Acoustical Society of America, 147(3), 1344–1358. Schmidt, M. (2012). Musicians and hearing-aid design — Is your hearing instrument being overworked? Trends in Amplification, 16(3), 140–145. Skinner, M. W., & Miller, J. D. (1983) Amplification bandwidth and intelligibility of speech in quiet and noise for listeners with sensorineural hearing loss. Audiology, 22(3), 253–279. Stone, M. A., & Moore, B. C. J. (1999). Tolerable hearing-aid delays. I. Estimation of limits imposed by the auditory path alone using simulated hearing losses. Ear and Hearing, 20, 182–192. Stone, M. A., & Moore, B. C. J. (2002). Tolerable hearing-aid delays. II. Estimation of limits imposed during speech production. Ear and Hearing, 23, 325–338. Stone, M. A., & Moore, B. C. J. (2003). Tolerable hearing-aid delays. III. Effects on speech production and perception of across-frequency variation in delay. Ear and Hearing, 24, 175–183. Stone, M. A., & Moore, B. C. J. (2005). Tolerable hearing-aid delays: IV. Effects on subjective disturbance during speech production by hearing impaired subjects. Ear and Hearing, 26, 225–235. Stone, M. A., Moore, B. C. J., Meisenbacher, K., & Derleth, R. P. (2008). Tolerable hearing-aid delays. V. Estimation of limits for open canal fittings. Ear and Hearing, 29, 601–617. Thiede, T., Treurniet, W.C., Bitto, R., Schmidmer, T., Sporer, J.G., Beerends, C., . . . Feiten, B. (2000). PEAQ — The ITU standard for objective measurement of perceived audio quality. Journal of the Audio Engineering Society, 4(1/2) 3–29. Vaisberg, J. M., Beaulac, S., Glista, D., Macperson, E. A., & Scollie, S. D. (2021). Perceived sound quality dimensions influencing frequencyshaping preferences for amplified speech and music. Trends in Hearing, 25, 1–7. Vaisberg, J. M., Folkeard, P., Agrawal, S., Levy, S., Dundas, D., & Scollie, S. D. (2020). Sound quality ratings of amplified speech and music using a direct drive hearing aid: Effects of bandwidth. Otology and Neurotology, 42( 2), 227–234.

Wu, Y-H., Dumanch, K., Stangl, E., Miller, C., Tremblay, K., & Bentler, R. (2017). Is the device-oriented subjective outcome (DOSO) independent of personality? Journal of the American Academy of Audiology, 28(10), 932–940.

4

Clinical Approaches to Fitting Hearing Aids for Music

SO . . . WHAT DO WE DO CLINICALLY?

There are many factors to control and many elements to consider when developing a "first fit program for music." The program may be subtly different depending on whether the hard-of-hearing client will be playing an instrument or just listening to music. And, even within the programming category of "music," there may be some fine-tuning decisions to be made. For example, issues of live versus recorded music have to be dealt with, as does whether the music mixes vocals with instruments or is purely instrumental. Decisions about which parameters your client should, or should not, be allowed to access should also be part of the hearing aid fitting and selection process. Moreover, should there be more than one "music program"? Following are some categories based on some of the technical parameters that were discussed in the previous chapter, and others based on the inherent physics and acoustics of musical instruments. The suggestions are based on the research and are bolstered by clinical practice.


PEAK INPUT LIMITING LEVEL: SOME CLINICAL STRATEGIES

Clinically, we have all been confronted with a client who is quite pleased with their hearing aids for speech, but is less than pleased while listening to, or playing, music. Given that they may have an "older technology" hearing aid with a low peak input limiting level, there are some clinical strategies that may be useful. Recall that the peak input limiting level refers to the highest sound level that can be presented to the "front end" of a hearing aid and still be digitized without distortion. Distortion of music (and even one's own voice) at this early stage would be impossible to resolve clinically. The distortion would be perpetuated through the various processing stages of the hearing aid and would be quite apparent to the hard-of-hearing client. Clinical strategies are required to avoid these front-end peak input limiting issues. Audio files from Chapter 3 demonstrate the potentially deleterious effect of having a peak input limiting level that distorts the quality of music. Audio File 3–1 is provided again here, where the peak input limiting level is decreased from 115 dB SPL (A) to 105 dB SPL (B) to 95 dB SPL (C) in an A-B-C-A format for music. Audio File 3–2 for speech shows no such degradation because speech is at an inherently lower sound level than music.

A Bit of Technical "Recent" History

In attempts to resolve the front-end peak input limiting level issue, the hearing aid industry has developed some ingenious technical solutions to present the hearing aid processing algorithms with the best possible signal. It is worthwhile to have a quick review of some of these approaches, as many are still in use, even in conjunction with modern post-16 bit hearing aid architecture. One of the first approaches, still in widespread use today, is to have an analog compressor associated with the preamplifier stage of the hearing aid microphone assembly. This


would reduce overly high sound levels, thereby presenting the Analog-to-Digital (A/D) converter with the best possible signal level. The sound level is then increased digitally once it has been successfully digitized, so that the restored signal matches the initial input characteristics of the sound — similar to ducking under a low hanging bridge, and then (digitally) standing up again. Although not all manufacturers use this approach today, many consider it to be an element of good engineering design that can be used in conjunction with other technologies. Another approach uses an A/D converter assembly that can "auto range" such that the effective operating dynamic range is appropriate for the input. For example, the front end could have an effective input range of 15 to 115 dB SPL, rather than 0 to 100 dB SPL. In both cases, there is a 100 dB dynamic range, but the 15 to 115 dB SPL range is more appropriate for the higher levels associated with music. And, as there are no usable speech or other environmental cues below 20 dB SPL, there are few limitations with this technology. In some cases, this dynamic range (e.g., 15 to 115 dB SPL) is static and built into the hearing aid hardware, and in others it can be quite dynamic, depending on the nature and level of the input signal. This technology has been available almost since the advent of digital hearing aids and continues to be used today, again in conjunction with other technologies. A third approach, which again can be used with other modern technologies, is the use of "stacked A/D converters." As the name suggests, there is more than one A/D converter, and associated circuitry, with one being optimized for the input range associated with speech and another for the input range associated with music. Although this technology is still commercially available, it is no longer deployed widely in the hearing aid industry. Each of these technical innovations is still available, to some extent, in the hearing aid industry and can be used in conjunction with the others and/or with post-16 bit architectures. All can be viewed through the metaphor of either ducking under the bridge (analog compression prior to digitization) or raising the low hanging bridge (auto ranging and stacked A/D converters).
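To make the "ducking under the bridge" idea concrete, the following is a minimal sketch (in Python, with entirely illustrative numbers: a hypothetical 96 dB SPL converter ceiling and a 15 dB analog pre-attenuation) of how attenuating before a clipping A/D stage and restoring the level digitally avoids front-end distortion. It is a sketch under stated assumptions, not any manufacturer's actual implementation.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)            # unit-amplitude test tone

def spl_to_lin(spl, ref_spl=94.0):            # assume 94 dB SPL maps to full scale (1.0)
    return 10 ** ((spl - ref_spl) / 20)

def adc(x, ceiling):                          # front end hard-clips beyond its range
    return np.clip(x, -ceiling, ceiling)

ceiling = spl_to_lin(96.0)                    # hypothetical peak input limiting level
x = tone * spl_to_lin(110.0)                  # loud live-music peak level

direct = adc(x, ceiling)                      # no protection: heavily clipped

atten = 10 ** (-15 / 20)                      # "duck": 15 dB analog attenuation
ducked = adc(x * atten, ceiling) / atten      # then "stand up": restore 15 dB digitally

print("samples beyond ceiling, direct:", int(np.sum(np.abs(x) > ceiling)))
print("samples beyond ceiling, ducked:", int(np.sum(np.abs(x * atten) > ceiling)))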


Clinical Strategies

There are five clinical strategies presented below, some of which are based on the "ducking under the bridge" metaphor. These are (i) reducing the sensitivity of the hearing aid microphone(s), (ii) using an assistive listening device (such as an FM system or a room loop) with its own volume control as an input to the hearing aid, (iii) removing the hearing aids while listening to music, (iv) changing the characteristics of the clients' own hearing aid microphone(s), and (v) recommending a low-cost Personal Sound Amplification Product (PSAP) that may be used specifically for listening to and/or playing music.

Reducing the Sensitivity of the Hearing Aid Microphone Using a Microphone Covering such as Cellophane Tape

One can have the client place four to five layers of cellophane tape over the hearing aid microphones while listening to, or playing, music. Depending on the gauge of the tape, this may reduce the input by 10 to 12 dB, thereby allowing the input music to be within the operating range of the A/D converter. Depending on the hearing loss and the compression parameters of the hearing aid, the hard-of-hearing client may need to increase the volume setting on their hearing aids. Volume controls are implemented later in the hearing aid circuitry — this is analogous to standing up again after ducking under a low hanging bridge or doorway. Table 4–1 shows the sound attenuation caused by the use of five layers of cellophane tape over the hearing aid microphone. This technique (or clinical tool) provides clinicians with a diagnostic strategy to determine whether their hard-of-hearing clients' own amplification has a front-end limiting problem. If the use of cellophane tape improves live music listening, or the level of their own voice, this is evidence that the hearing aid's front end cannot handle the higher sound levels associated with music. This technique can also be performed "remotely," where the hard-of-hearing person can be instructed to apply this low-tech fix at home. But be careful — I have had a few


Table 4–1.  Reduction in Sound Level With Five Layers of Cellophane Tape Wrapped Around the Hearing Aid Microphones*

                            250 Hz   500 Hz   1000 Hz   2000 Hz   4000 Hz   8000 Hz
Attenuation due to tape       −7       −9       −11       −10       −12       −8

*This would help ensure that overly loud inputs are received in the optimal operating range of the "front end" of the hearing aid.

instances over the years in which, after telling someone to wrap their hearing aids in tape, I received a few "choice words" in response — at least until they actually tried it and it improved their listening. For some reason, I rarely received an apology!

Use of a Low-Cut Microphone

Although it is true that changing the hearing aid microphone is not something that can be implemented in the clinical setting, this is something that a hearing aid manufacturer can perform on any existing hearing aid (in conjunction with some other technical and software alterations), and, as such, this discussion belongs in this clinical chapter. A microphone change is relatively minor as compared with an IC-based platform change in a hearing aid. Modern hearing aids use a typical broadband microphone in their design, and this is the case for clients who have any degree of hearing loss, including those people with relatively good low-frequency hearing ability. The technical advantage is that with a broadband microphone, the internal noise of the hearing aid is minimized. Figure 4–1 shows the responses of two commercially available hearing aid microphones: one with a broadband response and the other with a 6 dB/octave low-cut roll-off of the sensitivity in the lower frequency region. These "low cut" hearing aid microphones typically have a sensitivity roll-off on the order of 6 dB/octave below 1000 to 1500 Hz and have been commercially available for decades. However, there is an advantage to using a hearing aid microphone that has reduced sensitivity for the (inherently higher


Figure 4–1.  The frequency response of two commercially available hearing aid microphones is shown. The top curve is for the commonly used broadband response and the bottom curve is for the microphone with reduced low frequency sensitivity (approximately 6 dB/oct low frequency roll-off). Figure courtesy of Sonion. Used with permission.

sound level) low frequency sounds. The "trick" is that high sound level, low-frequency sounds will be reduced in level prior to the digitization process, such that there is a greater chance that the music presented to the "front end" will be within the optimal operating conditions of the A/D converter. With a broadband microphone (110 dB SPL input), the total harmonic distortion at 500 Hz was 15%, as opposed to less than 2% with a 6 dB/oct low-cut microphone, given the same input level (Schmidt, 2012). A drawback of the use of this low-cut microphone is that the internal noise of the hearing aid increases substantially, especially for the lower frequency sounds. This has caused some manufacturers in the industry to forgo this approach, but at least one manufacturer has adopted it with excellent clinical results. This manufacturer realized that the internal noise level can be reduced to that of a broadband microphone with a judicious implementation of expansion circuitry. The internal


noise with a broadband microphone, with a −6 dB/oct roll-off (below 1000 Hz) microphone, and with the roll-off microphone plus expansion, is shown in Table 4–2 (Schmidt, 2012).
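As a rough illustration of the kind of 6 dB/octave low-frequency roll-off described above, the following is a minimal sketch using a first-order high-pass filter; the 1000 Hz corner frequency and the sampling rate are illustrative assumptions rather than the specifications of any particular microphone.

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 32000
# A first-order high-pass filter rolls off at ~6 dB/octave below its corner frequency
b, a = butter(N=1, Wn=1000, btype="highpass", fs=fs)

for f in (125, 250, 500, 1000, 2000, 4000):
    _, h = freqz(b, a, worN=[f], fs=fs)
    print(f"{f:>5} Hz: {20 * np.log10(abs(h[0])):6.1f} dB re mid-band")
```

High-level low-frequency energy is therefore reduced before it ever reaches the A/D converter, which is the point of the "trick" described above.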

Table 4–2.  Internal Noise Levels (in dB SPL) for a Broadband Microphone, One With a 6 dB/oct Roll-off Below 1000 Hz, and the 6 dB/oct Roll-off Microphone With Expansion Implemented on the Hearing Aid

                                        250 Hz   1000 Hz   4000 Hz
Broadband mic.                            25        18        15
6 dB/oct roll-off mic.                    42        24        16
6 dB/oct roll-off mic. + expansion        27        20        16

*Note the beneficial element of expansion that decreases the internal noise, especially in the lower frequency region. (Adapted with permission from Schmidt, 2012.)

Use of an External Microphone

Another strategy would be to use an assistive listening device with an external microphone and volume control. Although it is true that any mode of input to a hearing aid (e.g., microphone, telecoil, direct audio input, etc.) still needs to go through its own A/D converter pathway, an external microphone allows the option of reducing the input to ensure that the digitized signal will not be significantly altered. Any external microphone system, such as an FM system or an inductive loop system, would be useful. An advantage of an inductive loop system, whether transmitted via an inductive neck loop to the hearing aid or via a room loop system, is that there will not be any appreciable increase in the time delay of the signal. This also provides the clinician with a recommendation that can be made to their hard-of-hearing clients who wear hearing aids. It would be better to reduce the input to the hearing aid by turning down the room stereo system, car radio, or MP3 player


(and then if necessary, increasing the volume of the hearing aid to compensate), than to have the input at a higher sound level. The hearing aid volume control is implemented in the software after the A/D conversion process.

Remove the Hearing Aids

For those hard-of-hearing clients with only a mild to moderate sensorineural hearing loss, removal of their hearing aids may be useful. Given the higher sound levels of instrumental music, only several dB of gain, or even 0 dB of gain, may be required. Table 4–3 is based on the FIG6 fitting formula (Killion & Fikret-Pasa, 1993; De Jonge, 1996) and shows that with a sufficiently high level input signal, one may only require several decibels (if that) of hearing aid gain (Chasin, 2012). Although the FIG6 program is no longer commercially available, similar data can be obtained from other fitting formulae.

Table 4–3.  Prescribed Hearing Aid Gain at Three Input Levels for a Range of Hearing Loss at 1000 Hz That Is Based on FIG6 (Killion & Fikret-Pasa, 1993)*

dB HL at 1000 Hz   65 dB SPL input   80 dB SPL input   95 dB SPL input
       25                  2                 1                 0
       35                  8                 4                 0
       45                 14                 7                 0
       55                 20                10                 1
       65                 28                15                 2
       75                 36                20                 3
       85                 44                24                 4

*Similar results will be obtained with other hearing aid fitting formulae. Used with permission from Hearing Review (Chasin, 2012).
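As a minimal sketch, the following simply encodes the values of Table 4–3 and interpolates between them (it is not the FIG6 formula itself); it shows how little gain is prescribed once the input reaches music-like levels.

```python
import numpy as np

hl = [25, 35, 45, 55, 65, 75, 85]                 # dB HL at 1000 Hz (Table 4-3 rows)
gain_db = {65: [2, 8, 14, 20, 28, 36, 44],        # 65 dB SPL input column
           80: [1, 4, 7, 10, 15, 20, 24],         # 80 dB SPL input column
           95: [0, 0, 0, 1, 2, 3, 4]}             # 95 dB SPL input column

def prescribed_gain(threshold_hl, input_spl):
    """Linear interpolation within Table 4-3; purely illustrative."""
    return float(np.interp(threshold_hl, hl, gain_db[input_spl]))

print(prescribed_gain(40, 65))   # ~11 dB for a conversational-level input
print(prescribed_gain(40, 95))   # ~0 dB for a music-level input
```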


Try a PSAP, but Not Just Any PSAP

A Personal Sound Amplification Product (or PSAP) is one of a range of amplifying devices that are not intended for the hard-of-hearing person, yet can be quite useful in some situations. This may be a less expensive alternative for the client, short of purchasing another set of hearing aids, especially if their only complaint is with the fidelity of loud and live music. Not all PSAPs are created equal, and most suffer from the same "front-end" limiting problem as do some hearing aids. To my knowledge there is only one PSAP that would be ideal for the higher sound levels associated with music, and this is called the Bean™ (available from http://www.Etymotic.com). This device is based on the 1988 K-AMP IC analog hearing aid circuit developed by Killion (1988) and, despite the age of this technology, continues to rival many of the more modern digital hearing aids. It would not be hyperbole to say that we are only just now back to the level of 1988 when it comes to high fidelity hearing aids for music.

FREQUENCY LOWERING

Although ensuring that the peak input limiting level of any one hearing aid is sufficiently high, such that the inherently higher levels associated with music are optimally digitized, is the primary element when specifying (and designing) hearing aids for music (and speech), there are other parameters that need to be addressed. One such parameter is the use of frequency lowering algorithms in hearing aids. Frequency lowering by only 1/2 of one semitone (i.e., a quarter tone) has been shown to be deleterious to the quality of music; however, it would be quite acceptable for speech. See, for example, Audio File 3–4 and Audio File 3–5 for music and speech, respectively. Frequency lowering (for speech) would be implemented clinically whenever there are cochlear dead regions or if the cochlear damage is so severe that the hearing threshold is too poor (or absent) for successful amplification. This, of


course, limits the high frequency response of any hearing aid, and this is as true for music as it is for speech. As discussed in Chapter 3, there are two approaches to handling cochlear dead regions clinically. One is to reduce the amplitude of the offending frequency range by using a form of low pass filtering. This will limit the high frequency amplification, but will not distort the harmonic relationships of the music. Specifically, a 6 dB/octave roll-off above 1000 Hz (or just below the beginning of the severely damaged cochlear frequency region), as shown in Figure 4–2, would be useful. Audio File 3–6 from Chapter 3 demonstrates that this slight high frequency decrease in gain can be barely noticeable for the inherently quieter speech sounds. A second approach, if a high frequency reduction in gain is not sufficient, is to implement exactly a one-octave linear frequency lowering, such that the severely damaged frequency region(s) are not overamplified. As discussed in Chapter 3, this strategy would be ideal for instrumental-only music, but would introduce additional, but acceptable, musical notes into the composition, such as a perfect fifth or a third. Figure 3–5A, Figure 3–5B, and Audio File 3–7 (violin) and Audio File 3–8 (clarinet) show the usefulness of this approach for music, but not for speech (Audio File 3–9). Commercially, this approach had been available in the past (Auriemmo et al., 2009) but was suggested for all stimuli such as speech, bird sounds, and music. This algorithm is no longer commercially available but perhaps can be resurrected as a special hearing aid program just for instrumental music.
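The arithmetic behind these two cases is simply the equal-tempered semitone ratio. The following minimal sketch (with A4 = 440 Hz as an example) shows why a quarter-tone shift lands "between" musical notes, whereas an exact one-octave shift lands on another note of the same pitch class.

```python
SEMITONE = 2 ** (1 / 12)     # equal-tempered semitone frequency ratio

def lower(freq_hz, semitones):
    """Shift a frequency down by a (possibly fractional) number of semitones."""
    return freq_hz / SEMITONE ** semitones

a4 = 440.0
print(lower(a4, 0.5))        # quarter-tone lowering: ~427.5 Hz, between A4 and G#4
print(lower(a4, 12.0))       # one-octave lowering: exactly 220.0 Hz (A3), still consonant
```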

How to Find Cochlear Dead Regions

There are three clinical approaches to locating and assessing cochlear dead regions: the TEN test (Moore, 2004, 2010); using a piano or keyboard (Chasin, 2019); or creating a "distortion-o-gram" (Wm. Martin, personal communication, 2021). Each will provide slightly different information and each will take varying amounts of time. The identification of cochlear dead regions has been the subject of much research over the last 15 years, and its importance


Figure 4–2.  Modified frequency response designed to minimize effects caused by a cochlear dead region by implementing a high frequency (6 dB/oct) roll-off above 1000 Hz. The arrow indicates the reduced high frequency gain output caused by the high-cut roll-off.


for the clinic cannot be overemphasized. Given a cochlear dead region, less is typically more. Minimizing gain in (or shifting away from) frequency regions with significant cochlear damage can result in a more successful hearing aid fitting.

The TEN (HL) Test

A commonly used diagnostic test is the Threshold Equalizing Noise (TEN) test, a clinically efficient prerecorded psychophysical test based on masking and cochlear tuning curves (Baer et al., 2002; Moore, 2004, 2010; Moore et al., 2000). The original TEN test took about 20 minutes to obtain results (2-dB steps and 4 octave test frequencies from 500 to 4000 Hz). Using a shaped masker noise that mimicked the HL to SPL curve, Moore, Glasberg, and Stone (2004) created the TEN (HL) version and were able to reduce the clinical time to roughly 8 to 10 minutes. The TEN (HL) version has been implemented on some clinical audiometers. Because cochlear dead regions are typically seen when there is significant inner hair cell damage, clinical suggestions are to only perform this test if the audiometric threshold is greater than 50 dB HL. Moore and colleagues (2004) do suggest some caveats when using this test because artifacts in the results may occur with people with central auditory involvement or in cases of auditory neuropathy.

Piano or Keyboard

Whereas the TEN (HL) approach to the assessment of cochlear dead regions is based on the research into masking in the 1950s, such as Hirsch and Bowman (1953), the piano approach is based on the work of Davis et al. (1950). As discussed in Chapter 3, Davis created a unilateral temporary hearing loss by exposing his subjects to noise. Two unmarked knobs were provided to the subjects: one controlling the sound level and the other controlling the frequency. The subjects were asked to match the loudness and the frequency of tones heard in the normal hearing ear with that of the temporarily damaged ear. For low frequency tones, there was a good one-to-one match between the ears, but in the damaged cochlear region, the subjects heard the increase


in frequency as merely an increase in loudness, but not frequency. That is, the subjects heard sounds in the temporarily damaged ear as flat relative to the good ear. Clinically, one should ask the hard-of-hearing client whether the "distorted" sound was sharp or flat. If the sound was heard as flat, this is evidence of a cochlear dead region to be avoided upon subsequent amplification. However, clients with a "reverse slope" sensorineural hearing loss, such as that observed with Ménière's syndrome, may find that the distorted tone is "sharp." Although this is frequently observed clinically, to my knowledge no basic research has replicated the Davis et al. (1950) work for clients with low frequency "reverse slope" audiograms. It may be that the only strategy (for both speech and music) for clients with this type of hearing loss would be to have gain (and output) reduction for severely damaged low frequency regions. And perhaps in the future, there will be the development of a frequency shifting algorithm that would be frequency "increasing" to a healthier cochlear region. Although this frequency increasing algorithm would not be useful for music, it may be quite useful for speech.

Unlike the TEN (HL) test, this piano-based test takes about 15 to 20 seconds and allows for a clear discussion with the potential hearing aid consumer about some fine-tuning modifications that may be necessary, apart from hearing aid fitting formulae. Any garage-sale electronic keyboard would suffice, as this test is merely about a person judging whether two adjacent notes are the "same" or "different," and not about the quality of the music. This is something that the client can perform at home, or at the home of a friend who has a piano, and they can bring the results with them. They are asked to indicate where on the piano the difficulty begins, usually as a statement of how many white keys from the top (or above middle C) the region of their pitch problem lies. Ask the hard-of-hearing person (without hearing aids) to sit down at the keyboard and begin somewhere on the middle-right side (250+ Hz) by playing every adjacent note going upwards (white key, white key, black key, white key . . . ). They are to judge whether any two adjacent notes (semitones) are the same or different in pitch. Even for people with significant sensorineural hearing loss, the first octave or so will be quite easy, but as one


reaches the last upper octave (2000 to 4000 Hz) on the piano or keyboard, this becomes a more difficult task. Once they find a region where they are beginning to have difficulty distinguishing the pitch, or simply cannot distinguish whether there was a change in pitch, then this may be considered a cochlear dead region. The result is converted from notes on the piano keyboard to frequency in Hz. The "C" that is one octave below the top note is close to 2000 Hz, the "G" above that is close to 3000 Hz, and the top note "C" is close to 4000 Hz. (A full listing of musical note names and their fundamental frequencies can be found in Appendix A.) This 20-second exercise can then be used to adjust frequency lowering for speech, or gain reduction for music, depending on the results. How does this compare with the TEN (HL) test? In a pilot study of 10 hard-of-hearing people, given a criterion of being within 1/2-octave, 8 out of the 10 gave results that were within 1/2-octave of the more time-consuming TEN (HL) test and 2 of the 10 showed results that differed by more than 1/2-octave. Given this more clinically efficient version of the TEN (HL) test, questions about cochlear dead regions might be addressed quickly and sometimes with surprising results (Chasin, 2019).

Distortion-O-Gram

One limitation of the piano/keyboard method is that only information about the frequency of the cochlear dead region is provided, but not the sound level. It is possible (and probable) that for lower sound levels, a damaged cochlear region may function quite adequately; this may be the case for speech, but less likely for the higher sound levels associated with music. Although never published, William (Billy) Martin (personal communication, 2021) describes another approach that can provide information on both frequency and sound level. Using sound field testing, or under earphones, the stimulus sound level is increased in 5 dB increments from 0 dB HL until the client reports an onset of "distortion" or an undefinable change, but a change nevertheless. This can be performed with as many, or as few, frequency regions as you would like and have time for clinically. A distortion-o-gram can be created, which is a frequency/sound-level map of the distortion. This can be performed with or without client comments


about the degree of distortion and can be useful to identify cochlear dead regions (at higher stimulus levels), or even triggers for tinnitus or hyperacusis at lower levels. A comparative study of these three methods of assessing cochlear dead regions would make for an interesting project (both for clients with a high frequency sensorineural hearing loss and for those with a low frequency sensorineural hearing loss): The TEN (HL) test can take 10 to 15 minutes (for two ears and four test frequencies), whereas the piano keyboard (which only requires a "same/different" judgment of pitch) only takes 20 seconds and can actually be used prior to a client coming to the clinic. In addition, the distortion-o-gram, which provides information on both frequency and sound level, may allow the clinical audiologist to paint a better picture (see Study 4–1 in Appendix B).
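As a quick reference for the piano method described above, the following sketch converts a key number to its fundamental frequency, assuming standard A4 = 440 Hz tuning and the usual 88-key numbering (Appendix A gives the full listing of note names).

```python
def key_frequency(n):
    """Fundamental frequency of the nth key of an 88-key piano (A4 = key 49 = 440 Hz)."""
    return 440.0 * 2 ** ((n - 49) / 12)

print(round(key_frequency(88)))   # top C (C8): ~4186 Hz
print(round(key_frequency(83)))   # the G below it (G7): ~3136 Hz
print(round(key_frequency(76)))   # C one octave below the top (C7): ~2093 Hz
```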

MULTI-CHANNEL COMPRESSION

The use of compression is more of a "cochlear damage" issue than an input stimulus issue, and as such, the compression settings in a hearing aid program for speech stimuli will be similar to those for music. As discussed in Chapter 3, the use of a relatively slow-acting WDRC circuit would have minimal negative effects on music, with the exception of two provisos — listening to already compression limited (CL) pre-recorded music, and potential issues arising out of the use of multi-channel compression.

Compression Limiting

Commercially available pre-recorded music frequently already has a significant degree of compression, called compression limiting (CL). CL is similar to peak clipping in that it only comes into effect at higher sound levels but, unlike peak clipping, it has a "look ahead" function that reduces the level before the waveform is clipped. For lower levels of pre-recorded music, the recording is near linear such that a hard-of-hearing client's own amplification can


optimally provide WDRC as required, based on their hearing loss. For higher levels of the music, the audiologist may need to program a linear region of the input compression so that the pre-recorded music is not "doubly compressed." This would mandate a specific "recorded music listening" program in the hearing aid. This may be in addition to one for "live music," where higher input levels are treated without distortion such that the peak input limiting level is not exceeded. And perhaps there should be a third music program for "instrumental only music," where a one-octave linear frequency lowering could be implemented, when and if this technology is resurrected in the industry.

Multi-Channel Compression

Despite multi-channel compression being the mainstay of the hearing aid industry since the late 1980s and being the standard of care for speech inputs to the hearing aid, one needs to ensure that the lower frequency elements of music are not treated substantially differently from the mid or higher frequency elements. Croghan et al. (2014) showed that given a choice between 18 channels and 3 channels, a "less is more" approach may be better (especially for Rock music). It is possible that a single-channel device that maintains the relative amplitudes of the high frequency harmonics and the lower frequency harmonics may be quite beneficial for most types of music. Although there are currently no single-channel hearing aids that are commercially available, the Bean™ PSAP (http://www.etymotic.com), based on the K-AMP IC, can be useful for those clients with mild hearing losses. Figure 4–3 shows two spectra with identical frequency components of their fundamental and harmonics, except that amplification is applied only to the higher frequency elements of the sound. The associated Audio File 4–1 (identical to Audio File 3–10) provides an A-B-A format comparison between these two spectra. The A version is of a flute, but with high frequency amplification of the harmonics, the B version sounds more "oboe-like." In both cases, the music is not dissonant and the frequency location of the harmonics has not been altered; however, the timbre has been changed.


Figure 4–3.  Two spectra with identical frequency components of their fundamental and their harmonics, except that amplification is applied only to the higher frequency harmonics of the second sound, marked by an arrow. The spectrum marked with the arrow is based on an original flute sound, but now sounds more “oboe-like.”


Having said this, in most cases seen in the clinic, this would typically not be observed because the gain and output characteristics would be adjusted based on the individual's hearing loss, such that mid and higher frequency harmonics would still be at a balanced, desired sensation level relative to the lower frequency harmonics. An interesting area of study would be to examine this potential problem, especially for stringed instruments where harmonic amplitude changes are more noticeable. Related to Study 2–2 in Chapter 2, multi-channel compression may be more problematic for some instrument groups than others. Perceptually, stringed instruments sound the best when the relative balance between the lower frequency fundamental energy and the higher frequency harmonic energy is maintained. A hypothesis would be that this would be less of an issue for woodwinds that, despite generating a similar output spectrum to that of a stringed instrument, have their quality judged by the lower frequency inter-resonant breathiness, and not the relative amplitudes of the higher frequency harmonic structure (see Study 4–2).
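The effect shown in Figure 4–3 can be sketched by synthesizing a harmonic tone and boosting only its upper harmonics. The fundamental, boost amount, and falling 1/k amplitude series below are illustrative assumptions, not the values used to create the figure or Audio File 4–1.

```python
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
f0 = 262.0                                    # roughly middle C

def harmonic_tone(boost_above_hz=None, boost_db=0.0, n_harmonics=10):
    out = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        amp = 1.0 / k                         # gently falling harmonic amplitudes
        if boost_above_hz is not None and k * f0 > boost_above_hz:
            amp *= 10 ** (boost_db / 20)      # boost only the upper harmonics
        out += amp * np.sin(2 * np.pi * k * f0 * t)
    return out / np.max(np.abs(out))

original = harmonic_tone()                                    # "flute-like" reference
brighter = harmonic_tone(boost_above_hz=2000, boost_db=12)    # same pitches, new timbre
```

The two tones contain exactly the same frequencies, so nothing becomes dissonant; only the relative harmonic amplitudes, and hence the timbre, change.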

FREQUENCY RESPONSE

Based on the discussion in Chapter 3, there are no major differences between the frequency response defined for a speech program in a hearing aid and for a music program, especially in the higher frequency region. The amount of high frequency amplification, similar to the issue of amplitude compression, is limited more by the damage to the individual's cochlear function than by whether the input stimulus is speech or music. Cochlear dead regions, which typically occur for more significant hearing losses with thresholds in excess of 50 dB HL, are the limiting element for the setting of the high frequency amplification limits. As shown in Table 3–1, hard-of-hearing clients with either a steeply sloping sensorineural hearing loss configuration or a more severe hearing loss may require a narrower high frequency limit than those with a milder, gently sloping hearing loss. Table 3–1 from Chapter 3 is reproduced here.


Table 3–1.  Summary of Three "Rules of Thumb" for Specifying the Upper Limit of the Frequency Range for Hearing Aids, for Clients Who May Have Cochlear Dead Regions

Mild Hearing Loss    >55 dB Hearing Loss    Steeply Sloping Audiogram
Broad bandwidth      Narrow bandwidth       Narrow bandwidth

Based on the work of Moore et al. (2000), Moore (2001), Aazh and Moore (2007), and Ricketts et al. (2008). Note that these rules are similar for speech, as well as music.

When it comes to the lower limit of the frequency response, data, as shown by Moore and Tan (2003) and others, suggest a lower limit of 55 Hz for music. Although speech has no energy below the fundamental frequency of the speaker (typically 90 Hz and above), music can have significant fundamental frequency energy, and some harmonic energy, in the lower frequency region. There are no commercially available hearing aids in the marketplace that electronically extend down to 55 Hz, and because of upward spread of masking, this is understandable. Nevertheless, with appropriate acoustic earmold/ear tip coupling and associated venting, significant unamplified low frequency music energy can enter directly through the vent, as long as the input level does not exceed about 80 dB SPL, above which undesirable low frequency cochlear distortions can occur (see, for example, Hirsch & Bowman, 1953).

FEEDBACK MANAGEMENT SYSTEMS

Clinically, many have had the experience in which an overly active feedback management system has essentially turned off music for their hard-of-hearing clients. It is a difficult, but not insurmountable, task to distinguish a feedback component from a harmonic of the music. There have been clinical experiences where flutes have been "removed" from the music because of this. Some manufacturers have responded by limiting their feedback management system to only be effective above a certain


frequency level, such as 1500 Hz or 2000 Hz. This has indeed improved the situation but, similar to the number of channels in multi-channel compression for music, a "less is more" approach may be desirable. Hearing aids should have feedback management systems that can either be disabled or at least minimized for the various music programs. Chung (2004) reviewed three types of feedback management used in the hearing aid industry: (i) phase cancellation, where a new signal is created that is out of phase with the offending feedback, thereby resulting in more usable gain before feedback is encountered; (ii) an overall reduction in gain across the frequency range; and (iii) a notch filtering paradigm centered near the feedback frequency region. Chung pointed out the possibility of "chirping" that could occur with music passages that have a quick offset, such as classical music. In this condition, the generated feedback tone would still be audible for a brief moment after the music has ceased. However, clinically, in working with musicians, this complaint has never been encountered. Since that article came out in 2004, the hearing aid industry, because of improved digital control and the increase in high frequency gain that can be achieved, has uniformly moved away from notch filtering and overall gain reduction, in favor of phase cancellation. Johnson and colleagues (2007) examined this in more detail for both speech and music. In comparing 16 subjects with a similar high frequency hearing loss in both "feedback reduction system on" and "feedback reduction system off" conditions, they found no significant differences for any of their measures, for both music and for speech. There were, however, slight differences between the two hearing aid models that were used, with one providing almost 16 dB of additional gain as a result of using the feedback reduction system. They concluded that even for music, the feedback reduction system should be implemented. However, this study was limited in that they only used a short passage of music and only used the phase control method of feedback management. One advantage of the overall gain reduction technique reported in Chung (2004) is that the balance of the entire music spectrum would be maintained and this would be potentially useful when listening to, or playing, classical music that uses many


stringed instruments. And despite the fact that this approach is no longer commercially available, this would make for an interesting future study regarding the processing of music (see Study 4–3). Such a system can be simulated using MatLab™ or similar software.
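As a starting point for such a simulation, the following open-loop sketch (in Python rather than MatLab) detects a persistent narrowband peak and responds with an overall gain reduction rather than a notch filter. The signal, detector threshold, and 1 dB-per-frame step are all arbitrary illustrative choices, and a real simulation would model the acoustic feedback loop itself.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)

music = 0.1 * rng.standard_normal(t.size)                    # broadband stand-in for music
feedback = 0.5 * np.sin(2 * np.pi * 3000 * t) * (t > 0.25)   # feedback tone appears at 0.25 s
x = music + feedback

frame = 512
window = np.hanning(frame)
gain_db = 0.0

for start in range(0, x.size - frame, frame):
    seg = x[start:start + frame]
    spec = np.abs(np.fft.rfft(seg * window))
    # Crude feedback detector: a single bin towering over the median spectral level
    if spec.max() > 20 * np.median(spec):
        gain_db -= 1.0                                       # broadband (overall) gain reduction

print(f"overall gain change after 0.5 s: {gain_db:.0f} dB")
```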

USE OF "ACCESSORIES"

Primarily to improve the quality and perception of music during live music sessions, an accessory, such as a wireless TV-listening device, can be useful in conjunction with a hard-of-hearing person's own amplification. Such a device, albeit designed for TV listening, can provide an excellent method of allowing a hard-of-hearing musician to move around wirelessly while still being able to appreciate, and play, the music. This would be similar to using wireless in-ear monitors (Lesimple, 2020). This is something that is rapidly changing in the hearing aid industry, but not without its drawbacks. For example, any wireless transmission, such as Bluetooth, has a distance limitation (currently on the order of 10 meters) but also currently uses a 2.4 GHz transmission architecture. Although this wavelength is appreciably shorter than that of previous incarnations that used lower carrier frequencies in the MHz range, and has the advantage of requiring only a smaller receiving antenna (which obviates the need for an intermediary streamer), a disadvantage is that 2.4 GHz is readily absorbed by body tissue, which is largely water; this is also why microwave ovens operate near 2.4 GHz. This issue may arise when someone walks between the emitter (or TV listening device) and the musician, thereby temporarily cutting out the Bluetooth signal.
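The antenna point follows directly from wavelength, λ = c/f: a 2.4 GHz carrier has a wavelength of roughly 12.5 cm, versus about 33 cm at 900 MHz (900 MHz is used here only as a representative older carrier frequency).

```python
c = 3.0e8                      # speed of light in m/s
for f in (900e6, 2.4e9):       # representative older carrier vs. current 2.4 GHz
    print(f"{f / 1e9:.1f} GHz -> wavelength {100 * c / f:.1f} cm")
```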

SMARTPHONE CONTROL AND MUSIC

Smartphones, in conjunction with hearing aids, can be used to control some aspects of the amplified signal. Currently, many manufacturers offer an equalizer that can be used to alter the


frequency response — there may be slight adjustments that can be offered in the way of filtering certain frequency regions of the speech and music. Although this can be quite useful, especially for the experienced hearing aid user, other types of self-control can be problematic. This is especially the case for setting and altering the maximum output of the hearing aid. The maximum output of the hearing aid is something that needs to be carefully set by the audiologist and verified using equipment that is not available to the general public. Damage may occur if the output constantly exceeds a certain well-defined level, and this is true of speech as well as music. Smartphones are ubiquitous and will find greater usage down the line for listening and/or playing music. As such, two aspects of Smartphone usage will be reviewed that may be problematic for music: (i) Smartphone microphone(s) (Chasin, 2017) and (ii) time delays in certain systems. Just as in many areas of technological growth, these areas may be resolved somewhat with new implementations of the various operating systems and their associated algorithms.

Smartphone Microphones

There are multitudes of apps for Smartphones that can turn them into sound level meters, recording devices, and playback devices, and even allow them to be coupled with external devices for hearing aids via Bluetooth or other wireless protocols. However, each step in the recording/playback/control pathways can add some error to the final measured result. Some of these errors are small and are well discussed in the literature, such as the sound level range over which a Smartphone is useful as a sound level meter. Other sources of error have been less discussed in the literature. Smartphones have microphones that are quite amazing for their size, and their production quality is quite high given the sheer numbers that need to be manufactured. These are Micro Electro Mechanical Systems (MEMS) microphones. They were initially used in the cellphone industry and, more recently, in the hearing aid industry; many have built-in A/D converters. MEMS microphones use silicon instead of Teflon; silicon can get hotter before it starts to lose electrons.


MEMS microphones still lose electrons, though, which is why a charge pump is used (the "mechanical" part of the MEMS acronym) to replace the lost electrons on the microphone diaphragm. However, many of the modern Smartphones have more than one microphone, for the same reason that modern hearing aids also have more than one microphone. This allows the microphone system to be directional in the sense that it helps to reject unwanted noise if the noise is coming from the rear, or off-axis, direction. Although this noise-rejection strategy can be very useful, it may be problematic if the noise or the music is what you want to measure or hear. For example, if one wants to use an app that turns the Smartphone into a sound level meter, then this can pose problems if the Smartphone is not held properly or aimed appropriately at the noise source. Although the use of a MEMS microphone has added substantially to the quality of modern Smartphones (and now modern hearing aids that use MEMS microphones), and you can leave the Smartphone in your car on a hot summer's day with temperatures hovering around 40 degrees Celsius, there are still limitations with the dynamic range. The dynamic range for modern MEMS microphones is limited. For speech, the difference between the softest speech sound ("th" as in "thin") and the loudest sound ([ɑ] as in "father") is about 35 dB. A MEMS microphone that can handle a 60 dB range is, therefore, quite adequate. It may be a different story for inputs that have a greater dynamic range, such as music. Some form of compression or AGC would be required to match the large dynamic range of music to the more limited dynamic range capability of a MEMS microphone. And to further complicate things, many microphones have a frequency response that has been optimized for phone communication — after all, these are telephones. The issue arises if one wants to measure sound sources that are not speech or speech-like. This may include using the Smartphone as a noise measuring device, as a recorder, or as a transducer of music. The frequency region of concern consists of the bass notes below middle C (lower than 250 Hz) on the piano keyboard. Having said this, app developers can use "Application Programming Interfaces" or APIs that are provided by the Smartphone manufacturers on their operating systems (OS). These APIs can


be quite useful by providing a broader frequency response as well as disabling certain features, such as compression or AGC that would normally be implemented. Not all developers use these APIs, with the result that two seemingly similar apps may function quite differently. A strategy that some app developers have used is to utilize an external microphone that is connected directly to the Smartphone. This obviates many of the concerns about using the internal microphone(s) and some of the design parameters, such as directionality, which may have posed a problem. Not all Smartphone microphones are created equal. Actually, they are, but the subsequent hardware and software design decisions can create quite different devices.

Smartphones and Digital Delay

Latency, and its associated problems, has been well studied in hearing aids (e.g., Stone & Moore, 2003), but the Smartphone industry also has this problem. Depending on what you are using the Smartphone (or hearing aid) for, longer latencies can be fine. This is especially the case for listening to recorded music or other recorded information signals, such as a low battery warning. However, for live speech and music, a slight delay can be problematic. Values on the order of 10 msec have been stated as the maximum delay in a hearing aid circuit before some of the deleterious effects begin to become audible, but some Smartphones can have delays that are much longer. And, if you are using your Smartphone in conjunction with other external Bluetooth-enabled (or otherwise wirelessly connected) devices, these latencies can add up. Even a relatively passive mixing board in a recording studio can add another 5 msec or more in delay. Typically, it is not one culprit, but the entire stream of devices and technologies that, taken together, create a significant delay. Wireless technology can add to the latency, as can some algorithms that may be unique to certain apps. Whereas the hearing aid industry has been very careful to ensure that its hearing aids have a latency that is as short as


possible, the issue is that once the hearing aid is connected, especially wirelessly, an entirely new set of delays can be introduced into the listening environment.
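A hypothetical latency budget makes the point; every value below is illustrative rather than measured.

```python
# Hypothetical end-to-end latency chain for live music (all values illustrative)
chain_ms = {
    "hearing aid processing": 6,
    "wireless (2.4 GHz) link": 25,
    "recording studio mixing board": 5,
    "smartphone app buffering": 15,
}
total = sum(chain_ms.values())
print(f"total delay: {total} ms (vs. ~10 ms considered tolerable for live use)")
```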

SUMMARY AND CLINICAL RAMIFICATIONS

Depending on the implementation by any one hearing aid manufacturer, current consumption issues still need to be addressed with other approaches, such as analog compression prior to the digitization stage or auto-ranging A/D conversion stages that optimize the digitization process for the higher level inputs associated with music. Some of the current hearing aids in the marketplace, despite being advertised as having a "post-16-bit architecture," may not function optimally for music. A quick test to determine whether any one hearing aid can handle the higher level inputs associated with music is to wrap five or so layers of tape around the hearing aid microphone. If this strategy improves the sound quality of live music, then this is a clinical solution, albeit an inelegant one — no amount of software programming will be beneficial because the problem occurs before the algorithm stages of the hearing aid. Other clinical strategies to improve music listening include using an external microphone system (such as an FM system or other assistive listening device) where the microphone has its own volume control that can be used to reduce the volume presented to the hearing aid. Removing the hearing aids, for those with no more than a moderate level of hearing loss, may be better for live music listening (or playing). If only 10 to 15 dB of amplification is required, the slight gain boost of the Bean™ PSAP may be useful. And when listening at home or in your car, the best strategy is to lower the volume of the stereo or radio, and if necessary, turn up the volume control of the hearing aid — a "ducking under the bridge" strategy. This is actually another clinical strategy that can even be used when speaking with your clients remotely or via email; if turning down the home (or car) radio improves things substantially (as opposed to increasing the volume of the radio and decreasing the hearing aid volume


control), then this is evidence that the current hearing aid has a front-end limiting problem associated with its A/D conversion hardware. Cochlear dead regions can be assessed clinically and, other than the TEN (HL) test, which takes roughly 10 to 15 minutes to perform (for both ears at four frequencies), a quick strategy is to have the hard-of-hearing client play the piano keyboard (and indeed any "garage sale" inexpensive keyboard should work). The client is asked to say whether any two adjacent notes are the "same" or "different" in pitch. If two adjacent notes do not sound different in pitch, then this is evidence of a cochlear dead region and a high frequency decrease of gain and output should be implemented for a music program. (Frequency lowering should still be used for the speech programs.) The frequency response for a music program in the hearing aid should be similar to that of a speech program. The limitation in the higher frequency region is more related to cochlear dead regions than to whether the input is speech or music. There is no reason why a music program should have a wider frequency response than a speech program. The bandwidth limitations resulting from cochlear dead regions, which set the upper limit of the frequency response, were shown in Table 3–1 (reproduced earlier in this chapter). An issue of caution is warranted with respect to music and is based on the Skinner and Miller (1983) study (discussed in Chapter 3). They demonstrated that a correct balance between the low frequency amplification and high frequency amplification is required, at least for speech. It is reasonable to assume that this would also be the case for music, but this still needs to be studied. Compression for a "music program" should be similar to any of the "speech programs," with slow-acting (adaptive) WDRC being the best setting for most types of music. For pre-recorded music, which may have already undergone compression limiting (CL), a more linear setting would be beneficial, especially for higher-level inputs. Therefore, there should be at least two music programs: a more linear program for listening to pre-recorded (and some radio station) music; and a second with slow-acting WDRC for live listening and performance that would be similar to the "speech program." However, both music programs


should have a slight high frequency roll-off of gain and output if cochlear dead regions exist. In addition, in both music programs, "advanced features," such as noise reduction, feedback management, and frequency lowering, should be disabled. The issue of "directional" microphones has intentionally not been addressed in this book because the implementations of this technology can be highly variable, as can the listening environments for music. I have had some orchestral musicians prefer a static directional microphone pattern in order to suppress some of the brass music sounds from the rear, and others, in similar situations, who strongly prefer the omni-directional setting. With the recent advent of dynamic or adaptive directional microphones, it is difficult to conclude anything definitive, but this is important territory for further research into this technology for amplified music while performing (see Study 4–4).

REFERENCES

Auriemmo, J., Kuk, F., Lau, C., Marshall, S., Thiele, N., Pikora, M., . . . Stenger, P. (2009). Effect of linear frequency transposition on speech recognition and production of school-age children. Journal of the American Academy of Audiology, 20, 289–305.
Baer, T., Moore, B. C. J., & Kluk, K. (2002). Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America, 112(3), 1133.
Chasin, M. (2012). Okay, I'll say it: Maybe people should just remove their hearing aids when listening to music! Hearing Review, 19(3), 74.
Chasin, M. (2017). Smartphones and microphones. Hearing Review, 24(12), 10.
Chasin, M. (2019). Testing for cochlear dead regions using a piano. Hearing Review, 26(9), 12.
Chung, K. (2004). Challenges and recent developments in hearing aids: Part II. Feedback and the occlusion effect reduction strategies, laser shell manufacturing processes and other signal processing technologies. Trends in Amplification, 8(4), 125–164.

Croghan, N. B., Arehart, K. H., & Kates, J. M. (2014). Music preferences with hearing aids: Effects of signal properties, compression settings, and listener characteristics. Ear and Hearing, 35, e170–e184.
Davis, H., Morgan, C. T., Hawkins, J. E., Galambos, R., & Smith, F. W. (1950). Temporary deafness following exposure to loud tones and noise. Acta Otolaryngologica, 88(Suppl.), 1–57.
De Jonge, R. (1996). Microcomputer applications for hearing aid selection and fitting. Trends in Amplification, 1(3), 86–114.
Hirsch, I. J., & Bowman, W. D. (1953). Masking of speech by bands of noise. Journal of the Acoustical Society of America, 25, 1175–1180.
Johnson, E. E., Ricketts, T. A., & Hornsby, B. W. Y. (2007). The effect of digital phase cancellation feedback reduction systems on amplified sound quality. Journal of the American Academy of Audiology, 18, 404–416.
Killion, M. C. (1988). An acoustically invisible hearing aid. Hearing Instruments, 39(10), 39–44.
Killion, M. C., & Fikret-Pasa, S. (1993). The 3 types of sensorineural hearing loss: Loudness and intelligibility considerations. The Hearing Journal, 46(11), 31–34.
Lesimple, C. (2020). Turn your hearing aids into in-ear monitors for musicians. https://www.bernafon.com/professionals/blog/2020/hearing-aids-as-in-ear-monitors
Moore, B. C. J. (2004). Dead regions in the cochlea: Conceptual foundations, diagnosis, and clinical applications. Ear and Hearing, 25(2), 98–116.
Moore, B. C. J. (2010). Testing for cochlear dead regions: Audiometer implementation of the TEN (HL) test. Hearing Review, 17(1), 10–16, 48.
Moore, B. C. J., & Tan, C. T. (2003). Perceived naturalness of spectrally distorted speech and music. Journal of the Acoustical Society of America, 114(1), 408–419.
Moore, B. C. J., Glasberg, B. R., & Stone, M. A. (2004). New version of the TEN Test with calibrations in dB HL. Ear and Hearing, 25, 478–487.
Moore, B. C. J., Huss, M., Vickers, D. A., Glasberg, B. R., & Alcantara, J. I. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology, 34(4), 205–224.
Schmidt, M. (2012). Musicians and hearing-aid design — Is your hearing instrument being overworked? Trends in Amplification, 16(3), 140–145.
Skinner, M. W., & Miller, J. D. (1983). Amplification bandwidth and intelligibility of speech in quiet and noise for listeners with sensorineural hearing loss. Audiology, 22(3), 253–279.


Stone, M. A., & Moore, B. C. J. (2003). Tolerable hearing-aid delays. III. Effects on speech production and perception of across-frequency variation in delay. Ear and Hearing, 24, 175–183.

5

A Return to Older Technology?

A RETURN TO OLDER TECHNOLOGY?

This is an unusual title for a "conclusions" section, but in this field some of the technologies have already been invented and commercialized by the hearing aid industry, but have since been replaced by other approaches that arguably may be better for speech processing, though perhaps not for music processing through hearing aids. Each of the following technologies has an element that can be addressed as part of a future research study. The following is a brief summary of these technologies.

One-Octave Linear Frequency Lowering for Instrumental Music

Although the phrases "frequency lowering" and "music listening" should not typically be mentioned in the same sentence, an acceptable option would be to have exactly a one-octave linear frequency lowering for an "instrumental only music program." This is a third possible "music program," in addition to the two already mentioned in Chapter 4. This algorithm would create


additional notes in the music, such as a perfect fifth and a third, but these notes (despite the composer not having this in mind) would not sound dissonant to the listener. Although a version of this was commercially available in the past, its intention at that time was for a wider range of input stimuli such as bird sounds, music, and speech. This technology can be resurrected and can be useful as a music hearing aid option for “instrumental music only.”

Single Channel Hearing Aid

Similar to the K-AMP (Killion, 1988), a single-channel hearing aid (with adaptive compression) would treat the lower frequency fundamental energy in the same manner as the higher frequency harmonic energy, thereby maintaining the balance of music. Despite the fact that this technology is currently only available as a Personal Sound Amplification Product (PSAP) (i.e., the Bean™ from http://www.Etymotic.com), I suspect that there would be a larger market for such a circuit. As an alternative, it may be possible to add in a single broadband detector prior to the band-specific detectors that may function as a governor of the sound. There are some data, such as from Johnstone et al. (2019), which support such an implementation. The single broadband detector approach had previously been available in the hearing aid industry, and is still used today, but in a more limited fashion.

Adjustable Time Delay

The data from Stone and Moore (2005) and Stone et al. (2008) suggest that control over a variable time delay mechanism could be useful for music and, in the future, could be provided to the clinical audiologist fitting the hearing aid. Clinical adjustment of this delay mechanism could possibly also be made available (perhaps via a smartphone app) to hard-of-hearing users, as long as it is kept below 30 msec. This may be useful for altering the timbre of the music to increase its acceptability, especially with non-occluding hearing aid fittings in more reverberant environments. Audio File 3–11 demonstrates that peaks of less than 10 dB are not readily noticeable for music (and Audio File 3–12 for speech), such that a peakier frequency response resulting from a longer time delay may still be quite acceptable.
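For intuition about how much spectral ripple such a delay can create with an open fitting, the short sketch below (an illustration, not taken from the text) computes the comb-filter peak spacing and peak-to-notch ripple that result when the unamplified vent path and the delayed amplified path sum at the eardrum.

import numpy as np

def comb_filter_ripple(delay_ms, level_difference_db):
    """When a direct (vent) path and a delayed amplified path are summed, the
    combined response shows comb filtering: peaks and notches repeat every
    1/delay Hz, and the ripple depth depends on the level difference between
    the two paths (0 dB, equal levels, is the worst case)."""
    weaker = 10.0 ** (-abs(level_difference_db) / 20.0)   # weaker path re: stronger path
    spacing_hz = 1000.0 / delay_ms
    peak_db = 20.0 * np.log10(1.0 + weaker)
    notch_db = 20.0 * np.log10(1.0 - weaker)
    return spacing_hz, peak_db - notch_db

# Illustrative values: a 5 ms delay with the amplified path 6 dB above the vent
# path gives peaks/notches every 200 Hz and roughly 10 dB of total ripple.
print(comb_filter_ripple(delay_ms=5.0, level_difference_db=6.0))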

Double Hearing Aid Output Stages and Receivers

This innovation would be suggested for speech rather than for music, but it is based on a comparison between speech and music. Although the hearing aid industry has offered hearing aids with two receivers, the emphasis had been on increasing the gain and output for a hard-of-hearing consumer with a severe and/or profound hearing loss. This same innovation, however, can be used to independently provide either low frequency output (for the sonorant sounds) or high frequency output (for the obstruent sounds), but not both. That is, speech would have two transduction pathways, each perhaps with its own dynamic time constants, whereas music would be treated via the one transduction route. At any one point in time, speech has either low frequency emphasis or high frequency emphasis (but not both), whereas music always has both low and high frequency emphasis at the same time. (See Figure 2–9 in Chapter 2.) This is the opposite of what is typically available as an in-ear monitor for performing musicians, where there is generally a "more would be better" philosophy. Thus, the more drivers and bands that a monitor is required to transduce, the more potential errors, especially crossover errors between adjacent frequency ranges, can occur.

Overall Gain Reduction for Feedback Control

The hearing aid industry has used three electro-acoustic approaches for feedback management, with phase-cancellation based feedback control currently being ubiquitous in the field. However, it may be useful to resurrect the "overall gain reduction" approach that had been commercially available in the past. This maintains the balance between the low frequency fundamental energy and the higher frequency harmonic energy and may be preferable for music listening and playing.


Smartphone Apps Providing the End User With More Control

Although this is improving with each iteration of the manufacturers' user apps for controlling elements of hearing aid fine tuning, there is still much that can be done. Providing the musician with a more active equalizer that can change the compression characteristics (within limits) across a greater number of channels may yield some benefits. Caution should be exercised about providing too much control, as this may adversely affect the maximum output of the hearing aids.

A MUSICIAN’S WISH LIST

There is no more fitting way to end a book on music and hearing aids than to turn to what hard-of-hearing musicians who wear hearing aids would wish for. The following comments come from a special issue of Hearing Review (edited by Chasin, 2018) in which we examined the use of "Audiology to Extend a Musician's Career." Some of the suggestions are easy to implement: for example, having a music student take her ear training classes with the musical interval testing assessed an octave or two lower than the instructor had initially considered, or having a hard-of-hearing violinist switch to the slightly larger viola, which has more of its musical energy in a region where the musician may have better cochlear function. Other suggestions came from the musicians themselves. I wish to thank Charles Mokotoff, Larry Revit, Richard Einhorn, Rick Ledbetter, Stu Nunnery, and Wendy Cheng for the following insights:

• The use of a smartphone app that has a 5- or even 10-band equalizer to modify music output. It should be able to easily enable or disable automatic controls, such as feedback and attenuation controls, without a visit to an audiologist.

• Encourage hearing aid manufacturers to use microphone pre-amps and A/D converters that can handle the higher-level elements of music without distortion.

• Musicians should be able to immediately turn on or off features that control more or less low-frequency input to their hearing aids which, among other things, would improve verbal communication in-between songs.

• Fewer "buttons" or knobs on the hearing aids, as these are amongst the first things to malfunction; instead, have these features controlled wirelessly.

• Earpieces and earmold coupling systems that can be changed quickly from occluding to open, depending on the music being played.

• Access to some, or all, of the features and programming hardware that audiologists use clinically, so that adjustments can be made in the musician's own home studio.

• Sufficiently sharp filter bands for multiband equalization so that a change at 250 Hz will not compromise the hearing of music at 1000 Hz.

• A hearing aid module specifically designed for the musician's own Digital Audio Workstation (DAW).

• High quality sound reproduction in the musician's own hearing aids when streaming audio, with an option for a flat frequency response, given the limits of their hearing.

• Adaptor cords and special connector boxes that will allow the musician to plug in, use, and charge a wide range of accessories.

• Improved communication between audiologists and the various institutions musicians find themselves in contact with, ranging from schools of music to performance venues, as well as radio stations that overly utilize non-linear technologies, such as compression limiting, in their transmission.


REFERENCES

Chasin, M. (Ed.). (2018). Using audiology to extend a musician's career. Hearing Review, 25(10), 10–26.

Johnstone, P., Reinbolt, J., Pappas, J., Chasin, M., Hausladen, J., Phillips, T., . . . Martin, K. (2019, February). A comparison of hearing aid music-listening programs on perceived sound quality of individual musical instruments by child and adult musicians [Poster presentation]. 42nd Association for Research in Otolaryngology (ARO), Baltimore, MD.

Killion, M. C. (1988). An acoustically invisible hearing aid. Hearing Instruments, 39(10), 39–44.

Stone, M. A., & Moore, B. C. J. (2005). Tolerable hearing-aid delays: IV. Effects on subjective disturbance during speech production by hearing impaired subjects. Ear and Hearing, 26, 225–235.

Stone, M. A., Moore, B. C. J., Meisenbacher, K., & Derleth, R. P. (2008). Tolerable hearing-aid delays: V. Estimation of limits for open canal fittings. Ear and Hearing, 29, 601–617.

Appendix A

Conversion Chart of Musical Notes and Their Fundamental Frequencies

Listed below are musical notes and their corresponding fundamental frequencies, from the lowest notes on the piano keyboard to an octave above the top note on the piano keyboard. Sharps (and flats) can be obtained by multiplying (or dividing) each note's frequency by exactly the twelfth root of 2, or 1.059. For example, A# on the second space of the treble clef would be at 440 Hz × 1.059 = 466 Hz. Middle C is 262 Hz, and the top note on the piano is C at 4186 Hz. The truncation of the more exact value of the twelfth root of 2 (1.05946309…) to 1.059 can become problematic at the very high frequencies, but the result will still be within a few Hz.
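For readers who want to reproduce or extend the chart programmatically, the short Python sketch below (an illustration, not part of the original appendix) computes equal-tempered fundamental frequencies from the A4 = 440 Hz reference using the twelfth root of 2; differences of a few hertz from the chart reflect the rounding described above.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_frequency(name, octave, a4=440.0):
    """Equal-tempered fundamental frequency (Hz): every semitone is a factor of
    2**(1/12), approximately 1.0595, and every octave is a factor of 2."""
    semitones_from_a4 = (octave - 4) * 12 + NOTE_NAMES.index(name) - NOTE_NAMES.index("A")
    return a4 * 2.0 ** (semitones_from_a4 / 12.0)

print(round(note_frequency("A#", 4)))   # 466 Hz, as in the example above
print(round(note_frequency("C", 4)))    # 262 Hz (middle C)
print(round(note_frequency("C", 8)))    # 4186 Hz (top note of the piano)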



Note   Hz       Note   Hz       Note   Hz
C0     16       C3     131      C6     1048
D0     18       D3     147      D6     1175
E0     21       E3     165      E6     1319
F0     22       F3     175      F6     1397
G0     25       G3     196      G6     1568
A0     28       A3     220      A6     1760
B0     31       B3     247      B6     1976
C1     33       C4     262      C7     2094
D1     37       D4     294      D7     2349
E1     41       E4     330      E7     2637
F1     44       F4     349      F7     2794
G1     49       G4     392      G7     3136
A1     55       A4     440      A7     3520
B1     62       B4     494      B7     3951
C2     65       C5     524      C8     4186
D2     72       D5     587      D8     4699
E2     82       E5     659      E8     5274
F2     87       F5     698      F8     5588
G2     98       G5     784      G8     6272
A2     110      A5     880      A8     7040
B2     123      B5     988      B8     7902

Appendix B

Research Projects That Would Contribute Significantly to Clinical Knowledge

Following is a summary of 12 potential studies, many of which can be addressed within the scope of an AuD Capstone study or similar project. Each one is marked with a number (e.g., Study 2–1) that corresponds to where in the book (Chapter 2, Study 1) it is discussed, and each is flagged there with an icon. In addition, Chapter 5 has a listing of some technologies that were already commercially available but, for a number of reasons, had been withdrawn from the marketplace. Studies into each of these "old" technologies may provide sufficient information that can be translated into current clinical practice.

STUDY 2–1

Half-wavelength resonator musical instruments, such as the saxophone, have twice as many harmonics in any given frequency range as do quarter-wavelength resonator instruments, such as the clarinet. If it turns out that a more tightly packed harmonic structure provides a better sound for a hard-of-hearing person, then a saxophone may be better than a clarinet. Do twice as many harmonics mean twice as many auditory cues? Of course, other factors do come into play acoustically in that larger (or longer) musical instruments have a lower set of resonant frequencies, such that the music may be within a healthier region of hearing for that person. To my knowledge this has never been formally studied, and more research will be required.
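To make the harmonic-density comparison concrete, the minimal sketch below (my own illustration of the idealized case, not data from the text) counts how many harmonics of the same note fall within a fixed analysis band for an all-integer (half-wavelength) series versus an odd-only (quarter-wavelength) series.

def harmonics_in_band(f0_hz, low_hz=1000.0, high_hz=2000.0, odd_only=False):
    """Count idealized harmonics of a note with fundamental f0_hz that fall in
    the band [low_hz, high_hz]. Half-wavelength resonators produce all integer
    harmonics; quarter-wavelength resonators produce odd-numbered harmonics."""
    step = 2 if odd_only else 1
    harmonics = [n * f0_hz for n in range(1, 200, step)]
    return [h for h in harmonics if low_hz <= h <= high_hz]

f0 = 220.0  # A3, an arbitrary example note
print(len(harmonics_in_band(f0, odd_only=False)))   # 5 harmonics between 1 and 2 kHz
print(len(harmonics_in_band(f0, odd_only=True)))    # 3 harmonics between 1 and 2 kHz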

STUDY 2–2

Multi-channel compression inherently treats the lower frequency fundamental energy differently from the higher frequency harmonic information, thereby altering the amplitude balance between the fundamental and its harmonics — a flute may begin to sound more like a violin or an oboe. Of course, although the exact frequencies of the harmonics and their associated amplitudes are quite important in the identification of a musical instrument, other dynamic factors, such as the attack and decay parameters of the note, as well as vibrato, are also important. The exact specifics of how poor the settings of multi-channel compression need to be before musical instrument identification suffers, or the change in timbre becomes problematic, would provide important clinical (and hearing aid design) knowledge.
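How independent channels can squeeze the fundamental-to-harmonic level difference can be shown with simple level arithmetic (a made-up example with placeholder kneepoints, ratios, and levels, not measurements of any product).

def compressed_output_db(input_db, kneepoint_db=50.0, ratio=3.0, gain_db=20.0):
    """Output level of one compression channel: linear gain below the kneepoint,
    compression with the given ratio above it."""
    if input_db <= kneepoint_db:
        return input_db + gain_db
    return kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio

fundamental_in, harmonic_in = 80.0, 65.0              # a 15 dB natural level difference
fundamental_out = compressed_output_db(fundamental_in)   # low-frequency channel
harmonic_out = compressed_output_db(harmonic_in)         # high-frequency channel, same settings
print(fundamental_in - harmonic_in)                   # 15.0 dB difference at the input
print(round(fundamental_out - harmonic_out, 1))       # 5.0 dB at the output: the balance is altered

# With a single broadband compressor, one gain (derived from the overall level)
# would apply to both components, so the 15 dB difference would be preserved.
broadband_gain = compressed_output_db(fundamental_in) - fundamental_in
print((fundamental_in + broadband_gain) - (harmonic_in + broadband_gain))   # 15.0 dB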

STUDY 2–3

For listening to music that is primarily of one type (e.g., strings, where the lower and higher frequency components increase equally as the music is played louder), one fitting strategy may indeed point you in the correct direction; but for more varied instrumental (and perhaps mixed vocal) music, such as orchestral or operatic music, it would be more of a trial-and-error approach. One could conceivably work out a fitting formula (such as a "weighted average" or dot product) for orchestral music based on the various energy contributions of each of the musical sections. To my knowledge, such a calculation has never been performed, but the results would provide important clinical input regarding how a "music program" can function and how this may differ from a "speech" program.
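One possible form of that calculation is sketched below (an illustration only; the section weights and octave-band levels are invented placeholders, not measured data): each orchestral section's spectrum is weighted by its relative energy contribution and combined, band by band, as a dot product.

import numpy as np

bands_hz = [250, 500, 1000, 2000, 4000]
# Hypothetical octave-band levels (dB) for each section; placeholders only.
section_spectra_db = {
    "strings":   [78, 76, 72, 66, 60],
    "woodwinds": [74, 75, 73, 68, 58],
    "brass":     [80, 79, 75, 70, 62],
}
section_weights = {"strings": 0.5, "woodwinds": 0.2, "brass": 0.3}   # sum to 1

def weighted_orchestral_spectrum(spectra_db, weights):
    """Energy-weighted average across sections: convert dB to intensity, take
    the weighted sum (a dot product over sections) in each band, convert back."""
    names = list(spectra_db)
    w = np.array([weights[name] for name in names])
    intensity = 10.0 ** (np.array([spectra_db[name] for name in names]) / 10.0)
    return 10.0 * np.log10(w @ intensity)

combined = weighted_orchestral_spectrum(section_spectra_db, section_weights)
print(dict(zip(bands_hz, np.round(combined, 1))))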

STUDY 2–4

Although it is true that the "long-term" speech spectrum is wideband, at any one point in time the speech bandwidth is quite narrow when compared with music. At any one point in time, speech is either low frequency sonorant or high frequency obstruent, but never both. This has yet-to-be-determined ramifications for whether a single hearing aid receiver, or multiple receivers — each specializing in its own frequency bandwidth — would be better or worse for speech versus music. A hypothesis is that amplified music should be transduced through one receiver (to respond to its concurrent wideband nature), and speech through two receivers (with low frequency and high frequency information being treated separately). This hypothesis also has implications for in-ear monitor design. This is an important area of study but one in which there is little current information.

STUDY 3–1

Although the level of speech at 1 meter is on the order of 65 dB SPL (RMS), the level of a person's own voice at their hearing aid microphone (which is much closer) is roughly 20 dB greater (Cornelisse, Gagne, & Seewald, 1991 [see Chapter 3]); with crest factors for speech being at least 12 to 15 dB, the input of their own voice can have peaks that are close to 100 dB SPL. Resolving the front-end processing issue for music can also possibly benefit all hard-of-hearing consumers who wear hearing aids by improving the quality of their own voice. To my knowledge, this has not been formally studied, and more research regarding this potential beneficial side effect is warranted.
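The arithmetic behind that estimate, together with a crest-factor calculation that could be applied to any recorded signal, can be sketched as follows (the 20 dB own-voice correction and the 12 to 15 dB crest factors are simply the figures quoted above).

import numpy as np

# Peak level of one's own voice at the hearing aid microphone, using the
# figures quoted above.
own_voice_rms_db = 65 + 20                     # 65 dB SPL at 1 m plus ~20 dB for the close microphone
print(own_voice_rms_db + np.array([12, 15]))   # peaks of roughly 97 to 100 dB SPL

def crest_factor_db(waveform):
    """Crest factor: 20*log10(peak amplitude / RMS amplitude)."""
    x = np.asarray(waveform, dtype=float)
    return 20.0 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

# Sanity check: a pure sine wave has a crest factor of about 3 dB.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
print(round(crest_factor_db(np.sin(2 * np.pi * 440.0 * t)), 1))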


STUDY 3–2

Frequency lowering in many of the various formats that are commercially available can be quite useful in the clinic, mostly to avoid overamplifying cochlear dead regions that may be related to severe inner hair cell damage. Although this may be useful for speech, it does not follow that this would also be the case for instrumental music. A "gain-reduction" strategy in the offending frequency region(s) may be the most appropriate clinical approach, rather than any alteration to the harmonic components of the amplified music. This question does require further investigation.

STUDY 3–3

Linear processing, or slow-acting WDRC that is either "always on" or "always off," could provide us with some cues for a "first fit" of a music program. The more "seasoned" reader may remember the 1980s, when many hearing aid manufacturers created a form of "frequency dependent compression" that was a circuit with a high-level, low-frequency kneepoint and a lower-level, high-frequency kneepoint. The idea was to mimic the shape of the long-term speech spectrum and prevent the hearing aid from entering its non-linear stage prematurely. When listening to rock music in a study by Croghan, Arehart, and Kates (2014; see Chapter 3), the subjects preferred a linear response. That is, as long as the music is sufficiently loud, a "less may be more" approach may be useful. More research is required in this area, and a study of the frequency dependence of the (adaptive) compressor kneepoint with music would provide more information for the clinical audiologist.
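A minimal sketch of that 1980s-style frequency dependent compression idea follows (all numbers are invented placeholders): the kneepoint in each band sits roughly 10 dB above an assumed long-term speech level for that band, so typical inputs are processed linearly and only unusually high band levels engage compression.

def band_gain_db(input_db, kneepoint_db, linear_gain_db=20.0, ratio=2.0):
    """Linear below the kneepoint; compress with the given ratio above it."""
    if input_db <= kneepoint_db:
        return linear_gain_db
    return linear_gain_db - (input_db - kneepoint_db) * (1.0 - 1.0 / ratio)

bands_hz = [250, 500, 1000, 2000, 4000]
assumed_speech_db = [60, 62, 57, 51, 46]                        # placeholder long-term speech levels
kneepoints_db = [level + 10 for level in assumed_speech_db]     # higher kneepoints where speech is strong

# A loud 85 dB SPL input in every band receives the most gain reduction where
# the kneepoint is lowest (the higher frequency bands), mimicking the shape of
# the long-term speech spectrum as described above.
for hz, kneepoint in zip(bands_hz, kneepoints_db):
    print(hz, round(band_gain_db(85.0, kneepoint), 1))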

STUDY 3–4

When it comes to music, it may not be sufficient to merely state an "optimal" high frequency limit for its frequency response and then another, unrelated "optimal" low frequency limit. This has implications for fitting hearing aids for speech and is observed often in the clinic. Whenever an increase in the higher frequency gain is provided in order to improve the acceptability of sound, a concomitant low frequency enhancement is also required. This was pointed out by Skinner and Miller (1983) and discussed in Chapter 3. It is hypothesized that this same "balance" would optimize the frequency response, but to my knowledge, this topic has not been formally studied for music.

STUDY 4–1

Three clinical methods have been discussed in order to assess the presence or absence of cochlear dead regions. The TEN (HL) test can take 10 to 15 minutes (for two ears and four test frequencies), whereas the piano keyboard (which only requires a "same/different" judgment of pitch) takes only 20 seconds and can actually be used before a client comes to the clinic. In addition, the distortion-o-gram, which provides information on both frequency and sound level, may allow the clinical audiologist to paint a better picture. A comparative study of these three methods of assessing cochlear dead regions would make for an interesting project, and this work would be translatable into the busy clinical environment. While there has been some basic research performed on subjects with a high frequency sensorineural hearing loss (e.g., Davis et al., 1950), to my knowledge there has not been equivalent basic research on subjects with a low frequency sensorineural hearing loss, such as those with unilateral Ménière's syndrome.

STUDY 4–2

Related to Study 2–2, multi-channel compression may be more problematic for some instrument groups than for others. An interesting area of study would be to examine this potential problem, especially for stringed instruments, where harmonic amplitude changes are more noticeable. Perceptually, stringed instruments sound best when the relative balance between the lower frequency fundamental energy and the higher frequency harmonic energy is maintained. This would be less of an issue for woodwinds which, despite generating an output spectrum similar to that of a stringed instrument, have their quality and timbre judged by the lower frequency inter-resonant breathiness rather than by the relative amplitudes of the higher frequency harmonic structure. A hypothesis is that woodwind players would tolerate poorly configured multi-channel compression systems better than string players.

STUDY 4–3

One advantage of the overall gain reduction technique as a feedback management approach reported in Chung (2004) in Chapter 4 is that the balance of the entire music spectrum would be maintained. This could be potentially useful when listening to, or playing, classical music that uses many stringed instruments, where audibility of the higher frequency harmonics can contribute to a good sound quality and proper timbre. Despite the fact that this approach is no longer commercially available, this would make for an interesting future study regarding the processing of music. Such a system can be simulated using MatLab™ or similar software.

STUDY 4–4

The issue of "directional" microphones has intentionally not been addressed in this book because the implementations of this technology can be highly variable, as can the listening environments for music. Depending on the acoustic and music environment of hard-of-hearing musicians, a static directional microphone implementation can be useful for some, especially to suppress some of the sound from the rear, while others strongly prefer the omnidirectional setting. With the recent advent of dynamic or adaptive directional microphones, it is difficult to conclude anything definitive, but this is important territory for further research into this technology for amplified music while performing.

Appendix C

15 Audio File Descriptions

There are 15 audio files in this text, and each one is marked with an icon adjacent to where it is discussed. For convenience, following is a listing of all of the audio files, where the first number in the suffix denotes the chapter number (e.g., 2–1 is from Chapter 2).

Audio file 2–1.  A French horn brass instrument playing notes either quietly (pp) or loudly (ff). The interested reader can perform their own spectral analysis of the sounds in the file.

Audio file 2–2.  A clarinet woodwind instrument playing notes either quietly (pp) or loudly (ff). The interested reader can perform their own spectral analysis of the sounds in the file.

Audio file 2–3.  A violin stringed instrument playing notes either quietly (pp) or loudly (ff). The interested reader can perform their own spectral analysis of the sounds in the file.

Audio file 3–1.  The deleterious effects of decreasing the peak input limiting level from inputs of 115 dB SPL, to 105 dB SPL, to 95 dB SPL, and back to 115 dB SPL for music, in an A-B-C-A format.

Audio file 3–2.  The minimal effect of decreasing the peak input limiting level from inputs of 115 dB SPL, to 105 dB SPL, to 95 dB SPL, to 90 dB SPL, and back to 115 dB SPL for speech, in an A-B-C-D-A format.

Audio file 3–3.  (A) An undistorted 1000 Hz pure tone and (B) a front-end clipped pure tone creating a square wave with odd-numbered multiples of 1000 Hz, in an A-B-A format.

Audio file 3–4.  In an A-B-A format, an orchestral piece of music with no frequency lowering (A) and with linear frequency lowering (B) applied, but by only 1/2 of one semitone above 1500 Hz.

Audio file 3–5.  In an A-B-A format, speech with no frequency lowering (A) and with linear frequency lowering (B) applied, but by only 1/2 of one semitone above 1500 Hz.

Audio file 3–6.  In an A-B-A format, music with (A) no modification to its spectrum and (B) a high frequency decrease of sound (−6 dB/oct above 1500 Hz). This technique would be equally acceptable for both speech and music.

Audio file 3–7.  In an A-B-A format, (A) the original violin music, followed by (B) the one-octave linear frequency lowered music, creating a perfect fifth. A perfect fifth will always be created with half-wavelength resonator instruments, such as the violin, and will never be dissonant.

Audio file 3–8.  In an A-B-A format, (A) the original clarinet music, followed by (B) the one-octave linear frequency lowered music, creating a third. A third will always be created with quarter-wavelength resonator instruments, such as the clarinet, and will never be dissonant.

Audio file 3–9.  In an A-B-A format, (A) the original speech, followed by (B) the one-octave linear frequency lowered speech, which is almost unintelligible.

Audio file 3–10.  In an A-B-A format, (A) a flute, followed by (B) a "modified flute sound" where the higher frequency harmonics have been increased in amplitude, making it sound more like an oboe.

Audio file 3–11.  Music is given in an A-B-C-D-A format, where A is a flat spectrum, and B, C, and D have peaks of 5 dB, 10 dB, and 15 dB, respectively. There is one peak for each octave above 1000 Hz.

Audio file 3–12.  Speech is given in an A-B-C-D-A format, where A is a flat spectrum, and B, C, and D have peaks of 5 dB, 10 dB, and 15 dB, respectively. There is one peak for each octave above 1000 Hz.

Index

Note:  Page numbers in bold reference non-text material.

A Absolute sound level, 21 Accessories, use of low-cut, 107 Acoustic filters, 71 hearing aids and, 70 resistor, earhook num and, 11 systems, source-filter-radiation model, 1–2 Acoustical Society of America, 29 Acoustics experiments that ignore, 41 speech, 3 A/D converter dynamic range, increase upper limit, 50–51 reduce input to, 49–50 Adjustments, music/speech, 24 Affricates spectrogram of, 36 speech frequencies, 52 Algorithms cochlear implant, 34 digital, 41 semi-automatic, 46 software, 43–44 frequency increasing, 95 lowering, 55, 56, 57, 95 shifting, 95 hearing aid processing and, 88 linear processing, 60 nonlinear behavior of, 60

one-octave frequency lowering, 57 linear frequency lowering, 57 time delay in digital circuits and, 73 American Academy of Audiology, Clinical Consensus Document Audiological Services for Musicians and Music Industry Personnel, 37 Amplification cochlear dead regions and, 52 ranges, for music listening, 64 Amplitude balance, 20 compression, 42, 60–64 frequency response and, 70 hearing aids and, 70 Analysis, windows of, crest factor and, 31 ANSI S 3.22, 42–43, 64 API (Application Programming Interfaces), 109–110 Application Programming Interfaces (API), 109–110 Armstrong, Steve, 46 Aspiration, spectrogram of, 36 Attack parameters, 20 AuD Capstone study, 10, 15, 125 Audiologists, use of mutes, 11 Audiology to Extend a Musicians’ Career, Hearing Review, 120



B Bader, Rolf, Springer Handbook of Systematic Musicology, 2, 29 Balance, amplitude, 20 Bands of noise, 65 Bluetooth, distance limit of, 107 Broadband microphone, internal noise level, 93

C Canadian Acoustical Association, 29 Chasin, Marshall, Hearing Loss in Musicians: Prevention and Management, 4 Cello, spectrum of, 27 Cellophane tape, as microphone covering, 90–91 Cerumen, outer ear canal and, 11 Channels, rock music and, 60, 102 Cheng, Wendy, 120 Child, hard-of-hearing, instrument to play, 17 Circuits, digital, time delay in, 73–76 CL (Compression limiting), 60, 101–102 Clarinet, 4, 6 described, 7 frequency lowering, one octave, 58 mutes and, 11 quarter-wavelength instrument, 16 spectrum of, 26 tube shape and, 8–9 Classical music, channels and, 60 Clients, hard-of-hearing, fitting strategy, 30

Clinical Consensus Document Audiological Services for Musicians and Music Industry Personnel, 37 Cochlea, integration times, 31–32 Cochlear dead regions, 68–69 amplification and, 52 assessing, 101 avoidance, 56 finding, 96–98 hearing losses and, 104 TEN (HL) test and, 98 implant algorithm, fitting goal, 34 integration, integration times, 31–32 Comb filtering, 73 teeth, 73 Compression, 60 amplitude, 42, 60–64 level dependent, 20 limiting (CL), 60, 101–102 multi-channel, 20, 102–104 Converter, A/D increase upper limit of dynamic range, 50–51 reduce input to, 49–50 Coupling tips, RIC, 41 Crest factor, 31–33 intelligibility Index (SII) and, 33 for music, 33, 38 for speech, 33 speech/music, statistical differences, 21 windows of analysis and, 31

D Dead regions, cochlear, 68–69, 104


amplification and, 52 assessing, 101 avoidance, 56 finding, 96–98 Decay parameters, 20 Decibels dynamic range expressed in, 51 ratio of intensities, 31 Device Oriented Subjective Outcome (DOSO) scale, 77, 78 Digital algorithms experiments and, 41 semi-automatic, 46 circuits, time delay in, 73–76 delays, 73 as frequency function, 75 smartphones and, 110–111 hearing aids, front end limiting, 49 signal processing, for hearing aids, 43 technology, frequency lowering and, 55 Distortion-o-gram, 100–101 DOSO (Device Oriented Subjective Outcome) scale, 77, 78 Double output, hearing aids, stages/receivers, 119 Dynamic range, expressed in decibels, 51

E Ear outer canal, 12 cerumen and, 11 rear, measures, 30 Ear and Hearing, digital delay articles in, 73

Earhooks acoustic filters and, 70 nub, acoustic resistor and, 11 Earmolds, frequency response and, 3 EIDR (Extended Input Dynamic Range), 43 Einhorn, Richard, 120 Even numbered harmonics, 57 Experiments acoustics, ignored, 41 using lower-level stimuli, 41 Extended Input Dynamic Range (EIDR), 43

F Fabre, Benoit, “Modelling of Wind Instruments,” 29 Fant, Gunnar, source-filterradiation model, for acoustic systems, 1–2 Fant’s model of the vocal track, 2 Fat, part of instruments, 20 Feedback control, gain reduction and, 119 management systems, 105–107 Fifth note, perfect, 57, 58 Filter response, 3 of the wavelength resonators, 5–10 Filtering comb, 73 Filters acoustic, 71 hearing aids and, 70 First resonant frequency, 6 Fitting, strategies, listening to music, 30 Flared bore, 3 Flat frequency response, transducer field and, 70

140  Music and Hearing Aids:  A Clinical Approach Fletcher, Harvey, 34–35 Flute, half-wavelength resonator, 16 FM system, 93 Formant structures, Helmholtz, 5 Formants, 19 French horn, 6, 24 frequency lowering, one octave, 58 spectrum of, 25 Frequencies letters and, 15–17 used by musicians, 15–17 Frequency harmonics, timbre and, 16 limit higher, 67–69 lower, 65–67 upper, 67–69 low bands of noise, 65 hearing aid setting limit, 67 limit, hearing aids, 67–69 lowering music and, 52–59 one-octave linear, 117–118 speech and, 52–53 music and, harmonics and, 53 response, 64–69, 104–105 acoustic filters and, 71 hearing aids and, 70, 76 peaks in, 70–71, 72 removing unwanted peaks, 71 smoothness of, 70–76 spectrum of flat, 72 transducer field and, 70 Frequency lowering, 95–101 algorithm, 55 nonlinear, 56 cochlear dead regions and finding, 96–98 lowering, 56

dependent compression, 63 linear, lowering, 57–58 nonlinear, 56 one-octave linear, lowering, 57 violin one octave, lowering, 59 Fricatives spectrogram of, 36 speech frequencies, 52 Front end limiting, 46, 47, 48 digital hearing aids, 49 Fundamental frequencies, male voices, 17

G Gain brass music and, 30 hearing aids and, 20–21, 29 at three input levels, 94 high-frequency reduction, 55 reduction for feedback control, 119 strategy, cochlear dead regions and, 52 reeded woodwind and, 30 software programs and, 29 Giordano, Nicholas, “Some Observations on the Physics of Stringed Instruments,” 29 Glockenspiel, 1 Guitar, frequency lowering, one octave, 58

H HAAQI (Hearing- Aid Audio Quality Index), 77–78 Half-wavelength resonators, 7–8 formula for, 8 hard-of-hearing person and, 17 harmonics of, 17 musical instruments, 10

Index   141

Hard-of-hearing child, instrument to play, 17 gain and, fitting strategy, 30 peak input limiting level, voice quality and, 52 person, half-wavelength resonators and, 17 Hardware, technologies, experiments and, 41 Harmonics even numbered, above first harmonic, 57 frequency of, 17–19 timbre and, 16 music and, 19 frequency and, 53 resonators half-wavelength, 17 quarter-wavelength, 17 sound levels of, 19–20 speech and, 19 Hearing- Aid Audio Quality Index (HAAQI), 77–78 Hearing aids acoustic filters in, 70 amplitude compression and, 70 architecture of, post-16 bit, 51 digital signal processing for, 43 double output, stages/ receivers, 119 fitting goal, 34 frequency bandwidth response and, 70 lowering and, 55 response and, 3, 64, 76 front end limiting, 49 gain and, 29 at three input levels, 94 IC-based platform change, 91 microphones, 46, 48–49 integration times, 31–32 optimizing performance of, 34

removing, 94–95 music and, 24 rules of thumb, specifying upper frequency range limit, 105 setting higher frequency setting, 67–69 low frequency limit, 67 single channel, 118 smartphones and, 107–111 Hearing loss, half-wavelength resonators and, 17 Hearing Loss in Musicians: Prevention and Management, Marshall Chasin, 4 Hearing Review, 120 Audiology to Extend a Musicians’ Career, 120 Helmholtz, related formant structures, 5 High frequency gain reduction, 55 obstruent, spectrum of, 23 setting, hearing aid, 67–69 sound, balance of high frequency and, 65–67 Higher limit, frequency, 67–69

I IC-based platform change, 91 Inductive loop system, 93 Inner hair, damage, cochlear dead regions and, 52 Input level, distorted/ undistorted, 44, 45 Instantaneous peak, to long-term average RMS, 31 Instruments fat part of, 20 musical, described, 1

142  Music and Hearing Aids:  A Clinical Approach Instruments  (continued) reeded gain and, 30 sound levels and, 27 stringed frequency regions and, 27 gain and, 30 sound levels and, 27 Integration times, 31–32 Intelligibility, of spondees, 65 Intelligibility Index (SII), crest factor and, 33, 76

Low frequency bands of noise, 65 limit, hearing aid setting, 67 sound, balance of high frequency and, 65–67 Lower-level stimuli, experiments using, 41 Lower limit frequency, 67–69 response, 105

K

Males, fundamental frequencies of, 17 Martin, William (Billy), 100 Masking, upward spread of, 65 MEMS (Micro Electromechanical Systems) microphones, 108–110 Men, fundamental frequencies of, 17 Ménière’s syndrome, 99 Micro Electromechanical Systems (MEMS) microphones, 108–109 Microphones broadband, 91 internal noise level, 93 covering, 90–91 external, 93–94 hearing aids, 46, 48–49 integration times, 31–32 MEMS, 108–110 reducing sensitivity of, 90–93 smartphones, 108–110 use of low-cut, 91–93 Middle C, piano and, 16 Model, source-filter-radiation model, for acoustic systems, 1–2 Modelling of Wind Instruments, Benoit Fabre, 29

K-AMP, 117 Keyboard, TEN (HL) test and, 98–100 Killion, Mead, 34–35

L Ledbetter, Rick, 120 Letters, used by musicians, 15–17 Level dependent compression, 20 Libby horn, 3 Linear frequency lowering, 57–58 processing algorithms, 60 response, rock music and, 63 Liquids sonorants, 19 spectrogram of, 36 Listening, to music, fitting strategies, 30 Long-term average RMS, instantaneous peak to, 31 usic spectrum, 76 Loud, speech/music, 20–30 Low-back vowels, spectrum of, 22

M

Index   143

Mokotoff, Charles, 120 Multi-channel compression, 20, 102–104 Music adjustments, 24 amplification ranges for, 64 crest factor for, 21, 33, 38 differences, in speech and, 35 frequency harmonics and, 53 lowering and, 52–59 harmonics and, 19 instruments described, 1 mutes, 10–11 wavelength resonators of, 10 listening to, fitting strategies, 30 loud, 20–30 notes, fundamental frequencies of, 16 removing hearing aid and, 24 smartphone and, 107–111 spectrum, long-term, 76 vs. speech, 17–20 frequency of harmonics, 17–19 Musicians, letters used by, 15–17 Mutes, musical instruments, 10–11

N Nasals sonorants, 19 spectrogram of, 36 Noise, low frequency bands of, 65 Nonlinear algorithms, 60 behavior, creation of, 27 frequency lowering, 56 algorithm, 56

Nouns, speaking level and, 21 Nunnery, Stu, 120

O Oboe, frequency lowering, one octave, 58 Obstruents, 19 high-frequency, spectrum of, 23 speech frequencies, 52 One-octave, linear frequency lowering algorithm, 57 for instrumental music, 117–118 Outer ear canal, 12 cerumen and, 11

P Parameters attack, 20 decay, 20 Peak clipping, 46, 60, 62 compression limiting and, 101 levels, 43 input limiting, 43 Peak input limiting level, 88–95 clinical strategies, 90 hearing aid microphone, 90–93 covering, 90–91 external, 93–94 low-cut, 91–93 voice quality and, 52 Percussion instruments described, 1 sonorants and, 19 Perfect fifth note, 57, 58 third note, 57

144  Music and Hearing Aids:  A Clinical Approach Personal Sound Amplification Product (PSAP), 95, 117 Physical attenuator, musical instruments, 10–11 Piano based test, 99 frequency lowering, one octave, 58 middle C on, 16 TEN (HL) test and, 98–100 Platform change, IC-based, 91 Playing, chart, differences between playing and, 29 Preference, quality and, 76–79 PSAP (Personal Sound Amplification Product), 95, 117

Q Quality, preference and, 76–79 Quarter-wavelength resonators, 5–7 formula for, 6 harmonics of, 17 musical instruments, 10

R Radiation, resonant system output, 3, 5 Real Ear Aided Response (REAR), 9 REAR (Real Ear Aided Response), 9 Rear ear, measures, procedure, 30 Receiver In the Canal (RIC), coupling tips, 41 Reduced vowel “schwa,” 5 Reeded instruments gain and, 30 sound levels and, 27 Research laboratories, 41–42

Resonance, vent-associated, 67 Resonances Helmholtz, 5 speech acoustics, 5 Resonant chamber, 3 frequency, first, 6 system output, radiation, 5 Resonating length, described, 6 Resonators half-wavelength, 7–8 formula for, 8 hard-of-hearing person and, 17 harmonics of, 17 musical instruments, 10 quarter-wavelength, 5–7 formula for, 6 wavelength of, filter, 5–10 Revit, Larry, 120 RIC (Receiver In the Canal), coupling tips, 41 RMS (Root mean square) of a signal, 31 spectral peaks, 32 Rock music channels and, 60, 102 linear response and, 63, 128 Root mean square (RMS) of a signal, 31 spectral peaks, 32

S Saxophone frequency lowering, one octave, 58 soprano, tube shape and, 8–9 Sentences, initial words, speaking level and, 21 Signal, RMS of, 31 SII (Speech Intelligibility Index), 33, 76

Index   145

Singing, chart, differences between playing and, 29 Single channel hearing aids, 118 Skinner, Margaret (Margo), 65–66 Smartphones apps, user control and, 120 control, music and, 107–111 digital delay and, 110–111 microphones, 108–110 Software digital, algorithm, 43–44 programs, gain and, 29 “Some Observations on the Physics of Stringed Instruments,” Nicholas Giordano, 29 Sonorants, 17, 19 lower frequency, 53 Soprano saxophone, tube shape and, 8–9 Sound levels absolute, 21 of harmonics, 19–20 instruments and, reeded, 27 stringed instruments and, 27 violin, 27 Source-filter-radiation model, acoustic systems, 1–2 Speaking, levels, nouns/initial words and, 21 Spectral level, 29 peaks, 32, 71, 74 shape, 27 Spectrum of cello, 27 clarinet, 26 French horn, 25 of low-back vowel, 22 violin, 27, 28 processed in two conditions, 54

Speech acoustics, 3 resonances, 5 unconstricted vowel [ə], 5 adjustments, 24 crest factors for, 21, 33 differences, in music and, 35 frequency boost and, 21 lowering and, 52–53 harmonics and, 19 loud, 20–30 vs. music, 17–20 frequency of harmonics, 17–19 sonorants and, 19 spectrum, 35 Speech acoustics, 3 resonances, 5 unconstricted vowel [ə], 5 Speech Intelligibility Index (SII), 33, 76 Spondees, intelligibility of, 65 Springer Handbook of Systematic Musicology, Rolf Bader, 2, 29 Stops, speech frequencies, 52 Stringed instruments frequency regions and, 27 gain and, 30 sound levels and, 27 System output, 5 resonant, radiation, 3

T Tape, cellophane, as microphone covering, 90–91 Technology, return to older, 117–121 TEN (HL) test, 96, 98 THD (Total harmonic distortion), 48 Third note, perfect, 57

146  Music and Hearing Aids:  A Clinical Approach Threshold Equalizing Noise (TEN) test, 96, 98 Timbre, frequency harmonics and, 16 Time delay adjustable, 118–119 in digital circuits, 73–76 Total harmonic distortion (THD), 48 Transducer, field, flat frequency response and, 70 Trombone, 16 Trumpet, 6, 16 frequency lowering, one octave, 58 spectral shape of, 27 Tube, 3 shape of, 8–10 TV listening device, 107

Vibrato, 20 Violin frequency lowering, one octave, 58, 59 half-wavelength resonator, 16 sound levels and, 27 spectra of, processed in two conditions, 54 spectrum of, 27, 28 Vocal track, Fant’s model of, 2 Voice, quality, peak limiting level and, 52 Volume velocity, 10–11 Vowels sonorants, 19 spectral peaks, 32 spectrogram of, 36 spectrum of low-back, 22

U

Wavelength resonators, of the filter, 5–10 WDRC (Wide dynamic range compression), circuit and, 102 compression and, 60–61 slow acting, 63 Wide dynamic range compression (WDRC) circuit, 102 compression and, 60–61 slow acting, 63 Windows of analysis, crest factor and, 31 Woodwinds, mutes and, 11

UILL (Upper Input Level Limit), 43 Unconstricted vowel [ə], speech acoustics, 5 Upper, frequency response, cut-off, 69 Upper Input Level Limit (UILL), 43

V Velocity, volume, 10–11 Vent-associated resonance, 67

W