Hearing in Noise – The Importance of Amplification

Since T’s hearing loss is mild, it’s actually pretty hard to tell from his behavior that he has a hearing loss at all – without hearing aids, he startles at loud noises, turns to look for sounds like the faucet, turns when he hears his name, etc. A lot of people ask me why he needs to wear hearing aids at all, when his hearing seems quite good!

I think that T will get the most benefit from his hearing aids when listening to speech in noisy environments. To understand why, I’ll explain a little about listening to speech in noise and how hearing loss affects it. Keep in mind that this is a really complicated topic and the subject of lots of active research, so what follows is very simplified!

Different speech sounds are associated with different frequencies (or pitches). For example, a sound like “mmmm” has primarily low-frequency components, whereas a sound like “ssss” has primarily high-frequency components. Speech, with lots of sounds all mixed together, will contain lots of different frequencies. In the left panel of the figure below (FIG. 1), I drew a cartoon of a speech signal consisting of lots of different frequencies.

nh.jpg

FIG. 1 – Speech and the normal-hearing cochlea

Incoming sound is received by the ear and is processed by the cochlea – the part of the inner ear that turns sound pressure waves into electrical signals that are sent to the brain. One of the coolest things about the cochlea is that it is tonotopically organized. This means that low-frequency sounds activate one end of the cochlea, high-frequency sounds activate the opposite end, and frequencies in the middle activate portions in the middle. In the normal-hearing cochlea, the frequency resolution is amazing – very small differences in frequency (like the difference between adjacent keys on a piano) will cause noticeable differences in the place of activation in the cochlea. I tried to illustrate this in the middle panel of FIG. 1, marked “Normal Auditory Filters” – the curves there are relatively narrow (especially contrasted with FIG. 2 below).

The effect of the narrow auditory filters in the normal-hearing cochlea is that a speech signal, processed by the cochlea and sent to the brain, will show sharp contrast across frequencies – the peaks are sharp and the valleys are deep, as indicated by the red arrow.

The contrast between the normal-hearing cochlea and a cochlea with hearing loss might be clearer when looking at FIG. 2 below:

hl.jpg

FIG. 2 – Speech in quiet with hearing loss

The middle panel of FIG. 2 shows the broadened auditory filters that occur with hearing loss. One way to think about these broadened filters: if you were pressing two piano keys, with hearing loss you would need to skip over more keys between them to tell that two different notes are being played. The effect of the broadened auditory filters on speech can be seen by comparing the right panels of FIG. 1 and FIG. 2 – with hearing loss, the peaks of the speech signal are less sharp and the valleys are shallower; there is less contrast between the peaks and valleys.
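If you like to tinker with code, here’s a toy numerical sketch of the filter-broadening idea – emphatically not a real auditory model, just an illustration: we take the same jagged “speech spectrum” and smooth it once with a narrow filter and once with a wide one, and the wide filter flattens the peaks and fills in the valleys, just like in the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cartoon "speech spectrum": a jagged curve with peaks and valleys.
freqs = np.linspace(0, 1, 500)
spectrum = 1 + 0.8 * np.sin(2 * np.pi * 6 * freqs) + 0.2 * rng.standard_normal(500)

def auditory_smear(spec, bandwidth):
    """Smooth a spectrum with a Gaussian "auditory filter" of a given width."""
    half = int(4 * bandwidth)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) / bandwidth) ** 2)
    kernel /= kernel.sum()
    return np.convolve(spec, kernel, mode="same")

narrow = auditory_smear(spectrum, bandwidth=3)   # "normal": narrow filters
broad = auditory_smear(spectrum, bandwidth=30)   # "hearing loss": broad filters

# Peak-to-valley contrast: bigger means sharper peaks and deeper valleys.
print("narrow-filter contrast:", round(narrow.max() - narrow.min(), 2))
print("broad-filter contrast: ", round(broad.max() - broad.min(), 2))
```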

The contrast between the peaks and valleys in a speech signal is important for understanding speech, but it’s especially important for understanding speech in noise. Take a look at FIG. 3 below:

nh_noise.jpg

FIG. 3 – Understanding speech in noise with normal hearing

In the right panel of FIG. 3, I’ve “covered up” a lot of the speech signal with noise. Although a lot of the speech signal is buried by the noise, you can still see some of the peaks (like the one in the red box). Depending on the level of the noise (and the type of noise!), this might be enough to understand what’s being said.

Now, compare the right panel of FIG. 3 with the right panel of FIG. 4, below.

hl_noise.jpg

FIG. 4 – Understanding speech in noise with hearing loss

With hearing loss, since the peaks and valleys have far less contrast even in quiet, the noise is especially detrimental – there’s barely any speech signal to grab onto once the speech is buried in noise.
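Continuing the toy sketch (again, made-up numbers, not a real model of hearing): if we bury the narrowly filtered and broadly filtered versions of the same spectrum under the same noise floor, far more of the narrowly filtered spectrum survives above the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(0, 1, 500)
spectrum = 1 + 0.8 * np.sin(2 * np.pi * 6 * freqs) + 0.2 * rng.standard_normal(500)

def auditory_smear(spec, bandwidth):
    """Smooth a spectrum with a Gaussian "auditory filter" of a given width."""
    half = int(4 * bandwidth)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) / bandwidth) ** 2)
    kernel /= kernel.sum()
    return np.convolve(spec, kernel, mode="same")

noise_floor = 1.3  # level of the masking noise (arbitrary units)

for label, bw in [("narrow (normal hearing)", 3), ("broad (hearing loss)", 30)]:
    smeared = auditory_smear(spectrum, bw)
    audible = np.mean(smeared > noise_floor)  # fraction poking above the noise
    print(f"{label}: {audible:.0%} of the speech spectrum is above the noise")
```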

Here’s a cool page that has simulations of the effects of different levels and types of hearing loss on speech in quiet and in noise.

Hearing aids don’t “fix” the broadened auditory filters – they merely amplify sounds at particular frequencies (tailored to a person’s audiogram). But this is still super important for understanding speech: boosting the frequencies that carry the speech peaks lifts those peaks above the noise and makes them stand out more, which is exactly what helps in noisy situations.
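Here’s the amplification idea in the same toy world – again just a sketch, with an invented gain curve (real hearing aid fittings use sophisticated prescription formulas based on the audiogram): applying extra gain in the region of the pretend loss lifts the smeared speech peaks back above the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(0, 1, 500)
spectrum = 1 + 0.8 * np.sin(2 * np.pi * 6 * freqs) + 0.2 * rng.standard_normal(500)

def auditory_smear(spec, bandwidth):
    """Smooth a spectrum with a Gaussian "auditory filter" of a given width."""
    half = int(4 * bandwidth)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) / bandwidth) ** 2)
    kernel /= kernel.sum()
    return np.convolve(spec, kernel, mode="same")

noise_floor = 1.3

# A made-up, frequency-dependent gain: boost the upper half of the
# frequency range, as if the audiogram showed a high-frequency loss.
gain = np.where(freqs > 0.5, 1.6, 1.0)

unaided = auditory_smear(spectrum, bandwidth=30)        # broad filters, no aid
aided = auditory_smear(spectrum * gain, bandwidth=30)   # broad filters + gain

print(f"unaided: {np.mean(unaided > noise_floor):.0%} above the noise")
print(f"aided:   {np.mean(aided > noise_floor):.0%} above the noise")
```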

In T’s case, with a mild hearing loss, he can most likely hear quite well in quiet (like when he and I are interacting one-on-one), but the big benefit for him will come from boosting the particular frequencies that are important for understanding speech in noisy environments (like daycare or, in a few years, school).

Note: there’s lots of cool technology currently available that helps people with hearing loss better understand speech in noisy environments, from FM systems to directional microphones on hearing aids, but this post was merely meant to give an idea of the benefits of amplification in general for understanding speech in noise!

Speech Therapy Session – January 25, 2016

We started today by repeating some of the games we played last session where we paired toys with different Ling sounds, and asked T to identify the toy paired with a particular sound.  Today, he was a bit more interested in trying to pull himself up to stand; we tried to entice him with the toys for a few minutes, then moved on 🙂

One thing we have been working on since T first started speech therapy is the concept of turn-taking. Conversation naturally involves taking turns; for example, knowing that when one person pauses, it’s the other person’s turn to say something. One way we have worked on this that T LOVES is by taking turns banging on stuff (like a drum). One of us will bang on the drum, then give T a turn, and we go back and forth. He has really started to get the hang of turn-taking, and will wait for his turn to bang.

The other concept that drum-banging is helping to reinforce with T is segmentation – for example, that “bang” is one word and “bang bang” is two words. When it’s our turn to drum, we will bang either once, twice, or three times, and then encourage T to bang the same number of times when it’s his turn. T is definitely getting the hang of segmentation, as he will bang a similar number of times, although it’s clear that he prefers banging many times to banging just once! 🙂 In grad school, I learned a bit about how infants learn to segment word boundaries (for example, hearing the stream of speech “iateanicecreamconebutitmelted” and learning that “melted” is a word but “itmelted” is not) – it’s SO COOL, and I hope to write more about it soon!

We finished up with a fun couple of rounds of peekaboo, where we tried to encourage T to make sounds like “boo!” before we pulled a sheet away from our faces.  He definitely made a few “ba ba” sounds, and he definitely had a blast! Peekaboo is always a favorite with him!

Speech Therapy Session – January 22, 2016

We did a few things today that were pretty new to T – he seems to be just on the cusp of being able to understand what we were doing, so I’m excited to see how he responds to these games over the next few weeks.

The first thing we did today was work on identification with the Ling-6 Sounds.  The Ling-6 sounds are a group of six different, common speech sounds that, together, represent the speech spectrum from 250 Hz to 8000 Hz. The sounds are (with a rough sketch of their frequency bands after the list):

  1. “mmm” – (very low frequency)
  2. “ooo” – (low frequency)
  3. “eee” – (some low frequency, some high frequency)
  4. “ahhh” – (centered in the frequency range)
  5. “shhh” – (moderately high frequency)
  6. “ssss” – (high frequency)
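For fellow nerds, here’s that same list as a little lookup table. The frequency bands are rough, illustrative values chosen to match the descriptions above – not clinical specifications:

```python
# Rough, illustrative frequency bands (Hz) for the Ling-6 sounds.
# These are ballpark values for where each sound's energy sits,
# not clinical specifications.
LING_SOUNDS_HZ = {
    "mmm": (250, 350),    # very low frequency
    "ooo": (300, 1000),   # low frequency
    "eee": (300, 3000),   # low first formant plus a high second formant
    "ahh": (700, 1700),   # centered in the frequency range
    "shh": (2000, 4000),  # moderately high frequency
    "sss": (4000, 8000),  # high frequency
}

for sound, (low, high) in LING_SOUNDS_HZ.items():
    print(f"{sound}: roughly {low}-{high} Hz")
```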

Today, we paired different Ling sounds with toys.  For example, we paired “mmm” with an ice cream cone (as in “yuMMMM”), “oooo” with a ghost (as in “bOOO!”), “ahhh” with an airplane, “shhhh” with a baby doll, and “ssss” with a snake. We’d make the sound while showing T the corresponding toy. The cool new step today was that, after showing T two different toys while making their corresponding sounds, we set the two toys together on the floor, made one of the Ling sounds, and waited for T to reach for the toy that matched the sound he heard.

I was blown away when T heard the speech therapist say “ahhhh” and he reached for the airplane! He only did this once, so it might have just been chance. We didn’t get a chance to test him too thoroughly, because he soon got bored and started crawling over to the trash can 🙂

A lot of the stuff we’ve done previously merely required T to detect that he heard something; this was one of the first times he was required to identify the sound that he heard (by reaching for the corresponding, paired toy). I’m excited to watch him get better at this sort of thing in the next few months!

The other cool thing that we did today was starting to teach him to understand the phrase “give me.” When he was holding a toy, we’d say “give me the toy,” and hope that he’d give us the toy. For today at least, he was more interested in throwing the toys than giving them to us, but we’ll keep practicing!


Article Review – Language Outcomes in Children With Hearing Loss

Many studies that have looked at language outcomes of children with hearing loss have collapsed all children with any degree of hearing loss (mild, moderate, severe, profound) into one group and compared them to children with normal hearing.  There haven’t been many studies that have looked at how the degree of hearing loss affects language outcomes, how the age at which hearing aids are first fitted affects language outcomes, or how these factors might interact with each other.

Tomblin et al. recently published an article sharing results from a longitudinal study that looked at language outcomes from children (aged 2 to 6 years) with varying degrees of hearing loss (“Language Outcomes in Young Children With Mild to Severe Hearing Loss,” Ear and Hearing, 2015, Vol. 36, pp. 76-91).

A really cool thing about this study is that it was a longitudinal study that looked at hundreds of children. A longitudinal study follows the same group of people and collects data from them at different time points, and it can therefore detect changes across time for different groups (for example, how language development changes over time for children with mild hearing loss compared to children with severe hearing loss). Longitudinal studies tend to be quite expensive and time-consuming to run, so this was quite a feat! Here’s more information about longitudinal studies.

Tomblin et al. compared language outcomes for 290 children with hearing loss (grouped into different degrees) with language outcomes for 112 children with normal hearing. The children participated in a battery of age-appropriate language tests that were administered yearly, and the results of the language tests were compared based on factors such as the degree of hearing loss, the age at which children first began wearing hearing aids, and the duration of time per day children typically wore hearing aids.

The study was quite extensive, so I’ll just highlight a few results that I found particularly interesting!

Language Growth Over Time Is Consistent For All Degrees Of Hearing Loss

The researchers found that children with worse hearing (as measured by the Better Ear Pure Tone Average [BEPTA] score) had significantly worse language scores at all age measurements. They also found that language scores significantly improved for all groups of children over time – that is, language scores improved from year to year, for both normal-hearing children and children with hearing loss, regardless of the degree of hearing loss.

Interestingly, the researchers also found that there was no significant difference in language improvement as a function of degree of hearing loss. So, there was no statistically significant difference in language improvement for children with a severe hearing loss compared to children with a mild hearing loss compared to children with normal hearing – the pattern of growth in language scores was parallel for all groups! (See FIG. 2 of Tomblin et al., reproduced below.)
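For stats-minded readers, this “parallel growth” finding is essentially a test of an age-by-group interaction in a longitudinal growth model. Here’s a tiny sketch of what that kind of analysis looks like, using simulated data – my own illustration, not the authors’ actual model or data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate yearly language scores for children in three hearing groups.
# Groups differ in starting level but share the same growth rate,
# mimicking the "parallel growth" pattern reported in the paper.
rows = []
for child in range(150):
    group = rng.choice(["normal", "mild", "severe"])
    baseline = {"normal": 100, "mild": 92, "severe": 85}[group]
    child_offset = rng.normal(0, 5)  # each child has their own overall level
    for age in range(2, 7):  # yearly visits from age 2 to 6
        score = baseline + child_offset + 8 * age + rng.normal(0, 3)
        rows.append({"child": child, "group": group, "age": age, "score": score})

data = pd.DataFrame(rows)

# Random-intercept growth model; the age:group interaction terms test
# whether growth rates differ by hearing group (here, they shouldn't).
model = smf.mixedlm("score ~ age * group", data, groups="child").fit()
print(model.summary())
```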


fig2.jpg

The Effect of Age of Hearing Aid Fitting On Language Outcomes and Language Growth

The first two years of a child’s life are critical for language development, so current thinking is that it’s best for a child to start wearing hearing aids as soon as possible to maximize high-quality auditory and language input during this critical time, and that fitting hearing aids before 6 months of age is ideal. However, for various reasons, it might not be possible to detect a hearing loss until a child is older, and this study reports some interesting findings about language development for children who began wearing hearing aids at different ages.

For children who began wearing hearing aids at less than 6 months of age, the researchers found that language outcome growth was fairly stable and flat (relative to the other groups). Recall that in this study, language outcomes were only measured beginning at 2 years of age, so the researchers hypothesized that early hearing aid fitting at < 6 months protected these children from falling behind, or allowed them time to catch up before language outcomes were measured at 2 years. The children who received hearing aids after 1 year had worse language outcomes when measured at 2 years, but showed a steeper increase in language growth over time compared to the earlier fit children – this indicates that even though these children started off with poorer language outcomes compared to the earlier fit children, wearing hearing aids during the critical preschool years allowed them to catch up to their earlier fit peers. (See FIG. 5 of Tomblin, et al., reproduced below).

Overall, the data presented in the article indicate that while fitting hearing aids as early as possible is definitely best, later fitting (done for whatever reason) is still very beneficial and does not necessarily lead to irreversible language deficits.

fig5.jpg

Morphology and Degree of Hearing Loss

There are many different aspects of language, such as vocabulary, grammar, etc., and childhood acquisition of different aspects of language might be more or less affected by hearing loss. Tomblin et al. hypothesized that there would be a relationship between a child’s degree of hearing loss and their scores on a task that looks at language morphology, and that this relationship would be different than the relationship between degree of hearing loss and a task that looks at vocabulary.

A little bit about morphology: A morpheme is the smallest meaningful unit in a language. A simple example is the “s” that turns “dog” into the plural “dogs.” This morpheme is also particularly subtle – the “s” sound is relatively quiet and high in frequency. So, learning the grammatical rule of “one dog” and “two dogs” might be particularly difficult for children with hearing loss (and more difficult than learning the vocabulary word “dog”), because the “s” sound is not as readily accessible to the child in the conversations they hear every day.
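Here’s a back-of-the-envelope sketch of that audibility argument. Every number below is made up for illustration – this is not a real audiogram, and real speech levels and thresholds are more complicated:

```python
# Made-up thresholds (dB HL) sketching a mild hearing loss.
thresholds = {250: 25, 500: 25, 1000: 30, 2000: 30, 4000: 35, 8000: 35}

# Made-up conversational levels (dB) of two speech sounds at the ear:
# a loud, low-frequency vowel vs. a quiet, high-frequency /s/.
speech_sounds = {
    "ah (low frequency, loud)":   (500, 45),
    "s  (high frequency, quiet)": (8000, 30),
}

for sound, (freq_hz, level_db) in speech_sounds.items():
    audible = level_db > thresholds[freq_hz]
    verdict = "audible" if audible else "inaudible"
    print(f"{sound}: {verdict} ({level_db} dB vs. a {thresholds[freq_hz]} dB threshold)")
```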

Tomblin et al. found that morphology does seem to have a different relationship with degree of hearing loss compared to vocabulary – although both morphology and vocabulary scores were worse with more severe hearing loss, morphology seemed to be more dramatically affected than vocabulary (see FIG. 9 of Tomblin et al., reproduced below).

These results underscore the importance of children with hearing loss wearing properly fitted hearing aids, so that they have access to the subtle aspects of language in the conversations they hear every day.

fig9.jpg


Overall, I found the results of this study very encouraging – when hearing loss is identified and treated with properly fitted hearing aids, language acquisition and development, even of the more subtle aspects of language, can be really very good!


Speech Therapy & Audiology Appointment – January 15, 2016

T had both a speech therapy session and an audiology appointment today (with a nap in between!)

Speech Therapy

T has gotten good at producing an “ahhhh” sound – our speech therapist explained that this tends to be one of the first sounds babies produce since it’s “easier” compared to other vowel sounds – the mouth is open and relaxed (rather than requiring a coordinated mouth shape or tongue position).  Now, we’re starting to work on getting T to produce other vowel sounds, like “oooo” and “eeee.”

Our speech therapist explained that these are “harder” than “ahhh,” because they require tension in the mouth.  Making those sounds out loud now, I notice that my lips are more closed compared to “ahhh,” and my tongue is in a particular position.  One of the things we’ve been doing at speech therapy for a few weeks now is pairing different sounds with visual motions.  Our speech therapist explained that this is to make the sounds more salient by combining them with an attention-grabbing visual motion cue.  For example, we’ve been playing with ribbons a lot – when we make “ahhh” sounds, we wave the ribbons around loosely, up and down.  In contrast, when we make “ooo” and “eee” sounds, we often grab the ends of the ribbon (or let T hold one end) and pull on the other, to give a visual cue of tension while making the sound.  Today, we also focused on sounds and words that have an “ooo” or “eee” sound, like saying “choo choo!” when playing with a train, or playing peekaboo.

T is going through some big gross motor leaps lately.  In the past 2 weeks, he’s figured out crawling, and just learned to pull himself up to standing yesterday.  Now, all he wants to do is move around and practice his mobility skills.  Our speech therapist told us that often, when babies are working on their gross motor skills, speech production motor skills take a backseat, since their brains are focusing so hard on gross motor skills.  That was really interesting to learn!

Audiology Appointment

T sees his audiologist every 4 to 6 weeks or so – she will usually check and tweak his hearing aid settings, check for fluid in his ear, and, for the past few sessions, try to get some behavioral audiometric data.

Measuring an audiogram from a cooperative adult is pretty straightforward – they can tell you when they hear a sound.  It’s much trickier with babies though! Until T was 6 months old or so, the audiologist estimated his audiogram with an Auditory Brainstem Response (ABR) measurement.  Now that he’s older, we have started trying to get behavioral audiometric results.  At each audiology appointment, we continue to refine his audiogram; we’re only able to test him for small chunks of time (maybe 10-15 minutes), but we are slowly but surely getting a more accurate picture of his hearing loss.

The test that T’s audiologist has been doing with him is called Visual Reinforcement Audiometry (VRA).  She plays sounds at different frequencies and different loudnesses, and when he responds in a way that indicates he heard the sound, she “rewards” him by showing him light-up dancing puppets (I find them pretty creepy, but T LOVES them).  In this way, T is conditioned to indicate which sounds he hears without having to respond verbally – once he realized that looking toward the sound makes the puppets appear, he became much more motivated to respond whenever he heard a sound.

In order to separately measure his right and left ears (since he won’t tolerate headphones), the audiologist stuck the probe wire with the speaker into his ear mold (with the processor disconnected) – I thought this was such a clever trick!  T is already used to wearing his ear molds since he wears his hearing aids, so he barely even noticed the probe wire (although he did try to chew on the wire once he noticed it).

T was in a great mood, and we were able to get data at both mid (1000 Hz) and high (4000 Hz) frequencies for both ears, and the results continue to confirm a mild bilateral hearing loss.

After measuring his responses with air conduction (sounds played through the speaker into the ear canal), the audiologist wanted to measure his bone conduction responses.  With air conduction, sounds go through the full chain of processing – through the outer ear, the middle ear, and then the inner ear.  With bone conduction, a little oscillator vibrates on the mastoid bone (just behind the ear), which bypasses the outer and middle ears.  The audiologist explained that if the air conduction and bone conduction results differ (for example, if air conduction results indicate a hearing loss but bone conduction results are normal), this can suggest a middle ear problem or conductive hearing loss.  For T, the bone conduction results agreed with the air conduction results, indicating a sensorineural hearing loss (that is, a problem with the hair cells in the cochlea).  One really interesting thing the audiologist mentioned was that the bone conduction results indicate hearing levels for the better ear – she had placed the oscillator behind his left ear, but she said the vibration is picked up by the “better” cochlea, regardless of where the oscillator is placed!
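The air-versus-bone comparison boils down to looking for an “air-bone gap” at each frequency. Here’s a simplified sketch of that logic, with made-up thresholds – real audiological interpretation involves much more than this one rule:

```python
# Made-up example thresholds in dB HL at the two frequencies tested.
air_conduction = {1000: 35, 4000: 40}   # outer + middle + inner ear
bone_conduction = {1000: 35, 4000: 40}  # inner ear only (bypasses outer/middle)

for freq_hz, ac in air_conduction.items():
    bc = bone_conduction[freq_hz]
    gap = ac - bc
    # A sizable air-bone gap (commonly 15 dB or more) suggests a conductive
    # component; matching results with elevated thresholds suggest a
    # sensorineural loss, as in T's case.
    kind = "conductive component" if gap >= 15 else "sensorineural pattern"
    print(f"{freq_hz} Hz: air {ac} dB, bone {bc} dB, gap {gap} dB -> {kind}")
```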


Speech Therapy Session – January 11, 2016

T (7.5 months old currently) had a speech therapy session today.  Most of our sessions consist of playing with different toys and encouraging him to vocalize to continue playing.

We started by playing with Sophie The Giraffe, a really popular squeaky teething toy.  T LOVES Sophie – the therapist began by hiding Sophie under the table and making her squeak, and waiting for T to localize the squeaking sound by trying to find Sophie.  He’s gotten good at localizing sounds in the last few months, and started trying to find her right away.  The therapist moved Sophie towards him, tickling him, then quickly moving Sophie away – encouraging him to vocalize, and rewarding him by moving Sophie closer when he did so.  Eventually, Sophie just ended up in his mouth!

We later moved on to playing peekaboo.  T has recently gotten REALLY into peekaboo!  The therapist hid herself under a blanket, and T got excited, knowing she was under there.  From under the blanket, she asked him where she was, and then after hearing him vocalize, she pulled off the blanket – and he was so excited to see her!

Up until now, we have been encouraging T to vocalize by pausing before performing an action that he wants – for example, giving him Sophie or pulling off the blanket during peekaboo, and then only performing the action after hearing him make a sound.  When we first started speech therapy 3 months ago, T was inconsistent about vocalizing, but he’s come so far since then!  Now, the speech therapist said we should be pushing him to make more “specific” sounds.  For example, going forward, we will start pushing him to make “mmmm” sounds when we ask if he wants “MOOOOORE Sophie?”  And, instead of rewarding him by performing a desired action after hearing any vocalization from him, we’ll wait until he makes a more specific, desired sound that we’re encouraging him to produce.

T has been babbling “da da da da” for a few weeks now, so as his mom, I’m excited for him to get the hang of “mmmm,” so he can say “Mama!” 🙂