Because T's hearing loss is mild, it's actually pretty hard to tell from his behavior that he has any hearing loss at all – without hearing aids, he startles at loud noises, turns to look for sounds like the faucet, turns when he hears his name, etc. A lot of people ask me why he needs to wear hearing aids at all when his hearing seems quite good!
I think that T will get the most benefit from his hearing aids when listening to speech in noisy environments. To understand why, I'll explain a little about listening to speech in noise and how hearing loss affects it. Keep in mind that speech understanding in noise – and how hearing loss changes it – is a really complicated topic and the subject of lots of active research, so this is very simplified!
Different speech sounds are associated with different frequencies (or pitches). For example, a sound like “mmmm” has primarily low-frequency components, whereas a sound like “ssss” has primarily high-frequency components. Speech, with lots of sounds all mixed together, will have lots of different frequencies. In the figure below (FIG. 1), in the left panel, I drew a cartoon of a speech signal that consists of lots of different frequencies.
Incoming sound is received by the ear and processed by the cochlea – the part of the inner ear that turns sound pressure waves into electrical signals sent to the brain. One of the coolest things about the cochlea is that it is tonotopically organized: low-frequency sounds activate one end of the cochlea, high-frequency sounds activate the opposite end, and frequencies in the middle activate portions in between. In the normal-hearing cochlea, the frequency resolution is amazing – very small differences in frequency (like the difference between adjacent keys on a piano) cause noticeable differences in the place of activation along the cochlea. I tried to illustrate this in the middle panel of FIG. 1, marked “Normal Auditory Filters” – the curves there are relatively narrow (especially contrasted with FIG. 2 below).
The effect of the narrow auditory filters in the normal-hearing cochlea is that a speech signal, processed by the cochlea and sent to the brain, will show sharp contrasts in frequencies – the peaks are sharp and the valleys are deep, as indicated by the red arrow.
The contrast between the normal-hearing cochlea and a cochlea with hearing loss might be clearer looking at FIG. 2 below:
The middle panel of FIG. 2 shows the broadened auditory filters that occur with hearing loss. One way to think about broadened filters: if you pressed two keys on a piano, with hearing loss you would need to skip over more keys between them than with normal hearing before you could tell that two different notes were being played. The effect of the broadened auditory filters on speech can be seen by comparing the right panels of FIG. 1 and FIG. 2 – with hearing loss, the peaks of the speech signal are less sharp and the valleys are shallower, so there is less contrast between the peaks and valleys.
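If you like playing with numbers, here's a quick sketch of the same idea. This is purely a toy illustration, not a real cochlear model (real models use gammatone-style filter banks, and all the numbers here are made up): I'm drawing a "speech spectrum" as a few Gaussian peaks and standing in for auditory filtering with a simple Gaussian smoothing, where the smoothing width plays the role of filter bandwidth.

```python
import numpy as np

# Toy "speech spectrum": a few spectral peaks on a normalized frequency axis.
freqs = np.linspace(0, 1, 500)
spectrum = sum(np.exp(-((freqs - c) / 0.02) ** 2) for c in (0.2, 0.5, 0.8))

def cochlear_smooth(signal, bandwidth):
    """Stand-in for auditory filtering: smooth with a Gaussian of given width."""
    x = np.linspace(-0.3, 0.3, 301)
    kernel = np.exp(-(x / bandwidth) ** 2)
    return np.convolve(signal, kernel / kernel.sum(), mode="same")

normal = cochlear_smooth(spectrum, 0.02)    # narrow filters: peaks stay sharp
impaired = cochlear_smooth(spectrum, 0.10)  # broad filters: peaks smear out

# Broader filters flatten the peak-to-valley contrast that reaches the brain.
print(normal.max() - normal.min(), impaired.max() - impaired.min())
```

The broad-filter version ends up with noticeably less peak-to-valley contrast, which is exactly the difference between the right panels of FIG. 1 and FIG. 2.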
The contrast between the peaks and valleys in a speech signal is important for understanding speech, but it’s especially important for understanding speech in noise. Take a look at FIG. 3 below:
In the right panel of FIG. 3, I’ve “covered up” a lot of the speech signal with noise. Although a lot of the speech signal is buried by the noise, you can still see some of the peaks (like the one in the red box). Depending on the level of the noise (and the type of noise!), this might be enough to understand what’s being said.
Now, compare the right panel of FIG. 3 with the right panel of FIG. 4, below.
With hearing loss, since the peaks and valleys have far less contrast even in quiet, the noise is especially detrimental – there's barely any speech signal left to grab onto once the speech is buried in noise.
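To put rough numbers on that intuition, here's the same kind of toy sketch (same caveats: a made-up spectrum and a flat noise floor, not real speech or a real cochlear model). Smooth a peaky spectrum with a narrow filter and with a broad filter, then count how much of each version still pokes above the noise:

```python
import numpy as np

# Same toy spectrum and Gaussian-smoothing stand-in for auditory filtering.
freqs = np.linspace(0, 1, 500)
spectrum = sum(np.exp(-((freqs - c) / 0.02) ** 2) for c in (0.2, 0.5, 0.8))

def cochlear_smooth(signal, bandwidth):
    x = np.linspace(-0.3, 0.3, 301)
    kernel = np.exp(-(x / bandwidth) ** 2)
    return np.convolve(signal, kernel / kernel.sum(), mode="same")

normal = cochlear_smooth(spectrum, 0.02)
impaired = cochlear_smooth(spectrum, 0.10)

noise_floor = 0.4  # an illustrative flat noise level

# How many frequency samples still stand out above the noise?
peaks_normal = int(np.sum(normal > noise_floor))
peaks_impaired = int(np.sum(impaired > noise_floor))
print(peaks_normal, peaks_impaired)
```

With the narrow filters, the tips of the peaks still clear the noise floor; with the broad filters, the smeared-out spectrum sits entirely below it – there's nothing left to grab onto.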
Here’s a cool page that has simulations of the effects of different levels and types of hearing loss on speech in quiet and in noise.
Hearing aids don’t “fix” the broadened auditory filters – they merely amplify sounds at particular frequencies (which are tailored based on a person’s audiogram). But this is super important for understanding speech: you can imagine that boosting particular frequencies above the noise, so that the speech peaks stand out more, would be a big help – especially in noise.
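Extending the same toy sketch one more step (same caveats – made-up numbers, a flat noise floor, and a crude frequency-dependent gain standing in for a real hearing aid fitting, which is far more sophisticated): amplifying the sound before it reaches the broadened filters can push speech peaks back above the noise, even though the filters themselves are unchanged.

```python
import numpy as np

freqs = np.linspace(0, 1, 500)
spectrum = sum(np.exp(-((freqs - c) / 0.02) ** 2) for c in (0.2, 0.5, 0.8))

def cochlear_smooth(signal, bandwidth):
    x = np.linspace(-0.3, 0.3, 301)
    kernel = np.exp(-(x / bandwidth) ** 2)
    return np.convolve(signal, kernel / kernel.sum(), mode="same")

noise_floor = 0.4
impaired = cochlear_smooth(spectrum, 0.10)      # broad filters, unaided

# A crude "prescription": more gain at higher frequencies, standing in for
# gain tailored to an audiogram. Note the filters stay just as broad.
gain = 1.0 + 3.0 * freqs
aided = cochlear_smooth(gain * spectrum, 0.10)  # amplify first, then filter

print(int(np.sum(impaired > noise_floor)), int(np.sum(aided > noise_floor)))
```

Unaided, nothing clears the noise floor; aided, the boosted peaks do – even though the amplification hasn't sharpened the filters at all, which is the point of the paragraph above.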
In T’s case, with a mild hearing loss, he can most likely hear quite well in quiet (like when he and I are interacting one-on-one), but the big benefit for him is likely in boosting particular frequencies that are important for understanding speech in noisy environments (like daycare or, in a few years, school).
Note: there’s lots of cool technology currently available that helps people with hearing loss better understand speech in noisy environments, from FM systems to directional microphones on hearing aids, but this post was merely to give an idea of the benefits of amplification in general to understanding speech in noise!