Conductive vs. Sensorineural Hearing Loss

I found the distinction between conductive and sensorineural hearing loss confusing, so I wanted to write about it. Please note that I’m not an audiologist, so this explanation is just based on my own understanding!

First, here’s a diagram of the ear:

[Image: ear.png – a diagram of the ear]

When we hear a sound, the sound goes in through the outer ear (the pinna), travels through the ear canal, and vibrates the eardrum. The vibration of the eardrum causes the ossicles (three tiny bones in the middle ear) to move, which in turn causes the fluid in the cochlea (the inner ear) to move. The moving fluid causes tiny hair cells in the cochlea to bend, and the location of the hair cells that bend indicates the frequency of the sound (how high- or low-pitched it is). The information from the hair cells then travels to the brain.

A conductive hearing loss occurs when someone has a hearing loss that stems from a problem in the outer or middle ear. Conversely, a sensorineural hearing loss occurs when someone has a hearing loss that stems from a problem with the hair cells of the cochlea or with the nerve that travels from the cochlea up to the brain.

The vast majority of diagnosed hearing losses are sensorineural, especially in adults. Sensorineural hearing loss can occur when hair cells are damaged by medication, by infection, or – especially – by exposure to really loud noise (don’t turn your headphone volume up too high! If people can hear your music when you’re wearing headphones, it’s too loud!). In babies diagnosed with sensorineural hearing loss, the cause might be genetic.

Conductive hearing losses are less common, although they do occur pretty frequently in children. A conductive hearing loss means that sound is having trouble reaching the cochlea – this could be due to a malformation of the ear canal, too much ear wax, or, most commonly in children, fluid in the middle ear and/or an ear infection.

Many conductive hearing losses can be treated or will go away on their own – for example, ear wax can be removed and ear infections can be treated. Conversely, a sensorineural hearing loss can’t be cured – once hair cells are gone, they can’t grow back! (At least not yet – scientists are working on this!) Hearing aids and cochlear implants don’t fix sensorineural hearing loss – hearing aids amplify sounds to stimulate the remaining hair cells, while cochlear implants bypass the hair cells completely and stimulate the nerve that transmits sound information to the brain.

So how do audiologists determine whether a hearing loss is conductive or sensorineural? I think this is tricky, especially with babies! One thing T’s audiologist does is compare his air conduction audiograms with his bone conduction audiograms. Sounds played by air conduction go through the full ear chain – from the outer ear to the middle ear and then to the inner ear. With bone conduction, the audiologist puts a tiny oscillator on T’s mastoid bone, and its vibrations transmit a tone at a particular frequency through the skull – this sound bypasses the outer and middle ears and goes right to the inner ear.

By comparing the air conduction and bone conduction audiograms, the audiologist can get an indication of whether there’s something wrong with the outer or middle ear. If both the air conduction and bone conduction audiograms show a hearing loss, and the losses are similar, this indicates a sensorineural hearing loss. On the other hand, if the bone conduction and air conduction results are very different from each other, this may indicate a conductive hearing loss. For example, if the bone conduction audiogram shows normal hearing thresholds, the inner ear is working normally. If the air conduction audiogram shows abnormal thresholds where the bone conduction results are normal, then even though the inner ear is normal, sound is having trouble getting to it – that is, there’s probably a problem with the outer or middle ear – a conductive hearing loss.
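
To make that comparison a bit more concrete, here’s a toy sketch of the decision logic in Python. This is just my own simplification of the reasoning above, not a clinical rule – the 25 dB HL “normal” cutoff and the 10 dB air-bone gap are assumptions on my part, and a real audiologist looks at thresholds across many frequencies (plus lots of other tests), not a single pair of numbers.

```python
# Toy sketch (my own simplification, not a clinical tool): comparing an
# air conduction (AC) threshold with a bone conduction (BC) threshold at a
# single frequency. Thresholds are in dB HL; higher numbers mean worse hearing.
# The cutoff values below are assumptions for illustration only.
NORMAL_CUTOFF_DB = 25   # assumed rough boundary of "normal" hearing
AIR_BONE_GAP_DB = 10    # assumed size of a meaningful air-bone gap

def classify(ac_threshold, bc_threshold):
    """Very rough classification of one air/bone threshold pair."""
    gap = ac_threshold - bc_threshold
    if ac_threshold <= NORMAL_CUTOFF_DB:
        return "hearing within normal limits"
    if bc_threshold <= NORMAL_CUTOFF_DB and gap >= AIR_BONE_GAP_DB:
        # Inner ear looks fine, but sound isn't getting there by air.
        return "suggests a conductive loss"
    if gap < AIR_BONE_GAP_DB:
        # Air and bone thresholds are similarly elevated.
        return "suggests a sensorineural loss"
    # Both are elevated AND there's a big air-bone gap.
    return "suggests a mixed loss"

print(classify(ac_threshold=45, bc_threshold=10))  # suggests a conductive loss
print(classify(ac_threshold=50, bc_threshold=45))  # suggests a sensorineural loss
```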

I think this is particularly difficult to get a handle on for babies and young children, because they are too young to tell you whether they’re hearing things differently in one ear compared to the other, the way an adult could if they had ear wax buildup or fluid in one ear. And, I think conductive hearing losses in particular can fluctuate a lot, especially for babies and children who go to daycare and school and frequently have colds or ear infections – that variability just makes this even trickier to nail down!

First Word!

We’re calling it! T’s first word is “bubble,” pronounced “ba-buh” or “ba-bwah.”

I wrote here about how we weren’t sure if T’s attempts at “bubble” counted as a word. Since then, T has started pointing to the bottle of bubbles and shouting “ba-buh!” to get us to blow bubbles. It’s clear he’s trying to say “bubble” – he matches the first consonant and first vowel, and he’s using it to get us to do something – so we’re counting it as a word!

 

Bribing Baby With Cheerios

I’m going to take a little break today from talking about hearing and language to talk about something totally unrelated that caught my attention this past week. The Washington Post had this article, which fascinated me, about a recently published study on babies’ innate sense of morality and on using bribery to override that innate morality. (Tasimi, A. and Wynn, K. “Costly rejection of wrongdoers by infants and children.” Cognition, 151, pp. 76-79, 2016 – full study available here!)

Background

In a nutshell, previous studies have shown that even 1-year-old babies have an innate sense of morality – after watching a puppet show, babies are more drawn to a puppet that helps another puppet than to a puppet that hurts or hinders the other puppet. What this study found is that while this is true, you can override this innate preference for the “good” puppet by bribing the baby with graham crackers.

The Study

The researchers first established that babies are more likely to reach for a plate that has more than one graham cracker than for a plate with just one. They did this with a “baseline puppet show” in which 2 puppets each offered the baby a plate of graham crackers. One of the plates had just one graham cracker, and the other had more than one (either 2 or 8) – and indeed, the babies reliably reached for the plate offered by the puppet that had more than one graham cracker.

They next conducted a little “morality puppet play” for the babies. There were two versions of the play. In both versions, a lamb puppet tried to open a box. In the first scenario (the “good puppet” scenario), a helpful rabbit puppet helped the lamb puppet open the box. In the second scenario (the “bad puppet” scenario), an unhelpful rabbit puppet (wearing a different-colored shirt from the helpful rabbit puppet) slammed the box lid shut on the lamb puppet. After showing babies both versions, the researchers had the helpful rabbit puppet and the unhelpful rabbit puppet each offer the baby a plate of graham crackers, with the helpful rabbit puppet offering just one graham cracker and the unhelpful rabbit puppet offering more.

The researchers found that when the unhelpful rabbit puppet offered only 2 graham crackers (that is, more than the helpful rabbit, but not MUCH more), the babies took the helpful rabbit’s crackers – they were rejecting the unhelpful rabbit. However, when the unhelpful rabbit offered 8 graham crackers (MUCH more than the helpful rabbit), the babies took graham crackers from the unhelpful rabbit, suggesting that babies were unable to resist the lure of the bad puppet offering them more crackers.

Here’s the main figure from the study:

[Image: morality.jpg – the main figure from the study]

With the gray bars, you can see that babies preferred the plates with more than 1 graham cracker, regardless of whether the plate had 2 or 8 – they just cared that it was more than 1. Looking at the black bar on the left, though, you can see that the babies strongly rejected the plate with 2 graham crackers when it was offered by the bad puppet – but when the bad puppet offered 8 graham crackers, they reached for that plate almost as often as when the 8-graham-cracker plate was offered by a puppet without behavior issues.

My Replication

Since T (11.5 months) is pretty much exactly the age of the babies in the study, I had to try to replicate this! (this is why people have children, yes? <– totally kidding). Luckily we had a set of puppets at home (that T LOVES), and I used cheerios instead of graham crackers (for crumb issues).

I set up a little curtain area and started my puppet show. I first tried to replicate the baseline finding, to see if T would prefer cheerios from a puppet offering more than 1 compared to from a puppet offering just 1 cheerio. I was very easily able to replicate this – regardless of which of my puppets was offering cheerios or how much more than 1, T robustly reached for the “more than 1” cup.

Then I had to replicate the morality plays. Rather than having my helpful puppet help open a box and my unhelpful puppet slam the box shut, I had my helpful puppet help carry a block and my unhelpful puppet drop a block on the other puppet’s head. (This was particularly disturbing since our puppets are part of a “happy helpers” set – so in this case, I had a doctor puppet dropping a block on a firefighter puppet’s head. oops.)

I then had my “good puppet” offer 1 cheerio and my “bad puppet” offer 2 cheerios – to my surprise, T reached for the cup with one cheerio! I ended up repeating this two more times – in the end, of the 3 times I tried this, he went for the good puppet (offering 1 cheerio) twice, and the bad puppet (offering 2 cheerios) once.

Finally, I had my good puppet offer 1 cheerio and my bad puppet offer 10 cheerios – I repeated this 3 times, and of those 3 times, T always reached for the bad puppet’s many cheerios.

Based on this, it seems like we might have replicated the results of the study! T did somewhat seem to reject the bad puppet when he only offered 2 cheerios (2/3 times), but reached for the bad puppet when he had lots of cheerios (3/3 times). On the other hand, I’m not convinced that he thought the “bad puppet” was all that bad, since he DID reach for the bad puppet several times to kiss him, and he clapped and laughed when the bad puppet dropped the block on the other puppet’s head. So it’s hard to say. Regardless, T and I both had fun 🙂


Singing to Babies

(Note: a lot of the research with infants I’ve been writing about has been done with normally-hearing infants. Although there’s a lot of great research on children with Cochlear Implants, I’m finding that there’s less research on children with mild hearing losses, especially for infants and in interesting areas like music. So, I end up writing about studies that have been done with normally-hearing infants, and I’m really not sure how they translate!)

It’s been clear since T was just a few weeks old that he loves hearing me sing (this was a surprise to me, since pretty much no one else enjoys hearing me sing :)). Since then, I’ve sung to him A LOT – I sing when I play with him, when he’s cranky in the stroller or carrier, lullabies at bedtime, etc. I started wondering what research has been done on singing to babies, so I did a little searching.

One interesting question is whether/how we change how we sing when we sing to a baby compared to singing to an adult or to no one. It’s pretty obvious that people talk to babies differently than they talk to adults – the typical “baby-speak” is called “infant-directed speech,” and it may have a lot of benefits for babies who are acquiring language. Infant-directed speech is usually characterized by slower speech, repetitiveness (e.g., “look at the doggie! the doggie says woof! hi, doggie!”), higher pitch, and more pitch variation (i.e., MUCH less monotone than when talking to adults). Repeating and emphasizing particular words or groups of words probably helps babies focus their attention on them, which in turn helps them learn new words and understand the structure of language (e.g., the concept of phrases and sentences).

But do adults sing differently to babies than they do in their absence (even when singing the same song)? This is an especially interesting question, because many of the ways we change our speech when it’s directed to a baby aren’t easily done in singing – for example, a particular song has constraints on pitch (based on the tune) and rhythm, so it’s harder to vary pitch and rhythm when we sing and still maintain the song. But, it turns out that, similar to infant-directed speech, adults sing differently when the singing is directed to a baby! Studies ([1] and [2]) have shown that when we sing to babies (rather than singing the same song in their absence), we sing with a higher pitch (just like in infant-directed speech) and at a slower tempo (also like in infant-directed speech). And, even though mothers tend to sing more to their babies than fathers do, fathers show the same pattern of singing at a higher pitch and slower tempo, so there may be something intrinsic about the characteristics of infant-directed singing (see [2]).

And, it turns out that the way we change our singing when it’s directed to a real, live baby (and not just an empty room) is pretty robust – adult listeners are really good at identifying instances in which another adult was singing to a baby rather than to an empty room – that is, which songs were “infant-directed” ([1]). The adult listeners tended to say that the infant-directed songs were sung with a “more loving tone of voice” ([1]).

I tried to think of whether I sing differently when I’m singing to T than when I’m singing by myself, and it’s hard to say, mostly because I rarely sing if it’s not to T 🙂 But, I wouldn’t be surprised if I sound more loving when I’m singing to T than when I’m singing by myself!

 

On a totally unrelated topic, another interesting study that I came across looked at the effects of moms’ singing on their babies’ arousal levels, as measured by cortisol in saliva ([3]). They found that babies (averaging 6 months in age) who had relatively low cortisol levels initially (for example, if they weren’t paying attention to anything in particular and were sort of just chilling) had an increase in cortisol after their mom sang to them for 10 minutes – that is, they were more aroused after hearing their mom sing and more in a “playtime” state. Conversely, babies who had higher initial cortisol levels had a decrease in cortisol after their mom sang to them for 10 minutes – that is, they went from a more aroused state to a more chilled-out state.

This was interesting to read, because I’ve definitely found that my singing to T can have totally different effects on him, even singing the same song! Sometimes, he’ll get really excited and wound up and ready to play, and other times, he’ll totally relax and often, will get kind of drowsy.

References

[1] Trainor, L.J. “Infant Preferences for Infant-Directed Versus Noninfant-Directed Playsongs and Lullabies.” Infant Behavior and Development, 19, 83-92 (1996). (full text here)

[2] Trehub, S.E. et al. “Mothers’ and Fathers’ Singing to Infants.” Developmental Psychology, 33(3), 500-507 (1997). (full text here)

[3] Shenfield, T. et al. “Maternal Singing Modulates Infant Arousal.” Psychology of Music, 31(4), 365-375 (2003). (full text here)

What counts as a word?

I was talking to two friends recently who have babies who are close in age to T (11 months), and we talked about how you know when your baby said their first word. Before I had a baby, I never thought this would be so hard to determine! It’s pretty easy to pinpoint exactly when your baby rolled over or started crawling, and it seemed like first words would be equally easy to identify. Based on our experience, and talking to my friends, it seems that this isn’t so easy!

T has been babbling “mama” and “dada” for a few months now, but it doesn’t seem like he ascribes any meaning to those (he rarely says “mama,” but I would say his favorite utterance is “dada” – he’ll shout “dada” not only when he sees my husband, but also when he sees himself in the mirror, when he notices we’ve left the baby gate open, when he sees the vacuum cleaner, at random people walking on the sidewalk, or just randomly when he’s talking to himself – so I don’t think he’s attached “dada” to his dad). Based on that, I don’t think “mama” and “dada” count as words for T yet, since they seem more like he’s just playing around with making different sounds.

There are two things T says that seem closer to being “real” words. First, when we play with bubbles, T will pretty reliably shout “baba!” to get us to blow on the bubble wand. Secondly, there’s a book that T loves (“Dear Zoo”) that has a lion in it – T has been OBSESSED with the lion since the day he first saw it. For the past week, T has been starting to shout “LYYYYY” as soon as we turn to the lion page. Do these count as “real” words?!

I’m not sure! In the paper I reviewed here about infants’ transitions from babbling to words, the researchers considered a child’s utterance a word if: 1) what the child said matched the “real” word by at least 1 consonant and 1 vowel; 2) the utterance was communicative (e.g., directed at someone); and 3) it was clear the child was attempting a word (e.g., referring to a particular object, imitating the parent, or the parent recognized what the child was saying).

Based on these criteria, it seems like “baba” for “bubble” and “lyyy” for “lion” might be considered words – T will clearly say these things in specific contexts (when he wants bubbles or when he sees the lion on the page), and in those contexts, we are interacting with him. They also match the “real” word (“bubble” or “lion”) by the beginning consonant and the subsequent vowel.

But my hesitation in counting these as T’s first words is that he’ll also say “baba” and “lyyy” when talking to himself, or in contexts that have nothing to do with bubbles or lions. Also, he hasn’t generalized the concept of a lion to lions other than the one in this specific book – there’s another book that we read that features a lion (“Goodnight Gorilla”), and T has never said “lyyyy” when he sees that lion – so can it really count as a word if he doesn’t understand the concept of a lion? I don’t know!

One last little story about T and words! There was a book that T went nuts for that we checked out from the library – “The Naked Book.” He would start grinning, squealing, and kicking as soon as we pulled it out and he saw the cover. I think we renewed it from the library about 8 times before we finally returned it. Anyway, one time I brought it out, and, when T saw the cover, he shouted what I could have sworn was “NA-GUH!!!” – or, a pretty good approximation of “naked!” I couldn’t get him to repeat it over the next few days, though. I’m kind of relieved – I don’t know what I’d do if T’s first word had been “naked” (I guess lie in his baby book and say his first word was “mama”?!).

 

Article Review – “Musician Advantage for Speech-on-Speech Perception”

Today, I want to talk about a recently published article (full text here) that isn’t directly related to babies or hearing loss, but that I found really interesting and wanted to share! The article is “Musician Advantage for Speech-on-Speech Perception.” (Baskent, D. and Gaudrain, E. “Musician Advantage for Speech-on-Speech Perception.” J. Acoust. Soc. Am. 139, EL51. 2016).

Also, this paper got some great publicity in Scientific American!

Background

Anyone who’s tried to have a conversation in a crowded bar or restaurant knows that understanding what one person is saying against the background noise of other people talking is one of the hardest listening tasks (and one that people with hearing loss struggle with the most!). One of the challenges of understanding speech in the presence of other, competing speech is segregating the different talkers so that you can focus on the one person you want to hear (I talked a bit about differences between babies and adults in this type of task here). This problem is often called the “cocktail party problem”: being able to understand the one person you’re having a conversation with when you’re in a noisy, crowded environment full of other people talking.

The authors of this study hypothesized that musicians would be better able to understand speech in the presence of other, competing speech than non-musicians. If musicians ARE better at understanding speech-on-speech, this might be for a few different reasons. First, musicians are better at identifying subtle changes in pitch (something they do all the time to know if they are playing correctly and in tune!), and this might be really helpful for separating multiple speech streams. For example, they might be able to use pitch differences to group the words they hear as belonging to different voices. Secondly, over decades of practice, musicians hone their “listening skills” – so it might be that they are just better at shifting their auditory focus to what they want to hear than non-musicians are.

So, the researchers first wanted to see if the musicians had an advantage at all. They also wanted to know, if the musicians did have an advantage, whether it seemed to be related to their better ability to detect pitch changes, or whether it was more generally related to an increased ability to shift focus between different speech streams.

The Study

The researchers tested 18 musicians and 20 non-musicians on their ability to understand a sentence (the target) in the presence of one competing talker (the masker) – so the subjects had to understand one person talking while a second person was also talking. To qualify as a musician for this study, participants had to have 10+ years of musical training, to have begun that training before they were 7 years old, and to have received training within the past 3 years.

To probe whether musicians were more able to take advantage of subtle pitch changes than non-musicians, the researchers manipulated how different the target sentence was from the masker sentence in 2 ways:

  1. The fundamental frequency (F0) – the fundamental frequency (F0) indicates the voice pitch of a person’s speech. So, men generally have lower F0s than women, adults have lower F0s than children, etc.
  2. An estimated Vocal Tract Length (VTL) – The vocal tract is the cavity that filters the sounds you produce – in a very simplified view, it’s kind of like a tube that runs from the vibrating vocal folds at one end to your mouth at the other end, and it helps shape the different sounds you produce to make them sound like different vowels or consonants. The length of the vocal tract varies across people – children have shorter vocal tracts than adults, and men generally have longer vocal tracts than women. VTL doesn’t directly affect voice pitch (like F0 does), but it changes other frequencies in speech sounds (the formants – definitely getting a bit technical, but really interesting! There’s a rough worked example right after this list.). If you have two recordings of people talking with the same F0 but different VTLs, the pitch (how high or low their voices are) will be the same, but the quality and character of their voices will sound different – that’s the VTL at work!
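
To see why a different vocal tract length means different formants, here’s a rough back-of-the-envelope calculation (my own illustration, not from the paper): if you model the vocal tract as a simple uniform tube that’s closed at the vocal folds and open at the lips, its resonances (roughly, the formants of a neutral, schwa-like vowel) sit at odd multiples of c/(4L). The 17 cm and 14 cm lengths below are just example values I picked.

```python
# Back-of-the-envelope illustration (my own, not from the study): resonances of
# a uniform tube closed at one end, as a stand-in for vocal tract formants.
# Shortening the tube pushes every resonance up by the same factor.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def tube_resonances(vtl_cm, n=3):
    """First n resonant frequencies (Hz) of a tube of length vtl_cm closed at one end."""
    length_m = vtl_cm / 100.0
    return [(2 * k - 1) * SPEED_OF_SOUND / (4 * length_m) for k in range(1, n + 1)]

print(tube_resonances(17.0))  # longer (adult-like) tract -> roughly 504, 1513, 2522 Hz
print(tube_resonances(14.0))  # shorter tract             -> roughly 612, 1837, 3062 Hz
```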

The researchers used some fancy software to manipulate the F0 and VTL of the target sentences and the masker sentences so that, from trial to trial, the target and masker sentences were more or less similar to each other. They measured how well musicians and non-musicians were able to understand the target sentences based on how similar the target sentence was to the masker sentence in terms of these two parameters.
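
I don’t know what software the researchers actually used, so here’s a purely hypothetical sketch of how you could do one half of this kind of manipulation yourself – shifting F0 without touching the spectral envelope (the formant/VTL cues) – using the open-source WORLD vocoder via the pyworld package. The package choice, the file name, and the 4-semitone shift are all my own assumptions for illustration, not details from the study.

```python
# Hypothetical sketch (not the study's actual processing): raise F0 by 4 semitones
# while leaving the spectral envelope (formant/VTL cues) untouched, using the
# WORLD vocoder (pip install pyworld soundfile).
import numpy as np
import soundfile as sf
import pyworld as pw

x, fs = sf.read("sentence.wav")              # hypothetical mono recording
x = np.ascontiguousarray(x, dtype=np.float64)

f0, t = pw.harvest(x, fs)                    # estimate the F0 contour
sp = pw.cheaptrick(x, f0, t, fs)             # spectral envelope (carries formant info)
ap = pw.d4c(x, f0, t, fs)                    # aperiodicity

f0_shifted = f0 * 2 ** (4 / 12)              # +4 semitones of voice pitch
y = pw.synthesize(f0_shifted, sp, ap, fs)    # resynthesize with the original envelope

sf.write("sentence_f0_up.wav", y, fs)
```

Manipulating VTL is conceptually similar, except that instead of scaling F0 you stretch or compress the frequency axis of the spectral envelope, which moves the formants without changing the voice pitch.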

And here are the results!

FIG. 1A (reproduced below) shows the average percentage of each sentence that the subjects correctly repeated back, for various differences in VTL and F0 between the target and masker sentences. The leftmost panel shows the smallest difference in VTL between the target and masker (no VTL difference at all), and the rightmost panel shows the largest difference in VTL between them. Within a panel, going left to right increases the F0 difference between the target and masker sentences (so, within a panel, the leftmost points are where the target and masker sentences had the same average voice pitch as each other).

The data from the musicians is shown in purple and the data from the non-musicians is shown in green.

[Image: 1A.jpg]

FIG. 1A from Baskent and Gaudrain

 

As you can see, both musicians and non-musicians were better able to understand the target sentence when it was more different from the masker sentence – if you look at the leftmost points in the leftmost panel (the hardest condition, where there was no difference in F0 or VTL between the target and masker sentences), musicians had about 70% intelligibility and non-musicians had about 55% intelligibility. However, looking at the rightmost points in the rightmost panel (the easiest condition, where there was the largest difference in both F0 and VTL between the target and masker sentences), both musicians and non-musicians did really well – better than 90% intelligibility. This makes a lot of sense – it’s easier to understand what a (high-pitched) child is saying when their speech is competing with a deep-voiced man than to understand what one child is saying when their speech is competing with another child.

And, regardless of how different the target and masker sentences were, musicians performed better than non-musicians – and by a fairly substantial margin: you can see that the purple points are generally ~15-20 percentage points higher than the green points.

Recall that the researchers wanted to know if a musician advantage was due to the musicians’ ability to detect very subtle pitch differences. Based on this data, it seems like the musician advantage might not primarily be due to musicians’ better pitch perception – in FIG. 1A above, the purple (musician) and green (non-musician) lines are roughly parallel to each other, indicating that both groups derived a similar benefit from larger pitch differences (larger differences in F0). So, it might be that the musicians are better than the non-musicians at focusing their auditory attention – after all, musicians do this all the time when they practice; for example, a musician in an orchestra has to listen both to what their own section is playing and to what the other sections are playing.

My Reflections

I couldn’t help relating the results of this study to my personal experiences! I started playing the violin and the piano when I was little (~6 years old), and played through college, although I haven’t played regularly since I finished college (many years ago).

I’ve long suspected that I’m much better at understanding speech in noise than my husband, G. (This is just a gut feeling; we haven’t tested it rigorously.) For example, when G and I go out to eat, I’m usually much better at simultaneously listening to him while eavesdropping on conversations next to us. If G wants to eavesdrop, he has to stop talking to me and stop eating to focus his attention on what the people next to us are saying (while trying hard to look like he’s NOT paying attention to what they’re saying!). So, maybe it’s my childhood musical training that’s given me an edge here!


Consonant Confusions

One thing that always makes me laugh is hearing T confuse different consonants. Besides being really cute, his substitutions of one consonant for another make sense based on the place of articulation and manner of articulation. (I wrote more about consonants here!)

For example, T has sometimes been substituting “ba” for “ma” – both of these have a bilabial place of articulation (the lips are pressed together when you make the sound), but “ba” is a stop consonant (that’s the manner of articulation – you stop airflow with the lips before you release the lips to make the “ba” sound), whereas “ma” is a nasal consonant (air flows through the nose while you make the “mmm” sound). I’ll often ask T to “say mama!” and he’ll grin and say “baba!”

One other SUPER cute thing he’s been doing this week is substituting kisses for “ba” and “ma” – if I ask him to say either “bubbles” or “mama,” he’ll sometimes blow a kiss! This was SO interesting to me, because while blowing a kiss isn’t a proper consonant, it is a bilabial action, just like “ba” and “ma”!