“No” and “Mama”

I have been trying to teach T to say “mama” for months now, but despite my efforts, he would often look at me after I said “mama,” laugh, and yell “DADA” – until now!

This past weekend, T started saying “na” and “nuh nuh,” and once he started, that was pretty much all he said for two straight days. And then, within a few days of starting to say “na,” T started saying “ma” and “mama”! He doesn’t yet associate me with “mama,” but we’re working on that part now! Also, earlier this week, T’s speech therapist mentioned that T seems to have associated the concept of negation with the “nuh nuh” sound, so it’s only a matter of time before he starts saying “no!”

I thought it was really interesting that T started saying “ma” once “na” clicked with him, and I started thinking about this in the context of the order in which T has learned different consonant sounds and his preferences for producing them.

There are 3 main features that distinguish different consonants (see the little feature-table sketch after this list):

  1. Place of articulation – this feature indicates the part of the mouth that’s involved in obstructing airflow to produce the consonant sound. Examples are: “bilabial” (with the lips pressed together, like in /b/, /m/, or /p/), “alveolar” (with the tongue tip touching the gum ridge just behind the top teeth, like in /d/, /n/, or /t/), and “velar” (with the back of the tongue pressed against the soft palate, like in /g/ and /k/).
  2. Manner of articulation – this feature indicates the configuration and interaction of the parts of the mouth involved in the obstruction. Examples are “stops” (where airflow stops completely during the articulation, like in /b/, /p/, /d/, /t/, /g/, and /k/), “nasals” (where air flows through the nose during articulation, like in /n/ and /m/), “fricatives” (where air flows through a small channel during articulation to make a “hissy” sound, like in /s/, /sh/, /f/, and /zh/), and “approximants” (where there is very little obstruction in airflow, like in /l/, /w/, and /y/).
  3. Voicing – this feature indicates whether (and when) the vocal folds vibrate relative to the release of the airflow obstruction. For example, /b/ and /p/ are both bilabial consonants (produced with the lips pressed together), but they differ in voicing – /b/ is “voiced”, whereas /p/ is “voiceless” – you can hear the difference in how /p/ has a kind of “pop” or attack when you produce a “pa” sound.
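
Since these three features are just labels attached to each sound, you can think of the whole system as a little lookup table. Here’s a minimal sketch in Python – the feature values are standard phonetics, but the table and the describe helper are just my own illustration:

```python
# A small feature table for a few English consonants:
# sound -> (place of articulation, manner of articulation, voiced?)
CONSONANTS = {
    "b": ("bilabial", "stop", True),
    "p": ("bilabial", "stop", False),
    "m": ("bilabial", "nasal", True),
    "d": ("alveolar", "stop", True),
    "t": ("alveolar", "stop", False),
    "n": ("alveolar", "nasal", True),
    "g": ("velar", "stop", True),
    "k": ("velar", "stop", False),
    "s": ("alveolar", "fricative", False),
    "l": ("alveolar", "approximant", True),
}

def describe(sound: str) -> str:
    """Summarize a consonant in terms of the three features."""
    place, manner, voiced = CONSONANTS[sound]
    voicing = "voiced" if voiced else "voiceless"
    return f"/{sound}/ is a {voicing} {place} {manner}"

# /m/ and /n/ share a manner (nasal) but differ in place, which is
# exactly the difference between "mama" and "nana":
print(describe("m"))  # /m/ is a voiced bilabial nasal
print(describe("n"))  # /n/ is a voiced alveolar nasal
```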

I started thinking about how T’s preferences for different consonants and the order in which he started producing them align with these different features.

Here’s a chart showing when T first started producing different consonant sounds (purely based on my recollection – I never wrote any of this down!)

productiondate.jpg

When T first started producing different consonant sounds, organized by place and manner of articulation. For stop consonants, the top row indicates voiced consonants and the bottom row indicates voiceless consonants. The color of the consonant indicates when T first began producing the consonant, as indicated by the legend on the right.

And here’s a chart showing how frequently T says different consonants (regardless of when he first learned to say them), again based purely on my informal impressions:

productionfreq.jpg

Frequency with which T says different consonant sounds.  The color of the consonant indicates how frequently T says the consonant, as indicated by the legend on the right.

I think that T’s “favorite” consonants (based on how frequently he says them) tend to be those with an alveolar articulation (/d/, /n/, and /l/ in particular – I think you can kind of see this in the middle column of the second chart). Looking at the first chart (middle row), you can see that he started producing nasal sounds only recently (these are /m/ and /n/). It kind of makes sense to me, in hindsight, that of the nasal sounds, he’d start producing /n/ first, since he seems to prefer the alveolar place of articulation over the bilabial articulation. It kind of seems like T needed a little time to get the hang of the nasal manner of articulation, and that once he started with nasal sounds, he started with the place of articulation that comes most naturally for him (the alveolar articulation) before trying an articulation that’s more difficult for him (the bilabial articulation).

The order and preference of T’s consonant production development has been especially interesting to me in light of articles like this one (which I read before I had T!) that say that babies tend to say sounds that sound like “mama” first because it’s one of the easiest sounds. Clearly, T is an independent thinker!

 

Article Review – “Infants’ listening in multitalker environments: Effect of the number of background talkers”

This week, I’m going to talk about this study (full text available!) looking at infants’ ability to listen in noise.  (Newman, R.S. “Infants’ listening in multitalker environments: Effect of the number of background talkers.” Attention, Perception, & Psychophysics. 71(4), 822-836, 2009).

Background

As anyone who has tried to have a conversation in a noisy bar or restaurant can tell you, understanding speech in noisy environments, particularly when the noise is other people talking, is REALLY difficult.

Adults tend to do better at listening to a target talker when the competing noise is just one other talker than when it is several talkers all at once (like the din of a crowded restaurant). This difference could be for a couple of reasons. First, when the competing noise is just a single talker, adults may be able to recognize words or a topic of the competing talker, and use that context to selectively switch their attention away from the competing talker and toward the target talker. Second, speech naturally has pauses (like between syllables, phrases, or sentences), and adults may use pauses in a competing talker’s stream of speech to home in on what a target talker is saying – with multiple talkers, the pauses tend to average out so that there aren’t really any pauses (just a steady “roar”), which might make listening in the presence of multiple talkers more challenging for adults.

In this study, the researchers wanted to see if this is true for infants, as well. Note that the infants in this study were normally-hearing, and I’m not sure how the results would translate to infants with hearing loss.

The Study

The researchers had infants (average age of about 5 months) listen to a target stream of speech in the presence of competing speech. The target stream of speech consisted of a person saying a name, which could either be the infant’s name, a name other than the infant’s name that was similar (a “stress-matched foil”), or a name other than the infant’s name that wasn’t particularly similar (a “non-stress-matched foil”). The competing speech was either a single voice, or a composite of 9 voices all talking at the same time.

The researchers measured how long the infants listened to each name in the presence of the competing speech, the idea being that infants would listen longer to someone saying their own name if they recognized it. So, the researchers compared how long the infants listened to their own name in the single-voice noise condition versus the multi-voice noise condition, to see whether they were better able to recognize their name in one condition than the other.

And now, on to the results! FIG. 1 shows how long infants listened to their name compared to the other names in both a multi-voice competing speech condition (left-most panel) and a single-voice competing speech condition (middle panel).

fig1_newman.jpg

Interestingly, the infants listened significantly longer to their own name compared to other names in the nine-voice noise condition, but there was no difference in the single-voice noise condition. This suggests that infants had more trouble understanding speech (in this case, recognizing their name) in the single-voice noise condition, which is the opposite of the pattern seen in adults!

The researchers hypothesized that the infants might have had more trouble in the single-voice noise condition because they might have recognized the single-voice as speech and found it interesting, or possibly, because they recognized some of the words in the single-voice competing speech and therefore, focused on it. This is different than what an adult might do in the same situation – if an adult is trying to focus on one talker, but there is a single competing talker nearby, they might recognize words from each conversation and realize that the topics of each conversation are different. For example, the first talker might be saying words like “breakfast,” “pancakes,” and “eggs,” and the second talker might be saying words like “rain,” “umbrella,” and “soaked” – an adult listener might be able to use these words to identify topics of each conversation and they could then target their attention on the conversation they’re interested in (this all happens subconsciously, of course!). On the other hand, a baby might recognize a few words in each conversation, but might not have the vocabulary to group recognized words into topics, making the two conversations harder to disentangle. In the case of a multi-talker competing background noise, neither the adult nor the baby would recognize individual words in the background noise – this might be detrimental to the adult (who can’t segregate the noise from the target speech based on conversation topic or gaps in the noise), but might be helpful to the baby (who isn’t distracted by a competing talker that seems like they might be saying something interesting).

To try and address the issue of why the single-talker competing speech condition was so difficult for the infants, the researchers repeated this task, but using single-talker speech played BACKWARDS! In this case, the competing speech would have some acoustic properties similar to single-talker speech played forwards (e.g., gaps in the speech, changes in loudness, changes in pitch, etc.), but would be different in that the infants wouldn’t be able to recognize any words.

The results of this experiment are shown in FIG. 1 (above) in the right-most panel – as you can see, there was no difference in how long the infants listened to their own names versus other names in the single-talker speech played backwards condition. This indicates that the infants had a hard time recognizing speech in the presence of the single-talker backwards noise. This in turn suggests that the infants’ difficulty with understanding speech in the presence of a single competing talker is not due to recognizing some words in the competing speech and finding that distracting, but rather due to other characteristics of competing single-talker speech.

My Reflections

I thought it was so interesting that adults find a multi-talker background noise (like a restaurant) to be more difficult than a single competing talker but that infants are the opposite. I often extrapolate my experiences to T – if we are in a crowded restaurant, I assume he must have a harder time understanding what we’re saying than if there are just one or two people talking nearby, because *I* find it harder to listen in the crowded restaurant. It never occurred to me that it might be exactly the opposite for T!

This article also highlighted to me how much cognitive development is required for babies to mature to the point where they can listen to speech in noisy environments the way adults do. For example, they need to learn enough vocabulary to be able to group words in a conversation into topics, learn how to listen in the gaps of competing speech (like between sentences or phrases) to focus in on the target speech, and all sorts of other things – and all of this takes time and experience! I think this is especially important to remember because infants often spend a lot of their waking hours in environments that are very noisy – like daycare!

Additionally, this is yet another study that made me think about the importance of hearing aids for children with hearing loss – this study was done with normally-hearing infants, and they had a hard time understanding speech in noise – this difficulty must be so much worse for infants with hearing loss!

 

Receptive Language Breakthrough!

In just the past few days, T seems to have learned the word “kiss”! Lately, when you say “give me a kiss!” he will look at you and smack his lips (yes, this is just as cute as it sounds :)). And, he seems to have generalized the word to contexts other than me asking for one; there are two books we read frequently that have the word “kiss” in them – when we read that part, T will usually smack his lips at just a mere mention of “kiss”!

This is one of the first times it’s seemed like T has consistently shown that he understands a word, and has generalized the meaning to lots of different contexts – so I’m really excited about this!

Article Review – “Vocalizations of Infants with Hearing Loss Compared with Infants with Normal Hearing: Part II – Transition to Words”

Last week, I talked about Part 1 of this study, which compared the initial, babbling stage of infant language development for infants with hearing loss and normally-hearing infants. This week, I want to talk about Part 2 of the study, which looked at how babies, as they got older, transitioned from babbling to producing words. Here’s a link to a full PDF of the study.

Background

Part 1 of this study found that infants with hearing loss (HL) generally are delayed relative to normally hearing (NH) infants in the babbling stage of language development. HL infants took longer to begin babbling, and, once they began babbling, were slower to acquire particular types of consonants, such as fricatives (“sss,” “shhh,” “f,” etc.). The researchers wanted to then look at older babies to see whether HL infants were also delayed in transitioning from babbling to producing words relative to NH infants.

The Study

The infants included in this study were the same as those in Part 1 – to recap, there were 21 NH infants and 12 HL infants. The HL infants varied a lot in degree of hearing loss, and three received cochlear implants (CIs) during the course of the study. For all infants, language productions were monitored during play sessions with a caregiver (typically the infant’s mother), and these sessions were generally conducted every 6 weeks. In Part 2 of the study, data from sessions when the infants were between 10 and 36 months old were used.

Let’s get to the results!

The researchers analyzed the infants’ language productions during the sessions in 2 broad categories: the proportion of different utterance types at different ages and the structural characteristics of words produced at 24 months.

To look at the proportion of different utterance types at different ages, the researchers coded each utterance produced by an infant during a session as belonging to one of 3 utterance types:

  1. Non-communicative – these were speechlike sounds but were more vocal play than attempts to communicate. Examples include babbling that wasn’t directed to an adult.
  2. Unintelligible communicative attempts – these were vocalizations that were a) directed to an adult and b) served a communicative purpose, such as getting the adult to do something, seeking attention, etc. Some of these might have been attempts by the infant to say a particular word, but weren’t recognized by the caregiver or the researchers as a word.
  3. Words – the researchers pointed out that it’s tricky to decide what constitutes a word. For this study, utterances were classified as words if: 1) at least one vowel and consonant in the word attempted by the infant matched the “real” word (e.g., “baba” for “bottle”), 2) the utterance was a communicative attempt (see #2 above), and 3) it was clear that the child was attempting to say a word, for example, that the infant was imitating the parent or that the parent recognized the word and repeated it.

FIG. 1 of Moeller, et al. (reproduced below) shows the results of the analysis of utterance type for NH and HL infants at 16 months old and 24 months old.

 

fig1.jpg

FIG. 1 of Moeller, et al. – Proportions of different utterance types of NH and HL infants at 16 months and 24 months.

As you can see in FIG. 1, at a given age, the pattern of the proportion of different utterance types was different for NH infants compared to HL infants. For example, at 16 months, the NH infants were producing more unintelligible communicative attempts as well as more words compared to the HL infants. As another example, at 24 months, a greater fraction of utterances for the NH infants were words compared to the HL infants. Additionally, while both the NH infants and the HL infants produced more words at 24 months compared to 16 months, the researchers found that the magnitude of the increase was larger for NH infants. Interestingly, the researchers found that the pattern of utterance types for the HL infants at 24 months was similar to that of the NH infants at 16 months (I highlighted these in the red boxes in FIG. 1 above), indicating that the HL infants might have a similar pattern of improvement over time, but delayed.

To look at the structure of word attempts by the infants at 24 months, the researchers randomly selected 25 words from each child’s transcripts during the experimental session and compared each word attempt with the actual target word to assess both the complexity of the word attempt and how accurate the attempt was. They computed 7 different metrics (a toy sketch of two of these calculations follows the list):

  1. Mean syllable structure level (MSSL) – this metric was used in Part 1, as well, and I described this in more detail here.  As a quick recap, words with only vowels were scored with 1 point, words with a single consonant type were scored with 2 points (e.g., “ba” or “baba”) and words with two or more consonant types were scored with 3 points (e.g., “bada” or “dago”).
  2. Percentage of vowels correct – this indicates the percentage of the time that the infant’s vowel productions in their word productions matched the “correct” vowels in the corresponding word. For example, if the target word was “mama,” the child would get 100% for saying “gaga” or “baba” but 0% for saying “momo.”
  3. Percentage of consonants correct (PCC) – this is the same idea as above, but with consonants. As an example, if the target word was “shoe,” the child would get 100% for “shoo,” “shee,” “shaw,” etc., but 0% for “too.”
  4. Phonological mean length of utterance (PMLU) – This measure is intended to identify children who attempt longer, more complicated words but produce them less accurately as compared to children who attempt shorter, simpler words but produce them more accurately. To calculate this, the child received 1 point for each vowel and consonant produced, and an additional point for each correctly produced consonant. For example, if the target word was “cat,” and the child produced “cat,” they would receive 3 points for producing “c,” “a,” and “t,” and an additional 2 points for correctly producing the “c” and “t,” for a total of 5 points. However, if the child had instead produced “da” for “cat,” they’d receive only 2 points – one each for the “d” and “a,” but no points for accuracy. In this way, the PMLU reflects both the accuracy of the word production and the length of the word.
  5. Proportion of whole word proximity (PWWP) – This measure is intended to give an overall reflection of how accurately the child produced a particular word. It is calculated by dividing the PMLU of the word attempt by the PMLU of the target word. As described above, “cat” produced correctly would have a PMLU of 5, and “cat” produced as “da” would receive a PMLU of 2. Therefore, if a child produced “da” for “cat,” the corresponding PWWP would be 2/5, or 0.4.
  6. Word shape match – This measure indicates how accurate a child’s production of a word was in terms of shape/number of syllables. For example, if the target word was “cookie,” the target shape would be “consonant-vowel-consonant-vowel” (CVCV). If, instead of producing a word that had a CVCV shape, the child produced one with just a CV shape (e.g., “di,” “koo,” “da,” etc.), this would not be a match.
  7. Words with final consonants – Word productions were given points for this metric if a target word ended with a consonant, and the child’s production of the word also ended with a consonant, even if the consonant wasn’t totally accurate. So, for example, if the target word was “goat,” the child would get points for producing “goat,” “got,” “god,” “goad,” etc.
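
To make the PMLU and PWWP arithmetic concrete, here’s a toy Python sketch of metrics 4 and 5. Big caveats: real scoring works on phonetic transcriptions, whereas this treats each letter as one sound and aligns consonants by position – fine for “cat” vs. “da,” but not for real transcripts:

```python
VOWELS = set("aeiou")

def pmlu(attempt: str, target: str) -> int:
    """1 point per sound produced, plus 1 per consonant matching the target."""
    points = len(attempt)  # one point for each vowel and consonant produced
    target_consonants = [ch for ch in target if ch not in VOWELS]
    attempt_consonants = [ch for ch in attempt if ch not in VOWELS]
    # Extra point for each correctly produced consonant (position-matched
    # here, a simplification of how a transcriber would align the sounds).
    points += sum(a == t for a, t in zip(attempt_consonants, target_consonants))
    return points

def pwwp(attempt: str, target: str) -> float:
    """Whole-word proximity: PMLU of the attempt divided by PMLU of the target."""
    return pmlu(attempt, target) / pmlu(target, target)

print(pmlu("cat", "cat"))  # 5  (3 sounds + 2 correct consonants)
print(pmlu("da", "cat"))   # 2  (2 sounds, no correct consonants)
print(pwwp("da", "cat"))   # 0.4
```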

The results of the structural analysis of the children’s word productions are shown in Table 1 of Moeller, et al. (reproduced below).

table1.jpg

Table 1 of Moeller, et al. – comparison of word structure for NH and HL children at 24 months old.

This table shows that, for every measure of word structure (the rows in the table), the NH children performed better than the HL children. The difference between the NH and HL children was statistically significant for every measure (this is indicated by the crosses next to the scores for the NH children in each row).

One of the things that I really like about this table is that it indicates the effect size for each metric of word structure (in the right-most column of the table). The effect size tells you the strength of the finding. For the measure of effect size used in this paper (called “Cohen’s d”), an effect size of around 0.2-0.3 is considered a small effect, an effect size of around 0.5 is considered a medium effect, and an effect size of more than 0.8 is considered a large effect. So, as you can see from this table, for every metric of word structure, the researchers found that not only was there a statistically significant difference in performance between the NH children and the HL children, but the size of this difference was large.
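
For anyone curious about the arithmetic: in its usual two-sample form, Cohen’s d is just the difference between the two group means divided by the pooled standard deviation. Here’s a minimal sketch with made-up scores (the paper only reports the resulting d values):

```python
from statistics import mean, stdev

def cohens_d(group1: list[float], group2: list[float]) -> float:
    """Difference between group means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

nh_scores = [0.82, 0.75, 0.90, 0.68, 0.77]  # hypothetical NH scores
hl_scores = [0.55, 0.48, 0.66, 0.52, 0.60]  # hypothetical HL scores
print(round(cohens_d(nh_scores, hl_scores), 2))  # well above 0.8: a "large" effect
```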

So, overall, the data in Table 1 indicates that compared to age-matched NH children, HL children were producing words that were less complex (contained fewer different types of consonants, were less likely to end in a consonant, and were shorter) and that tended to be less accurate representations of the target word (an incorrect number of syllables or producing an incorrect vowel or consonant).

The researchers also looked at the number of words each child could produce as a function of age. FIG. 4 of Moeller, et al. (reproduced below) shows this data (the top two panels of FIG. 4 show data from this study; the bottom two panels show data from two other studies for comparison). The number of words was determined by asking the child’s caregiver to fill out an evaluation at home at each time point.

fig4.jpg

FIG. 4 of Moeller, et al. – The number of words produced by HL children (top left panel) and NH children (top right panel) as a function of age.

In FIG. 4, you can see that the curves for the NH children (top right panel) are both steeper and shifted to the left compared to the curves for the HL children (top left panel). This indicates that the NH children began producing words at a younger age relative to the HL children, and that, once they began producing words, their vocabularies expanded at a faster rate. The researchers noted that there was considerable variability in the data (for example, you can see that some of the NH children had much shallower curves than others, indicating that they were acquiring words more slowly than their peers), but that the individual data collected in this study “suggest a much slower rate of early vocabulary development compared with NH children.”  (Moeller, et al. p. 636).

One cool thing – in the panel for the HL children (upper left), the curves with unfilled symbols indicate children with CIs – one of the best performing children in this group had a CI! I thought this was pretty remarkable!

Since there was so much variability within the group of HL children regarding degree of hearing loss, the researchers weren’t really able to say much about how degree of hearing loss affected language production in this study.

My Reflections

T has been babbling up a storm for a few months now, but this paper made me think about the different contexts of his babbling (e.g., non-communicative, unintelligible communicative, and words/word attempts). Of course, at this age, T’s babbling is essentially entirely non-communicative or unintelligible communicative (and no words/word attempts). Reflecting on these distinctions, I think that T tends to babble in the non-communicative category primarily when he’s relaxing – like riding in the stroller or in his crib at night (or the wee hours of the morning) – at these times, he’ll go on a long, uninterrupted soliloquy, complete with big variations in vocal inflection. T’s babbles that fall in the unintelligible communicative category seem to happen when we’re playing interactively with him (to tell us to do something again), when he wants something (usually food), or when he’s excited about something (he’ll often shout “DAY-DA!” while looking at us when he’s excited – usually when we open the refrigerator door). I think the distinction in types of communication based on activity/mood makes sense – if non-communicative babbling is a form of vocal play (that is, allowing T to play with making different sounds), it makes sense that this would come most naturally to him when he’s just chilling.

At 10 months, T is on the young end compared to the children studied here. However, I think he’s allllllmost on the cusp of his first word. At least a couple of times, it has seemed like he was fairly consistently saying “a-ga” for “again” (to ask us to do something again) and saying “bah-bol” for “bubble” (to ask us to blow more bubbles). I’m not sure these are consistent enough to count as his first words (for example, he’ll say “a-ga” at other times too), but it seems like he might be close. We try to really reinforce when we think he’s saying something that might have meaning – for example, if he says “dada” and it seems plausible that he’s saying something to or about his dad, we’ll make a big production of saying the word “dad.” We do the same thing for “again” and “bubble,” and I think this repetition is helping him connect the sound of the word to the concept/object.

One thing this study made me excited about – I didn’t realize how rapidly vocabulary grows once children start talking! I get the feeling that T is thinking some pretty fun thoughts (like when he starts grinning when he sees the trash can and races over to look inside), and I can’t wait to hear what he’s thinking once he starts talking.

 

Speech Therapy Session – March 14, 2016

Apologies for the long gap in posting! It’s been busy over here at Casa TAG (T, A, and my husband, G :))!

T had a fun speech therapy session yesterday! We mostly focused on teaching T to respond to a few different commands. Here’s what we worked on:

  1. Responding to “wait” and “go!” – T looooves banging on drums. Yesterday, T’s speech therapist showed us how to encourage T to start banging, and then we’d say “WAIT!” Once we told him to wait, if he kept banging, we’d move the drum slightly out of reach or cover it up. Then, after a pause, we’d say “Go!” and let him start banging again. Although at first T was not happy about the “wait!” part of this game, I think by the end of the session, he may have been starting to get the idea – by the end, when we said “wait!” he’d stop banging on his own and look up at us!
  2. Responding to “come!” – We tried to teach T to come toward us when we said “come” or “come here.” T’s speech therapist showed us how to do this by saying “come here!” while trying to entice him with a fun toy. We tried with a bunch of different toys, but T didn’t seem to find any of them compelling enough to come over. However, we repeated this while holding a shoe, T’s current obsession, and he did race over for that.

Although we are just getting started with this, I think teaching T to understand and respond to different commands could teach him new words, and could be really useful to us!

Audiology Appointment – March 2, 2016

T (9 months) had an audiology appointment yesterday; it’s been about 6 weeks since his last appointment, and T’s audiologist wanted to try to continue to refine his behavioral audiogram. Like last time, she used Visually Reinforced Audiometry (VRA) to measure his behavioral responses to sounds at different frequencies. Since T has started pulling stuff out of his ears (like his hearing aids!), we put his pilot’s cap  on to keep him from pulling the microphone wires out during the test – this worked great! T was in a bit of a feisty mood, which made testing difficult, but T’s audiologist was able to get some good responses before T called it quits. As before, the audiogram results continue to confirm a mild hearing loss.

One other thing T’s audiologist did today was measure Otoacoustic Emissions (OAEs). OAEs are really cool! The inner ear actually produces sounds, which is what the OAE test measures. During the test, a small probe containing both a tiny speaker and a microphone is inserted into the ear – the speaker plays a sound, and the microphone then records the sounds produced by the ear in response. People with normal hearing have OAEs, but typically, people with more than a mild hearing loss won’t have OAEs. The OAE test is the one most often used in newborn hearing screenings – T had OAEs measured a few times when he was very little (<3 months old), and at that time, T’s audiologist was able to detect OAEs at a few frequencies (indicating that T has no more than a mild loss). T’s audiologist wanted to repeat the test today to compare to the results from 6 months ago. Unfortunately, T was a bit restless during the test, and the recordings were really noisy, so I’m not sure we learned anything new from the OAEs recorded today. OAEs are a really interesting phenomenon that I don’t really know too much about, but I’m hoping to learn more and share what I learn with you!

 

 

Article Review – “Vocalizations of Infants With Hearing Loss Compared with Infants with Normal Hearing – Part 1: Phonetic Development”

As T’s (9 months) babbling has taken off, I’ve started to become interested in the order in which infants tend to acquire different speech sounds as well as how this might differ for infants with hearing loss vs. normally-hearing infants. I started doing a little Googling, and found this study (link to Abstract only) that compares vocalizations of infants with hearing loss to those of infants with normal hearing. (Moeller, M.P., Hoover, B., Putman, C., Arbataitis, K., Bohnenkamp, G., Peterson, B., Wood, S., Lewis, D., Pittman, A., and Stelmachowicz, P. “Vocalizations of Infants with Hearing Loss Compared with Infants with Normal Hearing: Part 1 – Phonetic Development.” Ear & Hearing, Vol. 28, No. 5, 605-627. 2007).

This study actually has two parts, the first looking at babbling with younger infants (up to 2 years old), and the second looking at older children and how they transition from babbling to acquiring words. This week, I’ll talk about part 1, and will hopefully write about part 2 next week.

Background

It’s well-known that infants with hearing loss develop spoken vocabulary later than normally-hearing children. However, a lot of language development happens before children start speaking words. For example, infants typically start off making vowel sounds, and then progress to babbling (like “bababa,” “dadada,” etc.). Less is known about how hearing loss affects this earlier stage of language development.

The Study

The researchers enrolled a group of normally-hearing (NH) infants and a group of infants identified as having hearing loss (HL). This was a longitudinal study, so each infant was followed over time – their spoken language was measured in an experimental session conducted every 1.5-2 months, from when the study began (generally when infants were 4 months old) until they were 36 months old. There were 21 NH infants, and 12 HL infants. All of the infants with HL had assistive technology, typically hearing aids, although 3 received cochlear implants (CIs) during the course of the study. The degree of hearing loss varied a lot for the HL group; on average, the HL infants had a 67 dB HL Better Ear Pure Tone Average (BEPTA – meaning that the average of the infant’s better-ear audiogram thresholds at 500, 1000, and 2000 Hz was 67 dB HL). All of the HL infants were involved in some form of early intervention.
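
The BEPTA calculation itself is simple averaging – here’s a sketch with hypothetical thresholds (the individual audiograms aren’t reported in the paper, so these numbers are made up):

```python
def pure_tone_average(thresholds_db_hl: dict[int, float]) -> float:
    """Average of the audiogram thresholds at 500, 1000, and 2000 Hz."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000)) / 3

# Hypothetical audiograms (frequency in Hz -> threshold in dB HL) for one infant:
left_ear = {500: 70, 1000: 75, 2000: 80}
right_ear = {500: 60, 1000: 65, 2000: 70}

# "Better ear" = the ear with the lower (i.e., better) average threshold.
bepta = min(pure_tone_average(left_ear), pure_tone_average(right_ear))
print(bepta)  # 65.0 dB HL (the right ear is the better ear here)
```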

To collect the data, at each session, each infant played with a caregiver (a parent or guardian) while their interaction was taped and then transcribed. The researchers transcribed each vocalization by the infant – for example, identifying a particular vowel or consonant, or noting whether a sound was a grunt, cry, or squeal.

There were 3 main categories of metrics the researchers looked at, which were:

  1. Volubility – this indicates how much the infants vocalized over a session – were they pretty chatty during the session, or fairly quiet?
  2. Age at which the infant began babbling
  3. Speech complexity – here, the researchers looked at what types of consonants the infants were producing at a particular age, as well as whether they were able to string different types of sounds together to make more complex sounds.

Let’s get to the results!

Volubility

To measure volubility, for each experimental session, the researchers calculated the infant’s vocalizations per minute. Vocalizations could be any sounds other than stuff like grunts, screams, cries, etc. So, an infant with a higher volubility score would have vocalized more during the session compared to an infant with a lower volubility score.

FIG. 1 of the article (shown below) shows the volubility results for both NH infants (left) and HL infants (right). In the figure, volubility scores are shown for infants at 3 different ages – 8.5 months, 10 months, and 12 months. As you can see in FIG. 1, the volubility scores for HL infants were really similar to those of NH infants, and the researchers found no significant difference between the two groups. I thought it was pretty interesting that, at each age, the HL infants seemed to be vocalizing as much as the NH infants!

fig1.jpg

FIG. 1 of Moeller, et al. – Volubility of NH and HL infants as a function of age

Age of Babbling Onset

The researchers then quantified the age at which the infants began babbling. Although we (or at least, I!) tend to think of babbling as any infant pre-word “talking,” babbling technically requires a consonant-vowel (CV) pairing – examples include “ba,” “da,” “ga,” etc. CV pairs can also be chained together, either the same consonant and vowel (“baba”) or different consonants and/or vowels (“babo,” “bada,” etc.)

In order to set a criterion for the age of babbling onset, the researchers identified the age at which the proportion of babbles out of the total vocal utterances exceeded 0.2 – so this was the age at which, during an experimental session, more than 20% of the infant’s vocalizations were consonant-vowel pairings.
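
Concretely, the criterion works like this toy sketch (the session data here is made up for illustration):

```python
# Each session is (age in months, number of babbles, total vocal utterances).
sessions = [(7, 4, 60), (8.5, 9, 55), (10, 15, 50), (12, 30, 70)]

def babbling_onset_age(sessions, threshold=0.2):
    """Return the earliest session age where babbles exceed 20% of utterances."""
    for age, babbles, total in sessions:
        if babbles / total > threshold:
            return age
    return None  # criterion never met in the observed sessions

print(babbling_onset_age(sessions))  # 10 (since 15/50 = 0.3 > 0.2)
```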

FIG. 2 of the article (shown below) shows, at each age, the proportion of infants in the NH group (black bars) and HL group (white bars) who had started babbling (defined as having more than 20% of their vocalizations during the session include a CV-pairing). As you can see, NH infants tended to begin babbling much earlier than HL infants – it took roughly 6ish additional months for the HL group to reach the milestone of having 50% of the infants in the group babbling compared to the NH group. The researchers also stated that, for the HL group, there was a correlation between the age at which the infants first received hearing aids and the age at which they began babbling, although this wasn’t statistically significant (possibly because there were only 12 infants in the group, and they varied a lot in degree of hearing loss).

fig2.jpg

FIG. 2 of Moeller et al. – Proportion of infants who had begun babbling by age

Babble Complexity

The researchers quantified the complexity of the sounds the infants were producing by scoring each utterance as follows (a rough code sketch of these rules follows the list):

  1. 1 point for utterances that were vowels or primarily vowels – (like “ahhh,” “eeee,” “waaa,” etc.) – this was labeled SSSL1
  2. 2 points for utterances that had 1 type of consonant – (like “ba,” “da,” “baba,” etc.) – this was labeled SSSL2
  3. 3 points for utterances that had 2 or more types of consonants – (like “bada,” “gaba,” “gabo,” etc.) – this was labeled SSSL3
  4. 4 points for utterances with consonant blends, like “spun” – this was labeled SSSL4
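
Here’s a rough sketch of those scoring rules in code – with the caveat that it treats each letter as one sound and uses a crude consonant test, so something like “waaa” (which the researchers would count as primarily vowels) would be mis-scored:

```python
VOWELS = set("aeiou")

def sssl_score(utterance: str) -> int:
    """Score an utterance 1-4 using the complexity categories described above."""
    consonants = [ch for ch in utterance if ch not in VOWELS]
    # Consonant blend: two or more consonants in a row (e.g., the "sp" in "spun").
    has_blend = any(
        utterance[i] not in VOWELS and utterance[i + 1] not in VOWELS
        for i in range(len(utterance) - 1)
    )
    if has_blend:
        return 4  # SSSL4: consonant blends
    if len(set(consonants)) >= 2:
        return 3  # SSSL3: two or more consonant types
    if consonants:
        return 2  # SSSL2: one consonant type
    return 1  # SSSL1: vowels only

for u in ["aaa", "baba", "bada", "spun"]:
    print(u, sssl_score(u))  # aaa 1, baba 2, bada 3, spun 4
```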

FIG. 4 of Moeller et al. shows the proportion of utterances that belonged to each point category for both NH infants (top) and HL infants (bottom).

fig4.jpg

Adapted from FIG. 4 of Moeller et al. – proportion of utterances in each babble complexity category as a function of age

As you can see, for both NH infants and HL infants, vocalizations by the youngest babies (10-12 months) were dominated by the simplest type of vocalization – primarily vowels. Both groups tended to increase the proportion of more complex vocalizations – those containing consonants and multiple types of consonants – with age. One really interesting thing you can see in the above figure is that HL infants at 18-20 months had a babble complexity pattern that was similar to the NH infants at 10-12 months (I highlighted these in the red boxes above) – this is a pretty substantial delay. However, by the time the HL infants were 22-24 months old, the pattern resembles that of the NH infants at 18-20 months (highlighted in the green boxes above), indicating that the HL infants were closing the gap! This could be the result of amplification for the HL infants, early intervention services, as well as the fact that three of the HL infants received cochlear implants during this time period.

Phonetic Inventory

The researchers then looked at whether NH infants and HL infants differed in the rates at which they started saying vowels and different types of consonants. FIG. 5 of Moeller et al. (reproduced below) shows the infants’ progression in acquiring both vowels and consonants broken into different classes based on place of articulation. A consonant’s place of articulation indicates what part of the mouth is involved in obstructing the vocal tract – I wrote more about it here. Here’s a quick overview of the different classes of consonants shown in FIG. 5 below:

  1. bilabials – these are consonants produced with the lips pressed together (e.g., p, b, m, and w).
  2. labiodentals & interdentals – labiodentals are produced with the lower lip between the teeth (e.g., f and v). Interdentals are produced with the tongue between the teeth (e.g., th).
  3. alveolars – these are produced with the tip of the tongue behind the top teeth (e.g., d and t).
  4. palatals – these are produced with the body of the tongue raised against the hard palate (e.g., j).
  5. velars – these are produced with the back part of the tongue against the soft palate (e.g., k and g).

Each panel in FIG. 5 shows the percent of sounds within a given category that the infants produced at a particular age. So, for example, there are 4 bilabial consonants (p, b, m, and w), and infants who could produce 2 out of the 4 at a particular age would receive a score of 50% for that age.
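
In code, that scoring is just set arithmetic – a tiny sketch (the bilabial category is from the article; the set of produced sounds is made up):

```python
BILABIALS = {"p", "b", "m", "w"}  # the four bilabial consonants

def category_score(produced: set[str], category: set[str]) -> float:
    """Percent of the category's sounds that the infant produced at this age."""
    return 100 * len(produced & category) / len(category)

print(category_score({"b", "m", "d", "g"}, BILABIALS))  # 50.0 (2 of 4 bilabials)
```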

fig5.jpg

Adapted from FIG. 5 of Moeller, et al. – % of sounds produced in different phonetic categories as a function of age.

One thing that was interesting to me is that bilabial consonants seemed to be one of the “easier” sounds to produce in general (look at the top row, middle panel) – for both NH and HL infants, scores were fairly high at every age range, and the gap between NH and HL infants was fairly small as well. The researchers said that this might be because bilabial consonants tend to be very visually salient compared to other places of articulation – it’s pretty easy to see lips pressed together compared to where your tongue is inside your mouth! This might make it easier for infants to acquire bilabial consonants, since they can more easily see how they are formed.

Another interesting thing about FIG. 5 – the researchers found that acquisition of these different classes of sounds generally fell into 3 different categories, which I’ve highlighted by color in the above figure. For vowels and alveolar consonants, the HL infants were generally delayed relative to the NH infants, but their rate of acquisition was parallel (this is highlighted in blue above). For bilabial consonants and velar consonants, the HL infants seemed to be closing an initial gap relative to the NH infants – that is, their acquisition of these classes of consonants was converging with the NH infants’ (this is highlighted in green above). Conversely, for palatal consonants and labiodentals/interdentals, the HL infants seemed to be acquiring consonants in these classes at a slower rate than the NH infants – that is, over time, the gap between the HL infants and the NH infants widened. One thing to note is that, for both NH and HL infants, palatal and labiodental/interdental consonants (highlighted in red above) occurred less often in general compared to other consonants – regardless of hearing, children tend to take longer to produce these types of sounds, perhaps because they tend to be less common in English.

The researchers then broke the consonants up in a different way – into fricatives and non-fricatives. Fricatives are consonants that are produced by forming a small opening with the mouth and forcing air through – they include sounds like “ssss,” “shhhh,” “f,” and “zzz” – fricatives are the ones that sound kind of “hissy”! This hissyness also makes fricatives generally hard for people with hearing loss to hear – fricatives tend to have a lot of high frequency components and are often low in intensity. FIG. 6 of Moeller, et al. (reproduced below) shows the rate of acquisition of non-fricatives (left) and fricatives (right) for both NH and HL infants.

fig6.jpg

FIG. 6 of Moeller, et al. – Acquisition of non-fricative and fricative consonants.

As you can see, acquisition of the non-fricative consonants was parallel for both the HL and NH infants – both groups had a steady increase in production of non-fricative sounds. However, for fricatives, while the NH infants steadily increased their production of these sounds, the HL infants didn’t – they seemed sort of stuck from 10 months to 24 months and, in general, didn’t really add many consonants from this group into their repertoire. As I mentioned above, this might be because fricatives tend to be really hard to hear for people with hearing loss, so the HL infants might have not had enough exposure to these types of sounds to begin producing them.

My Reflections

I was particularly interested to read this study since T’s consonant inventory seems to have grown a lot just in the past 2 weeks. Although he’s been saying “da” for a while (EVERYTHING is “dada”!), he’s started more consistently saying “ba” and “ma” (both are bilabial) and, just in the past few days, has started saying “la” (I think this is alveolar). From the data presented in this study, it seems like bilabials tend to be one of the “easiest” categories of consonants – babies tend to produce the highest proportion of consonants in this class at earlier ages relative to other categories, and this might be because of how easy it is to see the lips pressed together when producing bilabial consonants. Although T’s preferred consonants (the ones we hear most often) are “da” (alveolar) and “ga” (velar), I think we’ve heard him produce most of the bilabial consonants at least a few times now. And, lately, if we really emphasize the position of our lips while saying “pa,” “ba,” or “mmm,” he’ll try to imitate us!

One of the things I think I gained from reading this study was an appreciation for the activities we do at speech therapy and a deeper understanding of how those activities will help T acquire different speech sounds. One thing we really focus on is drawing T’s attention to different sounds by pairing the sound with something interesting and visually salient – this gets him to really listen to the sound rather than just have it be background noise that he might not pay attention to. We’ll do this in different ways, for example, pointing at our mouths, waving toys or ribbons around as we make the sound, etc. I think that, especially for children with hearing loss, merely passively hearing different sounds isn’t quite enough, and having their attention drawn to the sound and the way your mouth looks when you make the sound can help tie everything together.

Once again, this study highlighted the importance of T wearing his hearing aids! I think it’s really important for him to get as much good, high-quality exposure to all these different speech sounds as possible so that he can start to produce them, and this is especially important for fricatives (like “sss,” “shhh,” “f,” etc.). The “s” sound in particular is really important for English grammar – it’s what turns a singular noun into a plural – and the study that I wrote about here found that children with hearing loss tend to have more trouble with this grammar rule than normally-hearing children.

Finally, on a happy (for me) note – there are a few bad words I’ve been known to accidentally say in front of T that start with fricatives (I’ll let you figure out what they are) – I’ve been thinking I need to clean up my language, since I’ve been worried that once T really starts talking, he’ll out me by repeating something he’s heard me say totally out of the blue. But, from the results of this study, it looks like children, whether normally-hearing or with hearing loss, don’t tend to really start producing fricatives until they are quite a bit older than T is now – so it looks like I have a little while before I have to be worried about T surprising me by dropping a fricative-bomb!