E-I-E-I-O!

T (13 months) has been playing around with lots of different vowel sounds lately. For a while, it seemed like he was mostly saying “ahhh” and “aaa,” but lately he has added “ee,” “ayy,” “oh,” and “uh.” Some cute highlights – he’s started saying “uh oh” – usually while looking us in the eye, grinning, throwing food/utensils off his high chair, and pointing for us to pick them up (I don’t think that’s exactly an “uh oh”?!). And, out of the blue, he’s started singing “E I E I O”! I THINK this is because of the song “Old MacDonald,” but I don’t usually sing it to him, so I’m not totally sure where he learned it (daycare?!). He’s also started very reliably saying “ohhh” to get us to open a box, which is something we’ve worked hard on in speech therapy, and it finally clicked this week!

I’ve noticed that, with all of these new vowels, T tends to say the vowel in isolation, rarely combining it with a consonant (when T says consonants, they tend to be paired with his earlier-mastered vowels, like “aaa” and “ahh” – so he will say “ba,” “da,” etc.). But after a few days of experimenting with his new vowels, T has just started combining them with consonants – and, interestingly, he seems to mostly be combining them with “d.” So he will now babble “doh,” “duh,” and “die.” I wrote here about how I thought T’s favorite consonant was “d” (based on when he first said it and how frequently he says it), so I wonder if he’s starting to combine his new vowels with “d” because it’s his favorite and/or the easiest for him to say. If that’s the case, I predict that we will next start to hear “bye” and “no!”

T has also started to pick up a few new words that he will say pretty reliably – he will say “buh buh” while waving goodbye, and just in the past two days, has started saying “all done” (pronounced “ah duh”). He has started saying “mama” quite a bit, as well, but I think I might have inadvertently taught him that the word for photograph is “mama” – I may have been a bit overzealous pointing out myself in photographs, and I’ve noticed that T will now excitedly point to ANY photograph, regardless of whether I’m in it, and shout “mama”!


Speech Therapy Session – February 1, 2016

We started today by working on identifying the Ling sounds. Today, T was able to pick out the toys paired with “mmm,” “ooo,” and “ahh”! When the speech therapist asked “can you show me mmmm?”, T was able to point to the corresponding ice cream toy out of the 4 or 5 toys in a bin (and did the same for the “oooo” ghost toy and the “ahhh” airplane toy). I was so surprised he remembered these pairings, since we don’t have these toys at home and so can’t practice with them between sessions. Some of it may have been chance, but I do think he’s getting the hang of this a little more each time we try!

One really interesting thing we learned from T’s speech therapist today was “prosodic bootstrapping.” (Note: I am not a speech therapist, and really don’t know much about linguistics, so the explanation below is just to the best of my understanding!)

Prosody relates to larger units of speech than individual vowels and consonants – syllables, words, and phrases. Prosody gives us information about speech like the speaker’s mood, whether a sentence is a question or a statement, whether a sentence is said sarcastically, etc. Prosody also helps indicate a language’s underlying grammar. For example, prosody can indicate clause boundaries within a phrase or sentence. Consider the sentence “he walked the dog” – in English, it would be unlikely to insert a pause between “the” and “dog,” because that splits the phrase “the dog.” The locations and durations of pauses within phrases are one example of prosodic information. Other auditory cues used to convey prosody include loudness or stress (e.g., which parts of a word or clause we emphasize), the frequency or pitch of the voice and how it changes (e.g., in English, pitch tends to rise as we ask a question), and the duration and timing of pauses (e.g., to mark clause boundaries, as mentioned above).
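To make the pitch cue concrete for myself, here’s a tiny toy sketch in Python of the “rising pitch often means question” idea. Everything here is made up for illustration – the contours are hypothetical numbers (as if they came from a pitch tracker), and real prosody perception uses far richer cues (stress, timing, pauses) than this single heuristic:

```python
def sounds_like_a_question(f0_contour, tail_fraction=0.2, rise_ratio=1.1):
    """Toy heuristic: in English, pitch often rises at the end of a
    yes/no question.  Compare the average pitch over the final stretch
    of the utterance to the average pitch before it."""
    split = int(len(f0_contour) * (1 - tail_fraction))
    body, tail = f0_contour[:split], f0_contour[split:]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(tail) > rise_ratio * mean(body)

# Hypothetical pitch contours in Hz (made-up numbers):
statement = [220, 215, 210, 205, 200, 190, 185, 180, 175, 170]  # falling
question  = [200, 198, 195, 197, 200, 205, 215, 230, 245, 260]  # rising
print(sounds_like_a_question(statement))  # False
print(sounds_like_a_question(question))   # True
```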

Bootstrapping refers to the way in which infants naturally acquire their native language, just by hearing it spoken around them and to them. In particular, bootstrapping refers to the idea that infants use innate statistical learning abilities to pick up small bits of information about language, and then build on those bits to develop an understanding of their native language. A particular example of this is how infants use statistical learning to identify word boundaries (see [1]) – in English, the sound combination “st” is fairly common (as in “sting”), but the combination “gb” is not. Just by hearing streams of speech, infants learn that “st” is common and “gb” is not, and this knowledge helps them segment words within a stream of speech. Take the example of the stream of speech “stingingbee” – this can be broken into the words “stinging” and “bee,” and “gbee” is not likely to be a word, because in English, “gb” is not a common combination.
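Out of curiosity, I tried sketching the core idea in Python. As I understand the formulation in [1], infants track transitional probabilities between syllables – how likely syllable B is to follow syllable A – and word boundaries tend to fall where that probability dips. The syllable stream and the 0.7 cutoff below are just made-up toy values:

```python
from collections import Counter

def transition_probs(stream):
    """TP(a -> b) = count(a followed by b) / count(a).
    Within-word transitions tend to have high TP; transitions that
    cross word boundaries tend to have low TP (see [1])."""
    pair_counts = Counter(zip(stream, stream[1:]))
    unit_counts = Counter(stream[:-1])
    return {(a, b): n / unit_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, tps, threshold=0.7):
    """Guess a word boundary wherever the transitional probability
    between adjacent syllables dips below the threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy "speech stream": "stinging" and "bee" run together with no pauses.
stream = ["sting", "ing", "bee", "sting", "ing", "sting",
          "ing", "bee", "bee", "sting", "ing"]
tps = transition_probs(stream)
print(segment(stream, tps))
# -> ['stinging', 'bee', 'stinging', 'stinging', 'bee', 'bee', 'stinging']
```

In this toy stream, “sting” is always followed by “ing” (TP = 1.0), while the transitions across word boundaries are less predictable, so the dips recover the words.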

So, “prosodic bootstrapping” is how infants learn to use the auditory cues that convey prosody to learn about the intent of speech (is it a question? said angrily or sarcastically?) or the underlying grammar (which words group together into clauses). One really cool thing is that newborn infants cry with a prosody similar to that of their native language (see [2]), which may indicate that they begin learning the prosody of their native language from speech heard in the womb! I hope to learn more about how infants (particularly those with hearing loss) learn to use prosodic information, and will hopefully write more about this as I learn more.

So, to get back to T and our speech therapy session – T’s speech therapist asked if we had noticed him babbling with different prosody – for example, “ba da ga ba?” or babbling with excitement versus irritation. We have definitely noticed that when he’s ready to get up for the day in the morning (or at 2 am, in his case!), he will start off babbling happily, and then his voice gets more and more insistent and agitated until we finally get him up. Also, he seems more likely to respond vocally when we ask him a question (with a rising tone at the end) than when we make a statement and then pause, which suggests he is picking up on the auditory cues that distinguish questions from statements. We will definitely be paying more attention to this now that we’ve talked about it!

 

References
[1] Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). “Word Segmentation: The Role of Distributional Cues.” Journal of Memory and Language, 35(4), 606–621.
[2] Cross, I. (2009). “Communicative Development: Neonate Crying Reflects Patterns of Native-Language Speech.” Current Biology, 19, R1078–R1079.

 

Speech Therapy & Audiology Appointment – January 15, 2016

T had both a speech therapy session and an audiology appointment today (with a nap in between!).

Speech Therapy

T has gotten good at producing an “ahhhh” sound – our speech therapist explained that this tends to be one of the first sounds babies produce since it’s “easier” compared to other vowel sounds – the mouth is open and relaxed (rather than requiring a coordinated mouth shape or tongue position).  Now, we’re starting to work on getting T to produce other vowel sounds, like “oooo” and “eeee.”

Our speech therapist explained that these are “harder” than “ahhh,” because they require tension in the mouth.  Making those sounds out loud now, I notice that my lips are more closed compared to “ahhh,” and my tongue is in a particular position.  One of the things we’ve been doing at speech therapy for a few weeks now is pairing different sounds with visual motions.  Our speech therapist explained that this is to make the sounds more salient by combining them with an attention-grabbing visual motion cue.  For example, we’ve been playing with ribbons a lot – when we make “ahhh” sounds, we wave the ribbons around loosely, up and down.  In contrast, when we make “ooo” and “eee” sounds, we often grab the ends of the ribbon (or let T hold one end) and pull on the other, to give a visual cue of tension while making the sound.  Today, we also focused on sounds and words that have an “ooo” or “eee” sound, like saying “choo choo!” when playing with a train, or playing peekaboo.

T is going through some big gross motor leaps lately.  In the past 2 weeks, he’s figured out crawling, and just learned to pull himself up to standing yesterday.  Now, all he wants to do is move around and practice his mobility skills.  Our speech therapist told us that often, when babies are working on their gross motor skills, speech production motor skills take a backseat, since their brains are focusing so hard on gross motor skills.  That was really interesting to learn!

Audiology Appointment

T sees his audiologist every 4 to 6 weeks or so – she will usually check and tweak his hearing aid settings, check for fluid in his ear, and, for the past few sessions, try to get some behavioral audiometric data.

Measuring an audiogram from a cooperative adult is pretty straightforward – they can tell you when they hear a sound.  It’s much trickier with babies, though! Until T was 6 months old or so, the audiologist estimated his audiogram with Auditory Brainstem Response (ABR) measurements.  Now that he’s older, we have started trying to get behavioral audiometric results.  At each audiology appointment, we continue to try to refine his audiogram; we’re only able to test him for small chunks of time (maybe 10-15 minutes), but we are slowly but surely getting a more accurate picture of his hearing loss.

The test that T’s audiologist has been doing with him is called Visual Reinforcement Audiometry (VRA).  She plays sounds at different frequencies and loudnesses, and when he responds in a way that indicates he heard the sound, she “rewards” him by showing him light-up dancing puppets (I find them pretty creepy, but T LOVES them).  In this way, T is conditioned to indicate what sounds he hears without needing to respond verbally – once he realized that looking in the direction the sound came from makes the puppets appear, he became much more motivated to respond whenever he heard a sound, since he gets so excited about the puppets.
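For my own notes, here’s a rough Python sketch of how a behavioral threshold search like this can work. I’m assuming a classic “down 10 dB after a response, up 5 dB after a miss” bracketing rule (based on the Hughson-Westlake procedure I’ve read about for adult audiometry – I don’t actually know the exact protocol T’s audiologist uses), and the simulated “listener” is entirely made up:

```python
import random

def simulated_response(level_db, true_threshold_db, lapse_rate=0.1):
    """Stand-in for the baby's head-turn: responds when the tone is at
    or above a hypothetical true threshold, with occasional lapses."""
    return level_db >= true_threshold_db and random.random() > lapse_rate

def find_threshold(true_threshold_db, start_db=50, floor_db=-10, ceiling_db=110):
    """Simplified 'down 10, up 5' bracketing: drop 10 dB after each
    response, raise 5 dB after each miss, and accept a level as the
    threshold once it gets two responses.  (The real procedure only
    counts responses on ascending runs; this version is simplified.)"""
    level = start_db
    hits_at_level = {}
    while floor_db <= level <= ceiling_db:
        if simulated_response(level, true_threshold_db):
            hits_at_level[level] = hits_at_level.get(level, 0) + 1
            if hits_at_level[level] >= 2:
                return level      # accepted as the estimated threshold
            level -= 10           # heard it: try a quieter tone
        else:
            level += 5            # missed it: come back up a bit
    return None                   # no response within the tested range

random.seed(0)
# Pretend the true threshold at 1000 Hz is 35 dB HL (made-up number):
print(find_threshold(true_threshold_db=35))  # -> 35
```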

In order to separately measure his right and left ears (since he won’t tolerate headphones), the audiologist stuck the probe wire with the speaker into his ear mold (with the processor disconnected) – I thought this was such a clever trick!  T is already used to wearing his ear molds since he wears his hearing aids, so he barely even noticed the probe wire (although he did try to chew on the wire once he noticed it).

T was in a great mood, and we were able to get data at both mid (1000 Hz) and high (4000 Hz) frequencies for both ears, and the results continue to confirm a mild bilateral hearing loss.

After measuring his responses with air conduction (sounds played through the speaker into the ear canal), the audiologist wanted to measure his bone conduction responses.  With air conduction, the sounds go through the full chain of processing – through the outer ear, the middle ear, and then the inner ear.  With bone conduction, a little oscillator vibrates against the mastoid bone (just behind the ear), which bypasses the outer and middle ears.  The audiologist explained that if the air conduction and bone conduction results differ (for example, if air conduction results indicate a hearing loss but bone conduction results are normal), this can suggest a middle ear problem, or conductive hearing loss.  For T, the bone conduction results agreed with the air conduction results, indicating a sensorineural hearing loss (that is, a problem with the hair cells in the cochlea).  One really interesting thing the audiologist mentioned was that bone conduction results indicate hearing levels for the better ear – she had placed the oscillator behind his left ear, but she said the vibration is picked up by the “better” cochlea, regardless of where the oscillator is placed!
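The air-versus-bone logic is simple enough that I wrote it down as a little Python rule of thumb. The cutoffs here (20 dB HL as the edge of “normal,” a 10 dB air-bone gap as meaningful) are common textbook values I’ve seen, not numbers from our audiologist, and real interpretation is done per frequency across the whole audiogram:

```python
def classify_loss(air_db, bone_db, normal_limit=20, gap_limit=10):
    """Toy rule of thumb: compare air- and bone-conduction thresholds
    (dB HL) at a single frequency.  A bone threshold that is much
    better than the air threshold points to an outer/middle ear
    (conductive) problem; thresholds that agree point to the inner ear."""
    if air_db <= normal_limit:
        return "normal hearing"
    if bone_db <= normal_limit and air_db - bone_db >= gap_limit:
        return "conductive loss (outer/middle ear)"
    if air_db - bone_db >= gap_limit:
        return "mixed loss"
    return "sensorineural loss (inner ear)"

# T's pattern: air and bone thresholds agree, both showing a mild loss
# (35 dB HL here is an illustrative number, not his actual result):
print(classify_loss(air_db=35, bone_db=35))  # sensorineural loss (inner ear)
print(classify_loss(air_db=50, bone_db=10))  # conductive loss (outer/middle ear)
```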