The Impact of Language Experience on Perceiving Speech in Noisy Situations

Post by Anastasia Sares 

What's the science?

Researchers have been trying to crack the code of human speech processing for a long time. Speech perception is often tested in a quiet lab setting, but in everyday life, we experience noisy environments and have to figure things out based on context. In addition, there may be differences in the way we process our native language versus a second language acquired later in life. This week in Brain and Language, Kousaie and colleagues looked at how these factors interacted during speech processing.

How did they do it?

To answer their question, the authors recruited three groups of people who spoke both English and French fluently. There was no difference between the groups in terms of language proficiency; only the age at which they had learned their second language varied. The first group consisted of simultaneous bilinguals, who had learned both languages from birth (their “second language” was defined as their less dominant language, or the one they used less often). The second group had learned their second language early, between the ages of 3 and 5. The last group had learned it “late,” between the ages of 6 and 9.

The three groups performed a speech discrimination task in which they listened to sentences and had to repeat the final word. Some sentences were presented in the participant’s first language and some in their second language. Some sentences were “high context,” meaning it was easy to predict the last word from the rest of the sentence (“Stir your coffee with a spoon”), while others were “low context,” meaning the last word was less predictable (“Bob could have known about the spoon”). Finally, some sentences were presented in quiet, whereas others were played against background babble, much like you’d hear at a café or a bar.

Participants did the task in an MRI scanner, with scanning paused during the presentation of each stimulus so that scanner noise didn’t interfere with speech perception (a technique called sparse sampling).

What did they find?

Predictably, performance was nearly perfect when sentences in either language were presented in quiet. Differences appeared once noise was introduced. When working in their first language, everyone benefited from high-context sentences, which helped them discriminate speech in noise. When working in their second language, however, the later learners did not benefit as much from the high context. Keep in mind that all participants were highly proficient in both languages and differed only in the age at which they had learned them.


Looking at brain activity, the authors focused on the noisy second-language conditions. Simultaneous bilinguals showed increased activity in the left inferior frontal gyrus for low-context sentences in noise, likely reflecting the extra effort of discrimination made harder by the lack of context. The later learners, on the other hand, showed the most activity during high-context sentences! The authors suggest that the later learners’ brains "gave up" in the low-context, noisy, second-language condition because it was too demanding for them.

What's the impact?

This work is consistent with theories that our neural resources are limited and that, despite appearing perfectly fluent, people who learned a second language later in life may be using more of those resources just to keep up in difficult listening situations. Finding a quiet place to talk might help them use their mental energy more effectively!


Kousaie et al. Language learning experience and mastering the challenges of perceiving speech in noise. Brain and Language (2019). Access the original scientific publication here.