Partially Overlapping Representations of Speech and Sign Language in the Brain

Post by Anastasia Sares

What's the science?

Communication between humans involves a fascinating process of transformation in the brain. We begin with external signals: spoken words, written symbols, street signs, emojis, braille, or sign language. These signals enter the brain through different senses, but eventually, they become a concept, something that transcends the medium used to communicate them. What are some of the neural processes underlying the formation of concepts? This week in Current Biology, Evans and colleagues used advanced MRI techniques to find out how concepts live in the brain, independent of the signals that created them.

How did they do it?

The authors tested individuals who were bilingual in spoken British English and British Sign Language. This kind of bilingualism is interesting because it is also bimodal in terms of the senses used: spoken language uses auditory information while sign language uses visual information. The participants underwent an MRI scan while being presented with a number of words in both of their languages, audio-only for speech and visual-only for sign language. The words could be grouped into conceptual categories (fruit, animals, and transport), and the conceptual relationships between the items had been modeled in a previous experiment. For example, an orange and a banana would have a high similarity rating, whereas an orange and a truck would have a low one.

The analysis focused on patterns of brain activity for each object. The authors applied their conceptual relationship model to the brain data, looking for areas where conceptually similar words evoked similar patterns of brain activity and conceptually distant words evoked differing patterns (using multivariate pattern analysis, or MVPA). They first did this within modalities (within speech and within signs) and found clusters of brain activity that fit their criteria. They then further tested each cluster, looking for the ones that could also distinguish concepts across modalities, showing similar patterns of activity for both speech and sign language. The only cluster that met all of their criteria was located in the left posterior middle/inferior temporal gyrus.
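For readers curious about the mechanics, this kind of pattern analysis boils down to comparing two kinds of (dis)similarity structure: one built from the conceptual model and one built from voxel activity patterns in a given brain region. The sketch below is a minimal Python illustration of that idea using randomly generated stand-in data; the variable names, array sizes, and the choice of correlation distance and Spearman correlation are assumptions made for illustration, not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist, squareform
from scipy.stats import spearmanr

# Hypothetical stand-in data: one activity pattern (a vector of voxel values)
# per word, per modality. Sizes and names are illustrative only.
n_words, n_voxels = 30, 200
rng = np.random.default_rng(0)
speech_patterns = rng.normal(size=(n_words, n_voxels))  # patterns for spoken words
sign_patterns = rng.normal(size=(n_words, n_voxels))    # patterns for signed words

# Conceptual model: pairwise dissimilarity between the words
# (in the study this came from similarity ratings collected previously;
# here, random features stand in for those ratings).
concept_features = rng.normal(size=(n_words, 10))
model_rdm = pdist(concept_features, metric="correlation")

def neural_rdm(patterns):
    """Pairwise dissimilarity (1 - correlation) between activity patterns."""
    return pdist(patterns, metric="correlation")

# Within-modality test: does the region's activity geometry match the model?
rho_speech, _ = spearmanr(model_rdm, neural_rdm(speech_patterns))
rho_sign, _ = spearmanr(model_rdm, neural_rdm(sign_patterns))

# Cross-modality test: compare each spoken word's pattern with every signed
# word's pattern; conceptually similar pairs should have more similar patterns.
cross_dissim = cdist(speech_patterns, sign_patterns, metric="correlation")
model_square = squareform(model_rdm)          # full word-by-word model matrix
off_diag = ~np.eye(n_words, dtype=bool)       # ignore same-word pairs
rho_cross, _ = spearmanr(model_square[off_diag], cross_dissim[off_diag])

print(f"speech: {rho_speech:.2f}, sign: {rho_sign:.2f}, cross: {rho_cross:.2f}")
```

In the study itself, comparisons like these were run across many small clusters of voxels throughout the brain, and only clusters that passed the within-modality tests were then tested across modalities.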

What did they find?

The patterns of brain activity in the left posterior middle/inferior temporal gyrus were related to the category of the word (e.g., fruit vs animal), regardless of whether it was spoken or signed. Interestingly, however, this area was not very good at representing individual items (e.g., banana vs orange) cross-modally. There were other regions that did have patterns distinguishing individual items, but these were located in areas specific to either speech or sign language. The authors interpreted this to mean that higher-level concepts were modality-independent, but individual objects had modality-specific representations in the brain.


What's the impact?

This study demonstrates that there is at least one brain area that responds to concepts, independent of the language used (even when one is a spoken language and the other is a sign language). Some recent media portrayals (like the movie Arrival) lean on a strong form of linguistic relativity: the idea that the language we speak determines the way we think. However, Evans and colleagues see language more like a ‘subtle filter’ that ‘influences, rather than determines, perception and thought.’


Evans et al. Sign and Speech Share Partially Overlapping Conceptual Representations. Current Biology (2019). Access the original scientific publication here.