Post by Flora Moujaes
What's the science?
For skilled readers, the process of reading comes so naturally that its complexities are often overlooked. To read fluently, the brain must recognize arbitrary visual symbols irrespective of their case, font, size, position in a word, or other irregularities caused by variations in handwriting. The brain must then instantaneously link the visual form of the letters with the stored meaning they represent. Neuroimaging research has suggested that the brain's ability to abstract from letters to meaning is achieved by the ventral occipito-temporal (vOT) cortex. When a visual stimulus is encountered by the brain, the first signals reach the primary visual cortex, part of the occipital cortex. The information is then relayed through the occipital lobe towards the temporal lobe to recognize objects or symbols. However, the literature is still unclear about exactly how the vOT supports the ability to abstract from letters to meaning. This week in PNAS, Taylor and colleagues combined artificial language learning and neuroimaging to reveal how the brain represents written words.
How did they do it?
Researchers first trained twenty-four adults to read two sets of 24 novel words, written using two different alphabets of specially created symbols. They used pseudowords because this allowed them to manipulate word form, sound, and meaning in a manner that would be hard to achieve in natural languages. Each pseudoword had a distinct meaning and consisted of four symbols: three that contributed to the sound of the word and a final silent symbol. The words were similar to each other in one of three ways: (1) they contained some of the same symbols (they were from the same alphabet), (2) they sounded similar, or (3) they had a similar meaning but were written in different alphabets. This design enabled the researchers to examine how the brain encodes the visual stimulus itself as well as its associated sound and meaning. After two weeks of training, participants read the trained words while neural activity was measured using functional MRI. The researchers then used representational similarity analysis to compare the similarity of the evoked fMRI responses to related words in selected regions of interest.
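The core idea of representational similarity analysis is simple: build one matrix of pairwise similarities between the fMRI response patterns to each word, build another matrix of predicted similarities from a model (e.g., shared letters, shared sounds, or shared meaning), and correlate the two. Below is a minimal sketch of that logic in Python. The data are random placeholders, not the study's actual patterns or models, and the array sizes (24 words, 100 voxels) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: evoked fMRI patterns for 24 trained words,
# each a vector of 100 voxel responses from one region of interest.
patterns = rng.standard_normal((24, 100))

# Neural similarity matrix: pairwise correlation between word patterns.
neural_sim = np.corrcoef(patterns)

# Model similarity matrix: a placeholder binary prediction (e.g., 1 if
# two words are predicted to be similar, 0 otherwise). In the study,
# separate models captured visual, letter, sound, and meaning similarity.
upper = np.triu((rng.random((24, 24)) > 0.5).astype(float), k=1)
model_sim = upper + upper.T  # symmetric, zero diagonal

# Correlate the off-diagonal entries (upper triangle) of the two matrices.
iu = np.triu_indices(24, k=1)
rho, p = spearmanr(neural_sim[iu], model_sim[iu])
print(f"model-neural correlation: rho={rho:.3f}, p={p:.3f}")
```

A region whose neural similarity matrix correlates with, say, the meaning model but not the letter model would be interpreted as representing words at the level of meaning rather than visual form.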
What did they find?
Representational similarity analysis of words from the same alphabet revealed that right vOT and posterior left vOT represented written words in terms of their low-level visual form, and are thus sensitive to basic visual similarity. Posterior-to-mid left vOT represented written words in terms of their letters; in mid-vOT, these letters had similar representations even when they occurred in different positions within a word. Representational similarity analysis of words from different alphabets revealed that anterior left vOT showed similar neural patterns for words with similar sounds or meanings, even though they were written in different alphabets with no letters in common.
Overall, these results show that moving from posterior to anterior vOT, representations of written words become transformed from visual input into meaningful linguistic information. There is thus a hierarchical gradient in the vOT along which letters are transformed from purely visual representations into increasingly abstract representations of the sound and meaning of spoken language.
What's the impact?
These findings advance our understanding of how the brain comprehends language from arbitrary visual symbols. By examining how visual form, sound, and meaning are encoded in the occipito-temporal cortex, this study provides strong empirical support for a hierarchical, posterior-to-anterior gradient in vOT that represents increasingly abstract information about written words. Given that learning to read is one of the most important milestones in a child's education, it will be important for future studies to specify how linguistic influences on vOT change over time, both in the short term while reading a word and over the longer course of reading development.
Taylor et al. Mapping visual symbols onto spoken language along the ventral visual stream. PNAS (2019). Access the original scientific publication here.