Natural Brain Waves Correspond with Eye Movements During Reading

Post by Lani Cupo

The takeaway

The brain’s neural activity imposes its own rhythm onto processing. A matching pattern of rhythmic activity has been found in language processing, not just for speech, which has a rhythm of its own, but also in reading, which does not.

What's the science?

Previous studies established that neural oscillations, or rhythmic neural activity, are involved in processing language, such as speech and sign language. However, language (either spoken or signed) contains a rhythm of its own, making it difficult to determine whether the synchrony between speech and brain oscillations arises from the rhythm of the speech (outside in) or the rhythm of the brain (inside out). This week in The Journal of Neuroscience, Henke and colleagues used eye-tracking and electroencephalography (EEG) during reading to provide evidence that the brain may impose its own intrinsic rhythm onto language processing.

How did they do it?

The authors analyzed openly available data from the Zurich Cognitive Language Processing Corpus, in which twelve participants read 300 sentences on a screen while their eye movements and brain activity (via EEG) were recorded. From the eye-tracking data, the authors could study fixations, where the eye rests on a target, and saccades, where it jumps between targets. The authors hypothesized that if rhythmic neural activity is relevant to reading, the eye movements should also be rhythmic, and the cycles of the eye-movement data should be correlated with cycles of brain oscillations, which they assessed by examining their phase coherence. Additionally, language comprehension is thought to be broken into multi-word groups, or “chunks,” in which words are meaningfully related to one another. The authors expected the cycles of the eye movements to relate to the formation of chunks: at the end of a chunk, there would be a change in the duration of a fixation. To that end, they examined whether the fixation durations and the EEG signal were always at a specific point within their cycle at sentence endings.
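Phase coherence between two signals is often quantified with a phase-locking value (PLV): filter both signals to a frequency band, extract each one's instantaneous phase, and measure how consistent the phase difference stays over time. The sketch below is purely illustrative and is not the authors' pipeline; the simulated signals, sampling rate, and band edges are invented for the demo:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def phase_locking_value(x, y, fs, band):
    """Phase coherence (PLV) of two signals within a frequency band.

    PLV = |mean(exp(i * (phase_x - phase_y)))|: 1 means a perfectly
    stable phase relationship, 0 means no consistent relationship.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))  # instantaneous phase of x
    phy = np.angle(hilbert(filtfilt(b, a, y)))  # instantaneous phase of y
    return float(np.abs(np.mean(np.exp(1j * (phx - phy)))))

# Toy data: two noisy signals sharing a 5 Hz (theta-range) rhythm
# with a fixed phase lag, standing in for EEG and eye-movement traces.
fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
eye = np.sin(2 * np.pi * 5 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

plv = phase_locking_value(eeg, eye, fs, band=(4, 8))
noise_plv = phase_locking_value(eeg, rng.standard_normal(t.size), fs, band=(4, 8))
```

With a shared 5 Hz rhythm the PLV lands near 1; against unrelated noise it falls toward 0, which is the kind of contrast the coherence analysis relies on.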

What did they find?

The authors found the expected rhythmicity of the eye movements. Importantly, the eye movements synchronized with brain oscillations from electrodes above the visual cortex in two different frequency bands: the theta band, involved in visual attention and syllable parsing (important for single-word comprehension), and the delta band, involved in chunking speech into multi-word units. Because the language stimulus was written, the rhythmic activity was likely not imposed on the brain by the external stimulus but arose naturally. Contrary to their hypothesis, the authors could not relate changes in fixation duration to sentence endings. They suspect this is because they sampled only a single fixation per word, which might have affected the baseline in their statistical approach. However, they could relate the cycles of oscillatory brain activity to chunking.

What's the impact?

While the study does not allow the authors to claim that the neural oscillations causally impact the fluctuations in eye movements during reading, the data provide evidence that the synchronized brain activity and eye movements may shape reading and information processing. In time, such work might be extended to reading-impaired populations, including individuals with dyslexia, to improve support for individuals who struggle to read.

Transgender Listeners Show Reduced Visual Bias When Classifying Voices

Post by Anastasia Sares

The takeaway

While we usually draw on multiple senses and general predictions to inform our perception, this can sometimes backfire, introducing bias into our judgments. This study found that transgender and nonbinary people were less susceptible to visual bias, and better able to classify a person’s vocal range while watching videos of them singing or speaking. This trans advantage could come from more extensive experience with voice-body mismatch in daily life. 

What's the science?

The brain is constantly trying to fuse information from its different senses and make predictions based on that information. Unfortunately, this can lead to biases when we are asked to judge based on one sense alone, or when we are confronted with something that doesn’t match our ingrained predictions. One example is the McGurk effect, where misleading visual information causes people to perceive a different syllable than the one they heard: a “ba” played over audio combined with the visual of someone saying “ga” can result in people reporting that they heard “da” instead. Visual bias is especially problematic for voice-body mismatches in the context of opera. A person’s body size and shape don’t necessarily indicate the range in which they can comfortably sing, but the stereotypes are strong and can (consciously or subconsciously) influence the roles that opera singers are cast in. This can affect their long-term vocal health and be detrimental to their careers.

Visual biases are not set in stone, however. There is some evidence that they can be mitigated through training, like musicians learning to resist the McGurk effect. One group of people who may have natural sensitivity to voice-body mismatches are the transgender and nonbinary communities, since voice is a strong gender cue and often a source of insecurity or fear of being outed.

Recently in Frontiers in Psychology, Marchand Knight and colleagues showed that, when asked to judge vocal ranges of different speakers, trans and nonbinary people are more resistant to visual biases than their cis peers, making their judgments more accurate.

How did they do it?

The authors conducted an online experiment including a cis group of participants as well as a trans group, which was composed of a mix of trans and nonbinary identities. Participants started by learning about different voice categories used in opera (from low/dark to high/bright: bass, baritone, tenor, alto, mezzo, soprano) and next used this voice-typing scale to rate clips of people speaking and singing. Participants first got the audio-only (no video) versions of the clips, then the visual-only versions (guessing voice type purely based on looks), and finally the full clips with both video and audio. The researchers intentionally chose some actors who they thought might show stronger voice-body mismatches to better measure the effect of visual bias.

What did they find?

Participants were fairly successful at classifying voice type from hearing the voices in the audio-only condition, but in the visual-only condition they tended to revert to a gender binary (rating videos of female-presenting people around the “mezzo” voice range and male-presenting people around the “baritone” range). The highest and lowest voice types showed the largest discrepancy between their audio and visual ratings.

When audio and visual were presented together, ratings fell somewhere between the two previous conditions, showing that the visual information influenced participants even though they had been asked to rate based solely on the audio. However, the trans participants were better at resisting the visual bias, so their ratings in the audiovisual condition stayed closer to the audio-only condition. Cis participants’ ratings were pulled toward the visual information about 30% more than trans participants’ were. As far as the researchers could measure, this difference did not seem to be strongly related to demographic differences between the groups or to gender views in general.
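One simple way to quantify how strongly visual information "pulls" a rating is to model each audiovisual rating as the audio-only rating shifted some fraction of the way toward the visual-only rating, then estimate that fraction per group. This is only an illustrative model with made-up numbers, not the study's actual analysis:

```python
import numpy as np

def visual_weight(audio, visual, audiovisual):
    """Least-squares estimate of the pull toward the visual rating.

    Fits av = audio + w * (visual - audio), where w = 0 means the
    audiovisual rating ignores the visuals and w = 1 means it is
    fully captured by them.
    """
    audio, visual, av = map(np.asarray, (audio, visual, audiovisual))
    shift = visual - audio
    return float(np.dot(av - audio, shift) / np.dot(shift, shift))

# Hypothetical ratings on a 1-6 voice-type scale (bass=1 ... soprano=6).
audio = [2.0, 5.0, 1.5, 5.5]     # audio-only ratings per clip
visual = [3.0, 4.0, 3.0, 4.5]    # visual-only ratings per clip
cis_av = [2.6, 4.4, 2.4, 4.9]    # audiovisual, pulled strongly to visual
trans_av = [2.4, 4.6, 2.1, 5.1]  # audiovisual, pulled less

w_cis = visual_weight(audio, visual, cis_av)      # 0.6 with these numbers
w_trans = visual_weight(audio, visual, trans_av)  # 0.4 with these numbers
```

With these toy numbers the cis weight exceeds the trans weight, mirroring the direction (though not the exact size) of the reported group difference.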

What's the impact?

These findings highlight a strength of the trans and nonbinary community at a time when most research focuses on the disadvantages its members face. They also raise a crucial issue affecting the vocal health of opera singers, and call for it to be addressed.

Access the original scientific publication here. 

[Disclosure: The writer of this BrainPost summary is also a collaborator on the publication]

Anxiety is Induced by Activating Microglia, the Immune Cells of the Brain

Post by Rebecca Hill

The takeaway

Hoxb8 microglia, immune cells of the brain that arise from a Hoxb8-expressing lineage, play a role in regulating anxiety. When these microglia are activated with light (using optogenetics) in certain areas of the brain, mice display anxious grooming and freezing behaviors.

What's the science?

Hoxb8 is a gene involved in creating a subset of microglia, the immune support cells of the brain, but the functions of both the gene and these cells have yet to be fully elucidated. When the Hoxb8 gene is mutated, or when these microglia are removed, mice show chronic anxious behaviors and excessive grooming. Recently, in Molecular Psychiatry, Nagarajan and colleagues investigated whether activating these microglia in certain areas of the brain using light affects anxious behaviors in mice.

How did they do it?

To activate the Hoxb8 microglia, the authors used optogenetic stimulation, a technique that uses light to control the activity of specific cells in the brain. They activated Hoxb8 microglia in brain areas previously shown to control anxiety in mice: the dorsomedial striatum, the medial prefrontal cortex, the amygdala, and the hippocampus. While stimulating these areas, they measured the behavioral effects: changes in grooming and other anxiety-related behaviors in different situations. They ran mice through several behavioral tests, measuring anxiety behaviors for the 2 minutes before stimulation, the 2 minutes of stimulation, and the 2 minutes after stimulation. To measure anxiety levels, they used both a maze and an open field arena, testing how much time mice would spend in the fear-inducing open areas as opposed to the comfortable enclosed areas.

What did they find?

Mice groomed themselves when the dorsomedial striatum and the medial prefrontal cortex were stimulated, and demonstrated higher levels of anxiety when areas in the amygdala were stimulated. This suggests that grooming is controlled by the former two areas, while anxiety is controlled by the latter. When the microglia in the hippocampus were stimulated, mice showed both grooming and anxiety behaviors, in addition to increased freezing, which suggests the hippocampus is involved in all three anxiety-related behaviors. Interestingly, when Hoxb8 microglia and microglia not derived from the Hoxb8 lineage (non-Hoxb8 microglia) were stimulated at the same time, mice did not display any anxiety behaviors at all. This suggests that Hoxb8 and non-Hoxb8 microglia work together, with opposing effects, to control anxiety: Hoxb8 microglia turn anxiety behaviors off (like the brakes on a car), and non-Hoxb8 microglia turn them on (like the accelerator).

To reconcile previous findings that anxiety increases when Hoxb8 microglia are removed with the current finding that activating Hoxb8 microglia also increases anxiety, the authors suggest that optogenetic activation of these microglia might somehow cancel out their inhibitory effect on anxiety behaviors. While the mechanisms are not fully understood, they likely involve the neighboring neurons that were activated along with the Hoxb8 microglia. Either way, these microglia are key regulators of anxiety, potentially in both directions.

What's the impact?

This study is the first to show that Hoxb8 microglia can be used to control anxiety behaviors via optogenetic techniques. It also suggests that the reason for having both Hoxb8 and non-Hoxb8 microglia is to finely control anxiety behavior. Anxiety and related mental disorders are widespread among both adolescents and adults, so understanding how anxiety works within the brain is crucial to treating it. Studies like this could play a large part in creating treatments for chronic anxiety disorders that target these specific microglia and brain areas.