Genetic Factors Influence Brain Criticality and Cognition

Post by Lila Metko

The takeaway

Brain criticality, a homeostatic endpoint indicative of the excitatory-inhibitory balance, is associated with neural information flow, information capacity, and consciousness. Genetic factors influence brain criticality and its relationship with cognitive function.

What's the science?

A critical brain state is defined as a state in which the brain is optimally balanced between excitatory and inhibitory activity. Brain criticality provides a framework for modeling and understanding the large-scale brain activity that underlies processes like cognition and consciousness. A few measures are used to quantify a brain’s proximity to a critical state, including the inter-avalanche interval (IAI), the branching ratio, and Hurst exponents. An avalanche is a cascade of spontaneous neuronal firing, and the IAI is the interval between successive avalanches. Avalanche sizes follow a power law distribution, meaning that there are many small avalanches and a few very large ones; in other words, there is no typical or characteristic avalanche size. The branching ratio describes how many neurons, on average, are activated by a single active neuron. Hurst exponents measure how strongly past neuronal activity influences future neuronal activity. Until now, both the genetic heritability of criticality and the genetic relationship between brain criticality and cognition were unknown. Recently, in PNAS, Xin and colleagues determined the heritability of criticality throughout the brain and examined genetic correlations between brain criticality and cognition.
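
For readers who want a more concrete feel for these measures, here is a minimal Python sketch of how inter-avalanche intervals and a branching ratio could be estimated from a toy time series of event counts. The data are synthetic and the estimators are deliberately simplified; this is not the fMRI-based pipeline the authors used.

```python
import numpy as np

def avalanche_metrics(events):
    """Toy estimates of inter-avalanche intervals (IAIs) and a branching ratio
    from a 1-D array of event counts per time bin (e.g., how many regions
    cross an activity threshold in each bin). Didactic sketch only."""
    events = np.asarray(events)
    active = events > 0

    # An avalanche is a run of consecutive active bins bounded by silent bins.
    padded = np.concatenate(([False], active, [False]))
    starts = np.flatnonzero(~padded[:-1] & padded[1:])
    ends = np.flatnonzero(padded[:-1] & ~padded[1:])   # exclusive end indices

    # IAI: the silent gap between the end of one avalanche and the start of the next.
    iai = starts[1:] - ends[:-1]

    # Branching ratio: average ratio of activity in bin t+1 to activity in bin t
    # within avalanches. Values near 1 are taken as a signature of criticality.
    ratios = []
    for s, e in zip(starts, ends):
        run = events[s:e]
        if len(run) > 1:
            ratios.extend(run[1:] / run[:-1])
    branching = float(np.mean(ratios)) if ratios else float("nan")

    return iai, branching

# Synthetic sparse activity, just to exercise the functions
rng = np.random.default_rng(0)
events = rng.poisson(0.6, size=2000)
iai, branching = avalanche_metrics(events)
print(f"mean IAI: {iai.mean():.1f} bins, branching ratio: {branching:.2f}")
```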

How did they do it?

The authors obtained resting-state fMRI data from 250 monozygotic twins, 142 dizygotic twins, and 437 unrelated individuals. The criticality measures described above (IAI, branching ratio, and Hurst exponents) were computed from the fMRI data. The authors then used the ACE twin model (Additive genetic effects, Common environment, and Environment unique to the individual) to estimate the heritability of each criticality measure. This is one of the most commonly used models for estimating heritability in twin studies, and it leverages how strongly a trait is correlated within monozygotic twin pairs, within dizygotic twin pairs, and between unrelated individuals. They used a partial least squares regression model to determine which genes explained between-participant variation in Hurst exponents. They then performed a gene ontology enrichment analysis to see whether any biological functions or cellular locations were over-represented among these genes, and a disease gene overlap analysis to see whether a high proportion of these genes were associated with a particular disease. Finally, they used twin modeling approaches to estimate genetic correlations between cognition (assessed with the NIH Toolbox total cognition score) and criticality.
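
To give a sense of how twin correlations turn into heritability estimates, here is a back-of-the-envelope Python sketch using the classic Falconer approximation to the ACE decomposition. The study itself fit full ACE structural-equation models, and the correlation values below are made up purely for illustration.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer approximation to the ACE variance components from twin-pair
    correlations. A (additive genetics) is the heritability estimate,
    C is shared environment, E is unique environment plus measurement error."""
    a2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return a2, c2, e2

# Hypothetical within-pair correlations of a criticality measure
# (e.g., a regional Hurst exponent); not values from the paper.
a2, c2, e2 = falconer_ace(r_mz=0.55, r_dz=0.30)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```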

What did they find?

The authors found significant heritability of criticality at the whole-brain level and in over half of the individual brain regions analyzed. Criticality was more heritable in sensory brain regions than in association regions. The top two groups of genes identified by the partial least squares regression analysis explained 56% of the variance in regional Hurst exponents. The gene ontology enrichment analysis showed that many of these genes are involved in regulating cellular excitability, and the disease gene overlap analysis found that major depressive disorder was the disease with the largest proportion of contributing genes. The authors also found a significant genetic correlation between IAI and cognition: genetic factors associated with shorter IAIs were also associated with higher cognitive performance.

What's the impact?

This study is the first to show a genetic relationship between brain criticality and cognitive performance. In recent years, scientists have increasingly been working to develop genetically based treatments for disorders like depression. Thus, it is important for researchers to understand the genetic contribution to criticality, which plays an important role in information processing and cognition.

Access the original scientific publication here. 

Using New Technology to Classify Migraines

Post by Anastasia Sares

The takeaway

This study puts two exciting technologies, functional near-infrared spectroscopy and machine learning, to work toward what may eventually become a better way to diagnose migraines.

What's the science?

Migraines are debilitating health episodes that include symptoms like nausea, painful headaches, fatigue, and light or sound sensitivity. They are relatively common, affecting more than 1 in 10 people, with women three times more likely than men to suffer migraines. For some people, migraines also come with an aura, a neurological abnormality such as distorted vision.

Having migraines with aura is a risk factor for other conditions like stroke and heart attack, so it is important to identify them early. However, migraine diagnosis is based not on an objective test but on a questionnaire filled out by the patient. This has two problems: first, people are not great at remembering all of their symptoms while sitting in the doctor’s office filling out a form, and second, doctors have limited time to tease out these symptoms during an appointment.

This week in Biophotonics, Gulay and colleagues used a relatively new neuroimaging technology, functional near-infrared spectroscopy (fNIRS), combined with machine learning to classify participants as migraine patients with aura, migraine patients without aura, or no-migraine controls.

How did they do it?

The authors performed fNIRS scanning on 32 participants: eight who had migraines with aura, twelve who had migraines without aura, and twelve who had no migraines at all. The participants sat for a 20-second rest period followed by a 3-minute Stroop task while an fNIRS machine recorded data. The Stroop task is an executive function task that requires inhibition: participants are presented with a word (e.g., "red") printed in a different ink color (e.g., blue) and asked to name the ink color while ignoring the written word. fNIRS data are collected with a headband-like device containing tiny bulbs that shine light toward the scalp, where it scatters, with some light penetrating deeper and some shallower. The headband is also equipped with sensors that pick up the scattered light for analysis. The light was limited to two specific wavelengths that are absorbed by hemoglobin, the oxygen-carrying molecule in the blood. In this way, fNIRS can track oxygen-rich and oxygen-poor blood as it flows in the brain just below the skull.
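
For the curious, the standard arithmetic for turning light-intensity changes at two wavelengths into hemoglobin changes is the modified Beer-Lambert law: each wavelength’s change in optical density is a weighted sum of the changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin, so two wavelengths give two equations in two unknowns. The sketch below shows that calculation; the extinction coefficients, pathlength values, and intensities are illustrative placeholders, not values from this study.

```python
import numpy as np

# Rows: the two wavelengths; columns: extinction coefficients for HbO and HbR.
# Placeholder numbers chosen so that HbR dominates absorption at the shorter
# wavelength and HbO at the longer one, as in typical fNIRS setups.
E = np.array([[1.5, 3.8],    # shorter wavelength (~760 nm, placeholder)
              [2.5, 1.8]])   # longer wavelength  (~850 nm, placeholder)

L = 3.0     # source-detector separation (cm), typical order of magnitude
DPF = 6.0   # differential pathlength factor, tissue-dependent (assumed)

def hemoglobin_changes(I, I0):
    """I, I0: measured and baseline intensities at the two wavelengths.
    Returns (delta_HbO, delta_HbR) in the arbitrary units implied by E."""
    delta_od = -np.log(I / I0)                        # change in optical density
    return np.linalg.solve(E, delta_od / (L * DPF))   # solve the 2x2 system

dHbO, dHbR = hemoglobin_changes(I=np.array([0.95, 0.90]), I0=np.array([1.0, 1.0]))
print(f"dHbO = {dHbO:.4f}, dHbR = {dHbR:.4f}")
```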

Once the data were gathered, the authors performed many mathematical operations on the signal to determine its characteristics: variance, entropy, and power over time, among others. They then fed these values into a machine learning algorithm, training it to classify participants into the three groups.
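
To make this feature-extraction step more concrete, here is a small Python sketch that computes a few representative features (variance, histogram entropy, and low-frequency band power) from a single channel. The sampling rate, band limits, and choice of features are assumptions for illustration; the paper’s actual feature set is larger. One such feature vector per participant would then be stacked into a matrix and handed to the classifier.

```python
import numpy as np

def signal_features(x, fs=10.0):
    """Summary features from one fNIRS channel. x: 1-D signal,
    fs: sampling rate in Hz (assumed)."""
    feats = {}
    feats["variance"] = np.var(x)

    # Shannon entropy of the amplitude distribution (histogram-based).
    counts, _ = np.histogram(x, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    feats["entropy"] = -np.sum(p * np.log2(p))

    # Average power in a low-frequency band, via a simple periodogram.
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= 0.01) & (freqs <= 0.1)
    feats["band_power"] = psd[band].mean()

    return feats

rng = np.random.default_rng(1)
print(signal_features(rng.standard_normal(2000)))
```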

What did they find?

The model’s classification accuracy was evaluated with the leave-one-out method, in which the model is trained on all participants but one and then asked to classify the held-out participant as a test. This is repeated with each participant left out in turn, and the results are combined into an accuracy score. The authors’ model had a balanced accuracy of 84% for detecting migraines with aura, 98% for detecting migraines without aura, and 95% for detecting people without migraines at all. Classification was best when using data from the left prefrontal cortex.
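
Here is what leave-one-out evaluation looks like with scikit-learn, using random stand-in features, the study’s group sizes, and a generic support-vector classifier; the paper’s actual model and features may differ.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((32, 20))        # 32 participants x 20 stand-in features
y = np.repeat([0, 1, 2], [8, 12, 12])    # aura, no aura, no migraines

# Each participant is held out once and predicted by a model trained on the rest.
pred = cross_val_predict(SVC(kernel="rbf"), X, y, cv=LeaveOneOut())

# With random features this will hover around chance level.
print("balanced accuracy:", balanced_accuracy_score(y, pred))
```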

What's the impact?

This work shows the potential of a 5-minute neuroimaging protocol to detect migraines with aura, allowing for clinical follow-up. fNIRS is also practical: it is less disrupted by a person’s movements than MRI or EEG, less expensive than MRI, and can be less time-consuming than EEG to set up.

Access the original scientific publication here.

How The Brain Recovers From Sleep Debt

Post by Natalia Ladyka-Wojcik 

The takeaway

After a period of sleep deprivation, our bodies settle the (sleep) score by entering into a period of persistent and deep recovery sleep. For the first time, scientists have discovered the neural circuit that promotes recovery sleep, providing key insights into how the brain maintains sleep homeostasis. 

What's the science?

Sleep is governed by homeostatic control, the body’s mechanism for maintaining a stable internal environment despite changes in the external environment. When we experience sleep deprivation, the resulting accumulation of “sleep debt” prompts the body to restore sleep balance by initiating a period of persistent and deep recovery sleep. Although many molecular and cellular mechanisms have been proposed to regulate sleep, we still don’t know what specific neural circuits may detect or transmit homeostatic signals to sleep-promoting brain regions. This week in Science, Lee and colleagues set out to identify a neural circuit responsible for triggering this essential recovery sleep, using tools that allow neuroscientists to control the signaling of brain cells in mice.  

How did they do it?

In mammals, sleep can be categorized into two types: rapid eye movement (REM) sleep and non-REM sleep, the latter of which is considered a deeper, recovery-type sleep. Here, the authors mapped a group of excitatory neurons in the thalamus of mice that project to brain regions thought to promote non-REM sleep. Specifically, they investigated non-REM, homeostatic recovery sleep after activating and inhibiting neurons in the nucleus reuniens, a midline nucleus of the thalamus (the brain’s major relay station for sensory and motor information). The authors used a technique called chemogenetics, in which engineered receptors activated by a designer drug are used to control neuronal signaling, to inhibit neurons of the nucleus reuniens during sleep deprivation and determine whether subsequent non-REM recovery sleep would be affected. A complementary approach using optogenetics, a tool that uses targeted pulses of light to control the activation of neurons, tested whether stimulating excitatory neurons in the nucleus reuniens would promote sleep behaviors. Finally, the authors assessed the downstream impact of activating these neurons by tracing their projections to other non-REM sleep-promoting brain regions.

What did they find?

The authors found that inhibiting neurons in the thalamic nucleus reuniens decreased the quality of the homeostatic, non-REM recovery sleep that the mice subsequently experienced. In contrast, stimulating neurons in the nucleus reuniens led mice to exhibit longer, deeper non-REM sleep after a delay, suggesting that these neurons regulate sleep homeostasis. The authors also found that mice engaged in more behaviors associated with preparation for sleep, such as self-grooming, after optogenetic activation of these neurons. Importantly, after longer periods of sleep deprivation, neurons in the nucleus reuniens fired more frequently while the mice were awake – an effect that diminished with subsequent recovery sleep. Finally, the authors found that these neurons project to a small subthalamic region called the zona incerta to generate non-REM recovery sleep. Curiously, sleep deprivation enhanced interactions between the nucleus reuniens and the zona incerta, whereas disrupting synaptic plasticity in the nucleus reuniens impaired this interaction and reduced non-REM sleep.

What's the impact?

This study is the first to identify a neural circuit responsible for homeostatic control over non-REM recovery sleep, separate from regular sleep-wake cycles. Specifically, these findings suggest that during sleep deprivation, brain regions that promote non-REM sleep increase their communication to drive deeper, more restorative sleep. By uncovering the brain mechanisms that support recovery sleep in mice, this research provides insight into what may happen in the human brain after sleep loss, particularly in conditions like idiopathic hypersomnia, where patients experience an overwhelming and persistent need for sleep.

Access the original scientific publication here.