Predicting Chronic Pain States in Humans

Post by Lani Cupo

The takeaway

The authors developed a neural biomarker to predict chronic pain in patients, with the goal of facilitating diagnosis and treatment of neuropathic pain.

What's the science?

Neuropathic chronic pain (e.g., pain following a stroke or the amputation of a limb) causes great suffering in patients; however, it is difficult to develop objective biomarkers to aid diagnosis and treatment. It is also still not fully clear how brain activity changes with fluctuations in chronic pain levels, and how these changes differ from activity associated with acute pain. This week in Nature Neuroscience, Shirvalkar and colleagues presented a neural biomarker for chronic pain using implanted electrodes in patients, successfully predicting pain ratings.

How did they do it?

The authors enrolled four adult participants in their study (two women), three of whom had post-stroke chronic pain and one of whom had phantom limb pain. The authors implanted electrodes into two brain regions important in the processing of pain: the orbitofrontal cortex (OFC) and the anterior cingulate cortex (ACC). The study took place over 2.5–6 months, during which time participants were asked to record their pain at least 3 times per day. After recording their pain rating (which was inherently subjective, as pain is by definition a subjective, individualized experience), they pushed a button on a remote control that triggered a 30-second recording from the implanted electrodes. This in-depth recording method allowed researchers to track fluctuations in pain across the day as well as across the weeks of the study.

Next, the authors trained a machine learning model to predict subjective pain scores from the neural activity recorded by the implanted electrodes. They compared models trained on data from only one brain region with models combining data from both electrodes to see which brain region best predicted chronic neuropathic pain.
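As an illustration, a decoding pipeline of this kind can be sketched with scikit-learn's linear discriminant analysis (the classifier the authors report using). The features and data below are entirely synthetic stand-ins; the actual study derived its features from the ACC/OFC recordings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in features: 8 hypothetical measures per recording
# (e.g., signal power at different electrode contacts); NOT real data.
n_per_class = 100
low_pain = rng.normal(0.0, 1.0, size=(n_per_class, 8))
high_pain = rng.normal(0.8, 1.0, size=(n_per_class, 8))  # shifted mean
X = np.vstack([low_pain, high_pain])
y = np.repeat([0, 1], n_per_class)                       # 0 = low, 1 = high

# Cross-validated classification of high- vs. low-pain states
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Comparing models fit on one region's features versus both regions', as the authors did, amounts to re-running the same cross-validation with different feature subsets.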

Finally, the authors sought to compare the neural mechanisms underlying chronic pain with those underlying acute pain in a laboratory experiment. They brought the patients into the lab and presented thermal stimuli (heat at five different temperatures) to the most painful region of the body and to the same region on the opposite side. During the experiment, they recorded neural activity from the electrodes and trained a machine learning algorithm to predict subjective acute pain ratings from the neural activity alone.

What did they find?

First, the authors observed that patients had diurnal fluctuations in pain levels (over the 24-hour period); in some participants, they also found pain cycles recurring roughly every 3 days. Second, the authors successfully trained an algorithm (linear discriminant analysis) to classify subjective pain states as high vs. low. For three participants, the best predictions came from combining data from the ACC and OFC; however, overall the best subregion for predicting neuropathic pain was the contralateral OFC — the OFC in the hemisphere opposite the side of the perceived pain. For example, if pain was felt in the left leg, the right OFC was the most effective region for predicting pain. The results were stable across the months of the study, suggesting the model was robust in its predictions. Finally, the authors successfully trained a model to distinguish high- vs. low-pain states in the acute pain experiment, but importantly, only models that included data from the ACC were successful, unlike in the chronic pain setting. This suggests the ACC is more centrally involved in acute pain than in chronic pain.
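Because the pain ratings were self-reported at irregular times of day, a Lomb-Scargle periodogram is a natural way to look for cycles in such unevenly sampled data. The sketch below applies it to purely synthetic ratings with a built-in 3-day cycle (illustrative only; not the study's data, and not necessarily the authors' exact method):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Synthetic pain ratings at irregular times over 30 days,
# with a hidden 3-day cycle plus noise (hypothetical values).
t = np.sort(rng.uniform(0, 30, 120))                     # days
pain = 5 + 2 * np.sin(2 * np.pi * t / 3) + rng.normal(0, 0.5, t.size)

# Scan candidate periods from 1.5 to 10 days
periods = np.linspace(1.5, 10, 400)
ang_freqs = 2 * np.pi / periods                          # angular frequencies
power = lombscargle(t, pain - pain.mean(), ang_freqs)

best_period = periods[np.argmax(power)]
print(f"dominant cycle: {best_period:.2f} days")         # ~3 days
```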

What's the impact?

This study is the first to successfully predict subjective reports of chronic pain from intracranial recordings over a period of months. In time, these findings may be used to develop patient-specific metrics that aid in the diagnosis of chronic pain states. Further, implanted electrodes may be used to stimulate regions integral to chronic pain processing, reducing the pain that patients experience and improving their quality of life.

Access the original scientific publication here

Neuroscience of Reading and its Implications for Education

Post by Kulpreet Cheema

Literacy and Reading

In today's text-reliant society, reading and writing skills are critical to our ability to understand and engage with the world around us. Reading is a process of decoding text to acquire meaning, and while we often engage in it effortlessly and unconsciously, it is a psychologically complex process with various underlying components.

How does reading work in the brain?

The process of reading involves language-specific neural processes that include verbal and text processing, comprehension, and vocabulary. Additionally, general processes like working memory and attention interact with one another to derive meaning from text. Difficulties with any of these processes can cause challenges in reading and writing. For example, in a reading-based disorder like dyslexia, individuals struggle to process a word's distinct sounds and connect them with letters and words. This leads to incorrect decoding at the word level and ultimately results in comprehension breakdown.

While reading can often feel effortless, it is an evolutionarily recent skill relative to speaking. Therefore, there are no brain regions specialized for reading; instead, reading re-purposes brain regions intended for other processes. The neural circuitry of reading has been investigated for decades with neuroimaging technologies, two common ones being functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI).

fMRI measures changes in blood oxygenation to localize the brain areas involved whilst someone is engaged in a cognitive task. This is possible because neurons in an activated brain region require (and are delivered) more oxygen, and oxygenated blood has different magnetic properties than deoxygenated blood, so activated regions can be detected using the powerful magnets of an MRI scanner.

Cortical brain areas activated by reading are interspersed throughout the brain and connected by white matter tracts. These tracts enable communication between the brain regions to coordinate the various sub-processes involved in reading, and they can be identified with another neuroimaging methodology, DTI. DTI uses the same MRI scanner as fMRI but, instead of blood oxygenation, measures the movement of water molecules within white matter tracts to assess the tracts' integrity. Since white matter tracts are fibrous, strong, unimpeded diffusion of water along the direction of the fibers indicates that a tract is intact and well formed.

What circuitry is involved in reading?

Using converging evidence from both fMRI and DTI studies, researchers have mapped the neural network responsible for skilled reading. This network comprises three major components: the anterior network situated around the inferior frontal gyrus, the temporo-parietal region, consisting of the supramarginal gyrus and superior temporal gyrus, and the occipito-temporal region, including the fusiform gyrus and inferior/middle temporal gyri. These areas leverage white matter pathways to communicate with each other and accomplish the reading process. Using DTI, various reading-related white matter tracts have been identified, including the arcuate fasciculus (connecting temporal areas to the inferior frontal region) and the inferior longitudinal fasciculus (connecting anterior temporal to occipital regions).

How can we apply neuroscience findings to education?

While we’ve gained significant consensus on the neural basis of reading, leveraging this knowledge to enhance literacy teaching and learning requires further exploration. One field of study that seeks to translate neuroscience findings about learning into educational practices and policy is known as Educational Neuroscience. This emerging field was initially established through several neuroimaging studies investigating the neural basis of both skilled and disordered reading. As one example, research on dyslexia used neuroimaging techniques to reveal disrupted functional activity and structural integrity in the neural circuitry important for reading. When individuals with dyslexia read words, researchers identified reduced activity in the superior temporal gyrus, providing evidence for dyslexia’s neurobiological basis. Evidence of reduced activity in brain regions responsible for sound processing led to interventions targeting sound awareness, which normalized brain activity and had a downstream positive impact on reading behavior. However, such successes are few and far between, with most neuroscience studies merely corroborating behavioral findings rather than innovating toward new therapeutic measures. In the future, further investigations are needed to explore how neuroscience can better inform the improvement of reading skills. One promising avenue is the use of neuroimaging to identify pre-reading individuals at risk of developing dyslexia, allowing for timely intervention and positive remediation effects.

Looking to the future

In conclusion, neuroscience of reading and its application in educational settings could provide critical clues that inform interventions and help foster literacy. To address the challenges associated with reading difficulties, educators, psychologists, and neuroscientists must collaborate to design and implement effective programs and services. By unraveling the complexities of the reading process and harnessing the potential of educational neuroscience, we can empower individuals to become proficient readers, unlocking a world of knowledge and opportunities.

References

  1. Hung, C. O. Y. (2021). The role of executive function in reading comprehension among beginning readers. British Journal of Educational Psychology, 91(2), 600-616.
  2. Introduction to FMRI. Nuffield Department of Clinical Neurosciences. (n.d.). https://www.ndcn.ox.ac.uk/divisions/fmrib/what-is-fmri/introduction-to-fmri
  3. Kwok, F. Y., & Ansari, D. (2019). The promises of educational neuroscience: examples from literacy and numeracy. Learning: Research and Practice, 5(2), 189-200.
  4. Ozernov-Palchik, O., & Gabrieli, J. D. (2018). Neuroimaging, early identification, and personalized intervention for developmental dyslexia. Perspectives on Language and Literacy, 44(3), 15-20.
  5. Richlan, F., Kronbichler, M., & Wimmer, H. (2011). Meta-analyzing brain dysfunctions in dyslexic children and adults. NeuroImage, 56(3), 1735-1742.
  6. Shaywitz, S. E., Morris, R., & Shaywitz, B. A. (2008). The education of dyslexic children from childhood to young adulthood. Annual Review of Psychology, 59, 451-475.
  7. Soares, J. M., Marques, P., Alves, V., & Sousa, N. (2013). A hitchhiker's guide to diffusion tensor imaging. Frontiers in Neuroscience, 7, 31.
  8. Thomas, M. S., Ansari, D., & Knowland, V. C. (2019). Annual research review: Educational neuroscience: Progress and prospects. Journal of Child Psychology and Psychiatry, 60(4), 477-492.

Neurons and Astrocytes Interact to Create Day-Night Cycles

Post by Anastasia Sares

The takeaway

Recent work shows how a partnership between neurons and surrounding cells called astrocytes helps to regulate our body’s central clock. This highlights the importance of non-neuronal cells in brain function, and adds a piece to the puzzle of how the brain manages day/night cycles.

What's the science?

The body’s circadian rhythm includes cycles of wake and sleep, hunger and digestion, blood pressure, hormones, and many other daily patterns. The brain region responsible for this is the suprachiasmatic nucleus, which maintains a circadian rhythm even in the absence of any light. But how do all the cells in this nucleus stay synchronized with each other and avoid sending out contradictory signals? This mystery becomes even more puzzling given that the main neurotransmitter in this region, GABA, is inhibitory, which should inhibit activity across the whole network instead of creating the cycling behavior we actually see.

This week in PNAS, Patton and colleagues demonstrated that support cells called astrocytes help to regulate the activity of neurons in this area by “vacuuming up” the GABA floating around outside of cells during the day and letting it accumulate at night.

How did they do it?

The authors obtained the brains of mice and extracted the suprachiasmatic nucleus, slicing it into sections only micrometers thick and mounting these slices on membranes. The slices were kept in a solution that allowed the cells to live and the neurons to keep firing. Each slice was then infected with adeno-associated viral vectors (AAVs), which introduce genetic material so that the cell itself produces a custom molecule. In this case, the inserted gene encoded a fluorescent protein that latches on to GABA molecules. With the fluorescent sensor active, the brain slices would glow when GABA was present and go dark when it disappeared. The authors observed that GABA concentrations were low during the day and peaked at night, even though the neurons that should release the GABA were firing more during the day.

The authors then re-analyzed their previously published single-cell RNA-sequencing studies of suprachiasmatic nucleus slices harvested in daytime vs nighttime. Some of the genes being transcribed differently in day and night were involved in GABA transport by astrocytes, which are support cells present in brain tissue. Using the same fluorescent tagging method, they investigated the activity of these GABA transporters, and what happens when they are chemically blocked.

What did they find?

GABA transport proteins in astrocytes were up-regulated during the day, meaning that the astrocytes are likely “installing” them in their membranes and using them to move GABA out of the intercellular space. At night, the opposite is true: there are fewer GABA transport proteins, and thus GABA builds up in the intercellular space. This cycle, in turn, influences how often neurons in the suprachiasmatic nucleus fire, and how often secondary transmitter molecules called neuropeptides are released—these neuropeptides go on to influence circadian behavior.

Inhibiting the activity of GABA transport proteins disrupted the circadian rhythm in the brain slices. Conversely, initiating the astrocytes' clock restored circadian rhythm to “clock-less” neurons in slices genetically engineered to lack certain proteins needed for the circadian clock to function. So, while GABA was previously thought to play little role in controlling neuronal activity in this network, it now appears that astrocytes actively remove GABA during the day and allow it to accumulate at night, supporting daily cycles of neuronal activity.

What's the impact?

These findings call attention to the often-forgotten “support” cells found throughout neural tissue, showing that they may in fact be orchestrating important brain functions. They also bring us closer to understanding how our day/night cycles work, how they might be disrupted, and what the consequences of that disruption might be.