Biomarkers for Diagnosing Alzheimer’s Disease in the Indigenous Population

Post by Soumilee Chaudhuri

The takeaway

This study reveals unique patterns of blood-based biomarkers linked to Alzheimer’s Disease (AD) in older American Indian (AI) adults, a critically understudied population in healthcare and biomedical science. These biomarkers point to an earlier and more widespread presence of AD-related changes in this Indigenous cohort compared to other ethnic groups.

What's the science?

Alzheimer’s Disease (AD) is the most common form of dementia, affecting more than 6.5 million people in the United States. Because AD is a major public health concern, early identification is important for effective prevention and treatment strategies for this debilitating disorder. However, existing diagnostic methods such as cerebrospinal fluid (CSF) analysis and positron emission tomography (PET) imaging are invasive and expensive, and they remain largely inaccessible to minoritized populations, including American Indians, for whom care is often out of reach due to socioeconomic and systemic factors. Therefore, there is a critical need for noninvasive, low-cost biomarkers that can accurately detect AD pathology, particularly in these underserved communities. The Strong Heart Study (SHS) addresses this gap by investigating blood biomarkers related to AD in a cohort of older American Indian individuals recruited from designated field centers in Arizona, North and South Dakota, and Oklahoma. By examining associations between these biomarkers and clinical, imaging, and cognitive findings, the authors provide critical insights into AD characteristics, diagnostics, and risk factors specific to the American Indian elderly population.

How did they do it?

Initially, SHS recruited American Indian adults aged 64-94 years from tribal lands in the US Northern Plains, Southern Plains, and Southwest starting in 1981-1991; thereafter, survivors of this initial cohort were recruited for cognitive aging and AD studies in 2010 (N = 818) and invited back for a second visit in 2017 (N = 403). Five blood biomarkers of AD, including phosphorylated tau (ptau), amyloid beta (Aβ), glial fibrillary acidic protein (GFAP), and neurofilament light chain (NfL), were measured in all participants using designated research platforms. Participants also underwent magnetic resonance imaging (MRI) and neuropsychological and cognitive testing (covering memory, executive function, and simple and divided attention, among other domains). The researchers used statistical methods such as regression analyses and receiver operating characteristic (ROC) analysis to assess the relationships between these blood biomarkers and various imaging measures and cognitive outcomes.
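To illustrate the kind of analysis involved, the sketch below runs a regression relating a plasma biomarker to a cognitive score and an ROC analysis of how well that biomarker separates impaired from unimpaired participants. This is only a minimal Python illustration on simulated data; the variable names, values, and model choices are assumptions for demonstration and do not reproduce the authors’ actual pipeline or results.

# Minimal, illustrative sketch (simulated data only): a linear regression of a
# cognitive score on a plasma biomarker, plus an ROC analysis of how well the
# biomarker separates cognitively impaired from unimpaired participants.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
age = rng.uniform(64, 94, n)                        # age range reported for the cohort
ptau = rng.lognormal(mean=1.0, sigma=0.4, size=n)   # hypothetical plasma p-tau levels
memory = 50 - 0.2 * (age - 64) - 3.0 * np.log(ptau) + rng.normal(0, 5, n)
impaired = (memory < np.percentile(memory, 20)).astype(int)

# Regression: memory score ~ log(biomarker), adjusting for age
X = sm.add_constant(np.column_stack([np.log(ptau), age]))
print(sm.OLS(memory, X).fit().summary())

# ROC analysis: how well does the biomarker discriminate impaired vs. unimpaired?
print("AUC for plasma p-tau:", round(roc_auc_score(impaired, ptau), 2))

In the study itself, analogous models would relate each measured biomarker to MRI measures and cognitive test scores while accounting for relevant covariates.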

What did they find?

The researchers found significant differences in blood biomarker levels related to AD pathology in the SHS cohort of older American Indian individuals compared to other ethnic groups. Levels of normal amyloid beta (Aβ), the lack of which is a hallmark of AD pathology, were significantly lower in this cohort than in non-Hispanic White (NHW), African American (AA), and Hispanic/Latino populations, indicating almost three times the degree of pathology compared to other populations. However, levels of GFAP and NfL, two other AD biomarkers, were similar in this American Indian cohort and in other age-comparable populations. Associations were also observed between blood biomarker levels and MRI and cognitive test scores, suggesting potential implications for AD diagnosis and risk assessment in this population. Taken together, these findings suggest that previous studies reporting comparable risk between American Indian and non-Hispanic White individuals may have underestimated AD prevalence in American Indians.

What's the impact?

This is the first study to investigate blood biomarkers for Alzheimer’s Disease in American Indians, an Indigenous population historically underrepresented in research. These findings highlight the importance of understanding cognition and aging in diverse populations facing healthcare disparities in order to improve diagnosis and craft efficient, non-invasive, and affordable treatment strategies. The study also suggests that previous work may have substantially underestimated the risk of AD in this vulnerable population, and it urges researchers to recruit participants from all demographics in AD studies so that diagnosis and treatment strategies can be tailored to everyone.

Access the original scientific publication here.

How Does the Endocannabinoid System Reduce Chronic Pain Following Injury?

Post by Lani Cupo

The takeaway

Neuropathic pain can result from injury or disease and has been related to disruptions in circadian rhythms. Evidence suggests a novel link between circadian rhythms, the endocannabinoid system, and neuropathic pain.

What's the science?

Prior evidence suggests that disruption of circadian rhythms can increase sensitivity to neuropathic pain; however, the role of the underlying genes and proteins that control circadian rhythms (clock genes) is still poorly understood. This week in PNAS Nexus, Yamakawa and colleagues use mouse models to investigate the role of clock genes in the development of neuropathic pain, finding a previously undocumented role for clock genes in neuropathic pain and a link with the endocannabinoid system.

How did they do it?

The authors performed a set of experiments in mouse models to investigate the role of a specific protein known as Period2 (Per2), which is integral to regulating circadian rhythms, in the development of neuropathic pain. To induce neuropathic pain in the mice, the authors used a well-established model involving ligation (tying off) of part of the sciatic nerve in the hind limb, producing chronic pain that can be measured with tests of pain sensitivity. First, the authors performed the operation in control mice as well as in mice lacking the Per2 protein and examined whether the mice without Per2 still developed hypersensitivity to pain. Additionally, they examined the quantity and form of glial cells following the injury.

To examine which receptors were involved in neuropathic pain, the authors injected a series of compounds that each blocked a specific receptor in turn and examined the pain response; if the pain response was absent when a certain receptor was blocked, they would know that receptor was key to hypersensitivity to pain. Next, the authors sought compounds whose production was controlled by binding of the identified receptors, as well as the cells that produced these compounds. Finally, they examined whether increasing expression of these receptors in mice with functioning Per2 protein reduced the neuropathic pain response.

What did they find?

First, the authors were surprised to find that mice without Per2 showed no evidence of hypersensitivity to pain. They had expected that the Per2 protein was involved in fluctuations of pain sensitivity over the day; however, their results indicate that Per2 is actually involved in the development of pain sensitization in general. While pain hypersensitivity was absent in mice lacking Per2, the authors observed alterations in glial cells in mice both with and without Per2, suggesting that the lack of Per2 did not prevent these injury-related glial changes.

Next, the authors identified a specific type of adrenergic receptor (α1-AR) involved in the lack of pain hypersensitization in mice without Per2. This receptor belongs to a superfamily (G-protein coupled receptors) whose members, when activated, trigger the production of other signaling compounds within a cell. In this case, the authors found that in mice without Per2, levels of an endocannabinoid, 2-AG, were increased, with its production modulated by activation of α1-AR. Specifically, they found that Per2 alters the expression of these receptors and the levels of 2-AG produced by astrocytes in the spinal cord. In summary, disrupting circadian rhythms by altering the protein Per2 changed the expression of a specific receptor on astrocytes, which in turn changed the levels of the endocannabinoid 2-AG and reduced pain hypersensitivity.

What's the impact?

This study describes a new role of the circadian clock proteins and the endocannabinoid system in the development of neuropathic pain. The results increase the understanding of how disruptions in sleep cycles may impact neuropathic pain and may, in time, lead to new forms of treatment.

Access the original scientific publication here.

The Shallow Brain Hypothesis

Post by Meredith McCarty

Is a neural network a good model of brain function? 

The brain is a complex physical system that enables the processing of sensory information, the formation of memories, and the guidance of behavior and cognition. To advance the field of machine learning, artificial neural networks were developed, inspired by our understanding of brain connectivity and function. These networks, which typically run on specialized hardware such as graphics processing units (GPUs, originally designed for video game graphics), are now used in scientific and technical applications including healthcare, scientific research, aerospace engineering, and artificial intelligence.

In neuroscience research, the design of neural networks that can capture aspects of how the brain processes information has important implications for theoretical and experimental understanding. However, whether contemporary neural network techniques adequately capture the complexity and structure of the brain is under debate.

The complex architecture of the brain

To understand the current debate of how to best design neural networks, we must first understand the basic architecture of the brain. 

When sensory information (visual, auditory, taste, touch) travels from the peripheral nervous system into the central nervous system, these signals arrive at subcortical regions and are relayed to a brain region called the thalamus. The thalamus is located deep within the brain, beneath the cortex, but exhibits rich connectivity with cortical and subcortical regions. Some thalamic regions receive and transmit information from subcortical sources (first-order nuclei), while others transmit information between cortical regions (higher-order nuclei). These higher-order thalamo-cortical dynamics are the subject of much current research, as these signals have been found to be involved not just in sensory processing, but also in attention, arousal, consciousness, and many other cognitive functions.

Higher-order thalamic nuclei receive and transmit information to the cortex via complex connectivity patterns. Within the cortex, pyramidal neurons receive information from numerous cortical and subcortical sources; these neurons are unique in that they are the most excitatory cells within a given cortical column. There are many local recurrent connections within each cortical column, as well as long-range connections between distant cortical columns across the cortex. As such, the cortex is involved in both primary sensory processing and higher cognitive abilities, and it is strongly interconnected via pyramidal neurons that transmit information to distant cortical, thalamic, and subcortical regions.

While this is an overly simplistic summary, the connectivity between subcortical, thalamic, and cortical regions is an essential feature of neural dynamics. However, much remains to be understood about this complex, interconnected system.

Hierarchical deep learning neural network models

Early development of neural network models was based on observed connectivity patterns in the visual cortex. Researchers found evidence of hierarchical information processing, from lower to higher cortical areas. Feedforward neural network models are inspired by this architecture and are generally structured with information flowing from input layers, through hidden layers, to output layers. 

Deep learning methods introduce “learning” into these networks through an algorithm known as backpropagation, which enables the model to fine-tune itself. This method requires adjusting weights throughout the network hierarchy, and there is some debate as to how such an adjustment could be implemented at the rapid scale present in the brain’s architecture. Contemporary neural network models often utilize recurrence, meaning that information flows bidirectionally, both forward and backward. There is a diversity of architectures used in current neural network modeling, but much debate as to whether a primarily hierarchical network design is capable of capturing the computations occurring in the brain.
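To make the hierarchical, feedforward picture concrete, here is a minimal sketch of such a network trained with backpropagation, written in Python using PyTorch. The layer sizes and the toy classification task are arbitrary illustrations, not a model of any specific brain area.

import torch
from torch import nn

# A small hierarchical feedforward network: input -> hidden layers -> output.
model = nn.Sequential(
    nn.Linear(100, 64),  # "lower" processing stage
    nn.ReLU(),
    nn.Linear(64, 32),   # "higher" processing stage
    nn.ReLU(),
    nn.Linear(32, 10),   # output stage
)

# Toy data: 256 random input patterns with arbitrary class labels.
x = torch.randn(256, 100)
y = torch.randint(0, 10, (256,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # backpropagation: the error signal flows backward through every layer
    optimizer.step()  # weights at every level of the hierarchy are adjusted
print("final loss:", loss.item())

Note that a single error computed at the output drives weight changes at every level of the stack; the debate mentioned above concerns whether anything like this whole-hierarchy credit assignment could operate at the speed and scale observed in the brain.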

What’s the Shallow Brain Hypothesis?

This potential discrepancy has led to the development of the Shallow Brain hypothesis. The focus of this hypothesis is that including the thalamo-cortical and subcortical connectivity patterns of the brain (as opposed to relying on a primarily hierarchical network) is essential to model neural dynamics effectively. The primary tenet of this hypothesis is that “hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute.” In other words, the transmission of information from the deep regions of the brain directly to the outer cortex and vice versa, bypassing the hierarchical transmission of information through each layer, is very important to brain function.

The Shallow Brain hypothesis builds on evidence that each cortical column is a highly complex computational unit, specialized to process information through its own distinct recurrent architecture. Across the classical cortical hierarchy, these distributed cortical columns form a massive array of parallel recurrent networks. Through extensive thalamo-cortical and cortico-subcortical connections, these parallel recurrent networks are integrated with one another, enabling flexible and rapid information processing in the brain.

Proposed benefits of this architecture include a more physiologically plausible mechanism for local learning, faster information flow in a parallel rather than serial architecture, and the ability of network models to capture complex representations and flexibly integrate features. The hypothesis outlines many dimensions along which a Shallow Brain architecture could more accurately and realistically capture the dynamics of information processing in the brain.
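As a rough contrast with the serial stack sketched earlier, the toy Python/PyTorch schematic below puts several shallow recurrent “columns” in parallel and integrates them through a single shared hub that broadcasts a summary signal back to every column. The module names, sizes, and the idea of collapsing subcortical integration into one linear layer are illustrative assumptions, not a model proposed by the hypothesis’ authors.

import torch
from torch import nn

class ShallowParallelNet(nn.Module):
    """Toy schematic: parallel shallow recurrent 'columns' integrated by a shared hub."""

    def __init__(self, n_columns=4, in_dim=32, hid_dim=64):
        super().__init__()
        self.hid_dim = hid_dim
        # Each "column" is a single recurrent cell: shallow, but recurrent.
        self.columns = nn.ModuleList(
            [nn.GRUCell(in_dim + hid_dim, hid_dim) for _ in range(n_columns)]
        )
        # A shared hub (very loosely thalamus-like) that integrates all columns at once.
        self.hub = nn.Linear(n_columns * hid_dim, hid_dim)

    def forward(self, x_seq):
        # x_seq has shape (time, batch, in_dim)
        batch = x_seq.shape[1]
        states = [x_seq.new_zeros(batch, self.hid_dim) for _ in self.columns]
        hub_signal = x_seq.new_zeros(batch, self.hid_dim)
        for x in x_seq:
            # Every column sees the raw input plus the shared hub signal, a direct
            # "bypass" route that skips any long serial hierarchy.
            states = [
                cell(torch.cat([x, hub_signal], dim=-1), h)
                for cell, h in zip(self.columns, states)
            ]
            # The hub integrates all columns in parallel and broadcasts a summary.
            hub_signal = torch.tanh(self.hub(torch.cat(states, dim=-1)))
        return hub_signal

net = ShallowParallelNet()
out = net(torch.randn(10, 8, 32))  # 10 time steps, batch of 8, 32 input features
print(out.shape)                   # torch.Size([8, 64])

Each column here is only one recurrent step deep, yet information can cross between all columns through the hub at every time step rather than climbing a long serial stack, which is the kind of fast, parallel integration the hypothesis emphasizes.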

The Shallow Brain hypothesis raises many interesting questions, with implications for neuroscience and computational modeling research. 

  • Are neural networks with primarily cortico-centric designs and theoretical underpinnings missing essential features of information processing that occur in subcortical (i.e., deep) regions of the brain? 

  • Are shallow architectures, as proposed in the Shallow Brain hypothesis, able to outperform other architectures in capturing neural dynamics?

  • Does the thalamus play an essential role in information processing, and does disruption of thalamic activity lead to deficits in learning and other cognitive faculties?

  • Finally, does the integration of parallel cortical processing occur at a cortical or a subcortical level?

The development of novel hypotheses of how neural networks should be designed has implications for both neuroscientific research and technological application alike.

References

Sherman, S.M. The thalamus is more than just a relay. Current Opinion in Neurobiology. 2007.

Kumar, V.J., Beckmann, C.F., Scheffler, K., Grodd, W. Relay and higher-order thalamic nuclei show an intertwined functional association with cortical networks. Communications Biology. 2022.

LeCun, Y., Bengio, Y., Hinton, G. Deep learning. Nature. 2015.

Oldenburg, I.A., Hendricks, W.D., Handy, G., Shamardani, K., Bounds, H.A., Doiron, B., Adesnik, H. The logic of recurrent circuits in the primary visual cortex. Nature Neuroscience. 2024.

Voges, N., Lima, V., Hausmann, J., Brovelli, A., Battaglia, D. Decomposing neural circuit function into information processing primitives. Journal of Neuroscience. 2023.

Sherf, N., Shamir, M. Multiplexing rhythmic information by spike timing dependent plasticity. PLoS Computational Biology. 2020.