Predicting the Longitudinal Spread of Atrophy in Neurodegenerative Disorders

Post by Shireen Parimoo

What's the science?

Progressive neurodegenerative diseases like Alzheimer’s disease (AD) and frontotemporal dementia (FTD) are thought to result from the spread of misfolded proteins throughout the brain, eventually leading to neuronal loss and atrophy. The spread of brain atrophy typically follows a distinct pattern over the course of each disease, giving rise to a variety of behavioral symptoms. For example, AD pathology is characterized by the spread of misfolded tau protein that begins in the entorhinal cortex, a region of the brain linked to memory function. The pathology then spreads from this “epicenter” to other anatomically and functionally connected areas (i.e., its network). However, few longitudinal studies have investigated individual differences in the spread of atrophy from an epicenter to its network in neurodegenerative diseases. This week in Neuron, Brown and colleagues used network-based modeling of structural and functional magnetic resonance imaging (MRI) scans to predict longitudinal atrophy in patients with progressive neurodegenerative diseases.

How did they do it?

Structural MRI scans were obtained from 72 patients diagnosed with the behavioral variant of frontotemporal dementia (bvFTD) or the semantic variant of primary progressive aphasia (svPPA), as well as from 288 age-matched controls. Patients were scanned twice: once at baseline and again about a year later. Gray matter volume (GMV) was estimated at each voxel of the patients’ scans and compared against the control scans, producing a GMV atrophy map for each patient that identified regions of relative atrophy. A subset of the control participants also completed a task-free functional MRI scan that was used to generate functional connectivity (FC) maps, which identified sets of brain regions whose activity fluctuated together. To do this, the authors specified 192 cortical areas as seed regions and identified the brain regions co-activated with each seed, resulting in 192 FC maps for each participant. These maps were then averaged across participants to produce a group FC map for each cortical seed region. The authors then correlated the voxel-wise FC values of each seed region’s map with the voxel-wise atrophy values in each patient’s GMV map. The cortical seed region whose FC map was most highly correlated with GMV atrophy was chosen as the epicenter for that patient. For example, if the anterior temporal lobe’s FC map was most highly correlated with a patient’s GMV atrophy map, then the anterior temporal lobe was chosen as that patient’s epicenter.
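To make the epicenter-selection step concrete, here is a minimal sketch of the idea in Python, assuming each patient’s atrophy map and each seed region’s group FC map have already been vectorized over the same set of voxels. The function and variable names (find_epicenter, atrophy_map, fc_maps) are illustrative, not taken from the authors’ code.

```python
import numpy as np

def find_epicenter(atrophy_map, fc_maps):
    """Pick the seed region whose group FC map best matches a patient's atrophy.

    atrophy_map : 1-D array of atrophy values, one per voxel
    fc_maps     : 2-D array (n_seeds x n_voxels) of group-level FC maps
    """
    # Correlate each seed region's FC map with the patient's atrophy map ...
    correlations = np.array(
        [np.corrcoef(fc_map, atrophy_map)[0, 1] for fc_map in fc_maps]
    )
    # ... and take the seed with the strongest correlation as the epicenter.
    return int(np.argmax(correlations)), correlations

# Toy example: 192 seed regions and 5,000 voxels of random data.
rng = np.random.default_rng(0)
fc_maps = rng.normal(size=(192, 5000))
atrophy_map = rng.normal(size=5000)
epicenter, r_values = find_epicenter(atrophy_map, fc_maps)
```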

To identify the factors underlying the spread of atrophy from the epicenter, the authors specified a generalized additive model that predicted longitudinal atrophy in each functionally connected region of a network from three predictors: the region’s baseline atrophy, its shortest path length to the epicenter, and its nodal hazard. The shortest path length is the shortest distance, through the functional connectivity network, between a region and the epicenter. The nodal hazard is a measure of how much a region is at risk of atrophy based on the degree of atrophy present in its 5 functionally connected neighbours, with higher values indicating a greater risk of atrophy. To determine model accuracy, the authors correlated patients’ actual atrophy in the follow-up structural scans with their predicted atrophy values. A cut-off of r = 0.23 was used to classify predictions as accurate (r ≥ 0.23) or inaccurate (r < 0.23).
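The two network predictors and the accuracy criterion can be sketched roughly as follows, assuming a region-by-region functional connectivity matrix (fc) and region-wise baseline atrophy values are already in hand. Summarizing nodal hazard as the mean atrophy of each region’s five most strongly connected neighbours, and converting FC strength to edge length as 1/FC for the shortest-path computation, are simplifying assumptions of this sketch rather than the authors’ exact definitions.

```python
import numpy as np
import networkx as nx

def nodal_hazard(fc, baseline_atrophy, k=5):
    """Mean baseline atrophy of each region's k most strongly connected neighbours."""
    hazards = np.empty(len(baseline_atrophy))
    for region in range(len(baseline_atrophy)):
        strengths = fc[region].copy()
        strengths[region] = -np.inf              # ignore the self-connection
        neighbours = np.argsort(strengths)[-k:]  # k strongest FC neighbours
        hazards[region] = baseline_atrophy[neighbours].mean()
    return hazards

def shortest_path_to_epicenter(fc, epicenter):
    """Graph distance from every region to the epicenter (stronger FC = shorter edge)."""
    edge_lengths = 1.0 / np.clip(fc, 1e-6, None)  # weak or negative FC -> very long edges
    graph = nx.from_numpy_array(edge_lengths)
    lengths = nx.single_source_dijkstra_path_length(graph, epicenter, weight="weight")
    return np.array([lengths[region] for region in range(fc.shape[0])])

def prediction_accurate(predicted, observed, cutoff=0.23):
    """Apply the study's accuracy criterion: r >= 0.23 between predicted and observed atrophy."""
    return np.corrcoef(predicted, observed)[0, 1] >= cutoff
```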

What did they find?

Distinct epicenters of atrophy were observed across the two patient groups, including the anterior cingulate cortex and the frontoinsular cortex among those with bvFTD, and primarily the anterior temporal lobe in patients with svPPA. The spread of atrophy throughout the brain was also unique in each patient group. In bvFTD patients, atrophy progressed to the posterior cingulate cortex, precuneus, inferior parietal lobule, posterior inferior temporal cortex, and the dorsolateral prefrontal cortex. On the other hand, atrophy in svPPA patients spread to the orbitofrontal cortex, posterior temporal lobe, the anterior cingulate cortex, and the mid-cingulate cortex.


The spread of atrophy was predicted by each region’s shortest path length to the epicenter, nodal hazard, and baseline atrophy. In particular, regions closer to the epicenter showed the greatest atrophy over time, whereas regions farther from the epicenter showed little change. Similarly, regions with higher nodal hazard values (i.e., more atrophied neighbouring regions) had greater longitudinal atrophy than regions with low nodal hazard values (i.e., fewer atrophied neighbours). Interestingly, the relationship between baseline and longitudinal atrophy followed an inverted-U pattern: regions with intermediate levels of baseline atrophy showed the greatest atrophy over time compared to regions with low or high baseline atrophy. The predicted spread of atrophy correlated with the actual spread of atrophy over time (r = 0.64), and the model accurately predicted longitudinal atrophy for 59 of the 72 patients in the study. Thus, there was high spatial overlap between the model’s predictions and the atrophy actually observed in the patient scans one year later.

What's the impact?

This study is the first to identify patient-specific epicenters of gray matter atrophy in bvFTD and svPPA and to predict the longitudinal spread of atrophy from those epicenters. There is often considerable heterogeneity in both the behavior and the neuropathology associated with neurodegenerative diseases, and the network-based approach used here to characterize the spread of pathology has important implications for accurately predicting individual trajectories of disease progression.


Brown et al. Patient-tailored, connectivity-based forecasts of spreading brain atrophy. Neuron (2019). Access the original scientific publication here.

Awake Memory Consolidation Can Bias the Way New Information Is Perceived

Post by Flora Moujaes 

What's the science?

Our brains consist of an estimated 100 billion neurons, which are connected to each other by over 100 trillion synapses. Whenever you experience an event, a specific set of neurons and pattern of connections is activated, and memories are thought to be stored in these patterns of connections. However, we still don’t fully understand the process through which representations of experiences are consolidated into long-term memory. This week in Trends in Cognitive Sciences, Tambini and Davachi review recent evidence from human fMRI studies showing that memory consolidation occurs through reactivations that happen outside of conscious awareness during awake periods, and that this consolidation can bias ongoing cognition.

What do we already know? 

The famous patient H.M. had his hippocampus and surrounding medial temporal lobe (MTL) surgically removed in 1953 in an attempt to cure his epilepsy. The surgery left him unable to form new memories, although many of his older memories remained intact. This indicates that while the hippocampus and MTL are vital for the formation of new memories, long-term memory involves additional storage in cortical networks outside the MTL. Studies across multiple species have since confirmed that the hippocampus is vital for acquiring new memories and that these memories can then be transformed across hippocampal-cortical networks for storage in long-term memory. This transformation is widely believed to involve repeated memory reactivation, both during sleep and ‘offline’ during awake periods.

What’s new? 

The authors propose that repeated memory reactivation in the hippocampus during awake periods is related to memory strengthening. This reactivation happens offline, outside of conscious awareness. While more restful states promote memory reactivation, studies have shown that it can also occur alongside cognitively demanding tasks. Memory reactivation is closely tied to the salience of the initial event, as it is more advantageous to strengthen memories with greater learning potential. Memory reactivation is also associated with long-term memory storage, as studies have shown that reactivation promotes memory integration across hippocampal-cortical networks. What is also ‘new’ is that, although these processes have mostly been studied in animals, the authors summarize evidence that similar mechanisms can now be studied in humans using non-invasive measures like fMRI.

The authors also propose that spontaneous reactivation can shape the way we experience and interact with the world. For example, emotional arousal is known to enhance memory, so if an emotional memory is consolidated offline while a memory task is being performed, performance on that task might improve. However, the mechanisms underlying the reactivation of memories may be similar to those underlying memory retrieval, so future work is needed to disentangle reactivation that supports ‘online’ cognition from reactivation that supports consolidation.


What's the bottom line? 

Human neuroimaging studies have shown that memory consolidation through reactivation occurs during awake periods. Tambini and Davachi emphasise that memory consolidation is a complex process: (1) it is related to memory strengthening, (2) it tracks the salience of information and thus whether that information will be stored in long-term memory, and (3) it can bias the ways in which new information is encountered and processed. Overall, this review furthers our understanding of the process through which experiences are consolidated into long-term memory and highlights many new and exciting avenues for future research.

Tambini and Davachi. Awake Reactivation of Prior Experiences Consolidates Memories and Biases Cognition. Trends in Cognitive Sciences (2019). Access the original scientific publication here.

Working Together Changes the Way We Process Others’ Actions

Post by Anastasia Sares

What's the science?

Some time ago, just before the turn of the millennium, scientists discovered that when one monkey watched another monkey perform an action, like reaching for an object, neurons fired in the observer’s brain as if it were performing the action itself. These neurons, often called ‘mirror neurons,’ are the subject of much debate, with some researchers claiming that they underlie abilities as complex as human empathy, while others remain more skeptical.

In general, humans are great imitators. It takes little effort for us to repeat someone else’s actions, and much more effort to withhold that response (as any child who has played “Simon Says” will tell you). However, we also seem to be very good at performing separate, complementary actions while working towards a goal. Think of lumberjacks sawing a tree trunk, or musicians performing a duet. What supports these uniquely human activities is what Sacheli and colleagues call a “Dyadic Motor Plan,” and this week in Cerebral Cortex, they aimed to find the brain regions involved. The study was performed at the University of Milano-Bicocca, Milan, Italy, in collaboration with the IRCCS Istituto Ortopedico Galeazzi, Milan, Italy.

How did they do it?

Participants completed a music-like task with interactive and non-interactive conditions. With a “partner” (seen via video displayed on a monitor), they took turns performing one of two actions on a wooden cube: touching the top with the index finger or pinching its sides. Each action was paired with a musical tone (G or C). The participant always saw the partner’s action and heard the resulting note before performing their own action. Each sequence was four actions long: #1 partner, #2 participant, #3 partner, and #4 participant. A small colored square indicating how the participant should respond on that trial was presented at the end of action #1 (after the participant had seen their partner’s first action). In the non-interactive condition, the square’s color indicated a previously learned sequence of notes that the participant should perform, regardless of what their partner did. In the interactive condition, the color indicated a previously learned melody that they were expected to continue along with their video partner, cooperating to produce all the necessary notes. In addition to manipulating interactivity, the authors had participants perform some trials in which their action (tap or pinch) matched their partner’s previous action and some in which the actions did not match. Humans can experience visual interference when they see a partner perform an action different from the one they are about to perform; in other words, it takes more effort to process and execute a non-matching action. The interactive condition was built so that the partner’s actions could be predicted and were part of a shared goal (i.e., playing a melody together), which should lead to a “dyadic motor plan” and reduce visual interference.
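For readers who like to see the design laid out explicitly, here is a rough sketch of the 2 x 2 crossing of conditions (interactivity x action match). The field names and the specific action-to-note assignment are illustrative assumptions, not details taken from the paper.

```python
from itertools import product

# Hypothetical action-to-note mapping; the task pairs each action with G or C,
# but the exact assignment used in the study is an assumption here.
note_for_action = {"tap": "G", "pinch": "C"}

# Cross interactivity with the partner's and participant's actions; trials where
# the two actions coincide are the "matching" trials described above.
trial_types = [
    {
        "interactivity": interactivity,
        "partner_action": partner,
        "participant_action": participant,
        "match": partner == participant,
        "participant_note": note_for_action[participant],
    }
    for interactivity, partner, participant in product(
        ["interactive", "non-interactive"], ["tap", "pinch"], ["tap", "pinch"]
    )
]
```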

What did they find?

The authors measured response times as well as brain activity. In the non-interactive condition, reaction times were longer for non-matching actions: evidence of visual interference. In the interactive condition, however, there was no difference in reaction time between matching and non-matching actions. The authors also found a region of the frontal lobe (the premotor cortex) where the pattern of brain activity differed between conditions. This region was selectively active during the interactive condition regardless of whether the participant’s actions matched their partner’s, indicating that its activity did not simply reflect imitation of the partner’s action. However, there was an interaction between condition (interactive versus non-interactive) and time within the four-part sequence: the region exhibited greater activity during action #1, before the colored square had been presented, than later in the sequence. Because the colored square in the interactive condition indicated which goal (melody) participants were working towards with their partner, this brain activity likely reflects the participants’ attempt to predict the partner’s next action and musical note.


The authors interpreted this to mean that when we do something together with a partner, our brain tries to predict the partner’s contribution to the shared goal and check whether it meets expectations. They also emphasized that this region sits within the frontoparietal network, which is involved in prediction, such as anticipating a partner’s goals.

What's the impact?

This research shows that pursuing common goals with another member of our species has an impact on how our brains process and react to visual information. Mimicking another’s actions may be helpful in some cases, but the reflex to imitate takes a back seat when we have more information and a better sense of context.


Sacheli et al. How Task Interactivity Shapes Action Observation. Cerebral Cortex (2019). Access the original scientific publication here.