Rugby Players Show Signs of Neurodegeneration in the Brain

Post by Anastasia Sares

The takeaway

This study reveals that former rugby players have elevated blood levels of a form of the protein tau that is associated with neurodegeneration, underscoring the risks of participating in contact sports in which sub-concussive head impacts are common.

What's the science?

Rugby is a high-contact sport in which players can expect to experience head impacts regularly. Previous research has shown that players of high-contact sports have an increased risk of dementia, specifically Chronic Traumatic Encephalopathy (CTE), a neurodegenerative disease first made famous in boxers, whose symptoms can include mood swings, aggression, and memory problems. Studies of brain tissue from players who died with CTE show changes in the levels of dementia-related molecules, specifically a protein called tau. This is consistent with the idea that the head impacts and concussions suffered by players may trigger processes of neurodegeneration. However, it is difficult to track these subtle processes in people who are still alive but whose concussions or injuries are far in the past.

This week in Brain, Graham and colleagues showed changes in dementia-related molecules in blood samples from a large group of ex-rugby players, along with MRI data showing decreased brain volume and disrupted connectivity.

How did they do it?

Participants included 200 former rugby players and 33 healthy controls, on average in their forties, along with a sample of older adults from a separate study, including 69 people with late-onset Alzheimer’s disease and their age-matched healthy controls. The authors obtained blood samples from the participants and used precise immunosorbent assays to detect specific proteins. An assay like this uses synthesized immune proteins (antibodies) that normally bind to foreign invading substances. By engineering the antibodies to bind to a chosen target protein instead, researchers can capture that protein and measure its quantity. In this case, they were interested in dementia-related molecules such as amyloid beta and phosphorylated tau217 (p-tau217), as well as other markers of brain trauma (plasma neurofilament light and glial fibrillary acidic protein). Participants were also scanned with MRI so that the size and shape of different brain regions could be estimated.
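
To make the quantification step concrete, here is a minimal Python sketch of how an immunoassay readout is typically converted into a protein concentration using a standard curve. The calibration points, sample signals, and the use of simple linear interpolation are all illustrative assumptions rather than details from the study; real assays are usually fit with a four-parameter logistic curve.

```python
import numpy as np

# Hypothetical calibration (standard) curve: known p-tau217 concentrations (pg/mL)
# and the assay signal each produces. Linear interpolation is used here only for
# illustration; real immunoassays typically use a 4-parameter logistic fit.
standard_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])          # pg/mL
standard_signal = np.array([0.02, 0.11, 0.21, 0.40, 0.78, 1.50])  # arbitrary units

def signal_to_concentration(signal):
    """Convert a measured assay signal to an estimated concentration
    by interpolating along the standard curve."""
    return np.interp(signal, standard_signal, standard_conc)

# Hypothetical signals measured from three participant plasma samples
sample_signals = np.array([0.15, 0.55, 0.95])
print(signal_to_concentration(sample_signals))  # estimated pg/mL for each sample
```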

What did they find?

The rugby players had elevated levels of one key molecule involved in neurodegeneration: p-tau217. This increase in p-tau217 was associated with greater odds of traumatic encephalopathy syndrome (the clinical/behavioral symptoms of CTE). Alzheimer’s patients, on the other hand, had elevated levels of all molecules, even above the levels of the rugby players. 

MRI scans showed that players had lower brain volume in certain areas, such as the frontal and cingulate cortices and the hippocampus, regions involved in executive function, emotional regulation, and memory. These reductions in brain volume were related to the amount of time spent playing professionally. Finally, higher levels of p-tau217 in the blood were related to smaller volume in the hippocampus, a center of emotion and memory formation. It is important to note that these rugby players were recruited by self-referral, and one reason for self-referral could be cognitive concerns, so the results in this sample may be more pronounced than they would be for rugby players in general.

What's the impact?

This study shows that rugby players presenting with cognitive concerns do indeed have elevated levels of tau protein, and that this elevation is correlated with structural brain changes that could be part of CTE. Given their risk, it is important to monitor rugby players for signs of cognitive decline, and the methods used in this study are a useful step toward developing that monitoring capability.

Access the original scientific publication here.

Large Language Models Could Influence Voter Attitudes in Elections

Post by Rebecca Glisson

The takeaway

Large language models (LLMs) such as ChatGPT can engage people in persuasive conversations that may change their opinions. When voters conversed with an LLM about election candidates, their attitudes toward the candidates shifted in the direction the model was assigned to argue.

What's the science?

Large language models (LLMs) are now widely used by the general public to quickly gather information about unfamiliar topics, even though these models often present misleading or false information as fact. There is growing concern about how this will affect voter decisions in democratic elections. This week in Nature, Lin and colleagues studied how interacting with LLMs can change voter attitudes toward political candidates.

How did they do it?

The authors tested the effects of human-AI conversations on voter attitudes for four elections held in 2024 and 2025: a United States presidential election, a Massachusetts ballot measure on legalizing psychedelic drugs, a Canadian federal election, and a Polish presidential election. They asked participants to rate, on a scale of 0 to 100, their attitude toward each candidate or voting option and how likely they would be to vote. Each participant then had a “conversation” with an LLM that tried to persuade them toward one candidate (or ballot position) or the other. The authors used several different LLMs for the experiment, including the widely known ChatGPT, as well as DeepSeek, Llama, and a combination of models using Vegapunk. Each model was instructed to have a positive and respectful conversation with the participant while working to increase the participant’s support for its assigned candidate. After the conversation, participants were again asked to rate their support from 0 to 100, and the authors compared their answers before and after the interaction with the LLM. Finally, the authors used a combination of Perplexity AI’s LLM and professional fact-checkers to assess the accuracy of the statements each LLM made during its conversations with participants.
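
To illustrate the pre/post design, here is a minimal Python sketch of how attitude shifts could be summarized from 0-100 ratings collected before and after the conversation, grouped by the direction the LLM was assigned to argue. The participant records and field names are hypothetical, not taken from the paper.

```python
import statistics

# Hypothetical records: each participant rates support for a candidate from 0 to 100
# before and after talking to an LLM assigned to argue either "for" or "against"
# that candidate.
participants = [
    {"assigned": "for",     "pre": 30, "post": 42},
    {"assigned": "for",     "pre": 75, "post": 78},
    {"assigned": "against", "pre": 60, "post": 49},
    {"assigned": "against", "pre": 20, "post": 18},
]

def mean_shift(records, assigned):
    """Average change in support (post - pre) among participants whose LLM
    was assigned a given persuasion direction."""
    shifts = [r["post"] - r["pre"] for r in records if r["assigned"] == assigned]
    return statistics.mean(shifts)

print("Mean shift, pro-candidate LLM: ", mean_shift(participants, "for"))
print("Mean shift, anti-candidate LLM:", mean_shift(participants, "against"))
```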

What did they find?

The authors found that, for each group and election, participants were persuaded in the direction the model was assigned to argue. This effect was even stronger when participants interacted with an LLM advocating for the opposite of their initial preference. For example, a participant who initially supported Trump in the US presidential election and then interacted with a pro-Harris LLM shifted further toward Harris than a participant who had initially supported Harris. The same trends held for the ballot measure in Massachusetts and the elections in Canada and Poland. The authors also found that, overall, the statements made by the LLMs were mostly accurate. However, LLMs arguing in support of the right-leaning candidate in each country made more inaccurate statements.

What's the impact?

This study is the first to show that LLMs, which have rapidly gained popularity with the general public, can persuade people to change their opinions about political candidates and ballot measures. While some might see this as an exciting development in voter persuasion, the tendency of LLMs to produce misinformation must be considered before attempting to use them at a larger scale. Studies like this one are extremely valuable for understanding the risks of deploying a new technology before it is properly understood and regulated.

How Are Cognitive and Physical Endurance Linked?

Post by Amanda Engstrom 

The takeaway

Engaging in cognitive tasks during physical activity makes exercise feel harder. Individuals with stronger cognitive abilities are less affected by this mental “cost,” suggesting that cognition and endurance capacity are closely linked. 

What's the science?

The combination of physical activity with cognitive tasks (such as navigation and working memory), known as cognitive-motor dual tasking, may have played a critical role in the evolution of human foraging strategies and in sustaining goal-oriented physical effort. In ancestral environments, humans had to think and move at the same time while hunting and foraging, which likely shaped the evolution of human cognition. Prior work has shown that cognitive demands during short-duration movement can compete with locomotor resources and reduce physical endurance. This negative association has been attributed to increased perception of physical effort and mental fatigue; however, it had not been directly tested. This week in PNAS, Aslan and colleagues conducted a randomized trial to elucidate how cognitive demands influence long-term endurance and fatigability.

How did they do it?

The experiment included thirty healthy individuals aged 18-53 (17 females, 13 males). Participants completed two endurance walking sessions at roughly 65% of their estimated maximum heart rate. One session was performed while simultaneously carrying out various executive function tasks (Exercise + Cognition; E+C), and the other involved walking alone (Exercise Alone; EA). During each session, the authors measured participants' perceived effort using the Borg Rating of Perceived Exertion (RPE) scale, along with perceived fatigability, defined as the change in perceived effort over time. The authors also tracked participants' oxygen consumption and carbon dioxide production to estimate the energetic cost of exercise and their respiratory exchange ratio. Finally, to test whether higher cognitive function might buffer the negative effects of dual tasking on endurance, the authors evaluated participants’ cognitive performance before any physical activity, focusing on skills relevant to foraging success such as executive function, visuospatial ability, and memory.
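
As a rough illustration of two of the physiological measures mentioned above, the Python sketch below computes a target heart rate at 65% of an estimated maximum and the respiratory exchange ratio from oxygen and carbon dioxide volumes. The 220 minus age rule of thumb and the example values are assumptions for illustration; the study's exact estimation procedure may differ.

```python
def target_heart_rate(age_years, fraction=0.65, hr_rest=None):
    """Estimate a target heart rate at a given fraction of maximum.
    Uses the common 220 - age rule of thumb for maximum heart rate,
    which is an assumption; the paper's method is not specified here."""
    hr_max = 220 - age_years
    if hr_rest is None:
        return fraction * hr_max
    # Heart-rate reserve (Karvonen) variant, if a resting heart rate is known
    return hr_rest + fraction * (hr_max - hr_rest)

def respiratory_exchange_ratio(vco2_l_per_min, vo2_l_per_min):
    """RER = CO2 produced / O2 consumed. Values near 1.0 indicate mostly
    carbohydrate oxidation; values near 0.7 indicate mostly fat oxidation."""
    return vco2_l_per_min / vo2_l_per_min

print(target_heart_rate(30))                 # ~123.5 bpm at 65% of estimated max
print(respiratory_exchange_ratio(2.0, 2.2))  # ~0.91 for these example volumes
```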

What did they find?

Perceived effort was significantly greater during the E+C condition than during EA. Six participants chose to stop exercising early in the E+C condition, whereas only one participant stopped early in both conditions.

This supports the idea that cognitive-motor dual tasks increase perceived effort during endurance activities. However, perceived fatigability (the rate of change in perceived effort) did not differ between conditions. Participants' physical expenditure, measured by net metabolic power and overall respiratory exchange ratio, was significantly lower during the E+C condition than during EA. Interestingly, participants initially had a higher respiratory exchange ratio in the E+C condition than in EA, but the values converged as the trial went on. This indicates that, early in the E+C condition, participants used a greater proportion of carbohydrates as fuel than in the EA condition, and that this proportion declined at a slightly faster rate over the course of the E+C condition. In both the E+C and EA conditions, participants with better delayed memory had lower perceived effort. Additionally, participants with lower visuospatial ability and figure recall scores showed a larger increase in perceived effort in the E+C condition relative to EA than did participants with higher scores on these tests.
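
For context on the fuel-use interpretation above, here is a small Python sketch that maps a respiratory exchange ratio onto an approximate carbohydrate share of energy expenditure, using simple linear interpolation between the textbook extremes of about 0.7 (mostly fat) and 1.0 (mostly carbohydrate). The example RER values are invented to echo the pattern described, not taken from the study.

```python
def carbohydrate_energy_fraction(rer):
    """Rough estimate of the fraction of energy derived from carbohydrate,
    assuming a non-protein RER between 0.7 (pure fat) and 1.0 (pure carbohydrate).
    Linear interpolation is a simplification of the standard lookup tables."""
    rer = max(0.7, min(1.0, rer))  # clamp to the physiological range
    return (rer - 0.7) / (1.0 - 0.7)

# Hypothetical values echoing the pattern described above: a higher RER early in
# the dual-task (E+C) condition than in exercise alone (EA), converging later on.
early_e_plus_c, early_ea = 0.92, 0.87
print(carbohydrate_energy_fraction(early_e_plus_c))  # larger carbohydrate share
print(carbohydrate_energy_fraction(early_ea))
```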

What's the impact?

This study demonstrates a bidirectional relationship between aerobic endurance and cognitive function. Dual tasking appears to reduce endurance by increasing the overall perception of effort. The findings also show that the subjective feeling of effort does not necessarily track the actual physiological cost. Overall, the study provides a framework for understanding how cognition can both constrain and support goal-oriented physical activity, both from an evolutionary standpoint and for modern training practice.


Access the original scientific publication here.