Risk Factors Affecting Brain Regions Vulnerable to Aging and Disease

Post by Meagan Marks

The takeaway

The “LIFO” network is a collection of higher-order brain regions that are the most susceptible to decline with aging and disease. Particular genes and modifiable risk factors such as diabetes, pollution, and alcohol intake are now linked to the vulnerability of these regions.

What's the science?

Regions of the brain associated with higher-order functions such as memory, attention, and executive control tend to degenerate earlier and faster than the rest of the brain. These regions are also the last to develop during adolescence, earning them the name “last in, first out” (LIFO) network. The LIFO regions are known to be especially vulnerable to diseases like Alzheimer’s, Parkinson’s, and schizophrenia; however, the genetic and modifiable risk factors that influence the sensitivity of these regions remain unknown. This week in Nature Communications, Manuello and colleagues used statistical analyses to identify these factors, aiming to better understand how the LIFO network is regulated and to determine which behaviors may protect against, or exacerbate, its decline.

How did they do it?

To determine the specific genes and modifiable risk factors linked to the LIFO network, the authors ran a series of statistical tests using data from the UK Biobank, a large-scale biomedical database that holds genetic, lifestyle, and health information from hundreds of thousands of participants. First, the authors calculated the grey matter volume of the LIFO regions in nearly 40,000 participants using brain scans from the biobank, which indicated how much degeneration had taken place (less volume = more degeneration). Next, they sifted through participants’ genomes computationally to identify which genes were significantly associated with the grey matter volume of the LIFO network, and which variants, or versions, of these genes were associated with lower volume. Finally, using participants’ health and lifestyle data, the authors tested the association between LIFO grey matter volume and 15 categories of modifiable risk factors previously linked to dementia, determining which factors were significantly associated with the volume of the network.
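For readers who want a concrete picture of the risk-factor step, here is a minimal sketch in Python of that kind of association test: regress grey matter volume on one candidate risk factor while controlling for age and sex. The data, column names, and effect sizes are simulated placeholders, not UK Biobank fields or the authors’ actual pipeline.

```python
# Minimal sketch of an association test (illustrative data, not the study's pipeline)
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000  # stand-in for the ~40,000 participants with brain scans

df = pd.DataFrame({
    "age": rng.normal(60, 7, n),
    "sex": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),  # one hypothetical modifiable risk factor
})
# simulated outcome: LIFO grey matter volume (arbitrary units)
df["lifo_gm_volume"] = -0.02 * df["age"] - 0.15 * df["diabetes"] + rng.normal(0, 1, n)

# regress volume on the risk factor, adjusting for age and sex
X = sm.add_constant(df[["age", "sex", "diabetes"]])
fit = sm.OLS(df["lifo_gm_volume"], X).fit()
print(fit.params["diabetes"], fit.pvalues["diabetes"])
```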

What did they find?

After the genomic analysis, the authors identified seven gene clusters, or groups of genes, linked to the LIFO network. The top gene variants in these clusters (3,934 in total) came from genes that regulate immune cell trafficking, inflammation, and neurogenesis; genes that have been linked to blood pressure, sleep duration, and cognitive performance; and genes located in a genetic region associated with Alzheimer’s disease and other neurodegenerative disorders. Together, these results suggest that individuals carrying specific variants of the identified genes may have a LIFO network that is more vulnerable to disease and aging.

Regarding modifiable risk factors, the authors found that 12 of the 15 categories contained at least one factor significantly associated with the LIFO brain network. Taken together, these factors explained 1.5% of the variation in the network’s vulnerability after the effects of age and sex were accounted for. Of these, diabetes, alcohol intake, and nitrogen dioxide pollution were the most harmful to the LIFO regions. This suggests that individuals who have been diagnosed with diabetes, who drink alcohol heavily, or who are highly exposed to nitrogen dioxide pollution may be at higher risk of cognitive degeneration in these brain regions.
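As a rough illustration of what “explained 1.5% after age and sex were accounted for” can mean, the sketch below uses simulated data and assumes an incremental variance-explained reading (the authors’ exact procedure may differ): compare the R² of a model containing only age and sex with one that also includes a combined risk-factor score.

```python
# Hypothetical incremental variance-explained (R^2) comparison on simulated data
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(60, 7, n),
    "sex": rng.integers(0, 2, n),
    "risk_score": rng.normal(0, 1, n),  # stand-in for the combined risk factors
})
df["lifo_gm_volume"] = -0.02 * df["age"] - 0.12 * df["risk_score"] + rng.normal(0, 1, n)

base = sm.OLS(df["lifo_gm_volume"], sm.add_constant(df[["age", "sex"]])).fit()
full = sm.OLS(df["lifo_gm_volume"], sm.add_constant(df[["age", "sex", "risk_score"]])).fit()
print(f"additional variance explained: {full.rsquared - base.rsquared:.3f}")
```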

What's the impact?

This study is the first to identify specific genetic and modifiable risk factors associated with the LIFO brain network, which contains the higher-order regions most vulnerable to decline with aging and disease. Pinpointing the gene variants and modifiable risk factors associated with the LIFO regions may help identify individuals at higher risk for cognitive decline. Recognizing these factors could give patients and providers more time to protect against potential decline, and may help explain the biological mechanisms behind degeneration.

A New Model of Synaptic Plasticity: Neurons Depend on Their Neighbours for Learning

Post by Laura Maile

The takeaway

Plasticity, the strengthening or weakening of synapses over time, depends on a complex interaction involving both excitatory and inhibitory inputs from a network of nearby neurons. When we learn, we rely not on single inputs and outputs but on dynamic communication between networks of neurons.

What's the science?

Synaptic plasticity is the collection of changes to both excitatory and inhibitory connections between neurons that occur when we learn. Historically, the understanding has been that this plasticity operates at the level of the single synapse, relying on the activity of a single presynaptic neuron and the response of its partner across the synapse. Hebb’s theory of learning states that when a presynaptic neuron repeatedly fires and activates a neighboring neuron, their connection is strengthened. More recent evidence has shown that learning and plasticity are more complex, integrating excitatory and inhibitory information from neighboring synapses and depending on the circuitry of nearby networks. Scientists have not yet agreed upon a framework to explain this interdependent synaptic plasticity in biological models. This week in Nature Neuroscience, Agnes and colleagues describe a new model of synaptic plasticity that relies on the activity of multiple neighboring synapses.
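For reference, the textbook Hebbian rule (a standard formulation, not an equation taken from the paper) writes the weight change at a single synapse as proportional to coincident pre- and postsynaptic activity, with learning rate η:

```latex
\Delta w = \eta \, x_{\mathrm{pre}} \, x_{\mathrm{post}}
```

The model described here extends this purely local term with contributions that depend on the timing, distance, and identity of neighboring excitatory and inhibitory inputs.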

How did they do it?

The authors created a theoretical model consisting of a set of rules that integrate the timing, strength, distance, and identity of excitatory and inhibitory inputs to describe how sets of neurons interact during learning. They first modeled two excitatory neurons isolated from other inputs and presented different stimulation patterns to the pre- and postsynaptic neurons. To increase the system's complexity, they introduced neighboring synapses onto the same excitatory postsynaptic neuron. Next, to determine the influence of multiple neighboring inputs, they modeled a single postsynaptic neuron with several presynaptic inputs spaced uniformly apart. The authors then sought to understand how synaptic plasticity influences the receptive fields of neurons, or their ability to respond to different stimuli. To do this, they simulated a neuron receiving eight different inputs, mimicking the input of eight different sound frequencies. They modeled the learning period by using inhibitory inputs to gate, or limit, the time during which the postsynaptic neuron could be influenced by excitatory inputs. Finally, they tested their rules on a more spatially and structurally complex dendritic tree, mimicking the tree-like organization of synapses onto a single neuron, by connecting dendritic compartments that could be independently activated to a single model neuron.
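To give a flavor of what such a rule can look like in practice, here is a minimal, hypothetical sketch in Python. It is not the authors’ published model: it simply shows one postsynaptic unit receiving several excitatory inputs, with plasticity gated by inhibition and each synapse’s update weighted by the co-activity of its neighbors. All parameters, the distance kernel, and the spike thresholds are illustrative assumptions.

```python
# Toy neighbor-dependent plasticity rule (illustrative sketch, not the paper's model)
import numpy as np

rng = np.random.default_rng(0)

n_syn = 8        # excitatory synapses onto one model neuron
steps = 1000     # simulation steps
eta = 0.01       # learning rate (assumed)
sigma = 2.0      # spatial scale of neighbor influence, in synapse spacings (assumed)

# distance-based kernel: nearby synapses influence each other's plasticity more
positions = np.arange(n_syn, dtype=float)
kernel = np.exp(-0.5 * ((positions[:, None] - positions[None, :]) / sigma) ** 2)

w = np.full(n_syn, 0.5)                      # initial excitatory weights
pre = rng.random((steps, n_syn)) < 0.05      # presynaptic spikes (5% per step)
inhibited = rng.random(steps) < 0.8          # inhibitory gate active 80% of the time

for t in range(steps):
    pre_t = pre[t].astype(float)             # this step's spikes as 0/1
    post_spike = (pre_t @ w) > 0.6           # crude postsynaptic firing criterion
    if inhibited[t]:
        continue                             # inhibition gates plasticity off
    # update each synapse by its own pre/post coincidence,
    # scaled by how active its neighbors were at the same time
    neighbor_activity = kernel @ pre_t
    w = np.clip(w + eta * pre_t * post_spike * neighbor_activity, 0.0, 1.0)

# final weights; synapses with more close, co-active neighbors tend to strengthen most
print(np.round(w, 3))
```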

What did they find?

Their model demonstrated that long-term potentiation (LTP), the process of synaptic strengthening upon repeated activation of a presynaptic neuron, can be initiated by the presynaptic neuron and increases when the pre- and postsynaptic neurons fire in synchrony. When neighboring synapses also increased their firing, the postsynaptic neuron showed increased LTP, in a fashion that depended on timing and on distance from the postsynaptic neuron. When multiple presynaptic neurons targeted the same postsynaptic neuron at equal distances, their proximity and ability to influence one another drove competition.

In experiments modeling receptive fields, the authors demonstrated that learning occurred when the gating inhibitory neurons were shut down, allowing the postsynaptic neuron to experience strong stimulation from excitatory inputs. With dendritic tree modeling, they showed that the plasticity of excitatory synapses depended on inhibitory gating, distance from the cell body, and the co-activity of surrounding inputs. Inhibition was found to directly influence excitatory synaptic plasticity: inhibitory plasticity is slower than excitatory plasticity, yet exerts strong control over it, preventing excessive changes in excitatory weights and stabilizing learning. Finally, they created a model of a neuronal network in which setpoints balance LTP with synaptic weakening, producing a stable network that allows learning without runaway excitation.

What's the impact?

This study found that synaptic plasticity depends on a network of nearby synapses. The model developed can help explain how clusters of synapses develop and strengthen into stable systems. This work helps neuroscientists better represent and understand the complex dynamics of neural connections that change over time as we learn.  

Access the original scientific publication here.

Neuralink’s Brain Chip: Understanding Implantable Brain-Computer Interfaces

Post by Shahin Khodaei 

What is Neuralink’s brain chip?

In January 2024, Neuralink, the neurotechnology company owned by Elon Musk, implanted a “brain chip” in a human for the first time. The recipient was Noland Arbaugh, a man with tetraplegia (paralysis of both arms and both legs) due to a spinal cord injury. Neuralink’s coin-sized device was inserted into his skull with the help of a surgical robot, with microscopic wires implanted into the brain tissue to record neural activity. Information from the device is wirelessly transmitted to a receiving unit for processing. Two months after the implant, the device has given Mr. Arbaugh the ability to use his brain activity to move a computer cursor with enough dexterity to play online chess and the video game Civilization VI.

The Neuralink device is an example of brain-computer interface (BCI) technology. In short, BCIs allow direct communication between the central nervous system (CNS) and a computer. As a result, BCI technologies expand the natural outputs of the CNS, which would normally involve the use of muscles – for example, moving your tongue and mouth to speak, or using your hands to manipulate objects. With BCIs, an entirely new set of artificial outputs from the CNS becomes possible. By directly monitoring brain activity, these devices have been used to type, move a computer cursor, or operate a robotic arm.  

What are the components of a BCI system?

BCI systems consist of four key components: 1) signal acquisition, 2) feature extraction, 3) feature translation, and 4) device output. A toy end-to-end sketch follows the component descriptions below.

1) Signal acquisition: Any BCI system begins by measuring signals from the CNS using sensors. For implanted devices, the sensors are electrodes that are surgically placed under the skull, either on the surface of the brain or penetrating the brain tissue. Brain activity can also be measured in non-invasive ways – for example, by placing sensors on the scalp to measure electrical signals known as the electroencephalogram. In either case, the signals are small in amplitude and therefore need to be amplified and then converted into a digital format that can be processed by a computer.

2) Feature extraction: Signals recorded from the CNS are rich with information. However, not all the information within the signal is relevant for a particular BCI. In this component, the relevant features of the signal are extracted for further processing. The extracted feature should have a strong correlation with the user’s intentions. For an implanted device, the extracted feature is often the activity pattern of groups of neurons around the sensors.  

3) Feature translation: The extracted features are then given to a translation algorithm. This algorithm is designed to translate the relevant features of the signal into commands that reflect the user’s intent. For example, a specific activity pattern may be translated to “move the computer cursor upward”, and another pattern to “move the computer cursor downward”. In this way, the user’s goal is deduced from their brain activity.

4) Device output: The commands from the translation algorithm go on to operate an external device. Depending on the nature of the BCI system, the final output may be the movement of a computer cursor, the operation of a robotic arm, or the steering of an electric wheelchair, among other possibilities.

In some instances, there may be a fifth component, where the BCI system delivers input back to the brain to modulate the CNS. This input may be delivered by directly applying electrical currents into the brain tissue through the implanted electrodes, or by non-invasive methods such as transcranial magnetic stimulation.
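The toy sketch below strings the four components together in Python. Everything in it is an assumption for illustration: the simulated signal, the 8–12 Hz band-power feature, the threshold, and the two-command “cursor” mapping are placeholders, not Neuralink’s (or any real device’s) API or algorithms.

```python
# Toy four-stage BCI pipeline (illustrative placeholders throughout)
import numpy as np

FS = 250          # assumed sampling rate in Hz
WINDOW = FS       # one-second analysis window

def acquire_window(rng):
    """1) Signal acquisition: stand-in for an amplified, digitized neural signal."""
    return rng.normal(0.0, 1.0, WINDOW)

def extract_feature(signal):
    """2) Feature extraction: mean power in an 8-12 Hz band."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(WINDOW, d=1.0 / FS)
    return power[(freqs >= 8) & (freqs <= 12)].mean()

def translate(feature, threshold=250.0):
    """3) Feature translation: map the feature to an intended command."""
    return "cursor_up" if feature > threshold else "cursor_down"

def device_output(command):
    """4) Device output: print the command; a real system would move a cursor."""
    print(command)

rng = np.random.default_rng(0)
for _ in range(5):
    device_output(translate(extract_feature(acquire_window(rng))))
```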

Progress in BCI technology

Major advances have been made since the first report of BCI technology in the 1960s, when electrical signals recorded from the scalp were used to control a slide projector. The main goal of BCI research and development so far has been to assist people affected by stroke, spinal cord injury, or CNS disorders such as amyotrophic lateral sclerosis. The most common use of BCIs has been to replace natural CNS output lost to injury, an application reported as early as 2006, when Hochberg et al. published a study in Nature showing that a participant with tetraplegia (similar to Mr. Arbaugh) used an implanted BCI device to control both a computer cursor and a robotic arm.

The 2006 device from Hochberg et al. used 96 electrodes implanted into the participant’s brain. Nearly 20 years later, the BCI implant from Neuralink can measure brain activity using more than 3,000 electrodes placed in brain tissue, a significant improvement in the signal acquisition component of BCIs. With ongoing innovations in machine learning, the signals can also be processed in new ways for better feature extraction and translation. Together, these technologies will likely advance the capabilities of BCIs.

There have also been exciting uses of BCIs to not just replace, but restore natural CNS output. In a 2023 study published in Nature, researchers developed a BCI system in which the device output was electrical stimulation of the spinal cord. Specifically, the stimulation activated areas of the spinal cord that control the muscles involved in walking. Using this strategy in a participant with tetraplegia, the researchers’ device translated brain activity into leg movements, restoring the participant’s ability to walk.

BCIs also have the potential to either enhance or supplement the natural outputs of the CNS. In this way, researchers have explored how BCIs can help the general population. For example, a BCI can improve performance in tasks that require intense concentration by detecting brain activity that indicates loss of attention and playing a sound to restore concentration. 

Potential risks associated with BCI devices

Often, implanted BCI devices require invasive, high-risk open brain surgery to place sensors in the brain. There is an unavoidable risk of damage to the brain area where the implant is placed, as well as possible complications such as infection, bleeding, and brain swelling. While these risks may be acceptable for users with severe disabilities who stand to benefit greatly from implanted BCIs, they discourage most individuals from getting an implant. It is worth mentioning that newer, minimally invasive BCI devices are currently being developed. One example is the Stentrode sensor developed by Synchron: instead of open brain surgery, the Stentrode is inserted into the brain through the jugular vein using a minimally invasive endovascular procedure.

If and when BCIs become more broadly used, there will be a growing risk to users’ privacy and safety. These devices are likely to measure brain activity in increasing detail as the hardware and software evolve. If recordings of brain activity are not immediately discarded, it is critical that they are stored safely and privately. This is particularly important as the BCI field attracts more private companies, whose business interests may not align with users’ expectations of data privacy. Additionally, the digital components of a BCI system, in particular the device output, may be vulnerable to threats such as hacking.

Takeaway

Implanted BCI technologies show great potential in assisting individuals with neurological injury or disease. As the tools evolve to improve all components of BCI systems, BCIs could become an important technology not only to replace and restore function for people with disabilities, but also to enhance and supplement performance in the general population.

References

He et al. Brain–computer interfaces. Neural Engineering. 2020. Access the publication here.

Hochberg et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006. Access the publication here.

Lorach et al. Walking naturally after spinal cord injury using a brain-spine interface. Nature. 2023. Access the publication here.

Maiseli et al. Brain-computer interfaces: trend, challenges, and threats. Brain Informatics. 2023. Access the publication here.

Mitchell et al. Assessment of safety of a fully implanted endovascular brain-computer interface for severe paralysis in 4 patients: the Stentrode with thought-controlled digital switch (SWITCH) study. JAMA Neurology. 2023. Access the publication here.

Musk and Neuralink. An integrated brain-machine interface platform with thousands of channels. Journal of Medical Internet Research. 2019. Access the publication here.

Oi. Neuralink: Musk’s firm says first brain-chip patient plays online chess. BBC. 2024. Access the publication here.

Shih et al. Brain-computer interfaces in medicine. Mayo Clinic Proceedings. 2012. Access the publication here.