A Neural Population Selective for Song in Human Auditory Cortex

Post by Andrew Vo

The takeaway

The human brain has regions specialized for music compared to speech or other sounds. Combining brain recordings and imaging allows researchers to decode brain responses specific to different types and features of music, including song.

What's the science?

Music is an important part of society, culture, and the human experience. Research has demonstrated that our brains have areas that respond selectively to music compared to speech or other sounds. However, whether this brain response to music carries further information about different types or features of music has remained unknown. This week in Current Biology, Norman-Haignere et al. used a combination of brain recordings and imaging to identify neural subpopulations representing different types of music.

How did they do it?

The authors used intracranial recordings (electrocorticography, or ECoG) from 15 human patients as they listened to a set of 165 natural sounds (e.g., diverse music, speech, vocalizations, and ambient sounds). This recording method has the advantage of high temporal resolution, capturing brain responses to brief auditory stimuli. The data were then analyzed with a custom algorithm that decomposed their statistical structure into components representing different neural populations in the auditory cortex. Because ECoG electrodes cover only a limited portion of the cortex, the authors correlated their initial findings with functional magnetic resonance imaging (fMRI) responses to the same set of sounds, collected from a separate group of 30 volunteers.
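The decomposition step can be illustrated with a simple sketch. The study's algorithm is custom, but a generic matrix factorization such as non-negative matrix factorization (NMF) captures the core idea: an electrodes-by-time response matrix is factored into a small number of components, each with a time course and a set of electrode weights. The code below is a hypothetical illustration using scikit-learn and simulated data, not the authors' actual method.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Simulated ECoG data: 100 electrodes x 500 time points (non-negative power values)
responses = rng.random((100, 500))

# Factor into 10 components, mirroring the 10 reliable components in the study
model = NMF(n_components=10, init="nndsvda", random_state=0, max_iter=500)
electrode_weights = model.fit_transform(responses)  # (100, 10): each electrode's loading
time_courses = model.components_                    # (10, 500): each component's response

print(electrode_weights.shape, time_courses.shape)
```

Each row of `time_courses` would then be inspected for selectivity, e.g., whether a component responds more strongly during music than during speech trials.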

What did they find?

The authors identified 10 reliable components (patterns) from the ECoG recordings that were stable across participants. Two of these components responded selectively to speech sounds, regardless of whether the speech was native or foreign to the listener. A different component responded strongly to music, both instrumental and sung, and less so to speech or other vocalizations. Finally, a single component responded exclusively to music with singing (i.e., song). Using the fMRI data, the authors found these components to be differentially distributed along the superior temporal gyrus in the auditory cortex. Brain responses selective for speech, music, and song could not be explained by generic acoustic features, as the identified components responded comparatively weakly to acoustically matched synthetic sounds.
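One way to relate ECoG components to fMRI, in the spirit of the study (the published procedure differs in detail), is to correlate each component's response profile across the 165 sounds with each fMRI voxel's response profile to the same sounds. The sketch below uses random stand-in data; the variable names and the winner-take-all voxel assignment are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sounds, n_components, n_voxels = 165, 10, 1000

# Hypothetical response magnitudes: one value per sound (random placeholder data)
component_profiles = rng.standard_normal((n_components, n_sounds))
voxel_profiles = rng.standard_normal((n_voxels, n_sounds))

# Pearson correlation between every component and every voxel, computed across sounds
comp_z = (component_profiles - component_profiles.mean(1, keepdims=True)) \
    / component_profiles.std(1, keepdims=True)
vox_z = (voxel_profiles - voxel_profiles.mean(1, keepdims=True)) \
    / voxel_profiles.std(1, keepdims=True)
corr = comp_z @ vox_z.T / n_sounds  # (10, 1000) component-by-voxel correlations

# Assign each voxel to the component it correlates with most strongly,
# yielding a spatial map of where each component is represented
best_component = corr.argmax(axis=0)
print(corr.shape, best_component.shape)
```

Mapping `best_component` back onto voxel coordinates is what would reveal, for example, a song-selective region along the superior temporal gyrus.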

What's the impact?

This study showed that the human brain not only represents music distinctly from speech and other sounds but that this activity contains further information about different types of music, most notably song. The findings demonstrate how combining the complementary spatiotemporal strengths of ECoG and fMRI may allow for better decoding of music in the human brain.

How Can Biomarkers Help In Neurodegenerative Disease Detection?

Post by D. Chloe Chung

Why are people talking about biomarkers?

In the 1990s, scientists defined biological markers, or “biomarkers,” as changes at the cellular or molecular level that can be sampled and measured in cells or biological fluid. These measurable changes can reflect biological processes happening in the organism and indicate whether there is any abnormal change in its biology. There has been a huge effort to develop biomarkers as clinical tools to detect diseases as early and accurately as possible and to monitor prognosis in patients. Accurate biomarkers are highly valuable because the chance of successfully treating a disease can dramatically increase with an earlier diagnosis. Advances in biomarkers can be especially helpful for patients with neurodegenerative diseases, as there are only a limited number of ways in which the brain can be accessed in living people.

What kinds of biomarkers are out there?

One of the most actively investigated classes of biomarkers is found in cerebrospinal fluid (CSF), a clear and colorless bodily fluid that fills the space around the brain and the spinal cord to protect and insulate the nervous system. Changes in the levels of proteins in the brain and the spinal cord can be observed in the CSF, which is very useful in identifying pathological changes associated with neurodegenerative disease. Doctors can collect CSF through a lumbar puncture, also called a spinal tap. However, this procedure is invasive, as it involves inserting a needle into the gap between bones of the lower spine to sample CSF. Recently there has been a push to design methods that can detect protein changes in the blood instead of (or in addition to) CSF, because drawing blood is less invasive than a lumbar puncture.

In addition to directly sampling bodily fluids such as CSF and blood, another set of biomarkers is based on neuroimaging, which can detect changes in the brain non-invasively. For example, magnetic resonance imaging (MRI) can reveal structural changes in the brain with high spatial resolution. Given that the brain often shrinks as neurodegeneration worsens, MRI is helpful for observing long-term changes in the brain structure of patients and quantifying disease progression. Another popular imaging technique is positron emission tomography (PET), which can show changes in metabolic or biochemical pathways in the brain. For PET, radioactive agents called “tracers” are injected to label molecules of interest in the brain. The resulting pattern of tracer signal in the brain can reveal the status and location of pathological changes.

Biomarkers for Alzheimer’s disease

Alzheimer’s disease (AD), the most common form of dementia, is characterized by two major disease hallmarks found in the brain: plaques made of amyloid-beta protein and tangles made of the microtubule-associated protein tau. An estimated 5.8 million Americans currently live with AD, underscoring the importance and urgency of developing reliable biomarkers so that the disease can be detected early and patients can seek appropriate treatment and care. Thanks to multiple studies, several methods are now available to readily measure the levels of amyloid-beta and tau in the CSF and blood. For example, in the CSF and blood of AD patients, the toxic form of amyloid-beta appears to be dramatically decreased, which may reflect its accumulation in the brain and impaired clearance. In addition, the level of abnormally phosphorylated tau is substantially increased in the CSF and blood of AD patients. Clinicians can also use these measurements to monitor disease progression. While CSF-derived amyloid-beta and tau levels tend to correlate more accurately with the degree of disease, their repeated measurement is difficult due to the invasive nature of the lumbar puncture. Hence, recent research efforts are highly focused on improving the detection of amyloid-beta and tau in blood.

Studies on neuroimaging biomarkers for AD have gained momentum in recent years. Advances in PET imaging allow specific detection of amyloid-beta plaques or tau tangles and have helped locate pathological protein accumulation in the brain and assess disease progression. Because PET can detect such disease-related changes before the full-blown onset of clinical symptoms, it has also been useful in determining which individuals without obvious symptoms of AD may benefit from clinical trials. Furthermore, continual improvements in tracer efficiency are increasing detection sensitivity and accuracy. In addition to detecting specific proteins like amyloid-beta and tau, monitoring brain structure and evaluating the degree of neurodegenerative change via MRI have deepened our understanding of how the brain changes over the course of the disease.

Biomarkers for Parkinson’s disease

Parkinson’s disease (PD) is a debilitating movement disorder in which patients experience severe tremor, muscle rigidity, and impaired posture and balance. As with many other neurodegenerative diseases, early detection of PD is currently difficult because its diagnosis relies heavily on clinical symptoms, which may appear only after the disease has progressed. Hence, many researchers and clinicians have pursued the development of potential biomarkers. One of the key pathological hallmarks of PD is the aggregation of a presynaptic protein called alpha-synuclein. Researchers have found ways to detect aggregated forms of alpha-synuclein in the CSF, which can distinguish possible PD from other diseases. A similar diagnosis is possible using blood from PD patients, although more studies are needed to confirm the efficacy of this method. Neuroimaging biomarkers have also been developed to monitor changes in dopaminergic neurons, the loss of which is another hallmark of PD. Since the accuracy of PD diagnosis increases when multiple biomarkers are considered, researchers continue to pursue additional novel biomarkers that might aid clinicians and patients.

Biomarkers for Amyotrophic Lateral Sclerosis

Amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig's disease, is a devastating disease involving the progressive death of motor neurons in the brain and the spinal cord. Currently, the most widely used biomarker for ALS is the level of neurofilament light chain (NFL), part of a structural protein highly expressed in neuronal axons. Several studies have shown that, compared to healthy people, ALS patients tend to have much higher levels of NFL in their CSF. Increased NFL can also be detected in the blood of ALS patients, although at a lower concentration than in the CSF. It is thought that NFL is released into the extracellular space when neuronal axons are damaged, after which it enters the CSF and eventually leaks into the blood. Increased NFL levels in the CSF and blood can be used to predict and monitor the degree of axonal damage experienced by ALS patients. Researchers have also reported that fluid-based NFL levels correlate well with disease progression, suggesting the potential value of NFL as a prognostic marker for ALS.

The future of biomarkers for neurodegenerative diseases

Over the past few years, researchers have made tremendous progress in developing biomarkers for neurodegenerative diseases. Information gathered from biomarkers has allowed clinicians to track pathological changes in the brains of living people and to evaluate whether patients are responding to treatment. Researchers have also benefited from biomarkers when selecting potential participants for clinical trials. Technical advances will continue to optimize biomarkers, ideally enabling early and accurate detection of disease so that patients have a wider window for effective treatment. Many researchers are actively investigating ways to increase the sensitivity of blood-based biomarkers, and new neuroimaging tracers are being developed to enhance diagnostic accuracy. Since combining multiple biomarkers can greatly increase the chance of a correct diagnosis, additional innovative and effective biomarkers are needed. We live in an exciting time in which widely available biomarker-based diagnostic tests for neurodegenerative diseases may arrive in the not-too-distant future.

References

Mayeux. Biomarkers: Potential uses and limitations. NeuroRx (2004).

Hansson. Biomarkers for neurodegenerative diseases. Nature Medicine (2021).

Zetterberg & Blennow. Moving fluid biomarkers for Alzheimer’s disease from research tools to routine clinical diagnostics. Molecular Neurodegeneration (2021).

Márquez & Yassa. Neuroimaging Biomarkers for Alzheimer’s Disease. Molecular Neurodegeneration (2021).

Verde et al. Neurofilament Light Chain as Biomarker for Amyotrophic Lateral Sclerosis and Frontotemporal Dementia. Frontiers in Neuroscience (2021).

How Biomarkers Help Diagnose Dementia. National Institute on Aging (2022).

Parnetti. CSF and blood biomarkers for Parkinson's disease. Lancet Neurology (2019).

Inhibiting the Development of Threat Memories with TMS

Post by Megan McCullough

The takeaway

Decreasing activity in the primary sensory cortex through transcranial magnetic stimulation interferes with the formation and consolidation of threat memories in humans.

What's the science?

The ability to predict events based on past experiences is important as it allows us a degree of self-protection. However, some individuals are impacted by memories in a way that negatively affects their behavior and wellbeing. Previous studies have primarily used drug administration to study human threat memory. Because drugs cannot be delivered to specific brain areas alone, this approach does not allow researchers to pinpoint the regions involved in developing threat memories. Previous research has implicated the sensory cortices in the development of threat memories, but it is still unclear whether they are necessary for threat memory development. This week in Biological Psychiatry, Ojala and colleagues studied the relationship between the primary sensory cortex (S1) and human threat memory using transcranial magnetic stimulation (TMS), a technique that induces excitability changes in neurons in specific brain regions.

How did they do it?

First, all participants underwent functional magnetic resonance imaging (fMRI) while fingers of their left hand were stimulated, to localize S1 in each individual’s brain. Participants then received TMS to the area identified by the fMRI. The authors used continuous theta-burst TMS to decrease activity in the primary sensory cortex; TMS was chosen because it is non-invasive and region-specific. Participants in the control group received TMS to the left side of the brain, while participants in the experimental group received TMS to the right side of the brain (which contains the S1 representation of the fingers of the stimulated left hand). Each trial of the experimental task involved electric pulses administered to the fingers of the left hand while participants fixated on a cross on a screen and indicated with a button press the stimulus pattern they perceived. Right after TMS, participants underwent threat conditioning: an electric shock was delivered intermittently during the experimental task. The next day, memory retention of the shock threat was assessed by measuring fear-potentiated startle during trials of the task.

What did they find?

The authors found a decreased fear response during post-conditioning trials in participants in the experimental TMS group. This shows that decreasing activity in the sensory cortex on the side of the brain where the finger stimulation was represented affected the formation of memories of the threat of the painful shock, implicating S1 in the consolidation of threat memories in humans. It also illustrates the usefulness of TMS for studying relationships between specific brain regions and the formation of fear memories.

What's the impact?

This study found that the primary sensory cortex is involved in forming and consolidating memories of events that evoke fear. It also shows that TMS can be an effective technique for interfering with the development of fear memories. This finding has clinical relevance, as TMS is a potential treatment for individuals adversely affected by threat memories, such as those with anxiety disorders.

Access the original scientific publication here.