Inhibiting the Development of Threat Memories with TMS

Post by Megan McCullough

The takeaway

Decreasing activity in the primary sensory cortex through transcranial magnetic stimulation interferes with the formation and consolidation of threat memories in humans.

What's the science?

The ability to predict events based on past experiences is important because it affords us a degree of self-protection. However, some individuals are affected by memories in a way that harms their behavior and wellbeing. Previous studies of human threat memory have relied primarily on drug administration, but because drugs cannot be delivered to specific brain areas alone, this approach cannot isolate the regions involved in developing threat memories. Previous research has implicated the sensory cortices in the development of threat memories, but it remains unclear whether they are necessary for threat memory development. This week in Biological Psychiatry, Ojala and colleagues studied the relationship between the primary sensory cortex (S1) and human threat memory using transcranial magnetic stimulation (TMS), a technique that induces excitability changes in neurons in specific brain regions.

How did they do it?

First, all participants underwent functional magnetic resonance imaging (fMRI) while the fingers of their left hand were stimulated, in order to localize S1 in each individual's brain. Participants then received continuous theta-burst TMS, which decreases neural activity, to the area identified by fMRI; TMS was chosen because it is non-invasive and spatially specific. In each trial of the experimental task, electric pulses were administered to the fingers of the left hand while participants fixated on a cross on a screen and indicated with a button press which stimulation pattern they perceived. Participants in the control group received TMS to the left hemisphere, while participants in the experimental group received TMS to the right hemisphere (which contains the S1 representation of the stimulated left hand). Immediately after TMS, participants underwent threat conditioning: an electric shock was delivered intermittently during the experimental task. The next day, retention of the threat memory was assessed by measuring fear-potentiated startle during trials of the task.

What did they find?

The authors found a decreased fear response after threat conditioning in participants in the experimental TMS group. This shows that decreasing activity in the sensory cortex on the side of the brain where the stimulated fingers are represented affected the formation of memories of the threat of the painful shock, implicating S1 in the consolidation of threat memories in humans. It also illustrates the usefulness of TMS for studying the relationships between specific brain regions and the formation of fear memories.

What's the impact?

This study found that the primary sensory cortices are involved in forming and consolidating memories of fear-evoking events. It also shows that TMS can be an effective technique for interfering with the development of fear memories. This finding has clinical relevance, as TMS is a potential treatment for individuals adversely affected by threat memories, such as those with anxiety disorders.

Access the original scientific publication here.

The “Edge-of-Chaos”: Brain Activity Underlying Consciousness

Post by Lani Cupo

The takeaway

Modeling the electrical brain activity underlying stages of consciousness reveals that the conscious brain exhibits activity poised at the “edge-of-chaos”, a critical point between stability and chaos. The loss of consciousness, such as with anaesthesia, corresponds with a transition away from the critical point, while psychedelics induce a state closer to the critical point.

What's the science?

As scientists seek to understand consciousness, they investigate patterns of electrical brain activity during various states of consciousness, such as during generalized seizures, under anaesthesia, and after exposure to lysergic acid diethylamide (LSD). In doing so, they can examine the transition of brain activity from mathematically stable to chaotic dynamics. While previous research suggests that the conscious brain's electrical activity exists at a critical point at the boundary of stability and chaos, it remains unknown what phases (from a mathematical perspective) exist on either side of the critical point of wakefulness. This week in PNAS, Toker and colleagues sought to provide empirical evidence for the cortical dynamics (patterns of electrical brain activity) that underlie different states of consciousness and how these patterns relate to information richness.

How did they do it?

The researchers used a previously published model of low-frequency electrical brain activity, reflecting cortical oscillations, that allows tuning of parameters associated with neuronal inhibition and excitation. By setting the model's parameters based on the literature, the authors could simulate data for different brain states, including waking consciousness, generalized seizure, and anaesthesia; these states had previously been validated in the model against acquired data. In the simulated data, the authors assessed chaotic dynamics with the largest Lyapunov exponent, where a positive exponent corresponds to chaos and a negative exponent corresponds to periodicity (an ordered pattern). The authors applied a modified 0-1 chaos test, which assigns a value near 1 to chaotic systems and near 0 to periodic systems. They also assessed the richness of information in the model with a measure known as Lempel-Ziv complexity, which estimates the amount of non-redundant information in a signal (in this case, brain activity). The authors could then relate measures of chaos to measures of information richness across models of various states of consciousness. They also performed a "parameter sweep," simulating data with diverse parameter settings not tied to a specific brain state, to explore the relationship between chaos and richness across the model's parameter space. In addition to the simulated data for waking consciousness, generalized seizure, and anaesthesia, they examined previously published recordings from two macaques and five humans during wakefulness, two macaques and three humans under anaesthesia, three humans experiencing generalized seizures, and 16 people after exposure to either saline or LSD.
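The sign convention for the largest Lyapunov exponent can be illustrated with a simple, well-studied system. The sketch below (not the authors' cortical model, which is far more complex) estimates the exponent of the logistic map, which is chaotic at r = 4 and settles to a stable fixed point at r = 2.5:

```python
import math

def logistic_lyapunov(r, x0=0.2, n_transient=500, n_samples=5000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along the orbit.
    A positive result indicates chaos; a negative result, periodicity."""
    x = x0
    for _ in range(n_transient):  # discard transient iterations
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_samples):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_samples

print(logistic_lyapunov(4.0))  # positive (chaotic); analytically ln 2 ≈ 0.693
print(logistic_lyapunov(2.5))  # negative (settles to a stable fixed point)
```

The same sign rule is what the 0-1 chaos test encodes: trajectories whose nearby starting points diverge exponentially (positive exponent) score near 1, while periodic ones score near 0.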

What did they find?

The authors hypothesized that plotting information richness on the y-axis against the transition from periodicity to chaos on the x-axis would yield an inverted-U-shaped curve, indicating that information is richest at the critical point at the edge-of-chaos and is lost as the system becomes more periodic or more chaotic. In line with this hypothesis, the authors found an inverted-U relationship between chaos and richness, with 0 on the x-axis representing the edge-of-chaos critical point. Visualizing the simulated models for wakefulness, seizure, and anaesthesia revealed that wakefulness falls to the right of 0, toward the chaotic side. This finding supports an older hypothesis that, at a large scale, the brain's electrodynamic system is at least weakly chaotic. Anaesthesia fell farther toward instability, while seizure fell on the periodic side of the graph, as predicted. Data from participants exposed to LSD suggest the psychedelic increases the information richness of the system and stabilizes it, moving the patterns of activity closer to the critical point compared to ordinary waking consciousness.
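The Lempel-Ziv complexity underlying the richness axis can be sketched in a few lines. Below is a minimal Python implementation of LZ76 phrase counting on a binary string (a simplified stand-in for the authors' pipeline, which binarizes continuous activity before computing complexity); a periodic sequence yields few phrases, while an irregular one yields many:

```python
def lz76_complexity(s):
    """Count the number of phrases in the LZ76 parsing of a binary string:
    each new phrase is extended for as long as it can still be reproduced
    from the preceding text (overlap allowed), then a new phrase starts."""
    n, i, phrases = len(s), 0, 0
    while i < n:
        l = 1
        # grow the current phrase while s[i:i+l] already occurs earlier
        while i + l <= n and s[i:i + l] in s[0:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

print(lz76_complexity("0001101001000101"))  # 6 (classic Lempel-Ziv example)
print(lz76_complexity("01" * 8))            # 3 (highly periodic, low richness)
```

A maximally regular signal (all zeros, or a repeating pair) parses into very few phrases, while an irregular signal of the same length parses into many, which is why the measure serves as a proxy for information richness.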

What's the impact?

This study found a relationship between the chaotic dynamics of brain activity and the complexity of information in models representing the brain across different states of consciousness, suggesting wakefulness occupies a critical point between chaos and periodicity. The findings help us better understand states of consciousness, both in healthy brains (such as during sleep) and in disorders of consciousness.

Access the original scientific publication here.

Selective Attention Modulates Activity in the Auditory Nerve

Post by Lina Teichmann

The takeaway

Studying the effects of selective attention on subcortical structures is usually not feasible in humans; however, testing cochlear implant (CI) users offers a unique opportunity to examine how top-down effects modulate activity in the auditory nerve. Using a cross-modal attention task, the current study shows that auditory nerve activity in humans is modulated by attention.

What's the science?

Attention relies on selecting relevant features from our environment while ignoring irrelevant features. For example, when listening to someone speak, we are able to focus our attention on the words they are saying while ignoring irrelevant background sounds. Direct evidence from animal studies suggests that these attentional mechanisms modulate auditory nerve action potentials. Studying similar effects in humans is usually difficult, as direct recordings from the auditory nerve are generally not feasible. However, this week in the Journal of Neuroscience, Gehmacher, Reisinger and colleagues present data from CI users, showing that auditory nerve activity in humans is modulated by attention.

How did they do it?

A group of CI users completed a cross-modal attention task while recordings were taken from a coil that temporarily replaced their CI. In each trial, participants saw a cue on a computer screen (either an eye or an ear) indicating whether to attend to the auditory or the visual stimulus. An audiovisual stimulus was then presented: the auditory stimulus was a tone delivered directly to the CI coil, and the visual stimulus was a circle with vertically oriented black and white stripes. In some trials, oddball auditory (a slightly different tone) or visual (a slightly tilted version of the visual stimulus) stimuli were presented, and participants were asked to press a button when they detected an oddball in the cued modality.

What did they find?

Using a frequency analysis, the authors showed that cochlear activity was modulated by selective attention: in the theta frequency range (5-8 Hz), higher power was associated with attending to the auditory domain. Relating these results to concurrently recorded electroencephalography (EEG) data from one participant, the authors showed that the auditory nerve, rather than a source located elsewhere in the brain, was the most likely origin of the signal. Lastly, the authors showed that classification algorithms trained on single-trial activity recorded from the CI could distinguish whether the participant was attending to the visual or auditory stimulus. Together, these results support the hypothesis that auditory nerve activity is modulated by attention in humans.

What's the impact?

Previous work has shown that the neural signal is modulated by attention at the cortical level. However, evidence for attentional modulation in subcortical structures such as the cochlea was scarce, partially because direct recordings in humans are usually not feasible. The current study addressed this gap in the literature by studying auditory nerve activity directly in CI users. The results highlight that auditory nerve activity is modulated by attention in humans, providing new insights into the interplay between top-down and bottom-up effects in hearing.

Access the original scientific publication here.