Attention is Enhanced Prior to Anticipated Emotional or Neutral Stimuli

Post by Amanda McFarlan

What's the science?

We encounter many stimuli in our daily lives and must choose to attend to relevant stimuli while filtering out irrelevant ones. This is not always an easy task, especially in distracting environments. Researchers have focused on studying how distractions affect ensuing behaviour, but little is known about how we prepare for a distraction that we know is coming. This week in Psychological Science, Makovski and Chajut investigated how individuals prepare for anticipated distractions and whether they prepare differently depending on the distraction type.

How did they do it?

In the first experiment, participants performed a memory task, in which they were briefly shown an array with four differently coloured circles and asked to memorize it. This was followed by a designated period of retention during which participants were shown either a) no image at all or b) a neutral or threatening image, which they were instructed to ignore. Then, participants were presented with a colour probe and had to identify whether it was in the same position as it was in the memory array. In a subset of trials, a very small grey dot (‘dot probe’) appeared instead of the anticipated neutral or threatening image, and participants were asked to indicate when they detected the dot. 

In the second experiment, the methods remained mostly unchanged, except the dot probe appeared after the neutral or threatening images rather than before. Then, in a third experiment, the neutral and threatening images were exchanged for joyful and disgusting images respectively. Participants’ anxiety levels were measured prior to experiments 1 and 3 using the State-Trait Anxiety Inventory questionnaire.

What did they find?

The authors determined that memory performance was worse in the threatening image condition (compared to the no image and neutral image conditions). However, participants were faster at detecting the dot probe when they anticipated being shown an image, regardless of the image category (neutral or threatening, joyful or disgusting). Although there was a trend suggesting that people with high anxiety suffered more from the threatening images than people with low anxiety, overall, anxiety levels did not influence the response time for detecting the dot probe. In experiment 2, the authors found that response time was greatly impaired when the dot probe appeared after a threatening image (instead of before) compared to after a neutral image or no image. Together, these findings suggest that while image content (threatening versus non-threatening) affected memory performance, it did not affect preparation.


What’s the impact?

This study shows that participants were more attentive when they anticipated any type of stimulus compared to when they were not anticipating a stimulus at all. This suggests that people prepare the same way for an anticipated stimulus regardless of the type of stimulus. Although emotional valence associated with the stimulus (e.g., threatening) and corresponding anxiety levels can have an impact on behavioural performance after the stimulus is presented, they do not affect the way we prepare for a stimulus before it is presented.


Makovski and Chajut. Preparing for the Worst: Attention is Enhanced Prior to Any Upcoming Emotional or Neutral Stimulus. Psychological Science (2021). Access the original scientific publication here.

How Should We Think About the Brain’s Response to Threat?

Post by Kasey Hemington

What's the science?

A threat is something with a high probability of causing either mental or physical damage. When we encounter a threat, many related processes occur in the brain, such as detecting the threat, learning to associate a cue with the impending threat, remembering what cues (and in what contexts) predict the threat, updating how or whether certain cues predict the threat over time, and deciding on the best behavioural response. These processes are typically studied independently and are each considered to be disrupted as distinct entities in threat-related disorders like anxiety and post-traumatic stress disorder (PTSD). This week in Trends in Cognitive Sciences, Levy and Schiller reviewed the neural basis of threat and proposed that we aim to understand threat in a more holistic manner: by studying the processes that make up the threat experience as interconnected phases of threat with common underlying neural computations.

What do we already know?

Though scientists often attempt to study them separately, it can be difficult to isolate the neural correlates of each aspect of the threat experience because each brain region known to be involved in these processes is involved in multiple processes. For example, the hippocampus, amygdala, and ventral striatum are involved not only in associative learning but also in decision-making, while areas of the prefrontal and parietal cortices are involved in decision-making but also learning. The insula is known to be involved in decision-making and learning, in addition to physiological reactivity, while the periaqueductal grey also plays a role in physiological reactivity, alongside providing threat-related signals to the amygdala. When it comes to understanding the brain’s response to threat, it may be more accurate to refer to the aforementioned regions as being part of one unified, global brain network.

What’s new?

Instead of studying each threat-related process separately, the authors consider how different brain regions play a role in different ‘phases’ of the threat experience while asking the same neural computation-related question at each phase: in an uncertain and volatile environment, how does the brain use cues to predict outcomes?

For example, consider a person who witnesses a threatening event; the explosion of a blue car at close range. The authors divide this experience into five phases: 1) initial encounter (witnessing the explosion), 2) learning (that a blue car could signal danger), 3) post-association learning (e.g. learning whether the association should be generalized to cars of all colours), 4) memory retrieval and potential updating (remembering the danger in response to seeing another blue car, potentially stabilizing or destabilizing the threatening memory depending on the events and perception at the time of retrieval) and 5) decision making (e.g. choosing between whether to drive a car or ride a bicycle).  


The authors consider the neural correlates and clinical implications (for threat-related disorders) at each phase. 

Phase 1: As a threat becomes more imminent (for example, a predator moving into the field of view of its prey) there is a shift in brain activity from the prefrontal cortex to midbrain areas, and a corresponding shift in behavioural response from anxiety and fear to panic, freezing or fleeing. This pattern is mirrored in individuals with anxiety or PTSD; high anxiety is often experienced during anticipation of a threat before it is imminent. 

Phase 2: Learning the association between a cue and a threat occurs via prediction error in the brain; there is a difference between the expected and observed outcomes predicted by a cue, so the brain learns to update predictions. Synaptic plasticity in the amygdala results in the storage of threat memories. In individuals with anxiety disorders, learning may be overgeneralized in a maladaptive way to include cues that do not predict threat.

Phase 3: Extinction learning, in which repeated exposures to a stimulus elicit progressively smaller responses, can counteract threat conditioning. In the brain, the ventral tegmental area helps to compute a prediction error between expected and observed outcomes and sends a signal to other brain areas, including the amygdala, in order to update the memory with new extinction memories. In PTSD, defensive responses can linger following a threat for longer than they typically would.
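The prediction-error learning described in Phases 2 and 3 can be illustrated with a minimal Rescorla-Wagner-style update, a standard model of associative learning. This is a conceptual sketch, not the authors' model; the learning rate and trial counts are arbitrary assumptions:

```python
# Minimal Rescorla-Wagner-style sketch of prediction-error learning.
# v: learned association strength between a cue and a threat (0 = none).
# On each trial, the prediction error is the difference between the
# observed outcome (1 = threat occurred, 0 = no threat) and the prediction.

def update(v, outcome, alpha=0.3):
    """One learning step: move the prediction toward the observed outcome."""
    prediction_error = outcome - v
    return v + alpha * prediction_error

v = 0.0
# Acquisition (Phase 2): the cue is repeatedly paired with the threat,
# so the association strengthens toward 1.
for _ in range(10):
    v = update(v, outcome=1)
print(f"after acquisition: {v:.2f}")

# Extinction (Phase 3): the cue now appears without the threat, so repeated
# negative prediction errors weaken the association back toward 0.
for _ in range(10):
    v = update(v, outcome=0)
print(f"after extinction: {v:.2f}")
```

The same computation drives both phases: only the sign of the prediction error differs, which is one reason the authors argue these processes share underlying neural machinery.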

Phase 4: When a memory is reactivated, this provides an opportunity to destabilize the memory (a cascade of cellular and molecular processes that put it in an unstable state) and potentially modify it in this unstable state. A clinical goal for PTSD and anxiety is to modify these threatening memories long-term, by reactivating the memory, altering it to include a more adaptive emotional response, and ultimately changing the way an individual engages with the world.

Phase 5: Decisions such as avoidance of a cue indicative of a threat are made based on subjective valuation of a potential outcome, for which the ventromedial prefrontal cortex and ventral striatum, in particular, are responsible. The uncertainty of the potential outcome is also encoded in the brain, in regions including the ventral striatum, posterior parietal cortex, and anterior insula. Finally, the expected risk or reward and an individual’s overall tolerance for ambiguity are also weighed in the decision-making process. In PTSD and anxiety disorders, a decreased tolerance for ambiguity is observed.

What's the bottom line?

This review highlights learning, memory, and decision-making together as they relate to threat experiences and threat-related disorders. At the neural level, a response to threat can be thought of as computations that predict outcomes from cues. New associations become memories, which can be updated as behaviour and environments change. Finally, decisions can be made that incorporate these associations. This way of thinking about threats and related processes can help us to study the neural correlates of threat-related disorders like anxiety disorders and PTSD more holistically.

Levy and Schiller. Neural Computations of Threat. Trends in Cognitive Sciences (2020). Access the original scientific publication here.

Genetic Overlap Between Education, SES, and Psychopathology

Post by Anna Cranston

What's the science?

Socioeconomic status (SES) and education are known to be associated with psychiatric disorders and behaviors. However, it’s not yet clear exactly how these associations are related to our genetic risk as individuals. One way to understand this is through the use of genome-wide association studies (GWAS), which are essentially large-scale observational studies that scan the genomes of a large human population to identify potential genetic differences that might be associated with a particular trait or disease. This week in Nature Human Behaviour, Wendt and colleagues used GWAS data to determine potential genetic links across numerous psychiatric and brain traits, from depression and schizophrenia to brain volumetric changes, and how these might be influenced by social factors such as our education, income, or social status.

How did they do it?

The authors selected four educational factors (educational attainment, highest math class, self-rated math ability, and cognitive performance), two SES factors (household income and Townsend deprivation index), and GWAS data for many psychopathology and psychosocial factors (available from previous studies). These educational and SES factors were used to identify potential genetic variance that might lead to an increased or decreased susceptibility to particular psychological traits, such as depression, bipolar disorder, or schizophrenia. The authors used the presence of single nucleotide polymorphisms (SNPs) to identify the incidence of pleiotropy (i.e. when one gene affects multiple traits) in the sample group, and this method was used to determine genetic correlations between their selected psychological phenotypes. Since there is undoubtedly a lot of overlap in the genetic factors underlying SES/education, psychopathology, and psychosocial factors, the authors chose to use a specific statistical model, known as multi-trait conditioning and joint analysis. This model applies Mendelian randomization to disentangle the genetic overlap between traits, revealing associations that control for the genetic variance attributed to SES and educational factors. They also used linkage disequilibrium score regression (LDSC) to estimate the SNP-based heritability of each trait. The authors then used transcriptomic profile analysis against each of the social factors to determine whether there were tissue- or cell-specific genetic variances between these factors, in order to ultimately pinpoint the exact genetic variation that determines these particular psychological traits in individuals.
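The core idea of genetic correlation is that if the same SNPs (pleiotropy) influence two traits in the same direction, their per-SNP effect sizes will correlate across the genome. A toy sketch of that idea, using simulated effect sizes rather than the authors' data, and omitting the linkage-disequilibrium modelling that real LDSC analyses require:

```python
# Toy illustration of genetic correlation between two traits:
# if the same SNPs influence both traits (pleiotropy), their
# per-SNP effect sizes correlate. Simulated data only -- real
# analyses (e.g. LDSC) additionally model linkage disequilibrium
# and sample overlap between the contributing GWAS.
import random

random.seed(1)
n_snps = 5000

# Each SNP gets a shared (pleiotropic) effect plus trait-specific noise.
shared = [random.gauss(0, 1) for _ in range(n_snps)]
beta_trait_a = [s + random.gauss(0, 1) for s in shared]  # e.g. an education trait
beta_trait_b = [s + random.gauss(0, 1) for s in shared]  # e.g. a psychiatric trait

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each trait is half shared signal, half trait-specific noise,
# so the effect-size correlation lands near 0.5.
print(f"effect-size correlation (toy): {pearson(beta_trait_a, beta_trait_b):.2f}")
```

Conditioning on education/SES, as the authors do, amounts to removing the shared component attributable to those factors before asking what correlation remains between the psychiatric traits.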

What did they find?

The authors found that specific genetic variance in SNPs was associated with both psychological traits (including alcohol dependence, schizophrenia, and neuroticism) and social factors (such as education, income, and deprivation). For instance, a lower income was found to be highly genetically correlated with a higher deprivation index and a higher incidence of disorders such as ADHD, depression, and alcohol dependence. The group also found that genetic liability to a lower deprivation index (a measure of material deprivation) was associated with a significant increase in cortical grey matter. Education and SES phenotypes were found to be genetically correlated with neuroticism. When these relationships were controlled for, the heritability of neuroticism (specifically, the number of heritable components) increased.


Transcriptomic profile analysis revealed that different psychological phenotypes resulted in differences in cortical and cerebellar tissue phenotypes. Their findings showed that better cognitive performance and higher education are genetically correlated with increased enrichment in cortical, hippocampal, cerebellar, and frontal cortex tissue. The authors also identified key genetic correlations with specific psychological traits. They found that conditioning for genetic effects associated with education and SES factors uncovered mechanisms related to excitatory neuronal cell types for bipolar disorder and schizophrenia. Further, inhibitory GABAergic cell types were correlated with an increased incidence of risky behaviors in individuals. These findings suggest that while individuals may be genetically predisposed to certain psychological disorders, their risk may be significantly moderated by social factors such as education and socioeconomic factors such as income and deprivation.

What's the impact?

This study identified specific genetic variation underpinning both psychopathological and psychosocial traits, including genetic variation shared between psychological disorders as well as novel tissue- and cell-specific variation within each of these psychological groups. These findings highlight the importance of specific brain regions and their shared transcriptional regulation in human mental health and disease, which may provide future insight into the biological basis of these complex psychological disorders.


Wendt et al. Multivariate genome-wide analysis of education, socioeconomic status and brain phenome. Nature Human Behaviour (2020). Access the original scientific publication here.