Smoking Tobacco Is Associated with Reduced CB1R Density in the Brain

Post by Shireen Parimoo

What’s the science?

The cannabinoid type 1 receptor (CB1R) is a presynaptic receptor that’s present throughout the brain. It’s highly concentrated in areas involved in reward and addiction, like the basal ganglia, and it modulates the release of the neurotransmitters GABA and glutamate in response to substances such as cannabinoids, alcohol, and nicotine. CB1R density is reduced in the brains of people with alcohol dependence and in chronic cannabis users. Since CB1Rs are also activated by nicotine, might CB1R density be lower in smokers as well? In previous studies, participants who smoked tobacco also had alcohol or cannabis use disorder, making it difficult to establish a direct link between nicotine use and CB1R density. This week in Biological Psychiatry, Hirvonen and colleagues systematically examined CB1R density in the brains of participants with nicotine dependence (and no other substance use disorder).

How did they do it?

Forty-six healthy men participated in the study; 18 had mild-to-moderate tobacco use disorder (smokers) and 28 were non-smokers (healthy controls). None of the participants had alcohol or cannabis use disorder. All participants underwent a two-hour positron emission tomography (PET) scan, before which they were injected with [18F]FMPEP-d2, a radioligand that binds to CB1 receptors in the brain. This technique allows the density of CB1Rs to be inferred from the distribution volume (VT), the ratio of the concentration of radioligand in the brain to that in plasma. The authors also obtained genotype data from 43 participants, as carriers of the C allele of a single nucleotide polymorphism (SNP), rs2023239, in the gene CNR1 (which encodes CB1R) tend to have higher levels of [18F]FMPEP-d2 binding. Data on participants’ smoking habits, such as the age at which they started smoking and how often they smoked, were also collected. Finally, the authors combined data from previous studies that used PET imaging in participants with alcohol and cannabis use disorder to examine the effects of smoking, substance use disorder, genotype, and body-mass index (BMI) on CB1R density in the brain.
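For context, the distribution volume VT reported in PET studies like this one is, at equilibrium, the ratio of the radioligand concentration in brain tissue to that in plasma. A minimal sketch of this standard definition (it is not a formula reported in the paper, and in practice VT is estimated by fitting kinetic models to the full time-activity curves rather than from a single equilibrium ratio):

V_T = \frac{C_{\mathrm{brain}}}{C_{\mathrm{plasma}}} \quad \text{(at equilibrium)}

A higher VT means more radioligand is retained in tissue relative to plasma, which for a CB1R-specific ligand like [18F]FMPEP-d2 is interpreted as greater receptor availability.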

What did they find?

Smokers had reduced CB1R density across several brain regions compared with non-smokers. The reduction was not uniform across the brain; it ranged from a 17% decrease in the prefrontal cortex to a 28% decrease in the midbrain. Even after accounting for the effects of BMI and genotype, the difference in CB1R density between smokers and non-smokers remained significant. Interestingly, CB1R density was not related to the age at which participants started smoking, how often they smoked, or their level of nicotine dependence. After combining data across multiple studies, the authors also found effects of smoking, other substance use disorders, and BMI on CB1R density. However, these effects diminished when the authors accounted for the effect of genotype. Finally, participants with a substance use disorder who also smoked did not exhibit additional CB1R down-regulation compared to those who only smoked (although CB1R density in both groups was still lower than in healthy controls).


What’s the impact?

This is the first study to report reduced CB1R density across various brain regions of male smokers compared to healthy controls, without the confounding effect of other substance use disorders. Importantly, the authors also demonstrated that consumption of multiple substances – such as alcohol and tobacco – does not have an additive effect on CB1R density above and beyond dependence on one substance. These results provide further insight into the effects of nicotine dependence, though more research is needed to determine whether these findings will generalize to females and to other substance use disorders.


Hirvonen et al., Decreased Cannabinoid CB1 Receptors in Male Tobacco Smokers Examined with Positron Emission Tomography. Biological Psychiatry (2018). Access the original scientific publication here.

Learning That Is Spaced out over Time Engages the Brain Differently

What’s the science?

We use feedback from rewards every day to learn new things. For example, if we are offered a mango and we have enjoyed several mangos previously, over time we might learn to favour mangos. Research on the neural underpinnings of this form of reward-based learning typically focuses on short-term learning (across several minutes). However, we don’t know what happens when learning occurs over several weeks, as it might in many everyday situations. Gradual learning from reward feedback relies on a dopaminergic system in the brain, but short-term learning paradigms in typical experiments may also rely on short-term (‘working’) memory systems. This week in the Journal of Neuroscience, Wimmer and colleagues used behaviour and functional magnetic resonance imaging (fMRI) to understand the mechanisms underlying reward learning over a period of several weeks versus within a single session in humans.

How did they do it?

The authors conducted two similar studies for replication purposes. In the first study, 33 participants completed a behavioural and fMRI experiment, while in the second study, 31 participants completed a behaviour-only experiment. In both studies, the participants' task was to learn whether the best response to a stimulus (scenes presented on a screen) was ‘Yes’ or ‘No’ (the wording was arbitrary). The stimuli had been randomly assigned by the experimenter as either reward-associated or loss-associated. Reward-associated stimuli resulted in the participant winning $0.35 on average for ‘Yes’ and losing $0.05 for ‘No’, while loss-associated stimuli resulted in the participant losing $0.25 for ‘Yes’ and gaining $0.00 for ‘No’. Feedback was given after each trial and was probabilistic, meaning there was an 80% chance that the best response would result in the best outcome/payment (a sketch of this trial structure is given below). In an initial learning session in the lab, participants learned about 8 'spaced' stimuli. Next, three learning sessions for the ‘spaced’ stimuli were completed online (over roughly two weeks). In a lab session about two weeks later, participants learned about 8 new 'massed' stimuli, presented the same number of times as the previously seen 'spaced' stimuli.
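To make the trial structure concrete, here is a minimal, hypothetical sketch of one way the probabilistic feedback described above could be implemented. The payoff amounts and the 80% reliability come from the description above; the reversal of feedback on unreliable trials is one plausible reading of "probabilistic feedback", and the function and variable names are our own illustration, not the authors' code.

import random

# Payoffs taken from the task description above; the best responses are
# 'Yes' for reward-associated stimuli and 'No' for loss-associated stimuli.
PAYOFFS = {
    "reward": {"Yes": 0.35, "No": -0.05},
    "loss":   {"Yes": -0.25, "No": 0.00},
}
FEEDBACK_RELIABILITY = 0.80  # 80% of trials pay out the "true" outcome

def simulate_trial(stimulus_type, response):
    """Return the monetary feedback for a single trial."""
    if random.random() < FEEDBACK_RELIABILITY:
        # Reliable trial: payoff follows the true contingency for the chosen response.
        return PAYOFFS[stimulus_type][response]
    # Unreliable trial (~20%): feedback is reversed, as if the other response had been made.
    other = "No" if response == "Yes" else "Yes"
    return PAYOFFS[stimulus_type][other]

# Example: responding 'Yes' to a reward-associated scene usually wins $0.35,
# but on roughly 20% of trials yields the -$0.05 outcome instead.
print(simulate_trial("reward", "Yes"))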


After learning, participants also rated whether they thought the stimuli were reward-associated or loss-associated. In the first study, fMRI data were also collected after learning was completed. Finally, three weeks later, participants tried to remember and rate whether each stimulus was reward-associated or loss-associated. This was completed online in the first study and in the lab in the second study.

What did they find?

In both the ‘spaced’ and ‘massed’ conditions, participants learned the best 'Yes' or 'No' response quickly, and performance at the end of learning was equivalent for the two types of training. However, three weeks after learning, participants remembered the value of the ‘spaced’ stimuli much better. Additionally, working memory capacity was associated with learning performance once participants were familiar with the task (i.e. in the ‘massed’ session). These results indicate that spacing learning sessions out over time results in better long-term memory for whether stimuli are reward-associated or loss-associated, and that working memory is recruited during shorter learning paradigms (the ‘massed’ paradigm). In the fMRI data, using a searchlight (multivariate pattern analysis) approach, the authors found that patterns of activity in the medial temporal cortex and prefrontal cortex distinguished reward-associated from loss-associated stimuli for ‘spaced’ but not ‘massed’ stimuli. Patterns of brain activity also differed between the ‘spaced’ and ‘massed’ conditions in the striatum, a region of the brain known to be involved in reward learning. These results suggest that the neural mechanisms underlying ‘spaced’ and ‘massed’ learning may be partially distinct.

What’s the impact?

This study is the first to clearly demonstrate how learning over a period of several weeks, rather than minutes, affects both the maintenance of what is learned and its neural underpinnings. The results have implications for studies of disorders that may involve changes in reward learning, such as addiction and mood disorders.


Wimmer et al., Reward learning over weeks versus minutes increases the neural representation of value in the human brain. Journal of Neuroscience (2018). Access the original scientific publication here.

The Behaviour and Neurobiology Underlying Leadership Decisions

What's the science?

Leadership is critical within our society. Leaders such as teachers, soldiers, politicians, and parents, to name a few, are continuously responsible for making decisions that will affect others. One aspect of leadership is the acceptance of responsibility, as leaders are responsible for others through the choices they make. Although leadership is central to human societies, we still don’t understand the neurobiology of leadership and why some choose to lead while others choose to follow. This week in Science, Edelson and colleagues use a decision-making task to examine leadership choices and the brain regions involved.

How did they do it?

The authors developed a behavioural task to examine leadership preferences. Participants performed two tasks: a baseline task and a delegation task. In both tasks, the participant’s payoff depended on their choices, which allowed the authors to assess preferences related to risk, loss, and ambiguity. In the baseline task, participants chose over several trials whether to accept a gamble with varying probabilities of loss or gain. In some trials the exact probabilities of gains and losses were shown to the participant (to assess risk), while in other trials these probabilities were not shown (i.e. ambiguous, and closer to real-life decisions). In the delegation task, participants decided on the same gambles; however, they could choose to lead and make a decision on behalf of their group, or defer and follow the choice of the group members. There were two trial types, ‘self’ and ‘group’ trials, in which the leader’s choice directly affected either their own payoff or the payoff of the group members. The group members as a whole had more information about the probabilities of an outcome, which mimics a real-life scenario where deciding as a group may be advantageous. The authors analyzed baseline choice data to determine whether preferences for risk, loss, or ambiguity were associated with leadership scores (obtained using established scales as well as real-life data) and whether these preferences shift when making choices that impact others. They then used computational modelling and fMRI analysis to understand preferences for leadership and the brain activity underlying these choices.

What did they find?

In the ‘group’ trials, participants deferred to the decision of the group more often than in the ‘self’ trials (a 17.3% increase in deferral rate), demonstrating an overall preference for avoiding responsibility. Responsibility aversion (the avoidance of responsibility) was correlated with leadership scores, such that those with lower responsibility aversion had higher leadership scores. Responsibility aversion was not related to individual preferences for risk, loss, and ambiguity, nor to regret, blame, and guilt in social situations (assessed in a separate experiment). The authors then used a computational model that took participants’ preferences into account to model these decisions. From participants’ preferences they derived subjective values (i.e. the difference in value between accepting and rejecting a gamble, for each participant) of choices to lead or defer. They found that deferral choices were more frequent when the subjective value difference was low (i.e. less discriminability between the value of the two choices). In other words, when a participant was more certain of their choice (high discriminability), they were more likely to take on responsibility. The authors derived ‘deferral thresholds’: the critical level of certainty that determines whether a participant will defer on a given gamble. Using computational modelling, they found that when taking responsibility for others there was a shift in the deferral threshold, indicating a higher demand for certainty about a choice; the greater the shift, the more likely an individual was to defer (i.e. not lead), and these shifts were correlated with leadership scores. Since individual values about risk, loss, and ambiguity were not related to leadership scores, these results suggest that the key to determining whether one will lead lies in the shift in how much certainty a person requires about their decision once they are responsible for others. A simplified sketch of this kind of threshold rule is shown below.
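The following is a minimal, hypothetical sketch of the deferral-threshold idea described above. It is our illustrative reconstruction, not the authors' actual computational model, and the names and numbers are invented for the example.

def choose(value_accept, value_reject, deferral_threshold):
    """Lead or defer based on how discriminable the two options are.

    The absolute difference in subjective value between accepting and rejecting
    the gamble serves as a proxy for the decision-maker's certainty; if it falls
    below the deferral threshold, the person defers to the group.
    """
    certainty = abs(value_accept - value_reject)
    return "defer" if certainty < deferral_threshold else "lead"

# The study found that, when responsible for others, most people demand more
# certainty before leading, i.e. the threshold shifts upward in 'group' trials.
threshold_self = 0.20          # hypothetical threshold in 'self' trials
threshold_group = 0.20 + 0.15  # hypothetical upward shift in 'group' trials

print(choose(0.50, 0.25, threshold_self))   # certainty 0.25 >= 0.20 -> 'lead'
print(choose(0.50, 0.25, threshold_group))  # certainty 0.25 <  0.35 -> 'defer'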

Brain, Servier Medical Art, image by BrainPost, CC BY-SA 3.0

Brain activation was higher in the middle temporal gyrus during ‘group’ trials compared with ‘self’ trials, and temporo-parietal junction activity was correlated with the decision to defer (i.e. not lead). Activity in several brain regions, including the medial prefrontal cortex (involved in self-reflection), was associated with the subjective value difference, while activity in the anterior insula was associated with the probability of choosing to lead. The authors fit a dynamic causal model to these four brain regions and found a relationship between individual differences in activity within this brain network, the shift in deferral threshold, and leadership scores. Temporal gyrus activity was also associated with a reduced (inhibited) influence of the medial prefrontal cortex on anterior insula activity. This inhibitory influence of the medial prefrontal cortex on the anterior insula was reduced in leaders, suggesting a potential neural mechanism underlying the shift in deferral threshold and responsibility aversion.

What's the impact?

This is the first study to examine the behaviour and neurobiology underlying leadership preferences. We now know that the majority of individuals show responsibility aversion and that this aversion is a critical driver of the decision to lead. Those who were less likely to lead showed a greater increase in the amount of certainty they demanded when deciding to take responsibility for others. Further, this study provides insight into the brain regions involved in leadership choices. These findings have important implications for understanding leadership in society and can help inform leadership decisions and their consequences.

Edelson et al., Computational and neurobiological foundations of leadership decisions. Science (2018). Access the original scientific publication here.