Threshold for Odor Detection Adapts Based on Past Experience

Post by Shireen Parimoo

What's the science?

Animals react to sensory input from the environment, but sometimes the input isn't strong enough to elicit a behavioral response. How much sensory input is needed before an organism detects it? Several models attempt to explain how external sensory information, like sound, is detected in the brain. For example, the absolute threshold model proposes that a sound will be detected once it reaches a certain intensity (i.e., the threshold). According to the derivative model, the rate at which a sound's intensity changes determines when it is detected, whereas the fold change model posits that detection depends on how much the sound changes in proportion to its original intensity. Although these models have been applied to explain sensory detection in various organisms and across different modalities, no study has directly compared them with one another. This week in Neuron, Levy and Bargmann used computational modeling and calcium imaging to develop a unified model for odor detection in Caenorhabditis elegans (roundworms).
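To make the differences between these detection rules concrete, here is a minimal sketch in Python of the three rules applied to a discretized intensity trace. The function names, the use of the starting intensity as the fold-change baseline, and the threshold parameters are illustrative assumptions, not quantities taken from the study.

```python
import numpy as np

def absolute_threshold_detect(stimulus, threshold):
    """Detect at the first time point where intensity exceeds a fixed threshold."""
    above = stimulus > threshold
    return int(np.argmax(above)) if above.any() else None

def derivative_detect(stimulus, rate_threshold, dt=1.0):
    """Detect when the rate of change of intensity exceeds a threshold."""
    rate = np.diff(stimulus) / dt
    above = rate > rate_threshold
    return int(np.argmax(above)) + 1 if above.any() else None

def fold_change_detect(stimulus, fold_threshold):
    """Detect when intensity exceeds a multiple (fold change) of the starting intensity."""
    baseline = stimulus[0]
    above = stimulus / baseline > fold_threshold
    return int(np.argmax(above)) if above.any() else None
```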

How did they do it?

Roundworms have a simple nervous system that makes it possible to record the activity of specific neurons. The authors measured the sensory activity of an olfactory neuron called AWCON in response to changes in levels of the odorant butanone. Specifically, using a microfluidic setup, AWCON calcium activity was recorded in immobilized animals across a wide range of odor concentrations and timescales. Neuronal activity and navigation decisions were also examined in animals freely moving in odor gradients controlled by a specialized microfluidic device.

The authors rigorously tested many models that predict features of neuronal activity (such as the neuronal response and its latency) and navigation behavior, including the absolute threshold, derivative, and fold change models. They also created an adaptive concentration threshold (ACT) model, in which sensory activity is initiated when the odor concentration reaches a threshold; crucially, this threshold continuously adapts to the odor. The ACT model includes (i) a threshold constant, which sets the neuron's sensitivity, and (ii) an adaptation time, which determines how long the neuron retains a memory of past external input. To determine whether the ACT model is generalizable, it was also tested on a separate dataset of neuronal activity in zebrafish responding to visual input. To identify the molecular basis of sensory detection, the authors examined the role of EGL-4, a protein kinase in the AWCON neuron that is involved in olfactory learning. They compared the effect of butanone concentration in loss-of-function mutants without functional EGL-4, gain-of-function mutants with enhanced EGL-4 activity, and wild-type animals. Finally, they performed theoretical analyses to determine which model allows both accurate and fast sensory responses, two key features of sensory neuron performance.
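For intuition, below is a minimal sketch of what an adaptive-threshold detector of this kind might look like. It assumes one plausible form consistent with the description above (a threshold that relaxes exponentially toward the recent odor concentration); the equations, function names, and parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def act_response(concentration, threshold_constant, adaptation_time, dt=0.1):
    """Sketch of an adaptive concentration threshold (ACT) detector.

    The detection threshold relaxes exponentially toward the recent odor
    concentration with time constant `adaptation_time` (the neuron's "memory"),
    and the neuron is active whenever the current concentration exceeds
    `threshold_constant` times that adapted threshold.
    """
    concentration = np.asarray(concentration, dtype=float)
    threshold = concentration[0]                     # start adapted to the initial level
    active = np.zeros(concentration.shape, dtype=bool)
    for t, c in enumerate(concentration):
        active[t] = c > threshold_constant * threshold
        threshold += (c - threshold) * dt / adaptation_time   # exponential adaptation
    return active
```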

What did they find?

Previous models did not adequately predict the observed neuronal responses and latencies, and could each only match a subset of the experimental observations. For instance, the authors found that calcium responses depend on both the butanone concentration and the rate of concentration change, which is inconsistent with the absolute threshold and derivative models. The ACT model, on the other hand, predicted neuronal responses for both slow and fast changes in butanone concentration. The ACT model also predicted neuronal activity and aversive navigation decisions, like reversals and pauses, under more natural conditions in which animals freely navigated odor gradients. This indicates that odor sensation and navigation are driven by an adaptive threshold mechanism that allows a comparison of past and current sensory inputs.


Interestingly, loss of EGL-4 function lengthened the threshold adaptation time relative to wild-type animals, whereas enhanced EGL-4 function shortened it, suggesting that the protein kinase EGL-4 tunes the adaptation time of the sensory detection threshold. The ACT model also predicted activity in the optic tectum of zebrafish in response to visual input, demonstrating its generalizability. Finally, the authors showed that, in contrast to the alternative models, an adaptive-threshold mechanism allows sensory neurons to respond both quickly and accurately to external stimuli, highlighting its benefit for reliable sensing of the environment.

What's the impact?

By combining computational modeling with quantitative assays, this study is the first to systematically compare previous models of sensory detection and to demonstrate how detection is driven by a combination of current and past sensory inputs from the environment. The ACT model is powerful because it encompasses elements of previous models under different conditions and further generalizes to visual stimuli. These findings pave the way for future research to uncover the neurobiological basis of sensory detection and to test the generalizability of the model across organisms and sensory modalities.

Levy & Bargmann. An adaptive-threshold mechanism for odor sensation and animal navigation. Neuron (2020). Access the original scientific publication here.

Learning from Your Mistakes: The Role of Dopamine Activity in Prediction Errors

Post by Lincoln Tracy 

What's the science?

Understanding how associative learning occurs in the brain is one of the most important questions in neuroscience. A key concept in associative learning is the prediction error: a mismatch between what we expect to happen and what actually happens. Both humans and animals use prediction errors to learn; the greater the error, the greater the learning. Prediction errors can be calculated using the temporal difference method. The ability to map millisecond-by-millisecond changes in dopamine neuron firing has been a major step forward in understanding prediction errors. However, some aspects of prediction errors have yet to be fully explored. Previous research has demonstrated that optogenetics can be used to shunt, or attenuate, dopamine neuron activity to prevent learning about a reward when it is delivered. This week in Nature Neuroscience, Maes and colleagues used second-order conditioning to determine whether shunting dopamine neuron activity with laser light at the moment a reward-predicting visual cue is presented similarly prevents learning.
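For readers unfamiliar with the temporal difference formulation, here is a minimal sketch of a single TD learning step; the variable names and the learning-rate and discount values are illustrative assumptions, and this code is not part of the study itself.

```python
def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.95):
    """One temporal-difference (TD) learning step.

    The prediction error is the mismatch between what was expected
    (values[state]) and what actually happened (the reward plus the
    discounted value of the next state). Larger errors drive larger updates.
    """
    prediction_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * prediction_error
    return prediction_error

# Example: a cue that unexpectedly leads to reward produces a large error.
values = {"cue": 0.0, "reward_state": 0.0}
error = td_update(values, "cue", "reward_state", reward=1.0)  # error = 1.0 on the first trial
```

In this framing, a dopamine transient is proposed to behave like the prediction error: large when an outcome (or a reliable predictor of one) is unexpected, and shrinking as predictions improve.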

How did they do it?

The authors used rats genetically modified to express Cre recombinase, an enzyme derived from bacteria, under the control of a tyrosine hydroxylase promoter. The rats underwent surgery in which a Cre-dependent viral vector carrying halorhodopsin was injected into the ventral tegmental area (VTA) of the brain. Optic fibers were also implanted into the VTA; these would be targeted by the lasers during optogenetic stimulation. The rats were then placed on a food-restricted diet for four weeks before being conditioned to associate a specific visual cue (stimulus A; a flashing light) with a reward (a chocolate-tasting sucrose pellet). After the training period, the rats completed two experiments: a second-order conditioning experiment and a blocking experiment. In both experiments, the percentage of time the rats spent approaching the food port where the pellet was delivered was taken as a measure of how conditioned they had become. The second-order conditioning experiment had two types of trials. In both, the previously conditioned cue (the flashing light) was used to reinforce learning about a novel cue: on each trial the flashing light was paired with either a chime (stimulus C) or a siren (stimulus D). In the C trials, continuous laser light was delivered to the VTA half a second before the presentation of the flashing light, to disrupt the dopamine transmission that would normally occur when the reward-predicting cue was presented. In the D trials, the laser light was delivered to the VTA at a random time point after the flashing light was presented. Following this training, the rats completed probe testing, in which the chime and siren were presented without a reward. The authors then compared the behavioral responses between the two trial types to determine whether disrupting dopaminergic transmission impacted learning.

In the blocking experiment, the previously conditioned cue (the flashing light) was presented in separate compounds with each of two novel auditory stimuli, a tone (stimulus X) or a click (stimulus Y), and each compound was paired with a reward. Normally, under these conditions, the conditioned light blocks learning about the relationship between X (or Y) and the reward. The reasoning was that if the conditioned cue carries a prediction of the upcoming reward, then disrupting this prediction should prevent the light from blocking learning about X. To test this, laser light was delivered to the VTA at the onset of the flashing light in the X trials, and at a random time point between trials in the Y trials. Learning about these compounds was compared to learning about a control compound consisting of a non-conditioned steady light and another auditory cue, a white noise (stimulus Z), which was also paired with a reward. Following this compound training, the rats underwent probe testing, in which the X, Y, and Z stimuli were presented alone and without a reward.

What did they find?

Optogenetic manipulation did not alter responding during second-order training. However, during the probe test, the rats responded to the D stimulus more frequently than to the C stimulus. These results indicate that attenuating dopaminergic activity at the onset of the reward-predictive cue prevented second-order conditioning to stimulus C. As in the second-order experiment, optogenetic manipulation did not alter responding during blocking training. During the probe test of the blocking experiment, the rats responded more to the control stimulus, Z, than to the blocked stimuli, X and Y. These results confirmed that the conditioned flashing light was able to block learning about the novel cues X and Y. However, attenuating the dopamine signal to the flashing light did not disrupt the ability of this stimulus to block learning about the novel stimulus X, suggesting that the dopamine signal to good predictors of reward represents a prediction error rather than a prediction about reward.


What's the impact?

This study provides clear evidence that increases in the firing activity of dopaminergic neurons following the presentation of a reward-predicting cue serve as prediction errors that support associative learning, much like the previously demonstrated reward-evoked changes in dopaminergic firing. Importantly, these findings suggest a broader role for dopaminergic signaling in driving associative learning than current theories assume.


Maes et al. Causal evidence supporting the proposal that dopamine transients function as temporal difference prediction errors. Nature Neuroscience (2020). Access the original scientific publication here.

Distinct Patterns of Activity Underlie the Motivation to Be Fair

Post by Shireen Parimoo

What's the science?

Why are people motivated to be fair? People can be fair for prosocial reasons, when they value the well-being of others, or for strategic reasons, when being unfair might cost them something. In the ultimatum game, which is often used to evaluate fairness, people offer to split a sum of money with a recipient who can accept or reject the offer. Participants typically offer 40% of the sum, which suggests that they could be acting prosocially by providing a nearly equal split. Conversely, they could be acting strategically to ensure that the recipient does not reject the offer. The ultimatum game activates brain regions, like the dorsolateral prefrontal cortex (dlPFC), that are involved in strategic processing. Prosocial behavior, in contrast, is thought to be supported by Theory of Mind (ToM), the ability to empathize with and understand other people's mental states. However, no study had yet examined the pattern of activity in brain regions belonging to the ToM network while people make fair or unfair decisions. This week in Social Cognitive and Affective Neuroscience, Speer and Boksem used functional magnetic resonance imaging (fMRI) to distinguish between patterns of activity associated with prosocial and strategic motivations in the cognitive control and ToM networks.

How did they do it?

Thirty-one young adults played the ultimatum game (UG) and the dictator game (DG) while undergoing fMRI scanning. They had to split €20 and could offer between €0 and €14 to their opponent. Half of the trials were UG trials and the other half were DG trials. Unlike in the UG, there is no strategic advantage to offering a fair split in the DG, because opponents cannot reject offers made by participants. To evaluate behavior, the authors calculated the difference between the amounts of money that participants offered in the two games. Participants were categorized as selfish players if there was a large difference in their offers between the two games, which suggests that they were acting strategically during the UG by offering more money to their opponent.
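As a rough illustration of this behavioral measure, here is a minimal sketch of how such a categorization could be computed; the function name, the use of mean offers, and the cutoff value are assumptions for illustration, not the authors' actual criterion.

```python
import numpy as np

def categorize_player(ug_offers, dg_offers, cutoff=4.0):
    """Label a participant as 'selfish' (strategic) or 'prosocial' from their offers.

    A large drop from UG offers to DG offers suggests that the fair UG offers
    were strategic. The cutoff (in euros) is purely illustrative.
    """
    offer_gap = float(np.mean(ug_offers)) - float(np.mean(dg_offers))
    label = "selfish" if offer_gap > cutoff else "prosocial"
    return label, offer_gap
```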

The authors examined patterns of activity in the ToM and cognitive control networks during the two games. First, they used Neurosynth (an online database of fMRI studies) to identify brain regions that are often active during ToM and cognitive control tasks; these included the temporoparietal junction (TPJ) and the medial prefrontal cortex (mPFC) in the ToM network, and the dlPFC and posterior cingulate cortex (PCC) in the cognitive control network. For each participant, they trained a model (a support vector machine classifier) to distinguish between the two games based on the pattern of activity in these networks and in individual brain regions. The classifier was trained on brain activity from a subset of UG and DG trials and then tested on a different set of trials, predicting whether each pattern of activity came from the UG or the DG. The authors correlated classifier performance with behavior to determine how patterns of activity related to participants' motivations in the two games. Finally, to identify other brain regions that might be differentially activated by the two games, they applied the classifier across the whole brain, one small area at a time, and again correlated classifier performance with behavior.
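The decoding step follows a standard multivariate pattern analysis recipe. The sketch below shows what cross-validated UG-versus-DG classification with a linear support vector machine might look like for a single region or network; the function name, the number of folds, and the data layout are assumptions, and this is not the authors' code.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def decode_game(trial_patterns, game_labels, n_folds=5):
    """Cross-validated decoding of game type (UG vs. DG) from activity patterns.

    trial_patterns: array of shape (n_trials, n_voxels) for one region or network.
    game_labels:    array of shape (n_trials,) with 0 = DG and 1 = UG.
    Returns the mean classification accuracy (chance level is 0.5).
    """
    classifier = SVC(kernel="linear")
    scores = cross_val_score(classifier, trial_patterns, game_labels, cv=n_folds)
    return float(np.mean(scores))
```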

What did they find?

In general, people made higher offers to their opponents in the UG than in the DG. There were large individual differences in motivation, as prosocial participants made similar offers between the two games whereas selfish players offered comparatively less money to their opponent in the DG. Classification accuracy in the ToM and cognitive control networks was related to behavior. Distinct patterns of activity in these networks were found to underlie prosocial and strategic motivations, as the classifier was more accurate at distinguishing between the two games when participants were behaving strategically than when they were driven by prosocial motivations.


Patterns of activity in individual regions of the ToM and cognitive control networks also differed between prosocial and selfish players. For example, activity in the left TPJ differed more between the two games in selfish players than in prosocial players. Similarly, classification accuracy in the bilateral dlPFC and PCC was higher when the difference in offers was larger, suggesting that the pattern of activity was more distinct between the two games in selfish than in prosocial players. Finally, classifier performance in other regions, including the bilateral TPJ, mPFC, and the left inferior frontal gyrus (IFG), was also related to behavior. These results indicate that prosocial players exhibited similar patterns of activity in the two games because they did not differentially engage in strategic and prosocial reasoning. Selfish players, on the other hand, engaged regions of the ToM and cognitive control networks differently when they were motivated to behave strategically in the UG, even when their offers did not differ from those of prosocial individuals.

What's the impact?

This study is the first to demonstrate that distinct patterns of activity in the ToM and cognitive control networks underlie prosocial and strategic motivations. Importantly, these results provide deeper insight into how people rely on both cognitive control processes and ToM processes, like empathy, when making decisions about fairness.

Speer and Boksem. Decoding fairness motivations from multivariate brain activity patterns. Social Cognitive and Affective Neuroscience (2020). Access the original scientific publication here.