Two Neural Features Related to Information Encoding and Behavior

Post by Stephanie Williams

What's the science?

To understand how neurons encode information about the external world, we need to understand which statistical features of individual neurons carry that information. In the past, research groups have suggested that the mean firing rate of individual neurons or groups of neurons may represent information about stimuli. Other research has looked at the amount of noise that populations of neurons share (correlated noise); however, the specific statistical properties of neurons related to information encoding are not well understood. This week in The Journal of Neuroscience, Nogueira and colleagues identify neural features that explain most of the variance in information encoding and behavior. The authors specifically address two questions: 1) which features are related to information encoding, and 2) do those features affect behavioral performance?

How did they do it?                             

This study involved both experimental data collection and theoretical modelling. For the experimental arm of the study, four monkeys were trained to perform three different tasks. Two of the three tasks were direction discrimination tasks (one coarse discrimination and one fine discrimination), and the third was a spatial attention task in which two of the monkeys had to detect a change in the orientation of a grating stimulus (i.e. a Gabor patch). The authors recorded neural activity from two brain regions, the middle temporal area (MT) and area 8a in the lateral prefrontal cortex, while the monkeys performed a given task. Performance on the tasks was quantified as the number of correct reports of motion direction for the discrimination tasks, and as the mean reaction time for the attention task. To test which neural features were related to 1) information encoding and 2) behavioral performance, the authors isolated features of interest by iterating through each extracted feature and changing the values of one feature while holding all of the other features constant. Using a statistical resampling technique called bootstrapping, they selected the bootstrap iterations in which the held-constant features fell within a narrow range around their median values, so that only the feature of interest fluctuated across samples (they call this method "conditioned bootstrapping"). The authors then trained a binary classifier on the population activity to predict the condition within each task (i.e. which Gabor patch was attended, or leftward vs. rightward motion stimulus).
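To make the resampling step concrete, here is a minimal sketch of the conditioned-bootstrapping idea in Python. The function names, the fractional tolerance band, and the way features are passed in are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def conditioned_bootstrap(trials, feature_of_interest, conditioning_features,
                          n_boot=5000, tolerance=0.05, seed=0):
    """Resample trials with replacement, keeping only bootstrap samples in which
    the conditioning features stay within a narrow band around their medians,
    so that only the feature of interest fluctuates across the kept samples.

    trials : (n_trials, n_neurons) array of spike counts.
    feature_of_interest, conditioning_features : callables mapping a trials
        array to a scalar (hypothetical stand-ins for the paper's features).
    """
    rng = np.random.default_rng(seed)
    n_trials = trials.shape[0]

    # Plain bootstrap: resample trials and evaluate every conditioning feature.
    samples = [trials[rng.integers(0, n_trials, n_trials)] for _ in range(n_boot)]
    cond_values = np.array([[f(s) for f in conditioning_features] for s in samples])
    medians = np.median(cond_values, axis=0)

    # Keep samples whose conditioning features all lie within +/- tolerance
    # (as a fraction of the median) of their median values.
    keep = np.all(np.abs(cond_values - medians) <= tolerance * np.abs(medians), axis=1)
    return np.array([feature_of_interest(s) for s, k in zip(samples, keep) if k])
```

Each call returns the "fluctuations" of one feature while the others are effectively held constant; repeating this for every feature lets one ask which feature's fluctuations track decoding and behavioral performance.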

For the theoretical arm of the study, the authors mathematically defined decoding performance, simulated experimental data, and tested decoding performance on the simulated data. To define decoding performance, the authors first derived a mathematical expression that described the theoretical optimal performance of a linear classifier. They defined two terms representing the two categories of features of interest: 1) the population signal, a measure of how the overall activity of the neuronal population is modulated as a function of the stimulus condition, and 2) the projected precision, which is related to the trial-by-trial variability. To simulate population activity, the authors built a neural population activity model with a larger ensemble than was recorded experimentally (N = 1000 model neurons), with each neuron's activity modeled as a function of two stimuli presented at three different strengths. They also incorporated a mathematical term representing noise correlations (correlated trial-to-trial variability) between neurons. To model behavioral performance, the authors used an optimal linear classifier to make predictions from simulated neural activity. They then compared the performance of their theoretical decoder with the performance of the decoder trained on their experimental data.
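As a rough illustration of how these two quantities can be computed from simulated population activity, here is a hedged numpy sketch. The simulation parameters are made up, and the formulas used here (population signal as the length of the vector joining the mean responses; projected precision as the inverse of the noise covariance projected onto that direction) are one common formalization consistent with the post, not necessarily the authors' exact equations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 1000, 2000   # model ensemble larger than the recorded one

# Hypothetical tuning: mean response of each neuron to two stimulus conditions.
mu_a = rng.normal(10.0, 2.0, n_neurons)
mu_b = mu_a + rng.normal(0.5, 0.5, n_neurons)

def simulate(mu):
    """Trial-by-trial responses with a shared noise source (noise correlations)."""
    shared = rng.normal(0.0, 1.0, (n_trials, 1))            # common fluctuation
    private = rng.normal(0.0, 1.0, (n_trials, n_neurons))   # independent noise
    return mu + 0.5 * shared + private

resp_a, resp_b = simulate(mu_a), simulate(mu_b)

# Population signal: length of the vector joining the two mean population responses.
d_mu = resp_b.mean(axis=0) - resp_a.mean(axis=0)
population_signal = np.linalg.norm(d_mu)

# Projected precision: inverse of the noise covariance projected onto the signal direction.
u = d_mu / population_signal
residuals = np.vstack([resp_a - resp_a.mean(axis=0), resp_b - resp_b.mean(axis=0)])
projected_precision = 1.0 / (u @ np.cov(residuals.T) @ u)

# Discriminability of a linear readout along the signal direction combines the two features.
d_prime_squared = population_signal**2 * projected_precision
print(population_signal, projected_precision, d_prime_squared)
```

In this formulation, increasing either the population signal or the projected precision (while the other is held fixed) increases the linear decoder's discriminability, which is the relationship the conditioned-bootstrap analysis probes in the recorded data.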

What did they find?

The authors found that two of the features they examined strongly affected how much information was encoded and were the strongest predictors of behavioral performance. Changing either 1) the population signal or 2) the projected precision (one at a time, while holding all other features constant) significantly affected the amount of encoded information and also predicted changes in behavioral performance.


The first feature, the population signal, is related to population tuning: it is the length of the vector joining the mean population responses across the different experimental conditions. The second feature, projected precision, is related to the amount of trial-by-trial variability: it was calculated as the inverse of the population covariability projected onto the direction of the population signal. Importantly, the authors did not find that other features (such as global activity and mean pairwise correlations), which had previously been suggested by other research, were related to the amount of encoded information once the two identified features were controlled for. However, it is worth noting that changing the global activity and correlation features did change the amount of information encoded in population activity when the two features the authors identified were not controlled for.

What's the impact?

The authors show for the first time that two features, population signal and projected precision, modulate the amount of information encoded by finite neuronal populations and predict changes in behavior. They also show that two other features did not modulate the amount of encoded information or behavioral performance. These findings shed light on the specific properties of neurons involved in encoding information.

Nogueira et al. The effects of population tuning and trial-by-trial variability on information encoding and behavior. J. Neurosci (2019). Access the original scientific publication here.

The Role of Evolution in Brain Connectivity in Schizophrenia

Post by Elisa Guma

What's the science?

Schizophrenia is a debilitating psychiatric disorder characterized by hallucinations, delusions, and cognitive dysfunction, and is often associated with impaired brain connectivity. The disorder's genetic origin, its links to human-specific traits, and its similar prevalence across societies (1% of the population is affected globally) have led to the idea that human brain evolution may have played a role in its development. This week in Brain, van den Heuvel and colleagues investigate schizophrenia-related changes in brain connectivity in the context of evolutionary changes in the human brain by comparing humans and chimpanzees.

How did they do it?

To measure brain connectivity, the authors studied diffusion-weighted imaging data (sensitive to the integrity of the brain's white matter tracts and myelin levels) from individuals with schizophrenia and gender-matched healthy control subjects, as well as from chimpanzees. Connectome maps were created by 1) subdividing the cerebral cortex into 114 subdivisions based on a commonly used brain atlas, and then 2) calculating cortico-cortical connectivity between every pair of brain regions. The measure of cortico-cortical connectivity was based on whether each pair of brain regions was interconnected by common diffusion-weighted streamlines (measuring shared white matter tracts). The authors compared the connectome maps of individuals with schizophrenia to those of controls to examine differences in brain connectivity. To confirm whether the results they observed were schizophrenia-specific, they performed the same analysis for a variety of other neuropsychiatric and neurological disorders, including major depressive disorder, bipolar disorder, obsessive-compulsive disorder, autism spectrum disorder, the behavioural variant of frontotemporal dementia, and mild cognitive impairment. The authors also performed their analysis in a separate cohort of schizophrenia and control subjects to ensure their results were not biased by the data acquisition of their original sample.
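A minimal sketch of this connectome-construction step is shown below, assuming streamline endpoints have already been assigned to the 114 atlas regions. The function name, the input format, and the "at least one streamline" connection criterion are illustrative assumptions rather than the published pipeline:

```python
import numpy as np

def build_connectome(streamline_endpoints, n_regions=114):
    """Binary cortico-cortical connectome from tractography output.

    streamline_endpoints : iterable of (region_i, region_j) pairs, one per
        reconstructed diffusion-weighted streamline (0-based atlas labels).
    Returns an (n_regions, n_regions) boolean adjacency matrix.
    """
    counts = np.zeros((n_regions, n_regions), dtype=int)
    for i, j in streamline_endpoints:
        if i != j:                      # ignore streamlines within a single region
            counts[i, j] += 1
            counts[j, i] += 1           # connections are treated as undirected
    return counts > 0                   # connected if any streamline links the pair

# One such map is built per subject; stacking them gives a group array for comparison:
# human_maps = np.stack([build_connectome(s) for s in human_streamline_lists])
```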

To determine the human-specific connections in the brain, the authors compared the connectome maps between humans and chimpanzees. Connections that were present in at least 60% of the human subjects and 0% of the chimpanzees were classified as human-specific connections (and vice-versa for chimpanzee-specific connections). Human-chimpanzee-shared connections were classified as those present in at least 60% of humans and 60% of chimpanzees. To investigate the evolutionary nature of dysconnectivity (i.e. reduced connectivity) in schizophrenia, the authors compared human-specific connection patterns to brain dysconnectivity patterns due to schizophrenia. The authors extended their cross-species comparison to include rhesus macaques, a more distantly related primate species, to further investigate the evolutionary specialization of the human-specific connections they identified. Finally, the authors validated their findings by repeating the analysis using a different brain atlas that is designed to map homologous regions across humans and chimpanzees and subdivides cortical areas into 38 rather than 114 regions.
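The species-specificity criteria translate directly into prevalence thresholds on those per-subject maps. A short sketch, assuming boolean group arrays like the ones produced above (the array and function names are hypothetical):

```python
import numpy as np

def classify_connections(human_maps, chimp_maps, threshold=0.60):
    """Label each connection as human-specific, chimpanzee-specific, or shared.

    human_maps, chimp_maps : boolean arrays of shape (n_subjects, n_regions, n_regions).
    """
    human_prev = human_maps.mean(axis=0)   # fraction of humans showing each connection
    chimp_prev = chimp_maps.mean(axis=0)   # fraction of chimpanzees showing it

    human_specific = (human_prev >= threshold) & (chimp_prev == 0)   # >=60% of humans, 0% of chimps
    chimp_specific = (chimp_prev >= threshold) & (human_prev == 0)   # and vice versa
    shared = (human_prev >= threshold) & (chimp_prev >= threshold)   # >=60% of both species
    return human_specific, chimp_specific, shared
```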

What did they find?

First, the authors show that many brain regions involved in higher-order processing had greater dysconnectivity in patients with schizophrenia, consistent with previous studies. Next, the authors found a 94% overlap between the human and chimpanzee cortico-cortical networks, with 3.5% human-specific connections and 1.1% chimpanzee-specific connections. Many of the brain regions with human-specific connections are involved in language networks and important for semantic comprehension. The authors also identified human-specific connections in regions implicated in cognitive control, social cognition, emotional processing, and bonding behaviour. Interestingly, these functions are often impaired in individuals with schizophrenia. In fact, the authors found that patterns of schizophrenia dysconnectivity overlapped more strongly with human-specific connections than with connections shared by both humans and chimpanzees. This held true when the authors extended their cross-species comparison to include macaques, and when the data were analyzed with the alternative brain atlas. Furthermore, the human-specific connections were not associated with brain dysconnectivity patterns for any other disorder; there were only trend-level associations with bipolar disorder, which shares a genetic background with schizophrenia, as well as some symptoms such as psychosis.


What's the impact?

These findings provide compelling evidence for the hypothesis that human brain evolution may have played a role in the development of schizophrenia. The authors show that human-specific features of cortical connectivity are associated with patterns of cortical dysconnectivity in individuals with schizophrenia, suggesting that evolutionary pressure to develop higher-order functions may have rendered the brain vulnerable to dysfunction. In future work, it could be interesting to see how these patterns extend to other measures of brain function.


Van den Heuvel et al. Evolutionary modifications in human brain connectivity associated with schizophrenia. Brain (2019). Access the original scientific publication here.

Auditory Cortex Contributes to Threat Memory

Post by Sarah Hill

What's the science?

The same learning principle made famous by Pavlov and his dogs - classical conditioning - is exercised when an animal associates a neutral stimulus with a threat. For example, a mouse that has learned to associate an auditory cue with an impending aversive stimulus (e.g. a foot shock), will exhibit freezing behavior upon hearing the cue even if the cue is no longer followed by a shock. This type of aversive learning results in a threat memory, a form of memory important for future avoidance of aversive stimuli. Whether the auditory cortex, a brain region located in the temporal lobe, is involved in threat memory is unclear, as lesions to this brain region have had mixed effects on memory. This week in Neuron, Dalmay and colleagues show that the auditory cortex and other subregions of the temporal cortex contribute to threat memory acquisition and retrieval.

How did they do it?

The authors used optogenetic methods to inhibit the auditory cortex and conditioned mice to associate an auditory cue with a foot shock. They carried out both discriminative (i.e. using a conditioned stimulus [CS+] and a neutral stimulus [CS-]) and non-discriminative conditioning (i.e. using only a CS+), presenting one set of animals with complex naturalistic auditory cues (akin to sounds heard in nature) and another with pure tones. To test threat memory, they presented mice the next day with the acoustic stimuli, this time without the associated foot shock, and recorded freezing behavior as a measure of the fear response. An analogous series of experiments was then carried out to determine the contribution of neighboring brain areas to threat memory. Optogenetic techniques were similarly used to inhibit adjacent regions of the temporal neocortex, including the ventral region of the secondary auditory cortex and the temporal association cortex, as well as neuronal axons projecting to the amygdala, the brain region that mediates the fear response. Fear conditioning was again carried out, followed by threat memory testing.

What did they find?

Mice with auditory cortex inhibition exhibited reduced freezing behavior following presentation of complex naturalistic auditory cues, but not after presentation of pure tone cues, suggesting that the role of the auditory cortex in threat memory depends on stimulus complexity. This effect was observed whether the auditory cortex was inhibited during fear conditioning or during memory retrieval, as well as in the context of both discriminative and non-discriminative conditioning. Thus, the auditory cortex was shown to contribute in a stimulus-dependent manner to discriminative and non-discriminative threat memory expression. In contrast, mice with inhibition of adjacent temporal cortex regions displayed significant memory impairments regardless of stimulus complexity. Some neocortical subregions were shown to contribute more to threat memory than others, particularly the ventral region of the secondary auditory cortex and the temporal association cortex. Finally, inhibition of amygdala-projecting neurons resulted in reduced freezing behavior when paired with complex, but not pure tone, auditory cues. In other words, complex acoustic stimuli selectively engage direct information transfer between the neocortex and amygdala to elicit auditory threat memory expression.


What's the impact?

This study conclusively demonstrates a role for the temporal cortex, including the auditory cortex, in auditory threat memory. These findings are particularly important for understanding the extent to which the neocortex participates in learning and memory and the circumstances in which this form of neocortical processing occurs. 


Dalmay et al. A Critical Role for Neocortical Processing of Threat Memory. Neuron (2019). Access the original scientific publication here.