How Does the Brain Map Our Increasingly Complex Social World?

Post by Flora Moujaes

What's the science?

When navigating the world around us, it is imperative that we keep track of our increasingly complex social network: who our family, friends, and co-workers are, and how they relate to each other. Developing a map of our social environment is useful because it allows us to make inferences in novel situations or from sparse information. We know from previous research that when we encounter a new physical environment, such as a new city, we first sample the environment, building up small, separate representations. Then, as we get to know the physical environment better, we integrate these representations into a coherent internal map. Is the same process used to represent abstract relationships, such as social networks? This week in Neuron, Park and colleagues use fMRI to show that the brain builds maps of social networks in the same way it builds maps of physical space.

How did they do it?

To investigate how the human brain constructs maps of social hierarchies, 27 participants were trained on a task in which individuals were ranked in two social hierarchies: popularity and competence. During training, participants were given relational information about the two dimensions on different days. For example, a participant might be presented with two individuals, Alice and Bob, and informed that Alice is more popular than Bob. The true social hierarchy could thus be mapped as a two-dimensional grid defined by the two dimensions: popularity and competence.
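To make the training structure concrete, here is a minimal sketch of how such a two-dimensional hierarchy and one-dimensional training trials can be represented. The grid size, person labels, and sampling scheme are illustrative assumptions, not the authors' actual stimuli:

```python
import itertools
import random

# Hypothetical 4 x 4 social hierarchy: each person has a (popularity, competence) rank.
people = {
    f"person_{i}": ranks
    for i, ranks in enumerate(itertools.product(range(4), repeat=2))
}

def training_trial(dimension):
    """Sample one pairwise comparison along a single dimension
    (0 = popularity, 1 = competence), mirroring training days on
    which feedback concerned only one hierarchy."""
    a, b = random.sample(list(people), 2)
    # Re-sample until the pair actually differs on the queried dimension.
    while people[a][dimension] == people[b][dimension]:
        a, b = random.sample(list(people), 2)
    higher = a if people[a][dimension] > people[b][dimension] else b
    return a, b, higher

a, b, higher = training_trial(dimension=0)
print(f"Of {a} and {b}, {higher} is more popular.")
```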

To explore whether the human brain represents social hierarchies as a one-dimensional or multidimensional map, and which brain regions are involved in the representation, the training period was followed by an fMRI experiment. This experiment examined whether participants represent social hierarchies in a single dimension (i.e., separate maps for competence and popularity) or in two dimensions (i.e., competence and popularity represented in the same map). To do this, participants were required to make inferences about the relative competence and popularity of novel pairs of individuals. Neural activity was examined in several brain regions, including the hippocampus and entorhinal cortex, which organize both spatial and non-spatial relational information into a reference map, and the orbitofrontal cortex, which is theorized to represent the goal or current state in a task structure to guide goal-directed decision making.

What did they find?

The researchers found that the brain spontaneously represents individuals' status in social hierarchies in a map-like manner in 2-D space: participants were able to generalize across both social hierarchies (popularity and competence) when presented with novel pairs. They also found that distances between people in the 2-D grid were related to neural activity: the pattern similarity between faces represented in the hippocampus, entorhinal cortex, and medial orbitofrontal cortex tracked the distance between those faces in the social network grid. This result is particularly striking because the grid itself was never shown to participants, demonstrating that participants spontaneously built up this grid-like map of the social hierarchy from the pairwise relational information they received about individuals.
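The distance-to-similarity logic behind this finding is that of a standard representational similarity analysis. The sketch below illustrates it with simulated data; the grid, voxel count, and random patterns are made up, and this is not the authors' analysis code:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical setup: 16 individuals with (popularity, competence) ranks, and one
# multivoxel activity pattern per individual's face (e.g., from the hippocampus).
coords = np.array([(p, c) for p in range(4) for c in range(4)], dtype=float)
patterns = rng.normal(size=(16, 200))  # 16 faces x 200 voxels (toy data)

# Pairwise Euclidean distances in the never-shown 2-D social grid ...
social_dist = pdist(coords, metric="euclidean")
# ... and pairwise dissimilarity (1 - correlation) between neural patterns.
neural_dissim = pdist(patterns, metric="correlation")

# A map-like representation predicts a positive rank correlation: faces far
# apart in the grid should evoke more dissimilar neural patterns.
rho, p = spearmanr(social_dist, neural_dissim)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```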

What's the impact?

Overall, this study suggests that the brain uses the same neural system to represent our physical environment and our social network. It shows that by building a social network map, participants were able to make accurate inferences in novel situations. Further, these results support the theory that the hippocampus and entorhinal cortex play a key role in constructing a global map from local experiences, whether physical or social.

Park et al. Map making: Constructing, combining, and inferring on abstract cognitive maps. Neuron (2020). Access the original scientific publication here.

Alpha-Synuclein Induces Brain Changes that Precede Locomotor Deficits

Post by Amanda McFarlan

What's the science?

Alpha-synuclein (α-syn) is a protein that is abundant in presynaptic neuronal terminals in the brain. It is commonly found in a soluble form; however, it can also aggregate into insoluble fibrils that play a key role in several neurodegenerative diseases, including Parkinson's disease and dementia with Lewy bodies. Imaging studies of synucleinopathies (diseases caused by an accumulation of α-syn aggregates) have provided new insight into how brain areas are affected by these aggregates. However, it remains unknown how α-syn pathology changes the brain over time. This week in The Journal of Neuroscience, Chu and colleagues used diffusion and functional magnetic resonance imaging (MRI) to investigate how an injection of α-syn fibrils affects the structure and function of the mouse brain over time.

How did they do it?

The authors performed bilateral intramuscular injections of either α-syn fibrils or phosphate-buffered saline (PBS, a control) in transgenic mice expressing mutant human α-syn. They used MRI to image these mice at three time points: pre-injection, 4 weeks post-injection, and 12 weeks post-injection. At each time point, the authors performed four different scans: an anatomical scan, diffusion MRI (used to measure microstructural differences), sensory-evoked functional MRI (acquired while applying 60 seconds of thermal stimulation to the mouse hind limb), and resting-state functional MRI (used to measure spontaneous activity across the brain and identify functionally correlated brain regions). The authors also assessed changes in locomotor activity at each time point using the rotarod task, which measures how long it takes for a mouse to fall from a rotating rod. Finally, the authors used Cox proportional hazards regression models to determine which measurements best predicted survival time.
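As an illustration of the survival-modeling step, the sketch below fits a Cox proportional hazards model with the lifelines Python package. The data, column names, and effect sizes are entirely hypothetical:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-mouse data: an imaging measure (fractional anisotropy in the
# pons at 12 weeks), survival time in days, and whether death was observed (1)
# or the observation was censored (0).
df = pd.DataFrame({
    "pons_fa_12wk":  [0.42, 0.38, 0.45, 0.33, 0.40, 0.31, 0.44, 0.36],
    "survival_days": [160, 120, 180, 95, 150, 90, 175, 110],
    "event":         [1, 1, 0, 1, 1, 1, 0, 1],
})

# Fit the Cox proportional hazards model; a negative coefficient would mean
# higher FA is associated with a lower hazard (longer survival). A small ridge
# penalty keeps the fit stable on this tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="survival_days", event_col="event")
cph.print_summary()
```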

What did they find?

The authors found that, compared to control mice, mice injected with α-syn fibrils had reduced fractional anisotropy (a diffusion imaging measure thought to reflect fiber density, axonal diameter, and myelination) in the cerebellum, vermis, anterior medulla, posterior medulla, and somatosensory cortex at 4 weeks post-injection. Additionally, mice injected with α-syn fibrils had reduced fractional anisotropy compared to control mice in the pons and thalamus at 12 weeks post-injection.
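For readers unfamiliar with the measure, fractional anisotropy is computed per voxel from the three eigenvalues of the fitted diffusion tensor. Here is the standard formula as a generic illustration (not the authors' pipeline; the eigenvalues below are made-up but typical values in mm²/s):

```python
import numpy as np

def fractional_anisotropy(evals):
    """Standard FA formula from the three diffusion-tensor eigenvalues of one
    voxel. FA is 0 for perfectly isotropic diffusion and approaches 1 when
    diffusion is restricted to a single axis (e.g., a coherent fiber bundle)."""
    l1, l2, l3 = evals
    mean = (l1 + l2 + l3) / 3.0
    num = np.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(1.5) * num / den

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # elongated tensor -> FA ~ 0.80
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # isotropic tensor  -> FA = 0.00
```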

Mice injected with α-syn fibrils displayed no differences in the blood-oxygen-level-dependent (BOLD) response during sensory-evoked MRI pre-injection or at 4 weeks post-injection. However, at 12 weeks post-injection, mice injected with α-syn fibrils had a reduced BOLD response compared to control mice in the posterior medulla, anterior medulla, pons, and midbrain, suggesting that injection of α-syn fibrils reduced sensory activation. Mice injected with α-syn fibrils also had a reduced fractional amplitude of low-frequency fluctuations (fALFF, a measure of spontaneous activity at rest) compared to control mice in the midbrain, thalamus, and striatum at 4 weeks post-injection. There were no group differences in the latency to fall during the rotarod task, indicating that locomotor activity was not yet impaired in mice injected with α-syn fibrils. Lastly, using the regression models, the authors determined that reduced fractional anisotropy in the pons at 12 weeks post-injection was the strongest predictor of shortened survival.
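The resting-state measure can likewise be sketched for a single voxel's time series: the fractional version divides power in the low-frequency band by total power. The band limits, repetition time, and data below are common defaults assumed for illustration, not values taken from the paper:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)

# Toy resting-state time series for one voxel: 300 volumes at TR = 1 s.
tr = 1.0
ts = rng.normal(size=300)

# Estimate the power spectral density with Welch's method.
freqs, psd = welch(ts, fs=1.0 / tr, nperseg=128)

# Fractional ALFF: power in the low-frequency band (0.01-0.08 Hz is a common
# choice for resting-state fMRI) divided by power across the whole spectrum.
low_band = (freqs >= 0.01) & (freqs <= 0.08)
falff = psd[low_band].sum() / psd.sum()
print(f"fALFF = {falff:.3f}")
```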

What’s the impact?

This is the first study to show that structural and functional changes in the mouse brain that precede locomotor impairments are detectable as early as 4 weeks after an intramuscular injection of α-syn fibrils. Notably, survival time could be predicted by changes in fractional anisotropy in the pons. Together, these findings highlight the utility of diffusion and functional MRI for identifying markers of α-syn pathology in the brain. Understanding and improving these techniques may be especially relevant in clinical settings for detecting markers of synucleinopathies in human patients.

Chu et al. Alpha-synuclein induces progressive changes in brain microstructure and sensory-evoked brain function that precedes locomotor decline. The Journal of Neuroscience (2020). Access the original scientific publication here.

Cortical Network Responses and Visual Semantics of Movie Fragments

Post by Stephanie Williams

What's the science?

Previous neuroscience research has rigorously investigated the neural processing of so-called “low-level” visual features, such as moving lines and dot patterns. Recently, it has become possible to investigate more “naturalistic” stimuli of the kind humans encounter in daily life; in neuroimaging experiments, these naturalistic stimuli might consist of real films or natural sounds. Higher-level concepts can be extracted from these more complex naturalistic stimuli, such as whether people are present or absent in a particular frame of a film. This week in Scientific Reports, Berezutskaya and colleagues develop a procedure for extracting high-level semantic concepts from a film and use a neural encoding model to predict cortical responses in an electrocorticography dataset.

How did they do it?

Patients with medication-resistant epilepsy, who had electrodes previously implanted in their brains for clinical purposes, were shown a short film presented in 30-second chunks. The authors analyzed whether they could map high-level information from the movie onto the participants’ neural responses. To extract the high-level semantic information, the authors developed a three-part procedure: 1) they applied a visual concept recognition neural network model to extract visual concepts, 2) they used a word-embedding language model to extract semantic relationships, and 3) they used dimensionality reduction techniques to capture the components that represented the majority of the variance in the extracted concepts. To extract the visual concepts, the authors used a commercial computer vision model that processed the raw pixel information from the film and ranked the most likely concept labels by probability. The visual concepts included object names (e.g., camera, TV) as well as abstract concepts such as emotions and qualities. Next, to extract high-level semantic information, they applied an artificial neural network language model to learn word embeddings (mathematical representations of words). The output of the language model was a semantic vector for each frame that represented the linguistic and semantic ties between the words corresponding to the visual concepts. For example, the output might represent the presence or absence of characters in a frame of the movie, or motion versus stillness.
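To give a flavor of the first two steps, the toy sketch below stands in for the pipeline with made-up frame labels and random “embeddings”. The study used pretrained vision and language models; averaging label embeddings per frame is an illustrative simplification, not necessarily how the authors combined them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for step 1: visual-concept labels a vision model might assign per frame.
frame_labels = {
    0: ["man", "camera", "indoors"],
    1: ["woman", "face", "smile"],
}

# Stand-in for step 2: a word-embedding lookup (random 50-d vectors here; a real
# language model would supply meaningful embeddings).
vocab = sorted({w for labels in frame_labels.values() for w in labels})
embeddings = {w: rng.normal(size=50) for w in vocab}

def frame_semantic_vector(labels):
    """One semantic vector per frame: here, the mean embedding of that frame's
    visual-concept labels (a simple illustrative combination rule)."""
    return np.mean([embeddings[w] for w in labels], axis=0)

semantic_vectors = np.array([frame_semantic_vector(l) for l in frame_labels.values()])
print(semantic_vectors.shape)  # (n_frames, embedding_dim) = (2, 50)
```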

The authors performed principal component analysis on the semantic vectors to reduce the dimensionality of the data and then focused their analysis on the principal components that together explained the majority of the variance (70%). They then sorted movie frames according to how much variance was explained by a particular principal component. Next, the authors used the high-level semantic information to model the neural responses of subjects to each frame of the movie, fitting an encoding model to predict neural responses in the high-frequency band (HFB, 60-120 Hz). To understand the delays in neural processing associated with high-level cognitive information, they tested a series of time shifts of the neural signal relative to stimulus onset: 16 shifts in total, 8 before and 8 after the onset. They also created a cortical map of prediction accuracy to identify which regions were predicted most accurately.
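A minimal sketch of this encoding analysis on simulated data might look like the following; ridge regression, the shift values, and the scoring choice are assumptions made for illustration, not the authors' exact model:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Toy data: one semantic vector per movie frame, and one electrode's HFB power per frame.
n_frames, dim = 500, 50
semantic = rng.normal(size=(n_frames, dim))
hfb = rng.normal(size=n_frames)

# Keep the principal components that together explain ~70% of the variance.
pcs = PCA(n_components=0.70).fit_transform(semantic)

# Fit a regularized linear encoding model at several shifts of the neural signal
# relative to the stimulus; the best-scoring shift suggests the processing delay.
for shift in (-2, 0, 2, 4):  # shifts in frames (toy values, not the paper's)
    y = np.roll(hfb, -shift)
    score = cross_val_score(Ridge(alpha=1.0), pcs, y, cv=5, scoring="r2").mean()
    print(f"shift {shift:+d}: mean cross-validated R^2 = {score:.3f}")
```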

The authors also investigated whether different regions showed specialization for specific semantic concepts. They took the beta weights from their linear encoding model across the principal components for each electrode (location) and clustered the electrodes by these weights. They analyzed whether the resulting clusters of electrodes were characterized by distinct networks, and they extracted the top 5 semantic components for each cluster. To check that the results of their analysis were not due to the processing of low-level visual features rather than semantic concepts, the authors also attempted to predict neural responses using only the low-level features. Finally, the authors were interested in whether successive layers of a visual recognition network would show a hierarchy-like build-up, with increasing similarity to the extracted high-level semantic concepts and increasing prediction accuracy for the neural data. They focused this analysis on the pooling layers of a publicly available object recognition model that had been trained to recognize objects in images, and compared the neural prediction accuracy of the last pooling layer, which they expected to be sensitive to objects and general shapes, with the prediction accuracy of the semantic concepts.
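The electrode-clustering step can be sketched as follows; k-means, the number of clusters, and the “largest mean absolute beta” criterion for ranking components are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Toy beta weights: one row per electrode, one column per semantic principal component.
n_electrodes, n_components = 60, 20
betas = rng.normal(size=(n_electrodes, n_components))

# Cluster electrodes by their encoding-model weight profiles (k is hypothetical).
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(betas)

# For each cluster, list the semantic components with the largest mean absolute
# weight -- an analogue of reporting the top 5 semantic components per cluster.
for c in range(k):
    mean_abs = np.abs(betas[labels == c]).mean(axis=0)
    top5 = np.argsort(mean_abs)[::-1][:5]
    print(f"cluster {c}: top components {top5.tolist()}")
```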

What did they find?

The authors found that naturalistic visual stimuli can indeed be reduced to semantic-concept principal components that are easily interpretable. They also found that the extracted high-level semantic information captured fundamental distinctions in the film (see figure). When the authors analyzed the prediction accuracy for high-frequency band responses as a function of time shift relative to stimulus onset, they found the highest accuracy at a shift of 320 milliseconds after onset. The cortical map of high-frequency band prediction accuracy showed that the best predictions occurred in occipitotemporal, parietal, and inferior frontal cortex. When the authors clustered electrodes by their beta weights from the linear encoding model, some clusters mapped well onto specific cortical networks. For example, electrodes in cluster #1 were located in a cortical region called the lateral fusiform gyrus, and the two semantic concepts that contributed most to neural activity in this cluster were the presence of humans and of human faces. The authors repeated this analysis for the other clusters, finding distinct semantic-concept specificity for each. These results show that high-level semantic concepts are associated with distinct functional cortical networks.

[Figure: high-level semantic distinctions captured in the film]

When the authors examined whether low-level features alone could make similar neural predictions, they found that prediction accuracy was worse than for predictions made with the semantic information. This finding confirms that the authors’ results were indeed driven by semantic features rather than low-level visual features of the film. When the authors analyzed how sequential layers of the visual object recognition model were related to the semantic concepts, they found a gradual increase in similarity from the first to the last intermediate layer of the model. Similarly, when the authors analyzed the relationship between sequential layers of the model and neural prediction accuracy, they found that the fit to the neural data gradually improved across layers. When the authors compared the fit of the last intermediate pooling layer with the fit from the semantic concepts, they found a difference in whole-brain prediction accuracy that favored the semantic components. Together, these results show a gradual emergence of the semantic features from lower-level visual information across the layers of the visual recognition model.

What's the impact?

This work advances our understanding of how visual information from naturalistic stimuli is interpreted by the human brain. The authors also developed and verified a new method of extracting high-level semantic concepts by combining visual object processing and natural language processing.

Berezutskaya et al. Cortical Network Responses Map onto Data-driven Features that Capture Visual Semantics of Movie Fragments. Scientific Reports (2020). Access the original scientific publication here.