Using Artificial Neural Networks to Understand Emotion Recognition in Autistic Adults

Post by Negar Mazloum-Farzaghi

The takeaway

Compared to neurotypical adults, autistic adults have difficulty recognizing facial emotions. Machine learning models suggest that neural activity in the inferior temporal cortex may explain these differences.

What's the science?

Autism spectrum disorder (ASD) is characterized by difficulty recognizing others’ moods and emotions from facial expressions. Previous research has found that the fusiform face area and the inferior temporal (IT) cortex are involved in facial recognition. Moreover, neural activity in the human amygdala has been associated with recognizing facial emotions. A critical question that remains is whether atypical facial emotion recognition in autistic adults can be explained by perceptual difficulties or by atypical development and functioning of the brain regions associated with facial emotion processing. Brain-mapped computational models can help predict how facial emotion is represented across different brain regions in primates and how these representations relate to performance on facial emotion judgment tasks. This week in The Journal of Neuroscience, Kar used artificial neural network models of primate vision to investigate neural and behavioral markers of emotion recognition in ASD.

How did they do it?

Kar began by analyzing behavioral and neural measurements collected by Wang and Adolphs (2017) and Wang et al. (2017). In the behavioral task, Wang and Adolphs (2017) presented images of faces to neurotypical controls and high-functioning autistic adults, who were asked to judge whether each face depicted happiness or fear. They found that, compared to controls, autistic individuals showed reduced specificity in their facial emotion judgments.
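
As a concrete illustration of that behavioral measure, the minimal sketch below computes sensitivity and specificity from trial-by-trial happy-versus-fear judgments. The trial data are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical trial data: true emotion labels (1 = fearful, 0 = happy)
# and one participant's binary judgments of the same images.
true_labels = np.array([1, 1, 0, 0, 1, 0, 0, 1])
judgments = np.array([1, 1, 1, 0, 1, 1, 0, 1])

# Sensitivity: proportion of fearful faces correctly judged as fearful.
sensitivity = np.mean(judgments[true_labels == 1] == 1)

# Specificity: proportion of happy faces correctly judged as happy;
# this is the measure reported as reduced in autistic adults.
specificity = np.mean(judgments[true_labels == 0] == 0)

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```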

To further investigate these findings with computational modeling, Kar trained artificial neural network models to perform the same task. The models were composed of layers of units that correspond approximately to the brain areas (e.g., the IT cortex) and neurons of the primate ventral visual stream. To test the accuracy of these models, Kar compared the facial emotion judgments they produced with the behavioral measurements obtained from the neurotypical controls and autistic adults. Next, the author removed different layers of the network to determine which layers drove the difference between the model's match to the behavior of neurotypical controls and its match to that of autistic adults.
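
The paper's exact models and stimuli aren't reproduced here, but the sketch below illustrates the general logic, with an ImageNet-pretrained AlexNet standing in for the ventral-stream model: features from each network depth feed a cross-validated linear readout of "fear", and each readout's image-by-image predictions are correlated with human judgments. The images and judgment rates are random placeholders.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Placeholder stimuli and behavior: 40 random "images" and the fraction of
# trials on which human observers judged each image as fearful.
images = torch.rand(40, 3, 224, 224)
human_fear_rate = np.random.rand(40)

net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def features_upto(layer_idx):
    """Flattened activations after the first `layer_idx + 1` feature layers."""
    x = images
    with torch.no_grad():
        for layer in list(net.features)[: layer_idx + 1]:
            x = layer(x)
    return x.flatten(start_dim=1).numpy()

# For each depth, fit a cross-validated linear readout of "fear" and correlate
# its image-by-image predictions with the human judgment rates.
for depth in range(len(net.features)):
    feats = features_upto(depth)
    pred = cross_val_predict(
        LogisticRegression(max_iter=1000), feats, human_fear_rate > 0.5,
        cv=5, method="predict_proba")[:, 1]
    print(f"layer {depth}: human-model match r = "
          f"{np.corrcoef(pred, human_fear_rate)[0, 1]:.2f}")
```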

To determine whether the network models could establish the IT cortex or the amygdala as the primary contributor to facial emotion processing, Kar also reanalyzed recordings obtained by Wang et al. (2017) from electrodes implanted bilaterally in the amygdalae of patients with epilepsy. As in the task described above, participants were presented with images of faces and asked to discriminate between two emotions: fear and happiness. Finally, to assess the efficiency of neural connections in autistic adults during facial emotion processing, the author examined the strengths (weights) of the connections between the IT layer of the network models and their behavioral output. The author also added varying levels of noise to the activity of the IT layer, to evaluate whether the added noise would strengthen or weaken the match between the judgments made by the network models and those made by autistic adults.
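
Conceptually, that noise manipulation looks something like the sketch below, in which Gaussian noise of increasing strength is injected into stand-in "IT" activations before a fixed logistic readout produces the model's judgments. All arrays are hypothetical placeholders; the paper's actual models and fitting procedure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
it_features = rng.standard_normal((40, 512))          # stand-in IT-layer activations
readout_w = rng.standard_normal(512) / np.sqrt(512)   # stand-in readout weights

def model_fear_prob(features):
    """Fixed logistic readout mapping IT-like features to P("fear")."""
    return 1 / (1 + np.exp(-features @ readout_w))

control_rate = model_fear_prob(it_features)           # stand-in control judgments
asd_rate = np.clip(control_rate + rng.normal(0, 0.1, 40), 0, 1)  # stand-in ASD data

# Inject increasing sensory noise into the IT layer and ask whose judgments
# the noisy model's outputs now resemble more closely.
for sigma in (0.0, 0.5, 1.0, 2.0):
    noisy = it_features + rng.normal(0, sigma, it_features.shape)
    p = model_fear_prob(noisy)
    print(f"noise sd {sigma}: r(controls) = {np.corrcoef(p, control_rate)[0, 1]:.2f}, "
          f"r(ASD) = {np.corrcoef(p, asd_rate)[0, 1]:.2f}")
```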

What did they find?

Kar found that the artificial neural network models could accurately predict human facial emotion judgments at an image-by-image level. Interestingly, the network’s behavioral responses matched the patterns observed in the neurotypical control group more closely than those of the autistic adults. The greatest difference in the network’s ability to match the behavior of controls versus autistic adults arose in the final, deepest layer of the network, the layer that corresponds to the primate IT cortex. These results suggest that neural activity in the primate IT cortex could play an essential role in atypical facial emotion processing in autistic individuals. Regarding the amygdala, the author found that, after controlling for the IT-cortex layer of the network models, amygdala activity provided very little additional information during facial emotion recognition, suggesting that the IT cortex is the main driver of the amygdala’s role in discriminating the emotion of a face.
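
That control analysis can be pictured as a partial correlation: how much does amygdala activity explain about behavior once the IT-layer prediction is regressed out of both? The sketch below uses simulated stand-in data in which the amygdala signal is largely inherited from IT, one way the reported pattern could arise; none of these numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
it_pred = rng.standard_normal(n)                          # IT-layer prediction per image
amygdala = 0.8 * it_pred + 0.2 * rng.standard_normal(n)   # amygdala signal per image
behavior = 0.9 * it_pred + 0.1 * rng.standard_normal(n)   # human fear-judgment rate

def residualize(y, x):
    """Remove the best linear fit of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw_r = np.corrcoef(amygdala, behavior)[0, 1]
partial_r = np.corrcoef(residualize(amygdala, it_pred),
                        residualize(behavior, it_pred))[0, 1]
print(f"amygdala-behavior r = {raw_r:.2f}; controlling for IT, r = {partial_r:.2f}")
```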

Finally, the weights in the network models (the importance of different connections) predicted the behavior of the neurotypical controls better than that of the autistic adults, suggesting that the corresponding connections in autistic adults were noisier during facial emotion recognition. Moreover, adding noise to the IT layer increased the match between the network’s judgments and those of the autistic adults; however, it did not improve the network’s similarity to the neurotypical controls. These findings suggest that atypical facial emotion processing in autistic individuals might be due to additional noise in their sensory representations.

What's the impact?

This study identified primate IT activity as a critical neural marker in a computational model of atypical facial emotion recognition in ASD. Overall, it showed that artificial neural network models of vision are a useful tool for probing the neural and behavioral mechanisms involved in autism, and a promising one for investigating other aspects of cognitive function.

Access the original scientific publication here.

Slow Oscillations Facilitate Brain-Wide Communication and Memory Consolidation During Sleep

Post by Lani Cupo

The takeaway

Coordinated waves of global brain activity play an important role in memory consolidation and information flow during non-rapid eye movement sleep, a sleep phase otherwise characterized by dampened activity.

What's the science?

Previous research shows that non-rapid eye movement (NREM) sleep is important for memory consolidation, possibly through waves of coordinated activity across the cortex called slow oscillations. However, evidence also suggests that NREM sleep is characterized by a loss of network connectivity, which seems at odds with this sleep phase's role in memory consolidation. This week in PNAS, Niknazar and colleagues used electroencephalography (EEG) in healthy young adults to investigate the causal role of slow oscillations in directing information flow and consolidating memory.

How did they do it?

The authors conducted a sleep experiment in which 59 healthy young adults underwent EEG. Participants completed a memory test (word-pair associations) before and after sleeping. During sleep, they wore an EEG cap that measured brain activity with electrodes at 32 locations, or channels (10 for reference, 22 for recording). Data from the 22 recording channels were examined to identify slow oscillations, and data from 12 of those channels (three each in the frontal, parietal, temporal, and occipital lobes) were analyzed to establish the direction of information flow. Of these 12, four channels were considered potential “sources” of activity, while all were considered potential recipients, or “sinks”. When slow oscillations were detected near the source channels, the authors used a mathematical model (generalized partial directed coherence; see the sketch after this paragraph) to trace how the ensuing waves of activity propagated to the other electrodes. They examined how the magnitude of outgoing information (captured by the height of peaks) depended on the distance between the source channel and the origin of the slow oscillation, how the signal received at sinks depended on the distance between the sink and that origin, and how much other factors influenced the flow of information. The authors then tested the hypothesis that global slow oscillations facilitate more information flow than local ones. Finally, they examined whether the information flow associated with slow oscillations affected memory performance on the post-sleep test.
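
Generalized partial directed coherence (GPDC) is derived from a multivariate autoregressive fit to the channel time series. The sketch below (not the authors' code) simulates two channels in which channel 0 drives channel 1, fits a lag-1 VAR with statsmodels, and evaluates a GPDC-style matrix at one frequency; the simulated data, single-lag model, and frequency choice are all illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulate two "EEG channels" where channel 0 drives channel 1 with a lag of 1.
rng = np.random.default_rng(2)
n = 2000
x = np.zeros((n, 2))
for t in range(1, n):
    x[t, 0] = 0.6 * x[t - 1, 0] + rng.normal()
    x[t, 1] = 0.5 * x[t - 1, 1] + 0.4 * x[t - 1, 0] + rng.normal()

fit = VAR(x).fit(maxlags=1)                    # multivariate autoregressive model
A = fit.coefs[0]                               # lag-1 coefficient matrix
sigma = np.sqrt(np.diag(fit.sigma_u))          # residual noise scales ("generalized")

def gpdc(freq, fs=100.0):
    """GPDC-style matrix at one frequency; entry (i, j) is the j -> i influence."""
    z = np.exp(-2j * np.pi * freq / fs)
    a_bar = np.eye(2) - A * z                  # spectral form of the lag-1 VAR
    p = np.abs(a_bar) / sigma[:, None]         # normalize rows by noise scale
    return p / np.sqrt((p ** 2).sum(axis=0))   # normalize each column

print(np.round(gpdc(freq=1.0), 2))             # the 0 -> 1 entry dominates
```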

What did they find?

First, the authors found that NREM sleep is indeed a period of very low neural communication. Slow oscillations, however, allow widespread communication across the whole brain. During these bursts of causal information flow, the position of the slow oscillation affected the magnitude of information flow: the source channel closest to the oscillation sent the most information, while the sinks farthest from the oscillation received the most. For example, if a slow oscillation was detected in the frontal lobe, a source electrode in the frontal lobe would show more outgoing information than one in the parietal lobe, and a distant sink in the occipital lobe would receive more information than one at a central channel. This suggests that slow oscillations promote large waves of long-distance brain communication. Supporting this result, the authors found that global slow oscillations, which propagate across more electrodes, allowed greater information flow than localized ones.

Finally, when the authors investigated the impact of information flow on hippocampal-dependent episodic memory, they found that global slow oscillations facilitated greater memory improvement, while local ones facilitated improvement only when the source electrode was close to the origin of the oscillation and when the distance between the source and sink pair was greater.

What's the impact?

These findings begin to reconcile disparate findings in the field: NREM sleep is characterized by dampened activity, yet it seems to underlie the consolidation of episodic memory. This research provides evidence that bursts of long-range communication during slow oscillations allow memory consolidation to occur during NREM sleep.

The Impact of Sleep, Physical Activity, and Sedentary Behavior on Dementia Risk

Post by Leanna Kalinowski

The takeaway

Three modifiable daily behaviors – sleep, physical activity, and sedentary behavior – independently and collectively impact dementia risk by influencing brain structure.

What's the science?

Dementia, defined as a loss of cognitive functioning severe enough to interfere with daily life, does not yet have an effective treatment despite its increasing prevalence. Previous research has determined that three modifiable daily behaviors – sleep, physical activity, and sedentary behavior – are associated with the risk of developing dementia. Specifically, sleep deprivation and lack of physical activity are likely to exacerbate brain atrophy, defined as the loss of neurons in the brain. However, research on these associations is limited and based on relatively small numbers of study participants. This week in Molecular Psychiatry, Huang and colleagues investigated the impact of sleep duration, physical activity, and sedentary behavior on dementia risk and brain structure.

How did they do it?

The researchers used data from 431,924 participants in the UK Biobank study, which is a large-scale biomedical study that has been collecting data in the United Kingdom since 2006. At baseline, participants were asked several questions about sleep duration, physical activity, and sedentary behavior, and underwent structural MRI scans. All participants underwent a follow-up assessment an average of nine years after baseline.

To measure sleep duration, participants were asked how many hours of sleep they got per night and were divided into three categories: low (0-6h), moderate (7h), and high (8+h). To measure leisure-time physical activity, participants were asked to report the frequency and duration of different activities undertaken over the previous four weeks. These activities were then converted into metabolic equivalents (METs), where one MET corresponds to the resting metabolic rate during quiet sitting. After summing the METs across all physical activities, physical activity was divided into three categories: low (<400 MET/week), moderate (400-1200 MET/week), and high (>1200 MET/week). Finally, to measure sedentary behavior, participants were asked to report the number of hours spent watching TV and using a computer during a typical day, and were divided into three categories: low (0-2h/day), moderate (>2-4h/day), and high (>4h/day).
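
As a rough illustration of that scoring (not the study's actual pipeline), the sketch below converts a hypothetical activity log into a weekly MET total and bins it with the cut-offs above; the per-activity MET values and the minutes-based convention are assumptions.

```python
# Illustrative MET values per activity type (assumed, not from the paper).
MET_PER_ACTIVITY = {"walking": 3.3, "moderate_activity": 4.0, "vigorous_activity": 8.0}

def weekly_met(activity_log):
    """Sum METs over (activity, sessions per week, minutes per session) tuples."""
    return sum(MET_PER_ACTIVITY[name] * sessions * minutes
               for name, sessions, minutes in activity_log)

def activity_category(total_met):
    """Bin a weekly MET total using the study's categories."""
    if total_met < 400:
        return "low"
    if total_met <= 1200:
        return "moderate"
    return "high"

participant = [("walking", 5, 30), ("vigorous_activity", 2, 60)]  # hypothetical log
total = weekly_met(participant)
print(f"{total:.0f} MET/week -> {activity_category(total)}")      # 1455 -> high
```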

What did they find?

When measured independently, sleep, physical activity, and sedentary behavior were each associated with dementia risk. Compared to those in the moderate sleep category, those in both the low and high sleep categories demonstrated an increased dementia risk. In terms of physical activity, the risk of dementia dropped with every MET increase up to 1200 MET/week, above which additional physical activity conferred little further benefit. As for sedentary behavior, dementia risk did not increase until sedentary time exceeded 3h/day, after which it rose exponentially with each additional hour per day.

Taking the three behaviors together, the researchers found that the combination of moderate sleep, moderate-to-high physical activity, and low-to-moderate sedentary behavior was associated with the lowest risk of dementia. Structural MRI scans from these individuals showed larger cortical and subcortical grey matter volumes, suggesting that these behaviors influence dementia risk by affecting brain structure.

What's the impact?

This study was the first of its kind to examine the independent and joint effects of sleep, physical activity, and sedentary behavior on the development of dementia. Findings from this study provide important insights into different risk factors for dementia and highlight the importance of changing these behaviors to reduce dementia risk.   

Access the original scientific publication here.