Using a Neural Network to Understand the Brain’s Magnetic Fields

Post by Kasey Hemington

What's the science?

Magnetoencephalography (MEG) measures brain activity by recording the magnetic fields generated by the brain, using sensors that sit inside a helmet around the head. The goal is to study brain activity coming from a particular brain region (or ‘source’) associated with an experimental task. However, the sought-after signals in these recordings are very noisy, as magnetic interference can come from the outside environment or from the ongoing background activity of the brain itself. It is also difficult to reliably extract signals from the same source across individuals while taking each individual’s own neuroanatomy into account. Machine learning can be a useful tool for teasing out these signals: these models excel at classifying patterns of brain activity, but they often aren’t easily interpretable or robust to inter-individual differences. This week in NeuroImage, Zubarev and colleagues designed a convolutional neural network (CNN) classifier (a machine learning algorithm) to identify neural sources for a variety of stimuli and compared its accuracy to that of other modeling approaches.

How did they do it?

The authors tested their CNN on MEG data from planar gradiometer sensors in three different experimental paradigms:

Experiment 1) Event-related magnetic fields were measured in response to visual stimuli (a checkerboard pattern), auditory stimuli (tones), and transcutaneous median nerve stimulation in seven healthy adults.

Experiment 2) In response to a visual cue, 17 healthy adults imagined moving their left or right hand.

Experiment 3) In 250 healthy adults (part of the Cam-CAN data set), event-related field recordings were captured in response to visual (checkerboard pattern) or auditory (tone) stimuli.

A neural network model works by stacking several ‘layers’ that are trained to predict or classify an experiment-related outcome (e.g. whether a participant is imagining moving their left or right hand in Experiment 2). The authors selected their layers such that the underlying neural activity could be easily interpreted. The input (first) layer of their CNN spatially filtered the data in order to identify activation patterns associated with a certain source/location in the brain. The second layer, whose input is the output of the first layer (a number of spatial components), identified patterns in time associated with the event of interest (e.g. the time course of the brain’s response to a visual stimulus). The third and final (output) layer simplified the second layer by suppressing features that were unrelated to classification (using l1 regularization). The authors named this model LF-CNN. They also created an alternative, vector autoregressive CNN (VAR-CNN), in which interactions between spatial components were modelled; in this model, however, the spatial components cannot be individually interpreted.

To evaluate their models, the authors compared their ability to accurately classify experimental outcomes against benchmark machine learning models, including a support vector machine and the popular EEGNet model, among others. They used a leave-one-out approach, meaning the model was trained and validated using data from all but one participant and tested on the remaining participant. This process was then repeated for each participant, resulting in an average classification accuracy score.
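To make the layer-by-layer description above concrete, here is a minimal sketch of an LF-CNN-style architecture. This is not the authors’ implementation (their open-source package is linked at the end of this post); it assumes Keras and hypothetical values for the number of sensors, time points, spatial components, and classes.

```python
# Minimal sketch (assumption: TensorFlow/Keras) of an LF-CNN-style classifier.
# Input: one MEG epoch of shape (n_channels, n_times) from planar gradiometers.
from tensorflow.keras import layers, regularizers, models

n_channels, n_times = 204, 250   # hypothetical epoch dimensions
n_components = 32                # hypothetical number of spatial components
n_classes = 2                    # e.g. left- vs right-hand motor imagery

model = models.Sequential([
    # Layer 1: spatial filtering -- a linear projection of the sensors onto a
    # smaller set of source-like components at every time point.
    layers.Permute((2, 1), input_shape=(n_channels, n_times)),  # -> (time, channels)
    layers.Dense(n_components, use_bias=False, name="spatial_filters"),
    # Layer 2: temporal convolution over each component's time course,
    # capturing the time pattern of the event-related response.
    layers.Conv1D(filters=n_components, kernel_size=7, padding="same",
                  activation="relu", name="temporal_filters"),
    layers.Flatten(),
    # Layer 3: output layer with l1 regularization, which pushes weights of
    # features unrelated to classification toward zero.
    layers.Dense(n_classes, activation="softmax",
                 kernel_regularizer=regularizers.l1(3e-4)),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

For leave-one-subject-out evaluation, a model like this would be trained on epochs from all but one participant and scored on the held-out participant, with the procedure repeated once per participant and the accuracies averaged.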


What did they find?

In Experiments 1, 2, and 3 the VAR-CNN model exhibited the best performance of all models tested (86%, 76%, and 95.8% classification accuracy for the left-out participants, respectively). The LF-CNN model had the second-best score in all three experiments. These findings indicate that the CNN models designed in this study were best at correctly decoding, from brain activity alone, the type of stimulus the participant experienced (Experiments 1 and 3) or the imagined movement they performed (Experiment 2).

The authors also tested the models’ performance in a ‘real time’ scenario, meaning the model was allowed to learn and adjust after receiving feedback on each trial, with only minimal, fast preprocessing so it could run quickly. Fast preprocessing is necessary if the model needs to interpret brain activity while the participant is still performing the experiment (for example, to provide real-time feedback as a brain-computer interface tool). VAR-CNN performance increased significantly when the model was trained in this way and allowed to learn from the feedback given after each trial. In Experiment 3, the VAR-CNN and LF-CNN models outperformed the benchmark models both with and without real-time learning. The authors also confirmed that the activation patterns derived from the LF-CNN model captured the spatial patterns and frequency content of the brain responses expected for each experiment.
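The per-trial adaptation described above can be pictured as an online loop: classify an incoming trial, receive the true label as feedback, and take one small gradient step on that trial before the next one arrives. The sketch below is hypothetical (it reuses the `model` and Keras assumption from the earlier sketch, not the authors’ code) and assumes a stream that yields already-preprocessed epochs with their labels.

```python
# Hypothetical sketch of per-trial adaptive updating for a pre-trained model.
import numpy as np

def run_realtime_session(model, trial_stream):
    """trial_stream yields (epoch, true_label); epoch shape = (n_channels, n_times)."""
    correct, n_trials = 0, 0
    for epoch, label in trial_stream:
        n_trials += 1
        x = epoch[np.newaxis, ...]                      # add batch dimension
        pred = int(np.argmax(model.predict(x, verbose=0)))
        correct += int(pred == label)
        # Adapt: one gradient step on the just-labelled trial (the "feedback").
        model.train_on_batch(x, np.array([label]))
    return correct / n_trials                           # running classification accuracy
```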

What's the impact?

This study demonstrates that a machine learning algorithm (a CNN) can be used to interpretably and reliably classify MEG brain activity recordings across different experimental conditions and identify the underlying neural sources, including in real time. The findings have implications for real-time brain-computer interface applications and for studies in which classification accuracy and inter-subject reliability are paramount.


Zubarev et al. Adaptive neural network classifier for decoding MEG signals. NeuroImage (2019). Access the original scientific publication here.

The authors’ open-source software package to apply these methods can be found here.