Improved Brain Blood Vessel Repair and Functional Recovery after Stroke via Nogo-A Therapy

Post by Kasey Hemington

What's the science?

Ischemia (restriction of blood flow, and therefore oxygen) after stroke can cause severe disability, in part because there is limited blood vessel regeneration (angiogenesis) and repair in the damaged brain tissue around the stroke (the peri-infarct zone). An axonal guidance molecule called Nogo-A is an angiogenesis inhibitor, so blocking this pathway after stroke could improve recovery via increased vascular restoration. This week in PNAS, Rust and colleagues assessed the effect of genetic deletion or antibody-mediated neutralization of Nogo-A in mice after cerebral (brain) ischemia.

How did they do it?

The authors included 13 control mice, nine mice deficient in Nogo-A, and five mice deficient in S1PR2 (Nogo-A’s receptor) in the study. They also applied an anti-Nogo-A antibody to control mice as an alternative to genetic deletion of Nogo-A. The mice’s motor skills were assessed at several time points between three and 21 days after ischemic brain injury. The authors also characterized gene expression using mRNA extracted from the peri-infarct zone, in order to understand which genes were upregulated after stroke. Three weeks post-stroke, histological analysis was performed on the mice’s brains, and vascular function and regeneration, synapse and neurotransmitter function, and cell survival were evaluated. Finally, to directly link vascular repair with functional improvement, the authors applied an anti-VEGF antibody and evaluated the relationship between functional improvement in Nogo-A-deficient mice and angiogenesis. VEGF is a growth factor critical for angiogenesis.
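As a concrete (and purely illustrative) example of what such an upregulation screen involves, the Python sketch below compares hypothetical peri-infarct expression levels between stroke and sham groups using fold changes and per-gene t-tests. The data, cutoffs, and choice of test are all assumptions for illustration, not the authors' pipeline, and a real analysis would also correct for multiple comparisons.

```python
import numpy as np
from scipy import stats

# Hypothetical expression matrices (genes x samples), e.g. normalized mRNA
# counts from the peri-infarct zone of stroke vs. sham animals.
rng = np.random.default_rng(0)
stroke = rng.lognormal(mean=1.0, sigma=0.5, size=(500, 6))
sham = rng.lognormal(mean=1.0, sigma=0.5, size=(500, 6))

# Per-gene fold change and Welch's t-test on log-transformed expression
log2_fold_change = np.log2(stroke.mean(axis=1) / sham.mean(axis=1))
t, p = stats.ttest_ind(np.log2(stroke), np.log2(sham), axis=1, equal_var=False)

# Flag genes as "upregulated" if they pass simple (illustrative) cutoffs
upregulated = (log2_fold_change > 1.0) & (p < 0.05)
print(f"{upregulated.sum()} candidate upregulated genes")
```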

What did they find?

The authors found that several genes were upregulated post-stroke, including Nogo-A, S1PR2, and other inhibitory vascular and neural factors, indicating these genes may be involved in post-stroke brain tissue damage or poor recovery. In control mice three weeks post-stroke, the authors observed low vascular branching and low overall vascular area in the peri-infarct region compared to the uninjured hemisphere of the brain. In Nogo-A- and S1PR2-deficient mice, vascular area improved by 179% and 53%, and vascular branching by 179% and 85%, respectively. In the uninjured hemisphere, vasculature and blood perfusion were not altered in Nogo-A- or S1PR2-deficient mice compared to control mice. Similar results were found when mice were treated with an anti-Nogo-A antibody. These results suggest that inhibiting Nogo-A improved revascularization post-stroke but did not alter vascularization in healthy brain tissue. In functional tests three weeks post-stroke, Nogo-A- and S1PR2-deficient mice showed less paw dragging and made fewer error touches in a horizontal ladder test. Paw dragging was negatively correlated with vascular branching, indicating that the functional improvement was related to the degree of vascular repair. When VEGF-mediated angiogenesis was blocked using an anti-VEGF antibody in Nogo-A-deficient mice, vascular branching and other metrics were decreased compared to Nogo-A-deficient mice that received a control antibody. These results suggest that the anti-VEGF antibody counteracts the beneficial effects of Nogo-A deletion.
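The link between behavior and anatomy here is a simple per-animal correlation. As a hedged illustration (invented numbers, and assuming a Pearson correlation; the paper may use a different statistic), the test looks like this:

```python
import numpy as np
from scipy import stats

# Hypothetical per-animal measurements three weeks post-stroke:
# paw-dragging score (higher = worse) and vascular branch density
paw_dragging = np.array([0.9, 0.7, 0.8, 0.4, 0.3, 0.5, 0.2, 0.3])
branching = np.array([0.2, 0.3, 0.25, 0.6, 0.7, 0.5, 0.8, 0.75])

r, p = stats.pearsonr(paw_dragging, branching)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: more vascular repair, less dragging
```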


What's the impact?

This study demonstrated that blocking Nogo-A, an angiogenesis inhibitor, improves both vascular and functional (behavioural) recovery post-stroke. This study, performed in mice, points to Nogo-A and other neurite outgrowth inhibitors as promising targets for future research, with the potential to improve vascular repair and functional recovery for stroke patients.


Rust et al. Nogo-A targeted therapy promotes vascular repair and functional recovery following stroke. PNAS (2019). Access the original scientific publication here.

Choice Signals in the Visual Cortex

Post by Stephanie Williams

What's the science?

Activity in sensory brain areas can covary with perceptual choices, or “decisions”. How much decision-related information is contained within early and mid-level areas of the brain’s visual processing stream is still under investigation. Choice-related information in these sensory areas could reflect past choices and their outcomes (choice and reward histories) as well as the current choice; alternatively, it could reflect the current choice alone. This week in the Journal of Neuroscience, Jasper, Tanabe and Kohn recorded from neurons in visual cortical regions and analyzed how well choices could be predicted using information from (1) individual neurons, (2) populations of neurons, and (3) choice and reward histories.

How did they do it?

The authors recorded from individual neurons and populations of neurons (up to 30 neurons) simultaneously in primary visual cortex (“V1”) and midlevel visual cortex (“V4”) while macaque monkeys performed a visual orientation discrimination task. Two male monkeys were trained with liquid rewards to respond to a visual stimulus consisting of a circle with lines inside that appeared at different orientations (angles; called an ‘orientation grating’). In the task, two targets appeared after the orientation grating, and the monkeys had to glance (a “saccade”: a fast eye movement towards the target) upwards or downwards to the target that matched the orientation they had just seen; the authors tracked this with an eye-tracker. The authors trained the animals until they became “experts” at the task, reaching asymptotic performance. In contrast to previous studies, which routinely tailor their tasks to the known responses of the neurons being recorded, the authors did not choose this task based on the functional properties of the individual neurons they recorded from.

The authors implanted two 48-electrode microelectrode arrays into the V1 and V4 regions of each animal. Monkey #1 had arrays implanted first in the left hemisphere and later in the right hemisphere, resulting in three datasets in total (two from Monkey #1, one from Monkey #2). The authors mapped the spatial receptive fields of the neurons on the first day of recording by showing the monkeys gratings at different locations and orientations. They then used their electrophysiological recordings to predict the monkeys’ behavioral choices. They were interested in understanding (1) how their predictions of the animal’s choice improved when they analyzed small populations of neurons rather than individual units, and (2) how their predictions improved when they included the reward and choice history of the animal in their model. They also investigated whether the choice information in the neuronal responses reflected choice history. They used two history variables in their analyses (each taking values of -1, 0, or 1), encoding the monkey’s choice on the previous trial and whether the monkey received a reward.
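To make the decoding setup concrete, here is a minimal Python sketch of a choice decoder that can be fit on neural activity alone or augmented with the two history regressors. The logistic-regression model, the simulated data, and the train/test split are illustrative assumptions; they stand in for, but are not, the authors' actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trial data: spike counts from a population of 30 neurons,
# plus two history regressors coded -1/0/+1 as described above.
rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 30
spikes = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
prev_choice = rng.choice([-1, 0, 1], size=n_trials)   # choice on previous trial
prev_reward = rng.choice([-1, 0, 1], size=n_trials)   # rewarded / none / unrewarded
choice = rng.integers(0, 2, size=n_trials)            # current saccade: up vs. down

# Compare a neural-only decoder with one augmented by history regressors
X_neural = spikes
X_full = np.column_stack([spikes, prev_choice, prev_reward])

for name, X in [("neural only", X_neural), ("neural + history", X_full)]:
    model = LogisticRegression(max_iter=1000).fit(X[:300], choice[:300])
    print(f"{name}: accuracy = {model.score(X[300:], choice[300:]):.2f}")
```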

What did they find?

The authors found that they could predict the animal’s choices using recordings of individual V4 neurons, but only in the window between the disappearance of the stimulus (“stimulus offset”) and the choice that the monkey made, relatively late in the behavioral trial. They could not predict decisions from individual V1 neurons during the same time window. The authors had originally analyzed the window 0-250 ms after stimulus onset, and found only weak choice prediction in that window. They conclude that choice information is relayed by some V4 cells and that it appears in the epoch between stimulus offset and the appearance of the choice targets.


When the authors considered populations of neurons rather than individual neurons, they found they could predict the monkeys’ decisions. They found a weaker choice signal in V1 than in V4, despite V1’s higher sensitivity to orientation. This result led the authors to suggest that choice signals may be determined more by the proximity of a population to the decision area than by the sensitivity of its neurons. They could further improve their prediction of the animal’s choice by including its choice and reward history. They did not find evidence that choice history was represented explicitly in V1 or V4, which is consistent with previous studies. Instead, the decision-related component of the sensory responses reflected the current decision.

What's the impact?

The authors show that both decision history and neuronal responses in visual areas can be used to predict perceptual decisions. They show that for this two-alternative forced-choice task, choice-related signals are readily found in V4 but are much weaker in V1. These findings have important implications for understanding how decision information is processed in the brain, demonstrating that integrating information across neurons can be informative in predicting decision behavior.

Jasper et al. Predicting perceptual decisions using visual cortical population responses and choice history. Journal of Neuroscience (2019). Access the original scientific publication here.


Using a Neural Network to Understand the Brain’s Magnetic Fields

Post by Kasey Hemington

What's the science?

Magnetoencephalography (MEG) measures brain activity by recording the magnetic fields generated by the brain, using sensors that sit inside a helmet around the head. The goal is to study brain activity coming from a particular brain region (or ‘source’) associated with an experimental task; however, these recordings are very noisy, as magnetic interference can come from the outside environment or from the ongoing background activity of the brain itself. It is also difficult to reliably extract signals from the same source across individuals while taking into account each individual’s own neuroanatomy. Machine learning can be a useful tool for teasing out these signals: machine learning models excel at classifying patterns of brain activity, but often aren’t easily interpretable or robust to inter-individual differences. This week in NeuroImage, Zubarev and colleagues designed a convolutional neural network (CNN) classifier (a machine learning algorithm) to identify neural sources for a variety of stimuli, and compared its accuracy to other modeling approaches.

How did they do it?

The authors tested their CNN on MEG data from planar gradiometer sensors in three different experimental paradigms. In Experiment 1, event-related magnetic fields were measured in response to visual stimuli (a checkerboard pattern), auditory stimuli (tones), and transcutaneous median nerve stimulation in seven healthy adults. In Experiment 2, 17 healthy adults imagined moving their left or right hand in response to a visual cue. In Experiment 3, event-related fields were recorded in 250 healthy adults (part of the Cam-CAN data set) in response to visual (checkerboard pattern) or auditory (tone) stimuli.

A neural network model is built from several ‘layers’ that transform the data in sequence to predict/classify an experiment-related outcome (e.g. predict whether a participant is imagining moving their left or right hand in Experiment 2). The authors designed the layers so that the underlying neural activity could be easily interpreted. The input (first) layer of their CNN spatially filtered the data, in order to identify activation patterns associated with a certain source/location in the brain. The second layer identified patterns in time associated with the event of interest (e.g. the time course of the brain’s response to a visual stimulus); its input was the output of the first layer, a set of spatial components. The third and final (output) layer simplified the second layer by suppressing features that were unrelated to classification (using L1 regularization). The authors named this model LF-CNN. They also created an alternative, vector autoregressive CNN (VAR-CNN), in which interactions between spatial components were modelled; however, in this model the spatial components cannot be individually interpreted. To evaluate their models, the authors compared their classification accuracy against benchmark machine learning models, including a support vector machine and the popular EEGNet model, among others. They used a leave-one-subject-out approach, meaning the model was trained and validated using data from all but one participant and tested on the held-out participant; this process was repeated for each participant, yielding an average classification accuracy.
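To make the layer structure concrete, below is a minimal PyTorch sketch that mirrors the description above: a linear spatial-filtering layer over sensors, a per-component temporal convolution, and an L1-penalized linear readout. All sizes and hyperparameters are invented, and this is not the authors' implementation (their own package is linked at the end of the post). A leave-one-subject-out evaluation would then train such a model on all but one participant and score it on the held-out participant, averaging over folds.

```python
import torch
import torch.nn as nn

class LFCNNSketch(nn.Module):
    """Rough sketch of the described architecture: spatial filters, then a
    temporal convolution per component, then an L1-regularized readout."""
    def __init__(self, n_channels=204, n_times=250, n_filters=32, n_classes=2):
        super().__init__()
        # Layer 1: each spatial filter is a weighted sum over sensors,
        # yielding interpretable spatial components
        self.spatial = nn.Linear(n_channels, n_filters, bias=False)
        # Layer 2: depthwise temporal convolution, one filter per component
        self.temporal = nn.Conv1d(n_filters, n_filters, kernel_size=7,
                                  padding=3, groups=n_filters)
        self.pool = nn.MaxPool1d(2)
        # Layer 3: linear readout (L1 penalty added to the loss below)
        self.readout = nn.Linear(n_filters * (n_times // 2), n_classes)

    def forward(self, x):  # x: (batch, channels, times)
        x = self.spatial(x.transpose(1, 2)).transpose(1, 2)  # (batch, filters, times)
        x = torch.relu(self.temporal(x))
        x = self.pool(x)
        return self.readout(x.flatten(1))

model = LFCNNSketch()
# L1 penalty on readout weights, added to the usual classification loss
# to suppress features unrelated to classification
l1_penalty = 1e-4 * model.readout.weight.abs().sum()
```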


What did they find?

In Experiments 1, 2, and 3, the VAR-CNN model exhibited the best performance of all models tested (86%, 76%, and 95.8% classification accuracy for the left-out participants, respectively). The LF-CNN model achieved the second-best accuracy in all three experiments. These findings indicate that the CNN models designed in this study were best at decoding the type of stimulus (Experiments 1 and 3) or imagined movement (Experiment 2) experienced by the participant based on brain activity.

The authors also tested the models’ performance in a ‘real-time’ scenario, in which the model could learn or make adjustments after receiving feedback on each trial, and only minimal, fast preprocessing was allowed so the model could run quickly. Fast preprocessing is necessary if the model is to interpret brain activity while the participant is still performing the experiment (for example, to provide real-time feedback as a brain-computer interface tool). The performance of the VAR-CNN increased significantly when the model was trained in real time and allowed to learn from the feedback given after each trial. In Experiment 3, the VAR-CNN and LF-CNN models outperformed the benchmark models both with and without real-time learning. The authors also confirmed that the activation patterns derived from the LF-CNN model captured the spatial patterns and frequency content of the brain responses expected for each experiment.
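A per-trial adaptive loop of the kind described can be sketched as follows, assuming a hypothetical pretrained decoder: predict first (so the output is available in real time), then take one small gradient step using the trial's feedback. This illustrates the general idea, not the authors' actual update rule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stand-in decoder; in practice this would be the pretrained network
decoder = nn.Sequential(nn.Flatten(), nn.Linear(204 * 250, 2))
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)

def adapt_on_trial(trial_data, true_label):
    """One real-time step: predict, then learn from the trial's feedback.
    trial_data: (1, 204, 250) MEG epoch; true_label: (1,) class index."""
    logits = decoder(trial_data)
    prediction = logits.argmax(dim=1)   # available immediately, e.g. for a BCI
    loss = F.cross_entropy(logits, true_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                    # small incremental update per trial
    return prediction
```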

What's the impact?

This study demonstrates that a machine learning algorithm – a CNN – can be used to interpretably and reliably classify MEG recordings of brain activity across different experimental conditions and to identify the underlying neural sources, including in real time. The findings have implications for real-time brain-computer interface applications and for studies in which classification accuracy and inter-subject reliability are paramount.


Zubarev et al. Adaptive neural network classifier for decoding MEG signals. NeuroImage (2019). Access the original scientific publication here.

The authors’ open-source software package to apply these methods can be found here.