Post by Amanda McFarlan
What's the science?
Research has shown that when presented with an auditory stimulus, neural activity in the auditory cortex tracks rhythmic patterns in the stimulus. Two distinct hypotheses have been proposed to explain this phenomenon: the oscillatory hypothesis and the evoked hypothesis. The oscillatory hypothesis suggests that the auditory cortex has an intrinsic neural oscillator that will synchronize to an acoustic stimulus, as long as the frequency of the stimulus is within a range close to the oscillator’s resting frequency. Conversely, the evoked hypothesis suggests that the auditory cortex responds to each individual acoustic stimulus and shows evidence of rhythmic firing only because the inputs it receives (i.e., music, speech, etc.) are rhythmic themselves. This week in PNAS, Doelling and colleagues used computational models to study these neural behaviors and to determine whether human auditory processing follows the oscillatory hypothesis or the evoked hypothesis.
How did they do it?
The authors created two distinct computational models, an evoked model and an oscillatory model, based on the evoked and oscillatory hypotheses that describe the mechanisms of auditory neural processing. The evoked model was convolution based, while the oscillatory model was based on the Wilson-Cowan model of excitatory and inhibitory neural populations. Musical stimuli of varying note rates (0.5 to 8 notes per second) from piano pieces were used as inputs to both models. To compare the outputs from both models at the different rates, the authors developed a phase concentration metric that quantified the consistency of the phase lag between the stimulus input and the model output across stimulus rates. Next, the authors used their phase concentration metric to analyze data from a previous study in which 27 participants listened to musical stimuli (the same stimuli used in the computational models) while undergoing magnetoencephalography (MEG) recordings. They used confidence intervals and Gaussian fitting to compare the participants’ data with their computational models. In a subsequent experiment, the authors aimed to reduce the effect of evoked responses by altering the musical stimuli such that the musical notes were either smoothed in their onset (resulting in a reduced evoked response) or characterized by a sharp attack (evoked response present). They had 12 new participants undergo MEG recordings while listening to these altered musical stimuli, and compared data from the participants’ recordings with their computational models.
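The two model classes can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the time constants, coupling weights (the classic Wilson-Cowan values), the sigmoid parameters, and the exponential evoked kernel are all assumed placeholders.

```python
import numpy as np

def sigmoid(x, gain=1.0, thresh=4.0):
    """Logistic firing-rate nonlinearity (illustrative parameters)."""
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

def wilson_cowan(stim, dt=1e-3, tau=0.01,
                 w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0):
    """Euler-integrate one excitatory/inhibitory (E/I) pair driven by an
    external stimulus; returns the excitatory firing-rate trace.
    Coupling weights are the classic Wilson-Cowan values, not the paper's."""
    E = np.zeros(len(stim))
    I = np.zeros(len(stim))
    for t in range(1, len(stim)):
        dE = (-E[t-1] + sigmoid(w_ee * E[t-1] - w_ei * I[t-1] + stim[t-1])) / tau
        dI = (-I[t-1] + sigmoid(w_ie * E[t-1] - w_ii * I[t-1])) / tau
        E[t] = E[t-1] + dt * dE
        I[t] = I[t-1] + dt * dI
    return E

def evoked(stim, dt=1e-3, tau=0.05):
    """Convolution-based evoked model: every note onset evokes the same
    fixed impulse response (a hypothetical exponential decay)."""
    kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
    return np.convolve(stim, kernel)[:len(stim)]

# A pulse train at 2 notes/s, within the paper's 0.5-8 notes/s range.
dt, dur, rate = 1e-3, 4.0, 2.0
n = int(dur / dt)
stim = np.zeros(n)
stim[::int(1 / (rate * dt))] = 5.0  # sharp note onsets

E = wilson_cowan(stim, dt=dt)  # oscillatory model output
V = evoked(stim, dt=dt)        # evoked model output
```

Driving both models with the same pulse train makes the conceptual difference concrete: the evoked model's output is a fixed, stereotyped response to each onset, while the Wilson-Cowan pair has dynamics of its own that the input can entrain.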
What did they find?
The authors found that in their evoked computational model, the phase lag between the musical note stimulus and the model output increased as the frequency of the musical notes increased, suggesting the phase lag is frequency dependent. The oscillatory computational model, however, was better able to keep up with the change in musical note frequencies, and displayed a relatively consistent phase lag between the stimulus and model output. Next, they used their phase concentration metric to analyze MEG data that were collected while participants listened to musical stimuli (the same stimuli used for the computational models). They determined that the mean phase concentration metric from the analyzed MEG data was better matched to that of the oscillatory model than to that of the evoked model, suggesting that there may be an oscillatory mechanism in the auditory cortex. The authors reasoned that, although the oscillatory model was found to be a better predictor of MEG activity than the evoked model, the well-documented evidence for evoked responses in the literature suggested that the auditory cortex may use a combination of both evoked and oscillatory mechanisms to process external stimuli. To investigate the role of evoked responses, they analyzed MEG recordings from participants who were presented with the ‘smoothed’ or ‘sharp attack’ musical stimuli. They found that, similar to the first experiment, the oscillatory model was better than the evoked model at predicting the MEG activity when participants were presented with a sharp attack stimulus. Notably, they determined that when the evoked response was reduced (i.e., when the smoothed stimulus was presented), the oscillatory model was an even better predictor of the MEG activity than the evoked model. These data suggest that the relative weights of oscillatory vs. evoked responses are shifted based on various stimulus features, including sharpness of the stimulus note onset.
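The phase-lag comparison above rests on a measure of how consistent the lag is from moment to moment, which can be sketched with circular statistics. This is an assumed simplification of the authors' metric: phase is read out from a single Fourier coefficient per window, and concentration is the mean resultant length of the per-window lag angles.

```python
import numpy as np

def phase_at(x, freq, fs):
    """Phase of signal x at a given frequency, from its Fourier coefficient."""
    t = np.arange(len(x)) / fs
    return np.angle(np.sum(x * np.exp(-2j * np.pi * freq * t)))

def phase_concentration(stim, resp, freq, fs, win_s=1.0):
    """Mean resultant length of the per-window phase lag between stimulus
    and response: 1.0 = perfectly consistent lag, near 0 = random lag."""
    w = int(win_s * fs)
    lags = []
    for start in range(0, min(len(stim), len(resp)) - w + 1, w):
        p_stim = phase_at(stim[start:start + w], freq, fs)
        p_resp = phase_at(resp[start:start + w], freq, fs)
        lags.append(p_resp - p_stim)
    return np.abs(np.mean(np.exp(1j * np.array(lags))))

# Toy check: a response that tracks the stimulus at a fixed lag.
fs, f = 100.0, 2.0
t = np.arange(0, 10, 1 / fs)
stim = np.cos(2 * np.pi * f * t)
resp = np.cos(2 * np.pi * f * t - 0.8)  # constant phase lag of 0.8 rad
print(round(phase_concentration(stim, resp, f, fs), 3))  # → 1.0
```

Within a single note rate, both a constant lag (oscillatory-like) and a stereotyped delayed response (evoked-like) give high concentration; the diagnostic contrast in the paper comes from how the lag itself behaves as the note rate changes.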
What's the impact?
This is the first study to show strong evidence of an oscillatory mechanism for processing acoustic inputs in the human auditory cortex using MEG recordings and computational modelling. These findings provide insight into the underlying mechanisms by which the human auditory cortex integrates information. The techniques used in this study may be useful for studying other sensory brain regions to further explore the role of oscillatory activity in the brain.
Doelling et al. An oscillator model better predicts cortical entrainment to music. PNAS (2019). Access the original scientific publication here.