How Social Sensitivity Affects Adolescent Learning

Post by Rebecca Hill

How is adolescent learning different?

Adolescence, a period spanning roughly ages 10-24, is a transformative time when most people are extremely sensitive to peer influence and their own emotions. Learning during adolescence may differ from learning during childhood or adulthood, since adolescents are more sensitive to their social environment. This sensitivity could also contribute to adolescents’ vulnerability to developing mental health issues.

How is the adolescent brain different?

Adolescents report more frequent and intense emotions than adults and experience more complex emotions than children. Self-consciousness and embarrassment, as well as the desire to be liked, peak during this time. These heightened emotions are most often and most strongly experienced in social settings. Adolescents have more activity in areas of their brain involved in emotional processing such as the amygdala and the hippocampus, which help them respond to social cues. Social exclusion also causes adolescents to respond with more neural activity than children.

During adolescence, learning can be heightened in some situations. As adults, we remember this period of life more easily than childhood or later adulthood. Older adolescents (16-18) learn more efficiently than younger adolescents (11-16). This stage is critical for learning a second language, developing taste in music, and sociocultural learning. So how is learning in adolescence affected by this social and emotional sensitivity? First, let’s take a step back and introduce two types of learning that happen in adolescence.

What is associative learning?

Associative learning, or learning to link two previously unrelated things with each other, can be easier to study experimentally than other types of learning. There are two main types of associative learning:

1)    Pavlovian learning: when you learn that one stimulus is associated with another stimulus, so that the first stimulus comes to trigger the response to the second stimulus. Also known as classical conditioning. An example is a dog learning that a chiming bell means it will soon be fed dinner, so it begins to get hungry upon merely hearing the bell.

2)    Instrumental learning: when you learn that a stimulus is associated with a response, which is then either rewarded or punished. Over time, the stimulus itself changes how often the response occurs. Also known as operant conditioning. An example is a mouse learning that a light turning on means it should press a button, which will be rewarded with food.

Learning happens in several stages. During acquisition, the association between stimuli and responses is formed. Once a stimulus stops being rewarded or punished, extinction occurs and the response to the stimulus is “unlearned”. Researchers use these techniques to better understand how learning is affected by social sensitivity in adolescence.
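The acquisition and extinction stages can be sketched with a simple reward-prediction model. The Rescorla-Wagner update rule below is not from this post; it is a standard textbook model of associative learning, and the learning rate and trial counts are illustrative assumptions.

```python
# Minimal Rescorla-Wagner sketch of acquisition and extinction.
# V is the associative strength between a stimulus (bell) and an outcome (food).
def rescorla_wagner(n_trials, reward, v0=0.0, alpha=0.3):
    """Run n_trials of learning; reward is 1.0 (food given) or 0.0 (food withheld)."""
    v = v0
    history = []
    for _ in range(n_trials):
        v += alpha * (reward - v)  # prediction error drives learning
        history.append(v)
    return history

# Acquisition: stimulus repeatedly paired with food, association strengthens.
acquisition = rescorla_wagner(20, reward=1.0)
# Extinction: food withheld, the learned response decays back toward zero.
extinction = rescorla_wagner(20, reward=0.0, v0=acquisition[-1])
```

Because learning is driven by the gap between what was predicted and what actually happened, the association rises quickly at first during acquisition and fades during extinction rather than being erased instantly.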

What are the advantages and disadvantages of social sensitivity?

Many studies have tried to understand human social and emotional sensitivity by drawing comparisons with adolescent rats. One associative learning experiment found that, compared to adult rats, adolescent rats were more affected by social rewards than by drug rewards. In humans, adolescents were motivated by all positive peer feedback, even from the least reinforcing peer, while children and adults responded only to the most positive peer feedback. Taken together, this means that social contact, even in the smallest amounts, can be a strong reward for adolescents.

On the other hand, adolescents continue to respond to social threats long after children and adults stop responding, even once the initial threat is gone. This suggests that social punishments impact adolescents much more than children or adults. In addition, adolescents are worse than adults at instrumental learning - when behaviors are strengthened or weakened based on whether they are reinforced or punished - even though they are more sensitive to social stimuli. As adolescents become adults, they get better at social learning, despite being more sensitive to social feedback when they are younger.

How does this impact adolescent development?

Being able to learn from social cues is crucial, especially during adolescence. Since adolescents show more Pavlovian reward learning, researchers have suggested a connection with addiction vulnerability. It is well known that adolescents are more likely to use drugs or alcohol if their peers do.

While social sensitivity can lead to negative outcomes such as drug addiction, researchers suggest it can also positively impact adolescents. For example, while adolescents are more vulnerable to mental health conditions like anxiety, they are also more affected by social feedback. Exposure therapy, which draws on the same extinction processes seen in Pavlovian learning, can be effective at treating anxiety. Because adolescents are particularly influenced by social rewards during associative learning, researchers suggest that supportive friends might help buffer stress. By studying these effects of social sensitivity on adolescents, we may be able to better treat the mental health and addictive disorders that adolescents are particularly at risk of.

References

Altikulaç, S., Bos, M. G., Foulkes, L., Crone, E. A., & Van Hoorn, J. (2019). Age and gender effects in sensitivity to social rewards in adolescents and young adults. Frontiers in Behavioral Neuroscience, 171.

Guyer, A. E., Silk, J. S., & Nelson, E. E. (2016). The neurobiology of the emotional adolescent: From the inside out. Neuroscience & Biobehavioral Reviews, 70, 74-85.

Johnson, D. C., & Casey, B. J. (2015). Extinction during memory reconsolidation blocks recovery of fear in adolescents. Scientific Reports, 5(1), 8863.

Jones, R. M., Somerville, L. H., Li, J., Ruberry, E. J., Powers, A., Mehta, N., ... & Casey, B. J. (2014). Adolescent-specific patterns of behavior and neural activity during social reinforcement learning. Cognitive, Affective, & Behavioral Neuroscience, 14, 683-697.

Knoll, L. J., Fuhrmann, D., Sakhardande, A. L., Stamp, F., Speekenbrink, M., & Blakemore, S. J. (2016). A window of opportunity for cognitive training in adolescence. Psychological Science, 27(12), 1620-1631.

Koppel, J., & Rubin, D. C. (2016). Recent advances in understanding the reminiscence bump: The importance of cues in guiding recall from autobiographical memory. Current Directions in Psychological Science, 25(2), 135-140.

Tang, A., Lahat, A., Crowley, M. J., Wu, J., & Schmidt, L. A. (2021). Children’s shyness and neural responses to social exclusion: Patterns of midfrontal theta power usually not observed until adolescence. Cognitive, Affective, & Behavioral Neuroscience, 21(6), 1262-1275.

Towner, E., Chierchia, G., & Blakemore, S. J. (2023). Sensitivity and specificity in affective and social learning in adolescence. Trends in Cognitive Sciences.

Vink, M., Derks, J. M., Hoogendam, J. M., Hillegers, M., & Kahn, R. S. (2014). Functional differences in emotion processing during adolescence and early adulthood. NeuroImage, 91, 70-76.

Yates, J. R., Beckmann, J. S., Meyer, A. C., & Bardo, M. T. (2013). Concurrent choice for social interaction and amphetamine using conditioned place preference in rats: Effects of age and housing condition. Drug and Alcohol Dependence, 129(3), 240-246.

Natural Brain Waves Correspond with Eye Movements During Reading

Post by Lani Cupo

The takeaway

The brain’s neural activity imposes its own rhythm onto processing. Matching patterns of rhythmic activity have been found in language processing, not just for speech, which has a rhythm of its own, but also for reading.

What's the science?

Previous studies established that neural oscillations, or rhythmic neural activity, are involved in processing language, such as speech and sign language. However, language (either spoken or signed) contains a rhythm of its own, making it difficult to understand whether the synchrony between speech and brain oscillations arises because of the speech’s rhythm (outside in) or the rhythm of the brain (inside out). This week in The Journal of Neuroscience, Henke and colleagues use eye-tracking and electroencephalography (EEG) during reading to provide evidence that the brain may impose its own intrinsic rhythm onto language processing.

How did they do it?

The authors analyzed openly available data from the Zurich Cognitive Language Processing Corpus, in which twelve participants read 300 sentences on a screen while their eye movements and brain activity (with EEG) were recorded. From the eye-tracking data, the authors could study fixations, where the eye rests on a target, and saccades, where it moves between targets. The authors hypothesized that if rhythmic neural activity is relevant to reading, the eye movements should also be rhythmic, and the cycles of the eye-movement data should correlate with cycles of brain oscillations, which they assessed by examining their phase coherence. Additionally, language comprehension is thought to be broken into multi-word groups, or “chunks”, in which words are meaningfully related to one another. The authors expected the cycles of the eye movements to relate to the formation of chunks: at the end of a chunk, there would be a change in the duration of a fixation. To that end, they examined whether the fixation durations and the EEG signal were always at a specific point within their cycle at sentence endings.
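Phase coherence between two rhythmic signals can be quantified with a phase-locking value (PLV): the phase difference is taken at each time point, and its consistency is averaged on the unit circle. The sketch below is a generic illustration on synthetic sinusoids, not the authors' actual EEG/eye-tracking pipeline; the frequencies, sampling rate, and phase lag are assumptions chosen for demonstration.

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """PLV = |mean of exp(i * phase difference)|; 1 = perfectly locked, ~0 = unrelated."""
    total = sum(cmath.exp(1j * (pa - pb)) for pa, pb in zip(phases_a, phases_b))
    return abs(total / len(phases_a))

# Synthetic example: 2 s of data at a 250 Hz sampling rate.
t = [i / 250.0 for i in range(500)]
phase_5hz = [2 * math.pi * 5 * ti for ti in t]            # a 5 Hz oscillation
phase_lag = [2 * math.pi * 5 * ti + 0.8 for ti in t]      # same rhythm, constant lag
phase_7hz = [2 * math.pi * 7 * ti for ti in t]            # a different rhythm

locked = phase_locking_value(phase_5hz, phase_lag)        # near 1: phases stay aligned
unlocked = phase_locking_value(phase_5hz, phase_7hz)      # near 0: phases drift apart
```

A constant lag still counts as coherence: the PLV rewards a *consistent* phase relationship, not identical phases, which is why two signals with the same frequency but an offset remain perfectly locked.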

What did they find?

The authors found the expected rhythmicity of the eye movements. Importantly, the eye movements synchronized with brain oscillations from electrodes above the visual cortex in two different frequency bands. These included the theta band, involved in visual attention and syllable parsing (important for single-word comprehension), and the delta band, involved in chunking speech into multi-word units. Because the language stimulus was written, the rhythmic activity was not likely imposed on the brain by the external stimulus, but naturally arose. Contrary to their hypothesis, the authors could not relate the changes in fixation duration to sentence endings. They suspect this is because they sampled only a single fixation per word, which might have impacted the baseline in their statistical approach. However, they could relate the cycles of oscillatory brain activity to chunking.

What's the impact?

While the study does not allow the authors to claim that the neural oscillations causally impact the fluctuations in eye movements during reading, the data provide evidence that the synchronized brain activity and eye movements may shape reading and information processing. In time, such work might be extended to reading-impaired populations, including individuals with dyslexia, to improve support for individuals who struggle to read.

Transgender Listeners Show Reduced Visual Bias When Classifying Voices

Post by Anastasia Sares

The takeaway

While we usually draw on multiple senses and general predictions to inform our perception, this can sometimes backfire, introducing bias into our judgments. This study found that transgender and nonbinary people were less susceptible to visual bias, and better able to classify a person’s vocal range while watching videos of them singing or speaking. This trans advantage could come from more extensive experience with voice-body mismatch in daily life. 

What's the science?

The brain is constantly trying to fuse information from its different senses and make predictions based on that information. Unfortunately, this can sometimes lead to biases when we are only asked to judge based on one sense alone, or if we are confronted with something that doesn’t match our ingrained predictions. One example of this is the McGurk effect, where misleading visual information causes people to perceive a different syllable than the one they heard: a “ba” played over audio combined with the visual of someone saying “ga” can result in people reporting that they heard “da” instead. Visual bias is especially problematic for voice-body mismatches in the context of opera. A person’s body size and shape doesn’t necessarily indicate the range in which they can comfortably sing, but the stereotypes are strong and can (consciously or subconsciously) influence the roles that opera singers are cast in. This can affect their long-term vocal health and be detrimental to their careers.

Visual biases are not set in stone, however. There is some evidence that they can be mitigated through training, like musicians learning to resist the McGurk effect. One group of people who may have natural sensitivity to voice-body mismatches are the transgender and nonbinary communities, since voice is a strong gender cue and often a source of insecurity or fear of being outed.

Recently in Frontiers in Psychology, Marchand Knight and colleagues showed that, when asked to judge vocal ranges of different speakers, trans and nonbinary people are more resistant to visual biases than their cis peers, making their judgments more accurate.

How did they do it?

The authors conducted an online experiment including a cis group of participants as well as a trans group, which was composed of a mix of trans and nonbinary identities. Participants started by learning about different voice categories used in opera (from low/dark to high/bright: bass, baritone, tenor, alto, mezzo, soprano) and next used this voice-typing scale to rate clips of people speaking and singing. Participants first got the audio-only (no video) versions of the clips, then the visual-only versions (guessing voice type purely based on looks), and finally the full clips with both video and audio. The researchers intentionally chose some actors who they thought might show stronger voice-body mismatches to better measure the effect of visual bias.

What did they find?

Participants were fairly successful at classifying voice type based on hearing the voices in the audio-only condition, but in the visual-only condition they tended to revert to a gender binary (rating videos of female-presenting people around the “mezzo” voice range and male-presenting people around the “baritone” range). The highest and lowest voice types showed the largest discrepancy between their audio and visual ratings.

When audio and visual were presented together, ratings fell somewhere in between the two previous conditions, showing that the visual information influenced participants even though they had been asked to rate solely based on the audio. However, trans participants were better at resisting the visual bias, so their ratings in the audiovisual condition stayed closer to the audio-only condition. Cis participants’ ratings were pulled more strongly toward the visual information - about 30% more than trans participants’ ratings. This difference did not seem to be strongly related to demographic differences between the groups or to gender views in general, as far as the researchers could measure.
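The pull of visual information on an audiovisual rating can be described as a weighted average of the two unimodal ratings, where the weight on vision indexes visual bias. This linear cue-combination sketch is a generic model, not the authors' analysis, and the weights and ratings below are purely illustrative, chosen only so the cis weight is larger than the trans weight.

```python
def audiovisual_rating(audio_rating, visual_rating, visual_weight):
    """Linear cue combination: the rating lands between the two unimodal ratings."""
    return (1 - visual_weight) * audio_rating + visual_weight * visual_rating

# Voice-type scale coded 1 (bass) .. 6 (soprano).
audio, visual = 6.0, 4.0  # e.g., a soprano voice paired with a body read as "mezzo"

# A smaller visual weight keeps the combined rating closer to the audio-only rating.
trans_rating = audiovisual_rating(audio, visual, visual_weight=0.2)  # closer to audio
cis_rating = audiovisual_rating(audio, visual, visual_weight=0.4)    # pulled toward visual
```

Under this toy model, "30% more pulled toward the visual" corresponds to a larger `visual_weight` for the cis group, moving the combined rating further from the audio-only judgment.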

What's the impact?

These findings highlight a strength of the trans and nonbinary community, at a time when most research focuses on the disadvantages they face. The study also raises a pressing issue affecting the vocal health of opera singers and calls for it to be addressed.

Access the original scientific publication here. 

[Disclosure: The writer of this BrainPost summary is also a collaborator on the publication]