Neurons Detect Cognitive Boundaries to Separate Memories

Post by Andrew Vo

The takeaway

We experience our lives as a continuous stream that is organized and stored in our memories as discrete events separated by cognitive boundaries. A neural mechanism in the medial temporal lobe (MTL) detects such boundaries as we experience them and allows us to remember the ‘what’ and ‘when’ of our memories.

What's the science?

How is our continuous experience of the world transformed into discrete events separated by boundaries in our memories? Whereas we have a clear understanding of how the brain encodes our spatial environments with physical boundaries, the neural mechanism by which nonspatial memories are shaped by abstract event boundaries remains unknown. This week in Nature Neuroscience, Zheng et al. recorded neuronal activity within the MTL of human epilepsy patients and tested their memories for video clips separated by different types of event boundaries.

How did they do it?

The authors recorded single-neuron activity within different regions of the MTL (including the hippocampus, amygdala, and parahippocampal gyrus) of 20 epilepsy patients as they performed a task. During an encoding phase, individuals watched 90 distinct and novel video clips that contained either no boundaries (i.e., a continuous clip), soft boundaries (i.e., cuts to different scenes within the same clip), or hard boundaries (i.e., cuts to scenes from different clips). During a scene recognition phase, individuals were presented with single static frames (drawn either from previously presented ‘target’ clips or from never-before-seen ‘foil’ clips) and asked to identify each frame as ‘old’ or ‘new’ along with a confidence rating. During a time discrimination phase, individuals were shown two old frames side by side and asked to indicate the order in which they had previously appeared, along with a confidence rating.

What did they find?

Scene recognition accuracy did not differ between boundary types. In contrast, time discrimination accuracy was significantly worse when discerning the order of frames separated by hard boundaries than by soft boundaries. These findings suggest a tradeoff in which boundaries benefit memory for event content but impair memory for temporal order across them. The authors identified ‘boundary cells’ as MTL neurons whose firing rates increased following both soft and hard boundaries, whereas ‘event cells’ were neurons that responded only to hard boundaries. Boundary cell firing rates during encoding predicted later scene recognition accuracy, while the coordination of event cell activity with ongoing brain oscillations predicted later time discrimination performance. When examining neural state shifts (i.e., changes in population activity across boundary-responsive neurons), larger shifts were positively related to recognition accuracy but negatively related to time discrimination accuracy, revealing a neural mechanism for the tradeoff between recognition and temporal order memory.
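The neural state shift described above can be illustrated with a toy computation: treating each boundary-responsive neuron's firing rate as one dimension, the shift is the distance between the population-rate vectors before and after a boundary. This is a minimal sketch on simulated data, not the authors' analysis pipeline; the array shapes, rates, and random-jump model are assumptions for illustration.

```python
import numpy as np

def neural_state_shift(pre_rates, post_rates):
    """Euclidean distance between mean population firing-rate vectors
    before and after an event boundary.

    pre_rates, post_rates: arrays of shape (n_trials, n_neurons),
    firing rates in a window before/after the boundary.
    """
    pre_mean = pre_rates.mean(axis=0)    # mean rate per neuron, pre-boundary
    post_mean = post_rates.mean(axis=0)  # mean rate per neuron, post-boundary
    return float(np.linalg.norm(post_mean - pre_mean))

# Simulated example: 30 trials, 12 boundary-responsive neurons.
# A hard boundary is modeled as a jump in the mean firing rate.
rng = np.random.default_rng(0)
pre = rng.poisson(lam=5.0, size=(30, 12)).astype(float)
post = rng.poisson(lam=8.0, size=(30, 12)).astype(float)

shift = neural_state_shift(pre, post)
print(f"state shift: {shift:.2f} Hz")
```

In this framing, "larger shifts" simply means a greater distance between the pre- and post-boundary population states; relating those distances to memory performance across trials is what links the population dynamics to behavior.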

What's the impact?

This study revealed a neural mechanism in the MTL that responds to boundaries separating discrete events and helps shape memories for the content and temporal order of those events. A particular highlight of this paper is the use of single-neuron recordings in human patients, which allows for a more direct study of memory-related brain activity than less invasive approaches such as functional MRI or EEG.

Utilizing Artificial Intelligence Improves Planning and Decision Making

Post by Lincoln Tracy

The takeaway

Benjamin Franklin once said, “Failing to plan is planning to fail”. Artificial intelligence can teach people to improve their planning and decision-making strategies by providing optimal feedback, helping them avoid sub-optimal outcomes.

What's the science?

Decision-making is an important part of everyday life, but it is often plagued by errors that can have serious consequences. In many cases, these consequences could be avoided if proper planning strategies were implemented. A crucial part of developing optimal planning strategies is reliable, valid, and timely feedback. However, many real-world settings do not provide enough high-quality feedback to help people discover optimal strategies on their own. This week in PNAS, Callaway and colleagues developed an artificial intelligence tutor to help people quickly discover the best possible decision-making strategies, then tested its effectiveness across several experiments in different settings.

How did they do it?

First, the authors used artificial intelligence to develop a virtual tutor to teach people optimal decision-making processes. The tutor was designed to provide metacognitive feedback (to help participants learn the optimal strategies for themselves) rather than direct feedback (e.g., “you should have gone left”) during an initial training phase before the testing phase began. The authors then recruited over 2500 participants across six different online experiments hosted on Amazon Mechanical Turk or Prolific to test the effectiveness of the intelligent tutor (against direct action feedback or no feedback) in six different settings:

·       Experiment 1 introduced participants to the Web of Cash game, where they were required to navigate a spider through a web from its center to an outer edge. Each space on the web contained a reward (or a loss), and participants aimed to collect as many rewards as possible. All rewards and losses were hidden initially, meaning participants did not know the optimal path to obtain the most rewards. However, participants could pay a small fee to uncover the reward on each space of the web. Participants undertook the training phase with metacognitive, direct, or no feedback before completing the testing phase. The authors quantified participants’ performance as how often they used the optimal strategy to navigate the spider through the web.

·       Experiment 2 tested whether the metacognitive feedback training was effective in a more complicated variant of the Web of Cash game than Experiment 1 (routing an airplane through a larger series of airports).

·       Experiment 3 tested whether the metacognitive feedback training could be retained by adding a 24-hour delay between the training and testing phases of the Web of Cash game.

·       Experiment 4 tested whether metacognitive feedback training was effective in a less structured version of Experiment 1.

·       Experiment 5 tested metacognitive feedback in a real-world context: planning an inexpensive road trip (the Road Trip paradigm). Rather than navigating a spider through a web, participants were required to drive across a country, stopping at several hotels, to end up at a city with an airport. A fourth training condition (watching a video about if-then plans) was added.

·       Experiment 6 explored which aspect of the metacognitive feedback (i.e., a time-based penalty for selecting a suboptimal move or a message describing what move the participant should have made) made the largest contribution to the improved scores on the Web of Cash game.
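The core dilemma in the Web of Cash game is a value-of-information problem: uncovering a hidden reward costs a fee, so it only pays off when the information improves your chosen path by more than the fee. The toy simulation below is an illustration of that logic under invented assumptions (a one-step choice among five paths, rewards drawn uniformly, a flat fee); it is not the authors' task code.

```python
import random

def play_round(hidden_rewards, fee, inspect):
    """One simplified Web-of-Cash decision: rewards on k paths are hidden,
    and inspecting reveals all of them at a cost of `fee` per space.

    inspect=True  -> pay to reveal every reward, then take the best path.
    inspect=False -> pick a path blindly.
    """
    if inspect:
        return max(hidden_rewards) - fee * len(hidden_rewards)
    return random.choice(hidden_rewards)

# Compare blind choice against paid inspection over many simulated rounds.
random.seed(1)
rounds = [[random.randint(-10, 10) for _ in range(5)] for _ in range(2000)]
blind = sum(play_round(r, fee=1, inspect=False) for r in rounds) / len(rounds)
informed = sum(play_round(r, fee=1, inspect=True) for r in rounds) / len(rounds)
print(f"blind: {blind:.2f}, informed: {informed:.2f}")
```

With these assumed numbers, paying to reveal the rewards earns more on average than choosing blindly; with a sufficiently high fee the relationship reverses, which is why discovering when inspection is worth its cost is the strategy participants had to learn.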

What did they find?

In Experiment 1, the authors found that participants performed better on the Web of Cash game after receiving metacognitive feedback compared to the other two feedback conditions. This suggests metacognitive feedback increased participants’ ability to make better decisions without having to think harder. Participants who received metacognitive feedback also performed better in a more complicated version of the Web of Cash game (Experiment 2), when there was a 24-hour delay between training and testing (Experiment 3, suggesting training effects were retained over time), and in a less structured version of the game (Experiment 4). Metacognitive feedback resulted in better performance on the more naturalistic Road Trip paradigm compared to the video-only and no-feedback groups, suggesting metacognitive training can transfer to new situations (Experiment 5). Metacognitive feedback with both the delay penalties and information about the optimal choice improved performance more than either component individually, and neither individual component improved performance more than receiving no training (Experiment 6). This suggests both aspects of metacognitive feedback are critical to the improvements in decision-making and planning.

What's the impact?

This study found that metacognitive feedback provided by an artificially intelligent tutor helped people quickly learn effective decision-making strategies. This novel feedback method outperformed conventional feedback approaches at improving decision-making performance. These findings represent the first steps toward using artificial intelligence tutors in increasingly realistic situations to improve decision-making processes and achieve better outcomes.

Access the original scientific publication here.  

Determining Heart Rates of Others by Looking at their Faces

Post by Lina Teichmann

The takeaway

People can identify who a heartbeat belongs to by looking at a short video of their face. This suggests that it is possible for us to infer or feel other people’s internal bodily states.

What's the science?

Internal bodily states such as heartbeats have been suggested to influence how we experience the outside world. For example, our cardiac rhythm might be coupled with the emotional experience of a given scenario. This week in Cortex, Galvez-Pol and colleagues highlighted that we also have the ability to infer other people’s internal bodily states via visual assessment alone. Their findings show that observing someone else’s face is enough to determine the most likely owner of a given heartbeat.

How did they do it?

Participants completed five behavioural tasks to examine whether heart rate can be inferred above chance when viewing other people’s faces. In the initial experiment (referred to as the ‘natural configuration’), short videos of two actors were shown along with a square flashing at the same frequency as one actor’s heartbeat. Participants were asked to choose which of the two actors was the most likely owner of the depicted heartbeat. This experiment was later replicated with a different set of participants. In follow-up experiments (the ‘inverted configuration’), the authors inverted the faces of the actors, adjusted the colour of the faces for consistency throughout the video displays, showed still frames instead of videos, and replaced the face displays with geometric shapes.

What did they find?

In all experiments using face stimuli, the authors found that participants performed above chance at determining who was the most likely owner of the depicted heartbeat. In trials where the actors’ true heartbeats differed most, participants performed better than when the heartbeats were more similar. Performance was best in the natural configuration (dynamic videos with upright faces). Performance decreased but remained above chance in the inverted configuration. When the faces were replaced with geometric shapes, participants performed at chance.
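"Above chance" in a two-alternative task like this one can be checked with a one-sided exact binomial test against 50% accuracy. The sketch below uses only the standard library; the trial counts and accuracy are hypothetical numbers for illustration, not the study's data.

```python
from math import comb

def binomial_p_above_chance(n_correct, n_trials, p_chance=0.5):
    """One-sided exact binomial test: the probability of observing
    n_correct or more successes if true accuracy were at chance."""
    return sum(
        comb(n_trials, k) * p_chance ** k * (1 - p_chance) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# Hypothetical participant: 70 correct out of 120 two-choice trials.
p = binomial_p_above_chance(70, 120)
print(f"p = {p:.3f}")  # a small p means performance is unlikely under chance
```

A small p-value across participants is what licenses the claim that heartbeat owners were identified above chance rather than by guessing.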

What's the impact?

The results of this study suggest that we are able to infer other people’s internal signals, such as heartbeats, via visual assessment alone. It is possible that this reflects an ability to use visual cues such as varying redness of the face as blood is pumped, or small pulsing movements of the head, face, or eyes. Alternatively, we may be able to infer someone’s health (and therefore their heart rate) from their appearance. Overall, the study provides interesting insights into how we infer the internal states of others using visual perception, and these insights warrant further research.

Access the original scientific publication here.