Empathic Artificial Intelligence: The Good and the Bad

Post by Shireen Parimoo 

What does it mean to be empathic? 

Empathy is one of the most distinctive human traits. It allows us to take on others’ points of view, share emotional experiences, and help others feel understood and cared for. As a result, empathy facilitates social bonding and helps strengthen interpersonal relationships. Empathy has three main components:

1. Cognitive empathy is our ability to recognize and understand others’ emotional states.

2. Emotional empathy involves affective resonance, or the ability to share in the emotions of others by feeling those emotions ourselves.

3. Motivational empathy refers to the feelings of care and concern for others that make us want to act to improve their well-being. 

Over the years, machines and robots have found their way into many roles previously filled by humans. Robotic pets that keep older adults with dementia company reduce their feelings of loneliness and improve their well-being. Chatbots and voice assistants powered by artificial intelligence (AI) help us in a wide range of situations and provide personalized solutions to our problems. Empathic conversational AI agents can even be used to solicit donations for charitable causes: features like a trembling voice both express empathy and elicit it from listeners, resulting in more donations. Going a step further, smart journals have been developed to incorporate AI into the journaling process, providing users with real-time feedback and even coaching. Technology like this can be immensely useful for those who cannot afford therapy or need immediate feedback.

With the advent of large language models like ChatGPT and the adoption of increasingly intelligent technology into our day-to-day lives, several debates surround the use of AI. Can AI agents be empathic? If so, when is it ethical to use them, if at all? What are the benefits and harms of allowing empathic AI agents to interact with people? Should the use of AI be regulated? This topic overview will touch on some of these questions by introducing examples of human-AI interactions, describing empathic AI and its uses in different contexts, and discussing the pros and cons of empathic AI.

What does empathic AI look like? 

People often treat AI similarly to other humans. We ascribe emotional states to AI agents and, when interacting with them, experience reactions much like those we have with other people. For example, Cozmo is a social robot that can express rudimentary forms of happiness and sadness. When denied a fist bump, Cozmo expresses sadness by turning away and making a sad sound; in response to this gesture, both children and adults show concern for the robot. Similarly, people feel more guilty and ashamed when voice assistants like Siri respond to verbal aggression with empathy rather than avoidance.

Artificially intelligent agents can simulate – if not genuinely feel – some aspects of empathy. ChatGPT, for instance, can recognize the user’s emotional state (cognitive empathy). When informed that “I feel horrible because I failed my chemistry exam”, ChatGPT responded with a sympathetic statement (“I’m sorry to hear that you’re feeling this way”) and showed insight into what the user might be feeling (“It’s completely normal to feel disappointed or upset about exam results”). It then provided suggestions for coping with the situation (e.g., “give yourself time to feel”, “focus on the future”), much as a friend or mentor might in a similar situation (motivational empathy).

Although AI can simulate expressions of cognitive and motivational empathy, it is unclear whether AI can engage in emotional empathy, because affective resonance (i.e., the ability to resonate with the emotions of others) may have a neurophysiological basis. For example, people who watch others in pain activate some of the same brain regions as those who are experiencing the pain. Even seeing pictures of pained facial expressions activates brain areas involved in pain empathy. This ability makes it easier for us to feel what another person is feeling but may be difficult for a non-biological agent like AI to achieve. Nevertheless, it may be enough that AI agents can express empathy in various situations and elicit specific emotional responses from humans, which raises the question: are there any costs associated with empathic AI?

The benefits and harms of empathic AI 

A major risk of adopting AI technology in general is that it can propagate the biases of those who create it. Many machine learning models and AI tools are already known to exhibit biases against certain sociodemographic groups. For instance, an algorithm used in the US healthcare system systematically predicted Black patients to be healthier than their equally sick White counterparts, thereby preventing them from receiving the extra care that they required. ChatGPT also exhibits gender biases against women. When writing recommendation letters, ChatGPT described men in terms of their skills and competence (‘expert’, ‘respectful’) whereas it described women in terms of their appearance and temperament (‘stunning’, ‘emotional’). In fields such as healthcare and technology, where racial and/or gender bias is present and minorities are under-represented, these biases can manifest in ways that harm users.

Nonetheless, there are numerous ways that empathic AI can benefit our lives. As mentioned above, conversational AI can increase prosocial behavior by nudging people to donate to charitable causes. In this situation, empathy is not necessarily directed toward the user but instead evoked in them. Research indicates that people are receptive to expressions of empathy from AI, which may be particularly useful in healthcare. For example, patients are more likely to disclose information, adhere to their treatment, and generally cope better when they perceive their physician as empathic in their interactions. When healthcare practitioners like physicians and therapists are not readily available to provide patient-centered care (e.g., between appointments), empathic AI can fill the gap by providing emotional support as needed.

People can also use empathic AI services such as smart journals in their daily lives without being restricted by cost or by the fear of social judgment that often prevents people from seeking help. An AI agent can also provide empathy consistently and reliably because it does not suffer from compassion fatigue, whereas people might begin to feel the burden of continually providing emotional support. However, there is a risk of becoming too dependent on AI for emotional support, with potentially negative consequences.

On the other hand, expressions of empathy from AI can be seen as inherently manipulative because AI agents cannot yet truly feel empathy. Empathy offered by healthcare practitioners is driven by emotional states and past experiences that allow them to relate to their patients, which is something AI inherently cannot draw on. Moreover, even though people can benefit from expressions of empathy from AI, this is largely only true when they are aware that they are interacting with an AI agent. We may hold AI to a different standard and have different expectations of our interactions with AI agents than of those with other people. If people do not realize that they are receiving feedback from AI agents, such as in virtual therapy, the benefit can be diluted, and the discovery can negatively impact well-being, erode trust, and call into question the ethics of using such technology or platforms. Lastly, the potential for manipulation and deception is particularly important to keep in mind and guard against when empathic AI is used in interactions with vulnerable populations like children and the elderly. There are already cases where AI has been misused to commit fraud through social engineering, such as conversational AI mimicking the voice of a family member to obtain sensitive information.

References

Ashcraft et al. (2016, Report). Women in tech: The facts.

Chin et al. (2020, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems). Empathy is all you need: How a conversational agent should respond to verbal abuse.

Efthymiou & Hildebrand. (2023, IEEE Transactions on Affective Computing). Empathy by design: The influence of trembling AI voices on prosocial behavior.

Inzlicht et al. (2023, Trends in Cognitive Sciences). In praise of empathic AI.

Montemayor et al. (2022, AI & Society). In principle obstacles for empathic AI: Why we can’t replace human empathy in healthcare.

Obermeyer et al. (2019, Science). Dissecting racial bias in an algorithm used to manage the health of populations.

Pelikan et al. (2020, Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction). "Are You Sad, Cozmo?": How humans make sense of a home robot's emotion displays.

Perry, A. (2023, Nature Human Behaviour). AI will never convey the essence of human empathy.

Portacolone et al. (2020, Generations). Seeking a sense of belonging.

Singer et al. (2004, Science). Empathy for pain involves the affective but not sensory components of pain.

Srinivasan & González. (2022, Journal of Responsible Technology). The role of empathy for artificial intelligence accountability.

Wan et al. (2023, arXiv). “Kelly is a warm person, Joseph is a role model”: Gender biases in LLM-generated reference letters.

Xiong et al. (2019, Neural Regeneration Research). Brain pathways of pain empathy activated by pained facial expressions: A meta-analysis of fMRI using the activation likelihood estimation method.

Mindsera Smart Journal. https://www.mindsera.com/

A Neural Signature of Drug Craving in Methamphetamine Users

Post by Christopher Chen

The takeaway

By measuring brain activity and applying machine learning to detect patterns, researchers have identified a neurobiological signature of drug craving in methamphetamine use disorder (MUD). These findings represent a significant advancement toward developing personalized and effective therapeutic interventions for MUD and other substance use disorders. 

What's the science?

Whether a person becomes addicted to a drug or relapses is strongly rooted in the neurobiological mechanisms of craving and drug-cue reactivity. Researchers have therefore focused on developing treatments that target craving to reduce drug use.

Machine learning tools can be used to build predictive models that link brain activity to drug craving. Combined with EEG and behavioral data, machine learning has previously helped classify individuals with MUD versus healthy controls with over 90% accuracy. However, while these studies demonstrated promising results, they used conventional low-density EEG. Conventional EEG at the 32- and 64-channel scale lacks the specificity to localize the sources of certain brain signals, whereas high-density EEG (128 channels or more) can better assess the regional and global brain activity related to craving in MUD.

In a recent article in Cell Reports Medicine, Tian and colleagues leveraged high-density resting-state EEG and machine learning to investigate the neurophysiological signatures of MUD. Ultimately, this study aimed to identify individual-level functional connectivity in individuals with and without MUD in the hopes of generating reliable biomarkers that could be used to predict MUD.

How did they do it?

Researchers generated brain functional connectivity networks (FCNs), essentially maps of the synchrony between brain regions, using resting-state high-density EEG from individuals with MUD and healthy controls (HC). Cue-induced craving was assessed by having participants watch a 5-minute video depicting various scenarios of methamphetamine use. The data included 465 region-of-interest (ROI) pairs (all pairwise connections among 31 ROIs), spanning five frequency bands (delta, theta, alpha, beta, and gamma) and two resting conditions (eyes closed and eyes open).

The researchers quantified FCNs using a measure called imaginary coherence (iCoh), which captures the synchrony between two EEG signals while discounting the spurious zero-lag coupling introduced by volume conduction. After characterizing FCNs in each subject, the researchers used a machine learning technique called a relevance vector machine (RVM) to build models that predict craving scores in individuals with MUD. To further validate these predictive models, they applied them to another resting-state EEG dataset of 44 different individuals with MUD.
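To make the connectivity measure concrete, below is a minimal sketch of computing iCoh between two signals with SciPy. The sampling rate, band limits, and signals are illustrative assumptions rather than details from the paper; in the study, this kind of computation would be repeated for every ROI pair, frequency band, and resting condition to fill out each subject's FCN.

```python
# Minimal iCoh sketch (illustrative parameters, not the paper's pipeline).
import numpy as np
from scipy.signal import csd, welch

fs = 250                                                 # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 60)                         # stand-in "ROI 1" signal
y = 0.5 * np.roll(x, 5) + rng.standard_normal(fs * 60)   # lagged, coupled "ROI 2"

# Cross- and auto-spectral densities.
f, pxy = csd(x, y, fs=fs, nperseg=fs * 2)
_, pxx = welch(x, fs=fs, nperseg=fs * 2)
_, pyy = welch(y, fs=fs, nperseg=fs * 2)

# Complex coherency; keeping only its imaginary part discards zero-lag
# coupling (e.g., volume conduction), which is the appeal of iCoh for EEG.
coherency = pxy / np.sqrt(pxx * pyy)
beta = (f >= 13) & (f <= 30)                             # beta band (13-30 Hz)
icoh_beta = np.abs(coherency[beta].imag).mean()
print(f"beta-band iCoh: {icoh_beta:.3f}")
```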

Additionally, the researchers were interested in whether their models could distinguish between individuals with MUD and HC. They applied the RVM model to classify individuals with MUD versus HC using FCNs from the various frequency bands and resting conditions. Finally, they compared the predictive capacity of their models with models built from EEG spectral power, another well-known marker of brain activity.
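The sketch below illustrates the spirit of this model comparison under clearly labeled assumptions: scikit-learn has no relevance vector machine, so a sparse L1-regularized logistic regression stands in for the RVM classifier, and both feature sets are synthetic stand-ins for the iCoh and spectral-power features.

```python
# Hedged sketch: comparing connectivity (iCoh) features with spectral-power
# features for MUD-vs-control classification. All data are synthetic, and an
# L1 logistic regression stands in for the paper's RVM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 80
labels = np.repeat([0, 1], n // 2)                 # 0 = control, 1 = MUD
# The "connectivity" features get a stronger group effect than "power" here
# purely by construction, to mimic the shape of the comparison.
fcn = rng.standard_normal((n, 465)) + 0.4 * labels[:, None]    # iCoh stand-in
power = rng.standard_normal((n, 31)) + 0.1 * labels[:, None]   # band-power stand-in

clf = LogisticRegression(penalty="l1", solver="liblinear")
for name, X in [("FCN (iCoh)", fcn), ("spectral power", power)]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: 5-fold accuracy = {acc:.2f}")
```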

What did they find?

Machine learning models tailored to the EEG data pinpointed brain regions such as the medial prefrontal cortex (mPFC), angular gyrus, orbital gyrus, and insula, along with their connections, as critical in mediating craving, findings that align with previous MUD studies. Interestingly, unique connections in certain frequency bands (delta and beta) also correlated with craving, suggesting these may be potential therapeutic targets.

The researchers found that the most robust biomarker for MUD came from connectivity networks in the beta band recorded while participants had their eyes open (REO beta). Using data from the REO beta condition, the models exhibited the strongest predictive capability for cue-induced craving, correlating well with reported craving in individuals with MUD. Importantly, the prediction was prospectively replicated in an independent EEG dataset. The model also effectively identified abnormalities in individuals with MUD, improving on previous methods by tying brain activity and interactions to MUD through source localization and iCoh. REO beta models also showed the best classification performance across all frequency bands and resting conditions, achieving over 80% accuracy in determining whether an individual had MUD.

What's the impact?

The results illustrate the effectiveness of integrating advanced brain recording techniques with machine learning tools to identify robust neurobiological biomarkers for drug addiction. Furthermore, the study developed replicable predictive models for craving and showed that FCNs are potent tools for characterizing brain activity related to drug craving. Looking ahead, these insights underscore the potential of similar combinations of neural measurement and AI-driven techniques to create more personalized and effective therapeutics for MUD and other substance use disorders.

The Development of Consciousness in Infants

Post by Laura Maile

What is consciousness?

Scientists have long attempted to understand where consciousness resides, whether it involves a network of brain areas, and when in development it emerges. Current theories of human consciousness state that consciousness develops as the brain becomes capable of integrating information, making us aware of ourselves and our environment. There are many different theories, however, about how and where consciousness is represented. Higher-order theories, for example, require that one be able to represent an experience in the mind, and they place importance on the prefrontal cortex. In contrast, integrated information theory places more emphasis on posterior cortical areas and rests on the brain’s ability to integrate different stimuli into information generated by the whole. In general, it is agreed that consciousness is represented in the brain, likely as an integration of signals across multiple brain areas.

How do you measure consciousness?

It is critically important to develop and agree upon measures of consciousness, especially for infants, who are unable to follow directions or communicate verbally. Infants do possess the ability to respond to stimuli such as the sound of their mother’s voice, different facial expressions, and noxious stimuli. Their responses can be measured both behaviorally (limb withdrawal, facial grimacing, eye movements, vocal and sucking activity) and physiologically (changes in heart rate and neural activity in brain areas that respond to environmental stimuli). Most current theories of consciousness are based on physical processes that can be measured via electroencephalogram (EEG) recordings and fMRI, together with behavioral indicators of consciousness such as the capacity to respond to environmental stimuli. fMRI studies, which record high-resolution hemodynamic activity in the brain that represents underlying neural activity, have identified cortical “hubs” and networks of brain regions that are active during different activities and states. By studying brain activity across different stages of development, scientists have determined how functional networks change over time. Conceptually similar to EEG, which uses a net of electrodes fitted to the scalp to record brain activity, magnetoencephalography (MEG) is a noninvasive alternative that can even be used to measure fetal brain activity.

When does consciousness emerge?

There are conflicting theories about when exactly consciousness appears during development. Some theories, for example, require that individuals have a sense of self and an understanding of their own mental state to qualify as “conscious,” which would mean that humans are not conscious until some time after their first birthday. Other recent evidence indicates that consciousness may appear in early infancy or even before birth, as soon as thalamocortical activity appears in the brain at about 24-26 weeks of gestation. Rather than base our understanding of the emergence of consciousness on any one of these conflicting theories, some scientists suggest that the field measure markers of consciousness in adults and observe when they first emerge in infants.

Brain activity shows that the primary cortical areas that process visual, auditory, and sensorimotor information respond to external stimuli at birth, indicating that newborn babies can process many sensory inputs. Some of these areas are also present and active before birth in the developing fetus. Brain areas involved in more complex processes such as attention, executive function, and memory appear less complex at birth and develop over the first two years of life. Some specific functional activity networks have been linked to the capacity for, or recovery of, consciousness after injury in adults. Three of these networks, the default mode network, dorsal attention network, and executive control network, have recently been identified as distinct and functional networks in newborn babies.

Behavioral data shows that shortly after birth, infants can process auditory and visual inputs, allowing them to recognize their mother’s voice, show sensitivity to music, and even prefer their native language. Visual acuity is low at birth, but brain imaging data shows responses in the visual pathway to distinct visual inputs by two months of age. While most perceptual abilities expand and develop as infants age, there is also data indicating that young infants aged 4-6 months can perceive more distinct sounds and faces than older infants and adults can. Additionally, multisensory integration that requires conscious perception of individual stimuli has been demonstrated in 4-5-month-old infants.

What's next?

The study of infant consciousness has become an increasingly important topic in consciousness research. Continued improvement in methods to measure brain activity and other markers of consciousness in fetuses and infants is needed. A more complete understanding of the neural correlates and functional basis of consciousness will also require consolidating the many theories of consciousness into one widely accepted framework.

The takeaway

A definitive answer to when human consciousness begins has yet to be found, but the growing number of studies of conscious experience in fetuses and infants is bringing us closer to one. Recent evidence points to an earlier onset of consciousness than previously described, indicating that some level of consciousness is present at birth, and potentially even in the late stages of gestation, when brain activity and behavioral responses to external stimuli can already be measured.

References

Bayne, T. et al. (2023, Trends in Cognitive Sciences). Consciousness in the cradle: On the emergence of infant experience.

Padilla, N. et al. (2020, Acta Paediatrica). Making of the mind.

Seth, A.K. et al. (2022, Nature Reviews Neuroscience). Theories of consciousness.