Why We Make Decisions Together

Post by Anastasia Sares

What’s the science?

Collective decision-making behaviors have been demonstrated in social animals like bees, ants, and fish. Humans are also social creatures, and like these other species, we often make decisions together, even though we strongly value autonomy. What benefit is there in giving up some of our autonomy and making a decision as part of a group? This week in Nature Human Behaviour, El Zein and colleagues suggest that we decide together in order to dilute risks and negative outcomes.

What do we already know?

Previous research in this area has focused on whether collective decision-making results in a better decision overall. In some circumstances the process is helpful, but in others a group can get derailed and make a suboptimal decision. Since group decisions aren’t necessarily more accurate, it is important to understand why we bother with them at all. After all, we like to have a choice when deciding what kind of product to buy or what career to pursue. Some decisions are made together out of social obligation or a sense of fairness, but this may not account for all of the collective decision-making situations we observe.

What’s new?

The authors propose that one of the main reasons individuals make decisions collectively is that it minimizes the risk taken by any one member. It’s what animals do when they herd or flock together, making it less likely that any one member is attacked (known as the dilution effect). Humans, even when they are not in physical danger, are very averse to certain emotional risks, especially regret or responsibility for a negative outcome. Making a decision as part of a group reduces the feeling of personal responsibility and can help us to cope with the stress of difficult decisions (like parents deciding whether or not to keep an injured child on life support). It may also protect us from social backlash (like when “whistleblowers” call out bad behavior of very powerful individuals). From the group’s perspective rather than the individual’s, however, this decrease in personal responsibility comes with its own problems: at worst, no one assumes responsibility for negative outcomes, and they are not addressed at all. Think of the bystander effect, where witnesses to an emergency situation are less likely to step in and help if others are present, or the tragedy of the commons, where individuals tend to overuse common resources.
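
To make the dilution intuition concrete, here is a back-of-the-envelope sketch (my own illustration, not a model from the paper): if a group of n members shares a decision and exactly one member ends up bearing the blame or the attack, each member’s expected share of the negative outcome shrinks as 1/n.

```python
# Toy illustration of the dilution effect (not from the paper):
# if exactly one of n group members bears a negative outcome,
# each member's expected share of that outcome is 1/n.

def expected_individual_share(group_size: int) -> float:
    """Expected share of the negative outcome borne by any one member."""
    return 1.0 / group_size

for n in (1, 2, 5, 10, 100):
    share = expected_individual_share(n)
    print(f"group of {n:>3}: expected individual share = {share:.3f}")
```

Deciding alone (n = 1) means absorbing the full outcome; in a group of 100, each member expects to bear only 1% of it.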


What’s the bottom line?

There are a number of factors that push us toward collective decision-making: social inclusion and fairness, the idea that we are smarter together, and, as El Zein and colleagues emphasize, protection from negative consequences. In the future, it will be important to evaluate the relative contribution of these different factors in the drive to collective decision-making. This will help us better understand the behavior of the different social groups and governing bodies that permeate human society. Perhaps then we’ll know when to say, “many hands make light work” and when to say, “too many cooks spoil the broth.”


El Zein et al. Shared responsibility in collective decisions. Nature Human Behaviour (2019). Access the original scientific publication here.

Can We Alter the Progression of Huntington’s Disease?

Post by Anastasia Sares

What’s the science?

The Huntingtin (HTT) gene has a number of roles in our brain, including neural development and transport of neuronal cell components, and we still don’t understand everything about it. We do know that the gene has an area where the three-base sequence “CAG” repeats a number of times. Sometimes, during DNA replication, the “CAG” gets stuck on repeat: if there are over 35 repeats, this leads to Huntington’s disease. The more repeats, the earlier the onset of symptoms, which include chorea (dance-like movements), muscle rigidity, lack of coordination, dementia, and depression. Each child of a person with Huntington’s has a 50% chance of inheriting the disease.
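
As a rough illustration of how this repeat threshold works, here is a minimal sketch (the sequences and helper function below are hypothetical, invented for illustration, not real HTT genotypes) that counts the longest uninterrupted CAG run in a DNA string and flags the disease range described above.

```python
import re

DISEASE_THRESHOLD = 36  # more than 35 CAG repeats leads to Huntington's disease

def longest_cag_run(dna: str) -> int:
    """Length, in repeats, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Hypothetical toy sequences, not real HTT genotypes
normal   = "GGT" + "CAG" * 20 + "TTC"
expanded = "GGT" + "CAG" * 42 + "TTC"

for label, seq in (("normal", normal), ("expanded", expanded)):
    repeats = longest_cag_run(seq)
    status = "disease range" if repeats >= DISEASE_THRESHOLD else "typical range"
    print(f"{label}: {repeats} CAG repeats ({status})")
```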

In the mere 25 years since the gene causing Huntington’s disease was discovered in 1993, a myriad of possible approaches to treating this devastating genetic disease have emerged. This week in Neuron, Tabrizi and colleagues inventoried treatment options for Huntington’s disease at the DNA, RNA, and protein levels, showing how far each has progressed in clinical trials and evaluating their pros and cons.

What do we know?

In our body’s cells, genetic material (DNA) lives in the nucleus. In order to make functional proteins that do work in the rest of the cell, the DNA must first be transcribed into RNA, a messenger that carries the instructions outside of the nucleus, and then translated into protein. The many repeats of “CAG” in the mutant HTT gene get translated into a long chain of abnormal material in the resulting protein. Because of HTT’s integral role in the cell, these abnormal proteins have a variety of different effects, not least of which is that they can fragment and form toxic aggregates. These aggregates may lead to cell death in important brain regions like the striatum, which is responsible for movement selection and initiation.
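
The CAG codon codes for the amino acid glutamine, so the expanded repeat becomes a long glutamine tract in the mutant protein. Here is a simplified sketch of the two steps described above (transcription, then translation); the helper names are my own, and the codon table is trimmed to the single codon this example needs:

```python
# Simplified central-dogma sketch: DNA -> RNA -> protein.
CODON_TABLE = {"CAG": "Q"}  # CAG codes for glutamine (single-letter code Q)

def transcribe(coding_dna: str) -> str:
    """mRNA mirrors the coding strand, with U in place of T."""
    return coding_dna.upper().replace("T", "U")

def translate(mrna: str) -> str:
    """Read the mRNA three bases at a time and map codons to amino acids."""
    codons = (mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3))
    return "".join(CODON_TABLE.get(codon, "?") for codon in codons)

repeat_region = "CAG" * 40            # a toy expanded repeat
protein = translate(transcribe(repeat_region))
print(protein)                        # 'QQQQ...': a 40-glutamine tract
```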


When it comes to treating the disease, there are many different plans of attack. It might be possible to directly modify the mutant Huntingtin gene itself, chopping it out of the DNA. We could also target RNA, the messenger. Finally, we could intervene at the level of the Huntingtin protein, breaking down the mutants before they have a chance to affect other parts of the cell. However, silencing HTT, especially early in life, can cause a host of problems. A successful therapy must either silence ONLY the mutant HTT, or find a balance between reducing mutant HTT and leaving enough normal HTT for successful neural development. There’s another problem, too. Because of the mutations in the HTT gene, the cell doesn’t always follow the normal rules about where it should start and stop in the process of creating RNA or proteins. This can result in a number of non-standard proteins which are also toxic. An optimal therapy would be able to remove or reduce these non-standard proteins.

What’s new?

To make a treatment acceptable for use in humans, the method must first be demonstrated to be effective in cell cultures, other mammals, and non-human primates. It then proceeds to rigorous multi-phase clinical testing. Recent advances in DNA technologies like the CRISPR/Cas9 system allow for precision manipulation of DNA and go directly to the source of the problem, offering a potential one-time treatment (which would take care of the non-standard proteins as well). However, these technologies are very new and are still in the preclinical stage. Most DNA treatments, including CRISPR/Cas9, as well as some RNA treatments, are currently very invasive, requiring injection of viral vectors directly into the brain. This is irreversible and might provoke inflammation or other immune responses, not to mention the high risks of brain surgery in general.

The most clinically advanced treatments for Huntington’s disease are RNA-targeting methods, especially antisense oligonucleotides (ASOs). Unlike the highly invasive DNA treatments, ASOs can be administered via lumbar puncture. However, ASOs might have to be administered repeatedly, which isn’t ideal, and they can’t target all of the abnormal proteins generated by the mutant HTT gene. At the protein level, one therapeutic method would be to stimulate the cell’s native machinery to degrade mutant Huntingtin protein faster (through PROTACs). However, this approach is also preclinical and needs further development, as we don’t yet know the best way to deliver PROTACs to the central nervous system or what side effects they might have. No matter which method is chosen, silencing both normal and mutant HTT seems more promising, since it won’t have to be personalized for patients with different numbers of CAG repeats. However, if we are to decrease HTT on a system-wide level, the timing of the intervention is critical: therapy would need to start late enough to avoid the period of neural development, but early enough to be effective. Having better detection methods for Huntington’s disease progression will be crucial to this endeavor.

What’s the bottom line?

The principles behind these Huntington’s disease therapies also extend to many other genetic diseases. The main problem is how to deliver them successfully. The most powerful and specific therapies are also the most invasive and dangerous, and editing DNA comes with ethical concerns. There is still much work to be done in bringing these therapies to the clinic, and future research will need to focus on delivering them safely while mitigating harmful side effects.

Tabrizi et al. Huntingtin lowering strategies for disease modification in Huntington’s disease. Neuron (2019). Access the original scientific publication here.

New Year, New Me: The Neuroscience of Habit Formation

Post by Deborah Joye

What is habit formation?

As 2018 winds to a close, many of us begin looking to 2019 with the intention of making changes in the new year. Whether it’s eating healthy, learning a new skill, or getting better sleep, the underlying goal of most New Year’s Resolutions is to build new, life-changing habits. What can neuroscience teach us about forming long-lasting habits more easily? In essence, habit formation involves learned associations between an event and a behavioral response. Before we develop automatic associations (habits), we begin with purposeful, goal-directed behavior. Action-outcome associations are goal-directed behaviors in which an individual performs some action, and, if the outcome is rewarding, the behavior is reinforced. So, if I eat a treat and my mood improves, I will be more likely to eat that treat again. With training and repetition, this behavior becomes automatic and the association becomes an ‘outcome insensitive’ stimulus-response association. Once this behavior is automatic, I might eat that treat and feel no different, or even feel a little bit sick, but that wouldn’t necessarily stop me from eating it again. This is because the stimulus of seeing the treat now leads to the automatic response of eating it. Once we’ve performed a particular action sequence enough times with a similar response, the brain tries to free up processing space by saving an automatic stimulus-response association that can be triggered with almost no thinking.
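
As a loose sketch of this shift (a toy model of my own, not taken from the cited research), imagine two learning systems updating in parallel: a goal-directed value that tracks the current outcome, and a habit strength that grows with sheer repetition. After enough training, the habit keeps triggering the response even when the outcome is devalued:

```python
def simulate(training_trials: int, lr: float = 0.1) -> None:
    goal_value = 0.0  # outcome-sensitive action-outcome estimate
    habit = 0.0       # outcome-insensitive stimulus-response strength

    # Training: the treat is rewarding, and both systems strengthen.
    for _ in range(training_trials):
        goal_value += lr * (1.0 - goal_value)
        habit += lr * (1.0 - habit)  # grows with repetition alone

    # Devaluation: the treat now makes me feel a little sick, and only
    # the goal-directed system updates to reflect that.
    for _ in range(10):
        goal_value += lr * (-0.5 - goal_value)

    # A strong enough habit triggers the response regardless of outcome.
    eat = habit > 0.8 or goal_value > 0
    print(f"{training_trials:>3} trials: goal={goal_value:+.2f}, "
          f"habit={habit:.2f} -> {'eat the treat' if eat else 'skip it'}")

for trials in (5, 15, 50):
    simulate(trials)
```

With light training the devalued outcome wins and the response stops; after heavy training the stimulus-response weight dominates and the treat gets eaten anyway, mirroring the ‘outcome insensitive’ association described above.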

What’s happening in the brain?

During the process of learning goal-directed associations, connections between the cortex (responsible for higher-level cognition, like thinking and planning) and the basal ganglia (important for selecting a movement for a particular situation) change their activity, reflecting the switch to more automatic associations. A signal arises during the early learning process (before the behavior is automatic) in a basal ganglia region known as the dorsolateral striatum (DLS), which ‘chunks’ the task-related events together so that the brain sees the whole task, from beginning to end, as one event. Neurons related to the task fire at the beginning or end of the task (or both), while neurons unrelated to the task are quiet. Thus, the entire task is represented as a single event within the DLS. During the shift from trial-and-error learning to a more consistent task response, the strength of this ‘chunked’ representation increases. This representation appears to remain stable as long as the routine is performed and at least partially reinforced (rewarded). Other cells in the DLS have different roles in habit formation; some cells don’t respond during the task at all but respond right after, representing ‘outcome feedback’. When the task is first being learned, some DLS neurons respond to correct performance while other cells signal when the task is performed incorrectly. As the response to the task becomes more automatic, the number of cells participating in error-signaling (signaling incorrect performance) shrinks, while the number responding to correct performance grows. This loss of error-signaling for well-formed habits could be why our brains are less sensitive to the outcomes of habitual actions, and why habits are so difficult to change.

The DLS is not the only brain region that forms a chunked representation of learned tasks. A version of the chunking pattern also develops in a region of the prefrontal cortex known as the infralimbic cortex. In contrast to the DLS pattern, the chunking pattern in the infralimbic cortex develops later in the learning process, as the response to the task becomes consistent and outcome-insensitive. The infralimbic pattern is also different from the DLS pattern because it’s sensitive to changes in the task that require changes in behavior, such as changing which action is needed to receive a reward. The infralimbic pattern decays rapidly when the rules of the task change, and it re-emerges when an alternative routine takes shape. Overall, different regions of the brain may function in parallel to promote habit formation: an infralimbic response (a cortical-associative-limbic circuit) and a DLS response (the basal ganglia).

What’s new?

It was previously thought that the brain circuits underlying goal-directed actions and habitual actions were competing with one another for dominance in the brain. The idea was that all actions begin as goal-directed, and then the habitual action system takes over and inhibits the goal-directed connections, freeing up brain processing for other things. However, more recent evidence suggests that the two systems can actually work together. For example, goal-directed action circuitry may be needed to initiate a given routine, but habitual automaticity can then carry out a complex set of behaviors that the brain has learned to see as one unit (see ‘chunking’ in the DLS and infralimbic cortex described above). A goal-directed action such as entering the bathroom to get ready for bed can trigger a habitual sequence of actions: using the bathroom, brushing your teeth, and washing your face. The complementary actions of the goal-oriented and habit systems may be why we begin driving to work when we intended to drive somewhere else, or why we mean to drink black coffee and end up pouring in cream and sugar: the goal-oriented action is overtaken by a habitual routine. Research has also shown that context cues (for example, cues from our environment) play an important role in habit formation. When people were trained on a sequential task (performing step 1, then step 2, then step 3), repeated practice resulted in fast reporting of the next step when they were primed with the prior step. When people were particularly fast at reporting the next step (interpreted as a strong habit), their habits were likely to persist even when they intentionally wanted to add, remove, or change one of the steps. The influence of environmental cues on habitual action can also be seen, for example, in individuals who maintain sobriety while in a controlled environment such as a rehab facility, but struggle to remain sober and relapse once immersed in the environment in which they formerly used drugs.

Why does it matter?

Understanding how the brain represents goal-directed and habitual actions is integral to understanding pathologies such as addiction and compulsive disorders (binge-eating, obsessive-compulsive disorder), and to grappling with the complex emotions surrounding everyday habits. Addiction represents a situation wherein drugs of abuse have hijacked a person’s reward system, forming very powerful habitual associations. Understanding the deep connections that the brain forms between a given stimulus, the perceived reward, and the related environment can lead to better, longer-lasting treatment for addiction as well as decreased societal stigma surrounding it. The same can be said for eating disorders such as binge-eating, where food has become associated with significant reward regardless of hunger status, resulting in disordered eating and pathological weight gain. Finally, understanding how the brain forms habitual connections is critical for addressing the emotional component of bad habits. For many of us, there can be a stigma attached to bad habits if they are thought to reflect a personal flaw or failing (laziness, selfishness, etc.). It is useful to know how we can improve our behavior to create better habits. But it is perhaps more important to realize which aspects of habit formation are under our personal control.

What can I do about it?

What can we do to change our habits or to create new ones? Research has shown that people with good self-control are not necessarily exerting a high level of effort to maintain their good habits. Instead, people with strong self-control have weak habits for unhealthy behaviors and strong habits for healthy behaviors. One way to reinforce this is to redesign our environments to actively avoid circumstances in which bad habits arise. Research on college students shows that studying significantly improves when smartphones are hidden, and that eating improves when junk foods are hidden or even just placed out of arm’s reach. Even placing a bowl of fruit in a prominent spot in the kitchen can result not only in eating more fruit, but also in an identity shift (“I am a healthy eater”), which can help cement new habitual actions. This corresponds nicely with the idea of ‘habit discontinuity’, which involves a significant redesign of your environment, such as moving to an entirely new location. The absence of the usual habit cues makes implementing new habits much easier, since cognitive resources are not being spent on resisting the temptation of old cues. Another idea is “temptation bundling,” which involves combining something you like to do habitually with something you don’t like to do (for example, watching a TV show you love only while you’re at the gym). This lends rewarding value to something that previously was not perceived as rewarding, in an effort to bootstrap better habits. Finally, it is crucial to consider the impact of stress on habit formation. Since habit learning is enhanced during and immediately after a stressful situation, awareness of how we cope with stress is critical. By consciously exerting effort to choose good habits during times of stress, we may reap the benefits of more rapidly developed and longer-lasting changes in our habitual actions. This is something to keep in mind as many of us visit our extended families during the holiday season.

Smith, K. S., & Graybiel, A. M. (2016). Habit formation. Dialogues in Clinical Neuroscience, 18(1), 33–43.

Carden, L., & Wood, W. (2018). Habit formation and change. Current Opinion in Behavioral Sciences, 20, 117–122. https://doi.org/10.1016/j.cobeha.2017.12.009

Robbins, T. W., & Costa, R. M. (2017). Habits. Current Biology, 27(22), R1200–R1206. https://doi.org/10.1016/j.cub.2017.09.060