Speaking foreign languages may help protect your memory

People who speak more than two languages may lower their risk of developing memory problems, according to a study released today that will be presented at the American Academy of Neurology's 63rd Annual Meeting in Honolulu April 9 to April 16, 2011.

"It appears speaking more than two languages has a protective effect on memory in seniors who practice foreign languages over their lifetime or at the time of the study," said study author Magali Perquin, PhD, with the Center for Health Studies from the Public Research Center for Health ("CRP-Santé") in Luxembourg. Perquin is helping to lead the MemoVie study which involves a consortium of partners from different hospitals and institutions.

The study involved 230 men and women with an average age of 73 who had spoken or currently spoke two to seven languages. Of the participants, 44 reported cognitive problems; the rest of the group had no memory issues.

Researchers discovered that those people who spoke four or more languages were five times less likely to develop cognitive problems compared to those people who only spoke two languages.

People who spoke three languages were three times less likely to have cognitive problems compared to bilinguals. In addition, people who currently spoke more than two languages were also four times less likely to have cognitive impairment. The results accounted for the age and the education of the participants.

"Further studies are needed to try to confirm these findings and determine whether the protection is limited to thinking skills related to language or if it also extends beyond that and benefits other areas of cognition," said Perquin.

The research was conducted in Luxembourg, where there is a dense population of people who speak more than two languages.

The MemoVie study was supported by The National Research Fund (FNR) from Luxembourg.

High cholesterol and blood pressure in middle age tied to early memory problems

Middle-age men and women who have cardiovascular issues, such as high cholesterol and high blood pressure, may not only be at risk for heart disease, but for an increased risk of developing early cognitive and memory problems as well. That's according to a study released Feb. 21 that will be presented at the American Academy of Neurology's 63rd Annual Meeting in Honolulu April 9 to April 16, 2011.

For the study, 3,486 men and 1,341 women with an average age of 55 underwent cognitive tests three times over 10 years. The tests measured reasoning, memory, fluency and vocabulary. Participants received a Framingham risk score that is used to predict 10-year risk of a cardiovascular event. It is based on age, sex, HDL cholesterol, total cholesterol, systolic blood pressure and whether they smoked or had diabetes.
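
Scores of this kind are computed by assigning points to each risk factor and summing them. The sketch below is illustrative only: the point weights are invented for demonstration and are not the published Framingham coefficients, which assign sex-specific points per factor and map the point total to a 10-year event probability.

```python
# Illustrative point-based cardiovascular risk sketch.
# The weights below are MADE UP for demonstration; they are NOT the
# published Framingham coefficients.

def toy_risk_points(age, is_male, hdl, total_chol, systolic_bp, smoker, diabetic):
    """Return an invented point total from Framingham-style inputs."""
    points = 0
    points += (age - 30) // 5              # older -> more points
    points += 2 if is_male else 0          # male sex adds risk
    points += 2 if hdl < 40 else 0         # low HDL adds risk
    points += 2 if total_chol > 240 else 0 # high total cholesterol adds risk
    points += 2 if systolic_bp > 140 else 0
    points += 3 if smoker else 0
    points += 3 if diabetic else 0
    return points

# A hypothetical 55-year-old male smoker with low HDL, high cholesterol,
# and elevated systolic blood pressure:
print(toy_risk_points(55, True, 38, 250, 150, True, False))  # -> 16
```

In the real score, the point total is then looked up in a sex-specific table to obtain the 10-year risk percentage used in the study.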

The study found people who had higher cardiovascular risk were more likely to have lower cognitive function and a faster rate of overall cognitive decline compared to those with the lowest risk of heart disease. A 10 percent higher cardiovascular risk was associated with poorer cognitive test scores in all areas except reasoning for men and fluency for women. For example, a 10 percent higher cardiovascular risk was associated with a 2.8 percent lower score in the test of memory for men and a 7.1 percent lower score in the memory test for women.

Higher cardiovascular risk was also associated with a faster rate of overall cognitive decline over the 10-year period in both men and women compared to those with lower cardiovascular risk.

"Our findings contribute to the mounting evidence for the role of cardiovascular risk factors, such as high cholesterol and blood pressure, contributing to cognitive problems, starting in middle age," said study author Sara Kaffashian, MSc, with INSERM, the French National Institute of Health & Medical Research in Paris. "The study further demonstrates how these heart disease risk factors can contribute to cognitive decline over a 10-year period."

To increase physical activity, focus on how, not why

Most people know that exercise is important to maintain and improve health; however, sedentary lifestyles and obesity rates are at all-time highs and have become major national issues. In a new study, University of Missouri researchers found that healthy adults who received interventions focused on behavior-changing strategies significantly increased their physical activity levels. Conversely, interventions based on cognitive approaches, which try to change knowledge and attitudes, did not improve physical activity.

"The focus needs to shift from increasing knowledge about the benefits of exercise to discussing strategies to change behaviors and increase activity levels," said Vicki Conn, associate dean for research and Potter-Brinton professor in the MU Sinclair School of Nursing. "The common approach is to try and change people's attitudes or beliefs about exercise and why it's important, but that information isn't motivating. We can't 'think' ourselves into being more active."

Behavioral strategies include feedback, goal setting, self-monitoring, exercise prescription and stimulus cues. Self-monitoring, any method where participants record and track their activity over time, appears to significantly increase awareness and provide motivation for improvement, Conn said.

"Health care providers should ask patients about their exercise habits and help them set specific, manageable goals," Conn said. "Ask them to try different strategies, such as tracking their progress, scheduling exercise on their phones or calendars, or placing their pedometers by their clothes. Discuss rewards for accomplishing goals."

The study, featured in the American Journal of Public Health, incorporated data from 358 reports and 99,011 participants. The researchers found that behavioral strategies were the most effective in increasing physical activity among healthy adults. Successful interventions were delivered face-to-face rather than through mediated channels (e.g., by telephone or mail) and targeted individuals rather than communities.

"The thought of exercise may be overwhelming, but slowly increasing activity by just 10 minutes a day adds up weekly and is enough to provide health benefits," Conn said. "Even small increases in physical activity will enhance protection against chronic illnesses, including heart disease and diabetes. Preventing or delaying chronic disease will reduce complications, health care costs and overall burden."

Previously, Conn completed a meta-analysis of interventions for chronically ill patients and found similar results: interventions were similarly effective regardless of gender, age, ethnicity and socioeconomic status.

Conn's research is funded by a grant of more than $1 million from the National Institutes of Health.


Journal Reference:

  1. Vicki S. Conn, Adam R. Hafdahl, and David R. Mehr. Interventions to Increase Physical Activity Among Healthy Adults: Meta-analysis of Outcomes. American Journal of Public Health, 2011; DOI: 10.2105/AJPH.2010.194381

Therapy for depression can be delivered effectively by non-specialists, study suggests

NewsPsychology (Feb. 15, 2011) — Depression can be treated effectively with psychotherapy by mental health nurses with minimal training, according to new preliminary research findings.

The study, led by Durham University’s Mental Health Research Centre, shows that patients with severe depression can be treated successfully with behavioural activation — a psychotherapy for depression — by non-specialist mental health staff, which could potentially lead to considerable cost savings for the British National Health Service (NHS).

Currently, psychotherapies, such as behavioural activation, are delivered by specialist clinicians and therapists. In the study, the mental health nurses received five days training in behavioural activation and one hour of clinical supervision every fortnight.

Although the findings are preliminary, the researchers say they could pave the way for increasing access to psychological therapies for people with depression and could help to alleviate the shortage of specialist therapists. Estimates suggest that fewer than 10 per cent of people with depression who need some form of psychological therapy get access to it.

The research, conducted by Durham University, University of Exeter, and the University of York, is published in the British Journal of Psychiatry.

In the study, researchers compared behavioural activation treatment delivered by mental health nurses with usual care delivered by GPs. Forty-seven patients participated in the trial. They found that the patients treated with behavioural activation by the nurses showed significantly more signs of recovery, were functioning better and were more satisfied with the treatment compared to the group who received what is classed as ‘usual care’ by their GP.

Behavioural activation is a practical treatment where the focus is on pinpointing which elements in someone’s life influence their moods. Changes over time in these person-environment relationships are explored and worked on to help the person engage in a more rewarding daily structure. This is done through self monitoring, scheduling and exploring difficult situations and the person’s responses to these.

Lead author of the study, David Ekers, is an Honorary Clinical Lecturer at Durham University and Nurse Consultant at Tees, Esk and Wear Valleys NHS Foundation Trust.

He said: “This is a small-scale study and certainly more research with bigger trials is needed but it shows some very promising early findings. The results indicate that with limited training, generic mental health workers can be trained to deliver clinically effective behavioural activation to people with long-standing depression.

“Behavioural activation therapy has already been shown to be equally effective as cognitive behavioural therapy but previous studies have always tested it with experienced psychotherapists. This is the first time it has been shown that behavioural activation can be an effective treatment when delivered by ‘inexperienced’ therapists.

“All of this is particularly relevant in the current economic climate whereby there may be increased risk of depression, and demands on the NHS in that area could become heavier.”

Depression is the third most common reason for people visiting their GP, according to the Office for National Statistics. Depression occurs in one in 10 adults in Britain at any one time, with one in 20 people at any one time suffering from major or ‘clinical’ depression.

Colin Walker, Policy and Campaigns Manager for mental health charity Mind, commented: “Mind has found evidence that one in five people with mental health problems are waiting over a year between asking for help and receiving access to talking therapies. Expanding the types of therapies on offer and how they are delivered might be an effective way of reducing the time that people wait to receive support but much more research is necessary to ensure that this approach is truly effective.

“It’s vital that mental health workers inexperienced in providing talking therapies are adequately trained to deliver such services and that this is not just adopted as a cost saving exercise to replace other types of treatments.”


Story Source:

The above story is reprinted (with editorial adaptations by newsPsychology staff) from materials provided by Durham University, via EurekAlert!, a service of AAAS.

Journal Reference:

  1. D. Ekers, D. Richards, D. McMillan, J. M. Bland, S. Gilbody. Behavioural activation delivered by the non-specialist: phase II randomised controlled trial. The British Journal of Psychiatry, 2011; 198 (1): 66 DOI: 10.1192/bjp.bp.110.079111

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of NewsPsychology or its staff.

Why problem drinking during adolescence is never a 'phase'

The Rutgers Alcohol Problem Index (RAPI) is widely used to assess adolescent drinking-related problems. The predictive power of RAPI scores, however, has not been examined on a longitudinal basis. A new study of RAPI has confirmed that not only is it an effective screening assessment, but that it may also — when administered in late adolescence — be predictive of alcohol diagnoses seven years later.

Results will be published in the May 2011 issue of Alcoholism: Clinical & Experimental Research and are currently available at Early View.

"RAPI is a self-report questionnaire on the frequency with which an adolescent has experienced 23 consequences of drinking alcohol, such as getting into a fight with a friend or family member, in the preceding 18 months," explained Richard J. Rose, Professor Emeritus in psychology and brain science at Indiana University, Bloomington. "This is the first study in which adolescent RAPI scores were used to predict later diagnoses of alcoholism. And it is the first study of pairs of twin brothers and sisters who differ in their RAPI scores to ask whether these co-twins later differ, as expected, in alcohol outcomes. They do."

"It might seem silly to even question the existence of a direct pathway from problem drinking to alcohol dependence in that alcohol dependence is clearly the culmination of an escalating pattern of heavy and problem drinking," noted Matt McGue, a professor in the department of psychology at the University of Minnesota. "The issue here though is whether drinking in adolescence carries particular weight in the development of alcohol dependence in adulthood. That is, adolescents, because of social factors or because their brains are still developing, may be especially susceptible to the effects of heavy drinking."

Rose and his colleagues assessed 597 Finnish twins (300 male, 297 female) at age 18 with RAPI, and later interviewed them at age 25 with the Semi-Structured Assessment of the Genetics of Alcoholism to assess alcohol abuse and dependence diagnoses.

"The key finding was that the more drinking-related problems experienced by an adolescent at age 18, the greater the likelihood that adolescent would be diagnosed with alcoholism seven years later, at age 25," said Rose. "That predictive association was stronger in females than males, and was confirmed in within-family comparisons of co-twins who differed in their age 18 RAPI scores. The analysis of co-twins ruled out factors such as parental drinking and household atmosphere as the source of the association, because twins jointly experience these."

"Certainly RAPI is predictive of later risk of alcohol dependence," said McGue. "This means that RAPI can be used to identify a group of late-adolescents who are at high risk for developing alcohol dependence."

However, he added, this may not reflect so much a direct causal effect of adolescent drinking as it does that individuals who transgress social norms in adolescence by drinking heavily may be those same individuals who transgress social norms in adulthood by drinking abusively.

"In this alternative conceptualization, the major risk factor is thought to be behavioral disinhibition," said McGue. "The innovation in this study is that the authors were able to confirm the association of adolescent drinking with alcohol dependence within twin pairs. Since twins tend to have similar levels of behavioral disinhibition, showing that the heavy drinking twin was more likely to be alcohol dependent in part controls for the confounding with behavioral disinhibition.

"Furthermore," he added, "we do not really know why some with high RAPI scores did not become alcohol dependent and conversely why some with low scores did. It will be important in future research to investigate whether factors such as behavioral disinhibition can help account for these discrepancies."

Rose said these findings have important implications for clinicians. "The first step in intervention is to identify those at elevated risk," he said. "Screening for drinking-related problems in adolescence may reliably identify many of those at elevated risk for development of alcoholism, and a self-report instrument such as RAPI offers an efficient approach for such screening. Our results suggest that RAPI is not only an efficient screening assessment; it is an effective one, now shown to be predictive of diagnosed alcohol outcomes."

"While this association may not seem surprising," said Rose, "the strength of the association, in females as well as in males, and in co-twins who differ in drinking but share their childhood environments and half or all of their segregating genes, was of surprise."

"I would say for sure that heavy drinking in adolescence is a real danger sign, regardless of whatever the causal mechanisms are," added McGue. "Heavy drinking in adolescence is an indication that preventive intervention is warranted."


Journal Reference:

  1. Danielle M. Dick, Fazil Aliev, Richard Viken, Jaakko Kaprio, Richard J. Rose. Rutgers Alcohol Problem Index Scores at Age 18 Predict Alcohol Dependence Diagnoses 7 Years Later. Alcoholism: Clinical and Experimental Research, 2011; DOI: 10.1111/j.1530-0277.2010.01432.x

New study finds no cognitive impairment among ecstasy users

The drug known as ecstasy has been used by 12 million people in the United States alone and millions more worldwide. Past research has suggested that ecstasy users perform worse than nonusers on some tests of mental ability. But there are concerns that the methods used to conduct that research were flawed, and the experiments overstated the cognitive differences between ecstasy users and nonusers.

In response to those concerns, a team of researchers has conducted one of the largest studies ever undertaken to re-examine the cognitive effects of ecstasy, funded by a $1.8 million grant from the National Institute on Drug Abuse (NIDA) and published in the journal Addiction. The study was specifically designed to minimize the methodological limitations of earlier research.

In contrast to many prior studies, ecstasy users in the new study showed no signs of cognitive impairment attributable to drug use: ecstasy use did not decrease mental ability.

Lead author John Halpern is quick to point out that this group of researchers is not the first to identify limitations in prior studies of ecstasy users. "Researchers have known for a long time that earlier studies of ecstasy use had problems that later studies should try to correct. When NIDA decided to fund this project, we saw an opportunity to design a better experiment and advance our knowledge of this drug."

The researchers fixed four problems in earlier research on ecstasy. First, the non-users recruited for the new experiment were also members of the "rave" subculture, and thus equally exposed to the sleep and fluid deprivation of all-night dancing, factors that can themselves produce long-lasting cognitive effects.

Second, participants were screened for drug and alcohol use on the day of cognitive testing, to make sure all participants were tested while 'clean'.

Third, the study chose ecstasy users who did not habitually use other drugs that might themselves contribute to cognitive impairment.

Finally, the experiment corrected for the possibility that any cognitive impairment shown by ecstasy users might have been in place before they started using the drug.

The resulting experiment whittled 1,500 potential participants down to 52 carefully chosen ecstasy users, whose cognitive function was compared against 59 closely-matched non-users, with tests administered at several stages to make sure participants were telling the truth about their drug and alcohol use.

So does this mean that ecstasy really is the risk-free, hangover-free, miracle drug that lets young ravers and gamers party all weekend without having to pay the price?

Says Halpern, "No. Ecstasy consumption is dangerous: illegally-made pills can contain harmful contaminants, there are no warning labels, there is no medical supervision, and in rare cases people are physically harmed and even die from overdosing. It is important for drug-abuse information to be accurate, and we hope our report will help upgrade public health messages. But while we found no ominous, concerning risks to cognitive performance, that is quite different from concluding that ecstasy use is 'risk-free'."


Journal Reference:

  1. John H. Halpern, Andrea R. Sherwood, James I. Hudson, Staci Gruber, David Kozin, Harrison G. Pope. Residual Neurocognitive Features of Long-Term Ecstasy Users With Minimal Exposure to Other Drugs. Addiction, 2010; DOI: 10.1111/j.1360-0443.2010.03252.x

Earliest humans not so different from us, research suggests

That human evolution follows a progressive trajectory is one of the most deeply-entrenched assumptions about our species. This assumption is often expressed in popular media by showing cavemen speaking in grunts and monosyllables (the Geico Cavemen being a notable exception). But is this assumption correct? Were the earliest humans significantly different from us?

In a paper published in the latest issue of Current Anthropology, archaeologist John Shea (Stony Brook University) shows they were not.

The problem, Shea argues, is that archaeologists have been focusing on the wrong measurement of early human behavior. Archaeologists have been searching for evidence of "behavioral modernity," a quality supposedly unique to Homo sapiens, when they ought to have been investigating "behavioral variability," a quantitative dimension to the behavior of all living things.

Human origins research began in Europe, and the European Upper Paleolithic archaeological record has long been the standard against which the behavior of earlier and non-European humans is compared. During the Upper Paleolithic (45,000-12,000 years ago), Homo sapiens fossils first appear in Europe together with complex stone tool technology, carved bone tools, complex projectile weapons, advanced techniques for using fire, cave art, beads and other personal adornments. Similar behaviors are either universal or very nearly so among recent humans, and thus, archaeologists cite evidence for these behaviors as proof of human behavioral modernity.

Yet, the oldest Homo sapiens fossils occur between 100,000-200,000 years ago in Africa and southern Asia and in contexts lacking clear and consistent evidence for such behavioral modernity. For decades anthropologists contrasted these earlier "archaic" African and Asian humans with their "behaviorally-modern" Upper Paleolithic counterparts, explaining the differences between them in terms of a single "Human Revolution" that fundamentally changed human biology and behavior. Archaeologists disagree about the causes, timing, pace, and characteristics of this revolution, but there is a consensus that the behavior of the earliest Homo sapiens was significantly different from that of more-recent "modern" humans.

Shea tested the hypothesis that there were differences in behavioral variability between earlier and later Homo sapiens using stone tool evidence dating to between 250,000 and 6,000 years ago in eastern Africa. This region features the longest continuous archaeological record of Homo sapiens behavior. A systematic comparison of variability in stone tool making strategies over the last quarter-million years shows no single behavioral revolution in our species' evolutionary history. Instead, the evidence shows wide variability in Homo sapiens toolmaking strategies from the earliest times onwards. Particular changes in stone tool technology can be explained in terms of the varying costs and benefits of different toolmaking strategies, such as greater needs for cutting edge or more efficiently-transportable and functionally-versatile tools. One does not need to invoke a "human revolution" to account for these changes; they are explicable in terms of well-understood principles of behavioral ecology.

This study has important implications for archaeological research on human origins. Shea argues that comparing the behavior of our most ancient ancestors to Upper Paleolithic Europeans holistically and ranking them in terms of their "behavioral modernity" is a waste of time. There are no such things as modern humans, Shea argues, just Homo sapiens populations with a wide range of behavioral variability. Whether this range is significantly different from that of earlier and other hominin species remains to be discovered. However, the best way to advance our understanding of human behavior is by researching the sources of behavioral variability in particular adaptive strategies.


Journal Reference:

  1. John J. Shea. Homo sapiens is as Homo sapiens was: Behavioral variability vs. 'behavioral modernity' in Paleolithic archaeology. Current Anthropology, 2011; 52 (1): 1 DOI: 10.1086/658067

Kinship caregivers receive less support than foster parents despite lower socioeconomic status

Children placed with a relative after being removed from their home for maltreatment have fewer behavioral and social skills problems than children in foster care, but may have a higher risk for substance use and pregnancy as teenagers, according to a report in the February issue of Archives of Pediatrics & Adolescent Medicine, one of the JAMA/Archives journals. These relatives — known as kinship caregivers — appear more likely to be single, unemployed, older, and live in poorer households, yet receive fewer support services than do foster caregivers.

Most children who are removed from the care of their parents live with non-related foster parents, according to background information in the article. However, the number of children placed in kinship care is growing, and more than 125,000 children currently live in a relative's care. The increase is due to a decline in the number of foster homes at the same time that demand for out-of-home placements has increased. "Despite the move toward kinship care, the evidence for improved outcomes of children in kinship care vs. foster care has been conflicting," the authors write.

Christina Sakai, M.D., and colleagues at University of Texas Southwestern Medical Center and Children's Medical Center, Dallas, studied 1,308 children entering out-of-home care after reported maltreatment. Of these, 572 were placed in kinship care and 736 in foster care. At the beginning of the study and after three years, researchers conducted face-to-face interviews and assessments of children's behavioral and mental health and health service use, along with caregivers' receipt of services such as financial support, parent education and training, peer support groups and respite care.

Kinship caregivers were more likely than foster parents to have a low socioeconomic status — they were four times more likely not to have graduated high school and three times more likely to have an annual household income of less than $20,000. However, they were less than half as likely as foster parents to receive any form of financial support, about four times less likely to receive any form of parent training and seven times less likely to have peer support groups or respite care.

At the three-year follow-up, children in kinship care were more likely to be with a permanent caregiver than were children in foster care (71 percent vs. 56.4 percent). They also had 0.6 times the risk of behavioral and social skills problems and half the risk of using outpatient mental health services or taking psychotropic medications. However, adolescents in kinship care had seven times the risk of pregnancy (12.6 percent vs. 1.9 percent) and twice the risk of substance abuse (34.6 percent vs. 16.9 percent).
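
As a quick sanity check on the multiples quoted above, relative risk is simply the ratio of the two groups' outcome rates. (The published figures are likely model-adjusted estimates, so the raw ratios only approximately match the reported multiples.)

```python
# Relative risk from the raw percentages quoted above for adolescents
# in kinship care vs. foster care. Raw ratios, not adjusted estimates.

def relative_risk(rate_exposed, rate_control):
    """Ratio of outcome rates between two groups."""
    return rate_exposed / rate_control

pregnancy_rr = relative_risk(12.6, 1.9)    # reported as "seven times the risk"
substance_rr = relative_risk(34.6, 16.9)   # reported as "twice the risk"
print(round(pregnancy_rr, 1), round(substance_rr, 1))  # -> 6.6 2.0
```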

"Our findings indicate that kinship caregivers need greater support services," the authors write. "The findings also indicate that kinship care may be associated with a reduced risk of ongoing behavioral and social skills problems and decreased use of mental health therapy and psychotropic medications. Conversely, adolescents in kinship care have higher odds of reported substance use and pregnancy. These findings suggest that increased supervision and monitoring of the kinship environment and increased caregiver support services are urgently needed to improve outcomes of children in kinship care."


Journal Reference:

  1. C. Sakai, H. Lin, G. Flores. Health Outcomes and Family Services in Kinship Care: Analysis of a National Sample of Children in the Child Welfare System. Archives of Pediatrics and Adolescent Medicine, 2011; 165 (2): 159 DOI: 10.1001/archpediatrics.2010.277

Arranged marriages and distrust: Influence of parental choice on mate guarding

Mate guarding is classified as excessive or unwarranted jealous or protective behavior towards a spouse or mate. This is common among many different species and can be useful to defend territory, guarantee paternity, or prevent disease. The authors of a new study published in Personal Relationships have discovered that this behavior is more common in societies which practice arranged marriages or in cultures that place a high value on parental influence in the choice of mate for their children.

Furthermore, the authors comment on the fact that mate guarding is not an exclusively male phenomenon, and women can be just as forceful in protecting their monogamous relationships.

In many cultures, rules, behavioral practices, and physical measures, including veiling and walled courtyards, have been applied to prevent contact between women and potential sexual partners. The current findings indicate that the occurrence of mate guarding is more prevalent in Muslim, Indian, Chinese, Turkish, Moroccan, and South Asian societies.

Lead author A.P. Buunk said: "In Western cultures, most husbands do not actively try to prevent contacts between their wife and other men and may even accept a moderate degree of flirting. In contrast, in many Islamic cultures husbands actively prevent even superficial contact between a female member and another man. If a male cannot guarantee the paternity of their offspring, they could very well be investing precious resources in another man's offspring. It therefore becomes most important to ensure the fidelity of the female mate."

There is considerable evolutionary evidence that in most societies and historical periods, marriage has been at least partly arranged and has been based on a series of familial considerations rather than on the desires of the individuals concerned. In their article the authors emphasize that the degree in which parents control the mate choice of their children is an important factor in the occurrence of mate guarding. Additionally, freedom of mate choice or the ability to form a love-based union seems to make mate guarding less necessary. The findings clearly indicate that in cultures and social contexts in which freedom of mate choice is valued highly, the level of mate guarding is relatively low.

There are different reasons why men and women may choose to engage in mate guarding. In Venezuela, a man may pursue an arranged marriage to form important social or business alliances with other men. In this case a man may feel that he needs to guard his "property" zealously. A woman in an arranged marriage may fear desertion, and with it the stigma of divorce, as a result of her husband's infidelity and would therefore be more likely to engage in mate guarding behavior. Buunk said: "If a marriage is not based on choice or love, a person is more likely to become jealous over seemingly inconsequential events. This is probably because it is harder to be sure that the other person is in love with you out of their own volition."


Journal Reference:

  1. Abraham P. Buunk, Alejandro Castro Solano. Mate guarding and parental influence on mate choice. Personal Relationships, 2011; DOI: 10.1111/j.1475-6811.2010.01342.x

Presence of peers heightens teens' sensitivity to rewards of a risk

It is well known that teenagers take risks — and that when they do, they like to have company. Teens are five times more likely to be in a car accident when in a group than when driving alone, and they are more likely to commit a crime in a group.

Now, a new study sheds light on why.

Temple University psychologists Jason Chein and Laurence Steinberg set out to measure brain activity in adolescents, alone and with peers, as they made decisions with inherent risks. Their findings, published this month in Developmental Science, demonstrate that when teens are with friends they are more susceptible to the potential rewards of a risk than they are when they are alone.

"We know that in the real world teenagers take more risks when with their friends. This is the first study to identify the underlying process," said Steinberg, a developmental psychologist and a leading international expert on teen behavior, decision making and impulse control.

"Preventable, risky behaviors — such as binge drinking, cigarette smoking and careless driving — present the greatest threat to the well-being of young people in industrialized societies. Our findings may be helpful in developing ways to intervene and reduce adolescent risk taking," said Chein, a cognitive neuroscientist and the lead author of the study.

Using functional magnetic resonance imaging (fMRI), Chein and Steinberg looked at brain activity in adolescents, young adults and adults as they made decisions in a simulated driving game. The goal of the game was to reach the end of a track as quickly as possible in order to maximize a monetary reward. Participants were forced to make a decision about whether to stop at a yellow light when they came to a given intersection or run through the intersection and risk colliding with another vehicle.

Taking the risk to run through the yellow light offered the potential payoff of moving through the intersection more quickly, but also the consequence of a crash, which added a significant delay.

Each participant played the game alone and while being observed by their friends. While adolescents and older participants behaved comparably while playing the game alone, it was only the adolescents who took a greater number of risks when they knew their friends were watching.

More significantly, according to Chein, the regions of the brain associated with reward showed greater activation when the adolescents knew they were being observed by peers. "These results suggest that the presence of peers does not impact the evaluation of the risk but rather heightens sensitivity in the brain to the potential upside of a risky decision," he said.

"If the presence of friends had been simply a distraction to the participant, then we would have seen an impact on the brain's executive function. But that is not what we have found," said Chein.

The researchers posit that the presence of friends heightens sensitivity to reward in teens because being with friends is so important at that stage of life.

"We know that when one is rewarded by one thing then other rewards become more salient. Because adolescents find socializing so rewarding, we postulate that being with friends primes the reward system and makes teens pay more attention to the potential pay offs of a risky decision," said Steinberg.


Journal Reference:

  1. Jason Chein, Dustin Albert, Lia O’Brien, Kaitlyn Uckert, Laurence Steinberg. Peers increase adolescent risk taking by enhancing activity in the brain’s reward circuitry. Developmental Science, 2010; DOI: 10.1111/j.1467-7687.2010.01035.x