Do ‘brain-training’ exercises have any effect? Psychologists suggest they’re a waste of time
A number of experiments have suggested that astonishing cognitive improvements could be induced by simple training-game interventions
Spend enough time playing “brain-training” games, and you’ll get pretty good at games. But you won’t necessarily get better at anything else.
That’s the conclusion of an extensive review published in the journal Psychological Science in the Public Interest this week. A team of psychologists scoured the scientific literature for studies held up by brain-training proponents as evidence that the technique works – and found the research wanting.
Training tools enhanced performance on the tasks that they tested, which makes sense: spend enough time matching coloured cards or memorising strings of letters, and you’ll start to get really good at matching colours and memorising letters. But there is “little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance,” the authors write.
They also argue that the studies used to promote brain-training tools had major problems with their design or analysis that make it impossible to draw any general conclusions from them.
“It’s disappointing that the evidence isn’t stronger,” Daniel Simons, an author of the article and a psychology professor at the University of Illinois at Urbana-Champaign, told NPR. “It would be really nice if you could play some games and have it radically change your cognitive abilities. But the studies don’t show that on objectively measured real-world outcomes.”
Brain-training programmes have been controversial for years. Starting in the mid-2000s, a number of experiments suggested that astonishing cognitive improvements could be induced by simple training-game interventions.
One of the most high-profile studies, published in the Proceedings of the National Academy of Sciences in 2008, found that about four weeks of brain training dramatically improved young adults’ ability to solve problems they had never encountered before. The big claim was that the technique could produce “far transfer” of cognitive skills – in other words, that playing games would boost performance on tasks quite unlike the games themselves.
But other researchers have had trouble reproducing this work. In 2014, a coalition of 70 scientists published an open letter on the website of the Stanford Center on Longevity questioning whether there was any scientific evidence that training games actually improve general cognitive function.
More than 100 brain-training proponents responded with an open letter of their own on the website Cognitive Training Data. They argued that, though more research is needed, there is evidence that cognitive-training regimens work, and they listed 132 studies to back up their claims. Some of those studies were the same ones that sceptics of brain training cited to cast doubt on the technique.
“How could two teams of scientists examine the same literature and come to conflicting ‘consensus’ views about the effectiveness of brain training?” Simons and his colleagues write.
Hoping to bring some clarity to the debate, they analysed all 132 studies and tried to apply some objective standards. To demonstrate convincingly that brain training produces far transfer, a study should have a proper control group – one assigned a comparable task – to show that it was really the specific training technique, and not mere practice or attention, that led to the improvements.
It should also test a large number of participants, to rule out results that are simply a statistical fluke, and account for expectations and biases – people who play training games are likely to expect to become smarter, much as people who take a placebo pill expect to feel better.
According to Simons and his colleagues, nearly every one of the brain-training studies they looked at failed to meet these standards. The studies that subscribed to psychology’s best practices suggested that brain training made participants better at the specific task being tested but did not lead to any generalised improvements.