Beautiful Minds

Insights into intelligence, creativity, and the mind
New Cognitive Training Study Takes on the Critics

The views expressed are those of the author and are not necessarily those of Scientific American.


Brain training: yay or nay?

It’s not so simple.

Traversing the swamp of studies on cognitive training is bound to give even the boldest explorer a migraine.

But don’t despair, I’m here to help guide you along!

As we all know, people differ quite a bit from one another in how much information they can maintain, manipulate, and transform in their heads at one time. Crucially, these differences relate to important outcomes, such as abstract reasoning, academic performance, reading comprehension, and the acquisition of new skills.

The most consistent and least controversial finding in the literature is that working memory training programs produce reliable short-term improvements in both verbal and visuospatial working memory skills. On average, the effect sizes range from moderate to large, although the long-term sustainability of these effects is much more ambiguous. These effects are called near transfer effects, because they don’t transfer very far beyond the trained domain of cognitive functioning.

Far more controversial (and far more interesting) are far transfer effects. One class of far transfer effects that cognitive psychologists are particularly interested in is that showing increases in fluid intelligence: the deliberate but flexible control of attention to solve novel, “on the spot” problems that cannot be performed by relying exclusively on previously learned habits, schemas, and scripts.

Here is where we enter the swamp.

Some studies have reported absolutely no effect of working memory training on fluid intelligence, whereas others have found an effect. The results are mixed and inconclusive. Various critics have rightfully listed a number of methodological flaws and alternative explanations that could explain the far transfer effects.

Enter Susanne Jaeggi and her colleagues, who, in a brand new study, address the critics head on (and then some). Through careful study design, they attempted to resolve the primary methodological concerns of previous research. First, they randomly assigned adults to either (a) engage in working memory training or (b) answer trivia questions presented in a multiple-choice format. The latter condition served as their active control group. The lack of such a group has been a major criticism in the past: without an active control group, it’s possible that far transfer effects are due to placebo effects or even a Hawthorne effect.

Their working memory intervention consisted of 4 weeks of training on an “adaptive n-back task.” This task requires fast updating of information in working memory, and the program adapts to the performance of the participant. On each trial, participants have to remember the location of information presented before (1-back), the time before last (2-back), the time before that (3-back), etc. They administered two versions: an auditory version involving spoken letters, and an auditory + visuospatial version, in which both spoken letters and spatial locations had to be processed simultaneously.
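The core logic of such a task is easy to sketch: check each stimulus against the one presented n trials earlier, and step the level n up or down based on block accuracy. The snippet below is a minimal illustration of that adaptive staircase, not the actual Jaeggi training software; the match probability and adjustment thresholds are illustrative assumptions.

```python
import random

def run_block(n, num_trials, respond):
    """Run one block of an n-back task at level n.

    `respond` is a callback that receives the stimulus history so far and
    returns True if the participant signals "match". Returns block accuracy.
    """
    letters = "bcdfghjklmnpqrstvwxz"
    stream, correct = [], 0
    for _ in range(num_trials):
        # On some trials, deliberately repeat the item from n back (illustrative 30% rate).
        if len(stream) >= n and random.random() < 0.3:
            stimulus = stream[-n]
        else:
            stimulus = random.choice(letters)
        stream.append(stimulus)
        # A trial is a match if the current item equals the one n positions back.
        is_match = len(stream) > n and stream[-1] == stream[-n - 1]
        if respond(stream) == is_match:
            correct += 1
    return correct / num_trials

def adapt_level(n, accuracy):
    """Staircase rule: raise n after a strong block, lower it after a weak one
    (the 90%/70% thresholds are illustrative, not the published parameters)."""
    if accuracy >= 0.9:
        return n + 1
    if accuracy < 0.7:
        return max(1, n - 1)
    return n
```

A flawless responder would score 1.0 on a block and be bumped from 2-back to 3-back on the next block, which is the sense in which the task “adapts to the performance of the participant.”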

Crucially, the researchers also administered multiple measures of cognitive ability. These included measures of visuospatial reasoning (the ability to consciously detect complex visual patterns and rotate images in the mind) and verbal reasoning (the ability to comprehend sentences, make verbal inferences, and solve verbal analogies). This matters because measuring a construct such as visuospatial or verbal reasoning requires administering multiple indicators of that ability.

The researchers also considered the role of individual differences in the effectiveness of working memory training. People differ substantially from each other in their motivation to engage in cognitive training, as well as in their need for cognition (enjoyment of engaging with cognitive complexity). People also differ in the extent to which they have a growth mindset (i.e., believe that intelligence can be modified by experience). The researchers measured all of these personal characteristics.

Finally, the researchers assessed the long-term effectiveness of their training by including a follow-up measurement three months after completion of training.

What did they find?

The Results

Even after addressing the major criticisms of their past work, Jaeggi and colleagues still found far transfer. In particular, they found far transfer to visuospatial reasoning when people engaged in working memory training. In contrast, no effects were found when people were trained on trivia knowledge (the active control group).  These effects were found despite using a working memory task that did not involve visuospatial stimuli, suggesting that the working memory training effect on visuospatial reasoning was independent of content.

They propose that a crucial mental mechanism that might have accounted for their effects is the discrimination between relevant and irrelevant stimuli. Their n-back working memory task requires ignoring distracting stimuli and quickly and efficiently focusing on the most relevant stimuli to accomplish the task. They argue that their measures of visuospatial reasoning also required that skillset. This is important, because other recent research that reported a failure to find far transfer effects administered “complex working memory span tasks” that do not place the same demands on attention. Jaeggi and colleagues suggest that part of the reason for the inconsistency across studies might have to do with the particular working memory task that is used during training.

While the researchers found only small far transfer effects on verbal reasoning, they note that the reliabilities of their verbal tasks were significantly lower than those of their visuospatial reasoning tasks. They also acknowledge other possible explanations for the visuospatial/verbal discrepancy, such as the possibility that people have less practice with spatial than with verbal tasks and therefore more room for improvement. Of course, it’s also possible that working memory training simply transfers more readily to visuospatial reasoning than to verbal reasoning (other labs have also found that to be the case).

In terms of the long-term effectiveness of their training, they found no significant effect at a three-month follow-up. The researchers offer sensible caution here in interpreting this finding:

“If we consider WM training as being analogical to cardiovascular training, occasional practice or booster sessions may be needed in order to maximize retention (e.g., Ball, Berch, Helmers, Jobe, Leveck, Marsiske, & Willis, 2002; Bell, Harless, Higa, Bjork, Bjork, Bazargan, & Mangione, 2008; Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Haskell, Lee, Pate, Powell, Blair, Franklin, & Bauman 2007; Whisman, 1990). Unfortunately, at the current stage of knowledge, we do not have a good sense of how often such booster sessions would have to take place. Ultimately, the definition of factors that promote the longevity of training and the investigation of how cognitive training affects real-life outcomes would be essential from an applied point of view; yet, overall, we remain skeptical about the interpretation of our follow-up data, because of the small and uneven sample sizes.”

What about individual differences? Here, things get even more interesting.

First, they found that people who have a growth mindset about intelligence (believe that intelligence is malleable) showed greater improvement on the visuospatial reasoning tests than those who have a fixed mindset about intelligence (believe intelligence can’t change). This effect was only found, however, in the active control group. In other words, those who believed intelligence can change showed a greater placebo effect than those who think intelligence is fixed. This finding highlights the importance of using an active control group of people who have a wide range of beliefs about intelligence. Without such a group, some of the far transfer findings can be accounted for by people’s beliefs about the malleability of intelligence. Nevertheless, this finding doesn’t negate the far transfer effects found in the working memory training condition, which held even after taking into account a person’s implicit theories about intelligence.

Second, the researchers found that intrinsic motivation mattered. Those who completed the study reported relatively stable engagement levels throughout the four weeks of training. In contrast, those who did not complete the study reported gradually declining engagement levels over the course of the intervention. As the researchers note, this raises the intriguing question: who actually signs up for these darn cognitive training studies, and who sticks with it for the entire four-week period?

Their data provide some hints. On the one hand, those who signed up for the study reported more cognitive deficits in their lives than those who completed the pretest but then dropped out of training. On the other hand, those with the highest pretest scores and the highest need-for-cognition scores ended up being the ones who actually completed the training!

Therefore, it seems as though the kind of person who is most likely to want to engage with cognitive training and stick with it for the entire regimen is someone with a combination of (a) already high working memory, (b) high need for cognition, and (c) self-perceived cognitive deficits. The troubling implication here is that those who are most likely to complete these cognitive training studies are not the ones who need it the most.

Which brings us to perhaps the biggest challenge for cognitive training researchers: to get the cognitive training in the hands of those who need it the most, and keep them engaged throughout the entire intervention. Because let’s face it, these working memory tasks are boring. And for those with low working memory and little desire to engage in heavy cognitive manipulation of a random stream of letters and symbols, these interventions can be downright frustrating.

To be fair to Jaeggi, in her prior research she has tried to make the working memory tasks more fun for children by turning them into video games. But there’s still a long way to go on the whole cognitive-training-is-super-fun front. Also, if you really want to increase cognitive ability, I’m not convinced that working memory training is the best way forward. It seems to me that working memory training is best suited to improving working memory. But to truly increase fluid intelligence over the long haul, I’m a bigger fan of addressing those skills directly, through long-term engagement in logical and critical thinking. Indeed, recent research by Silvia Bunge and colleagues has found that engagement in reasoning training not only improves subsequent reasoning performance, but also strengthens connectivity between key areas of the brain (in the frontal and parietal lobes) associated with complex cognition.

By this point, I hope you can see some of the complexities involved in this kind of research, and why it’s not simply a matter of “brain games are bogus.” In my view, the field really needs to evolve beyond the search for broad conclusions to look at more nuanced effects, including a consideration of different intervention programs, and multiple environmental and personal factors.

While these findings by Jaeggi and colleagues probably raise more questions than they answer, that is how science works. They are to be commended for systematically attempting to address the critics and to get the science right. Thankfully, researchers like them exist, because this research is immensely important. We owe it to those who really could benefit from cognitive training, such as children with specific learning disabilities and children growing up in stressful, intellectually impoverished conditions, to get the science right, and to get them the help they truly need to flourish.

© 2013 Scott Barry Kaufman, All Rights Reserved

image credit preview: istockphoto; image credit #1: brain leaders and learners

Scott Barry Kaufman About the Author: Scott Barry Kaufman is Scientific Director of The Imagination Institute in the Positive Psychology Center at the University of Pennsylvania. Follow on Twitter @sbkaufman.


Rights & Permissions

Comments (5)

  1. rosabw 1:52 pm 10/9/2013

    Lookie heah…I know I sound like an idiot, but that’s nothing new. The GREATEST game my child ever played, the game that taught him to read…was…Learning to Read with Pooh, by Disney. It blasted him multisensorily: music, color, phonics, sight-words, pictures, rhythm, visual, tactile, spatial, riddles, …about 15 different areas of interest and the child made the choice. Enrichment is always multifactorial.

    What you need to help children is already there. It’s just hard work. Really, really hard work. I don’t know what type of educational backing they had, but it was magic!

    Think of the regular classroom, where about 2 senses are fed, and it’s mostly rote.

    I need to get a life, sorry….

  2. kane_WMC_lab 2:19 pm 10/9/2013

    Thanks for the thoughtful article, Scott. In general, my view is that only their results from the single-n-back training are modestly compelling. Please allow me to submit the following criticisms of the new study that you discuss:

    1) Their dual-N-back training showed far transfer to visuospatial reasoning only via a one-tailed F test, and it would not have been statistically significant by a two-tailed test. (Indeed, their transfer effect to visuospatial reasoning at the composite level seems to be driven almost entirely by *one* of their spatial measures, the Form Board, and it’s also the one test on which this group began with a mean that was *far* below the other two groups. Seems to me like regression to the mean.)

    2) Their dual-N-back training actually *failed* to replicate their prior findings of improvement to Ravens and BOMAT tests of fluid reasoning; in the present study, those tasks showed no improvement with training versus the control group.

    3) The authors motivate the study by suggesting strongly that prior failures to show far transfer from dual-n-back training were due to experimenters paying the subjects and thus reducing (or missing the subjects with) intrinsic motivation; they further argue that many of these subject-paying studies showed smaller training benefits than do their more intrinsically motivated participants. Here, however, these unpaid, ostensibly intrinsically motivated subjects showed no more training benefit than these other studies.

    4) As you note, the only effect of belief here was that subjects in the *control group*, and not the training groups, who held more incremental beliefs about intelligence showed more transfer than did those who held less incremental beliefs. So, motivation differences don’t seem to explain the field’s mixed results (which, at this point, seem to be leaning toward failures rather than successes to show far transfer).

    5) The motivation/belief effects are also potentially problematic for the following reasons:

    a) The incremental-belief result in controls came from a median-split analysis, rather than from treating belief scores continuously.

    b) Why should incremental beliefs affect transfer to visuospatial reasoning but not verbal reasoning?

    c) Why should incremental beliefs but not Need For Cognition predict transfer?

    6) As is widely acknowledged, it is quite challenging to develop an active control group that effectively matches training and control subjects on beliefs/expectations/placebos. My own subjective view is that a knowledge trivia “training” game is unlikely to cause subjects to believe that they are improving the cognitive abilities needed to perform the transfer tasks. My subjective sense could well be wrong, though, and so I would strongly advocate that all further research in working memory training follow the Boot-Simons-Stothart-Stutts (2013) recommendation to actually assess subjects’ beliefs about the potential benefits of their particular training regimen for the particular transfer tasks. Otherwise, one is left wondering whether differential placebo effects account for the results here, comparing single-N-back to active controls.

    7) I find it interesting that the authors make the analogy to physical training to explain away the fact that they did not find the transfer effects to be long-lasting; on the surface, this seems quite plausible. At the same time, how many researchers would believe that one very specific form of physical training (say, high hurdling) done for 20 minutes a day for *20 days* would have substantial general transfer to broad athletic abilities? Am I the only one for whom this analogy points out a serious plausibility problem for very-short-term working memory training?

    Of course, no one study will determine whether (or what kind of) working memory training might improve general cognitive abilities. Ultimately, the field will need to rely on meta-analyses of studies like this one, completed by these authors and by others. For now, I remain skeptical of broad transfer after only a few hours of training. (Months or years of training? Now THAT would be interesting!)

    Thanks for providing me with the opportunity to comment.


    Michael J. Kane

  3. Happy Phil 3:07 pm 10/9/2013

    I agree with your observation that the best ‘scores’ were among those who found cognitive tasks to be challenging and fun.

    At the very beginning of this report, you pointed out differences in people’s information processing capacity, and how that affects their cognitive abilities. I am not sure that cognitive training will increase processing capacity, but sharpening one’s skills would certainly be helpful.

    Thank you for your ‘thoughtful’ article.

  4. gd1968 5:30 pm 10/9/2013

    Great post!

    A couple of thoughts:
    1) many of the follow-up studies for cognitive training have had only cross-sectional tests, which might show, for example, that there were no generalized cognitive gains, only specific focal improvements in tasks similar to the cognitive games that had been practiced (see Adrian Owen’s 2010 study based on the huge BBC online cohort)

    I agree wholeheartedly that this type of finding is to be expected. But what I suspect is true is that people who have practiced a particular cognitive skill may not instantly be able to apply that skill to a new activity (that is, they cannot instantly generalize their skill), but I suspect that the RATE at which they could improve in a new skill would possibly be significantly improved.
    So for example, if someone has spent a hundred hours mastering a memory game, they may not do well in a new type of memory game (cross-sectionally), but all those hours of training could well teach them the “meta-skill” of knowing how to master the NEW memory game more quickly.
    The only way to measure this possibility would be to see how quickly subjects could learn or master a NEW cognitive task (over tens or hundreds of hours) after they had previously practiced and mastered a first task. If one only does a single cross-sectional evaluation, the cognitive training effects from the first task would be greatly underestimated.

    Analogously, suppose a child spends a hundred hours in baseball training one summer, and becomes very proficient at baseball skills. If that child is evaluated for basketball skills at the end of the summer, there might not be any evidence that baseball camp improved the child’s “general” athletic ability, and the child would probably be no better than any other random child at shooting hoops. But if that same child were followed for another hundred hours of basketball training, I’ll bet he or she would improve more rapidly than a child who had no previous sports camp experience: the previous training may not DIRECTLY generalize, but it may provide the child with an understanding, confidence, skill, and experience with the training process, thereby making future training efforts more efficient.

    2) my major complaint about cognitive training games is that they often do not have a great deal of relevance to actual useful intellectual tasks. It would be SO much more useful to have rapid reading comprehension and memory games, logical reasoning games, algebraic games (e.g. the wonderful app called DragonBox), creative writing games, even social empathy games etc. rather than trying to remember some shape that appeared 10 seconds ago! If the games offered on training sites involved more directly relevant skills, I think the generalizability of improvements would be much larger!

  5. Dr Sarah McKay 12:44 am 10/10/2013

    Great article. I get asked this question ALL.THE.TIME … I think the key is this: “… it seems as though the kind of person who is most likely to want to engage with cognitive training and stick with it for the entire regime is someone with a combination of (a) already high working memory, (b) high need for cognition, and (c) self-perceived cognitive deficits…”

