Methodological problems in video game research on cognition and perception

A paper published in Frontiers in Psychology by psychologists who study the cognitive benefits of playing video games questions this line of research over methodological shortcomings.

Wai Yen Tang, Blogger

September 12, 2011

Cross-posted at VG Researcher.

Walter R. Boot (Florida State University), Daniel P. Blakely (Florida State University), and colleague Daniel J. Simons (University of Illinois at Urbana-Champaign) have published an article in Frontiers in Psychology discussing methodological problems in research on the cognitive and perceptual effects of video game play. The article is available for download from the journal's website (link).

Boot and colleagues outline the methodological problems found throughout this line of research, and surprisingly, they are well-known experimental considerations. Their discussion focuses on recent studies linking video game play to improved cognition in college-aged participants. Nevertheless, the authors state that their criticisms apply equally to studies aimed at improving the memory and attention of seniors and young children.

Many studies in the gaming and cognition literature test for differences in perceptual and cognitive abilities by comparing the performance of expert and novice gamers on different laboratory tasks, predicting that years of game experience should result in sharper vision and better hand-eye coordination. These studies are cross-sectional or correlational in nature. A limitation of this type of approach is that the direction of the effect is unclear. Instead of game play causing better abilities, individuals who have the abilities required to be successful gamers may be drawn to gaming (in this explanation, superior abilities cause gaming). Some third variable might also be responsible for both gaming behavior and better performance on tests of perception and cognition. "Cross-sectional data by itself cannot prove game benefits", according to Dr. Boot.
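
To make the third-variable worry concrete, here is a minimal simulation sketch (Python, with invented numbers, not from the paper) in which a single underlying trait drives both gaming and test performance, so "gamers" outscore "non-gamers" even though gaming causes nothing:

```python
# Hypothetical third-variable model: one latent trait drives both
# gaming hours and test scores; gaming itself has zero causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

trait = rng.normal(0, 1, n)                          # e.g., baseline attentional ability
gaming_hours = 5 + 3 * trait + rng.normal(0, 2, n)   # the trait draws people to games
test_score = 100 + 10 * trait + rng.normal(0, 5, n)  # the trait also drives test scores

gamers = gaming_hours > np.median(gaming_hours)
print("gamer mean score:    ", round(test_score[gamers].mean(), 1))
print("non-gamer mean score:", round(test_score[~gamers].mean(), 1))
# Gamers score higher, yet gaming never influences test_score in this model,
# which is exactly why cross-sectional comparisons cannot prove game benefits.
```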

The authors identified one experimental consideration that may be especially important in explaining the superior performance of gamers when game effects might not exist at all: demand characteristics. Demand characteristics refer to confounds that cause participants in a study to behave in a way that fits the researchers' expectations. The authors suggest that the way in which gamers and non-gamers are recruited for lab experiments may change their behavior once they get to the lab. Recruitment advertisements and flyers seeking "video game experts" may bias gamers to be more motivated to perform well on tests of perception and cognition; they know they are being recruited because they are good at something. As a result, expert gamers try to do their best, while novices might do otherwise. This concern grows as media coverage of the beneficial effects of video games increases. "Expert gamers may come into the lab knowing exactly how they are expected to perform if they know they've been selected because of their video game experience", according to Dan Blakely, coauthor of the article.

The authors recommend that participants in video game studies be selected covertly. That is, they should have no reason to suspect that their game experience is why they were selected for the study. "Pre-screening", in which participants are selected based on answers to a large survey, with video game experience covered by only a few questions in a much more extensive questionnaire, helps make the connection between video game experience and the research questions less obvious. The authors suggest not mentioning video games at all, or leading participants to believe they are in an experiment studying some other topic. In any case, a post-experiment debriefing is also recommended to determine whether knowing the brain-training benefits of video games might have affected participants' performance, or whether participants figured out what the experiment was actually about.
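
As a rough illustration of how covert pre-screening might work in practice (the survey items, cutoffs, and column names below are hypothetical, not taken from the paper), the game-experience questions can be buried inside a broad screening survey and participants selected without video games ever being mentioned:

```python
# Covert pre-screening sketch: all column names and cutoffs are hypothetical.
import pandas as pd

# Imagine ~100 lifestyle items; only one asks about action-game play.
survey = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "hours_sleep": [7, 6, 8, 7, 5, 6],
    "action_game_hours_per_week": [0, 12, 0, 15, 1, 20],
    # ... dozens of other filler items ...
})

experts = survey[survey["action_game_hours_per_week"] >= 10]
novices = survey[survey["action_game_hours_per_week"] <= 1]

# Both groups receive the same generic invitation ("a study of visual tasks"),
# so neither can guess that game experience was the selection criterion.
print("invite as experts:", experts["participant_id"].tolist())
print("invite as novices:", novices["participant_id"].tolist())
```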

The authors also advise that researchers look for other individual or group differences that might modulate the benefits of video game training, as these can provide clues about the mechanisms responsible (true benefits, motivational differences, differences in strategy, etc.). For example, they note that samples of expert/novice participants are often exclusively male. Equal gender representation would help the results generalize.

Another way to study game effects is a game training or intervention study, in which non-gamers play a video game to see whether it causes any change in measures of cognitive performance. Training can last anywhere from several days to several weeks, depending on lab resources. Unlike expert vs. novice comparisons, this approach can demonstrate a causal relationship between gaming and superior cognition, but it is not without problems. Training studies are akin to drug trials: there are two comparable groups, one given the experimental treatment and the other a placebo treatment. A placebo, or fake treatment, is necessary to control for improvements resulting from the expectation that one should improve after receiving some kind of treatment. Of course, in drug trials participants don't know whether the pill they are taking contains actual medicine or just sugar (i.e., a placebo). In video game research, however, a placebo treatment is rather tricky to define: participants know which treatment they receive.

The benefits of gaming seem to be restricted mostly to action video games like first-person shooters. In most existing studies, the control/placebo treatment is a game very different from the experimental one, typically a non-action game such as Tetris or The Sims. People who receive these control or placebo treatments don't improve; those receiving the action games do. However, there is an untested assumption here: that both games are ones participants believe should improve their performance (i.e., that both produce placebo effects of the same size). The authors note that none of those studies tested whether they do, and they question whether slow-paced games like The Sims generate the same expectation of improvement. They argue it is reasonable to assume that participants who receive fast-paced, visually demanding action game treatments are more likely to expect improvement when they encounter fast-paced, visually demanding assessment tasks that measure their visual and attentional skills.

Another approach often adopted is to compare the experimental group to a control group that receives nothing; such a method makes a placebo-effect explanation even more likely. Thus, controlling for and actually measuring placebo and expectation effects is a primary methodological consideration. The authors argue that both groups should have an equal expectation of improving on the outcome measures. At this point, we don't know what participants expect after game training or whether these expectations can account for observed improvements. Following the same line of thought on the necessity of an active control group, I am unsure whether any studies have compared a video game treatment with an existing training program, something like pitting Brain Age against a non-video-game program designed to train the same skills.
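
One simple way to act on this recommendation (my sketch, not a procedure from the paper) would be to collect expectation ratings from both groups after their first training session and test whether they differ:

```python
# Sketch: compare self-reported expectations of improvement between groups.
# The ratings below are invented 1-7 scale responses, for illustration only.
import numpy as np
from scipy import stats

action_group_ratings  = np.array([6, 5, 7, 6, 5, 6, 7, 5])
control_group_ratings = np.array([3, 4, 2, 3, 4, 3, 2, 4])

t, p = stats.ttest_ind(action_group_ratings, control_group_ratings)
print(f"t = {t:.2f}, p = {p:.4f}")
# A significant difference would warn that any later "game effect" could
# instead reflect a differential placebo effect between the two treatments.
```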

Another criticism leveled by the authors is that participants in the control condition of game experiments often DON'T improve. This is a bit weird, because by the end of the study they have already completed all of the tests of perception and cognition once before (abilities are measured before and after training to look for differential improvement in the experimental group). The lack of a practice effect in the control group is peculiar. The logical expectation is that both groups should improve their scores, but that the experimental group should improve more. What have been considered "game effects" on cognition might instead be attributable to an odd baseline (the lack of improvement in the control group). This group should have improved just from doing all of the assessment tasks twice. It may also mean that the control treatment wasn't doing its job: generating some expectation of improvement.
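
A toy worked example (made-up numbers, not data from any study) of the baseline pattern the authors expect, with both groups gaining from practice and the experimental group gaining more:

```python
# Expected pattern: practice effects in BOTH groups, plus a differential
# (game) effect in the experimental group. A flat control group is the anomaly.
import numpy as np

pre_exp,  post_exp  = np.array([50., 52, 48, 51]), np.array([58., 60, 56, 59])
pre_ctrl, post_ctrl = np.array([49., 51, 50, 48]), np.array([53., 55, 54, 52])

gain_exp  = (post_exp - pre_exp).mean()    # practice effect + possible game effect
gain_ctrl = (post_ctrl - pre_ctrl).mean()  # practice effect alone

print("experimental gain:", gain_exp)                       # 8.0 points
print("control gain:     ", gain_ctrl)                      # 4.0 points
print("differential (game) effect:", gain_exp - gain_ctrl)  # 4.0 points
# If gain_ctrl were ~0, the raw difference would overstate the game effect.
```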

When it comes to interpreting game effects, another important question is what game experience might actually be changing, if it changes anything at all. Instead of changing basic cognitive abilities, game experience might change how one approaches other tasks. The authors note that even short-term exposure to an action video game can make players perform faster (but less accurately) on other tasks. They also note instances of gamers not giving up as easily on difficult laboratory tasks measuring change detection. To better understand performance improvements on tests of cognition, the authors suggest having participants describe their strategy (i.e., think aloud) and give retrospective reports on how they approached the cognitive tests, and recording their eye movements while they perform these tasks. This information will allow researchers to see whether game play improves basic perceptual and cognitive abilities, or whether game experience changes how people approach a task. Or we could simply eliminate this concern by giving everyone a cheat sheet (encouraging everyone to use the same, best strategy).

A final issue is the way these studies are reported, often across multiple journal articles. Game training studies (and, in the case of communication research, psychophysiology studies) are time- and resource-intensive, so there is an incentive for researchers to gather as much data from participants as possible and extract several publications from a single experimental project. Ideally, one project would examine one outcome variable. When one study is split across many journal articles, it may give the impression that many independent replications of video game effects exist when in reality there are only a few. Journalists, too, should be mindful that multiple reports of beneficial video game effects might all come from one project. The authors advise that when researchers report results from a single project in different papers, they should link them explicitly; the same goes for journalists and press releases. And when all of a project's results are published, I would like a compilation book or review article that gives me and others a good, big-picture view of the study in its entirety.

A related issue is file-drawer bias, in which studies that failed to replicate or find significant results go unpublished. The problem is: where can we publish failed studies? Perhaps in a hypothetical "Journal of Methodologically Sound Replication Studies", or in PLoS ONE. The authors note one meta-analysis in which, once publication bias is corrected for, video game effects are essentially cut in half.
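
To see how the file drawer inflates estimates, here is a deliberately simplified illustration (invented effect sizes, using a plain average rather than a proper inverse-variance meta-analysis):

```python
# File-drawer illustration: averaging only published studies vs. averaging
# published plus unpublished null results. All effect sizes are invented.
import numpy as np

published   = np.array([0.8, 0.7, 0.9, 0.6])   # significant, published d's
file_drawer = np.array([0.1, 0.0, 0.2, -0.1])  # null results, never published

print("published-only mean d:", published.mean())  # 0.75
print("all-studies mean d:   ",
      np.concatenate([published, file_drawer]).mean())  # 0.40
# Consistent with the meta-analysis the authors cite, in which correcting
# for publication bias roughly halved the estimated video game effect.
```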

So, the take-home message is that research on the cognitive and perceptual benefits of video games is exciting, but as researchers and gamers we should exercise caution and practice the best methods, ones that can clearly establish video game effects and rule out the alternative explanations.

Boot, W. R., Blakely, D. P., & Simons, D. J. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2:226. doi: 10.3389/fpsyg.2011.00226
