
Brandon Boyer, Blogger

September 19, 2006

In the new issue of the Susquehanna Financial Group's Video Game Journal, authors Jason Kraft and Chris Kwak follow up on their earlier study and again statistically disprove a direct correlation between video game review scores and performance at retail, reiterating that there is little relationship between a game's critical and commercial reception.

The original study, conducted in December of last year, took a sample of 275 PlayStation 2 games across genres and charted their relative NPD sales data against their Metacritic scores, concluding: "The low correlation may indicate that this random sample, which includes PS2 titles across six years and multiple genres, is the result of a failure to account for a number of other factors. Ratings may only tell part of the story."

Noting that the study "touched a nerve" and prompted a flood of emails and calls complaining that the authors' "sample size was too small; others that we had not looked at multi-variable regressions; and still others that we did not disclose enough of the data," Susquehanna followed up with a wider range of samples. This time taking a sample of 1,200 PlayStation 2 titles, the authors once again charted relative NPD and Metacritic data, and once again concluded that "the correlation between game ratings (Metascores) and unit sales is statistically insignificant." In fact, the study found that with the larger sample, "only 15.8% of the movement in game unit sales ... can be explained by movements in game ratings," down from the 17.3% found in the original 275-game sample.

To help control for other factors that might explain the findings, the authors looked specifically at franchise correlation, including sequels and new franchises, and selected data from other consoles, in all cases failing to find a meaningful direct score-to-sales relationship outside isolated extreme cases.
In conclusion, the authors state, "a theory (that game ratings matter) that fails under scrutiny is accepted as conventional wisdom. Conventional wisdom is wrong. And we have not even addressed the causation argument – that a higher rating causes a game to sell better. There is no reason to argue causation, because while correlation does not equal causation, the absence of correlation means no causation in our case."
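The "percentage of movement explained" figure the study cites is an R-squared, which for a simple one-variable regression is just the square of the Pearson correlation between ratings and unit sales. A minimal sketch of that calculation, using made-up (rating, sales) pairs rather than the study's actual data, might look like this:

```python
import math

# Hypothetical (Metascore, unit sales) pairs -- NOT the study's data,
# just illustrative values for the calculation.
ratings = [62, 71, 78, 84, 90, 55, 68, 95]
sales = [120, 80, 310, 150, 600, 95, 400, 220]  # thousands of units

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(ratings, sales)
r_squared = r * r  # fraction of sales variance "explained" by ratings
print(f"r = {r:.3f}, r^2 = {r_squared:.3f}")
```

An R-squared of 0.158, as the study reports, would mean roughly 84% of the variation in unit sales is attributable to factors other than the review score, which is the basis for the authors' conclusion below.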

About the Author(s)

Brandon Boyer


Brandon Boyer is at various times an artist, programmer, and freelance writer whose work can be seen in Edge and RESET magazines.
