Lewis Pulsipher, Blogger

May 4, 2009


At the Triangle Game Conference last Wednesday, Julianne Greer, editor of Escapist magazine, moderated a panel discussion titled “Teaching to the Test: The Impact of Reviews on Game Development”. To explain the title: K-12 teachers tend to teach what is on end-of-grade tests to the exclusion of almost everything else. The panel considered how much game development studios and publishers create games to meet the “test” of reviews.

Panelists included Juan Benito (Creative Director, Atomic Games), Eric Peterson (CEO of Vicious Cycle), Dana Cowley (PR Manager, Epic Games), and Shaun McCabe (Production Director, Insomniac Games’ East Coast studio).

Their answer to the main question was “definitely not,” though they do pay attention to what individual game fans say in forums and email. The only exceptions would be a sequel, especially if another studio made the previous game, or a licensed property, where reviews of past games for that license can give clues to what needs to be changed or added.

Benito saw fan opinion as more “pure from the heart” than the reviews, which led into a discussion of whether reviews are influenced by manufacturers. This can be overt, through junkets or other “bribes”, or through the influence of advertising money. (Consumer Reports magazine refuses to accept advertising to avoid any appearance of influence by manufacturers.) While the panelists had heard of this kind of shady dealing, only one knew of it happening firsthand (from his days at MicroProse); however, Greer stated that Escapist magazine had received such influence offers (which it rejected).

Yet there’s a danger of shutting out the non-hardcore audience if you base your decisions solely on feedback from the minority who express opinions online. McCabe said listening to players is so important that some companies have renamed their marketing departments to something like “community relations”.

How much do reviews affect sales? I was surprised that no one cited any survey, as surely someone has investigated this question; panelists speculated that reviews have a strong influence on hardcore players, but virtually no influence on casual (e.g., Wii) games, as those are impulse buys. Greer showed slides from research firm EEDAR indicating that certain categories of games (RPGs, Music & Rhythm, Sports) received consistently higher aggregate review scores than others, with some lagging far behind (Arcade, Skill & Chance, General Entertainment such as Wii Fit, and Narrative). We have no way to know whether this reflects a bias among reviewers or an actual difference in game quality, though I’d suspect it’s because most reviewers are hardcore players.

Another interesting slide compared scores for 360 and Wii versions of the same title: the 360 scores were much higher than the Wii’s for “hardcore” game categories and much lower for “casual” titles.

Benito described the Wii as a “critique-proof platform”. Another panelist joked that if you put the word “Party” in a Wii title, it will automatically sell at least 200,000 copies, because parents want their children to be playing “party” games.

Peterson cited the Wii-exclusive MadWorld as a game that received very good reviews but poor sales (180,000 according to VGChartz, only 66,000 in the month of release according to Wikipedia). And his company has a children’s game with two and a half million copies sold but reviews in the 40s.

This was part of a discussion of the quality of reviews. Panelists clearly did not care for reviews in general, probably because they felt so many were poorly written and often contained mistakes. One panelist specifically referred to the reviews on IGN and GameSpot as “white noise”, and all panelists clearly felt that reviews are often “subjective” rather than “objective”. Of course, a “subjective” review can be just as accurate as an objective one (in fact, more accurate), depending on the situation; the problem is that reviewers don’t explain their biases and why they feel as they do, so readers have no basis to judge their opinions.

Moreover, with a single numeric rating, reviewers are going to go with their personal preferences, so, for example, a shooter fan reviewing a children’s game isn’t likely to give it a high rating (which is probably what happened with the children’s game Peterson mentioned).

The discussion was not intended to be a critique of reviews, but I’d make a number of observations. I used to be paid to review board- and RPG-related materials for TSR’s Dragon and other magazines 25-30 years ago, but I wouldn’t write assigned video game reviews, as a proper review requires playing through the game, a much larger time commitment for video games. Someone pointed out that film reviewers commit only two or three hours to watching a film, quite a contrast. Panelists had seen reviews where they knew the reviewer had barely played the game. In fact, this has led some companies to make sure the first few minutes of a game are exceptionally engaging, a good idea in general but especially good for the “snap reviewers”.

Magazine and Web site “exclusives” tend to be more favorable than reviews, as the writer knows the studio or publisher has done his outlet a favor by granting the exclusive. “Maybe that’s why previews are so different from reviews.” This comes back to the nature of the fans, who go to the sites with the “newest news”, who want to see the latest artwork for the Zerg (as in a recent PC Gamer magazine) or the latest screenshots. In my opinion this is a major reason why video game magazines are having a difficult time surviving: they can’t be as up-to-the-minute as the Web sites.

The “cult of the new” also tends to drive reviewers to snap decisions and sloppy behavior; if the review comes out too late, it’s no longer “news” and is ignored by many.

Some reviewers clearly don’t understand how reviews, of any medium, work.  They should answer three questions:

· What were the creator(s) trying to do?
· How well did they do it?
· Was it worth doing?

To answer these questions they must explain “why”, not merely say “this is a piece of junk” or “I don’t like the graphics” or “what a dumb idea”.  But this makes reviewing more difficult, more work.

If the reviewer separates these questions sufficiently, a reader can see that the children’s game was well done, even though the reviewer thought it wasn’t worth doing because he isn’t interested in children’s games.

One panelist suggested reviewers ought to “take a step back” and watch others play the game, in order to acquire more than one point of view. They also need to put themselves in the shoes of a person who has saved his pocket money to buy a game, as opposed to reviewers who have piles of freebies to try out.

Reviewers who assign an actual numeric evaluation should provide several scores for different types of players (hardcore, casual, RPG fans, shooter fans, whatever is appropriate to the audience).

Aggregation of reviews, and the use of the aggregates in contracts, touched off a lively discussion. The reviewer’s personal preferences mean a lot when he or she assigns only a single number rather than numbers for different kinds of players. Yet these numbers are now used in many contracts to govern the royalty received by a studio: a higher aggregate score leads to higher royalties. Panelists felt this was a poor way to do business, and that it was being used to take advantage of inexperienced developers and trick them out of some of the profit.

Possibly it was also an excuse for a publisher to save marketing money, yet “you can’t sell games without marketing” (Cowley). The panel did not officially include audience participation, but one beside-himself audience member finally asserted that the panelists were dead wrong (I can’t repeat his actual words!), and that if a developer has confidence in his game, he should take the opportunity to make more from a high aggregate score. Clearly this is a contentious issue, and with that we ran out of time.
