The Silent Revolution of Playtests, Part 2

Continuing his <a href="http://www.gamasutra.com/view/feature/3963/the_silent_revolution_of_.php">series on playtesting</a>, ex-Ubisoft veteran Pascal Luban examines the practicalities of getting consumer feedback on your game.

Pascal Luban, Blogger

April 9, 2009

Proximity, responsiveness, relevance... these are the watchwords of efficient playtests.

In the previous installment of this article, I explored the reasons for the rising importance of playtests in game development.

In an industry where games represent increasingly high financial risks for publishers, playtests have come to function as a strong guarantee for quality gameplay. I will share with you today my experience regarding the methodology employed in preparing and conducting them.

Heeding the Clients: The Design Teams

First and foremost, one must be aware of a fundamental point: the role of playtests is not to redo the design in place of the design teams -- whether game design or level design. They are instead conducted to help them. This observation is crucial, because it drives the entire approach to playtests.

Firstly, we must respect the hard work of the design teams. Having had my own responsibilities in game and level design, I know how difficult it is to make "a good game". We must respect those who put their whole hearts into building the best game possible; we must not scorn or undervalue their work.

Secondly, playtests must adapt to the needs of the design teams. Good tuning for maps or gameplay mechanics is often the result of trial and error. Knowing this, designers need room to experiment; playtests can give them the opportunity to test their hypotheses regarding design issues, and must therefore adapt to particular needs as they arise.

Lastly, playtest results must be made available to the concerned parties as soon as possible, as time allotted for game development is always short.

Preparing a Playtest Campaign

A playtest campaign generally requires around one month of preparation. We must first define its objectives, because they will determine what types of playtesters we shall have to recruit, the scale of the sessions (1, 2, 4, 8, 12 players), and their duration (from half a day to a full week).

We will also have to attend to the logistics as well as the legal framework (non-disclosure agreements, possible monetary compensation for playtesters when sessions last over half a day, etc.). And we must, of course, prepare the design teams to make effective use of the playtests.

One does not grow the best crops in dry land; a playtest's effectiveness is rooted in the playtesters themselves. Half the battle in running an effective playtest campaign lies in wisely choosing playtesters, which requires investment of time, energy, and perhaps a bit of money and patience.

Recruiting takes time: we must not only attract as many candidates as possible (in order to build a solid pool of playtesters), but also evaluate them. The purpose of evaluation is obviously to judge a candidate's gaming competence, but also his capacity for analysis and self-expression.

Evaluation may take several forms. An initial selection can be done through a more or less thorough questionnaire, to be completed by the candidate. The true evaluation, however, must be done during the sessions themselves, where we can observe the candidates at play.

We must establish a protocol for obtaining the most consistent results possible. There is no "all-purpose" evaluation protocol; we must also be able to adapt to specific circumstances as the situation mandates.

When I built a playtest structure at the Bucharest Ubisoft office, I encountered an interesting problem: we needed playtests for console games, but all the players we could find locally were exclusively PC gamers. I had to set up a specific protocol to evaluate the ease with which our Romanian candidates could adapt to console gaming.


Ubisoft's Splinter Cell: Chaos Theory

The protocol consisted of briefly explaining the gameplay controls of a complex game (the multi-player mode in Splinter Cell: Chaos Theory), and then setting them loose in the game in order to gauge the speed at which they adapted to the gameplay. This selection method proved to be quite efficient.

Candidate selection must therefore be done according to a given playtest campaign's objectives. We may need only highly skilled players who have already mastered the genre, or we may require novices if the objective is to playtest the game's accessibility.

Communication regarding playtests also takes time. Before candidates can turn up on your doorstep, they must first be made aware of your need. In my experience, while recruiting through generic classified ads will yield a high number of candidates, many will be too young (careful of those labor laws!), and most will be only casual gamers.

A good way to recruit experienced players is through forums, gaming clans, or specialized stores. It takes much more time, but I have always found great playtesters this way. In playtesting, quality matters more than quantity!

Organizing the Sessions

I shall address three aspects of playtest organization: the composition of the team, the preparation of the playtest protocol, and its logistics.

Recruiting must start at least four or five days before the session itself. At this stage, the playtest manager already has access to a database of candidates who have already been evaluated or, at least, identified. He can thereby choose his playtesters according to the session's theme. Invitations are sent by e-mail.

At this point, we realize the importance of having a great number of candidates, since most are not available at will. We must therefore engage in mass-mailing to ensure sufficient availability of playtesters come session day.

It is also best to invite at least one more playtester than necessary, since last-minute withdrawals are commonplace, and it is usually a good idea to ask playtesters to confirm their attendance by e-mail.
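To make the over-invitation rule concrete, here is a minimal sketch of the arithmetic involved; the 15% no-show rate is purely an assumed figure, not a number from any actual campaign.

```python
import math

def invitations_needed(players_required: int, no_show_rate: float = 0.15) -> int:
    """Estimate how many invitations to send so that enough playtesters actually
    show up on session day. The default no-show rate is a hypothetical value."""
    expected_attendance = 1.0 - no_show_rate
    # Round up, then add the one spare playtester recommended above.
    return math.ceil(players_required / expected_attendance) + 1

# Example: an 8-player session with an assumed 15% no-show rate.
print(invitations_needed(8))  # prints 11
```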

Protocol setup is an important part of session preparation. Some playtests are organized near the end of the development cycle, to tune maps or the game system. The protocol for this type of playtest is often straightforward: let the playtesters play for as much time as possible, record game statistics, and organize open Q&A sessions.
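As a rough illustration of the statistics side of such a protocol, the sketch below aggregates per-round session logs into per-map averages; the event fields and map names are hypothetical, not the format of any actual Ubisoft tool.

```python
from collections import defaultdict

# Hypothetical per-round telemetry collected during a session.
events = [
    {"player": "P1", "map": "MapA", "kills": 3, "deaths": 2, "round_time_s": 412},
    {"player": "P2", "map": "MapA", "kills": 1, "deaths": 4, "round_time_s": 398},
    {"player": "P1", "map": "MapB", "kills": 0, "deaths": 5, "round_time_s": 655},
]

def summarize(events):
    """Group raw session events by map and compute simple averages for the debrief."""
    per_map = defaultdict(list)
    for e in events:
        per_map[e["map"]].append(e)
    return {
        map_name: {
            "samples": len(rows),
            "avg_kills": sum(r["kills"] for r in rows) / len(rows),
            "avg_round_time_s": sum(r["round_time_s"] for r in rows) / len(rows),
        }
        for map_name, rows in per_map.items()
    }

print(summarize(events))
```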

The time when playtests are most useful, however, is during earlier stages of the development cycle, when the game system and maps are still in gestation. Let us not forget that the earlier we detect any issues, the easier and cheaper it will be to correct them.

During the development of maps for the multiplayer version of Splinter Cell: Chaos Theory, I organized playtests to evaluate the structure of the then still-embryonic maps.

I specifically remember the Aquarius map: By having it tested by highly experienced playtesters, we -- including the level designer who had built the map -- quickly realized that the map was far too large.

Having noticed this problem, he immediately rebuilt his map, which took little time as the map was still just a prototype. It took him a few iterations to downsize his map to the optimal size. In the end, Aquarius became one of the game's most popular maps.


Ubisoft's Splinter Cell: Pandora Tomorrow

Playtests allow us to shed light on many problems and to validate (or invalidate) hypotheses set by the design team. During the development of the multiplayer version of Splinter Cell: Pandora Tomorrow, specific playtests were undertaken with the purpose of tweaking the characteristics of certain pieces of equipment, such as the smoke grenade.

The smoke grenade is one of the accessories most used by the spies, since its cloud slows down their opponents (the mercenaries) and can even put them to sleep if they stay too long in its area of effect.

Tuning the smoke grenade's parameters was not so simple -- if its range were too wide, it would be an unstoppable weapon for the attackers (they would only need to throw a single grenade in a corridor to block any access by their opponents).

On the other hand, if the grenade's effect zone were too small, the weapon would be completely useless (defenders have vision modes allowing them partial visibility through the cloud). Finding the right values took us a lot of time.
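To make the tuning loop concrete, here is a minimal sketch of how candidate values for the cloud radius might be screened before a session; the numbers and the corridor-blocking heuristic are purely illustrative, not the game's actual parameters.

```python
CORRIDOR_WIDTH_M = 4.0  # hypothetical width of a typical corridor choke point, in meters

def screen_radius(radius_m: float) -> str:
    """Flag radius values of the kind the playtest feedback warned about: a cloud
    wide enough to seal a corridor is overpowered, while a very small one is
    useless because defenders can partially see through it anyway."""
    diameter = 2 * radius_m
    if diameter >= 1.5 * CORRIDOR_WIDTH_M:
        return "too strong: a single grenade blocks a whole corridor"
    if diameter <= 0.5 * CORRIDOR_WIDTH_M:
        return "too weak: not worth an equipment slot"
    return "plausible: verify in the next playtest session"

for radius in (1.0, 2.0, 3.0, 4.0):
    print(f"{radius} m -> {screen_radius(radius)}")
```

Playtests then arbitrate among the surviving candidate values, which is exactly where the many iterations mentioned above went.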

Lastly, to be relevant, protocols must adapt to problems encountered in previous sessions as well as to the test requests put forth by the design team. This responsiveness to the development team's needs is one of the hallmarks of a successful playtest. I shall address this point later on.

Let us now talk about logistics. Good playtests require a stable build of the game without too many bugs. When playtests are run in the middle of the development cycle, this may be easier said than done. Regardless, the game must be sufficiently stable, and maps must be rid of the most detrimental bugs (such as the inability to climb a ladder, for example).

A game delivery protocol must be set up with the development team. The latter must deliver a playtest-ready version of the game to the internal debug team, which will rapidly review the game to ensure that the version is playtestable.

When issues arise, cooperation between the debug and development teams will allow for swift corrections of issues, and subsequently the production of a stable version suitable for playtests.

Such organizational finesse requires a lot of discipline from all of the teams involved. Another good practice is to prepare a checklist for the level and graphic designers, so that they can make sure that their own maps are free of blocker bugs. Finally, the playtest session manager himself must make sure that the version is indeed playable.
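Such a checklist can be very simple; the items below are illustrative examples of blocker-level checks, not the studio's actual list.

```python
# Hypothetical pre-delivery checklist a level designer runs through
# before handing a map over for playtesting.
MAP_CHECKLIST = [
    "Every ladder can be climbed from both ends",
    "No holes in collision geometry (the player cannot fall out of the map)",
    "All spawn points are reachable and face playable space",
    "Each objective can be completed from start to finish without a crash",
]

def unchecked_items(results: dict) -> list:
    """Return the checklist items that were not confirmed; any entry here
    should block delivery of the map to the playtest team."""
    return [item for item in MAP_CHECKLIST if not results.get(item, False)]

# Usage: mark each item True/False after a quick pass through the map.
results = {item: True for item in MAP_CHECKLIST}
results[MAP_CHECKLIST[1]] = False  # a collision hole was found
print(unchecked_items(results))
```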

Playtest Sessions

Playtests are especially instructive when design team personnel attend the sessions; indeed, a game or level designer will base his work on ideas he will formulate upon observing the behavior of the players.

However, players do not always react as expected, and we must take their diversity into account.

By seeing with his own eyes how real players use equipment or navigate a map's topology, and by asking them the reasons for their behavior at the end of the session, the designer can rapidly make optimizing adjustments -- a demonstration is always more efficient than a long speech! It is thus highly recommended to encourage the designers to attend the playtests.

That is why I strongly recommend that playtests be conducted on the premises of the development studio itself. Remote playtests are valuable for tweaking map and system settings, but less so for evaluating an embryonic game.

Obviously, playtest observers must follow certain rules: they must not voice their comments or ask any questions until they are authorized by the playtest session manager, in order to preclude influencing the game session or the playtesters' judgement.

If it is desirable for designers to attend the playtests, it is simply essential that the playtest session manager does so. He must not simply organize the session and ask his questions at the end; he must actually watch the playtesters at play.

The reason is as follows: early playtests often have a limited number of playtesters, and the problems found are liable to be numerous. This fact is likely to affect the relevancy of feedback received, rendering it inconsistent at best and flat-out contradictory at worst. The manager must take all of this into account, evaluating the relevance of the feedback himself.

Note, however, that the involvement of the playtest manager can be cause for controversy. In some cases, a playtest manager must simply behave as a mere observer; in fact, this is generally the best attitude to have during playtests occurring later in the game development, when it is time to fine-tune game system settings.

The objective at this point is to collect a maximum of statistical data from a high number of playtesters.

By contrast, during early playtesting meant to evaluate the strengths and weaknesses of embryonic maps or game systems, the comparatively low quantity and greater heterogeneity of the collected data require a more aggressive, reactive, and direct involvement on the part of the manager.

At this point, he must necessarily "get his hands dirty", as he'll be working with incomplete data. While there is a risk of error here, my experience has shown me that playtest results are actually more concrete at this stage, and thus more useful.

My experience within one of the best development studios in France has taught me that the playtest manager must be wholly invested in the final quality of the game, and must not be content with being a mere observer.

This conclusion once again indicates the need for a close relationship between the playtest and the development teams.

Debriefing

We thus arrive at the final result of a playtest session. The general idea is to bring the playtest conclusions as quickly as possible to those who most need them -- generally the designers and project leaders. Debriefing may take several forms.

First, design team members who observed the playtests may put their most pressing or immediate questions to the playtesters. They often leave the playtesting room with some strong ideas burning in their mind.

Then comes the report, which must make a clear distinction between the facts (statistics etc.), opinions from the playtesters, and the manager's own observations and conclusions. Raw data must be provided so that the designers know on which bases the manager drew his conclusions.

Putting all the cards on the table is a good way to establish trust with those who will read the report. Let us not forget that the purpose of playtests is to improve the game, not to settle scores.
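One way to keep that separation explicit is to give every report the same fixed structure; the sketch below uses hypothetical field names and sample entries to show facts, playtester opinions, and the manager's conclusions kept apart, with the raw data attached, and is not the actual template used on those projects.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlaytestReport:
    """Hypothetical report skeleton keeping facts, opinions, and conclusions apart."""
    session_date: str
    facts: List[str] = field(default_factory=list)                 # statistics, observed events
    playtester_opinions: List[str] = field(default_factory=list)   # what the players said
    manager_conclusions: List[str] = field(default_factory=list)   # the manager's own analysis
    raw_data_files: List[str] = field(default_factory=list)        # attached so readers can check the basis

report = PlaytestReport(
    session_date="2009-03-12",
    facts=["Average round time on the test map: 7 min 20 s over 8 players"],
    playtester_opinions=["Defenders felt the map was too large to patrol"],
    manager_conclusions=["Consider shrinking the map before the next session"],
    raw_data_files=["session_stats.csv"],
)
print(report.manager_conclusions)
```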

A full-fledged report takes time to compile and write, so a shorter, intermediate debriefing may be needed when crucial feedback is required urgently.

As a final note, I'll mention that at the Milan Ubisoft studio I began experimenting with a protocol allowing a remote office (in another city or even another country) to obtain an on-the-spot debrief on a map playtest.

Named D3 for "Debrief Dynamique à Distance" (Remote Dynamic Debrief), this protocol consists of quickly establishing a list of the main open issues and organizing an online session where the concerned designers (at the development office) and the playtest session managers (at the playtest office) can log on.

They can then explore the maps while the playtest team explains the issues with much precision, and all can work together in developing possible solutions. A playtester may even join them, contributing further to the dialogue.

Previous Chronicles

The Silent Revolution of Playtests, part 1

The Megatrends of Game Design, part 1

The Megatrends of Game Design, part 2

The Megatrends of Game Design, part 3

The Megatrends of Game Design, part 4

Physics in Games: A New Frontier

Multiplayer level design, part 1

Multiplayer level design, part 2

Multiplayer level design, part 3

About the Author(s)

Pascal Luban

Blogger

Pascal Luban is a freelance creative director and game designer based in France. He has been working in the game industry as a game or level designer since 1995 and has been commissioned by major studios and publishers including Activision, SCEE, Ubisoft, and DICE. In particular, he was Lead Level Designer on the 'versus' multiplayer versions of both Splinter Cell: Pandora Tomorrow and Chaos Theory; he designed CTF-Tornado, a UT3 mod multiplayer map built to showcase the applications of physics to gameplay; he was Creative Director on Wanted – Weapons of Fate; and he was lead game designer on Fighters Uncaged, the first combat game for Kinect. His first game for mobile platforms, The One Hope, was published in 2007 by the Irish publisher Gmedia and received the Best In Gaming award at the 2009 Digital Media Awards in Dublin. Leveraging his design experience on console and PC titles, Pascal is also working on social and Free-to-Play games. He contributed to the game design of Kartoon, a Facebook game currently under development at Kadank, completed a design mission on Treasure Madness, zSlide's successful Free-to-Play game, and has carried out several design missions for French and American clients. Pascal is content director for the video game program at CIFACOM, a French school focusing on the new media industry.
