
The Cult of the Peacock

A story about the slow death of the videogame manual. Examines the history of interaction design both in academia and the games industry. Explores the downsides of our relentless drive towards usability. Begins with a teapot and ends with a peacock.

Brendan Vance, Blogger

January 6, 2014


It's easy to forget that at one time all videogames had manuals. I used to like reading manuals. Manuals were cool. Now, instead of manuals, we have interactive tutorials. They take about fifty times longer to produce, three times longer to consume, and players hate them so much that their highest aspiration is to become completely transparent. Currently I spend most of my waking hours developing them. It should come as no surprise that I hate them too.

This is a story about how these things happened. It's sort of a companion piece to the article I wrote about Liz Ryerson's Problem Attic in that it examines the reasons why games like that became unfashionable, how this is a bad thing, and what we might do to fix it. It's a story about the history of interaction design both in academia and the games industry, as well as my experiences travelling through those spaces. It's a story about how I got the kink in my neck, and the slow death of the videogame manual. It begins with a teapot and ends with a peacock. More than anything, though, it is about apotheosis. There are four parts. Shall we begin?

1. The Three Commandments

In 1988 a person named Donald Norman published a book that we know today as The Design of Everyday Things. This book is sort of like the Old Testament for interaction design people like myself; it has come to define the way we think about our audience, how we shape our process, and ultimately what it is we hope to accomplish with our work.

More than 100,000 copies sold!

The first commandment of DOET is that there are no dumb people: Only dumb objects. As a user you are never at fault for being unable to figure out how to use something. If you can't convince your thermostat to turn on at 8:00 AM on weekends but 5:00 AM every workday, that isn't because you're stupid and ought to read the manual; it's because the thermostat's designers made the thing too hard to understand. Programming your thermostat should not require research. You are a busy person who has more interesting things to do than waste time puzzling over a dumb and incomprehensible machine. (Puzzling over dumb and incomprehensible machines is the designer's job.)
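To make the example concrete, here is a minimal sketch, in Python rather than any real thermostat's firmware, of the schedule the user is trying to express; every name in it is hypothetical. The designer's job is to make something this simple editable without sending anyone to a manual.

```python
# Purely illustrative: the weekday/weekend wake schedule from the paragraph
# above, expressed as data. No actual thermostat exposes this structure.
WAKE_SCHEDULE = {
    "weekday": "05:00",  # every workday
    "weekend": "08:00",  # Saturday and Sunday
}

def wake_time(day: str) -> str:
    """Return the scheduled turn-on time for a given day name."""
    key = "weekend" if day in ("Saturday", "Sunday") else "weekday"
    return WAKE_SCHEDULE[key]

print(wake_time("Monday"))  # 05:00
print(wake_time("Sunday"))  # 08:00
```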

The second commandment is to make objects more usable by designing affordances, noticeable features that show people what they're supposed to do with an object (that is, what actions the object affords) and ideally make it impossible to use that object incorrectly. Much of DOET's first chapter is about how to design doors. The side of a door that you have to push inward should have an affordance on it that looks easy to push but difficult to pull; and conversely, the side that you have to pull outward should look easy to pull but difficult to push. You should be able to see through most doors so that you know where they lead and can avoid accidentally smacking someone on the other end, but they shouldn't be so transparent that you accidentally run into them or mistake them for windows. Designers are weirdly fixated on doors.

The third commandment is every software designer's best friend and worst enemy: Use good affordances to produce accurate cognitive models of how objects work. Cognitive models are like a person's mental map of a tool or a computer application. Accurate ones, which result from helpful and consistent affordances, allow you to recover from mistakes because you understand exactly where you went wrong and know exactly how to undo it. Inaccurate ones, by contrast, result from unhelpful or inconsistent affordances and tend to leave you paralyzed. Have you ever watched a novice computer user superstitiously avoid touching ANYTHING because they're afraid of contracting a virus or somehow erasing all their photographs? To my grandmother, using Windows Vista is like traversing the Mines of Moria.

DOET judges the user's needs most important, and her perspective most valuable. It is about the apotheosis of the user; it makes her into God, and with holy might it strikes the fear of Her into objects and those who make them. Designers whose products are easier to create than they are to use shall be flogged with a rolled-up thermostat manual. Designers who make doors that you can't tell whether to push or pull shall have their eyes pecked out by the ravens of the valley. Designers who unwittingly architect the Mines of Moria shall be stoned with stones until they are dead. I have it on good authority that The Design of Everyday Things transformed the people responsible for Clippy into pillars of salt.

DOET, alongside all the important research around it, culminated in something called User Centered Design, a philosophy in which "user error" does not exist and programmers are sad. Under UCD, designers first figure out what their software should do by interviewing potential users and observing them while they work. Next they prototype some way of improving those users' lives, like an improved piece of productivity software. Then they observe how well their prototype works, then they refine it some more, then they observe how well their refinements work and then they refine the refinements. If they're skilful and lucky, the five- or six-hundredth iteration of the third or fourth complete design overhaul will yield something that is actually helpful to people; if they're really lucky, someone will then decide to pay for it.

2. The Shadow of the Teapot

I came to university in 2006, by which time DOET and UCD reigned as the dominant religion of interaction design. I came because I wanted to make videogames, not thermostat firmware; at that time, however, the universities around Vancouver had no formal videogame programs. There were really only three options: I could take computer science, I could take interaction design, or I could attend this thing called SIAT that awkwardly slapped the two together alongside Art. I chose the third option, which was a wise decision because it turned out that making videogames is exactly that: The awkward slapping together of computer science, interaction design and Art. When I started to actually make games, however, I discovered that although all three disciplines were represented, interaction design had accumulated the largest share of power. I was taught that videogames, being made almost entirely of software, are prone to all the same failings and benefit from all the same design techniques. Games were good when they didn't need a manual. Games were better when they honoured their users in every way possible. Games were best of all when we viewed them as the unrighteous progeny of their designer's original sin; they needed to be prototyped, they needed to be tested, and ultimately they needed to be saved. As the instrument of DOET I became responsible for their salvation.

Pictured: Regret

Here is an anecdote. For my upper-division game design course our team decided to make a Civilization-like set in the ice age wherein the player hunts and gathers resources to help her village survive. I spent many late nights towards the end of the project programming a tutorial system that pops brief explanatory dialogue boxes at the player only once (typically the first time she uses some relevant mechanic). We could think of all kinds of reasons why this was not ideal, but we'd been busy with all the game's other aspects and hadn't had time to do anything more elaborate in only 13 weeks. On the last day of the course our professor brought in a game industry expert whose task was to select the best student project and award its developers a (drum roll please) $50 Future Shop Gift Card For Excellence in Game Design. He spent about five minutes with our prototype. During that time he intentionally skipped every dialogue box, clicked around randomly, repeatedly announced he had no idea what to do, and unfavourably compared it to a hidden object game he'd played in the row behind us. He said there was definitely no way anything in our prototype could be the sexy new core mechanic for a Need For Speed sequel. Then he walked away. (Designers who expect the player to read things shall receive no $50 Future Shop gift card.)
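For the curious, here is a minimal sketch of that show-once trigger; the names are hypothetical stand-ins, not our actual student code. Each dialogue box fires the first time its mechanic is used, then never again.

```python
# Show-once tutorial trigger: remember which tips the player has seen
# and pop each explanatory dialogue box only on first use of a mechanic.

class TutorialSystem:
    def __init__(self):
        self.seen = set()  # ids of tips the player has already been shown

    def on_mechanic_used(self, mechanic_id: str, show_dialogue) -> None:
        """Pop the dialogue the first time a mechanic fires, then stay quiet."""
        if mechanic_id not in self.seen:
            self.seen.add(mechanic_id)
            show_dialogue(mechanic_id)

tutorials = TutorialSystem()
show = lambda tip: print(f"[TIP] {tip.replace('_', ' ')}")
tutorials.on_mechanic_used("hunt_mammoth", show)  # shows the dialogue
tutorials.on_mechanic_used("hunt_mammoth", show)  # silently skipped
```

(Our expert, of course, skipped the dialogue either way.)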

I spent five years in SIAT, studying under the long shadow of DOET's iconic teapot. Over that time I developed both a burning resentment of and a guilty longing for user testing that I suspect will follow me for the rest of my life. But I also learned how to make videogames, and today I am not unduly proud to say that I work in the videogame industry (where programmers are also sad). Here usability is no less a religion than in design school, although it's a different sort of religion. We inhabit a feudal hierarchy of executives, marketers, designers and other stakeholders who each harbour their own beliefs about who players are, what they enjoy and what they will purchase. There is lots of money involved, of course, which is the domain of the stakeholders. But there is design involved too, which is the part where we, the diaspora from the shadow of the teapot, can relieve all our guiltiest longings. As a game designer my job is to render unto Caesar what is Caesar's, and unto the User what is the User's. Sometimes these interests are at odds (see: The history of microtransactions), but at other times they intersect.

UCD, in games and all other software, makes for happy bosses and happy players because it helps you make justified decisions. In games there are 'artistic' decisions and 'design' decisions, the difference between them being that artistic decisions are non-falsifiable. You often make artistic decisions about your game's intrinsic features, like what mood you want it to have, and no one can claim with certainty that these decisions are right or wrong. Artistic decisions are tough because in the absence of certainty you will never in your life convince an entire company to agree on something. Marketing will make a marketing argument, the creative director will make a creative argument, and then whoever has enough power to decide will get to decide. Design decisions, by contrast, are very falsifiable because you can test them. You first make design hypotheses, which usually concern your game's extrinsic features like the specific techniques you use to achieve whatever mood the winner of your last argument decided upon. Then you make some prototypes featuring those decisions, hand them to the player, and see what works. Caesar loves design work because it mitigates risk (or at least obscures it); the User loves design work because it frees her from confusion and frustration. Everybody loves design work so much that in the modern game industry we have adopted analytics as a method of converting every single risky, contentious artistic decision into a safe, testable design decision. The Zyngas of the world do not make artistic choices about what colour to make the title screen. They simply do a split test: Show half of all players a blue one and the other half a green one, then choose the option whose test group proved more likely to click the 'Play Now!' button. In effect design has now been weaponised, and Art can't really keep up.
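For illustration, here is a toy sketch of such a split test; the hash-based bucketing and every name in it are assumptions of mine, not anyone's actual analytics pipeline.

```python
# Toy A/B split test: assign each player a stable title-screen colour,
# tally 'Play Now!' clicks per variant, then ship whichever converts better.
import hashlib

def variant_for(player_id: str) -> str:
    """Deterministically bucket a player into 'blue' or 'green'."""
    digest = hashlib.md5(player_id.encode()).hexdigest()
    return "blue" if int(digest, 16) % 2 == 0 else "green"

impressions = {"blue": 0, "green": 0}
clicks = {"blue": 0, "green": 0}

def record_title_screen(player_id: str, clicked_play: bool) -> None:
    """Log one title-screen view and whether 'Play Now!' was tapped."""
    v = variant_for(player_id)
    impressions[v] += 1
    clicks[v] += int(clicked_play)

def winner() -> str:
    """The variant with the higher click-through rate wins the 'argument'."""
    return max(clicks, key=lambda v: clicks[v] / max(impressions[v], 1))
```

Hashing the player id keeps each player in the same bucket across sessions, which is what makes the comparison fair. Note that nothing in this loop ever asks whether blue or green is right for the game; it only asks which one converts.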

What Caesar loves even more than weapons is that UCD improves a game's market performance on every conceivable level. A game that is easy to use enjoys a wider potential audience because it requires less expertise and effort on the part of its players. It demos better at trade shows and in game competitions, where people have a limited amount of time to evaluate it. It even receives more favourable product reviews, in part because writers face tremendous competitive pressure to finish an entire game and publish something ASAP. When someone has to consume 60 hours of content in three days so she can hit her publishing deadline, it had better damn well be easy to digest. The market for games and game culture is very crowded and the competition is fierce. No one has time to sit around scratching their head; your best chance at getting anyone to look at the work you do is therefore to UCD the crap out of it.

Here is another anecdote. At my game industry job I once sat in on a testing session during which a brand new player decided he would try out each button on our title screen while talking through what he thought each one would do. He enabled and disabled mute; he found the credits where he expected the achievements to be; he tapped a few things that weren't implemented yet. He then tapped on the button that led to our pictographic, 100% text-free 'How to Play' screen (by now we thought we had learned our lesson about players and reading things). He looked around for the smallest possible amount of time the human brain will allow, succeeded in recognizing that it was indeed a tutorial, and then immediately tapped the 'close' button. He later spent several minutes wondering aloud what the game's controls might be. (Designers who expect the player to look at things shall not leave the office before dusk.)

3. The Cult of the Peacock

These various incentives have coalesced into a dominant school of thought regarding what games are and can be. This school teaches that if it's not fun (or at the very least quick and painless) to be taught about some feature, we shouldn't include it; that clarity is better than complexity; that elegance is better than messiness; that one button is better than two. It teaches that the purpose of a game is to explain itself to you, and that somewhere in the act of explaining lies that game's intrinsic value. We have thereby converted the scariest, most contentious question of all (what should this thing be?) from an artistic decision into a design decision. We have done this because it is profitable, but also because it has a tendency to produce beautiful results.

Pictured: The product of an iterative decision making process.

What we've neglected to consider, though, are the side effects of placing so great a burden upon designers. Consider this. Each game (as well as any other work of media) possesses a 'burden of learning': All the things a person must understand in order to consume it as its authors intend. This burden of learning falls somewhere on a spectrum with two opposite extremes. One extreme emphasizes the discovery of features by the consumer, which is to say it 'burdens' her. Works on this end are often challenging to the audience in the Art sense (not the Super Meat Boy sense); they require time and energy to parse. English poetry, for example, burdens the reader by assuming she is literate and therefore omitting any kind of tutorial explaining what each Roman glyph represents or how verbs work. The other extreme of this spectrum emphasizes the teaching of features by the designer. This work is accessible to the audience, which is the opposite of challenging. It includes things like airport signage, Bolshevik propaganda and of course videogames, all of which tend to deal in clear and elegant ideas because those are the easiest to communicate.

By pushing the burden of learning further and further towards the designer (and demanding less time and energy from our users) we have managed to create all manner of wonderful games that many people can understand instantaneously without the aid of manuals, previous videogame experience, The Rosetta Stone, et cetera. These games sell really well and a lot of people like them. But each step towards the accessible end of the spectrum carries with it an unseen cost: The designer's time and energy. A designer is kind of like a Turing machine: Given enough iterations she can figure out how to teach any player any game mechanic without causing boredom or confusion. But those iterations are not free and time is not unlimited, and for this reason there is an opportunity cost to performing them. The time a designer spends discovering how to better explain one mechanic cannot be spent improving the game in any other way. Thus, the more accessible you make a game the more time each feature costs, and the less time is available to do really anything except work on accessibility.

Over the past twenty-five years or so the burden on designers has crept forward alongside the march of technology, and as a result the form and content of games have changed. Accessible games are endlessly adaptable; that's why we've all been remaking Doom, Super Mario Bros and Tetris this entire time. Complicated games are more fragile and have a tendency to boom and bust. In the '80s it was fashionable for the Ultima franchise to use every key on the keyboard, explaining itself through paper manuals, cloth world maps and a whole lot of inquisitive key presses. But when the rising cost and complexity of development outpaced the growth of its audience, Origin Systems went extinct. (Today its bones serve as novelty shelving units for Sims 3 expansion packs.) This did not happen because Mario games are 'better' than Ultima games in any definitive way; the two have different strengths and are difficult to compare. Yet in the school of thought I am describing, accessibility is the only possible goal. Super Mario Bros uses four buttons whereas Ultima IV uses all twenty-six of a standard keyboard's letter keys. All. Twenty. Six. (Good luck teaching that stuff solely through clever antepieces.) Because this school of thought has become so entrenched in everything from development to marketing to criticism, Mario gets to be the state of the art while Ultima occupies a tiny undernourished niche.

The further we shift the burden the higher we raise the bar, and it has now reached so absurd a height that the decisions we make to reach it defy all reason. My last project was an iPad game. Because the user will not pay money for things on the App Store, the project was based on microtransactions. Because the user exists perpetually in a state of 'about to abandon your game in favour of watching cat videos', it was essential that our menus (and their embedded microtransactions) were neither confusing nor boring, remaining unobtrusive while simultaneously being present all the time. Because the user will not read a bunch of text, these menus somehow had to communicate all their associated game mechanics instantaneously through graphics and interaction alone. Everything had to be snappy; everything had to work perfectly. I spent four solid months implementing UI for this project, about three quarters of my total time investment. I worked longer hours than I ever have, becoming snappish, aloof and even more cynical than usual as mild burnout set in. I developed neck problems that won't go away. It was one of the worst experiences of my life. We succeeded in the end at producing an excellent user interface, but it's hard not to view that as a Pyrrhic victory. I might have fared better had it been possible to, say, spend one afternoon writing a one page document explaining how the game works. I might have, in that case, spent four months (minus the one afternoon) improving the actual videogame part of our project. But that is not what we do in games anymore, because no one will read a one page document. Instead I spent that time iterating endlessly on features no one would identify as our product's primary source of value but which everyone agrees are essential to resembling a high quality videogame (and thereby generating commercial and critical success). The proof is in my creaky spine: It is not what we explain but how elegantly we explain it that the industry values most. This fact is as abhorrent and destructive as it is lucrative. (Designers whose tongues are made of silver shall be rewarded with an opportunity to raise the bar from which they hang.)

The modern videogame is like a peacock, and affordances are its proud and luxuriant tail feathers. The primordial videogame was a humbler beast; but when our fondness for big, healthy, shimmering affordances came to the forefront of our affections, natural selection took hold. With each generation the feathers grew and grew. Eventually the beast became spangled all in affordances, its tail feathers ablaze even though it had little worth affording. Today we are both blessed and cursed with a giant kaleidoscopic rear end, as irresistible to female peafowl as it is to hungry feral cats. Videogames are caught in a Fisherian Runaway (what game designers might call a 'race to the bottom'). Our belief in clarity and elegance, though it has yielded spectacular results, is not the very best way to make videogames; it may not even be a particularly good way. We suffer from the bar we've set for ourselves and the burdens we place upon designers. We are wrongly convinced, even in the critical community, that works like Problem Attic are unworthy of attention solely because they prioritize different features and challenge players in a way we deem to be unfashionable.

I don't think this is what DOET intended. In truth, The Design of Everyday Things has only so much to say about videogames as an art form. Unlike a thermostat or an operating system, no amount of study can tell us exactly what a videogame should do for a player; its purposes are fundamentally artistic rather than designed. Sometimes they can be messy and often they can be obscure, but these are virtues in themselves. Games are made of software, but they are not only software. They can benefit from good affordances, but there is joy in discovering features for oneself. A videogame is not an everyday thing; yet by behaving as if it is we've warped the practice of game development. Our wayward school of thought pretends to DOET's long shadow, but its claim is illegitimate. It does not honour the user by bringing her the best possible work; instead it coddles her by doing what she asks rather than imagining all that she might appreciate. In the process it leaves her without imagination, blind to the world beyond the tail feathers. It is more like a cult than a religion, and its time should pass.

4. Resolutions

It's easy to forget that at one time all videogames had manuals. I used to like reading manuals. Manuals were cool. Now, instead of manuals, we have interactive tutorials. They take about fifty times longer to produce, three times longer to consume, and players hate them so much that their highest aspiration is to become completely transparent. Currently I spend most of my waking hours developing them. It should come as no surprise that I hate them too. (Designers who covet the heresies of their forebears shall lead a life of want.)

Not Pictured: An explanation of what the 'Z' key does. It brings up "ztats".

Fortunately I plan on changing the way I spend my time: I'm going to write a lot of manuals! I think manuals might be the best way to improve videogames. The piece about Problem Attic I mentioned earlier was a manual of sorts, and writing it was a very rewarding experience. It was beneficial to the author because it served to raise a little bit of awareness about the game (which is especially important for works that inhabit an atypical position on the 'burden of learning' spectrum). The piece got way more attention than anything I might have written about GTA V or something, firstly because the market for that stuff is extremely saturated and secondly because games like GTA V don't really require analysis. (Some poor programmer already destroyed herself making that thing accessible enough for everybody. Maybe hundreds of programmers.) Most importantly, the act of writing it helped me learn a lot about Problem Attic and improved the way I think about videogames.

If you are a critic, I highly recommend writing a manual about a game you like. Game authors often can't write about their own stuff because to produce an 'official' reading of their work would limit everyone else's ability to interpret it. Their work must speak for them; only we players can speak back. Techniques like close reading can help build a comprehensive understanding of what makes games truly valuable (not just what makes them easy to learn). And by building this understanding we take important steps towards reining in the cult of the peacock; as game festivals and other high-brow critical establishments pay more and more attention to challenging works, the critical pressure towards accessible games will start to subside; ultimately we might even gain some semblance of balance, like every other popular medium enjoys.

When I'm not writing manuals I will probably continue developing videogames, and I'd like to change the way I do that as well. I plan on thinking much harder about how I evaluate potential game features. "Because then the user doesn't have to think" or "but how do we teach that?" should not be trump cards in every single argument about whether to include stuff. It's easy to turn everything into a neat little design decision, but making a few more artistic ones would be better in the long term for users and for my sanity. I also plan on thinking very carefully (and complaining very loudly) about the actual development cost of implementing what appear to be vital or 'must-do' features just because they increase the size and lustre of our tail feathers. It's greedy to make something shiny just because we feel we can; it's smarter to consider just how many 'could-do' features we'll lose in the process, and whether our game might be diminished by recklessly chasing the bar. It's okay to ask for trust from your users rather than just money; sometimes they even enjoy it. I think it's important to build that trust through improving game literacy rather than trying to eliminate the need for it.

First you write the manuals. Then you get the literacy. Then you get to stop feeling quite so bad all the time. This is the hypothesis. I long to test it.
