
A player's journey through a game is the result of a thousand little choices, leading to success or failure in the game and to enjoyment or dislike of the game itself. Understanding how players react to different kinds of choices can lead to designs that help them make the kinds of choices they'll enjoy, as well as to an understanding of how some game designs can unintentionally elicit bad choices.

John Hopson

February 6, 2002


The play of any computer game can be described as a series of choices. A player might choose the left-hand or right-hand tunnel, decide to skip this target and save ammunition, or play a fighter rather than a mage. The total path of a player through the game is the result of a thousand little choices, leading to success or failure in the game and to enjoyment or dislike of the game itself. The principles underlying the choices players make, and the ways a designer can shape those choices, are a key component of game design.

As in my previous article, the kind of psychology discussed here is often called behavioral psychology. This sub-field of psychology focuses on experiments and observable actions, and is a descriptive rather than normative field of study: instead of looking at what people should do, it studies and tries to explain what they actually do. By understanding how people react to different kinds of choices, we can design games that help them make the kinds of choices they'll enjoy, and we can understand how some game designs unintentionally elicit bad choices.

Maximizing

The most obvious thing to do when confronted with multiple options is to pick the choice or pattern of choices that maximizes reward. This is the sort of solution sought by game theory, one that mathematically guarantees the greatest level of success. While most players don't try to work out the exact algorithms behind weapon damage, they will notice which strategies work better than others and tend to approach maximal reward.

Participants usually maximize when the choices are simple and deterministic. The more complex the problem, the more likely they are to engage in exploratory actions, and the less sure they can be that they're doing the optimal thing. Learning is fastest when the contingency is deterministic: if the pit monster attacks every time the player reaches a certain point, players will quickly pick this up and learn the optimal point to jump over it. If it attacks probabilistically, the player will take longer to work out what rules govern the pit monster's attack.

While maximizing is the best thing for the player, it's probably not a good thing for the designer. If the player is doing as well as it's possible to do, it implies that they've mastered the game. It also means that the game has become perfectly predictable and most likely boring. A contingency with an element of randomness will maintain the player's interest longer and be more attractive. For example, subjects will generally prefer a 30-second variable-interval schedule (rewards delivered randomly between zero and sixty seconds apart) to a 30-second fixed-interval schedule (rewards delivered exactly 30 seconds apart), even though both provide the same overall rate of reward.
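
A quick way to see that equivalence is to simulate both schedules. The sketch below is my own illustration, not from the article, and the function names are invented; it generates reward times for a 30-second fixed-interval schedule and a 30-second variable-interval schedule and confirms they pay out at roughly the same overall rate:

```python
import random

def fixed_interval_rewards(duration_s, interval_s=30):
    """Times at which a fixed-interval schedule delivers rewards."""
    return list(range(interval_s, duration_s + 1, interval_s))

def variable_interval_rewards(duration_s, mean_interval_s=30):
    """Times at which a variable-interval schedule delivers rewards.

    Gaps are drawn uniformly from 0-60 seconds, so they average 30 seconds.
    """
    times, t = [], 0.0
    while t < duration_s:
        t += random.uniform(0, 2 * mean_interval_s)
        if t <= duration_s:
            times.append(t)
    return times

hour = 3600
fi = fixed_interval_rewards(hour)
vi = variable_interval_rewards(hour)
print(f"FI rewards/hour: {len(fi)}, VI rewards/hour: {len(vi)} (roughly equal)")
```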

There is another, subtler problem with maximizing. As discussed in the previous article, sharp declines in the rate of reward are very punishing for players and can result in quitting. If the player has learned to maximize their reward in one portion of the game, creating a high and consistent level of reward, moving to another part or level of the game will most likely produce a drop in reward. This contrasting low level of reward is extremely aversive and can cause the player to quit. It can even act as an effective punishment for exploring new aspects of the game, since the transition from the well-understood portion to the unknown marks an inevitable drop in rewards.

There are two basic approaches to preventing maximizing. First, make sure the contingencies are never so simple that a player could find an optimal solution. The easiest way to do this is to make the contingencies probabilistic. Massive randomness isn't necessary, just enough to keep players guessing and engaged. Second, the more options there are within the game, and the more things there are to compare, the less likely it is that there will be a clear ideal strategy. If all the guns in the game work the same but do different levels of damage, it's easy to know you have the best one. If one gun is weaker but does area damage and another has a higher rate of fire, players can explore a wider variety of strategies. Once there is a clear best way to play the game, it ceases to be interesting in its own right.
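
As a toy illustration of the second approach, here is a hypothetical pair of guns whose stats I've invented: neither dominates, because which one is best depends on how many enemies the player is facing.

```python
# Two guns with different trade-offs: one hits hard and fast on a single
# target, the other is slow but damages the whole group.
GUNS = {
    "blaster":  {"damage": 40, "shots_per_s": 2.0, "hits_all": False},
    "launcher": {"damage": 60, "shots_per_s": 0.5, "hits_all": True},
}

def damage_per_second(gun, enemies):
    """Expected damage per second against a group of enemies."""
    targets = enemies if gun["hits_all"] else 1  # area damage hits everyone
    return gun["damage"] * gun["shots_per_s"] * targets

for enemies in (1, 4):
    best = max(GUNS, key=lambda g: damage_per_second(GUNS[g], enemies))
    print(f"{enemies} enemies -> best gun: {best}")
# 1 enemy  -> blaster  (80 vs 30 damage/s)
# 4 enemies -> launcher (80 vs 120 damage/s)
```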

Matching

Once there are multiple options producing rewards at different rates, the most common pattern of activity observed in humans and animals is matching. Essentially, matching means that the player is allocating their time to the various options in proportion to their overall rate of reward. More formally, this is referred to as the Matching Law, and for two options it can be expressed mathematically as the following equation:

\[
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
\]

where $B_1$ and $B_2$ are the amounts of behavior (time or responses) allocated to the two options, and $R_1$ and $R_2$ are the rates of reward those options provide.

Let's say our player Lothar has two different areas in which he can hunt for monsters to kill for points. In the forest area, he finds a monster approximately every two minutes. In the swamp area, he finds a monster every four minutes. Overall, the forest is a richer hunting ground, but the longer Lothar spends in the forest the more likely it is that a new monster has popped up in the swamp. Therefore Lothar has a motive to switch back and forth, allocating his time between the two alternatives. According to the Matching Law, our player will spend two-thirds of his time in the forest and one-third in the swamp.

The key factor in matching is rate of reward. It's the average amount of reward received in a certain period of time that matters, not the size of an individual reinforcer or the interval between reinforcers. If the swamp has dragons that give Lothar 100 points, while the forest has wyverns that give him only 50 points but appear twice as often as the dragons, the overall rates of reward are the same and both areas are equally desirable.
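
Plugging the article's numbers into the equation above makes both points concrete (a minimal sketch; the variable names are mine):

```python
# Matching Law: time allocated in proportion to each option's rate of reward.
forest_rate = 1 / 2   # one monster every 2 minutes -> 0.5 rewards/minute
swamp_rate = 1 / 4    # one monster every 4 minutes -> 0.25 rewards/minute

total = forest_rate + swamp_rate
print(f"Time in forest: {forest_rate / total:.0%}")  # 67% (two-thirds)
print(f"Time in swamp:  {swamp_rate / total:.0%}")   # 33% (one-third)

# Rate, not reinforcer size, is what matters: 100-point dragons every
# 4 minutes equal 50-point wyverns every 2 minutes.
dragon_rate = 100 / 4  # points per minute in the swamp
wyvern_rate = 50 / 2   # points per minute in the forest
assert dragon_rate == wyvern_rate  # 25 points/min each: equally desirable
```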

Now that I've set up a dichotomy between matching and maximizing, let me confuse things a bit. Under many circumstances, matching is maximizing. By allocating activity according to rate, the player can receive the maximal amount of reward. In particular, when faced with multiple variable interval schedules, matching really is the best strategy. What makes matching important to our understanding of players is that matching appears to be the default strategy when faced with an ongoing choice between multiple alternatives. In many cases, experiments show subjects matching even when other strategies would produce higher rates of reward.

Matching (and switching between multiple options in general) also has the helpful property of smoothing out the overall rate of reward. If there are several concurrent sources of reinforcement, a dip in one of them becomes less punishing. As one source of points falls off, a player can smoothly transition to others. A player regularly switching back and forth between options also has a greater chance of noticing changes in one of them.

Overmatching, Undermatching, and Change-Over Delays

When it was first described, matching was hailed as a great leap forward, an example of a relatively complex human behavior described by a mathematical equation, akin to physics equations describing the behavior of elementary particles. However, it was quickly discovered that humans and animals often deviated from the nice straight line described by the Matching Law. In some situations, participants overmatched, giving more weight to the richer option and less to the leaner option than the equation would predict. In others, the participants undermatched, treating the various contingencies as more equal than they actually were.

Neither of these tendencies is especially bad for game design, in small quantities. As long as the players are exploring different options and aren't bored, we don't usually care how much time they spend on each. Extreme undermatching implies the player isn't really paying attention to the merits of each option. Overmatching can mean that the player has chosen an option for reasons other than merit, such as enjoyment of the graphics.

Fortunately for behavioral psychology, these deviations can be predicted and controlled. One important factor in determining how closely participants match is the amount of time or effort required to change between options. The farther apart the options are, or the more work is required to switch between them, the more players will tend towards overmatching. For example, imagine a typical first-person shooter in the vein of Quake or Unreal. If switching from the current gun to a different one imposes a delay of 20 seconds during which the player can't fire, they'll switch from one to another less often than they would otherwise. Even if the current gun isn't perfect for the situation, the changeover cost might keep the player from switching. If the delay is long enough, switching can become non-existent, as the costs outweigh any possible benefits.
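
A toy simulation makes the effect visible. All the numbers below are invented for illustration: the right gun for a fight scores 10 points, the wrong one 5, and switching costs the player the delay's worth of points. Once the delay exceeds the benefit of switching, switching stops entirely:

```python
import random

def points_earned(delay_cost, rounds=10000):
    """Total points when switching guns costs `delay_cost` points of downtime."""
    random.seed(0)                        # same fight sequence for each delay
    current, total = "A", 0
    for _ in range(rounds):
        situation = random.choice("AB")   # which gun suits this fight
        benefit = 10 - 5                  # right gun scores 10, wrong gun 5
        if situation != current and benefit > delay_cost:
            current = situation           # switch only when it pays
            total -= delay_cost           # cost of the change-over delay
        total += 10 if current == situation else 5
    return total

for delay in (0, 2, 20):
    print(f"change-over cost {delay:2} -> total points: {points_earned(delay)}")
# 0 -> ~100000 (switch constantly), 2 -> ~90000, 20 -> ~75000 (never switch)
```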

At the other end of the spectrum is the case where changeover is instantaneous. Consider a massively multiplayer game where monsters spawn periodically in various locations. Switching between multiple spawning sites normally takes time, but suppose a player could teleport instantly from one to another with no cost. The best strategy would be to jump continuously back and forth, minimizing the time between the appearance of a monster and the kill. That makes sure the player gets as many points as possible in a given period of time.

Obviously, neither of these extremes is really desirable for game designers. Ideally, we want to be able to adjust the time, difficulty, or expense of changing strategies to strike just the right balance between exploration and exploitation. What that balance should be is an individual design choice; the change-over delay is just a tool for achieving it.

Risk

Another important factor players consider in choosing between alternatives is risk. Game theory says that players should weigh the options such that they'll maximize overall reward in the long term. For each alternative, they should multiply the possible reward by the odds of receiving that reward and choose the best option.

However, this article is concerned with what players actually do, not what they mathematically should do. Psychologists generally use two terms to describe how subjects react to risky situations: subjects are risk-prone when they prefer the more uncertain alternative and risk-averse when they tend towards safer options. In one experiment, pigeons were offered a choice between two keys to peck. The left key provided 8 pieces of food every time; the right provided 16 pieces half the time and no food half the time. The pigeons consistently preferred the more reliable schedule, and were therefore risk-averse. In a later study, the left key produced 3 pieces of food every time while the right key produced 15 pieces one-third of the time. In this study, the pigeons preferred the riskier alternative.
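
The expected values behind those two experiments are easy to verify, using the article's numbers:

```python
# Expected value = reward size x probability of receiving it.

# Experiment 1: the keys differ in variability, not in expected value.
safe_1  = 8 * 1.0        # 8 pieces every time              -> 8.0
risky_1 = 16 * 0.5       # 16 pieces half the time          -> 8.0
# Equal EVs, yet the pigeons preferred the safe key (risk-averse).

# Experiment 2: the risky key is genuinely better on average.
safe_2  = 3 * 1.0        # 3 pieces every time              -> 3.0
risky_2 = 15 * (1 / 3)   # 15 pieces one-third of the time  -> 5.0
# Here the pigeons preferred the risky key, matching the math.

print(safe_1, risky_1, safe_2, risky_2)
```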

So far, this is perfectly in accord with game theory, with subjects taking risks when those risks offer an overall greater chance of reward. But what about the example mentioned earlier in this article, where subjects preferred a variable interval schedule to a fixed interval schedule? Even when the two options provided equal rates of overall reward, subjects preferred the probabilistic option. The difference lies in the expected outcome of each individual response. In the pigeon experiment we just described, each choice was discrete: a peck, an outcome, and the subject was presented with a fresh choice. Each choice contained the totality of possible outcomes, so the subjects' behavior reflected the total contingency.

In the fixed-interval / variable-interval experiment, one could respond any number of times on the fixed interval option but would not receive the reward until the interval had elapsed. On the variable interval schedule, every single response had a small chance of being rewarded. Therefore, there was always a reason to try the variable schedule, but only occasionally a reason to respond on the fixed schedule. The subjects were responding to the proximate outcomes, rather than the overall outcomes. This is an example of how subtle changes in the schedule can cause drastic changes in behavior. Whenever we provide players with rewards, we're creating a schedule of reinforcement that will influence them to behave in particular ways. Because we can't avoid these effects, we have to understand them so that they can be made to work for us, rather than against us.
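
One way to see the difference in proximate outcomes is to compare what a single response earns under each schedule. This sketch is an illustrative simplification with assumed response rates, not a model from the article:

```python
def fi_response_pays(seconds_since_last_reward, interval_s=30):
    """On a fixed-interval schedule, a response pays only after the interval."""
    return seconds_since_last_reward >= interval_s

def vi_chance_per_response(mean_interval_s=30, responses_per_s=1.0):
    """Rough chance that any single response on a variable schedule pays off.

    If rewards arrive about once per 30 seconds and the player responds once
    per second, each response has roughly a 1-in-30 chance of collecting one.
    """
    return 1 / (mean_interval_s * responses_per_s)

print(fi_response_pays(10))       # False: too early, no reason to respond yet
print(vi_chance_per_response())   # ~0.033: always a small reason to respond
```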

Odysseus' Choice

One factor we haven't addressed yet is when the decisions are made. Many of the choices we make in games don't have immediate effects, only helping or harming the player minutes or hours down the line. A character might have to choose whether to take a potion that gives them extra strength now or save it for later play. A player in a tank combat game might choose a fast, lightly armored tank rather than a slower, better protected one. Not all choices are followed by immediate consequences, and this delay often distorts the player's perception of their options.

Take the situation where a person has two possible options, each with a different level of reward. For example, a person might choose between receiving one piece of candy or two pieces of candy. If the delays are equal, the person will naturally choose the larger reward. However, as the delay to the lesser reward decreases, the relative value of that reward starts to rise. If someone is offered one piece of candy right now versus two pieces a year from now, most people will choose the more immediate reward.
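
The article doesn't give a formula for this, but behavioral research commonly models it with hyperbolic discounting, where a reward's subjective value is divided by (1 + k × delay). The discount rate k below is an invented placeholder:

```python
def discounted_value(reward, delay_days, k=1.0):
    """Subjective value of a reward received after a delay (hyperbolic model)."""
    return reward / (1 + k * delay_days)

now       = discounted_value(1, delay_days=0)    # 1 candy now     -> 1.000
next_year = discounted_value(2, delay_days=365)  # 2 candies later -> ~0.005
print(f"1 now: {now:.3f}  vs  2 in a year: {next_year:.3f}")
# The immediate single candy wins, even though two candies is objectively more.
```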



This kind of decision making is often studied in children, who tend to be more strongly affected by these delays. However, its effects can be seen throughout life, from decisions about saving money to the relative addictive qualities of recreational drugs. A drug which takes effect faster will generally be more addictive than a slower one of equivalent strength.

A practical question arising from this research is under what circumstances people tend to make more accurate decisions. One answer that psychologists have discovered has a parallel in the ancient Greek myth of Odysseus and the Sirens. Odysseus knew his boat was about to sail near the place where the Sirens sang, and that anyone who heard them would throw themselves into the sea in a vain attempt to reach them. Because he wanted to hear the Sirens but also make it home alive, he ordered his crew to tie him to the mast and to plug their ears with beeswax so they would not hear the call. In this way, his ship sailed safely past, his crew hearing neither the Sirens nor his pleas to be untied.

Because he made the decision at a long delay from both outcomes, his choice was a good one. If he'd waited until the Sirens were right there to choose, his decision would have favored the short-term pleasure of listening to their song over the longer-term reward of making it home alive.

More generally, the more distant all of the outcomes are, the more people's choices tend to maximize long-term success. Of course, you may not want players doing deep long-term thinking. It's up to the designer what's best for his or her game, whether to skew the players towards one option or another, towards one strategy or another. Delays between action and outcome are just one of the tools available to influence how players choose.

Conclusion

To explain every choice a real human being makes would take a model as complex as the human mind. Psychology cannot offer us that yet, but it can give us rules of thumb and general patterns of choice that describe a generous portion of what we do when presented with multiple options. Every game offers its players a sequence of choices, each with attendant consequences for choosing wisely or poorly. By understanding some portion of the rules that govern how human beings react to those choices, we can design games that elicit the kinds of choices that make the game a more enjoyable experience for the player.

______________________________________________________

 


About the Author

John Hopson


John Hopson is the head of User Research at Bungie and has been the lead researcher for a wide variety of games ranging from AAA blockbusters (Halo, Age of Empires) to small indie games (Trials HD, Shadow Complex). He's also the author of a number of articles on the intersection of research and games, including the infamous 'Behavioral Game Design'. John holds a Ph.D. in Behavioral and Brain Sciences from Duke University and is currently the chairman of the IGDA Games User Research SIG.
