
Featured Blog | This community-written post highlights the best of what the game industry has to offer. Read more like it on the Game Developer Blogs.

Rational Choice Theory, Social Games and Game Design

Do you think you're a rational human being? Are you sure? This post discusses the concept of rationality as it relates to gameplay. It's also a part of a series that was originally posted on Michael Fergusson's social game design blog.

Michael Fergusson

October 12, 2010

4 Min Read

I hope that so far, I've been able to illustrate that great game design can be used to motivate specific behaviours during gameplay (and why this is important for both business and game designers to understand).

As we move forward with the series, I wanted to touch on a related concept about how these decisions are motivated, and in particular, I wanted to discuss a fascinating study I came across by Andrew Colman, Ph.D. from the University of Leicester. His paper discusses cooperation, psychological game theory and the limitations of rationality.

Rational Choice Theory, Cooperation and Social Games

In general, if someone is behaving as a rational actor in the classical sense, they will choose the path that provides the greatest reward at the lowest cost. This is why many scientists use rational choice theory as a framework for understanding social and economic behaviour in collaborative situations.

In this context, ‘rationality’ simply means that a person acts as if balancing costs against benefits to arrive at an action that maximizes advantage (and doesn’t have much to do with the decision being sane or crazy).

It’s also common to describe social interaction in collaborative environments as a part of this framework, as often rational choice theorists see social interaction as a process of social exchange.  This makes sense: economic action involves an exchange of goods and services; social interaction involves the exchange of approval and certain other valued behaviours.

In order to emphasize the parallels with economic action, rewards and punishments in social exchange have generally been termed rewards and costs, with action being motivated by the pursuit of a ‘profitable’ balance of rewards over costs (important when game designers aim to get people to do things for them in games).
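To make the cost-benefit framing concrete, here's a minimal sketch of a classically rational actor. The action names and numbers below are invented for illustration; the point is just that the actor always picks whichever action yields the most "profitable" balance of reward over cost.

```python
def best_action(actions):
    """Pick the action with the highest reward-minus-cost balance.

    actions: dict mapping action name -> (reward, cost).
    This is the classical rational-choice prediction: maximize profit.
    """
    return max(actions, key=lambda a: actions[a][0] - actions[a][1])

# Hypothetical in-game choices (values are made up for illustration):
choices = {
    "grind_quest": (10, 8),  # big reward, big time cost -> net 2
    "daily_bonus": (5, 1),   # small reward, tiny cost   -> net 4
    "idle":        (0, 0),   # do nothing                -> net 0
}
print(best_action(choices))  # daily_bonus
```

Of course, the rest of this post is about the ways real players depart from exactly this kind of calculation.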

Colman’s research suggests that when it comes to ideas about rational action, particularly in social interactions (settings where cooperation may be necessary), the traditional model of rational choice theory — a person acting as if balancing costs against benefits to maximize their advantage — often doesn’t apply.

For example, in certain interactive situations, players cooperated to advance collectively, even when that cooperation reduced the benefit they personally received. (The players were doing far more than rationality would suggest was required for the payoff they received.) So we can say they were not acting strictly rationally.

Dan Ariely, the author of Predictably Irrational, also illustrates this exact point using the classic economics game known as the ultimatum game. In the game, one person, called the “sender,” has $20 and offers a “receiver” a portion of the money.

Some offers are fair (an even split) and some are unfair (you get $5, we get $15). The receiver can either accept or reject the offer. If he rejects it, both sides get nothing. Traditional economics predicts that people—as rational beings—will accept any offer of money rather than reject an offer and get zero (we are seen as self-interested rational maximizers).

But behavioral economics shows that people often prefer to lose money in order to punish a person making an unfair offer. When it comes to psychological game theory, Colman’s research goes a step further.
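The contrast between the two predictions is easy to sketch in code. The pot size, the offers, and the fairness threshold below are assumptions for illustration (not figures from Ariely's experiments): the "rational" receiver accepts any positive offer, while the fairness-sensitive receiver rejects offers below a threshold, giving up money to punish the sender.

```python
POT = 20  # total dollars the sender is splitting

def rational_receiver(offer):
    # Classical prediction: any amount of money beats zero,
    # so accept every positive offer.
    return offer > 0

def fairness_receiver(offer, threshold=0.3):
    # Behavioral observation: reject offers seen as too unfair,
    # even at a personal cost. The 30% threshold is an assumption.
    return offer >= POT * threshold

for offer in (10, 5, 2):
    print(offer, rational_receiver(offer), fairness_receiver(offer))
# A $10 (even) split is accepted by both; the $5 and $2 offers are
# accepted by the "rational" receiver but rejected by the fair one.
```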

Research has shown that players not only consider how best to maximize their outcome in the game (or opt out altogether and lose everything), but also factor in notions such as cooperation and fairness when making those decisions.


If each potential member makes this same calculation, as rational choice theory expects them to do, then no one would ever do something that didn’t maximize their benefits. But in fact, they do.

For example, when playing Pet Pupz, a player will feed a friend’s puppy or send a gift, not just to advance in the game, but also to show that they care. In this case, the incentive is not to come out on top as the person with the most puppy coins; the “benefit” is the intrinsic satisfaction a player gets from helping another player, or perhaps the social approval gained from displaying this status.
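One way to reconcile this with the cost-benefit framing is to count that intrinsic or social payoff as part of the reward. The numbers below are hypothetical: sending a gift "loses" coins, but once a made-up social-satisfaction value is included in the same units, the gift comes out ahead after all.

```python
def net_payoff(coin_reward, coin_cost, social_value=0):
    # social_value: intrinsic satisfaction / social approval,
    # expressed in the same (invented) units as coins.
    return coin_reward - coin_cost + social_value

# Gifting a friend's puppy: costs 5 coins, returns nothing in-game...
print(net_payoff(0, 5))                  # -5: "irrational" in coin terms
# ...but feels worth, say, 8 units of satisfaction to this player.
print(net_payoff(0, 5, social_value=8))  # 3: profitable after all
```

The design lesson is that the "ledger" players balance includes currencies the game itself never displays.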

If game designers understand that the traditional rational framework often doesn’t (obviously) apply, then we can design our games to take advantage of these specific behaviours. This can encourage longer periods of player engagement with your game and influence its revenue generation and virality. What do you think about rationality and games? Leave me a comment or email me at michael [at] ayogo [dot] com.


About the Author(s)

Michael Fergusson


Michael is the Chief Executive Officer and founder of Ayogo Games Inc., and is dedicated to the idea that playing is one of the most productive things we can do. He's been an entrepreneur and innovator on the Web for over 15 years, and the games he has developed have been played by millions on computers and Smartphones all over the world. This year, Michael has been invited to speak on game design at SXSW, Vancouver's F5 Expo, Games and Health -- Vid Week, Banff Television Festival and NextMedia in Toronto, and has been featured by Fast Company, MIT Technology Review, CBC, and the Globe and Mail. Michael lives in Vancouver with his beautiful wife and four amazing children.

