As we move through the world we sometimes find ourselves in interactions where what is best for us is not best for those around us. Pollution, overuse of resources, the maintenance of roads, and welfare for the poor and unfortunate are all examples. This type of situation is often referred to as an n-player social dilemma: there are a number of players (n), there is a temptation for individuals to act selfishly, and selfish behavior has negative consequences for the group as a whole.
In virtual worlds it may seem that we are fortunate because resources can be unlimited. If an energy shortage is a problem, just program in unlimited energy. If pollution is a problem, just design production so there is no waste or byproduct. However, having unlimited resources in games is often not an option for designers because the nature of human behavior ties scarcity to motivation and value. A quick glance at a supply and demand curve demonstrates that scarcity drives up price. Scarcity is also closely tied to status. A scarce good is a signal of skill or resources. It feels good to collect and own things that are scarce because they are generally valuable, or because they signal that you are unique and awesome. In these scenarios competition and selfishness are fun for the winner. The game is zero-sum in that the winner gets the spoils and the losers get only the experience of trying. However, sometimes when a good is scarce, it can be more rewarding to preserve or sustain the resource for all to use. In these situations there may be gains in fun, or even resources, for patience and cooperation. Cooperation makes the pie bigger and everybody gets a slice. In addition, there are social benefits, as the ensuing cooperation sows trust and creates or consolidates friendships.
Besides situations with scarce resources, social dilemmas can also occur as a result of pollution, which can arise via undesirable behaviors like griefing or ninja looting – see my post on communication channels in Brink. In these situations, an individual may be driven by an intrinsic motivation that is not directly related to advancement in the game. Perhaps they enjoy dominating another individual, or maybe they are just trying to make a cooperative experience competitive. Research on the motivations of griefers is limited, but most of us have probably been on the receiving end of griefing, or experienced someone playing in a selfish or anti-social fashion.
Another reason social dilemmas might exist is that designers may accidentally or intentionally design them into the payouts of a game. Once a feature is in the game it can be hard to change or remove. This may lead to fun and excitement for players who enjoy competition or selfish play, but for social players who want to make friends, or those who want to try to cooperate for a change of pace, it is preferable to at least have a mechanic available that allows a group to avoid having only selfish outcomes in social dilemmas. In some games where social dilemmas were not solvable via game mechanics, emergent groups have formed using third-party resources or other means to provide a solution.
In the following sections I will detail some of the findings from research on social dilemmas. In doing so, I will focus on findings that detail how to promote or increase cooperation. It’s worth noting that I won’t be doing a literature review; rather, I’ll just point out where the basic idea comes from (if you have questions about further reading feel free to get in touch). In addition, I’ll provide some examples of game scenarios where this type of research could be applied as a game mechanic. Hopefully, enterprising game designers can use these insights to recognize and, when necessary, build in features that allow social dilemmas to be solved, and perhaps more importantly use these findings to design game mechanics that incentivize cooperation and allow players to more easily form stable friendships in online games.
1, 2, & 3: Iteration, Reputation, and Choice of Partners
Social Dilemma Research
In 1980, in one of the seminal events involving game theory and strategy selection research, Robert Axelrod, a professor of political science at the University of Michigan, held a tournament that pitted various strategies against one another in a repeated prisoner’s dilemma. What made the tournament unique was that Axelrod invited famous social scientists from all over the world to submit strategies that they believed would excel, and then pitted them against one another. The results of the tournament were particularly interesting because a very simple and relatively cooperative strategy won. Its name was Tit-for-tat.
In order to understand why Tit-for-tat was able to succeed, it is first important to understand the context in which the tournament was played. Looking at the prisoner’s dilemma in more detail shows that being cooperative opens a player up to deception and defection. In fact, the Nash equilibrium of the one-shot prisoner’s dilemma is mutual defection, but two key features allowed cooperation to be more successful than defection in Axelrod’s tournament.
The first feature is iteration. Since Axelrod’s tournament was played over multiple rounds, sustained cooperation could do better than defection. If two players teamed up, they could do better by cooperating over the long run than two players who were defecting. However, defectors who pair up with cooperators for just one round, or who remain anonymous, can still perform well.
In order for mutual cooperation to be successful, other conditions must exist as well. Players must first be able to identify those who have cooperated with them in the past. In a population of players it isn’t useful to cooperate expecting mutual cooperation, only to have the other player defect against you. This means players must have memory, and reputation must persist. These features allow cooperators to cooperate solely with other cooperators, since they can identify defectors and sanction them with mutual defection.
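The dynamics above are easy to see in a small simulation. Below is a minimal sketch of an iterated prisoner’s dilemma using the standard textbook payoffs (3 for mutual cooperation, 5 for exploiting a cooperator, 1 for mutual defection, 0 for being exploited); the values are illustrative, not Axelrod’s exact tournament setup.

```python
# Toy iterated prisoner's dilemma: tit-for-tat relies on memory of the
# opponent's last move, which is exactly the "identify past partners" condition.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return cumulative payoffs for both strategies over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two tit-for-tat players sustain cooperation: 3 points per round each.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# A defector exploits tit-for-tat once, then both grind out 1 per round.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Note that the defector still beats tit-for-tat head-to-head; tit-for-tat won Axelrod’s tournament because pairs of cooperators accumulate far more in total than pairs of defectors.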
Implications for Game Design
Reputation, iteration, and choice of partners are important features for any game design, not just social dilemmas. Almost all modern online multiplayer games are iterated and feature matching systems; however, iterated matching systems come in a variety of forms. An important variable in iterated matching systems is reputation. With reputational markers and an appropriate iterated matching system, players can avoid those who have caused problems in the past. In addition, some companies have created mechanisms for identifying cheaters and problem players, allowing them to remove these players from the system. This lets players engage in iterated random matching with less fear of being matched with cheaters, ninja looters, griefers, and jerks.
- Iterated Random Matching – Players are matched randomly with other players. Depending on the size of the community you might never see the same person twice. Random matching was used more often in the early days of online gaming. As games have begun to sit on meta-platforms like GameSpy, Steam, Facebook, and Xbox Live, random matching has given way to more sophisticated guided matching systems. However, random pairing can still be found in a wide range of games such as FPS, sports, and RTS titles.
- Iterated Guided Matching – Players are matched with other players on a range of features including reputation, experience, and play style. Some degree of persistence is entailed in the player profile, as stats and preferences are tracked. Guided matching is different from self-selection because it is guided by a third party (generally a computer algorithm).
- Iterated Self-Selection – Players are allowed to self-select their partners. Self-selection works in some conditions, like MMO guilds, where players can take time to be choosy; however, it may actually slow the pace of play in games where pairing must occur frequently, holding up the more intrinsically motivating elements of play. Imagine trying to self-select each group in a CoD match. Most games provide the appropriate and obvious design feature here: options for both self-selection and guided matching. Consider World of Warcraft, which followed in the footsteps of many MMOs by only allowing self-selection for instanced dungeons. Once Blizzard recognized that players were creating their own emergent matching systems (players are less picky about 5-man dungeons than raids), Blizzard was able to provide players with a guided matching system (Dungeon Finder) that built groups on the fly, saved players considerable time, and incentivized pick-up groups.
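To make guided matching concrete, here is a toy scoring sketch. The profile fields and weights are hypothetical illustrations, not any real platform’s algorithm; the point is simply that reputation, experience, and play style can be combined into a single pairing score.

```python
# Hypothetical guided-matching score: similar experience, similar style,
# and neither player with a poor reputation (the weakest link dominates).

from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    reputation: float  # 0.0 (known problem player) .. 1.0 (trusted)
    experience: int    # e.g. matches played
    style: str         # "casual" or "competitive"

def match_score(a: Profile, b: Profile) -> float:
    rep = min(a.reputation, b.reputation)                 # weakest link
    exp = 1 / (1 + abs(a.experience - b.experience) / 100)  # closeness in skill
    style = 1.0 if a.style == b.style else 0.5             # style mismatch penalty
    return rep * exp * style

def best_partner(player: Profile, pool: list) -> Profile:
    return max(pool, key=lambda other: match_score(player, other))

alice = Profile("Alice", reputation=0.9, experience=200, style="casual")
pool = [
    Profile("Bob", reputation=0.2, experience=210, style="casual"),     # griefer
    Profile("Cara", reputation=0.95, experience=180, style="casual"),
    Profile("Dan", reputation=0.9, experience=900, style="competitive"),
]
print(best_partner(alice, pool).name)  # Cara: trusted, close in skill and style
```

Note how Bob’s low reputation disqualifies him even though his experience is the closest match — which is exactly the role reputation plays in letting players avoid those who caused problems in the past.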
So far I have been talking about matching systems where everyone exists in a search space, and the only constraint on that space is filtering information. However, games like MMOs have physical space. Although matching systems like a dungeon finder or an auction house remove these constraints, space is still worth considering as a mechanism for matching players. For example, in MMOs space provides a means for matching players who are on the same quest or are looking for a trade partner. It seems obvious when I identify space as a mechanism for matching, but space is a feature that meta-systems like Xbox Live often lack and something that we take for granted. Space also plays a big role in social dilemma research1, as spatial configurations influence the success and failure of different strategies.
Space is still a useful way to allow your players to discover others with similar needs and interests. One example is how Rift uses zone events to filter players into public groups. Here the matching system is purely self-selection. Players who are interested in participating in the event simply need to find other players who are participating – Rift designers should consider that this is sometimes a challenge – and then they can join a public group where they know everyone is trying to achieve the same goal. Here space segregates the population based on goals, interests, and motivations.
Reputation systems are closely tied to matching systems. Reputation provides players with a mechanism for quickly determining whether a given partner is trustworthy. There is a very wide range of scientific literature on reputation – enough to fill its own post, or even a book. In online games and social media, reputation systems have grown in importance because they solve coordination problems for players. Reputation systems are often combined with anti-cheat systems to filter out the problem players who are recognizable, and they provide a second layer of defense by letting members of the community avoid problem players themselves. A wide variety of reputation systems exist, and I am going to discuss what I believe are a few important elements of them. In most of the scientific literature on social dilemmas, anonymity breeds defection. Players are probabilistically more likely to defect when others can’t identify them, because when a player is anonymous it is impossible to sanction them. In general, the key to reputation systems is to make it difficult to be anonymous, which seems easy enough but can be a challenge in online environments.
- Persistence – The first place to start is by creating player profiles that are persistent. When a player’s profile exists at a meta-level and travels between games, the player must be more concerned about their reputation. Before social networks most profiles were not persistent, but now it is the norm for a meta-level profile to exist for a gamer.
- Switching Costs – Persistence isn’t really that useful without some switching cost. Without switching costs a player faces no repercussions for leaving their old profile and starting a new one. The thought here is, “If my current profile gets a bad reputation then I can just make another one.” Luckily there are many things that raise switching costs. Bound virtual items have real value and are locked to the character. Facebook profiles carry the cost of creating a new identity (a high cost when you use your real identity to begin with). Simple things like stats and achievements can add to switching costs.
- Friction – Of course a player who wants to be a jerk could simply create a fake account and go to town. This is where tying accounts to real IDs and credit cards has value. Friction is generally thought of as the bane of web developers, but in the case of reputation, friction is an essential part of the switching cost. If creating a new account takes time, requires a credit card, costs money, is tied to my phone number, or has some sort of activation period, then switching costs are raised and reputation is, by consequence, more persistent.
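The persistence/switching-cost/friction logic can be summed up in a back-of-the-envelope model. All field names and numbers below are hypothetical: the idea is just that a player abandons a tainted profile only when the reputation hit outweighs everything they would forfeit by starting over.

```python
# Hypothetical switching-cost model: persistence only deters bad behavior
# when abandoning the profile costs more than keeping the bad reputation.

def switching_cost(profile):
    """Everything lost by abandoning this profile for a fresh one."""
    return (profile.get("bound_item_value", 0)      # bound virtual items
            + profile.get("achievement_points", 0)  # stats and achievements
            + profile.get("account_fee", 0))        # friction: paid signup

def will_abandon(profile, reputation_penalty):
    """A rational jerk ditches a tainted profile only when the bad
    reputation hurts more than the switching cost."""
    return reputation_penalty > switching_cost(profile)

fresh = {"bound_item_value": 0, "achievement_points": 0, "account_fee": 0}
veteran = {"bound_item_value": 500, "achievement_points": 120, "account_fee": 15}

print(will_abandon(fresh, reputation_penalty=50))    # True: nothing to lose
print(will_abandon(veteran, reputation_penalty=50))  # False: 635 points at stake
```

This is why friction measures like account fees and phone-number binding matter: they raise the cost side of the inequality even for brand-new accounts.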
4: Reciprocity
Some scientists argue that reciprocity is intrinsic to human nature because evolutionary pressures favored those who engaged in reciprocal transactions. In a previous blog post I detailed how this might be possible. Given that I’ve covered some of the literature on reciprocity in the past, I’ll quickly note that reciprocity can improve cooperation in a social dilemma because it allows for gains from trade, facilitates communication, and creates ongoing relationships between individuals or groups.
Some writers, including members of this site, have questioned whether social games are using the innate human desire for reciprocity for evil. However, reciprocity is also a force for good. I’ll list some mechanisms that take advantage of reciprocity in games (I won’t cover nearly all of them):
- Left 4 Dead allows players to heal one another when they are injured. Rather than keep the health for themselves, they can share it in the hope that someone will share with them in the future.
- Most MMOs have a Need/Greed/Pass feature. Individuals who establish that they are willing to pass or greed on items that are not immediately useful to them expect future reciprocal exchange.
- Settlers of Catan allows players to trade with one another. Reciprocal alliances often form where players make mutually beneficial exchanges in hopes of increasing their probability of winning.
- In social games players often have the ability to give gifts to others. The hope is that others will return the favor.
- Team sports require sharing a ball or puck in order to make the team better. The hope is that if I pass the puck or ball to you for a tap-in, you’ll do the same for me later.
- These examples only scratch the surface, so feel free to comment if you can think of additional situations.
How to build a game mechanic around reciprocity:
- 1. Give the players a means to reciprocate. If there are no opportunities for reciprocal exchange then players won’t be able to do so. Often scarcity, and gains from trade and specialization, provide mechanisms for players to reciprocate with one another. Space is an important consideration if your game has a lot of diversity: using space to direct players who have common interests, or putting players with common interests into groups, is a means to achieve this.
- 2. Make sure players can communicate their intentions. In a reciprocal transaction the first player to act altruistically opens themselves to defection. If players can communicate their intentions quickly and efficiently, it can lead to less confusion and reduce the chance of accidental defection. Providing players with a mechanism to make credible threats can also increase cooperation: when a player can retaliate, defection becomes less likely. One easy way to do this is through persistent reputation, for the reasons listed above.
- 3. Give the players a reason to interact again. Once players have helped each other, don’t let the positive feelings and cooperation fade away. Give those two players another opportunity to cooperate.
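The three steps above can be sketched as a tiny “favor ledger” — a hypothetical data structure, not any shipped game’s system: a way to reciprocate, a visible history that communicates open debts, and a rematch rule that pairs past cooperators again.

```python
# Hypothetical favor ledger implementing the three reciprocity steps.

from collections import defaultdict

class FavorLedger:
    def __init__(self):
        self.favors = defaultdict(int)  # (giver, receiver) -> favors given

    def help(self, giver, receiver):
        # Step 1: a concrete means to reciprocate (healing, gifting, passing).
        self.favors[(giver, receiver)] += 1

    def owes(self, debtor, creditor):
        # Step 2: visible history communicates intentions and open debts.
        return self.favors[(creditor, debtor)] - self.favors[(debtor, creditor)]

    def rematch(self, player, candidates):
        # Step 3: prefer partners with an existing reciprocal relationship.
        return max(candidates,
                   key=lambda c: self.favors[(c, player)] + self.favors[(player, c)])

ledger = FavorLedger()
ledger.help("Alice", "Bob")                      # Alice heals Bob
print(ledger.owes("Bob", "Alice"))               # 1: Bob owes Alice a favor
print(ledger.rematch("Alice", ["Bob", "Cara"]))  # Bob: the open loop continues
```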
5: Payoff Structure
It may seem obvious that the payoff structure of a social dilemma can increase or decrease cooperation, but one of the more interesting findings of social dilemma research is that playing certain strategies or making structural changes can turn a prisoner’s dilemma (where the equilibrium is mutual defection) into an assurance game (where the equilibrium is mutual cooperation).
Tit-for-tat is one example of a long-term strategy that turns a prisoner’s dilemma into an assurance game beyond the first interaction. Tit-for-tat makes what is often called a credible commitment to only two strategies, allowing opponents to recognize that in any interaction past the first they have two options: mutual cooperation or mutual defection.
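The shift from dilemma to assurance shows up directly in the numbers. Here is a small sketch using the standard textbook payoffs (temptation 5, reward 3, punishment 1, sucker 0): in a single round defection dominates, but against a committed tit-for-tat opponent over many rounds, cooperation becomes the better reply.

```python
# Against a committed tit-for-tat opponent, repeated play turns the
# prisoner's dilemma into an assurance game. Textbook payoffs assumed.

T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def vs_tit_for_tat(my_strategy, rounds):
    """Total payoff for an always-cooperator ('C') or always-defector ('D')
    whose opponent plays tit-for-tat."""
    if my_strategy == "C":
        return R * rounds            # unbroken mutual cooperation
    return T + P * (rounds - 1)      # exploit once, then mutual punishment

# One shot: defection pays more (5 > 3) -- a prisoner's dilemma.
print(vs_tit_for_tat("C", 1), vs_tit_for_tat("D", 1))    # 3 5
# Ten rounds: cooperation pays more (30 > 14) -- effectively an assurance game.
print(vs_tit_for_tat("C", 10), vs_tit_for_tat("D", 10))  # 30 14
```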
Game theory can be used by developers to map the basic payoff structure of certain design elements. It often seems that problems with defection occur in games because developers did not adequately consider the temptation for players to defect. Figuring out player motivation is a difficult proposition: players are often not transparent, and may find intrinsic motivation in exploring the boundaries of a game or doing unexpected things. In these cases it is still worthwhile to understand the basic structure of the game. Game theory provides the elements to sketch a rough formalization of why players are behaving in unanticipated ways. By rearranging the payoff structure of the game, or letting other players adjust their strategies, developers can try to avoid defection and increase cooperation.
Research demonstrates that introducing communication channels or increasing bandwidth almost always leads to more cooperation in a social dilemma. However, as we have seen in many games, communication coupled with anonymity can lead to another channel for abuse and noise pollution. In my previous post I explored why communication channels suffer from pollution.
For those who are interested in designing communication systems that are less prone to abuse, read on here.
Research on social dilemmas has typically focused on the use of sanctions as a means to prevent free-riding and defection. However, in recent years, probably due in part to the influence of games, social scientists have begun to focus on rewards. Game designers are certainly far ahead of social dilemma researchers when it comes to a ‘theory’ of rewards. Evidence suggests that the careful use of rewards can lead to more cooperation and an overall higher payout than sanctions. When game developers design systems they should consider whether players will have higher aggregate satisfaction through rewards or sanctions. Sanctions tend to be more powerful than rewards, so in some cases preventing the behavior is more important than making sure everyone can earn rewards. Once again, a simple game-theoretic incentive model can go a long way toward helping designers understand whether rewards or sanctions are appropriate.
In order to increase cooperation a reward system could be built into the game to incentivize the following things:
- Contributing to the public good / not polluting – In the case of not polluting or contributing to a public good, evidence suggests reward systems will incentivize contributions and decrease defection. Hence, cooperation is increased.
- Mentoring / helping others / playing in a pro-social fashion – Rewarding players for mentoring others is a good way to increase cooperation. In fact, if players with high status are encouraged to be mentors, they may spread norms of pro-social behavior or mentoring.
Sanctions can take a variety of forms. The most common are:
- Player-driven – Players have the power to sanction one another. Like communication channels, this opens a powerful avenue for abuse if the player-driven sanctions do not operate under checks and balances. Player-driven sanctions do offer an opportunity for players to take ownership of policing themselves; in social dilemma research, self-policing often provides a sense of ownership over the system or resource and creates norms of respect and cooperation. Player-driven systems also don’t suffer from some of the surveillance problems that other systems suffer from, as humans are more attuned to anti-social patterns than automated systems. Once again, persistent reputation is an important factor for these kinds of systems.
- Automated – Automated systems find players who exhibit certain properties and expel them. With increasingly sophisticated machine learning algorithms it may not be long before artificial agents are able to police game worlds with very low error rates. Until then these systems suffer from false positives and an inability to detect many forms of anti-social behavior.
- GM-based arbitration – For game developers this can be a reasonable, but costly, solution. Game-masters are hired to investigate problem behaviors. Unlike player communities, game-masters don’t generally abuse their powers of arbitration, and they can also detect problems that automated systems can’t.
Finally, there is nothing that says companies can’t use a mix of all these strategies. In fact, using rewards and sanctions together is something that games have done for a long time. One interesting application of rewards and sanctions working together that I really like can be found here; it’s called the Speed Camera Lottery.
For my final mechanic I want to turn the tables. In the previous sections I have been discussing what social dilemmas can teach games. However, the reader up to this point might be thinking that games can probably teach policy makers and institutional designers important things as well. I completely agree. In the section about rewards, I mentioned that game designers probably know much more about rewards than social dilemma researchers. I believe game designers and game mechanics also have a lot to offer when it comes to increasing efficacy and meaning.
Often the production function of a public good takes the form of a step function or has a very slowly increasing slope. Situations like this might include recycling, conservation, making charitable contributions, or paying taxes. In these situations, the number of individuals involved and the lack of feedback make it difficult for an individual to recognize any noticeable improvement due to their own contribution. Studies of social dilemmas have shown that a simple perception of efficacy leads to increased contributions to a public good1,2. Organizations know and understand this problem. Consider an organization that pairs donors up with sponsor families: it is demonstrating how the donor’s contribution of cents per day is actually making a difference. These systems also often incorporate other feedback, such as letters from the family or photos.
A Step Function
When the production function is a step function, there may be no noticeable increase in output until a given threshold is reached. If individuals can be convinced that only a few more donations are required in order to reach that threshold, research1 has demonstrated that contributions can be increased. In addition, research1 has indicated that placing individuals in groups where each player understands that their individual contribution is critical to the provision of the good increases overall contributions. The key here, again, is finding a way to convince individuals that their contributions matter to the success or failure of the group.
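A step-function public good is simple enough to express directly. The sketch below uses made-up numbers; the design takeaway is that showing a player when their contribution is (nearly) pivotal is exactly the feedback that the research says raises contributions.

```python
# Step-function public good: nothing is produced until total pledges reach a
# threshold, so a contribution only feels meaningful when it is pivotal.

def good_provided(contributions, threshold):
    return sum(contributions) >= threshold

def is_pivotal(my_gift, others, threshold):
    """True when the good is provided with my gift but not without it."""
    with_me = others + [my_gift]
    return good_provided(with_me, threshold) and not good_provided(others, threshold)

others = [40, 30, 20]                          # 90 units already pledged
print(is_pivotal(10, others, threshold=100))   # True: my 10 tips the step
print(is_pivotal(5, others, threshold=100))    # False: still 5 units short
```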
One of the tasks of the new gamification movement is to figure out how to move beyond just providing individuals with badges and achievements and instead provide them with a real sense of meaning. Think about the psychological payoff of getting an achievement versus making the final contribution that moves a production function to its next step. Social media and game mechanics provide us with mechanisms for breaking up projects into many small step functions. What if I could join a Facebook group that recruits a few hundred friends to recycle their plastics, glass, and paper for a month? Imagine if feedback were provided showing how the amount donated improved air quality, saved landfill space, and made new materials. In one of his posts, James Cummings showed us how Stanford University was creating a game that promoted more efficient energy use. Hopefully this is just the tip of the iceberg, and game developers can provide insights to policy makers about how to increase meaning and efficacy in order to increase cooperation and reduce free riding on public goods.