
Battling Toxicity In Online Video Games

Combating toxicity/harassment in online games is an ongoing battle. In this post I cover some past research done on the topic, as well as offer some new avenues to explore.

Game Developer, Staff

January 4, 2018


Toxicity is Bad 

If you have ever been the target of toxicity while playing video games online, you know how injurious the experience can be. If you have been lucky enough to avoid being the target of toxic players, examining any one of the multitude of online gaming victimization reports can quickly clarify why toxicity is so hurtful. Toxicity can be cruel, cutting, and distressing on many levels:

“I was playing league and some guy called me a man kissing faggot. I told him I was a girl so that wasn't really accurate. HUGE mistake. he spent the rest of the game talking about how he was going to find me, kidnap me, tie me up and rape me repeatedly with his "one inch asian dick." needless to say I'm never letting my gender slip in chat again.”

“I suppose at some point I said something that gave it away or he finally picked up on my accent and he asks in this incredulous voice: "wait are you a nigger?" I reply back with "what the fuck" and another teammate mimics my response but that was the tip of the iceberg. From that point on if he ksed or csed me it was "well it’s better this way niggers don't know how to properly use money" or if I died or made a mistake "well I guess you lived up to my expectations." It went on for a bit till I finally muted him and just went about my business. I know he kept it up because my team from time to time would go "that's racist" or "wtf."”

“(Toxicity) is a big part of the reason why I'm not playing the game as much anymore… I have horror stories up the wall of people that I've dealt with. ”

Toxicity is not only damaging to gamers’ emotional well-being--to the point that some are willing to forgo an identity they spent hours building--but it is damaging to the game developers as well. Making games is hard and distressing (and if you have not yet read Schreier’s brilliantly written Blood, Sweat, and Pixels describing how grueling the process can be, I highly recommend you do), but it is a labor of love. People go into game development because they love games, and they want to craft invaluable experiences for others to share. If the game they built turns toxic, it hurts.

Additionally, as toxicity pushes players to withdraw from gaming, it also hurts the industry. Mike Ambinder, a psychologist at Valve, reported at a talk that toxicity is the only factor he has found to predict gaming withdrawal. A game cannot be profitable if no one is playing it. Given all this, it seems imperative to continuously battle toxicity--a problem that may never fully be solved.

Previous Work on Curbing Toxicity

Through talks and articles, a number of researchers have shared their experiences and thoughts on dealing with toxicity in gaming. Ambinder mentions that embedding a question at the end of DOTA 2 matches helped curb toxicity by about 12.5%, claiming cognitive dissonance can explain the technique’s success. Jeffrey Lin, whose work at Riot Games--the maker of League of Legends (LoL)--was featured in Nature, utilized priming and starting the game’s chat muted to lower toxicity (although he does not hypothesize why he expected the latter to work). More recently, Kam Fung gave an interesting talk on LoL’s honor system, and how it was designed to increase pro-social behavior.

Others have tackled the thorny issue of toxicity more theoretically, without sharing personally collected or analyzed data. Ben Lewis-Evans gave a wonderful talk at a past Game Developers Conference (GDC) on toxicity, discussing how game design and structure might be modified to create a more pro-social gaming environment. Similarly, one of my favorite Extra Credits episodes (seriously, if you have not yet watched it--watch it!) teamed up with gamers, academics, and industry members to brainstorm ideas for combating toxicity.

And the industry, in many cases, has responded to the cry for change. It works tirelessly and continuously to combat toxicity, which makes it impossible to know from the outside which methods have already been examined and which interventions are currently being tested. Nevertheless, as I enjoy solving applied problems concerning topics I am passionate about, I have decided to offer a few solutions as well. Some of these are based on the previous works and ideas I could find--such as those mentioned above--while others are rooted in more traditional psychological research.

In this post I focus on reducing toxicity by enacting changes on three different fronts. The first is providing players with tools to help them avoid exposure to toxic players; an example of a current, popular tool serving that purpose is the ability to mute other players. The second front concerns the onset of toxicity, and attempts to curtail toxicity early on, so that it does not snowball throughout the game. Finally, I examine possible ways to augment toxicity regulation, as well as to treat the offenders. Although neither these fronts nor the methods I suggest for each of them are exhaustive in any way, I hope they prove generative of further discourse and ideas to improve games and make them fun, safe spaces for everyone to enjoy.

Muting Frequency

Although toxicity is common in online gaming, players can avoid it by muting other players. Unfortunately, without knowledge of other players’ histories, it is impossible to know whether a player should be muted until after being victimized by him or her. As the Extra Credits team points out, players who are muted extremely frequently are likely toxic players, and Extra Credits suggested auto-muting them from the start.

While that may be an effective method to spare players from victimization, it may also further inflame the toxic players. Upon realizing that they are being singled out for punishment in a game they have not actually aggressed in, they may become further exasperated and engage in even more toxic behavior. It may be more effective to provide players with information regarding the frequency with which other players are muted. By seeing at the beginning of a match what percentage of their play time other players have spent muted (on average), players can make an informed decision about whom to mute before starting the game. This should allow players to avoid future toxic encounters.
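To make this concrete, here is a minimal sketch of how a “percentage of play time spent muted” statistic might be computed and surfaced on a pre-match screen. Everything in it is an assumption for illustration--the MuteStats record, its field names, and the premise that the matchmaking service already logs how long other players kept someone muted are hypothetical, not any studio’s actual telemetry.

```python
from dataclasses import dataclass


@dataclass
class MuteStats:
    """Hypothetical mute history a matchmaking service might keep per player."""
    exposure_seconds: float  # player-seconds in which others shared a match with this player
    muted_seconds: float     # of those, the player-seconds during which this player was muted

    def muted_share(self) -> float:
        """Average share of this player's play time that others spent with them muted."""
        if self.exposure_seconds <= 0:
            return 0.0
        return min(self.muted_seconds / self.exposure_seconds, 1.0)


def prematch_mute_report(lobby: dict) -> list:
    """Lines a pre-match screen could show, most frequently muted players first."""
    ranked = sorted(lobby.items(), key=lambda kv: kv[1].muted_share(), reverse=True)
    return [f"{name}: muted {stats.muted_share():.0%} of play time" for name, stats in ranked]


if __name__ == "__main__":
    lobby = {
        "PlayerA": MuteStats(exposure_seconds=360_000, muted_seconds=9_000),
        "PlayerB": MuteStats(exposure_seconds=120_000, muted_seconds=54_000),
    }
    for line in prematch_mute_report(lobby):
        print(line)  # e.g. "PlayerB: muted 45% of play time"
```

Whether to show a raw percentage, a coarser badge, or nothing at all below some noise threshold is a design decision worth play-testing.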

Curbing Toxicity’s Onset

Although it is important to empower players and give them tools to avoid exposure to toxicity, toxicity still occurs, making it important to curtail. Most toxicity is not enacted by chronic abusers, but it is still important to address, as it can be just as harmful to the victims. These toxic players are often people who slipped up, or had a bad day, and are taking it out on other players. In fact, most players reform quickly after experiencing mild consequences in response to their toxicity.

If toxicity is not malicious, but rather the product of a momentary lapse of judgement or cognitive overload, it stands to reason that these toxic players are not fully aware of the consequences of their behavior. These players might curb their toxicity if alerted to it early on. Although I am not aware of any data on the topic (other than anecdotal stories), it is likely that early toxicity predicts greater overall toxicity in a match. These behaviors snowball, as victims sometimes fight back, and the original toxic player is often further aggravated.

Therefore, it appears imperative to alert players to their toxic behavior as soon as they begin to slip. This might be accomplished in a variety of ways. One way might be to momentarily alter the color of the screen, similar to the way it flashes red in many games when a player is shot. A brief flash of color (and it may pay to test a few colors, as one may be more effective than others) may help players quickly realize they are about to go down the slippery slope of toxicity. Provided that is not a path they would like to go down, that momentarily distracting alert may help curb toxicity early on, providing potential offenders with an impetus to inhibit toxic behaviors and curtailing toxicity’s onset.
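As a rough illustration, the sketch below wires a (deliberately crude) toxicity score to a brief screen tint, in the spirit of the damage flash mentioned above. The flagged-terms list, the scoring function, the threshold, and the renderer.tint_screen hook are all hypothetical stand-ins; a real implementation would lean on the game’s existing chat filter or a trained classifier, and on whatever overlay API the engine provides.

```python
# Hypothetical stand-in for a real toxicity classifier or the game's chat filter.
FLAGGED_TERMS = {"uninstall", "trash teammate", "kys"}


def toxicity_score(message: str) -> float:
    """Crude score: fraction of flagged terms present, capped at 1.0."""
    text = message.lower()
    hits = sum(term in text for term in FLAGGED_TERMS)
    return min(hits / 2.0, 1.0)


class EarlyWarningOverlay:
    """Flashes a screen tint the moment a player's own chat starts turning toxic."""

    def __init__(self, flash_color=(255, 140, 0), flash_seconds=0.3, threshold=0.5):
        self.flash_color = flash_color      # worth A/B testing several colors
        self.flash_seconds = flash_seconds  # brief, so it distracts without punishing
        self.threshold = threshold

    def on_outgoing_chat(self, message: str, renderer) -> None:
        """Called before a chat message is sent; 'renderer' is an assumed engine hook."""
        if toxicity_score(message) >= self.threshold:
            renderer.tint_screen(self.flash_color, duration=self.flash_seconds)
```

The point is not the scoring heuristic but the timing: the alert fires on the would-be offender’s own message, before the behavior has a chance to snowball.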

Creating Gaming Zones With Stricter Regulation

Once toxicity has occurred, it must be dealt with. One option is to have the industry take a harsher stance on toxic behavior. Some might worry that by limiting players’ ability to have “fun”, companies might be financially self-handicapping. While I do not have empirical data to undermine that argument, I think PlayerUnknown’s Battlegrounds’ (PUBG) code of conduct offers anecdotal evidence.

PUBG not only limits players’ ability to use any discriminatory language, it explicitly prohibits the use of extreme foul language of any kind. Furthermore, the rules conclude with the following disclaimer: “These rules are neither final nor exhaustive - we reserve the right to suspend disruptive users even if their behaviour doesn’t fall under any of the above rules directly. Be nice, play fair and respect others and yourself.” The rules do not seem to be slowing down the game’s popularity: with 1.34 million concurrent players, it recently broke Valve’s own DOTA 2 record on Steam. One might argue, based on this case, that curbing toxicity will not only not hurt business, but might even help it.

However, even if a company does not wish to enforce stricter rules on language globally, it might still be able to create a space where all players feel comfortable. While this may sound like trying to eat one’s cake while keeping it whole, as an avid cake-eater, I think it is less complicated than that: companies can selectively enforce stricter rules in zones that players opt into.

Players might be offered the opportunity to participate in either a regular- or a strict-enforcement game. In regular-enforcement games, players could continue acting as toxic as they currently do without extra penalization. Those who wish to avoid exposure to the current toxicity levels, and are willing not to engage in it themselves, can opt to play in a strictly regulated environment. This would allow some players to say what they want, while still offering a safer, cleaner space for those who need it to be comfortable playing.

Thus, stricter filters for toxic communications, as well as harsher, immediate consequences, would be limited to only the "safe zone" games. A demand for high-regulation zones can be deduced from the success of Smash Sisters, which ESPN reports as a place to “have fun... (with) no trash-talking.” Finding the perfect balance for tougher regulation may be challenging, but creating predetermined, opt-in virtual spaces may help increase players’ psychological safety and satisfaction.
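A minimal sketch of what per-queue enforcement settings could look like follows. The EnforcementPolicy fields and the specific numbers are illustrative assumptions, not any game’s actual rules; the point is simply that the strict, opt-in queue gets tighter filters and faster consequences while the regular queue is left unchanged.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EnforcementPolicy:
    """Hypothetical moderation settings attached to a matchmaking queue."""
    filter_level: str          # what the chat filter blocks
    mute_on_first_flag: bool   # immediate, in-match consequence
    reports_to_suspend: int    # valid reports needed to trigger a suspension


REGULAR_QUEUE = EnforcementPolicy(
    filter_level="discriminatory-language-only",
    mute_on_first_flag=False,
    reports_to_suspend=20,     # illustrative number
)

STRICT_QUEUE = EnforcementPolicy(
    filter_level="all-extreme-language",
    mute_on_first_flag=True,
    reports_to_suspend=5,      # illustrative number
)


def policy_for(opted_into_strict_zone: bool) -> EnforcementPolicy:
    """Matchmaking picks the policy from the player's opt-in flag."""
    return STRICT_QUEUE if opted_into_strict_zone else REGULAR_QUEUE
```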

Trustworthy Players as Informal, Semi-Moderators

An easier way (albeit not a mutually exclusive one) to increase regulation may be to leverage the copious human capital in games--the players. Currently, players can often report a certain number of offences committed by other players each week. (To the best of my knowledge,) every report has equal weight in determining whether the reported player will be penalized.

Often, reported players get penalized only after a certain number of reports are filed against them. Doing so ensures that false reports--such as reports intended to punish another player out of spite--do not lead to penalization. Rather, a pattern of offensive behavior must first be determined to ensure that the player is likely truly misbehaving.

However, all players only start out equal(ly untrustworthy as reporters and keepers of the virtual peace). Once they file enough complaints, it should be fairly simple to discern statistically who reports accurately and who abuses the reporting system for their own agenda. For example, if someone has played 1,000 games, and her (or his) reports are of players who end up being penalized 95% of the time, it stands to reason that this reporter is almost certainly not attempting to abuse the system.

Therefore, awarding these types of players--who have previously proven their trustworthiness--additional weekly reports may help better regulate a game. Furthermore, they could be leveraged as unofficial (or official--the new semi-moderator status might be presented as some sort of award for accurate reporting) moderators by making their reports count for more.

For example, while it may usually take 20 reports to ban a player for a week, a player with “super reports” filed against him or her by the unofficial player-moderators may only require 5 reports to be banned. Similarly, perhaps players with a high false-reporting rate should receive even fewer reporting opportunities, and have their reports carry less weight in the toxicity trend analysis, until they can demonstrate that they are not abusing the reporting system. By treating players’ reports disparately based on previous knowledge of their perceived trustworthiness, regulation may become faster and cheaper, leading to a decrease in toxic players in the game.
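The sketch below shows one way such trust weighting might be computed, assuming the service already records how many of each reporter’s past reports were ultimately upheld. The accuracy cut-offs, weights, and ban threshold are illustrative assumptions chosen to echo the 20-versus-5 example above, not tuned values.

```python
from dataclasses import dataclass


@dataclass
class Reporter:
    """Hypothetical reporting history used to weight a player's future reports."""
    reports_filed: int = 0
    reports_upheld: int = 0  # reports whose target was ultimately penalized

    def accuracy(self) -> float:
        return self.reports_upheld / self.reports_filed if self.reports_filed else 0.0

    def weight(self) -> float:
        """Everyone starts at 1.0; proven reporters file 'super reports', while
        habitual false reporters count for less until they rebuild trust."""
        if self.reports_filed < 50:   # not enough history to judge yet
            return 1.0
        if self.accuracy() >= 0.95:
            return 4.0                # ~5 super reports equal 20 ordinary ones
        if self.accuracy() <= 0.25:
            return 0.25
        return 1.0


BAN_THRESHOLD = 20.0  # weighted report "points" that trigger a one-week ban


def should_ban(report_weights: list) -> bool:
    """A player is banned once the weighted sum of reports crosses the threshold."""
    return sum(report_weights) >= BAN_THRESHOLD


if __name__ == "__main__":
    trusted = Reporter(reports_filed=200, reports_upheld=192)  # 96% accuracy
    print(should_ban([trusted.weight()] * 5))                  # True: 5 * 4.0 >= 20
```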

Augmenting Punitive Messages

Although reducing toxicity begins with identifying the offenders, rehabilitating them requires giving them information on what they did wrong. To correct someone’s behavior, that person must first understand what they did wrong, why it was wrong, and what they should do differently next time. Over the past few years, companies issuing penalties to toxic players have begun accompanying the penalty notice with an explanation of the punishable offence.

Augmenting the current ban reports that toxicity offenders receive may increase their efficacy. Previous research suggests that modifying the reports to target players’ perceptions of control may be fruitful. According to Ajzen’s theory of planned behavior (1985), not everyone views the events they experience as a product of their controllable behaviors. This is unfortunate, as employees with high self-control beliefs do not require punishment to behave appropriately (Workman & Gathegi, 2006), demonstrating the importance of cultivating control perceptions in possible offenders.

Converging evidence on the importance of control comes from research on framing messages as opportunities. This research demonstrates that people are most likely to act appropriately (e.g. not cheat) when a behavior is framed in a way that defines their identity, by utilizing an actor-noun label. The actor noun frames the behavior as defining of the individual as a person. For example, unlike telling someone not to be a cheater, telling someone not to cheat does not indicate that the person is a cheater; rather, it indicates that the person may potentially engage in an occasional, unethical behavior. Therefore, instructing “Don’t be a cheater” is superior to instructing “Don’t cheat” (Bryan, Adams, & Monin, 2013), as the latter does not necessarily indicate that the person is chronically immoral.

Furthermore, as actor nouns promote the perception that the traits in question are essentialized--biologically based and immutable--offenders should be referred to in terms of their potential, positive self, rather than their current, abusive self. This promotes essentializing the player as a good person, along with highlighting the player as the actor (e.g. “You are an important contributor to the game” versus “You make important contributions to the game”; Walton & Banaji, 2004).

Finally, as people are more receptive when harmful effects are attributed to the activity rather than the actors (e.g. “Immigration drives down wages” versus “Immigrants drive down wages”; McGlone), the harmful effects of toxicity should be attributed to the abuse itself rather than the player. Thus, an augmented message might include a sentence such as: “You are an important contributor to the game, and we hope to see you in many future matches. However, as verbal abuse destroys the player community, measures must be taken against it--though we look forward to your return. Be the change you would like to experience; don’t be an abuser.”

Coda

Redressing toxicity is a complicated, long-standing problem. Although it may never be completely eradicated, it is not one that the industry--whether as individuals or as businesses--can ignore. As evident from its interest in the topic, the industry is continuously and actively attempting to fight toxicity.

In this post I proffered a few additional ways to fight toxicity at its different levels. I began by suggesting a method to help players avoid victimization altogether. I continued with a way to keep toxicity from snowballing out of control, as well as ways to dissuade past toxic players from continuing to engage in harmful behaviors in future games.

One idea focused on decreased tolerance--either overall, or inside designated high-regulation zones. Another concerned increased moderation by capitalizing on trustworthy players’ reports, and giving them increased reporting opportunities, increased weight for their reports, or both. Finally, I suggested possible modifications to the ban reports, in hopes of increasing their efficacy.

These fronts and ideas are not exhaustive, and fighting toxicity is a full-time job (and one I would love to fill!). For example, all these ideas focus on curbing toxicity strictly at the individual level, as opposed to motivating change at the community level. I do have thoughts about creating larger systems to promote prosocial norms. However, they will, unfortunately, have to wait for an additional post on the topic.

Thoughts? Musings? Questions? Please add them in the “comments” section!
