

Moderating any online community is a difficult proposition, but how do you stop gamers who are intent on breaking the rules?

Janne Huuskonen, Blogger

May 9, 2022

8 Min Read

Toxic behaviour in gaming has become a widespread problem. There are many factors behind bad player behaviour, but the end result is that community leaders are forced to devote a lot of time and resources - which would be far better spent elsewhere - to curbing toxicity.

Currently, game community moderation is done using a combination of banned word lists, informal rules, human moderators and advanced (and not so advanced) technology. The fact that misinformation, racism, misanthropy and all kinds of toxic behaviour remain a constant theme in the media shows just how limited many of these methods are.

Creating a moderation strategy is made much harder when we don’t know exactly what motivates some gamers to behave in disruptive or toxic ways. Some point to the high-pressure environments of esports or the competitive advantage gained from distracting other players, whilst others suggest toxicity in online environments has simply become the norm. Whatever the reason, it seems like there will always be a minority motivated by upsetting people and ‘winning’ at all costs.

Despite the best efforts of game companies, players are constantly finding workarounds to get past the rule-based systems those companies have in place - which is made easier when those systems fail to pick up toxic behaviours in the first place. For example, the streaming platform Twitch tackles its community problems with a combination of user reporting and rule-based moderation - essentially a comprehensive list of banned words, phrases and emojis. Yet streams are regularly targeted by ‘hate raids’, where attackers use bots to overwhelm a streamer’s chat with hateful messages. This has led many users to repeatedly rally behind the hashtag ‘#TwitchDoBetter’, calling for the platform to introduce more substantive safety policies.
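
To illustrate why word-list systems are so easy to sidestep, here is a minimal sketch of an exact-match banned-term filter. The terms and messages are made up for illustration and are not taken from any platform’s actual list.

```python
# A minimal sketch of a rule-based filter: an exact-match banned-term list.
# Banned terms and example messages are hypothetical.
BANNED_TERMS = {"hate raid", "kys", "trash player"}

def is_blocked(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BANNED_TERMS)

print(is_blocked("kys already"))        # True - exact phrase is on the list
print(is_blocked("k y s already"))      # False - trivial spacing defeats the filter
print(is_blocked("h8 r4id incoming"))   # False - leetspeak slips straight through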

Competitive gaming has an especially bad toxicity problem

Toxic behaviours seem especially prevalent within the competitive gaming scene. Our research has shown that nearly three-quarters (70%) of players in online multiplayer games have experienced some form of harassment. This includes sexual harassment, hate speech, threats of violence, doxing (publicising others’ private information), spamming, flaming (sending strong emotional statements meant to elicit negative reactions), griefing (using the game in unintended ways to harass others), and intentionally inhibiting the performance of one’s own team.

These disinhibited behaviours are fuelled by the anonymity that virtual environments afford, as well as by players treating others’ behaviour as a benchmark for what is acceptable. Research suggests this helps explain why toxicity is somewhat contagious: exposure to toxic behaviour in previous games has been shown to increase the likelihood that a player will commit similar acts in future games.

The esports juggernaut Overwatch has had its own issues with toxicity beyond just the game itself. Jeff Kaplan, Blizzard Entertainment’s Vice President, explained how their exhaustive moderation efforts also negatively impacted other areas of operations. “We want to make new maps, we want to make new heroes, we want to make animated shorts. But we’ve been put in this weird position where we’re spending a tremendous amount of time and resources punishing people and trying to make people behave better.”

Moderating manually at scale is a real challenge, and if part of the community is determined to keep engaging in toxic behaviour, even gaming giants can struggle to manage the problem.

The right tool for the job?

Larger games such as Fortnite or Minecraft use a combination of human and automated moderation, identity verification and community education. Most commonly, text chat is automatically scanned for banned words, with anything ambiguous or voice-based passed on to human moderators. The problem with this - beyond the fact that it’s not very efficient - is that human moderation is itself susceptible to bias.

Facebook already has 15,000 moderators on its payroll, and a 2020 study from New York University suggested it needed double that just to keep up with current post volumes. Facebook has famously gone to great lengths to curtail inappropriate content, yet Mark Zuckerberg admitted in a white paper that human moderators “make the wrong call in more than one out of every 10 cases,” which, in real terms, equates to about 300,000 mistakes every day.

For example, with three million posts to moderate each day spread across 15,000 moderators, each person handles 200 posts per eight-hour shift - 25 posts an hour, or roughly 144 seconds per decision. That is not a lot of time to judge whether a post meets or violates community standards.
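
The back-of-envelope arithmetic behind those figures, using the post volume and headcount quoted above:

```python
# Back-of-envelope check of the moderation workload described above,
# assuming an eight-hour shift and the figures quoted in the text.
posts_per_day = 3_000_000
moderators = 15_000
shift_hours = 8

posts_per_moderator = posts_per_day / moderators      # 200 posts per shift
posts_per_hour = posts_per_moderator / shift_hours    # 25 posts per hour
seconds_per_post = 3600 / posts_per_hour              # 144 seconds per decision

print(posts_per_moderator, posts_per_hour, seconds_per_post)
```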

It’s no wonder that companies are looking for more innovative tools to solve their moderation problems. FaceIt is a popular esports community of 22 million players that has tested a community-based approach powered by machine learning to help moderate its matches. The system is trained on previous cases of flagged behaviour, using the decisions of moderators as a guide when processing new reports of abuse.
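
As a rough illustration of that kind of supervised approach - a sketch of the general technique rather than FaceIt’s actual system, with made-up messages and labels - a text classifier can be trained directly on past moderator decisions:

```python
# A minimal sketch of supervised moderation: train a classifier on chat messages
# that community moderators have already labelled. Illustrative data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical moderator decisions: (message, 1 = actioned as toxic, 0 = left alone)
labelled_chat = [
    ("gg wp everyone", 0),
    ("uninstall the game you useless bot", 1),
    ("nice clutch!", 0),
    ("go back to where you came from", 1),
]
messages, labels = zip(*labelled_chat)

# Character n-grams help catch deliberately obfuscated spellings ("n00b", "id1ot")
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

# New reports can then be scored automatically and queued for action or review
print(model.predict_proba(["what a trash teammate"])[:, 1])
```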

Within the first month of the system being used to moderate Counter-Strike: Global Offensive (CS:GO) matches, it identified 4% of all chat messages as toxic, affecting more than 40% of all matches. After a further two weeks, the system had issued 20,000 bans and 90,000 warnings, leading to a 20% reduction in toxic messages and a 15% reduction in the number of matches affected by this behaviour.

These figures show promising results, and at face value, community feedback may seem like a good approach - but the system still leans heavily on human moderation, and only flags behaviours that have previously been addressed by the community moderators.

Relying on such a narrow dataset opens the door to bias. This means that, from the outset, a system could be engineered with inherent bias - leaving moderation flawed and specific demographics at risk of being alienated. For example, a reference to ‘building a wall’ may not seem offensive to some, nor would it necessarily be flagged by community or rule-based moderation. But the phrase - popularised by the Trump administration - could carry heavy negative connotations for someone of Mexican descent. What’s more, the bias becomes even more pronounced if a single ‘bad apple’ moderator simply flags posts and opinions they disagree with.

The community feedback approach also has other drawbacks. More people will be exposed to harmful content because, by design, users need to see inappropriate content before they can flag it to community leaders. That might be manageable in a small player community, if it means a handful of nasty chat messages getting through. But what about an MMO with millions of players? In that scenario, even a small percentage means thousands of toxic chat messages per day.

AI will be essential to deliver moderation at scale

At scale, the sheer volume of chat logs and data only compounds the challenge, making manual moderation increasingly ineffective, costly and labour-intensive. With the odds stacked against them, community leaders are now turning to more innovative tools to curtail the issue.

AI has the potential to be a truly transformative tool, but it has its limitations and shouldn’t be considered a silver bullet against inappropriate content. Despite common misconceptions, AI is an incredibly broad area of research, and the tools and systems that are encompassed by the term have widely varying capabilities.

For instance, traditional rule-based AI approaches often just use generic lists of words, dictionaries and ontologies. Advanced AI systems, by contrast, learn to mimic human decision-making on a much deeper level, understanding the semantic meaning of sentences, context and inference in a way that rule-based systems can’t. Advanced AI is much more accurate, much faster to implement, and can scale to new languages within days rather than months or years.
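
As a hedged sketch of what this looks like in practice, a pretrained transformer can score the meaning of a whole message rather than matching individual words. The model below (unitary/toxic-bert, a publicly available community model on Hugging Face) is used purely as an example and is not the system of any company mentioned in this article.

```python
# A sketch of semantic toxicity scoring with a pretrained transformer.
# Requires the transformers library and a model download on first run.
from transformers import pipeline

toxicity_model = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "good game, well played",
    "nobody wants you here, go back to where you came from",
    "you played like absolute garbage, uninstall",
]
for message in messages:
    result = toxicity_model(message)[0]
    # Output format depends on the model: typically a label plus a confidence score
    print(f"{result['label']:>10} {result['score']:.2f}  {message}")
```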

Advanced AI is the logical successor to take up the mantle from purely manual moderation. Crucially, it avoids the inherent bias of human moderators which, as exemplified by Facebook’s self-confessed error rate, can leave large numbers of users exposed on a daily basis.

The best approach is neither 100% AI nor 100% manual, but a combination of the two. Human input is vital: it elevates the capabilities of advanced AI systems by helping to train and guide the system’s decision-making as it works, improves and continues learning. Human moderators also have another essential role, defining which content is appropriate and what needs to be moderated in a given community. A human-first approach ensures AI is used ethically, while letting AI take on the brunt of the duties means human moderators have far less repetitive manual work and can focus on more rewarding, value-driving tasks - as sketched below.
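
A minimal sketch of that hybrid set-up: the model automatically handles the clear-cut cases and routes ambiguous ones to a human review queue, whose decisions can later be fed back as new training data. The thresholds and function names here are illustrative assumptions, not a reference implementation.

```python
# Hybrid routing sketch: auto-action on high-confidence cases, human review otherwise.
AUTO_ACTION_THRESHOLD = 0.95   # confident enough to act without a human
HUMAN_REVIEW_THRESHOLD = 0.60  # the uncertain band goes to moderators

def route_message(message: str, toxicity_score: float) -> str:
    if toxicity_score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"      # e.g. hide the message, warn the player
    if toxicity_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"     # queued for a moderator; their decision becomes a label
    return "allow"

print(route_message("uninstall and never come back", 0.97))  # auto_action
print(route_message("that was a questionable call", 0.70))   # human_review
print(route_message("gg everyone", 0.02))                    # allow
```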

As gamers continue to spend more time online, ill-suited moderation tools will keep leaving players and communities open to abuse. Despite industry-wide efforts to improve moderation practices - such as Discord’s recent acquisition of Sentropy, or Twitter’s ‘bug bounty’ programme that aims to eliminate moderation bias on the platform - many industry leaders continue to rely on ineffective solutions. This risks further normalising and ingraining toxic behaviours as an accepted part of online gaming culture, which has broader implications for the whole industry.

Moderation is still seen as a post-launch problem, but the reality is that it needs to be considered part of the design process. There is no point building community-driven games if community leaders are not given the right tools to curtail toxicity - and we know that a toxic experience drives players away. AI-based tools offer a way to moderate games at the scale needed, but adoption won’t happen as long as big publishers rely on outdated and disproven methods.

