Toxicity in League of Legends' community is a perennial topic. Riot Games has long aimed to reform its player base -- and today it introduces a system that lets players report offensive in-game chat for automated punishment.
The system won't proactively scan chat for negativity; instead, when players report abusive messages, it analyzes them and, within 15 minutes, hands offending players suspensions or permanent bans.
"Your reports help the instant feedback system understand and punish the kind of verbal harassment the community actively rejects: homophobia, racism, sexism, death threats, and other forms of excessive abuse," writes Jeffrey Lin, Riot Games' lead designer of social systems.
After an initial human-checked test run in North America, the system will be rolled out across the game globally. Notably, Riot also plans to eventually expand it to offer instant rewards to players who show good behavior.
In March, Lin told Gamasutra that the majority of players clean up their act after a warning, because most offenders aren't pervasively negative. And the standards of behavior, Lin says, are set by the community itself: "So when a player is in the game, and they have some negative behavior, they get a message saying that, 'Hey, your peers don't think this is okay online. Your peers don't think this is cool.' That's actually why we see the change."
The full interview with Lin offers an in-depth look at how the developer polices player behavior in League of Legends. Complete details on the new automated system are available on the official League of Legends blog.