"Our goal is to get it so you don’t have to wait for a report to happen."
- Blizzard's Jeff Kaplan, chatting with Kotaku about why the company is testing out machine learning to combat toxic player behavior.
In a recent chat with Kotaku, Blizzard's Jeff Kaplan said the company is now working to figure out how (or if) machine learning algorithms can be used to train automated systems that catch and punish toxic Overwatch players.
“We’ve been experimenting with machine learning,” Kaplan said. “We’ve been trying to teach our games what toxic language is, which is kinda fun. The thinking there is you don’t have to wait for a report to determine that something’s toxic. Our goal is to get it so you don’t have to wait for a report to happen.”
This follows through on ideas voiced by Blizzard late last year, when it made a show of establishing a "strike team" to combat toxicity in Overwatch. More notably, it offers an interesting look into how a big company like Blizzard thinks about machine learning algorithms, which are growing steadily more popular in the game industry.
Last year, for example, backend services provider PlayFab partnered with IBM to offer game dev clients player data analysis generated by machine learning, and Electronic Arts boss Andrew Wilson talked a big game about using machine learning to generate "interesting and personal stories on a real-time basis."
In Blizzard's case, however, machine learning is being used to train systems that spot toxic language across multiple languages (though not, according to Kaplan, in direct messages between players) and, perhaps one day, toxic in-game behavior.
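Blizzard hasn't shared any details of its approach, but the core idea Kaplan describes, teaching a system what toxic language looks like from labeled examples, is a standard text-classification problem. As a purely illustrative sketch (the toy chat lines, labels, and Naive Bayes model below are my own assumptions, not anything Blizzard has described), it might look like this:

```python
import math
import re
from collections import Counter

# Hypothetical labeled chat lines standing in for the kind of
# report-labeled data a real system would train on (1 = toxic).
TRAIN = [
    ("gg well played everyone", 0),
    ("nice shot great game", 0),
    ("anyone want to group up after this", 0),
    ("good luck have fun", 0),
    ("uninstall the game you idiot", 1),
    ("you are trash worst player ever", 1),
    ("report this idiot player", 1),
    ("trash team you all suck", 1),
]

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return re.findall(r"[a-z]+", text.lower())

class NaiveBayes:
    """Minimal bag-of-words Naive Bayes text classifier."""

    def __init__(self, examples):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.doc_counts = Counter()
        for text, label in examples:
            self.doc_counts[label] += 1
            self.word_counts[label].update(tokenize(text))
        self.vocab = set(self.word_counts[0]) | set(self.word_counts[1])

    def score(self, text, label):
        # log P(label) + sum of log P(word | label), add-one smoothing
        logp = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        total = sum(self.word_counts[label].values()) + len(self.vocab)
        for word in tokenize(text):
            logp += math.log((self.word_counts[label][word] + 1) / total)
        return logp

    def is_toxic(self, text):
        return self.score(text, 1) > self.score(text, 0)

clf = NaiveBayes(TRAIN)
print(clf.is_toxic("gg nice game"))         # -> False
print(clf.is_toxic("you idiot uninstall"))  # -> True
```

A production system would of course use far larger labeled datasets and more sophisticated models, and, as Kaplan notes, would need to work across many languages, but the train-on-reports, flag-without-waiting-for-reports loop he describes follows this same basic shape.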
“That’s the next step,” said Kaplan. “Like, do you know when the Mei ice wall went up in the spawn room that somebody was being a jerk?”
Devs curious about the state of machine learning and Blizzard's anti-toxicity efforts should read the rest of his comments in full over on Kotaku.