An artificial intelligence designed to address toxicity in competitive games has banned 20,000 players from Counter-Strike: Global Offensive (CS:GO) as part of its first live implementation.
Christened 'Minerva,' the 'Admin AI' was designed by FACEIT with the help of Google Cloud and Jigsaw, and was trained through machine learning to address toxic behavior at scale.
As revealed in a post on the FACEIT blog, the AI's first practical implementation focused on identifying and acting on toxic messages in the text chat of CS:GO matches.
After months of training to minimize the likelihood of false positives, the AI was able to weed out harmful messages "without manual intervention" and act on them by issuing offending players a warning for verbal abuse.
Flagged messages were also marked as spam in the chat, while the punishments for repeat offenders became increasingly severe. For those of you with a taste for all things numerical, Minerva analyzed over 200,000,000 chat messages over the past few months, of which 7,000,000 were marked as toxic.
In its first month and a half of activity, the AI dished out 90,000 warnings and 20,000 bans for verbal abuse and spam, with the number of toxic messages falling from 2,280,769 in August to 1,821,723 in September -- a decrease of 20.13 percent.
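For readers who want to sanity-check the reported figure, a quick calculation (using only the message counts quoted above; this is not FACEIT's code) confirms the stated decrease:

```python
# Month-over-month drop in toxic messages, per the figures in the article.
august = 2_280_769
september = 1_821_723

decrease_pct = (august - september) / august * 100
print(f"{decrease_pct:.2f}%")  # prints "20.13%"
```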
"In-game chat detection is only the first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI," wrote FACEIT, commenting on its future hopes for Minerva.
"We’re really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviors in real-time. In the coming weeks we will announce new systems that will support Minerva in her training."
You can find out more about the toxicity-quashing AI by checking out the full FACEIT blog post.