Intel has teamed up with Spirit AI to develop machine learning-powered tools for moderating voice chat in video games.
The announcement came during the Game Developers Conference last week, and according to a PCWorld writeup of the talk, the technology still looks to be a few years off.
Spirit AI detailed similar AI-powered moderation tools at GDC a couple of years ago, but those focused on text-based exchanges. The tech the two companies are developing now would expand that same kind of moderation to voice-based communications. Building this kind of moderation for audio is naturally trickier than it is for text, however.
The system wouldn’t, at least in its early stages, automatically censor words based on profanity filters or auto-ban players for abusive speech. Rather, the project aims to record and flag audio deemed to contain toxic or abusive speech and refer the file to a human moderator.
Intel says the plan is to give developers control over what constitutes abusive language, allowing them to strengthen or weaken the filters as needed for their individual games. Neither Intel nor Spirit AI offered a timeline for an official launch, but the effort shows one potential way machine learning-powered tools can help developers manage different aspects of their games.
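Neither company has published implementation details, but the workflow described above (score a clip, flag it past a developer-tuned threshold, refer it to a human rather than auto-banning) can be sketched in a few lines of Python. Everything here is hypothetical: the `toxicity_score` stand-in, the `ModerationQueue` class, and the threshold values are illustrative assumptions, not anything Intel or Spirit AI have described.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModerationQueue:
    """Hypothetical queue that collects flagged clips for human review."""
    flagged: List[str] = field(default_factory=list)

    def refer(self, clip_id: str) -> None:
        # Hand the clip to a human moderator; no automatic punishment.
        self.flagged.append(clip_id)


def toxicity_score(transcript: str) -> float:
    """Toy stand-in for a real ML classifier: fraction of words
    matching a blocklist. A production system would score the audio
    or its transcript with a trained model instead."""
    blocklist = {"idiot", "trash"}
    words = transcript.lower().split()
    if not words:
        return 0.0
    return sum(w in blocklist for w in words) / len(words)


def moderate_clip(clip_id: str, transcript: str,
                  queue: ModerationQueue, threshold: float = 0.2) -> bool:
    """Flag the clip for human review if its score meets the
    developer-tuned threshold; never censor or ban automatically."""
    if toxicity_score(transcript) >= threshold:
        queue.refer(clip_id)
        return True
    return False
```

The `threshold` parameter illustrates the per-game tuning Intel describes: a developer targeting a family-friendly audience could lower it so more clips reach moderators, while one shipping a mature title could raise it to flag only the worst cases.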