It's been over twenty years since the debut of Quake, and developers are still learning how to build artificial intelligence that can play and learn a multiplayer first-person shooter the way a human can.
Devs curious about the state of AI in games should check out a new post on Google's DeepMind blog, which explains how a team of researchers has been trying to train AI agents to "act independently, yet learn to interact and cooperate with other agents" by letting them play (what else?) 1999's Quake III Arena.
An aesthetically modified Capture the Flag mode, specifically, with a special twist: the map layout changes from match to match in a way meant to push AI agents to master general strategies of play instead of map-specific tricks.
"The challenge for our agents is to learn directly from raw pixels to produce actions. This complexity makes first-person multiplayer games a fruitful and active area of research within the AI community," reads an excerpt of the post. "We train agents that learn and act as individuals, but which must be able to play on teams with and against any other agents, artificial or human."
As with earlier DeepMind agents trained on games, these Arena-trained AI "see" only what a player would see (that is, the raw pixels being displayed on the screen, not a direct feed of game data) and are not given any training on how to play the game before being "sat" in front of it and let loose.
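To make that setup concrete, here is a minimal toy sketch of a "pixels-to-actions" loop in Python. This is purely illustrative and not DeepMind's code: the real FTW agents pass pixels through deep recurrent networks trained with reinforcement learning, while everything here (the action list, the three-number feature summary, the random weights) is an invented stand-in.

```python
import numpy as np

# Hypothetical action set for a CTF bot; the real agent's action space
# (movement, aim, fire) is richer than this.
ACTIONS = ["forward", "back", "strafe_left", "strafe_right",
           "turn_left", "turn_right", "fire"]

def observe(frame):
    """Reduce a raw RGB frame (H x W x 3) to a tiny feature vector.
    A real agent would run the pixels through a convolutional network;
    here we just take the mean of each color channel, scaled to [0, 1]."""
    return frame.mean(axis=(0, 1)) / 255.0

def act(features, weights):
    """Score every action from the features and pick the highest-scoring one."""
    scores = weights @ features  # (num_actions,) = (num_actions, 3) @ (3,)
    return ACTIONS[int(np.argmax(scores))]

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(ACTIONS), 3))    # untrained, random "policy"
frame = rng.integers(0, 256, size=(48, 64, 3))  # a fake 48x64 screen capture

features = observe(frame)
chosen = act(features, weights)
print(chosen)
```

The point of the sketch is the interface, not the policy: the agent receives nothing but the rendered frame, exactly as a human player would, and training consists of gradually improving the mapping from those pixels to actions.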
According to Google, the resulting "FTW" ("for the win") AI agent is capable of competing in multiplayer CTF matches at a very high level against other bots or human opponents. In fact, DeepMind claims that after a 40-player tournament that included both human and AI players, the humans rated the AI players as "more collaborative" than their human compatriots.
You can read all the details (and mess around with some neat interactive visualizations) in the full DeepMind blog post.