Game AI: The State of the Industry, Part Two

Last week in Part One of this article, Steven Woodcock took inventory of the current state of game AI, based on the roundtables he led at the 2000 Game Developers Conference. Now in Part Two, Ensemble Studios' Dave Pottinger looks at what the future holds for game AI, and University of Michigan Professor John E. Laird discusses bridging the gap between AI researchers and game developers.

November 8, 2000

Authors: Dave Pottinger and John E. Laird

As I slowly reclined into the seat of the last E3 bus this spring, I was certain of two things: some really great games were coming out in the next year, and my feet hurt like hell. A lot of the games that created a buzz featured excellent AI. Since my fellow Ensembleites assured me (repeatedly) that no one really cared to hear about my feet, I thought I'd use this space to talk about some of the games coming out in the next 18 months and the new and improved AI technology that will be in them.

Better AI Development Processes and Tools

AI has traditionally been slapped together at the eleventh hour in a product's development cycle. Most programmers know that the really good computer-player (CP) AI has to come at the end because it's darn near impossible to develop CP AI until you know how the game is going to be played. As the use of AI in games has matured, we're starting to see more time and energy spent on developing AI systems that are modular and built in a way that allows them to be tweaked and changed easily as the gameplay changes. This allows the AI development to start sooner, resulting in better AI in the final product. A key component in improving the AI development process is building better tools to go along with the actual AI.

For Ensemble's third real-time strategy (RTS) game, creatively code-named RTS3, we've spent almost a full man-year so far developing a completely new expert system for the CP AI. It's been a lot of work taking the expert system (named, also creatively, XS) from the in-depth requirements discussions with designers to the point where it's ready to pay off. We've finally hit that payoff and have a very robust, extensible scripting language.

The language has been so solid and reusable that, in addition to using it to write the CP AI content, we're using it for console and UI command processing, cinematic control, and the extensive trigger system. We also expect to use XS to write complicated conditional and prerequisite checking for the technology tree; this way, the designers can add off-the-wall prerequisites for research nodes without programmer intervention. Finally, we will also use the XS foundation to write the script code that controls the random map generation for RTS3. The exciting aspect of XS from a tools standpoint is that we will have XS debugging integrated with RTS3's execution. For fans who used the Age of Empires II: The Age of Kings (AoK) expert-system debugging (a display table of 40 or so integer values), this is a huge step up, since XS will significantly increase the ease with which players can create AI personalities.
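
Ensemble hasn't published XS itself, so purely as a hypothetical illustration of the idea, here is a small C++ sketch of a data-driven prerequisite check, with lambdas standing in for interpreted script functions; TechTree, GameState, and the "Ballistics" rule are all invented for this example:

```cpp
// Hypothetical sketch (not actual XS or RTS3 code): one way an engine might
// let designers attach named conditions to tech-tree research nodes.
#include <functional>
#include <map>
#include <string>

struct GameState {
    int playerAge = 2;        // e.g., the player's current "age"
    int relicsHeld = 0;       // an off-the-wall designer condition
};

class TechTree {
public:
    using Prereq = std::function<bool(const GameState&)>;

    // Designers register conditions by node name; here C++ lambdas stand in
    // for what would really be interpreted script functions.
    void registerPrereq(const std::string& node, Prereq fn) {
        prereqs_[node] = std::move(fn);
    }

    bool canResearch(const std::string& node, const GameState& gs) const {
        auto it = prereqs_.find(node);
        return it == prereqs_.end() || it->second(gs);  // no prereq == allowed
    }

private:
    std::map<std::string, Prereq> prereqs_;
};

int main() {
    TechTree tree;
    // "Research Ballistics only in age 3+ while holding at least one relic."
    tree.registerPrereq("Ballistics", [](const GameState& gs) {
        return gs.playerAge >= 3 && gs.relicsHeld >= 1;
    });

    GameState gs;
    return tree.canResearch("Ballistics", gs) ? 0 : 1;  // false with defaults
}
```

The appeal of this kind of design is that adding a new rule touches only data or script, never the engine's research code.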

Better NPC Behavior

In the early days of first-person shooters, non-player characters (NPCs) had the intelligence of nicely rounded rocks. But they've been getting much better lately -- look no further than Half-Life's storytelling NPCs and Unreal Tournament's excellent bot AI. The market success of titles such as these has prompted developers to put more effort into AI, so it looks as if smarter NPCs will continue to show up in games.

Grey Matter Studios showed some really impressive technology at E3 with Return to Castle Wolfenstein. When a player throws grenades at Nazi guards, those guards are able to pick up the grenades and throw them back at the player, adding a simple but very effective new wrinkle to NPC interactivity. A neat gameplay mechanic that arises out of this feature is the player's incentive to hold on to grenades long enough so they explode before the guards have a chance to throw them back. Thankfully, Grey Matter thought of this and has already made the guards smart enough not to throw the grenades back if there's no time to do so.
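
A back-of-the-envelope version of that decision might look like the C++ sketch below; the structures, names, and safety margin are hypothetical, not Grey Matter's actual code:

```cpp
// Illustrative sketch only: a fuse-time check along the lines described above.
struct Grenade {
    float fuseRemaining;      // seconds until detonation
    float distanceToGuard;    // meters from the nearest guard
};

struct Guard {
    float moveSpeed;          // meters per second
    float pickupAndThrowTime; // seconds needed once the grenade is reached
};

// Only attempt the throw-back if the guard can reach the grenade and get it
// back in the air before it detonates (with a small safety margin).
bool shouldThrowBack(const Guard& g, const Grenade& n) {
    const float margin = 0.25f;
    float timeNeeded = n.distanceToGuard / g.moveSpeed + g.pickupAndThrowTime;
    return timeNeeded + margin < n.fuseRemaining;
}
```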

More developers are coupling their AI to their animation/simulation systems to produce characters that move with more realism and accuracy. Irrational did this with System Shock 2, and other developers have done the same for their projects. The developers at Raven are doing similar things with their NPC AI for Star Trek: Elite Force. They created a completely new NPC AI system that's integrated into their Icarus animation system. Elite Force's animations are smoothly woven into the character behavior, which prevents pops and enables smooth transitions between animations. The result is a significant improvement to the look and feel of the game. I believe that as the use of inverse kinematics in animation increases, games will rely on advanced AI state machines to control and generate even more of the animations. As a side benefit, coupling AI to animation also yields more code reuse and memory savings.
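
As a hedged illustration of that coupling (none of these class names come from Icarus or any shipping engine), an NPC state machine might own its animation requests so that every state change goes through a blended transition:

```cpp
// Minimal sketch: the AI state machine drives animation, so clips change only
// through explicit blends instead of popping between animations.
#include <string>

enum class NpcState { Idle, Alert, Attack };

class AnimationSystem {
public:
    // Blend from the current clip to 'clip' over 'blendTime' seconds.
    void requestBlend(const std::string& clip, float blendTime) { /* ... */ }
};

class NpcAI {
public:
    explicit NpcAI(AnimationSystem& anim) : anim_(anim) {}

    void update(bool enemyVisible, bool inWeaponRange) {
        NpcState next = state_;
        if (enemyVisible && inWeaponRange) next = NpcState::Attack;
        else if (enemyVisible)             next = NpcState::Alert;
        else                               next = NpcState::Idle;

        if (next != state_) {              // animation follows the AI state change
            switch (next) {
                case NpcState::Idle:   anim_.requestBlend("idle",   0.3f); break;
                case NpcState::Alert:  anim_.requestBlend("alert",  0.2f); break;
                case NpcState::Attack: anim_.requestBlend("attack", 0.1f); break;
            }
            state_ = next;
        }
    }

private:
    AnimationSystem& anim_;
    NpcState state_ = NpcState::Idle;
};
```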

Better Communication Using AI

Since the days of Eliza and HAL, people have wanted to talk with their computers. While real-time voice recognition and language processing are still several years off, greater strides are being made to let players better communicate with their computer opponents and allies.

For example, in our upcoming AoK expansion pack, The Conquerors, we've enabled a chat communication system that lets you command any computer player simply by sending a chat message or selecting messages from a menu. Combined with AoK's support for scripting your own CP AI, this lets you craft a computer ally that plays on its own and carries on conversational exchanges with you in random-map games. This is a small step toward the eventual goal of having players talk to their computer allies in the same way they talk to humans. Unfortunately, we still have to wait a while for technology to catch up to our desire.
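
A minimal sketch of how chat text might be routed to computer-player goals appears below; the command strings and goal names are invented for illustration and are not The Conquerors' actual protocol:

```cpp
// Hypothetical sketch: mapping ally chat messages onto CP AI goals.
#include <map>
#include <string>

enum class AiGoal { None, AttackEnemy, BuildNavy, TributeGold, DefendAlly };

class ChatCommandRouter {
public:
    ChatCommandRouter() {
        commands_["attack the enemy"] = AiGoal::AttackEnemy;
        commands_["build a navy"]     = AiGoal::BuildNavy;
        commands_["send me gold"]     = AiGoal::TributeGold;
        commands_["defend me"]        = AiGoal::DefendAlly;
    }

    // Returns the goal to hand to the CP AI, or None if the text isn't a command.
    AiGoal route(const std::string& chatText) const {
        auto it = commands_.find(chatText);
        return it == commands_.end() ? AiGoal::None : it->second;
    }

private:
    std::map<std::string, AiGoal> commands_;
};
```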

In addition to adding great new features, many upcoming games have simply improved on existing AI features, particularly in the area of pathfinding. No one likes screaming at the stupidity of unit movement. Despite the seemingly simple nature of the problem, pathfinding in games has become a big topic in recent years. Many games (including our own Age of Empires) have been roasted for bad pathfinding.

In the next year, we will likely see more true 3D games, necessitating the use of pathfinding algorithms that work in three dimensions rather than a hacked-up 2.5 dimensions (two dimensions with a small number of third-dimension planes at fixed heights). Pathing and moving true 3D flying units is much harder than moving units around on the ground, due to the desire to have units bank and turn realistically. So far, no one has proffered a simple solution for pathing in true 3D while taking into account things such as turn radius and other movement restrictions. Instead, most games path without any movement restrictions, use movement restrictions when possible while the unit follows the path, and resort to a contrived turn-in-place approach when movement restrictions conflict with the path.
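
The C++ sketch below shows the shape of that compromise under simplified assumptions (yaw-only steering, an arbitrary sharpness threshold); it is not code from any shipping game:

```cpp
// Rough sketch: path as if unconstrained, respect the turn-rate limit while
// following, and fall back to turning in place when the next waypoint demands
// too sharp a turn. All names and thresholds are hypothetical.
#include <cmath>

struct Vec3 { float x, y, z; };

float headingTo(const Vec3& from, const Vec3& to) {
    return std::atan2(to.z - from.z, to.x - from.x);   // yaw only, for brevity
}

// Advance one simulation tick toward 'waypoint'.
void followPathStep(Vec3& pos, float& heading, const Vec3& waypoint,
                    float maxTurnRate, float speed, float dt) {
    float desired = headingTo(pos, waypoint);
    float diff = std::remainder(desired - heading, 2.0f * 3.14159265f);
    float maxTurn = maxTurnRate * dt;

    if (std::fabs(diff) > maxTurn * 4.0f) {
        // Turn-in-place fallback: the required turn is too sharp to fly through.
        heading += (diff > 0 ? maxTurn : -maxTurn);
        return;                                        // no forward motion this tick
    }
    // Banked turn: clamp the heading change and keep moving.
    heading += (diff > 0 ? std::fmin(diff, maxTurn) : std::fmax(diff, -maxTurn));
    pos.x += std::cos(heading) * speed * dt;
    pos.z += std::sin(heading) * speed * dt;
}
```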

To help offset this extra computational complexity, we will likely see innovations in the way standard pathfinding algorithms (such as A*) are used. For example, I expect developers will begin to time-slice pathfinding systems so that particularly long routes can be computed over multiple game-world updates and renders. This task can get complicated in a world with dynamic terrain and many moving units, but it can be done if you're willing to spend the memory on it. And improving paths while still maintaining high frame rates is a big advantage.
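
To make the idea concrete, here is a minimal sketch of a time-sliced A*-style search, assuming a toy one-dimensional graph with uniform edge costs; the class and the per-frame node budget are hypothetical:

```cpp
// Sketch only: the open list persists across frames, and each game update
// expands at most a budgeted number of nodes before yielding to the game loop.
#include <cstdlib>
#include <queue>
#include <unordered_map>
#include <vector>

struct SearchNode { int id; float f; };
struct Cmp {
    bool operator()(const SearchNode& a, const SearchNode& b) const { return a.f > b.f; }
};

class TimeSlicedAStar {
public:
    enum class Status { InProgress, Found, Failed };

    void start(int startId, int goalId) {
        goal_ = goalId;
        open_ = {};                                    // reset the open list
        gCost_.clear();
        cameFrom_.clear();
        gCost_[startId] = 0.0f;
        open_.push({startId, heuristic(startId, goalId)});
    }

    // Expand at most 'budget' nodes this frame.
    Status step(int budget) {
        while (budget-- > 0 && !open_.empty()) {
            SearchNode cur = open_.top();
            open_.pop();
            if (cur.id == goal_) return Status::Found; // path recoverable via cameFrom_
            for (int nb : neighbors(cur.id)) {
                float g = gCost_[cur.id] + 1.0f;       // uniform edge cost in this toy graph
                auto it = gCost_.find(nb);
                if (it == gCost_.end() || g < it->second) {
                    gCost_[nb] = g;
                    cameFrom_[nb] = cur.id;
                    open_.push({nb, g + heuristic(nb, goal_)});
                }
            }
        }
        return open_.empty() ? Status::Failed : Status::InProgress;
    }

private:
    float heuristic(int a, int b) const { return static_cast<float>(std::abs(a - b)); }
    std::vector<int> neighbors(int id) const { return {id - 1, id + 1}; }  // toy 1D grid

    int goal_ = 0;
    std::priority_queue<SearchNode, std::vector<SearchNode>, Cmp> open_;
    std::unordered_map<int, float> gCost_;
    std::unordered_map<int, int> cameFrom_;
};
```

Each game update, a path manager could call step() on any in-flight searches with a node budget sized to whatever frame-time headroom remains.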

Also upcoming are more hierarchical pathfinding techniques. Different pathfinding algorithms or data sets can be tuned to a particular need (for example, long or short paths). A hierarchical approach also allows paths to be generated at progressively more detailed levels on an as-needed basis.
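
A sketch of that structure, with placeholder types and stand-in search bodies, might look like this:

```cpp
// Hypothetical sketch: search a coarse region graph first, then refine one leg
// at a time at the tile level, only as the unit actually needs it.
#include <vector>

struct Tile { int x, y; };
using RegionId = int;

class HierarchicalPathfinder {
public:
    // Cheap, coarse search over a region graph (say, one node per map chunk).
    std::vector<RegionId> findRegionPath(RegionId start, RegionId goal) {
        std::vector<RegionId> path;
        for (RegionId r = start; r != goal; r += (goal > start ? 1 : -1))
            path.push_back(r);               // stand-in for a real graph search
        path.push_back(goal);
        return path;
    }

    // Detailed, tile-level search, confined to a single region crossing.
    std::vector<Tile> refineLeg(RegionId from, RegionId to, const Tile& entry) {
        (void)from; (void)to;
        return {entry};                      // stand-in for a real A* refinement
    }
};

// A unit would request the next refined leg only as it nears the end of the
// current one, so most tile-level work is deferred and is skipped entirely if
// the unit's orders change mid-route.
```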

Hierarchical AI

Not surprisingly, RTS games have some of the most demanding AI needs. Their AI has to meet a player's expectations of a challenging strategy game, yet still make decisions within milliseconds in order to meet the game's frame-rate requirements. Hierarchical approaches to AI have been successful in helping address these needs.

In hierarchical RTS AI, there are different layers to the AI. The strategic AI makes high-level decisions such as "What units should I train?" The tactical AI executes the orders given by the strategic AI in the best possible way, deciding things such as where to train the units requested by the strategic AI. Usually, the strategic AI is evaluated far less frequently than the tactical AI. There's often a third layer, which we'll call entity AI. Entity AI represents the physical entities in the game, such as units or groups, and is manipulated by the tactical AI. Thus, the entity AI is usually processed more frequently than the tactical AI (particularly if the entity AI has combined AI and animation responsibility).
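
As an illustration only (the intervals and class names below are made up), the staggered update rates might be wired together like this:

```cpp
// Sketch of staggering the three AI layers described above.
class StrategicAI { public: void update() { /* pick what to train, where to expand */ } };
class TacticalAI  { public: void update() { /* carry out standing strategic orders */ } };
class EntityAI    { public: void update(float dt) { /* per-unit movement, animation */ } };

class AIManager {
public:
    void tick(float dt) {
        strategicTimer_ += dt;
        tacticalTimer_  += dt;

        if (strategicTimer_ >= 5.0f) {   // strategic decisions: every few seconds
            strategic_.update();
            strategicTimer_ = 0.0f;
        }
        if (tacticalTimer_ >= 0.5f) {    // tactical execution: a few times a second
            tactical_.update();
            tacticalTimer_ = 0.0f;
        }
        entity_.update(dt);              // entity AI: every simulation update
    }

private:
    StrategicAI strategic_;
    TacticalAI  tactical_;
    EntityAI    entity_;
    float strategicTimer_ = 0.0f;
    float tacticalTimer_  = 0.0f;
};
```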

As the genre matures, RTS developers are finding more interesting ways to use this type of system. The upcoming Homeworld sequel, Cataclysm, builds heavily on the idea of combining simple AI behaviors. Unit aggression stances are typically used to control how far units pursue enemies. That concept is combined with the simple idea of patrolling between two waypoints. So, if the units are set in an aggressive stance while patrolling, they will attack any targets they come across. However, if the units are set in an evasive stance, they will avoid enemy contact during patrols. While this isn't hard to do (assuming the code is written well), it's an example of how the entity AI can evolve to become more complex.
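
A small sketch of that composition, using hypothetical enums rather than Cataclysm's actual code, could look like this:

```cpp
// Sketch: the patrol behavior doesn't know how to fight or flee; when an enemy
// shows up it simply defers to the unit's stance setting.
enum class Stance { Aggressive, Evasive };

struct Unit {
    Stance stance = Stance::Aggressive;
    bool enemyInSightRange = false;
};

enum class PatrolAction { MoveToNextWaypoint, EngageEnemy, EvadeAndContinue };

PatrolAction patrolStep(const Unit& u) {
    if (!u.enemyInSightRange)
        return PatrolAction::MoveToNextWaypoint;
    return (u.stance == Stance::Aggressive) ? PatrolAction::EngageEnemy
                                            : PatrolAction::EvadeAndContinue;
}
```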

The Conquerors features another example of behavior combination. In AoK, your villagers stand around loafing after finishing that lumber camp on the edge of your town. In The Conquerors, villagers are smarter; they begin chopping wood as soon as they finish building a lumber camp. Again, this seems simple, but it makes for a much better game (and was one of the best-received features among AoK fans at E3). Features such as this can also help offload responsibility from the oft-overburdened tactical AI. If the tactical AI can rely on villagers to keep working after building a resource drop-site, it can remove another round of villager-tasking from its plate.

A little farther out on the RTS horizon are our own RTS3 and Blizzard's Warcraft 3. Both will rely heavily on autonomous agent behavior (a fancy name for entity AI). Similar to combining simple behaviors, an event-driven hierarchical entity AI can alleviate a lot of needless AI polling by executing code only when there is a reason to do something. This frees up processor time for more AI, graphics, and other tasks.

A comprehensive group-AI system also makes it a lot easier to implement features such as group-based protection. Imagine that you've ordered a group of melee units to protect some ranged units. If the ranged units aren't in danger or actively taking damage, you probably want the melee units to go beat on something. However, as soon as the ranged units take damage, you want the guarding melee units to rush over and stomp the attacking units. This is possible in a non-group-AI system, but it requires very clunky data structures and is a lot harder to achieve. And if it's a lot harder to code, then it will take longer to develop and be less robust (read: really, really buggy). On the other hand, if you have a group system you can simply pass the damage notification up to the group and let it quickly iterate through its guarding units, commanding them to attack the evil enemy units as necessary.
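
A sketch of that notification flow, with hypothetical classes, might look like the following:

```cpp
// Sketch: damage is reported up to the group, and the group retasks its guards.
#include <vector>

class MeleeUnit {
public:
    bool idleOrFreeHunting() const { return freeHunting_; }
    void attack(int attackerId)    { target_ = attackerId; freeHunting_ = false; }
private:
    bool freeHunting_ = true;
    int  target_ = -1;
};

class GuardGroup {
public:
    void addGuard(MeleeUnit* u) { guards_.push_back(u); }

    // Called by a guarded ranged unit when it takes damage.
    void onGuardedUnitDamaged(int attackerId) {
        for (MeleeUnit* g : guards_)
            if (g->idleOrFreeHunting())
                g->attack(attackerId);   // pull the guards back to defend the group
    }

private:
    std::vector<MeleeUnit*> guards_;
};
```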

Fun versus Difficulty

One long-standing AI question is, "To cheat or not to cheat?" It used to be that game developers had to bypass the game rules that bound players in order to empower the AI. Weak AI can hurt the game experience, and allowing the computer to cheat was the only way around that problem. Happily, that's been changing over the last few years. Several games have been released in which the AI has at least some difficulty levels that don't cheat (Age of Empires, for example). This has all been done under the assumption that increased difficulty means more fun. A better, more difficult AI is more fun to play against, right? Not always.

Fresh from a frustration-filled game against a few of The Conquerors' AI opponents, Tim Deen (one of Ensemble's designers) sent out an e-mail declaring that he really wished we'd focus on making the AI more fun to play against for the RTS3 project. A healthy discussion ensued about the relative complexity of making an AI player harder to play against versus more fun to play against. The consensus was that it was a lot easier to make an AI more difficult to play against. So, being good lazy programmers, we had done just that without really giving it much thought.

As we start to build AI systems that can stomp good players into the ground fair and square, we need to look at the next step. That next step should be making the game fun. Since it's not much fun to play against an AI that never has a chance to beat you, the AI has to be able to put up a really good fight. Naturally, we have tools to do that, and it's easy to measure the success of that approach using lots of fun spreadsheets and graphs. It's more difficult -- and, more significantly, considerably more subjective -- to make an AI fun to play against. Conveniently, many of the tools that we already have from building difficult AIs can be leveraged to make the game more fun to play.

Unreal Tournament has some great bot code that can really compete with the best players. Yet, it's also fun to play against. It intentionally makes mistakes and doesn't always do the best thing it can. While that may not be the most interesting thing from an academic AI perspective, it's a lot more fun than getting shot in the back every single time.

In our RTS3 project, we're going to use the XS scripting language to control the level of difficulty. Since we have an idea of how long we'd like each game to take, our AI designers can check things such as game time, how many of the CP's units have been killed, how many of the human player's units were killed by the CP, or the score of the game to see who's "winning." Armed with that information, they can scale back the quality of the AI to make sure the game doesn't drag out long after the outcome is really determined. If you start to augment that ability with other features such as game history logging, you have the makings of a good opponent that quickly scales to your initial difficulty level and continues to give you a challenging game even as you get better.
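
As a purely illustrative sketch (the metric and thresholds are invented, and in RTS3 the real logic would live in XS script rather than in engine code), such a check might look like this:

```cpp
// Sketch: estimate who is winning and throttle the CP's play quality so a
// decided game doesn't drag on.
struct GameStats {
    float gameTimeMinutes;
    int   cpUnitsLost;
    int   humanUnitsLostToCP;
    int   cpScore;
    int   humanScore;
};

// Returns a 0..1 "effort" multiplier the CP AI can apply to things like attack
// frequency, build priorities, or micromanagement quality.
float computeAIEffort(const GameStats& s, float targetGameMinutes) {
    float scoreLead = static_cast<float>(s.cpScore - s.humanScore)
                    / static_cast<float>(s.cpScore + s.humanScore + 1);
    float killLead  = static_cast<float>(s.humanUnitsLostToCP - s.cpUnitsLost)
                    / static_cast<float>(s.humanUnitsLostToCP + s.cpUnitsLost + 1);
    float dominance = 0.5f * (scoreLead + killLead);     // > 0 means the CP is ahead

    // If the CP is far ahead and the game has outlived its target length,
    // ease off rather than grinding out a foregone conclusion.
    if (dominance > 0.3f && s.gameTimeMinutes > targetGameMinutes)
        return 0.6f;
    if (dominance > 0.6f)
        return 0.75f;
    return 1.0f;                                         // otherwise play flat out
}
```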

The next year or so still looks to have a heavy focus on graphics, particularly as the Xbox, PlayStation 2, GameCube, and the good old PC continue to vie for visual supremacy. But, perhaps less glamorously, AI keeps chugging along and is getting better. As more developers dig into AI and realize that good AI is just as difficult as pushing tons of polygons to the screen, AI is getting increased attention. Almost every developer at E3 had an answer to the question, "So what new AI stuff are you doing with this game?" You can't get much cooler than that, which is why I'm optimistic about the continuing improvement and refinement of AI in games.

Bridging the Gap Between Developers and Researchers

One would think that the combined coolness factor of artificial intelligence and computer games would be an irresistible topic, bringing game developers and AI researchers together. Unfortunately, there has been a mutual lack of interest between serious game developers and academic AI researchers. Game developers have picked up a few AI techniques, such as decision trees and the ubiquitous A* algorithm for path planning, but there has been nothing like the knowledge transfer that has taken place with graphics. When game developers look at AI research, they find little work on the problems that interest them, such as nontrivial pathfinding, simple resource management and strategic decision-making, bot control, behavior-scripting languages, and variable levels of skill and personality -- all using minimal processing and memory resources. Game developers are looking for example "gems": AI code that they can use or adapt to their specific problems. Unfortunately, most AI research systems are big hunks of code that require a significant investment of time to understand and use effectively.

Why AI Research and Game Development Diverge

AI researchers rarely use computer games for their research, outside of classic board and card games such as chess, checkers, and bridge. Possibly they see most game AI problems as simple "engineering" problems. This view has not been completely unjustified because often the goal of game AI is not to create intelligence, but to improve gameplay through the illusion of intelligent behavior. Many of the techniques used to improve the illusion of intelligence have nothing to do with intelligence, but involve "cheats," such as giving game AIs extra production capability or the ability to see through walls, or "faking it" by creating bots that "talk" to each other but completely ignore what is said. There also has been a drift in AI research toward problems and approaches where precise empirical evaluation is possible. Needless to say, gameplay isn't something that today's AI researchers feel comfortable evaluating.

Although there is currently a significant gap between game developers and AI researchers, that gap is starting to close. The inevitable march of Moore's law is starting to free up significant processing power for AI, especially with the advent of graphics cards that move the graphics processing off the CPU. The added CPU power will make more complex game AI possible. Still, game developers should be wary of AI researchers who say, "My algorithm doesn't run in real time right now, but just wait. In a few more years, I'm sure the processing power will be there."

A second, equally powerful force that is closing the gap is sociological. Students who grew up loving computer games are getting advanced degrees in AI. This has the dual effect of bringing game research to universities and university research to game companies -- already there are at least five AI Ph.D.s at game companies. AI researchers are discovering that building interesting synthetic characters in computer games is much more than just an engineering problem. Moreover, games provide cheap, robust, immersive environments for pursuing many of the core AI issues. They could be the catalyst for a rebirth in research on human-level AI (see my paper on the subject, listed under For More Information).

The final force is the game-playing public, who are starting to demand better AI. With the saturation in the quality of computer graphics, better physics and AI are the two technologies that have the most potential to improve gameplay. Players are looking for more realistic AIs to populate their worlds with interesting non-player characters (as in The Sims) and humanlike opponents who must be out-thought and not just out-shot (and who don't cheat). AI can also provide dynamic game control, adjusting the gameplay based on how the game is played. Imagine playing a first-person shooter where the AI not only reacts to your behavior, but also anticipates your actions by using an internal model of the way you play the game to make its plan. It also adjusts its skill at the tactical level to match yours, so that the game is never a blowout for either side. Our research group has built such a bot using our own Soar AI engine connected to the deathmatch version of Quake 2 (see my paper under For More Information). Our research is a peek at what can come out of research labs. The combination of complex AI and computer games can improve existing game genres, and give rise to some new types of games.

Closing the Gap

What can computer game developers do to hasten the collaboration of developers and AI researchers? The most important thing is to make commercial computer game interfaces available to AI researchers. Developers of games such as Unreal, Quake, and Half-Life publish DLLs, making it possible for not only hobbyists but also AI researchers to build bots that play games. If developers in other genres, such as real-time strategy, follow suit, we could see an explosion of research on AI for those games. Game developers can also join AI researchers in discussing AI problems and solutions in open forums. There is now a yearly symposium sponsored by the American Association for Artificial Intelligence (AAAI) on AI and interactive entertainment that brings together game developers and AI researchers.

One final note: building good AIs is hard work. Automated learning approaches such as neural nets and genetic algorithms can tune a well-defined set of behavioral parameters, but they are grossly inadequate when it comes to creating synthetic characters with complex behaviors automatically from scratch. There is no magic in AI, except for the magic that emerges when a great programmer works very hard.
