In the first of this two-part report on the state of game AI, Steven Woodcock shares what issues came up while moderating the AI roundtables at the 2000 Game Developers Conference. Next week, in Part Two, John E. Laird will discuss how academics and developers can better share information with each other, and Ensemble Studios' Dave Pottinger will peer into the future of game AI.
One thing was made clear in the aftermath of this year's Game Developers Conference: game AI has finally "made it" in the minds of developers, producers, and management. It is recognized as an important part of the game design process. No longer is it relegated to the backwater of the schedule, something to be done by a part-time intern over the summer. For many people, crafting a game's AI has become every bit as important as the features the game's graphics engine will sport. In other words, game AI is now a "checklist" item, and the response to both our AI roundtables at this year's GDC and various polls on my game AI web site (www.gameai.com) bear witness to the fact that developers are aggressively seeking new and better ways to make their AI stand out from that of other games.
The technical level and quality of the GDC AI roundtable discussions continues to increase. More important, however, was that our "AI for Beginners" session was packed. There seem to be a lot of developers, producers, and artists who want to understand the basics of AI, whether it's so they can go forth and write the next great game AI or just so they can understand what their programmers are telling them.
As I've done in years past, I'll use this article to touch on some of the insights I gleaned from the roundtable discussions that Neil Kirby, Eric Dybsand, and I conducted. These forums are invaluable for discovering the problems developers face, what techniques they're using, and where they think the industry is going. I'll also discuss some of the poll results taken over the past year on my web site, some of which also provided interesting grist for the roundtable discussions.
Resources: The Big Non-issue
Last year's article (Game AI: The State of the Industry) mentioned that AI developers were (finally) becoming more involved in the game design process and using their involvement to help craft better AI opponents. I also noted that more projects were devoting more programmers to game AI, and AI programmers were getting a bigger chunk of the overall CPU resources as well.
This year's roundtables revealed that, for the most part, the resource battle is over (Figure 1). Nearly 80 percent of the developers attending the roundtables reported at least one person working full-time on AI on either a current or previous project; roughly one-third of those reported that two or more developers were working full-time on AI. This rapid increase in programming resources has been evident over the last few years in the overall increase in AI quality throughout the industry, and is probably close to the maximum one could reasonably expect a team to devote to AI given the realities of the industry and the marketplace.
Even more interesting was the amount of CPU resources that developers say they're getting. On average, developers now report a whopping 25 percent of the CPU's cycles -- two and a half times the average amount reported at the 1999 roundtables. Factor in the year-over-year increase in raw CPU power, and this trend becomes even more remarkable.
Many developers also reported that general attitudes toward game AI have shifted. In prior years the mantra was "as long as it doesn't affect the frame rate," but this year people reported that there is a growing recognition by entire development teams that AI is as important as other aspects of the game. Believe it or not, a few programmers actually reported the incredible luxury of being able to say to their team, "New graphics features are fine, so long as they don't slow down the AI." If that isn't a sign of how seriously game AI is now being taken, I don't know what is.
Developers didn't feel pressured by resources, either. Some developers (mostly those working on turn-based games) continued to gleefully remind everyone that they could devote practically 100 percent of the computer's resources to computer-opponent AI, though they admitted that this generally allowed deeper play, but not always better play. (It's interesting to note that all of the turn-based developers at the roundtables were doing strategy games of some kind -- more than any other genre, that market has remained resistant to the lure of real-time play.) Nearly every developer was making heavy use of threads for their AIs in one fashion or another, partly to better utilize the CPU but often also just to isolate AI processes from the rest of the game engine.
AI developers continued to credit 3D graphics chips for their increased use of CPU resources. Graphics programmers simply don't need as much of the CPU as they once did.
Trends Since Last Year
A number of AI technologies noted at the 1998 and 1999 GDCs have continued to grow and accelerate over the last year. The number of games released in recent months that emphasize interesting AI -- and which actually deliver on their promise -- is a testament to the rising level of expertise in the industry. Here's a look at some trends.
Artificial life. Perhaps the most obvious trend since the 1999 GDC was the wave of games using artificial life (A-Life) techniques of one kind or another. From Maxis's The Sims to CogniToy's Mind Rover, developers are finding that A-Life techniques provide them with flexible ways to create realistic, lifelike behavior in their game characters.
A smart rover navigates a maze in CogniToy's Mind Rover
The power of A-Life techniques stems from their roots in the study of real-world living organisms. A-Life seeks to emulate that behavior through a variety of methods, which can include hard-coded rules, genetic algorithms, flocking algorithms, and so on. Rather than trying to code up one huge, extremely complex behavior (cooking a big meal, say), developers can break the problem down into smaller pieces (open refrigerator, grab a dinner, put it in the microwave). These behaviors are then linked in some kind of decision-making hierarchy that the game characters use (in conjunction with motivating emotions, if any) to determine what actions they need to take to satisfy their needs. The interactions between the low-level, explicitly coded behaviors and the motivations and needs of the characters cause higher-level, more "intelligent" behaviors to emerge without any explicit, complex programming.
The simplicity of this approach combined with the amazing resultant behaviors has proved irresistible to a number of developers over the last year, and a number of games have made use of the technique. The Sims is probably the best known of these. That game makes use of a technique that Maxis co-founder and Sims designer Will Wright has dubbed "smart terrain." In the game, all characters have various motivations and needs, and the terrain offers various ways to satisfy those needs. Each piece of terrain broadcasts to nearby characters what it has to offer. For example, when a hungry character walks near a refrigerator, the refrigerator's "I have food" broadcast allows the character to decide to get some food from it. The food itself broadcasts that it needs cooking, and the microwave broadcasts that it can cook food. Thus the character is guided from action to action realistically, driven only by simple, object-level programming.
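Stripped to its essentials, the smart-terrain idea can be sketched in a few lines. Everything below (class names, need values, and the scoring rule) is invented for illustration and is not drawn from The Sims' actual implementation:

```python
# A minimal "smart terrain" sketch: objects advertise what needs they satisfy,
# and a character simply picks the advertisement that best matches its needs.
# All names and numbers here are illustrative assumptions, not The Sims' code.

class SmartObject:
    def __init__(self, name, offers):
        self.name = name
        self.offers = offers  # maps a need -> how much this object satisfies it

    def broadcast(self):
        """Advertise this object's offerings to nearby characters."""
        return self.offers

class Character:
    def __init__(self, needs):
        self.needs = needs  # maps a need -> its current urgency (0..1)

    def choose(self, objects):
        """Score each object by how well its broadcast matches current needs."""
        def score(obj):
            return sum(self.needs.get(need, 0.0) * value
                       for need, value in obj.broadcast().items())
        return max(objects, key=score)

fridge = SmartObject("refrigerator", {"hunger": 0.8})
tv     = SmartObject("television",   {"fun": 0.6})
bed    = SmartObject("bed",          {"rest": 0.9})

sim = Character({"hunger": 0.9, "fun": 0.2, "rest": 0.1})
print(sim.choose([fridge, tv, bed]).name)  # the hungry character heads for the fridge
```

The appeal for developers is that the objects, not the characters, carry the behavioral knowledge: adding a new object to the world automatically gives every character a new way to satisfy its needs, with no character code touched.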
Developers were definitely taken with the possibilities of this approach, and there was much discussion about it at the roundtables. The idea has obvious possibilities for other game genres as well. Imagine a first-person shooter, for example, in which a given room that has seen lots of frags "broadcasts" this fact to the NPCs assisting your player's character. The NPC could then get nervous and anxious, and have a "bad feeling" about the room -- all of which would serve to heighten the playing experience and make it more realistic and entertaining. Several developers took copious notes on this technique, so we'll probably be seeing even more A-Life in games in the future.
Pathfinding. In a remarkable departure from the roundtables of previous years, developers really didn't have much to ask or say about pathfinding at this year's GDC roundtables. The A* algorithm (for more details, see Bryan Stout's excellent article Smart Moves: Intelligent Path-Finding) continues to reign as the preferred pathfinding algorithm, although everybody has their own variations and adaptations for their particular project. Every developer present who had needed pathfinding in their game had used some form of the A* algorithm. Most had also used influence maps, attractor-repulsor systems, and flocking to one degree or another. Generally speaking, the game community has this problem well in hand and is now focusing on particular implementations for specific games (such as pathfinding in 3D space, doing real-time path-granularity adjustments, efficiently recognizing when paths were blocked, and so on).
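For readers who haven't implemented it, here is a bare-bones sketch of A* on a 4-connected tile grid -- roughly the baseline most developers said they adapt. The grid layout, uniform step cost, and Manhattan heuristic are all simplifications for illustration:

```python
# A bare-bones A* over a grid of 0 (open) / 1 (blocked) cells. Real games
# layer their own cost functions, tie-breaking, and hierarchy on top of this.
import heapq

def astar(grid, start, goal):
    """Return a start-to-goal list of (row, col) cells, or None if no path."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]          # (f = g + h, g, cell)
    came_from, g_cost = {start: None}, {start: 0}
    while frontier:
        _, g, current = heapq.heappop(frontier)
        if current == goal:                    # reconstruct by walking back
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = g + 1
                    came_from[(nr, nc)] = current
                    heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None  # frontier exhausted: goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall of blocked cells
```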
As developers become more comfortable with their pathfinding tools, we are beginning to see complex pathfinding coupled with terrain analysis. Terrain analysis is a much tougher problem than simple pathfinding in that the AI must study the terrain and look for various natural features -- choke-points, ambush locations, and the like. Good terrain analysis can provide a game's AI with multiple "resolutions" of information about the game map that are well tuned for solving complex pathfinding problems. Terrain analysis also helps make the AI's knowledge of the map more location-based, which (as we've seen in the example of The Sims) can simplify many of the AI's tasks. Unfortunately, terrain analysis is made somewhat harder when randomly generated maps are used, a feature which is popular in today's games. Randomly generating terrain precludes developers from "pre-analyzing" maps by hand and loading the results directly into the game's AI.
Several games released in the past year have made attempts at terrain analysis. For example, Ensemble Studios completely revamped the pathfinding approach used in Age of Empires for its successor, Age of Kings, which uses some fairly sophisticated terrain-analysis capabilities. Influence maps were used to identify important locations such as gold mines and ideal locations for building placement relative to them. They're also used to identify staging areas and routes for attacks: the AI plots out all the influences of known enemy buildings so that it can find a route into an enemy's domain that avoids any possible early warning.
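A toy version of such an influence map might look like the sketch below. The map size, source strengths, and linear falloff are invented for illustration; real implementations (Age of Kings included) are far more elaborate:

```python
# A toy influence map: each enemy building projects influence that falls off
# with distance, and the AI reads off low-influence cells when plotting an
# approach route that avoids early warning. All values here are invented.

def influence_map(rows, cols, sources):
    """sources: list of ((row, col), strength). Influence decays linearly."""
    grid = [[0.0] * cols for _ in range(rows)]
    for (sr, sc), strength in sources:
        for r in range(rows):
            for c in range(cols):
                dist = abs(r - sr) + abs(c - sc)        # Manhattan distance
                grid[r][c] += max(0.0, strength - dist)  # clamp at zero
    return grid

# Two enemy watchtowers; the AI would steer its attack through the low cells.
imap = influence_map(5, 5, [((0, 0), 3.0), ((4, 4), 3.0)])
safe = min(((r, c) for r in range(5) for c in range(5)),
           key=lambda cell: imap[cell[0]][cell[1]])
```

The same grid, with different sources summed in, can answer "where is a good mine site?" or "where should the staging area go?" -- which is why one influence-map pass tends to serve several of the AI's decisions at once.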
Another game that makes interesting use of terrain analysis is Red Storm's Force 21. The developers used a visibility graph (see "Visibility Graphs" sidebar) to break down the game's terrain into distinct but interconnected areas; the AI can then use these larger areas for higher-level pathfinding and vehicle direction. By cleanly dividing maps into "areas I can go" and "areas I can't get to," the AI is able to issue higher-level movement orders to its units and leave the implementation issues (such as not running into things, deciding whether to go over the bridge or through the stream, and so on) to the units themselves. This in turn has an additional benefit: the units can make use of the A* algorithm to solve smaller, local problems, thus leaving more of the CPU for other AI activity.
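That division of labor can be pictured with a coarse area graph. The areas, links, and costs below are invented, not taken from Force 21, but the idea is the same: search the handful of areas first, and leave fine-grained movement to the units:

```python
# Hierarchical pathfinding sketch: precomputed "areas" with known connections
# form a coarse graph, so the high-level AI searches a handful of nodes while
# units handle local obstacle avoidance. Area names and costs are invented.
import heapq

AREAS = {  # area -> [(neighboring area, traversal cost)]
    "west_bank": [("bridge", 2.0), ("ford", 5.0)],
    "bridge":    [("west_bank", 2.0), ("east_bank", 2.0)],
    "ford":      [("west_bank", 5.0), ("east_bank", 6.0)],
    "east_bank": [("bridge", 2.0), ("ford", 6.0)],
}

def area_route(start, goal):
    """Dijkstra over the areas -- cheap compared to a full-map search."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, area, path = heapq.heappop(frontier)
        if area == goal:
            return path
        if area in seen:
            continue
        seen.add(area)
        for nxt, step in AREAS[area]:
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None  # the two areas are not connected

print(area_route("west_bank", "east_bank"))  # → ['west_bank', 'bridge', 'east_bank']
```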
Formations. Closely related to the subject of pathfinding in general is that of unit formations -- techniques used by developers to make groups of military units behave realistically. While only a few developers present at this year's roundtables had actually needed to use formations in their games, the topic sparked quite a bit of interest (probably due to the recent spate of games with this feature). Most of those who had implemented formations had used some form of flocking with a strict overlying rules-based system to ensure that units stayed where they were supposed to. One developer, who was working on a sports game, said he was investigating using a "playbook" approach (similar to that used by a football coach) to tell his units where to go.
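A hedged sketch of that "flocking plus a strict rules layer" combination: each unit is pulled toward an assigned formation slot (the rule) while being pushed away from crowding neighbors (the flocking). The constants and the two-unit wedge are invented for illustration:

```python
# Formation-keeping sketch: a rules layer assigns each unit a slot relative
# to the leader, while a separation term keeps units from overlapping.
# All constants here are invented tuning values, not from any shipped game.

SEPARATION_RADIUS = 1.0   # neighbors closer than this push each other apart
SLOT_PULL = 0.5           # fraction of the slot offset closed each tick
PUSH = 0.25               # strength of the separation push

def step(units, leader, offsets):
    """Advance each unit's (x, y) position one tick toward its slot."""
    new = []
    for i, (x, y) in enumerate(units):
        tx, ty = leader[0] + offsets[i][0], leader[1] + offsets[i][1]
        vx, vy = (tx - x) * SLOT_PULL, (ty - y) * SLOT_PULL
        for j, (ox, oy) in enumerate(units):   # separation: shove off neighbors
            if j != i and abs(ox - x) + abs(oy - y) < SEPARATION_RADIUS:
                vx += (x - ox) * PUSH
                vy += (y - oy) * PUSH
        new.append((x + vx, y + vy))
    return new

leader = (0.0, 0.0)
offsets = [(-1.0, -1.0), (1.0, -1.0)]   # a two-unit wedge behind the leader
units = [(0.0, 0.0), (0.1, 0.0)]        # both start bunched at the leader
for _ in range(20):
    units = step(units, leader, offsets)
# after a few ticks the units settle into their assigned wedge slots
```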
State machines and hierarchical AIs. The simple rules-based finite- and fuzzy-state machines (FSMs and FuSMs) continue to be the tools of choice for developers, overshadowing more "academic" technologies such as neural networks and genetic algorithms. Developers find that their simplicity makes these approaches far easier to understand and debug, and they work well in combination with the types of encapsulation seen in games using A-Life techniques.
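Part of that appeal is how little machinery an FSM actually needs -- it reduces to a transition table. The guard states and events below are invented for illustration:

```python
# A minimal finite-state machine: a table keyed on (state, event) pairs.
# The guard-game states and events are invented, not from any shipped title.

TRANSITIONS = {
    ("patrol", "see_enemy"):      "attack",
    ("patrol", "hear_noise"):     "investigate",
    ("investigate", "see_enemy"): "attack",
    ("investigate", "all_clear"): "patrol",
    ("attack", "enemy_dead"):     "patrol",
    ("attack", "low_health"):     "flee",
    ("flee", "reached_base"):     "patrol",
}

class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged, which is
        # exactly what makes these machines so easy to reason about and debug.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

guard = GuardFSM()
for event in ("hear_noise", "see_enemy", "low_health", "reached_base"):
    guard.handle(event)
print(guard.state)  # → patrol
```

A fuzzy-state machine differs mainly in letting several states be partially active at once, blending their outputs instead of switching cleanly between them.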
Developers are looking for new ways to use these tools. For many of the same reasons A-Life techniques are being used to break down and simplify complex AI decisions into a series of small, easily defined steps, developers are taking more of a layered, hierarchical approach to AI design. Interplay's Starfleet Command and Red Storm's Force 21 take such an approach, using higher-level strategic "admirals" or "generals" to issue general movement and attack orders to tactical groups of units under their command. In Force 21 these units are organized at a tactical level into platoons; each platoon has a "tactician" who interprets the orders the platoon has received and turns them into specific movement and attack orders for individual vehicles.
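The layered approach can be sketched as each layer translating a vague order into more concrete orders for the layer below. The order vocabulary and unit names here are invented, not taken from Force 21 or Starfleet Command:

```python
# Hierarchical command sketch: a strategic "general" issues per-platoon
# orders, and each platoon's "tactician" turns its order into per-vehicle
# move orders. All names and the order vocabulary are illustrative.

def general(objective):
    """Strategic layer: turn one objective into per-platoon orders."""
    return [("alpha", "assault", objective), ("bravo", "flank", objective)]

def tactician(platoon, order, target, vehicles):
    """Tactical layer: turn a platoon order into per-vehicle move orders."""
    spread = {"assault": 0, "flank": 5}[order]   # flankers swing wide
    return [(v, ("move_to", target[0] + spread, target[1] + i))
            for i, v in enumerate(vehicles)]

PLATOONS = {"alpha": ["tank1", "tank2"], "bravo": ["scout1"]}

orders = []
for platoon, order, target in general((10, 10)):
    orders.extend(tactician(platoon, order, target, PLATOONS[platoon]))
```

The debugging benefit developers cited falls out of the structure: each layer can be tested in isolation by feeding it orders by hand.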
Most developers at the roundtables who were working on strategy games reported that they had either already used or were planning to implement this type of layered approach in their AI engines. Not only was it a more realistic representation, it also made debugging simpler. Most of those who used this design also liked the way it allowed them to add hooks at the strategic level for user customization of AIs, building strategies, and so on, while isolating the lower-level "get the job done" AI from anything untoward the user might accidentally do to it. This is another trend we're seeing in strategy games, and one players find quite enjoyable -- witness the various "empire mods" for games such as Stars!, Emperor of the Fading Suns, and Alpha Centauri.
Can AI SDKs Help?
The single biggest topic of discussion at the GDC 2000 roundtables was the feasibility of AI SDKs. There are at least three software development kits currently available to AI developers:
- Mathématiques Appliquées' DirectIA, an agent-based toolkit that uses state machines to build up emergent behaviors.
- Louder Than A Bomb's Spark!, a fuzzy-logic editor intended for AI engine developers.
- The Motion Factory's Motivate, which can provide some fairly sophisticated action/reaction state machine capabilities for animating characters. It was used in Red Orb's Prince of Persia 3D, among others.
Many developers (especially those at the "AI for Beginners" session) were relatively unaware of these toolkits and hence were very interested in their capabilities. Few of the more experienced developers, however, thought these toolkits would be all that useful, though a quick poll did reveal that one or two were in the process of evaluating the DirectIA toolkit. Most expressed the hope that an SDK would eventually come to market and prove their skepticism wrong.
Figure 5: Red Orb's Prince of Persia 3D used The Motion Factory's Motivate SDK
In discussing possible features, most felt that an SDK that provided simple flocking or pathfinding functions might best meet their needs. One developer said he'd like to see some kind of standardized "bot-like" language for AI scripts, though there didn't seem to be any widespread enthusiasm for this idea (probably because of fears it would limit creativity). Also discussed briefly in conjunction with this topic was the matter of what developers would be willing to pay for such an SDK, should a useful one actually be available. Most felt that price was not a particular object; developers today are used to paying (or convincing their bosses to pay) thousands of dollars for toolkits, SDKs, models, and the like. This indicates that if somebody can develop an AI SDK flexible enough to meet the demands of developers, they should be able to pay the rent.
Sidebar: Visibility Graphs

One of the interesting areas that game AI is beginning to explore is the realm of terrain analysis. Terrain analysis takes the relatively simple task of pathfinding across a map to its next logical step: getting the AI to recognize the strategic and tactical value of various terrain features -- hills, ridges, choke-points, and so on -- and to incorporate that knowledge into its planning. One tool that offers much promise for this task is the visibility graph.

Visibility graphs are fairly simple constructs originally developed for robot motion planning. They work as follows: Assume you are looking down at a map that has a hill in the center and a pasture with clumps of trees all around it. Let appropriately shaped polygons represent the hill and the trees. The visibility graph for this scene uses the vertices of those polygons as the vertices of the graph, and adds a graph edge between two vertices wherever there is a clear (unobstructed) line between them. The weight of each edge equals the distance between its two endpoints. This gives you a simplified map against which you can run a pathfinding algorithm to traverse the terrain while avoiding the obstacles.

There are some problems with visibility graphs, however. They only give raw connection information, and paths built from them tend to look a little mechanical. The developer also needs to do some additional work to keep all but the smallest units from colliding with polygon (graph) edges as they move, since a path generated from a visibility graph doesn't take unit size into account at all. Still, they're a straightforward way to break terrain down into simplified areas, and they have uses in pathfinding, setting up ambushes (the unobstructed graph edges are natural ambush points), and terrain generation.

Technologies on the Wane
It's become clearer since last year's roundtables that the influence of the more "nontraditional" AI techniques, such as neural networks and genetic algorithms (GAs), is continuing to wane. Whereas in previous years developers had many stories to tell of exploring these and other technologies during their design and development efforts, at this year's sessions there was much more focus on making the more traditional approaches (state machines, rules-based AIs, and so on) work better. The reasons for this varied, but essentially boiled down to variations on the fact that these approaches are better understood and work "well enough." Developers seemed to want to focus much more on how to make them work better and leave exploration of theory to the academic field.
Genetic algorithms have taken a particularly hard hit in the past year. There wasn't a single developer at any of the roundtables that reported using them in any current projects, and most felt that their appeal was overrated. While last year's group had expressed some interest in experimenting with using GAs to help with game tuning, the developers who had tried reported this year that they hadn't found this to be very useful. Nobody could think of much use for GAs outside of the well-known "life simulators" such as the Creatures and Petz series.
The one exception to this, as previously noted, is the continued use of A-Life techniques. From flocking algorithms that help guide unit formations (Force 21, Age of Kings, Homeworld) to object-oriented desire/satisfaction approaches (The Sims), developers are finding that these techniques make their games much more lifelike and "predictably unpredictable" than ever before.
Where We're Headed
Always interesting at the roundtables are the inevitable discussions of where the industry in general, and game AI in particular, is headed. As usual, we got almost as many opinions as there were attendees, but some common trends could be seen emerging down the road.
Everybody thought that game AI would continue to be an important part of most games. The recent advances were unlikely to be lost to a new wave of "gee-whiz" 3D graphics engines, and the continued increase in CPU and 3D card capabilities was only going to continue to give AI developers more horsepower. There was the same feeling as last year that the industry would continue to move slowly away from monolithic and rigid rules-based approaches to more purpose-oriented, flexible AIs built using a variety of approaches. It seems safe to assume that extensible AIs will continue to enjoy some popularity and support among developers, mostly in the first-person shooter arena but also in more sophisticated strategy games.
Academia and the defense establishment continue to influence the game AI field (see John Laird's "Bridging the Gap Between Developers and Researchers" to be published in Part Two next week), though it sometimes seems that the academic world learns more from game developers than the other way around. For the most part, developers seem to feel that the academic study of AI is interesting but won't really help them ship their product, while researchers from the academic field find the rapid progress of the game industry enviable even if the techniques aren't all that well documented.
There can be no doubt that the game AI field continues to be one of the most innovative areas of game development. We know what works, and tools are beginning to appear to help us do our jobs. With CPU constraints essentially eliminated and the possibilities of good game AI now part of the design process, AI developers can look forward to a bright future of innovation and experimentation.
Far and away the best place to find out more about any aspect of game AI is the Internet. There are more excellent web sites filled with tutorials, information, sample code, and so on, than anybody could possibly list in one place. Some of the recommended ones include:
- Steven Woodcock's site (www.gameai.com), dedicated to all things game-AI-related. It provides links to other AI resources, reviews of AI implementations in games already on the market, and archives of various Usenet threads.
- Another excellent site dedicated to all aspects of game development, with an extensive list of resources and an active discussion group on the topic.
- This site remains the single best source for information about flocking and related A-Life technologies.
- PC AI magazine's marvelous web site, crammed with all kinds of useful AI resources, from sample applications to research papers.
- John E. Laird's site.
- The American Association for Artificial Intelligence.
Of course, Usenet continues to be a great place to research a variety of AI-related topics. The best newsgroups for this purpose remain comp.ai.games, comp.ai, and rec.games.programmer.
Laird, J. E., and M. van Lent. "Interactive Computer Games: Human-Level AI's Killer Application." Proceedings of the AAAI National Conference on Artificial Intelligence, August 2000.
Laird, J. E. "It Knows What You're Going to Do: Adding Anticipation to a Quakebot." Proceedings of the AAAI 2000 Spring Symposium Series: Artificial Intelligence and Interactive Entertainment, March 2000 (AAAI technical report #SS-00-02).
Unfortunately, there really aren't very many books that discuss game AI. Probably the best comprehensive reference remains:
Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Upper Saddle River, N. J.: Prentice Hall, 1995.