Gameplay Programming Hints: Building AI

These are some quick hints to make AI construction easier and more effective.

David Paris, Blogger

January 13, 2014

I don't know about your experience, but I have found that the art of making AI is occasionally viewed as some sort of mystic power, granted only to a select few to understand and accomplish.  The same people who can write a complete and well-balanced combat skill system, or multi-layered inventory management, sometimes figure that the ability to make NPCs act like something other than an aging potato is beyond them.  Don't worry, it isn't.

Also, let's face it, if you happen to work for a small dev house, then you are already used to wearing a dozen hats, so you're likely going to end up wearing that AI Builder hat at some point too.  Consider this your head start.  As an added bit of good news, it also tends to be a lot of fun.

Hint #1: Understand what you are trying to accomplish

I know, kind of a "well duh!", but hang in there.  Before you sit down and start grinding out your system, make sure you know exactly what your goal is.  Are you trying to write AI to win a game?  Are you trying to construct AI that feels like a human opponent?  Perhaps you need to make AI that controls a group of quest NPCs in a reasonable fashion.  Whatever the case, you have a particular problem that needs solving, so the first step is to determine exactly what that really means.

Often you only have a rough idea of what this entails, so start there.  For example: "I need to build AI for an NPC companion."  Ok, that's great.  Now tell me more.  What can it do?  What sort of interactions will the player have with it?  How much autonomy does it get?  "It should follow me around, help in combat, and inject conversation at key points in the story."  Even better.  Start there and write these out.  More importantly, every single one of these goals can be broken down into smaller actions.  "Help me in combat" might mean fighting, or healing, or even having the companion body-block for me at key moments.  It can include absolutely anything you want (or have time for, anyway), but you'll make your life a lot easier if you can define these up front, because it is way easier to build your system when you know what you want it to be able to do.  That's not to say you can't extend it later, you definitely can, but just like any project, the more you can specify your requirements early, the less time you'll lose on rework later.

Be as detailed as you can, and more importantly, break each and every action down into smaller actions.  If your companion can "fight", then what does fighting include?  Does it move to enemies?  Cast spells?  Use items?  What resources will it need to manage?  You're never going to hurt yourself by over-detailing what is being done ahead of time, because this breakdown becomes the foundation for your future decision logic.
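
If it helps to see that in something more concrete than prose, here's a tiny sketch (entirely made-up names, just to illustrate) of a companion's behaviors written out as data you can review and argue about before writing any decision logic:

```python
# A hypothetical breakdown of companion behavior into progressively
# smaller actions.  Every name here is illustrative -- the point is that
# each high-level goal decomposes into simpler, testable pieces.
COMPANION_BEHAVIORS = {
    "follow": ["pick_follow_position", "path_to_position", "match_player_speed"],
    "fight": ["select_target", "move_to_range", "attack", "cast_spell",
              "use_item", "manage_mana"],
    "converse": ["check_story_trigger", "play_dialogue_line"],
}

def print_spec(behaviors):
    """Dump the behavior breakdown -- a cheap way to review requirements."""
    for goal, actions in behaviors.items():
        print(f"{goal}:")
        for action in actions:
            print(f"  - {action}")

if __name__ == "__main__":
    print_spec(COMPANION_BEHAVIORS)
```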

Hint #2: Computers are bad at complex decisions

Wait, what?  But, um, that's not what we were told!  Yet it is absolutely true.  However, computers are really great at making a whole lot of very simple decisions, and by layering those we can build immensely complex behavior.  Therein lies your real challenge as an AI programmer, and in my mind, this is also the true difference between good AI code and bad.  The more you can strip away the clutter and simplify the decisions that matter for the AI, the easier it becomes to separate desirable and undesirable behavior.  Then, once you understand what you want the AI to do in a general sense, you can layer on the complexity in the form of customized solutions at every step of the way.  However, these solutions should once again be built upon a principle of many layers of simple decisions that, when put together, create the appearance of a complex whole.  This is probably the single most important thing I want to pass along here, and I'm bad at saying it, so let me give you some examples to illustrate the point.

In one of my past lives, I did company and platoon command level simulation AI for the US military.  When I first started this, the prior AI guy had taken a monolithic approach with a massively complex algorithm for controlling each action of the 100+ soldiers while they performed some predetermined task.  He refined this for each command requested, microscopically adjusting the behaviors of each and every soldier until at last the simulation performed exactly how he wanted.  When the company was given a "Clear Building" order on a particular (known) building, it would sweep it thoroughly of hostile forces, floor by floor, room by room, in an elaborate predefined ballet that was beautiful to watch.  He spent a huge amount of time tweaking and retweaking it, making sure every tiny detail was perfect.

Except...

Well, it turns out one of the buildings actually had another door over here, see.  And can the simulation handle it if there are 3 enemies in this room instead of 2?  What if there was a potential enemy sniper positioned covering the northern approach and I need to adjust my plan to accommodate?  What if...

Each one of these points would generate a giant cloud of sweat, retuning, and adjustment, because the AI had to special-case exactly that scenario.  Unsurprisingly, the brittleness of the simulation hugely limited what it could be used for.  Instructors wanted to be able to modify scenarios to emphasize a particular teaching point, except that now the dependencies in the AI decision making were so deep that even small changes would cause it to horribly faceplant.

When I took over, I used a very different approach.  Instead of trying to solve a massive problem in one fell swoop, I taught the AI to solve very simple problems and then layered these solutions to create more and more complex behaviors.  The military structure made this particularly easy to visualize, but a similar approach can work for almost any type of AI.

The basic idea was this: each level of decision making would use a tailored set of knowledge (more on that later) to decide what to do.  Each decision would then lead to an action or actions that were dispatched as tasks to the next lower level of AI.  That level of AI would then make its own simple decisions leading to actions dispatched down to the next level, and so forth, until finally it reaches a level at which the actions are very simple indeed and that level of AI performs the behavior specified.  The hierarchy can be of any depth you choose, as long as the decisions are sufficiently clear.  If at any point you find yourself writing very complex decision logic at some level, start thinking about how that decision can be broken into a set of simpler decisions.

So let's look at our example for a moment.  Start by taking a group of soldiers to use.  Let's say I have 9 guys (a 'squad' if you will), consisting of two 4-man fire teams and a squad leader.  We'll go ahead and call our highest level of AI the SQUAD level.  Each of our 4-man teams and the squad leader represent the next lower level in the AI.  We'll go ahead and call this the TEAM level.  It doesn't matter that one of our TEAMs only has 1 guy in it, that's ok.  The SQUAD level of the AI solves all of its problems purely with TEAM level tasks.  So our particular SQUAD knows it has 3 TEAMs to work with and sends them all off to do stuff.  It then gets notified when a TEAM's status changes (it finishes its task, gets shot, etc...) or if a self-chosen trigger fires (time based, event driven, whatever).  The TEAM level of the AI then sits above another level, which we'll call the NPC level.  Each NPC represents a single soldier.  The TEAM level AI solves all of its problems with tasks for specific NPC AIs.  Once again these tasks go down a level to the NPC AI, which solves these increasingly precise and narrow tasks.

So in this case we might tell the SQUAD AI to "move there".  The SQUAD AI then looks for a rough path that is appropriate for its TEAMs and feeds it down.  The TEAM AIs then pick the best specific positions for their NPCs to move to and send these down.  Finally the NPC AI handles all the mechanics of an individual soldier moving to its exact position ("stand", "look", "run", "crouch", etc...).  The important thing here is that every individual decision is very clear and easy, but the branching allows the enacted behavior to become exponentially complex.
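
Here's a toy sketch of that layering in code (the class names and position math are mine, purely illustrative, not the actual simulation): each layer makes one simple decision with its own level of detail and hands smaller tasks to the layer below.

```python
# A toy version of the SQUAD -> TEAM -> NPC hierarchy.  Each layer makes
# one simple decision with its own information, then dispatches smaller
# tasks to the layer below it.  All names and numbers are illustrative.

class Npc:
    def __init__(self, name):
        self.name = name

    def move_to(self, position):
        # Lowest level: the concrete mechanics (stand, run, crouch, ...).
        print(f"{self.name} moving to {position}")

class Team:
    def __init__(self, npcs):
        self.npcs = npcs

    def move_to(self, area):
        # TEAM level: pick a specific spot in the area for each NPC.
        for i, npc in enumerate(self.npcs):
            # Simple placeholder for "choose the best nearby position".
            position = (area[0] + i, area[1])
            npc.move_to(position)

class Squad:
    def __init__(self, teams):
        self.teams = teams

    def move_to(self, destination):
        # SQUAD level: choose a rough area per team, never exact positions.
        for i, team in enumerate(self.teams):
            area = (destination[0], destination[1] + i * 10)
            team.move_to(area)

if __name__ == "__main__":
    fire_team_a = Team([Npc(f"A{i}") for i in range(4)])
    fire_team_b = Team([Npc(f"B{i}") for i in range(4)])
    leader = Team([Npc("SquadLeader")])
    squad = Squad([fire_team_a, fire_team_b, leader])
    squad.move_to((100, 200))  # the "move there" order given to the SQUAD level
```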

Hint #3: Limit information under consideration

As we continually strive to simplify our decision making, we should remember that different levels of the AI don't need the same information.   For example, while a SQUAD level AI might care that this large patch of ground represents a zone of good cover, it does not need to know the 98 separate locations that are considered optimal positions.  Instead the SQUAD AI chooses to move its TEAMs through an area with good cover, and the TEAM AI chooses exactly which of those cover points to use for its individual NPCs.  Exposing every single cover point to the SQUAD AI just bogs down its decision making process, whereas the TEAM AI needs to know all those precise locations.

But wait, we can do better than that.  The TEAM AI doesn't actually care about cover positions that are far away, does it?  No, it won't use those.  Instead, it only cares about ones that are within a very narrow area of consideration.  So once again we cull our data before letting the AI think about it.  This simplifies the decision making process and lets us make good decisions faster, rather than wasting cycles considering places that are well outside our interest.
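
A minimal sketch of that culling step might look like this (the names and the simple distance check are just placeholders for however you actually represent your world):

```python
import math

def cull_cover_points(cover_points, team_position, radius):
    """Hand the TEAM AI only the cover points inside its area of
    consideration.  Everything else is irrelevant to its decision."""
    return [point for point in cover_points
            if math.dist(team_position, point) <= radius]

# Illustrative data: the SQUAD AI never sees this list at all.
all_cover_points = [(5, 5), (12, 30), (200, 310), (14, 28)]
nearby = cull_cover_points(all_cover_points, team_position=(10, 25), radius=20)
print(nearby)  # only the points the TEAM AI should even consider
```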

There's another trick you can use here too, which is to precompute important static (or commonly occurring) information and save it.  For example, when I mentioned 'cover points' above, I'm really talking about places that provide a good, high-visibility spot to shoot from in relative safety.  This is generally determined by some fairly complex terrain analysis that, when performed for 300+ guys at runtime, would bog us down a bunch.  Instead, we've gone ahead and done that analysis during our data export process, and saved the cover point information in an easy-to-digest, area-specific format, ready for the AI to grab and use when needed.  This sort of precomputation is great for static maps, but remember that you can often compute interactions between highly common objects/situations ahead of time, and store that data for use later.
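
As a rough illustration (the function names and the JSON format are my own stand-ins, not a real pipeline), the export step runs the expensive analysis once and the runtime side just loads the per-area results:

```python
import json

def export_cover_points(terrain_areas, analyze_area, path="cover_points.json"):
    """Run the expensive terrain analysis once, at data-export time, and
    store the results keyed by area so the AI can grab them cheaply at
    runtime.  'analyze_area' stands in for whatever analysis you use."""
    precomputed = {area_id: analyze_area(area)
                   for area_id, area in terrain_areas.items()}
    with open(path, "w") as f:
        json.dump(precomputed, f)

def load_cover_points(path="cover_points.json"):
    """At runtime the AI just reads the per-area lists -- no analysis."""
    with open(path) as f:
        return json.load(f)

if __name__ == "__main__":
    # Fake terrain data and a stand-in analysis function, just to show the flow.
    areas = {"courtyard": [(0, 0), (50, 50)], "rooftop": [(60, 0), (90, 30)]}
    export_cover_points(areas, analyze_area=lambda area: [area[0], area[1]])
    print(load_cover_points())
```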

Hint #4:  Random (just not too random) is good for you

Everything we've mentioned above is great for calculating the best behavior possible.  But that's not always a good thing.  Predictability can make for a very stale opponent.  If you are going to play rock/paper/scissors and you know your opponent will always pick rock, you now know how every single game will end.  Have that same opponent randomize between the three equally strong possibilities, and things are interesting again.

Similarly, sometimes that same lack of randomness can lead to situations where the computer is simply 'too good', and feels unfair to play against.  Let's say you're playing against an FPS bot.  If that FPS bot automatically headshots you every single time you enter its view range, you aren't going to have a very interesting experience.  Instead, the AI goal is generally to create something that feels like a human-ish opponent: one that takes a similar amount of time to aim, and one that may randomly miss to a degree similar to a human.
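
A little sketch of what 'human-ish' might mean in practice, with completely made-up numbers you'd want to tune against how real players actually perform:

```python
import random

def bot_shot(distance, reaction_time=0.3, base_accuracy=0.7):
    """Delay the shot by a human-ish reaction time and miss some of the
    time, with accuracy dropping off at range.  The constants here are
    invented for illustration, not tuned values."""
    aim_delay = reaction_time + random.uniform(0.0, 0.2)
    hit_chance = base_accuracy / (1.0 + distance / 50.0)
    hit = random.random() < hit_chance
    return aim_delay, hit

delay, hit = bot_shot(distance=30)
print(f"fires after {delay:.2f}s, {'hit' if hit else 'miss'}")
```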

Building small amounts of randomization into your decision model can lead to less predictability and better replay value, and it compounds upon itself to create a deeper and more varied experience all around.  Just remember, though, that it doesn't feel right for an AI opponent to do stupid things.  It is great to randomize between multiple good paths, but choosing bad paths should be limited carefully.  Usually you're looking to randomize between an optimal choice and a nearly optimal choice, not an optimal choice and a terrible one.  The first of these leads to an interesting interaction.  The second is a head-smacking feeling of "argh, why would it do that?"
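
One simple way to get that "optimal or nearly optimal" behavior is to score your options and then randomize only among the ones close to the best score.  A sketch, with purely illustrative names and numbers:

```python
import random

def choose_action(scored_options, tolerance=0.9):
    """Randomize only among choices whose score is close to the best.
    'scored_options' is a list of (action, score) pairs; tolerance=0.9
    means anything within 90% of the top score is fair game."""
    best = max(score for _, score in scored_options)
    good_enough = [action for action, score in scored_options
                   if score >= best * tolerance]
    return random.choice(good_enough)

options = [("charge", 0.95), ("flank", 0.92), ("throw_spear", 0.90),
           ("stand_still", 0.10)]
print(choose_action(options))  # never picks the terrible option
```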

With a system that includes multiple layers of decision-making, these small randomizations can make the whole thing feel extremely deep and each playthrough unique.  You may be fighting the same group of goblins attempting to overrun your position on the hill, but this time there's one that sneaks around from behind.  The time before, a couple threw spears while running up.  The time before that, half of them held back and supported at range while the other half charged in for hand-to-hand.  The code difference between a model that produces this sort of variety and one that simply picks a single 'best' behavior every time is very small, but the replay possibilities of the first approach are vastly better.

Wrapping Up

Hopefully that at least gives you a framework to start thinking about.  Beyond that, I'd just suggest the same sort of stuff that is good for all code: keep it simple and clear, make use of reusable chunks, and document your decision influencers.  Future generations of code maintainers (which just might end up being you, after you've stopped thinking about it) don't want to have to painfully unravel what you were doing.  Make sure your approach is clear and understandable, and both you and they will be better able to modify and extend it later.

Cheers!
