
Building the AI of F.E.A.R. with Goal Oriented Action Planning

First Encounter Assault Recon proved to be a breakthrough moment for game AI, so let's revisit what made it so special and explore how it works in detail.

Tommy Thompson, Blogger

May 7, 2020

17 Min Read

'AI and Games' is a crowdfunded YouTube series that explores research and applications of artificial intelligence in video games.  You can support this work by visiting my Patreon page.

The AI of F.E.A.R. is a well-worn subject and one that I have already covered on the 'AI and Games' YouTube channel; in fact, it was the very first episode back in 2014! But to mark the channel breaking 100,000 subscribers, I decided to return to the topic and deliver it again in richer detail as part of my sub-series AI 101.

The video game industry has long strived to find accessible and pragmatic approaches for non-player characters (NPCs) to make intelligent decisions quickly and effectively. This has led to innovations ranging from navigation meshes, which help a character move around a 3D environment, to the Finite State Machines driving character behaviour in games such as Half Life. Since the mid-2000s, Behaviour Trees, popularised by Halo 2, have arguably been the most commonly applied technique in the AAA video game industry. But in this piece we're going to explore the other technique that achieved popularity at the same time. Let's take a look at Goal Oriented Action Planning and the game that popularised it: First Encounter Assault Recon.

 

Automated Planning

To understand Goal Oriented Action Planning – and in turn how it is used in F.E.A.R. and many other games – we need to look at the theory from which it derives: an existing AI technology known as automated planning. Automated or 'AI planning' is a process whereby a system attempts to figure out a sequence of actions that will achieve a distant goal set for it by a designer: this sequence is called a plan. To do this, we model the problem in some sort of language or encoding that captures all the information that can be true about the world at that time. These pieces of information are known as facts or predicates, where a simple fact tells us something we might need to know at a later point in time. We store all of the predicates describing what the world looks like at any point in time within what's known as a state.

For example, consider a closed door we label 'Door1'. I might want a non-player character to open Door1, so I need a predicate telling me whether it's open or closed. The current state of our world is therefore one closed door. Now, given I have this information, I can build an action that allows the NPC to open it.

Now each action is usually broken into three parts: the objects involved in the action (in this case a door), the action's preconditions – meaning what facts of the world must be true before I can apply the action – and its effects – which represent how the world changes as a result of completing the action, adding new information to the world state or deleting existing facts that are no longer true. So say I want my non-player character to open Door1: some valid preconditions for an open-door action would be that the door is closed and that the NPC is standing in the same room as the door. Once the action is completed, the effect would be that the door is now open. Now say we wanted to walk from room A to room B, but the door between them is closed. We'd need to make sure our planning model explains that the door connects rooms A and B to each other and that you cannot walk between rooms if the door between them is closed. At that point the system can create a plan – open the door, then walk from room A to room B – changing the state of the world so that the NPC is now in room B.
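
To make this concrete, here is a minimal sketch of how such a world state and action could be encoded, written in C++ to match the language we'll encounter later in FEAR's codebase. The type names and predicate strings are my own illustration rather than anything taken from a shipped game.

    #include <set>
    #include <string>

    // A world state is simply the set of predicates that are currently true.
    using WorldState = std::set<std::string>;

    // A STRIPS-style action: preconditions that must hold before it can run,
    // plus effects split into facts it adds and facts it deletes.
    struct Action {
        std::string name;
        std::set<std::string> preconditions;
        std::set<std::string> addEffects;
        std::set<std::string> deleteEffects;

        bool CanApply(const WorldState& state) const {
            for (const auto& p : preconditions)
                if (state.count(p) == 0) return false;
            return true;
        }

        WorldState Apply(WorldState state) const {
            for (const auto& d : deleteEffects) state.erase(d);
            for (const auto& a : addEffects)    state.insert(a);
            return state;
        }
    };

    // The open-door example: the NPC must be in room A (and, to avoid the
    // "telekinetic NPC" problem discussed below, standing at the door)
    // before it can open it.
    Action openDoor1 {
        "OpenDoor1",
        { "At(NPC, RoomA)", "NextTo(NPC, Door1)", "Closed(Door1)" },
        { "Open(Door1)" },
        { "Closed(Door1)" }
    };

Applying openDoor1 to a state that satisfies its preconditions deletes Closed(Door1) and adds Open(Door1), and that bookkeeping is exactly what a planner chains together into a plan.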

In a nutshell this is what planning is all about: figuring out what to do and in what order to do it. This process of encoding problems and allowing planning systems to solve them is why it has been an active and developing research field for several decades. While F.E.A.R. helped bring planning to games in 2005, planning had already been used in a variety of real-world logistics problems as far back as the 1970s. This includes robotics – ranging from small home-based devices to the likes of the lunar and Mars rovers – control of large-scale mechanical systems such as wind turbines, power stations and manufacturing plants, and complex spatial problems such as search and rescue operations, military planning and disaster relief.

But this hides some of the nastier details underneath. These planning encodings rely on us being able to abstract the problem we're solving into something simple for a planner to solve. Returning to that door-opening problem, I said that the NPC needs to be in room A and that Door1 connects the two rooms. This works fine in a planning model, but there are a bunch of discrepancies between the model and how it would work in real life. For example, where in room A is the door to room B? What direction does the NPC need to move towards the door? In fact, in the planning model the NPC can open the door if they're in a room that the door connects – meaning this wee fella is now apparently telekinetic and can open doors with their mind from very far away? The more realistic interpretation is that the NPC not only needs to be in room A, but also needs to be standing right next to Door1 in order to open it. Hence while planning is great for solving the larger problem of what to do in a given situation, executing the plan of action in the world – be it the real world or a simulation like a video game – is a lot harder.

So now that we know what planning is and how it works, let’s take a look at Goal Oriented Action Planning and how it brings AI planning to video games.

Goal Oriented Action Planning

Goal Oriented Action Planning – or GOAP for short – adapts STRIPS planning – the Stanford Research Institute Problem Solver planning system from 1971 – for use in games. Leading development on this project was Dr Jeff Orkin, the AI lead for Monolith Productions on both 2002’s No One Lives Forever 2 and F.E.A.R., where GOAP was first implemented.

As detailed in Orkin's 2006 Game Developers Conference paper – written while he was studying for his PhD at the MIT Media Lab – the GOAP system is driven by a Finite State Machine. A Finite State Machine is a system whereby an AI exists within one state of behaviour and, when a given event occurs, transitions into another. The FSMs adopted in Half Life had over 80 unique states in the codebase, although whether a given state was available was determined by the type of character the state machine was controlling. Here in GOAP, however, the Finite State Machine has only three states:

  • Moving into a position in the world.

  • Playing an animation.

  • Interacting with what is known as a smart object: an item in the world that AI characters can interact with.

Now you might be a little confused as to how this works, but it's all based on a really clever observation of how non-player character AI works. If you consider an enemy in a game, all they do is play animations in certain circumstances. When an animation is played in the right place at the right time it appears clever, and when several clever animations are played in sequence, it appears intelligent. Hence each of these three states is ultimately playing an animation.

  • GoTo: plays a movement animation (be it walking, running, jumping, diving or climbing) while heading to a specific location.

  • Animate: simply plays an animation. This could be acknowledging the player's presence, shooting their weapon, or any simple idle animation that gives this NPC a bit more character.

  • And lastly, Use Smart Object: used in situations such as having an NPC hit a switch, open a door, sit in a chair or knock a table over. In each case, the animation is tied directly to the object, and quite often in games – as seen in my episode on the AI of BioShock Infinite – the object will tell the character's animation system what it needs to do.

So, instead of a more traditional state machine where each possible behaviour is enumerated as a separate state, the whole thing is executed in a much smaller and highly data-driven implementation. It will execute specific movement actions and animations that are passed into the system, and if several movement, animation and use-smart-object states are executed in the correct sequence, we'll get some smart-looking behaviour.
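
As a rough illustration of that idea, here is what a stripped-down version of such a three-state machine might look like. The types, fields and names below are assumptions made for the sake of the example, not Monolith's actual implementation.

    #include <string>

    // The three states Orkin describes. Each one ultimately plays an
    // animation; the difference is the data driving it.
    enum class FSMState { GoTo, Animate, UseSmartObject };

    struct Vector3 { float x, y, z; };

    // The data handed to the state machine by the currently active plan action.
    struct FSMStateData {
        FSMState    state;
        Vector3     destination;    // used by GoTo
        std::string animation;      // used by Animate (and by GoTo for the gait)
        int         smartObjectId;  // used by UseSmartObject
    };

    class CharacterFSM {
    public:
        // A plan action "activates" by pushing its data into the machine.
        void SetState(const FSMStateData& data) { current = data; }

        void Update() {
            switch (current.state) {
            case FSMState::GoTo:           /* steer toward destination, play move anim */ break;
            case FSMState::Animate:        /* play current.animation */                   break;
            case FSMState::UseSmartObject: /* let the smart object drive the animation */ break;
            }
        }
    private:
        FSMStateData current{};
    };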

But how does it know which states to be in, in what order, and what information to pass into each state? Well, that's where the planning part of GOAP kicks in. Using a planning encoding like we talked about before, the system creates plans that solve specific problems and uses A* search to find the best actions to take in solving each one. The actions within the resulting plan are then translated into specific FSM states, with data passed in for locations to move towards, animations to play and smart objects to interact with.
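
To sketch that search step, the function below reuses the WorldState and Action types from the earlier door example and runs a heavily simplified forward A* search, using the number of unsatisfied goal facts as its heuristic. The actual planner in FEAR assigns costs to actions and, as Orkin describes it, searches from the goal backwards, but the principle is the same.

    #include <queue>
    #include <set>
    #include <string>
    #include <vector>

    // Nodes are world states, edges are applicable actions, and the heuristic
    // is how many goal facts are still missing. Returns an empty plan on failure.
    std::vector<Action> FindPlan(const WorldState& start,
                                 const std::set<std::string>& goal,
                                 const std::vector<Action>& actions)
    {
        struct Node { WorldState state; std::vector<Action> plan; int g; };

        auto h = [&](const WorldState& s) {
            int missing = 0;
            for (const auto& fact : goal) if (s.count(fact) == 0) ++missing;
            return missing;
        };
        auto cmp = [&](const Node& a, const Node& b) {
            return a.g + h(a.state) > b.g + h(b.state);   // min-heap on f = g + h
        };

        std::priority_queue<Node, std::vector<Node>, decltype(cmp)> open(cmp);
        std::set<WorldState> closed;
        open.push({ start, {}, 0 });

        while (!open.empty()) {
            Node node = open.top(); open.pop();
            if (h(node.state) == 0) return node.plan;          // goal satisfied
            if (!closed.insert(node.state).second) continue;   // already expanded
            for (const auto& a : actions) {
                if (!a.CanApply(node.state)) continue;
                Node next{ a.Apply(node.state), node.plan, node.g + 1 };
                next.plan.push_back(a);
                open.push(std::move(next));
            }
        }
        return {};   // no plan found
    }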

To better understand how this works in practice, let's crack open the hood of FEAR and explore how GOAP is adopted within the game.

G.O.A.P. in F.E.A.R.

So let's take a deeper look at how GOAP is applied in FEAR. Portions of the game's C++ codebase have been publicly available for some years now: first with the original FEAR SDK released by Monolith, followed by the FEAR public tools as modders have continued support for the game. With very little effort you can crack it open in Visual Studio and read through the AI source code. So let's walk through how GOAP is applied within the codebase; for those wanting to try their hand themselves, I've left links below for you to check out.

Each character in FEAR that has any sort of AI needs to have goals assigned to it. These are the goals for which the planner will run a search through the action space in order to find a plan of action for that character to execute. This applies to all active non-player characters in the game, from the soldiers and assassins to the nightmare spectres at the end of the game, and even the rats running around on the ground – which is a super important point I'll come back to in a minute. Meanwhile, other characters, such as Jin Sun-Kwon and Jankowski, and even Alma herself, are all scripted by designers instead.

Without a goal, the AI characters will do literally nothing. They need to be given goals to accomplish, which the planning system will then aim to resolve. There are just shy of 70 goals encoded within the game files. In each case, these goals can be initialised, updated and terminated and, more critically, have functions that allow them to calculate their overall priority at that point in time in the game. So for example, if a soldier has been assigned the Patrol and KillEnemy goals but it has no knowledge of the player being nearby, then KillEnemy will have a priority of 0, while Patrol has a much higher priority given there is a patrol node assigned to that character. However, in the event that it has knowledge of the player being nearby, the Patrol goal has a much lower priority, given the character recognises that the player is now a threat to it.
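
As a purely hypothetical sketch of how that priority mechanism could be structured (the real SDK's class and method names differ, and the numbers below are invented), each goal scores itself and the highest scorer wins access to the planner:

    #include <memory>
    #include <vector>

    // A made-up snapshot of what the character currently knows.
    struct NPCKnowledge {
        bool knowsOfThreat  = false;  // has the NPC seen or heard the player?
        bool hasPatrolRoute = true;   // has a designer assigned patrol nodes?
    };

    // Base class for goals: each goal can be initialised, terminated and,
    // crucially, scores its own priority for the current situation.
    class Goal {
    public:
        virtual ~Goal() = default;
        virtual void  Activate()   {}
        virtual void  Deactivate() {}
        virtual bool  ReplanRequired() const { return false; }  // see later section
        virtual float CalculatePriority(const NPCKnowledge& k) const = 0;
    };

    class KillEnemyGoal : public Goal {
    public:
        float CalculatePriority(const NPCKnowledge& k) const override {
            // Worth nothing until the NPC actually knows a target exists.
            return k.knowsOfThreat ? 0.9f : 0.0f;
        }
    };

    class PatrolGoal : public Goal {
    public:
        float CalculatePriority(const NPCKnowledge& k) const override {
            if (!k.hasPatrolRoute) return 0.0f;
            // Patrolling matters far less once a threat is known.
            return k.knowsOfThreat ? 0.1f : 0.6f;
        }
    };

    // Each think-step, the goal with the highest priority wins the planner's time.
    Goal* SelectGoal(const std::vector<std::unique_ptr<Goal>>& goals,
                     const NPCKnowledge& k) {
        Goal* best = nullptr;
        float bestPriority = 0.0f;
        for (const auto& g : goals) {
            float p = g->CalculatePriority(k);
            if (p > bestPriority) { bestPriority = p; best = g.get(); }
        }
        return best;   // may be null if nothing is relevant right now
    }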

Now in order for the planner to solve that goal, the character will have a collection of actions available to it that it can search through. There are 120 actions in the game, ranging from simple animations to different attack variants, moving to locations, using smart object nodes, reloading weapons or just standing idle, all of which have their own corresponding C++ classes. The actions are set up in code, but not all NPCs have access to all actions. Designers use a separate database editor to assign specific actions to each enemy type, allowing them to customise just how versatile each character is when solving its goals.
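
The split between actions defined once in code and action sets assigned per character type might look something like this; the archetype names and action identifiers are illustrative only, not taken from the game's database.

    #include <map>
    #include <string>
    #include <vector>

    // Actions exist once in code; which of them a character may use is data.
    enum class ActionId { GotoNode, AttackRanged, AttackMelee, Reload,
                          UseSmartObject, Dodge, Idle };

    using ActionSet = std::vector<ActionId>;

    // In FEAR this mapping lives in a designer-edited database; here it is
    // simply hard-coded for illustration.
    std::map<std::string, ActionSet> BuildActionSets() {
        return {
            { "Soldier", { ActionId::GotoNode, ActionId::AttackRanged,
                           ActionId::Reload,   ActionId::UseSmartObject,
                           ActionId::Dodge,    ActionId::Idle } },
            { "Rat",     { ActionId::GotoNode, ActionId::Idle } },
        };
    }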

With a goal assigned to a given character, it will await a chance to access the planner and find a solution to its current situation. This is where the larger issues I hinted at earlier come to light. The system needs to find actions that this particular character can execute, search through the possible actions within the state space and find a plan that reaches the goal, but all of this needs to be revalidated against the game during execution. This is because something can happen, either during planning time or when executing the plan, that can break it. Returning to my door example from earlier: if you're generating a plan to open the door and move between rooms, but during the planning process or as you're executing it the door is opened by someone or something else, that invalidates the plan.

This is an issue that the vast majority of planning systems face: they assume that only the planner can change the world, so they can't accommodate other characters doing things in the world while the plan is carried out. None of the enemy AI in FEAR know that each other exists, and co-operative behaviours are simply two AI characters being given goals that line up nicely to create what look like coordinated behaviours when executed. Not only are they unable to walk towards navigation points in the world where other characters are standing, they have zero knowledge of those characters' existence. Hence if another character does something that would break the proposed plan, each character needs to be able to recognise that, stop what it's doing and ask for another plan – or change its goal if necessary.

This is handled in three different ways. Firstly, once a plan is devised, the system runs a quick validation check that takes a copy of the world state, executes the plan on that copy and ensures it will satisfy the goal, with each action's preconditions and effects working as intended. Secondly, each goal can override the ReplanRequired function, which continually checks whether the current plan should be abandoned and a replacement retrieved from the planner. A good example of this is when an enemy character is shot while executing a plan to satisfy the KillEnemy goal. However, this is only allowed in the event the character is not executing an animation that cannot be interrupted. Lastly, each action in the plan needs to be validated again during execution, with its preconditions and effects being checked. In the event they're not satisfied as intended, the system will bail out of the plan and force a re-plan to occur.
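
Pulling those three checks together, a rough sketch of the flow might look like the following, reusing the WorldState, Action and Goal types from the earlier sketches. None of these function names come from the shipped code.

    #include <set>
    #include <string>
    #include <vector>

    // Check 1: dry-run the freshly devised plan on a copy of the world state
    // and make sure it actually reaches the goal before committing to it.
    bool ValidatePlan(WorldState simulated,
                      const std::vector<Action>& plan,
                      const std::set<std::string>& goal)
    {
        for (const auto& step : plan) {
            if (!step.CanApply(simulated)) return false;
            simulated = step.Apply(simulated);
        }
        for (const auto& fact : goal)
            if (simulated.count(fact) == 0) return false;
        return true;
    }

    // Checks 2 and 3: run continually while the plan is executing.
    bool ShouldReplan(const Goal& activeGoal,
                      const Action& nextAction,
                      const WorldState& liveState,
                      bool animationInterruptible)
    {
        // 2. The active goal can demand a replan (e.g. KillEnemy when the NPC
        //    is shot), but only if the current animation can be interrupted.
        if (activeGoal.ReplanRequired() && animationInterruptible) return true;

        // 3. Re-validate the next action's preconditions against the live
        //    world state; if they no longer hold, bail out and replan.
        if (!nextAction.CanApply(liveState)) return true;

        return false;
    }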

However, if that action is going to work as intended, the action is 'Activated' in the codebase. When that happens, it tells the Finite State Machine which of the three states to transition into and passes over all of the data it needs to complete the state's execution. So say, for example, a soldier needs to reload: it transitions into the Animate state and passes in the specific animation parameters needed so that the character will reload their gun.
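
Tying that back to the earlier state machine sketch, an activated reload step might hand its data to the FSM along these lines (again, illustrative names only, including the animation asset name).

    // How a reload step might 'activate' by pushing the Animate state and its
    // parameters into the character's state machine, reusing the FSM types
    // from the earlier sketch.
    void ActivateReloadStep(CharacterFSM& fsm) {
        FSMStateData data{};
        data.state     = FSMState::Animate;
        data.animation = "Reload_Rifle";   // made-up animation asset name
        fsm.SetState(data);
    }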

Once an AI character completes its devised plan successfully, it will then be assigned a new one and can plan again. By managing both when plans succeed and when they fail, the system helps maintain the pace of the game and ensures the AI always knows what to do and doesn't stand around idle waiting to be slow-mo shot in the face. In truth, plans in FEAR are often rather short, typically 1-2 actions long, with only a handful ever reaching 3 or 4 actions in length. This largely makes sense given the overall pacing of the game.

That said, this goal management and planning process does create a problem, one that even the developers at Monolith didn't know about for many years. As detailed in a 2014 research paper by Professor Eric Jacopin, in which he analyses the performance of the GOAP planner in FEAR, this process of continual replanning creates a small performance overhead in some levels of the game, thanks to the rats. The rats in FEAR are – as I mentioned – using the same system as the soldiers and other combat AI. Typically their plans are pretty simple: move to another location in the world, preferably away from the player if they're nearby. But the need for the rats to replan doesn't take into consideration whether the player is nearby. Hence you can meet a couple of rats in the opening seconds of a level and they'll still be running around planning new actions 20 minutes later while you're on the other side of the map.

This is far from an exhaustive overview of the codebase, and I do encourage anyone who can work their way around C++ to have a read through the code; it's really insightful and rewarding for game developers.

Closing

FEAR's use of Goal Oriented Action Planning is still held up to this day as producing some of the most exciting and fun enemy opponents to come across in modern video games. And while it's now been almost 15 years since the game's release, GOAP has continued to have a lasting impact within the video games industry. While far from an exhaustive list, there are many highly popular titles and cult classics that have adopted the methodology or something similar to it.

This includes the likes of Condemned: Criminal Origins, S.T.A.L.K.E.R.: Shadow of Chernobyl, Just Cause 2, Deus Ex: Human Revolution, the 2013 reboot of Tomb Raider and more recent titles by Monolith such as Middle-Earth: Shadow of Mordor and Shadow of War, as well as modified versions in Transformers: War for Cybertron and Empire and Napoleon: Total War. Planning is still a popular technique in video games – though arguably not as common as, say, behaviour trees. But over time GOAP and its STRIPS-style approach have been adopted less frequently, with more contemporary titles adopting another planning technology known as Hierarchical Task Network or HTN planning, which has been used in titles such as Killzone 2, Max Payne 3 and Dying Light. I'll be dedicating a future segment to HTN planning, but I've already covered the implementation of this planning method in Horizon Zero Dawn.

But for now, I hope this helps everyone better understand not only why people continue to laud the AI of FEAR, but also the underlying technology being adopted. I've listed some other useful resources on GOAP below for you to check out.


 

 
