

In this second and final entry on the AI of Horizon Zero Dawn, I explore the sensor systems used by individual machines, the animation tools and the two distinct navigation systems on land and in the air.

Tommy Thompson, Blogger

February 6, 2019

13 Min Read

AI and Games is a crowdfunded series about research and applications of artificial intelligence in video games.  If you like my work please consider supporting the show over on Patreon for early-access and behind-the-scenes updates.

In part 1 of my case study on Horizon Zero Dawn - Guerrilla Games' 2017 PlayStation exclusive - I explored how the game creates herds of AI-controlled machine animals. This relies on an agent hierarchy in which each machine makes decisions about how to behave using a Hierarchical Task Network planner, while agents are also grouped together to dictate their roles and responsibilities as part of a herd. This all sits within a system known as 'The Collective', which maintains the ecosystem of all machines in the world as you play.

In this closing entry, we're going to look more closely at the systems each individual machine can utilise as part of its core behaviour. This includes the likes of sensor systems, navigation for both land and air machines, and how the execution of AI behaviours is heavily tied into the animation systems to give each machine an unsettling yet realistic behaviour.

Sensors and Movement

So first up, let's take a look at the sensors machines use to trigger AI responses, as well as how animation ties into the overall execution of their behaviour. There is a significant number of unique sensors a machine can use: visual sensors such as the Watcher's eye, radar and proximity sensors on Longlegs, aural sensors that pick up everything from explosions at a distance to stones thrown close by, as well as the ability to sense players colliding with them directly. Each machine has a collection of these sensors calibrated with its own sensitivity values, making it easier to sneak up on, say, a Watcher or Grazer, but a lot more challenging to catch a Stalker unawares.

Now, a traditional sensor system might simply register events that occur around an AI - it either 'sees'/'hears' something or it doesn't - but the sensor systems here are a lot more nuanced. This is achieved through information packets attached to objects that can stimulate one of a machine's sensors: this includes the player, other NPCs, rocks, fired arrows, other machines and wildlife. This data tells the receiver - the machine that sensed something - what it is that it detected and what state it is in. Hence machines, as well as human NPCs, can tell the difference between a dead body lying in front of them and an arrow that whizzed past their head and missed, but it also ensures that a player hiding in long grass or behind trees cannot be seen, given the machines recognise the player isn't visible in that state.

Each AI character - machine and human alike - can handle and interpret sensory data in a different way, so certain information might be ignored by some characters while others react to it promptly. In fact, depending on the strength of a given machine's sensor, the amount of data it can read from a sensory event may be reduced. This helps shape the emergent properties of the game, such that each character type responds to information in its own unique way.
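To make that a little more concrete, here's a minimal sketch in Python of how stimulus packets and per-machine sensor calibration might fit together. Every name and value here is my own assumption for illustration - Guerrilla's actual implementation lives inside the Decima engine and will look nothing like this - but it captures the idea of the same event reading differently depending on which machine senses it.

```python
from dataclasses import dataclass

@dataclass
class StimulusPacket:
    """Information attached to anything that can trigger a sensor."""
    source: str          # e.g. "player", "arrow", "rock"
    kind: str            # "visual" or "aural"
    position: tuple      # world position of the event
    intensity: float     # how loud/visible the event is (0..1)
    concealed: bool      # e.g. the source is hiding in tall grass

@dataclass
class Sensor:
    """One sensor on a machine, with its own sensitivity calibration."""
    kind: str
    sensitivity: float   # higher = easier to detect (0..1)

    def perceive(self, stim: StimulusPacket) -> float:
        """Return how strongly this sensor registers the stimulus (0 = ignored)."""
        if stim.kind != self.kind:
            return 0.0
        if stim.kind == "visual" and stim.concealed:
            return 0.0                      # hidden targets produce no visual signal
        return stim.intensity * self.sensitivity

# A Watcher is easier to sneak past than a Stalker (illustrative values only).
watcher_eye = Sensor(kind="visual", sensitivity=0.4)
stalker_eye = Sensor(kind="visual", sensitivity=0.9)

footstep = StimulusPacket("player", "visual", (10, 0, 3), intensity=0.5, concealed=False)
print(watcher_eye.perceive(footstep))   # 0.2  -> likely below any reaction threshold
print(stalker_eye.perceive(footstep))   # 0.45 -> likely triggers an investigation
```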

Now, should an AI make a decision based on this sensory data, we still have to make sure it looks realistic. When a machine decides on any action, such as moving to a new location, investigating a disturbance or attacking the player, there is still the issue of animating it during execution such that it looks as realistic as possible. The animation of these machines is a big challenge, given that they need to resemble their animal inspirations while also exhibiting distinctly machine-like behaviour at times. This requires both the navigation and combat systems to pay attention to the distance the machine is going to traverse, and an animation toolchain that adjusts the root bones of animations and warps them to suit the perceived distance and time required.

This ensures that regardless of how far the machine is moving and how fast it is doing so, it can start the animation, move into the main part of the behaviour and then blend out correctly at the right time. This is important for things such as running to points, where the machine needs to slow down and stop at the right spot, but it's even more relevant in combat. Many of the animations used for attacks have two distinct sequences: the wind-up, which telegraphs the attack, followed by the big finish where the damage is dealt. Horizon Zero Dawn uses a similar method to that discussed in my case study on the AI of DOOM, where the system controls the current locomotion of the machine, blends movement or attack animations in at specific points and then ensures the machine lands or stops in the right place at the finish.
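As a rough illustration of the warping idea, here's a small Python sketch that scales a clip's root motion so a machine covers exactly the distance it needs while keeping the motion at a believable speed. The clamping values and function name are entirely my own assumptions, not Guerrilla's toolchain.

```python
def warp_root_motion(clip_distance: float, clip_duration: float,
                     target_distance: float) -> dict:
    """
    Scale an animation clip's root motion so the character covers
    `target_distance` rather than its authored `clip_distance`.
    """
    distance_scale = target_distance / clip_distance
    # Scale the root translation so the clip genuinely covers the new distance...
    translation_scale = distance_scale
    # ...and adjust playback speed so the apparent velocity stays close to the
    # authored one, clamped so the motion remains readable.
    playback_rate = max(0.8, min(1.25, 1.0 / distance_scale))
    return {
        "translation_scale": translation_scale,
        "playback_rate": playback_rate,
        "effective_duration": clip_duration / playback_rate,
    }

# An attack lunge authored to cover 3m over 1.2s, but the player is 4m away:
print(warp_root_motion(clip_distance=3.0, clip_duration=1.2, target_distance=4.0))
```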

Navigation

Now, there is still one big issue I haven't yet talked about, and that is navigation. Ensuring these machines can wander around the environment is a real challenge, given that this large variety of unique enemy types comes in all different sizes and shapes. They need to be able to move through terrain in a way that makes sense for them, but they also need to recognise changes in local geometry and either adapt to them or simply ignore them depending on their type. This requires a commonly used AI tool known as a navigation mesh. A nav mesh stores information about how a given character can move across the map, based on what are perceived to be obstacles in the world. While a nav mesh can be calculated at runtime, they're often built or baked before the game is released and loaded into memory when necessary.

Given Horizon Zero Dawn has such a large map and only specific segments of it are relevant at a given point in time - since the AI are only active and moving around if they're near you - the navigation mesh is built at runtime, but only around the immediate region of the player. But the thing is, there isn't just one nav mesh, there are six of them! Four of them cater for character movement based on the size of the machine: small, medium, large and extra-large. Hence humans can move around on the small mesh alongside Watchers, while the likes of the Thunderjaw has a nav mesh pretty much all to itself. Plus there are two extra nav meshes: one for swimming machines such as the Snapmaw, and a unique mesh that ensures machines stand in good locations should the player be trying to mount them.
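At its simplest, you can think of this as a lookup from machine archetype to the mesh it path-finds on - something like the toy Python table below. The groupings are my own guesses for illustration; beyond the examples mentioned in the talks, I don't know exactly which machines share which mesh.

```python
# Hypothetical mapping of machine archetypes to the ground nav mesh they query.
# (The swimming mesh and the mounting-position mesh sit alongside these four.)
NAV_MESH_BY_SIZE = {
    "small":       ["human", "watcher", "grazer"],
    "medium":      ["longleg", "stalker"],
    "large":       ["behemoth", "rockbreaker"],
    "extra_large": ["thunderjaw"],
}

def nav_mesh_for(machine: str) -> str:
    """Return the name of the nav mesh a given machine should path-find on."""
    for mesh, machines in NAV_MESH_BY_SIZE.items():
        if machine in machines:
            return mesh
    raise KeyError(f"no nav mesh registered for '{machine}'")

print(nav_mesh_for("thunderjaw"))   # extra_large
print(nav_mesh_for("watcher"))      # small
```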

In each case, obstacles can block or alter a navigation mesh's structure, and the system recomputes changes in real-time such that moving obstacles (and even other machines) can impact the ability to move around the space. What's interesting is that obstacles have differing properties and can either prove to be completely impassable or simply undesirable to walk across - but much of that is dependent on the state of the machine's AI behaviour. As mentioned in part 1, machine patrols actively avoid stealth vegetation when generated, but when a machine is investigating a local disturbance, grass is still considered undesirable yet it will walk through it if necessary. The same principle applies to small rocks and trees: these are impassable obstacles, except for larger machines such as Behemoths, Rockbreakers and Thunderjaws. These beasts can smash rocks apart and uproot trees, but only if they're in an angered state or giving chase to the player. Outside of that behaviour, they'll treat them just like any other obstacle.
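Here's a hedged sketch of how those state-dependent traversal rules might be expressed as area costs fed to a pathfinder. The numbers, state names and function shape are all invented for illustration; the real system presumably tags nav mesh polygons with flags rather than calling a function like this.

```python
import math

def area_cost(area: str, machine_size: str, state: str) -> float:
    """Illustrative traversal cost for a nav mesh area, modulated by the
    machine's current behaviour state. math.inf marks an area as impassable."""
    if area == "stealth_grass":
        # Patrols route around grass; investigators push through it reluctantly.
        return 50.0 if state == "patrol" else 5.0
    if area in ("small_rock", "tree"):
        # Only large machines in an angered or chasing state smash through these.
        if machine_size in ("large", "extra_large") and state in ("angered", "chasing"):
            return 10.0
        return math.inf
    return 1.0  # plain open ground

print(area_cost("stealth_grass", "small", "patrol"))    # 50.0 -> avoided
print(area_cost("tree", "extra_large", "chasing"))      # 10.0 -> smashed through
print(area_cost("tree", "medium", "chasing"))           # inf  -> impassable
```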

Moving Through the Air

While this navigation toolchain caters to land-based machines of all shapes and sizes, it doesn't work at all for those based in the air. Non-player characters that move through the air not only have to be aware of nearby obstacles, such that they don't crash into trees or cliff faces, but they also need to be wary of the elevation of nearby geometry. The world of Horizon Zero Dawn is full of rolling hills, forests, rock outcrops and steep mountain climbs. The two flying machine types - the Glinthawk and the Stormbird - need to know how to navigate the air such that they can take off, fly a patrol route, land, and also swoop down and attack the player when necessary. To achieve this, the game not only has the nav mesh system on land, but a completely separate navigation system in the air.

This proved to be a challenge for the AI team on the game. The technique used is known as hierarchical path planning over MIP maps. MIP mapping is a technique used in computer graphics that aims to minimise the memory overhead of a texture by storing a collection of copies of the same image at progressively lower resolutions. It's ideal for managing level of detail in games, so that objects hundreds of metres away can still be visible but use less texture memory than those directly in front of the player, where you need them to be the highest quality possible. This approach was considered because when a machine is flying a path around the world, it doesn't need to know with complete accuracy the local geometry of where it will be a minute from now, but it really needs to know the lay of the land immediately around it should it decide to land. The path planning system for aerial machines applies MIP mapping to the height map of the local geometry - a data structure that tells us the elevation of a given x/y position of the world - with four levels that become increasingly more detailed and realistic as you move from the top level down. Level 3 is the simple and abstract model, while level 0 is a pretty accurate height map of the world.
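To show what that looks like in data terms, here's a small Python/NumPy sketch that builds a four-level pyramid from a height map. Since the system is described as being based on the maximum height of a region, each coarser level here takes the max of the 2x2 block beneath it - a conservative choice that guarantees a path clearing the coarse map also clears the real terrain. The exact construction Guerrilla uses may well differ.

```python
import numpy as np

def build_height_mips(height_map: np.ndarray, levels: int = 4) -> list:
    """
    Build a pyramid of height maps, level 0 being the full-resolution map.
    Each coarser level stores the maximum height of the 2x2 block below it.
    """
    mips = [height_map]
    for _ in range(levels - 1):
        h = mips[-1]
        # Max-pool 2x2 blocks (assumes power-of-two dimensions for brevity).
        coarse = h.reshape(h.shape[0] // 2, 2, h.shape[1] // 2, 2).max(axis=(1, 3))
        mips.append(coarse)
    return mips

terrain = np.random.rand(64, 64) * 100.0   # toy 64x64 elevation grid in metres
pyramid = build_height_mips(terrain)
print([m.shape for m in pyramid])          # [(64, 64), (32, 32), (16, 16), (8, 8)]
```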

Much like the navigation mesh, the MIP maps are built at runtime when needed, given a machine doesn't need to know the entire world's elevation data when flying within a fixed region. When a machine needs to fly to a location, the flying navigation AI starts by running the A* search algorithm over the highest MIP level, hence it calculates the simplest version of its flight path against a rough version of the geometry. The A* costs make flying up and over obstacles more expensive than flying around them, hence you'll see machines glide around mountains and clifftops more often than fly over them. Each call to the A* algorithm only has a fixed number of iterations, so once the path is completed on the simplest MIP map (known as level 3), the system takes a given segment of that path and refines it by burrowing down to MIP levels 1 and 0, making it more realistic and respecting the geometry more accurately. It then smooths the paths out, removing steep slopes and sharp turns to make them more natural. This system works really well, given that any flying machine in the air always has a flight plan - even when it's rubbish - and it can then boil it down to something more practical by repeatedly calling the search algorithm to refine the path. It's also quite memory efficient, but it does have one caveat: given it's based on the maximum height of a given region of the map, flying machines cannot fly under bridges or rock outcrops - though much of the time, as a player, you won't really notice.
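The sketch below puts those pieces together: a budgeted A* over a height-map grid where climbing costs extra, planned first on a coarse level and then refined near the start at full resolution. It's a toy version of the idea under my own assumptions (two levels rather than four, Manhattan movement, invented cost values), not Guerrilla's implementation.

```python
import heapq
import itertools
import numpy as np

def a_star(height, start, goal, climb_penalty=4.0, max_iters=2000):
    """Grid A* over a height map. Climbing costs extra, so paths prefer to go
    around high ground. The iteration budget is fixed, and a partial path is
    returned if the goal isn't reached in time."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    counter = itertools.count()   # tie-breaker so heap entries never compare nodes
    open_set = [(h(start, goal), 0.0, next(counter), start, None)]
    came_from, best_g = {}, {start: 0.0}
    node = start
    for _ in range(max_iters):
        if not open_set:
            break
        _, g, _, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < height.shape[0] and 0 <= nxt[1] < height.shape[1]):
                continue
            climb = max(0.0, float(height[nxt]) - float(height[node]))
            ng = g + 1.0 + climb_penalty * climb
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt, goal), ng, next(counter), nxt, node))
    # Reconstruct the best path found so far (complete or partial).
    end = goal if goal in came_from else node
    path = []
    while end is not None:
        path.append(end)
        end = came_from[end]
    return path[::-1]

def plan_flight(mips, start, goal):
    """Plan on the coarsest level first, then refine only the first leg at full
    resolution; the rest of the route stays rough until it's actually needed."""
    scale = 2 ** (len(mips) - 1)
    coarse = a_star(mips[-1], (start[0] // scale, start[1] // scale),
                    (goal[0] // scale, goal[1] // scale))
    leg_end = coarse[min(4, len(coarse) - 1)]
    fine = a_star(mips[0], start, (leg_end[0] * scale, leg_end[1] * scale))
    return coarse, fine

terrain = np.random.rand(64, 64) * 100.0                       # toy elevation grid
coarse_level = terrain.reshape(32, 2, 32, 2).max(axis=(1, 3))  # one mip level for brevity
coarse_path, fine_path = plan_flight([terrain, coarse_level], (2, 2), (60, 60))
print(len(coarse_path), len(fine_path))
```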

Between the land-based nav mesh and the air-based MIP maps, flying machines can then coordinate how to attack, land, dive attack and even crash in a way that respects the geometry. A machine hovering above the player while attacking is still using the pre-calculated flight plan; it just isn't necessarily moving directly forward along that path, and plays the corresponding animation instead. The velocity of the machine is tied to whether it's flying, gliding or hovering, and as such they can circle you in the air in a realistic fashion while still using the same navigation tool.

Take-offs and landings use a separate system that talks between the flight navigation and the ground-based navigation mesh: it searches for valid positions on the nav mesh it can land on, typically points that are slightly higher off the ground than the local average, and then adjusts angles and velocity accordingly. Once it has landed, it uses the corresponding nav mesh based upon the machine's size. The same principle applies when they crash too, except this time the only valid landing positions are based on the machine's current heading, and while it might look less graceful, it's fundamentally using the same tools. The specifically programmed edge case is the Stormbird's dive attack. Stormbirds will circle the player, then come crashing down into the ground at your current position. It's using the same systems, but in a much more dramatic fashion. One added caveat: when circling the player, the Stormbird will often wait until it blocks out the sun before making the attack. You might have noticed this when playing the game yourself, and it is intentional. During testing of the Stormbird AI, the QA team noticed that it would periodically block the sun based on where you were standing, and this made the attack all the more disorienting as the light shifted and blinded you during the dive. At the time this was purely accidental, but afterwards the AI team went out of their way to ensure it happens more frequently and deliberately.
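As a final sketch, here's roughly how landing-spot selection could be scored, preferring nav mesh points that sit a little above the local average height. The scoring function and thresholds are my own invention to illustrate the idea, not the game's actual heuristic.

```python
import random

def pick_landing_spot(candidates, local_average_height):
    """
    Choose a nav mesh position to land on: prefer points slightly above the
    local average height (a rock or ledge), so the machine settles somewhere
    prominent rather than in a dip.
    """
    def score(point):
        x, y, z = point
        elevation_bonus = z - local_average_height
        # Reward being a little above average, but penalise anything extreme.
        return elevation_bonus if 0.0 <= elevation_bonus <= 3.0 else -abs(elevation_bonus)
    return max(candidates, key=score)

# Toy candidate points sampled from the ground nav mesh around the machine.
points = [(x, y, random.uniform(-2.0, 5.0)) for x in range(5) for y in range(5)]
print(pick_landing_spot(points, local_average_height=0.0))
```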

Closing

Horizon Zero Dawn delivers an experience unlike anything seen before, as a world teeming with mechanical life plays host to the tale of Aloy and the mysteries of her past. The AI and gameplay systems of the machines are critical in building this apocalyptic future for players to explore. As we've seen over these two entries, this was a tremendous effort by a team of around 10 people across several years of development. Delivering AI systems at this scale that work well in massive open-world games is only becoming more difficult as games continue to grow, hence it's vital for the game development community that these good practices are shared with the wider world so we can learn from one another. Plus, it's fun to learn about how games work and to appreciate the efforts of those who worked so hard to bring you giant mechanical Tyrannosaurs you can fight while riding on robotic horseback. I mean, how cool is that?

References

  • Julian Berteling, 2018. "Beyond Killzone: Creating New AI Systems for Horizon Zero Dawn", GDC 2018.

  • Arjen Beij, 2017. "The AI of Horizon Zero Dawn", Game AI North 2017.

  • Wouter Josemans, 2017. "Putting the AI back into Air: Navigating the Air Space of Horizon Zero Dawn", Game AI North 2017.
