
Book Extract: Artificial Intelligence for Computer Games - 'Perceiving'

In an extract from Funge's book, he deals with how an AI programmer obscures information from NPC AI to make it behave more realistically and less 'perfectly', looking at visibility, noisy sensors, and predictor percepts.

John Funge, Blogger

January 21, 2005


This article is excerpted from Artificial Intelligence for Computer Games: An Introduction, and discusses how an AI programmer obscures information from NPC AI to make it behave more realistically and less 'perfectly'.

Partial Observability

If a controller is given full and direct access to the game-state, it has complete access to all the information about the game world and the game world is said to be fully observable by the controller. In contrast, if access to the game-state is restricted, then the game world is said to be only partially observable. This section introduces some percepts that make the world only partially observable from an NPC controller's perspective.

TgCharacter* tgPerception::calcMyNearestCharacter() const
{
    int which = tgGameState::nobody;
    tgReal dMin = Inf;
    for (int i = 0; i < gs->getNumCharacters(); i++)
    {
        if (i == myIndex) { continue; } // Don't include me!
        tgReal d = calcMyDistanceToCharacter(i);
        if (d < dMin)
        {
            dMin = d;
            which = i;
        }
    }

    assert(tgGameState::nobody != which);
    return gs->getCharacter(which);
}

tgReal tgPerception::calcMyDistanceToNearestCharacter() const
{
    tgRealVec p(getMyPosition());
    p.subtract(calcMyNearestCharacter()->getPosition());
    return p.norm();
}

Listing 2: Definition of percept to calculate the distance to my nearest character.

When the world is only partially observable, then many different game-states are indistinguishable. This is often a good thing as it means controllers can be made more general. That is, many percepts are defined so as to hide unimportant details and emphasize the crucial similarities between game-states. But there are occasions when information is deliberately hidden to avoid giving NPCs an unfair advantage. For example, if the tagged character is hidden behind an obstacle, then an NPC is generally not allowed to know its position. From the NPC's point of view, the tagged character could potentially be in any position that is obscured by the obstacle. The set of game-states in which the character is in one of the possible hidden positions it could occupy is referred to as a belief state.


Figure 1: Simulated visibility and audibility.

The next few subsections describe some specific ways for simulating some of the limitations of perception that put NPCs and player characters on a more level footing.

Visibility

The image produced by the renderer typically shows the world from the direction the player character is looking. Therefore, unless the player character turns her head, she cannot see what is behind her. Many games mitigate this limitation by augmenting the display with an overhead map view. Proper rendering of sound also lets a player hear something sneaking up behind her. Nevertheless, in many games the renderer makes it possible for NPCs to hide from player characters behind opaque objects. To be fair, the converse should also be true: player characters should be able to use objects like walls to hide from NPCs.

For NPCs, instead of using the renderer to implement visibility, a getIsVisible percept can be defined. As the name suggests, the getIsVisible method determines whether one character is visible from another character's point of view. Figure 1 shows some simple situations involving visibility in the tag game. In particular, the tagged character is not visible because, although it is within the NPC's view cone, it is hidden by an obstacle. In contrast, the player character is outside the view cone but within the hearing radius, and so is invisible but audible.

Visibility calculations are commonplace in computer graphics, and similar code can be used to compute visibility percepts. Frame-to-frame coherency allows results from previous frames to be reused, which can dramatically reduce the total cost of visibility calculations. In addition, fast approximate visibility tests using bounding boxes and spheres are sometimes sufficient for a controller's purposes. Any graphics textbook that covers ray tracing and BSP trees is a good reference on visibility testing.
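As a concrete illustration, a minimal 2D visibility test might combine a view-cone check with a segment-versus-circle occlusion test. The vector type, function names, and single-obstacle world below are illustrative assumptions, not the book's actual tg classes:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static double len(Vec2 a) { return std::sqrt(dot(a, a)); }

// True if the segment from 'eye' to 'target' passes through a circular
// obstacle centered at 'c' with radius r (closest-point-on-segment test).
bool segmentHitsCircle(Vec2 eye, Vec2 target, Vec2 c, double r)
{
    Vec2 d{target.x - eye.x, target.y - eye.y};
    Vec2 m{c.x - eye.x, c.y - eye.y};
    double t = dot(m, d) / dot(d, d);
    if (t < 0.0) t = 0.0;           // clamp to the segment's endpoints
    if (t > 1.0) t = 1.0;
    Vec2 p{eye.x + t * d.x - c.x, eye.y + t * d.y - c.y};
    return len(p) < r;
}

// getIsVisible sketch: the target must lie inside the viewer's cone
// (half-angle in radians) and not be occluded by the obstacle.
bool getIsVisible(Vec2 eye, Vec2 facing, double halfAngle,
                  Vec2 target, Vec2 obstacle, double obstacleRadius)
{
    Vec2 toTarget{target.x - eye.x, target.y - eye.y};
    double cosAngle = dot(facing, toTarget) / (len(facing) * len(toTarget));
    if (cosAngle < std::cos(halfAngle)) return false; // outside view cone
    return !segmentHitsCircle(eye, target, obstacle, obstacleRadius);
}
```

A real game would loop this occlusion test over all nearby obstacles, or use the spatial data structures mentioned above instead of brute force.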

Simulated Noisy Sensors

The human player will not, in general, be able to precisely judge object properties like location and speed. Instead, the human will be (implicitly) using values that contain a certain amount of noise. NPCs can be given similar noisy sensor readings by randomly perturbing the true values of various percepts. For example, in the tag game suppose p = (x, y) is the true position of the tagged character. Then the virtual method getTaggedPosition could be overridden in a subclass that returns the position (x + δx, y + δy), where δx and δy are chosen randomly according to some probability distribution. For example, standard distributions from probability theory can be used, such as a uniform distribution, or a normal distribution with the mean centered on the true position.
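A noisy position percept of this kind could be sketched as follows, using zero-mean Gaussian noise; the class, the stored true position, and the noise parameters are hypothetical stand-ins for the book's tg classes:

```cpp
#include <random>

struct Position { double x, y; };

// Sketch of a noisy getTaggedPosition override: the true position is
// perturbed by noise drawn from N(0, stddev^2) on each axis.
class NoisyPerception
{
public:
    NoisyPerception(Position truePos, double stddev, unsigned seed)
        : truePos_(truePos), noise_(0.0, stddev), rng_(seed) {}

    // Returns the true position plus independently drawn (dx, dy).
    Position getTaggedPosition()
    {
        return Position{truePos_.x + noise_(rng_),
                        truePos_.y + noise_(rng_)};
    }

private:
    Position truePos_;
    std::normal_distribution<double> noise_;
    std::mt19937 rng_;
};
```

The stddev parameter controls how badly the NPC misjudges positions, so it doubles as a difficulty knob: a larger value makes the NPC a worse shot.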

Discretization

A percept like getMyDistanceToTagged returns a floating-point number. However, it might be more useful for a controller to predicate its decisions on a more abstract notion of whether the tagged character is close or not. For example, if the distance to the tagged object is d, then the getIsTaggedCloseToMe method computes the predicate: d < k, where k is some fixed threshold.

Converting a value (like distance) into a predicate that is either true or false is just one example of discretization. In general, a value can be discretized (or bucketed) into more than simply two values. For example, instead of an angle or a unit vector, a direction could be discretized into one of eight compass directions.
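Both forms of discretization can be sketched in a few lines. The threshold value k and the heading-bucketing helper below are illustrative assumptions:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;
const double kCloseThreshold = 5.0; // an assumed value for k

// Predicate form of discretization: d < k.
bool getIsTaggedCloseToMe(double distanceToTagged)
{
    return distanceToTagged < kCloseThreshold;
}

// Bucket a heading angle (radians) into one of 8 compass directions:
// 0 = E, 1 = NE, 2 = N, ..., 7 = SE; each sector is 45 degrees wide.
int discretizeHeading(double angleRadians)
{
    const double twoPi = 2.0 * kPi;
    double a = std::fmod(angleRadians, twoPi);
    if (a < 0.0) a += twoPi;
    // Shift by half a sector so each bucket is centered on its direction.
    return static_cast<int>(std::floor((a + kPi / 8.0) / (kPi / 4.0))) % 8;
}
```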

Although discretization does hide information about precisely where an object is, it is often done more as a convenience than as a hindrance. For example, discretizing positions into a grid is important in order to apply certain types of search algorithms.

Predictor Percepts

Space Invaders was one of the earliest successful computer games. A key part of the game challenge was being able to accurately anticipate where the invaders would be in the future. This was because the invaders were moving and the bullets took some time to travel. So (unless you got lucky) the bullets would miss if you aimed at the invader's location at the time of firing. Instead, you had to time your shot to end up where the invader would be by the time the bullet arrived. In modern computer games, NPCs sometimes need to use similar kinds of anticipation to shoot back at you. It is convenient therefore to define predictor percepts that predict future values. Predictor percepts are also useful for predicting any hidden values.
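The lead-aiming anticipation described above amounts to solving for an intercept time: given the target's position relative to the shooter and its velocity, find the time t at which a bullet of fixed speed can arrive where the target will be. The sketch below assumes a constant-velocity target; the names are illustrative, not from the book:

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { double x, y; };

// Solve |rel + t*vel| = t*bulletSpeed for t, where 'rel' is the
// target's position relative to the shooter. This is a quadratic in t;
// a negative return value means no intercept is possible.
double interceptTime(Vec2 rel, Vec2 vel, double bulletSpeed)
{
    double a = vel.x * vel.x + vel.y * vel.y - bulletSpeed * bulletSpeed;
    double b = 2.0 * (rel.x * vel.x + rel.y * vel.y);
    double c = rel.x * rel.x + rel.y * rel.y;
    if (std::fabs(a) < 1e-12)            // bullet no faster than target
        return (b < 0.0) ? -c / b : -1.0;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;         // target can never be reached
    double r1 = (-b - std::sqrt(disc)) / (2.0 * a);
    double r2 = (-b + std::sqrt(disc)) / (2.0 * a);
    double lo = std::min(r1, r2), hi = std::max(r1, r2);
    if (lo > 0.0) return lo;             // prefer the earliest intercept
    return (hi > 0.0) ? hi : -1.0;
}

// The aim point is simply where the target will be at the intercept time.
Vec2 aimPoint(Vec2 rel, Vec2 vel, double t)
{
    return Vec2{rel.x + vel.x * t, rel.y + vel.y * t};
}
```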

In the tag game, a specific example of a predictor percept is calcPositionTaggedFuture that predicts the position of the tagged character at some specified time in the future. If the tagged character is currently the player character, then its future position is uncertain because it depends on the future action choices of the human player.

Of course, an NPC can sometimes cheat and know what a player character will do before the appropriate animation is played, but an NPC cannot know for certain (unless the player character is confined somehow) where the player character will be in five minutes. However, if it is important for an NPC to independently show up in the player character's neighborhood in five minutes, there are alternatives to trying to predict the location. In particular, if the NPC in question could plausibly have gotten to the player character's destination in time, then it can simply be teleported there (or at least moved extra quickly). This works so long as the player character does not see the NPC being teleported (unless the ability to teleport is part of the game). It will also look odd if an NPC the player saw heading in the opposite direction some time ago suddenly shows up nearby.

Predicting future values associated with another NPC can be calculated with more certainty and without resorting to teleporting. This is because one NPC can, in principle, ask another NPC what it will do in a given situation. There are, however, reasons why this is not always possible or desirable:

Influence of the player character. An NPC will react to the player character and so the NPC's future behavior is contingent upon the player character's future behavior. Since the human player's behavior is uncertain, then so is the NPC's behavior. Of course, if the player character is unable to influence an NPC (for example, if it is too far away), then it may be possible, for a time, to calculate some future values with certainty.

Random number generators. Random number generators are often used in games to randomize action choices inside controllers, and to make other decisions inside the simulator. Random number generators typically (unless they have access to specialized hardware) generate pseudorandom numbers. Pseudorandom numbers are not truly random because they are generated by a deterministic computer program, but they (ideally) share many statistical properties with true random numbers. If an NPC knew the algorithm and seed by which the pseudorandom numbers were being generated, it could, in theory, remove the uncertainty they introduce. This would be complicated and (as explained under realistic perception below) undesirable.

Game world complexity. Many game world simulators are so complex that it is hard to accurately approximate their functionality. Consequently, the only certain way of knowing what will happen in the future is to use the real simulator to look forward in time. The problem with going too far into the future is that simulation is usually CPU-intensive and there is still the problem of predicting what the player character will do.

Approximate simulator. An NPC needs to predict both where it and other NPCs will be in the future using a discrete representation of the game world. One way to do this accurately would be to temporarily discard the discrete representation, simulate forward using the continuous representation and the game simulator, and then rediscretize the answer. In practice, this would be slow and cumbersome so the game world's physics are usually also approximated inside the discrete representation directly. To be fast, the approximate game world physics ignores many details but it can still be accurate enough to be useful. Nevertheless the resulting prediction of the NPC's future position is not certain.

Realistic perception. Even if an NPC has access to information that would reduce uncertainty about the game world, it is often undesirable to take advantage of it. For example, imagine a character that uses the simulator to precompute the trajectories of falling bricks and then nonchalantly walks through them without any fear of being hit.
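The point about pseudorandom numbers above is easy to demonstrate: a generator reseeded with the same value reproduces exactly the same sequence, which is precisely what an NPC would need in order to remove the uncertainty. The helper below is an illustrative sketch:

```cpp
#include <random>

// A pseudorandom sequence is fully determined by its seed: reseeding
// with the same value reproduces the same draws.
unsigned nthDraw(unsigned seed, int n)
{
    std::mt19937 rng(seed);
    unsigned v = 0;
    for (int i = 0; i < n; i++) v = rng();
    return v;
}
```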

Therefore, regardless of whether some future value is really random or not, from the NPC's point of view, it can make sense to think it is. Thus in the tag game, calcPositionTaggedFuture corresponds to the value of a random variable p' = (px', py') that ranges over possible future positions. Notice that the possible future positions represent a belief state about the future.

Before proceeding, you should realize that just because probability is being used to describe predictor percepts, it does not mean that probability has to be used in the implementation of a predictor percept. For example, an implementation of a predictor percept that does not explicitly use probability at all is given toward the end of this section. That implementation falls out as a special case of the more general theoretical framework based on probability. Therefore, regardless of the final implementation, it is still worth exploring the theory behind predictor percepts in a little more detail. To do so, consider just the subproblem of determining the probability that the tagged character's future x-coordinate px', at some fixed time in the future, is in some given interval (a, b). The probability could depend on the current and past values of any number of percepts, but assume the NPC makes the reasonable approximation that it depends only on the current x-position px and velocity vx. Then the NPC needs to calculate the conditional probability P(a < px' < b | px, vx) that, given the current position and velocity, px' will be in the interval (a, b).

Usually the conditional probability distribution is unknown, and one possibility is to try to learn it. In particular, learning a conditional probability distribution for the player character can lead to some powerful AI effects. It is, however, complicated and CPU-intensive. In addition, the game designer may not want the NPCs to be too good at predicting the player. As pointed out earlier, even when more accurate or longer-term predictions are required, it is often easier to cheat. An alternative to learning, or cheating, is to simply define a probability distribution as part of the definition of the behavior of the NPC. That is, the NPC is simply defined to be a character that computes the conditional probability in a certain way. For example, Figure 2 defines a possible underlying conditional probability density function from which the conditional probability can be calculated as the area under the curve, as indicated by the shaded region.
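For illustration only, suppose the defined density were a normal distribution centered on the linear extrapolation px + vx·t (an assumption for this sketch, not the book's definition of Figure 2). The conditional probability for an interval is then a difference of normal CDFs:

```cpp
#include <cmath>

// Standard normal CDF expressed via the error function.
double normalCdf(double x, double mean, double stddev)
{
    return 0.5 * (1.0 + std::erf((x - mean) / (stddev * std::sqrt(2.0))));
}

// P(a < px' < b | px, vx): probability that the tagged character's
// future x-coordinate lands in (a, b), assuming (our assumption) a
// normal density centered on the extrapolation px + vx * t with
// spread sigma.
double probFutureXInInterval(double px, double vx, double t,
                             double sigma, double a, double b)
{
    double mean = px + vx * t;
    return normalCdf(b, mean, sigma) - normalCdf(a, mean, sigma);
}
```

The area-under-the-curve calculation the text describes is exactly this CDF difference; only the choice of density is a modeling decision.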

However the probability is determined, a controller eventually needs a single value for the predictor percept. A controller could be defined to take a probability distribution directly as input, but a controller usually expects a single value for each percept.


Figure 2: Example conditional probability density function.

 

In many cases, defining a distribution for a predictor percept is overkill. Instead, a percept can just directly compute a single value. You can think of this as short-circuiting the process of defining a distribution and then picking its most likely value. As an example, the following code defines the tagged character's future position directly, using the current velocity to perform a simple linear extrapolation of the current position:


tgRealVec tgPerception::calcPositionTaggedFuture() const
{
    tgRealVec p(getTaggedPosition());
    return p.add(getTaggedVelocity());
}

 

Extrapolating from the current position is also sometimes referred to as dead reckoning, and is often used to reduce communication between machines in networked games. The idea is that if two networked instances of a game world use the same dead reckoning algorithm to predict uncertain future values, then communication over the network only has to take place when the difference between the predictions and what subsequently happens exceeds some margin of error.
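A dead-reckoning scheme of this kind can be sketched as follows; the state layout and the error margin are illustrative assumptions:

```cpp
#include <cmath>

// Dead-reckoning sketch: the sender transmits a correction only when
// the receiver's extrapolated position drifts too far from the truth.
struct State { double x, y, vx, vy; };

// The position the remote machine will extrapolate from the last update.
State extrapolate(const State& last, double dt)
{
    return State{last.x + last.vx * dt, last.y + last.vy * dt,
                 last.vx, last.vy};
}

// True if the prediction error exceeds the agreed margin, so a fresh
// update must be sent over the network.
bool needsUpdate(const State& truth, const State& lastSent,
                 double dt, double margin)
{
    State guess = extrapolate(lastSent, dt);
    double dx = truth.x - guess.x, dy = truth.y - guess.y;
    return std::sqrt(dx * dx + dy * dy) > margin;
}
```

Both machines must run the same extrapolation function; otherwise their worlds silently diverge between updates.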

Note that predictor percepts that predict the output of a controller blur the distinction between percepts and controllers. For example, if calcPositionTaggedFuture is particularly good at predicting the tagged character's behavior, then, in theory, its predictions could be used as the controller for the tagged character. Similarly, a controller can be used as a predictor percept.

--

For more information about Artificial Intelligence for Computer Games: An Introduction (ISBN # 1568812086), please visit http://ai4games.sourceforge.net/.

______________________________________________________


About the Author(s)

John Funge

Blogger

John Funge is a co-founder and one of the lead scientists at a new Silicon Valley based company focusing on AI effects for computer entertainment. John previously worked at Sony Computer Entertainment America's (SCEA) research lab. Before that John was a member of Intel's Microcomputer Research Lab (MRL). He received a B.Sc. in Mathematics and Computer Science from King's College London in 1990, an M.Sc. in Computation from Oxford University in 1991, and a Ph.D. in Computer Science from the University of Toronto in 1998. For his Ph.D. John successfully developed a new approach to high-level control of characters in games and animation. John is the author of numerous technical papers and two books on Game AI, including his new book Artificial Intelligence for Computer Games: An Introduction. John is the Associate Editor for AI and Goal-Directed Action Planning for the International Journal of Intelligent Games and Simulation (IJIGS) and a member of the International Game Developers Association (IGDA) Artificial Intelligence Interface Standards Committee (AIISC). His current research interests include computer games, machine learning, knowledge representation and new democratic methods.

