Anticipatory AI and Compelling Characters

Blumberg, leader of the Synthetic Animal Team at developer Blue Fang (Zoo Tycoon) and former director of the Synthetic Characters Group at the MIT Media Lab, discusses the concept of anticipatory AI in building video game characters that are both "compelling and emotional".

Bruce Blumberg, Blogger

February 16, 2006

17 Min Read

Introduction

Much of the work in game AI has focused on the 'big' problems: path planning, squad planning, goal-directed behavior, and so on. The result is characters that are capable of increasingly intelligent behavior. However, acting intelligently and acting aware and sentient are not the same thing. If we are to create the kind of compelling and emotional characters upon which the next generation of computer games will be based, we must solve the latter problem, namely how to build characters that seem aware and sentient.

An important theme of the work of the Synthetic Characters Group at the MIT Media Lab was to understand and build the kind of AI needed to create a sense of an inner life. Our belief, presented most cogently by Damian Isla, was that this sense of an inner life arose not out of the large-scale motion and behavior of the character but out of what Isla termed the low-level motion and behavior.

Examples of such behaviors include: the shift in gaze and widening of the eyes in response to motion perceived just at the periphery of vision; the slight stiffening of a cat's tail that presages a predatory pounce; the slinking gait of a fearful dog expecting to be punished; the catch in breath in response to a startling noise. In traditional animation, these movements and behaviors would be labeled as secondary anticipatory actions. And yet, as Isla puts it, “much of the low-level animation described is significant precisely because it is indicative of some kind of emotional or knowledge state internal to the character. If a character frowns and continually glances towards a door, we might infer that it is because the character is anxious about someone soon coming through it.” [Isla]


Damian Isla's work on expectations addressed the problem of forming and validating expectations about the likely location of objects that have been observed in the past, but aren't currently visible. His dog would show surprise, confusion, satisfaction, or frustration based on whether the world matched its expectations.

The shift in glance conveys that the character is aware of its environment, that it possesses an expectation about what is to happen, and that it has an attitude as to whether that event is going to be good or bad. All of these serve to create a sense of a plausible and comprehensible inner life.

In this article, we wish to focus on “anticipatory AI”, that is, the AI needed to support the anticipatory behaviors that prepare the eye and the mind of the observer for what is to follow, because it is these very behaviors that are at the crux of building convincing and compelling characters. We begin by discussing why anticipatory behaviors are so important in nature as well as in animation. We will then focus on three types of behavior that serve this function of preparing the observer for what is to follow, and discuss the AI implications of each; a minimal code sketch of these three cues follows the list. These include:

  1. Making perception perceivable: what a character is observed to perceive gives cues as to what it will do, and why, and ultimately, how it will feel.

  2. Making expectations perceivable: from perception to action to outcome, a character's expectations with respect to what it is observing, the expected outcome of its actions, and its attitude toward the actual outcome of its actions should all be made manifest to the observer.

  3. Making impending changes in motivational state perceivable: dramatic changes in motivational state should be telegraphed to the observer so as to prepare them for the change.
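
To make these three kinds of cues concrete, here is a minimal sketch of how they might surface as explicit hooks in a character's code. It is purely illustrative: the interface and every name in it are hypothetical, not taken from any shipped system, and a real engine would fold these cues into its existing animation and behavior layers.

    // Sketch only: the three kinds of anticipatory cues as explicit hooks.
    // All names are hypothetical.
    #include <string>

    struct Percept { std::string what; float salience; };  // e.g. "sudden noise", 0..1
    struct Outcome { float valence; };                      // expected good (+) or bad (-)

    class AnticipatoryCues {
    public:
        virtual ~AnticipatoryCues() {}

        // 1. Make perception perceivable: orient, glance, or sniff before acting.
        virtual void showPerception(const Percept& p) = 0;

        // 2. Make expectations perceivable: body language reflects the
        //    predicted outcome of the action being chosen.
        virtual void showExpectation(const Outcome& expected) = 0;

        // 3. Make impending motivational changes perceivable: telegraph the
        //    switch (tail flick, stiffening) before behavior actually changes.
        virtual void showImpendingShift(const std::string& newContext) = 0;
    };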

Anticipation: Preparing the Eye and the Mind

The importance of anticipatory movements, or more generally anticipation, was one of the earliest and most important discoveries of classical animation. To paraphrase two of the pioneers, Frank Thomas and Ollie Johnston: by anticipating the action, the animator allows the audience to focus on how the action is being done rather than on trying to figure out what is being done. In other words, anticipatory movements prepare the eye and the mind of the audience for what is to follow. [Thomas]

As with many of the rules of animation, anticipation has its roots in nature. For example, anticipatory movements may arise out of the physics of the movement, like the wind-up before a throw or the coil of a rattlesnake before it strikes. Or they may arise out of behavior, like the retraction of the lips that precedes a snarl, or the raised hackles on the back of the neck, both of which signal an impending attack. Indeed, some anticipatory movements are so predictable that animals use them as so-called “intention movements” to predict what another animal intends to do.

The biological value of attending to intention movements is clear. By doing so, an animal can reliably predict what another animal is about to do and plan its response proactively. In some cases, this may allow it to avoid conflict; in other cases, it may allow it to predict the zag in the zigzag and meet its prey mid-zag.

We are no different than other animals in our conscious and subconscious reliance on the perception of anticipatory behaviors as important cues into the mind of other beings, be they other people, animals or animated characters.


In "Alpha Wolf," we focused on making the emotional state of the wolf pups manifest to the user via expressive motion, posture and rendering.

Anticipation: Making Perception Perceivable

One of the most important and telling anticipatory behaviors is that of perception. Virtually all animals orient before acting so as to better perceive the focus of their attention. Cats will move their ears in response to a noise before visually orienting. Dogs, being highly olfactory, will sniff the ground or the air. Seeing animals perform behaviors such as these provides us with important cues as to what they are attending to in the world. In addition, the manner in which perceptual behaviors are performed often provides clues about how the animal feels about what it perceives, or expects to perceive. We then use these cues to predict what the animal is going to do and/or feel next.

Since the characters that populate our games are almost always visual creatures, their visual behaviors from gaze to glance are typically the most important cues that people use to infer what a character is thinking, and about to do. Indeed, these behaviors are the canary in the mine when it comes to providing the foundation for the appearance of motivated behavior. Get it right and you are 80% there. Get it wrong and no matter how good the animation, the subsequent behavior will not seem motivated.

In our work at the Media Lab, we typically made the AI dependent on information that was acquired through the “look-at” so as to force ourselves to get it right. Indeed, we went so far as to implement synthetic vision in a number of our systems. That is, the scene would be rendered from the perspective of the character, and relevant information would be extracted from the resulting image via image processing. By encoding information in the image via false coloring, and by taking advantage of the power of today's generation of graphics cards, this approach was remarkably fast and powerful.
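
As a rough illustration of the false-coloring step (a sketch under assumed names, not the Media Lab's actual code), suppose each object in the scene is rendered from the character's eye point in a flat color equal to its object ID. Reading that buffer back tells the behavior system what is actually in view, how much of the view it fills, and how close it sits to the center of gaze:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <map>
    #include <vector>

    struct VisibleObject { std::uint32_t id; float coverage; float centrality; };

    // Reads back a false-colored "synthetic vision" buffer: each pixel holds the
    // ID of the object rendered there (0 = background). Returns, per object, how
    // much of the view it fills and how close it sits to the center of gaze.
    std::vector<VisibleObject> extractVisible(const std::vector<std::uint32_t>& idBuffer,
                                              int width, int height) {
        struct Accum { float pixels = 0.0f; float centrality = 0.0f; };
        std::map<std::uint32_t, Accum> seen;
        const float cx = width * 0.5f, cy = height * 0.5f;

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                std::uint32_t id = idBuffer[y * width + x];
                if (id == 0) continue;                      // nothing rendered here
                float dx = (x - cx) / cx, dy = (y - cy) / cy;
                Accum& a = seen[id];
                a.pixels += 1.0f;
                // 1.0 at the center of gaze, falling toward 0 at the periphery
                a.centrality += std::max(0.0f, 1.0f - std::sqrt(dx * dx + dy * dy));
            }
        }

        std::vector<VisibleObject> out;
        for (const auto& kv : seen) {
            VisibleObject v;
            v.id         = kv.first;
            v.coverage   = kv.second.pixels / float(width * height);  // fraction of view
            v.centrality = kv.second.centrality / kv.second.pixels;   // average, 0..1
            out.push_back(v);
        }
        return out;
    }

A glance or a head turn simply changes what lands in that buffer, which is one reason getting the look-at right matters so much: the AI literally cannot act on what the character has not looked at.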

This sensory honesty, as Burke and Isla called it, ensured that the animal would only act on what it could realistically sense. In addition, because the “look-at” was a fundamental part of the system as opposed to a cosmetic effect, it was a priority for everyone to ensure that the behavior was correct, and as a result, it was both believable and compelling.

Making the perceptual acts observable and believable to the audience is the first thing to get right with respect to creating a sense of an inner life. Seeing the turn of the head, the cock of the ears, the sniff of the nose is more than simply a cosmetic effect. A system may model the diffusion of a scent trail but if the user doesn't see the simulated dog sniff the ground and cast back and forth in order to acquire the scent trail, the resulting behavior will come across as artificial. It is especially important that the character react to those perceptual events that the user expects to be salient to it. For example, a character should start when it hears a sudden noise unless there is a good reason for it not to.
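
One simple way to enforce that coupling (purely illustrative; the names and the one-source stand-in for a scent model are hypothetical) is to let the scent information reach the dog's behavior system only when the visible sniff actually happens:

    #include <cmath>

    struct ScentSample { bool found; float direction; };   // direction in radians

    // Stand-in for a real diffusion model: a single scent source with a radius.
    class ScentMap {
    public:
        ScentMap(float srcX, float srcY, float radius)
            : srcX_(srcX), srcY_(srcY), radius_(radius) {}

        ScentSample sampleAt(float x, float y) const {
            float dx = srcX_ - x, dy = srcY_ - y;
            bool within = std::sqrt(dx * dx + dy * dy) <= radius_;
            return ScentSample{ within, std::atan2(dy, dx) };
        }

    private:
        float srcX_, srcY_, radius_;
    };

    class DogNose {
    public:
        // Called from an animation event at the frame where the nose meets the
        // ground, so the dog never "knows" the trail before the audience has
        // seen it sniff.
        void onSniffEvent(const ScentMap& map, float x, float y) {
            latest_ = map.sampleAt(x, y);
        }

        const ScentSample& latest() const { return latest_; }

    private:
        ScentSample latest_{ false, 0.0f };
    };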

Before we move on, there are three quick points to make. First, making perception perceivable to the audience is more than just a character issue. As the Disney animators know, it has important implications for staging (how the user's eye is directed to the important action in the frame), for example, via a close-up camera shot. Second, what a given character attends to in a given situation is highly character- and context-specific. An audience will expect a young recruit in his first battle to attend to completely different stimuli than the grizzled sniper who, having seen it all before, focuses exclusively on her next target.

Thus, what a character appears to perceive must match the audience's expectation of what such a character should perceive given its personality and context. Third, how the perceptual acts are performed is every bit as important as performing them in the first place. It is in the “how”, that is, in the quality of motion, that the character's expectations are revealed to the audience: what it expects to see and how it expects it will feel when it sees it. And this brings us to the second topic, namely making expectations perceivable.

Anticipation: Expected Expectations

Disney animators conceptualize every action as telling a story. The anticipatory action prepares the audience for what the character is about to do. Together with the body of the action, it communicates not only what the character is about to do and then does, but also the character's expectation of how the action is going to play out. The end of the action, the so-called follow-through, communicates how the character feels about the outcome: was it what was expected or not, and if not, was it better or worse than expected? The operative word here is “expectation.”

The action, from beginning to end, conveys the character's expectation of how the world works in general and of the effect of its actions upon that world in particular. Having expectations, acting on them, and reacting to the mismatch between expectations and outcomes are the hallmarks of sentient behavior. Thus, if we want our characters to appear to be sentient, expectations must inform both their behavioral choices and the manner in which those choices are performed, so as to make the character's expectations manifest.
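
In code, one way to honor this structure (a sketch only, with illustrative names and clips, not any particular engine's action model) is to give every action explicit anticipation, body, and follow-through phases, and to choose the follow-through from the gap between the expected and actual outcome:

    #include <string>

    // Sketch: an action as anticipation -> body -> follow-through. All clip
    // names and the valence scale (-1..1) are illustrative.
    struct Action {
        std::string anticipationClip;   // the wind-up: tells the audience what's coming
        std::string bodyClip;           // the action itself
        float expectedValence;          // how well the character thinks this will go
    };

    // Choose the follow-through from the gap between expectation and outcome.
    std::string chooseFollowThrough(float expectedValence, float actualValence) {
        float gap = actualValence - expectedValence;
        if (gap >  0.25f) return "follow_through_delighted";  // better than hoped
        if (gap < -0.25f) return "follow_through_dejected";   // worse than feared
        return "follow_through_neutral";                      // about as expected
    }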

Expectations are nothing more than predictions about some aspect of the future state of the world. In some cases, the prediction is contingent on the character performing some action, e.g., if I sit and look cute, my owner will give me a piece of steak. In other cases, the character has no control over the contingency, e.g., every time I go by that mean dog, he barks and lunges, and I get scared. Another, less dire prediction might reflect important contingencies between events in the world, e.g., dad going to the closet and getting the leash means a walk is coming. As each of these examples illustrates, expectations often reflect lessons learned from past experience.

Having and communicating expectations convey a sense of sentience for at least three reasons. First, expectations allow characters to respond to other events and characters on the basis of what they expect will happen, and not simply to what has happened. This ability to act based on some expectation of the future, especially one that is grounded in past experience, is taken by most people as a hallmark of what it means to be aware. Second, when an expectation is accompanied by a visible emotional response, it signals that the character cares what happens to it, and this in turn makes it much more likely that the audience will care too. Third, expectations are the basis for learning from experience, since an expectation can be measured against an actual outcome and modified accordingly. Once again, learning from experience is expected of a sentient and aware being.
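
A minimal representation that supports all three is simply a prediction that can be checked against reality. The sketch below is illustrative: the fields and the mismatch measure are assumptions for the sake of the example, not a description of the Media Lab systems.

    #include <cmath>
    #include <string>

    // A prediction about the future, optionally contingent on the character's own
    // action, concrete enough to be checked against what actually happens.
    struct Expectation {
        std::string trigger;     // "owner opens closet", or my own action "sit and look cute"
        std::string predicted;   // "a walk is coming", "I get a piece of steak"
        float       valence;     // how good or bad the predicted outcome feels, -1..1
        float       confidence;  // strength of the prediction, 0..1, learned from experience
    };

    // How badly did reality violate the expectation? The caller can map a large
    // value to surprise, frustration, or relief depending on the valence.
    float mismatch(const Expectation& e, bool predictedHappened) {
        float actual = predictedHappened ? 1.0f : 0.0f;
        return std::fabs(actual - e.confidence);
    }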


Dobie could be trained using traditional real-world dog training techniques. Dobie would learn the expected outcome of his actions based on experience, and had a simple emotional system that reflected those expectations and experiences.

Much of our work at the Media Lab investigated various aspects of expectations. Chris Kline set the stage by exploring how and why expectations could, and should, be incorporated into a behavior-based character model. In particular, he investigated the problem of expectation generation and how to recognize, and respond to, expectation violations. Damian Isla's impressive work on expectations addressed the problem of forming and validating expectations about the likely location of objects that have been observed in the past, but aren't currently visible. These expectations reflect the uncertainty associated with the passage of time given the last observed behavior of the object. His system utilizes synthetic vision as its way to perceive the world and update its expectations.

Thus, when his creatures glance from side to side in an inquisitive manner, it is because they are inquisitive and are glancing about to update their internal model of the world. The resulting motion is very compelling. Moreover, Isla shows how confusion, surprise and frustration arise naturally out of the model. Rob Burke implemented a very elegant system based on expectations in which his characters could learn about causality and adapt to changes in the world, all the while going about their business. Finally, in the group's Dobie work, in which an animated dog can be trained like a real dog, expectations were used as the basis for a simple emotion system. For example, the moment Dobie decides to perform an action, his affect conveys his expectation, based on past experience, of whether the outcome will be good or bad, and he responds appropriately to any mismatch between the expected and actual outcome.
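
The following sketch is in the spirit of that idea rather than a description of the actual Dobie code: a per-action expected valence that is shown as affect when the action is chosen, then nudged toward the actual outcome afterwards (a simple delta rule). The class name, fields, and learning rate are all hypothetical.

    #include <map>
    #include <string>

    class ExpectationAffect {
    public:
        // Affect to display the moment the action is chosen: the learned expectation.
        float anticipatedAffect(const std::string& action) const {
            auto it = expected_.find(action);
            return it == expected_.end() ? 0.0f : it->second;   // unknown => neutral
        }

        // After the action resolves, react to the mismatch and learn from it.
        // Returns the surprise: positive = better than expected, negative = worse.
        float observeOutcome(const std::string& action, float actualValence) {
            float& expectation = expected_[action];
            float surprise = actualValence - expectation;
            expectation += learningRate_ * surprise;            // nudge toward the outcome
            return surprise;
        }

    private:
        std::map<std::string, float> expected_;  // per-action expected valence, -1..1
        float learningRate_ = 0.2f;              // hypothetical constant
    };

The sign and size of the returned surprise can then drive the visible reaction: a large positive value reads as delight, a large negative one as disappointment or frustration.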

Anticipation: Telegraphing Changes in State

It is very rare that even a sudden change in an animal's mood isn't preceded by observable cues that warn of the impending change. Indeed, a cat that is on its back and purring in response to having its tummy rubbed, may stiffly flick its tail once or twice before it sinks its claws and teeth into the unsuspecting hand. Here, the gross behavior of the animal is consistent with being in one motivational context, but an anticipatory behavior is signaling imminent change to a new motivational context. It also tells the observer that the cat has a representation of the future, and the observer's hand is very much part of that future. The alert observer may be caught by surprise that the animal is moving into this new context, but the change seems motivated when preceded by some sign that it is coming.

The absence of anticipatory behaviors that predict significant changes in motivational state (and hence behavior) is a consistent weakness of many AI models of behavior. As a result, changes from one motivational state to the next often appear startling and ultimately inexplicable. One source of the problem is that, by modeling motivational contexts as finite states, such a system only knows how to display a given state once it is already in that state.

Indeed, the system may be architected in such a way that it is impossible for it to know that it is about to switch contexts. Another source of the problem is that most systems are focused on reacting to what has happened rather than on anticipating what will happen and communicating to the audience how the character feels about that expectation. Finally, in some systems the latency associated with responding to a startling event is so long that the response seems unbelievable.
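
One way around the finite-state problem (a sketch under assumed names and constants, not a prescription) is to keep motivations as continuously varying drives, so the character can see a switch coming while a rival drive is closing in on, but has not yet crossed, the switching threshold, and telegraph it:

    #include <map>
    #include <string>

    class Motivation {
    public:
        void setDrive(const std::string& name, float level) { drives_[name] = level; }

        // The active context is simply the strongest drive...
        std::string activeContext() const { return strongest(); }

        // ...but a rival that has climbed to within `margin` of the active drive
        // should already be telegraphed (tail flick, stiffening) before the switch.
        std::string impendingContext(float margin = 0.15f) const {
            std::string current = activeContext();
            if (current.empty()) return "";
            std::string rival = strongest(current);          // strongest *other* drive
            if (!rival.empty() &&
                drives_.at(rival) + margin >= drives_.at(current))
                return rival;                                // cue the audience now
            return "";                                       // no change brewing
        }

    private:
        std::string strongest(const std::string& skip = "") const {
            std::string best;
            float bestLevel = -1.0f;
            for (const auto& kv : drives_)
                if (kv.first != skip && kv.second > bestLevel) {
                    best = kv.first;
                    bestLevel = kv.second;
                }
            return best;
        }
        std::map<std::string, float> drives_;                // e.g. "contentment", "aggression"
    };

In the cat example above, "aggression" entering the margin while "contentment" still rules is exactly when the tail flick should play, before the claws ever come out.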

Indeed, telegraphing changes in state is inextricably linked to incorporating expectations into the perceptual and behavioral architecture, to getting the anticipatory behaviors right, from perception to behavioral cues, and to staging it all so that the audience's eye is directed to the cues the character is giving with respect to what it is about to do and how it feels about it.

Conclusion

If we are to create the kind of compelling and emotional characters upon which the next generation of computer games will be based, the characters must seem as if they possess a rich inner life that is reflected in all they do, how they do it and how they feel about it. In this article, we have suggested that an important step toward achieving this goal will be to get the anticipatory behaviors right, that is, those behaviors that signal what a character is about to do and how it feels about it. While often subtle, they are the basis for our belief that we are observing a sentient being, aware of its world and its place in that world. In the end, it's about getting the little things right.

Acknowledgements

The author would like to thank the former members of the Synthetic Characters Group at the Media Lab, since many of the ideas in this article arose out of their insight and work. I would also like to thank John Wheeler of Blue Fang Games for his many contributions to this article. Thanks as well to the other members of the Synthetic Animal Team at Blue Fang Games.

References

Blumberg, B., M. Downie, et al. (2002). "Integrated Learning for Interactive Synthetic Characters." ACM Transactions on Graphics 21(3) (Proceedings of ACM SIGGRAPH 2002).

Burke, R. (2004). Great Expectations: Predictions in Entertainment Applications. Life-Like Characters: Tools, Affective Functions and Applications. H. Prendinger and M. Ishizuka. Berlin, Springer-Verlag: 477.

Isla, D., R. Burke, et al. (2001). "A Layered Brain Architecture for Synthetic Creatures." Proceedings of The International Joint Conference on Artificial Intelligence. Seattle, WA.

Isla, D. and B. Blumberg (2002). "Object Persistence for Synthetic Characters." Proceedings of the International Joint Conference on Autonomous Agents and Multi-Agent Systems, Bologna, Italy.

Kline, C. (1999). Observation-based Expectation Generation and Response for Behavior-based Artificial Creatures. The Media Lab. Cambridge, MA, MIT: 69.

Thomas, F. and O. Johnston (1981). The Illusion of Life: Disney Animation. New York, NY: Hyperion.



About the Author(s)

Bruce Blumberg

Blogger

Bruce leads Blue Fang’s Synthetic Animal Team, which is focused on creating the technology, tools and processes that will enable Blue Fang to create expressive, intelligent and engrossing animal characters that will set the bar for the next generation of digital entertainment experiences. Prior to joining Blue Fang, Bruce was an Associate Professor at The Massachusetts Institute of Technology (MIT) Media Lab, where he served as the Director of the Synthetic Characters Group. This group performed groundbreaking research on behavior, learning and motor control for autonomous animated characters. As Director of the Synthetic Characters Group, Bruce led teams that created and developed numerous award-winning interactive installations at such prestigious venues as SIGGRAPH, Ars Electronica, E3, and AAAI. An impressive list of published papers can be added to the awards he and his group have received, including the Grand Prize for “AlphaWolf” at the 2002 Digital Art Awards. Prior to joining the MIT Media Lab, Bruce held management positions at Apple Computer and NeXT Computer. Bruce holds a Master of Science degree in Management from the MIT Sloan School of Management and a Ph.D. in Media Arts and Sciences from the MIT Media Laboratory.
