
Is it possible to imbue game characters with emotion? No, but simulating the illusion of life without having, or needing, the complexity of human emotional responses is certainly within the realm of possibility. See how some games are taking steps in this direction.

Ian Wilson, Blogger

May 7, 1999


Characters that display emotion are critical to a rich and believable simulated environment, especially when those characters interact with real people possessing real emotions. Emotion is the essential element that creates the difference between robotic behavior and lifelike, engaging behavior. Traditionally, animators have painstakingly created these behaviors for prerendered animations. This approach, however, is not possible when we wish to use autonomous, interactive characters that possess their own unique personalities and moods. Truly interactive characters must generate their behavior autonomously through techniques based upon what I call artificial emotion (AE).

Why do we have real emotion?

As human beings, we have an innate understanding of what emotions are. However, outside of academia, we rarely hear discussions on how emotions are produced and, more importantly, on why we have emotions. Within academia, these issues are subject to much contention and debate. That said, allow me to offer my own thoughts on these issues.

When attempting to simulate natural systems, we first need to ask, "What is the nature of this system and what is its purpose or reason for being?" Very few, if any, systems in the natural world exist for no reason.


Fujitsu’s fin fin

Emotions are an integral part of our decision-making systems. They tune our decisions according to our personalities, moods, and momentary emotions to give us unique responses to the situations presented by our environment. But why do we need unique responses to situations? Why don't we all have the same responses? To answer this question, we need to look beyond the individual at humanity as a group or society of individuals. I believe personality has evolved as a problem-solving mechanism. Our unique personalities ensure that we all think, and hence solve problems, in unique and different ways. In an evolutionary sense, this diversity of problem solving is highly effective. If we all shared a single method of problem solving, a large, if not infinite, number of problems would lie beyond our capabilities. So personality has evolved as a way of attacking problems from many different angles: from bold, high-risk solutions to cautious and precise incremental solutions; from solutions discovered through deep thought and reflection to solutions discovered by gaining knowledge from others (socializing).

Emotion is, to a large degree, an emergent system. Its use must be looked at in terms of its interaction with society rather than in isolation to gain a better understanding of its reason for being.

We can look at a corporation as an example. Here, at the top of the hierarchy, we have a CEO who is bold and fearless, making broad decisions with little regard to details. At the other end of the hierarchy, we might find someone who is fearful of the unknown, is timid, and has great respect for details. The organization, and in a greater sense society, needs both types of people and the many others in between to function efficiently. Imagine if we all had identical decision-making systems, which gave us all the same responses to situations, but those responses were wrong. We wouldn’t last very long as a species.

Layers of emotion

Fundamental to our AE-based behavior system is the notion that emotions comprise three layers of behavior. At the top level are what we term momentary emotions; these are the behaviors that we display briefly in reaction to events. For example, momentary emotions occur when we smile or laugh at a joke or when we are surprised to see an old friend unexpectedly. At the next level are moods. Moods are prolonged emotional states caused by the cumulative effect of momentary emotions. Underlying both of these layers and always present is our personality; this is the behavior that we generally display when no momentary emotion or mood overrides (Figure 1).

These levels have an order of priority. Momentary emotions have priority over mood when determining which behavior to display. One's mood, in turn, has priority over one's personality (Figure 2).

Figures 1 and 2 show the various layers of emotional behavior. Momentary emotions are brief reactions to events that assume the highest priority when we select our behavior. These momentary behaviors are short-lived and decay quickly. Moods are produced by momentary emotions, usually by the cumulative effect of a series of momentary emotions. Moods can gradually increase in prominence even after the momentary emotions have subsided. Whether a mood turns positive or negative depends on whether the momentary emotions are positive or negative (rewards or punishments in a reinforcement sense). If a character were to receive a stream of negative momentary emotions, its mood would obviously turn bad and would decay slowly. The personality layer is always present and has a consistent level of prominence.


Figure 1. The three layers of emotional behavior.
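To make the model concrete, here is a minimal sketch of how these three layers might be represented in code. It is only a sketch under my own assumptions; the struct name, decay rates, and update scheme are illustrative choices, not a prescription for how an AE system must be built.

```cpp
// Three layers of emotional behavior: momentary emotions decay quickly,
// moods accumulate from momentary emotions and decay slowly, and
// personality is a constant baseline that is always present.
// All numeric constants are arbitrary, illustrative values.
#include <algorithm>

struct EmotionLayers {
    float momentary   = 0.0f;   // brief reaction to an event (-1 bad .. +1 good)
    float mood        = 0.0f;   // cumulative effect of momentary emotions
    float personality = 0.3f;   // constant prominence of the underlying personality

    // Call when an event occurs; intensity > 0 is a reward, < 0 a punishment.
    void onEvent(float intensity) {
        momentary = std::clamp(momentary + intensity, -1.0f, 1.0f);
    }

    // Call once per simulation tick (dt in seconds).
    void update(float dt) {
        // Moods grow out of the stream of momentary emotions...
        mood = std::clamp(mood + momentary * 0.1f * dt, -1.0f, 1.0f);
        // ...momentary emotions fade fast, moods fade slowly, personality never fades.
        momentary *= std::max(0.0f, 1.0f - 2.0f  * dt);
        mood      *= std::max(0.0f, 1.0f - 0.05f * dt);
    }
};
```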

The behavior that a character displays depends upon each emotional layer’s prominence. The more prominent the layer, the higher the probability of that behavior being selected.


Figure 2. Emotional priorities.
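One straightforward reading of "the more prominent the layer, the higher the probability of that behavior being selected" is a weighted random choice over the three layers. The sketch below builds on the EmotionLayers struct above; the weighting is just one plausible interpretation, not the only one.

```cpp
// Pick which emotional layer drives the next behavior, weighted by each
// layer's current prominence. Builds on the EmotionLayers sketch above.
#include <cmath>
#include <random>

enum class Layer { Momentary, Mood, Personality };

Layer selectDrivingLayer(const EmotionLayers& e, std::mt19937& rng) {
    // Prominence of the reactive layers is how far they deviate from neutral.
    float weights[3] = { std::fabs(e.momentary), std::fabs(e.mood), e.personality };
    std::discrete_distribution<int> pick(weights, weights + 3);
    return static_cast<Layer>(pick(rng));
}
```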

Where can we use AE?

With the notable exceptions of P.F. Magic’s Catz and Dogz series, Fujitsu’s fin fin, and Cyberlife’s Creatures series, autonomous AE of any significant depth is rarely seen in the world of interactive entertainment. Why is this the case?

The field of interactive entertainment is dominated by genres that require the user to conquer or kill everything in his or her path. Little emotion is required of the opposition, besides perhaps a little hard-coded fear or aggression that manifests itself in simple movement patterns. Emotion primarily serves a social function in interactive entertainment: emotional responses make the characters that we encounter believable and engaging. For example, if we were to walk into a virtual bar and all of the characters in the bar had distinct personalities, the scene would be a very immersive and believable social situation. If the characters showed no emotion, our suspension of disbelief would be immediately broken and we would be reminded that we were in a computer-generated simulation rather than in our own fantasy world. Of course, if all of the bar's customers had guns and our sole purpose was to dispatch them to a simulated afterlife, then this really wouldn't constitute a social situation and emotion might not be required.

A key to the use of AE, then, is the context of situations in which it is used. An important area of growth is in the field of girls’ entertainment, pioneered by Purple Moon and its friendship adventures built on Brenda Laurel’s excellent research into girls’ play behavior and girls and sport. For more information on Ms. Laurel’s research, see http://www.purple-moon.com/cb/laslink/pm?stat+corp+play_behavior and http://www.purple-moon.com/cb/laslink/pm?stat+corp+girl_sport.

Social cooperation is a key element in this area and as such is an ideal place to use autonomous characters with AE. In these situations, the characters’ emotional states and their emotional responses to the players’ actions are what make the experience enjoyable, interesting, and entertaining. After playing the first of Purple Moon’s titles, I was a little disappointed to find that it used only static animations, which limited its sense of immersion. A full, living, 3D world would have increased its impact (and cost) dramatically.


Cyberlife’s Creatures

Of course, processor overhead is always a problem with an element as computationally complex as AE. The reason that Catz, Dogz, and Creatures succeed in displaying characters with believable emotional behavior is that this element is generally the games’ sole area of concern. Graphics and other elements are kept to an acceptable minimum so that maximum resources can be devoted to behavior generation. As we’re not yet at the stage where we can throw unlimited resources at character AE, we should learn from those titles that employ it successfully and design our simulations intelligently with these constraints in mind. In other words, fight the battles you can win.

Still, a good deal of ingenuity and optimization can make AE far more practical. Consider the graphics technique of LOD (level of detail), in which objects farther from the viewer are displayed in progressively lower levels of detail. Using LOE (level of emotion), characters farther away from the viewer would generate and display progressively lower levels of emotion. If a character is out of sight, we generally don't care about its emotional state. In addition, one can be careful in the choice of characters to use. Using human characters necessarily implies that their behavior must be deep and complex. Unfortunately, because we are most attuned to recognizing human emotion, we are also very good at recognizing flawed human behavior, which can break the illusion of an otherwise well-constructed simulated environment. One way to attack this problem is to use nonhuman characters. Cats, dogs, and Norns all show engaging levels of interactive emotional behavior that maintains the illusion of life without having, or needing, the complexity of human emotional responses.
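As a rough illustration of the LOE idea, the sketch below throttles how often a character's emotion layers are updated according to distance from the viewer, and skips the work entirely when the character is out of sight. The distance tiers and update rates are arbitrary assumptions, and it reuses the EmotionLayers struct from the earlier sketch.

```cpp
// Level of emotion (LOE): nearer characters get frequent, detailed emotion
// updates; distant characters get coarse, infrequent ones; unseen characters
// get none. Thresholds and rates are illustrative only.
struct AECharacter {
    EmotionLayers emotions;          // from the earlier sketch
    float timeSinceAEUpdate = 0.0f;
};

void updateWithLOE(AECharacter& c, float distanceToViewer, bool visible, float dt) {
    c.timeSinceAEUpdate += dt;
    if (!visible) return;                                   // out of sight: skip AE

    float interval = (distanceToViewer < 10.0f) ? 0.0f      // near: every frame
                   : (distanceToViewer < 50.0f) ? 0.25f     // mid-range: 4 times/sec
                                                : 1.0f;     // far: once per second
    if (c.timeSinceAEUpdate >= interval) {
        c.emotions.update(c.timeSinceAEUpdate);              // catch up on elapsed time
        c.timeSinceAEUpdate = 0.0f;
    }
}
```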

An important point to reiterate here is that we’re specifically dealing with autonomous interactive characters. These characters have responses and behaviors that cannot be prescripted or predefined to any great degree and must instead employ systems that are able to produce behavior in response to changes in the environment and interactions with the user.

How can we use AE?

Artificial emotion produces two fundamental components as output: gestures and actions. Actions are a general category and are dependent upon the context of the situation in which the character exists. A simulation’s movement system uses AE to select and/or modify an action. When selecting an action, AE indicates what actions are appropriate to the character’s personality and current mood. So a timid character is unlikely to do anything aggressive, for example. When modifying an action, AE can help to determine how an action is carried out. An outgoing, extroverted character might perform an action enthusiastically, although this probably wouldn’t be the case for an extreme introvert. Our primary use of AE, however, is in driving gestures, namely hand, body, and facial gestures. Gestures are the way in which we communicate our emotions to the outside world. Without them, we would seem cold, flat, and unemotional — rather like a computer character. These AE-driven gestures are tied directly to our characters’ personalities and moods and follow definite patterns.
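To show how AE might sit between a movement system and its actions, here is a small sketch of the two uses described above: selection (a timid character vetoes aggressive options) and modification (an extrovert performs the chosen action more enthusiastically). The trait names, fields, and thresholds are my own assumptions for the example, not part of any particular engine.

```cpp
// AE used two ways: (1) filtering which actions fit a character's personality
// and mood, and (2) modifying how a chosen action is performed.
// All trait names and numbers are illustrative assumptions.
#include <algorithm>
#include <string>
#include <vector>

struct Action {
    std::string name;
    float aggression;   // 0 = passive .. 1 = very aggressive
    float energy;       // how enthusiastically the action is performed
};

struct Traits {
    float boldness;      // 0 = timid .. 1 = fearless
    float extroversion;  // 0 = introvert .. 1 = extrovert
};

// Selection: drop actions this personality, in this mood, wouldn't take.
std::vector<Action> selectActions(const std::vector<Action>& candidates,
                                  const Traits& t, float mood) {
    std::vector<Action> allowed;
    for (const Action& a : candidates) {
        // A bad mood (mood < 0) makes aggression a little more acceptable.
        float tolerance = t.boldness + std::max(0.0f, -mood) * 0.3f;
        if (a.aggression <= tolerance) allowed.push_back(a);
    }
    return allowed;
}

// Modification: the same action is played back with more or less enthusiasm.
void modifyAction(Action& a, const Traits& t) {
    a.energy *= 0.5f + t.extroversion;   // extroverts ramp it up, introverts damp it
}
```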

This body language adds an extra dimension to a character's behavior, giving life and depth to simulations populated by autonomous characters that now possess unique personalities. We are all used to seeing environments populated by characters that all have identical motions or body language. They all stand stiffly upright and move like clockwork toys. Would it not be refreshing to see a sad-looking fellow, shoulders hunched, arms hanging limply, walking slowly as he makes his way through our environment? This idea immediately introduces all sorts of theatrical and cinematic possibilities, such as populating our environment with a whole cast of unique characters. Our viewer's experience would be enriched as well. "Who is that guy? Why does he look so sad? What's his story? Should I go and ask him?" These are the kinds of questions that occur to the viewer of a truly interactive experience, and without AE they simply never arise.


P.F. Magic's Dogz

(It should be noted that I could also substitute the acting term character in place of my term personality. Character might be a more appropriate term, but could confuse the reader because I’m using character to indicate an autonomous agent in this article. The terms are, however, interchangeable.)

The future of AE

I can imagine a scene: I'm searching for a lost city in a wild, remote jungle with my trusted autonomous companion Aeida. Suddenly, we find the entrance to the city and walk in. It's still inhabited. The inhabitants' body language changes when they see us, reacting to our sudden intrusion. Some become fearful, backing away and curling into a nonthreatening posture. Others do the opposite, standing upright, shoulders back, chest out, and fists clenched — looks like trouble. We stand motionless for a time, until a very jovial character smiles broadly at us, laughs, then comes over to greet us, telling the other inhabitants to do likewise. The inhabitants' interactive behavior, and more importantly their individual behavior, creates a living world for us to explore and within which to entertain ourselves. This environment would be socially oriented; our decisions and actions would be based upon the personalities and moods of the characters we encounter. Essentially, the characters' decisions and actions would be interactively based upon ours; nothing would be prescripted (unless the designer of the experience wished it that way, as in interactive theatre).

Such a world would require that designers spend a good deal of time designing their characters for deep and engaging roles. Designers will need to add the skills of scriptwriting and storytelling to their growing repertoire of talents. Interactive theatre and cinema is a relatively new area that is emerging around autonomous characters. Those who are interested in participating in its development would be wise to start their reading now. A great place to start looking is the web site compiled by Andrew Stern of P.F. Magic at http://pw2.netcom.com/~apstern/index.html. Here you'll find links to just about every conceivable source in these fields and many more besides.

The convergence of many factors — processor speed, market awareness, and the maturation of the entertainment field, to name a few — will revolutionize the way in which we use characters in simulations. Whole new avenues and genres will open up before us. These developments may come not a moment too soon, considering the growing (and plausible) perception that videogames turn kids into desensitized and violent members of society. Designing experiences around social interaction may not push your buttons, but society at large will probably thank you (and give you lots of free press). This is a subject charged with real emotion, emotion that could, ironically, be averted through the use of artificial emotion.

Visionary, life designer, philosopher, creative genius, egomaniac, and legend in his own day dreams. Ian (Gamasutra Profile) aims to be the father of believable, emotional, virtual characters in the emerging arena of entertainment simulation. He is currently trying to establish artificial emotion as a separate field of study from AI (which is, according to Hollywood, going to take over the world and enslave us all!). He can be reached at [email protected].
