Agitating For Dramatic Change

Randy Littlejohn believes that game developers show little interest in interweaving non-linear story elements, strong character development and the principles of drama into interactive designs. He wants a new kind of interactive experience that is comfortable and compelling for the masses. Here's his blueprint for it.

Randy Littlejohn, Blogger

October 29, 2003

Forms of computer-based interactive entertainment are heavily controlled by the idea that they are "games", which are produced for a narrow (but profitable) market of "gamers". Thus, fast, fun arcade-like experiences, artificial puzzle-solving, gaining points and "winning" have been the main emphasis in interactive design, even while the graphic and sound environments have become more and more realistic -- even as NPCs have become embedded with so-called "AI".

The idea of story is largely used to set the stage for first-person shooters and role-playing games. Once the game begins, story elements become simplistic, linear or at least pre-defined, and "underwhelming" -- if they exist at all. Character development is largely left behind after the opening movies and the seldom-read documents that come with the game, which outline who's who and why they're doing what. It is rare indeed to find good character development and multi-layered, gradually unfolding stories in computer games - to say nothing of good, emotionally moving drama. I have heard the justification that computer-animated NPCs are simply not sophisticated enough to pull off a dramatic performance - and yet even poorly animated Saturday morning cartoons can be emotionally involving (rarely, admittedly, but they are indeed sometimes moving). The NPCs in Half-Life 2 are more life-like and can communicate a wider range of emotions than those of perhaps any game before it, the era of live-action games excepted. Nevertheless, judging only from the E3 demo, Half-Life 2 still seems to be basically a "shooter", rather than an interactive drama, albeit one set in a more realistic universe than usual.

Valve has made NPCs more lifelike than ever in Half-Life 2, but at its core, the game is still a "shooter", not a drama.

No, it's not that NPCs can't emote. Instead, I think that given the emphasis of "game-think", and a market of "gamers", it's clear that the ideas of story and drama are simply a low priority.

And there's nothing at all wrong with this. Computer games serve a lucrative market. If it's not broken, don't fix it. It's just that I think a far bigger market is being left untapped.

In addition, I've found that people who are not professional writers or storytellers -- "designers", "level designers" or "producers" -- hash out a story premise for a game, or decide on a setting populated by a certain kind of characters and monsters who live in a matrix of certain rules. Sometimes a professional writer will be brought in to take what has already been decided upon and flesh it out. The professional writer may write a background story that sets the stage for the action, and/or biographies for the main characters. Much of this will never be seen in the game itself, beyond opening movies and cinematics. Sometimes professional writers will even get in on dialogue writing. But in terms of actual game design, my experience has been that in general there's little attempt and little interest in interweaving non-linear story elements, strong character development and the principles of drama into interactive designs. This hampers appealing to a mass audience as much as the insistence on developing interactive entertainments by game-think alone.

Other kinds of interactive entertainment -- based on good storytelling, good character development and an adaptation of the principles of drama, and targeted at consumers who have computers but are not avid gamers -- are waiting to be designed, and profited from. I think the masses are ready to spend money on an interactive drama that leaves the trappings of computer "games" behind. Whoever builds this groundbreaking system is going to get rich.

This article is a follow-up to an earlier Gamasutra article I wrote, "Adapting the Tools of Drama to Interactive Storytelling". That article has much more to say about the nature of drama, and I suggest reading it first before continuing with this one. For the purposes of this article, drama is not a genre of entertainment. It is a toolset of principles developed over hundreds, if not thousands, of years to rigorously enhance communication. To quote Martin Esslin in An Anatomy of Drama, "For the expression of the imponderable mood, the hidden tensions and sympathies, the subtleties of human relationships and interaction, drama is by far the most economical means of expression."

Interactivity for the Masses

I'm agitating for the creation of a new kind of interactive experience that is comfortable and compelling for the masses. This new art form would immerse the experiencer inside a reality very much like what he or she is already familiar with: film and television.
This is a search for a method of "interactive dramatic narrative presentation" and packaging.

What I see is an interactive drama for the masses who have computers, but who are not "gamers". The masses will be drawn to this experience for three reasons: it is familiar, like TV and film; the interface is simple and intuitive; and the characters are emotionally evocative, their plight understandable and just. There are no brainteasers laid artificially and superficially into the design. If there are to be puzzles, they are puzzles that evolve out of the dramatic backbone of the experience. In fact, everything that can be considered a trapping of "game thinking" would be absent from this new kind of interactive dramatic experience. Though the designer knows that the experience will have a beginning that sets up the narrative, a middle with evolving conflict, and an end with a good resolution -- no one knows how the dramatic experience will evolve. In my vision, advancing from A to B to C will be a non-linear, yet also emotionally powerful, dramatic experience. So far, experiments with interactive storytelling have failed to take into account the need to adapt the principles of drama to interactivity, and thus these experiments have been merely interesting, instead of truly emotionally involving.

In my imagined design, the moment-to-moment experience is not pre-defined. Nevertheless, a satisfactory dramatic experience demands there to be a definite beginning, middle, and end, which will support a rising level of tension until the dramatic climax and resolution is achieved. I see a system in which the dramatic and narrative principles and support elements are managed at the macro-level, in order to achieve drama, but in which these elements are active in a non-linear, non-branching way at the micro-level.

It will take a design team to create such a groundbreaking entertainment -- not just a designer. The team will be composed of a dramatist/storyteller/writer, a programming lead, an art lead, and a sound/music lead. There will be no talk of "levels" and such. There will be no talk of whether the experience will be a shooter, a role-playing game, or a massively multi-player on-line game. There will be no mention of the word "game". Instead there will be talk of "narrative environments", synthespians, synthespian directors, motivations, subtext and goals, emotional environments, and real-time adaptive music. There will be talk of the macro-level "drama engine", which provides for a three-act structure, like an umbrella, over non-linear narrative development. There will be development of interactive tools for dramatists who are not necessarily programmers.

In a nutshell, I want to encourage a dramatic story-environment in which the experiencer and truly AI-smart NPCs, each with their own goals, biases, and methodologies, co-create the narrative at the micro level, in real time, as their actions trigger the results of dramatic situations that are pre-defined at the invisible macro level by an interactive writer/dramatist.

Considered for use at Sierra, Haptek's People Putty allows you to create an interactive 3D character and then, using a set of sliders, give your character a range of emotions.

I have long believed that combining a story/drama world-authoring engine, perhaps something like Chris Crawford's "Erasmatron" project, with a front end something like Haptek's "People Putty", would supply the major animation, management, and creator-interface software components of such a project. At one time the People Putty engine was being considered for an adventure game at Sierra. I was present for long demos and was able to talk at length with the founder of Haptek, Chris Shaw, so I am very familiar with what they've done, and I'm impressed. I'm also impressed and fascinated by Chris Crawford's Erasmatron efforts, which I've been following for several years now. Yet, since his is largely a one-person effort, and since his development platform is only available to Mac users, I fear his efforts may take a very long time to pay off. Nevertheless, I encourage readers to check the Haptek and Crawford URLs.

The Drama Engine

I see rich, unplowed fields waiting for a new paradigm for the masses, a paradigm that leaves game-thought behind. Central to this new paradigm is the creation of a "drama engine" to be placed at the heart of a system.

Computer technology is advancing at an incredible rate, but few people outside of academia seem to be thinking about how to evolve the tools of drama so that they can work in a computerized, non-linear, interactive environment. Drama has always depended upon the control of audience perspective in a linear series of events. So drama must evolve now. That's my interest. But this interest needs a test bed.

I look out there and see that all of the components for a test bed are now available (though dispersed among various computer game development tools and non-entertainment projects). If combined, these elements could lead to a new kind of interactive entertainment -- call it interactive drama, or interactive drama worlds -- a step in the evolution of drama towards a real Star Trek Holodeck experience.

I envision a system of interacting modules supporting life-like NPCs with the ability to "act" -- call them "synthespians", as some have. The list of modules would include at least the following: Adaptive Learning, Pattern Recognition, Expert Systems, Speech Processing, and Text Parsing. But I do not envision creating autonomous agents that are truly "aware", of course. (If you are reading this in a way that does not allow the use of the above hyperlinks, see the section "Parts and Pieces".)

As a metaphor for what I see, let me give an example: stage sets are only designed to the degree that they will be used. If a door in a flat is to be used, it is built strongly enough so that actors can repeatedly open and close a door and move through the doorway without the prop falling apart or shaking the flat in which it's embedded. However, if the door will never be used, there doesn't even have to be an opening in the flat - just a door painted on the surface.

Drama is smoke and mirrors - its elements only need to seem real. Bringing the metaphor back to AI, there doesn't need to be "real" understanding by the machine, or any "real" communication to make interactive drama work. It must only appear to the experiencer that NPCs are capable of real intelligence, understanding, needs, goals, emotion and communication.

I envision an interactive entertainment in which synthespians and the experiencer interact in a "drama-world" made of theatrically atmospheric environments saturated with exposition (story elements), dramatic potential and events orchestrated by a "drama-engine". I see a dramatic work arising from an environment where, given certain starting criteria, there can be an emergent and yet dramatic story involving believable, likable characters and characters who can be loathed, and yet be three-dimensional.

I am especially interested in the potential of an autonomous "sidekick" or "partner", who would inspire empathy in the experiencer, and who would help instigate an adventure/quest. I envision a human stranger in a strange land, with the sidekick being the liaison, as well as potential friend, helper, and fighting partner. This sidekick would be an AI-smart synthespian who can learn and apparently reason, and who is obviously afflicted with needs and desires, as we all are, and who is motivated by a strong, just and dramatic goal that is in conflict with the state of the drama-world. These abilities are important if we are to empathize with the character. We must empathize before we care. We must care before we are emotionally involved. We must be emotionally involved if we are to experience the emotional roller coaster and payoff of good drama. We must care about our friend and his or her just cause. We must worry when our friend is threatened, or when the cause is threatened.

Invisible in all of this is the "dramatist" in the background - behind the curtain, who uses a new kind of tool to "direct" the theatrical potential of the unfolding experience by inputting narrative elements, inherent conflict, characters (with wants, needs, goals, schedules and action abilities that will collide in conflict) and dramatically "soaked" environments.

This new kind of production tool would be designed for a dramatist who is not necessarily a programmer. To program well takes years of dedication. To become an excellent composer takes years of dedication. To become a talented animator takes years of work. And to become a writer-dramatist takes years of dedicated work too. It makes sense to let each talent area work at what they do best. It is unrealistic to think that a single person can be a talented AI programmer, a compelling writer AND an insightful dramatist. I'm sure that somewhere such a renaissance person exists, but can we realistically expect one of these rare people at each interactive company?

This new development tool for the non-programmer writer-dramatist would allow for the development of at least characters and interactive story elements. The tool would need to plug into world-creation tools, such as existing level editors.

Synthespians in an Interactive, Dramatic World

This is what I mean by synthespians: I'd like to explore the potential of creating autonomous agents with believable "dramatic character". In other words, I'd like to see autonomous agents with goals, biases, and abilities who carry out apparent "intent" - all inspired by the principles of drama.

Synthespians within the drama world would be designed so that they a) do certain things on a certain schedule unless they are interfered with, b) are tied to "communication libraries", and c) are autonomous, in that they have goals, biases and abilities which allow actions to be taken towards those goals.

Like a real person, a synthespian may need to get up at a certain time, travel to work, stay at work for a certain period of time, stop by the store, come home, and stay at home for a certain period of time. But, because the agent has various goals (which may conflict) and is autonomous, the agent may try to work other actions into the overall schedule. If internal needs are strong enough, the agent may even violate the daily schedule in order to get something else done. But there would be a price to be paid for violating the overall schedule: getting fired, pissing off the mate, etc. I'm speaking here in mundane terms to illustrate the point. The "price that is paid" is part of the dramatist's pre-defined setup.

If a normal daily schedule is a goal, a synthespian may run up against other agents whose goals conflict with its own, which initiates a change of goal priorities. In addition, if two or more synthespians have conflicting goals, then you have drama. As Babylon 5 creator J. Michael Straczynski illustrated so succinctly in his The Complete Book of Scriptwriting, CHARACTERIZATION + DESIRE = GOAL. GOAL + CONFLICT = STORY -- in this case, emergent story based on broad dramatic principles. That synthespians would have conflicting goals is part of the dramatist's pre-defined setup.

Synthespians may suffer conflicting wants and needs, which would lead to conflicting goals. This creates character. These inner conflicts are a part of the dramatist's pre-defined setup.
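
To make the schedule-versus-need arbitration of the last few paragraphs concrete, here is a minimal sketch in Python. Everything in it -- the class names, the growth rates, the override threshold -- is my own illustrative assumption, not part of any existing engine: a synthespian follows its daily schedule until an internal need grows urgent enough to override it, and the override carries a dramatist-defined price.

```python
# Minimal sketch: a synthespian's scheduled duties compete with internal
# needs. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Need:
    name: str
    level: float = 0.0        # grows over time
    growth: float = 0.1       # per tick
    threshold: float = 1.0    # urgency at which it may override the schedule

@dataclass
class Synthespian:
    name: str
    schedule: list            # (hour, duty) pairs: the "normal day"
    needs: list = field(default_factory=list)
    price_paid: list = field(default_factory=list)  # dramatist-defined costs

    def tick(self, hour):
        for need in self.needs:
            need.level += need.growth
        duty = next((d for h, d in self.schedule if h == hour), "idle")
        urgent = max(self.needs, key=lambda n: n.level)
        if urgent.level >= urgent.threshold:
            # Violating the schedule has a cost -- part of the pre-defined setup.
            self.price_paid.append(f"skipped '{duty}' to satisfy {urgent.name}")
            urgent.level = 0.0
            return f"{self.name} breaks routine to satisfy {urgent.name}"
        return f"{self.name} performs scheduled duty: {duty}"

worker = Synthespian("Mara",
                     schedule=[(8, "travel to work"), (9, "work"), (17, "shop")],
                     needs=[Need("see estranged brother", level=0.7, growth=0.15)])
for hour in (8, 9, 17):
    print(worker.tick(hour))
print(worker.price_paid)
```

The point of the toy is that both the schedule and the price list are authored by the dramatist, while the moment of violation emerges at run time.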

The sidekick could guide the experiencer by helping to steer him or her toward interesting places and away from areas that are boring or that the player is not ready for. Sure, the player could ignore the advice, and the sidekick would still try to bail them out, but it would keep trying to lead them back onto the path of the conflict that is at the heart of the drama-world status quo, and thus towards emergent story and drama. The conflict at the heart of the drama-world status quo is part of the dramatist's pre-defined setup.

Synthespians could interact with the experiencer by employing the tactics used in Commedia dell'Arte. Commedia dell'Arte performances and techniques spread throughout Europe during the 16th and 17th centuries, with offshoots in France, Spain and England. In this form of performance the players follow the outline of a well-known story with well-known archetypal characters. But neither the audience nor the players know exactly how the story will be told until they begin to perform. Each player has a well-rehearsed repertoire of "tricks", or "skit pieces". The players throw these tricks back and forth at each other at whim - each reacts to the others' tricks by pulling out their own tricks to throw back. It's like jazz musicians following a chart, not knowing who will play, or which notes will happen in what order, until they get there, and in the process they inspire and challenge each other. As the players do this, they are very aware of the audience. If the audience doesn't seem to be interested in one set of tricks, they'll try others, and in this way attempt to keep the audience entertained until the conclusion of the story. It's loosely scripted improvisation.

How is that applied to interactive drama and synthespians? Each of the characters the experiencer can potentially interact with can have a library of various things that they can do or say - a library of tricks. We don't know the course of the emergent drama/story, because we don't know where the experiencer will explore first, second and third, and we don't know whom the player will communicate with first, second and third. And in those communications we don't know how the experiencer will react. But (as the directors behind the curtain) we can make sure that in Act I all of the synthespians are attached to their Act I libraries. In Act II they will all be attached to their Act II libraries, and so forth, so that we get the growing dramatic tension of moving through acts, as in a play or movie. They could each have three libraries of activities and schedules, so that they are always doing things appropriate to each sequential act. Further, the major goals and biases for the main synthespians could be in libraries too. While we don't know how the story/drama will emerge, the dramatist will know essentially what story/drama will emerge, and will have content control of synthespians at the motivational level, and control of the dramatic structure at the act level.
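
As a sketch of the act-library idea, consider the following fragment. The library contents and the trick-selection rule are invented for illustration; the point is only that the active act selects which library a synthespian draws from, and audience interest nudges the choice of trick.

```python
# Sketch of per-act "trick" libraries, Commedia dell'Arte style. The
# library contents and selection rule are illustrative assumptions only.

import random

class Synthespian:
    def __init__(self, name, act_libraries):
        # act_libraries: {act_number: [tricks appropriate to that act]}
        self.name = name
        self.act_libraries = act_libraries

    def respond(self, act, audience_interest):
        """Pick a trick from the current act's library; if the audience
        seems bored, vary the routine instead of leading with the usual."""
        tricks = self.act_libraries[act]
        if audience_interest < 0.5 and len(tricks) > 1:
            return random.choice(tricks[1:])   # vary the routine
        return tricks[0]                       # lead with the signature trick

harlequin = Synthespian("Harlequin", {
    1: ["introduce the rivalry", "tease the audience", "pratfall"],
    2: ["escalate the rivalry", "mistaken identity gag"],
    3: ["reveal the deception", "reconciliation speech"],
})

for act in (1, 2, 3):
    print(f"Act {act}: {harlequin.respond(act, audience_interest=0.4)}")
```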

It is necessary for synthespians to be like improvisational performers, in that they will need ways of getting out of conversations, or of steering them, without breaking character. If a synthespian doesn't have the information required by the experiencer, or isn't willing to share the information it has, then there should be a method within its database that allows it to deal with the situation while staying in character.

I do not mean that the libraries attached to synthespians would include pre-written scripts. I am pretty sure that the stories in both Diablo and Blade Runner were implemented with a methodology similar to what I have described. But in those games it was painfully obvious that agents only knew a few things to talk about until the next level was triggered, at which point everyone suddenly had new things to talk about. Instead, more sophisticated options are necessary. We may need to look outside of the game/entertainment industries for the tools we'll need. More on this below.

The idea of having synthespians attached to libraries that are broken up into dramatic acts can work with story as well. In a linear story you can think of a string of pearls, where each pearl is a scene and the thread is the through-line of action. In an interactive drama world, think of a broken string of pearls, where the experiencer can explore and discover each pearl, like an ant discovering the pearls from a broken necklace on a tabletop. Each pearl can be seen as a location or an event that has dramatic story elements embedded in it. But what if the pearl that contains the "end" information could be found first, rendering the rest of the pearls moot?

To solve this problem, the pearls could be put into three groups, with "gates" between each group. Now our "ant", the experiencer, can wander at will through group "A" pearls, which include Act I information. Act I is designed to fulfill the exposition needs of the following acts and to work as a benchmark dramatic-tension level. Group "B" pearls continue with the following story elements and up the ante in dramatic tension, and so forth. This way the experiencer has a non-linear trip at the micro level (finding the pearls within a group), and yet is led to greater and greater heights of dramatic tension in an organized way, because of the macro-level structuring of the pearls into three groups, each with its own dramatic purpose. I'm assuming here that the gating mechanism would be invisible, or at least not obvious. The content that is embedded in each "pearl" is part of the dramatist's pre-defined setup.
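
A minimal sketch of the gated pearl groups, assuming one plausible gating rule (a gate opens invisibly once every pearl in the current group has been found):

```python
# Sketch of the "broken string of pearls" with invisible act gates. The
# gate rule (all pearls in the current group found) is one plausible
# choice among many.

class PearlWorld:
    def __init__(self, acts):
        # acts: list of sets of pearl names, one set per act
        self.acts = acts
        self.current_act = 0
        self.found = set()

    def discover(self, pearl):
        if pearl not in self.acts[self.current_act]:
            return f"'{pearl}' is behind a gate -- nothing here yet."
        self.found.add(pearl)
        # The gate opens invisibly once every pearl in this act is found.
        if (self.acts[self.current_act] <= self.found
                and self.current_act + 1 < len(self.acts)):
            self.current_act += 1
            return f"Found '{pearl}'. (Act {self.current_act + 1} quietly opens.)"
        return f"Found '{pearl}'."

world = PearlWorld([
    {"old letter", "burned photograph"},          # Act I exposition
    {"witness's confession", "hidden ledger"},    # Act II escalation
    {"the locked room"},                          # Act III climax
])

for pearl in ("the locked room", "old letter", "burned photograph",
              "hidden ledger", "witness's confession", "the locked room"):
    print(world.discover(pearl))
```

Note how the first attempt on "the locked room" fails quietly: the end-of-story pearl simply cannot be found before its act, which is the whole purpose of the gates.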

Drama-O-Rama

The following ideas and information are meant to work towards the idea of creating a drama engine, which includes dramatic rules, synthespians (special purpose semi-autonomous agents) and a bare stage. The bare stage would be the equivalent of a level editor, but the system also includes an interface that is designed to easily and intuitively input the basic tools of drama by a non-programmer: narrative elements, major and minor conflicting goals for the protagonist(s) and antagonist(s) and their allies, and a three-act structure.

The goal is the facilitation of interactive dramatic works, where the experiencer is immersed in a drama (action/adventure, mystery, thriller, sci-fi, whatever) with seemingly aware and intelligent NPCs. The dramatist/storyteller/director creates a world ripe with atmosphere, populated by intelligent actors, and supplies a dramatic goal/challenge for the experiencer and experiencer's allies, and counter-goals for the antagonist, and antagonist allies. What happens within this dramatic context is unpredictable. In other words, the dramatist creates the dramatic potential, but the drama/story evolves in an unpredictable way based on the actions of the experiencer and the intelligent NPCs in a simulated living world.

John Laird, Professor and Associate Chair of the Computer Science and Engineering Division at the University of Michigan.

The final result (finished dramaworld) is stripped of the dramatist's interfaces and is fitted with an end-user interface. The finished drama world becomes a sim for the experiencer, which can be copied and sold.

Character auto-routines for simple behavior, as expressed through facial expression and body language, would be pre-defined and standardized, with "clickable" code generation for apparent emotional responses to pre-defined stimuli embedded in the environment and in other characters. Reverse parsing and synthetic speech are assumed. A character's emotional response generates code that is automatically embedded in text, to be parsed by a synthetic speech system; the embedded code triggers an emotional rendering of the synthetic speech.

Haptek's Virtual Friend software already includes these abilities, although it is up to the user to manually create scripts with the embedded codes for the emotional coloring of speech (and for character movement, costume changes, morphing, etc.). In the system I see, the macro management of the drama engine takes over the script writing for the characters, based on rules input by the human dramatist.
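
As an illustration of machine-generated emotional markup, here is a tiny sketch. The tag format is invented for this example -- Haptek's actual script codes differ -- but it shows the idea of the drama engine, rather than the user, embedding emotion codes into dialogue text for the speech renderer.

```python
# Illustrative sketch only: wrap dialogue in invented emotion tags that a
# downstream speech synthesizer could parse. Haptek's real script codes
# are different; this shows the idea of machine-generated emotional markup.

def tag_speech(text, emotion, intensity):
    """Embed an emotion code into dialogue text for the speech renderer."""
    return f"[emotion={emotion} level={intensity:.1f}]{text}[/emotion]"

def speak(character_state, text):
    # Pick the dominant emotion from the character's current state.
    emotion, intensity = max(character_state.items(), key=lambda kv: kv[1])
    return tag_speech(text, emotion, intensity)

state = {"fear": 0.2, "irritation": 0.8, "warmth": 0.3}
print(speak(state, "I really must be going now."))
# -> [emotion=irritation level=0.8]I really must be going now.[/emotion]
```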

Some producers are beginning to experiment with the basics of what I'm talking about. For example, in Black & White your creature can be trained, through your Pavlovian tutelage, to interact with the objects and inhabitants of the game world. In Microsoft's game Halo, NPCs have knowledge about the "state of the world" as they've perceived it (memories of enemy sightings, weapon locations); an emotion system that changes based on events (growing more fearful during enemy onslaughts); and a decision-making system that consults the other systems to decide whether to attack, run for cover, or initiate another behavior.

John Laird and his colleagues have launched a research project at the University of Michigan to explore the possibilities of social interaction: an Unreal Tournament mod called Haunt 2, in which the player controls a ghost in a house inhabited by NPC "humans." Unable to handle most physical objects directly, the ghost must, in Laird's words, "entice, cajole, threaten, or frighten the AI characters into manipulating the objects in the world." The subtleties of Laird's social interactions are not yet commonplace in today's game AI, which still largely revolves around creating better military bots and training them to hunt down enemies in more believable ways, but those subtleties of social interaction are exactly what we must pursue in interactive drama. We need to go further.

I look towards new tools such as AI.implant, DirectIA, and Stottler Henke's SimBionic, as discussed in Eric Dybsand's five-part series. Each of these products adds to a toolset, but the toolset must work within the context of a drama manager, and more tools are needed.

Haunt 2 is an attempt to "...integrate knowledge-based, goal-oriented reasoning...with emotions, personality, and physical drives that have been used in simple, knowledge-lean agents in other systems," according to the developers.

The specialized modules of Halo's artificial brains (see "Wild Things: They fight. They flock. They have free will. Get ready for game bots with a mind of their own" By Steven Johnson) mirror what we now understand about the human mind's architecture. Instead of a single general intelligence, the brain is more like a Swiss army knife of task-specific tools -- face recognizers, syntax decoders, memory subsystems -- that collectively create our varied and adaptable intelligence. What if we could take advantage of progress in these areas?

The NPCs I envision should be sophisticated artificial intelligence bots -- their decision-making guided by complex neural nets and simulated emotions, their perceptual systems honed to detect subtle changes in their environment -- with real-time perceptions of the world around them (aural, visual, and tactile). There should be nuanced natural-language routines, perhaps webcam gesture recognition, and machine learning. NPCs should be able to communicate among themselves, share new ideas and collaborate on group tasks. We're looking for managed, yet unscripted, emergent behavior, as always -- based on the principles of drama.

Proposed "Drama-O-Rama" Systems

The following features would plug into something like current level-editor software packages, which include 3D environment creation, but would add the ability to introduce and tune the behavior of NPCs.

Motivation Module

Synthespians form goals and act as a result of wants and/or needs. Wants and needs are activated by timers or level detectors. Here are some examples of human wants and needs: hunger, curiosity, sleep, and acknowledgement. There are many more, of course. After a specific period of time, or at a specific level, a query is activated by a synthespian. The query is a request to retrieve relevant data in a memory system. Retrieved data is the basis for an action to be taken in an effort to maintain want and need levels, which are specific to each NPC.
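
Here is a minimal sketch of the Motivation Module as just described. The drive names, growth rates, and thresholds are illustrative assumptions; the essential loop is that a level detector fires and issues a query against the memory system.

```python
# Minimal sketch of the Motivation Module: wants/needs rise on timers or
# level detectors; crossing a level fires a query against the memory
# system. Thresholds, rates, and the query format are assumptions.

class Drive:
    def __init__(self, name, rate, threshold):
        self.name, self.level = name, 0.0
        self.rate, self.threshold = rate, threshold

    def update(self, dt):
        self.level += self.rate * dt
        return self.level >= self.threshold   # level detector fires?

def motivation_tick(drives, dt, memory_query):
    queries = []
    for d in drives:
        if d.update(dt):
            # Ask the memory module how this need has been met before.
            queries.append(memory_query(d.name))
            d.level = 0.0
    return queries

drives = [Drive("hunger", rate=0.2, threshold=1.0),
          Drive("acknowledgement", rate=0.05, threshold=1.0)]
fake_memory = lambda need: f"retrieve actions relevant to '{need}'"
for step in range(6):
    for q in motivation_tick(drives, dt=1.0, memory_query=fake_memory):
        print(f"t={step}: {q}")
```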

Will Wright and The Sims team view the evolution of a Sims player's character as a traversal of a "happiness landscape", in which players make decisions about the pursuit of material wealth versus social fulfillment. Exclusively pursuing social standing or material goods results in lower ultimate success (the left and right light-blue corners of the diagram) than a more balanced approach (a path up the middle to the dark-blue peak).
Source: Will Wright's 2003 GDC lecture slides.

Will Wright and his team for The Sims came up with a happiness landscape, borrowing from evolutionary theory's concept of a fitness landscape, in which organisms climb ever-higher peaks of adaptive fitness as natural selection runs its course. Rather than traversing a genetic landscape of fitness, you're traversing a spatial landscape of happiness, but it could also be a "wellness" landscape, or a "learning" landscape, or a "drama" landscape -- or a combination.

This module would have an input for drama/story wants and needs specific to each major NPC character. In this way, the synthespians can act autonomously, as in a sim, but also are guided by the invisible hand of the dramatist.

Memory System Module

What if NPC memory could work like the human brain, in that linear, intellectual data is stored on one side of a relational database, while pattern-recognition data (visual similarity, symbolic association, association by opposite, association by color, association by theme, etc.) is stored on the other? What if these two sides of the database were related to each other?

Questions put to this relational database (by way of the experiencer questioning an NPC) would set into motion a process of scanning for matching linear content, as well as visual material that simply fits a pattern suggested by the query. This module would also have an interface for the easy input of story/drama-related material.
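
A sketch of the two-sided memory might look like the following, with literal facts on one side and loose pattern associations (by color, location, theme) on the other, joined by shared keys. The schema is an illustrative assumption, not a description of any real system.

```python
# Sketch of the two-sided memory: literal facts in one store, loose
# pattern associations in the other, joined by shared keys. The schema
# is an illustrative assumption.

facts = {
    "red door": "The red door leads to the harbor master's office.",
    "harbor":   "Ships leave the harbor at dawn.",
}

associations = {   # pattern side: key -> related keys, by various linkages
    "red door": ["blood", "harbor"],     # by color, by location
    "harbor":   ["red door", "dawn"],    # by location, by theme
}

def query(topic, depth=1):
    """Return the literal fact plus pattern-associated material."""
    results = []
    if topic in facts:
        results.append(("fact", facts[topic]))
    if depth > 0:
        for linked in associations.get(topic, []):
            if linked in facts:
                results.append(("association", facts[linked]))
    return results

for kind, text in query("red door"):
    print(f"{kind}: {text}")
```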

What if this relational database were the center of a chatterbot like A.L.I.C.E. (http://alicebot.org)? What if each NPC could be a chatterbot?

This module would have an input interface to receive drama/story-inspired answers to synthespian wants and needs queries.

Between the above module and this one, the dramatist is able to supply drama/story-inspired wants and needs, and also drama/story-inspired ways to solve those wants and needs in story-important synthespians.

This module feeds into a Course of Action Module.

Course of Action Module

Though main characters would first act on the dramatist's main goals, synthespians would be able to form subservient goals as a result of their own wants and needs, and would have their own unique abilities for satisfying those wants and needs through a course of action.

In order to accomplish these goals, the NPC would choose from potential physical actions, or compose a statement/question to be fed to the natural-language module. The physical action could be anything from a facial expression of emotion or body language to complex physical activities such as travel or interaction with the environment or another NPC.
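
A minimal sketch of that choice, with an invented scoring rule standing in for whatever the expert system described below would actually supply:

```python
# Sketch of the Course of Action Module's choice between a physical
# action and a composed utterance. The scoring rule is an assumption.

def choose_course_of_action(goal, candidates):
    """candidates: list of (kind, description, expected_progress) tuples,
    where kind is 'physical' or 'speech'."""
    kind, description, _ = max(candidates, key=lambda c: c[2])
    if kind == "speech":
        return f"compose utterance for natural-language module: '{description}'"
    return f"execute physical action: {description}"

candidates = [
    ("physical", "walk to the harbor office", 0.6),
    ("speech", "ask the clerk where the ledger is kept", 0.8),
    ("physical", "frown and cross arms", 0.1),
]
print(choose_course_of_action("find the hidden ledger", candidates))
```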

These goals would be informed by an Expert System.

Expert System

Aural, visual, and tactile data is fed into the memory system, but also into an "expert system" module. For details of expert systems, see below, under Parts and Pieces.

Living World Module

The persistent dramaworld should be programmable through an easy interface so that environments can be embedded with dramatic potential. The dramatist may wish to input data such as dynamic weather, geophysical, and atmospheric conditions, which can be triggered by proximity or by dramatic act (three-act dramatic structure). This is where the dramatist/storyteller inserts sets and props. This is like a level editor enhanced with inputs for actionable dramatic potential.

Synthespian Evaluation Module

Were needs and wants satisfied?

If "yes" go back to an "unmet goals temporary database" or the Motivation Module.

If "no" go back to data memory module for a new data search, followed by a new course of action, or try again if another course of action isn't found. This would be a continuous loop, until something intercedes - something like a more important drama/story-inspired action, the emergence of a greater need, the incapacitation of the synthespian, etc.

Output Modules

Real-time 3D animation execution.

Real-time text or speech execution.

Real-time music synthesis.

Parts and Pieces

Here are only some of the AI systems I've run across that may have the potential to be adapted for use in interactive-drama worlds.

  • Expert Systems. Underlying a hunch are dozens of tiny, subconscious rules - truths we've learned from experience. Add them up and you get instinct. Program those rules into a computer and you get an expert system. Built by TriPath Imaging, FocalPoint screens 5 million Pap smears per year for signs of cervical cancer. Programmers quizzed pathologists to figure out the criteria they consider when identifying an aberrant cell. Like human lab techs in training, FocalPoint teaches itself by practicing on slides that pathologists have already diagnosed.

    Is there something here that could be adapted for use in a drama engine? Maybe so. Perhaps we could take inspiration from this system and adapt the ideas for a kind of sim-dramatist, which manages the evolving drama/ story by becoming an expert dramatist. What if the NPCs were able to each make use of this kind of an expert system? What if a non-programmer dramatist could easily "teach" this kind of a system?

    I know about work being done to teach expert systems that make use of a general world-knowledge database. One expert system now seems to have the knowledge base of a four-year-old child, which is simply amazing. But the database for a drama-engine expert system wouldn't need to be that ambitious. It would only be concerned with the rules and principles of drama.

  • Adaptive Learning. Ascent Technology's SmartAirport Operations Center is a logistics program. In this program genetic algorithms use natural selection, mutating and crossbreeding a pool of suboptimal scenarios. Better solutions live, and worse ones die - allowing the program to discover the best option without trying every possible combination along the way. Figuring out ways to optimize complicated situations is what genetic algorithms do.

    Perhaps this has potential for our "Course of Action Module" -- a toy sketch of the idea appears after this list.

  • Pattern Recognition. The Falcon program, designed by San Diego-based HNC, maintains a perpetually micro-adjusting profile of how, when, and where customers use their credit cards. Good behavior is more predictable than fraudulent behavior. By studying habits, Falcon develops a keen eye for deviant behavior, which it detects using a combination of neural networks and straight statistical analysis. Neural networks work roughly like the brain: as information comes in, connections among processing nodes are either strengthened (if the new evidence is consistent) or weakened (if the link seems false).

    This system could analyze the actions of key NPCs and the player, with the results fed into the "expert system".

  • Speech Processing. Handspring has an after-hours tech support program that verges on conversational. The program extracts essential words like "PDA", "screen", and "error message". Using statistical analysis, the program identifies phonemes within a spoken sentence and assembles them into a variety of possible words. "Noise" words get discarded, keywords kept. Based on the combination of keywords kept, the program might suggest a fix -- or probe for more information, in a "disambiguation" routine.

    Could something like this be at the center of our chatterbot-like synthespians?

  • Text Parsing. Monster.com, a job bank, uses an intelligent Web crawler called FlipDog to find new customers. The crawler develops a sense for which parts of sites are more likely to contain jobs, then parses the pages to pull out the relevant information and files it in a database. Rather than rely on dictionaries, FlipDog focuses on word position and format clues. This works best for documents with relatively consistent features.

    Could this system also be adapted to the abilities of a chatterbot?
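
As promised above, here is a toy genetic algorithm over candidate action plans, in the spirit of the adaptive-learning item: better plans live, worse ones die. The action set and fitness function are illustrative assumptions for a Course of Action Module, not a real logistics system.

```python
# A toy genetic algorithm over candidate action plans. The fitness
# function and action set are illustrative assumptions only.

import random

ACTIONS = ["wait", "ask", "bribe", "sneak", "search"]
TARGET = ["search", "ask", "bribe"]   # pretend this plan best meets the goal

def fitness(plan):
    # Score a plan by how many positions match the (hidden) best plan.
    return sum(a == b for a, b in zip(plan, TARGET))

def evolve(pop_size=30, generations=40):
    pop = [[random.choice(ACTIONS) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # better plans live...
        survivors = pop[: pop_size // 2]      # ...worse ones die
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]         # crossover
            if random.random() < 0.2:         # mutation
                child[random.randrange(len(child))] = random.choice(ACTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # usually converges on ['search', 'ask', 'bribe']
```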

Conclusion

I'm saying four things.

  • There is a mass market out there that is ready for interactive-drama.

  • Before we can create interactive drama, the principles of drama must be evolved for interactivity.

  • Dramatists are the ones who must evolve drama for interactivity. These dramatists must, of course, understand what has already been accomplished in interactive entertainment, but they don't need to be programmers.

  • Finally, it may be advantageous to look outside, as well as within the game industry for inspiration as we tool up for a new kind of interactive entertainment, based on not only the principles of drama, but also on advances in AI.

Large corporations have bought up smaller game companies. The tried and true gaming genres have become dogma, even though the interactive entertainment industry is just a child compared to TV, which is a child compared to film, which is a child compared to thousands of years of dramatic evolution in theatre and storytelling. But large corporations are good at making money and loath to take a risk. This puts the brakes on the evolution of interactive entertainment - for now. It will take a brave company to break new ground.

For Further Study

Façade

I recommend reading the paper, "Mid-project technical report, December 2002: Architecture, Authorial Idioms and Early Observations of the Interactive Drama Façade". The creators of Façade, a self-described "experiment in electronic narrative", have built (and continue to refine) a prototype that is very much like what I suggest in this article. Here is my response to Façade.

The idea of using a programming language that allows for parallel actions is first rate. Organizing in terms of dramatic "beats" and "beat goals" is a very good idea. Though they are concentrating on a one-act play simulation, their "beat sequencer" could be extended to manage the narrative and dramatic elements for a multiple-act interactive play.

Façade, by Michael Mateas and Andrew Stern, is "an attempt to move beyond traditional branching or hyper-linked narrative." You are invited to a couple's house for dinner and witness a fight that ends their marriage. By replaying the game, you see if your actions can save their relationship.

Having read through the Façade paper quickly, I'm obviously not an expert on their system, and it's possible that I've missed some of the nuance in their project. But based on what I understand, this is how I would proceed with the design of an ideal project. I would follow the path of Michael Mateas and Andrew Stern, but with some differences.

Missing in Façade is the idea of the characters coming to the stage with actionable "attitude" - the end result of a character's innate tendencies and past experiences, as described by a dramatist in a biography. Instead, I get the feeling that the characters come to the stage with neutral characterization, but are inclined to take certain actions, per the author's beat goals for each character. This is a subtle difference, but drama is about subtlety. Characters can be more complex and evocative if they bring the baggage of their backgrounds (as evidenced through their attitudes, which are evidenced through their habitual facial expressions, mannerisms, and body posture) to the narrative actions they perform - especially if they are required to perform actions that are in conflict with their character baggage.

Also missing is an authoring interface, which would allow a dramatist who is not a programmer to create dramatic works with the drama world engine they've created.

It appears that the reason the production of Façade has been so work-intensive and time-consuming is that just two people are doing most of the work, which would be unreasonable in the production of a commercial project. Also unrealistic for a commercial project is the need to rely upon renaissance people who are multiple-language programmers, artists, storytellers and dramatists. I think that it's important to create a utilitarian drama world engine instead of worrying about narrative content at the beginning. In short, I think that it's a mistake to create a drama world engine around a specific narrative, and around specific people.

Though the upfront work of creating the kind of machine Mateas and Stern have created, along with an interface authoring shell, would be an extensive commitment, once a "sim-drama-stage-with-authoring-interface" is ready, many dramas could be produced with relative speed, when compared to Façade, by someone who is simply a dramatist trained to use the interface.

I see a paradigm where any good writer could provide narrative content - with perhaps the direction and encouragement of a dramatist experienced in using our drama world engine. This dramatist would understand the need for starting with the pieces of a deconstructed narrative that will come together in various ways in real time within the drama simulation world.

Perhaps such a drama-engine-trained dramatist would invent a new way of working with writers. Animation writers are used to having someone else create a world and certain key characters. They are accustomed to pitching story ideas. When a story idea is accepted, the writer is given a green light to create an outline. When the outline is accepted, the writer is given the green light to write the script. And then the script usually goes through a polish. In some similar way, a writer could pitch a story/story-world idea; when it is accepted, the writer could pitch the biographies of key characters. In other words, all of the key elements of a story/drama would be written, but the story itself would not. Instead, the narrative elements would be pumped into the drama engine and the story would happen in real time.

This way the company only needs one extra development person, in addition to artists and programmers - a trained drama world director who could adapt anyone's writing. The company does not have to find talented storytellers who are also interactive authors - a rare commodity. Nor would the company have to find cutting-edge, multi-language programmers who are also dramatists and storytellers. Instead the company, once it has committed to the creation of an easy-to-use drama world engine, could hire and train a few drama sim world directors, and get to work creating many drama sims in the same kind of production-line way in which television and films are produced.

In the paradigm I see, the very same drama world engine used for authoring is also the drama world engine used by the public. Simply, it is stripped of the authoring interface and replaced with the end user interface.

I understand the need for proof-of-concept demos. But a proof-of-concept demo does not have to be even a one-act dramatic simulation. Instead, it could be a demonstration of the ability of a non-programmer user to direct synthespians, input dramatic beats and beat goals, and orchestrate the seamless movement from dramatic moment to dramatic moment, from scene to scene, and from act to act. Wouldn't it be impressive to let someone else (an executive) decide upon the narrative content of a demo, and then within a couple of weeks be able to offer a real-time interactive drama based on the suggested narrative situation?

Let's say that we want to create a "sim-scene" demo after our drama engine and authoring interface are built. Here's the situation: the player engages a very harried synthespian to ask for directions. Naturally there are a lot of ways to fake this scene, but let's talk about how this could happen as a simulation of a real dramatic interaction.

Say the synthespian has a dramatist-induced need to get off stage, and thus a goal to walk to part of the set, open a door, go through it, and thereby leave the stage. But the synthespian has a dramatist-induced mandate to stop and talk with whoever engages her in a kind way. This is a basic moment of dramatic conflict. How does it work in terms of design?

As in Façade, the synthespian has a base of possible beats, each with beat goals, and the experiencer can interrupt these beats. When the primary beat is interrupted before its goal is reached, a secondary beat comes into play, and so on. The synthespian is also a learning chatterbot. It has the ability to "understand" plain language. It can respond to plain language with a database of possible things to say, as influenced by the current beat it's working from and the real-time actions of the player. The database of possible responses is relational and divided into positive, neutral, and negative responses. But there is also a timer, which impacts "emotional sliders". Emotional inputs that can be "tuned" in real time are something that I don't think Façade has. The emotional inputs would be needs, wants, and biases. For instance, the longer the synthespian is detained, the more the "need slider" goes up. The more the "need slider" goes up, the more the "bias slider" (towards the experiencer) goes down, towards negative. The more the synthespian's bias towards the experiencer becomes negative, the more the apparent personality of the synthespian (as shown through actions such as facial expression and body language) becomes negative. At the same time, the door the synthespian must go through is mapped so that the synthespian recognizes this environmental element as one that will resolve the need to leave the stage, a la the "emotional topography mapping" in The Sims. Façade doesn't seem to have this kind of emotional topography mapping.

As the synthespian's "emotional sliders" go up and down, it chooses from the appropriate group of possible things to say. In the positive part of the database it can go through a give-and-take conversation about directions in a pleasant, evocative way. This would be the response to an experiencer input such as, "Pardon me." In the neutral part of the database would be responses to experiencer input such as, "Wait - I need directions!" On the negative side of the database the synthespian can go through a give-and-take conversation about being very busy and really having to leave now. At the far end of the negative database the synthespian can decide to tell the experiencer off, break away from the conversation, go to the door, and leave the stage. This would be a response to something like, "Hey, you! Stop! I want directions."
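
The coupled timer and sliders of this scene can be sketched as follows. The coupling constants and response thresholds are invented; the point is that the timer drives the need slider, the need slider drags the bias slider toward negative, and the bias selects among the positive, neutral, and negative response libraries.

```python
# The harried synthespian's coupled sliders as a sketch. The coupling
# constants and response thresholds are invented for illustration.

def respond(detained_seconds, experiencer_tone):
    # Timer drives the need slider up; need drags bias toward negative.
    need = min(1.0, detained_seconds / 60.0)          # need to leave the stage
    bias = experiencer_tone - need                    # tone in [-1, 1]
    if bias > 0.4:
        return "pleasant, helpful directions (positive library)"
    if bias > -0.1:
        return "hurried but civil reply (neutral library)"
    return "tells the experiencer off, heads for the door (negative library)"

for t, tone in [(5, 0.8), (30, 0.8), (110, 0.8), (20, -0.6)]:
    print(f"{t:>3}s detained, tone {tone:+.1f}: {respond(t, tone)}")
```

Note the two failure paths the scene calls for: a rude opening ("Hey, you! Stop!") goes negative immediately, while even a perfectly pleasant experiencer eventually exhausts her patience if the detainment runs long enough.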

Of course this is a simplified example and the possible interactions would be more complex. For instance, the experiencer could start off on the positive side of the synthespian, but then end up pissing her off after a substantial interaction and emotional roller coaster. Perhaps as the conversation gets longer and longer she has more and more of a propensity to look at the door leading off stage and back away in that direction, etc. Perhaps just delaying her long enough, no matter how pleasant the experiencer is, will cause the synthespian to become more and more frantic (as evidenced by her facial mannerisms, gestures, body language and other physical actions). In terms of voice, the tone, strength of consonants, and rhythm could change as informed by emotional sliders. So, instead of immediate and global actions for the synthespians, as in Façade, I would concentrate on pre-defined emotional baggage, as expressed through habitual actions, which compete for expression with actions appropriate for the real time interaction taking place. Also competing for expression are actions informed by the author's mandated need to get off the stage. This mandate is the outcome of an authored need, which responds to a set mapped with emotional topography, as in The Sims. The need is to leave the stage. The door is mapped with potential to resolve this need. Finally, there would be a timer, and emotional sliders. The emotional sliders, timer and emotional topography could spawn beats. As in Façade, a drama manager would choose the beat actions with the highest priority on a moment-by-moment basis.

I would follow the path of Façade, but I would ask that the drama engine be created with utility in mind, rather than around a specific narrative. I would add needs, wants, and biases to our synthespians, which can be modified as "emotional sliders" in real time in response to other synthespian actions, player actions, and environmental events. I would want the ability to give the synthespians "emotional baggage", as evidenced through habitual actions and mannerisms. I would map the set with an emotional topography, as in The Sims so that some of the authoring would be taken over by the machine. Finally, I would call for an authoring interface that a non-programmer dramatist could use to create dramas with the drama engine.

Like a stage director or a film director, a dramatist trained to use our drama engine through a friendly interface (and probably with the help of other specialists -- set builders, lighters, animators, etc.) would interpret anyone's written word, and make the thousands of decisions necessary to create an evocative synthetic drama -- on a predictable, financially viable timeline.

Crowd!

Crowd! is a behavioral animation system built on a real-time engine called Neuro!Machine, to be released in November by Vital Idea. With Crowd!, digital artists can integrate SynThespians into shots. These 3D performers follow rules for behavioral animation and can be directed to simulate any number of real-world applications, including battle sequences, filling stadiums, filling open spaces, and the dynamics of city streets.

Originally conceived in 2001 with the internal development name Doppleganger, Crowd! has been in development for more than two years. It began as a plug-in to extend the power of an existing package; however, it rapidly "took on a life of its own."

Neuro!Machine may be integrated into an existing realtime gaming engine or be accessed via Crowd!Serve servers for realtime simulation over a network. Crowd! is an artificial intelligence-based standalone animation system for film, broadcast, simulation and electronic gaming.

Features include:

  • A distributed neural interactive simulation engine. Using Crowd!Serve, multiple machines across a network can offload the artificial intelligence computation for virtual performers in a sequence.

  • Intuitive, extensible, artist-friendly interface: instead of requiring artists to be programmers, a node-based interface allows them to visually develop complex behaviors for each digital performer.

  • Frame-based animation system: trigger events can be keyframed within the system to ensure maximum control. Performers within the scene listen for a trigger (for example, "Open Door") and respond at the given frame.

  • Parallel state machines: multiple state machines may be linked to a performer, based on priority, to drive complex behaviors.

  • Non-linear animation: by animating, or motion capturing, a set of animation cycles, performers within the scene can use non-linear animation to generate a number of unique movements.

  • Scripting language: using the Crowd!Script Language (CSL), the system may be extended to support your specific needs.

The Mind-Boggling Future Of Virtual Reality
By Sarah Scott Saturday National Post - Canada
March 17, 2002

"In the next decades, we're going to have worlds and renditions of reality that will make our modern way of thinking of the world -- perspectival and cognitive and mathematical and certain -- seem almost medieval."

"It is perhaps inevitable that, at the end of this story, science reunites with fiction. The U.S. military has used simulators for a long time to teach trainees how to fly a plane or use other types of machinery. Now, the military is taking simulation to a whole new level. It's set up the Institute for Creative Technologies at the University of Southern California -- an assembly of people from film, games, computer science and the army -- to create a virtual training ground filled with virtual people. "The goal is to create a prototype of something like the Holodeck," says Dr. Bill Swartout, the institute's technical director. Some of the people at ICT even worked on Star Trek, he says. "The Holodeck is a source of inspiration. It suggests research paths we're going down."

http://www.nationalpost.com/artslife/story.html?f=/stories/20020316/352666.html

Source: Gamasutra 2.11.02

MIT Invents Videos Of People Saying Things They Never Said
By Gareth Cook
Boston Globe Staff
May 15, 2002

"CAMBRIDGE - Scientists at the Massachusetts Institute of Technology have created the first realistic videos of people saying things they never said - a scientific leap that raises unsettling questions about falsifying the moving image.

In one demonstration, the researchers taped a woman speaking into a camera, and then reprocessed the footage into a new video that showed her speaking entirely new sentences, and even mouthing words to a song in Japanese, a language she does not speak. The results were enough to fool viewers consistently, the researchers report.

The technique's inventors say it could be used in video games and movie special effects…"

Gareth Cook can be reached at [email protected].
This story ran on page A1 of the Boston Globe on 5/15/2002.
© Copyright 2002 Globe Newspaper
http://www.boston.com

Eyematic Awarded Four U.S. Patents
May 24, 2002

Eyematic, creator of the FaceStation software, has been awarded four U.S. patents covering its facial animation and visual sensing technology. The FaceStation software uses the technology (developed over the past 10 years) to automate the 3D facial animation process for game, feature film, TV, web and wireless content creation. Using a standard Windows PC and a video device such as a webcam or DV camcorder, FaceStation allows an actor to "drive" a fully textured 3D head in realtime using his or her own facial expressions and head movements.

Before the creation of FaceStation, realistic 3D facial animation was limited to a small number of high-end applications because of the expensive mocap hardware and laborious manual keyframe editing involved. "The visual sensing technology in FaceStation is a remarkable accomplishment. With it, FaceStation changes the rules and allows any 3D artist to create high quality facial animation in record time, regardless of budget or expertise," says Orang Dialameh, president and co-founder of Eyematic. For more info, check out www.eyematic.com.

"Robots You Can Relate To"
Source: WIRED Magazine

Vision:
Machines that interact with people the way people do.

Why:
Sociable robots could teach the young, care for the infirm -- even befriend the lonely.

Visionary:
Cynthia Breazeal, 34.

Day job:
Director of the robotic life group at MIT's Media Lab.

Breakthrough:
In 2000, Breazeal created Kismet, a robot head that displays a range of facial expressions in response to natural human visual and auditory cues. Her newest creature, Leonardo, maintains eye contact with its human companions and moves with surreal grace. Thanks to its touch-sensitive artificial skin, the furry, gremlin-like creature actually twitches when you tickle its ears and shyly pulls away if you try to hold its hand.

Who's paying attention:
Hollywood special effects company Stan Winston Studio is collaborating with Breazeal on Leonardo. Her corporate sponsors include IBM, Intel, Learning Lab Denmark, Lego, Mattel, Nokia, and Sony, all of which are interested in natural human communication interfaces.

Quote:
"Think of your most beloved robot character in science fiction. That's essentially what I'm trying to build."

 


About the Author(s)

Randy Littlejohn

Blogger

Though his education and professional background are in theatre, film and television production, Randy spent five years working for Sierra On-Line during the period leading up to the demise of its Oakhurst, California studio. He arrived through the back door of interactive entertainment as a video specialist helping to build the Sierra On-Line video production facility in Oakhurst, worked as the camera/lighting person on Phantasmagoria, and was then director of photography on Gabriel Knight II: The Beast Within. For the last three years of his time with Sierra On-Line he worked in game design. He has worked off-and-on in the interactive industry since then, for a total of about ten years. Most recently he was a dialogue-tree writer/designer for EA's MMOG Earth and Beyond. His background in drama has given him a unique perspective on interactive entertainment.
