
Read as the developers of Messiah wax nostalgic about the trials and tribulations of developing the game's engine. Don't expect to find any of Messiah's actual code here, but you will encounter some real-world problems, solutions, and nuggets of wisdom.

September 24, 1999

23 Min Read

By Michael 'Saxs' Persson

This article is not a how-to guide; it's a brain dump from the perspective of the engine programmer (me) of Shiny's upcoming title, Messiah. Usually Game Developer articles are littered with formulas, graphs, and code listings that serve to up the intellectual profile of the piece. However, I'm not a mathematician and I don't feel the need to state any information in the form of a graph; in this article I describe problems, solutions, and things I've learned in general terms, which allows me to cover a lot more ground.

My interest in character systems started more than four years ago, when I was working at Scavenger, a now-defunct development studio. I was assigned to develop a "next-generation" X-Men game for the Sega Saturn. Sega wanted motion-captured characters and chose to use pre-rendered sprites to represent them. I observed the planning of the motion-capture sessions, examined the raw mo-cap data that these sessions generated, saw it applied to high-resolution characters on SGIs, and then received the frames which I was to integrate into the game.

The results were disappointing. The motion-capture data, which could have driven characters at 60 frames per second (FPS), was reduced to little bursts of looping animation running at 12 to 15 FPS, and could only be seen from four angles at most. The characters were reduced to only 80 to 100 pixels high, and I still had problems fitting them in memory. The models we spent weeks creating came out as fuzzy, blurry sprites.

Around that time, two new modelers, Darran and Mike, were hired for my team (and the three of us still work together at Shiny). These two talented modelers wanted to create the best-looking characters possible, but we didn’t know how to justify the time spent on modeling super-sharp characters when the resulting sprites came out looking average at best.

Eventually, Sega Software stopped developing first-party games and X-Men was canned. Soon thereafter we were asked to develop our own game. That provided me with the incentive to figure out how to represent characters in a game better. We knew we wanted at least ten or more characters on the screen simultaneously, but all the low-resolution polygonal characters we had seen just didn’t cut it. So I decided to keep pursuing a solution based on what I had been working on for X-Men, hoping that I’d come up with something that would eventually yield better results.

At first I flirted with a voxel-like solution, and developed a character system which was shown at E3 in 1996 in a game called Terminus. This system allowed a player to see characters from any angle rotating around one axis, which solved a basic problem inherent to sprite-based systems. Still, you couldn't see the character from any angle, and while everybody liked the look of the "sprite from any angle" solution, many people wanted to get a closer look at the characters' faces. This caused the whole voxel idea to fall apart. Any attempt to zoom in on characters made the lack of detail in the voxel routine obvious to people, and the computation time shot up (just try to get a character close-up in Westwood's Blade Runner and you'll see what I mean). I tried a million different ways to fix the detail problem, but I was never satisfied. The other problem with a voxel-based engine was the absence of a real-time skeletal deformation system. Rotating every visible point on the surface of a character in relation to a bone beneath the surface was not a viable solution, so we had to pre-store frames and again, as in X-Men, cut down on the playback speed and resolution. At that point I was ready to try a different solution.

By the time my team and I were hired by Shiny, a little less than two and a half years ago, I had already prototyped a new character system in the period after leaving Scavenger. Shiny was really excited about it and I continued to develop the system for the game that would eventually become Messiah. Let's look at that system and examine the solutions I came up with.

System Goals

There were a number of goals for the new Messiah character animation system. The first was to put as few limitations as possible on our artists. Telling Darran to do his best in 600 polygons would surely kill his creativity. At the very least, it was an excuse to create only so-so characters. At that time for PC games, the polygon count for real-time animated characters was around 400 each, and Tomb Raider topped the scale at about 600 to 800 polygons per character. My fear was that this number was going to change significantly during the time it would take to develop Messiah, and apparently I was right in that assumption.

Another problem I wanted to solve was the need for our artists to create a low-resolution version of a character for the game, a higher-resolution version for in-game cutscenes, and a high-resolution version for the pre-game cinematics and game advertisements. Why, I thought, should we have to do all this extra work? I wanted the artists to have no excuse for creating mediocre models, and I wanted to eliminate their duplicate work.

Whatever system I created had to be console-friendly. My two targets at that point were the Sony Playstation and Sega Saturn. Both had a limited amount of memory — in Sony’s case, only 1K of fast RAM. So it was important that the system could perform iterative steps to generate, transform, and draw the model.

Finally, I was convinced that curved surfaces would rule supreme in a few years' time, so I wanted to make sure my system had that aspect covered.

I liked the visual results of my limited-resolution voxel model, and decided to make that my quality reference. The no-limit-on-the-artist modeling method seemed to work, so we stuck with that. This system supported automatic internal polygon removal, and using it Darran was able to dress the characters like Barbie dolls: he could stick buttons on top of the clothing, or model an eyeball and move it around without having to attach it to the eyelid. Clothing looked great, since all wrinkles were created using displacement mapping. Clothing shadows and light variations looked just right. In fact, most characters in Messiah now average 300,000 to 500,000 polygons to make the most of the system.

After a bit of back and forth, I decided to develop a system that fits patch-meshes as closely as possible to the body, and then generates the texture by projecting the original model onto the patch-mesh. To accomplish that, the following steps were necessary:

 

  1. Slice and render a volume representation of the model with all of the internal geometry removed.

  2. Connect the volume pixels into strings of data so it’s apparent what is connected to what.

  3. Apply bone influences.

  4. Separate the body into suitable pieces, so a patch surface can be fitted around it.

  5. Unify the body parts.

  6. Generate a special mesh that goes between the separate patch pieces.

  7. Prioritize the unified points.

Of course, somewhere in this process, I also had to figure out how to get the skeleton attached to the model.

 

Step 1. Rendering the volume model, removing internal geometry.

The first step is to cut up a model into a predefined number of horizontal slices. This determines the resolution of the rendering. A reasonable number is 400 slices for models we’re testing, and 1,000 to 1,500 for final models.

The dimensions of each slice’s shell (outer surface) must be determined. I experimented with several methods, but in the end the simplest one was the one that worked best. (Usually, if a solution is too complicated, it probably wasn’t the best solution anyway.) I "x-rayed" a character from four different angles (side, front, quarter-left, and quarter-right), noting the impact time for each ray, giving first and last hits priority since we know they belong to the outer shell, and storing the whole thing in four different databases, each corresponding to the x-ray orientation. Each database was cleaned for loose points (points having no immediate neighbor). Most internal points are removed by looking at the ray data. A combination of normal sign logic and proximity removes the inner layers, such as skin under clothing, and other points just under the surface.
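For illustration only, here is a minimal C++ sketch of the first-hit/last-hit rule described above; the structures and names are invented for this article, not taken from the Messiah code.

#include <vector>
#include <cstddef>

struct Hit   { float t; float nx, ny, nz; };   // ray parameter plus surface normal
struct Shell { std::vector<Hit> confirmed;     // first/last hits (outer shell)
               std::vector<Hit> candidates; }; // interior hits, filtered later

void recordRay(const std::vector<Hit>& hits, Shell& shell)
{
    if (hits.empty()) return;

    // First and last impacts are guaranteed outer-shell points.
    shell.confirmed.push_back(hits.front());
    shell.confirmed.push_back(hits.back());

    // Everything in between may be internal geometry (such as skin under
    // clothing); keep it only as a candidate for the later pass that uses
    // normal-sign logic and proximity to remove the inner layers.
    for (std::size_t i = 1; i + 1 < hits.size(); ++i)
        shell.candidates.push_back(hits[i]);
}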

Step 2. Connect the databases.

The next step is to connect the four databases into strings of data for each slice. This makes the final maps look a lot better, since I can safely interpolate between points if I know they are connected. The routine goes through the four databases recursively, trying to find neighboring points one at a time. It finds neighboring points by considering criteria such as surface continuity (normal direction and continuous curvature), proximity to other points, UV closeness to other points, whether points lie on the same face, and so on. After this, we have neatly connected strings of data. A picture of the raw data can be seen in Figure 1.


Figure 1: This is how the data looks after volume rendering is completed.
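To make the neighbor-finding criteria of Step 2 concrete, here is a hypothetical scoring function in C++; the weights are made up for illustration, and the real routine uses its own unpublished heuristics.

#include <cmath>

struct SurfacePoint {
    float x, y, z;       // position on the slice
    float nx, ny, nz;    // surface normal
    float u, v;          // texture coordinates from the source model
    int   faceId;        // which source face the point was sampled from
};

// Lower cost means a better neighbor candidate.
float neighborCost(const SurfacePoint& a, const SurfacePoint& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float dist      = std::sqrt(dx*dx + dy*dy + dz*dz);
    float normalDot = a.nx*b.nx + a.ny*b.ny + a.nz*b.nz;   // 1 = same direction
    float du = a.u - b.u, dv = a.v - b.v;
    float uvDist    = std::sqrt(du*du + dv*dv);
    float sameFace  = (a.faceId == b.faceId) ? 0.0f : 1.0f;

    return dist * 1.0f                // prefer close points
         + (1.0f - normalDot) * 0.5f  // prefer continuous curvature
         + uvDist * 0.25f             // prefer nearby texels
         + sameFace * 0.1f;           // small bonus for points on one face
}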

 

Step 3. Apply bone influences.

When we first began trying to apply an animated skeleton to the model, we didn’t feel that any of the commercial packages would work for us. Neither Bones Pro nor Physique provided enough control. So Darran and I came up with the concept of painting bone influences directly onto the model (see Figure 2). The method is similar to using an airbrush: you set the pressure, method of application (average, overwrite, smooth), and start "painting" the bone influences. It’s done in real time, so you can play an animation, stop at a frame when you see something that needs correcting, and just paint on the new influences. Using this method, we got very good deformation data from the start. Since the deformation is applied directly to the high-resolution model, you can regenerate your game model in any resolution without having to recreate all the influences. You can also incorporate influences from another model if its structure is similar, and just clean up the influences later.


Figure 2: Here is an example of how the bone influence is done on the upper torso.
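A rough sketch of the airbrush idea from Step 3 might look like the following; the falloff curve, the single-influence-slot assumption, and the blend factors are all guesses for illustration, not the actual tool code.

#include <vector>
#include <cmath>
#include <algorithm>

enum class BrushMode { Overwrite, Average, Smooth };

struct Vertex {
    float x, y, z;
    float weight[3];   // up to three bone influences per point, as in the article
    int   bone[3];
};

// Blend a target bone's weight into every vertex inside the brush radius,
// with falloff toward the edge.
void paintInfluence(std::vector<Vertex>& verts,
                    float cx, float cy, float cz,       // brush center
                    float radius, float pressure,
                    int targetBone, BrushMode mode)
{
    for (Vertex& v : verts) {
        float dx = v.x - cx, dy = v.y - cy, dz = v.z - cz;
        float d = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (d > radius) continue;

        float falloff = 1.0f - d / radius;   // soft brush edge
        float amount  = pressure * falloff;

        // Assume slot 0 holds the bone being painted; a real tool would search
        // the three slots and renormalize the others.
        float& w  = v.weight[0];
        v.bone[0] = targetBone;

        switch (mode) {
        case BrushMode::Overwrite: w = amount;                break;
        case BrushMode::Average:   w = 0.5f * (w + amount);   break;
        case BrushMode::Smooth:    w += (amount - w) * 0.25f; break;
        }
        w = std::clamp(w, 0.0f, 1.0f);
    }
}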

 

Step 4. Separate the body into suitable pieces.

Before the model can be converted into the native game format, it is necessary to define all body parts. This is done using cutplanes. These planes cleanly separate the body into various parts (arms, legs, torso, and so on), and at the same time they describe the common spaces where patch meshes must be generated to cover holes between body parts that emerge during tessellation (more about that shortly).

Attached to the cutplanes is the definition of projection paths. These define the projection axis from which the patch mesh (think of the patch mesh as a tapered cylinder) is generated. The number of horizontal and vertical segments is defined for each body part, so you can change the output resolution of the mesh. Figure 3 shows what projection paths look like.


Figure 3: The flexible projection paths and their editing.
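Here is one possible data layout for the cutplanes and projection paths just described; the field names are assumptions, since the actual file format isn't shown in this article.

#include <vector>

struct Vec3 { float x, y, z; };

struct CutPlane {
    Vec3 point;    // a point on the plane
    Vec3 normal;   // plane orientation; everything on the positive side belongs to the child part
};

struct ProjectionPath {
    std::vector<Vec3> keys;   // control positions running down the body part
    int verticalSegments;     // slices along the path (output mesh resolution)
    int radialSegments;       // points around the tapered cylinder
};

struct BodyPart {
    const char*    name;      // "left arm", "torso", ...
    CutPlane       cut;       // separates this part from its parent
    ProjectionPath path;      // axis the patch mesh is projected from
};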

 

Step 5. Unifying the body parts.

At this point, the model is ready for unification. This is the stage in which the mesh is fitted around the source material at the appropriate resolution specified for each body part. It's as if you shrink-wrap cylinders around each body part. The strings of raw data are then projected onto the final cylinder and extracted into a texture map that is saved separately, and UV coordinates are stored for each mesh point. I don't call these points "anchor points," because we just use them as corner references for triangles, not for curved-surface calculations. Until now, it hasn't been feasible to do any spline interpolation of the points, since hardware performance still can't handle the resolution we save the game models in (about 10,000 to 16,000 points per model at full resolution), but the resolution is fine enough that we can snap to points when we tessellate down the model. It makes the run-time version a lot faster.
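A simplified shrink-wrap pass for one body part might look like the sketch below, assuming the projection axis has already been straightened onto +Y. It keeps the outermost raw point per (ring, sector) cell of the cylinder and records its UV; this conveys the general idea rather than the actual implementation.

#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3      { float x, y, z; };
struct MeshPoint { Vec3 pos; float u, v; bool valid; };

void shrinkWrap(const std::vector<Vec3>& raw,
                float yMin, float yMax,
                int rings, int sectors,
                std::vector<MeshPoint>& out)   // rings * sectors entries
{
    out.assign(rings * sectors, MeshPoint{{0, 0, 0}, 0, 0, false});
    std::vector<float> bestRadius(rings * sectors, -1.0f);

    for (const Vec3& p : raw) {
        float t = (p.y - yMin) / (yMax - yMin);              // 0..1 along the axis
        int ring = std::min(rings - 1, std::max(0, int(t * rings)));

        float angle = std::atan2(p.z, p.x);                  // -pi..pi around the axis
        float s = (angle + 3.14159265f) / (2.0f * 3.14159265f);
        int sector = std::min(sectors - 1, int(s * sectors));

        float radius = std::sqrt(p.x * p.x + p.z * p.z);
        int cell = ring * sectors + sector;
        if (radius > bestRadius[cell]) {                      // outermost point wins
            bestRadius[cell] = radius;
            out[cell] = { p, s, t, true };                    // u around, v along
        }
    }
}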

Step 6. Patching holes in the mesh.

Patching holes in meshes was an aspect of the character animation process that proved to be very difficult to get right. We wanted a way to generate a mesh that would perfectly stitch up any hole that might be created when two cylinders of different resolution were joined together. For instance, to attach an arm to a torso, you cut away a round shape on the torso that fits the diameter of the cylinder representing the arm. A hole might appear at any point around the cylinder if connecting cylinders didn’t match up perfectly.

The problem is especially acute when cylinders of different resolutions are connected; this generates a sharp break where they are joined. I changed the system so that the last slice of cylinder being attached (for instance, an arm getting attached to a torso) wasn’t rendered prior to being connected. Instead, it was drawn directly onto the master cylinder, creating much better arm-to-torso transitions, and especially good leg-to-torso transitions.

Getting the texture mapping correct was difficult. Since the system uses intra-page mapping (to support the Playstation and to use video memory more efficiently), wrapping is not supported. And because a character's individual body parts are basically just tapered cylinders, it was never necessary to have wrapping. However, during the process of unifying the various body parts into a single character, some method of handling wrapping had to be devised. To solve the problem of aligning textures across body parts, I generated a point on the master body part, typically the torso, corresponding to a UV coordinate of 0 on the body part being attached, allowing textures on different parts to match up correctly. That solved the texture wrapping problem.
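As a small illustration of that seam-alignment trick, the sketch below turns the anchor point's direction into an angular offset that can be added to every u on the attached part; how the anchor point is actually chosen is not described in the article, so this is only an assumption.

#include <cmath>

// Angle of the master part's reference point around the child's projection
// axis, remapped to 0..1 so it can be used directly as a u offset.
float seamOffset(float anchorX, float anchorZ)
{
    float angle = std::atan2(anchorZ, anchorX);
    return (angle + 3.14159265f) / (2.0f * 3.14159265f);
}

// When sampling the child cylinder, every u would then be shifted so the two
// textures meet without a visible wrap seam, e.g.:
//     uAligned = std::fmod(u + seamOffset(ax, az), 1.0f);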

Currently, I’m working on a routine that sorts out the drawing sequence of the patch mesh, since a bumpy master body part can screw up the projection and cause the drawing sequence to generate some incorrect points, thereby creating holes in the mesh. That can become a big mess.

Step 7. Prioritizing points for tessellation.

The final step is to prioritize the unified points. For instance, you want to make sure that the tessellator doesn't collapse the tip of a character's nose in favor of some less important surface point. As such, you can weight that point so it won't disappear until the game turns off the priority routine when the model is displayed at low resolution — in which case the nose doesn't matter anymore. Similarly, you can prioritize an individual slice of the model if it's important to the integrity of the model. That way you can make sure that the bend slice around the elbow is always there so the bend is kept clean. This process is vital for a stable-looking model — as a model drops polygons rapidly, there is a chance it will remove vital parts. Prioritizing slices and points goes a long way towards solving that problem. You might assume that prioritizing points all over the body would add a lot of polygons to your character, but in reality the tessellator just works around those points. In a wireframe view, you can see it just dropping more points in adjacent areas.
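One way to picture the priority pass is the following sketch, in which an artist-assigned weight scales the cost of removing a point; the cost formula and field names are illustrative, not Messiah's actual tessellator.

#include <vector>
#include <limits>
#include <cstddef>

struct UnifiedPoint {
    float geometricError;   // error introduced if this point is removed
    float priority;         // 1.0 = normal, larger = keep longer (nose tip, elbow ring)
    int   slice;            // ring index, so whole slices can be protected too
};

int pickPointToRemove(const std::vector<UnifiedPoint>& pts,
                      const std::vector<float>& slicePriority,
                      bool usePriorities)   // the game turns this off at low resolutions
{
    int best = -1;
    float bestCost = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < pts.size(); ++i) {
        float cost = pts[i].geometricError;
        if (usePriorities)
            cost *= pts[i].priority * slicePriority[pts[i].slice];
        if (cost < bestCost) { bestCost = cost; best = int(i); }
    }
    return best;   // remove the cheapest point; the mesh "works around" weighted ones
}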

Working With the Final Model

The final model is saved with a separate map file for each body part, so you can easily load it into Photoshop to fix problems without having to do so on the model itself. When the run-time version of the character system loads a model for the first time, it reads in your preferences for the model’s appearance, scales the maps to fit your restrictions, and quantizes the maps if you want indexed color. A compressed file of the model is saved at this point, so the system doesn’t have to go through this routine every time the model is loaded.
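The first-load flow might be sketched like this, with every helper stubbed out as a hypothetical placeholder; only the overall sequence (try the cached file, otherwise preprocess and save it) comes from the description above.

#include <string>

struct ModelPrefs { int maxMapSize; bool indexedColor; };
struct Model      { /* maps, mesh, skeleton ... */ };

static bool loadCached(const std::string&, Model&)        { return false; } // stub
static bool loadSource(const std::string&, Model&)        { return true;  } // stub
static void scaleMapsToBudget(Model&, int /*maxMapSize*/) {}                // stub
static void quantizeToIndexed(Model&)                     {}                // stub
static void saveCached(const std::string&, const Model&)  {}                // stub

bool loadCharacter(const std::string& name, const ModelPrefs& prefs, Model& m)
{
    const std::string cache = name + ".cache";
    if (loadCached(cache, m))                 // fast path: preprocessed file exists
        return true;

    if (!loadSource(name + ".model", m))      // otherwise read the full source model
        return false;

    scaleMapsToBudget(m, prefs.maxMapSize);   // scale maps to fit the preferences
    if (prefs.indexedColor)
        quantizeToIndexed(m);                 // quantize if indexed color is wanted

    saveCached(cache, m);                     // so this work happens only once
    return true;
}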

Figure 4 shows the model in different resolutions. This shot was taken before the patch mesh was finalized, so there are some discontinuities in the hip area, but they are gone in newer versions of the tool.

System Pros and Cons

The process of creating a final model for Messiah is quite involved. It gets easier with each revision of the tools, but it still takes a bit of thought.

On the upside, we feel confident that the models we’re creating are "future proof" (yeah, yeah — I know, nothing really is, but for the sake of argument, let’s just let that comment pass). When somebody gets the bright idea of upping the average number of polygons per character from 800 to 2000, we won’t have to pull out our hair.

The tessellator is generally a lot better than other solutions at finding the right points on a model to eliminate. It doesn't create holes between body parts, because the patch mesh stitches those seams closed.

Having separate body parts makes cleanly amputating a limb easy, and the model can still be tessellated away even after a limb is severed.


Figure 4: A composite of the different stages of the tessellation.

Multi-resolution mesh (MRM) technology, as described by Hugues Hoppe (a researcher in the Computer Graphics Group of Microsoft Research), has won a lot of acceptance for its ease of use. It is a good solution for static objects, but if you have highly complex objects, modeling them with a limited polygon count and mapping restrictions is not so easy. Using our system, we can map each button on a shirt separately and the program determines the final map without screwing up the tessellation's effectiveness. Another drawback to MRM is that as soon as you start animating MRM objects, you start seeing artifacts. These visual artifacts are generated by the way MRM-generated LODs bend, which in turn is due to the "spider web" appearance of the mesh around a collapsed vertex. The method by which MRM determines which vertex to collapse is based on the base frame of the model, so an MRM-based system wouldn't notice that the arm is bent in the animation and might remove vertices that adversely affect the integrity of the model in that particular pose.

For static objects, MRM is preferred over the method I devised for Messiah, since it only changes one vertex at a time, so the mesh appears more stable. In fact, I made my own demo version of MRM for the Game Developers Conference earlier this year. Our team ended up using the routine for a few high-resolution objects in Messiah, and the technique worked nicely. On our system, we found that the tessellation artifacts get masked somewhat due to the fact that everything is moving and stretching with the animation.

Another important thing to note about our system is that the model can be processed in sections. You only need the rotation data from two slices at a time in order to render a model and make it fit easily in the cache. Even next-generation consoles have memory restrictions, so it’s vital to control the temporary memory footprint that calculations require. Because our system generates strips and fans, the rendering speed of our system sees big improvements on any hardware.
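To show why only two slices are needed at a time, here is a sketch of emitting one triangle strip between adjacent rings, with only two ring buffers ever alive at once; transformSlice() is a stubbed placeholder for the bone-deformation step, and the index layout is illustrative.

#include <vector>
#include <cstdint>
#include <cstddef>

struct Vec3 { float x, y, z; };

static void transformSlice(int ring, std::vector<Vec3>& out)
{
    // Stub: the real version would apply the bone rotations for this slice.
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = { float(i), float(ring), 0.0f };
}

void drawCharacter(int ringCount, int pointsPerRing)
{
    std::vector<Vec3> lower(pointsPerRing), upper(pointsPerRing);
    transformSlice(0, lower);

    for (int ring = 0; ring + 1 < ringCount; ++ring) {
        transformSlice(ring + 1, upper);

        // Emit one strip zig-zagging between the two rings, wrapping
        // around by repeating point 0 at the end.
        std::vector<uint16_t> strip;
        for (int p = 0; p <= pointsPerRing; ++p) {
            int wrapped = p % pointsPerRing;
            strip.push_back(uint16_t(pointsPerRing + wrapped)); // upper ring
            strip.push_back(uint16_t(wrapped));                 // lower ring
        }
        // drawStrip(lower, upper, strip);   // renderer call omitted

        lower.swap(upper);   // slice i+1 becomes the lower ring of the next pair
    }
}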

Disclaimer

This article is not meant as an advertisement for Messiah or the character system I’ve written; I merely want to state an alternative to the conventional approach to generating characters. The process is being revised continuously. It’s an on-going task to improve the process of converting models from 3D Studio Max to a game-optimized format. For now, Darran is keeping us busy with neat suggestions on how to make his life easier (and in doing so, we’re swallowing our weekends to make the modifications).

Saxs is busy working on Messiah, so if you have any questions you don't want answered, try e-mailing him at [email protected].

Messiah's Character Animation Tools (Or, Why I'm Losing My Hair)

As the tools programmer for Messiah, one of the challenges I faced as I built our in-house development tools was deciding how to handle the vast amounts of raw data — a model with 400 rings contained anywhere from 50,000 to 250,000 points (see Figure 1). Each of these points is weighted with up to three bones, which further slows the drawing process. Some design ideas were "borrowed" from 3D Studio Max, which uses a frame-rate threshold that drops the application from shaded mode into wireframe mode to increase drawing performance. So in the tool, if the user is rotating, panning, or zooming in on the model and the frame rate is too low, the program starts to skip points and slices to improve the display speed. And as soon as an operation is finished, the model is redrawn at the original resolution (see Figures 5 and 6).
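A minimal version of that frame-rate-driven detail reduction could look like the following; the threshold and the power-of-two skip factor are assumptions, not the tool's actual values.

#include <algorithm>

struct ViewState {
    int   skip = 1;          // draw every Nth point/slice
    float targetMs = 66.0f;  // assumed ~15 fps interaction threshold
};

void updateDetail(ViewState& vs, float lastDrawMs, bool userIsDragging)
{
    if (!userIsDragging) { vs.skip = 1; return; }    // operation done: full resolution again
    if (lastDrawMs > vs.targetMs)
        vs.skip = std::min(vs.skip * 2, 64);         // halve the drawn points
    else if (lastDrawMs < vs.targetMs * 0.5f)
        vs.skip = std::max(vs.skip / 2, 1);          // detail can come back
}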

Unfortunately, this method of boosting performance could not be used to paint bone influences onto the points (see Figure 2) — each point must be visible in order for the user to see the effect that bones have on points as the influences are altered. And as Saxs stated in his article, just drawing a character model was slow (especially when we increased the number of slices to 1,000 and we got a couple million points). This forced us to rely on regional updates when setting the influences within the model. Each time the model is redrawn, the 2D location and information about each point is stored in a huge buffer. So when the user actually "paints" influence with the brush, we perform a 2D region test (I actually use standard Windows functions for this, which is slow), recalculate the position (as the influence just changed), and redraw the data within this region. The results are satisfactory, and it lets users change influences within the model in real time (using a reasonable number of points).
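The regional-update idea can be sketched as a simple cached-screen-position lookup, shown below with a plain rectangle test standing in for the Windows region functions mentioned above.

#include <vector>

struct ScreenPoint { int x, y; int pointIndex; };   // filled in on every full redraw

void pointsInBrushRect(const std::vector<ScreenPoint>& cache,
                       int left, int top, int right, int bottom,
                       std::vector<int>& hit)
{
    hit.clear();
    for (const ScreenPoint& sp : cache)
        if (sp.x >= left && sp.x <= right && sp.y >= top && sp.y <= bottom)
            hit.push_back(sp.pointIndex);   // only these get re-weighted and redrawn
}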


Figure 5: A rotation is in process so detail is reduced on the model.

The first generation of the character tools had statically defined linear projection paths, which meant that if a model had body parts that were arced, curved, or bent, the generated map would be skewed when unifying. In other words, it was not a visual process at all. When it was time to upgrade the tools, the modelers had come up with some not-so-humanoid characters, which meant linear projections were bad. So we added visual editing features and the ability to save spline projection paths. A spline projection path is a type of positional keyframe system: you create position keys, move the positions around, change tension, continuity and bias, and give each position/key a time value. This time value gives resolution to the spline, which in turn defines the resolution within the cylinder. This feature lets us change the resolution of the body part/mesh as we traverse down an arm, for example, and we generate higher resolution around the areas of a body part that need to bend, as we did in the case of the shoulder area shown in Figure 3.
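The "tension, continuity and bias" keys described here correspond to the classic Kochanek-Bartels spline, so a generic textbook evaluation like the one below conveys the idea; it is not the tool's own implementation.

#include <vector>
#include <cstddef>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

struct TCBKey { Vec3 p; float tension, continuity, bias; };

// Evaluate the segment between keys[i] and keys[i+1] at s in [0,1].
// Requires i + 1 < k.size(); the end keys are clamped.
Vec3 evalTCB(const std::vector<TCBKey>& k, std::size_t i, float s)
{
    const Vec3& p0 = k[i > 0 ? i - 1 : i].p;
    const Vec3& p1 = k[i].p;
    const Vec3& p2 = k[i + 1].p;
    const Vec3& p3 = k[i + 2 < k.size() ? i + 2 : i + 1].p;

    const TCBKey& a = k[i];
    const TCBKey& b = k[i + 1];

    // Outgoing tangent at p1 and incoming tangent at p2 (standard KB formulas).
    Vec3 d1 = (p1 - p0) * ((1 - a.tension) * (1 + a.bias) * (1 + a.continuity) * 0.5f)
            + (p2 - p1) * ((1 - a.tension) * (1 - a.bias) * (1 - a.continuity) * 0.5f);
    Vec3 d2 = (p2 - p1) * ((1 - b.tension) * (1 + b.bias) * (1 - b.continuity) * 0.5f)
            + (p3 - p2) * ((1 - b.tension) * (1 - b.bias) * (1 + b.continuity) * 0.5f);

    // Cubic Hermite basis.
    float s2 = s * s, s3 = s2 * s;
    float h1 =  2 * s3 - 3 * s2 + 1;
    float h2 = -2 * s3 + 3 * s2;
    float h3 =      s3 - 2 * s2 + s;
    float h4 =      s3 -     s2;

    return p1 * h1 + p2 * h2 + d1 * h3 + d2 * h4;
}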

A Scalable Engine Requires Scalable Tools

The scalability of the Messiah engine means more game data for developers to work with, which in turn means that our game development tools must be flexible enough to manipulate all that data. When we first started on the game, there was a limit of 400 slices per character. However, when that limit was raised, exporting the models from 3D Studio to raw rendering data became very slow. We had trouble fitting everything into memory, and machines often had to fall back and use swap drives to compensate (which is slow). When the rendering process ran out of swap space, we really had a problem. That’s when we heard that Interplay had a network-rendering farm consisting of ten DEC Alpha workstations. We headed over to Interplay’s offices to get some information on the setup, and afterwards we created a distributed computing version of our raw-data renderer. Using the new distributed renderer, 1,000-slice models that used to be rendered overnight could be rendered in one and a half hours.


Figure 6: When the rotation is complete, original detail is restored.

The communication used for our distributed rendering system is very simple. The server saves a command file, containing commands for rendering the slices from the different viewpoints, to a shared directory; each client opens this file exclusively and looks for a command that hasn't been taken by any other client. The server checks this file at certain intervals to see whether all the commands have been rendered. If it finds that all commands have been issued but some have not yet completed, it assumes that a machine is either very slow or has crashed, and it reissues the command to another idle client.
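Schematically, the claim-and-reissue protocol might look like the sketch below, with the shared file represented as an already-parsed command list and the exclusive-open locking only noted in comments; the structure and names are illustrative.

#include <string>
#include <vector>
#include <cstddef>

enum class Status { Pending, Taken, Done };

struct RenderCommand {
    int         slice;    // which slice of the model to render
    int         view;     // side, front, quarter-left, quarter-right
    Status      status;
    std::string owner;    // client machine name once taken
};

// A client scans the list and claims the first untaken command. In the real
// tool this happens with the command file held open exclusively on the share,
// so two clients cannot claim the same line.
int claimNext(std::vector<RenderCommand>& cmds, const std::string& clientName)
{
    for (std::size_t i = 0; i < cmds.size(); ++i) {
        if (cmds[i].status == Status::Pending) {
            cmds[i].status = Status::Taken;
            cmds[i].owner  = clientName;
            return int(i);
        }
    }
    return -1;   // nothing left to do
}

// The server periodically rescans: if every command has been handed out but
// some never became Done, it flips those back to Pending so an idle client
// can redo them.
void reissueStalled(std::vector<RenderCommand>& cmds)
{
    for (const RenderCommand& c : cmds)
        if (c.status == Status::Pending) return;   // work is still being handed out

    for (RenderCommand& c : cmds)
        if (c.status == Status::Taken) { c.status = Status::Pending; c.owner.clear(); }
}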

When I started to write this article, the characters were rendered with a slice resolution of 1,000, but as I’m wrapping up this article, Darran has started to do 1,500 slices. That means that a raw binary file coming into our tool is 150MB, and that the problem now isn’t just with drawing speeds anymore. It is actually becoming a problem for the artist to work with the model. Making sure that all the points are influenced and assigned to a body part is in itself a challenge. The tools were written with 400 slices in mind, and work beautifully even at around 800 slices, but with insane (Darran) resolutions, we need to come up with better ways to influence and generate the finished model. More sophisticated caching techniques and regional updates are going to be used, and that will hopefully enable us to go to even higher resolutions. In a year or so we’ll probably have 2,000 to 2,500 slices, and 1GHz Pentium IIIs with 1GB of direct Rambus memory at our fingertips. When this happens, we want our tools to be able to take advantage of that power.

Torgeir Hagland programs his way around the world while conveniently dodging the Norwegian army. Write him at [email protected].
