
Outsourcing Reality: Integrating a Commercial Physics Engine


January 21, 2003


by Matt Maclaurin

Licensing rendering engines is now a well-established practice, with great potential cost and time savings over the development of a single game. As game developers reach for new forms of gameplay and a better process for implementing established genres, the wisdom of licensing physics engines is becoming inescapable. Commercial engines such as Havok and Mathengine's Karma (at press time, Criterion Software, makers of the Renderware line of development tools, were in negotiations to acquire Mathengine) have become mature platforms that can save months in development and test. Their robust implementations can provide critical stability from day one, and their advanced features can offer time advantages when developers are exploring new types of gameplay.

This sophistication does come with a cost. Physics engines do more than just knock over boxes, and the interface between your game and a physics engine must be fairly complex in order to harness advanced functionality. Whether you have already licensed an engine and want to maximize your investment or you're just budgeting your next title, gaining a better understanding of the integration process will save a lot of trial and error, and hopefully let you focus on better physics functionality while spending less time watching your avatar sink through the sidewalk.

The bare minimum we expect from a physics engine is fairly obvious: we want to detect when two objects are interacting and we want that interaction to be resolved in a physically realistic way - simple, right? As you progress deep into integration, however, you'll find physics affects your user interface, logic mechanisms, AI routines, player control, and possibly even your rendering pipeline (Figure 1).

Here at Cyan Worlds, we're more than a year into our use of a commercial physics engine, having integrated it with our own proprietary game engine. I'm going to share with you some of the nuts and bolts of our integration process. In the first part of this article, I'll talk about the fundamentals: data export, time management, spatial queries, and application of forces. Then, with an eye toward character-centric game implementations, I'll visit the twin demons of keyframed motion and player control. In these areas, challenges arise because both of them require that you bend the laws of physics somewhat, and that means you must draw some clear distinctions between what is physics and what is programming for effect.



Figure 1: Physics has many (inter)faces.

Integration Basics: Geometry Export

There are three categories of geometry supported by physics engines. The simplest are primitives, represented by formulae such as sphere, plane, cylinder, cube, and capsule. Somewhat more expensive is convex polygonal geometry. Convexity greatly simplifies detection and response, leading to improved performance and better stability. Convex shapes are useful for objects where you need a tighter fit than a primitive can provide but don't need concavity. Finally, there is polygonal geometry of arbitrary complexity, also known as polygon soups. Soups are fairly critical for level geometry such as caves and canyons but are notoriously difficult to implement robustly and must be handled with care to avoid slowdowns.

Since these geometric types have different run-time performance costs, you'll want to make sure that your tools allow artists to choose the cheapest type of physical representation for their artwork. In some cases your engine can automatically build a minimally sized primitive (an implicit proxy) at the artist's request; in other cases the artists must hand-build substitute geometry (an explicit proxy). You'll need to provide a way to link the proxy to the visible geometry it represents, so that changes in the physical state of an object will be visible to the user.

Transforms

Transforms in a rigid-body simulation do not include scale or shear. This mathematical simplification makes them fast and convenient to work with, but it leaves you with the question of what to do with scale on your objects. For static geometry, you can simply prescale the vertices and use an identity matrix. For moving physical geometry, you'll most likely want to forbid scale and shear altogether; there's not much point in having a box that grows and shrinks visually while its physical version stays the same size.

In most cases, a proxy and its visible representation will have the same transform; you want all movement generated from physics to be mirrored exactly in the rendered view. To relieve artists from having to align the transforms manually - and keep error out of your process - you may find it worthwhile to move the vertices from the proxy into the coordinate space of the visible geometry (Figure 2a).

However, if the proxy geometry will be used by several different visible geometries, you may wish to keep the vertices in their original coordinate system and simply swap in the visible geometry's transform (Figure 2b). This method lets you use physical instances, wherein the same physical body appears in several different places in the scene. While it enables efficiency via instancing, this latter approach can be less intuitive to work with, because the final position of the physical geometry depends on the transforms of the objects it's used for, not on the position in which it was actually modeled.
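
To make the Figure 2a approach concrete, here is a minimal sketch that re-expresses proxy vertices in the visible geometry's coordinate space. It assumes rigid transforms (rotation and translation only, per the discussion above); the Vec3 and Xform types are illustrative stand-ins, not any engine's math library.

#include <vector>

struct Vec3 { float x, y, z; };

struct Xform {                       // rigid local-to-world transform
    float r[3][3];                   // rotation, row-major
    Vec3  t;                         // translation

    Vec3 apply(const Vec3& v) const {
        return { r[0][0]*v.x + r[0][1]*v.y + r[0][2]*v.z + t.x,
                 r[1][0]*v.x + r[1][1]*v.y + r[1][2]*v.z + t.y,
                 r[2][0]*v.x + r[2][1]*v.y + r[2][2]*v.z + t.z };
    }

    Xform inverse() const {          // rigid inverse: R' = R^T, t' = -R^T * t
        Xform inv;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                inv.r[i][j] = r[j][i];
        inv.t = { -(inv.r[0][0]*t.x + inv.r[0][1]*t.y + inv.r[0][2]*t.z),
                  -(inv.r[1][0]*t.x + inv.r[1][1]*t.y + inv.r[1][2]*t.z),
                  -(inv.r[2][0]*t.x + inv.r[2][1]*t.y + inv.r[2][2]*t.z) };
        return inv;
    }
};

// Bake proxy vertices into the visible geometry's space (Figure 2a):
// v_visible = inverse(visibleL2W) * proxyL2W * v_proxy
void BakeProxyIntoVisibleSpace(std::vector<Vec3>& proxyVerts,
                               const Xform& proxyL2W,
                               const Xform& visibleL2W)
{
    Xform toVisible = visibleL2W.inverse();
    for (std::size_t i = 0; i < proxyVerts.size(); ++i)
        proxyVerts[i] = toVisible.apply(proxyL2W.apply(proxyVerts[i]));
}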

Time Management

Dealing with time cleanly is an extremely important thing to get right early on in integrating a physics engine. There are three key aspects of time relevant to simulation management: game time, frame time, and simulation time.

Game time is a real-time clock working in seconds. While you might be able to fudge your way from a frame-based clock to a pseudo-real-time clock, working with seconds from the start will give you a strong common language for communicating with the physics subsystems. The more detailed your interactions between game logic, animation, and physics, the more important temporal consistency becomes - a difference of a few hundredths of a second can mean the difference between robust quality and flaky physics. There will be situations where you want, for example, to query your animation system at a higher resolution than your frame rate. I'll talk about this kind of situation later in the "Integrating Keyframed Motion" section.

Frame time is the moment captured in the rendered frame. Picture it as a strobe light going off at 30 frames per second: while you only get an actual image at each frame time, a lot is happening between the images.

Simulation time is the current time in your physics engine. Each frame, you'll step simulation time until it reaches the current target frame time (Figure 3). Choosing when in your loop to advance simulation can greatly affect rendering parallelism.

Rendering frame rates can vary; if your physics step size varies, however, you'll see different physical results - objects may miss collisions at some rates and not at others. It's also often necessary to increment, or step, the simulation at a higher rate than your display; physics will manage fast-moving objects and complex interactions more accurately with small step sizes.

Tuning your physics resolution is straightforward: at physics update time, divide your elapsed time by your target physics step size and step the physics engine that many times. Be careful, though: if your frame rate drops, this approach takes more physics steps so that each step interval stays the same size, which in turn increases your per-frame CPU load. Under severe lag, this can steal time from your render cycle, lowering your frame rate, which then causes even more physics steps, ad infinitum.
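
A minimal sketch of that loop, assuming a hypothetical PhysicsWorld handle with a Step(dt) call (a stand-in, not any particular engine's API); leftover time is carried forward so simulation time never drifts from game time:

struct PhysicsWorld { void Step(float dt); };  // hypothetical engine handle

const float kPhysicsStep = 1.0f / 120.0f;      // e.g. two substeps per 60Hz frame
float g_accumulator = 0.0f;                    // time owed to the simulation

void AdvancePhysics(PhysicsWorld& world, float frameDeltaSeconds)
{
    g_accumulator += frameDeltaSeconds;
    while (g_accumulator >= kPhysicsStep) {
        world.Step(kPhysicsStep);              // every step is the same size
        g_accumulator -= kPhysicsStep;
    }
    // Any remainder is carried into the next frame, so simulation time
    // tracks game time without drifting.
}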

In such scenarios, you need a way to drop your physics-processing load until your pipeline can recover. If you're close to your target frame rate, you may be able to get away with taking larger substeps, effectively decreasing your physics resolution and accepting a reduction in realism. If the shortfall is huge, you can skip updating the simulation altogether - simply freeze all objects, bring the simulation time up to the current frame time, and then unfreeze the objects. This process will prevent the degeneracies associated with low physics resolution, but you'll have to make sure that systems that interact with physics - such as animation - are similarly suspended for this time segment.
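
A hedged sketch of that fallback logic, building on the loop above; FreezeAllBodies and the simulation-clock calls are hypothetical stand-ins, as real engines expose different mechanisms for freezing bodies and advancing the clock:

struct PhysicsWorld {                // hypothetical engine interface
    void  Step(float dt);
    void  FreezeAllBodies();
    void  UnfreezeAllBodies();
    float GetSimulationTime() const;
    void  SetSimulationTime(float seconds);
};

const float kStep        = 1.0f / 120.0f;
const int   kMaxSubsteps = 8;        // beyond this, degrade rather than spiral

void AdvancePhysicsWithFallback(PhysicsWorld& world, float& accumulator,
                                float frameDeltaSeconds)
{
    accumulator += frameDeltaSeconds;
    int owed = (int)(accumulator / kStep);

    if (owed <= kMaxSubsteps) {               // normal case: fixed-size steps
        for (int i = 0; i < owed; ++i) world.Step(kStep);
        accumulator -= owed * kStep;
    } else if (owed <= 2 * kMaxSubsteps) {    // mild lag: fewer, larger substeps
        float bigStep = (owed * kStep) / kMaxSubsteps;
        for (int i = 0; i < kMaxSubsteps; ++i) world.Step(bigStep);
        accumulator -= owed * kStep;
    } else {                                  // severe lag: freeze and skip
        world.FreezeAllBodies();              // suspend animation etc. as well
        world.SetSimulationTime(world.GetSimulationTime() + accumulator);
        world.UnfreezeAllBodies();
        accumulator = 0.0f;
    }
}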

If you're receiving events from the physics engine, the difference in clock resolution between graphics and physics has another implication: for each rendering frame, you'll get several copies, for example, of the same contact event. Since it's unlikely that recipients of these messages - such as scripting logic - are working at physics resolution, you'll need to filter out these redundant messages.
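
One minimal way to do that filtering is to remember which body pairs have already been reported this frame; the event layout and integer body ids here are hypothetical stand-ins:

#include <algorithm>
#include <set>
#include <utility>

struct ContactEvent { int bodyA, bodyB; };    // ids of the touching bodies

class ContactEventFilter {
public:
    // Returns true only the first time a given body pair is seen this frame.
    bool ShouldForward(const ContactEvent& e) {
        std::pair<int, int> key(std::min(e.bodyA, e.bodyB),
                                std::max(e.bodyA, e.bodyB));
        return m_seenThisFrame.insert(key).second;
    }
    void BeginFrame() { m_seenThisFrame.clear(); }  // once per render frame
private:
    std::set<std::pair<int, int>> m_seenThisFrame;
};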

Applying Forces

There are three ways to give an object motion in a physics world: you can apply a force to the object, you can apply an impulse, and you can set its velocity directly. Each has different trade-offs.

To be effective, a force has to be applied over a specific amount of time. In many sims, applying a force means "apply this force over the next simulation step." This is usually not what you want, as a force applied for 1/60th of a second won't move an object very far unless the force is huge. What you do want is a way to say, as simply as possible, "apply this amount of force for this amount of time." There are three ways to do this.

The first approach is to continually reapply the force each substep until you've reached your target time. For each force you wish to apply, keep track of how long it needs to be applied, and apply it one substep at a time. The problem with this approach is its complexity; you need to keep track of each force that you're applying, how long it's been applied for, and how much longer it's going to be applied. There's also the minor problem that you must apply forces over an integer number of substeps, which limits how finely you can tune your use of forces.
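
A sketch of that bookkeeping, with hypothetical Body and ApplyForce stand-ins; each entry is reapplied every substep until its time runs out:

#include <vector>

struct Vec3 { float x, y, z; };
struct Body { void ApplyForce(const Vec3& f); };  // hypothetical engine call

struct TimedForce {
    Body* body;
    Vec3  force;
    float secondsLeft;                   // how much longer to keep applying it
};

std::vector<TimedForce> g_activeForces;

void ApplyTimedForces(float substepDt)   // call once per physics substep
{
    for (std::size_t i = 0; i < g_activeForces.size(); /* advance below */) {
        TimedForce& tf = g_activeForces[i];
        tf.body->ApplyForce(tf.force);   // force acts for this substep only
        tf.secondsLeft -= substepDt;
        if (tf.secondsLeft <= 0.0f) {    // expired: swap-remove (order not kept)
            g_activeForces[i] = g_activeForces.back();
            g_activeForces.pop_back();
        } else {
            ++i;
        }
    }
}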

The second approach is to use impulses. An impulse is a force premultiplied by a time, and it takes effect instantaneously. If you want to apply a force of 10 newtons continuously over 1/10th of a second, a 1 newton-second impulse will do the trick. The limitation of impulses is that the force is not in fact applied for the entire time; all the energy is delivered instantly, and your object reaches its target velocity immediately rather than being gradually accelerated. For quick forces, such as a jump or a bullet, the simplicity of impulses makes them preferable to actual forces. If you want to lift something slowly, though, forces are the way to go.
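
As a worked example, an impulse is just the force-time product J = F * t, so the 10-newton, 1/10th-second case above comes out to 1 newton-second; the Body type and ApplyImpulse call below are hypothetical stand-ins:

struct Vec3 { float x, y, z; };
struct Body { void ApplyImpulse(const Vec3& impulse); };  // hypothetical

void KickBody(Body& body, const Vec3& force, float seconds)
{
    // J = F * t, delivered all at once
    Vec3 impulse = { force.x * seconds, force.y * seconds, force.z * seconds };
    body.ApplyImpulse(impulse);   // the whole velocity change lands instantly
}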

The third approach - setting velocities directly - is both limiting and particularly useful in situations where you need very tight control. We'll discuss it in detail later in the "Player Control Strategies" section.

Spatial Queries

Physics engines by their nature incorporate high-performance spatial data structures. These are handy for a lot of query types:

  • Trigger volumes (switch to camera B when the user enters this region).

  • Line-of-sight (can I see the power tower from here?).

  • Ray casts for AI environment probing (can Watson see me?).

  • Proximity queries for AI (start talking when the player is within five feet).

  • Evaluating theoretical object placement (can this door close without crushing anything?).

  • Ray casts for picking (let the user click on the lever).

  • Volume queries for motion planning (can I walk all the way to the hatch?).

Spatial queries can affect many types of game logic. A good query interface will save you time every day; it's an area of integration that will reward careful planning. While it can be very game specific, there are a few design parameters for your query interface that apply to almost all games:

Cascading. One query can significantly narrow the field for multiple, more complex queries: a 20-foot sphere around your avatar can gather all potentially interesting objects for subsequent query by line-of-sight.

Triggers. Some queries are set up once and report only when their state changes. For example, a region might notify you when the player enters, rather than you having to ask all regions each frame. This will typically be delivered as an event from the collision system.

Explicit queries. Some queries are only relevant at a particular moment and must be resolved instantaneously, for example, "Is that door in my way?"

Query partitioning. Some questions are only asked about specific types of objects; a camera region may only ever care if an avatar enters it, not a creature or rolling boulder. If your physics engine has an "early out" callback, you can use such application-specific type information to partition the query space, eliminating expensive detailed testing for pairs of objects you know will never interact.
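
A sketch of that partitioning idea, assuming the engine lets you install a pair-filter callback that runs before detailed testing; the callback hook and type masks here are hypothetical, not any vendor's API:

enum ObjectType { kAvatar = 1, kCreature = 2, kBoulder = 4, kCameraRegion = 8 };

struct BodyInfo {
    unsigned type;          // what this object is
    unsigned collidesWith;  // bitmask of types it cares about
};

// Installed as an "early out" callback: return false to skip all detailed
// collision testing for this pair.
bool PairFilter(const BodyInfo& a, const BodyInfo& b)
{
    return (a.collidesWith & b.type) || (b.collidesWith & a.type);
}

// A camera region would carry { kCameraRegion, kAvatar }: it only ever
// reports avatars, so creature and boulder pairs are culled up front.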

Integrating Keyframed Motion

If you're not using physics for a racing game or flight simulation, you're probably looking for interesting gameplay - big complicated machines, moving platforms, and the like. It's likely that many of these will be lovingly hand-animated by your talented artists. Unfortunately, hand animation is not obligated to obey the laws of physics. How do we integrate keyframed motion into a physically based simulation?

The approach I'll discuss here is particular to the Havok API; it happens to be what we're using, and a proper discussion of these details requires a bit of specificity. It should be illuminating regardless of your choice in API, however, as it demonstrates how time, movement, and frame rate can all affect your simulation.

There are two primary issues involved with "physicalizing" keyframed animation:

  1. Translating motion from the hierarchical scene graph into the flat physics world.

  2. Giving the physics engine enough information about the moving object to allow it to interact realistically with other, non-keyframed objects.

We've adopted a few simplifying assumptions for keyframed motion that ease implementation while still capturing the essential functionality.

First, we consider keyframed motion to be nonnegotiable. A keyframed sliding wall can push a character, but a character cannot push a keyframed wall.

Our second assumption is that we do not ask the physics engine to resolve interaction between two keyframed systems. Because these systems are hand-animated and initiated by script, avoiding interdependencies is the level author's domain.

When considering the integration of physics and keyframed animation, we first need to gather the local-to-world transforms of all the keyframed objects, as we'll need them to feed positions and velocities into the simulation. Because physics has no sense of hierarchy, you'll need all your kinetic information in world space. One way to do this is to cache matrices as you traverse your scene graph in preparation for rendering. This process gives you the matrix that you need to match the flat transform structure of physics. Because of the no-negotiating rule for keyframed objects, you can go ahead and submit the keyframed objects to your rendering pipeline as you traverse, as physics will not change those transforms. This helps parallelism, since all static and keyframed geometry can be transmitted to the graphics card before physics even starts.

Keyframed objects participate only partially in the simulation; they are not moved by gravity, and other objects hitting them do not impart forces. They are moved only by keyframe data. For this reason, it is necessary to "freeze" the keyframed objects during the simulation phase in which such forces are calculated and applied.

Keyframed objects are further marked at setup time as zero-order-integration objects. This advises physics that these objects are explicitly positioned and instructs the engine to call back during each integration substep. In this callback, you are responsible for updating the position, orientation, linear velocity, and angular velocity for the keyframed object. This information is critical for determining what happens when, say, your avatar is standing on top of that keyframed elevator. Since the physics engine has no knowledge of the forces at work, it's relying on you to help it fake the results.

To illustrate the importance of getting the velocity right, think about the difference between standing on an elevator that's moving down and one that's moving up. In the down case, a collision between you and the elevator should be resolved by you moving down. In the up case, the exact opposite is desired. The only difference here is velocity, and an incorrect result will embed your player up to the knees in the elevator floor - undesirable by most standards.

Calculating velocities is a simple matter of interpolating position and orientation from the animated transforms that you stashed away a few paragraphs back. As an alternative, higher-quality but higher-cost approach, you can ask your animation system at each physics substep to interpolate a fresh position for you. This extra work can be expensive, because you have to reinterpolate the motion channel not only for the object in question but also for any parent transforms.
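
A sketch of what such a substep callback might compute, using simple finite differences over world positions cached at the last two frames; the callback signature and types are hypothetical stand-ins (Havok's actual interface differs), and angular velocity, derived analogously from the delta rotation over the frame time, is omitted for brevity:

struct Vec3 { float x, y, z; };

struct KeyframedBody {
    Vec3 prevPos;   // world position cached at the previous frame
    Vec3 currPos;   // world position cached at the current frame
};

void KeyframeSubstepCallback(const KeyframedBody& body, float frameDt,
                             float substepFraction,    // 0..1 within the frame
                             Vec3& outPos, Vec3& outLinVel)
{
    // Interpolate position inside the frame...
    outPos = { body.prevPos.x + (body.currPos.x - body.prevPos.x) * substepFraction,
               body.prevPos.y + (body.currPos.y - body.prevPos.y) * substepFraction,
               body.prevPos.z + (body.currPos.z - body.prevPos.z) * substepFraction };
    // ...and report the velocity implied by the frame's motion, so the
    // engine can resolve collisions against the keyframed body correctly.
    outLinVel = { (body.currPos.x - body.prevPos.x) / frameDt,
                  (body.currPos.y - body.prevPos.y) / frameDt,
                  (body.currPos.z - body.prevPos.z) / frameDt };
}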

What this extra interpolation gains for you is a greater degree of frame rate independence for keyframed physical objects. To illustrate the problem of frame rate dependence, take a look at Figure 4.

Figure 4 shows an elevator reaching the bottom of its descent and moving back up. At frames 1 and 2, it's in the same position but moving in two different directions. If you're sampling position only at frame boundaries, you'll conclude that the elevator is stationary. If you add a sample in the middle, you'll get a more accurate simulation, at the cost of reaccumulating all transform dependencies. This is a fairly dramatic case; in many other cases, you'll simply see different velocities computed at different frame rates. How much this matters to your players depends largely on your game's animation speed, object velocities, and tolerance for error in motion. In a surprising number of cases, this winds up not mattering, but it's an accuracy trade-off of which you should be well aware.

The approach I just outlined is not the only way to handle keyframed motion. The Karma engine provides a different facility in which the keyframe data is used as a constraint on the object's position but does not control it directly. The end result is that the object is attached to the animation in a springy fashion; if there are a lot of people in your keyframed elevator, it will lag behind, springing ahead again as folks jump off. You can adjust the strength of the spring and the speed with which it acts. This is a neat gameplay effect and can be excellent for the right application.

Player Control Strategies

Player control of the avatar is, for many games, where you're going to spend the most time fine-tuning your physics integration. Every design trade-off you've made regarding physics resolution, applying forces, keyframe data, and the like will all come together to affect how your character navigates and how realistic it feels. The avatar is so central to the player's perceptions that any glitch becomes extremely visible. I'm going to talk about the strategy we're using for our application, a multiplayer, networked, third-person exploration game with a mix of indoor and outdoor environments and an emphasis on photorealism. Naturally, your approach will vary depending on the design of your game, but you'll probably recognize issues that apply to your own situation.

A key decision for player control is the shape of the proxy you'll use to do collision for your character. A popular choice is a simple capsule (Figure 5). This shape has several advantages: It's smooth on the bottom, so it can glide over uneven terrain; it's radially symmetric from above, so your avatar can turn in place without being pushed away from the wall; and it has no sharp corners, which can get caught on narrow doorways. A subtler advantage is that since it presents no sharp corners to the ground, it won't jump or stick as it hits polygon joins in an otherwise flat terrain.

Notice that the character's arm sticks out through the capsule. He's illustrating a point: the capsule is used only for his gross movement through the environment, and it does not handle detail interactions between, say, his hand and a lever. We use a completely different mechanism for such detail interactions; the problems of detail interaction are beyond the scope of this article, but suffice it to say that they're different enough to justify separate mechanisms from those used for movement. As for the realism of the simplistic shape, it's instructive to note that a large percentage of a human's motor control goes into maintaining the illusion that we're not a bundle of flailing limbs all moving in different directions. A real human body does an extremely good job of moving the head along a smooth path. As a result, a simplified physical body can actually produce more realistic results than a multi-limbed one.

That's how we're shaped, but how do we move? What translates button presses into forward motion? There are three fundamental approaches. First, you can set the position and orientation of your character directly. Second, you can set the velocity (linear and angular) of your character. Finally, you can apply forces to propel your character.

Setting position is attractive because it's so simple: You're standing here and you want to move forward, so just add a vector. This approach falls apart pretty quickly, unfortunately, and it is the least friendly to using physics in a general fashion.

Assume we start each frame in a physically valid position. Our player tells us to move forward, so we construct a vector representing typical forward motion, orient it to our player's forward vector, and add it to our position. Easy enough so far, and if all games were played on an infinite flat plane, this would work great. But what happens when the position we want to occupy overlaps with a wall, or even with a slight rise in the ground?

Big deal, you say, we have a fancy physics package. We'll just ask it to validate the position before we finalize it. So what do you do when the position is not valid? You'll have to calculate the point of impact, figure out where your character is deflected, and so on. This situation only gets worse when you consider that there are other moving objects in the environment. The problem is that by setting position directly, you've shut your physics engine out of the loop, and now you have to write more code to take its place. How do we get physics to do this work for us?

Forces are a natural way to move a physics body around. On the good side, you'll find that a lot of unplanned situations tend to work when you use forces: If your character hits some boxes, he'll knock them over. If he's hit by a rolling boulder, the force imparted by the boulder will combine with his walking force to move him in a new direction. He'll interact realistically with slopes and walls. In general, it's a major improvement.

On the other hand, using forces to move the player somewhat decreases your level of control over exactly how the player moves. Subtler issues such as friction come into play, and it becomes hard simply to say, "Walk to this spot." Forces tend to highlight the fact that we're using a simplistic capsule shape for the player and not a 400-bone musculoskeletal simulation. While a golf ball might fly 100 yards if you whack it with a paddle, a human won't, and the reasons why are complex to emulate.

Positioning the player by setting velocity is a reasonably happy medium between the total physics-unfriendliness of setting position and the loose control provided by forces. Rather than saying what position you want to be in each frame, calculate how fast you need to be moving to reach your target position and set the velocity on your physics body accordingly.

This has many of the same benefits as forces. If your character hits a wall, he'll either stop or slide along it. If he steps off a cliff, he'll start to fall, and if he hits a slope he'll climb up it. Little rises and falls in the ground will be automatically incorporated into your character's movement, and you still have pretty tight frame-to-frame control of your character's movement; he won't go flying off down a hill if you're setting his speed each frame, and you won't get an unfortunate confluence of external influences causing him to fly through the air.

One drawback to this approach is that your motion is still based on movement on a flat plane, so you're going to see some unrealistic movement when, for example, the ground drops away rapidly. If you're just applying that forward-walk vector, downward gravitational force will be applied every frame, but it will be blown away by your preordained velocity. As a result, the character will fall at a slow, constant rate and won't accelerate toward the ground as he should; he'll only get one frame's worth of acceleration each time before starting over at zero.

There are two solutions to this problem: leave vertical velocity alone while walking, and stop walking when you're in the air. In practice, both are necessary. You don't want a single-frame departure from the ground (common when hitting a bump) to interrupt your forward progress, so your walk behavior should continue for a short time after leaving the ground. Since that grace period can cause a few frames of floating when stepping off a cliff or cresting a peak, leaving vertical velocity untouched is what trims those extra frames of floating away. A good rule of thumb is that each navigational state should know what kind of velocity it may set: a walk can't set vertical velocity, but a jump can.
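
A minimal sketch of that rule of thumb, assuming a Y-up world and hypothetical velocity accessors on the physics body:

struct Vec3 { float x, y, z; };
struct Body {                          // hypothetical engine interface
    Vec3 GetLinearVelocity() const;
    void SetLinearVelocity(const Vec3& v);
};

void WalkUpdate(Body& avatar, const Vec3& forwardDir, float walkSpeed)
{
    Vec3 v = avatar.GetLinearVelocity();
    v.x = forwardDir.x * walkSpeed;    // horizontal control from input...
    v.z = forwardDir.z * walkSpeed;
    // ...but v.y is left alone, so falling accelerates frame over frame.
    avatar.SetLinearVelocity(v);
}

void JumpStart(Body& avatar, float jumpSpeed)
{
    Vec3 v = avatar.GetLinearVelocity();
    v.y = jumpSpeed;                   // a jump is allowed to set vertical
    avatar.SetLinearVelocity(v);
}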

Another drawback to the velocity-based approach is that it does not automatically integrate external forces. If your avatar is walking forward and is suddenly slammed by a 10-ton rolling boulder moving left, he won't budge unless you take extra measures to notice that the velocity you set last frame has been modified. Resolving this correctly is somewhat beyond our scope here, but it involves keeping track of the intended velocity and combining it intelligently with the actual velocity, rather than just overwriting it.

We've just touched on a few of the issues regarding player control in a physical environment. While they can be extremely challenging, solving these problems creatively will open up a lot of new possibilities.

Focus on Creativity

Now that we've been freed of the burden of writing yet another BSP-versus-bouncing-spheres physics engine, we find that integrating a full-featured commercial engine can be just as much work. The critical difference between the two approaches is huge, though: a robust implementation of fully generalized physics is capable of forms of gameplay we haven't even dreamed of yet.

I think that physics engines are going to do for gameplay what rendering engines have done for visuals: provide a rich base of stable features, freeing implementers to focus on creative new functionality rather than being chained to an endless wheel of reinvention. We've already seen our play-testers using the laws of physics to invent new gameplay for which we hadn't even planned. Managed carefully, this combination of planning and discovery holds great promise for the future of games and gameplay.

For More Information

Havok
www.havok.com

Mathengine's Karma
www.mathengine.com

Source code for calculating a tight-fitting spherical primitive from a polytope:
http://vision.ucsd.edu/~dwhite/ball.html

Generating convex hulls from arbitrary geometry:
www.geom.umn.edu/software/qhull

Using BSPs to break a level into convex shapes:
www.faqs.org/faqs/graphics/bsptree-faq

UNC's excellent pages on collision, with several academic implementations:
www.cs.unc.edu/~geom/collide/packages.shtml

Russell Smith's excellent open-source physics engine, ODE:
www.q12.org/ode/ode.html
