Siggraph 2000 From a Game Development Perspective

Hardware, software, papers, general wackiness... If you missed the party last week in New Orleans, check out this wrapup of what Siggraph 2000 had to offer game developers. And if you were there, you may still spot a few things you missed.

August 4, 2000

By Daniel Sánchez-Crespo Dalmau

The dog days of summer are here again, and that can only mean one thing: it's Siggraph time. The biggest graphics show on Earth took place last week in sunny New Orleans, to the great delight of 26,000 attendees. This year's event focused mainly on new platforms for entertainment. The show was huge: it lasted a full week, offered more than 40 courses, and packed over 300 exhibitors into roughly 150,000 square feet. Below, I'll outline the major hardware and software highlights of the show and offer some insight into the trends they point to.

New Hardware On Display

Sony's GScube. The award for best booth this year probably should go to Sony. The Japanese giant had one of the most interesting exhibits on the expo floor, where it showcased its vision of the future of entertainment. The space was divided into two main areas: the first was dedicated to game systems, the other to passive home entertainment. In the games arena, the PS2 was presented as the essential system for both casual and hardcore gamers. Alongside the consumer product, Sony showed the Playstation 2 development system, which some of us had already seen at the GDC last March. The new development system is very different from, and a clear improvement over, the original Playstation development process: PSX games were programmed via a PC extension board, which made developing in a networked environment rather complex because the card was tied to the PC it was plugged into. That problem no longer exists with the Playstation 2 development system, a full-blown external device that can be operated in two different ways. The first (dubbed programming/debugging mode) is similar to what you had with the classic Playstation. The second (dubbed workstation mode) lets you hook the PS2 dev system to a network via an Ethernet connection, turning it into a Linux-based development server.

Sony's GScube, targeted at content creation and broadband delivery markets.

In the home entertainment arena, Sony's vision improves upon the classic TV concept by adding Internet access, broadband connectivity, and unprecedented digital image quality. Although we saw a large flat screen showing some impressive movies, neither the console area nor the home entertainment area held many surprises for most attendees. What was shocking and unexpected sat in between the two: a black cube about the size of a microwave oven, with "GScube" printed on it. It was living proof of Sony's plan to blend the passive and interactive entertainment worlds together in the future.

The GScube is a rendering device targeted at the content creation and broadband delivery markets. It consists of 16 cascaded processing units, each based upon an enhanced version of the Sony Playstation 2. Every processing unit has an Emotion Engine CPU (jointly developed by Sony and Toshiba) and an improved Graphics Synthesizer equipped with a 32MB frame buffer (eight times the memory of a regular PS2). That yields a total of 512MB of VRAM, and the device can theoretically reach a peak performance of 1.2 billion triangles per second -- a number that sounds like it comes from science fiction. The GScube must be controlled by an external broadband server that feeds it data; at Siggraph, that server was the brand-new SGI Origin 3400. At the Sony booth, we enjoyed a battle between characters from the movie Antz rendered in real time, as well as interactive sequences from the upcoming Final Fantasy movie shown at 1920x1080 pixels and a sustained 60 frames per second.
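For the curious, the headline numbers are easy to sanity-check. The short calculation below assumes each of the 16 units peaks near the 75 million triangles per second commonly quoted for the stock Graphics Synthesizer -- Sony didn't break out an exact per-unit figure for the enhanced part, so treat it as an estimate:

# Back-of-the-envelope check of the GScube figures (Python).
# Assumption: each unit peaks near the stock Graphics Synthesizer's
# oft-quoted 75 million triangles per second.
UNITS = 16
VRAM_PER_UNIT_MB = 32            # eight times the 4MB of a regular PS2
PEAK_TRIS_PER_UNIT = 75e6        # assumed per-unit peak

total_vram_mb = UNITS * VRAM_PER_UNIT_MB        # 512 MB
total_peak_tris = UNITS * PEAK_TRIS_PER_UNIT    # 1.2e9 triangles per second
print(f"{total_vram_mb} MB of VRAM, {total_peak_tris / 1e9:.1f} billion triangles/s")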

In the Antz demo, I counted 140 ants, each comprising about 7,000 polygons, rendered using a ported version of Criterion's Renderware 3. All the ants were texture mapped, and the results looked surprisingly close to the quality of the original movie. The Final Fantasy demo was data from the now-in-development full-length CG movie based upon the game series, rendered in real time by the GScube. It showed a girl (with animated hair threads) in a zero-gravity spaceship, with a user-controllable camera viewpoint. The demo rendered about 314,000 polygons per frame, and included an impressive character with 161 joints, motion-blur effects, and many other cinematic feats. According to Kazuyuki Hashimoto, senior vice president and CTO of Square USA, the GScube allowed them to show, in real time, quality "close to what is traditionally software rendered in about five hours." Sony believes the GScube delivers a tenfold improvement over a regular PS2, and future iterations of the architecture are expected to reach a 100-fold improvement.

Xbox Demos. The second surprise at Siggraph was seeing what appeared to be a fully operational Xbox at the Nvidia booth. The device, still in its fancy silver X-shape, was plugged into a large screen and showed the same demos we saw at the GDC and E3 (the girl dancing with her robot, and butterflies flying above the pool). The quality seemed a bit lower than in the original GDC demos, but the animations still looked gorgeous. Whether the device was a real Xbox or just a very early prototype is unknown, but with more than a year of development time still ahead, it seems premature to take what we saw as the definitive architecture. We'll have to wait and see how the system evolves.

Exhibition Floor: Software

Improv Technologies. On the software side, Siggraph had two nice surprises in store for me. First, there was a new company on the expo floor called Improv Technologies, a spin-off from New York University's Center for Advanced Technology. The company is headed by Ken Perlin, the researcher behind the well-known noise function that bears his name; he received a Technical Achievement Academy Award in 1997 for its development, as it has been widely used in movie production.

The goal of Improv Technologies is to create and deliver products based upon research that has been going on at NYU over the last two decades -- research closely related to game development. NYU's work has traditionally focused on procedural character animation, using turbulence and noise functions as control systems, and it tackles both the low and high levels of the problem. At the low level, it controls small character movements (such as frowning and smiling) in a realistic and convincing way; you can try a very nice hands-on Java demo at http://mrl.nyu.edu/perlin/facedemo. At the high level, the folks at Improv have explored layered animation (the goal of the original Improv system), which allows improvisational actors to interact and behave in real time in complex, networked environments. You can find additional information on this research in the bibliography at the end of this article.
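To give a taste of the low-level idea (this is my own toy sketch, not Improv's code), the snippet below layers a few octaves of smoothed pseudo-random noise to drive a single joint angle, producing the kind of subtle, non-repeating idle motion Perlin's techniques are famous for. The interpolation scheme and octave weights are arbitrary choices made purely for illustration:

import math, random

random.seed(42)
_lattice = [random.uniform(-1.0, 1.0) for _ in range(256)]

def value_noise(t):
    """Smoothly interpolated 1D value noise (a stand-in for Perlin noise)."""
    i = int(math.floor(t))
    f = t - i
    f = f * f * (3.0 - 2.0 * f)        # smoothstep fade between lattice values
    a = _lattice[i % 256]
    b = _lattice[(i + 1) % 256]
    return a + f * (b - a)

def idle_head_tilt(time_s):
    """Layer three octaves of noise into a small, lifelike head-tilt angle."""
    angle = 0.0
    amplitude, frequency = 4.0, 0.5     # degrees and cycles/second, chosen arbitrarily
    for _ in range(3):
        angle += amplitude * value_noise(time_s * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return angle                        # stays roughly within +/- 7 degrees

for t in range(10):
    print(f"t={t}s  tilt={idle_head_tilt(float(t)):+.2f} deg")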

Improv Technologies demos Catalyst.

At Siggraph, Improv premiered its first two products: Orchestrate3D and Catalyst. Orchestrate3D is a project management tool for animation. Its core module is the Scene Conductor, an animation compositor. Using the same paradigm found in the original Improv system from NYU, animators can assign motion sequences to body parts, blend different motions in different areas of the body, layer motions, sequence animations, and so on. Catalyst, on the other hand, is a generic game engine with an emphasis on the graphics subsystem. It includes an advanced character animation engine plus a level engine. The character animation engine is designed to work with an animation package such as Maya or 3D Studio Max, and supports both low-level (e.g., facial expressions) and high-level (e.g., full-body layered control) animation. The level engine supports a number of advanced features such as curved geometry, real-time shadows, cartoon rendering, and collision detection. Although neither product is available yet, the demos and talks at Siggraph convinced me that it would be wise to keep an eye on Improv, as the company seems capable of creating some spectacular game development tools.
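The essence of this kind of animation compositing is easy to sketch: each layer contributes a weighted blend toward its target values for the joints it owns, applied bottom-up. The code below is a bare-bones illustration of that idea using made-up joint and layer names -- it is not based on Scene Conductor itself, which obviously handles far more (quaternions, timing, transitions):

# Minimal sketch of layered animation blending: each layer maps joint
# names to target angles plus a weight; layers are applied bottom-up,
# and each one blends toward its targets only for the joints it owns.

def apply_layers(base_pose, layers):
    pose = dict(base_pose)
    for targets, weight in layers:                  # bottom layer first
        for joint, angle in targets.items():
            current = pose.get(joint, 0.0)
            pose[joint] = current + weight * (angle - current)
    return pose

# Hypothetical joints and layers, just to show the mechanics.
base_walk = {"hip": 12.0, "spine": 2.0, "neck": 0.0, "jaw": 0.0}
wave_arm  = ({"shoulder_r": 80.0, "elbow_r": 45.0}, 1.0)    # full override
smile     = ({"jaw": 4.0, "cheek_l": 1.5, "cheek_r": 1.5}, 0.6)
look_left = ({"neck": -25.0}, 0.8)

final_pose = apply_layers(base_walk, [wave_arm, smile, look_left])
print(final_pose)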

SGI's Marc Olano. The second software highlight doesn't come from the expo floor, but from a technical paper presented by Marc Olano and others from SGI. The technique they introduced is based on a simple idea: achieve Renderman-quality shading using standard OpenGL calls. For those of you unfamiliar with CG software, Renderman is a widely accepted rendering standard that Pixar developed in the late 1980s, and it has been used to create blockbuster movie effects in hits like Toy Story, Jurassic Park, and The Abyss. Renderman's main advantage is a very flexible, C-like shading language that allows procedural definition of surface attributes, texturing, lighting, and so on. This language is the Renderman Shading Language, and its "programs" are called shaders. You can see an example shader (the ubiquitous marble texture) below:

#include "marble.h"

surface marble()
{
    varying color a;
    uniform string tx;
    uniform float x;

    /* Accumulate several octaves of a grayscale noise texture. */
    x = 1/2;
    tx = "noisebw.tx";
    FB = texture(tx, scale(x, x, x));
    repeat (3)
    {
        x = x * .5;
        FB *= .5;
        FB += texture(tx, scale(x, x, x));
    }

    /* Map the accumulated noise through a color lookup table, then
       modulate by diffuse lighting and add an environment reflection. */
    FB = lookup(FB, tab);
    a = FB;
    FB = diffuse;
    FB *= a;
    FB += environment("env");
}

Renderman shaders are compiled into byte code, much the way Java programs are, and they are executed during the rendering process. Although they deliver very high quality results, they are not well suited to real-time applications like games. Some games (the most notable example being Quake 3) have tried to replicate this "shader power" in real time, but the results have never come close to what's possible with Renderman. What Marc Olano showed at Siggraph is a prototype technology called the Interactive Shading Language, a shading language roughly equivalent to Renderman in features and power. The great thing about it is that SGI has developed a technique to convert Renderman-style code into OpenGL calls automatically, so the CG shader becomes usable in a real-time environment. Theoretically, one could take a shader from a movie production, feed it to SGI's shader compiler, and get pure, optimized OpenGL code.

In essence, the system turns shader instructions into rendering passes through OpenGL's pipeline. Thus, using multitexturing on today's hardware, you can achieve results similar to those produced by Renderman. Below, you can see a comparison of the same scene rendered using Pixar's Photorealistic Renderman (top) and OpenGL multi-pass rendering with a shader compiler (bottom). The benefit of the technique is clear: a company will be able to use the same technology in the production/CG area and in the games department. The fact that today's game hardware implements more and more Renderman-style functionality (Nvidia's per-pixel lighting and shading are a good example) certainly helps. So someday soon we may see huge gains in visual quality by porting shaders to OpenGL, without incurring a significant performance hit.
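To make the mapping concrete, here is a small software mock-up (plain NumPy, unrelated to SGI's actual compiler) of how the marble shader above decomposes into whole-framebuffer passes. Each shader statement becomes one operation over an image-sized buffer -- exactly the kind of per-pass work that multitexturing and blending hardware can absorb. The texture sizes, lookup table, and lighting terms here are stand-ins chosen only to show the structure:

import numpy as np

# Toy stand-ins: the "framebuffer" is an HxW image, and texture() is a
# lookup into a precomputed noise image at a given scale.
H, W = 256, 256
rng = np.random.default_rng(0)
noise_tex = rng.random((H, W))

def texture(tex, scale):
    """Crudely emulate a scaled texture fetch by tiling a cropped region."""
    h = max(1, int(H * scale)); w = max(1, int(W * scale))
    tile = tex[:h, :w]
    reps = (H // h + 1, W // w + 1)
    return np.tile(tile, reps)[:H, :W]

# Pass-for-pass translation of the shader body:
x = 0.5
FB = texture(noise_tex, x)            # FB = texture(tx, scale(x,x,x))
for _ in range(3):                    # repeat(3)
    x *= 0.5
    FB *= 0.5                         # FB *= .5
    FB += texture(noise_tex, x)       # FB += texture(tx, scale(x,x,x))

marble_table = np.linspace(0.2, 1.0, 256)                    # stand-in for "tab"
FB = marble_table[np.clip((FB * 128).astype(int), 0, 255)]   # FB = lookup(FB, tab)

a = FB.copy()                         # a = FB
diffuse = np.full((H, W), 0.8)        # stand-in diffuse lighting pass
env = np.full((H, W), 0.1)            # stand-in environment map pass
FB = diffuse * a + env                # FB = diffuse; FB *= a; FB += environment("env")
print(FB.shape, FB.min(), FB.max())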

A scene rendered in Pixar's Renderman (top) and an OpenGL multi-pass rendering with a shader compiler (bottom).

This Year's Vision

Siggraph wasn't just about hardware and software; it also provided a wide variety of food for thought. With vast amounts of new technology on display, it is impossible not to consider the long-term trends that will affect the games and graphics industries.

I think back to the early 1980s, when someone on the REYES team (an early CG renderer used for scenes in Star Trek II; it later evolved into Renderman) said that "reality is just 80 million triangles per second." Today, it would probably be more accurate to say "reality is 80 million triangles per frame," as the original prediction was rather conservative. Still, there's something striking about that sentence. Back in those pioneer days, 80 million triangles per second was a huge number, and anyone capable of producing CG imagery of such quality would surely achieve lifelike results. Years later, people from Nvidia and Sony (among others) now offer that performance level... in real time. The terms "real-time" and "rendered" have become almost synonymous.

The game development industry has not stood still, either. When that 80-million-triangle goal was stated, games featured four to 16 colors and fit within 64KB. Today, big-game budgets run into the millions of dollars, and development cycles resemble those of the movie industry. The quality of the results has increased accordingly, and today's games are closer to movies (in terms of quality) than to old-style games. While some people think there's no relationship between the two industries (other than some resource sharing), many believe that the different areas of electronic entertainment are converging toward a single central vision -- and to me, Siggraph 2000 provided ample proof.

Take the GScube, for example. The device offers a nice view of what the future of entertainment could hold. Two years from now, broadband will probably be the delivery mechanism of choice for home entertainment. GScubes (or whatever they are called by then) will likely offer rendering power on the order of a billion triangles per second. But, as Sony says, the GScube will not be a "console" platform -- it will be a "content delivery" platform. Content providers (be they cable carriers, telcos, movie studios, or game companies) will use devices of this kind to feed us entertainment through broadband pipes: the GScube will be the game server, and perhaps the Playstation 2 will become the game client.

How about e-cinema? With this level of rendering power, one could think of the new devices as real-time interactive cinema renderers. Some will say "hey, movie theaters are a group experience, so interactivity is useless." Yes, but how about interactive home cinema? Imagine sitting at home and watching a rendered movie in which you can make decisions.

If you want a collective interactive experience, recall what happened at the Electronic Theater at Siggraph '91 (Las Vegas) and Siggraph '98 (Orlando). At those events, Loren Carpenter (another big name in the Renderman world) ran a terrific experiment in collective decision-making. Each member of the audience -- all five thousand people -- was given a wooden card (it looked like a paint stirrer) with a red reflector on one side and a green reflector on the other. A camera aimed at the audience from the stage sensed the balance of power between the two colors as the audience "voted," and a giant game of Pong was displayed on the theater's screen. As more green or red reflectors were sensed by the camera, the paddle moved up or down. Not surprisingly, this "collective brain" turned out to be quite proficient at playing Pong, and even at piloting a plane in a flight simulator (well, the plane crashed, but everyone had a blast anyway). Apart from being quite an interesting example of flocking behavior (simple local rules that yield complex emergent behavior), the experiment is a mind-opener for collective entertainment. Now try to imagine mixing that with the technology described above. Do you still think interactive, collective movies are far off? I don't.
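The control scheme behind the experiment is trivially simple, which is part of its charm. Below is a toy reconstruction of the voting loop (my own guess at the mechanics, not Carpenter's system): tally how many reflectors show green versus red in each camera frame, and nudge the paddle in proportion to the imbalance. The gain and tallies are invented numbers:

# Toy reconstruction of the audience-as-joystick idea: each frame,
# count green vs. red reflectors seen by the camera and move the
# paddle in proportion to the imbalance.

def paddle_update(paddle_y, green_count, red_count, gain=0.02,
                  y_min=0.0, y_max=1.0):
    total = green_count + red_count
    if total == 0:
        return paddle_y                             # nobody voting: hold position
    imbalance = (green_count - red_count) / total   # ranges from -1 to +1
    return min(y_max, max(y_min, paddle_y + gain * imbalance))

# Simulated frames of (green, red) tallies from a 5,000-person audience.
frames = [(3200, 1800), (2600, 2400), (1500, 3500), (2500, 2500)]
y = 0.5
for green, red in frames:
    y = paddle_update(y, green, red)
    print(f"green={green:4d} red={red:4d} -> paddle y={y:.3f}")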

In a different vein, let's revisit the SGI talk. If Renderman code can easily be ported to OpenGL calls, and that kind of approach becomes widely accepted, movies and games will be able to share not only similar quality and production values, but also many core development techniques. If a movie production company builds an array of shaders for a blockbuster film, those assets could be seamlessly integrated into the game, shortening production cycles and, more importantly, making the game engine's output and the movie look more alike.

The No-Shows

Siggraph 2000 was an interesting show, but some companies were noticeably absent. Whatever the reason, seeing big players "vanish" from one of the world's main multimedia exhibits is regrettable. 3dfx, for example, had very good reasons to be there: the Voodoo 4 and 5 product lines, especially the high-end versions, are quite impressive products. No one doubts that the company's market share has shrunk thanks to Nvidia, but 3dfx still holds the crown as many gamers' favorite brand.

Another notable absence was Microsoft, which also "skipped" Siggraph '99. That's unfortunate, because many people were expecting to see the folks from Redmond showcase the multimedia abilities of Windows 2000 and the Xbox. With DirectX 8 on the horizon, Siggraph would have been the perfect place to hear about the new tools and technologies straight from Microsoft. At least we got a glimpse of the Xbox via Nvidia's booth.

...And Weird Stuff

Finally, there was the weird stuff at Siggraph. Being a multi-discipline show made up of technical conferences and art exhibits, Siggraph is a perfect opportunity to see demos and technologies way ahead of their time. This year, two exhibits share my prize for "weirdest stuff of the show."

This animatronic dolphin grabbed the attention of visitors to Sun's booth.

 

First, I must honor the folks from Sun Microsystems for their impressive booth. Apart from displaying everything related to Java and Solaris under the sun (no pun intended), they had a spectacular "surprise guest": an eight-foot-long animatronic dolphin swimming in a water tank. The dolphin greeted visitors, made funny noises, and proved a very effective way of grabbing people's attention. The level of detail made many believe it was a real creature. Apparently the dolphin has something to do with Sun's future marketing strategy, so I guess we'll find out later this summer.

Second, we must honor Daniel Rozin for his "Wooden Mirror," installed at the Art Gallery -- a work of true genius. The installation consisted of a mirror frame which, instead of a piece of reflective glass, held an array of 830 small wooden chips arranged in a regular grid, each individually controlled by a mechanical servo. In the middle of the array, a miniature camera sensed whatever stood in front of the mirror, and the hundreds of motors tilted the wooden pieces so that each caught more or less light, building up a reflected image -- just as if it were a mirror. Because of all the movement involved in building the reflections, the wooden mirror did exhibit some noise, resembling waves on the sea. But the display was visually impressive, and I don't think a written paragraph can do it justice. So check out the wooden mirror in action below.
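For those wondering how such a thing works, here is a rough sketch of the control loop (my own guess at the mechanics, not Rozin's code): reduce each camera frame to one brightness value per wooden chip, then tilt each chip's servo so it catches proportionally more or less light. The grid layout and servo travel below are placeholder values:

import numpy as np

# Rough sketch of a Wooden Mirror-style control loop: downsample the
# camera frame to one brightness value per chip, then map brightness
# to a servo tilt angle.

CHIP_ROWS, CHIP_COLS = 30, 28          # simplified rectangular layout
MAX_TILT_DEG = 30.0                    # assumed servo travel

def chip_angles(frame):
    """Map a grayscale camera frame (H x W, values 0..1) to tilt angles."""
    h, w = frame.shape
    rh, rw = h // CHIP_ROWS, w // CHIP_COLS
    # Average the pixels falling on each chip (simple box downsample).
    cropped = frame[:rh * CHIP_ROWS, :rw * CHIP_COLS]
    blocks = cropped.reshape(CHIP_ROWS, rh, CHIP_COLS, rw).mean(axis=(1, 3))
    return blocks * MAX_TILT_DEG       # brighter region -> steeper tilt

rng = np.random.default_rng(1)
camera_frame = rng.random((240, 320))  # stand-in for a live camera image
angles = chip_angles(camera_frame)
print(angles.shape, angles.min(), angles.max())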

Daniel Rozin's wacky "Wooden Mirror" concludes our Siggraph 2000 wrapup.

 

References

Siggraph Home Page
www.siggraph.org

Improv Technologies
www.improv-tech.com

Perlin, K. "Layered Compositing of Facial Expression."
http://mrl.nyu.edu/improv/sig97-sketch

Perlin, K., and A. Goldberg. "Improv: A System for Scripting Interactive Actors in Virtual Worlds." Computer Graphics, Vol. 29, No. 3. Available online at http://mrl.nyu.edu/improv/sig96-paper

SGI's Renderman Shader Compiler Page
www.sgi.com/software/shader

Upstill, Steve. The RenderMan Companion. Reading, Mass.: Addison-Wesley, 1992. ISBN 0-201-50868-0.

 
