
What did Daniel do this summer? He explored SIGGRAPH. Now he's bringing you the low-down on SGI, the state of Fahrenheit, Nvidia's NV-10, the latest animation tools, looming trends, and how they did that cool thing in the Matrix.

August 20, 1999


by Daniel Sánchez-Crespo Dalmau

Once again, the computer graphics (CG) community came together at SIGGRAPH, the world's biggest CG show. The event took place last week, August 8-12, at the Los Angeles Convention Center. This year marks the 30th anniversary of SIGGRAPH (the organization, not the show), and many special activities reminded attendees of the relevance of computer graphics throughout the last three decades.

As in past years, SIGGRAPH was divided into the classic three areas: the courses and panels, where world-class scientists and engineers discussed hot new topics; the main exhibition floor, where companies showcased their new products and technologies; and the special activities, ranging from art galleries to movie passes and parties.

In my wrap-up of the show, I'll focus on the exhibition floor, as it's the place where all the companies pile up to show the world their new products. This year the expo occupied 160,000 square feet with 350 booths. According to event management, about 50,000 people came from 75 different countries. The hall was quite spectacular: huge screens showing movie FX, fridge-sized speakers pumping out dance music - and really good-looking dancers performing real-time motion capture demos (although SIGGRAPH is still far from matching E3). Another funny thing to point out is that SIGGRAPH seems to have its own fashion standard: the worse you are at matching dress colors, the more important you are. Yellow shirt on green shorts? Maybe a vice-president. Yellow shirt on green shorts and pink sunglasses? Probably president.


The SGI Affair

I'll start with one of the big announcements that shocked everyone at SIGGRAPH. Silicon Graphics is about to embark upon a major restructuring involving most products and divisions. First of all, the company wants to become more efficient and competitive. Nothing new here, yet. What's new is that SGI is becoming more and more interested in the Linux scene, and in fact is planning to incorporate Linux as the main OS across its product range. Interesting, especially considering SGI already has a proprietary OS (IRIX) which, according to some SGI engineers at SIGGRAPH, will eventually disappear.

On the hardware side, SGI plans to begin using Intel chips (starting with the soon-to-ship Intel Merced) as the main CPU throughout all its products. This includes the low-end Visual Workstation systems (some of which already use Intel hardware), but will eventually reach higher configurations (and prices), such as the Origin supercomputers.

Additionally, SGI said it will also stop developing new graphics chipsets, and will start using Nvidia's hardware. So, what do we have if we put all this together, add some water, and shake well? We get SGI workstations built with mainstream parts, blended together using SGI's unique knowledge of graphics architectures to get the best price/performance ratio. As some engineers pointed out, SGI's current strategy can't be sustained much longer, as some companies (namely, Nvidia) are producing chipsets which rival SGI's own. Besides, a consumer-oriented company like Nvidia works with yearly (or even six-month) product cycles, a pace that can't be matched by a larger, more structured company like SGI.

What do I want to see? I'd really like a high-end SGI machine built with Merceds, Nvidia chips, and Linux as the OS. The question is whether SGI will be able to keep the price-performance ratio competitive with other vendors. Recent reviews of the Visual Workstations (models 320 and 540) report good performance, but complain that the machines are still a bit overpriced. Price cutting is the way to go.

Regarding the SGI/Nvidia relationship, SGI will be sending a group of very talented engineers to Nvidia to create a team that defines future chips' architectures. The team will thus have SGI's historical knowledge of graphics applications and Nvidia's current market leadership. This move just seems to confirm what's obvious: SGI wants to abandon the chip creation business and is transferring technology to Nvidia, a company that's already staffed by quite a few ex-SGI engineers.

Besides this big news, SGI showed off OpenGL Optimizer (the technology that was to be SGI's gift to the Fahrenheit project, but more on that later), those great-looking flat screens for its visual workstations, and more SGI systems running Linux than you can shake a light sabre at.

Where was Microsoft?

One of the booths I was most looking forward to this year was Microsoft's. Bill and friends potentially had lots to display: DirectX 7 is about to hit the market, Windows 2000 is on its way, and Fahrenheit is peeking over the horizon. So, I entered the exhibition floor, grabbed my map, and looked for them. But they weren't there. Microsoft did not attend SIGGRAPH this year, despite having made appearances at least as far back as 1996. I asked around, I tried hotel suites, I looked for press releases. Nothing. So, all I can tell you about Microsoft comes from indirect sources.

To begin with, SGI is definitely backing away from Fahrenheit. Fahrenheit (aspects of which should be incorporated into DirectX 8) was announced last year and was touted at the time as a joint initiative between SGI and Microsoft which would allow scene graph control for graphics applications. The idea was to transfer volume clipping (hierarchical bounding boxes, and so on), texture management, and other scene-level tasks to this new API. This would have been similar to what SGI has done over the last decade with successful APIs such as Inventor and Performer. Nice idea, but maybe the marriage of the two companies was a bit unnatural. The result? SGI appears to be tired of working with Microsoft, and has stepped back from the project. Although it will comply with all the written contracts regarding Fahrenheit development, SGI is no longer interested in the API. No SGI machine will use Fahrenheit, as similar functionality is already present in a variety of proprietary SGI APIs.
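
To make the scene-graph idea concrete: the point of an API like Fahrenheit (or Inventor and Performer before it) is that the application hands over a hierarchy of objects and lets the library decide what actually gets drawn. Below is a toy sketch of the hierarchical bounding-volume culling such an API performs; the structures and names are my own illustration, not Fahrenheit's.

```c
/* Toy sketch of scene-graph culling with hierarchical bounding spheres.
   Every node bounds its whole subtree, so one failed test skips all the
   geometry underneath. These structures are hypothetical, not
   Fahrenheit's (or Performer's) actual API. */
#include <stddef.h>

#define MAX_CHILDREN 8

typedef struct Node {
    float center[3];                     /* bounding sphere of the subtree */
    float radius;
    struct Node *children[MAX_CHILDREN];
    size_t num_children;
    void (*draw)(const struct Node *);   /* leaf draw callback, may be NULL */
} Node;

/* Returns nonzero if the sphere is at least partly inside the frustum,
   given six planes in ax + by + cz + d >= 0 form. */
static int sphere_in_frustum(const float c[3], float r,
                             const float planes[6][4])
{
    int i;
    for (i = 0; i < 6; i++) {
        float d = planes[i][0] * c[0] + planes[i][1] * c[1]
                + planes[i][2] * c[2] + planes[i][3];
        if (d < -r)
            return 0;                    /* completely behind this plane */
    }
    return 1;
}

/* Walk the graph, drawing only subtrees whose bounds touch the frustum. */
void draw_visible(const Node *n, const float frustum[6][4])
{
    size_t i;
    if (!sphere_in_frustum(n->center, n->radius, frustum))
        return;                          /* whole subtree culled at once */
    if (n->draw)
        n->draw(n);
    for (i = 0; i < n->num_children; i++)
        draw_visible(n->children[i], frustum);
}
```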

What will shake out from this soap opera? First, it's clear that Microsoft is the sole backer of Fahrenheit. Conclusion number two is that Fahrenheit will lose one of its key features: cross-platform support. With SGI involved, the emphasis was on making a universal, cross-platform scene graph. If SGI steps back, Fahrenheit will probably be a Windows-only affair, and we'll be back to using a proprietary technology.

The guys at Sun Microsystems are probably laughing at all of these Fahrenheit developments. For the last two years, Sun has been promoting its own scene-graph API, Java3D. This is, as you may have guessed, an extension of the Java language for accessing 3D hardware, mapping its calls to either OpenGL or Direct3D. Well, Java3D is out now, and already offers scene graph control, one year (at least) before Microsoft's Fahrenheit.

Nvidia cagey about NV10

I thought the folks at Nvidia had quite a small booth, considering the recent success of the company's TNT2 chipset. To tell the truth, nothing new was shown to the public there, although demos of the company's next chipset (known by its codename, NV10) were shown behind closed doors. What Nvidia offered the public didn't come from its booth, but rather from the courses at SIGGRAPH that were taught by Nvidia engineers.

On the first day of the show, I attended a course titled "Lighting & Shading for Interactive Applications," presented by Mark Kilgard (former SGI employee, creator of the GLUT library, and now one of Nvidia's brightest minds). Kilgard, a damn good OpenGL programmer, showed some excellent demos he wrote to showcase special techniques. The first demo showed shadow volumes, and how they can be used to do real-time shadows in games. The demo showed Nvidia's logo spinning over a scene, with a shadow volume projecting the logo's shadow onto the floor. The demo ran quite fast, so a guy from the audience asked the key question: "Mr. Kilgard, could you tell us which hardware the demo is running on?" And as expected, Kilgard answered, "This is a prototype board." Was what we were seeing running on Nvidia's next-generation board? We were told the board was handling transform and lighting, so it may well have been an NV10 prototype.
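
Kilgard's demo is Nvidia's own code, but the underlying technique is well known. Here is a minimal sketch of the classic stencil-buffer shadow volume pass, assuming an OpenGL context with a stencil buffer; draw_scene(), draw_shadow_volume(), and draw_shadow_pass() are hypothetical helpers.

```c
/* Classic depth-pass stencil shadow volumes: count how many shadow
   volume faces lie in front of each visible pixel. Pixels with a
   non-zero count are in shadow. The three draw_* helpers are
   hypothetical placeholders. */
#include <GL/gl.h>

extern void draw_scene(void);          /* normal pass: depth + color */
extern void draw_shadow_volume(void);  /* silhouette extruded away from light */
extern void draw_shadow_pass(void);    /* e.g., blend a dark full-screen quad */

void render_shadowed_scene(void)
{
    /* 1. Render the scene normally to fill the depth and color buffers. */
    draw_scene();

    /* 2. Render the shadow volume into the stencil buffer only. */
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_CULL_FACE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glCullFace(GL_BACK);                      /* front faces increment... */
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    draw_shadow_volume();

    glCullFace(GL_FRONT);                     /* ...back faces decrement */
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    draw_shadow_volume();

    /* 3. Darken only the pixels whose stencil count is non-zero. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_shadow_pass();

    glDisable(GL_STENCIL_TEST);
    glCullFace(GL_BACK);
}
```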

Another demo showed some new technology that may be included when this board ships (according to Nvidia's personnel, before Christmas). It showed an OpenGL logo texture mapped onto a quadrilateral, with some nice bump-mapping applied. The way they achieve this is quite interesting: to get good-looking, per-pixel bump-mapping, Nvidia plans to store per-pixel surface normals as an RGB texture map which encodes the XYZ triplets. Then a new texture blending mode will be added to OpenGL's existing multi-texturing capabilities. This new mode will dot-product two vectors (the normal stored in the texture and a light vector), and modulate (multiply) underlying layers according to the result. This way, Nvidia can provide a great way to evaluate the classic Lambert formula for diffuse lighting (which, in case you're interested, is I_diffuse = C_surface * C_light * k_diffuse * (N · L)).
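
To see what that per-pixel math works out to, here's a software sketch of the same calculation: decode a normal stored as an RGB texel, dot it with the light vector, and modulate the surface color. The 0-255 packing convention and all names are my own assumptions, not Nvidia's actual blending mode.

```c
/* Per-pixel diffuse lighting from a normal map, in software.
   Each texel stores a surface normal packed into RGB (0..255 maps to
   -1..+1); the result is I = C_surface * C_light * k_diffuse * (N . L).
   Packing convention and names are illustrative assumptions. */
typedef struct { float r, g, b; } Color;

static float decode(unsigned char c)
{
    return (c / 255.0f) * 2.0f - 1.0f;   /* 0..255 -> -1..+1 */
}

Color shade_texel(unsigned char nr, unsigned char ng, unsigned char nb,
                  const float light[3],  /* unit vector toward the light */
                  Color surface, Color light_color, float k_diffuse)
{
    /* Unpack the per-pixel normal from the texture map. */
    float nx = decode(nr), ny = decode(ng), nz = decode(nb);

    /* The dot product N . L, clamped so back-facing texels go black. */
    float ndotl = nx * light[0] + ny * light[1] + nz * light[2];
    if (ndotl < 0.0f)
        ndotl = 0.0f;

    /* Modulate the underlying color layers by the lighting term. */
    Color out;
    out.r = surface.r * light_color.r * k_diffuse * ndotl;
    out.g = surface.g * light_color.g * k_diffuse * ndotl;
    out.b = surface.b * light_color.b * k_diffuse * ndotl;
    return out;
}
```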

3dfx talks T-Buffer

3dfx didn't have a booth at SIGGRAPH. No music, no flashing lights, no nice dancers - they hosted folks in a hotel suite. A week before SIGGRAPH, 3dfx posted a news release introducing the technology behind its new graphics accelerator (which I'll call Voodoo4 or V4, even though the name has not been announced). The technology is called the T-Buffer, and for those of you familiar with OpenGL, I'll begin by saying that it's something similar to OpenGL's accumulation buffer. In most graphics boards, data is processed in a linear fashion: a number of triangles are sent to the board via the bus, painted with texture maps, and written to a single frame buffer. The T-Buffer, on the other hand, consists of N parallel frame buffers that the data can be sent into, as shown in Figure 1. Whenever a triangle is to be rasterized, the developer can specify which of the buffers is going to receive the data. This way, a single game loop can send its data to different frame buffers (or T-buffers, to use 3dfx's terminology). Finally, when all the data has been processed, the resulting buffers are somehow combined into a single buffer, which is the one dumped to your screen.

So now you know how a T-buffer works, but what exactly does it do? Imagine you want to render a bullet moving (quite fast) across the screen. In the old days, all you could do was use some tricks to simulate motion. With a T-buffered V4, you just render the bullet in several different positions within a single frame, one per T-buffer, and the combined result is a simulated motion blur (see Figure 1). But wait, there's more. You want anti-aliasing? Don't forget that every T-Buffer offers subpixel precision, so if you dump a triangle into the T-Buffers, jaggies will automatically disappear. Jitter the light position between buffers and you get soft shadows (penumbras); jitter the camera position and you get depth-of-field effects. And so on. If you want to see all that you can do with a T-buffer, just grab a copy of any OpenGL book and look at the accumulation buffer, as the idea is basically the same.

Figure 1. T-Buffers creating a motion-blur effect.
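
Since the closest published analogy is OpenGL's accumulation buffer, here's a minimal sketch of motion blur done that way: render the frame at several sub-frame times and average the results. The draw_scene_at() callback and the sample weighting are my own assumptions, not 3dfx's API.

```c
/* Motion blur with OpenGL's accumulation buffer, the API the T-Buffer
   most resembles: N slightly offset renderings are averaged into one
   visible frame. draw_scene_at() is a hypothetical callback that draws
   the scene at an absolute time t (in seconds). */
#include <GL/gl.h>

extern void draw_scene_at(float t);

void render_motion_blurred(float frame_time, float frame_length, int samples)
{
    int i;

    glClear(GL_ACCUM_BUFFER_BIT);

    for (i = 0; i < samples; i++) {
        /* Render one sub-frame, offset by a fraction of the frame length. */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene_at(frame_time + frame_length * i / samples);

        /* Add it into the accumulation buffer with weight 1/N. */
        glAccum(GL_ACCUM, 1.0f / samples);
    }

    /* Copy the averaged image back to the visible frame buffer. */
    glAccum(GL_RETURN, 1.0f);
}
```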

When asked, 3dfx's executives say we can expect higher color depth (32 bits, maybe?) and larger texture sizes (512x512 at least), putting the V4 in a competitive position with the NV10.

I know it may be premature, but I think 3dfx may be heading in the wrong direction with all this T-buffer stuff, and may well be trying to hype this feature to differentiate itself from competitors like Nvidia. The trend in consumer graphics chips today is the integration of geometry processing into the pipeline. On-chip transform and lighting are on the way, and on-chip culling, clipping, and vertex storage are already in sight. The accelerator market is basically following the path marked some years ago by SGI architectures: grab the specs of an SGI InfiniteReality machine, and you'll see similarities with today's consumer designs. So, the geometry step is an obvious one. 3DLabs realized that some time ago, and its Oxygen product family already takes this step in hardware. Nvidia seems headed in the same direction. By focusing on T-buffers, 3dfx is making a different move.

And let's not forget that a T-buffer's effects can be emulated in software. 32-bit rendering, on the other hand, can't, and neither can larger textures.

This T-buffer announcement took place on August 2, and at SIGGRAPH, 3dfx showed a V4 simulation in a hotel suite. It actually wasn't the final board design - in fact, what the company showed was an array of four Quantum 3D Voodoo2 accelerators in SLI mode, wired together and configured to emulate how the V4 may behave someday. Some screenshots and MPEG files have been uploaded to 3dfx's web site for you to view.

Modeling Tools: Lightwave, Mirai, and Merlin

The new version of Lightwave was demonstrated by Newtek in what was the company's biggest booth ever. The wait has been painfully long, but the new version is finally here, and I have to say it was worth it. To begin with, Lightwave 6 has changed the core rendering pipeline. From the old ray-tracing-only approach, they're moving to a hybrid ray-tracing-plus-radiosity-plus-caustics-plus-volume-rendering approach that looks quite promising. Radiosity is unbeatable when it comes to room and building illumination, and caustics are the only good way to simulate light concentration, as when a lens focuses a beam onto a surface or sunlight ripples across the bottom of a pool (see Figure 2). Get ready for game intros set in pools or large halls. To be honest, the radiosity wasn't ready yet, but some caustics demos looked quite promising.

Figure 2. Caustics, as shown in this effect, are now supported in Lightwave 6.

Rendering is just one of the major changes. The modeler (which is still independent of the layout and rendering program) now has better animation capabilities, thanks to an improved inverse/forward kinematics engine. This way, Lightwave gets even better at character modeling, one of its key features in the past.

Speaking of character modeling, a package that deserves a mention is Nichimen's Mirai, a product I really liked. While it's not really new (it was launched in March), it's under continuous improvement. Mirai follows the classic all-in-one formula, integrating a modeler, renderer, and animation toolset in one package. The modeler is based upon a subdivision paradigm, which makes it really intuitive and also extremely powerful. Imagine that you want to create a spaceship. Grab Mirai and create a low-resolution sketch of the object. Then use Mirai's subdivision features to add polygons (and, thus, detail) to the model. Do your work in low-res, and see the high-res model update. I agree that it might not be the perfect tool for creating the 3D model of a bridge, but Mirai is perfectly suited for creating game models, like those Quake enemies. Mirai is probably the most intuitive tool I've seen for creating game models. The bad news is, as you may have imagined, the price: Mirai is $6,495.

Another package I loved was Digital Immersion's Merlin VR. This company has a great concept and a promising product. Imagine you need to create a 3D model of a Coke can. Do you open up MAX or Maya and navigate their complex interfaces until you get the thing right? Or do you use Merlin, a real-time, lightweight modeler so intuitive that the manual comes written in the CD leaflet? C'mon, you're not always going to create Episode I-quality stuff. Many times, you just need a modeler to get simple work out of the way. In these situations, Merlin is the way to go. With a great Metacreations-meets-Truespace interface, the program sports OpenGL previews, shadows, lights, and textures. Need to work with files? Merlin imports .3DS, .DXF, .WRL, .OBJ and .X formats, and exports to .3DS, .VRML, .BMP and .AVI streams. Also included is the usual set of tools like taper, twist, and bend. Want boolean operations? Got 'em. Want to animate the lights and camera? You can. O.K., maybe an IK engine would have been a nice touch, but hey, priced at $129, Merlin rocks.

Trends, Hype, and Things To Come

Peeking into my crystal ball, the general trend (which showed up especially in the courses and panels) is that rendering is finally giving some CPU cycles back. Getting nice visuals is still a high priority in game design, but Voodoo3 and TNT2 can already handle over 5 million triangles per second. As soon as geometry processing is transferred to the board (maybe around this Christmas, when the next Nvidia chipset and DirectX 7.0 reach the market), throughput will reach the 10-million-triangle mark. By then, the CPU is going to have lots of free cycles. Well, maybe not lots, but more than it has today, that's for sure.

That's the reason why this year's SIGGRAPH had courses on physical modeling, agents, and so on: these are the tasks the CPU is going to take care of as soon as the graphics subsystem gets out of the way. Sure, Carmack, Sweeney, and friends will still get magazine covers, because the new boards will still require clever programming. But the AI guys are going to get some more cycles to play with, and a physics programmer will finally get a serious contract. Having attended several SIGGRAPH shows, I can tell you that physics and behavior modeling used to be referred to as "realistic, time-consuming simulations." Now they are "real-time simulations." With today's CPUs and graphics boards, maybe it's time to have enemies with more than a pea brain, or realistic environments where things behave properly instead of being crude physical approximations.

Want real physics? Then take a look at Next Limit's RealFlow, a physics modeler that creates true-to-life liquids and gases. Model the system with RealFlow, and render it with your favorite package: Maya, Lightwave, MAX, and so on. A shot showing the level of realism you can expect appears in Figure 3.

Figure 3. RealFlow creates realistic liquids and gases.

If you're into real-time stuff, check out the MathEngine SDK, a C-callable API with functions to model springs, masses, forces, and inertia (among other physical behaviors). The code seems quite well organized and it's licensable.
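
I can't reproduce MathEngine's actual API here, but the kind of thing such an SDK wraps looks roughly like this: a damped spring acting on a mass, integrated one small time step at a time. The structure and constants below are purely illustrative.

```c
/* A minimal spring/mass/damper simulation of the sort a physics SDK
   wraps -- not MathEngine's real API. Explicit Euler integration of
   F = -k*x - c*v, printed for a few frames at 60 Hz. */
#include <stdio.h>

typedef struct { float pos, vel, mass; } Body;

void step_spring(Body *b, float anchor, float k, float damping, float dt)
{
    float x = b->pos - anchor;               /* displacement from rest */
    float force = -k * x - damping * b->vel;

    b->vel += (force / b->mass) * dt;        /* a = F/m, integrate twice */
    b->pos += b->vel * dt;
}

int main(void)
{
    Body ball = { 1.0f, 0.0f, 2.0f };        /* 2 kg mass, displaced 1 m */
    int i;

    for (i = 0; i < 10; i++) {
        step_spring(&ball, 0.0f, 40.0f, 0.5f, 1.0f / 60.0f);
        printf("t = %.3f s   pos = %.3f m\n", (i + 1) / 60.0f, ball.pos);
    }
    return 0;
}
```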

If Sony's PlayStation 2 evangelists are right (and I bet they are), emotion synthesis will also be an emerging trend. This means facial expression, lip-synchronization, and other methods for conveying feeling will become mainstream technologies. A nice application I saw at SIGGRAPH was TalkSync, a 3D Studio MAX plug-in for lip synching. It allows you to type phonemes, and get a realistic jaw, tongue and lip position for your character.

Another cool demo was shown at the "Smarter agents" course. It was an avatar called Jennifer James. The character seemed to use a branching-tree conversation scheme, but was really well built (see Figure 4).

Figure 4. The name's James. Jennifer James. Smart agent.

Another emerging technology is image-based modeling (this is tricky to explain, so pay attention). We are used to firing up a modeler, creating a realistic 3D model of something (let's say, a church), and rendering it on-screen. This is complex, and by today's standards may require tens of thousands of triangles. That's the way Quake, Unreal, and most other games work.

So here comes image-based modeling and rendering, changing this basic premise. Let's imagine you want a model of the same object (again, a church). Instead of grabbing a modeler and some artists, grab a video camera and go take some stills or a movie of the place. Then use a computer vision algorithm to extract 3D data from those images. Instead of modeling classically, model from stills or video sequences. The difference? Lower cost to get the same quality. Image-generated models can be as accurate and detailed as the ones generated by hand.

You want image-based rendering? Imagine a video game sequence consisting of a 10-second AVI of a guy playing Quake. Due to temporal coherence, there is little change from one frame to the next, and sometimes no change at all. So, what's the point of re-rendering the entire scene?

Image-based rendering tries to detect areas with little or no change, and re-render them with 2D warping algorithms. For example, if a group of texture-mapped triangles has changed very little, you can redraw it in 2D as if it were a sprite, maybe warping (transforming in a non-Euclidean way) the image so that it seems as though the group really moved. Image-based techniques are not new. In fact, at SIGGRAPH 96 I saw some demos of surface reconstruction using three or six still images, and a paper by Microsoft researcher Jim Kajiya (one of the brightest CG scientists) applied the same principle to the now-defunct Talisman. So, if they aren't new, what's the deal? The deal is that the techniques are becoming stable, well-known, and well-explored, so they are ready to enter mainstream applications.

Take, for example, The Matrix. Remember the sequence where Keanu Reeves gets shot at and dodges the bullets by moving lightning fast? Obviously the scene wasn't filmed on a real location. It was done with Keanu Reeves in front of a green screen, surrounded by 120 ultrafast cameras whose shots were later composited to get that cool rotation effect. What about the buildings in the background? The surrounding scene needs to rotate to match the action. Well, all of that was image-based modeled and rendered. The first part of the sequence (when Keanu meets the bad guy on the roof of the building) was real, done with classic movie techniques. The special effect was a composite, but it still preserved the exact look of the real location, as shown in Figure 5.

Figure 5. Image-based rendering techniques were used in the movie The Matrix.

Image-based modeling and rendering in games may provide a way to improve frame-rate by avoiding the re-rendering of low-change areas. Also, it may ease the task of the modelers, as today's graphics chips require highly detailed models which are expensive and slow to build. Finally, it may provide a way to achieve some cool special effects as the techniques are applied by skilled game developers.
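
To make the frame-rate idea concrete, here's a rough OpenGL sketch of the impostor trick described above: cache a rendered group of triangles as a texture, then on low-change frames redraw it as a warped 2D quad instead of re-rendering the geometry. The capture size and the corner-warping step are illustrative assumptions.

```c
/* Impostor-style image-based rendering: capture a rendered region as a
   texture once, then replay it as a warped quad while it stays valid.
   The 256x256 capture size and the warped corner positions are
   illustrative assumptions. */
#include <GL/gl.h>

static GLuint cached_tex = 0;

/* Capture the lower-left 256x256 pixels of the current frame buffer. */
void cache_rendered_group(void)
{
    if (cached_tex == 0)
        glGenTextures(1, &cached_tex);
    glBindTexture(GL_TEXTURE_2D, cached_tex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

/* Redraw the cached image as a quad whose corners have been nudged to
   approximate the small camera motion since the capture. */
void draw_warped_impostor(const float corners[4][2])
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, cached_tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2fv(corners[0]);
        glTexCoord2f(1.0f, 0.0f); glVertex2fv(corners[1]);
        glTexCoord2f(1.0f, 1.0f); glVertex2fv(corners[2]);
        glTexCoord2f(0.0f, 1.0f); glVertex2fv(corners[3]);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```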

This year's SIGGRAPH had the usual Electronic Theater, where the best animation shorts were shown. Still, the big event this year was the world premiere of The Story of Computer Graphics, a movie that reviews 30 years of the industry. The movie was excellent, and portrayed key people and events very comprehensively. John Romero represented the game community. It was really great to see some people that I'd heard of -- but never seen before -- talking about their inventions: Sutherland, Bézier -- the list could go on forever. If you're interested (even remotely) in CG, stop by SIGGRAPH's web site and order a copy of the movie. You'll enjoy it.
