
GDC 2005 Proceeding: Interactive 3D Lighting in Sprite Rendering

This proceeding covers several techniques for creating sprites that are interactively lit by 3D lights, adding a greater sense of depth and integration to the finished image. The unique benefits and drawbacks of each method are also covered.

Eyal Erez, Blogger

March 8, 2005

12 Min Read

This paper focuses on diffuse lighting of sprites in game engines. Sprites are flat polygons that typically carry a texture with an alpha channel and are oriented toward the viewer. The main benefit of using sprites is the ability to quickly render many complex objects as images projected onto lightweight geometry: a single plane.

A known limitation of sprites is that they cannot be lit realistically.

Limitations include the following:

1. Each sprite is a flat polygon, which results in uniform illumination across the polygon.
2. Sprites always face the viewing direction, which makes their illumination dependent on the camera's motion.

Lacking traditional 3D lighting functionality, control over a sprite's surface color is achieved through each particle's color attribute, which modulates the sprite's texture. The particle's color attribute can be set through ramps, textures, fractals, or any other pattern generator.
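As a minimal illustration of this modulation (the particle fields and ramp function below are hypothetical, not taken from any specific engine), the per-particle color can be sampled from a ramp over the particle's normalized age and multiplied into the texture sample:

// Hypothetical sketch: set a per-particle color (rgbPP) from an age ramp and
// use it to modulate the sprite's sampled texel.
struct Color { float r, g, b; };

// Illustrative ramp: blend between two colors over normalized age t in [0, 1].
Color ageRamp(float t) {
    Color young = { 1.0f, 0.9f, 0.7f };
    Color old   = { 0.4f, 0.4f, 0.5f };
    return { young.r + (old.r - young.r) * t,
             young.g + (old.g - young.g) * t,
             young.b + (old.b - young.b) * t };
}

// Modulate the sampled texel by the particle's color attribute.
Color modulateSprite(Color texel, float age, float lifespan) {
    Color rgbPP = ageRamp(age / lifespan);
    return { texel.r * rgbPP.r, texel.g * rgbPP.g, texel.b * rgbPP.b };
}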

The goal of this paper is to explore new methods for simulating illumination and shading when rendering sprites in real-time games.

Introduction

Since the early days of computer games, sprites have been used to supply graphic content to the image. Each sprite represents an individual component (a spaceship, missile, bullet, man, or cloud), and the software controls its transforms on screen based on the game action. Sprites are still used extensively to this day.

Many years after sprites were first used in games, the VFX industry started using 2D sprites in 3D rendering by way of 3D planes that always aim toward the camera. One of the main uses for polygonal sprites is in particle systems. Volumetric effects like smoke, water, sand, dust, fog, mist, and even fire are easy to achieve using overlapping sprites. Instead of using millions of tiny points to describe a volume, a modest number of sprites can describe a much larger volume while maintaining the freedom to dynamically control its behavior. The techniques described in this paper focus mainly on effects lit by diffuse lighting, as opposed to self-illuminated ones such as fire, or highly reflective ones such as water. However, the main concept of this paper can be used to illuminate sprites with shading models other than diffuse.

Related Work

Sprites have a long history in computer graphics.

Most game engines render constant-shaded sprites that do not respond to conventional lighting, instead deriving their color from ramps or other pattern generators. Recent research in cloud rendering includes rendering sprites as a function of light [4], which uses a directional light and each particle's distance from the center of the cloud to derive a normal direction for the sprite. Other methods include multiple forward scattering [1] or the use of voxels with light transmission and absorption models [3]. These methods are the closest related work to ours, since they use light properties to determine the shading of the sprites being rendered.

Our method is fairly different and offers a solution for lighting in 3-space, with several techniques for assigning normal information to sprites. It is not limited to a specific effect and offers a general methodology for sprite rendering. It is based on the Lambertian rule [2] for diffuse lighting and a normal attribute stored on each particle.

Sprite Cosmetics

1. Texture selection. Selecting a texture depends entirely on the effect being created. It is usually better to use several textures for variation, and important to use several distinct shapes describing different scales and densities. Choosing texture resolution is extremely important: optimum results are achieved by choosing a resolution that roughly corresponds to the number of pixels the sprite covers on screen. If the apparent sprite is about 60x60 pixels, a 64x64 texture will give the best-looking results; a 32x32 texture will stretch on screen, and a larger texture will result in extra filtering or artifacts. Since a sprite's apparent size in the viewport often changes with camera motion, mipmaps are good practice for dynamically switching texture resolution based on on-screen size.

2. Alpha treatment and edges. Since both the color and opacity maps of the sprites are dynamically modulated by the per-particle color attribute (rgbPP) and opacity attribute (opacityPP), artifacts may appear on the sprite's edges. These artifacts appear when the alpha value is greater than the color value. A good rule of thumb to avoid them is to set the luminance of the opacity map based on the color map. The opacity map can be gained down if needed; gaining it up will usually result in a dark edge. By deriving the alpha from the color's luminance, dark areas appear only where the particle color attribute is dark, as opposed to an edge getting darker because of a high opacity value.

3. Pre-multiplication. Sprite rendering is done by drawing the polygons from back to front and blending them with an "over" operation. Prior to the drawing stage, the color map is multiplied by the particle color attribute (rgbPP). It is imperative that the color map on the sprite be unpremultiplied by its alpha to prevent artifacts; these artifacts result from modulating a color that has already been premultiplied by its alpha. Unpremultiplying the color map is done by dividing the color by its alpha, a common trick when compositing CG elements in visual effects work. Any color correction has to be done on an unpremultiplied image and then multiplied by its alpha again (see the sketch after this list).

4. Density, size and opacity. Sprites are used to represent volumes, not individual particles, so they should usually overlap each other quite extensively in order to produce a continuous volume. Since each sprite carries only one normal, it receives only one shading value; with large overlapping areas, the sprites blend together to generate a continuously shaded volume.
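To make item 3 concrete, here is a minimal sketch of the unpremultiply, tint, re-premultiply sequence; the RGBA struct and function name are illustrative, not part of any particular engine or compositing package:

// Unpremultiply the texel, apply the per-particle tint (rgbPP) or any color
// correction, then premultiply again so the back-to-front "over" blend is correct.
struct RGBA { float r, g, b, a; };

RGBA tintUnpremultiplied(RGBA texel, float tintR, float tintG, float tintB) {
    if (texel.a > 0.0f) {          // unpremultiply: divide color by alpha
        texel.r /= texel.a;
        texel.g /= texel.a;
        texel.b /= texel.a;
    }
    texel.r *= tintR;              // color correction happens on unpremultiplied values
    texel.g *= tintG;
    texel.b *= tintB;
    texel.r *= texel.a;            // premultiply again before blending
    texel.g *= texel.a;
    texel.b *= texel.a;
    return texel;
}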

Particle-Based Lighting



Figure 1: normalPP - normals

Figure 2: normals - diffuse

Today's game engines and GPUs offer complex particle-system simulations with many particles and sprites. However, only now, with programmable shaders, can we take advantage of lighting techniques. The magic of good-looking sprites is not in the number of particles used; it's in the light and shading.

Problem: sprites do not accept lights.

The principal limitation of sprite rendering is the sprites' dependence on the viewing direction, which is constantly changing in most games based on the player's actions.

The diffuse lighting model uses a light direction and a surface normal to shade the surface. A sprite's surface normal depends on the viewing direction, so its illumination changes as the observation point changes. To solve this limitation, a constant normal value is needed: a per-particle vector attribute that holds the particle's direction (normalPP). This attribute is used at render time to calculate the shading instead of the sprite's geometric surface normal.
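The shading itself is the standard Lambertian term [2]. A minimal CPU-side sketch, assuming normalPP and the direction toward the light are already unit vectors (the names here are illustrative):

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambertian diffuse using the per-particle normalPP instead of the sprite's
// view-facing geometric normal; back-facing contributions are clamped to zero.
float diffuse(Vec3 normalPP, Vec3 toLight) {
    float nDotL = dot(normalPP, toLight);
    return nDotL > 0.0f ? nDotL : 0.0f;
}

The returned term then scales the light color and the particle's color attribute before the texture is applied.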

Setting the normalPP attribute depends on the shape of the overall volume generated by the particles.



Figure 3: Velocity normalPP - normals

Figure 4: Velocity normalPP - diffuse sprites

1. Emitting from surfaces. Consider emitting particles from a sphere's surface. The normalPP attribute on each particle should be set to the sphere's surface normal at the point where the particle was emitted. The diffuse shading model can then use the normalPP attribute instead of the sprite's surface normal to calculate the diffuse shading.

2. Velocity-based normalPP. In some engines, acquiring the surface normal for each particle is not possible or is computationally expensive. A good alternative for setting normalPP is to normalize each particle's velocity at birth and copy the result to the normalPP attribute (see the sketch below). However, the birth velocity has to share the same direction as the surface normal at the point of birth.

Both methods work well for curved surfaces (clouds, terrain, etc.). However, the particles only maintain their integrity as long as they don't move around too much and still correspond to their surface normal.
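A sketch of both assignments follows; the particle structure and helper functions are assumptions for illustration, not engine API:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Particle { Vec3 position, velocity, normalPP; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{ v.x / len, v.y / len, v.z / len } : Vec3{ 0.0f, 1.0f, 0.0f };
}

// Method 1: emitted from a sphere, the surface normal at the emission point is
// simply the direction from the sphere's center to the particle's birth position.
void setNormalFromSphere(Particle& p, Vec3 sphereCenter) {
    p.normalPP = normalize({ p.position.x - sphereCenter.x,
                             p.position.y - sphereCenter.y,
                             p.position.z - sphereCenter.z });
}

// Method 2: when the surface normal is unavailable, reuse the birth velocity,
// assuming it points along the surface normal at the point of birth.
void setNormalFromVelocity(Particle& p) {
    p.normalPP = normalize(p.velocity);
}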

Spherical Quads

Another limitation is that each sprite is evenly lit across the polygon, since the normalPP value is constant for all of its vertices.

A different method for lighting sprites is to modify the normals of the sprite (plane) to point outward, away from the center of the plane. Since surface shading is linearly interpolated between the vertices, pointing the normals outward shades the plane as if it were a sphere, while still using only one quad polygon. With this method sprites are lit just like any other surface in the scene; you can think of it as using spherical particles that respond to lights. One big limitation is that sprite orientation changes dynamically with camera position, resulting in varying shading values that look odd when the camera moves. The method works great for effects that are far from the camera, like clouds or other effects located in areas we can't reach: when sprites are far away, camera motion doesn't change their orientation much. This is the easiest method to implement and results in lit sprites with gradient shading values across the polygon.
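A sketch of how the four vertex normals might be built, assuming a camera-facing quad with a known center and face normal (the tilt factor and function names are arbitrary illustrative choices):

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Tilt the face normal toward each corner so the flat quad shades like a sphere.
void sphericalQuadNormals(const Vec3 corners[4], Vec3 center, Vec3 faceNormal,
                          float tilt, Vec3 outNormals[4]) {
    for (int i = 0; i < 4; ++i) {
        Vec3 toCorner = normalize({ corners[i].x - center.x,
                                    corners[i].y - center.y,
                                    corners[i].z - center.z });
        outNormals[i] = normalize({ faceNormal.x + tilt * toCorner.x,
                                    faceNormal.y + tilt * toCorner.y,
                                    faceNormal.z + tilt * toCorner.z });
    }
}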

Vertex Shading Model



Figure 5: Spherical quads - shaded

Figure 6: Spherical quads - wireframe

The vertex shading method for lighting sprites combines the normalPP attribute with the spherical quads method. It is suitable for most effects since it has the advantages of both methods without their limitations: it benefits from gradient shading values across the polygon, yet is not affected by camera position.

This method uses a vertex shader program that reads the normalPP attribute and the polygon's vertex normals, which point away from the center just like the spherical quads. The vertex program then uses these values to shade the polygon: the normalPP attribute controls the main orientation of the sprite, and the vertex normals shade the flat polygon as if it were a sphere. We achieve this by overwriting each vertex normal with the unit vector of the normalPP attribute added to a fraction of the vertex normal:

newVtxNormal = normalize(normalPP + 0.3 * vtxNormal);

The result is a set of vertex normals that always point in the direction of normalPP, with variation at each vertex because the vertex normals still point away from the center while maintaining the normalPP orientation. The vertex normals keep their world-space orientation, generating gradient shading across the polygon based on the lights. Once diffuse shading is computed, we multiply the result into the color map, and the rendering process blends the sprites using their alpha values.
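Putting the pieces together, here is a minimal CPU-side sketch of what the vertex program computes at each vertex; in practice this runs in a vertex shader, and the 0.3 factor and names are illustrative only:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Bias the outward vertex normal toward normalPP, apply the Lambertian term,
// and output a vertex color that the rasterizer interpolates across the quad
// before it multiplies the sprite's color map.
Vec3 shadeVertex(Vec3 normalPP, Vec3 vtxNormal, Vec3 toLight,
                 Vec3 lightColor, Vec3 rgbPP) {
    Vec3 n = normalize({ normalPP.x + 0.3f * vtxNormal.x,
                         normalPP.y + 0.3f * vtxNormal.y,
                         normalPP.z + 0.3f * vtxNormal.z });
    float d = std::max(0.0f, dot(n, toLight));
    return { rgbPP.x * lightColor.x * d,
             rgbPP.y * lightColor.y * d,
             rgbPP.z * lightColor.z * d };
}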

 


Figure 7: normal - diffuse

Figure 8: Sprites - texture (no shading)

Figure 9: Sprites - textured, lit and shaded

References

[1] Harris, M. 2002. "Real-Time Cloud Rendering for Games." Proceedings of Game Developers Conference 2002, March 2002.
[2] Lambert, J. H. 1728-1777. Lambert's Law (Diffuse Illumination).
[3] Nishita, T., Dobashi, Y., and Nakamae, E. 1996. "Display of Clouds Taking into Account Multiple Anisotropic Scattering and Sky Light." Proceedings of SIGGRAPH 1996, pp. 379-386.
[4] Wang, N. 2004. "Realistic and Fast Cloud Rendering." Journal of Graphics Tools, 2004.

______________________________________________________


About the Author(s)

Eyal Erez

Blogger

Eyal is a shader/effects developer at Naughty Dog. In his 10 years of experience he has been involved in various productions, from game development to commercials, music videos, TV series, and feature films, working in studios like Tippett and Sony Imageworks. His main focus is particle systems and procedural animation/shading. Eyal's credit list includes major visual effects films like The One, The Matrix Revolutions, Hellboy, Spider-Man 2, and The Aviator.
