
Practical techniques for ray tracing in games

This article focuses on techniques you can use to integrate ray tracing into your game engine and obtain significant benefits in visual quality and, in some cases, performance.

Alexandru Voica, Blogger

March 18, 2014


This article focuses on techniques you can use to integrate ray tracing into your game engine. The techniques described below are incremental improvements over modern game engine architectures and yield a significant benefit in visual quality and, in some cases, performance.

We will be demonstrating these techniques at GDC 2014 in a version of the Unity Engine that has been extended to use PowerVR Ray Tracing.

What is ray tracing?

Ray tracing is probably one of the most widely discussed topics in graphics. On a fundamental level, ray tracing is the ability for the shading of one object to take into account the geometry in the scene.

When you add ray tracing to your game engine, you add the ability for every surface you shade to generate a color based not only on the properties of that surface, but also on the properties of the entire scene.

Debunking the myths

Most people think ray tracing is only useful for photorealistic, physically accurate rendering. While ray tracing was pioneered by high-quality production renderers, there is no reason why it cannot be used in real-time renderers, especially now that there is efficient hardware support for accelerated ray tracing in the PowerVR Wizard GPU family.

Secondly, there is a myth that ray tracing and traditional rasterized techniques are incompatible; you often hear ray tracing and rasterized graphics presented as a dichotomy – you can only have one or the other. This however is not the case; ray tracing is a tool that can be easily integrated in existing real-time, rasterized game engines and can coexist with traditional rendering.

Finally, some people assume that ray tracing is slower and less efficient than rasterization. This view is flawed: state-of-the-art rasterization techniques have to work even harder to achieve the same effects as ray tracing.

Figure 1: A scene produced using hybrid rendering (accurate transparency, reflections and shadows)

The main use for ray tracing in rendering is to accurately simulate the transport of light. In 3D graphics, rendering a surface means working out how much of the light in a scene reaches our eye from that surface. Even a basic understanding of the physics of light makes you realize that the properties of that surface alone are not enough for a correct and realistic renderer; you need to know the properties of the entire scene.

This is why ray tracing is so powerful. Its innate understanding of light means that some effects are more efficiently computed with ray tracing than with state-of-the-art raster techniques.

The examples below highlight some of the most common use cases for ray tracing. While the primary application of ray tracing is rendering, there is nothing that prevents it from being used for other purposes.

Figure 2: Some of the applications for ray tracing

There are many applications that benefit from knowing the entire layout of the scene and can use ray tracing independently of rendering; physics and collision detection, or visibility detection in AI, are two perfect examples.

How do you add ray tracing to your game?

There are several approaches you can take to implement ray tracing in a game engine. The integration will depend on the ray tracing capability of your hardware; the faster your hardware can trace rays, the more dynamic your use of ray tracing can become.

The most common use case today is to support ray traced, pre-baked lighting in the form of lightmaps or light probes.

Figure 3: There are many options for adding ray tracing to your game

Unity 5 is the first development platform to ship with in-editor real-time lightmap previews based on Imagination’s ground-breaking PowerVR Ray Tracing technology. This addition is the initial step in bringing real-time, interactive ray tracing to mobile games and allows for near instantaneous feedback for changes to global illumination lightmaps by displaying an accurate preview in the editor’s scene view of how lighting will look in the final game.

Figure 4: The Unity 5 lightmap editor uses PowerVR Ray Tracing technology to improve pre-baked lighting

With this technology, artists can continue to iterate and refine the look of a level while final lightmaps update and bake in the background, dramatically decreasing the amount of time needed to make artistic adjustments to scenes.

At the other end of the spectrum, you can render everything using ray tracing. You can replace all of your rasterization code with ray tracing code so that every pixel is drawn by emitting rays into the scene. The color of each pixel is then calculated based on the objects in the scene and the rays that bounce off of them.

This produces very high-quality results but requires a very high ray budget (about 20 rays per pixel, at an absolute minimum). Therefore, fully ray traced rendering at Full HD resolutions above 30 fps is beyond the capabilities of current-generation hardware.
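As a rough sanity check of that claim, assuming a 1920 × 1080 framebuffer, 20 rays per pixel and a 30 fps target:

1920 × 1080 × 20 × 30 ≈ 1.24 billion rays per second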

Hybrid rendering is a middle ground

The approach that we are introducing in this article is hybrid rendering. This novel technique keeps the rasterized portion of a game engine intact and simply enhances it with ray tracing elements wherever it is most appropriate.

Hybrid rendering means that your rasterized game engine produces most of the elements of the frame, while the ray tracer supplies certain elements or effects that the traditional game engine can then use.

First, the ray tracer needs to get all of the geometry that rays can intersect with. The way this works is by submitting the geometry to the ray tracer through a geometry pipeline similar to the one in OpenGL and OpenGL ES. The ray tracer then takes this geometry and builds a 3D database of the scene. You only need to resubmit objects whose vertices have changed between consecutive frames. This is very different from rasterization, where you submit every part of the scene to the rasterizer on a frame-by-frame basis, even if there are no changes between successive frames.
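As an illustration of this difference, here is a minimal C++ sketch of per-frame scene submission. The RayTracerScene interface below is a hypothetical stand-in, not the PowerVR API; it only shows that unchanged geometry can persist in the 3D database between frames.

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float position[3]; float normal[3]; };
struct MeshHandle { std::uint32_t id; };

// Hypothetical scene-database interface; a real ray tracing API will differ.
class RayTracerScene {
public:
    MeshHandle addMesh(const std::vector<Vertex>& vertices) {
        (void)vertices;               // stub: copy vertices into the 3D database
        return MeshHandle{nextId++};
    }
    void updateMesh(MeshHandle mesh, const std::vector<Vertex>& vertices) {
        (void)mesh; (void)vertices;   // stub: only called when the vertices changed
    }
    void build() {}                   // stub: rebuild/refit the acceleration structure
private:
    std::uint32_t nextId = 0;
};

struct GameObject {
    MeshHandle handle;
    std::vector<Vertex> vertices;
    bool dirty = false;               // set by animation/physics when vertices move
};

// Unlike a rasterizer, which is fed the whole scene every frame, only objects
// whose vertices changed since the last frame are resubmitted to the ray tracer.
void submitScene(RayTracerScene& scene, std::vector<GameObject>& objects)
{
    for (GameObject& object : objects) {
        if (object.dirty) {
            scene.updateMesh(object.handle, object.vertices);
            object.dirty = false;
        }
    }
    scene.build();
}
```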

Figure 5: Inputs to the ray tracer (world space scene intersection)

The ray tracer also requires a way to define the appearance of that geometry. In ray tracing, this is done by specifying what happens every time a ray collides with a triangle in your scene. A convenient way to describe this behavior is with a shader. Just as a rasterizer converts a triangle into screen space to produce fragments and runs a fragment shader to decide what color is accumulated into the framebuffer for each fragment, a ray tracer executes a ray shader every time a ray hits a triangle.

The final part of the process involves emitting some primary rays. For every pixel of your framebuffer, you need to define what primary rays are going to be emitted for that pixel; this is done using a frame shader.

If you’ve worked with a ray tracer before, you’ve probably rendered every pixel in a frame using the definition of a camera. The advantage of having a programmable shader for this task is that there are no restrictions on how you emit your primary rays.
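To make the idea of a frame shader concrete, here is a minimal CPU-side sketch that emits one primary ray per pixel from a simple pinhole camera; the Vec3/Ray types and camera model are illustrative assumptions, not a specific shader API.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin; Vec3 direction; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Emits one primary ray for pixel (px, py) from a pinhole camera sitting at
// the origin and looking down -Z, with the given vertical field of view.
Ray primaryRay(int px, int py, int width, int height, float fovYRadians)
{
    float aspect = float(width) / float(height);
    float halfH  = std::tan(0.5f * fovYRadians);
    float halfW  = halfH * aspect;

    // Map the pixel centre to [-1, 1] in normalized device coordinates.
    float ndcX = (px + 0.5f) / width  * 2.0f - 1.0f;
    float ndcY = 1.0f - (py + 0.5f) / height * 2.0f;

    Vec3 direction = normalize({ndcX * halfW, ndcY * halfH, -1.0f});
    return Ray{{0.0f, 0.0f, 0.0f}, direction};
}
```

Because the frame shader is programmable, nothing forces this camera model: you could just as easily emit rays from positions stored in a G-buffer, which is exactly what the hybrid pipeline described below does.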

An overview of current generation game engines

To understand how ray tracing can be implemented in your game engine, let’s have a look at the rendering pipelines used by many modern game engines.

In a graphics textbook, every triangle is converted to screen space and rasterized. A shader is then run for every fragment of that triangle, computing the lighting calculations, and a single color is emitted from the shader based on the properties of that fragment.

However, most modern game engines have developed a more sophisticated method called deferred lighting and shading (this software principle should not be confused with the deferred rendering part of TBDR, which is a GPU hardware architecture). In deferred shading, you submit your geometry to the rasterizer and it gets converted into screen space; instead of doing any lighting in the triangle’s fragment shader, you simply write out all the properties of each fragment into a set of multiple framebuffers which make up the G-buffer. The lighting is then computed in a subsequent pass.

Figure 6: The G-Buffer has everything you need to ray trace

The result of the forward rendering pass is a set of buffers containing world-space positions (or depth values), normals, colors and material IDs. A second pass then takes this G-buffer as an input and computes the lighting in world space based on its contents.

Figure 7: Deferred shading used in most modern games engines that use rasterization

The key point here is that the elements in the G-buffer can be reused as the inputs for the ray tracer. For every pixel of the G-buffer, you have the properties of the surface that was visible to the camera. You thus have a normal, position, color and material ID for every visible surface; you can pass these values to the ray tracer, which generates primary rays based on them. Rather than emitting rays from a camera, you emit rays from the surface that is defined by the G-buffer.
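As a minimal sketch of that idea, assuming a hypothetical G-buffer layout and simple vector types (how the reflection direction itself is computed is covered in the Reflections section below):

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin; Vec3 direction; };

// One pixel of the G-buffer written by the rasterization pass.
struct GBufferSample {
    Vec3 worldPosition;          // or reconstructed from a stored depth value
    Vec3 worldNormal;
    Vec3 albedo;
    std::uint32_t materialId;
};

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Sets up the rays for one pixel: a shadow ray toward the light is always
// emitted, and reflective materials additionally get a reflection ray.
// The rays start on the visible surface, not at the camera.
std::vector<Ray> setupRays(const GBufferSample& g, Vec3 lightPosition,
                           Vec3 reflectionDirection, bool materialIsReflective)
{
    std::vector<Ray> rays;
    rays.push_back({g.worldPosition, sub(lightPosition, g.worldPosition)});
    if (materialIsReflective)
        rays.push_back({g.worldPosition, reflectionDirection});
    return rays;
}
```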

The diagram below shows what the pipeline will now look like:

Figure 8: In hybrid rendering you use the G-buffer to set up your rays

The forward rendering pass remains exactly the same; you rasterize the geometry and generate a G-buffer. You then pass the G-buffer to the ray tracer which emits rays into the scene from the surface defined by the G-buffer. Those rays are then shaded by the ray tracer in a way which takes into account the whole geometry in the scene, as described earlier.

Once ray tracing is integrated into the game engine’s pipeline, you can use it to implement a number of techniques like accurate real-time shadows, reflections, transparency and other complicated effects.

Figure 9: This is how your existing, rasterized-only game engine used to look

Figure 10: Meet your new game engine with added ray tracing: improved shadows, better transparency and reflections

Shadows

The first technique that can take advantage of ray tracing in a game engine is rendering shadows.

Shadows are an extremely important part of accurate 3D rendering. Until now, shadows have typically been generated using shadow maps. For each shadow-casting light, you render the scene from the point of view of that light and produce a shadow map.

Then, when you render the scene from the point of view of the camera, you take the shadow maps into account for every pixel in the frame, comparing depth values to work out whether that pixel is inside a shadow or not. This technique, however, has many problems.

The first is a resolution issue; for example, a very small object relative to the light could end up shadowing a huge region of the scene. In this case, you end up with pixelation artefacts in your shadows.

Figure 11: Shadow maps can produce visible artifacts (here visible on the roof of the car)

There are techniques that try to alleviate this problem (e.g. cascaded shadow maps), but they require a huge amount of geometry throughput to render the scene at multiple different resolutions.

The additional complication brought upon by having multiple lights, each with its own cascaded shadow maps and each potentially rendering the entire scene, means the geometry processing rapidly becomes a performance bottleneck.

The last disadvantage of shadow maps is their inability to render life-like soft shadows using a simple 2D filter. When the shadow map is filtered to generate a soft edge, the blurring is applied across the whole shadow buffer and is not aware of the distance between the object casting the shadow and the object receiving it. This leads to shadows that look fake or inaccurate.

Because ray tracing has an inherent understanding of the geometry of the scene, it is much easier to implement accurate shadowing using ray tracing.

For every point on your visible surface, you emit one ray directly toward the light. If the ray reaches the light, then that surface is lit and you use a traditional lighting routine. If the ray hits anything before it reaches the light, then that ray is discarded, which means that surface is shadowed.
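A minimal sketch of that test, assuming the ray tracer exposes some form of "does anything lie between these two points" occlusion query (the callback here is a placeholder for it):

```cpp
struct Vec3 { float x, y, z; };

// Returns true if the surface point is lit: the shadow ray travels from the
// point directly toward the light, and if it hits any geometry first, the
// ray is discarded and the point is treated as shadowed.
bool isLit(Vec3 surfacePoint, Vec3 lightPosition,
           bool (*anyHitBetween)(Vec3 from, Vec3 to))
{
    return !anyHitBetween(surfacePoint, lightPosition);
}
```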

Figure 12: Ray traced hard shadows

Another major advantage of ray traced shadows over shadow maps is the level of detail, which is not dependent on or limited by the resolution of a shadow map.

For example, in the screenshot below, when you zoom in on the pavement, you can see the tiny shadow details, thanks to ray tracing.

Figure 13: Ray traced shadows provide much higher image quality

So far, we have discussed hard shadows: when you shoot a single ray from a surface, you will produce very accurate shadows with well-defined hard edges.

In some cases, these shadows are good enough; on a perfectly clear day at noon, your shadows will have very well-defined edges.

But there are many scenes where this isn’t true. In these scenes you have soft shadows. A soft shadow has an intermediary region between a fully lit surface and a fully dark surface, cast by an area light. This region is called the penumbra. A common example of soft shadows is generated by sunlight passing through the earth’s atmosphere: on a cloudy day, light from the sun is scattered by water molecules in the air, so some of the light arrives from multiple directions. This is why, on a cloudy day, you will often see soft shadows.

Figure 14: Penumbra rendering requires multiple rays per pixel

The ray tracer can implement soft shadows very easily. You rely on the same algorithm described above but, instead of shooting a single ray from each point on your surface, you shoot multiple rays. Each ray behaves exactly as in the hard-shadow case: if it hits an object, it is discarded and no action is taken; if it reaches the light, its contribution is computed using the (N · L) lighting calculation. However, there is one extra step: you have to average the result of all your rays to produce the final color. This means that, if all the rays are occluded, you have a completely dark surface; if all the rays reach the light, the surface is fully lit.

Figure 15: Soft shadows are very difficult to render in traditional graphics

But if some rays hit and some miss, you have a partially lit pixel that is part of a shadow’s penumbra region.

When casting multiple shadow rays from the same pixel, care must be taken to choose the best ray directions. If the light source is an area light, you should distribute the rays over the cross section of the light source visible from the surface. However, if you want to approximate daylight using an infinitely distant directional light, you should choose a cone of rays from the surface; to represent a perfectly clear day, the solid angle of the cone is zero, and to represent cloudier daylight, the solid angle becomes larger and larger.

Figure 16: Choosing ray direction

For a good estimate of incoming light reaching the surface point, samples should evenly cover the domain.
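Putting the pieces above together (a cone of directions, per-ray occlusion tests and averaging), here is a minimal sketch; the cone sampling, random numbers and occlusion callback are illustrative assumptions rather than a specific API.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3  cross(Vec3 a, Vec3 b)  { return {a.y * b.z - a.z * b.y,
                                       a.z * b.x - a.x * b.z,
                                       a.x * b.y - a.y * b.x}; }
Vec3  normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Uniformly samples a direction inside a cone of half-angle 'theta' around 'axis'.
Vec3 sampleCone(Vec3 axis, float theta, float u1, float u2)
{
    float cosT = 1.0f - u1 * (1.0f - std::cos(theta));
    float sinT = std::sqrt(1.0f - cosT * cosT);
    float phi  = 6.2831853f * u2;
    Vec3 tangent   = normalize(cross(std::fabs(axis.z) < 0.999f ? Vec3{0, 0, 1}
                                                                : Vec3{1, 0, 0}, axis));
    Vec3 bitangent = cross(axis, tangent);
    return add(add(scale(tangent,   sinT * std::cos(phi)),
                   scale(bitangent, sinT * std::sin(phi))),
               scale(axis, cosT));
}

// Averages the (N . L) contribution over 'numRays' shadow rays; occluded rays
// contribute nothing, so the result runs from 0 (umbra) to fully lit, with
// penumbra pixels somewhere in between. 'occluded' stands in for the ray
// tracer's visibility query; 'lightDir' is assumed to be normalized.
float softShadow(Vec3 point, Vec3 normal, Vec3 lightDir, float coneAngle,
                 int numRays, bool (*occluded)(Vec3 origin, Vec3 direction))
{
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float lit = 0.0f;
    for (int i = 0; i < numRays; ++i) {
        Vec3 dir = sampleCone(lightDir, coneAngle, uniform(rng), uniform(rng));
        if (!occluded(point, dir))
            lit += std::max(dot(normal, dir), 0.0f);
    }
    return lit / float(numRays);
}
```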

Reflections

When light hits a perfectly specular surface, it is reflected at the incident angle. This physical law was first codified by Euclid in the 3rd century BC and has been applied ever since to many rendered scenes in computer graphics.

In the real world, every object’s appearance is more or less influenced by reflections. For surfaces that aren’t mirrors, you have to blend the diffuse component of the color (albedo) with the specular component from the reflection.

Figure 17: Reflections are more common than you think

As with shadows, there are techniques that can be used to approximate reflections in a rasterizer. One of the most common is to use a reflection map, where you render the scene from the point of view of the reflected object to produce an environment map that is textured onto the surface. This works as a reasonable approximation if the environment is infinitely distant. Unfortunately, it breaks down with reflections of nearby objects or for objects that are self-reflecting.

The drawbacks of using a reflection map in traditional rendering are similar to those of using a shadow map: each and every reflective object requires another rendering pass.

In addition, there are resolution issues with reflection maps as well. Depending on the topology of a reflective object and its position relative to the camera, small areas of the environment map can be stretched over large areas of the surface, and you will see pixelation.

To compute reflections with ray tracing, the process is less complicated and generates much better results. All you have to do is emit one extra ray from reflective surfaces; the direction of the reflection ray is computed from the incoming ray direction using the law of reflection. You reflect the incoming ray about the surface normal to produce the reflection ray.

Figure 18: Ray traced reflections are perfectly accurate

Since a hybrid renderer has no actual incoming ray for directly visible surfaces, you calculate a virtual incoming ray direction based on the camera frustum. The reflection ray is then emitted into the scene and hits an object in the 3D database; you then shade that object using the same illumination calculation used for the directly visible surfaces. The result is further passed through the shadow calculation, a diffuse lighting equation, the albedo color and (potentially) secondary reflections for self-reflective surfaces.
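A minimal sketch of that setup, assuming simple vector helpers: the virtual incoming direction runs from the camera to the surface position stored in the G-buffer, and the reflection direction follows the law of reflection, r = d − 2(d·n)n.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3  normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Law of reflection: mirror the incoming direction about the (unit) surface normal.
Vec3 reflect(Vec3 incoming, Vec3 normal)
{
    return sub(incoming, scale(normal, 2.0f * dot(incoming, normal)));
}

// Builds the reflection ray for one G-buffer pixel.
void reflectionRay(Vec3 cameraPosition, Vec3 surfacePosition, Vec3 surfaceNormal,
                   Vec3& rayOrigin, Vec3& rayDirection)
{
    Vec3 virtualIncoming = normalize(sub(surfacePosition, cameraPosition));
    rayOrigin    = surfacePosition;
    rayDirection = reflect(virtualIncoming, surfaceNormal);
}
```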

Alpha blending

The last application of ray tracing is transparency. Rasterized graphics use a crude approximation of transparency called alpha blending. In order to render transparency using alpha blending, you sort transparent objects starting with the furthest from the camera; you then render them in that order.

When the objects are converted into fragments and blended into the framebuffer, the alpha component of each fragment’s final color decides how much of the resulting pixel color comes from the current fragment and how much comes from what was already in the framebuffer. In this way, you get a rough approximation of how transparency works, but you will always have artifacts - some more visible than others.
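For reference, the blend described above boils down to the standard "over" operation; here is a minimal sketch, not any particular API's blend state.

```cpp
struct Color { float r, g, b, a; };

// dst is what is already in the framebuffer; src is the incoming fragment.
Color alphaBlend(Color src, Color dst)
{
    float a = src.a;
    return { a * src.r + (1.0f - a) * dst.r,
             a * src.g + (1.0f - a) * dst.g,
             a * src.b + (1.0f - a) * dst.b,
             a + (1.0f - a) * dst.a };   // standard "over" alpha accumulation
}
```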

Figure 19: Alpha blending is not how transparency works in real life

Ray tracing solves this issue because it provides real, physically-accurate transparency. In real life, when light passes through a semi-transparent object, some wavelengths are absorbed, others are not. This results in a colored tint to the image seen through the object.

To simulate this effect in a ray tracer, you can emit a ray with the same direction as the incoming ray from the back side of the intersected surface. You then modulate the color of the transparency ray with the color of the transparent surface. If that part of the surface is completely transparent, the ray is re-emitted unaltered. When the transparency ray collides with the scene, you shade it in the same way as a reflection ray, taking into account the ray color from the transparent object.
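A minimal sketch of that step, assuming illustrative ray and color types; a real implementation would also offset the new origin slightly to avoid re-intersecting the same surface.

```cpp
struct Vec3 { float x, y, z; };

// A transparency ray carries an accumulated color tint along with its geometry.
struct Ray { Vec3 origin; Vec3 direction; Vec3 color; };

// Continues the ray from the back side of the hit point, keeping the incoming
// direction and modulating the carried color by the surface tint. A tint of
// (1, 1, 1) means the surface is completely transparent and the ray is
// re-emitted unaltered.
Ray continueTransparencyRay(const Ray& incoming, Vec3 hitPoint, Vec3 surfaceTint)
{
    Ray out;
    out.origin    = hitPoint;            // re-emitted from the intersection point
    out.direction = incoming.direction;  // same direction (no refraction modeled)
    out.color     = { incoming.color.x * surfaceTint.x,
                      incoming.color.y * surfaceTint.y,
                      incoming.color.z * surfaceTint.z };
    return out;
}
```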

Figure 20: Ray tracing models transparency perfectly

Conclusion

This article has scratched the surface of what is possible using hybrid rendering in game engines, combining traditional graphics with ray tracing techniques.

The conclusion is straightforward: ray tracing-based effects are easy to implement, they do a better job of approximating the physical transport of light, and they avoid the artifacts of even the most sophisticated raster techniques.

Additionally, they can be more efficient. Ray traced shadows allow for per-pixel sampling decisions, so there is much less waste in scenes with many shadow-casting lights. Moreover, ray traced reflections are a big win over dynamic reflection maps in scenes with many reflective objects, because there is no need to perform a separate rendering pass for each object when you can trace rays only for the pixels that are visible.

If you are at GDC 2014 and you want to know more about our PowerVR Ray Tracing hardware and software technologies and how they can be used in next-generation game engines, Gareth Morgan will be discussing these topics and more in his presentation entitled “Practical Techniques for Ray Tracing in Games.”

Please leave a comment in the box below to let us know what you think about hybrid rendering and check out our blog where we will be looking at the underlying ray tracing GPU architecture and discussing various real-world applications.

This article contains significant contributions from Luke Peterson and Gareth Morgan.
