Most current implementations of fog in games use layered alpha images. This technique, however, bears little resemblance to how fog actually composites in real life, since the amount of fog between the viewer and the scene is not modeled in any way.
A Simple Model of Fog
In order to create fog effects in a game, it is first necessary to create
an analytical model that bears some resemblance to the mechanics of real
fog. Fog is a cloud of water vapor consisting of millions of tiny particles
floating in space. Incoming light is scattered and emitted back into the
scene. This model is too complex to render in real time, and so a few
assumptions and restrictions must be made. The following model is similar to the one used in standard depth fog.
The first and most important assumption, common to many real time fog
implementations, is that incoming light at each particle is constant.
That is, a particle of fog located at one end of a fog volume and a particle located anywhere else in it receive the same amount of incoming light.
The next related assumption is that each particle of fog emits the same
amount of light, and in all directions. This, of course, implies that
a fog's density remains fixed. These two assumptions mean that, given
a spherical volume of fog, equal light is emitted in all directions.
Using these assumptions, a model of fog can be defined. If a ray is cast back from a pinhole camera through the scene, the amount of fog that contributes to the color of that ray is the sum of all the light emitted along the ray's path. In other words, the amount of contributing light is equal to the area of fog between the camera and the point in the scene. The light of the incoming ray, however, was partially absorbed by the fog itself, thus reducing its intensity.
So, the proposed model of fog is (done for each color channel):
Intensity of Pixel = (1 - Ls*As) * Ir + Le*Ae*If

Ls = Amount of light absorbed by fog
Le = Amount of light emitted by fog
As = Area of fog absorbing light
Ae = Area of fog emitting light
Ir = Intensity of the light coming from the scene
If = Intensity of the light coming from the fog
Since the area of fog emitting light is the same as the area of fog absorbing light, and the assumption is made that the amount of light emitted is the same percentage as the amount absorbed, this equation simplifies to:
Intensity of Pixel = (1 - A*L) * Ir + L*A*If

L = Amount of light absorbed/emitted by fog (fog density)
A = Area of fog
Ir = Intensity of the light coming from the scene
If = Intensity of the light coming from the fog
If this is a per-pixel operation, then the incoming light is already computed by rendering the scene as it would normally appear. Another way of thinking about the problem is this: the amount a pixel changes toward the fog color is proportional to the amount of fog between the camera and the pixel. This, of course, is the same model that is used in distance fog. Thus, the problem is reduced to determining the amount of fog between the camera and the pixel being rendered.
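As a concrete reference, the per-channel blend described above can be written in a few lines of C++. This is only an illustrative sketch; the names FogPixel, sceneColor, fogColor and fogAmount are not part of any API.

// Reference implementation of the simplified fog model, per color channel.
// fogAmount is A*L from the formula above, clamped to [0, 1].
float FogPixel(float sceneColor, float fogColor, float fogAmount)
{
    if (fogAmount < 0.0f) fogAmount = 0.0f;
    if (fogAmount > 1.0f) fogAmount = 1.0f;
    // (1 - A*L) * Ir  +  (A*L) * If
    return (1.0f - fogAmount) * sceneColor + fogAmount * fogColor;
}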
Determining Fog Depth at each pixel location
Standard depth fog uses the Z (or w) value as the density of fog. This works well, but limits the model to omnipresent fog. That is, the camera is always in fog, and there is (save a definable sphere around the camera) an even amount of fog at all points in the scene.
Of course, this does not work well (or at all) for effects such as ground fog, and this technique cannot be used for interesting volumetric lighting.
An alternative way to create fog is to model a polygonal hull that represents the fog, and to compute the area of fog for each pixel rendered in the scene. At first glance, this seems impossibly complex; computing such a volume typically involves complex integration.
However, the shaft of fog along a ray can be closely approximated by subtracting the w depth at which the ray enters a fog volume from the w depth of the point where it leaves the volume, and multiplying by some constant. (Mathematically, this is a simple application of a form of Stokes' theorem, where all but two of the terms cancel since the flux is constant in the interior.)
[Diagram 1: The amount of fog along a pixel as the difference between the point a ray enters the volume and the point it exits.]
A Simple Case
The first
case to consider is a way of rendering a convex volumetric fog that has
no object in it, including the camera. The algorithm can easily be expanded
to handle objects (or parts of the scene) inside the fog, the camera inside
the fog, and concave volumes.
Computing this term on a per-pixel basis involves several passes. Clearly, for any view there are two distances of concern: the point where the ray enters the fog, and the point where the ray exits.
Finding the point where a ray enters a fog volume is done by rendering the fog volume and reading the w value. Finding the point on the far side of the fog volume is also not difficult. Polygons not facing the camera are normally culled away, but since any surface not facing the camera is the back side of the fog volume, reversing the culling order and drawing the fog again renders the inside of the fog volume. With convex volumes, there will never be a case where the ray passes in and out of a fog volume twice.
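In Direct3D 8 terms, assuming the hull's front faces use the API's default clockwise winding, this is just a cull-mode flip between the two passes. The snippet below is a sketch only; pDevice and DrawFogVolume are placeholder names.

// Normal pass: the default mode culls back faces, leaving the near side of the hull.
pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
DrawFogVolume();

// Reversed pass: cull the front faces instead, so only the far (inside) side is drawn.
pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
DrawFogVolume();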
To get the total amount of fog in the scene, the buffer containing the front-side w values of the fog volume is subtracted from the buffer containing the back-side w values. But the first question is, how can per-pixel operations on w be performed? And then, how can this value be used for anything? Using a vertex shader, the w is encoded into the alpha channel, thereby loading the w depth of every pixel into the alpha channel of the render target. After the subtraction, the remaining value represents the amount of fog at that pixel.
So the algorithm
for this simple case is:
- Render the back side of the fog volume into an off-screen buffer, encoding each pixel's w depth as its alpha value.
- Render the front side of the fog volume with a similar encoding, subtracting this new alpha from the alpha currently in the off-screen buffer.
- Use the alpha values in this buffer to blend on a fog mask.
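Where the hardware supports the reverse-subtract blend operation, the subtraction in step 2 can be done by the alpha blender itself. The following Direct3D 8 sketch shows only the idea; pDevice and DrawFogVolume are placeholders, the off-screen buffer is assumed to already be the current render target, and capability checks are omitted.

// Step 1: write the back side of the hull straight into the off-screen buffer.
// The vertex shader has already encoded each vertex's w depth into alpha.
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);      // keep only back faces
DrawFogVolume();

// Step 2: draw the front side, subtracting it from what is already there:
// dest = dest - src (requires the D3DBLENDOP_REVSUBTRACT blend op).
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_REVSUBTRACT);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);     // keep only front faces
DrawFogVolume();

// Step 3: the alpha channel of the buffer now holds the fog amount per pixel
// and can be used to blend a fog-colored mask over the scene.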
Adding another variable: Objects in the fog
Rendering fog with no objects inside it is not that interesting, so the above algorithm needs to be expanded to allow objects to pass in and out of the fog. This task turns out to be rather straightforward. If the above fog algorithm were applied without taking into consideration that objects are in the middle of the fog, the result would be incorrect.
The reason why this is not correct is obvious: the actual volume of fog between the object and the camera has been computed incorrectly. Because there is an object inside the fog, the back side of the fog is no longer the polygonal hull that was modeled, but the front side of the object. The distance of fog needs to be computed using the front side of the object as the back end of the fog.
This is accomplished by rendering the scene (defined as the objects in the fog) using the same w-into-alpha trick. If a pixel of an object lies in front of the fog's back end, it replaces the fog's back end with its own depth, thereby becoming the virtual back part of the fog.
The algorithm changes to:
- Clear the buffer(s).
- Render the scene (or rather, any object which might be in the fog) into an off-screen buffer, encoding each pixel's w depth as its alpha value. Z buffering needs to be enabled.
- Render the back-side of the polygonal hull into the same buffer, keeping Z buffering enabled. Thus, if a pixel in the object is in front of the back-side of the fog, it will be used as the back end of the fog instead.
- Render the front side of the fog, subtracting this new w alpha from the alpha currently in the off-screen buffer.
- Use the alpha values in this buffer to blend on a fog mask.
Unfortunately, the above approach has one drawback. If an object is partially obscured by fog, then the part that is not in the fog will still be rendered into the buffer, effectively becoming the back side of the fog. The distance from those pixels to the camera would then be counted as fog depth, even though there is no fog there.
Although this could be corrected by using the stencil buffer, another approach is to redraw (or frame copy) the scene in the front-side pass, thereby using the scene as the fog front as well as the back. This causes objects partially obscured by fog to render correctly: those parts not in fog result in a fog depth of 0. This new approach looks like:
- Clear the buffer(s)
- Render the scene into an off-screen buffer A, encoding each pixel's w depth as its alpha value. Z buffering enabled.
- Render the backside of the fog into off-screen buffer A, encoding each pixel's w depth.
- Render the scene into an off-screen buffer B (or copy it from buffer A before step 3 takes place), using the w depth alpha encoding.
- Render the front side of the fog volume into off-screen buffer B with w alpha encoding. Since the fog volume should be in front of parts of the object that are obscured by fog, it will replace them at those pixels.
- Subtract the two buffers in screen space using the alpha value to blend on a fog mask.
Camera in the Fog
There is now one more neat trick to perform: allowing the camera to enter the fog. Actually, if the fog clipping plane and the geometry clipping plane are aligned, then the trivial case will already work. At some point, parts of the fog volume will be culled against the near clipping plane. Since the front buffer is by default cleared with 0s (indicating that those pixels are 0 depth from the camera), when clipping of the front volume begins to occur, the pixels that would have been rendered on those polygons would have been 0 anyway.
There is one more problem that crops up. To accommodate an object moving through the fog, two steps were added, one of which acts as the front side of the fog. But if the camera is inside the fog volume, then a key assumption has been broken: not all of the fog volume is actually rendered, since part of it is clipped away. This means that step 4 in the above algorithm now becomes a major problem, as it becomes the effective front side of the fog. The polygons of the fog volume can no longer replace those pixels set by the scene, since the fog volume polygons have been (at least partially) clipped away.
The solution to this is simple. Step 4 was added specifically to allow objects that were only partially obscured by fog to render correctly, since any pixel rendered in step 4 would be replaced by step 5 if it were in the fog. Obviously, if the camera is inside the fog, then all parts of an object are at least partially obscured by fog. Thus, step 4 should be disabled completely. The following is a complete and general algorithm for rendering uniform-density, convex fog hulls:
- Clear the buffer(s).
- Render the scene into an off-screen buffer A, encoding each pixel's w depth as its alpha value. Z buffering enabled.
- Render the back side of the fog volume into off-screen buffer A, encoding each pixel's w depth.
- If the camera is not inside fog, render the scene into an off-screen buffer B (or copy it from buffer A before step 3 takes place), using the w depth alpha encoding. Otherwise, skip this step.
- Render the front side of the fog volume into off-screen buffer B with w alpha encoding. If step 4 was executed, the fog volume should be in front of the parts of the scene that are obscured by fog, so it will replace them at those pixels. If step 4 was not executed, then some of these polygons were clipped away.
- Subtract the two buffers in screen space, using the alpha value to blend on a fog mask.
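In host code, those steps map onto an outline roughly like the one below. This is only a sketch of the pass ordering, not the SDK sample; every helper function and the cameraInFog flag are hypothetical.

// Buffers A and B are off-screen render targets whose alpha channels receive
// the w depth written by the encoding vertex shader.
void RenderVolumeFog(bool cameraInFog)
{
    ClearBuffers();                        // step 1

    SetRenderTargetA();
    RenderSceneWithWEncoding();            // step 2: scene depth, Z buffering on
    RenderFogBackFacesWithWEncoding();     // step 3: hull back side, Z tested against the scene

    SetRenderTargetB();
    if (!cameraInFog)
        RenderSceneWithWEncoding();        // step 4: the scene acts as the fog front
    RenderFogFrontFacesWithWEncoding();    // step 5: hull front side

    SubtractBuffersAndBlendFogMask();      // step 6: (A - B) drives the fog blend
}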
Further Optimizations and Improvements
Clearly, this is a simple foundation for fog; there are numerous improvements and enhancements that can be made. Perhaps highest on the list is a precision issue. Most hardware allows only 8-bit alpha formats. Because so much depends on the w depth, 8 bits can be a real constraint. Imagine a typical application of volumetric fog: a large sheet of fog along the ground. No matter what function is used to map the depth into fog, there remains a virtual far and near clipping plane for the fog. Expanding these planes means either less dense or less precise fog, while keeping them contracted means adjusting the fog clipping planes for each fog volume rendered.
On new and upcoming hardware, however, there is a trick with the pixel shaders. Why not keep some more bits of precision in one of the color channels, and use the pixel shader to perform a carry operation? At first glance it appears that 16-bit math can easily be accomplished on parts designed to operate at only 8 bits. However, there is one nasty limiting factor: on a per-triangle basis, the color interpolators work at only 8 bits. Texture coordinates, on the other hand, typically operate at much higher precision, usually at least 16 bits. Although texture coordinates can be loaded into color registers, the lower bits of precision are lost. An alternative is to create a 1D texture filled with a step function, with each texel representing a higher-precision value embedded in the alpha and color channels. Unfortunately, the precision here is usually limited by the size of a texture.
Once the issue of higher precision is addressed, it is possible to render concave volumes even with limited 8-bit hardware. This must be accomplished either by rendering a concave fog volume as a collection of convex parts, or by summing the multiple entry points of fog and subtracting away the multiple exit points. Unfortunately, the high precision trick will not work for the latter approach, since there is no way to both read and write the render target in the pixel shader. Although a system of swapping between multiple buffers carefully segmented to avoid overlap might work, this latter approach will probably not be feasible until hardware allows rendering into 16-bit formats (i.e. a 16-bit alpha format).
Finally, there are many artistic enhancements that can be made to this kind of volumetric effect. To make volumetric light, for instance, the alpha blend mode can be changed to additive rather than blended, thereby adding light to the scene. Decay constants can also be modeled in this way, to accomplish some surface variations of fog density.
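For the volumetric-light variant, only the final blend changes; a sketch of the additive states (pDevice is a placeholder):

// Blend toward a fog color:   final = mask*alpha + scene*(1 - alpha)
// Additive volumetric light:  final = mask*alpha + scene
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);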
Additionally, fog volumes can be fitted with textures that operate much like bump maps do, varying the height of the fog at each point without changing the actual geometry. To create an animated set of ripples in fog, for instance, one can take a ripple depth texture, move it along the surface of the fog volume, and add it to the w depth. Other texture tricks are possible as well; noise environment maps can be coupled to fog volumes to allow primitive dust effects.
And of course, it can be quite fun to draw the fog mask without actually
drawing the object - creating an invisible object moving through the scene.
Supplement
The article, Volumetric Rendering in Real-time, covered the basics of volumetric depth rendering, but at the time of its writing, no pixel-shader-capable hardware was available. This supplement describes a process designed to achieve two goals: to get more precision out of an 8-bit part, and to allow the creation of concave fog volumes.
Handling Concavity
Computing the distance of fog for the convex case was relatively simple. Recall that the front side of the fog volume was subtracted away from the back side (where depth is measured in number of units from the camera). Unfortunately, this does not work with concave fog volumes, because at any given pixel a ray may pass through two (or more) back sides and two (or more) front sides.
The solution is intuitive and has sound mathematical backing: sum all of the front sides and subtract them from the summed back sides. As shown in the diagram below, this is the mathematical equivalent of breaking the volume into convex chunks and summing them up.
[Diagram: (B1 - A1) + (B2 - A2) factors to (B1 + B2) - (A1 + A2).]
Computing concavity is then as simple as adding the multiple front sides and subtracting them from the multiple back sides. Clearly, a meager 8 bits won't be enough for this. Every bit added allows another summation and subtraction, and thus more complex fog scenes.
There is an important assumption being made about the fog volume: it must be a continuous, orientable hull. That is, it cannot have any holes in it. Every ray cast through the volume must enter the hull the same number of times it exits.
Getting Higher Precision
Although most hardware accelerators can handle 32-bit color, it is really four 8-bit channels. The way most hardware works today, there is only one place where the fog depths could be summed up: the alpha blender.
The alpha blender is typically used to blend on alpha textures by configuring the source to multiply against the source alpha, and the destination to multiply against the inverse source alpha. However, it can also be used to add (or subtract) the source and destination color channels. Unfortunately, there is no way to perform a carry operation here: if one channel would exceed 255 for a color value, it simply saturates to 255.
In order to perform higher-precision additions in the alpha blending unit, the incoming data has to be formatted in a way that is compatible with the way the alpha blender adds. To do this, the color channels can hold different bits of the actual result and, most importantly, be allowed some overlap in their bits.
[Diagram: a 12-bit depth value split across the red channel (upper 8 bits) and the blue channel (lower 4 bits, with room left for carries).]
The above encoding will give us 12-bit precision in an 8-bit pipe. The red channel will contain the upper 8 bits, and the blue channel will contain the lower 4, plus 3 carry spots. The upper bit should not be used, for reasons which are discussed later. So the actual value encoded is Red*16 + Blue.
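In other words, for a 12-bit depth d the encoding is red = d >> 4 and blue = d & 0xF, and the decode is red*16 + blue. The small C++ check below (illustrative only) shows why the independent per-channel adds performed by the alpha blender still decode to the correct sum, as long as no channel saturates.

#include <assert.h>

// Split a 12-bit depth across two 8-bit channels with overlapping weight.
void Encode(unsigned d, unsigned& red, unsigned& blue)
{
    red  = d >> 4;      // upper 8 bits
    blue = d & 0xF;     // lower 4 bits, leaving headroom for carries
}

unsigned Decode(unsigned red, unsigned blue)
{
    return red * 16 + blue;
}

void CarryCheck()
{
    unsigned r1, b1, r2, b2;
    Encode(0x5F8, r1, b1);
    Encode(0x3C9, r2, b2);

    // The blender adds each channel independently; the "carry" out of the
    // low 4 bits simply accumulates in the blue channel's spare bits.
    unsigned red  = r1 + r2;    // 0x9B, still below 255
    unsigned blue = b1 + b2;    // 0x11, uses one of the spare carry bits

    assert(Decode(red, blue) == 0x5F8 + 0x3C9);
}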
Now, the alpha blender will add multiple values in this format correctly up to 8 times before there is any possibility of a carry bit failing to propagate. This limits the fog hulls to ones where no ray, looking in any direction, passes in and out of the volume more than 8 times.
Encoding the bits that will be added cannot be done with a pixel shader. There are two primary limitations. First, the color interpolators are only 8 bits as well. Since the depth is computed at a per-vertex level, this won't get higher-precision values into the independent color channels. Even if the color channels had higher precision, the pixel shader has no instruction to extract the lower bits of a higher-precision value.
The alternative is to use a texture to hold the encoded depths. The advantage of this is twofold. First, texture interpolators have much higher precision than color interpolators, and second, no pixel shader is needed for the initial step of summing the front and back sides of the fog volume.
Unfortunately, most hardware limits the dimensions of textures; 4096 is a typical limit. This amounts to 12 bits of precision encoded in the texture. 12 bits, however, is vastly superior to 8 bits and can make all the difference in making fog volumes practical.
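As a sketch, the lookup table behind such a texture could be filled as follows; one texel per representable depth, with the channel layout and the final upload to a 4096x1 Direct3D texture depending on the format chosen (both omitted here).

// Texel i encodes depth i using the red/blue split described above, so that
// addressing the texture with a high-precision texture coordinate yields the
// encoded depth per pixel.
const int TABLE_SIZE = 4096;   // a typical maximum texture dimension

void BuildDepthEncodingTable(unsigned char* texel /* TABLE_SIZE * 4 bytes, RGBA order assumed */)
{
    for (int i = 0; i < TABLE_SIZE; ++i)
    {
        texel[i * 4 + 0] = (unsigned char)(i >> 4);    // red:   upper 8 bits
        texel[i * 4 + 1] = 0;                          // green: unused
        texel[i * 4 + 2] = (unsigned char)(i & 0xF);   // blue:  lower 4 bits
        texel[i * 4 + 3] = 0x10;                       // alpha: per-hit counter increment (see below)
    }
}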
Setting it all Up
Three important details remain: the actual summing of the fog sides, compensating for objects inside the fog, and the final subtraction.
The summing is done in three steps. First, the scene needs to be rendered to set the Z buffer. This prevents fog pixels from being drawn behind totally occluding objects. In a real application, this Z buffer could be shared with the pass that draws the geometry. Z writes are then disabled, so that fog rendering will not update the Z buffer.
After this, the summing is exactly as expected. The app simply draws all the forward-facing polygons into one buffer, adding up their results, and then draws all the backward-facing polygons into another buffer. In order to sum the depths of the fog volume, the alpha blend factors need to be set to one for the destination and one for the source, thereby adding each incoming pixel to the value already in the buffer.
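A sketch of the states for these summing passes, covering both the Z setup described above and the one/one additive blend (Direct3D 8 style; pDevice is a placeholder and capability checks are omitted):

// Z was filled by the scene pass; keep testing against it but do not update it,
// so fog polygons behind occluders are rejected without disturbing the Z buffer.
pDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
pDevice->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);

// Sum the encoded depths: dest = dest + src.
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

// Draw all front-facing fog polygons into one buffer, then all back-facing
// polygons into the other, flipping D3DRS_CULLMODE in between.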
There is one potential problem, however. This does not take into account objects inside the fog that are acting as a surrogate fog cover. In that case, the scene's own depth must be added to the sum, since the far end of the fog would have been rejected by the Z test.
At first, this looks easy to solve. In the previous article, the buffers were set up so that they were initialized to the scene's depth values. That way, fog depth values would replace any depth value in the scene if they were in front of it (i.e. the Z test succeeds), but if no fog was present the scene would act as the fog cover.
This cannot be done for general concavity, however. While technically correct in the convex case, in the concave case there may be pixels at which the fog volume is rendered multiple times on the front side and multiple times on the back side. For these pixels, if part of an object lies in between fog layers, then the front buffer would be the sum of n front sides, and the back buffer would be the sum of n-1 back sides. But since the fog cover was replaced by the fog, there are now more entry points than exit points. The result is painfully obvious: parts of the scene suddenly lose all fog when they should have some.
[Diagram: a ray through a concave fog volume with entry points A1 and A2, exit points B1 and B2, and an object at depth C blocking B2.]
The above diagram illustrates that without taking into account the object's
own depth value, the depth value generated would be B1 - A1 - A2 since
B2 was never drawn because it failed the Z test of the scene. This value
would be negative, and no fog would get blended. In this case, C needs
to be added into the equation.
The solution requires knowing in which scenarios the scene's w depth should be added and in which it should be ignored. Fortunately, this is not difficult to determine. The only pixels where the scene's w depth should be added to the total fog depth are those where the object lies between the front side of a fog volume and its corresponding back side.
This can be thought of as asking the question: did the ray ever leave the fog volume? Since the fog hulls are required to be continuous, if the answer is no then part of the scene must have blocked the ray. This test can be performed with a standard inside/outside test.
To perform the inside/outside test, each time a fog pixel is rendered, the alpha value is incremented. If the alpha value of the far fog buffer is subtracted from the corresponding alpha value of the near fog buffer, then values of 1 or greater indicate that the ray stopped inside the volume, while values of 0 indicate that the ray left the fog volume.
To set this test up, the alpha channels of the near and far w depth buffers must be cleared to 0. Each time a fog pixel is rendered, the alpha is incremented by the hex value 0x10. This value is used because the pixel shader must perform a 1-or-0 logical operation: a small positive value must be mapped to 1.0 in the pixel shader, a step which requires multiple shifts. Due to instruction count constraints, the initial value has to be at least 0x10 for the shifts to saturate a non-zero value to one.
The rest is straightforward. All the front sides and all the back sides are summed up in their independent buffers. The scene is also drawn into its own buffer. Then all three buffers are run through a final pass where the scene's w depth is added in only if the difference of the alpha values is not 0.
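Conceptually, the final pass computes the following per pixel; this is just a plain C++ restatement of that math, ignoring the 8-bit packing, with illustrative names only.

// nearSum = summed front-side depths (buffer B), farSum = summed back-side
// depths (buffer A), sceneDepth = the scene's own w depth, and the hit counts
// come from the 0x10-per-pixel alpha counters described above.
int FogDepth(int nearSum, int farSum, int sceneDepth, int frontHits, int backHits)
{
    int depth = farSum;
    if (frontHits > backHits)   // the ray stopped inside the volume...
        depth += sceneDepth;    // ...so the scene acts as the missing back side
    return depth - nearSum;     // total thickness of fog along the ray
}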
This requires a lengthy pixel shader, and a great deal of care must be taken to avoid potential precision pitfalls. The following pixel shader performs the required math, although it requires every instruction slot of the pixel shader and nearly every register. Unfortunately, with no carry bit, there is no way to achieve a full 8-bit value at the end of the computation, so the result must settle for slightly less.
ps.1.1
def c1, 1.0f, 0.0f, 0.0f, 0.0f
def c4, 0.0f, 0.0f, 1.0f, 0.0f
tex t0                        // near buffer B
tex t1                        // far buffer A
tex t2                        // scene buffer C
// input:
//   b = low bits  (a) (4 bits)
//   r = high bits (b) (8 bits)
// intermediate output:
//   r1.b = (a1 - a2) (can't have more than 7 bits set)
//   r1.r = (b1 - b2)
sub r1.rgb, t1, t0
+sub_x4 r1.a, t0, t1          // if this value is non-zero, then
mov_x4 r0.a, r1.a             // there were not as many backs as
mad r1.rgb, r0.a, t2, r1      // fronts, and the scene must be added in
dp3 t0.rgba, r1, c4           // move red component into alpha
// Need to shift r1.rgb 6 bits.  This could saturate to 255 if any
// other bits are set, but that is fine because in this case the end
// result of the subtract would have to be saturated, since we can't
// be subtracting more than 127.
mov_x4 r1.rgb, r1
dp3_x4 t1.rgba, r1, c1        // move into the alpha
add_x2 r0.a, t0.a, t1.a       // the subtract was in 0-127
mov_d2 r0.a, r0.a             // chop off last bit, else banding
+mov r0.rgb, c3               // load the fog color
This pixel shader gives an alpha value which represents the density of
fog, and loads the fog color constant into the color channels. The Alpha
Blending stage can now be used to blend on the fog.
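The blend states for this last step are the standard alpha-blend ones (a sketch; pDevice is a placeholder):

// The shader above leaves the fog density in alpha and the fog color in rgb,
// so an ordinary alpha blend lays the fog over the already-rendered scene:
// final = fogColor*alpha + sceneColor*(1 - alpha).
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
// ...then draw a full-screen quad sampling the three buffers with the shader above.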
Finally, there is one situation which can cause serious problems: clipping. If part of the fog volume is clipped away because the camera is partially inside the fog, then part of the scene might be in fog even though no front-side fog polygons are rendered there. Previously, it was assumed the camera was either entirely inside or entirely outside the fog; this may not always be the case.
An alternative solution is to not allow the fog polygons to be clipped at all. The vertex shader can detect vertices which would be clipped away and snap them to the near clip plane. The following vertex shader clamps w depths to the near clip plane, and z depths to zero.
// transform position into projection space
m4x4 r0, v0, c8
max r0.z, c40.z, r0.z    // clamp to 0
max r0.w, c12.x, r0.w    // clamp to near clip plane
mov oPos, r0
// subtract the near clipping plane
add r0.w, r0.w, -c12.x
// scale to give us the far clipping plane
mul r0.w, r0.w, c12.y
// load depth into the texture coordinate, don't care about y
mov oT0.xy, r0.w
Additionally,
please note that full source code will be available with the release of
DX 8.1, which is imminent. The code is in the volume fog SDK sample.