Introduction
There have been quite a few articles, web sites, and books published in the past few years that talk about generating game worlds procedurally. As game worlds have become larger and more complex, it has become less practical for game designers to generate every detail by hand. Mathematical procedures of all sorts have been around for quite a while to generate different types of texture maps, terrain, and 3D objects. If you want an excellent example of procedural modeling and texturing in action, take a look at Terragen. For a good example of procedural textures being generated in real time in a game engine, check out the manual for the animated textures used in Unreal.
If you've been interested in procedural texturing and modeling for even half the amount of time I have, you've probably read many of the same articles and web sites I've read. You may have even bought a book or two and written some code. If you're like me, you've also probably been frustrated by how much time it takes to sift through all the information out there to figure out what really works. Although this article won't solve all your problems or answer all your questions, hopefully it will get you much further along the path than you are now, or at least give you plenty of ideas.
This article is the first part in a series that will explain how to procedurally generate an entire universe and render it in real time at any level of detail in a game engine. Here is what's planned for the first few articles in the series:

Part 1: Generating Planetary Bodies. Part 1 will explain how to procedurally generate full-size planetary bodies at any level of detail. It will discuss the pros and cons of different types of procedural algorithms, then explain the method I chose to use in more detail.

Part 2: Rendering Planetary Bodies. Part 2 will explain how to render full-size planetary bodies at any level of detail. It uses a spherical version of the ROAM algorithm I came up with, and it is tailored specifically to the procedural algorithm explained in Part 1.

Part 3: Matters of Scale. Part 3 will explain how to expand the concepts discussed in Parts 1 and 2 to generate a star system filled with planetary bodies, a galaxy of star systems, and ultimately an entire universe of galaxies. Aside from explaining the concepts, it will discuss problems with scale and frame of reference that you run into with really large game worlds.

Part 4: Planetary Texture Maps. Part 4 will explain the procedural creation of texture maps. This includes both how the demo generates the current simple texture map it uses for the planet and how that can be improved. For such a large model, creating multiple layers of textures may be the best way to deal with the drastically different levels of detail. This article will also discuss some optimizations for generating procedural textures.
A working C++ demo with source will be provided that generates and renders an Earth-sized planet with an Earth-sized moon revolving around it. The rendering is done using OpenGL, but it can be easily ported to Direct3D or another graphics library of your choice. I am publishing the source under the GNU Lesser General Public License, and it is being used in glElite, an open-source OpenGL version of a popular 1980s game called Elite, which is being written by Timo Suoranta.
Planetary Bodies: Static vs. Dynamic Algorithms
Unfortunately, most procedural routines I've found for generating terrain deal only with flat terrain. Few game designers work with a world large enough to need a sphere. If you see a flat algorithm you really like, you might need to use some creativity to map it to a sphere without causing any distortion. A good example of this can be found in Hugo Elias's article on Spherical Landscapes. The original algorithm generated random "fault lines" on a flat map, randomly raising or lowering the land on either side. The spherical version generates random planes that cut through the sphere, then randomly raises or lowers the land on either side. It's an interesting technique, and the pictures of it look pretty good.
The fault line (or plane) algorithm is an example of what I call a "static" algorithm. A static algorithm is one that generates a static mesh of vertices or height values. The most common form generates a 2D bitmap of height values, then uses the bitmap to build a triangle mesh for a flat landscape. While some static algorithms are very good, they all suffer from two crippling problems. The first is that you need a ridiculously large mesh to get any detail for a full-size planetary body. The second is that the speed of the algorithm depends on the size of the mesh.
To give you an idea of the size we're talking about with planetary bodies, an order-10 sphere based on an icosahedron (20 * 4^10 triangles) has almost 21 million triangles in it. For a planet the size of Earth, the triangle edges would still be roughly 6.5 km long, far too coarse to show details on individual hills or mountains. An order-20 sphere (22 trillion triangles) would bring the edge length down to roughly 6.5 m, but generating it would be ridiculously slow and storing it would take an enormous amount of space. In that light, it seems that the only feasible way to generate a planet is to come up with something more dynamic.
My definition of a "dynamic" algorithm is an algorithm that takes some sort of vertex coordinates as parameters and returns the proper height value at those coordinates. These coordinates could be longitude and latitude, polar coordinates, 3D coordinates, or anything else you can think of that consistently and uniquely identifies a vertex. Paired with an adaptive mesh algorithm such as the one that will be explained in Part 2 of this series, a dynamic algorithm lets you avoid storing anything in memory or on disk except the mesh you need to draw for the current frame. The ideal algorithm would be initialized with a random seed and a number of defining parameters that would allow you to create a range of interesting planetary bodies from one algorithm.
Unfortunately, coming up with a dynamic algorithm like this which works well is difficult. Coming up with one that's fast is even more difficult. You could, for instance, be creative and make the fault plane algorithm into a dynamic algorithm. Instead of pregenerating a mesh, you could store the planes that cut through the sphere and use them to determine the proper height for any requested vertex. You will run into two problems doing this. The first problem will be that you need a lot of planes to get detail on a full-size planet. What looks good from a distance would leave noticeable lines close-up. That will lead you to the second problem, which is performance. If you need 100,000 fault planes to get good detail, you will have to run each requested vertex through a loop 100,000 times.
One more thing to note before I move on is that it's also possible to combine static and dynamic algorithms any way you want. A dynamic algorithm can be seeded with height values created by hand or by a static algorithm. Your requirements will help determine how much of one or the other to use, but keep in mind the size limitations with predefined meshes and static algorithms. If you need to be able to generate an entire planet down to a very detailed level, a dynamic algorithm needs to take over somewhere to fill in the gaps.
My First Attempt: The Plasma Algorithm
If you've ever read anything about procedural terrains, you've probably read about the plasma algorithm. If you've done any coding, you probably started with this algorithm. The basic concept is to start with a single square and subdivide it recursively into smaller squares by adding new vertices to the mesh. The height of each new vertex is offset up or down by a random amount, and the maximum offset allowed is reduced with each level of recursion. This algorithm produces some fairly interesting mountainous terrain, but that's all it gives you. There is no flat land anywhere at all.
Though the plasma algorithm is usually implemented as a static algorithm, my first dynamic planet-generating algorithm was based on it. Instead of passing any kind of spatial coordinates to a truly dynamic function, I gave each triangle a random seed to identify it. Using the same random seed every time I split a specific triangle guaranteed the same height value every time for the same vertex, and I used a triangle's seed to generate the seeds for its children. As long as I maintained the seed for each triangle properly, the algorithm produced consistent results. Once I coded it and got it all working, my first dynamic Earth-sized planet came to life on the screen.
I started the camera position close to the ground, and I got very excited as I flew over shaded mountains and valleys. I moved the camera all over the planet to see how it looked. Then I set up a simple 1D texture to get a better feel for the height values I was seeing. Half of it was ocean blue to indicate the water level, and the other half went from yellow to green to white with increasing height; the texture coordinate of each vertex was determined by its height.
My bubble burst as soon as I saw the planet in color. It's hard to say whether it looked more like thousands of little mountainous islands or thousands of little lakes sandwiched between all the mountains. Needless to say, it looked nothing at all like natural terrain. I tried a number of things to salvage the plasma algorithm. It is a very fast algorithm, and I feared that more complex algorithms would run too slowly to be used in real time. However, no matter what I tried with the plasma algorithm, my planet still looked terrible.
A True Dynamic Algorithm: Perlin Noise
Next I learned about Perlin noise, a very popular noise function written by Ken Perlin. To put it simply, it generates smoothed random noise using a lattice of random numbers. The lattice is an n-dimensional array of floats, and the noise function takes an n-dimensional vector as input. Think of the lattice as an n-dimensional space and the input as a point in that space. Because arrays must be accessed using integer indices and the point in space is defined using floats, the noise value is determined by interpolating between the lattice values located at the integer points around it. The number of integer points it has to interpolate between increases exponentially with each dimension added, so it's a good idea to minimize the number of dimensions you use. The range of noise values returned is between -1.0 and 1.0.

FIGURE 1. Samples of 2D Perlin noise with X and Y values ranging over 0.0 to 2.0, 0.0 to 4.0, and 0.0 to 8.0 (left to right).
Depending on the range of the numbers you pass it in each dimension, 2D Perlin noise output can look like anything from white noise on a TV screen to a very smooth random pattern. Because the same coordinates always generate the same output, decreasing the range of numbers you pass in is like zooming in on part of the noisy image, while increasing the range zooms out. If you look at the images in Figure 1, you can see how increasing the range zooms out (look at the top-left corner of each image). Note how the leftmost image looks like an elevation map.
Though Perlin noise is considered very fast for such a powerful noise function, it is still too slow to use in a game engine for most purposes. As mentioned above, each extra dimension added slows it down exponentially, but in many cases using extra dimensions can be very useful. Adding another spatial dimension to the image on the left in Figure 1 would give you something that resembled volumetric fog. You can also add a time dimension, achieving interesting animation effects by setting one of the coordinates to a particular time value.
Simple Perlin noise is occasionally used to generate bump maps, reflection maps, and textures. However, because of its somewhat smooth and uniform appearance, Perlin noise is more often used as a base for more complex functions. More information on Perlin noise and its uses can be found at http://www.noisemachine.com, which has a set of online course notes written by Ken Perlin.
A Bit More Interesting: fBm
One of the most common complex functions to write using Perlin noise is called fractional Brownian motion, or fBm. Basic fBm is a fractal sum of the noise function which looks something like this:
noise(p) + 1/2 * noise(2 * p) + 1/4 * noise(4 * p) + ...
While reading this article, keep in mind that a number of information sources confuse fBm with Perlin noise. Any noise function can be used to compute a fractal sum. Perlin noise is just a fast method for generating high-quality noise.
To help you visualize what fBm output looks like, think of it as a weighted sum of the Perlin noise images shown above in Figure 1. It is usually implemented as a loop, and the number of times to go through that loop is called the number of octaves. Each time through the loop, the coordinates passed to the noise function are scaled up by a certain factor, and the noise function's output is scaled down by a certain factor before being added to the fractal sum. Because the noise function returns the same result for the same coordinates every time, you are essentially adding different parts of the same image to itself at different scales using different weights. A simple fBm routine is given in Listing 1.
LISTING 1. A simple fBm routine.
// The number of dimensions and the noise lattice are initialized
// separately in an Init() method. This code has been simplified
// for the article to make it easier to read.
float CFractal::fBm(float *f, int nOctaves)
{
	float fValue = 0.0f;
	float fWeight = 1.0f;
	for(int i=0; i<nOctaves; i++)
	{
		fValue += Noise(f) * fWeight;	// Sum weighted noise value
		fWeight *= 0.5f;		// Adjust the weight
		for(int j=0; j<m_nDimensions; j++)
			f[j] *= 2.0f;		// Scale the coordinates
	}
	return fValue;
}

FIGURE 2. Samples of simple fBm using 3, 4, and 5 octaves (left to right).
It may be difficult to see at first, but blending the three images from Figure 1 gives us the first image in Figure 2. As more octaves are added, you can see how the basic pattern remains the same, but the pattern is perturbed at a finer level of detail with each octave. As with Perlin noise, you can zoom in or out on any part of these pictures by changing the range of numbers passed to the function. The farther you zoom in, the higher the number of octaves you need to maintain a good level of detail and complexity in the image.
LISTING 2. A more flexible fBm routine.
// Note the use of two extra member variables, the m_fExponent array
// and m_fLacunarity. Both are initialized in the Init() method, which
// allows you to customize the scaling factors used to weight the noise
// values and to scale the coordinates. Also note that the number
// of octaves is now a float, allowing fractional parts of octaves.
float CFractal::fBm(float *f, float fOctaves)
{
	float fValue = 0.0f;
	int i;
	for(i=0; i<(int)fOctaves; i++)
	{
		fValue += Noise(f) * m_fExponent[i];	// Sum weighted noise value
		for(int j=0; j<m_nDimensions; j++)
			f[j] *= m_fLacunarity;		// Scale the coordinates
	}
	// Take care of the fractional part of fOctaves
	fOctaves -= (int)fOctaves;
	if(fOctaves > 0.0f)
		fValue += fOctaves * Noise(f) * m_fExponent[i];
	return fValue;
}

FIGURE 3. Samples of simple fBm with H values of 0.9, 0.5, and 0.1 (left to right).
The exponent array that scales the result of the Noise() function is initialized using an exponential function controlled by a parameter called H, which acts as a roughness factor: 0.0 is very rough and 1.0 is very smooth. The difference in roughness comes from how heavily the higher octaves are scaled, as the higher octaves contain the much smaller details.

FIGURE 4. Samples of simple fBm with lacunarity values of 1.5, 2.0, and 2.5 (left to right).
Changing the lacunarity factor changes how your coordinates are scaled with each octave. It affects the output in odd ways, and I've read that most people just leave it at 2.0. Values between 1.0 and 2.0 seem to have some sort of recursive feedback, because you're getting close to blending an image with itself multiple times (think about the ranges you're passing into the noise function). Values below 1.0 actually make the noise ranges decrease with each octave, going from finer noise with a higher weight to coarser noise with a lower weight. Values above 2.0 cause your range to increase more quickly. In some ways that makes your image rougher because finer noise gets added with a higher weight, and in some ways it makes your image smoother because you more quickly get to a point where the noise is too fine to distinguish at the current resolution.
When using fBm-based algorithms to generate planetary bodies, keep in mind that the size of the planetary body, the range of numbers you pass in as coordinates, and the number of octaves you use work together to give you specific sizes of general terrain features (continents, oceans, coastlines, and so on) and the proper amount of detail for that range. The H and lacunarity factors also have a strong effect on your final output, especially when you zoom in. These are the kinds of things you just have to play around with for a while to get the feel of them.
The Next Step: Multifractals
The next level of noise-based algorithms is the multifractal, which is basically just a more complex form of fBm. Some multifractals perform a fractal product instead of a fractal sum (multiplying instead of adding). Some add variable offsets or apply other mathematical functions somewhere in the loop, like abs(), pow(), exp(), or some of the trig functions. Ken Musgrave has done a good bit of research in this area, and he's spent a lot of time working with multifractals to generate some interesting planetary models. There is a book he coauthored with Ken Perlin and some other big names in the graphics field called Texturing & Modeling: A Procedural Approach (Morgan Kaufmann, 1994). If you're interested in this subject, I strongly recommend that you pick up a copy. It goes into a lot more depth than I can fit into an article and covers a lot of other methods and uses for procedural algorithms.
I won't go over any specific examples of multifractals in this article except for the one I wrote to generate the planet in the demo. As with the fBm parameters, creating your own multifractal algorithm is just something you have to play around with to get a feel for. Keep in mind as you're testing things that some functions will look good on a planet from a distance, some will look good very close to the planet, some won't look good either way, and some will look good both ways. I feel that the one I wrote for the demo looks good both ways, and I'll explain the rationale behind what I did.
My planet function uses simple fBm, then takes the result and applies the power function to it. Since most of the numbers generated by simple fBm are between -1 and 1, this will tend to flatten the numbers closer to 0. Thinking in terms of terrain, this causes land close to sea level to be more flat and land at higher altitudes to be more mountainous, which is somewhat realistic. Since this is not always the case on Earth, I call the noise function one more time to determine the exponent of the power function, which means you can sometimes have steeper land near sea level or flatter land at higher altitudes. For negative values, which indicate a point below sea level, the exponent is hard-coded to give a smoother ocean floor. (This may not be desirable if you want to have undersea vessels such as submarines in your game.)
LISTING 3. The author's planet function.
// The number of dimensions, the noise function, and the exponents are
// initialized in CFractal::Init(). To simplify the code for this
// article, the function modifies the array of floats passed to it.
float CFractal::fBmTest(float *f, float fOctaves)
{
	float fValue = 0.0f;
	int i;
	for(i=0; i<(int)fOctaves; i++)
	{
		fValue += Noise(f) * m_fExponent[i];	// Sum weighted noise value
		for(int j=0; j<m_nDimensions; j++)
			f[j] *= m_fLacunarity;		// Scale the coordinates
	}
	// Take care of the fractional part of fOctaves
	fOctaves -= (int)fOctaves;
	if(fOctaves > DELTA)
		fValue += fOctaves * Noise(f) * m_fExponent[i];
	// Below sea level, use a fixed exponent for a smoother ocean floor
	// (negate so pow() gets a positive base, then restore the sign)
	if(fValue <= 0.0f)
		return -(float)pow(-fValue, 0.7f);
	// Above sea level, let the noise vary the exponent
	return (float)pow(fValue, 1.0f + Noise(f) * fValue);
}

FIGURE 5. Sample images from the author's planet demo: from space, at a coastline, and in the mountains.
Don't get me wrong, it's not easy to figure out how a certain change to one of these algorithms will affect its output. If I had included a picture of its 2D output in grayscale, it would have looked a lot like the other fBm images I included, and there would have been no indication as to which was any better for generating planets. To get it just the way I wanted it, I had to tweak it a lot, looking at the planet close up and from a distance using different initialization parameters. You should play around with it on your own for a while to get a good idea of how certain changes will affect your planet at different levels of detail.
I'm currently using 3D noise to generate my planet for the demo. When I want to create a new vertex at a certain position on the sphere, I pass the function a normalized direction vector that points to the position of the vertex I want to create. I take the value returned, which should be close to the range of -1.0 to 1.0, and I scale it by the height I want my tallest mountain to be. Then I add that value to my planet's radius and multiply the unit vector by it to get a new vertex. All of these values are parameters I can use to initialize my planet object, along with the random seed, H factor, and lacunarity factor, which affect the fBm output. This allows a wide range of planetary bodies to be created from one function.
The routine could be sped up using 2D noise and passing it latitude and longitude, but that would cause the terrain features to be compressed up near the poles, and a discontinuity would exist where the longitude wrapped around from 360 degrees to 0 degrees. Polar coordinates would not have compression at the poles, but would have two lines of discontinuity to worry about. If you try to skip a dimension, just passing X and Y for instance, you would end up with two hemispheres mirroring each other. If you can find a better way to represent 2D coordinates for a sphere that doesn't cause distortion, by all means try it to see how it looks and performs.
Final Notes
If you're interested, take a look at the source code for the demo. It uses OpenGL to handle all rendering, but it was written for Windows and doesn't currently compile on any other platform. It shouldn't be very difficult to port, but since my video card isn't supported very well under Linux, I never got around to it. The project was created with Microsoft Visual C++ 6.0, but it should compile without any problems under 5.0. Read the README.TXT file for the list of keyboard commands and some helpful tips.