
Writing a Game Engine from Scratch - Part 4: Graphics Library

Writing an Engine from scratch can be a daunting task. In this Part we delve into the depths of Graphics Library Programming. We look at how to write our own Rasterizer in order to fully understand how to use OpenGL / DirectX (and possibly Vulkan) to their full extent!

Part 1 - Messaging
Part 2 - Memory
Part 3 - Data & Cache
Part 4 - Graphics Libraries 

 

This Article is stand-alone and can be read without the previous Parts. While Programming knowledge or familiarity with OpenGL isn't a must to continue reading, it can be helpful. I'll assume some basic familiarity with general 3D Rendering or Modelling (what are Polygons, Textures, Vertices? - that type of knowledge). The accompanying Code can be found here.

 

"Yes! We've finally reached the part about Graphics! Let's get some epic Scenes and Special  Effects going!" 
- Someone on the Internet I made up


... or at least he might have thought. But by now I hope you've realized that this isn't how we do things in this Series.

At this stage, it would seem rather logical to discuss Graphics Libraries like OpenGL and DirectX and perhaps compare their Pros and Cons. We might even take a look at some simple OpenGL examples and see how to move on from there to "real" Game Graphics. After all, it would be rather simple to explain a "Hello Triangle" using OpenGL / DirectX and tell the reader that all he needs to do now to make a simple Game is to add more Triangles and some logic to move them.

But we aren't here to do simple! We are here for the real meat. The dirty details.

Therefore, the goal of this Article isn't "how to use OpenGL/DirectX". Instead, our goal will be "how to create OpenGL/DirectX". 

I believe that if you understand the inner workings of a 3D Graphics Library, a lot of things become much clearer (probably the coolest (yet slightly dated) book on this Subject is a 1728-page Tome from 2003 by André LaMothe). I hope a sort of intuition will develop when thinking about modern Computer Graphics. No longer will you simply copy-paste OpenGL snippets from StackOverflow without knowing why and how they work. From now on, you will copy-paste them knowing why they work. Which is good enough in my book.

We want to develop the Rendering Pipeline here. All Graphics that we Render are just a proof of concept meant to illustrate the inner workings and not the main focus.

Graphics Libraries are a huge Subject, far too big for a single Article. We will be skimming a lot of topics; I hope to revisit them at a later point or point you to some excellent resources on the web.

The most prominent Graphics Library APIs are without a doubt OpenGL and DirectX. Therefore, this Article will mainly focus on implementing a small subset of OpenGL ourselves. We choose OpenGL as it applies to a broader scope of applications. But don't worry, most of what follows holds true for DirectX as well.

But what does a Graphics Library do? 

In short, a Graphics Library's (GL) only job is to display Graphics of all shapes and sizes. It just displays what you tell it to display. Want to draw a line? There it is. A rotated Triangle? Voila! Want to give it some Color? No Problemo!

A GL in itself is pretty dumb. It doesn't really understand the concept of Lighting or Animation or even Camera movement; all these things we must feed it ourselves. What a GL does give us is all the tools and freedom we need to do whatever we want graphically. (We ignore historic methods like T&L Units.)

What makes OpenGL and DirectX so special is that they use Hardware (Graphics Card, embedded Graphics, ...) to accelerate all the drawing. The actual Code / Implementation for OpenGL / DirectX lives in the Hardware-specific Driver. Therefore, calling OpenGL functions means working almost directly with the Driver. This also means that these functions could be implemented differently for every single Graphics Card Series.

Thanks to the standardization of OpenGL and DirectX versions, we at least know they will do the same thing on each platform. Or at least they should. It has become pretty common practice for Hardware vendors to optimize certain routines specifically for a Game. If you've ever wondered how a simple Driver update can cause massive speed gains in modern Games, this is it. Sadly, this can lead to some unwanted side effects. But this is not our Focus here - let's move on.

In the "old days" (a few years back) a lot of games featured something called a "Software Renderer" or "Software Rasterizer" as an option. It was basically a purely CPU based implementation of 
a Graphics Library, meant for PC's without a dedicated Graphics Card. It could do all the same things, but was extremely slow for obvious reasons. 

So let's write a Software Renderer and pretend we are doing it on the Hardware!

We won't write our own mini-OpenGL implementation using Hardware Acceleration. That would be insane. Instead, we will write a purely CPU-based GL that we will call sGL, based heavily on how things are done in OpenGL. We will simply pretend that our code runs on the Graphics Card.

All the Code will be available here. Don't worry, I'll try to convey most of the things in words instead of code, but a few code snippets will appear here and there. It is written in simplistic and very bad C++. I tried to write it as short and understandable as possible. So yeah, Globals... naked pointers... no classes... like a 7-year-old.

We will try and stick as close to the overall OpenGL picture / API as possible. For example, where OpenGL might have a function called glBindBuffer(), we will have sGLBindBuffer().

But where to Start... 

Let's set our goal to display textured, alpha blended and shaded Triangles using our sGL. 

What do we need to start? Some way to draw Pixels to the screen. This isn't as trivial as it might seem in portable C++. For this reason we will use the SDL Library just for drawing multiple Pixels at once to the screen - nothing more, nothing less - otherwise this whole project would take forever. Yes, SDL uses OpenGL under the hood to draw, but we will just ignore that.
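Just to show how thin this layer is, here is a minimal sketch of what the SDL side could look like (assuming SDL2; the names g_framebuffer and present() are my own and not necessarily those used in the accompanying Code):

#include <SDL.h>
#include <cstdint>

const int WIDTH  = 1024;
const int HEIGHT = 768;

// Our pretend "Video Memory": one 32-bit ARGB value per Pixel
uint32_t g_framebuffer[WIDTH * HEIGHT];

void setPixel(int x, int y, uint32_t color)
{
	// Ignore anything outside the Screen (more on Clipping later)
	if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
		return;

	g_framebuffer[y * WIDTH + x] = color;
}

// Push the whole framebuffer to the Screen in one go
void present(SDL_Renderer * renderer, SDL_Texture * texture)
{
	SDL_UpdateTexture(texture, NULL, g_framebuffer, WIDTH * sizeof(uint32_t));
	SDL_RenderCopy(renderer, texture, NULL, NULL);
	SDL_RenderPresent(renderer);
}

The texture would be created once with SDL_TEXTUREACCESS_STREAMING; everything else in this Article happens purely inside g_framebuffer.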

If all we have is a method to draw dots on a screen, how can we end up with actual Triangles?

As an exercise, you could spend some time and think about how you would draw a plain color Triangle on the screen, with nothing more than a setPixel() function and 3 Points defining the Triangle corners. 

It seems so straightforward. On paper, you could draw your 3 Points, connect them and fill in the middle. But doing that on the Computer with nothing but a setPixel() function isn't as obvious as it seems. We will explore the answer soon.

The Actual Render Pipeline

Let's take a quick look at the most important steps of the Pipeline.

The overall idea is simple:

  1. We start with Vertices that describe our Triangle (3 Positions, Color, Texture Coordinates...)
  2. We pass those Vertices to the Graphics Card
  3. The Graphics Card can do some neat calculations on those Vertices - Transformations mainly (Vertex Shader)
  4. So far we have 3 Points. The Graphics Card now fills those 3 Points to form a Triangle (Rasterization)
  5. Each Pixel inside the Triangle can now be shaded (Pixel Shader)
  6. We draw the Pixels to the Backbuffer, as well as perform some Depth Tests, Alpha Blending, etc. 
  7. We draw the Backbuffer to the Screen

We will use this List as a guide to proceed and look at individual topics in a bit more depth.

1. and 2. Vertices, Indices, Textures and Uniforms

Nowadays, if we want to do Graphics, we need to get the Graphics Card involved. This also means we need to store everything we want to display somewhere the GPU can address directly. Usually this is the Graphics Card's own Video Memory (or the shared Memory for embedded Graphics).

We therefore have to send (or at least flag) everything stored in our RAM that we want to draw. This includes, but isn't limited to:

  • Vertices - Describing the Geometry of our Triangles 
  • Indices - Describing the relation between the Vertices
  • Textures - Giving our Triangles some much needed Color
  • "Uniforms" (Sticking with OpenGL terminology) - Small bits of information we might need, like Matrices, Special Parameters and even Lighting

Usually, the process for this is simple. We tell our Graphics Library that we need to reserve some Space on the Graphics Card and then send it the information whenever we please.
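As a point of reference for the snippets below, here is a rough sketch of what a Vertex Format like in_out_vertex_t might contain (the actual layout in the accompanying Code may differ):

// One Vertex: a Position, a Color and Texture Coordinates
struct in_out_vertex_t
{
	float x, y, z;    // Position
	float r, g, b, a; // Color (incl. Alpha for Blending)
	float u, v;       // Texture Coordinates
};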

Let's just take an exemplary look at how we do this in OpenGL and how we wish to do it in our own sGL.

OpenGL         | Description                                                                                 | sGL
glGenBuffers() | Generates a unique Buffer ID the User can identify his Buffer by                           | sGLGenBuffer()
glBindBuffer() | Creates a new (empty) Buffer with the supplied ID or uses an existing Buffer with that ID  | sGLBindBuffer()
glBufferData() | Creates and initializes a Buffer with the supplied Size and Data                           | sGLBufferData()

The OpenGL specification is quite liberal about how these things are implemented. We will be very simplistic with sGL. So simplistic, in fact, that we won't even bother creating a class and will store everything in GLOBALS! HA!

Generating Buffer IDs? We just keep track of the last ID we issued and increment it on each call.

Binding Buffers? We just store the Buffer ID we currently use in some global int. Let's call that g_sGLBoundBuffer.
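In code, both functions are almost embarrassingly short (a sketch; the fixed table of 256 Buffers is an arbitrary simplification):

int g_sGLLastBufferID = 0; // Last Buffer ID we handed out
int g_sGLBoundBuffer  = 0; // The currently bound Buffer

// Our pretend Video Memory: one Vertex array per Buffer ID
in_out_vertex_t * g_sGLVertexBuffer[256];

int sGLGenBuffer()
{
	// Just hand out the next free ID
	return ++g_sGLLastBufferID;
}

void sGLBindBuffer(int bufferID)
{
	// All following Buffer operations refer to this ID
	g_sGLBoundBuffer = bufferID;
}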

Creating and initializing Data? Well, remember the Article on Memory Management? It applies to Video RAM in the exact same way. But let's stay simple and do a dirty new allocation.

void sGLBufferData(int size, in_out_vertex_t * vertexData)
{
	// Allocate "Video Memory" for the currently bound Buffer...
	g_sGLVertexBuffer[g_sGLBoundBuffer] = new in_out_vertex_t[size];

	// ...and copy the User's Vertex Data into it
	for (int i = 0; i < size; ++i)
	{
		g_sGLVertexBuffer[g_sGLBoundBuffer][i] = vertexData[i];
	}
}

Pretty easy if you don't care about Memory Management, huh?
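Putting the three calls together, uploading a single Triangle to our pretend Graphics Card might look like this:

in_out_vertex_t triangle[3] = { /* 3 Vertices */ };

int bufferID = sGLGenBuffer();   // 1. Get an ID
sGLBindBuffer(bufferID);         // 2. Make it the active Buffer
sGLBufferData(3, triangle);      // 3. Copy the Data "onto the Card"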

Now obviously we hard-coded a lot of things. For example, OpenGL allows you to specify your own Data Layout for each Vertex. We stuck to a specific one called in_out_vertex_t. OpenGL also allows you to generate any type of Buffer with the above calls. We stuck to Vertex Buffers.

But that's okay! We are just here to see how it could be done to get a feeling for the broader picture. Creating Index Buffers and Texture Buffers would be quite similar.

But you may have noticed something...

Memory Management?

Yes, OpenGL does not give you the option to specify in detail how things are stored. We can do it now in our own GL, but normally it's all hidden behind those Buffer calls. This can become a problem when moving / deleting / creating large Chunks. Memory Management can become a pain, especially for Textures.

But this will change once Vulkan arrives, and it should be part of DirectX 12 as well.

Shader Code

What exactly the Shaders do will be discussed later. For now it's safe to assume that they are made up of Source Code and are just another Piece of Data that has to reach the Graphics Card. This Shader Code will initially reside in Main RAM until specifically sent and bound using the following functions.

OpenGL                               | Description                                                                           | sGL
glCreateShader()                     | Creates an empty Shader (any Type) and returns a unique ID for us to reference it by | - not needed, our Shader is on the CPU -
glShaderSource() + glCompileShader() | Loads the Source Code onto the Graphics Card and compiles it                         | - not needed, our Shader is compiled with the main Program -
glAttachShader() + glLinkProgram()   | Combines Vertex / Pixel Shaders into one single Shader Program                       | - not needed, we are interestingly more flexible -
glUseProgram()                       | Defines what Shader Program we want to use for our Rendering                         | sGLUseVertexShader() / sGLUsePixelShader()

In our own Implementation all we have is a simple sGLUseXXXShader() function. It does nothing more than store a function pointer to the current Shader we wish to use.

// Some Shader Code
in_out_vertex_t vertexShader(in_out_vertex_t inVertex)
{
	in_out_vertex_t out = inVertex;

	// Do something

	return out;
}

// The currently used Shader
in_out_vertex_t (*g_sGLVertexShader)(in_out_vertex_t);

// And Binding our Vertex Shader
void sGLUseVertexShader(in_out_vertex_t (*inVS)(in_out_vertex_t))
{
	g_sGLVertexShader = inVS;
}

Nothing all too fancy. If you aren't familiar with Function Pointers, that's fine. All that happens is that we store which Shader to use in g_sGLVertexShader.
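Using it is a one-liner. And since Vertex and Pixel Shaders are bound separately, we can mix and match them freely (pixelShader and someOtherPixelShader are hypothetical Shaders defined just like vertexShader above):

sGLUseVertexShader(&vertexShader);
sGLUsePixelShader(&pixelShader);
// ... draw something ...

sGLUsePixelShader(&someOtherPixelShader); // Swap just the Pixel Shader
// ... draw something else ...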

Now the interesting part here is what happens in OpenGL - glShaderSource() and glCompileShader(). In OpenGL, the Shader Code is compiled after it is sent to the Graphics Card. This means you have no direct control over the compile process once your Game has shipped. It's equivalent to sending the source code of your Game to the Player and praying that his Compiler does the exact same optimizations as yours did.

Now, there are ways to send the Shaders pre-compiled in OpenGL (glShaderBinary() comes to mind). But I am not aware of any portable methods - you'd have to create those binaries for each system you wish to support separately. Not fun.

3. Draw Call

Now that we have all the data on our Graphics Card, it's time to draw! What happens first? Obviously, the user has to issue some commands.

He tells the Graphics Card exactly what he wishes to use. He binds his Vertex Buffers, uses his Shader Program, sets his Textures... He also enables any additional things he wishes to use, like Alpha Blending, Depth Testing...

Then he tells OpenGL to Draw! Let's focus on one of the possible draw functions OpenGL provides:

OpenGL           | Description                                                                           | sGL
glDrawElements() | Renders some number of Primitives / Polygons that have been specified in the Buffers | sGLDrawElements()

For us, this will be the main drawing call. How do we implement this?

Well, for one, this draw call specifies via count how many Triangles (or Primitives in general) we actually want to render. So in our function, we will need to loop over count Elements and draw them. For testing purposes, imagine this (wrong) snippet:

void sGLDrawElements(int count)
{
	// We loop over each Triangle we need to draw
	for (int i = 0; i < count; ++i)
	{
		// Connect the 3 Vertices of Triangle i with lines
		drawLine(g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3],
		         g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3 + 1]);

		drawLine(g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3 + 1],
		         g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3 + 2]);

		drawLine(g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3 + 2],
		         g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3]);
	}
}

This would basically draw a Wireframe of our Triangles, if we simply ignore the z-coordinate for now. I'll just leave the above snippet here as a glimpse of how simple all this could be if we were stuck in Battlezone-style Games.
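In case you are wondering about drawLine() itself: it can be built from nothing but setPixel(). Here is a naive sketch using linear interpolation (a real Rasterizer would use something like Bresenham's algorithm; toPixelX() / toPixelY() convert our -1 to +1 coordinates to Pixels, as discussed in the next Section):

#include <algorithm>
#include <cstdlib>

void drawLine(in_out_vertex_t a, in_out_vertex_t b)
{
	// Convert both endpoints from the -1..+1 range to Pixel coordinates
	int x0 = toPixelX(a.x), y0 = toPixelY(a.y);
	int x1 = toPixelX(b.x), y1 = toPixelY(b.y);

	// Step along the longer axis, one Pixel at a time
	int steps = std::max(std::abs(x1 - x0), std::abs(y1 - y0));
	for (int i = 0; i <= steps; ++i)
	{
		float t = (steps == 0) ? 0.0f : (float)i / (float)steps;
		setPixel((int)(x0 + t * (x1 - x0)),
		         (int)(y0 + t * (y1 - y0)),
		         0xFFFFFFFF); // hard-coded white for now
	}
}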

But how do we address the points we want to draw?

The Screen

Let's just assume that our Screen has two sets of coordinates. The set that interests us most is the floating-point range of -1 to +1 (this can be customized, but I like to stick to -1 and +1) in both the x and y direction. These coordinates are pretty ideal for 3D use.

We also have a set of coordinates used to actually address individual Pixels, better known as the Resolution - for example the whole numbers 0 to 1023 across a width of 1024 Pixels and 0 to 767 across a height of 768. These numbers are hard to work with and are only used when we actually need to address discrete Pixels.
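Converting from the -1 to +1 range to Pixel coordinates is a single scale and offset per axis (a sketch, reusing the WIDTH / HEIGHT constants from the SDL snippet earlier):

int toPixelX(float x)
{
	// -1 maps to Pixel 0, +1 maps to the rightmost Pixel
	return (int)((x + 1.0f) * 0.5f * (WIDTH - 1));
}

int toPixelY(float y)
{
	// +1 is the top of the Screen, so we flip the y-axis
	return (int)((1.0f - y) * 0.5f * (HEIGHT - 1));
}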

Let's stick to the -1 to +1 range for our Screen. If we were to draw Wireframes of Vertices with coordinates outside of the -1 and +1 boundaries, they wouldn't be visible. This is known as Clipping - pretty straightforward. It's wise not to waste processing power on useless operations in Areas that have been clipped. For our Wireframe, this means the drawLine function should simply exit once it notices it has left the bounds.
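In its crudest form, that early exit is a trivial-reject test: if both endpoints lie beyond the same Screen border, nothing of the line can be visible (proper line clipping, e.g. Cohen-Sutherland, handles the remaining cases):

// At the very top of drawLine():
if ((a.x < -1.0f && b.x < -1.0f) || (a.x > 1.0f && b.x > 1.0f) ||
    (a.y < -1.0f && b.y < -1.0f) || (a.y > 1.0f && b.y > 1.0f))
	return; // Entirely off-screen - don't waste any work on it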

Just some things to keep in mind for our next Section.

3. Continued - Vertex Shader

Looping through the same list of Vertices and drawing the same Wireframe every frame is rather boring. We want fully textured and moving Triangles! Textured Triangles we will come to later. Let's see how we can move them.

What we want is translation, rotation, scaling or in general, transforming our Triangles. We might also want to apply some Color information to each Vertex or play with the u,v-coordinates. So, why not just pass each Vertex through a custom function that will do those operations for us? 

That's the Vertex Shader. In our little sGL it might look like this:

in_out_vertex_t vertexShader(in_out_vertex_t inVertex)
{
	in_out_vertex_t out = inVertex;

	out.x = out.x * 2.0f;
	out.y = out.y * 2.0f;
	out.z = out.z * 2.0f;

	return out;
}

Such a Vertex Shader would scale each Vertex out from the origin by a factor of 2. Nothing fancy. But if we add it to our sGLDrawElements method, we can ensure that every Vertex being drawn has to go through this Shader! (Recall that we stored a reference to the above function in a function pointer called g_sGLVertexShader.)

void sGLDrawElements(int count)
{
	// We loop over each Triangle we need to draw
	for (int i = 0; i < count; ++i)
	{
		in_out_pixel_t resultTriangle[3];

		// First, we apply the Vertex Shader to each of the 3 Vertices
		for (int j = 0; j < 3; ++j)
		{
			in_out_vertex_t currentVertex;

			currentVertex =
				g_sGLVertexShader(g_sGLVertexBuffer[g_sGLBoundBuffer][i * 3 + j]);

			resultTriangle[j] = convertTo_in_out_pixel_t(currentVertex);
		}

		// Do other Stuff with the resultTriangle here,
		// like drawing the Wireframe
	}
}

We store the Triangle that has passed through the Vertex Shader as resultTriangle, containing the 3 transformed Vertices. For future use, resultTriangle has a different Format that we won't bother with now.

With this System in place we just need to alter the Vertex Shader source code to play with every Vertex at once. Neat.

World, View, Projection

(Note: I'll just give a brief overview of these concepts here, the concrete details are mostly technical and not too interesting for us at this moment, as we are developing the framework to render Graphics. Here is a nicely formatted Article that goes into more details) 

We remember that large memory operations are expensive. So instead of adding or multiplying a small number to every Vertex in our Buffer just to move the Scene, we can simply do this little operation in the Vertex Shader, without ever changing the data in memory!

This Transformation is usually called World Transformation. If you want to alter the size, position or rotation of the World on the fly, it's easiest done in the Vertex Shader.
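In our sGL, "Uniforms" can literally be globals that the Shader reads. A World Transformation that rotates and moves everything might look like this (a sketch; the g_world* variables are my own stand-ins for proper Uniforms and Matrices):

#include <cmath>

// Our "Uniforms" - small Parameters the Shader can read every frame
float g_worldAngle   = 0.0f;
float g_worldOffsetX = 0.0f;
float g_worldOffsetY = 0.0f;

in_out_vertex_t worldVertexShader(in_out_vertex_t inVertex)
{
	in_out_vertex_t out = inVertex;

	// Rotate around the origin...
	float s = std::sin(g_worldAngle);
	float c = std::cos(g_worldAngle);
	out.x = inVertex.x * c - inVertex.y * s;
	out.y = inVertex.x * s + inVertex.y * c;

	// ...then translate. The Vertex Buffer in memory never changes!
	out.x += g_worldOffsetX;
	out.y += g_worldOffsetY;

	return out;
}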

Now, an ever-changing World is cool enough, but we will most likely still see the same thing on the Screen the whole time. After all, we are still rendering the same Screen space (-1 to +1 for x and y coordinates) as before. We only render something interesting if its position happens to be inside the bounds of our Screen coordinate system.

Artificially changing the screen coordinate system to see more of the world would be a strange thing to do (but not unheard of). So instead, why don't we keep shifting the World so that the interesting bits happen to be in our -1 to +1 range? This is called a View Transformation.

Now, we have ignored the z-coordinate thus far. Once we activate that, we suddenly get the notion of "front" and "back". We will simply define everything with a positive z coordinate as being in front of us and everything with a negative z-coordinate as being behind us.

Everything with a negative z-coordinate will simply be ignored when we draw.
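In sGLDrawElements this can be a simple rejection right after the Vertex Shader has run (a crude sketch - a real Pipeline clips against a near plane instead of discarding whole Triangles):

// Skip this Triangle entirely if all 3 Vertices are behind us
if (resultTriangle[0].z < 0.0f &&
    resultTriangle[1].z < 0.0f &&
    resultTriangle[2].z < 0.0f)
{
	continue;
}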

But what if we want to see what is behind us? Well, we can translate it until it is in front of us or we rotate it.

Boom - we have a Camera that we can move in our World. And all this is done in our Vertex Shader! 

But wait. Something seems off... it's all... flat. There seems to be no depth, even though we have activated our z-coordinate!

The simple fact remains: our flat 2D Screens know nothing about depth. We have to create depth artificially. What we have been doing (unintentionally) thus far is known as orthographic projection.
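To make "orthographic" concrete: the Screen position of a Vertex is computed without any influence from z. The perspective projection we actually want divides by z, so distant Vertices move toward the center and shrink (a bare-bones sketch, ignoring details like the field of view and the near plane):

// Orthographic (what we have been doing): z plays no role for the position
screenX = v.x;
screenY = v.y;

// Perspective: divide by the distance - this is what creates depth
screenX = v.x / v.z;
screenY = v.y / v.z;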
