
Mini-postmortem #2: Basic Shadow Mapping

In my second "mini-postmortem," I go over what I did right and what I did wrong while implementing basic shadow mapping with DirectX 9.

Gustavo Samour, Blogger

June 1, 2011


Intro

In the previous blog entry I mentioned how I made a Soft Particles demo as part of my job search. I think that's the only piece of code I worked on during that time, because finding a job is a job in itself: most of my time went into writing cover letters, editing resumes, solving programming tests, and preparing for interviews. After getting hired again, I felt re-energized and wanted to code more demos, so I wrote a simple water shader sample and a basic shadow mapping sample. For my second entry in this series, I decided to share my experience with shadow mapping.


What Went Wrong


1.  Not fixing previous mistakes before starting this demo. As much as I believe in code reuse, I also believe there is code that shouldn't be reused. In the Soft Particles project, I had some code that didn't scale well. That code was copied and became my Water Rendering project, which later became my Shadow Mapping project. All the code that slowed me down during that simple first project slowed me down even more for the other, more complex projects.

2. Keeping my previous demo's scene setup. Trying to get shadows working on a scene with multiple objects already in it proved to be difficult. Not all of the objects had the same materials, so I had to add the "shadow receiver" code to the different shaders. At one point I had shadows working for some objects, but not others. Finding bugs meant going through every shader. An incremental approach would have worked better (and it did, as I mention in the "what went right" section).
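Looking back, most of that per-shader duplication boils down to binding the same two shadow-receiver inputs to every material's effect. Here is a minimal sketch of factoring that into one helper; the helper and the parameter names (g_LightViewProj, g_ShadowMap) are hypothetical, not from the original demo, and assume every material's *.fx file declares those two parameters:

#include <d3dx9.h>

// Hypothetical helper: bind the shadow-receiver inputs that every material's
// effect expects under the same parameter names, so the setup lives in one
// place instead of being pasted into each material's draw code.
// Assumes each *.fx file declares:
//   float4x4 g_LightViewProj;  // world -> light clip space
//   texture  g_ShadowMap;      // depth rendered from the light's point of view
void BindShadowReceiver(ID3DXEffect*       pEffect,
                        const D3DXMATRIX&  lightViewProj,
                        IDirect3DTexture9* pShadowMap)
{
    pEffect->SetMatrix("g_LightViewProj", &lightViewProj);
    pEffect->SetTexture("g_ShadowMap", pShadowMap);
}

Each receiving shader still does its own depth comparison, but at least the plumbing is identical everywhere, so a bug hunt starts in one place instead of in every shader.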

3. Using a single vertex declaration (D3DVERTEXELEMENT9 array) for all geometry. All my vertex buffers had the same attributes, so I set a single vertex declaration during my renderer's initialization and forgot about it for the rest of the program (I never set it when drawing). This worked fine with the geometry I generated in code, like planes, boxes, and even terrain. But later I wanted to add meshes, and the fastest way for me was to use *.X files with the ID3DXMesh interface. When you draw one of these, it sets its own vertex declaration, so I got an error when my program tried to draw my other geometry using the mesh's vertex declaration. Fixing it was no big deal, but I'm glad I got this error: it made me think about situations when I could use more than one vertex declaration. For example, some 3D models may require normals for rendering while others don't.
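For reference, here is roughly what handling more than one vertex declaration looks like in D3D9. The layouts below are illustrative, not the exact ones from the demo; the point is to create the declarations once but call SetVertexDeclaration per draw, since something like ID3DXMesh::DrawSubset may have replaced the current declaration behind your back:

#include <d3d9.h>

// Illustrative layouts: one for geometry that needs normals (lit, shadowed
// terrain and boxes) and one for geometry that doesn't (e.g. debug lines).
const D3DVERTEXELEMENT9 declLit[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

const D3DVERTEXELEMENT9 declDebug[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* g_pDeclLit   = NULL;  // created once at init via
IDirect3DVertexDeclaration9* g_pDeclDebug = NULL;  // device->CreateVertexDeclaration()

void DrawLitGeometry(IDirect3DDevice9* pDevice)
{
    // Set the declaration every time we draw, not just at init, because a
    // previous ID3DXMesh::DrawSubset call may have left its own declaration set.
    pDevice->SetVertexDeclaration(g_pDeclLit);
    // ... SetStreamSource / SetIndices / DrawIndexedPrimitive as usual ...
}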


What Went Right


1. Simplifying the demo. At one point during development, things weren't working out and I was starting to get a little frustrated. It was then that I decided to cut a few things from the demo temporarily. This helped a lot. Instead of having things "work for some objects and not others", shadows "either worked or they didn't" for the one or two objects I had left in my simplified scene. This allowed me to find the problem quickly, and when the fix was applied, it still worked when I brought back the other objects. Bringing back objects incrementally was also a good decision, to make sure I focused on one thing at a time.

2. Debug drawing. Visual aids are as important in the digital world as in the physical world. In this demo, that meant drawing the light's view frustum. Looking at its position, shape, and size helped me tweak values to get a good shadow map resolution and to make sure all shadow casters were inside it. That's one benefit of debug drawing, but there are plenty of other uses (a small sketch of recovering the frustum corners for drawing follows this list):

- Visualizing the radius of a point light or the cone of a spotlight
- Checking bounding volumes for scene nodes or for collision purposes
- Viewing the skeleton of an animated mesh
- Drawing in wireframe to see whether occluded objects are being drawn
- Outputting normals in the pixel shader to see if they point in the right direction
- Testing whether a texture needs more or fewer mipmap levels
- ...and so on
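As for the frustum itself, one way to get corners to draw is to push the corners of the clip-space box back through the inverse of the light's view-projection matrix. A minimal sketch with D3DX math, assuming a standard D3D projection with z in [0, 1]; the function name is hypothetical:

#include <d3dx9.h>

// Hypothetical helper: recover the eight world-space corners of a light's
// frustum so they can be drawn as debug lines (e.g. with D3DPT_LINELIST).
void GetFrustumCorners(const D3DXMATRIX& lightView,
                       const D3DXMATRIX& lightProj,
                       D3DXVECTOR3 outCorners[8])
{
    D3DXMATRIX viewProj = lightView * lightProj;
    D3DXMATRIX invViewProj;
    D3DXMatrixInverse(&invViewProj, NULL, &viewProj);

    // Clip-space box corners: near plane first, then far plane (z in [0, 1]).
    const D3DXVECTOR3 ndc[8] =
    {
        D3DXVECTOR3(-1, -1, 0), D3DXVECTOR3( 1, -1, 0),
        D3DXVECTOR3( 1,  1, 0), D3DXVECTOR3(-1,  1, 0),
        D3DXVECTOR3(-1, -1, 1), D3DXVECTOR3( 1, -1, 1),
        D3DXVECTOR3( 1,  1, 1), D3DXVECTOR3(-1,  1, 1),
    };

    for (int i = 0; i < 8; ++i)
    {
        // TransformCoord applies the matrix and does the perspective divide.
        D3DXVec3TransformCoord(&outCorners[i], &ndc[i], &invViewProj);
    }
}

Connecting those corners with line primitives puts the frustum on screen, and from there it's easy to eyeball whether every shadow caster fits inside it and how much of the shadow map is being wasted.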

3. Using lunch time to code. This worked great because I knew that, almost every day, I had about an hour for a short coding sprint. It kept me focused on achieving one or two things each day, and if something wasn't going well, I had the afternoon, evening, and next morning as "breaks" in between. This doesn't apply to every project, but for small hobby projects I think it works very well. It also meant I could relax when I got home, watch TV, and hang out with my wife, instead of using that time for coding.


Conclusion


Although it's tempting to jump straight into a new project, it's a good idea to do some "pre-production" work first. If you have issues that you think may hurt you later, take care of them before starting the new task. Also, if something isn't working out, simplify the problem as much as you can; on some projects it's okay to simplify later, but on others it's better to simplify from the start. Use visual aids when possible: debug drawing can help you find problems and tweak values. Last but not least, find a good time span to work in. Make sure you have enough time to get things done, but having just enough time keeps you focused and leaves room for everything else in your life.




Screenshot of the shadow demo: mesh shadow on terrain (Tiger mesh and texture taken from the Microsoft DirectX SDK)
