
Block Tanks: Procedural Dynamism in Unity

The development process behind starting a fully dynamic, voxel-based toybox tank game in Unity, and the technical challenges that have arisen so far.

Trent Polack, Blogger

July 5, 2014

7 Min Read

As I'm finishing up SPACE COLORS, I decided to take a break for a bit and start on another new project. The project itself is, basically, a voxel-based turn-based tank combat game (far more light-hearted/streamlined/simplistic than the glut of tank battle games on mobile right now).

Pretty immediately, I decided on the art style that I wanted for the game: bright, dynamic, colorful toybox destruction that can work on mobile devices. Like a lot of my side-projects, I wanted the game to be something that I could develop, design, and do (most of) the art for entirely myself. When I start these projects, I tend to have a pretty absolute focus on them, and I find I'm the most productive when I don't have any dependencies to worry about other than my own task list. And, given that these side-projects are worked on entirely in my spare time, I like to be able to make demonstrable progress on some feature every night, whether I'm able to spend an hour or five. And, like all of my projects, I'm still using Unity because, at this point, it's just a completely natural environment for me to work within. It also affords me a lot of opportunities to get "temp" features/art/objects into a game before I can get around to a proper implementation.

So far, the brunt of the work has gone into just writing what I'm calling the "build engine": all of the procedural backend for generating all of the meshes in the game. Which ended up being... slightly more complicated than I originally expected. It's also been a whole lot of fun. For anyone interested (and since I feel like going through the technical details of the thing), here's the kind of stuff this backend has to deal with:

  • Generating all of the cubes that make up all of the objects in the game (the water, the terrain, the tanks, etc.) 

  • Having all of these cubes exist as "guiding objects" that largely just define the 3D mesh until they're acted upon (at which point they're temporarily turned into rigidbody actors that bounce around and have effects attached to them). If any block moves outside of its acceptable threshold from its original position (which is just set to one world unit, i.e., the size of a block), then it's destroyed once its rigidbody goes to sleep. A rough sketch of this behavior follows the list.

  • Building as big a mesh out of all of these guiding objects as I can fit into Unity's Mesh class (which tops out at roughly 65,000 vertices, thanks to its 16-bit index buffer).

  • Each mesh is split apart by material and uses a single shared material (for performance reasons), so any change to that material affects the entire mesh. That means I can't do the kind of stuff I did in, say, SPACE COLORS, where I flash an enemy white whenever it takes a hit.

  • Whenever anything changes in the game geometry, the mesh has to be regenerated in as quick and low-impact a way as possible. So, say there are three tanks in the game and each tank is made up of four different materials. That's going to create, at the very least, four different meshes, which means that a single mesh may contain just the tank treads for all three tanks. So, if any of those tanks moves even a little bit, that mesh needs to be regenerated.

    • The bitch of this is that, since the dynamism of the game can put a lot of objects in motion, the mesh regeneration needs to be as localized as possible. So every single block has a reference to its mesh filter and a list of vertex indices, which means that whenever a block changes position, I can very specifically target the vertices that need to be updated. This is still a non-trivial performance cost, since the entire mesh then has to apply that change and re-optimize itself, but it's not a dealbreaker (even on mobile devices) until you have six or seven meshes that are radically changing all at once. Which is why I also have to batch mesh changes as efficiently as possible, so that each individual change doesn't regenerate a mesh; instead, a group of changes gets lumped into a single mesh regeneration. The batching is also sketched after the list.

  • To get the water effect, I generate a flat 128x128 field of cubes. I originally thought that I could then just modulate their y positions for some blocky water effects, but that ends up being 16,384 transform operations per frame, plus about four or five mesh updates that have to be performed. Even spreading the cube updates over several frames didn't really do much for performance.

    • So, to get the effect of animating water, I resorted to a vertex shader that modulates the y value of each vertex on the GPU and requires no CPU work whatsoever. This worked, but it also generated a completely smooth plane of water, since it was operating per-vertex, not per-cube.

    • To get the cube-based effect, I basically hijack a second set of texture coordinates passed into the vertex shader; they only store each cube's distance from the origin. Then, in the shader, I use that texture coordinate value to drive a sine wave applied to the water field, which, since every vertex of a given cube carries the same value, ends up creating the water-cube effect that I wanted. The CPU side of this trick is sketched after the list, too.
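To make the guiding-object bullet above a bit more concrete, here's a minimal sketch of that behavior. The names (BuildBlock, KnockLoose, restPosition) are placeholders for illustration, not the actual classes in the project:

    using UnityEngine;

    // Placeholder sketch of a "guiding object" block: it normally just contributes
    // geometry to a shared mesh, but when something hits it, it temporarily becomes
    // a rigidbody actor and destroys itself if it settles too far from home.
    public class BuildBlock : MonoBehaviour
    {
        const float MaxDriftFromRest = 1.0f;   // one world unit, i.e. the size of a block

        Vector3 restPosition;                  // where the block belongs in the mesh
        Rigidbody body;                        // only exists while the block is "live"

        void Awake()
        {
            restPosition = transform.position;
        }

        // Called by whatever detects the explosion/impact.
        public void KnockLoose(Vector3 impulse)
        {
            if (body == null)
                body = gameObject.AddComponent<Rigidbody>();
            body.AddForce(impulse, ForceMode.Impulse);
        }

        void FixedUpdate()
        {
            // Wait until the physics sim has settled down before deciding anything.
            if (body == null || !body.IsSleeping())
                return;

            if (Vector3.Distance(transform.position, restPosition) > MaxDriftFromRest)
            {
                Destroy(gameObject);           // drifted too far: the block is gone
            }
            else
            {
                Destroy(body);                 // close enough: back to being static geometry
                body = null;
            }
        }
    }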
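And here's a rough sketch of the localized, batched mesh update idea. Again, the types (BlockPatch, MeshPatcher) are placeholders; the point is just that every moved block queues the vertex indices it owns, and each affected mesh gets rewritten once per frame no matter how many blocks changed:

    using System.Collections.Generic;
    using UnityEngine;

    // Placeholder sketch: each moved block queues a patch (which MeshFilter it lives
    // in, which vertex slots it owns, and where those vertices should be now), and
    // all queued patches are applied in one pass at the end of the frame.
    public struct BlockPatch
    {
        public MeshFilter Filter;
        public int[] VertexIndices;     // the slots this block owns in the shared mesh
        public Vector3[] NewPositions;  // same length as VertexIndices
    }

    public class MeshPatcher : MonoBehaviour
    {
        readonly List<BlockPatch> pending = new List<BlockPatch>();

        public void Queue(BlockPatch patch)
        {
            pending.Add(patch);         // don't touch the mesh yet, just remember the change
        }

        void LateUpdate()
        {
            if (pending.Count == 0)
                return;

            // Gather every affected mesh's vertex array once...
            var touched = new Dictionary<Mesh, Vector3[]>();
            foreach (BlockPatch patch in pending)
            {
                Mesh mesh = patch.Filter.mesh;
                Vector3[] verts;
                if (!touched.TryGetValue(mesh, out verts))
                {
                    verts = mesh.vertices;
                    touched.Add(mesh, verts);
                }
                for (int i = 0; i < patch.VertexIndices.Length; i++)
                    verts[patch.VertexIndices[i]] = patch.NewPositions[i];
            }

            // ...then push each array back exactly once.
            foreach (KeyValuePair<Mesh, Vector3[]> pair in touched)
            {
                pair.Key.vertices = pair.Value;
                pair.Key.RecalculateBounds();
            }

            pending.Clear();
        }
    }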
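The CPU half of the water trick is pretty small, too. It amounts to something like the sketch below; the shader side just reads TEXCOORD1 and offsets y with a sine of time plus that stored distance. The helper name and the assumption of a fixed number of vertices per cube are mine, not the project's actual code:

    using System.Collections.Generic;
    using UnityEngine;

    // Placeholder sketch of hijacking the second UV channel: every vertex of a given
    // cube stores the same value (the cube's distance from the origin), so a vertex
    // shader can bob the whole cube as one rigid block instead of smoothly deforming
    // the surface.
    public static class WaterFieldBuilder
    {
        public static void FillDistanceChannel(Mesh mesh, List<Vector3> cubeCenters, int vertsPerCube)
        {
            var uv2 = new Vector2[mesh.vertexCount];
            int v = 0;

            foreach (Vector3 center in cubeCenters)
            {
                // Distance from the origin on the horizontal plane; the shader
                // offsets y by something like sin(_Time.y + distance).
                float distance = new Vector2(center.x, center.z).magnitude;

                for (int i = 0; i < vertsPerCube; i++, v++)
                    uv2[v] = new Vector2(distance, 0f);
            }

            mesh.uv2 = uv2;  // shows up in the shader as TEXCOORD1
        }
    }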

And now, a GIF of the water effect: 

 

And for a more general shot of the minimal gameplay of one tank blowing up another tank:

It's kind of amazing how much work goes into producing something with such a simple style. That scene, for instance, has approximately 3 million triangles and 5.5 million vertices (though that shrinks to 1.1M tris/2.1M verts when I disable shadows), and performance is about 40-50fps on my iPhone and 80-90 in-editor. But I still successfully hit the goal of everything being completely destructible and dynamic on mobile devices with reasonable performance. There are still oodles of performance optimizations I can make, though; for instance, despite getting the number of draw calls down from the initial 15,000 (or something obscene like that) to 40, the meshes themselves are still drawing every face of every cube. Future optimizations would be to generate meshes that ignore unseen faces, as well as to generate larger faces for contiguous surfaces. A quick sketch of the unseen-face check follows.
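The core of the "ignore unseen faces" optimization is just a neighbor check per face before emitting any geometry, roughly like this (with "occupied" standing in for whatever grid lookup the build engine ends up using; the class and method names here are placeholders):

    // Placeholder sketch of skipping hidden faces: a cube face only needs geometry
    // if the neighboring grid cell is empty (or off the edge of the grid).
    public static class FaceCulling
    {
        static readonly int[,] Neighbors =
        {
            { 1, 0, 0 }, { -1, 0, 0 },
            { 0, 1, 0 }, { 0, -1, 0 },
            { 0, 0, 1 }, { 0, 0, -1 },
        };

        public static int CountVisibleFaces(bool[,,] occupied, int x, int y, int z)
        {
            int visible = 0;
            for (int i = 0; i < 6; i++)
            {
                int nx = x + Neighbors[i, 0];
                int ny = y + Neighbors[i, 1];
                int nz = z + Neighbors[i, 2];

                bool outsideGrid =
                    nx < 0 || ny < 0 || nz < 0 ||
                    nx >= occupied.GetLength(0) ||
                    ny >= occupied.GetLength(1) ||
                    nz >= occupied.GetLength(2);

                if (outsideGrid || !occupied[nx, ny, nz])
                    visible++;   // nothing covering this face, so it gets geometry
            }
            return visible;
        }
    }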

Given that I spend most of my job doing far more production-oriented and creatively-oriented work, it's really a lot of fun to actually get back to a more holistic view of game development and remember just how complicated all of the more nitty-gritty details can get. And I think being able to not only recognize that, but actively practice it is one of the things that consistently makes me a better game developer. Well, I hope, anyway. 
