
XNA-to-Unity Part 2: Rendering

The XNA-to-Unity series continues by focusing on rendering issues you may encounter.

Luke Schneider, Blogger

September 19, 2012

9 Min Read

[Note: There are plenty of 2D toolkits for Unity.  They probably solve the same problems I'm talking about, so you might not need to worry about this stuff specifically, but it's usually better to know how something works than not.  If you're doing custom 3D models, this will also still be relevant.]

Last time I covered the general setup for the XNA-to-Unity transition.  This time I’m going to delve into details of the most difficult system to port: Rendering.  Some of the problems I encountered are specific to my methods of rendering, but you’ll likely run into many of the same problems.

The important issues I had to solve for rendering included the following:

1) Rendering Lots of Dynamic Polygons: This seems like a given for XNA (I never thought about it there), but figuring out a workable custom mesh system* in Unity is not simple.  As with pretty much all of these rendering systems, my exact approach may not be the best one, but I know it works, so I haven’t messed with it much.

[*You can use custom OpenGL code for PC/Mac, but not other platforms.]

2) Sprite Sheets (or texture atlases): I didn’t use them at all for my XNA games because I didn’t need to, so I had nothing to start from.  In addition to needing to combine all my assets into a sprite sheet, I also had to get the sprite sheet data into the game, and deal with rendering artifacts that sprite sheets caused.  And adding new images or making changes to images is fairly common, so making this a relatively simple process is important.

3) Fonts: I used SpriteFont for XNA, and I render a lot of text mixed in with other images, so I wanted something that would mix easily with whatever I used for sprite sheets.

4) Full Screen Shaders: I used two major ones for XNA, and they are both important to the visual style in most of my games: Bloom and a custom reflective/refractive distortion shader.  XNA and Unity have fairly different systems for doing shaders, and I only barely knew what I was doing with the XNA ones, so this was a big struggle.  At the same time, it’s not a particularly useful thing for most people so I will just skip the details.

So those are some pretty major issues, and I had to solve 1, 2, and 3 (at least somewhat) for Super Crossfire before the game would even render.

Render Layers: My Dynamic Custom Mesh System

As I mentioned in XNA-to-Unity Part 1, I have multiple Render Layers in most of my games.  A Render Layer is how I manage custom meshes in Unity.  Essentially what happens is this:

During the LateUpdate of each frame in my UnityManager script, the RGame’s Draw function is called.  Inside the Draw function:

1) Some variables for a Render Layer are initialized by a function called StartDrawing, including some local variables that mirror what will be copied (later) to the actual Unity mesh component.  These variables include vert positions, colors, and UVs, but *not* the triangle vert references.  (I’ll come back to this.)

2) A bunch of quads are “drawn”, which means the local variables for positions, colors, and UVs are set to whatever the game wants to draw for this render layer.  This is where all my custom drawing functions, like DrawQuadUI and DrawStringAlignCenter and DrawLineStrip, are called (there are about 50 different drawing functions that I’ve built up over time).  The number of quads drawn is also tracked (everything is drawn as a quad for me, even if it’s only a triangle, which is almost never).

3) The EndDrawing function is called for the current RenderLayer.  This is where the *MAGIC* happens.  By MAGIC, I mean some funky stuff that reduces the amount of copying of data to the actual Unity mesh (and therefore generally speeds up drawing), but which might not be the best way to do it (see below).  Note the actual copying of the Render Layer variables’ data to the Mesh does not happen yet.

4) Steps 1-3 are called for every RenderLayer in the game until drawing is complete.  For Super Crossfire there’s only 1 layer, Slydris has 3, and for Inferno+ there are 5 (1 of which uses a Reflective/Refractive shader).  That’s also the number of draw calls each game makes on iOS (none of the iOS games use full-screen shaders, which add more draw calls and are quite expensive).
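In code, steps 1 through 3 (and the step-4 loop) come down to a skeleton like this one (a minimal sketch, not my exact code; the pieces it calls are sketched in the sections below):

    public class RGame
    {
        RenderLayer[] layers;

        public void Draw()
        {
            // Step 4: repeat steps 1-3 for every Render Layer.
            foreach (RenderLayer layer in layers)
            {
                layer.StartDrawing();   // step 1: reset the layer's quad count
                DrawEverything(layer);  // step 2: DrawQuadUI, DrawStringAlignCenter, ...
                layer.EndDrawing();     // step 3: the *MAGIC*
            }
        }

        void DrawEverything(RenderLayer layer)
        {
            // game-specific calls to the ~50 custom drawing functions
        }
    }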

Ok, so that’s the major part of the rendering: creating the position, UV, and color data for each mesh (one per Render Layer).  But the data isn’t in the actual meshes yet.

After the RGame draw function, I loop through each Render Layer in the LateUpdate function, and call the DrawFinal function on the RenderLayer.  The DrawFinal function copies over the render layer variables into “mesh.vertices”, “mesh.uv”, and “mesh.colors”.
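In sketch form (again, illustrative names rather than my exact code), the UnityManager/RenderLayer side looks like this:

    using UnityEngine;

    public class UnityManager : MonoBehaviour
    {
        RGame game;
        RenderLayer[] layers;

        void LateUpdate()
        {
            game.Draw(); // steps 1-4: stage this frame's data

            // Now copy the staged data into the real Unity meshes.
            foreach (RenderLayer layer in layers)
                layer.DrawFinal();
        }
    }

    public class RenderLayer
    {
        public Mesh mesh;

        // Maximum-sized staging arrays the drawing functions write into (step 2).
        public Vector3[] verts;
        public Vector2[] uvs;
        public Color[] colors;
        public int quadCount;

        // Closest-sized arrays that EndDrawing picks out (the "MAGIC" -- see below).
        public Vector3[] finalVerts;
        public Vector2[] finalUVs;
        public Color[] finalColors;
        public int[] finalTriangles;
        public bool meshDirty;

        public void StartDrawing()
        {
            quadCount = 0; // step 1: reset for this frame
        }

        public void DrawFinal()
        {
            if (meshDirty) // the mesh size changed this frame
            {
                mesh.Clear();
                mesh.vertices = finalVerts;
                mesh.uv = finalUVs;
                mesh.colors = finalColors;
                mesh.triangles = finalTriangles; // the constant index data
                meshDirty = false;
            }
            else // common case: same size as last frame, so skip the Clear
            {
                mesh.vertices = finalVerts;
                mesh.uv = finalUVs;
                mesh.colors = finalColors;
            }
        }
    }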

Back in step #3 above, you’ll recall I did some “MAGIC”.  That is because I try to avoid also calling “mesh.Clear()” and copying triangle data into “mesh.triangles” because that triangle data is essentially always the same.  This code is called at the start of the game, and never again:

Roughly, it looks like this (a reconstruction rather than the exact code; since mesh.triangles has to match the mesh’s current vertex count, the sketch builds one index array per possible mesh size):
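    // Build the constant index data once, at startup.  With 6 sequential
    // vertices per quad, the indices are simply 0,1,2,3,4,5,...
    int[][] triBuckets; // one index array per possible mesh size (128-quad steps)

    public void InitTriangles(int maxBuckets) // maxBuckets * 128 = max quads
    {
        triBuckets = new int[maxBuckets + 1][];
        for (int b = 0; b <= maxBuckets; b++)
        {
            int[] tris = new int[b * 128 * 6];
            for (int i = 0; i < tris.Length; i++)
                tris[i] = i; // every 6 indices form one quad (two triangles)
            triBuckets[b] = tris;
        }
    }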

Every 6 vertices is always a quad.  It doesn’t have to be for your game, but it is for me for simplicity, and it means I only have to copy the triangle data when I do a mesh.Clear().  I also never have to see that two-triangles-make-a-quad code in all my custom drawing code (the 50 different functions mentioned above).
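For instance, a stripped-down quad writer might look like this (a sketch with illustrative names; the real drawing functions add sprite lookup, alignment, and so on):

    public void DrawQuad(Vector3 tl, Vector3 tr, Vector3 br, Vector3 bl,
                         Vector2 uvTL, Vector2 uvTR, Vector2 uvBR, Vector2 uvBL,
                         Color color)
    {
        int v = quadCount * 6;

        // Six sequential vertices per quad, so the constant index data
        // (0,1,2,3,4,5,...) always lines up and no triangle code appears here.
        verts[v + 0] = tl; verts[v + 1] = tr; verts[v + 2] = br; // triangle 1
        verts[v + 3] = br; verts[v + 4] = bl; verts[v + 5] = tl; // triangle 2

        uvs[v + 0] = uvTL; uvs[v + 1] = uvTR; uvs[v + 2] = uvBR;
        uvs[v + 3] = uvBR; uvs[v + 4] = uvBL; uvs[v + 5] = uvTL;

        for (int i = 0; i < 6; i++)
            colors[v + i] = color;

        quadCount++;
    }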

If you could see the quad count for my games, you would also notice another side effect of the “MAGIC”: the count jumps by a fixed amount whenever it changes.  My games are either drawing 1024 quads, or 1152, or 1280, or some other number that is a multiple of 128.  Let’s rewind a bit and figure out why…

Drawing With Custom Meshes in Unity

To draw with custom meshes in Unity, you need to create arrays for positions and UVs (and colors) that will be copied into the mesh itself.  In code you fill those arrays with your data, and copy that data into the meshes.  Not too difficult by itself, but there are complications to consider.
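The mesh itself only has to be created once per Render Layer, something like this sketch (continuing the RenderLayer sketch above; layerMaterial is an illustrative name for whatever material/shader the layer renders with):

    // One-time creation of a Render Layer's mesh.
    void CreateMesh(Material layerMaterial)
    {
        GameObject go = new GameObject("RenderLayer");
        go.AddComponent<MeshRenderer>().material = layerMaterial;
        mesh = go.AddComponent<MeshFilter>().mesh;
    }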

(Note: You should try not to change colors because it slows things down a little, but I rely on modifying colors a lot for drawing, so I’ll keep mentioning them.)

Whatever size your (UV, position, and color) arrays are is the size the mesh will be.  So if your arrays have data for 64 quads, that’s how many quads the mesh will have.  The problem is you never know how big the arrays will need to be until you’re done drawing each frame, and even then the range will vary a lot.  One frame you’ll need 180 quads, the next you’ll need 520.

At first, I just guessed a maximum I’d need for Super Crossfire, and went with that (it was about 10,000 quads).  That worked fine on the early PC build, even when I had to keep increasing it, but it’s a good way to have a consistently bad framerate on iOS (or low-end PCs).

So you need some way for the arrays to be sized dynamically before you copy the data into the actual meshes.  Here are a few ways you could do this:

1) Do an Array.Resize() on the arrays when you want them to be bigger.  Since doing so is somewhat expensive, you’ll want to limit the number of times you do it.  A good way to do that is to grow by a larger-than-necessary amount, say 128 quads at a time (see the sketch after this list).  [This is the method I saw in the old version of SpriteManager, which helped me understand the need to dynamically resize the arrays.]

2) Loop through your draw code twice: once to count the number of quads (or whatever) you need to draw, then initialize the arrays to that size and draw again (for real).  [You'd still want to use array sizes that are rounded to the nearest 64 or 128 so you don't have to call mesh.Clear() every frame.]

3) Keep a bunch of jagged arrays (an array of arrays with increasing sizes), and draw into a maximum-sized array.  Then copy what you drew in the maximum-sized arrays to the closest-sized arrays in your jagged set.

4) Draw to very-large arrays like in #3, then initialize a second set of arrays with the desired size and copy the data to those new arrays.  You could even keep the second set of arrays around and only re-initialize them when the size changes enough to force a mesh.Clear() anyway.
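As a concrete example, option #1 might look like this sketch (GrowTo and QuadStep are illustrative names, not from an actual library):

    // Option #1 sketch: grow the arrays in 128-quad steps so the relatively
    // expensive Array.Resize calls happen rarely.
    const int QuadStep = 128;

    void GrowTo(int neededQuads)
    {
        int roundedQuads = ((neededQuads + QuadStep - 1) / QuadStep) * QuadStep;
        if (roundedQuads * 6 <= verts.Length)
            return; // already big enough

        System.Array.Resize(ref verts, roundedQuads * 6);
        System.Array.Resize(ref uvs, roundedQuads * 6);
        System.Array.Resize(ref colors, roundedQuads * 6);
        meshDirty = true; // new size, so the next DrawFinal needs a mesh.Clear()
    }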

I do #3 in my games because I’m averse to dynamically initializing large chunks of memory every frame, because it keeps my game-drawing code free of clutter (the EndDrawing function is somewhat complex, but you only have to write it once), and because I didn’t think of #4 until I was writing this article.

I’m not sure if the others are faster or slower in any significant way, but now that I’ve thought of #4, I wish I’d tried it, and I may do so in the future: it will probably take a lot less memory overall while allowing for a larger maximum size, and it still has the same benefit as #3 of not changing any real part of the game’s drawing code (#1 and #2 both require incorporating how the system works into the RGame class).

All of that is just dealing with one Render Layer, but once you get it working, the same principles work for all the Render Layers.  Each Render Layer can have its own step size and maximum size (if you choose to have those).

The “MAGIC” mentioned far above is basically doing the grunt work of figuring out which of the jagged arrays to copy the data into (#3 above), and also setting the rest of the data (quads 1007-1024 if I’m drawing 1006 quads, for example) to 0 so nothing extra draws.
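Put together, a simplified EndDrawing reads something like this (a sketch continuing the RenderLayer fields above; the bucket arrays are pre-allocated at startup, one per 128-quad step, just like triBuckets):

    Vector3[][] vertBuckets;  // the jagged arrays from option #3
    Vector2[][] uvBuckets;
    Color[][] colorBuckets;
    int lastBucket = -1;

    public void EndDrawing()
    {
        // Pick the smallest pre-allocated bucket that fits this frame's quads.
        int bucket = (quadCount + 127) / 128;

        finalVerts = vertBuckets[bucket];
        finalUVs = uvBuckets[bucket];
        finalColors = colorBuckets[bucket];
        finalTriangles = triBuckets[bucket];

        // Copy only what was actually drawn...
        System.Array.Copy(verts, finalVerts, quadCount * 6);
        System.Array.Copy(uvs, finalUVs, quadCount * 6);
        System.Array.Copy(colors, finalColors, quadCount * 6);

        // ...and zero the unused tail so those quads are degenerate and draw nothing.
        for (int i = quadCount * 6; i < finalVerts.Length; i++)
            finalVerts[i] = Vector3.zero;

        // A different bucket than last frame means the mesh size changed, so
        // DrawFinal has to mesh.Clear() and re-copy the triangle data.
        meshDirty = (bucket != lastBucket);
        lastBucket = bucket;
    }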

Next Time: More Font and SpriteSheet Details

I talked about 4 main problems in the summary above, but only got into details for the first big problem.  It’s one that’s specific to Unity and has a lot of info to digest, so I’ll continue the other rendering/pipeline details in a future article.


About the Author(s)

Luke Schneider

Blogger

Luke Schneider has been designing and developing games professionally for 13 years. As a designer at both Outrage and Volition, he was a key member on 5 major releases. During his 4.5 years of work on Red Faction: Guerrilla, Luke served as both the lead technical designer and the lead multiplayer designer. In 2010, he left Volition to form Radiangames. Radiangames released 7 small, high-quality games in its first year of existence, and is now working on a larger multi-platform game. Luke has presented at GDC each of the past 3 years. In 2009 and 2010, he covered various aspects of design on Red Faction: Guerrilla, and in 2011 he discussed the monthly game development cycle at Radiangames as part of the Independent Games Summit.
