
Making 2D Games With Unity

I'll present an overview of a number of techniques I've used to create a classic 2D pixel art look in Unity.

Josh Sutphin, Blogger

May 19, 2013

(This article originally appeared at third-helix.com)

Unity is well-known for being an easy-to-use, cross-platform 3D engine and toolset, but that doesn’t mean you’re forced to make an FPS or third-person action-adventure game. I’ve been creating 2D sprite-based games in Unity for two years now - games like Conquistador and Fail-Deadly - and in this article I’m going to show you the techniques I used to achieve the classic 2D look.

Who This Article Is For

I’m going to present a brief overview of a number of techniques I’ve used to create a classic 2D “pixel art” look in Unity. This article is not a beginners’ tutorial: I’m assuming you already know how to use Unity in a 3D context and are just looking for some pointers on how to make it work for 2D pixel art.

Sprite Setup

The first thing to understand is that even though you’re making something that looks 2D, it’s still technically a 3D scene. Each sprite in the scene is a single, textured quad, positioned in 3D space just like a regular model.

[Image: sprite quads positioned in a 3D scene, shown in a perspective view]

You’ll need to create and import a quad to use as your mesh. I made mine in Modo, my modeling package of choice. It’s just a simple one-sided quad, 1 unit to a side, with its face normal pointing down negative Z. I also applied a planar UV projection to normalize UVs across the face.

[Image: the one-sided, 1-unit quad modeled in Modo]

Why is it important for the quad to face down negative Z? Because you’ll set up your game camera facing down positive Z in Unity so that world XY corresponds to screen XY, which means the quad must face the opposite direction so that it faces the camera and can be seen.

[Image: the camera faces down positive Z while the quad faces negative Z, so world XY maps to screen XY]

Incidentally, you may be wondering if you can just use Unity’s built-in Plane primitive instead of modeling your own quad. I don’t recommend this, because the Plane primitive actually consists of a 10x10 quad grid, meaning each sprite will render 100 times the amount of geometry that you actually need!

[Image: a single quad compared to Unity’s 10x10 Plane primitive]
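If you’d rather not round-trip through a modeling package at all, you can also build the quad procedurally. Here’s a minimal sketch under the same assumptions (1 unit to a side, facing down negative Z, normalized UVs); the class name is my own. Attach it to a GameObject that already has a MeshFilter and MeshRenderer:

// Sketch: generate a 1-unit quad facing down negative Z with
// normalized UVs. All names here are illustrative.
using UnityEngine;

public class SpriteQuadBuilder : MonoBehaviour
{
    void Awake()
    {
        Mesh mesh = new Mesh();
        mesh.vertices = new Vector3[] {
            new Vector3(-0.5f,  0.5f, 0.0f),   // top-left
            new Vector3( 0.5f,  0.5f, 0.0f),   // top-right
            new Vector3(-0.5f, -0.5f, 0.0f),   // bottom-left
            new Vector3( 0.5f, -0.5f, 0.0f)    // bottom-right
        };
        mesh.uv = new Vector2[] {
            new Vector2(0.0f, 1.0f),
            new Vector2(1.0f, 1.0f),
            new Vector2(0.0f, 0.0f),
            new Vector2(1.0f, 0.0f)
        };
        // wound clockwise as seen from negative Z, so the face points
        // toward a camera looking down positive Z
        mesh.triangles = new int[] { 0, 1, 2, 2, 1, 3 };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}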

In Unity, you’ll import your quad and then set up a prefab consisting of a MeshFilter and MeshRenderer, so that the mesh can be seen. You can make prefabs for different game objects - enemies, pickups, effects, etc. - like you would in 3D, just making sure that they all use this quad model.

[Image: a sprite prefab with MeshFilter and MeshRenderer components in the Inspector]

Texture Atlassing

To create different sprites you’ll need different textures. The simplest approach is to give each sprite prefab its own material containing the image you want, but this actually has a nasty hidden performance cost. Every unique texture in the scene triggers a GPU context switch at runtime; the more unique textures you have, the more context switches must happen every frame, and the worse your frame rate.

You can solve this problem by creating a sprite atlas. This is just a single texture with all of your sprites contained in it, in a grid:

[Image: a sprite atlas - a single texture containing all of the sprites in a grid]

Each sprite prefab has the same material assigned (more on the material assignment in a minute). You can write a simple script to handle the atlas lookup: just expose four numbers - min X, min Y, width, height - and then programmatically set the sprite’s UVs to match that rectangle. Here’s the UV assignment code I used (note that you have to flip the V coordinate when translating from texture space to UV space, otherwise your sprite will be upside-down):

Vector2[] uvs       = new Vector2[m_mesh.uv.Length];
Texture texture     = m_meshRenderer.sharedMaterial.mainTexture;

// top-left corner of the sprite's pixel rect, converted to UV space
// (note the V flip: texture space runs top-down, UV space bottom-up)
Vector2 pixelMin    = new Vector2(
    (float)m_currentStrand.frames[m_animFrame].x /
        (float)texture.width,
    1.0f - ((float)m_currentStrand.frames[m_animFrame].y /
        (float)texture.height));

// sprite dimensions as a fraction of the texture; V is negated so the
// rect extends downward from pixelMin
Vector2 pixelDims   = new Vector2(
    (float)m_currentStrand.frames[m_animFrame].width /
        (float)texture.width,
    -((float)m_currentStrand.frames[m_animFrame].height /
        (float)texture.height));

// main mesh
{
    Vector2 min = pixelMin + m_textureOffset;
    uvs[0] = min + new Vector2(pixelDims.x * 0.0f, pixelDims.y * 1.0f);
    uvs[1] = min + new Vector2(pixelDims.x * 1.0f, pixelDims.y * 1.0f);
    uvs[2] = min + new Vector2(pixelDims.x * 0.0f, pixelDims.y * 0.0f);
    uvs[3] = min + new Vector2(pixelDims.x * 1.0f, pixelDims.y * 0.0f);
    m_mesh.uv = uvs;
}

The principle behind this is quite simple. UV space represents a percentage of each dimension of the texture:

[Image: diagram of UV space, running 0-1 across each dimension of the texture]

Calculating the actual UV values for a particular sprite rectangle is tedious. It’s much easier to express the sprite rectangle in pixels, especially since Photoshop’s Info panel shows you the cursor’s current pixel coordinates and the pixel size of the selection:

[Image: Photoshop’s Info panel showing the cursor’s pixel coordinates and the selection’s pixel size]

So, the code simply divides the pixel coordinates by the overall texture dimension to get a percentage along each axis, and voila: valid UV coordinates!

(Remember the gotcha, though: the V coordinate has to be flipped!)

My script actually does more than just assign a static set of UVs: it also functions as a simple animation manager. Since you can set UVs programmatically, it’s easy to define an array of different UVs in sequence which define each of the frames of an animation, then programmatically swap the UVs at the appropriate rate in order to animate the sprite. My script is simple, and requires manually entering pixel coordinates for each frame of each animation strand, which is admittedly tedious… but since I don’t have a ton of animation data, it’s been acceptable thus far. It would be straightforward (though beyond the scope of this article) to extend the editor to improve the process, for example by visually selecting rectangles directly on the texture in the editor UI.
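To give a sense of the shape of that kind of animation manager, here’s a hedged sketch. Everything in it is illustrative (the field names and the Rect-based frame data are mine, not from my actual script), and ApplyFrame stands in for the UV assignment code shown above:

// Hypothetical frame-swapping animator: steps through an array of
// pixel rects at a fixed rate and reassigns the mesh UVs on each
// frame change.
using UnityEngine;

public class SpriteAnimator : MonoBehaviour
{
    public Rect[] frames;                 // pixel rects in the atlas
    public float  framesPerSecond = 10.0f;

    private float m_timer = 0.0f;
    private int   m_frame = 0;

    void Update()
    {
        m_timer += Time.deltaTime;
        float frameTime = 1.0f / framesPerSecond;
        while (m_timer >= frameTime)
        {
            m_timer -= frameTime;
            m_frame = (m_frame + 1) % frames.Length;
            ApplyFrame(frames[m_frame]);
        }
    }

    void ApplyFrame(Rect pixelRect)
    {
        // convert pixelRect to UVs (dividing by the texture dimensions
        // and flipping V) and assign them, as in the code shown earlier
    }
}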

Sprite Shader

Your sprite still needs a material to reference your texture atlas, and for that you need a shader. The most obvious choice is the default Transparent Diffuse, but even this simple shader does more than you need (such as supporting per-pixel lighting, which you’re probably not using in a traditional 2D sprite-based art style). Unlit Transparent Cutout is simpler, but we can get simpler still. I wrote a custom Sprite shader which is as bare-bones as I could get it:

// Custom sprite shader - no lighting, on/off alpha
Shader "Sprite" {
Properties {
    _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
}
SubShader {
    Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
//    LOD 100
    ZWrite Off
    Blend SrcAlpha OneMinusSrcAlpha
    Lighting Off
    Pass {
        SetTexture [_MainTex] { combine texture }
    }
}
}

(I suspect this can be cheaper still, but my knowledge of ShaderLab is limited at best.)

Texture Filtering

If you’re going for the “pixel art” look, then it’s absolutely critical that you set your sprite textures to use Point filtering mode, not the default Bilinear. Point filtering preserves hard edges in the source texture, keeping your sprites nice and clean:

[Image: the same sprite rendered with Point vs. Bilinear filtering]

You’ll also want to disable mip map generation (mip maps make faraway textures look better, but that only applies to a 3D perspective view) and check your texture compression settings. If you’re building for iOS, the default compression setting is some flavor of PVRTC, which will ruin pixel art. The most accurate setting, but also the most memory-intensive, is RGBA32. Since most pixel art uses a limited palette, you can typically get away with RGBA16 with no visual degradation, halving the texture’s memory footprint. If your sprite doesn’t need an alpha channel (perhaps this texture atlasses a bunch of background tiles?) then use RGB16 to save additional memory by discarding the alpha component.
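Rather than setting all of this by hand on every texture, you can enforce it at import time with an AssetPostprocessor. A rough sketch, assuming your sprite textures live under a folder whose path contains "Sprites" (the folder convention is my own); this script goes in an Editor folder:

// Editor-side sketch: force pixel-art-friendly import settings on any
// texture under a "Sprites" folder.
using UnityEngine;
using UnityEditor;

public class SpriteTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("Sprites"))
            return;

        TextureImporter importer = (TextureImporter)assetImporter;
        importer.filterMode = FilterMode.Point;  // preserve hard pixel edges
        importer.mipmapEnabled = false;          // no mip maps for 2D
        // (you could also force the texture format here if you want to
        //  enforce RGBA16/RGB16 as discussed above)
    }
}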

Camera Setup

For a typical 2D style, you’re going to want to use an orthographic camera. With an orthographic camera setup, objects do not get smaller as they recede into the distance. This allows you to use the Z (depth) axis as a layering mechanism, controlling which sprites draw on top of which while ensuring everything still lines up nicely.

[Image: orthographic camera properties in the Inspector]

Place your camera at the world origin (0, 0, 0) and orient it to face down positive Z. Take note of the world axis display in the viewport: when you’re facing down positive Z, world X corresponds to screen X (increasing to the right) and world Y corresponds to screen Y (increasing from bottom to top). This makes it very easy to a) think of your game in traditional XY coordinates, and b) translate between world space, screen space, and GUI space (more on that in a minute).
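For reference, the same setup in code might look something like this (a transform with identity rotation already faces down positive Z in Unity):

// Sketch: orthographic camera at the world origin, facing down
// positive Z (the default orientation).
Camera cam = Camera.main;
cam.orthographic = true;
cam.transform.position = Vector3.zero;
cam.transform.rotation = Quaternion.identity;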

Orthographic Size

If you’re going for the “pixel art” look then the camera’s orthographic size is of critical importance; this is the trickiest part of nailing 2D in Unity.

The orthographic size expresses how many world units are contained in the top half of the camera projection. For example, if you set an orthographic size of 5, then the vertical extents of the viewport will contain exactly 10 units of world space. (The horizontal extents are dependent on the display aspect ratio.)

Recall that your sprite quad is 1 unit to a side. That means the orthographic size tells you half the number of sprites you can stack vertically in the viewport.

To render the pixel-art look cleanly, you need to ensure that each pixel of the sprite’s source texture maps 1:1 to the viewport display. You don’t want source pixels being skipped or doubled-up, or your sprites will look distorted and “dirty”. The trick to ensuring this 1:1 ratio is to set an orthographic size that matches your vertical screen resolution divided by the pixel height of a sprite.

Let’s say you’re running at 960x640, and you’re using 64x64 sprites. Dividing the vertical screen resolution (640) by the pixel height of a sprite (64) yields 10, the number of 64x64 sprites that can be vertically stacked in 640 pixels. Remember that the orthographic size is a half-height, so your target orthographic size in this case is going to be 5 (one-half of 10). It should look like this:

[Image: clean rendering with a 1:1 source-to-screen pixel ratio at the correct orthographic size]

If you set your orthographic size to half or double that target you may still get usable results, because the sprite’s vertical size will still divide evenly into the viewport’s vertical size. But if you set the orthographic size incorrectly, you will see some pixels skipped or doubled, and it will look very bad indeed:

[Image: distorted rendering with skipped and doubled pixels at an incorrect orthographic size]

Variable Resolution

You don’t need to be confined to a single, fixed resolution in order to render clean pixel art. The simplest way to handle variable resolutions is to attach a custom script to your camera which sets the orthographic size according to the current vertical resolution and a known (fixed) sprite size:

// set the camera to the correct orthographic size
// (so scene pixels are 1:1; the 64.0f is the sprite height in pixels)
s_baseOrthographicSize = Screen.height / 64.0f / 2.0f;
Camera.main.orthographicSize = s_baseOrthographicSize;

While that is a simple fix, it does have a drawback: as the screen resolution decreases, you’ll see less and less of the world, and sprites will take up more and more of the screen. That’s the consequence of keeping a 1:1 ratio between source and screen pixels: a 64x64 sprite takes up more apparent space at 640x480 than it does at 1920x1200. Whether this is a problem or not depends on the needs of your specific game.

[Image: the same scene at several resolutions - sprites occupy more of the screen as the resolution decreases]

If you want your sprites to remain the same apparent size regardless of screen resolution, then simply set the orthographic size to a fixed value and leave it there regardless of the screen resolution. The drawback there is that your sprites will no longer have a 1:1 source-to-screen pixel ratio. You can mitigate the ill effects of that by only allowing resolutions which are exactly half or exactly double your target resolution.

GUI Considerations

If you’re using Unity’s immediate-mode GUI, there’s a simple trick you can use to automatically rescale the GUI to fit the current screen resolution, even if you’ve hard-coded all your GUI coordinates. Simply put the following at the top of your OnGUI call:

void OnGUI()
{
    // scale the GUI to the current resolution
    // (this assumes the GUI coordinates were authored for 1024x768)
    float horizRatio = Screen.width / 1024.0f;
    float vertRatio = Screen.height / 768.0f;
    GUI.matrix = Matrix4x4.TRS(
        Vector3.zero,
        Quaternion.identity,
        new Vector3(horizRatio, vertRatio, 1.0f)
    );

    // ... the rest of your GUI code goes here ...
}

You may occasionally need to translate between world- and screen-space coordinates. The built-in Camera.WorldToScreenPoint and Camera.ScreenToWorldPoint functions work perfectly well with an orthographic camera, but there is a gotcha: their notion of screen-space, and the GUI system’s notion of screen-space, use inverted Y axes.

When you use Camera.WorldToScreenPoint you’ll get back a point with X increasing to the right and Y increasing from bottom to top, with (0, 0) at the lower-left of the screen. The GUI system expects coordinates with X increasing to the right and Y increasing from top to bottom, with (0, 0) at the upper-left of the screen. So if you’re translating between world space and GUI space you’ll need to invert the Y coordinate:

y = Screen.height - y;
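Wrapped up as a small helper (the function name is my own), the whole world-to-GUI conversion looks like this:

// Converts a world-space position to GUI-space, flipping Y because
// the GUI origin sits at the upper-left of the screen.
Vector2 WorldToGUIPoint(Vector3 worldPos)
{
    Vector3 screen = Camera.main.WorldToScreenPoint(worldPos);
    return new Vector2(screen.x, Screen.height - screen.y);
}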

Physics in 2D

You can constrain Unity’s physics sim to run in 2D… sort of. Create a physics object and attach a ConfigurableJoint component to it, then set the “ZMotion”, “Angular XMotion”, and “Angular YMotion” properties to “Locked”. This prevents the physics object from moving along the Z (depth) axis, and constrains its rotation to only take place around that same axis (so it can’t pitch or twist “into” the screen). It’s no Box2D, but it’ll get the job done.

Note that you’ll need to set up this kind of ConfigurableJoint on every physics object in your scene. Unfortunately there is no way to globally constrain the entire physics sim to two dimensions; it must be done on a per-object basis.
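Since it has to happen on every object anyway, it’s worth doing in code. A sketch (the component name is mine) that locks the relevant axes when the object wakes up:

// Sketch: constrain a rigidbody to the XY plane by adding a
// ConfigurableJoint with the depth axes locked. Attach this to
// each physics object.
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class Constrain2D : MonoBehaviour
{
    void Awake()
    {
        ConfigurableJoint joint = gameObject.AddComponent<ConfigurableJoint>();
        joint.zMotion        = ConfigurableJointMotion.Locked;  // no movement in depth
        joint.angularXMotion = ConfigurableJointMotion.Locked;  // no pitching "into" the screen
        joint.angularYMotion = ConfigurableJointMotion.Locked;  // no twisting "into" the screen
    }
}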

Particle Systems

You don’t generally need to do anything special to use particle systems in 2D. Depending on the desired effect, you may wish to ensure the Z velocities are always zero (for example, to guarantee a more-or-less even spread of particles in the camera plane for an explosion). Because you’re using an orthographic camera, any Z motion in the particles won’t read as motion on screen - a particle traveling mostly along Z will appear to hang in place. (If you see particles moving strangely, this is the first thing you should check.)
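If you’re using the legacy particle components (the same family as the ParticleRenderer mentioned below), one way to do this - a sketch, assuming an emitter component is present - is to flatten the emitter’s velocity settings:

// Sketch: zero out the Z components of a legacy ParticleEmitter's
// velocity settings so particles stay in the camera plane.
ParticleEmitter emitter = GetComponent<ParticleEmitter>();

Vector3 v = emitter.worldVelocity;
v.z = 0.0f;
emitter.worldVelocity = v;

v = emitter.localVelocity;
v.z = 0.0f;
emitter.localVelocity = v;

v = emitter.rndVelocity;
v.z = 0.0f;
emitter.rndVelocity = v;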

If you also want your particles to have a clean “pixel art” look just like your sprites, simply assign a material using the Sprite shader (discussed earlier) in the ParticleRenderer component. (Unfortunately I have yet to devise a way to atlas sprites in particle systems.)

Fin

That’s pretty much all there is to it. Best of luck in your 2D Unity endeavors!

(Josh Sutphin is an indie game developer, former lead designer of Starhawk (PS3), and creator of the Ludum Dare-winning RTS/tower-defense hybrid Fail-Deadly. He blogs at third-helix.com and tweets nonsense at @invicticide.)
