
A very simple idea: instead of rendering everything in full resolution, why not render a percentage of the objects in the background at a lower resolution? This post is about how we tackled this on mobile devices for the KLM Jets, Papercraft Air-O-Batics game.

Tomas Sala, Blogger

September 29, 2015


Endlessly generated levels aren’t exactly a rarity these days. However, having a vivid world with tons of moving objects and completely custom chunk design that also has to run on low-end mobile devices often proves more challenging than initially anticipated. In this post, one of our most experienced developers, Michiel Frankfort, explains how we tackled this for KLM Jets, Papercraft Air-O-Batics.

This is a re-post of an article by Michiel Frankfort, a senior developer at our studio, Little Chicken Game Company (http://www.littlechicken.nl).

I think it’s pretty impressive: a brand new smartphone with Android 4.1 or higher, packing a dual-core chip with 512 MB of RAM… at only 89 euros. Where I’m from, two iDevice cables cost more than that. Of course these ‘new’ smartphones look capable on paper, but in reality they use outdated and insufficient hardware. That means there are a lot of Android devices out there that don’t perform anywhere near as well for games as one would expect, which creates real dilemmas and potential problems for mobile game developers.

It is possible to exclude devices based on DPI, screen size, OS version and a few other settings. Unfortunately, this is simply not enough to filter out ‘slow’ devices. You can handpick and exclude devices, but there are literally hundreds of Android devices matching any OS or screen filter we put up. Each supported device usually comes in multiple versions, resulting in an ever-growing list of 1600+ supported devices for any of our games. Handpicking these devices by individually testing or researching their capabilities is simply too much work. Additionally, you really don’t want to exclude too many devices, as that limits the potential audience for your game.

In order to support the vast range of low- to high-end smartphones, developers usually fall into one of three camps:

  • Don’t go 3D: By sticking to 2D gameplay it's much easier to produce a well-performing game, but 2D usually requires a significant amount of RAM when dealing with a lot of content.

  • Ignore cheap devices: “If you can’t afford an expensive smartphone, you will most likely not pay for our game anyway…” could have been a quote from some manager somewhere. ;-)

  • Stick with 3D, but strip the world of any small detail and stylize it.

For our game we chose none of these options and decided to go full 3D with a lot of detail. Here's the trailer, with some gameplay footage at the end:

 

 

 

As a quick win for performance, we reduced the draw distance, which significantly cuts shader calculations, pixel fill-rate, draw calls and triangle count. But after reducing the camera draw distance to a level where performance became acceptable on these low-end devices, it became apparent that the game, and the player's ability to navigate it, were severely compromised. For the first release we therefore used a combination of these fairly common techniques to keep the game playable on low-spec devices.
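As a rough illustration of that first step (the numbers and fog settings below are made up for this example, not the values we shipped), pulling in the far clip plane and matching the fog to it looks something like this in Unity:

using UnityEngine;

public class DrawDistanceSetup : MonoBehaviour
{
    void Start()
    {
        // Shorter draw distance = fewer objects, pixels and triangles to render.
        Camera.main.farClipPlane = 40f;

        // Linear fog that ends right at the far clip plane hides the pop-in.
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Linear;
        RenderSettings.fogStartDistance = 25f;
        RenderSettings.fogEndDistance = 40f;
    }
}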

But recently, we added New York City as new content. Very cool to build and play, but it became apparent that even the highest ‘playable’ drawing distance wasn’t enough to make you feel like you were flying through a big city. Massive camera fog and skyscrapers popping in relatively nearby isn’t exactly the feel we were aiming for.

Conclusion: we needed more drawing distance, no matter whether you were using a low-end or high-end smartphone.

An old and proven method to accomplish this is LOD-ing. Level-of-detail is a technique that draws low-poly models, low-res textures and cheap shaders for objects further away from the camera. The only problem is that the models in our game are already as low-poly as we can get them, and most shaders are already lightweight, straightforward lightmapped/vertex shaders. Besides, further optimizing the art would require a lot of work and time we did not have. In other words: not much headroom there. Just stripping away unnecessary details helped, and our level(chunk)-designers took the effort to balance out draw calls among areas in order to iron out spikes and hiccups in performance. Meanwhile, my colleagues and I turned the game inside-out with the Unity3D deep profiler (see Mark Bouwman’s blog post here) to reduce memory and CPU spikes. These two optimizations were needed before we could apply a new and awesome trick used by many proprietary high-end game engines: Extended Low Resolution Rendering.

Extended Low Resolution Rendering

This is in principle a very simple idea: instead of rendering everything in full resolution, why not render a percentage of the objects in the background at a lower resolution? This saves a lot of performance, because it reduces pixel fill-rate and a ton of per-pixel shader computations. For example, you could reduce your drawing distance by 30% and spend the roughly 30% of performance gained on a low-resolution extension of your drawing distance, effectively increasing the drawing distance at roughly the same performance cost.

Extended low res rendering
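To get a feel for the numbers, here is a back-of-the-envelope comparison using a 1280x720 screen and the half-width / quarter-height scaling described further down (the figures are purely illustrative):

// Back-of-the-envelope fill-rate comparison with made-up numbers.
int fullResPixels = 1280 * 720;             // 921,600 pixels for a full-res pass
int lowResPixels  = (1280 / 2) * (720 / 4); // 640 x 180 = 115,200 pixels
// The low-res extension touches roughly 1/8th of the pixels that rendering
// the same screen area at full resolution would, which is where the
// fill-rate and per-pixel shader savings come from.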

In addition to this low-res rendering, we used the low-res cameras to cull away certain objects, grouped into ‘layers’ and excluded from those cameras’ culling masks. These include trees, fences and small details such as boxes, pickups, etc.
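In Unity terms that boils down to a culling-mask tweak on the low-res camera. The layer names below are hypothetical, and the fragment assumes you already have references to both cameras:

// The low-res camera inherits the main camera's culling mask, minus the
// layers that hold small background details (hypothetical layer names).
int detailLayers = LayerMask.GetMask("Trees", "Fences", "SmallProps");
lowResCamera.cullingMask = mainCamera.cullingMask & ~detailLayers;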

Pseudo-coded breakdown

 

[RequireComponent(typeof(Camera))]

public class LowResFrustumExtension : MonoBehaviour {

Awake() {

On Awake we create one or multiple cameras nested as children of this GameObject. We then inherit all the settings found on the parent camera component. For each camera, we assign a target RenderTexture (RT) with a decreased resolution.

Thanks to the custom projection matrix, we are able to go with anamorphic (non-square) pixels, allowing us to scale down the width by a factor of 2 and the height by a factor of 4. After rendering, the target texture will appear ‘skewed’, but by blitting the RT back to full screen it stretches back into place. We do not scale down with a square ratio, because the non-square ratio reduces tearing: there are a lot of vertical lines that would start to jitter if the width were scaled down by a factor of 4, but for the (relatively stable) horizontal lines this looks okay.

Each camera and its target RT represents a ‘RenderLayer’. Each layer renders a certain slice of the total render distance. For example, this is one of the setups we used for Jets:

  • MainCamera (full res): 0.1 to 25

  • First RenderLayer (half res): 25 to 55

  • Second RenderLayer (third res): 55 to 75

The RenderLayers need to be rendered in far-to-near order, so each camera's depth equals the parent camera's depth minus its own RenderLayer index.

}

Update() {

On Update we loop through all the RenderLayers and update the custom projection matrix accordingly. Each layer should extend its parent main camera, so some values need to be inherited every frame, such as FOV, culling mask, etc.

}

OnPreRender() {

OnPreRender is fired when the parent/main camera is about to be rendered. At this point, thanks to their lower camera depth, all the child RenderLayers have already finished rendering. This means there is a stack of RenderTextures available, ready to be blitted to the final RT or the screen.

So we first make sure the screen is set as the final render target, then we loop through all the RenderLayers and blit their RTs to the screen.

}

}

Because this code is used in an actual product, the above is only a pseudo-coded breakdown. We hope to be able to make the full code available in the future.
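In the meantime, here is a minimal, untested sketch of how a single-layer version of this setup could look in Unity, based purely on the breakdown above. It is not our production code: the names, scale factors and the depth-only clear flags on the main camera are assumptions.

using UnityEngine;

// Minimal single-layer sketch of the idea above. Not production code:
// names, scale factors and the clear-flag handling are assumptions.
[RequireComponent(typeof(Camera))]
public class LowResFrustumExtensionSketch : MonoBehaviour
{
    public float extendedFarClip = 70f;  // how far the low-res layer reaches
    public int widthDivider = 2;         // anamorphic: width halved...
    public int heightDivider = 4;        // ...height quartered
    public LayerMask excludeFromLowRes;  // trees, fences, pickups, etc.

    Camera parentCam;
    Camera lowResCam;
    RenderTexture lowResRT;

    void Awake()
    {
        parentCam = GetComponent<Camera>();

        // Child camera that inherits the parent's settings.
        var go = new GameObject("LowResRenderLayer");
        go.transform.SetParent(transform, false);
        lowResCam = go.AddComponent<Camera>();
        lowResCam.CopyFrom(parentCam);

        // The layer covers the slice behind the main camera's far plane and
        // renders first thanks to its lower depth (far-to-near order).
        lowResCam.nearClipPlane = parentCam.farClipPlane;
        lowResCam.farClipPlane = extendedFarClip;
        lowResCam.depth = parentCam.depth - 1;
        lowResCam.cullingMask = parentCam.cullingMask & ~excludeFromLowRes.value;

        // Reduced-resolution target with non-square (anamorphic) pixels.
        lowResRT = new RenderTexture(Screen.width / widthDivider,
                                     Screen.height / heightDivider, 16);
        lowResCam.targetTexture = lowResRT;

        // Assumption: the main camera only clears depth, so it does not wipe
        // the blitted low-res background before rendering the foreground.
        parentCam.clearFlags = CameraClearFlags.Depth;
    }

    void Update()
    {
        // Keep the layer in sync with the parent camera every frame.
        lowResCam.fieldOfView = parentCam.fieldOfView;
        // Forcing the parent's aspect squeezes the image into the smaller RT,
        // so it stretches back into place when blitted to full screen.
        // (The real implementation drives this through a custom projection matrix.)
        lowResCam.aspect = parentCam.aspect;
    }

    void OnPreRender()
    {
        // Fired just before the parent camera renders; the low-res layer is
        // already done. Blit its RT to the screen (null target = screen).
        RenderTexture.active = null;
        Graphics.Blit(lowResRT, (RenderTexture)null);
    }
}

With more than one RenderLayer the blits would also need to composite the stacked RTs (a plain blit simply overwrites), presumably with an alpha-blended material, which is one more reason extra layers are not free on older hardware.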

In principle, that’s it. Of course, the class itself and its helpers are much more complex than this, but it illustrates how a simple setup could work. The question is: which values are correct? How many RenderLayers? How far can you scale down the RT? There isn’t a single ‘truth’, but we did notice that more than one RenderLayer decreases performance on old devices instead of increasing it, most likely because it needs to swap multiple RenderTextures (activate/deactivate) and old mobile phones in particular lack fast, low-latency GPU memory.

On top of that, for each camera all objects need to be ‘reconsidered’ and triangles need to be frustum-clipped again as the near/far clipping distances move further away for each camera. All this overhead adds to the overall cost of this neat trick, but it pays off, especially on modern devices with decent memory and CPU hardware.

So in the end, we reduced the drawing distance of the MainCamera from the original 40 to a maximum of 20, and the second camera takes over from 20 to 70 in low resolution. Going from a maximum of 40 to a maximum of 70 is a whopping 75% increase in camera drawing distance.

Here's a view of how that looks in-game; you can see the resolution switch behind the cardboard cranes:

 

So it comes for free?

Not really… a longer drawing distance, low-res or high-res, still requires extra operations:

  • Draw calls will increase, so make sure this number has some headroom left. Twice the amount of objects means double the draw calls (let's forget about dynamic batching for a second).

  • Triangle count will increase as well, although in my experience triangle count usually isn't the bottleneck, thanks to my very technical artist colleagues who know how to optimize for a mobile game.

  • Memory load: More objects needed for rendering means more objects need to be alive… With a static scene this is not an issue, but with an endless runner where chunks and props are spawned randomly in front of the player, the memory footprint increases quite a bit.

  • Memory-swapping: For each low-res layer rendered in the background we need a RenderTexture that has to be activated (uploaded/set active in GPU memory) and deactivated (unloaded/disabled in GPU memory) after the low-res rendering has completed. This is especially costly on older devices with slow memory, like the iPhone 4S and older.
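One generic Unity facility that can help with that last point is the temporary render-texture pool, which reuses pooled buffers instead of keeping a dedicated RT alive per layer; whether it actually wins anything depends on the device, and it is not necessarily what the implementation described above does. A fragment, assuming lowResCamera is the background camera:

// Grab a pooled RT just before the low-res pass and release it right after
// blitting, instead of holding on to a permanent RenderTexture.
RenderTexture rt = RenderTexture.GetTemporary(Screen.width / 2, Screen.height / 4, 16);
lowResCamera.targetTexture = rt;
lowResCamera.Render();                      // render the background slice manually
Graphics.Blit(rt, (RenderTexture)null);     // stretch it back onto the screen
lowResCamera.targetTexture = null;
RenderTexture.ReleaseTemporary(rt);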

Additional options:

Because the low-res background layer(s) require their own camera to be rendered, we can do awesome things with these new layers. For example, you could easily exclude certain object layers from the background, naturally culling out vehicles and trees and reducing the draw calls and triangle count.

But despite the fact that low-res rendering saves performance, an increased drawing distance will always at least impact draw calls and triangle count.

If you can deal with that, then this is an awesome way of squeezing out the last few drops of extra fps your game needs!

 
