(Disclaimer: Teofilo is now a Senior Engineer at Archiact Interactive but worked as a core tech engineer at Black River Studios during the development of Angest.)
In this article, I’ll talk a bit about how we achieved high quality graphics while keeping high performance in Angest, a narrative-driven game by Black River Studios (Finding Monsters Adventure, Rococo VR, Rock & Rails) recently released for GearVR. As we know, GearVR is a very restrictive platform resource-wise. We need to keep the number of draw calls low, use few polygons, be aware of fillrate limitations (use transparency wisely, avoid overdraw, reduce per-pixel operations), avoid overheating the device, and so on. (More info can be found here, here and here.) Needless to say, with those restrictions we can’t have fancy stuff like Physically-Based Shading, for example.
As devices and technology evolve, some of those limitations are being relieved. To cite a few examples: once Vulkan support is added to GearVR, we will be able to issue more draw calls; new devices like the Galaxy S8 and Note 8 allow more resources to be used and have better technology to avoid overheating; and Single Pass Stereo Rendering (the OpenGL multiview extension) avoids having to render each eye separately.
Despite the existing restrictions, we wanted appealing graphics in our game… and it had to run on Galaxy S6 devices. To achieve that, we had to do exhaustive performance work and resort to some tricks to get the lighting and color the art director wanted.
Two scenes in this game were especially challenging: Living Room and Aeroponics (Figure 1). Both had a high polygon count and many dynamic objects. The good news was that, unlike our previous game (Rock & Rails), where the player was free to move between different tracks and see the environment from many different angles, in Angest we have control over where the player can go, since locomotion is checkpoint-based, and we could use that in our favor.
Figure 1 Living Room (left) and Aeroponics (right) environments.
Our first approach to reducing the number of polygons was to use Unity’s own occlusion culling system. However, it was not as efficient as we needed; it actually became another bottleneck. We then decided to implement checkpoint-based occlusion, where each checkpoint is responsible for disabling the meshes it occludes (from the checkpoint’s point of view) as soon as the player reaches it and re-enabling them when the player moves to another checkpoint. Our system lets the user choose whether to disable the whole game object or only its renderer, for two main reasons: 1) we have an Event System running, and sometimes an occluded object must keep its behavior active despite being hidden; and 2) fully disabling dynamic objects with physics could cause collision problems.
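A minimal sketch of such a checkpoint component might look like the following. This is a hypothetical reconstruction for illustration; the component, method, and field names are my assumptions, not the actual Angest code.

```csharp
using UnityEngine;

// Sketch of a per-checkpoint occlusion component: hides a hand-authored
// list of objects while the player stands at this checkpoint.
public class CheckpointOcclusion : MonoBehaviour
{
    [Tooltip("Objects occluded from this checkpoint's point of view.")]
    public GameObject[] occludedObjects;

    [Tooltip("If true, only the Renderer is disabled, keeping behaviors and physics alive.")]
    public bool disableRendererOnly;

    // Called when the player arrives at this checkpoint.
    public void OnPlayerEnter() { SetVisible(false); }

    // Called when the player leaves for another checkpoint.
    public void OnPlayerExit() { SetVisible(true); }

    void SetVisible(bool visible)
    {
        foreach (var go in occludedObjects)
        {
            if (disableRendererOnly)
            {
                var rend = go.GetComponent<Renderer>();
                if (rend != null) rend.enabled = visible;
            }
            else
            {
                go.SetActive(visible);
            }
        }
    }
}
```

The renderer-only path keeps `Update` loops, event listeners, and colliders running on hidden objects, which is exactly why the option exists.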
Figure 2 Checkpoint-based occlusion inspector (left) and in action (right) (GIF).
With this system we were able to drastically reduce the number of polygons and slightly reduce the number of draw calls. In the Aeroponics scene we also took advantage of the fog, hiding more objects by reducing the fog distance from the camera. Unfortunately, at that point in the project we didn’t have time to automate the per-checkpoint setup (i.e., we had to set up every checkpoint manually), but later on we implemented a system similar to the one used in Dead Secret.
Reducing draw calls
We had reduced the number of draw calls, but it was still high, and we noticed something was wrong with our batches. Static meshes with the same material ended up in separate batches even though they hadn’t reached the vertex limit (which is 64k, by the way), and some dynamic objects with the same material were not batching at all.
At that point, we hadn’t sorted our materials yet, so they were all in the same render queue (Opaque = 2000 and Transparent = 3000). Here we learned our first lesson:
- Unity has its own internal rules for drawing meshes whose materials share a render queue. It may or may not group meshes with the same material. In other words, batches of the same material can be split by a different material that sits in the same render queue.
How did we solve this? By giving each material a unique render queue. Once that is done, all meshes using a given material are guaranteed to batch (if they satisfy the batching rules, of course).
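This can be automated with a small editor utility. The sketch below is an assumption about how one might do it (the menu path, starting queue value, and transparent-queue cutoff are my choices, not from the original article); it walks every material asset and assigns each opaque one its own queue value just above Unity's default Opaque queue of 2000.

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

// Hypothetical editor utility: give every opaque material in the project
// a unique render queue so meshes sharing a material batch together.
public static class UniqueRenderQueues
{
    [MenuItem("Tools/Assign Unique Render Queues")]
    static void Assign()
    {
        int queue = 2001; // start just after Unity's default Opaque queue (2000)
        foreach (var guid in AssetDatabase.FindAssets("t:Material"))
        {
            var path = AssetDatabase.GUIDToAssetPath(guid);
            var mat = AssetDatabase.LoadAssetAtPath<Material>(path);
            if (mat != null && mat.renderQueue < 3000) // leave transparent materials alone
                mat.renderQueue = queue++;
        }
        AssetDatabase.SaveAssets();
    }
}
#endif
```

Note that render queue also controls draw order, so a scheme like this should respect any ordering your scene depends on (e.g., keep transparent materials at 3000+).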
Dynamic batching is far more restrictive than static batching; there are several rules to follow, which you can see here. Perhaps the most important is that your meshes cannot have more than 900 vertex attributes in total. In the Living Room scene we have many books, and the player can manipulate them freely. The first problem we found was that lightmaps had been baked for them, adding more data per vertex and consequently breaking the batch. Removing the lightmaps fixed batching for most of the books, but some still weren’t batching. The second problem was that some books had inverted (negative) scale, which also breaks dynamic batching. After fixing that, the books batched together perfectly.
My advice for those using dynamic batching: be aware of the number of vertex attributes. Remove unused data from meshes and shaders; this helps not only with batching but also keeps your meshes lighter, which may reduce loading times as well. A few tips:
- If you are not using normal mapping, set ‘tangents’ to ‘none’ in Model Import Settings.
- Not using normals? Set ‘normals’ to ‘none’ too.
- Tick ‘Weld Vertices’ in Model Import Settings (Unity 5.6+)
- Remove unused UV coordinates.
- Also, explain the rules to the artists, so they can keep them in mind when modeling.
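The import-settings tips above can be enforced project-wide with an `AssetPostprocessor`, so nobody has to remember to tick the boxes by hand. The sketch below assumes none of your models use normal mapping; adapt the conditions to your own pipeline.

```csharp
using UnityEditor;

// Hypothetical postprocessor applying the import tips above to every
// imported model. Assumes no normal mapping is used anywhere.
public class MeshSlimmingPostprocessor : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        var importer = (ModelImporter)assetImporter;
        importer.importTangents = ModelImporterTangents.None; // no normal mapping
        importer.importNormals = ModelImporterNormals.Import; // set to None if unlit
        importer.weldVertices = true;                         // Unity 5.6+
    }
}
```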
We had made good progress with this, but there was still room for improvement. In our game, an available checkpoint is represented as shown in the GIF below. As you can see, it fills up according to how long you keep gazing at it. At first, each checkpoint had an instanced material, since only the selected one should fill at a time. The problem was that at some moments in the game many checkpoints are available, and each of them cost a draw call. We solved this by assigning a shared material to all checkpoints; only when a checkpoint is gazed at is its material switched to the instanced version and animated. As soon as the gaze stops, the material is switched back to the shared one (Figure 3).
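In Unity terms, the swap hinges on the difference between `Renderer.sharedMaterial` (batched, shared by all checkpoints) and `Renderer.material` (which silently creates a per-object copy). A hedged sketch, with gaze-callback names invented for illustration:

```csharp
using UnityEngine;

// Sketch of the checkpoint material swap: every checkpoint renders with
// one shared material; only the gazed-at checkpoint gets its own instance.
public class CheckpointGaze : MonoBehaviour
{
    public Material sharedCheckpointMaterial; // batched with every other checkpoint
    Renderer rend;
    Material instancedMaterial;               // exists only while gazed at

    void Awake()
    {
        rend = GetComponent<Renderer>();
        rend.sharedMaterial = sharedCheckpointMaterial;
    }

    public void OnGazeEnter()
    {
        // Accessing .material instantiates a per-object copy we can animate
        // (e.g., its fill amount) without touching the other checkpoints.
        instancedMaterial = rend.material;
    }

    public void OnGazeExit()
    {
        rend.sharedMaterial = sharedCheckpointMaterial; // back to the batched material
        if (instancedMaterial != null) Destroy(instancedMaterial); // avoid leaking copies
    }
}
```

Destroying the instanced copy on exit matters: material instances created through `.material` are not cleaned up automatically.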
Figure 3 Checkpoint material changing (GIF)
Finally, we identified that some of our UI needed special attention. In Angest, there are many computers; the antagonist itself is a computer. Since computers play an important role in the game, their UI had to be very detailed, and more detail can mean more draw calls.
Once again we took advantage of our checkpoint system, in this case to lighten the UI load. Since the player can interact with a computer only from the checkpoint immediately in front of it, we realized we could swap computer screens to an idle screen (Figure 4) with few draw calls whenever the player is at a checkpoint from which they cannot interact with them.
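The swap itself can be as simple as toggling two roots per computer, driven by the checkpoint system. A minimal sketch, with assumed names:

```csharp
using UnityEngine;

// Hypothetical sketch: each computer shows a cheap idle screen unless the
// player is at the checkpoint from which it can be interacted with.
public class ComputerScreen : MonoBehaviour
{
    public GameObject detailedUI; // many draw calls: text, icons, panels
    public GameObject idleScreen; // e.g., a single textured quad, one draw call

    // Called by the checkpoint system whenever the player changes checkpoint.
    public void OnCheckpointChanged(bool playerCanInteract)
    {
        detailedUI.SetActive(playerCanInteract);
        idleScreen.SetActive(!playerCanInteract);
    }
}
```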
Figure 4 Idle computer screen usage (GIF).
We also rearranged some UI elements so they would batch properly. At first, UI elements were grouped according to their meaning within the UI. While that made each element’s role clear, this kind of arrangement breaks batching. To batch things together, you should ideally group elements by their shared properties (e.g., images in atlas A, images in atlas B, texts with font X, texts with font Y, etc.). With this new configuration we reduced the number of draw calls a bit further.
That's it for now! In my next article, I will talk about some of the tricks and techniques we used in Angest to achieve some nice visual effects. For more details about Angest's development, you can also refer to Victor's article about the Take and Event systems developed for the game.