
Noesis Technologies discusses the mandatory characteristics that any modern user interface middleware must offer, and explains how its own solution, NoesisGUI, addresses them.

Game Developer, Staff

March 12, 2018



Noesis Technologies is a privately held company founded by a passionate team of developers with a solid understanding of real-time and games technology. Our vision is to provide efficient tools that help other companies deliver high-quality experiences. With that in mind, we created NoesisGUI, our user interface middleware, thoroughly designed for making the very best games.

Find more information about NoesisGUI. Contact Noesis here.

INTRODUCTION

The user interface is one of the most important parts of the player experience in a video game, yet it is constantly overlooked or left until the last minute, and the time it takes to build one is always underestimated. At the beginning of a project, not enough team resources are assigned to the task, because it seems trivial compared to the 3D workload needed to develop the game, and some misconstrue GUI development as simply blitting a few HUD rectangles on top of the game.

The UI team is normally made up of developers who are not exclusively dedicated to interface tasks, and it is only in the middle of the project that the team discovers a catastrophic mistake has been made. Sometimes, putting off UI development can take down the entire project.

MOTIVATION

At Noesis Technologies we suffered from this situation ourselves many times, and although there seemed to be plenty of middleware out there specifically designed to solve this problem, none of it fully satisfied us. That is why we decided to create our own solution built specifically for video games: NoesisGUI. In this article I want to share the key elements that were important to us and the technical challenges we faced during development.

Specific for video games

Video game development has so many peculiarities that it is quite difficult to find UI middleware that really helps with this task. A video game is a highly optimized piece of code where every aspect needs to be kept under control.

A state-of-the-art middleware must provide precise control over memory allocations and interact efficiently with the file system whenever a new asset is required. It is very important for a piece of middleware to keep its degree of intrusiveness as low as possible.

The middleware must never create threads under the hood, it should always provide a renderer-agnostic API, and it should never communicate directly with the GPU. These are key elements to achieve the best integration possible between the game engine and the UI technology.

The rendering algorithm is key to maintaining a solid framerate. It is not so important to make frames render faster as it is to have them render consistently, without jank. Even if there are huge amounts of pixels to draw in a single frame, the experience must always be smooth. Much of the middleware out there caches everything into bitmaps on the assumption that there won't be many changes per frame. This optimization helps the UI render faster in specific cases, when not much is changing. This is the best-case scenario.

But as soon as the UI becomes more dynamic, with lots of animation per frame, you start to fall into worst-case scenarios. These scenarios are called performance cliffs: your video game seems to be moving along fine until it hits one of them. A modern, GPU-aware UI middleware must be prepared to paint every pixel on every frame.

Lightweight

A well-designed, non-intrusive middleware must contribute as little as possible to the final binary size of the video game. Large libraries are always a symptom of badly designed, bloated architectures. A middleware library must be as lightweight as possible. Static libraries are always the preferred option to eliminate dead code, and compilation techniques like Whole Program Optimization are a plus. This is one of the reasons middleware that provides source code is always a better alternative than closed solutions.

A lot of the available middleware out there forces you to embed a virtual machine inside your game just for the UI. That is overkill, because you probably already have a scripting solution working in your game. A clean C, or even C++, API is always preferred, avoiding fancy, obscure language features that will bloat your code. Additional layers, like C# bindings or Lua scripting, should be built on top.

Another popular idea that must be avoided is middleware that embeds a web browser engine, like WebKit or Gecko. These technologies were not designed to be real-time friendly, and their layout and rendering engines are highly inefficient for video games. Using this category of libraries is the best way to bloat your video game.

Declarative format


<StackPanel Background="Orchid">
  <Button Content="Start"   Command="{Binding StartCommand}"   />
  <Button Content="Options" Command="{Binding OptionsCommand}" />
​  <Button Content="Quit"    Command="{Binding QuitCommand}"    />
</StackPanel>


Declarative formats, like HTML, are more compact than the equivalent procedural code. They also establish a clear separation between the markup that defines the UI and the code that makes the application do something.

For example, a designer on your team could design a UI and then hand off the declarative markup to a developer to add the procedural code. Even if the designer and the developer are the same person, you can keep your visuals in declarative files and your procedural UI code in source code files.

This is a drastic departure from middleware based on virtual machines, where both the presentation and the logic are packed into the same asset. In those cases, the code in charge of the UI logic is outside the programmer's control, which is a perfect recipe for chaotic code that is hard to maintain. A similar situation happens when you allow artists to create shaders using visual tools. Another advantage of text-based declarative formats is that they are easy to merge and track in source control, compared to opaque binary formats.
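
As a sketch of that separation, the procedural side of the menu markup shown earlier could be a plain view-model class. The class and handler names below are hypothetical and not tied to any particular API; the point is simply that the markup references commands and the game code implements them without ever touching the visuals:


// Hypothetical view-model for the menu markup shown above: the XAML binds its
// buttons to StartCommand, OptionsCommand and QuitCommand, while the game code
// only implements the handlers and never manipulates the UI elements directly.
#include <functional>

class MainMenuViewModel
{
public:
    std::function<void()> StartCommand;
    std::function<void()> OptionsCommand;
    std::function<void()> QuitCommand;

    MainMenuViewModel()
    {
        StartCommand   = [] { /* load the first level */ };
        OptionsCommand = [] { /* push the options screen */ };
        QuitCommand    = [] { /* ask the platform layer to exit */ };
    }
};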

Data Binding

Data binding is the mechanism that provides a simple and powerful way to auto-update data between the model and the user interface. A binding object “glues” two properties together and keeps a channel of communication between them. You can set a Binding once, and then have it do all the synchronization work for the remainder of the video game’s lifetime. This way, the game does not interact directly with the UI.

For this to work properly, some kind of reflection information is needed from the native language. Reflection is a language's ability to examine and inspect its own data structures at runtime. Low-level languages like C++ do not expose reflection directly, so it is necessary to build a mechanism on top of the language to emulate this functionality. In higher-level languages, like C#, reflection is part of the core language.

For example, in C++ you could create a mix of macros and templates to emulate reflection like this:


struct Person
{
    char name[256];
    float life;
    uint32_t weapons;

    // Hypothetical macro that declares a registration block for the type.
    // Each Prop() call associates a property name, visible to the binding
    // system, with a pointer to the corresponding member.
    _REFLECTION(Person)
    {
        Prop("Name", &Person::name);
        Prop("Life", &Person::life);
        Prop("Weapons", &Person::weapons);
    }
};

With the reflection information available from the host language, binding to it from the view should be as trivial as doing something like this:


<StackPanel Background="White">
  <TextBlock Text="{Binding Name}" />
</StackPanel>
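
For the binding to keep both sides in sync after that initial assignment, the model also needs a way to notify that a property changed. A minimal sketch of that notification channel, with hypothetical names unrelated to any particular API, could look like this:


#include <functional>
#include <string>
#include <vector>

// Minimal observable property: stores a value and notifies listeners on change.
template <typename T>
class ObservableProperty
{
public:
    using Listener = std::function<void(const T&)>;

    void Set(const T& value)
    {
        if (value == m_value) return;          // avoid redundant notifications
        m_value = value;
        for (const Listener& l : m_listeners)  // push the new value to every binding
            l(m_value);
    }

    const T& Get() const { return m_value; }
    void Subscribe(Listener l) { m_listeners.push_back(std::move(l)); }

private:
    T m_value{};
    std::vector<Listener> m_listeners;
};

// Usage: the game updates the model, the bound UI text follows automatically.
// ObservableProperty<std::string> name;
// name.Subscribe([](const std::string& v) { /* update the bound TextBlock */ });
// name.Set("Player One");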

Multithreading

With the rapid proliferation of multicore platforms and the growth in the number of threads they support, a multithreading-aware UI middleware is critical to avoid stealing precious milliseconds from the game's main thread. As mentioned before, middleware libraries must never create threads under the hood. They must provide entry points to be invoked from the correct thread at the right time. This way, thread management is delegated to the game.


The logic phase must be separated from the render phase. The common approach is to have an update step on the UI thread, where things like layout and animation are calculated, and a render step that interacts with the GPU from a different thread. Both stages can be executed in parallel. Additionally, the render phase must also be compatible with GPU parallelization on modern architectures that can record commands concurrently, like D3D12 for example.
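
A minimal sketch of that separation, using plain standard threads and hypothetical names, could look like this: the UI thread produces a command list per frame, and the render thread consumes it while the next frame's update is already running.


#include <atomic>
#include <condition_variable>
#include <mutex>
#include <vector>

// Sketch of the update/render split. All names here are hypothetical.
struct UICommandList { std::vector<int> commands; };

std::mutex g_mutex;
std::condition_variable g_cv;
UICommandList g_pending;
bool g_frameReady = false;
std::atomic<bool> g_running{true};

void UIThreadLoop()                    // layout, animation and input handling
{
    while (g_running)
    {
        UICommandList frame;
        frame.commands.push_back(42);  // placeholder for real UI draw data
                                       // (frame pacing / vsync wait omitted)
        std::lock_guard<std::mutex> lock(g_mutex);
        g_pending = std::move(frame);  // hand the finished frame over
        g_frameReady = true;
        g_cv.notify_one();
    }
}

void RenderThreadLoop()                // the only place that talks to the GPU
{
    while (g_running)
    {
        UICommandList frame;
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            g_cv.wait(lock, [] { return g_frameReady || !g_running; });
            frame = std::move(g_pending);
            g_frameReady = false;
        }
        // translate 'frame' into graphics API calls here, in parallel with the
        // UI thread already building the next frame
    }
}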

Vector Graphics

Vector graphics are to user interfaces what ray tracing is to 3D rendering. Any modern UI middleware must support vector graphics natively, and the entire rendering architecture must be built around them. All controls should be rendered with vector graphics so that they can be scaled larger or smaller and still look perfect: a single interface for all the resolutions your game supports, all of them pixel perfect.

Vector graphics allow you to create amazing effects, such as gradient ramp animations, which are hard to achieve with bitmaps and which greatly contribute to the subtle details that make an interface feel alive. When properly implemented, vector graphics always require less memory than the equivalent bitmaps. That matters not only from a storage point of view, but also for reducing bandwidth usage when rendering.

The implementation must be 100 percent GPU friendly to take advantage of current graphics architectures. It is very important to pack all the information efficiently and do all the calculations in shaders.

For example, instead of uploading full gradient ramps to the GPU, it is better to calculate and interpolate them mathematically on the fly. One of the hardest challenges is transforming path definitions with curves, like Béziers and arcs, into triangles, the GPU's native primitive. This process is called tessellation, and it is one of the critical aspects of the vector graphics renderer. Tessellation is performed in two steps: flattening, where curves are converted into straight segments, and triangulation, where the contours coming from the flattening step are converted into triangles.
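
As an illustration of the flattening step, here is a simplified sketch (not the actual tessellator) that recursively subdivides a quadratic Bézier until each piece can be replaced by a straight segment:


#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static Vec2 Mid(Vec2 a, Vec2 b) { return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f }; }

// Flatten a quadratic Bezier (p0, control c, p1) into line segments.
// The caller pushes p0 first; this function appends the remaining points.
static void FlattenQuadratic(Vec2 p0, Vec2 c, Vec2 p1, float tolerance,
                             std::vector<Vec2>& out)
{
    // Distance from the control point to the chord p0-p1 bounds the error
    // of replacing the curve with that chord.
    float dx = p1.x - p0.x, dy = p1.y - p0.y;
    float d = std::fabs((c.x - p0.x) * dy - (c.y - p0.y) * dx) /
              std::sqrt(dx * dx + dy * dy + 1e-12f);

    if (d <= tolerance)
    {
        out.push_back(p1);              // flat enough: emit the endpoint
        return;
    }

    // De Casteljau split at t = 0.5 into two smaller quadratics.
    Vec2 q0 = Mid(p0, c), q1 = Mid(c, p1), m = Mid(q0, q1);
    FlattenQuadratic(p0, q0, m, tolerance, out);
    FlattenQuadratic(m, q1, p1, tolerance, out);
}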

There are many ways to implement the triangulation, from 100 percent GPU implementations to CPU-assisted ones. Many of the pure GPU implementations out there are incomplete when it comes to supporting complex standards like SVG, which allows many tricky features such as stroking, dashing and corner joins.

In case compatibility with not-so-modern architectures, like mobile devices, is needed, this step must be performed with the help of the CPU. In that case, the critical point is streaming the information from the CPU to the GPU as efficiently as possible. There are many GL extensions available that can help with this task. This is easier to achieve on unified memory architectures, where the same memory space is accessible from any CPU or GPU in the system.
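
One possible sketch of that streaming path, assuming an OpenGL 4.4 or ARB_buffer_storage capable context and an already initialized extension loader, is a persistently mapped ring buffer that lets the CPU write vertices directly into GPU-visible memory. A production version would also need fences so the CPU never overwrites memory the GPU is still reading:


// Sketch only: assumes an OpenGL 4.4 context and an extension loader
// (GLEW, GLAD, ...) already initialized. Per-region fencing is omitted.
#include <GL/glew.h>
#include <cstring>

struct StreamBuffer
{
    GLuint buffer = 0;
    void*  mapped = nullptr;
    size_t size   = 0;
    size_t head   = 0;   // next free byte

    void Init(size_t bytes)
    {
        size = bytes;
        const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                                 GL_MAP_COHERENT_BIT;
        glGenBuffers(1, &buffer);
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferStorage(GL_ARRAY_BUFFER, bytes, nullptr, flags);     // immutable storage
        mapped = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytes, flags); // stays mapped forever
    }

    // Copy vertex data into the ring buffer and return its offset for drawing.
    size_t Push(const void* data, size_t bytes)
    {
        if (head + bytes > size) head = 0;   // wrap (fences omitted in this sketch)
        std::memcpy(static_cast<char*>(mapped) + head, data, bytes);
        size_t offset = head;
        head += bytes;
        return offset;
    }
};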

In the end, an acceptable architecture must support an all-dynamic scenario, where all the primitives could be moving at the same time during the same frame. Scenarios like this, which represent the worst case described above, completely rule out any kind of geometry caching mechanism.

Font rendering

High-quality font rendering is a mandatory feature for any UI middleware. Moving all calculations to the GPU to avoid any kind of intermediate texture is the big trend right now. Although this will probably become the norm in the near future, right now, with so many low-resolution screens still out there, the only valid generic approach is baking hinted glyphs into texture atlases. We recommend not hinting along the horizontal axis, only adjusting the glyphs vertically; this way, the original shape of each glyph is better preserved. If you use vertical hinting, then you also need vertical snapping. For the horizontal axis we prefer subpixel positioning.
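
A tiny sketch of that placement policy, using hypothetical structures, would snap the glyph's baseline vertically while keeping the fractional horizontal position and picking one of several pre-rasterized horizontal phases:


#include <cmath>

// Hypothetical glyph placement: vertical snapping to match vertical hinting,
// subpixel positioning along the horizontal axis.
struct GlyphPlacement
{
    float x;        // fractional horizontal position, used as-is
    float y;        // snapped to an integer pixel row
    int   subpixel; // which of N pre-rasterized horizontal phases to use
};

GlyphPlacement PlaceGlyph(float penX, float penY, int numPhases)
{
    GlyphPlacement p;
    p.y = std::floor(penY + 0.5f);                        // vertical snapping
    float frac = penX - std::floor(penX);                 // horizontal fraction
    p.subpixel = static_cast<int>(frac * numPhases) % numPhases;
    p.x = penX;                                           // keep subpixel precision
    return p;
}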

Although text will not be animated most of the time, it must still be able to move and rotate smoothly. To compensate for the lack of mipmaps (which would be expensive), we recommend oversampling along the horizontal axis. LCD subpixel rendering is becoming less popular these days because the technique has important limitations: not all LCDs share the same linear ordering of RGB subpixels, and alpha compositing is hard to implement with LCD subpixels.

For bigger font sizes, distance fields are becoming more and more of an option, but they are still far from perfect: they either produce rendering artifacts or are too complex to implement in shaders. Conventional mesh-based rendering is a good alternative, and it is easy to implement with minimal runtime cost and no preprocessing at all, an important factor when you are filling texture atlases dynamically.

Antialiasing

Almost all available UI middleware prioritizes quality over speed, using algorithms that are really hard to move to the GPU at interactive rates. These kinds of renderers need to cache results into bitmaps, which makes dynamic interfaces a worst-case scenario because the cache must be flushed and refilled with new content every time something changes. For video games, a renderer must be able to fill all the pixels every frame, avoiding worst-case scenarios like bitmap caching. To achieve that, the antialiasing algorithm must be delegated to the GPU.

Multisample antialiasing (MSAA) is cheap on current GPUs, and even cheaper on tiling architectures, where it comes with no extra memory cost. On high-DPI displays, very common on mobile devices, antialiasing can even be disabled while still getting good quality. With this in mind, the naive technique of just sending the polygons to the GPU is the best approach. When antialiasing is really needed and the performance cost of MSAA is not acceptable, we have experimented with extruding the contours of the primitives to get an approximation of the coverage for each pixel. The performance is close to not using MSAA at all, and the results are quite good, although shapes are slightly altered.
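
A rough sketch of that contour extrusion idea (not the exact implementation) is to emit a fringe of extra geometry around each edge whose alpha ramps from full coverage on the contour to zero roughly one pixel outwards, so the rasterizer's interpolation approximates per-pixel coverage:


#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct AAVertex { Vec2 pos; float coverage; };   // coverage ends up in alpha

// For one edge a->b of a closed contour, emit a quad that extends the edge
// outwards by ~1 pixel. Inner vertices keep coverage 1, outer vertices get 0,
// so interpolation produces a soft antialiased border. The normal's sign
// depends on the contour's winding.
static void EmitAAFringe(Vec2 a, Vec2 b, float width, std::vector<AAVertex>& out)
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len <= 0.0f) return;
    Vec2 n = { dy / len * width, -dx / len * width };   // outward edge normal

    // Two triangles forming the fringe quad: (a, b, b+n) and (a, b+n, a+n).
    out.push_back({ a, 1.0f });
    out.push_back({ b, 1.0f });
    out.push_back({ { b.x + n.x, b.y + n.y }, 0.0f });

    out.push_back({ a, 1.0f });
    out.push_back({ { b.x + n.x, b.y + n.y }, 0.0f });
    out.push_back({ { a.x + n.x, a.y + n.y }, 0.0f });
}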

Rendering Integration

An efficient integration between the middleware renderer and the video game engine is not an easy task. Graphics APIs are state machines; in an inefficient integration, the GPU state is saved before rendering the UI and restored afterwards, and saving and restoring GPU state can be really expensive. Many times the UI must also be integrated into the game world as a mesh that blends with the 3D scene, for example when the user interface is used in virtual reality environments. In all these cases the integration is not trivial: the middleware must be graphics API agnostic, and it must offer appropriate callbacks to be implemented by the game. For the same reasons, we do not expect the middleware to allocate memory on its own, open files, or create threads, and it should never interact directly with the GPU.

The middleware must send the primitives to be drawn through these callbacks. The set of primitives to be rendered must be properly sorted by shader to minimize the number of batches sent to the GPU. The algorithm is very similar to the ones used in 3D engines, where objects are sorted by material and texture. Avoiding redundant state changes is also relevant for the UI; this is an important detail that is often overlooked.
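
The following sketch shows what such a renderer-agnostic boundary could look like; the types and names are illustrative, not the actual NoesisGUI API. The middleware describes batches, the game implements the device interface for its own graphics API, and batches are sorted by shader and texture before submission:


#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative only: a batch is everything needed to issue one draw call.
struct Batch
{
    uint32_t shaderId;        // primary sort key to minimize pipeline changes
    uint32_t textureId;       // secondary sort key
    uint32_t vertexOffset;    // into the shared vertex stream
    uint32_t indexOffset;
    uint32_t indexCount;
};

// The game implements this for D3D, Vulkan, Metal, GL, ... The middleware
// never touches the GPU directly; it only calls back through this interface.
class RenderDevice
{
public:
    virtual ~RenderDevice() = default;
    virtual void DrawBatch(const Batch& batch) = 0;
};

// Sort by shader, then texture, to avoid redundant state changes.
inline void SubmitBatches(std::vector<Batch>& batches, RenderDevice& device)
{
    std::sort(batches.begin(), batches.end(), [](const Batch& a, const Batch& b)
    {
        return a.shaderId != b.shaderId ? a.shaderId < b.shaderId
                                        : a.textureId < b.textureId;
    });

    for (const Batch& b : batches)
        device.DrawBatch(b);
}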

The middleware must also offer callbacks compatible with modern graphics APIs like D3D12, Metal, or Vulkan, where GPU commands can be recorded in parallel from different threads or jobs, and it must provide an easy way to be integrated into those jobs. That way, the 3D scene and the UI can build their command lists in parallel.

Animation

Visually engaging user interfaces with rich, fluid animations are a mandatory feature for any middleware. Achieving a high framerate, using the graphics hardware and operating on a separate UI thread are all desired characteristics of the animation system. Objects are animated by applying modifications to their individual properties. By just animating a background color or applying an animated transformation, you can create dramatic screen transitions or provide helpful visual cues. Subtle animations on each UI element are the key to a modern-looking interface. The middleware must offer enough performance to animate everything being shown on screen without slowing down the framerate.

Framework

A user interface middleware is more than just a renderer and a scripting language. It must offer a rich framework of classes that designers can choose from. Elements like Button, CheckBox, Label, ListBox, ComboBox, Menu, TreeView, ToolBar, ProgressBar, Slider, TextBox, or PasswordBox must be part of a rich palette of controls to be used or extended. The middleware must offer well-defined mechanisms to extend the provided elements, either by adding new functionality or by aggregating existing ones.

"Layout" is the term for how collections of elements are measured and arranged inside their parent panel. It is an intensive, recursive process that determines where elements are positioned before rendering. Each panel exposes a different layout behavior: for example, you may need horizontal or vertical stacking, corner anchoring, or resolution-independent scaling. The layout system and the palette of panels offered by the middleware must be powerful enough to handle the different resolutions needed by the game.
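
As an illustration, the layout pass of a vertical stack panel could be sketched with measure and arrange as two recursive steps over the children. The types below are hypothetical and heavily simplified:


#include <algorithm>
#include <cstddef>
#include <vector>

struct Size { float width, height; };
struct Rect { float x, y, width, height; };

// Minimal element interface: every element can measure itself against an
// available size and then be arranged into a final rectangle.
class Element
{
public:
    virtual ~Element() = default;
    virtual Size Measure(Size available) = 0;
    virtual void Arrange(Rect finalRect) = 0;
};

// A vertical stack: children get unconstrained height during measure, then
// are placed one below the other during arrange.
class VerticalStackPanel : public Element
{
public:
    std::vector<Element*> children;

    Size Measure(Size available) override
    {
        m_sizes.clear();
        Size desired = { 0.0f, 0.0f };
        for (Element* child : children)
        {
            Size s = child->Measure({ available.width, 1e9f });
            desired.width = std::max(desired.width, s.width);
            desired.height += s.height;
            m_sizes.push_back(s);
        }
        return desired;
    }

    void Arrange(Rect finalRect) override
    {
        float y = finalRect.y;
        for (std::size_t i = 0; i < children.size(); ++i)
        {
            children[i]->Arrange({ finalRect.x, y, finalRect.width, m_sizes[i].height });
            y += m_sizes[i].height;
        }
    }

private:
    std::vector<Size> m_sizes;  // desired sizes cached between the two passes
};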

Styling

Besides extending controls, a robust UI middleware must offer theming mechanisms that allow developers and designers to create visually compelling effects and a consistent appearance for their game. A strong styling and templating model is necessary to allow the appearance to be maintained and shared within and among games. A clean separation between the logic and the presentation is mandatory: it means designers can work on the appearance of the game at the same time developers work on the programming logic.

CONCLUSION

After years of hard work and many UI libraries implemented, we decided to develop our own solution, NoesisGUI, which addresses all the points discussed in this article. We also took the time to analyze and experiment with the majority of the products available on the market, and we were unable to find any solution that satisfied these key points. NoesisGUI is the result of more than five years of development; it is compatible with desktop, mobile, and console platforms, and it was built from the ground up to be lightweight and fast. We also made it compatible with XAML and its design tools, like Microsoft Blend.

Please feel free to comment on any area described in this article. We would be glad to hear your opinion.

Find more information about NoesisGUI. Contact Noesis here.
