How to get better FPS on PC while porting games to consoles
The approach to optimization differs between PCs and consoles. A well-optimized PC build is always good to have: it improves the player's experience, but it is not always critical. A console game, however, has to meet strict performance criteria before it can be published at all: a stable framerate, no long unresponsive static screens, staying within the available RAM, limits on save-data size and on read-write operations per minute - getting to the console markets requires meeting these and many more requirements.
In this article, we will discuss the best optimizing practices for console builds, which may also help you improve PC build performance.
Console-specific requirements for game builds
Games on consoles are expected to hold a stable framerate, but the target number differs from game to game depending on complexity and the supported screen resolution, be it 4K, Full HD, or lower.
The most common targets are 30 FPS for Gen-8 consoles and the Nintendo Switch, 60 FPS for most games on PS5 and Xbox Series, and sometimes 120 FPS, which some PS5 and Xbox Series games support. Since the hardware is fixed, it's up to the developer to choose a resolution and FPS target they can hold while fitting in as much quality and content as possible.
Unity development perspective
There are various ways to optimize games. Nadiia Havrylenko, Unity Developer at Pingle Studio, shares our game optimization experience from the Unity engine perspective.
The optimization process is similar for PCs and consoles and consists of two steps: identifying the problem and fixing it.
Risk of Rain 2 screenshot, captured on PS4. Engine: Unity
Unity provides various tools for identifying problems, such as the Profiler, the Memory Profiler, and the Frame Debugger. When we work on console builds, the toolset also depends on the specific console and its devkit. Properly using the devkit functionality gives a deeper look into the problem and a more thorough profiling process, which is extremely useful in the occasional cases when universal PC tools fail because of platform differences.
Solving the development problems requires much more creativity. Once we identify the issue, fixing it may take more work and sometimes involves quality trade-offs.
Let’s split what we can do to optimize the console game into CPU and GPU parts.
The most common CPU optimization practices involve the following:
Modifying slow-performing code fragments - everyone understands the importance of code quality, but not everyone pays proper attention to it. Rewriting the "slow" pieces of code, moving them to an async thread to run in the background, or caching data to reuse later will most likely improve your build's framerate immediately. Note, though, that these actions might also require some additional memory.
Correct memory use - watching out for memory leaks is another must if you want to improve your game's performance. Identifying and properly fixing them in Unity requires a deep understanding of memory management and of each specific project's structure. The widely used Unity tool, the Garbage Collector, can overload both the CPU and memory, so keep it in mind even though it runs in the background: if one frame creates a lot of garbage, a later frame will spend a lot of milliseconds cleaning it up. A minimal reuse-over-allocate sketch follows this list.
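To show the reuse-over-allocate idea in an engine-agnostic way, here is a minimal C++ sketch (not Unity API code; the Projectile and ProjectilePool names are made up for illustration): objects are recycled from a pre-allocated pool instead of being created and discarded every frame, so neither the allocator nor a garbage collector has to clean up afterwards.

```cpp
#include <cstddef>
#include <vector>

struct Projectile { float x = 0, y = 0, vx = 0, vy = 0; bool alive = false; };

// A tiny fixed-size pool: objects are recycled instead of being
// allocated and freed (or garbage-collected) every frame.
class ProjectilePool {
public:
    explicit ProjectilePool(std::size_t capacity) : items_(capacity) {}

    Projectile* Spawn(float x, float y, float vx, float vy) {
        for (auto& p : items_) {
            if (!p.alive) {                   // reuse a dead slot
                p = {x, y, vx, vy, true};
                return &p;
            }
        }
        return nullptr;                       // pool exhausted: no allocation spike
    }

    void Update(float dt) {
        for (auto& p : items_) {
            if (!p.alive) continue;
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            if (p.y < 0.0f) p.alive = false;  // "despawn" back into the pool
        }
    }

private:
    std::vector<Projectile> items_;           // allocated once, up front
};

int main() {
    ProjectilePool pool(128);                 // one allocation at startup
    pool.Spawn(0, 10, 1, -1);
    for (int frame = 0; frame < 60; ++frame) pool.Update(1.0f / 60.0f);
}
```

In Unity, the same pattern is typically applied to frequently spawned objects and to reusable buffers and lists.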
Paleo Pines screenshot. Engine: Unity
In terms of GPU optimization, once you detect a frame, scene, or camera view whose rendering takes a lot of resources, you can do the following:
improve batching;
optimize the involved shaders;
bake more data to avoid calculating it at runtime, which will require more memory (see the lookup-table sketch after this list);
transfer some calculations to the CPU (which will take more CPU time).
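As a tiny illustration of the bake-it-instead-of-computing-it trade-off mentioned in the list above, here is a generic C++ sketch (not engine code; the curve and table size are arbitrary): an expensive function is sampled once into a table, and runtime queries become a cheap lookup, trading memory for per-frame work.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kTableSize = 256;

// Bake an expensive curve (here: a gamma-like response) into a table once.
std::array<float, kTableSize> BakeCurve() {
    std::array<float, kTableSize> table{};
    for (int i = 0; i < kTableSize; ++i) {
        float x = static_cast<float>(i) / (kTableSize - 1);
        table[i] = std::pow(x, 2.2f);        // the "slow" computation, done once
    }
    return table;
}

// Runtime lookup: an index and a read instead of a pow() per sample.
float Sample(const std::array<float, kTableSize>& table, float x) {
    int i = static_cast<int>(x * (kTableSize - 1) + 0.5f);
    if (i < 0) i = 0;
    if (i >= kTableSize) i = kTableSize - 1;
    return table[i];
}

int main() {
    auto table = BakeCurve();                // baked once, e.g. at load time
    std::printf("0.5 -> %.3f (baked) vs %.3f (computed)\n",
                Sample(table, 0.5f), std::pow(0.5f, 2.2f));
}
```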
Extra attention to post-processing is important because most packages are optimized for PCs, not consoles. GPU optimizations are often platform-specific.
Optimization for consoles requires meticulous work, and we usually spend a lot of time on it. Some optimizations involve no quality trade-offs and benefit any platform, including the PC build: optimized algorithms that still give the same result, async or parallel execution approaches, or simple oversights in settings or hierarchy that were located and fixed. Other optimizations - lowering texture quality, simplifying post-processing, adding limits on spawned entities - will slightly alter the picture quality, the amount of memory used, and the CPU load, and in rare cases constrain the gameplay itself. In that case, it's reasonable to keep them platform-specific - only for the platforms that need them.
Olexandr Makeev, Unity Developer at Pingle Studio, shares some more insights on the topic in the context of Unity.
Different optimizations can help a game run more stably, with fewer hiccups and a better framerate.
Each type of issue requires specific actions, but the approach to solving them is much the same in Unity. Your best friend in this case is the Unity Profiler.
The Profiler helps diagnose the root cause of the game's performance issues. Used together with deep profiling, it shows memory allocations and processing time, all tied to the specific functions your code executes - the best place to start when dealing with performance problems. The profiling toolset consists of several tools, but the most important ones for optimization are the Profiler window and the Memory Profiler.
Speaking of the specific issues, here are the most common problems and how to solve them:
Long hiccups and freezes - you can detect them as big spikes in the Profiler graphs. The most common causes are the Garbage Collector running its routine, creating too many objects in a single frame, and processing very costly algorithms synchronously. To fix this, use the Profiler to find the functions that generate a lot of garbage allocations and optimize them so they leave no unused objects behind where possible. In general, try to refactor your code to have a smaller memory footprint;
Demanding/time-consuming functions - there are always some functions you can't get rid of that simply need time to process. Consider moving them to asynchronous execution, or at least spreading their load over multiple frames via Coroutines (the time-budget sketch after this list shows the same idea in an engine-agnostic form);
Running out of memory – a common issue for console titles, as the available memory is much smaller than on PCs, and the main cause of crashes. Use the Memory Profiler to see which objects are taking up all the space and which are left behind unused. Very often, textures get left behind because their native and managed portions point at each other, which prevents the texture from being collected by the GC. Hint: always name the textures you create! Also, keeping full arrays/lists of objects you will not use later, or creating new arrays/lists instead of reusing old ones, leads to memory leaks;
Long loading times – these are often caused by the game doing work that does not need to be done right now. Sometimes it is possible to postpone loading things that are offscreen or nowhere near the player; they can be loaded later during gameplay. Everything mentioned above applies to loading too: it can be slowed down by immense garbage generation, where the GC is forced to run multiple times per frame to clean up the mess left behind by the loading process. Using the Profiler to make your generation/loading functions take less processing time will also help.
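Unity coroutines are C#-specific, but the underlying idea of spreading a long job over several frames under a per-frame time budget can be sketched in plain C++ like this (TimeSlicedQueue is a made-up name for illustration):

```cpp
#include <chrono>
#include <cstdio>
#include <deque>
#include <functional>

// A queue of small work items; each frame we only process as many
// as fit into a fixed time budget, so one long job never stalls a frame.
class TimeSlicedQueue {
public:
    void Push(std::function<void()> job) { jobs_.push_back(std::move(job)); }

    // Call once per frame, e.g. with a budget of a couple of milliseconds.
    void RunForBudget(std::chrono::microseconds budget) {
        auto start = std::chrono::steady_clock::now();
        while (!jobs_.empty() &&
               std::chrono::steady_clock::now() - start < budget) {
            jobs_.front()();
            jobs_.pop_front();
        }
    }

    bool Empty() const { return jobs_.empty(); }

private:
    std::deque<std::function<void()>> jobs_;
};

int main() {
    TimeSlicedQueue queue;
    for (int i = 0; i < 1000; ++i)
        queue.Push([i] { volatile double x = i * 0.5; (void)x; });  // stand-in work

    int frame = 0;
    while (!queue.Empty()) {                          // the "game loop"
        queue.RunForBudget(std::chrono::microseconds(200));
        ++frame;
    }
    std::printf("finished the job over %d frames\n", frame);
}
```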
And here are some more CPU- and GPU-specific recommendations.
It is essential to understand where the bottleneck in your game is. Depending on whether it is the CPU or the GPU, you should consider different actions.
For CPU optimization, reduce the constant load in your update functions. Shaving even a little off a function's processing time can lead to significant improvements when it is called thousands of times per frame. You can apply everything mentioned above to reduce processing time and garbage generation.
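A minimal sketch of that principle in generic C++ (the names and the "average distance" task are invented for illustration): cache the result of the heavy part and refresh it only when the underlying data changes, so the per-frame path stays cheap.

```cpp
#include <cmath>
#include <vector>

// Before: recomputes the same expensive value on every per-frame call.
float AverageDistanceNaive(const std::vector<float>& dists, float scale) {
    float sum = 0.0f;
    for (float d : dists) sum += std::sqrt(d);   // expensive loop, every frame
    return sum / dists.size() * scale;
}

// After: cache the expensive part and refresh it only when the data changes.
class AverageDistanceCached {
public:
    void SetDistances(std::vector<float> dists) {
        dists_ = std::move(dists);
        dirty_ = true;                           // mark the cache stale
    }

    float Get(float scale) {
        if (dirty_) {                            // heavy work only when needed
            cachedSum_ = 0.0f;
            for (float d : dists_) cachedSum_ += std::sqrt(d);
            dirty_ = false;
        }
        return cachedSum_ / dists_.size() * scale;   // cheap per-frame path
    }

private:
    std::vector<float> dists_;
    float cachedSum_ = 0.0f;
    bool dirty_ = true;
};

int main() {
    AverageDistanceCached avg;
    avg.SetDistances({1.0f, 4.0f, 9.0f});
    float perFrame = avg.Get(2.0f);              // call this from your update loop
    (void)perFrame;
}
```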
For GPU optimization, consider using Levels of Detail (LOD) - a system that shows a less detailed version of a mesh depending on how much screen area the mesh occupies. A significant boost can also come from dynamic resolution scaling, which smooths out the heavy moments when there are many objects and effects on screen.
The drop in resolution at such moments is usually not very noticeable, especially when combined with a fast upscaling method, but it helps keep the framerate stable (a rough sketch of the idea follows below). If heavy physics computations show up in the profiler, consider optimizing that part as well: use less costly physics methods, call them less often, and apply physics layering.
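Here is a rough sketch of how a frame-time-driven resolution scale could be chosen (generic C++; engines typically ship this logic built in, so treat it purely as an illustration of the feedback idea, with made-up thresholds):

```cpp
#include <algorithm>
#include <cstdio>

// Nudges the render-resolution scale up or down so the measured GPU frame
// time stays close to the target (e.g. 16.6 ms for 60 fps).
float UpdateResolutionScale(float currentScale, float gpuFrameMs, float targetMs) {
    const float minScale = 0.5f;             // never drop below 50% resolution
    const float maxScale = 1.0f;             // never exceed native resolution
    const float step = 0.05f;                // small, gradual adjustments

    if (gpuFrameMs > targetMs * 1.05f)       // over budget: lower resolution
        currentScale -= step;
    else if (gpuFrameMs < targetMs * 0.85f)  // lots of headroom: raise it back
        currentScale += step;

    return std::clamp(currentScale, minScale, maxScale);
}

int main() {
    float scale = 1.0f;
    // Simulated GPU frame times (ms) for a 60 fps target: a spike, then recovery.
    for (float ms : {16.0f, 22.0f, 21.0f, 18.0f, 15.0f, 13.0f, 13.0f}) {
        scale = UpdateResolutionScale(scale, ms, 16.6f);
        std::printf("gpu %.1f ms -> scale %.2f\n", ms, scale);
    }
}
```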
Most importantly, optimizing the game might not be required if it runs at the desired performance level.
Unreal Engine development perspective
Kostyantyn Oleschenko, Project Lead of the Unreal Engine Development Department at Pingle Studio, shares some insights on the topic from the perspective of working with UE games.
THE FINALS screenshot. Engine: Unreal Engine 5
No matter the platform or engine you target for your game, the general optimizing challenges remain the same—identifying the issues and fixing them—but with consideration of Unreal Engine’s specifics.
One of the first things we recommend paying attention to is the rendering type - forward or deferred. Choosing the most appropriate render type for your exact game is the first step toward optimal performance for any Unreal Engine project.
The next thing we recommend considering is platform limitations. This is typical for consoles, but optimizing your PC build to meet these limitations might help your game cover a much wider range of PC configurations. The most common limitations are:
Post-process limitations – we mostly fix them by replacing post-process effects with alternatives like tone mappers and adjusting them to reach the best possible visual quality;
Decal limitations – they are often easily fixed by replacing them with planes and materials;
Light sources and other light-related limitations – to fix this, we usually redesign levels to use more static light and re-use dynamic lights;
Insurgency: Sandstorm screenshot. Engine: Unreal Engine 4.27
The next thing to discuss is performance spikes, which occur in any porting project. We separate them into three categories and treat them accordingly. The same applies to resources - textures, skeletal animation binaries, and shader reflection data - which may also need to be recompiled into the console's internal formats to achieve better performance.
VFX spikes.
Many Unreal Engine games feature heavy visual effects, which may work fine on high-end PCs with powerful graphics processors but cause problems even on high-end consoles. To lower the load caused by VFX, we work closely with the VFX and tech art teams and optimize every VFX we can to find a balance between looks and performance. Even lighter VFX can be optimized to gain a few extra frames, for example by tuning the VFX settings and manually managing which VFX are kept in memory.
Sound Effects spikes.
We recommend paying attention to the processing of sound classes, e.g., SoundCue, to fix sound-related issues. Properly setting up the sounds with the correct category, concurrency settings, attenuation adjustments, and source effect chains should do the trick, even on low-end consoles like the Nintendo Switch. Redesigning the sound-related gameplay logic and tuning the sound cache correctly also help reduce the hardware load caused by sound. All current versions of Unreal Engine offer a decent toolset for working with sound.
PSO spikes.
These spikes occur when the game suddenly has to compile new pipeline states at runtime - for example, when lighting or effects change in specific game scenarios. They can mostly be fixed by carefully setting up the PSO cache. This is often time-consuming: you have to collect records, compile the cache, build the application with the cache, and repeat the process for every spike. But the framerate you gain by doing it is worth the effort.
Render development perspective
A decent share of all possible optimizations comes from render development, and we have a separate engineering department for this kind of work at Pingle. Yevhen Karpenko, one of the key engineers of Pingle's Render department, shares valuable insights from the render development perspective.
There are four basic render-related issues that regularly come up during porting:
Consoles of various generations and manufacturers have different architectures, which requires optimizing and implementing the rendering code for each platform;
Consoles tend to have less powerful hardware than modern PCs, which can result in reduced graphics quality;
Consoles of the same generation may have different GPUs and memory architectures, which may result in the need to optimize rendering for each platform (e.g. PS4 and Xbox One);
Even within the same generation and the same manufacturer, consoles can come in different models with different specs: PS4, PS4 Pro, and PS4 Slim, or Xbox One S and Xbox One X. This may also lead to the need to implement functionally different renderers even within the same console family.
Another crucial issue is the compatibility of the console graphics APIs with the original game's graphics API. The best-case scenario is porting a DirectX game to an Xbox console, since the Xbox graphics API is almost entirely DirectX-compatible, with a few exceptions.
If the original game was implemented on a different API, there may be incompatibility and differences between the APIs, including:
the declarative model for generating rendering commands;
differences between coordinate systems;
the concept of rendering states;
the shader language;
the shader model.
OpenGL, DirectX, and Metal are three popular graphics APIs used for game and 3D application development, and when porting to specific gaming consoles, developers may encounter the challenges described above.
The APIs also differ in their coordinate conventions. OpenGL has the Y axis pointing up both in clip space and in texture coordinates (bottom-left origin). DirectX and Metal keep Y pointing up in clip space but use a top-left origin for texture coordinates and render targets, while Vulkan's clip space has Y pointing down; the console graphics APIs have their own conventions on top of that.
OpenGL uses a [-1, 1] depth range in clip space, where the near plane is at -1 and the far plane at 1. DirectX, Metal, and Vulkan use a [0, 1] depth range, with the near plane at 0 and the far plane at 1.
These differences can cause problems when porting code between APIs. A standard solution is to flip the Y axis (in the projection matrix or in shaders) and to adjust depth-related calculations to the target API's range.
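As a concrete example of such an adjustment, a projection matrix built for OpenGL conventions (Y up, depth in [-1, 1]) can be adapted to a Y-down, [0, 1]-depth API by premultiplying a small correction matrix. A minimal C++ sketch:

```cpp
#include <array>
#include <cstdio>

// 4x4 matrix in row-major order: m[row][col], vectors treated as columns.
using Mat4 = std::array<std::array<float, 4>, 4>;

// Multiply two 4x4 matrices: result = a * b.
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Converts a projection matrix built for OpenGL conventions
// (+Y up, depth in [-1, 1]) into one suitable for an API with
// +Y down in clip space and depth in [0, 1].
Mat4 AdjustProjection(const Mat4& glProj) {
    const Mat4 correction = {{
        {1.0f,  0.0f, 0.0f, 0.0f},
        {0.0f, -1.0f, 0.0f, 0.0f},   // flip the Y axis
        {0.0f,  0.0f, 0.5f, 0.5f},   // remap z from [-1, 1] to [0, 1]
        {0.0f,  0.0f, 0.0f, 1.0f},
    }};
    return Mul(correction, glProj);
}

int main() {
    // With an identity "projection", a clip-space z of -1 (GL near plane)
    // should map to 0 and z of +1 (GL far plane) to 1.
    Mat4 identity = {{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}};
    Mat4 p = AdjustProjection(identity);
    for (float z : {-1.0f, 1.0f}) {
        float zc = p[2][2] * z + p[2][3] * 1.0f;  // w = 1
        std::printf("gl z = %+.1f -> [0,1] z = %.1f\n", z, zc);
    }
}
```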
Key differences also exist in how rendering commands are formed in Metal, OpenGL, and DirectX. Vulkan, DirectX 12, and Metal follow a low-level, command-buffer-centric approach: developers explicitly record rendering commands into command buffers, which are submitted to a queue and executed efficiently by the GPU.
OpenGL and DirectX 11 offer a more state-oriented approach: developers set various states (textures, shaders, and so on) that affect the behavior of subsequent draw calls, with DirectX 11 wrapping this state in a more object-oriented structure.
Because Vulkan, DirectX 12, and Metal pre-record commands into command buffers, the GPU can optimize execution and the load on the drivers is reduced. OpenGL and DirectX 11 depend on state changes, which can result in redundant calls and less efficient execution of complex scenes with frequent state changes.
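To make the contrast concrete, here is a deliberately toy C++ sketch (not actual OpenGL or Vulkan code; all types are invented): the state-machine style mutates global state before each draw, while the command-buffer style records commands once and replays them on submission.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// --- State-machine style (OpenGL / D3D11 flavour): global state is
// mutated before every draw call, and the driver must validate it each time.
struct StateMachineRenderer {
    int boundShader = 0, boundTexture = 0;
    void BindShader(int s)  { boundShader = s; }
    void BindTexture(int t) { boundTexture = t; }
    void Draw(int vertexCount) {
        std::printf("draw %d verts (shader %d, texture %d)\n",
                    vertexCount, boundShader, boundTexture);
    }
};

// --- Command-buffer style (Vulkan / D3D12 / Metal flavour): commands are
// recorded once into a buffer, then submitted; validation can happen at
// record time and the buffer can be replayed every frame.
struct CommandBuffer {
    std::vector<std::function<void()>> commands;
    void Record(std::function<void()> cmd) { commands.push_back(std::move(cmd)); }
    void Submit() const { for (const auto& c : commands) c(); }
};

int main() {
    StateMachineRenderer gl;
    gl.BindShader(1); gl.BindTexture(7); gl.Draw(36);   // immediate, stateful

    CommandBuffer cb;
    cb.Record([] { std::printf("bind pipeline 1\n"); });
    cb.Record([] { std::printf("draw 36 verts\n"); });
    cb.Submit();                                        // replayable every frame
}
```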
The difficulties of porting rendering to consoles also include differences between shader models. Shader models are specifications that define the capabilities and functions available in graphics processing unit (GPU) shaders. Different versions offer different levels of complexity and functionality, affecting the visuals achievable in games and applications.
Shader models define the instruction set, data types, and functions supported by the GPU for these shader programs.
Shader models can provide the following functions:
more complex lighting models (such as physically based rendering);
advanced texture filtering techniques (such as anisotropic filtering);
tessellation shaders for detailed geometry;
compute shaders for general-purpose computing and asynchronous GPU computing;
higher-precision floating-point calculations, resulting in more accurate results and fewer visual artifacts.
In the cases described at the beginning of this section, different consoles can implement the shader model's feature set differently, which requires developers to do a more detailed job of porting the rendering. This may mean providing full or partial compatibility with the shader model used by the original game, or implementing the missing functionality (or an alternative to it) on the target console.
Also, each graphics API has its own shader programming language and shader system, which may be incompatible across APIs. GLSL, HLSL, and Metal Shading Language (MSL) are all languages used for writing shaders, but they serve different graphics APIs and have some key differences.
All three can create vertex, pixel, and geometry shaders that control the way objects are drawn on the screen. However, they all have different syntax, shader compilation processes, various loading and reflection processes, and processes for binding resources and data for shaders.
When porting a game to a console, the shader code and the shader compilation system can become a big problem from the rendering point of view: different graphics APIs, especially console ones, can implement resource binding, shader compilation, and shader capabilities at very different levels of abstraction in order to achieve higher performance and make better use of the hardware.
Five Nights at Freddy’s: Help Wanted 2 screenshot, captured on PS VR 2. Engine: Unreal Engine 5.
In addition to all this, because consoles have less powerful hardware than modern PCs, various implementation techniques can be used when rendering.
One such technique is upscaling. In video games, upscaling refers to techniques that increase the resolution of a rendered image. This becomes critical when porting a game from PC (where resolutions can vary greatly) to consoles (which have a fixed resolution).
During upscaling, the game is rendered at the resolution that lets the console guarantee its target framerate, say 60 fps. This resolution can be much lower than the accepted standards - 720p, for example, has roughly a ninth of the pixels of 4K - and the image is then scaled up to a more acceptable format by the console's software or hardware.
The majority of resource-related cases require texture optimization (a rough memory-footprint sketch follows the list below), which may include:
changing formats;
transferring textures to internal console formats;
transferring textures to packaged formats, which can guarantee faster loading of textures;
changing texture sizes, especially when upscaling is used and the original texture resolution is not needed because the game renders at a lower resolution.
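A back-of-the-envelope sketch of why format and size changes matter so much (generic C++; the bits-per-pixel figures are typical for uncompressed RGBA8 and BC-class block-compressed formats, and the texture sizes are arbitrary):

```cpp
#include <cstdio>

// Rough GPU memory footprint of a square texture with a full mip chain.
// bitsPerPixel: 32 for uncompressed RGBA8, 8 for BC7-class formats,
// 4 for BC1-class formats (illustrative figures).
double TextureMemoryMiB(int size, double bitsPerPixel) {
    double bytes = 0.0;
    for (int s = size; s >= 1; s /= 2)            // sum every mip level
        bytes += static_cast<double>(s) * s * bitsPerPixel / 8.0;
    return bytes / (1024.0 * 1024.0);
}

int main() {
    // A 2048x2048 texture in three formats.
    std::printf("RGBA8    : %.1f MiB\n", TextureMemoryMiB(2048, 32));
    std::printf("BC7      : %.1f MiB\n", TextureMemoryMiB(2048, 8));
    std::printf("BC1      : %.1f MiB\n", TextureMemoryMiB(2048, 4));
    // Halving the resolution to 1024x1024 cuts each figure by roughly 4x.
    std::printf("BC7 @1024: %.1f MiB\n", TextureMemoryMiB(1024, 8));
}
```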
And here are some more examples of shader optimization:
rewriting unproductive sections of code,
choosing more appropriate data types,
choosing simpler but sufficiently good shading algorithms.
A good example of shader-related optimization we used at Pingle is optimizing the geometry vertex format, which speeds up both uploading the geometry to the video card and rendering it. A minimal sketch of the idea is shown below.
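This engine-agnostic C++ sketch shows one possible packing (the exact packed attribute formats depend on the target GPU and the content, so treat the layout below as an assumption, not a recipe):

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>

// A typical "fat" vertex: 3 floats position, 3 floats normal, 2 floats UV.
struct FatVertex {
    float px, py, pz;
    float nx, ny, nz;
    float u, v;
};                                   // 32 bytes per vertex

// A packed variant: positions stay full float, normals become 8-bit
// signed-normalized, UVs become 16-bit unsigned-normalized (assuming
// UVs stay in [0, 1]). GPUs can read these attribute formats directly.
struct PackedVertex {
    float    px, py, pz;
    int8_t   nx, ny, nz, pad;        // SNORM8 normal + padding
    uint16_t u, v;                   // UNORM16 texture coordinates
};                                   // 20 bytes per vertex

PackedVertex Pack(const FatVertex& in) {
    auto snorm8  = [](float f) { return static_cast<int8_t>(std::lround(f * 127.0f)); };
    auto unorm16 = [](float f) { return static_cast<uint16_t>(std::lround(f * 65535.0f)); };
    return { in.px, in.py, in.pz,
             snorm8(in.nx), snorm8(in.ny), snorm8(in.nz), 0,
             unorm16(in.u), unorm16(in.v) };
}

int main() {
    std::printf("fat: %zu bytes, packed: %zu bytes per vertex\n",
                sizeof(FatVertex), sizeof(PackedVertex));
}
```

Cutting the per-vertex size this way reduces both the memory the mesh occupies and the bandwidth needed to fetch it during rendering.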
Another example is implementing simpler and less performance-intensive shading algorithms, for instance:
coarser shading or Screen Space Ambient Occlusion algorithms;
moving real-time calculation algorithms and rendering passes to precomputed ones;
baking shadow, environment, and reflection maps;
replacing render passes that are too heavy for the console with precomputed ones.
The conclusion: Why upgrade your PC build with console optimizations?
We know of cases where poorly optimized PC games started performing well after some time and a few patches, but that can hardly be called good practice. And there are not many PC-exclusive releases anyway.
If you, as a publisher or developer, plan a multiplatform release for your game, you will definitely pay special attention to optimizing the console versions. As you can see from this article, many of the optimizations required on the consoles can also increase the game's PC build performance.
By adopting console-like optimization for PC builds, developers ensure that their games meet high performance standards, which is crucial in a market where stability and fluidity significantly impact user satisfaction and retention.
The resource and time investment required to implement console optimization techniques into PC can be substantial. However, the potential return on investment (ROI) justifies this effort. Typically, optimizations that ensure consistent frame rates and efficient memory management on consoles can also boost performance on PCs, particularly on lower-end systems. This broadens the game's appeal and accessibility, potentially expanding the player base.
The initial stages of implementing these optimizations involve rigorous profiling and debugging, which might increase development time. However, once established, these practices lead to more streamlined and efficient development cycles for future projects. As a game development company with 16+ years on the market, we can state that after integrating console optimizations, the time spent on subsequent optimizations and bug fixes is reduced, leading to faster overall development. Adopting stringent console standards often helps exceed PC performance expectations, which is a win-win for both developers and the gaming community.