
Ruben Torres Bonet, Blogger

April 19, 2018

11 Min Read

I decided to do some research and write about an upcoming experimental Unity feature: Scriptable Render Pipelines. Why? Because it concerns you, and it concerns me. But do not panic. Or at least, not yet. Maybe not even next year, but it will eventually change the way you have to work. The readier you are, the better off you will be.


What is SRP?

Scriptable Render Pipeline (SRP) is a new Unity system and way of thinking that allows any graphics programmer to develop a customized render loop. This means you will be able to tweak, reduce or extend how your game creates frames. This adds potential for you to optimize your game, create custom visual effects, make your system more maintainable and fix till-now-unfixable bugs, but its main strength is that it will enable you to learn how graphics work in more detail. This idea is basically the opposite of the legacy, black-box built-in renderer, where Unity held the monopoly on the rendering algorithms applied.

This technology started shipping with Unity 2018.1 Beta. Be careful, though: it is still experimental and might remain in that state for some time. Its main pillar is a C# API that is tightly bound to the C++ engine, and that API is very likely to change during the development of the feature. The main bet behind it is that you will have much more fine-grained control over the rendering process your game executes.

Unity offers two predefined render pipelines, source code included:

  • LWRP: Lightweight Rendering Pipeline

  • HDRP: High Definition Rendering Pipeline

In order to understand SRP, it pays off to have the overall picture of the typical per-camera rendering process:

  1. Culling

  2. Drawing

  3. Post-processing

If you know these aspects, feel free to skip the next sections.

1. Culling

The rendering of a frame typically starts with culling. Let us start with an informal but simple definition that will help us understand it for now.

Culling: a CPU process consisting of taking renderables and filtering them according to a camera's visibility criteria so as to produce a list of objects to be rendered.

Renderables are basically game objects with Renderer components such as MeshRenderer, and filtering just means deciding whether each of them will be included in the list or not. Note, however, that the real culling process also adds lights into the equation, but we will skip those.

Culling is important as it greatly reduces the amount of data and instructions the GPU will have to deal with. It might make no sense to render a flying airplane if we are in a cave, since we do not see it (but it might, if e.g. it projects a shadow into the cave). Culling itself takes some processor time, a fact that engines have to be aware of when balancing the CPU/GPU load.

Incoming geometry/lights + camera settings + culling algorithm = list of renderers to draw
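
To make this definition a bit more tangible, here is a minimal, purely illustrative C# sketch of a frustum-based culling pass. The FrustumCull helper is my own invention; real engine culling is far more involved (lights, occlusion, layers, jobified loops):

using System.Collections.Generic;
using UnityEngine;

public static class CullingSketch
{
    // Illustrative only: keep the renderers whose bounds intersect
    // the camera frustum. Real culling also considers lights,
    // occlusion, layers and per-camera culling masks.
    public static List<Renderer> FrustumCull(Camera camera, IEnumerable<Renderer> renderables)
    {
        var planes = GeometryUtility.CalculateFrustumPlanes(camera);
        var visible = new List<Renderer>();

        foreach (var renderer in renderables)
        {
            if (GeometryUtility.TestPlanesAABB(planes, renderer.bounds))
                visible.Add(renderer);
        }

        return visible;
    }
}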

Culling step #nofilter #nomakeup
 

2. Rendering

After we have determined which data we should display, we just go for it. A commonly found process can be summed up in the following steps:

  1. Clear (back) buffer contents (CPU)

    We discard the previously generated buffers, usually the color and depth buffers, though they might include custom buffers for other techniques such as deferred shading.

  2. Sorting (CPU)

    The objects are sorted depending on the render queue (e.g. opaque front to back, and transparent back to front for proper blending); see the sketch after this list.

  3. Dynamic batching (CPU)

    We try to group the renderers together as a single object so we can save draw calls. This optimization is optional.

  4. Command (draw call) preparation and dispatching (CPU)

    For each renderer, we prepare a draw command with its geometry data: vertices, UV coordinates, vertex colors, shader parameters such as transform matrices (MVP), texture ids, etc. This instruction, along with its data, is submitted to the API, which will work together with the driver to pack and properly format this raw information into GPU-suited data structures.

  5. Render pipeline (GPU)

    Very roughly described: the GPU receives and processes the commands; the GPU frontend then assembles the geometry, vertex shaders are executed, the rasterizer kicks in, fragment shaders do their job, the GPU backend manages blending and render targets, and all is written into different buffers.

  6. Wait for GPU to finish and swap buffers

    Depending on the VSync settings, it might even wait longer to do the back-front buffer swap.

That is an overly simplified typical rendering process.
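
Since the sorting step deserves a concrete picture, here is a hedged sketch of how a render loop could order its opaque and transparent renderers. This is just the idea, not Unity's actual implementation:

using System.Collections.Generic;
using UnityEngine;

public static class SortingSketch
{
    // Illustrative only: opaque objects render front to back (better
    // early-z rejection), transparent ones back to front (correct blending).
    public static void SortForCamera(Camera camera, List<Renderer> opaque, List<Renderer> transparent)
    {
        Vector3 eye = camera.transform.position;

        // Ascending distance: nearest opaque objects first.
        opaque.Sort((a, b) =>
            (a.bounds.center - eye).sqrMagnitude
                .CompareTo((b.bounds.center - eye).sqrMagnitude));

        // Descending distance: farthest transparent objects first.
        transparent.Sort((a, b) =>
            (b.bounds.center - eye).sqrMagnitude
                .CompareTo((a.bounds.center - eye).sqrMagnitude));
    }
}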

Plain rendering
 

3. Post-effects

After the GPU has filled the buffers (color, depth and possibly others), the developer may opt to apply further image enhancements. They consist of applying shaders to input textures (the created buffers) to overwrite them with the corrected image. Some are listed below:

| Effect | Description | Performance cost |
| --- | --- | --- |
| Bloom | Highlights the bright, emissive areas, creating an aura-like effect around the source | Medium |
| Depth of Field (DoF) | Blurs certain parts of the screen depending on the set parameters | Expensive |
| SS Anti-Aliasing | Softens the abrupt transitions between pixel colours produced by the limited resolution | Light to expensive |
| Color correction | Changes the behaviour of colours according to the defined rules | Light |
| SS Ambient Occlusion | Adds contact shadows (it darkens areas between objects) | Medium |

Note that the performance cost really depends on the platform, but as a general rule post effects are prohibitive on mobile.

One reason they are expensive is that every resulting fragment often requires multiple reads from the frame buffer (which lives in RAM for integrated GPUs), some calculations and then overwriting the buffer. If you chain several post effects, you end up eating too much memory bandwidth because of the generated overdraw.
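
To illustrate the mechanics: in the built-in renderer, a post effect essentially boils down to a blit with a material. Below is a minimal sketch, where _effectMaterial is a placeholder material you would author yourself; note that SRP-based pipelines handle post effects through their own hooks rather than OnRenderImage:

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class PostEffectSketch : MonoBehaviour
{
    // Any material whose shader reads the source texture and outputs
    // the corrected image; assign it in the inspector.
    [SerializeField] private Material _effectMaterial;

    // Built-in renderer hook: Unity hands us the rendered frame (src)
    // and we overwrite the destination with the processed version.
    private void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        if (_effectMaterial != null)
            Graphics.Blit(src, dst, _effectMaterial);
        else
            Graphics.Blit(src, dst); // pass-through
    }
}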

Post-effects

Now that we have an introductory understanding, back to our SRP topic. Still with me? Why should we learn about SRP? How will it impact you?

Why SRP?

The main issue is that Unity's built-in renderer is a monolithic, gigantic black-box rendering pipeline that tries to contemplate every use case. Making it so generic comes at a big cost:

  • It is hard to optimize.

  • It is hard to change without breaking current projects.

  • It must maintain compatibility with previous versions.

  • It is hard to do custom rendering processes and to fine-tweak projects.

  • Custom render code is prone to create side effects that are hard to trace.

  • Big studios are afraid of being limited.

That is the reason, I bet, why Unity decided to go for SRP. And it is a big move, since a great deal of the packages you can find in the Asset Store will need adaptations to work with SRP (scene light intensities, materials, shaders, etc.).

The advantages of SRP are basically the opposite of those disadvantages, plus some other neat added benefits, such as the possibility of working with upcoming tools like Shadergraph for graphical shader programming (eventually making surface shaders a rarity). One of the biggest pluses, in my opinion, is the amount of learning you will achieve through understanding how rendering works.

Basic SRP

I wrote a simple SRP based on Unity examples to show how easy (but useless?) it is to create a custom render algorithm. It starts by writing code for a scriptable object that will serve as a factory for Unity to instantiate our SRP at launch time:


[CreateAssetMenu(menuName = "SRP/Create RubenPipeline")]
public class RubenPipelineAsset : RenderPipelineAsset
{
    // Exposed in the inspector: the color we will clear the screen with.
    [SerializeField] private Color _clearColor;

    // The factory method Unity calls to instantiate our pipeline.
    protected override IRenderPipeline InternalCreatePipeline()
    {
        return new RubenPipelineImplementation(_clearColor);
    }
}

Graphics settings

I added a dummy, optional variable that dictates the clear color to use. After creating a scriptable object instance, you will eventually have to assign it in the Graphics Settings so Unity can use it for the aforementioned task.

The Graphics Settings window
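
If you prefer code over clicking, the asset can also be assigned through GraphicsSettings.renderPipelineAsset. A small editor sketch follows; the menu path and asset path are made up for the example, and the exact namespaces may shift while the API is experimental:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Experimental.Rendering;
#if UNITY_EDITOR
using UnityEditor;
#endif

public static class PipelineAssignmentSketch
{
#if UNITY_EDITOR
    [MenuItem("SRP/Assign RubenPipeline")]
    public static void AssignPipeline()
    {
        // Hypothetical asset path; adjust it to wherever you saved your instance.
        var asset = AssetDatabase.LoadAssetAtPath<RenderPipelineAsset>(
            "Assets/RubenPipeline.asset");
        GraphicsSettings.renderPipelineAsset = asset;
    }
#endif
}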

After creating the SRP factory code, we now get to the real implementation:


public class RubenPipelineImplementation : RenderPipeline
{
    private Color _clearColor;

    public RubenPipelineImplementation(Color clearColor)
    {
        _clearColor = clearColor;
    }

    public override void Render(ScriptableRenderContext renderContext, Camera[] cameras)
    {
        base.Render(renderContext, cameras);
        RenderPipeline.BeginFrameRendering(cameras);

        // You can sort the cameras however you want here
        foreach (var camera in cameras)
        {
            RenderPipeline.BeginCameraRendering(camera);

            // Set per-camera properties: matrices, FoV, clipping planes, etc.
            renderContext.SetupCameraProperties(camera);

            // Clear the color and depth buffers
            var cb = new CommandBuffer();
            cb.ClearRenderTarget(true, true, _clearColor);
            renderContext.ExecuteCommandBuffer(cb);
            cb.Release();

            // 1. Cull: gather the list of visible renderers for this camera
            CullResults cullResults;
            CullResults.Cull(camera, renderContext, out cullResults);

            // 2. Render: draw everything tagged with our "BasicPass" shader pass
            var drawRendererSettings = new DrawRendererSettings(camera, new ShaderPassName("BasicPass"));
            var filterRenderersSettings = new FilterRenderersSettings(true);
            renderContext.DrawRenderers(cullResults.visibleRenderers, ref drawRendererSettings, filterRenderersSettings);

            // Submit the recorded commands to the graphics API
            renderContext.Submit();
        }
    }
}

This process partially corresponds to the generic rendering algorithm we described before. The first task is to fire some events so that Unity and third-party plugins can inject custom code during rendering: BeginFrameRendering and BeginCameraRendering. Then, for Unity's helper functions to work, we set some camera properties for drawing (matrices, FoV, perspective/orthographic, clipping planes, etc.) through renderContext.SetupCameraProperties. We then clear the current color and depth buffer contents, setting the initial color to the one provided in the scriptable object. We do the culling process for the current camera, getting a list of renderers to be drawn. We therefore proceed to draw the cull results with the default settings and, after we are done preparing all instructions, we submit them to the API + driver.

One more thing before we get to test our new render loop: we need a custom material with a custom shader to render our geometry. For that I prepared a simple unlit shader that will work with it. There is nothing fancy about it other than the requirement to name our shader pass through the LightMode tag.


Shader "Unlit/NewUnlitShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        Pass
        {
            Tags { "LightMode" = "BasicPass" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }
            
            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}

Are we ready to try it out now? YES!

Final result

Am I proud of this? No. But one step at a time. For the curious people: how does it look on the API side? Is it obeying my commands? Let us phone our #bff RenderDoc.

Final result checked with RenderDoc (check fullscreen)

So it did what we asked for, ignoring the draw calls for Unity Editor specifics. No more, no less. You see why SRP is great? Full control, whole responsibility, maximum blame potential! I did not really make a point of it before, but the fact that we are using scriptable objects for SRP is really powerful: you can customize the parameters you offer and change them at run-time. I have not tried it yet, but I bet we will be able to change the pipeline in real-time, allowing us to adapt it to different devices as we please.
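
To sketch what such a run-time switch could look like, assuming you authored one pipeline asset per device tier (the fields and the heuristic below are mine, not an official recipe):

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Experimental.Rendering;

public class PipelineSwitcherSketch : MonoBehaviour
{
    // Two pipeline assets you would author yourself, e.g. a cheap and a fancy one.
    [SerializeField] private RenderPipelineAsset _lowEndPipeline;
    [SerializeField] private RenderPipelineAsset _highEndPipeline;

    private void Start()
    {
        // Crude device heuristic, purely illustrative.
        bool lowEnd = SystemInfo.graphicsMemorySize < 2048;
        GraphicsSettings.renderPipelineAsset = lowEnd ? _lowEndPipeline : _highEndPipeline;
    }
}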

Predefined render pipelines

Unity comes with two predefined scriptable render pipelines that you can use without further complication. It is possible, and in fact recommended, to use one of these as a template to personalize your own pipeline, because it is certainly hard to create and maintain one of these from scratch, trust me there. Let us shortly describe them:

LWRP (Lightweight Render Pipeline):

  • Specific rendering algorithm for low/mid-range devices

  • Stripped out version of the built-in renderer

  • Game renders at scaled resolution

  • UI renders at native resolution

  • No real-time GI

You can find online an official built-in renderer vs LWRP comparison as well as the LWRP source code, which I recommend having a look at if you have some (actually, lots of) spare time. Also check the comparison, as I will not be modifying this blog entry every time they change their mind.

The HDRP (High Definition Render Pipeline) targets high-end devices (desktop, PS4/XBO) and offers better quality out of the box, including deferred shading, TAA, HDR, PPAA, SSR, SSS, etc. Again, grab a cup of tea and enjoy some relaxed, yet exciting source code reading time.

Performance

I am sorry to disappoint you, but it is not feasible to do benchmarking right now with a highly unstable API that changes often. The results I found from people benchmarking HDRP and LWRP against the built-in renderer are inconsistent and hardly comparable. I will cover this in a future post, but expect the situation to improve for the reasons I mentioned a few sections above.

Summary

We have seen what rendering looks like, why Unity decided to implement such a big feature at the cost of eventually deprecating the current system, how Unity is doing it, and the out-of-the-box possibilities you have for rendering starting today. You probably realized how simplified this post is, but writing it more thoroughly would discourage most readers.

In the next blog entries I will cover how to use the neat Shadergraph tool. Till then, enjoy the following summarizing picture!

SRP summary
