
UNET Unity 5 Networking Tutorial Part 2 of 3 - Client Side Prediction and Server Reconciliation

In Part 2 of our series, we look into how we can implement client-side prediction and server reconciliation using the Unity Networking HLAPI.

The Problem With Server Authority

In Part 1 of our Unity 5 Networking Tutorial, we started with a single player Unity project and turned it into a networked application using Unity 5's new networking features. However, you might have noticed (or predicted) a problem with our current implementation: there is a time delay between applying input and having the cube actually react to the input. Unfortunately, this is a limitation of letting the server have full authority over our simulation: each client has to ask permission from the server for every move, and since it takes a nonzero amount of time for information to move back and forth across the network, we experience latency.

In this article, we will attempt to hide this latency. We will use the approach described in the article series Fast-Paced Multiplayer by Gabriel Gambetta - I recommend that you look into this article series if you want to explore networked multiplayer programming further.

Unlike Part 1 of this series which was heavily Unity-centric, the concepts we will cover in this article should apply even to non-Unity environments, although we will be implementing client-side prediction and server reconciliation using the Unity Networking HLAPI.

Download The Project

If you have already downloaded the project folder from Part 1 of the series, you are all set - that already includes the code for Part 2. Otherwise, you can get a copy of the project folder so that you can follow along. If you learn better by reading code instead of words in an article, I highly recommend that you get the actual source code file for the project (it is only one file and only about 100 lines long) so that you can pick up the concepts more efficiently.

Let us begin!

Naive Prediction

Simply speaking, client-side prediction is a way to hide latency. Instead of running game logic exclusively on the server, we also let the client run game logic, which allows the client to process input and effectively predict how the server would react to those inputs. Through prediction, we are able to compute and render any effects of player input instantly on the client instead of waiting for the server. By eliminating the wait, we eliminate latency.

We can take this definition literally and implement client-side prediction as follows:

  1. Send inputs to the server.
  2. On the client, let the same inputs affect the state immediately.
  3. Ignore updates sent back by the server.

Theoretically, this would work if the client and the server implement the game logic in exactly the same way, and if the state is determined by player input and nothing else. In practice, a player's state is affected by a lot of other things (the level, enemies, other players, etc.), which means that for any non-trivial networked multiplayer game, ignoring a server's updates would eventually lead to a desynchronization.
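To see the desynchronization concretely, here is a minimal, Unity-free sketch in Python (not the project's C#). The "game logic" is hypothetical: the server applies a knockback effect that the client's prediction knows nothing about, so the two simulations drift apart.

```python
def move(x, direction):
    """Pure state transition: apply one unit of movement."""
    return x + direction

# Client: applies its own inputs immediately and ignores the server entirely.
client_x = 0
inputs = [1, 1, 1]  # three presses of the right key
for d in inputs:
    client_x = move(client_x, d)

# Server: applies the same inputs, plus an effect the client cannot
# predict from input alone (say, an enemy pushes the player back one unit).
server_x = 0
for d in inputs:
    server_x = move(server_x, d)
server_x -= 1  # hypothetical knockback from an enemy

print(client_x, server_x)  # 3 2 - the two simulations have diverged
```

Since the client never looks at the server's updates, nothing ever corrects the drift, and it only accumulates from here.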

Naive Reconciliation

Perhaps we can fix this by not ignoring the server's updates. We could perform prediction as usual, rendering the effects of any input immediately. When we receive an update from the server, we swap the predicted state with the server state, which should correct any inconsistencies. We refer to this process as server reconciliation.

Adapting from the Merriam-Webster dictionary:

reconcile: to make consistent or congruous, to check against another for accuracy

This seems to be a great solution, but consider the following situation:

  1. We start with our cube at position (0, 0).
  2. We press the right key at time t = 0 ms. Client-side prediction moves our cube to (1, 0) immediately.
  3. The server sends us back position (1, 0) at time t = 200 ms. We perform reconciliation, and since the prediction and the server result match, the cube stays in place.
  4. We press the right key again at time t = 300 ms. Client-side prediction moves our cube to (2, 0) immediately.
  5. The server sends us back position (2, 0) at time t = 500 ms. We perform reconciliation, and since the prediction and the server result match, the cube stays in place.

Now, take this same scenario, but let us pretend that our player is able to press buttons faster:

  1. We start with our cube at position (0, 0).
  2. We press the right key at time t = 0 ms. Client-side prediction moves our cube to (1, 0) immediately.
  3. We press the right key again at time t = 100 ms. Client-side prediction moves our cube to (2, 0) immediately.
  4. The server sends us back position (1, 0) at time t = 200 ms. We perform reconciliation, and the game sees an inconsistency, so the cube jumps back to (1, 0).
  5. The server sends us back position (2, 0) at time t = 300 ms. We perform reconciliation, and the game sees an inconsistency, so the cube jumps back to (2, 0).

Although we end up at the same place, server reconciliation caused the cube to visibly jump backwards in the second scenario.
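The second scenario can be replayed as a small, Unity-free timeline in Python (not the project's C#). Naive reconciliation blindly overwrites the predicted position with whatever the server sends, so the cube rewinds:

```python
# Each event is (time in ms, kind, value): an input press of the right key,
# or a position update arriving from the server.
events = [
    (0,   "input",  1),  # t = 0 ms: press right, predict immediately
    (100, "input",  1),  # t = 100 ms: press right again
    (200, "server", 1),  # t = 200 ms: server says x = 1
    (300, "server", 2),  # t = 300 ms: server says x = 2
]

x = 0
history = []
for t, kind, value in events:
    if kind == "input":
        x += value   # client-side prediction: apply the input immediately
    else:
        x = value    # naive reconciliation: overwrite with the server state
    history.append((t, x))

print(history)
# [(0, 1), (100, 2), (200, 1), (300, 2)]
# The cube reaches 2, then snaps BACK to 1 before reaching 2 again.
```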

(Note: I feel that I have not provided a good illustration here. I highly recommend that you check out Part 2 of the Fast-Paced Multiplayer article series if you are looking for a better explanation with diagrams. Sorry about this.)

Prediction + Reconciliation

Fortunately, there is a way to make client-side prediction and server reconciliation work together. If you look at the second scenario above, you will see that the problem occurs when we receive a state that is older than the one we have already predicted. The solution to this is simple:

  1. On the client, keep all the inputs sent in a list.
  2. On the server, when we send the updated state, let us also send the latest input that caused the state.
  3. On the client:
    1. Use the updated state received from the server.
    2. Determine the latest input used by the server, and remove that input from your list together with any other older inputs.
    3. Reapply all the inputs left on the list on the state.

By following this algorithm, we can perform reconciliation and at the same time take into consideration any newer inputs not yet acknowledged by the server, which should prevent our state from rewinding back in time unnecessarily.

Later in this article, we will implement this algorithm and see how we can use it together with the new networking features of Unity 5.

Our Approach To Modifying State

Before we continue, let us take a quick detour and have a closer look at how we handle state in our code. You might have noticed that we wrote our cube movement code in a particular way:

CubeState Move(CubeState previous, KeyCode arrowKey) {

    // 1) We take the previous state as a parameter, then...

    // 2) ...we use arrowKey to figure out the new state, then...

    // 3) ...we return this new state.

    // Note: We do not modify any member variables of the MonoBehaviour directly!
    //       Instead, we return the new state, and the caller of the Move function
    //       takes care of assigning this to serverState.
}


Compare this to how else we might have written it:

void Move(KeyCode arrowKey) {

    // 1) We read the previous state from the serverState
    //    member variable directly (not as a parameter), then...

    // 2) ...we use arrowKey to figure out the new state (or alternatively
    //       even read input from here directly), then...

    // 3) ...we assign this new state directly to the serverState member variable.

    // Note: We do not return anything.
}


This design choice is deliberate. While it is certainly possible to do otherwise, using a pure function allows us to manipulate the predicted and server state as needed without worrying too much about how the rest of the game would be affected. Instead of handling the server state and the predicted state as separate cases (i.e. having a different variant of the Move method for each state tracked), we decouple the computation of a new state from any particular instance of a state. Since we started off with this design, implementing the combined client-side prediction and server reconciliation algorithm that we have described above should be straightforward.
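The payoff of the pure-function style can be shown with a tiny Python illustration (not the project's C#): the same transition function can drive both the server state and the predicted state, because it touches neither directly.

```python
def move(state, direction):
    """Pure: returns a new state, never mutates its input or any globals."""
    x, y = state
    return (x + direction, y)

server_state = (0, 0)
predicted_state = (0, 0)

# The caller decides which state each result is assigned to; move() itself
# has no idea whether it is computing a server state or a predicted one.
server_state = move(server_state, 1)
predicted_state = move(move(predicted_state, 1), 1)

print(server_state, predicted_state)  # (1, 0) (2, 0)
```

With a mutating Move, by contrast, we would need a separate variant (or extra bookkeeping) for every state we want to track.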

Keeping Track Of Inputs Sent

In step 1 of our algorithm above, we ask the client to keep all the inputs we send in a list. To do this, we would need to make a few changes to our code.

Based on the use case we have described, it seems that a Queue would be a good data structure choice to store our inputs in. Let us add our queue as a member variable:

Queue<KeyCode> pendingMoves;

In our Start function, we can initialize our queue, but we will only do so if we own the corresponding player object, since it would not make sense for us to perform prediction and reconciliation on an object that we are not creating inputs for.

if (isLocalPlayer) {
    pendingMoves = new Queue<KeyCode>();
}

Finally, we will need to actually take the inputs that we create and put them in our queue. A good place to do this is in our input handling code:

foreach (KeyCode arrowKey in arrowKeys) {
    if (!Input.GetKeyDown(arrowKey)) continue;
    pendingMoves.Enqueue(arrowKey);
    // ...then send the input to the server, as before.
}

Prediction Before Reconciliation

Now that we have a working input queue, we are ready to implement prediction. When reconciliation is involved, the predicted state is a function of two things: the last state received from the server (along with an acknowledgement of the last input the server applied), and all inputs newer than that acknowledgement. Given these two ingredients, we can compute the predicted state by starting with the server state and applying all unacknowledged inputs. This also means that we update the predicted state whenever either of these two things changes - when we receive a newer server state, or when we create new input. This is how it looks in code:

void UpdatePredictedState () {
    predictedState = serverState;
    foreach (KeyCode arrowKey in pendingMoves) {
        predictedState = Move(predictedState, arrowKey);
    }
}

Here, we assume that everything in our input queue is newer than the last input used by the server - we will enforce this later on when we implement reconciliation.

Which State Is Newer?

To implement reconciliation, we need the ability to take two states and determine which one is newer. We can do this by adding a field to our state struct that indicates age:

struct CubeState {
    public int moveNum;
    public int x;
    public int y;
}

Then, whenever we are asked to produce a new state, we simply take the age of the source state and add 1 to it. Hence, we end up with a series of states with ages 0, 1, 2, 3, etc.:

CubeState Move(CubeState previous, KeyCode arrowKey) {
    // compute dx and dy here...
    return new CubeState {
        moveNum = 1 + previous.moveNum,
        x = dx + previous.x,
        y = dy + previous.y
    };
}

Reconciliation At Last

Finally, we are ready to implement server reconciliation, which we perform every time we receive a state update from the server.

First, let us make sure that we keep our pending moves queue fresh by discarding inputs that we no longer need. To do this, we need to look at the ages of the server state and the predicted state. Note that the predicted state will always be at least as new as the server state. Their ages are equal when the server has caught up with the client; otherwise, there are still inputs that the server has not yet taken into consideration. In both cases, we are interested in the difference between the ages of the predicted and server states - this is precisely the number of inputs that still need to be predicted, which means that our queue should be exactly this long, and any extras should be discarded:

void OnServerStateChanged (CubeState newState) {
    serverState = newState;
    if (pendingMoves != null) {
        while (pendingMoves.Count > (predictedState.moveNum - serverState.moveNum)) {
            pendingMoves.Dequeue();
        }
        UpdatePredictedState();
    }
}

Now that we have this function in place, we just need to call it whenever we receive a new state update from the server. In particular, since state updates are sent from the server through a SyncVar, we need a way to determine when the value of the state variable changes. Fortunately, the HLAPI provides exactly this through SyncVar hook functions. We just need to change our variable declaration to this:

[SyncVar(hook="OnServerStateChanged")] CubeState serverState;

Just a quick note - although a hook sounds like a property setter, there is a subtle but important difference: hooks are not called when SyncVars are given their initial values, only on succeeding updates.

Rendering The Predicted State

One last step! On the local player's client, let us make sure that we render the correct version of the state. Using isLocalPlayer, we can decide whether to render the predicted state or the server state:

void SyncState () {
    CubeState stateToRender = isLocalPlayer ? predictedState : serverState;
    transform.position = stateToRender.x * Vector3.right + stateToRender.y * Vector3.up;
}

And with that, we have updated our project to use client-side prediction and server reconciliation!

Try It On Your Machine

If you haven't already, you can download the project folder for this Unity networked multiplayer demo. If you are looking for something a bit more complex, I am currently working on a few larger multiplayer demos, and if you are interested in those, grabbing a copy of the project folder would allow me to let you know once those become available.

In the final article of this series, we will be wrapping up by talking a bit about how we can specify the way our data is delivered. Also, I will try to give a few thoughts about the new Unity networking system in general.

Thank you for reading, and until next time!

