In a previous article I argued that uncertainty is of central importance for strategy games, because without it there can be no decisions. In the broader picture though, uncertainty is necessary for any kind of gameplay to be interesting. Once there is absolute certainty and nothing more to discover, we usually just stop playing. Funnily enough, while we’re always striving for certainty in real life, it’s the death of any game. Therefore it’s imperative for game designers, critics and analysts to understand the mechanical means by which uncertainty is created and maintained, but also where it can go wrong. The following article, based among other sources on Greg Costikyan’s “Uncertainty in Games”, aims to shed some light on the topic.
The reason why uncertainty is so important can be understood by taking a closer look at the nature of any gameplay process. In essence it is all about learning, i.e. gaining competence. In this context Daniel Cook introduced the very useful concept of gameplay “arcs” and “loops”. A gameplay arc consists of four elements: The player holds a mental model of the game as a collection of knowledge (you could call this his “skill”), he then takes actions while playing, which are evaluated by the game’s rules, resulting in feedback. This sequence already resembles scientific theory construction through hypothesis, experiment and observation. The similarities become even more evident when the arc is closed and becomes a gameplay loop: After receiving feedback, the mental model is adjusted accordingly and thus refined over many iterations - just like a scientific theory becomes more and more robust over time. Therefore, as Carlo Fabricatore among others has concluded, without learning there can be no gameplay.
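Purely as an illustration (my own sketch, not Cook’s), the loop can be modeled as a number-guessing learner: the “mental model” is the interval the answer could lie in, each guess is an action, the comparison against the hidden number is the rules evaluation and feedback, and narrowing the interval is the learning step. Once the interval collapses, further iterations would change nothing - the loop is exhausted.

```python
def play(hidden, lo=0, hi=100):
    # mental model: the interval the hidden number could lie in
    model = (lo, hi)
    steps = 0
    while model[0] < model[1]:
        guess = (model[0] + model[1]) // 2   # action derived from the model
        if guess < hidden:                   # rules evaluate, feedback arrives
            model = (guess + 1, model[1])    # model update: adjust the interval
        elif guess > hidden:
            model = (model[0], guess - 1)
        else:
            model = (guess, guess)           # model fully optimized
        steps += 1
    return model[0], steps

print(play(42))  # the learner converges on 42 after a handful of loops
```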
While the arc structure for example fits linear puzzles or narrative games, loops are usually found in more skill-based and strategic games. In both cases though, the player is actively learning by interacting with the game system. Once the mental model is fully optimized, the end of any arc is already known beforehand and going through it again becomes a rather tedious affair. Similarly, if additional iterations of a gameplay loop do not result in further adjusting the model, they’re gameplay-wise completely redundant and uninteresting. Uncertainty is therefore of great importance for any kind of game: If we already know about all the possibilities a sandbox game (or “toy”) has to offer, exploring it doesn’t yield any gameplay value anymore. If we already know the solution to a puzzle, it’s not exciting to solve it. If we have perfectly mastered a motor-skill discipline and reach the maximum number of points every time, we quickly get bored. And if we have figured out a complex system of decision-making so thoroughly that we can find the best action in any situation without a problem, there are effectively no decisions to make anymore. With the importance of the topic established, the following sections will describe some of the possible sources of uncertainty.
A basic principle in trying to keep players from absolute certainty lies in creating a sufficiently complex system. If the space of possible game states is big enough, players can’t possibly incorporate all of them in their thinking process in every situation. An important requirement for this to work properly is of course a solid internal balancing between all the available actions. There must not be large parts of the possibility space that are effectively useless. A classic example is Chess. The heavily branching decision tree makes it practically unsolvable for a human mind. In combination with the usually highly limited time banks in tournament settings, this forces players to develop and employ heuristics, more or less rough estimates of what could make a relatively good decision in any given situation. In other words, players are adding to their mental model of the game over time. Of course, Chess still has its problems in that regard: Especially the more restricted and predictable early game has been partially solved. This forces seriously competitive players to memorize a huge number of openings, essentially taking away some of the importance of actual “on-the-fly” decision-making. And even later parts of the game are very prone to requiring rote calculations and inducing “analysis paralysis”. So, while it is an important factor, complexity alone is generally not enough.
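A rough back-of-the-envelope calculation in the spirit of Shannon’s classic estimate illustrates the scale; the branching factor of roughly 35 legal moves and a game length of 80 plies are common approximations, not exact figures.

```python
branching_factor = 35   # approximate legal moves per chess position
plies = 80              # approximate game length in half-moves

# the resulting game tree dwarfs any conceivable enumeration:
# a number with well over 120 digits
game_tree_size = branching_factor ** plies
print(game_tree_size > 10 ** 120)  # True
```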
In contrast to strategy games, puzzles are meant to be solved. Nevertheless their typical form of uncertainty generation relies on complexity, too. The most interesting puzzles are those whose difficulty lies right on the edge of what the player is able to handle. Since this makes the optimal amount of complexity largely dependent on the individual player, many prominent examples like Portal or Braid make heavy use of the principle of incremental challenge. Initially the player faces relatively easy tasks that only demand manipulation of a few components and are usually manageable without too many problems. In a sense those are extended tutorials aiming to get players with less puzzle experience “up to speed” without them having to resort to frustrating and unsatisfying “trial and error” methods. All the while, they allow more experienced players to breeze through before entering the later levels where all the previously introduced mechanisms have to be combined, in the end resulting in an optimal challenge (“flow”) for the maximum number of players.
Finally, even sandbox games are often based on complexity. If the possible interactions between all the existing mechanisms are rather obvious and quickly exhausted, not too many players will be motivated to invest a lot of time into the “toy”. Successful examples such as Minecraft on the other hand rely on an enormous breadth of possibilities. Not only are players able to move freely through giant worlds, basically allowed to build whatever structure they want, but the relations between all the individual components are also numerous and diverse. The interplay of blocks, tools, items, NPCs, animals, weapons, monsters and other elements makes for a network of enormous possibilities that begs for long-term experimentation.
In addition Minecraft also employs randomness to generate a large variety of game states. The game world is randomly generated and even expands further and further dynamically as the player explores it. That way he can never be quite sure what to expect behind a forest, in a desert or on a mountain top. This so-called input randomness, happening before the player acts, is an effective way of consistently presenting the player with new kinds of situations and challenges. These days it is experiencing a revival in the course of the “roguelike renaissance” that made randomly generated content popular again, forcing players to explore edges of the system they might never see with a static setup. Actually though, this source of uncertainty is much older than video games altogether: Drawing from a randomly shuffled deck of cards, while much “closer” to the current game state than pre-game map generation, can just as well be interpreted as input randomness.
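A minimal sketch of such lazy, seed-driven world generation (the function and names are hypothetical, not Minecraft’s actual algorithm): each coordinate’s content is derived deterministically from the world seed, so the world can grow indefinitely as it’s explored while remaining reproducible.

```python
import hashlib

BIOMES = ["forest", "desert", "mountain", "plains"]

def biome_at(seed, x, z):
    # derive a deterministic value from seed and coordinates, so any
    # chunk can be generated on demand the moment the player reaches it
    h = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return BIOMES[h[0] % len(BIOMES)]
```

The same seed always yields the same world, while a new seed produces a fresh set of surprises.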
In strategy games this kind of random generation can also serve to combat solvability by making partial solutions much less useful. Again, this can be understood using Chess as an example. When the initial (symmetric) setup of the pieces is determined randomly (as is the case in the variant Chess960), memorizing opening books becomes a much less valid method of increasing one’s skill. Instead the importance of the players’ systemic understanding, i.e. the accuracy of the developed mental models and heuristics, is emphasized. With this in mind, the designers behind the recent Kickstarter success Prismata made another good case for why this form of randomness is particularly useful for competitive strategy games.
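The Chess960 constraints - bishops on opposite colors, king somewhere between the rooks - can be checked mechanically; enumerating all distinct back-rank arrangements confirms the 960 legal setups the variant is named after.

```python
from itertools import permutations

def is_valid_960(rank):
    # rank is a sequence of the eight piece letters on the back row
    bishops = [i for i, p in enumerate(rank) if p == "B"]
    rooks = [i for i, p in enumerate(rank) if p == "R"]
    king = rank.index("K")
    opposite_colors = bishops[0] % 2 != bishops[1] % 2
    king_between_rooks = rooks[0] < king < rooks[1]
    return opposite_colors and king_between_rooks

# all distinct arrangements of the pieces R, N, B, Q, K, B, N, R
positions = {p for p in permutations("RNBQKBNR") if is_valid_960(p)}
print(len(positions))  # 960
```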
Even if players already know what the best action is, they can still be challenged by actually executing their plan. In turn-based games this is usually not a factor. Merely moving the pieces on the chessboard is of course not a reason for the game’s longevity. Video games especially tend to rely on real-time and direct control, though. In doing so they introduce a dexterity component into the system. A game like Super Mario Bros. creates its uncertainty, and therefore its interestingness, almost exclusively by how hard it is to master the controls. Conceptually it’s in most cases pretty clear what has to be done to complete a level. However it’s not obvious how exactly this can be accomplished: At which exact moment do the buttons have to be pressed? An even clearer example of this idea might be rhythm games such as Guitar Hero.
Interestingly those kinds of games can also be interpreted as puzzles: There is an optimal execution (solution) the player aims for. In rhythm games this results in a maximum number of points, platformers on the other hand have developed the subculture of “speedrunning”. In contrast to classic (mental) puzzles that can only “recover” from being solved if the player actually forgets the solution, these “execution puzzles” tend to maintain their uncertainty for longer periods of time. Usually there always remains some variance as to how perfect the motoric execution can really be, which in turn supports the contest qualities of these systems. Most players simply won’t be able to perform perfectly every single time (and if they were, the validity of the contest as a measurement of skill would have to be questioned). For example, in a shooter game not every single attempt to headshot an enemy will actually hit. And even a professional League of Legends player will, from time to time, miss a “skillshot”.
Another, somewhat more immediate, way of inserting uncertainty into a game system lies in explicitly hiding information. If it is not even possible for a player to know the full game state, there will usually not be complete certainty as to what the best possible action might be in any given case. A classic example is the “fog of war” in real-time strategy games that hides the opponent’s actions but can be combated by deliberate scouting. An even simpler example is Poker. The hands of the opponents are unknown and only probabilities for all the possible results of a round can be determined by any individual player. Optimal play is still possible in this case by performing the actions promising the highest chance of success, but even then there’s still the danger of losing if a more unlikely random event occurs after all.
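Poker probabilities of this kind are directly computable. For instance, the chance of completing a flush draw after the flop - nine outs among 47 unseen cards, with two community cards still to come (standard figures, used here purely for illustration):

```python
from math import comb

outs, unseen, to_come = 9, 47, 2

# probability that at least one of the two remaining cards is an out:
# 1 minus the probability that both cards miss
p_hit = 1 - comb(unseen - outs, to_come) / comb(unseen, to_come)
print(round(p_hit, 3))  # about 0.35
```

Knowing such odds lets a player choose the action with the highest chance of success, yet any single hand can still be lost to the remaining 65%.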
Collectible card games such as Hearthstone carry this concept to extremes since not only are the cards the opponent is holding hidden but also his complete deck. While the separation of possible decks by hero class (in the form of class-specific cards) provides a little remedy in this case, many tournament organizers still think of this as overdone hidden information, basically making it impossible for the game to reliably test the players’ skill level. Therefore special tournament rules are sometimes applied, as was the case in the “Prismata Cup”, forcing players to reveal their decklists before playing. This not only leads to increased, yet far from complete, predictability, but can also result in some strategically very interesting matches coming down to the last cards in both players’ decks.
Even if Hearthstone were to be played with completely open (and deterministic) cards though, it wouldn’t be completely obvious what would happen in every situation. In the end there’s always the uncertainty factor of the opponent’s mind in multiplayer games, although it can be predicted quite reliably. After all it must be assumed that the adversary will, in any given situation, try to do what’s best for him. However, his exact evaluation of the game state is usually unknown and therefore could simply be seen as another form of hidden information. Depending on the accuracy of the involved players’ mental models, what they believe to be the best action (for themselves and also their opponent) might differ dramatically. Last but not least creativity, coming up with bold and surprising lines of play, takes on an important role in this context, too.
On the other hand, games in which the opponent’s action evaluation can not or only barely be derived from the current game state itself tend to come down to mere luck. That’s for example regularly the case when the mechanism of simultaneous actions is involved. A classic example is Rock, Paper, Scissors, whose counter mechanism is used directly or in adapted form in many other games such as the card battler Yomi. Apart from the myth of “mind reading”, relying on this mechanism alone results in a game of, more or less, pure guessing. In this case there’s so much uncertainty involved in the game that it can barely hold any interest for the human mind. If there’s no order to be found inside the chaos to begin with, there can be no learning and therefore no actual gameplay. At this point it’s worth noting that some specific methods of creating uncertainty that were established over time actually appear quite suspect upon closer inspection. In many cases they do not only hurt the overall system, but also the player himself by making very inefficient use of his time. In this regard some warning notices can be found below.
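The guessing character can be made concrete with the payoff matrix: against an opponent who picks uniformly at random, every pure strategy has an expected payoff of exactly zero, so no amount of observation improves the player’s results. A minimal sketch:

```python
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    # +1 win, 0 draw, -1 loss, from a's perspective
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

# expected value of each pure strategy against a uniform random opponent
evs = {a: sum(payoff(a, b) for b in MOVES) / 3 for a in MOVES}
print(evs)  # every strategy comes out at 0.0: nothing to learn
```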
Many games, at first sight, seem much more complex and interesting than they actually are. This pseudo-complexity can be created by several means, one of which is the concept of “quantity before quality” or “mass content”. A system bloated with loads of different components can seem quite impressive initially. Magic: The Gathering with its more than ten thousand cards or League of Legends and its more than one hundred playable characters are successful representatives of this philosophy. This concept tends to be primarily fueled by an according business model underlying the game though, and not by the intention of designing a high-quality game. Ultimately these systems crumble under their own weight: Only a fraction of cards and characters are actually valid options. The rest is either buried in oblivion or alternates between “too strong” and “useless” over the course of seemingly infinite patch orgies. Huge parts of the game become meaningless noise and the barrier to entry is unnecessarily high. Camouflaged as the “metagame” this problem is then euphemistically regarded as added depth. Ultimately though, players merely have to internalize the new list of “overpowered” gameplay elements after every new balancing patch. In the end these systems simply can’t be balanced because they hold more content than they can bear mechanically.
Another way of making a game seem more interesting than it is involves lots of busywork. Forcing the player to perform a few chores on the side while fulfilling a completely trivial task can convey the impression of actually doing something worthwhile. It induces the comforting feeling of putting in effort to accomplish something. And indeed a lot of time in many action adventures is spent running down corridors or streets. The trick lies in presenting trivial tasks as difficult. Other examples include grinding that can dramatically prolong games (Final Fantasy), spectacular animations accompanying trivial gameplay challenges (God of War) and mere mathematical complexity obfuscating rather shallow mechanisms (World of Warcraft).
The above-mentioned randomness can, in spite of its great potential, have severe negative consequences when used the wrong way. Even input randomness is problematic if it reduces the competitive consistency of a game. In single-player games this can be observed when the possibility range of the random generation is too loose, i.e. it goes so far that multiple playthroughs are not comparable anymore. In Spelunky, for example, this leads to serious players restarting over and over again until they finally (and randomly) get a favorable setup. A similar phenomenon exists in the Civilization games. In multiplayer games such as Hearthstone even the initial luck of the draw can decide a whole match. In any case, this leads to a very inefficient use of the players’ time. It should always be a design goal to keep the input randomness as varied as possible while at the same time maintaining a reasonable level of fairness and comparability.
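One common countermeasure - Spelunky’s later “Daily Challenge” mode works along these lines - is to derive the level seed from the date, so that all players compete on the same random layout for that day; the function below is a hypothetical sketch, not the game’s actual implementation.

```python
import hashlib
from datetime import date

def daily_seed(day: date) -> int:
    # every player hashing the same date gets the same seed,
    # making that day's runs directly comparable
    digest = hashlib.sha256(day.isoformat().encode()).digest()
    return int.from_bytes(digest[:8], "big")
```

The world stays varied from day to day, but within a day the playing field is level.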
Output randomness on the other hand, happening after the player decided to take an action, will in general lead to unfair results. It might “even out” over many matches but the immediate result of a round will still be severely affected by it, which results in compromised or at least problematic feedback. This form of randomness directly interferes with the gameplay loop mentioned previously: The feedback is distorted since the system behaves inconsistently, thus the learning process (i.e. the gameplay) suffers.
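A small, purely hypothetical example of this distortion (the numbers are invented for illustration): suppose attack A deals 50 damage with 80% probability while attack B deals a guaranteed 35.

```python
p_a, dmg_a = 0.8, 50   # attack A: 80% chance of 50 damage
dmg_b = 35             # attack B: guaranteed 35 damage

ev_a = p_a * dmg_a     # 40.0: A is the better decision on average
# ...yet one play in five, the "correct" choice returns 0 damage,
# and the immediate feedback punishes a sound decision
p_misleading_feedback = 1 - p_a
```

Over many rounds A wins out, but any single round can teach the player exactly the wrong lesson.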
Unfair Hidden Information
Hidden information can come in unfair forms as well. A very clear example was given by Jonathan Blow in his “Design Reboot” talk. He compared the top-down shooter Smash TV to its spiritual successor Total Carnage. The former displays a room without showing the bottom entrance. The player is therefore never quite sure of whether there are enemies coming in from that side. In this case hidden information is used reasonably, to increase excitement and maintain the interestingness of the system. Without it the optimal behavior in most situations would be rather obvious. The situation is never unfair because the player always has enough time to respond once the information is revealed. Total Carnage on the other hand uses hidden information in almost sadistic ways. Mines are placed behind trees, completely invisible to the player, so that he has to fail on his first try without standing a chance. Since it’s imperative to look for items behind every tree in the game, it all comes down to memorizing the positions of the mines in the end. Simply letting the player fail without any warning is frustratingly bad design. However it still exists even in some modern titles, for example in the form of enemies attacking out of nowhere in action games.
A related idea that’s even more common in today’s games is the philosophy of “wiki design”. In its case the player’s lack of knowledge is not punished quite as harshly as with the mines in Total Carnage, but in a more subtle manner. In games like The Binding of Isaac and many roguelikes the player is at a severe disadvantage, simply because he isn’t told all the rules. If the game doesn’t even explain what the different elements do, then there’s of course also uncertainty as to what the player should do. As an analogy, think of a match of Chess wherein one player loses because he didn’t even know how the queen could move. This method of hiding rules is after all a pretty absurd way of going about the issue. And if it’s really necessary for a game to not break down immediately, how interesting can the system actually be? If it however isn’t necessary, you might as well not prevent players from realizing how interesting your game is in the first place. Isaac’s designer Edmund McMillen recently had to concede as much himself and added (if only vague) descriptions of all the items to the remake of his game. Most players circumvented the design weakness anyway by constantly consulting one of the many fan-made wikis.
Finally, a few words regarding narrative decisions in games, as they exist for example in FTL or Out There. Their immediate problem is the obscurity they inherently introduce into the game. Game and story are two fundamentally different bearers of information. If a story decision affects the underlying game state, this can never happen in a clear and coherent way. Again, let’s look at a Chess variant to illustrate the problem. Imagine there’s a story choice every turn: “The queen doesn’t feel too good today, should she go for a walk outside?” Depending on the player’s pick, the queen will more or less randomly move around the board. This example seems absurd, but that’s essentially what happens in the above-mentioned games or classic “game books” such as Lone Wolf. A possible solution to this conflict lies in minimizing the effects the story choices have on the actual game (see Bioshock). In this case however, it’s questionable as to why the player should even care about these choices in the first place. After all, the game teaches him that they’re irrelevant. The other possible route the design could take involves eliminating any form of strategy, resource management and gameplay altogether. One example of this approach is The Walking Dead by Telltale Games, who could actually have gone way further. In many cases the characters play dumb, causing huge consistency problems in the plot, just to leave a bit of trivial puzzle work to the player so that it can be called an “adventure game” after all.
But even on a lower level, even when the gameplay has already been mostly eliminated, there’s still quite some potential for problems in story decisions: Which point of view is the player actually supposed to take? Does he decide as an author trying to create a consistent and exciting story (“I pick X, because it fits the previous events and the general attitude of this character!”)? Or should one simply act based on personal preference and for example save the attractive lady or guy? Of course these conflicts can be reduced further, for example by not even putting an actual protagonist character in front of the player that could play a role in terms of story consistency, but instead a “blank sheet” the player can then fill with his own personality. However there are still problems with this approach in single-player games. Only a very limited number of decisions and corresponding reactions can actually be pre-planned. The combinatorial state explosion in widely branching decision trees especially will necessarily call for at least somewhat preset characters and assumptions. As long as the transition to truly free collaborative storytelling (as it exists in some pen-and-paper role-playing games) is not made, there will always be potential problems.
In conclusion, taking a closer look at uncertainty in games reveals one thing: It is as complex a task to smoothly create and maintain as it is important for any kind of gameplay. Most modern videogames especially shy away from focusing on systemic complexity, because while it might potentially account for the most intellectual value, it is also much less easily perceivable than other factors. It’s comparatively easy to make a game seem interesting at the outset. Audiovisual spectacle and mass content can be advertised with impressive trailers and large numbers, and are thus well-tested means of generating attention. Another cheap trick is drowning a system in chaos and unfair situations so that there’s indeed always uncertainty involved when playing, but at the same time not much actual gameplay value left. Creating a deep system whose loops and arcs stay interesting long-term and that stands the test of time without multiple content expansions a year requires a fundamentally different approach, though. It’s a challenge only a few modern designers have taken on so far.
However, from a historical perspective this is completely understandable. The academization of game design is still in its infancy, especially in comparison to the immense underlying industry. True specialists of the craft have only existed for a few decades, and regular scientific and literary publications on the topic have only appeared for about ten years. A lot of progress is being made on a theoretical level, but it has made its way into everyday design practice only slowly. Again, that’s due to the rigid conventions of an industry that financially grew faster than it could artistically handle. High-quality art demands a critical examination of the corresponding craft, and especially full-time artists will begin asking serious and uncomfortable questions about the nature of what they’re actually doing. Slowly but surely we’re reaching this point in game design. That’s why we can expect to see better games in the next ten years than any human being has ever played before. And that’s really something to look forward to!