[Continuation of previous entry - Part 2 discusses uses for distributed computing.]
Although distributed computing has been applied to many processing-intensive tasks, it has seen minimal application in games thus far.
Raph Koster’s laws of online world design state: “Never put anything on the client. The client is in the hands of the enemy. Never ever ever forget this.”
This observation certainly holds true for Koster’s MMOs, whose appeal lies primarily in achievement simulation within a multiplayer environment. It also holds for any multiplayer game whose competitive integrity can be compromised by a subverted client. Because opening the door to cheating makes competitive victory and simulated achievement feel hollow, distributed computing is not a suitable replacement for game servers in these situations.
However, other types of games, both single player and multiplayer, can benefit by putting the processing power of other players’ machines to work in specific circumstances. Because distributed computing resources can be laggy and unreliable, the extra processing power must be non-essential to the game’s function, or there must be sufficient central hardware to smooth out troughs in spare processor availability. What kinds of uses fit the bill, then?
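The "non-essential extra work" constraint can be made concrete with a small sketch. The idea, under assumptions of my own (the function names and the thread-based stand-ins for network peers are hypothetical, not any real game's API): ask the network for an enhanced result, but revert to a cheap local computation whenever the distributed answer is late or missing.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_with_fallback(remote_fn, local_fn, timeout):
    """Ask the network for the enhanced result, but revert to the cheap
    local computation if the answer does not arrive within the budget."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(remote_fn)
    try:
        return future.result(timeout=timeout)
    except Exception:
        # Peer was too slow or failed entirely; the game carries on.
        return local_fn()
    finally:
        pool.shutdown(wait=False)

# Simulated peers: one too laggy to help within budget, one responsive.
def laggy_peer():
    time.sleep(0.5)
    return "detailed result"

print(run_with_fallback(laggy_peer, lambda: "simple result", timeout=0.05))
# → simple result
print(run_with_fallback(lambda: "detailed result", lambda: "simple result", timeout=1.0))
# → detailed result
```

Either way the game gets *an* answer within its frame budget; the distributed version only ever upgrades quality, never blocks it.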
Procedural content generation is an obvious candidate. Generation of complex game worlds, for example, could be sped up by breaking the work into pieces and offloading some of it to other networked PCs whose owners have opted into the system. Or, many simpler worlds could be created in parallel, with the user or the game selecting the most suitable from among several candidates.
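The "many simpler worlds in parallel" variant might look something like the following sketch, where each seed could be farmed out to a different opted-in machine and the game keeps whichever candidate a scoring heuristic likes best. Everything here is illustrative: the world representation, the heuristic, and the use of threads in place of actual networked peers are all my own assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def generate_world(seed):
    # Stand-in for an expensive procedural generator running on a peer.
    rng = random.Random(seed)
    return {"seed": seed, "landmass": rng.random(), "rivers": rng.randint(0, 9)}

def score(world):
    # Hypothetical suitability heuristic: many rivers, balanced landmass.
    return world["rivers"] - abs(world["landmass"] - 0.5) * 10

def best_candidate(seeds):
    # Each seed could go to a different opted-in machine;
    # a thread pool stands in for the network here.
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(generate_world, seeds))
    return max(candidates, key=score)

world = best_candidate(range(8))
```

Because each candidate is generated from an independent seed, a slow or vanished peer simply shrinks the candidate pool rather than stalling the game.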
Simulated worlds could also become more complex if additional processing power were available to handle the workload. It could be used to simulate activity by far-off empires or other unseen entities, allowing for intelligent, intricate development in areas where developers normally cheat by simplifying the simulation algorithms. When insufficient resources were available, the game could revert to the simple algorithms.
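That revert-to-simple behavior amounts to spending whatever spare cycles exist on the most important off-screen regions and approximating the rest. A minimal sketch, with made-up placeholder simulations and a made-up cycle-accounting scheme:

```python
def detailed_sim(region):
    # Placeholder for the full, expensive empire simulation.
    return f"{region} (fully simulated)"

def simple_sim(region):
    # Placeholder for the cheap approximation the game reverts to.
    return f"{region} (approximated)"

def simulate_offscreen(regions, spare_cycles, cost_per_region=10):
    """Spend spare distributed cycles on regions in priority order;
    everything past the budget gets the simple algorithm."""
    results = {}
    for region in regions:  # assume regions are ordered by importance
        if spare_cycles >= cost_per_region:
            results[region] = detailed_sim(region)
            spare_cycles -= cost_per_region
        else:
            results[region] = simple_sim(region)
    return results

print(simulate_offscreen(["east empire", "north tribes", "far isles"], spare_cycles=20))
```

The player never sees a hard failure when peers disappear; distant regions simply behave a little less cleverly until capacity returns.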
Distributed computing could also be harnessed to make possible new game mechanics that would otherwise overwhelm a single machine.
For example, a time-travel mechanic could allow players to change the course of history and jump between various alternate pasts and futures to see the effects of their actions as determined by the game’s simulation. Decision points could be set up in advance of the actual jumps, giving the computing network the necessary time to run the needed simulations. Gameplay mechanics could even shorten or lengthen the gaps between decisions and jumps as necessary, translating fluctuations in available computing power into fluctuations in the game resource needed to travel in time.
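The decision-point idea reduces to precomputation: once a decision is announced, the network runs one timeline per option, so the eventual jump is just a lookup. Here is a toy sketch under heavy assumptions of my own; the one-number world state, the option effects, and the simulation step are all invented for illustration, and each branch of the dictionary comprehension would in practice run on a different peer.

```python
def apply_option(state, option):
    # Toy decision effects on a one-number world state.
    effects = {"ally": 5, "invade": -3, "wait": 0}
    return state + effects[option]

def advance(state, steps):
    # Toy stand-in for running the world simulation forward in time.
    for _ in range(steps):
        state = state * 2 + 1
    return state

def presimulate_branches(state, options, steps):
    """Run one timeline per option ahead of the jump; in the real system
    each branch would be dispatched to a different peer."""
    return {opt: advance(apply_option(state, opt), steps) for opt in options}

# By jump time, every alternate future is already computed.
futures_map = presimulate_branches(10, ["ally", "invade", "wait"], steps=3)
```

Stretching or shrinking the gap between decision and jump then maps directly onto how many simulation steps the network can afford to run.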
There are quite a few other possibilities. There could be multiple dimensions or instances of “reality,” where actions in one instance ripple unpredictably into the other instances. Video reports could draw attention to micro-level, far-off, or otherwise unseen but noteworthy events. These would be simulated, rendered (likely with a camera other than the primary game camera), and compressed by a machine on the network and then uploaded to the player’s machine. I am sure that more creative minds than mine could think of even better uses.
The standardization of distributed computing technology is an important step in making distributed computing in games a reality. In most cases, having to create a new system for each title, with no platform or middleware support, would be prohibitively risky or expensive.
However, the potential upside is large, and the gaming demographic has proven its willingness to share its computing resources with applications such as Folding@home and SETI@home, on consoles as well as on PCs. With proper incentives for players, developers could no doubt command a significant pool of resources in support of truly innovative game features.