Cyberspace in the 21st Century: Part Three Foundations

Cyberspace is a medium in which alternate realities or virtual universes can be created. Therefore, if we're trying to create cyberspace then that means we need to build a system that can model a virtual universe -- typically, one similar to our own. For such a grand project it is important that its foundations are sound. In this installment of his ongoing series, Crosbie Fitch goes over many of the key technical issues (as opposed to the philosophical ones) that must be addressed before we can embark upon the design or implementation of this kind of system.

Crosbie Fitch, Blogger

December 1, 2000

Cyberspace: the next online revolution - massive multiplayer online games supporting billions of players, simultaneously.

You've seen the Web. Almost the entire planet is wired up. Everyone uses e-mail to talk to each other now, and even the telephone has changed. We're going mobile, even wearable, and other services are arriving: SMS, WAP, I-Mode, and so on. Millions of people even have their own web page. We are all connected as equals in this world and we work and play together in harmony. It's one big global village and we're going to find ever more sophisticated ways of telling each other stories and having adventures together.

If you thought the five-digit player counts of today's online games were large scale, you're just seeing a MUD on steroids. Wait until you see an online world as large as the Web, unfettered by the bottlenecks of primitive client/server technology.

Everyone is gradually waking up to the fact that for an application to be of this global scale, it must be a distributed system. Consider file and resource sharing facilities such as Napster and MojoNation, tools such as NetZ, and games in development such as Atriarch. People are beginning to see that the future is distributed. Not everyone believes that global scale applications stopped with E-mail and the Web.

So, if you want to move on from small scale online games such as Ultima Online, Everquest and Asheron’s Call, toward global scale virtual environments such as those depicted in movies such as The Matrix, Tron, and Dark City, then you’re going to have to wrap your brain around the distributed systems approach.

In a nutshell? Well, instead of sharing files as with Napster, you're sharing the latest news concerning the state of the game model. Some of this news is created by each player when they make changes to their environment, but mostly each player subscribes to the news concerning that part of the game world they're currently interested in. Each player's computer processes the portion of the game model it holds locally. It may seem difficult to believe that this can work for real-time information just as well as it does for MP3 files, but then that's the old 'paradigm shift' raising its head again.

These are the continuing propositions of Crosbie Fitch and his distributed systems approach to online games.


Establishing Some Ground Rules
Now, where were we? Oh yes, something about cyberspace. What the hell is it, again? And how do we create it?

Cyberspace is a medium in which alternate realities or virtual universes can be created. Therefore, if we're trying to create cyberspace then that means we need to build a system that can model a virtual universe - one typically similar to our own (otherwise we'd all go mad eh?).

For such a grand project it is important that its foundations are sound. Whilst some might be inclined to create a million-player Doom, with players blinking in and out of existence as they see fit, this tends to give rise to a rather intermittent and frustrating experience (the monsters simply hate it). Of course, teleportation may be a valid phenomenon in some alternate realities, but it is better to ensure that such a feature is deliberate and not simply a consequence of technical compromise or expediency.

In this installment I'll go over many of the key technical issues (as opposed to the philosophical ones) that we must address before we can embark upon the design or implementation of a system.

Bedrock
The lowest foundation we have is the hardware. This is the operating platform on which we will be building our system. Discussing this may be rather unexciting, but like good chemists we have to define our equipment just to make absolutely sure we all know what we're dealing with.

We have an unknown and effectively unlimited number of players, every one of whom has a computer connected to a common network.

Each computer has unknown and varying amounts of CPU, RAM, and persistent storage. However, each machine is considered to be highly reliable: when local corruption does occur, it is likely to be detected instantly. As far as the computers go, we have to cope with anything from 4MHz PC/XTs connected via sneakernet (human-assisted floppy disk interchange) to quad Pentium super-servers connected via optic fiber. In other words, we don't know Jack.

The network connection may only be available during play, and even when it is available, it has unknown latency, bandwidth, and reliability. The minimum quality of service we can assume is one where only the integrity of individual packets can be ascertained. Guarantees of delivery, ordering, and timeliness are rarely available, and where they are offered as options, they tend to come at a cost in effective latency and/or bandwidth.
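To make that minimum quality of service concrete, here's a little sketch (in Python, purely for illustration - the function names and framing are my own invention, not a prescribed protocol) of the kind of transport we're assuming: raw UDP datagrams carrying nothing more than a per-packet checksum, so corruption can at least be detected and the packet discarded. Delivery, ordering, and timeliness remain entirely unguaranteed.

    import socket
    import struct
    import zlib
    from typing import Optional

    def send_packet(sock: socket.socket, addr, payload: bytes) -> None:
        # Prefix the payload with a CRC32 so the receiver can verify that this
        # individual packet arrived intact - the only guarantee we assume.
        checksum = zlib.crc32(payload) & 0xFFFFFFFF
        sock.sendto(struct.pack("!I", checksum) + payload, addr)

    def recv_packet(sock: socket.socket) -> Optional[bytes]:
        # Returns the payload if it arrived intact, otherwise None. Nothing here
        # guarantees that a packet arrives at all, nor in what order.
        data, _addr = sock.recvfrom(65535)
        if len(data) < 4:
            return None
        (checksum,) = struct.unpack("!I", data[:4])
        payload = data[4:]
        if zlib.crc32(payload) & 0xFFFFFFFF != checksum:
            return None   # corrupt: detected and dropped, never retransmitted
        return payload

Everything we build has to sit on top of something no better than this.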

We'll place some limits on the scalability we're aiming for - just to make our lives a little easier. We'll say that the number of participating computers on the network can vary from two to a billion. Any more we'll leave to version 2. Note that a participating computer is not necessarily servicing a player all the time.

Thus everyone who might want to play at some time joins the big cyberspace party. They contribute varying amounts of resources (processing, storage, communication) according to what they can afford, and in turn receive a product enabled by the combination of these resources, which is an interactive entertainment experience of a shared virtual world.

Given this rather vague and hazy platform, we just have to create the product and ensure that it's all fair and everyone's happy. So without further ado, let's try and figure out how to build it. What is the nature of cyberspace?

The Isolation of Cyberspace
A virtual universe must be self-contained, consistent, and continuous. Any one of its denizens (if they were intelligent) must not have any reason to suspect that there is a fault in their universe, or that any other universes exist (especially ours). A virtual world should operate independently from our own, and so needs no knowledge of players or their geography. Indeed, players never exist in the virtual world. Player Fred does not meet player Jim in cyberspace. Rather, Fred's avatar meets Jim's avatar, and neither avatar is aware of Fred or Jim. If Fred and Jim have to stop playing, the virtual world persists, and the two avatars continue about their business - perhaps of a slightly more mundane nature than when Fred and Jim were influencing them.

Although players may be as gods to avatars, their hands shouldn't be able to reach into the virtual world and mess with it. All play occurs via avatar. This rule isn't cast in stone, but it is useful in understanding the different worlds we're dealing with, i.e. clarifying the boundaries.

Of course, the degree of influence a player has over their avatar, and the fidelity of the feedback they receive from it, could increase to such an extent that the player believes they are as one with their avatar. This is the point at which 'immersion' is reached. I suspect immersion is something you'll see appearing on LAN based systems (particularly those of theme parks) well before it gets anywhere near the Internet. It's not that you couldn't have it on a high latency network, but it would be like working in a deep-sea diving outfit. Hmmn, full immersion of a different sort…

Creators are the real gods of course (as opposed to players). Even so, while creation may be performed directly by these 'creators', once the universe has started going, any direct changes should be made so that they are unnoticeable. For example, if you change the color of a building then you should change the memories of every denizen that thought it was pink just a moment ago. Check out the movies The Truman Show and Dark City for examples of this rule in practice.

One more thing that separates the real world from the virtual is time. Time in the virtual world is to all intents and purposes independent of time in the real world, though for some strange reason (playability perhaps) it is likely to tick over at a similar rate. But consider that if everyone stopped playing, there'd be absolutely no problem in stopping the model of the virtual world (if it could be done). This is because the virtual world is modeled independently to ours, and so its denizens would not notice any temporal discontinuity.

Denizens must not have any reason to suspect that there is a fault in their universe

The model of the virtual world, while likely to be similar to the real world in both its content and its rate of change, is actually independent of it. Its isolation makes it convenient for us to package and distribute it across the Internet. Perhaps you could think of it as a large file that contains a 2D bitmap of John Conway's Game of Life. The thing is, unlike most files, it's dynamic, changing according to rules, and each computer it's distributed to is only interested in a portion of it, which they model and view.

The Inevitable Avatar
Yes, it might be fairly convenient for cyberspace to be a self-contained model, but how do you interact with it? After all, aren't we discussing large scale interactive entertainment here?

Ok, let's go over it then. We are trying to allow each of an unlimited number of players to experience one of a selection of alternate realities via one of its denizens, a notional avatar.
The avatar is the corporeal presence in the alternate reality that is to act as an agent for the real player. The nature of the player's interaction will thus be by proxy, i.e. via the avatar. The player influences the avatar's behavior by suggesting its next actions, and the feedback consists of the avatar's senses transmitted back to the player. The player's experience will thus be interactive, albeit limited to the range of senses practicably conveyable via a typical PC or console.

The player's influence can vary from a high level goal such as "Rescue the princess" to puppet like suggestions such as "A good right hook now Rocky - no, left jab!" The latter, in the face of latency, is probably better conveyed by speech input rather than joystick.

This brings me to something I'll probably cover again later. The immediacy of the influence the player is permitted must be considered carefully, and primarily in light of the prevailing network conditions. You can't let a player's avatar get into the boxing ring with the avatar of a player who's connected on the other side of the planet - unless, of course, the players are prepared to relinquish the immediacy of their influence. One or both of them will have to accept the position of coach at the ringside, providing encouragement and strategic hints, rather than being able to issue the punches by joystick. The trick is to make the game flexible - scalable, in other words, to the prevailing conditions. Of course, should both players be on the same LAN there'd be no problem. So the degree of interaction itself has to be scalable.

However, in order for the player to have flexible amounts of influence over their avatar, the avatar will have to be able to act autonomously, even if it is usually servile. Of course, how sophisticated its behavior is when the player's influence is reduced or absent will depend upon the sophistication of the AI code running the avatar. Ideally the avatar becomes imprinted with the player's personality and objectives, and so behaves in a consistent manner irrespective of whether the player is 'active'. Such super AI may be a while coming, but in the interim we'll have to provide a means for the games developer and player to program their avatar's 'offline' behavior.

An alternative approach is to have avatars with well-defined personalities that the player is obliged to role-play. This allows multiple players to take turns controlling a particular avatar - a very useful facility in virtual worlds with prominent characters. In any case, no player is likely to want to remain active for more than a few hours at a time. I can imagine a queue of people having to audition for, even paying good money for, the opportunity to play Captain Kirk in 'Trek World'.

Yes, this is all very well, but I haven't explained how interaction is achieved in technical terms have I? Ok, but a bit more explanation is required first.

Influence Rather Than Control
I guess we're all familiar with the idea of a 'model' forming the scaffold upon which a game is built. You know, that thingamajig that's comprised of 'state' and 'rules'? The model state is a bundle of variable data held in memory that represents the current state of the model, and thus game. The model rules consist of descriptions of behavior (sequences of actions) and the conditions or events (arising from the state) that cause them.

Sound familiar? Well, large-scale virtual worlds are just big games really. No mystery there, it's just that the model is bigger - vast, colossal, gargantuan. The tricky bit is distributing this huge leviathan of a game model between all players and, moreover, doing it efficiently and with respect to limited resources.

Yeah, yeah, but what about user input? I haven't forgotten it again have I? Ah, well, that's an extra-universal phenomenon you see. The denizens of the virtual world can't see the hand of god, so if we players are going to have any influence it's going to have to be a tad clandestine. But I already said that didn't I?

Well, user input is best achieved by hacking the system, i.e. tinkering with the model. In technical terms, we decouple the user's control from the game model. We don't issue 'user input events', or even modify state, but make phantom method calls. What I mean by this is that we mess with the mind of our avatar, but in a way so that it continues to think that its actions were of its own volition. No, I don't want to start ascribing intelligence to the avatar. I just want to establish the metaphysical mechanism by which the player obtains influence in the virtual world. We insert behavior into that part of the model of the virtual world that concerns itself with the behavior of the avatar.
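To make the mechanism a little more concrete, here's a minimal sketch in Python (the class, its methods, and world.perform are all my own invention, not a prescribed API). Player input never writes model state directly; it is translated into a suggested intention which the avatar's behavior code adopts as if it were its own idea, and when no suggestion is pending the avatar simply falls back on its autonomous behavior - which is also what keeps it ticking over when the player wanders off.

    class Avatar:
        def __init__(self, name):
            self.name = name
            self.suggestions = []        # influence channeled in from the player

        def suggest(self, intention):
            # Called on player input: a 'phantom' nudge to the avatar's mind,
            # not a direct write to the world's state.
            self.suggestions.append(intention)

        def autonomous_intention(self):
            # Whatever the avatar would do anyway while the player is idle or absent.
            return "carry on with current routine"

        def think(self):
            # The avatar always believes the chosen intention is its own.
            if self.suggestions:
                return self.suggestions.pop(0)
            return self.autonomous_intention()

        def act(self, world):
            # The model's rules then carry out the behavior like any other denizen's.
            world.perform(self, self.think())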

Again the movies have been there before. This mechanism of god-like players interfering in a virtual world is in line with how most movies have depicted possession. A person appears fairly normal and no puppet strings are visible, but their behavior arises from the intentions of someone or something else. And the reason no one in the movies can conclusively prove that a person is possessed is because no one can detect the mechanism by which the mind is being influenced. The unknowability of the mind is an ideal cover for the channeling of the player's influence into the virtual world.

The alternative to an avatar is a golem or a less substantial projection of the player - right down to the level of a poltergeist. In these cases the player's control is either noticeable in its absence or supernatural in appearance (and disappearance).

Finally, the key factor favoring the choice of 'avatar' as player presence is that its capability for autonomy is ideal in the presence of network problems and intermittent players.

Describing Perception
So back to the problem of coping with a huge game model. This is a live description of the virtual universe that must be communicated between all players' computers such that the actions of all players (via influence over their avatars) and their ramifications are available to each other. This description will consist of a compact collection of symbols into which can be encoded sufficient information that each participating computer will be able to reconstruct rich sensory experiences for its attendant players, and in the opposite direction, encode the player's influence.

Thus we need a means of describing an avatar's visual, auditory and other senses to the player, and to describe the appearance, dynamic and cybernetic behavior of every object in the virtual universe. Each player's computer will interpret this description to present a video image of the avatar's vision, a stereo sound image of the avatar's hearing, and any other senses or information as appropriate. It will also share in the workload of modeling the virtual world in accordance with the descriptions of its behavior, and communicate its modeling with every other computer.

As I mentioned earlier, influence is merely a translation of user input into equivalent behavioral suggestions, so any encoding is already provided for.

So, in more pragmatic terms, what does this all mean? How should we encode the description of the model? Well, putting it bluntly, we need a nice programming language.

Cyberspace Modeling Language
You must be joking? Not another language! What about VRML? Or others? There must be hundreds of candidates to be considered. I'm not specifying a system here; I'll let you choose the language, but I will make some recommendations of the features it should have.

  • General Purpose: We don't necessarily know that we'll be dealing with a single type of application, so the language should have a general purpose basis. However, the ability to cope with the needs of 3D applications is likely to be indispensable.

  • Object Oriented: Object orientation seems to be a pretty useful and popular way of doing things. It'll probably also make a good match with the notions of discrete 3D objects and levels of detail (derived classes being more detailed and less abstract versions of their bases).

  • Simple Data Structures: Given we're expecting to communicate data quickly and across the network, I'd suggest that simple data structures are de rigueur. We should be able to keep data marshalling fairly efficient if we keep objects as the most sophisticated data structures, and deprecate complex built-in structures such as linked lists, pointers, and multi-dimensional arrays.

  • Multithreaded & Event Based: Like most games engines we'll need the facility for event based processing, and support for such things as hierarchical finite state machines.

  • Compact Representation: Code, as well as data, has to be compact, because code is part of the model too - we're distributing everything! So some kind of p-code or byte code representation is likely (just like Java).
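By way of illustration only - with Python standing in for whatever modeling language is actually chosen, and the Door class entirely made up - here's the flavor of object those requirements point towards: flat, simple state that marshals into a compact binary form, with its behavior traveling alongside it.

    import struct

    class Door:
        # Flat, simple state: no pointers, no nested containers, so marshalling
        # stays cheap and the wire encoding stays compact.
        FORMAT = "!IfB"          # object id, hinge angle, locked flag

        def __init__(self, oid, angle=0.0, locked=False):
            self.oid = oid
            self.angle = angle
            self.locked = locked

        def encode(self):
            return struct.pack(self.FORMAT, self.oid, self.angle, self.locked)

        @classmethod
        def decode(cls, data):
            oid, angle, locked = struct.unpack(cls.FORMAT, data)
            return cls(oid, angle, bool(locked))

        # Behavior is part of the model too; in practice it would be distributed
        # as p-code or byte code rather than source.
        def on_push(self):
            if not self.locked:
                self.angle = min(self.angle + 10.0, 90.0)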


Dividing the Labor
As long as we've got a means of representing model state and behavior, along with a way of distributing it, that's most of the job done. However, it's only the equivalent of the server side.

Remember that with a client/server approach the entire game model is stored on a central server computer. Well, if we ditch client/server for the distributed approach, it's as though we've shared the server's workload out (with some duplication) among all the client computers. However, don't think that by doing so the client/server boundary has disappeared. That would be throwing the baby out with the bath water.

In a distributed games system the client/server boundary lies within each player's computer (rather than across the network connection). The client is a separate software module that concerns itself with extracting information from the games model (held in the server module) in order to present it to the player. Treat the client module as a glorified IO module. Its job is to depict the scene in view of the avatar along with any other sensory information, and to return the player's input intended to influence the avatar's behavior.
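Here's a rough sketch of that boundary, again in Python and with every name invented for the purpose (get, query_near, draw, poll, and suggest are stand-ins, not a real API): the client module only ever reads from the locally held model, and only ever feeds influence back into it.

    class ClientModule:
        """Glorified IO: everything it presents comes from the local model."""

        def __init__(self, model, avatar_id, renderer, input_device):
            self.model = model               # the 'server' module on this machine
            self.avatar_id = avatar_id
            self.renderer = renderer         # platform-specific scene modeler
            self.input_device = input_device

        def present(self):
            # Depict whatever the local model currently knows about the scene in
            # view of the avatar; fidelity is whatever this machine can afford.
            avatar = self.model.get(self.avatar_id)
            surroundings = self.model.query_near(avatar.position)
            self.renderer.draw(avatar, surroundings)

        def gather_influence(self):
            # Translate raw player input into behavioral suggestions for the avatar.
            avatar = self.model.get(self.avatar_id)
            for command in self.input_device.poll():
                avatar.suggest(command)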

This gives us a clear delineation between the distributed modeling system and the conventional client-side scene modeling system. It means that the scene modeler can be platform specific whilst the distributed modeling system is highly portable. As we'll see later, there's no reason why platform specific encodings (textures, geometry, etc.) shouldn't be held within the model, i.e. there's no performance impact.

Scalable Fidelity
One of the neat benefits of splitting world modeling from scene modeling is that whilst world modeling has to be fairly consistent across the board, scene modeling can be scaled independently, according to locally available resources.

It doesn't matter if one player sees a low-resolution display, and another a high-resolution one. Whether it's a light brown, flat-shaded, square coffee table or a ray-traced Chippendale usually makes no difference to the essence of what's going on in the game.

Just as the quality of so many games can be affected by the player's choice of graphics card, so it is up to the player's budget for computer and network connection as to how good the fidelity of their experience of a distributed system will be.

I'm not just talking about graphics quality here. There are plenty of peripheral scenery elements that can be introduced into a scene without compromising its content. For example, given a high level description of an autumnal scene, one could imagine that more powerful systems may be able to afford the luxury of leaves blowing from waving branches, falling onto the ground and perhaps sailing away on a stream. Conversely, some systems may just about reflect the sky in the stream.

Hmmn, I guess my lack of Playstation 2 experience shows here. OK, forget leaves! Let's talk little woodland creatures that scurry in the undergrowth, or hop from branch to branch, having the odd fright now and then. As long as they stay in the background and they fit the ambience of the scene it doesn't matter if some players see them and some don't.
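A sketch of the idea, with the effects and their costs plucked from the air: the scene modeler keeps a ranked list of optional embellishments and spends whatever detail budget the local machine happens to have. Players on richer machines see more, but nobody sees anything that contradicts the shared model.

    # Ambient embellishments ranked from cheap to expensive; the specific effects
    # and costs are invented purely for illustration.
    AMBIENT_DETAIL = [
        (1, "reflect the sky in the stream"),
        (3, "leaves drifting on the water"),
        (6, "creatures scurrying in the undergrowth"),
        (10, "full autumn leaf-fall from waving branches"),
    ]

    def embellish(scene, detail_budget):
        # Add background detail until the local budget runs out.
        for cost, effect in AMBIENT_DETAIL:
            if cost <= detail_budget:
                scene.add_ambient(effect)    # hypothetical scene-modeler call
                detail_budget -= cost
        return scene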

At the other extreme, you could have a scene manager that would take only the highest level descriptions and turn them into a 'synthesized speech' narration of action and scenery. Why shouldn't the blind have fun too?

Narrator: "Percival walks towards the castle. No signs of life."
"He nears the drawbridge. He starts walking across."
"The drawbridge starts to be raised"
Percival: "Eeek!"
Player: "Well, shout 'Stop!' and that you've a message for King John, you fool!"
Percival: "Stop! I have a message for King John, you fool!"

The important thing to remember here is that the player behind the avatar winding up the drawbridge may well be seeing everything in glorious 3D. Percival the avatar isn't blind, only the influencing player is - alternatively, the player might just as well be someone stuck in outer Mongolia, using a telex machine…

Percival the avatar isn't blind, only the influencing player is.



Choosing the Networking Method
If you thought it was fairly plain sailing so far, you're right. However, I think you'll find things get a little bit more choppy from now on. In fact some of you will think that the heading I've set us on will recklessly endanger the safety of the passengers. I just hope no one gets too upset when, a bit like Quint in Jaws, I smash the radio up with a baseball bat. So if you're like Police Chief Brody, the sort that espouses integrity and despises unreliability, you can grab those lifebelts now if you want - but they won't help you.

The question we're all asking is just how are we going to communicate the game across all these millions of players' impatient computers? It's the answer that may make you a bit queasy.

You may remember in my first installment that I mentioned the variety of mechanisms by which a game could be networked. There were a few choices, but I think there were enough hints to show you where my money was. I wish I could remember why, all those years ago, I concluded that a distributed approach was the best solution. I feel it would be a useful thing to document the thinking that led to it, but my memory isn't that good.

Yup, the brain works in mysterious ways. It is indeed difficult to recall the order of thinking that led one to arrive at a particular design of system. However, instead of spending years on introspection and oodles of shrink bills on recovered memory sessions, I've used the same mysterious ways of the brain to arrive at a solution. I woke up one morning with a way of rationalizing it all. You know - the design of the distributed system.

What follows is a methodology for selecting a distributed system and its characteristics for the purpose of enabling massive multiplayer games. I will not be considering security, revenue generation, administration, or any other ancillary issues at this stage. Suffice it to say there are no great white sharks you need to worry about right now.

Working Back from the Future
Our objective is scalability. That means we're into conquering the world and keeping it that way. We have to plan for things scaling up from what they are today to the limitless resources of the future. Rather than design a system from a contemporary perspective, with one's imagination handicapped by contemporary feasibilities, it's better to start off designing a system that has the luxury of infinite resources. The trick is gradually pulling these resources down according to their relative growth curves.

Well, here's my view on the long term growth curves of computing resources. This is where you may have to stop reading and do something more useful with your life if you find yourself strongly disagreeing with me. As you will have gathered by now, my entire case rests on latency being fairly stagnant for the foreseeable future. Bandwidth, and by that I mean the 'available' sort, I expect will grow slowly. The bandwidth of a single connection (piece of optic fiber) may well double every so often, but that isn't the same as the bandwidth of a networked connection. Perhaps its growth is still exponential, but I'd rather not put it in the same class as processing and storage. And as for the growth of these last two, well, nothing short of global catastrophe will flatten their curves!

  • Latency: asymptotic, e.g. a + b/t
  • Bandwidth: parabolic, e.g. a + t^b
  • Processing: exponential, e.g. a + b^t
  • Storage: exponential (the same curve as processing)

Let's see what this might mean in practice:

Year                 2000    2001    2003    2007    2015     2031
Avg. Latency (ms)     200     191     177     159     140      124
Bandwidth (Kbps)       64      74      97     157     343    1,061
Processing (Bips)     0.1    0.16    0.39     2.5      96  144,000
Storage (RAM GB)      0.1    0.16    0.39     2.5      96  144,000


These figures are just an illustration of the growth curves -- don't read too much into them. I mean only to suggest that to all intents and purposes, current systems have meager amounts of processing and storage resources, especially when you compare them to what the situation will probably be like in a few decades' time.
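For what it's worth, the shape of those curves can be sketched in a few lines of Python. The constants below are plucked out of thin air to give roughly the right shapes; they are not calibrated to the table above and shouldn't be treated as predictions.

    def latency_ms(t):        # asymptotic: a + b/t
        return 110.0 + 90.0 / t

    def bandwidth_kbps(t):    # parabolic: a + t^b
        return 60.0 + t ** 2.0

    def processing_bips(t):   # exponential: a + b^t (storage follows the same curve)
        return 0.1 * 1.6 ** t

    for year in (2000, 2001, 2003, 2007, 2015, 2031):
        t = year - 1999       # years since the baseline, doubling each column
        print(year, round(latency_ms(t)), round(bandwidth_kbps(t)),
              round(processing_bips(t), 2))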

Carte Blanche
Perhaps now it's a little more obvious why I'm suggesting that the trick is to pull down resources. Given carte blanche, almost any design will work. There's not really much point in suggesting any one in particular. However, it's interesting the way the choice of design starts to narrow as you limit each resource, and importantly in order of their cost. In the future, latency (and bandwidth to a lesser extent) will be the most significant cost and thus must have primary impact upon the system's design.

A Future With Today's Latency
Ok, so what happens if we don't quite have carte blanche? Let's say we have unlimited bandwidth and other resources, but messages are no longer delivered in an instant -- non-zero latency in other words. Even so, don't think this is simply where messages always take 200ms. Latency means you no longer know when any message will arrive at its destination, if at all. You might be able to measure some statistics of message trip time, and these might be maintained for a while, but you can't be certain. Coping with latency is key to designing massive multiplayer games.

First consider a peer-to-peer approach: every participant holds the entire model and transmits player input to every other participant. It works. However, if you were hoping for perfection then the game will be at the mercy of the longest message delays - the weak link in the chain as it were. No matter, maybe you can come up with 'coping mechanisms' to ameliorate these delays. If very long delays, or message losses in other words, are a likelihood, well, you could try continuously reconciling state between peers rather than surviving on input alone. After all, we haven't considered limiting bandwidth yet.

How about client/server? Well, it's a single authoritative model, so little chance of corruption. After collecting all input it could even return rendered frames to each player. Remember, we're pretending we've got the bandwidth to do this. Even if message loss occurs, it needn't cause widespread corruption or delay. Players with high latency or flaky connections would simply have to put up with it.

It's not that clear yet whether peer-to-peer or client/server is better. I suspect that the latter would probably win out at this stage, if only because it's more straightforward. Whilst the former might ameliorate latency (assuming it was related to network topology) this would probably require sacrificing global synchronization, i.e. in practice, synchronization is only required between players in the same mutual area of effect.

Limited Bandwidth
Introducing limits on bandwidth, and considering the bottlenecks that arise, immediately knocks the client/server approach on the head. It also persuades us to review the design of the peer-to-peer model.

We can still afford to have every computer hold and compute the entire game model, but can we still receive billions of player input events? No, we must presume that there's probably not enough bandwidth. Broadcasting player input is now a no-no.

As I mentioned in the case of non-zero latency, we'd probably adopt a peer-to-peer design where each peer only subscribed to the input events of players that were within the same area of effect. We'd also have a continuous background process of model reconciliation, i.e. partial synchronization.
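A sketch of both ideas, assuming each avatar carries a coordinate tuple, an owner, and a simple version counter (all of the names are invented): subscribe only to the input of players whose avatars fall within a mutual area of effect, and let a background pass nudge everything else toward the freshest state seen so far.

    import math

    AREA_OF_EFFECT = 50.0    # arbitrary radius, purely for the sketch

    def within_effect(a, b):
        return math.dist(a.position, b.position) < AREA_OF_EFFECT

    def interesting_peers(my_avatar, all_avatars):
        # Only subscribe to input events from players whose avatars can actually
        # affect ours; everyone else gets reconciled lazily in the background.
        return [other.owner for other in all_avatars
                if other is not my_avatar and within_effect(my_avatar, other)]

    def reconcile(local_obj, remote_obj):
        # Background partial synchronization: adopt the fresher version rather
        # than insisting on lock-step agreement.
        if remote_obj.version > local_obj.version:
            local_obj.state = remote_obj.state
            local_obj.version = remote_obj.version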

But, let's stop right here. Things are no longer quite so straightforward -- we've long since left Kansas as some might say. We are now in the realms of The Internet as it stands today. I can't cover in only a few sentences the wealth of techniques that have been developed over the last few years to cope within known bounds of latency and bandwidth - when trying to develop multiplayer games (or battlefield simulators). As I've mentioned previously, DIS and HLA are the acronyms to check out. They're the sort of approaches you'd use if money were no object, but you were stuck with a limited network.

However, whilst many of those techniques will work fine when you have known bounds of latency, bandwidth, and numbers of participants, that is not the case we have to deal with. We are not operating in circumstances where there is theoretically a way of getting the right information to the right computers without noticeable delay or artifact. Indeed, we have to embrace at the core of our design the case where there is more going on than can be communicated in any reasonable time. This gives rise to the need to prioritize communication - the first hint that we're destined for a scalable approach.

This is also the point at which we realize synchronization is no longer the right word for what we're trying to do. We're now trying to design a modeling system that only tends toward agreement. We accept we'll never get perfect agreement, but as long as few players notice, it'll have to do. Just as in the real world, all we need is a consensus of reality. For example, I don't think anyone's been able to prove that a tree falling in a forest still makes a sound in the absence of witnesses. As long as it turns up on a tape recorder we're happy. Even reality does not necessarily need a synchronized model; it just needs to be synchronized when we inspect it. Dinosaur bones don't need to exist until we dig them up. Of course, it's easier for our Occam oriented brains to believe the universe doesn't work like this, but you never know…


Dinosaur bones don't need to exist until we dig them up.

The important thing about consensus is that it is a human construct arrived at by discussion - it is not reality, it is only a description. In other words, as long as players' discussions of their experience of cyberspace agree, then it does not matter if the quality of their experience differs. This is our guideline in prioritizing the information that is communicated between computers. The most salient and critical aspects of the virtual world become the most important to communicate. For example, it is more important to know that a landmine has been laid nearby than whether it's round or square, just as it is more important to hear the rapid approach of footsteps than whether boots or stilettos are being worn. The relative importance of these salient features can either be extracted from the dynamic relationships between objects in the virtual world or can be imparted into the model by the game creators.
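Here's one way that prioritization might look, as a hedged sketch only: pending updates are ranked by a salience score (importance divided by distance, say, with the importance imparted by the world's creators or derived from its dynamics), and whatever bandwidth is available right now gets spent on the most salient news first. The names and the scoring function are assumptions, not a worked-out scheme.

    import heapq

    def salience(update, avatar):
        # Higher is more important: a nearby landmine outranks the question of
        # whether it happens to be round or square.
        return update.importance / (1.0 + update.distance_to(avatar))

    def send_most_salient(pending_updates, avatar, link, budget_bytes):
        # Spend this moment's bandwidth budget on the most salient news first.
        ranked = [(-salience(u, avatar), i, u) for i, u in enumerate(pending_updates)]
        heapq.heapify(ranked)
        while ranked and budget_bytes > 0:
            _, _, update = heapq.heappop(ranked)
            packet = update.encode()
            if len(packet) <= budget_bytes:
                link.send(packet)
                budget_bytes -= len(packet)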

Limited Storage and Processing
It's only when you squeeze the resources a little tighter that a more streamlined and scalable design emerges. Once you attempt to cater for a local computing environment in which storage and processing resources are no longer unlimited, and furthermore, where their limits are unknown, you can either continue with a brute force approach and refuse to operate on incapable systems or you go scalable and adopt a best effort approach.

The thing is, because we are having to design a system to operate in an unknown local environment, it is pointless to try to continue making guarantees about local system integrity (and blame the network when things get sluggish). Ok, local resource restriction doesn't really directly lead to a best effort approach, but it does make it look more and more appealing. After all, when faced with the impossible, your best effort is all you can hope to do.

When 'less than perfection' is adopted as the essential heart of a system rather than regarded as a runtime hazard, this best effort approach is quite liberating and allows for simpler and probably more elegant systems design.

Not only do we prioritize the communication of the model's state, but we also have to prioritize its storage and processing - and again, according to its importance to the avatar.

The local resources are in effect a read/write cache of the global model (even though it may only ever exist as a composite of all these caches). Of course it is an active cache, inasmuch as it is not just state, but also a portion of a model and thus must be continuously computed. And we all know about caching policies such as 'least recently used' and multi-processing systems that time-share according to process priority - don't we?
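So, as a sketch (with the eviction policy and the numbers entirely arbitrary, and obj.update assumed), local storage becomes a prioritized, active cache: objects of low importance that haven't been touched for a while are shed first, and processing time is handed out in order of priority.

    import time

    class ObjectCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = {}            # oid -> (priority, last_touched, obj)

        def touch(self, oid, obj, priority):
            self.entries[oid] = (priority, time.monotonic(), obj)
            self._evict_if_needed()

        def _evict_if_needed(self):
            # Shed the least important, least recently used objects first; they
            # can always be re-fetched from peers if they ever matter again.
            while len(self.entries) > self.capacity:
                victim = min(self.entries, key=lambda oid: self.entries[oid][:2])
                del self.entries[victim]

        def tick(self, dt):
            # An *active* cache: its contents are part of a running model, so the
            # most important objects get processed first.
            for priority, _, obj in sorted(self.entries.values(),
                                           key=lambda e: e[0], reverse=True):
                obj.update(dt)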

The Solution
Given the assumption that storage and processing resources are eminently scalable, we must be able to dynamically take advantage of their growth. This means we need a model that has a fine granularity, i.e. one where the model, when reduced to any fraction, continues to operate but at a lower effectiveness and quality. Such graceful degradation might lead one to conclude that neural networks were in order, but given that we need to understand what we're dealing with, I'd suggest that an object oriented model is a good compromise. Objects are, supposedly, nice, discrete, self-contained packages of state and functionality. And simpler objects are always available given the use of inheritance.

So at last we end up with the beginnings of a solution, a hint of the architecture that will form the basis of cyberspace. It looks very much to me like an active, object oriented, distributed database. All we need to do is continuously distribute and process a collection of objects, according to certain priorities.

We accept that each computer is likely to have a different version of model state, but that the state in each computer should exhibit a tendency (to converge) toward agreement. This is based on the premise that there is unlikely to be sufficient time in which to obtain perfect agreement.
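Pulling those threads together, the whole thing caricatures down to a loop along these lines - a sketch only, reusing the invented helpers from the earlier fragments (decode_object, priority_for, changed_objects, and the rest are assumptions, not a design):

    def cyberspace_tick(cache, link, avatar, dt, send_budget):
        # 1. Fold whatever news has arrived into the local model, tending toward
        #    agreement rather than insisting on it.
        for packet in link.receive_available():
            remote = decode_object(packet)
            local = cache.entries.get(remote.oid)
            if local is None or remote.version > local[2].version:
                cache.touch(remote.oid, remote, priority_for(remote, avatar))

        # 2. Spend local processing on the most important objects.
        cache.tick(dt)

        # 3. Spend local bandwidth publishing the most salient local changes.
        send_most_salient(changed_objects(cache), avatar, link, send_budget)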

The Best Efforts of Mice and Men
So here we say goodbye to integrity and all those who cannot tolerate its passing.

In order to arrive at a system that can model a virtual universe for an unlimited number of participants with unknown resources we've discovered that the system must adopt the best effort principle at its heart. This means embracing the compromised integrity of an unsynchronized model.

The plane has to be built and flown before people will believe it can fly.



As you'll see in one of my next installments, in many ways this makes the system simpler and our life easier. Unfortunately, as Orville and Wilbur discovered, the plane has to be built and flown before people will believe it can fly.

Moving toward an asynchronous system may be a little bit of a paradigm shift for some people, and I expect it may be too much for a few. Perhaps some of you will be able to suspend disbelief long enough to accompany those who've accepted its feasibility in reading next month's installment. I'll be discussing how to go about organizing all these billions of computers to co-operate in distributing what might be trillions of objects.

If I haven't raised enough contentious issues already, here are a couple of things to think about or research over the next few weeks:

1) If a game model consists of state, events, and behavior, then when we come to network it, which should be distributed and which should remain local?
2) 'Tuple space', Linda, and Objective Linda - these will give you a taste of distributed objects.

About the Author(s)

Crosbie Fitch

Blogger

Crosbie Fitch has recently been exploring business models for online games. He has previously worked at Argonaut, Cinesite, Pepper's Ghost, Acclaim, and most recently Computer Artworks. Always looking for the opportunities that combine 'large scale entertainment', 'leading edge technology', 'online', and 'interaction' - one day, all at the same time... He can be reached at [email protected]
