A system's security is about controlling the system's relationship with its environment. If you have a predictable environment, or you feel it is safe to assume the environment will be sufficiently predictable for the lifetime of the system, then you can get away with defining a rigid, rather closed security model based on your understanding of this environment. Compromises will most probably be complete as opposed to partial, but one hopes they will at least be a long time coming.
However, if you're developing a system for the very long term, there's not much chance of being able to predict the environment, so any rigid security model doesn't stand a chance. Security in this case needs to be adaptable, and thus, to some extent, intelligent. It needs to be able to understand unforeseen threats and attacks, recover from them, and ideally be better prepared to defend against them in the future.
Cyberspace - For Fun, Not Money
Security in cyberspace is a different kettle of fish compared to many computer systems. We aren't a bank, defending a network of computers against unauthorized access, needing to audit and ensure the integrity of all transactions. We have no computers, no intellectual property, nothing of real financial value. All we have is an objective to produce a system that will enable cyberspace and prevent it from being vandalized. In simple terms, we're producing a free game that the whole world can play.
From a real-world, materialistic point of view then, no-one can lose. The source code is still there. The player's money is still in their bank account. What's been created and is inevitably vandalized is entirely virtual: virtual scenery, virtual buildings, and virtual characters.
There's really only one precedent for this kind of thing, and it's known as Bowdlerization - the effective vandalization of fiction. You might say that re-writing the history books is comparable, but that is an attempt to change a description of reality, as opposed to a virtual one. Just as some people are into protecting the integrity of works of fiction, so we should be interested in protecting the integrity of the virtual world. Players will be putting in a lot of effort in the course of playing within it, and the persistent world that is created will become valuable to many of the players. People won't necessarily require that a castle is never allowed to fall, but they will require that if it is sacked, it is sacked via legitimate means and not by hacking the system.
Sure, there is more than the content to vandalize: you can burn the book too. You can trash the computer just as you can trash the virtual world. However, we can rely on conventional security measures for the computer, so I don't need to talk about those here. Moreover, we can assume that 100 percent of users (even the hackers) don't want their computers corrupted, and that 99 percent of our users are actively interested in maintaining the integrity of the part of the system that resides on their machine. What we're left with is the task of designing the system to secure the content against that 1 percent of hackers.
But, note that we don't need to be entirely successful in preventing these hackers from achieving their aims. Just as Bowdlerized books remain entertaining to a large audience, despite annoying many purists, so a hacked cyberspace can remain entertaining. Don't get me wrong though. I'm not saying let's give up. I'm saying the opposite: let's build, because even if it does get vandalized now and then, it's still worth repairing the damage and continuing, because the whole edifice doesn't come crumbling down, it's just a tad tainted.
The thing is, cyberspace only has to be useful to the majority of its users, i.e. entertaining. That means it can tolerate a small amount of vandalism or corruption and still remain useful. This is quite unlike a system required for commerce where confidence in its integrity (and thus viability) might completely collapse if it became compromised even in a very small part.
So while some may say that security in P2P systems is a major headache, this is largely from a commercial perspective. We still have a headache of course, but at least we only need to keep the system around 90 percent clean, rather than 99.99 percent. Of course we'll still strive to stamp out corruption, but now we know our system failure threshold is more achievable than one might at first assume.
Security in Society
This becomes more akin to maintaining a stable society than to securing a system. Each player's node, in the course of being an integrated member of a productive society, must necessarily be completely open to each and every other, simply in order to present the player with its understanding of the virtual world and to allow the player's actions to be conveyed to all other nodes.
So, I think we should start talking more about how members of a society evaluate each other in order to determine truth from falsehood, hypothesis from proof, and rumor from news, rather than how we go about keeping secrets.
I think I'm on the side of the fence that believes an open/free society is a more secure one than a closed/repressed one. If anything, it is precisely because its security is so easily and continuously tested that it becomes reinforced. It has to adapt and learn from each new threat, each new attack. Attempting to utilize a closed security model in the process of government may appear to be a more robust approach because it prevents easy access to control or change, but the trouble is that when it breaks, it breaks big. A closed or rigidly secure society rots: it cannot confront any corruption, any weakness in its overall security, until ultimately the whole edifice comes crashing down under its own weight.
A rigid structure such as a building may need to be completely rebuilt after slight structural damage, but a more adaptable structure such as an ant hill can be repaired or rebuilt around even quite major damage.
A society in which each member is free and encouraged to express their views lets the society understand itself, understand any problems, and thus heal itself. A world in which a person is free to fly a plane into a building, is a world that can heal itself (better than nuclear annihilation anyway). Of course, it would have been better if this ill could have been recognized and fixed earlier, but it seems that we must deduce that in this case it was difficult for pleb to speak peace unto superpower. And this just as we thought world peace was imminent, given we've just got 'nation shall speak peace unto nation' sorted…
If anything we need to encourage conversation, exchange of ideas, good along with the bad, pleasant along with the unpleasant. This means facilitating crime along with sociability, theft along with trade, vandalism along with art, etc. This must happen at all levels of society, not just among plebs, but also at the corporate level. Read the Cluetrain Manifesto for how corporations need to change toward greater openness. You can't have two worlds, one of people and one of commerce. We're all in this world together.
This all applies to P2P systems. Full exchange of information. You can't ban bad information. You can't even ban secrets. There must be no apartheid between commercial content provider and punter - the player is not the enemy - we are all having fun together. We only have to cope with the minority of players that choose not to play by the rules.
Secrets? From Who?
As far as secrets are concerned, unfortunately the Pandora's box of encryption has been opened and nothing any government can do can undo that. The best thing to do is to encourage free communication of dissent, bigotry, insurrection, crime, etc. Then people won't feel the need to encrypt it. We certainly can't start sending people to prison simply because they've forgotten their private key.
And on that point: if someone published a program that generated public keys without private keys, and everyone started sending each other, as a matter of course, blocks of code that had been encrypted using one of these public keys, then everyone would have a defense against forgetting their private key, i.e. anytime someone wanted to send real code, no-one would know if there existed a private key to it. Perhaps this should be termed 'anti-steganography', i.e. when code becomes widespread, then there's no need to send code covertly.
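As a sketch of the idea (assuming the 'code' being sent around simply means opaque, encrypted-looking blocks), chaff blocks of pure random bytes can be mixed in with genuinely encrypted messages; without a key, no observer can say which blocks have any private key in existence at all. The block size and chaff ratio below are arbitrary assumptions:

```python
import os

BLOCK_SIZE = 256  # bytes; an assumed, arbitrary message size


def make_chaff() -> bytes:
    """A 'chaff' block: random bytes for which no private key exists
    anywhere. To an observer it looks just like real ciphertext."""
    return os.urandom(BLOCK_SIZE)


def make_traffic(real_messages, chaff_ratio=4):
    """Mix each real (already-encrypted) message in with several chaff
    blocks, so no outsider can tell which blocks are decryptable at all."""
    traffic = [make_chaff() for _ in range(chaff_ratio * len(real_messages))]
    traffic.extend(real_messages)
    # A real system would also randomize the ordering of the blocks.
    return traffic
```

Once such traffic is routine, possessing (or 'forgetting') a private key proves nothing about any particular block.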
Anyway, my point is that you can't ban the undesirable, because if you do, it'll just happen in secret. And the next step, of banning anyone from doing anything in secret, is not just silly, it's mad that anyone finds it plausible (like banning suicide - you only catch the failures). The solution is to encourage open communication, and thus to make this a far more productive means of solving problems and resolving disputes. When you provide the means for man to speak freely unto mankind then secrets become redundant.
Troublesome Alternate Perspectives
Of course, all the above applies to the insane and the criminal as well as the sociable. It is better for an open society to allow someone to show their hand sooner with a lesser act, than for a closed one to force it to commit a much greater act later. A bit like saying "I'd rather someone vented their anger in an e-mail today, than saved up for a gun to shoot me with later…"
But, what to do about the warped sects that follow a collective delusion that humankind must be destroyed? Well, the sooner their hand is shown the sooner something can be done…
However, ultimately there always exists an end to every system. A human can withstand a lot, but a bullet or cancer can finish it off. Humankind can withstand a lot, but a meteorite or nuclear war can finish it off. Life can withstand a lot, but not a supernova. Matter can take a lot, but not a black hole. No system is eternal.
So, there is no such thing as perfect security for any system, but we can take pains to ensure that things are as secure as possible without compromising the system in the process. This means finding an appropriate approach.
It seems to me that it is better to look to similar systems in nature and apply their lessons to our distributed-systems-based cyberspace, than it is to bang the square peg of rigid systems' security into our triangular hole.
For example, just as our bloodstream happily ferries pathogens along with nutrients, it also uses the same network to ferry antibodies as a defense. There are some systems such as this that on some levels seem to be dangerously open to attack, but are open through necessity, and luckily it's from such systems that we can take examples of appropriate approaches to security.
Breaking Away from Obsessive Computer Security
Perhaps it's instinctive, perhaps it's brainwashing. Either way, a large chunk of computer literati seem to be overly concerned about security in computer systems. So, perhaps it might be a good idea to just step back a mo and review the situation.
For example, although a high level of conventional security measures may be appropriate for a single computer or a corporate network, it isn't necessarily the best approach for a network of a million or more, effectively anonymous PCs.
With the advent of the Internet it's important not to lose sight of what we're trying to secure, and risk ending up thinking security is sacred. Fragile systems that lose significant value when they're compromised by accident or deliberate act are indeed candidates to warrant considerable security. However, more flexible systems that are expected from the outset to be compromised (perhaps only in part) on a continuous or occasional basis can still maintain their value. Security for such systems is, and must be, an intrinsic property and not an added feature.
The thing is, there's a risk that by continually reinforcing a system's security it simply becomes more and more complicated, burdensome to maintain, unwieldy, and worst of all, ever more fragile. That's why I think it's useful to explore analogues to networked computers: it broadens one's perspective of what's important and of how much security, or lack of it, other systems can tolerate.
Insecurity of Changing Systems
Consider human society. If it operates according to certain principles that regard property as a human right, then as long as the majority respect that right, the society can tolerate the few who do not (perhaps making an effort to discourage such disrespect). However, if the majority respect for property collapses then we might still arrive at a stable society - such as communism or anarchy. It will probably be a different society, but not necessarily less 'good'. Whatever the majority conspires to achieve will define the society, and it can still work, still be viable, whether we judge it civilized or not.
There may be some people who are strongly averse to changes in the system, but those changes aren't necessarily undesirable or unworkable. Protecting a system from change may thus sometimes seem to be a given requirement of a security system, but it isn't.
In the sense we're interested in, security is the ability to protect a system from being changed into a patently non-functioning system (and its constituents to non-functioning constituents, but only in so far as the viability of the system is threatened).
The typical solution to this is overkill, i.e. preventing all change to the system except by a select, privileged group.
As can be seen in open source software development, things don't necessarily descend into chaos simply because anyone can change the system. There just happen to be good mechanisms to weed out disadvantageous changes.
I've often heard that people don't like change, so perhaps it's not too surprising that people like security. Creatures of habit aren't we, eh?
But, notwithstanding our discomfort at change, we seek only to minimize the likelihood (and duration should it occur) that our system will cease to function (in whatever form it has evolved into) in a popular way.
Introducing The Social Approach
In the following discussion my primary focus is to consider security in terms of assuring the system's integrity and operational viability. While such things as access control, secrecy, privacy, rights management, input validation, etc. may figure prominently in typical commercial systems' security requirements, I hope I will end up persuading you that they may not be fundamentally necessary in an open system.
Changing Focus of Security
Let's start off with a quick review of how the focus of computer security has shifted over the years. In the beginning, security was relatively straightforward: dumb terminals wired to a central mainframe, with all data and processing - and thus all security - under central control (Figure 1).
Figure 1 - Terminal/Mainframe
Then we had the advent of intelligent terminals: computers in their own right that were able to perform some local presentation processing. These could query a central mainframe server for the barest minimum of data or record locking, and handle the complexities of user interaction locally. This made things a bit more responsive, but the client machines still had to be considered to be in the hands of the enemy (Figure 2). The client/server model is often the initial inclination for commercial systems, given that the server is so patently in control; however, it has considerable scalability costs. Nevertheless, its good security can help reassure players that games are fair and that their investment in playing will not be compromised.
Figure 2 - Client Server
Note that the server does not necessarily need to be under proprietary control. A variant of it (Figure 3) abandons the need to secure the server, allowing it to be hosted on the same machine as one of its clients. This is quite sufficient for games where players can trust each other, and they are likely to have a network fast enough and a machine powerful enough to cope with the typical number of players.
Figure 3 - Player Hosted Client Server
An alternative to client/server is to make no distinction between participating computers, simply having communication on a peer basis as necessary (Figure 4). In many cases this can help spread the workload and prevent communication bottlenecks. So, although this model is not particularly secure, it can scale to some extent.
Figure 4 - LAN Based Peer-to-Peer
The LAN-based peer-to-peer approach can be directly ported over to the Internet (Figure 5), but because bandwidth limits are more significant, more careful attention needs to be paid to how relationships are formed, especially if these can be anonymous. Moreover, each computer has to be a little more responsible for what's going on, i.e. the access and privileges it allows to peers. File sharing systems such as Gnutella adopt this kind of approach.
Figure 5 - Unsupervised Peer-to-Peer
Of course, in exchange for an Achilles' heel, you can provide a secure server alongside the peer-to-peer approach, giving some security and control for occasions when it's necessary (Figure 6). For example, licensing, payments, indexing, and other features that may need more security than the bulk of information being passed around can occur via a central server. Napster used something similar to this (and so could be shut down by disabling the server). In some systems the server may only be needed in order to set up a group of peers that thereafter operate independently, and in others it may be used continuously. Either way, it can become a bottleneck.
Figure 6 - Supervised Peer-to-Peer
When the information needing to be provided by the server is proportional to the number of users, then a solution to the scalability problem is to have multiple servers providing the same service. Prospective participants then need to first find the most appropriate server to use (Figure 7).
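One way a prospective participant might find its server without a central coordinator is rendezvous ('highest random weight') hashing - a sketch of an assumed mechanism, not one the article prescribes:

```python
import hashlib


def pick_server(participant_id: str, servers: list) -> str:
    """Deterministically map a participant to one of several equivalent
    servers. Each participant hashes itself against every candidate and
    picks the highest score, spreading load with no coordination; when a
    server disappears, only its own participants get reassigned."""
    def score(server):
        h = hashlib.sha256(f"{participant_id}:{server}".encode()).hexdigest()
        return int(h, 16)
    return max(servers, key=score)
```

The same participant always lands on the same server while that server is up, which is exactly the 'find the most appropriate server' step in Figure 7.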
Figure 7 - Multiple Server
Of course, if the server-side information needs to be shared among all participants then you need to either replicate this information across all servers, or distribute it appropriately (Figure 8). In fact this is beginning to get popular as 'the solution', i.e. you have a two-tier distributed system. In the top (central) tier you have the trusted servers, which are maintained by trustworthy ISPs, etc. In the lower (outer) tier you have all the untrusted players. You can still allow players to distribute information among themselves for expediency, but ultimately they'll only trust the horse's mouth (a trusted server).
If you have a large amount of content that you need to protect then this is a pretty good solution. However, you have to provide and maintain the servers (or do deals with ISPs).
Figure 8 - Replicated Server (or Partially Distributed System)
An approach that tries to obtain the benefit of player-owned infrastructure, but secured content, is to have a distributed system, but encrypt data between the client and the distributed component (Figure 9). This means every player can hack the program that runs on their own computer, but they can only trash the data they have because they don't know how to encrypt valid changes. Thus you can spoil the fun for yourself, but not for anyone else. Well, at worst it's like letting your pals play with your football, but causing hassle when you take your ball away (they just have to fetch another ball).
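A minimal sketch of the principle, using an HMAC as a stand-in for whatever scheme the shipped client would actually embed (the key and function names are assumptions): honest peers only accept changes carrying a valid tag, so a player who tampers with their local copy produces changes no one else will accept.

```python
import hashlib
import hmac

# Assumption: this secret is buried in the distributed component shipped
# to every player. Since it ships with the client, this is obscurity
# rather than proof against a determined reverse engineer.
SECRET = b"embedded-in-the-client"  # hypothetical


def seal(change: bytes):
    """Produce a change plus the authentication tag honest peers require."""
    tag = hmac.new(SECRET, change, hashlib.sha256).hexdigest()
    return change, tag


def accept(change: bytes, tag: str) -> bool:
    """An honest peer's check: reject any change whose tag doesn't verify."""
    expected = hmac.new(SECRET, change, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Trash your local data and you only spoil your own game; peers simply refuse the unsealed changes.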
It's difficult to imagine that if a program on your PC can decrypt and encrypt something, that you can't reverse engineer that process. So I think the jury is still out on this one.
Don't mistake this approach though, for the one where messages between peers are encrypted. Interception is not the only issue. We need to be able to tell the difference between a computer that has a true arbitration on its hard disk and a computer that has a falsehood in its place.
It's a choice between reputation monitoring and believing in perfect security. With the latter, a security breach means the whole system is suspect, because reputation monitoring won't have been utilized. Even so, I daresay there'll be a few attempts to equip consoles intended for massive p2p games with hardware based encryption. Good luck! See how far they got with DVD.
The other side of the coin though, is that such games might not get developed until the non-encrypted approach has been tried and tested, and that will tend to mean that reputation monitoring is going to get honed as an acceptable, if not reliable, security mechanism.
Figure 9 - Encrypted Peer-to-Peer
And so we come to the hierarchically arbitrated distributed system (Figure 10). Totally open. Protected only by the ability to consult one's peers as to the reputation of other peers. No one in control. Just like Linux, you're free to create your own variation, but the likelihood is that people will prefer to use a standard build from a supplier they trust. The market will provide its own validation suites to double-check that each version is hack-free. All you're left with is a few rogue computers running spoof versions trying to corrupt the system. They have to do their dirty work before any of the other peers they deal with start to smell a rat. Just like on eBay, you don't get so much trust while you're still wearing shades, i.e. the ability to corrupt only comes with reputation, and reputation isn't something you get overnight. You can create a cartel to mutually bounce up reputation, but it's unlikely that a particular peer will consult solely the members of that cartel (unlike on eBay). Just like buying your own printing press, printing your own news is no good unless the stories can be verified (even if you own most of the press).
Figure 10 - Hierarchically Arbitrated Distributed System
Strength with Flexibility
Some of the strongest or most resilient systems are those that can change. The grass that blows in the wind, etc. An animal might die, but the species goes on. The species might become extinct, but life goes on. Ultimately DNA is pretty resilient stuff when it comes to surviving what the universe can throw at it. Perhaps, thinking even beyond DNA, to all life, including as yet undiscovered forms, it's a case of "Life might die out on this planet, but life in this galaxy will go on…"?
But, back to Earth, and more immediate concerns…
I've often wondered if the common cold isn't actually a means by which our immune systems communicate with each other. Think of it like security consultants exchanging details of the latest virus with each other:
"Hey, Fred, I've tweaked this test virus a bit - you know, the one I got from Bill the other day - I've made it a tad more cunning. Infect your system with it and see how long it takes you to suss out how it works"
"Righty ho, Tom. I'll pass it around the lads at work. Our anti-virus software will soon be even stronger"
A system that is constantly exposed to agents that impair its viability will either adapt or die. In other words, if we design a system that cannot adapt to unforeseen threats, then we must expect it to become unusable. For a system with a short-term lifespan it's probably quite economic to make it strong but fragile - when it's busted, it's busted. We can always send out a patch or fix if necessary. However, a system that's got to carry on working no matter what's thrown at it has got to survive throughout the threat long enough that an 'immune system' can beaver away, analyze the problem and come up with a fix.
The only immune systems for computer systems in use today tend to comprise human beings (teams of coders). We have wafer scale integration, RAID, and voting computer systems, which eliminate errant components. We have disk formats and databases that can repair themselves after corruption, sometimes even without loss of data. We have virus checkers that can recognize viruses, even mutating ones, and remove them. But, I think we're still at the research stage in terms of developing a system that can recognize novel and undesirable elements solely based on their behavior, that is then able to remove them and allow the damage to be repaired.
Of course, you have to be careful with such automatic measures. Sometimes they cause more harm than good. Not mentioning any names, there is a particular system in use today that attempts to secure a user's files (against accidental loss). Thus it can recreate a user's file if it feels it shouldn't have been lost, and also delete it if it thinks it's spurious. Unfortunately, if it gets it wrong (say, when the server crashes), sometimes it can decide that all the user's files are spurious and should be deleted ON THE REMOTE COMPUTER! I've seen this happen, and the victim tends to emit steam. But then, even our own biological immune system gets it wrong sometimes - with lethal consequences. On balance, though, I guess we'd choose to keep our immune systems for the greater protection they afford us over the harm they cause if they go wrong. It all depends upon whether you live in an 'unfriendly' environment or not.
So, as we're developing a system to have an unlimited lifespan, it looks like we'll be needing a flexible, resilient system that can tolerate being in a state of continuous compromise and can detect and remedy its sources.
Let's now think of humanity at a different level, its behaviour en masse as a cellular organism, perhaps in terms of its nature as a knowledge based system - aside from its behaviour as a parasite on this planet.
Our social system of gossip survives fakes - we can weed out the liars, the false rumor mongers, the charlatans and con artists - well, usually.
Our distributed system on the other hand is one where we have a multitude of computers gossiping about what's going on, and like society there's a continuous ebb and flow of computers that grow in the amount of respect and authoritative status they've earned, and sometimes a fall from grace when they've abused their position.
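The ebb and flow of computers gossiping about what's going on can be sketched with a toy push-style simulation (peer count, round count, and seed are arbitrary assumptions):

```python
import random


def gossip(num_peers=20, rounds=6, seed=1):
    """Push-style gossip: each round, every informed peer tells one
    randomly chosen peer. Returns the set of peers who know the news."""
    rng = random.Random(seed)
    informed = {0}  # peer 0 originates the news
    for _ in range(rounds):
        for peer in list(informed):  # snapshot: new arrivals wait a round
            informed.add(rng.randrange(num_peers))
    return informed
```

The informed population roughly doubles each round, which is why rumor (true or false) spreads through such a network so quickly.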
We need a gossip system where participants can measure the quality of information they receive by comparing it with everything else they hear. This is viable where infractions are expected to arise from individuals rather than a large consortium. The thing is, by definition, if the consensus wishes you to believe a lie, then the lie becomes the truth. You try telling people the world is round if the consensus is that it's flat! What is important to cyberspace in terms of its entertainment value is that we have a consensus about it - it is not urgent that we inspect each item of information to determine that its internal logic is sound - we'll find that out in due course. Or put another way: if you can't believe everything you hear, then the majority view is a good place from which to start - it tends to cause least friction.
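A toy illustration of 'the majority view is a good place from which to start': each node simply adopts the most commonly reported version of events, regardless of whether it happens to be true.

```python
from collections import Counter


def consensus(reports):
    """Adopt the majority view of what peers report: the single most
    commonly reported version wins - truth doesn't enter into it."""
    (value, _count), = Counter(reports).most_common(1)
    return value
```

Note that this is exactly why a lie believed by the majority *becomes* the truth as far as such a system is concerned.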
It's only when the majority view is tested that we need to find out whether it's valid or not. For example, until we take a closer look, it doesn't matter whether there are artificial canals on Mars. It doesn't matter whether witchcraft exists or not, one can still exterminate suspected witches to err on the side of caution, and find out the error a bit later when the consensus changes. Taking the minority view is worse, because then any crackpot can say anything, e.g. the sky is falling.
These days we have scientists who we now hope are able to move civilization on to a higher level, where our consensual reality is constructed a little more rigorously, based a bit more on logic and falsifiability, and less on rumor and assertion.
You might think that reputation plays a good part in all this. However, what we can observe throughout history is that reputation does not improve the validity of consensus, it merely improves the ability to disseminate it. The Pope might have had a good reputation, but his knowledge concerning reality wasn't particularly sound - the important thing to note, though, is that it didn't matter. Civilization just needs consensus; it doesn't need the right one, unless it has to progress (someone invents the telescope, or discovers America, say).
So, cyberspace doesn't necessarily need a 'right' version of virtual reality, it just needs to be able to disseminate, and achieve, a consensus as to a usable version, i.e. we only need to worry about repairing inconsistency when we meet it.
Don't think I'm trying to devalue the benefits of existential veracity - far from it. I'm just pointing out that there's a separation between consensus and true reality, and that corruption of the truth is not necessarily a threat to the system's operational viability in providing an entertaining experience.
So if we have a renegade node that's amassed a sizeable reputation, then it will indeed have the ability to feed a corrupt version of reality to the large number of nodes that respect it, but this won't necessarily crash the system, or even make it unusable. It might achieve the renegade node's ends, suggesting that a passing asteroid is actually an alien spacecraft able to pick up recently expired souls, but hey, it's difficult for anyone to prove any falsehood has occurred. There's just a consensus discontinuity. Members of each consensus are just as happy with their version of reality as the others are with theirs.
To some extent the most obvious manifestation of a consensus discontinuity today is between religion and science. The trouble with faith is that it doesn't conflict with the consensus in any way that can be disproved. That's why it's so difficult to deprogram theists. And from the theist's point of view, that's the trouble with science: it appears to provide a sufficient universe, so it's difficult to persuade atheists that there is more, that there is a god, that faith 'works'.
The point is, that in a virtual world just as in the real one, an 'untrue/true' version can live among a 'true/untrue' version, and both parts can 'know' that their version is the truth.
This is why politicians worry about someone, who challenges their ideas of truth and goodness, amassing popularity and respect. Popularity can outweigh the truth.
So utilizing 'reputations' is a better strategy than giving credit equally, but it isn't a perfect solution. We also need the ability to inspect the fabric of reality for self-consistency, rather than just taking it at face value. However, let's see how far we can get with reputations.
In order to have some kind of reputation tracking, our system needs to have a means of identifying each participant and the ability to gauge, on a long-term basis, the quality of information we receive from them. It doesn't really matter if we only know them for a short time; we'll make our own judgements regarding what they tell us.
This reputation tracking strategy neatly meshes with the heuristic approach I discussed in the previous article. By measuring a node's reputation based on one's own dealings with that node and by conferring with other nodes that one knows or respects (one may trust senior nodes more, or peer nodes with good reputations) as to their measure of the node's reputation, one can get a fairly reliable idea of a node's 'goodness', i.e. the likelihood that its information is valid. Naturally, one can't simply go by a node's own recording of its own reputation (though if it differs from other nodes' values, something fishy is probably afoot).
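A hedged sketch of such a measure (the blend ratio and the trust weights are arbitrary assumptions, not a prescription): combine one's own experience of a node with the opinions of referrers, each opinion weighted by how much one trusts that referrer.

```python
def reputation(direct, referrals, trust, direct_weight=0.5):
    """Estimate a node's 'goodness' in [0, 1].

    direct:    our own experience of the node
    referrals: {referrer: that referrer's opinion of the node}
    trust:     {referrer: how much we trust that referrer}

    With no usable referrals we fall back on our own judgement.
    """
    if not referrals:
        return direct
    total_trust = sum(trust.get(r, 0.0) for r in referrals)
    if total_trust == 0:
        return direct
    referred = sum(opinion * trust.get(r, 0.0)
                   for r, opinion in referrals.items()) / total_trust
    return direct_weight * direct + (1 - direct_weight) * referred
```

Note how an untrusted referrer's opinion (trust 0) contributes nothing - which is also why a cartel only helps itself if you happen to trust its members.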
This idea isn't new by any means, and for more sophisticated developments please see the end of this article for a bunch of links to further reading.
Remember that peer-to-peer is all about people freely communicating with each other. People have no secrets. Indeed, the system's entire raison d'être is to tell everything that anyone wants to know as efficiently as possible.
A fairly sensitive issue is that people want to be relatively confident about the integrity of the information they receive. And I think this is the key - at least in aggregate terms. It's not that the information must be free of inconsistency, up to date, or correct. It just needs to consist of accurate recordings of events. This is because in some sense, the present is an accumulation of historical events. Although we can still live in a present where the history books have been altered to suit someone's preference, it breaks the spell that this is an alternate reality. It indicates that some players have managed to whisper in god's ear.
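One way to make the history books tamper-evident - a sketch of a standard technique, not a mechanism the article prescribes - is to chain each recorded event to the hash of the entry before it, so quietly rewriting any past event breaks every entry after it.

```python
import hashlib


def append_event(log, event):
    """Append an event to a hash-chained history: each entry's digest
    covers the event and the previous entry's digest."""
    prev = log[-1][1] if log else "genesis"
    digest = hashlib.sha256(f"{prev}:{event}".encode()).hexdigest()
    log.append((event, digest))


def verify(log):
    """Recompute the chain; any altered past event breaks verification."""
    prev = "genesis"
    for event, digest in log:
        if hashlib.sha256(f"{prev}:{event}".encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

Peers holding copies of the chain can then tell an accurate recording of events from one where someone has whispered in god's ear.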
If all computers are involved in scribing the history books, and most players are only interested in playing by the rules, then we need some scheme of contriving that the non-rule-abiding players don't get to be scribes.
In other words, in order to secure the system, our primary objective is to determine who is best granted the responsibility for arbitrating over the information that defines the virtual world.
Why is Existential Accuracy Important?
People seem to be designed to operate in a coherent universe, so while occasional drug-induced excursions may be 'fun', people prefer that their experience makes sense (at least, they want to be confident that it will make sense one day, if not today). However, it need only make enough sense for them to have an experience in which they can remain entertained, e.g. an experience in which a small percentage of events seem to make little sense may be quite acceptable - indeed the human mind seems adept at ignoring nonsense. We are quite happy to ignore such things as gravity: we'll just say that's the way things are, or we won't even realize that gravity is going on all around us. It takes an apple falling out of a tree to irritate the right person just enough that they'll ask 'why' for long enough to outlast their attention span.
Perhaps millennia ago the normal people were a bit miffed that wizards and witches appeared to have an unfair advantage in life. Perhaps they worried that these people with large amounts of wisdom and knowledge could exploit this to mess with reality (god's world). It's happening again today. Perhaps we'll institute laws to make hacking a capital offence, perhaps burning at the stake might be appropriate? When we start relying on cyberspace as a virtual reality then we'll probably get very upset if we find anyone knowledgeable enough to mess with it (they couldn't possibly be wise).
Breaking the Rules, Breaking the Game
A game that no-one plays is a broken game. A hacked game that everyone still plays (its numbers are not decreasing) has not been broken.
Boredom and lack of time aside, the only reason people stop playing a game is that it has ceased to be fair. Even if some players are breaking the rules, so long as their presence and exploits are negligible, they can still fail to impact the fairness of the game as a whole.
Thieves break the law, yet their activities fail to impact the perceived overall fairness of a property-based society. Why? Because detection methods keep thievery to an acceptable level. We can't stop it happening, but we can add costs and risks to it from the thief's perspective. In some circumstances you could even counter thievery solely by reputation - if reputation is valued by thieves, of course. This is why, in small groups of people (even thieves), the members of the group don't tend to steal from each other: they value membership of the group. To some extent this is how we defend against thieves, by removing them from society and removing their social responsibilities.
I'll say it again: grant arbitration to nodes according to their past performance in terms of consistency and accuracy. And yep, we can measure that, because we have a whole community of nodes involved here. It's not an anonymous, one-to-one relationship. The corrupt nodes then tend to get removed. And it doesn't matter if they resurface as new nodes, because lack of a performance history isn't much different to being untrustworthy in any case.
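As a sketch of that "grant arbitration by track record" policy - the record format, threshold, and minimum-history values here are assumptions for illustration only:

```python
# Hypothetical sketch: nodes earn arbitration rights through a history of
# consistent, accurate results. A fresh identity has no history, so a
# corrupt node gains nothing by resurfacing under a new name.

def eligible_arbiters(nodes, threshold=0.75, min_history=20):
    """Select nodes trusted to arbitrate, based on past performance.

    nodes is a dict: node_id -> (checks_passed, checks_total), where each
    check is a community measurement of consistency and accuracy.
    """
    arbiters = []
    for node_id, (passed, total) in nodes.items():
        if total < min_history:
            continue  # no performance history ~ untrustworthy, by default
        if passed / total >= threshold:
            arbiters.append(node_id)
    return sorted(arbiters)
```

The point of `min_history` is exactly the one made above: lacking a performance history is treated no differently from being untrustworthy.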
An open system is an evolvable system. It may be weak today, but each attack makes it stronger. A closed system is a fragile system. It may be strong today, and resist many attacks, but the first successful attack will break it completely.
In an open system the solution is to be open about security. The more a system (including its users) is able to understand about itself the more it is able to recognize and pinpoint anomalies and symptoms of corruption.
Security in an open system is an educational training session between the system and its adverse environment. The system is continuously tested in increasingly sophisticated ways, and each time it adapts and accommodates such tests.
Who's the Enemy?
Conventional wisdom so far in the game development community has been that 'the client is in the hands of the enemy'.
Er, excuse me, but 'the client' is in the hands of the player, and the players are friends (well, until they lose sight of the game). Players shouldn't all be tarred with the same brush just because the hacker sometimes wears a player disguise. Players are the great untapped ally in the war against game hackers.
The Hacker Mindset
All the players hope for is: firstly, that they will have fun; and secondly, that they will have equality of opportunity in having that fun, without being obliged to subvert legitimate player interfaces.
In other words, players wish to suspend disbelief in the virtual world. They don't want to have to hack the system in order to obtain parity with other players that get their rocks off doing that sort of thing (like have to get an 'aimbot' just because everyone else uses them). While it may well be fun to hack, that kind of 'fun' usually depends upon the presence of a number of non-hacker users.
That's the hacker mindset for you: if there's no challenge, there's not much point in hacking it. If a derelict house has no doors or windows, why find a way in via the chimney? If something's already broken or worthless, why try to compromise it further? Their motto is probably "If it ain't broke, break it".
Hacking is mankind's equivalent of an essential facet of nature: continuous stress and exploration of opportunity. It's not so much a war between complex systems and the simpler ones nibbling at their heels as a symbiotic relationship in evolutionary terms. A system will encourage the evolution of other systems to exploit its weaknesses (often against its interest), and the system will either achieve viable equilibrium, adapt, or fail. This comes back to my point about the common cold. It's in our interest to pass every new variation of it around precisely because doing so strengthens our species' collective immunity. Who knows, we may even be interested in deliberately mutating the cold virus. Wouldn't it be a pity, though, if we discovered a cure for the common cold and in so doing inadvertently wiped ourselves out through an enfeebled immune system? I wonder if we need hackers as much as we need thieves and viruses?
Maintaining the Commons
Cyberspace is just going to be the 3D equivalent of the Web in security terms, i.e. nearly everyone's interested in preventing corruption, subversion, vandalism, etc. But this pressure comes from the entire user base. We don't have a particular corporation charging everyone for the Web and thus contractually obliged to provide a given level of service. The Web is a mutually advantageous piece of global collaboration. Cyberspace will probably be the same.
Oops! I've blown it now. Not only have I suggested that the infrastructure should be free, but now I've implied the content is given away for nothing too. Imagine thousands of cyberspace development companies each having a share valuation based upon how many players frequent their virtual universes. Well, hey, it happened with web sites!
Total security is not possible. We can only continue the escalation of preventative and remedial techniques. The system and its hackers just keep getting more sophisticated. However, it seems that people have reached a steady state in dealing with each other. Or perhaps that's just the general tendency, with an occasional imbalance when one side seems to be winning.
At the end of the day any system we use can become corrupted, but humans have evolved to suss each other out, such that an apparent advantage is always checked out for its legitimacy.
But have you noticed how few care about another player's disadvantage? How many players are going to be upset because another player keeps tweaking the system to penalize themselves? Well, OK, it might be an indication that someone is subtly learning the ropes toward obtaining a great advantage later on, but that's the hacker's cunning and guile for you.
If nodes in our distributed system are like people, then they need to utilise similar social validation strategies. Nodes should be doing background evaluation of computation quality and consistency.
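Here's one hedged sketch of what that background evaluation could look like: occasionally recompute a random sample of a peer's reported results with your own trusted code and score the agreement. The function names are hypothetical, and `compute` stands in for whatever deterministic shared function the nodes run:

```python
import random

# Sketch of 'social validation' between nodes: spot-check a sample of a
# peer's reported results against our own computation of the same inputs.

def spot_check(peer_results, compute, sample_size=3, rng=random):
    """Return the fraction of sampled results the peer got right.

    peer_results -- dict mapping inputs to the peer's reported outputs
    compute      -- our own trusted implementation of the same function
    """
    inputs = list(peer_results)
    sample = rng.sample(inputs, min(sample_size, len(inputs)))
    agreed = sum(1 for x in sample if peer_results[x] == compute(x))
    return agreed / len(sample)
```

The resulting score is exactly the kind of direct measurement that could feed into a reputation-tracking heuristic, at a cost the checking node controls via the sample size.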
Security and Thermodynamics
About a decade ago, a chap called Len Bullard was asked to look ahead ten years. He astutely guessed that it would be "A world wide hypermedia system based on markup technology, distributed business processes, etc." - not far off, eh? He also explored issues of system stability and security in the face of 'terrorist attack'…
Here's what he has to say:
The goal of destabilization is to exhaust the energy budget of a system and deprive it of the capacity to meet mission goals. One can say a destabilized system exhibits a "higher temperature", thus, an increase in energy expenditure without a resultant increase in organization, until it reaches entropy. Direct attack is one means (e.g. a worm), but more subtle approaches are possible.
Some working definitions:
- Sensitivity - the sensitivity of a system element to variance. The number of sensitive elements and the degree of sensitivity determine the overall system sensitivity.
- Destabilization - the process of increasing the entropic value of a system by introducing false referents or relationships that increase the latency of the messaging system beyond the tolerance thresholds of the protocol.
A successful destabilization strategy disrupts the synergy of system and organization. The more interdependent the system, typically, the easier it is to destabilize. To make the system less vulnerable, it needs to be noise-tolerant and we all understand the most common techniques using redundant data storage, matching and verification, and encapsulation of components or view dimensionality to restrict propagation. It is necessary to be able to discriminate natural activity that results in decay (incompetence in functions, superstitious learning, etc) from an active destabilizing agent (goal seeking).
Destabilization in a system can be increased by decreasing the referential value of a pointer. This activity seeks to increase uncertainty and decrease confidence or goodness in a value. These might be called Boltzmann Attacks based on application of the Boltzmann entropy equation:
- Uncertainty - increase the number of imprecise terms or referents that result in unresolved ambiguities. Superstitious learning is a good example. (aka FUD)
- Overload - increase the number of referents, precise or otherwise, beyond the capacity of the system to resolve them within the budget (e.g. time, money, any other finite resource). Vaporware is a good example as it disrupts timing.
Disrupting timing is an excellent strategy. See Miyamoto Musashi - The Book of Five Rings - "You win in battle by knowing the enemy's timing, and thus using a timing which the enemy does not expect." He goes on to describe foreground and background timing and the need to see both in relationship to each other. Musicians understand this as syncopation and the effects of it on autonomic systems.
Some factors that affect destabilization are:
- Position of the destabilizing agent in the hierarchy of control, that is, the inter-dimensional effectiveness for propagating by force.
- Length of time of effective destabilization: how long the error goes undetected and, therefore, the density of the error (e.g., replication). Destabilization can propagate linearly, by value, or non-linearly by reference.
- Identification of a mission critical component and its importance in the event stream.
- Provision of the destabilizing agent with sufficient resources to execute a change: redefine a component or critical element of a component.
Reclassification is an excellent strategy here. AKA, labeling. This is why authority is so problematic when creating semantic nets. Note carefully: the principle of rationality is weak for organizing human systems (see Prisoner's Dilemma). No system can be predicated on self-sacrifice that leads to extinction. Trust in an organization is in direct proportion to the relationship to self-preservation.
If it helps, it is supported. If it extinguishes, it is attacked.
- Redirection of resources so that stabilizing controls are decreased, e.g. distraction.
For example, a change of focus can be used to mask destabilizing activities. When the hacker better understands your resources and how you apply them, he can create other activities to deny visibility of his real mission. Coordinated attacks are hard to defend against if such knowledge is available.
- Sustaining the agent until the energy budget collapses such that effective mission closure cannot be achieved by redirection. Deny the capacity to remediate.
The notion of focus involves temporal elements of concurrency. What can be known, when and with what degree of certainty, grows or diminishes in relation to the available referents and the capacity of the system to resolve them.
To counter instability:
- Measure the noise background. Difficult if the hacker can hide in the noise.
- Monitor and test any inter-dimensional relationship or signal. Precisely identify referents such that the system uses the smallest number of terms.
As Dr Goldfarb says, conserve nouns, and I say, test verbs.
- Ensure terms with a large referent set are carefully monitored when applied. Rigorously QA broadcast deliverables by policy.
- Organize terms into strongly bound classes.
- Analyze performance data to identify emerging instabilities. Compare local events and environment continuously (use current maps and keep them current).
- Remove inherently unstable components or processes from the network.
Unstable processes are often useful particularly as they operate near the edge of onset of chaos, and therefore, are engines of evolution. "...crazy but we need the eggs."
system to maximize opportunism and cooperation among dependent subsystems.
If a system is becoming baroque, it is in need of redesign. If the slightest deviation is a cause of controversy, you probably have a system that is overly sensitive. Note this is an issue for many object-oriented systems that use inheritance.
Avoid intrigue as a means to administer policy.
The thing to know about Machiavelli is, he was fired. Do not make an employee bet their badge as the price of innovation. Don't white pig. If the price of innovation is to watch others get the reward for it, the behavior will be extinguished.
As some extra reading, the Taguchi Model for process evolution and Deming's TQA work are worthy. As in all things, over applied, they are also a good way to exhaust an organization. Beware the problem of top-heavy control systems. In most business transactions, if the customer is satisfied, you are done. They'll call you if they need you. Make sure they know you will respond when they call.
There is a company called 'Horizon, a Glimpse of Tomorrow' that has done a neat bit of lateral thinking with regard to security (see the article by Ben Hoyt).
The gist of it is that the system enables arbitration of state to occur no closer than neighboring nodes of those nodes most interested in arbitrating it.
So a cheat is unable to change anything to their advantage: to arbitrate over something, they would necessarily have to avoid any interest in it. If they were interested in it, they'd have to ask their neighbors to corrupt it, because only their neighbors would be entrusted with it. And given that arbitration and neighbors may change at any time, it's a tad tricky for the cheat to achieve their ends.
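One way to picture this scheme - a speculative sketch of the principle, not Horizon's actual protocol, with all names invented:

```python
import hashlib

# Speculative sketch: arbitration of a piece of state is assigned to
# neighbors of the interested nodes, never to the interested nodes
# themselves, so wanting to change something disqualifies you from
# arbitrating over it.

def pick_arbiters(state_key, interested, neighbors, count=2):
    """Choose disinterested arbiters for state_key.

    interested -- set of node ids wanting to read/write the state
    neighbors  -- dict: node id -> set of neighboring node ids
    """
    candidates = set()
    for node in interested:
        candidates |= neighbors.get(node, set())
    candidates -= interested  # an interested node never arbitrates its own state
    # Deterministic but hard-to-game ordering, keyed on the state itself.
    def rank(node):
        return hashlib.sha256(f"{state_key}:{node}".encode()).hexdigest()
    return sorted(candidates, key=rank)[:count]
```

Because the ranking is keyed on the state as well as the node, a cheat can't predict or engineer which neighbor will hold a given piece of state - and both the interest sets and the neighbor sets can shift under them at any time.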
Incidentally, this reminds me of the saying that the people who most want to be politicians are the last people society needs in government.
Anyway, although I do like this idea, it would compromise performance. Perhaps an empirical study could see if this hit was worth worrying about.
Even so, there remains the problem of vandals as opposed to cheats. Cheats are players that require the rules to work in order that their cheats prosper. Vandals don't care what they do as long as it upsets as many people as possible. A vandal would corrupt any arbitration that came their way.
This is why I think reputation monitoring is necessary. It not only detects vandals, but it also detects cheats.
So what, apart from a load of waffle, have I achieved so far in terms of solving the security problem for massive multiplayer games?
Securing the game, the fun, and the player's interest
I've proposed that security is ultimately something that only the player is concerned about. The typical player doesn't care if the publisher makes any money, loses control over their property, or ends up in court, nor actually do they care about the developer or ISP in these respects. All they really care about is that they get to play a good game, and naturally they will pay for this, i.e. access to entertainment. It's up to the developer (and publisher, ISP, etc.) to figure out how to create something that players want to access, and to economically (profitably?) charge the player for access to it.
It's really only because most current business models rely on controlling access via the stable door after the horse has bolted that 'game security' has become such a headache. If it is possible to secure a game sufficiently to maintain its entertainment value, but not to sustain traditional business models, then patently this isn't a technology problem but a business model problem.
And so, I've ignored commercial wisdom. I don't think it's ever been useful for solving technical problems anyway. What happens is this: 1) technology gets developed, 2) games get made from the technology, 3) some clever, commercially minded person then has a brain wave and figures out how to make money from them. Do publishers really start the ball rolling themselves, saying "Ah hah! With this new business model we've just thought up, all we need is a new type of product (a game that we have no idea about) that would rely on a new technology (that we have no idea about)" ?
Of course, the typical way a businessman puts it is this: "Yeah, that's a great idea for a new technology, and a great idea for a new kind of game to exploit it, and I'm sure millions of people would love playing it, but unfortunately it isn't compatible with current business models so it'll never happen…"
And what's happening at the moment? Everyone and their dog is bashing their head against a wall trying to produce massive multiplayer technology that supports existing business models, i.e. technology that secures the long-term ability to control access to the game.
Don't do the businessmen's work for them, I say! Make life easy for yourself. Just solve the problem of making a fun massive multiplayer game that will stay fun. And, if it makes you feel better, you can always take solace in the maxim that whenever producers and eager consumers meet, money isn't usually far behind.
So, we can drive a coach and horses through the problem and throw out the need to control access. It makes our life easier, but unfortunately, becomes a commercially unviable proposition. And what in this world allows commercially unviable things to happen? Open Source does! Hurray!
(This is how a certain large software corporation let GNU/Linux come in below their radar, i.e. "Gentlemen, we can now rest easy, because it is no longer commercially viable for anyone to compete with us." Oops!)
So, no access security. How on earth can a system survive? Well, I've looked to other systems with very little access control, such as human societies. Just as people are free to talk to each other, but trust tends to go hand in hand with reputation, so millions of computers can self-organize themselves according to reputation. This needs nothing more than that the majority of computers are well behaved in terms of identification and consistent good behavior. That's all that people need, after all.
And for those of you wondering how we keep track of people in the system (just so you can prosecute the hackers): because we don't need to control access, the system does not need to identify the users (players). Of course the game will want to know about players, but the system only needs to uniquely identify computers. Furthermore, it doesn't need to authenticate the identified computers, only be reasonably confident that the identity is unique, which by definition it should be (if it isn't, the identity becomes invalid). Trust only builds up through the relationship and experience two nodes have of each other. Basically, two strangers meet in a crowd, and if over time they find each other agreeable and trust has built up through the continued reliability of exchanged responsibilities, then that's all that's necessary. There's no need to prosecute if things go wrong - just forget and move on. This works in society too, overall, if the majority of humanity is basically 'good'. Of course, some unfortunate people will suffer from the few nasty characters, but the system as a whole remains viable (except if nasty characters manage to get into positions of overarching power before they do their dirty work, but even then, it's unlikely to be too late for the majority to remedy things).
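A minimal sketch of this kind of unauthenticated-but-unique identity, with trust accruing purely from interaction history - all names and scoring here are my own invention:

```python
import os
import hashlib

# Sketch: a node mints its own identity from local randomness. The network
# never authenticates it; it only requires the identity to be unique.
# Trust then attaches to the identity through interaction history alone.

def mint_identity():
    """Generate a self-assigned node id (a 64-char hex digest)."""
    return hashlib.sha256(os.urandom(32)).hexdigest()

class Acquaintance:
    """Running tally of our experience of one identity."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.good = 0
        self.bad = 0

    def record(self, kept_promise):
        """Note whether the node honored an exchanged responsibility."""
        if kept_promise:
            self.good += 1
        else:
            self.bad += 1

    def trust(self):
        # Strangers start at zero trust; it accrues only with history.
        total = self.good + self.bad
        return self.good / total if total else 0.0
```

Note that a stranger who discards a tainted identity and mints a new one simply starts again from zero trust - there's nothing to prosecute, and nothing worth stealing in an identity that has no history attached.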
Just like in the movie The Body Snatchers, even if a good guy is taken over in the middle of the night by an inferior impostor of unknown intentions, irregularities will reveal themselves. Of course, if they don't then it doesn't matter. For example, if your dad is replaced by a doppelganger and you still can't tell the difference then it's still your dad. Hey, the truth is stranger than fiction: Each night asleep, our brains rewire themselves, and each morning we're a slightly different person. However, because we've been philosophically conditioned to believe we're the same 'I' that wakes as went to sleep, we're quite happy to ignore the discontinuity. So really, anyone that says they're 'Fred Bloggs' and matches his profile might as well be believed unless there's significant evidence to the contrary - cos you can't prove you're the same 'you' can you? This is why twins and clones are often used to easily pull the rug from under the audience's expectations in many stories.
But, can it work?
I know it's not the best analogy, and I don't want to trivialize life by comparing it to a game, but a society of people and a society of computers share similar problems, and if people can rely on an imperfect solution, so can computers. To some extent we can almost consider computers as extensions of their human owners, so it's probably not surprising that computers can and should adopt similar strategies, and thus operate just as viably on a much larger scale.
Mankind has gone on for quite a long while without people needing public key encryption to ensure they can tell the difference between good guys and bad guys, or truth and lies (though it helps in warfare). Or rather, it doesn't really matter if we uncover corruption rather than prevent it - society does not collapse with a lie or a criminal. As long as truth and goodness are in the majority, the system works.
In summary:
- Don't attempt to control access to the system - we're securing fun, not revenue.
- Have no Achilles' heel - no indispensable, central control.
- Trust the majority - any egg can be a bad egg, even the erstwhile best egg, but the majority are good: bank on it!
- Measure reputation and, having conferred with peers, grant responsibility accordingly.
- Let interest decide - the system favors content of interest: undesirable content will thus not last.
- A minor expenditure of energy by the good majority easily outweighs the major expenditure of effort by the bad minority.
You know, I reckon the social approach can work. Moreover, I don't think we need computers to be as intelligent as humans in order to measure reputation. Using a system of heuristics such as I described in my previous article should be sufficient. Nor do we need to maintain some kind of perfect graph of reputation - just going by a quick confer with past and present peers should provide a wide enough sample of reputation measurements.
So don't be blinded by commercial realities. Let's solve the technical problems first, demonstrate a game second, and let the businessmen figure out how to make money out of the new entertainment phenomenon we'll have created. Remember, if the creation of the web had been left to businessmen, it would still remain a commercially unviable proposition, and we'd probably be left with an evolution of CompuServe's proprietary service. However, the web did get created, and plenty of money got thrown around in the dot com boom. History is destined to repeat itself. Let's make it happen. Let's allow millions of people to play in virtual worlds together. The Web is just… so limiting!
No matter how flaky you may find some of the ideas that I've described earlier, and no matter how difficult it has been for you to gauge my reputation, the web is always there to get a second opinion! Here are some second opinions, and naturally, I hope my selection isn't too biased.
Reputation Based Systems
OpenPrivacy: Reputation Capital and Exchange Mechanisms
Freehaven: Accountability Measures for Peer-to-Peer Systems
Advogato: Advogato's Trust Metric
Mojo Nation: Technology Overview
Real Communities: 12 Principles of Civilization (digest here)
"Enterprise Engineering for Concurrent Integrated Product Development and Support Environments" Len Bullard, GEAE, 1991 (CALS Conference '91) (Excerpt)
Crypto Anarchy and Virtual Communities - Timothy C. May
How to Hurt the Hackers: The Scoop on Internet Cheating and How You Can Combat It - Matt Pritchard
Revenue Models In the Absence of Content Access Controls
The Digital Auction: Making Money When Information Wants to be Free - Crosbie Fitch