Sentience and What It Means To AI

A study of the manifesto by Push Singh, and what it means to Sentient AI and Video Games.

Tom Bellingham, Blogger

April 20, 2010

12 Min Read

So I've been asked to write a blog entry answering the following questions:

1) My thoughts on Push Singh's 1996 Manifesto, found here.

2) Do you think modern games pass the Turing Test?

3) Will games be the field where AI becomes sentient?

4) At what level do you personally define "sentience"?

So first off, I think Push Singh is a typical example of someone who wants to run before he can walk. The development of AI should be, and has been, following the same pattern that biological evolution has followed for eons.

Essentially, all life that has become sentient (see question 4 for my definition) has first felt, then thought. And I don't mean only through evolution; you need look no further than your own birth. Why do babies come out crying? Parents like it because it's the first sign of a healthy child rather than a stillborn one.

Most likely the reason is that birth is a very traumatic experience (without getting into the gruesome details, you try fitting through a pipe half your size and not being a little traumatized). So already we have a human, not more than a few seconds old, associating pain, trauma, and survival (the need for the oxygen it was deprived of while its head and lungs were being compressed) with a physical reaction: crying.

Now we can take this one step further. You are reading this either aloud or in your head; either way, you are reading it in English, which is essentially a high-level programming language for your mind. I make this comparison because, just as a computer takes its instructions in, say, assembly, processes them in machine language and binary, and spews out the result through an interpreter we call a graphics card, so your brain takes input not only from your five senses but also from your thoughts, your English thoughts.

In this fashion, it is clear that AI development has been on the right track for the last decade and a half. It is currently developing the systems that will one day allow an AI to associate pain with, say, the electrical signals it is interpreting from a heat sensor. Then, when a truly sentient AI interprets those signals, it will know to pull back. It won't have to be programmed to pull back; it will know to pull back from something hot because it has been burned before, much as a child learns this.

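To make that concrete, here's a minimal sketch (in C++, all names hypothetical, nothing to do with any real robotics API) of the difference between being programmed to pull back and learning to pull back:

```cpp
#include <iostream>

// Instead of a hard-coded "if hot, pull back" rule, this agent builds
// the association itself by being burned.
struct Agent {
    double heatPainAssociation = 0.0;  // learned strength of "heat means pain"
    static constexpr double learningRate = 0.3;
    static constexpr double withdrawThreshold = 0.5;

    // Sensor input arrives; the agent decides based on what it has learned.
    bool senseHeat(double temperature) {
        bool withdraw = heatPainAssociation > withdrawThreshold;
        if (!withdraw && temperature > 60.0) {
            // It didn't pull back, so it gets burned: pain reinforces
            // the association, exactly like the newborn in the example.
            heatPainAssociation += learningRate * (1.0 - heatPainAssociation);
        }
        return withdraw;
    }
};

int main() {
    Agent a;
    for (int touch = 1; touch <= 5; ++touch) {
        bool pulledBack = a.senseHeat(90.0);  // touching something hot
        std::cout << "touch " << touch << ": "
                  << (pulledBack ? "pulled back (learned!)" : "got burned")
                  << ", association = " << a.heatPainAssociation << '\n';
    }
}
```

Run it and you'll see the agent gets burned twice, then pulls back forever after, with nobody ever writing the rule "pull back from hot things".
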
Essentially, it is unrealistic to attempt to mimic the human mind in this day and age, as processors cannot handle the raw amount of data that a human mind can, nor have they reached the human mind's ability to process information in parallel. Richard Benjamin wrote in his book "The Coming Singularity - When Computers Become Sentient" about a day, which he calculated would come sometime in 2050, when computers will finally be able to become sentient. He based this solely on Moore's Law and the rate at which we are expanding not only the number of transistors, but also the number of memory addresses a computer can reference (i.e. 32-bit vs 64-bit), as well as the rate at which multicore and multiprocessor chips are being developed. The Cell BE is the greatest example to date of parallel data processing and streaming, allowing isolated systems to run together and share data and load.

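The book isn't in front of me, so the numbers below are purely illustrative and not Benjamin's actual figures, but a back-of-envelope version of that kind of Moore's Law extrapolation looks something like this:

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double transistors2010 = 2.3e9;  // roughly a 2010 high-end CPU
    const double doublingYears   = 2.0;    // classic Moore's Law period

    for (int year = 2010; year <= 2050; year += 10) {
        double doublings = (year - 2010) / doublingYears;
        double projected = transistors2010 * std::pow(2.0, doublings);
        std::cout << year << ": ~" << projected << " transistors\n";
    }
    // By 2050 that's 2^20 times 2.3e9, on the order of 2.4e15 transistors,
    // which is the scale people like to compare to the brain's roughly
    // 1e14-1e15 synapses.
}
```
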
One of the reasons it is very difficult to judge when sentience might become possible is that programming must keep up with hardware, and it is highly unlikely that the right set of programming skills will manifest in one person alongside the right set of philosophical and biological knowledge. Generally we call these people geniuses, and generally they are too smart for their own good, much like Push Singh.

I just noticed how out of order these questions were posed, but I'll answer them in order anyhow, because I'm far too lazy to go back now...

So my answer to the second question... is a flat-out no. I have yet to play any game and be tricked into thinking that a computer player is a human player, and I have played a lot of games. In fact, the best examples of humanlike "AI" are really scripted puppets, programmed to a programmer's specifications and reading from a script. Take Oblivion, for example: the AI moved very fluidly, could walk at virtually any angle, and acted almost like a player, but it was following a script. The dynamic AI was essentially: look this way, look that way, do I see the player? If yes: Attack.

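Obviously this isn't Oblivion's actual code, but the "script" I'm describing really is just a trivial state machine, something like:

```cpp
// The whole "dynamic AI": ping-pong between two look states until the
// player shows up, then switch to Attack. That's it.
enum class State { LookLeft, LookRight, Attack };

struct ScriptedGuard {
    State state = State::LookLeft;

    void update(bool playerVisible) {
        if (playerVisible) { state = State::Attack; return; }
        // "look this way, look that way"
        state = (state == State::LookLeft) ? State::LookRight : State::LookLeft;
    }
};
```
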
In RTS and FPS games the AI is somewhat "intelligent", but that's only because it is following a very specific set of rules and behaviours that may, at one time, have been developed using neural networks or genetic algorithms, but are now just replaying the actions that original AI arrived at. Humans didn't build New York City by acting like the first human, let's call him Adam just for argument's sake; they built New York City after they invented clothes, tools, wheels, architecture, physics, metallurgy, social dynamics, etc. They weren't a bunch of naked men eating apples off a tree just because that's what the guy before them did.

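In code, the point looks something like this (hypothetical weights, purely for illustration): the numbers may once have come out of a genetic algorithm or a neural net during development, but what ships is a frozen table.

```cpp
#include <array>

// These weights may have been "evolved" once, years ago, on a dev machine.
// At runtime the "AI" is just a dot product and a compare; nothing is
// learning anything anymore.
constexpr std::array<double, 3> kAttackWeights = {0.8, -0.5, 0.3};

bool shouldAttack(double myHealth, double enemyStrength, double coverNearby) {
    double score = kAttackWeights[0] * myHealth
                 + kAttackWeights[1] * enemyStrength
                 + kAttackWeights[2] * coverNearby;
    return score > 0.4;  // threshold tuned once, long ago, never again
}
```
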
So until game AI is actually thinking and reacting solely based on what its five senses and its thoughts tell it (thoughts which it will be thinking in Lua, because Lua is awesome and I say so), it is basically not thinking or reacting; it's merely cause and effect. I can still give it a statistically based random number generator, of course, and maybe some goals, but I'm still putting limits on how it can react. It can't develop, of its own accord, the ability to say "there are bullets flying everywhere, I want some McDonald's ice cream, and man, the adrenaline pumping, me hiding here, now I have to take a piss too". And you can laugh and say "hey, that's not a real reaction", and I will laugh and say "have you ever been shot at?", because it was a reaction I had once, and man, I wanted a McDonald's, because it had solid walls, a washroom, and ice cream.

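And here's roughly what that statistically based random number generator actually buys you (illustrative names and weights, not any real game's code): the AI picks from a menu, but the programmer wrote the menu.

```cpp
#include <random>
#include <string>
#include <vector>

// Weighted random reaction selection. It looks lively in-game, but the AI
// can only ever pick from the list below; "I want some McDonald's ice
// cream" is not on the menu and never will be.
std::string pickReaction(std::mt19937& rng) {
    const std::vector<std::string> actions = {"take cover", "return fire",
                                              "flee", "call for backup"};
    const std::vector<double> weights = {0.4, 0.3, 0.2, 0.1};

    std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
    return actions[dist(rng)];  // random, but bounded by our imagination
}
```
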
Basically, what I'm saying is that the Turing test still limits the possibilities of a programmed AI, and so its mind, to what the programmer thinks is a legitimately acceptable reaction. If the programmer thinks it is impossible for the computer to say "Hey Mr. Interrogator, this game is boring as hell, I really have to go pee, and my mommy promised me ice cream", even though it would technically constitute a win condition for the confuser if it made the interrogator give up so the kid could go to the bathroom, well, the programmer just wouldn't allow it. Being deceitful is something we all learned as kids, starting with a hand in the cookie jar (wasn't me, I swear), then jokes, practical jokes, sarcasm, and outright omissions of fact to favour our cause, not necessarily in that order, but you hopefully get the idea. There is a natural progression and learning pattern there.

Even in a game like Spore, say, where there is potential (CPU cycle restrictions aside) for the AI to self-develop, it really isn't; it is an if statement, if(entity.Nose==Beak){Attack=Peck;}, because too many birdies would die and create too much garbage in the game. You would need like a million birds before someone finally got it right and was like "I have this beak and these wings; instead of clawing the fish to death while it swims around, maybe I should just pick it up in my mouth and eat it, or at least pick it out of the water with my claws, it can't swim in air". You would also need one hell of a programmer, like me for instance ;)

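Fleshing my one-liner out a bit (again, not Spore's real code, just the shape of it):

```cpp
// Every body part maps straight to a canned behaviour. Nothing is ever
// discovered or invented; the mapping was decided at compile time.
enum class Nose { Beak, Snout, Trunk };
enum class AttackStyle { Peck, Bite, Slam };

AttackStyle chooseAttack(Nose nose) {
    switch (nose) {
        case Nose::Beak:  return AttackStyle::Peck;  // the if statement above
        case Nose::Snout: return AttackStyle::Bite;
        case Nose::Trunk: return AttackStyle::Slam;
    }
    return AttackStyle::Bite;  // unreachable, keeps the compiler happy
}
```
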
Will games be the field where AI becomes sentient? Well, this one will have a much shorter answer than the others, I promise. NO!!!!!!!!!! The reason: games require high-performance code and high-performance AI. The hardware we use for games today is the same hardware that was being used for AI simulations 10 years ago, and those research machines aren't also running all the other world simulations games run; they are just running the AI. Maybe they have physics, but a large HD graphical interface and awesome graphics aren't required. Also, the pay is better in research and theory than in the games industry, so unless Activision is going to shell out the bonuses, I'm thinking not.

A more sensible answer for the doubters is this: think of the simplest game you've ever played or seen played. Now, was that the reason humans became sentient? I highly doubt it too; Tic Tac Toe didn't make humans speak, except to argue over who goes first. Most likely, sentience was developed through a need to survive and to work together to survive, and that was only after we became self-aware. I am not the first person to see this; the movie "I, Robot" confirms this idea, as does the work of many fiction writers (some of the best philosophers of our time) and non-fiction writers over the last few centuries.

This brings us to question four. Yay, it's almost over... :D

How do I personally define sentience? I define sentience as a being being aware of its own existence, the existence and recognition of its own species, and the acknowledgement of other species around it. It does not need to talk. That may have confused you a little bit, so let's go back to your childhood. No, not the ugly tree, walk the other way; there you are, your first happy memory: you laughing, as a child, maybe at your first birthday; a lot of people have that memory. Now, even though you could maybe say "mama" or "dada", you weren't really talking. As in the example above, you did not yet know English, but we all know babies can associate things like laughter, good food, bad food, discomfort, sleep, and methods of getting food, like a bottle, with some degree of consistency. They see their bottle, they know it's theirs, and they reach for it. Even though they don't know English, they are thinking real thoughts. At some point English replaces their thought language and becomes the primary language, but this happens through imitation, much like a parrot or a monkey; it is the relation of their thought language to their spoken language.

Another example of this is learning a new language in high school; here in Ontario it's French (bah!), but nonetheless. You see, I suck at spoken languages, but I know other languages, more on that in a second. When I try to speak French or German, I am thinking in English and then translating it into words in that language; hence, I have not really learned the language. To really learn a language, you must no longer need to translate things in your head, and if you disagree with this, it's because you haven't really learned your second language, you just think you have. My second language is C++. It's not a macho statement, it's fact: I can sit here and actually think and visualize my thoughts in C++. I do the same when I write in English, like this essay, and because I visualize, then write, my grammar and spelling are usually pretty good. I'm typing this on the fly because I only found out about it at noon and there is apparently no spellcheck, and I don't really care, so I apologize.

Hence, if you truly know French or German, or whatever your second language is, then you are able to visualize it, to think in it for long periods of time, to dream in it (DreamInCode.Net, I'm sure you've heard of it). Anyway, I love trying to explain this to artists, or to people who are forcing themselves to learn code and insist they know code but are really only learning keywords and associative behaviours; I get the funniest looks from them, and the whole "anyone can code" speech. I do think anyone can code, just like anyone can learn French, but do you really know code, or do you only know it associatively, the way you "know" French?

Getting back to AI: unless you provide a platform where the AI can grow biologically and learn language biologically, which starts with association but ends with context, not by uploading a dictionary to its 'head', then it will never be sentient, because it was made only through association and it naturally only knows association, not context. The AI must, in order to be truly sentient, have the ability to associate pain and punishment with things it should not do, and laughter and reward with things it loves to do, and it should learn these things on its own. It should learn the word colour from looking at colours, and no longer need to associate 0,0,0,0 RGBA with Colour.Black; it should no longer ever have access to the storage device or the way it is storing information. I cannot go into my head and look at the container that holds all the colours I know the names of, and their RGBA values, and yet I can still retrieve all that information by what is essentially remembering the word colour.

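As a final sketch (a hypothetical class, not any real system), here's the shape of what I mean: the knowledge sits behind an interface, and the only way in is the word, never the raw storage.

```cpp
#include <array>
#include <map>
#include <optional>
#include <string>

class AssociativeMemory {
    // The AI should never see this representation directly...
    std::map<std::string, std::array<int, 4>> rgba_ = {
        {"black", {0, 0, 0, 255}}, {"red", {255, 0, 0, 255}}};

public:
    // ...it can only "remember the word" and get the concept back,
    // the way I retrieve a colour without inspecting my own neurons.
    std::optional<std::array<int, 4>> recall(const std::string& word) const {
        auto it = rgba_.find(word);
        if (it == rgba_.end()) return std::nullopt;
        return it->second;
    }
};
```
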
I hope this made sense to someone other than me, because I'm out of vodka now. Cheers!
