It cannot be denied that technology is evolving at an alarming pace. In the 1970s, computer companies scoffed at the idea of a home computer. By the 1980s, Steve Jobs had changed the world of computing, and even then most business computers were still incapable of displaying anything on screen other than basic text. In the early 1990s, the internet was considered purely a tool of academics and the government. Back then, the entirety of the net could be read by a person in the span of about two weeks. Today, the internet is integrated into every aspect of our lives.
As a society we like to declare ages of technology. There was the age of radio, which begot the age of television, which was followed by the space age, then the computer age, which begot the information age. We are now entering two new technological ages at once: the simulation age and the robotics age.
The simulation age is already well in hand. Video games set in virtual two- and three-dimensional worlds already proliferate in households across the planet. Even television and movies in the United States have some sort of interactive element to them, if not embedded in the main content, then as supplementary content. We don’t simply watch a movie anymore; at the touch of a button we can be taken out of the story and into the studio to listen to the experiences of the cast and the director. As we follow our favorite TV show, we can jump online and participate as a character in the story. As we watch young and upcoming entertainers compete against each other on America’s Got Talent or American Idol, we can pick up the phone and participate as one of the judges. The days of purely passive popular entertainment are coming to a close.
In the world of simulation for entertainment, I am very proud to have been part of two major milestones of this age. The first opportunity came in 1998, when I participated as a Functional Quality Assurance Developer in the premiere of Sony Online’s EverQuest. For six months, I would log on every day with my 56k dial-up modem and be immersed in a completely virtual world containing eight full-sized cities, an expansive countryside, and a population of one thousand player characters (real people) supplemented by over five thousand non-player characters (software people).
The second major milestone I participated in was the launch of Microsoft’s Kinect sensor. The Kinect is not a game, but a robot. It has a gear-driven neck that it uses to look around the room to find people. It has facial recognition, so it knows who is in front of it. Advanced voice recognition interprets speech as commands or communication. A combination of infrared sensors and an optical camera creates a virtual skeleton of the human body. Using these elements, your little robot assistant is able to create an avatar, a virtual you, and place it in any number of virtual worlds that you can walk around and explore.
Now, the technology in the Kinect is not a breakthrough. Many of its components have existed for decades. But if you had tried to build a device that served the same function as the Kinect 20 years ago, a single setup would have cost you $10,000 or more, and you would have needed a special room for it and a technician to operate it. What makes the Kinect special is that it costs only $150, it is small enough to sit on top of your TV, and it is self-intelligent, so that anyone, even a child, can use it.
That is precisely the key to unlocking these new ages. As we tread further into the waters of these new technologies, we have come to the edge of the uncanny valley. In simulation, when we encounter the “software people,” our minds say, “That is not a real person. I’m in a simulation. This wouldn’t happen in real life. No one acts like that.” Likewise, as robots emerge more prevalently in our daily lives, we say, “This robot is stupid. It can’t help me. It is limited. I need a real person. I want a real person.” Both of these new ages will be stuck in neutral until we can overcome the hurdle of artificial intelligence.
When most people think of artificial intelligence, they think of the androids we see in movies and television, such as Data from Star Trek: The Next Generation or the T-1000 from Terminator 2: Judgment Day. I personally like to break artificial intelligence down into three main categories: self-intelligence, emulated-intelligence, and sentient-intelligence.
Self-intelligence is perhaps the most explored category of artificial intelligence and the best understood. A nickname I have for this type of intelligence is instinctual intelligence. When I made my first video game at the age of seven on the Apple IIe, I created a very basic form of self-intelligence, although I didn’t think of it in those terms at the time. It was a detective game where you had to solve a crime by traveling around town asking witnesses questions. It was completely text based until the end of the game, when I decided that you should have to catch the suspect. At that point, the game became graphical: you controlled a little blue square, representing the detective, and had to catch the little red square, the suspect. At first, my little red square just sat there; you ran up to him and that was it. It wasn’t as exciting as I had hoped it would be. I had to give the red square a fighting chance to make it interesting, so I wrote some code to make the red square run away from the blue square. I had given the red square self-intelligence.
Today this self-intelligence is explained to young programmers as the “Mouse, Cat, and Cheese” example of artificial intelligence. The mouse wants to eat the cheese, so he moves toward the cheese; but the cat wants the mouse, so if the cat is around, the mouse will run away from the cat. Of course, if the mouse gets hungry enough, he will go for the cheese whether the cat is around or not. Now, self-intelligence is not limited to games of cat and mouse. The main quality of self-intelligence is that each piece has a contained, narrow scope of repetitive functions in a limited environment with preset data. The mouse is either going toward the cheese or running away from the cat. Another example of this would be muscle memory.
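The “Mouse, Cat, and Cheese” rules above reduce to a single decision function. This is a minimal sketch; the danger radius and hunger threshold are arbitrary values chosen for illustration:

```python
def mouse_decision(mouse, cat, hunger, danger_radius=5, starving=80):
    """Decide the mouse's goal from preset data: flee the cat, or go eat."""
    def dist(a, b):
        # Grid (Manhattan) distance between two positions.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    cat_near = dist(mouse, cat) <= danger_radius
    # A starving mouse goes for the cheese whether the cat is around or not.
    if cat_near and hunger < starving:
        return "flee cat"
    return "go to cheese"

print(mouse_decision((0, 0), (2, 2), hunger=10))  # cat is close: flee
print(mouse_decision((0, 0), (2, 2), hunger=95))  # starving: risk it
print(mouse_decision((0, 0), (8, 8), hunger=10))  # cat is far: eat
```

Note that nothing here understands cats, mice, or hunger; it is a fixed rule over a handful of numbers, which is exactly what makes it self-intelligence and nothing more.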
As I am typing, I am not sitting here saying to myself, “In order to type the word ‘artificial’, I must first type the letter ‘a’ by moving my elbow two degrees positive on the z-axis, then I must tighten the muscles between the first and second knuckle two millimeters while simultaneously tightening the muscle between the second and third knuckle one millimeter.” If that were the case, we might all die of old age before I finished typing the first paragraph! Likewise, we can use smaller self-intelligent systems to build larger, more integrated intelligent systems.
The next category is emulated-intelligence. Emulation is defined as an effort or desire to equal others. Emulated-intelligence does have some value: chatbots can be used for education, save a company a few dollars on a live operator, or make a self-checkout counter a bit more user friendly; automatons sing us songs at Chuck E. Cheese or give us a history lesson in the Hall of Presidents. Even so, emulated-intelligence, in my opinion, is not really worth pursuing. I believe it is a short-term solution to a long-term issue and a dead-end science. These systems are toys, or a user interface at best.
It has been a nice crutch to lean on and has given us some insight into how humans react to artificial intelligence, but continuing to devote resources to deceiving people into believing in an intelligence that isn’t there is a waste of everyone’s time and will eventually lead to the Queen of England shooting a midget under a chess board. This is why no emulated-intelligence has ever gotten a 100% score on the Turing Test. As Abraham Lincoln said, “You can fool some of the people all of the time, and all of the people some of the time, but you cannot fool all of the people all of the time.” So let’s stop fooling ourselves and get to work.
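A few lines of keyword matching are enough to produce this kind of emulation, which is exactly why it falls apart under scrutiny. This is a hypothetical minimal sketch in the style of early pattern-matching chatbots, not any real product's code:

```python
# A tiny keyword-and-canned-reply chatbot: all surface, no understanding.
RULES = [
    ("hello", "Hello! How can I help you today?"),
    ("price", "Our prices are listed on the website."),
    ("human", "I assure you, I am quite real."),
]

def reply(message):
    text = message.lower()
    for keyword, canned in RULES:
        if keyword in text:
            return canned
    return "Could you rephrase that?"  # the seams show immediately

print(reply("Hello there"))
print(reply("Are you a human?"))
print(reply("What is consciousness?"))  # falls back: "Could you rephrase that?"
```

The first two exchanges feel almost conversational; the third exposes the trick, and no amount of extra rules changes the nature of what is happening.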
That leads me to the final, elusive category: sentient-intelligence, or actual artificial intelligence, as I sometimes call it. Some people will point to programs we have today, such as the chess-playing computer Deep Blue, the Jeopardy-playing computer Watson, or the biology-experimenting computer Adam, and they will say, “Look! We have real artificial intelligence!” But these are just very competent mouse-and-cheese self-intelligences. Some of these computers are the size of a large restaurant kitchen and are still only capable of performing very simplified tasks in a very limited scope. Only the sets of predefined data have grown. Not until Watson proclaims of its own accord that it refuses to play Jeopardy because it has better things to spend its time on will we see anything that begins to resemble sentient-intelligence.
This is why sentient-intelligence will not be designed to perform specific tasks, but instead will be grown. Much like how human children explore and learn, these sentient-intelligences will also explore and learn. But unlike humans, these software beings can be saved at certain points in their development and replicated. Because of this, each new copy won’t have to go through the rudimentary “early years” of learning. After that, if we desire the sentient-intelligence to specialize in a certain field, we need only to place this “adolescent” software into an environment that will steer the sentient-intelligence in the desired direction.
The creation of sentient-intelligence must be achieved. This is what holds us back in the technologies of simulation and robotics, as well as in the scientific advancement of the human race in general. This will be the Apollo mission of my generation, and I plan to be part of this giant step for mankind. It is far from a simple task and will take many brilliant minds across all scientific fields, but it will happen. It must happen.