The brain is a malleable organic computer, forced by millions of years of brutal refinement to become the single most efficient and adaptable computational device on this planet. For example, in a recent series of experiments from Atsushi Iriki’s lab, researchers found that an animal species that has never shown a propensity for tool use in the wild can be trained to use tools.
The species in question is the Japanese macaque, which was trained to extend her reach and scrape up fruit with a short rake. In the abstract of the paper, Shigeru Obayashi states that,
“When using a tool, we can perceive a psychological association between the tool and the body parts—the tool is incorporated into our “body-image.” During tool use, visual response properties of bimodal (tactile and visual) neurons in the intraparietal area of the monkey’s cerebral cortex were modified to include the hand-held tool.”
As far as the macaque’s brain is concerned, the rake is part of her arm. You can learn a lot more about this in the book, The Body Has a Mind of Its Own, by Sandra Blakeslee. These findings and others have enormous implications for how the consciousness perceives its connection to a virtual world.
Now, let’s switch from rakes to game controllers. Take a moment to sit down with your favorite game, preferably a first-person perspective game, for its added complexity. After you get yourself warmed up a bit, try to play the game while focusing your conscious mind on the motions your thumbs are making to move your character and control your viewpoint in the virtual world. I think you’ll find that your thumbs are dancing within an incredibly complex and minuscule sphere of motion, in very nearly incomprehensible patterns.
Next, turn your attention, while all this is going on, to what is happening at the level of your conscious thought. Probably something more on the level of, “I think I’ll go over there.” or “Yikes! I gotta shoot that guy.” We won’t get into how painful it was to learn to use the controller in the first place; the point is simply that one of the brain’s primary functions is to make tool use invisible to the conscious mind. Your everyday life involves thousands upon thousands of these invisible little consciousness helpers.
Another example: when you reach the bottom of your computer screen and need to scroll this article up the page, your conscious mind need think nothing of where the mouse wheel is, or how much pressure your finger has to apply to it, or how far a particular motion might move the page.
So, how does this lead to further immersion in virtual worlds? When you apply logic without experience to our common video game addictions, they don’t appear immersive or realistic at all. After all, hanging onto nubs of plastic while wiggling your thumbs and fingers around is hardly synonymous with the complexity of firing a shotgun at a beautifully rendered spawn of hell, or even the act of walking across a room. Ditto goes for the flat 2D screens we stare at, or the complete absence of tactile feedback.
Time for another experiment. Everybody hop up out of your chairs and leap up in the air while trying to touch the ceiling with your left hand. Now, quick, what did your conscious mind have to think in order to try out that action? Probably nothing more complex than, “I guess I’ll jump up and touch the ceiling you damn pretentious blogger.”
In fact, I’d argue that the difference to the conscious mind between actually jumping and jumping in the virtual world is zero. That’s right, zero. The conscious thinking involved in “I think I’ll go over there,” is functionally identical whether you’re wiggling thumbs or taking steps, and all we need to do is throw some twenty-somethings with controllers into the MRI to prove it.
As I was turning this theory over in my mind, another implication struck me like a holy 1up mushroom springing forth from a magic brick. What might happen to a consciousness who has all these little computational underlings spawning to create its every whim? Never mind that you had to create the computational underlings at some point.
Remember that the invisibility of the complexity required in tool use is a purposeful function of the consciousness. Isn’t it possible that the consciousness might develop a strange sort of superiority complex as a side effect of this environment? After all, the consciousness has merely to express its whim; all the fruit-and-rake complexity is taken care of. Why wouldn’t the consciousness begin to expect that its preternatural abilities might stretch beyond the realm of the physically possible?
When you first look at the mystic scrunching up his face at a spoon, you might think he’s being kinda silly. But, on closer logical inspection, it’s not such a great error to make. The mystic expects to bend the spoon using the same effort he used to pick up the spoon from the table. Conscious will. Why shouldn’t it work?