Virtual Reality - The Road to Fully Immersive Gaming

Virtual Reality is big right now, but reviews and opinions on its future are mixed. In this article I discuss what VR is, why it currently is not working, and how we can create its future.

Jeffrey Werner, Blogger

March 23, 2015

-The Road so Far-

            As a society we have been fascinated by Virtual Reality for almost 60 years. Virtual Reality is the act of creating a sense of physical presence in a virtual world, of being so fully immersed in it that you can't tell the difference. There have been many attempts to build a VR device, starting as far back as the late 50's, but it did not become something people talked about until the 80's, when it started to be featured in major motion pictures like Tron. With the movies driving interest in VR up, a lot of companies started to develop VR devices. The most famous of these efforts was Sega VR, which led to the Sega VR-1 in 1994. The VR-1 was a large arcade machine and ended up not being very successful, so Sega never went through with the home headset version, though a few home headsets were available, like the VictorMaxx Stuntmaster by Future Vision Technologies released in 1993. None of the headsets were ever very successful, but movies kept the interest up with pictures like The Lawnmower Man (1992), The Matrix (1999), and The Thirteenth Floor (1999). Now we are trying again with the Oculus Rift and Sony's Project Morpheus; the problem is that these new devices are no different from the ones we had in the 90's.

            Now you can argue that we did not have any good first-person games back then and that the technology is better now, but the play experience of the Oculus Rift is the same as the Sega VR-1, only now I can have it in my home instead of the arcade. The biggest problem with a head-mounted display is that your eyes tell your brain you're moving while your body tells it you're not, causing motion sickness. This is because a headset, even one as advanced as the Oculus, is just a display device strapped to your head with a motion sensor. It does not try to create a physical presence in the game, just a better view. Even if we went to a full body suit, it would never be detailed enough for you to smell the freshness of the air, or feel the warm sand between your toes, or taste the salt in the ocean breeze; it would still just feel like a movie of the beach, only now you can control the camera. So, if a head-mounted display with motion controls will never be good enough, why are we still making them?

-The Problem-

            The problem is that the public does not understand VR, and the funding goes where the public interest is. For example, the large arcade-like versions that have been built, like the MechWarrior [1] and Gundam pods [2], are more commonly called "simulators" than VR, because the government and even NASA have been using similar devices to train people since the 60's [3]. These room-sized machines were called simulators because they were designed to simulate a real device the soldier would use on the battlefield, like a plane or tank. Therefore the public only thinks of them as training, not gaming, but they are far better VR devices than any headset has ever been. The problem is that no matter how immersive our simulators get, they are still only designed for one small part of reality, say parachuting out of a plane [4]; they will never be able to simulate an entire world. So if what we are doing is not working, then what is the road to Immersive Gaming?

-A Possible Solution-

            Phase One: we need to understand how the brain sends and receives electrical signals, to the point where we could build a robot body, put a human brain in it, and the person couldn't tell the difference. Phase Two: if the brain can't tell the difference between a robot body and an organic body, then it would not be able to tell the difference with a virtual body on a computer either. This means we could physically plug a brain into a computer and start creating a virtual body and environment for the person to experience. Phase Three: the average consumer is not going to want an expensive and life-threatening surgery just to play a game, even a VR one, so we need to be able to project the virtual experience into the brain remotely. The most probable ways would be through a dream state or a chemically induced coma, so that the gamer is paralyzed and we can manipulate their thoughts.

-What's Holding Us Back?-

            After seeing what needs to be done to create Immersive Gaming, it becomes clear that most people are trying to start on Phase Three without understanding the first two. The reason is that we really don't understand the brain or how it creates our reality. There is a term known as the Phaneron, which is basically the concept that the reality you experience in your life is simply what your brain perceives of the world, not what is actually in it [5]. This means that everyone experiences a different reality, and no one's experience is perfectly true. This is a concept that most people have trouble even accepting exists, let alone breaking down mechanically and manipulating. So if there are so many unknowns, how do we accomplish it?

-Phase One: Cybernetics-

Step One: Prosthetics

            We are making great advancements in the world of prosthetics. We can create robotic arms and legs that act like the real ones [6], and we are even starting to be able to control them with the body's own nervous system [7] [8]. The issue is that this is treated as a strictly engineering problem. That means we will eventually create prosthetics that accurately replace the function of a limb, but they will never truly look or feel like your own limb.

            Instead we need to stop trying to make a machine that can replace a limb's function, and instead make a machine that functions like a limb. We need to be able to synthetically create bone and muscle that can be directly connected to the body's nervous system, so that the brain can send the electrical signals it is used to sending and receive the signals it is used to receiving. This would make the prosthetic not just act like the missing limb, but feel like it too. We actually have materials that could achieve this; they are called electroactive polymers [9].

            Now, the human nervous system does not put out a lot of electricity, so in order to use these polymers for muscles we might need a battery. I don't know about you, but needing to plug in my arm every night to charge it would get very annoying, because I know I will forget one time and not have the use of my arm the next day. Luckily, batteries store electrical energy chemically using substances called electrolytes, and our bodies already rely on electrolytes of their own. We just need a way to use the body's own electrolytes to charge the prosthetic's batteries while it is at rest. These types of "bio-batteries" are currently being researched and have a lot of great potential [10].

Step Two: Synthetic Bodies

            Once we can create artificial bone and muscle, we can do more than just replace missing limbs; we can start to create organs too. We will need to understand every part of the signaling used in the nervous system in order to create true VR, because you want to be able to feel the adrenaline rush, your heart pounding, and your lungs filling with air. This is also the only way for us to start understanding the signals related to our main senses: touch, taste, smell, hearing, and sight. The best way to understand those signals is to be able to replace the physical parts of the body and have them not just function properly, but look and feel right too.

            This will ultimately culminate in a thorough understanding of the entire nervous system, allowing us to tap into the spine. Once we can recreate the most complicated part of the body outside of the brain, the spine, then we can create fully synthetic bodies, implant a human brain, and the person would not be able to tell the difference. This is Virtual Reality: we would be creating artificial sensory information and sending it to the brain, allowing it to interpret what is going on around it. The brain would then create its own reality based on the information it is given, its Phaneron.

-Phase Two: Virtual Reality-

Step One: Lying to the Brain

            In the previous phase we would be creating sensory information based on the physical world around the person, allowing them to experience the same reality as the rest of us; in this phase we are going to create our own signals to send to the brain. This time, when the brain tries to move the body's legs, it will move digital legs inside a computer instead of its physical legs. As long as the brain can send and receive the same signals it would be using for the synthetic body in Phase One, then even though its new body is completely virtual, stored inside a computer, the brain would not know the difference. This would be its reality, even if the brain is floating in a jar connected to nothing but a life-support system and a computer.
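            To make this idea of "lying to the brain" a little more concrete, here is a purely illustrative C++ sketch of the loop described above. Every type, name, and value in it is hypothetical; nothing here corresponds to real hardware or a real API. The point is only that if a physical synthetic body and a virtual body honor the same signal contract, the brain has no way to tell which one is on the other end.

```cpp
// Hypothetical sketch: motor commands leave the brain, drive a body,
// and sensory data comes back. The brain only ever sees the signals.

struct MotorSignal   { float leftLeg; float rightLeg; };  // stand-ins for outgoing nerve impulses
struct SensorySignal { float groundPressure; };           // stand-in for touch feedback

class Body {                       // the contract both kinds of body honor
public:
    virtual ~Body() = default;
    virtual void          applyMotor(const MotorSignal& m) = 0;
    virtual SensorySignal readSenses() const = 0;
};

// Phase Two's body: the "legs" are nothing but simulation state in a game world.
class VirtualBody : public Body {
public:
    void applyMotor(const MotorSignal& m) override {
        stride_ = (m.leftLeg + m.rightLeg) * 0.5f;        // move the digital legs
    }
    SensorySignal readSenses() const override {
        return { stride_ > 0.f ? 1.0f : 0.0f };           // fabricate the feel of the ground
    }
private:
    float stride_ = 0.f;
};

// One tick of the loop; the caller never learns which Body it is talking to.
SensorySignal tick(Body& body, const MotorSignal& fromBrain) {
    body.applyMotor(fromBrain);
    return body.readSenses();
}
```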

Step Two: World Creation

            Now that we can put a person into a reality of our own creation, we need to start figuring out how to accurately create that reality. The real world is extremely complex; we would need to program and create everything from how the weather moves to how the grass grows. The human mind is willing to overlook minor inaccuracies in order to maintain its sense of reality, but even so, creating a fully virtual world for someone to explore and interact with would be mind-bogglingly complex. If you had described a large open-world game like The Elder Scrolls V: Skyrim to a game studio in 1985, when they were making NES games, they would have felt just as overwhelmed by the complexity of the system. The reason we can create such complex systems today is that we have much better game creation software, with much better tools at our disposal.

            We will have to do the same thing for VR. We will need to create a new type of game engine, a new way to program and to create art. I foresee a blank VR room with the game designers and artists standing in it, simply thinking of what they want to create and having it pop into existence before them. With a wave of the arm they can raise mountains, fill oceans, and draw rivers. With a flick of the wrist they can pore through catalogs of trees and bushes and simply throw them out into the world, but who will make all of these things for us to use? Every game engine still relies on artists to create all the usable objects. All games today use polygons to create their 3D shapes, but polygons are limited. With millions of polygons we can make an object look real in the movies, but it takes so much computing power to calculate each frame that it is impossible to do in real time for games. No matter how powerful our computers get or how detailed our polygonal models get, they still won't feel real up close. So what's the next evolution of 3D modeling? I foresee two main options usable for VR: Point Cloud Data [11], and using our memories directly.
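            To make the contrast between the two representations concrete, here is a minimal C++ sketch of what each one actually stores. The type names are my own and are only for illustration; a real engine would add much more on top of this.

```cpp
#include <cstdint>
#include <vector>

// A simple 3D position shared by both representations.
struct Vec3 { float x, y, z; };

// Polygonal model: surfaces are approximated by triangles built from a
// shared vertex list, so up close the flat facets eventually show.
struct TriangleMesh {
    std::vector<Vec3>     vertices;
    std::vector<uint32_t> indices;   // three vertex indices per triangle
};

// Point cloud: no surfaces at all, just a huge number of scanned sample
// points, each carrying its own position, surface normal, and color.
struct ScannedPoint {
    Vec3    position;
    Vec3    normal;
    uint8_t r, g, b;
};

using PointCloud = std::vector<ScannedPoint>;
```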

            Point Cloud Data has been used by 3D scanners for years, and the scanners have become detailed enough to capture even the smallest textures of an object. This gives us highly detailed objects and lighting, creating extremely realistic environments [12]. The main problem with Point Cloud Data is that someone still needs to physically build every environment. This means that the Virtual Reality we create will be a limited environment to interact in, similar to the games of today. No matter how large or how detailed our games get, you still can't do everything. How many times has a game given you a choice and you didn't like either option? You can no longer do what you want, and it reminds you that you are in a simulation; it makes you stop believing that this is reality.

            That limitation would be fine for most forms of gaming. We have come to accept that, as players, we can only do what the developers have given us the ability to do, but this technology can be so much more. We are tapping into the brain directly, so why don't we use our own memories and experiences to help form the world? And not just your memories, but everyone's. What if you wanted to go skydiving but had never done it before? We would have to create the plane, program how fast you fall, the wind movement, everything, to try to make that experience feel as real as possible. But what if we could just scan the brain of someone who has been skydiving? Even better, scan many people who have been skydiving and mix their experiences together. Suddenly the wind in your face, what it sounds like, feels like, tastes like, will be extremely accurate. Not only that, but the sound of the plane, the rattling of the clips on your parachute, all of it will be accurate to a point far beyond what we could have achieved trying to program it all ourselves.

Step Three: Memory

            The biggest hurdle for fully virtual worlds is how our minds interact with them and remember them. Neuroscience still does not understand how memories are stored or recalled in the brain, and we barely understand how they are formed. We will need to advance neuroscience in order to fully create and interact with our virtual worlds, but if done right, game creators would be able to call on memories and experiences from their own lives to recreate them in the game. The very act of creating could be a game in itself. Imagine plugging into a VR system, waking up in a fancy hotel, and being told that you could go anywhere and do anything, that you could live out your every fantasy, even create or destroy any aspect of the world you're in, like a god. This could be possible because even if you have never been to Japan, if we have recorded the memories and experiences of people who have, then you can experience Japan for yourself, and it would be just as accurate as if you had taken a plane. Only now you have the ability to modify that experience in any way you want. You want Godzilla to attack? Or for there to be dragons and magic? Go for it! It's your fantasy!

            The danger of a system like this is that if we can record one person's experience and play it back for someone else, then what would prevent us from implanting false memories in the same way? Or from taking personal information and memories that people don't want to share? Suddenly you can't trust your own memories or experiences, and you no longer believe in reality. You have become a Solipsist: you stop believing in reality and start thinking that everything is fake or a hallucination. Some solipsists today even believe that they are already living in a virtual reality world like the Matrix.

-Phase Three: Immersive Gaming-

Step One: Availability

            Phase Two will be difficult, not just technologically but psychologically as well, but it will all be useless if we can't open it up to the greater public. We need to find a way to do all of Phase Two remotely and non-invasively. This would transform VR from The Matrix into The Thirteenth Floor. From my research, I would say the most promising approach is through a form of dream state. The brain emits electromagnetic waves, and manipulating or distorting these waves can affect the brain [13]. If we could figure out which patterns produce which effects, then we could induce a sleep state at the flip of a switch, which would then allow us to project our virtual world into your dream state [14].

Step Two: The Dream

            The problem with trying to use our dreams for VR is that we currently don't understand what dreams are, how they are made and experienced, or why we have them. We know that if someone does not sleep for several days, they start to hallucinate and the body starts to shut down. We also know that if someone has brain damage and physically can't dream, it has a very similar effect to not sleeping. So if the brain requires dreams, what will happen when we control those dreams? Will you still be tired when you wake up? Will it cause other problems with your normal sleep cycle? Would that in turn start causing physical ailments? We just don't know; we don't understand the brain well enough to answer these questions.

-So What Does All This Mean?-

            If you have been following along, you will realize that before we can even start to research Virtual Reality we will need to make great advancements in cybernetics, and that during our VR research we will need to make great advancements in neurology in order to create fully immersive virtual worlds. After we finally create VR, we are left with trying to mass-produce it, get it to the people, and make it profitable, but that last step leaves us with only questions. We don't know if we can create a non-invasive VR system, and if we can, we don't know what its health or security risks could be.

            In the end, this is why we have not made any real advancements in VR: it will take a mountain of money and decades of research, and ultimately we don't know if it will even be marketable. But even if VR is never able to go mainstream, look at what we would create along the way. Cybernetics would give us synthetic limbs and organs, improving our quality of life and overall life expectancy. Neurological research would give us a better understanding of the mind and of the disorders that arise when it fails, giving us better treatments. We would have a better understanding of memories and how they are stored and recalled, allowing us to help people with repressed memories or memories lost to brain damage. Won't the benefits of the by-products of VR research outweigh the chance that VR itself might not be marketable?

            There has been a lot of great research in cybernetics and neurology, but it has been scattered across many different directions. We need to pool it all together and direct it toward a single goal if we want to see true VR in our lifetime.

 

-Bibliography-

 

[1]

[2]

[3] J. E. Tomayko, "History.NASA.Gov," Computers In Spaceflight: The NASA Experience, 15 July 2005. [Online]. Available: http://history.nasa.gov/computers/Ch9-2.html.

[4] ATC & Air Defense Training Systems. [Online]. Available: http://www.esigma-systems.com/en/parachute-simulation.html.

[5] A. M. B. Lopes, "Amblo.net," 21 June 2013. [Online]. Available: http://www.amblo.net/us/thoughts/i-understand-therefore-i-exist/.

[6] A. Duhaime-Ross, "The Verge," 18 December 2014. [Online]. Available: http://www.theverge.com/2014/12/18/7416741/robotic-shoulder-level-arms-mind-controlled-prosthetic.

[7] R. Kwok, "Nature.com," 08 May 2013. [Online]. Available: http://www.nature.com/news/neuroprosthetics-once-more-with-feeling-1.12938.

[8] D. Szondy, "GizMag," 04 March 2012. [Online]. Available: http://www.gizmag.com/nerve-prostheses-interface-scaffolds/21646/.

[9] G. T. Huang, "Technology Review," 01 December 2002. [Online]. Available: http://www.technologyreview.com/article/401750/electroactive-polymers/.

[10] K. Le, "Lex Robotics," 27 February 2013. [Online]. Available: http://www.lexrobotics.com/body-fluid-powered-bio-batteries/.

[11]

[12] C. K. and S. Wasson, "Tech Report," 24 September 2014. [Online]. Available: http://techreport.com/review/27103/euclideon-preps-voxel-rendering-tech-for-use-in-games.

[13] A. Raz, "Scientific American," 24 April 2006. [Online]. Available: http://www.scientificamerican.com/article/could-certain-frequencies/.

[14] M. Anissimov, "Life Boat Foundation," 2008. [Online]. Available: http://lifeboat.com/ex/brain-computer.interfaces.

 
