Valve's wearable computing ace discusses the challenges facing VR

"VR and AR aren't just a matter of putting a display an inch in front of each eye." Valve's Michael Abrash comments on the perceptual challenges facing wearable hardware like the Oculus Rift.

The article offers an in-depth discussion of the actual nuts and bolts of human visual perception and how head-mounted devices like the Oculus Rift need to address them. "More informally, you could think of this line of investigation as: 'Why VR and AR aren't just a matter of putting a display an inch in front of each eye and rendering images at the right time in the right place,'" Abrash writes. The post is the first in a series and a valuable read. Readers may also be interested in watching Abrash's 25-minute talk on the GDC Vault, or browsing the slides available from Abrash's blog. From the post:

There are three broad factors that affect how real – or unreal – virtual scenes seem to us, as I discussed in my GDC talk: tracking, latency, and the way in which the display interacts perceptually with the eye and the brain. Accurate tracking and low latency are required so that images can be drawn in the right place at the right time; I’ve previously talked about latency, and I’ll talk about tracking one of these days, but right now I’m going to treat latency and tracking as solved problems so we can peel the onion another layer and dive into the interaction of head mounted displays with the human visual system, and the perceptual effects thereof.