
Abstraction Layers for VR Interfaces

Some suggestions for VR developers

I am highly optimistic about VR and the future of video games in general. I still believe that motion controls are an integral part of that future, especially for VR. Here I will be talking about abstraction layers from a design perspective, specifically as they apply to the interface for games in a VR setting.

 

Abstracting the desired behavior of an in-game avatar into a control scheme compatible with your target device is an essential part of the design and development of any game.

 

Consider a game about being a gymnast. If all you had was an 8-way digital joystick and one button, you might have the button mean something like “jump”, and the joystick would be used to trigger flips, back handsprings, or whatever. But you would obviously be quite limited in recreating the experience of gymnastics, even if you were using top-notch triple-A graphics inside a low-latency, high-framerate, wide-field-of-view VR headset.
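The joystick-and-button scheme above amounts to a lookup table: a handful of discrete input combinations, each mapped to a canned move. A minimal sketch, with entirely hypothetical names (`Move`, `INPUT_MAP`, `resolve`) that belong to no real engine:

```python
from enum import Enum

class Move(Enum):
    IDLE = "idle"
    JUMP = "jump"
    FRONT_FLIP = "front flip"
    BACK_HANDSPRING = "back handspring"

# Each (joystick direction, button pressed) pair maps to one canned animation.
# This is maximum abstraction: the whole art of gymnastics in a dictionary.
INPUT_MAP = {
    ("neutral", True): Move.JUMP,
    ("up", True): Move.FRONT_FLIP,
    ("down", True): Move.BACK_HANDSPRING,
}

def resolve(direction: str, button: bool) -> Move:
    """Return the canned move for this input combination, or idle."""
    return INPUT_MAP.get((direction, button), Move.IDLE)
```

The limitation is visible in the code itself: the player can only ever select from the handful of moves someone thought to put in the table.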

 

On the other hand, motion controls are not an automatically perfect solution, which is why I chose gymnastics as an example. You simply cannot require all your players to be gymnasts. Even imagining the sort of setup required to mimic gymnastics 1-1 is rather difficult. It would be a multimillion-dollar facility, and it might only be usable by actual gymnasts given the amount of money spent to build it. That would defeat the whole purpose of bringing the feel of gymnastics to people who are otherwise unable to perform flips, balance themselves on a narrow beam, or hold an iron cross. Not to mention the disabled video gamers out there.

 

Gymnastics is a very hard concept to gamify, because it involves the whole body moving through space at very high speeds, and the visuals would be crazy. You think you know simulator sickness? Anyway, I’ll go with something simpler, something violent that everyone can identify with, since violence in video games is a standard method of interaction. Consider a master swordswoman.

 

She uses two swords because I’ve always been a fan of dual-wield weapon styles. So, being very practical, you consider the target devices available: at the moment there is Move for PlayStation’s still-in-development Project Morpheus, and then there is the still-to-be-released Oculus Rift. It’s fair to assume that if these are successful, there will be support for Wiimotes and Kinect in future VR offerings from Nintendo and Microsoft. Even if that is not a fair assumption, some hacker will hack together a solution for those anyway, but I tend to think VR developers will ultimately want to reach wider audiences than the techno-savvy. At any rate, you choose dual Moves, because I’m a PlayStation fan and I want Project Morpheus to succeed. You could have chosen, for example, Wiimotes combined with the Rift, but there are all sorts of hurdles involved in that: business, technical, and otherwise. (Power Gloves 2.0? Anyone?)

 

Dual sword mastery, though easier in some respects than gymnastics, is still pretty hardcore. Again, it is not fair to require all your players to be masters of dual swords. So you need abstraction. For basic moves, maybe slashes, thrusts, and parries, she (your master swordswoman) might be controllable with straightforward 1-1 controls. But what if you wanted her to do a fancy bullet reflecting parry? Last time I checked, I was a little rusty on my reflecting-bullets-with-swords ability in real life, so I hope you have a simpler solution for that, like a button or something. But then that would be kind of boring, wouldn’t it? Just press the button to reflect bullets. It’s tempting to use simple solutions like this when presented with vexing control scheme problems for motion controlled VR games. Which is where the abstraction comes into play.
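The middle layer hinted at here can be sketched as a simple gesture recognizer: ordinary swings pass through 1-1, while a sufficiently fast flick is promoted to the physically impossible parry. Everything below is invented for illustration: the names, the velocity units, and especially the 3.0 m/s threshold, which in a real game you would tune against actual tracking data.

```python
from dataclasses import dataclass

@dataclass
class ControllerSample:
    vx: float  # motion-controller velocity, metres per second
    vy: float
    vz: float

PARRY_SPEED = 3.0  # assumed threshold; tune against real controller data

def interpret(sample: ControllerSample) -> str:
    """Decide whether this frame of motion stays 1-1 or triggers a super move."""
    speed = (sample.vx**2 + sample.vy**2 + sample.vz**2) ** 0.5
    if speed >= PARRY_SPEED and sample.vy > 0:
        return "bullet_reflect_parry"  # abstracted: the system supplies the skill
    return "track_1_to_1"              # pass the raw motion straight to the avatar
```

The point is that the player still performs a real motion, just not the impossible one; the abstraction fills the gap between the two.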

 

Between these two extremes - require your player to be a real-life superhero, or press a button for a supermove - lies a whole wealth of options for solving your master swordswoman problem. If you can deflect bullets in real life, you might not be all that interested in playing video games anyway, but that would be the 1-1 end of this scenario. Press-button-for-super is the abstract end: the complex animations and gameplay effects are handled higher up the chain, first by the developers and then by the system the game is running on. The problem with too much abstraction, though, is the loss of interactivity. Interactivity is also limited if you stick only with 1-1, à la bullet deflecting. So the work lies in choosing the point, the layer, at which the interaction frees your player both to do what she wants and to do something she could not otherwise do in real life.
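One way to think about choosing that layer is as a blend: the player supplies the part of the move they genuinely can perform (aiming the blade), and the system supplies the part they cannot (frame-perfect deflection). A hedged sketch, with invented names and an invented `assist` parameter, where the deflection target is simplified to the reversed bullet direction:

```python
def deflect_bullet(blade_direction, bullet_direction, assist=0.8):
    """Blend the player's real aim with the ideal deflection vector.

    assist = 0.0 is pure 1-1 control (good luck deflecting anything);
    assist = 1.0 is press-button-for-super. The interesting design space
    is everywhere in between.
    """
    # Simplified "ideal" result: send the bullet straight back where it came from.
    ideal = tuple(-b for b in bullet_direction)
    return tuple((1 - assist) * p + assist * i
                 for p, i in zip(blade_direction, ideal))
```

Sliding `assist` is exactly the act of picking the abstraction layer: how much of the move belongs to the player, and how much to the machine.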

 

What about movement? The swordswoman needs to move, she’s got places to be. It turns out that dual sword mastery is still difficult in many of the same ways as gymnastics. But wait! Your swordswoman used to be a gymnast!

 
