Finding the fun in VR-centric design: Telekinesis

VR is all about iteration and experimentation, which makes uncovering and locking down the fun of your design a tricky challenge. This article covers what went into creating a first-of-its-kind VR telekinesis, and the rabbit hole we fell down to get there.

Joel Green, Blogger

February 27, 2018

The first time I got my hands on good motion controls in virtual reality, I knew that I wanted to have the Force. VR controllers remove abstractions; you can actually touch things, wield things, and control things with your own hands in a virtual world, without the constraints of boring old reality. With the hand presence afforded, you could finally have that superpower feeling in a videogame.

At the launch of our built-for-VR adventure series, The Gallery, there was a general consensus that simply picking objects up and throwing them normally felt novel and fun. Letting the player spend time to explore that normalcy, to grab things and toss them while in a virtual world, was a great way to ease into VR.

The question for us was how we could turn that idea on its head. What was the next iteration of interaction with motion controls? What was Throwing 2.0?

We started by looking at the strengths and weaknesses of VR. We know that spatial tracking and physics are fun, but we also know that current generation VR is limited in physical feedback. Picking up an object beyond a perceived weight or size threshold makes it look and feel like a Styrofoam prop. VR controllers have no fingers, so handling an object is like grabbing at something with oven mitts.

There is also no resistance on physical movements, so something as simple as grabbing a lever requires role play. The player will need to either stop themselves from pulling too far, or else break hand presence when they pull through a lever that doesn’t exist. On the other hand (haha), VR did have the advantage of using those natural motor controls in the first place.  

Our solution was to decouple and distance the object from the player’s tracked hand. This abstraction would preserve sync and presence without sacrificing the interaction itself. Most importantly, we could now imply the weight of a distant object without simulating a physical constraint on the hand. With that sense of weight and inertia, you could genuinely feel like you were lifting an X-Wing out of a swamp. Throwing 2.0 was telekinesis.
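The follow behavior described above can be sketched as a damped spring: the object chases a hand-driven target, and heavier objects lag further behind, which implies weight without ever constraining the tracked hand. This is a minimal, engine-agnostic sketch of that idea; the class name, constants, and damping factor are illustrative assumptions, not The Gallery's actual implementation.

```python
class TelekinesisFollow:
    """A held object trails the hand target through a damped spring,
    so mass shows up as lag and inertia rather than physical resistance."""

    def __init__(self, mass, stiffness=40.0):
        self.mass = mass            # heavier object -> more lag, more implied weight
        self.stiffness = stiffness  # spring pull toward the hand-driven target
        self.velocity = [0.0, 0.0, 0.0]

    def update(self, obj_pos, hand_target, dt):
        """Advance the object one frame toward the hand-driven target."""
        new_pos = list(obj_pos)
        for axis in range(3):
            # Spring acceleration toward the target (F = k*x, a = F/m)
            accel = self.stiffness * (hand_target[axis] - obj_pos[axis]) / self.mass
            # Per-frame damping keeps the object from oscillating forever
            self.velocity[axis] = (self.velocity[axis] + accel * dt) * 0.9
            new_pos[axis] += self.velocity[axis] * dt
        return new_pos
```

Because the hand is never physically coupled to the object, the player's 1:1 tracking stays intact even when the object moves sluggishly.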

The player's hand in The Gallery, a telekinetic "Gauntlet"

It began with primitive blocks. We could lift them, place them, stack them, throw them, and knock them over with telekinesis (TK). Building a block tower was fun, but we needed more gameplay.

With the base TK mechanic feeling right, we started spec'ing out more complex tower building puzzles. This meant asymmetrical shapes and tetrominoes that were more difficult to fit together. The player would also need more precise control to move and rotate these new pieces into position.

In flat games, rotating an object on any axis has a simple convention—you push the analog stick. In an ironic twist of fate, the very power of VR to remove those abstractions had left us with human limitations: Your wrist can only go so far in one direction. It was literally impossible to orient objects in certain ways using the motion controls.

The more precision we needed for puzzles, the more constraints we needed to apply. We flirted with zero-G suspension so that players could let go of an object at the apex of their wrist rotation, and then re-grab the object to complete the orientation. The physics were cool, but it wasn’t intuitive the way that TK was by itself.

Movement on the Z-axis was even messier. Reeling back or casting out with your hand like a fishing rod lost any precision on the Y-axis. Ratcheting your wrist to pull in or push out caused complications on the X-axis. When the "Magnesis" ability in Breath of the Wild was revealed at E3 2016, we studied the footage. They did these actions cleanly, using buttons.

So, we tried that.

We enabled Z-axis movements on a button hold and offered incremental rotations on a button press. Pieces could snap into place and magnetize to each other. We fought and fought against the weaknesses of the motion controls themselves, each new addition taking away from what made the mechanic feel fun and fluid and natural at the beginning.

It got to the point where our TK no longer felt like 2016 motion controls, but like 2006 waggle controls. Like you were doing gestures at an object instead of controlling it with your hands. Like you may as well have been playing on a gamepad.

All the real solutions were simpler. Z-axis movement needed to be amplified, so that a small movement of the hand would have a larger influence on the object. Rotation could be done by tossing the object to yourself or by swapping it between hands—a natural analog to re-orienting large objects outside VR. We removed any puzzles that required precise positioning. And we kept only one button: using the trigger to activate TK.
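The amplified Z-axis movement described above can be sketched in a few lines: a small hand motion along the push/pull axis is scaled by a gain, and scaling that gain with the object's current distance lets far objects reel in quickly while close ones stay controllable. The function name, constants, and distance-scaled gain are illustrative assumptions about one way such amplification could work.

```python
def amplified_z(obj_distance, hand_dz, base_gain=3.0, min_distance=0.3):
    """Map a small hand movement along the push/pull (Z) axis to a larger
    object displacement, returning the object's new distance from the hand."""
    # Farther objects get a bigger gain, so a short reach can still recall them
    gain = base_gain * max(obj_distance, 1.0)
    new_distance = obj_distance + hand_dz * gain
    # Clamp so the object can never be pulled through the player's hand
    return max(new_distance, min_distance)
```

The clamp matters for comfort: without it, an enthusiastic pull would drag the object through (or behind) the player's head.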

Just as players had enjoyed the normalcy of having their hands for the first time in a videogame, they were finding fun in telekinesis with all its natural limitations. The mechanic became about discovery, rather than a newfangled control scheme. Precision was not the fun of TK. Trying to rotate a cube perfectly and fit it into a slot was not fun. It was the big, gross, physical movements that were fun. It was having the Force.

At its heart, we weren’t following that fun. TK was the antithesis to the types of puzzles we expected it to inform. There was no way we could have known that precision was going to be such a problem when we started on Throwing 2.0, but we fought in the wrong direction. If we had really wanted those precision puzzles, we should have designed a core mechanic that specialized in the types of things those puzzles would need.

In traditional games, abstraction is the only way you can do things. A gamepad, by design, is an abstracted input device. You press ‘A’ and it tells the computer to tell the game to do something. But in VR, there is an opportunity to remove that abstracted control scheme and replace it with diegetic, in-world abstraction.

In the case of our TK, distancing the player’s hand from the in-game object was just that. VR’s strength is allowing your body to move 1:1 with what’s happening in the game world. TK moved away from needing abstracted input to move objects on various axes, relying only on physics and real-world motions instead. That opportunity for diegetic abstraction is what makes motion controls unique.

With each control layer we added back in, we whittled away at that fine synchronization with the player. It turned into waggle, it turned into a button, and it was always going to feel worse than a button because it’s not as precise as a button. Trying to apply those old ideas, old puzzles, or old designs to VR systems is sometimes like jamming a cartridge into a PlayStation.

For most players, diegetic abstraction is the kind of detail they’ll never even notice. And that’s what makes it believable and immersive. It makes the game feel less like a game and more like a world. Something that they’re a part of. Something fun.
