Q&A: Oculus' software guru sheds light on the new SDK driving DK2

We catch up with Oculus' chief software architect Michael Antonov to learn a bit more about how and why the Oculus SDK has changed, and what it means for developers of VR titles.
VR developers, take note: if you built your game for the initial Oculus Rift prototype, you've got some work ahead of you to get it up to snuff on the latest development kit.

Oculus VR recently started shipping the second iteration of its Rift headset development kit, better known as DK2, and developers have already started publishing preliminary guides to making games that take advantage of newly added components like an HD display and an integrated latency tester. At the same time, Oculus shipped a significantly updated version of the Rift software development kit with new features, including a dedicated Rift display driver and a positional tracking system. The company actually made a show of delaying DK2's ship date a week or two in order to fix some issues with the new 0.4 SDK.

Gamasutra recently caught up with Oculus' chief software architect Michael Antonov via email to learn a bit more about how and why the Oculus SDK has changed, and what it means for developers of VR titles. Here's an edited transcript of our conversation.

What's new in the redesigned SDK, and what inspired those changes?

MA: The latest Oculus SDK is the result of over a year of engineering focused on providing developers with everything they need to build ground-breaking consumer virtual reality experiences using the second Oculus development kit, DK2. Three of the most significant improvements are the addition of positional tracking, an Oculus display driver, and a new C API.

The positional tracking system relies on computer-vision-based tracking of infrared LEDs within the headset. Implementing robust optical tracking and vision-guided sensor fusion has been one of the most challenging projects at Oculus. A huge amount of research and development has been dedicated to eliminating jitter, keeping positional latency low, and handling corner cases such as when the user moves outside the field of view of the camera.
The new Oculus display driver should make the Rift significantly easier to develop for and use. With earlier hardware, the Rift's screen was configured either to duplicate one of the monitors or to extend the desktop. The orientation of the screen also created additional challenges. The driver addresses these concerns, allowing applications to render directly to the Rift without it being part of the desktop. It also supports mirroring Rift rendering to another window. The driver will continue to evolve as we gather more feedback from the community.

The Oculus C API was originally introduced with the 0.3 SDK release and was updated to support positional tracking with 0.4. The purpose of this API is to provide a simple, straightforward interface to the hardware that hides many of the details and is easy to bind to from programming languages and engines. Although we still expect it to evolve, the API is getting closer to a point where it can be packaged up as a DLL or a shared library.

Why did Oculus announce it was delaying shipping DK2 units to do more work on the SDK? Why not, for example, just ship the units out when they were ready and push the SDK update live when it's ready?

MA: The earlier 0.3 Oculus SDK code branch wasn't truly ready for DK2. It didn't include the display driver or the service model that make the headset significantly easier to use. Manual portrait display management would've led to developer frustration, plus the creation of applications that rely on the old display setup. With the 0.4 SDK and runtime nearly ready, we needed that extra week to improve its stability and robustness. Shipping the SDK alongside the hardware meant that developers would have a better out-of-the-box experience. We pulled in the schedule on 0.4 to bring the huge improvements to DK2 right at launch, and we needed the extra time to stabilize the newest features.

Can you give me some clear examples of how developers can make good use of those new features?
MA: To enable positional tracking, the Oculus SDK reports head pose as a combination of an orientation quaternion and a 3D position vector in space. In earlier versions of the SDK, translation was computed solely based on the head model; starting with DK2 it includes correct positional data while the user is within the tracking volume. It should be easy to apply this tracking data to the camera view in most game engines, allowing players to move around in 3D space.

Translating in 3D virtual space is, however, the easy part of the challenge. Next you'll need to figure out how head translation interacts with game scenery and engine mechanics. What happens, for example, if the user pushes their head through a virtual wall? Or moves out of the camera tracking range? Sergio Hidalgo discussed some of the challenges related to positional tracking in his article "VR: Letting Go of the Avatar." One option for handling walls is to fade out the screen until the player moves back into a known space, but more elegant solutions may be waiting to be discovered.

Beyond first-person experiences, positional tracking provides a new dimension of input for developers to explore. While a handful of very new experiences like Lucky's Tale and Superhot have highlighted some of these new possibilities, I'm excited to see what the broader Oculus development community comes up with once they have a DK2 and the new SDK.

I'm also looking forward to seeing developers begin to leverage the new display driver. From an engineering perspective, it's quite easy to use: just create a window whose swap chain matches the resolution of the Rift and call ovrHmd_AttachToWindow on it. All of the swap chain output will show up on the Rift. Having the output redirected from a window does, however, open up the possibility of using the window surface for other things. Besides mirroring, it could potentially be used to display a third-person view or game statistics to an external observer.
So how has your work with engine makers like Unity and Epic evolved, and how is that reflected in the new SDK?

MA: Our relationships with Epic and Unity have really grown since we started the company in 2012. For example, with the latest Unreal Engine 4 release, Epic has actually integrated the Oculus SDK into the main codebase so that it works out of the box. They've also collaborated super closely with us on the core integration, major improvements and feature additions, QA, developer support, and even new samples and demos that ship alongside the engine.

The engine integration work has a profound impact on the SDK. On more than one occasion, we've modified the API and implementation to account for different engine aspects related to stereo rendering, multithreading, and state management. These changes have made the overall SDK more robust for integration into the hundreds of proprietary engines around the industry.
