Microsoft Shows New Environment Modeling Technique Using Kinect

A new technique developed by Microsoft Research shows the potential to use the Kinect's depth-sensing 3D camera to create detailed, real-time 3D models of entire rooms from multiple angles.

Kyle Orland, Blogger

August 15, 2011


A new technique developed by Microsoft Research shows the potential to use the Kinect's depth-sensing 3D camera to create detailed, real-time 3D models of entire rooms from multiple angles.

The KinectFusion system, demonstrated in a recent SIGGRAPH 2011 presentation, fuses multiple arbitrary viewpoints of an environment into a volumetric 3D model in a matter of seconds. The system uses the Kinect's point-based depth data to estimate the unit's position and orientation in the room, then uses a GPU to integrate that data into what is already known about the space. In this way, the Kinect can be swept over an environment to 'scan' it from multiple angles in real time.

Once generated, the 3D model can be manipulated with arbitrary lighting and texture maps, and virtual characters and objects can be superimposed accurately onto a video image of the space. In the demonstration video, researchers show the KinectFusion system being used for robust augmented reality applications, including a finger-tracking demonstration that lets users virtually draw on arbitrary surfaces around a room.
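The integration step described above — folding each new depth frame into a running volumetric model of the space — can be sketched roughly as follows. This is a simplified illustration in Python/NumPy of truncated-signed-distance (TSDF) fusion with a known camera pose; the actual KinectFusion system estimates the pose from the depth data itself and runs this integration on the GPU. The function name, parameters, and intrinsics here are hypothetical, not taken from Microsoft's implementation.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, pose, fx, fy, cx, cy,
                    voxel_size=0.02, trunc=0.06):
    """Fuse one depth frame into a voxel grid via a weighted running average.

    tsdf, weights : (X, Y, Z) arrays holding the signed-distance field and
                    per-voxel fusion weights, updated in place.
    depth         : (H, W) depth image in meters.
    pose          : 4x4 camera-to-world transform (assumed known here).
    fx, fy, cx, cy: pinhole camera intrinsics (illustrative values).
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel center.
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centers into the camera frame.
    world_to_cam = np.linalg.inv(pose)
    cam = pts @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    zc = cam[:, 2]
    valid = zc > 1e-6                       # voxels in front of the camera
    zc_safe = np.where(valid, zc, 1.0)      # avoid divide-by-zero
    # Project voxel centers into the depth image with the pinhole model.
    u = np.round(cam[:, 0] / zc_safe * fx + cx).astype(int)
    v = np.round(cam[:, 1] / zc_safe * fy + cy).astype(int)
    H, W = depth.shape
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(zc)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0                          # skip missing depth readings
    # Truncated signed distance: measured depth minus voxel depth, clamped.
    sdf = np.clip(d - zc, -trunc, trunc) / trunc
    valid &= sdf > -1.0                     # skip voxels far behind the surface
    # Weighted running average merges the new frame with prior observations.
    flat_t = tsdf.reshape(-1)
    flat_w = weights.reshape(-1)
    idx = np.flatnonzero(valid)
    w_new = flat_w[idx] + 1.0
    flat_t[idx] = (flat_t[idx] * flat_w[idx] + sdf[idx]) / w_new
    flat_w[idx] = w_new
```

Each frame observed from a new angle simply adds weight to the voxels it can see, which is why sweeping the sensor around a room gradually fills in a complete model. The surface itself sits where the fused signed distance crosses zero.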

About the Author(s)

Kyle Orland

Blogger

Kyle Orland is a games journalist. His work blog is located at http://kyleorland.blogsome.com/

