An Introduction to Intel RealSense Technology for Game Developers
Intel RealSense technology pairs a 3D camera and microphone array with an SDK that allows you to implement gesture tracking, 3D scanning, facial expression analysis, voice recognition, and more. In this article, I’ll look at what this means for games, and explain how you can get started using it as a game developer.
What is Intel RealSense?
RealSense is designed around three different peripherals, each containing a 3D camera. Two are intended for use in tablets and other mobile devices; the third—the front-facing F200—is intended for use in notebooks and desktops. I’ll focus on the latter in this article. The F200 combines:
A conventional color camera
An infrared laser projector and camera (640×480, 60fps)
A microphone array (with the ability to locate sound sources in space and do background noise cancellation)
The infrared projector and camera can retrieve depth information to create an internal 3D model of whatever the camera is pointed at; the color information from the conventional camera can then be used to color this model in.
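To make the depth-plus-color idea concrete, here is a minimal sketch of how a depth image can be turned into a colored 3D point cloud using a standard pinhole camera model. The intrinsics (`FX`, `FY`, `CX`, `CY`) below are hypothetical placeholders, not values from any real RealSense device; in practice they come from the camera's calibration, and the SDK handles this mapping for you.

```python
import numpy as np

# Hypothetical pinhole intrinsics for illustration only -- real values
# come from the camera's calibration data, not from these constants.
FX, FY = 475.0, 475.0   # focal lengths, in pixels
CX, CY = 320.0, 240.0   # principal point for a 640x480 sensor

def deproject(u, v, depth_m):
    """Map a depth pixel (u, v) with depth in metres to a 3D point:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    z = depth_m
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def colored_point_cloud(depth, color):
    """Build an (N, 6) array of [x, y, z, r, g, b] rows from an
    aligned depth image (H, W) and color image (H, W, 3)."""
    h, w = depth.shape
    points = []
    for v in range(h):
        for u in range(w):
            if depth[v, u] > 0:   # a depth of 0 means "no reading"
                points.append(np.concatenate(
                    [deproject(u, v, depth[v, u]), color[v, u]]))
    return np.array(points)

# Tiny synthetic frame: every pixel 1 m away, colored solid red.
depth = np.ones((2, 2))
color = np.zeros((2, 2, 3))
color[..., 0] = 255
cloud = colored_point_cloud(depth, color)
print(cloud.shape)  # (4, 6)
```

Each row of the result is one vertex of the internal 3D model: a position recovered from depth, painted with the corresponding color pixel.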
The SDK makes it straightforward to use the camera’s capabilities in games and other projects. It includes libraries for:
Hand, finger, head, and face tracking
Facial expression and gesture analysis
Voice recognition and speech synthesis
3D object and head scanning
Automated background removal
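Background removal is a good illustration of why depth data is so useful: instead of guessing which pixels belong to the player (as green-screen or color-based methods must), you can simply keep pixels that are close to the camera. The sketch below shows the idea with synthetic data; it is my own minimal version of the technique, not the SDK's implementation, and the 1 m cut-off is an arbitrary assumption.

```python
import numpy as np

def remove_background(color, depth, max_depth_m=1.0):
    """Keep only pixels nearer than max_depth_m; black out the rest.

    color: (H, W, 3) uint8 image aligned to depth
    depth: (H, W) depths in metres; 0 means "no reading" (treated as background)
    """
    mask = (depth > 0) & (depth < max_depth_m)
    out = np.zeros_like(color)
    out[mask] = color[mask]
    return out

# Synthetic frame: left half at 0.5 m (foreground), right half at 3 m.
depth = np.full((4, 4), 3.0)
depth[:, :2] = 0.5
color = np.full((4, 4, 3), 200, dtype=np.uint8)
segmented = remove_background(color, depth)
print(segmented[:, :2].min(), segmented[:, 2:].max())  # 200 0
```

The foreground half survives untouched while everything beyond the threshold is zeroed out, ready to be composited over a game scene.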
Note that, as well as allowing you to track, say, the position of someone’s nose or the tip of their right index finger in 3D space, RealSense can also detect a number of built-in gestures and expressions, such as a thumbs-up, a wave of the hand, or a smile.
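Under the hood, gesture recognition of this kind reduces to geometric tests on the tracked joint positions. As a toy illustration (not the SDK's actual logic), here is a pinch detector that fires when the thumb and index fingertips come within about 2 cm of each other; the joint coordinates and the threshold are made-up example values.

```python
import numpy as np

def is_pinch(thumb_tip, index_tip, threshold_m=0.02):
    """Minimal gesture test: fingertips within ~2 cm reads as a pinch.

    thumb_tip, index_tip: 3D positions in metres, as a hand tracker
    might report them (these sample values are invented).
    """
    gap = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return gap < threshold_m

open_hand = is_pinch([0.00, 0.00, 0.40], [0.08, 0.02, 0.40])  # fingers apart
pinching  = is_pinch([0.00, 0.00, 0.40], [0.01, 0.00, 0.40])  # fingers together
print(open_hand, pinching)  # False True
```

Real recognizers are more robust (they smooth over several frames and account for hand orientation), but the principle is the same: positions in, boolean gesture events out.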
What RealSense Brings to Games
Here are a few examples of how RealSense can be (and is being) used in games: