
One of the quieter pieces of news that made its debut this year was Intel’s RealSense depth-sensing camera tech. Though not yet a tangible product, Intel spent the year cutting deals with computer makers Acer, Asus, Dell, HP, Lenovo, Fujitsu, NEC and Toshiba to incorporate the technology into their 2015 models. Needless to say, the tech is certainly off to a solid start as an integrated part of the PC experience.

A Camera That Senses the Environment


RealSense began its life as a bulky attachment in its beta stage, but quickly shrank into a significantly smaller device that can now be embedded in electronic equipment of almost any size. Intel’s RealSense imaging technology comes in three forms: a front-facing camera (which captures facial movements, tracks finger and hand movements, and detects backgrounds and foregrounds), a rear-facing camera (which can scan and measure rooms and objects) and a snapshot camera (which can alter photo backgrounds after a photo has been taken).
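To make the front-facing camera's background/foreground detection concrete, here is a minimal sketch of the underlying idea: splitting a depth map by distance. This is not Intel's SDK; the function name and the one-metre threshold are hypothetical, chosen only for illustration.

```python
import numpy as np

def split_foreground(depth_map, threshold_m=1.0):
    """Split a depth map into foreground and background masks.

    Pixels closer than `threshold_m` metres count as foreground;
    zero-depth pixels (no sensor reading) fall to the background.
    """
    depth = np.asarray(depth_map, dtype=float)
    foreground = (depth > 0) & (depth < threshold_m)
    background = ~foreground
    return foreground, background

# Toy 2x3 depth map (metres): a close subject against a far wall.
depth = np.array([[0.6, 0.7, 3.2],
                  [0.5, 3.1, 0.0]])
fg, bg = split_foreground(depth, threshold_m=1.0)
print(int(fg.sum()))  # 3 foreground pixels
```

Once a scene is separated this way, effects like background replacement or selective blur become simple per-mask operations.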

While RealSense is making some noise in the PC world with its gesture control capability and what Intel refers to as “perceptual computing,” which pairs an integrated 3D depth camera with a 2D camera so computers can “see” depth much as human eyes do, the tech is even more intriguing once you begin thinking about it outside of a computer.


Konstantin Popov is what Intel refers to as an “Intel Software Innovator” and is currently the CEO of a company called Cappasity that develops 3D scanning and printing solutions for e-commerce. Using Intel’s RealSense tech, Popov and the Cappasity team recently developed a scanning system for people. “We calibrate the positions, angles and optical properties of the cameras. This calibration allows us to merge the data for subsequent reconstruction of the 3D model. To capture the scene in 3D we can place the cameras around the scene, rotate the camera system around the scene or rotate the scene itself in front of the cameras,” he said.
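The calibration-and-merge step Popov describes can be sketched in a few lines: each camera's points live in that camera's own coordinate frame, and the calibrated rotation and translation (the extrinsics) map them into one shared frame before reconstruction. This is a toy example, not Cappasity's code; the function name, extrinsics and coordinates are made up for illustration.

```python
import numpy as np

def to_world(points, rotation, translation):
    """Transform an (N, 3) camera-frame point cloud into the shared
    world frame using the camera's calibrated extrinsics."""
    return points @ rotation.T + translation

# Hypothetical calibration: camera B faces camera A across the scene,
# i.e. rotated 180 degrees about the vertical (y) axis and offset.
R_b = np.array([[-1.0, 0.0,  0.0],
                [ 0.0, 1.0,  0.0],
                [ 0.0, 0.0, -1.0]])
t_b = np.array([0.5, 0.0, 2.0])

cloud_a = np.array([[0.0, 0.0, 1.0]])   # camera A is the world frame
cloud_b = np.array([[-0.5, 0.0, 1.0]])  # same scene, camera-B frame
merged = np.vstack([cloud_a, to_world(cloud_b, R_b, t_b)])
```

With every camera's data expressed in the same frame, the merged cloud can feed a standard surface-reconstruction step, whichever of the three capture setups (cameras around the scene, a rotating rig, or a rotating subject) produced it.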

Regarding the Intel RealSense part of the equation he added, “We selected the Intel RealSense camera because we believe that it’s an optimum value-for-money solution for our B2B projects. At present we are developing two prototype systems using several Intel RealSense cameras: a scanning box with several 3D cameras for instant scanning and a system for full-body people scanning.”

Next Gen Imaging


Intel’s RealSense tech could essentially bring light-field photography to just about any mobile imaging device or digital camera. For those who don’t know much about the rather trendy world of light-field photography, it essentially allows the shooter to alter the way light affects a scene, and also allows the focus of an image to be changed after capture.

RealSense captures an image in multiple layers with its specialized lens array (similar to Lytro) and nicely pulls off the depth-of-field alteration Lytro has mastered, providing accurate distance measurements within photos, provided the subject is within a few meters of the lens at capture time. With applications including 3D mapping and augmented reality-like interaction, not to mention some nifty editing tricks like filtering an image in segments, having this ability in a smartphone is huge. And that says nothing of the tech’s core ability to sense objects in front of the camera lens, which is the big part of the PC equation regarding gesture control. 2016 could finally be the year Intel branches out of the computer world with RealSense, and it’s a fairly safe bet the smartphone world will be waiting with open arms.
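The post-capture refocus trick rests on having a per-pixel depth map: pixels near the chosen focal distance stay sharp, and blur strength rises with distance from it. The sketch below computes only those blur weights; real light-field refocusing re-renders the image, and the function name, falloff and depths here are hypothetical.

```python
import numpy as np

def blur_weights(depth_map, focus_m, falloff_m=0.5):
    """Per-pixel blur strength in [0, 1] for a synthetic refocus:
    0 at the chosen focal distance, rising toward 1 as a pixel's
    depth moves away from it."""
    offset = np.abs(np.asarray(depth_map, dtype=float) - focus_m)
    return np.clip(offset / falloff_m, 0.0, 1.0)

# One row of depths (metres): near subject, mid-ground, far wall.
depth = np.array([[1.0, 1.5, 3.0]])
w_near = blur_weights(depth, focus_m=1.0)  # focus on the near subject
w_far = blur_weights(depth, focus_m=3.0)   # "refocus" on the far wall
```

Blending a sharp and a blurred copy of the photo with these weights is what lets the focal plane be moved after the shot, and segment-by-segment filtering falls out of the same depth map.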
