By borrowing an encoding technique commonly employed in the telecommunications industry, MIT engineers were able to replicate $500,000 3-D camera technology with off-the-shelf parts for a price tag of around $500. The engineering team said the camera could be used in medical imaging, in collision avoidance for automobiles, and to improve the accuracy of gaming devices that rely on motion tracking and gesture recognition.

The MIT device is based on "time of flight" technology similar to that used in Microsoft's newly released second-generation Kinect. A time-of-flight camera determines the location of objects by measuring how long it takes a light signal to bounce off a surface and return to a sensor. The MIT camera improves on the existing technology by compensating for the effects of rain, fog, and even translucent objects, said Achuta Kadambi, a graduate student at MIT and co-author of a report on the technology.

"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," Kadambi said. "That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique you can generate 3-D models of translucent or near-transparent objects."

Certain environmental conditions, semitransparent surfaces, edges, and motions create multiple reflections that all return to the camera's sensor, making it difficult to determine which is the original, accurate signal. To compensate for this effect, the MIT device uses a common telecommunications encoding technique to determine how far each signal has traveled, said team member Ramesh Raskar.

"We use a new method that allows us to encode information in time," Raskar explained.
"So when the data comes back, we can do calculations that are very common in the telecommunications world to estimate different distances from the single signal."

Report co-author Ayush Bhandari said the concept is similar to the idea behind techniques that remove blur from digital photographs. "People with shaky hands tend to take blurry photographs with their cellphones because several shifted versions of the scene smear together," Bhandari said. "By placing some assumptions on the model — for example that much of this blurring was caused by a jittery hand — the image can be unsmeared to produce a sharper picture."

In 2011, the same team of engineers unveiled a trillion-frame-per-second camera that captures a pulse of light as it travels through space. That camera relies on expensive laboratory-grade optical equipment, to the tune of around $500,000. The new "nano-camera" achieves similar results with a continuous-wave signal that oscillates at nanosecond intervals, a difference that lets the team use far cheaper hardware, bringing the total cost to about $500.

"By solving the multipath problem, essentially just by changing the code, we are able to unmix the light paths and therefore visualize light moving across the scene," Kadambi said. "So we are able to get similar results to the $500,000 camera, albeit of slightly lower quality, for just $500."

James Davis, an associate professor of computer science at the University of California, Santa Cruz, said the multidisciplinary nature of the MIT team is what allows it to develop such novel, affordable technology. "Normally the computer scientists who could invent the processing on this data can't build the devices, and the people who can build the devices cannot really do the computation," he said. "This combination of skills and techniques is really unique in the work going on at MIT right now."
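Raskar's "encode information in time" and Kadambi's "unmix the light paths" can be illustrated with a classic telecom trick, the matched filter: emit a known pseudorandom code and cross-correlate the muddled return against it, so each reflection shows up as a separate, sharp peak at its own delay. The sketch below uses assumed parameters (a 128-chip code, 1 ns sampling, a two-surface scene) and is not the MIT team's actual implementation:

```python
# Illustrative sketch: separating two overlapping time-of-flight
# returns with a pseudorandom code and a matched filter.
# Code length, sample rate, and scene geometry are all assumptions.
import numpy as np

C = 3.0e8                # speed of light, m/s
DT = 1e-9                # sample period: 1 ns

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=128)   # pseudorandom +/-1 code

def delay_samples(distance_m):
    # Round trip: the light covers 2*d metres before reaching the sensor.
    return int(round(2 * distance_m / (C * DT)))

# Two surfaces whose reflections smear together at one pixel,
# e.g. a translucent pane at 3.0 m and a wall behind it at 7.5 m.
received = np.zeros(1024)
for dist, amplitude in [(3.0, 1.0), (7.5, 0.6)]:
    start = delay_samples(dist)
    received[start:start + len(code)] += amplitude * code

# Cross-correlating the return with the known code produces a sharp
# peak at each echo's delay, "unmixing" the two light paths.
corr = np.correlate(received, code, mode="valid")
lags = sorted(np.argsort(corr)[-2:])                 # two strongest echoes
distances = [round(lag * DT * C / 2, 6) for lag in lags]
print(distances)   # [3.0, 7.5]
```

Because a pseudorandom code's autocorrelation is sharply peaked, the two echoes that would smear into a single pixel on a conventional time-of-flight sensor separate cleanly into two distances.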
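The "continuous-wave signal that oscillates at nanosecond intervals" points at the standard phase-shift ranging used by CW time-of-flight cameras: instead of timing a single pulse, the sensor measures the phase offset between the emitted and returned wave. A minimal sketch of the phase-to-distance conversion, assuming an illustrative 50 MHz modulation frequency (not a published Kinect or MIT specification):

```python
# Phase-shift ranging for a continuous-wave time-of-flight camera.
# The 50 MHz modulation frequency is an assumed, illustrative value.
import math

C = 3.0e8      # speed of light, m/s
F = 50e6       # modulation frequency: one cycle every 20 ns

def distance_from_phase(phase_rad):
    # A round trip of 2*d metres delays the wave by 2*d/C seconds,
    # i.e. a phase shift of 2*pi*F*(2*d/C) radians; solve for d.
    return phase_rad * C / (4 * math.pi * F)

# A quarter-cycle (pi/2) phase shift corresponds to 0.75 m:
print(round(distance_from_phase(math.pi / 2), 3))   # 0.75
# Note: distances are only unambiguous within C / (2*F) = 3 m per cycle.
```

The appeal of the CW approach is exactly the cost difference the article describes: measuring a phase offset needs only ordinary modulated LEDs and sensors, not the laboratory-grade pulsed optics of the $500,000 camera.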