Using an innovative technique that mathematically infers what a scene would look like from a perspective outside the lens's own, based on how light enters the camera, researchers at Harvard University have managed to create 3D images using only one lens and without moving the camera. The findings could prove useful to amateur and professional photographers alike, to microscopists, and to media applications such as future 3D movies that would require no glasses.
So: only one lens and one perspective, and yet the researchers were able to create 3D images. How does that make any sense? Lead researcher Kenneth B. Crozier and colleagues achieved this by thinking outside the box, or in this case, outside the camera's objective.
From pixel to pixel, light enters the camera at different angles, and this angular information is exactly what is needed to infer what the scene might look like from a different viewpoint. With regular tech, like standard cameras, however, that information isn't available out of the box.
“Cameras have been developed with all kinds of new hardware – microlens arrays and absorbing masks – that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view. That’s great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?” asked Crozier.
It’s only light that we’re ‘seeing’, after all…
Standard image sensors can’t measure the angle at which light enters the camera, but the next best thing one can do is guess. The team’s solution is to take two images from the same camera position but focused at different depths. The slight differences between these two images provide enough information for a computer to mathematically create a brand-new image as if the camera had been moved to one side.
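To make the idea concrete, here is a minimal, illustrative sketch of how a perspective-shifted view might be computed from two frames focused at different depths. This is not the researchers' published algorithm or code; the function name, the Fourier-domain Poisson step, and the simple pixel-remapping warp are assumptions chosen only to keep the example short and self-contained.

```python
import numpy as np


def synthesize_view(img_near, img_far, dz, shift=1.0):
    """Estimate per-pixel average ray angles from two frames of the same
    scene focused at depths z and z + dz, then warp one frame to mimic a
    sideways camera move. Purely illustrative, not the published method."""
    eps = 1e-6
    mean_img = 0.5 * (img_near + img_far) + eps      # average intensity I
    dI_dz = (img_far - img_near) / dz                # axial intensity change

    # Solve a Poisson-type equation  lap(U) = -(dI/dz) / I  with an FFT;
    # the gradient of U approximates the mean ray angle at each pixel.
    h, w = mean_img.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    denom = -(2.0 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                                # avoid division by zero at DC
    rhs = -dI_dz / mean_img
    U = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
    ray_y, ray_x = np.gradient(U)                    # per-pixel angular estimates

    # Shift each pixel along its average ray direction to fake a new viewpoint.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip(np.round(xs + shift * ray_x * w), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys + shift * ray_y * h), 0, h - 1).astype(int)
    return img_near[src_y, src_x]
```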
By stitching the original and the computed view together, you get a 3D animation of your scene. So, presuming you had the researchers' software at hand, anyone could create the impression of a stereo image from shots taken with simple hardware. Microphotography might find the technique most useful, since stereo imaging would greatly help in studying translucent materials, such as biological tissues, in 3D.
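As a rough illustration of that stitching step, the two views could be alternated into a simple wobble animation. The file names, parameter values, and the use of the Pillow library below are assumptions for the sake of example, not part of the published work, and `synthesize_view` is the hypothetical helper sketched above.

```python
from PIL import Image
import numpy as np

# Load two grayscale frames of the same scene focused at different depths
# (file names are placeholders).
near = np.asarray(Image.open("focus_near.png").convert("L"), dtype=float)
far = np.asarray(Image.open("focus_far.png").convert("L"), dtype=float)

# Synthesize a perspective-shifted view with the sketch above, then alternate
# the original and shifted frames to give a wobbling, pseudo-3D animation.
shifted = synthesize_view(near, far, dz=1.0, shift=0.5)
frames = [Image.fromarray(np.uint8(np.clip(f, 0, 255))) for f in (near, shifted)]
frames[0].save("wobble.gif", save_all=True, append_images=frames[1:],
               duration=150, loop=0)
```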
“This method devised by Orth and Crozier is an elegant solution to extract depth information with only a minimum of information from a sample,” says Conor L. Evans, an assistant professor at Harvard Medical School and an expert in biomedical imaging, who was not involved in the research. “Depth measurements in microscopy are usually made by taking many sequential images over a range of depths; the ability to glean depth information from only two images has the potential to accelerate the acquisition of digital microscopy data.”
“As the method can be applied to any image pair, microscopists can readily add this approach to our toolkit,” Evans adds. “Moreover, as the computational method is relatively straightforward on modern computer hardware, the potential exists for real-time rendering of depth-resolved information, which will be a boon to microscopists who currently have to comb through large data sets to generate similar 3D renders. I look forward to using their method in the future.”
The entertainment industry might also potentially benefit from the Harvard researchers’ work.
“When you go to a 3D movie, you can’t help but move your head to try to see around the 3D image, but of course it’s not going to do anything because the stereo image depends on the glasses,” explains co-researcher Anthony Orth. “Using light-field moment imaging, though, we’re creating the perspective-shifted images that you’d fundamentally need to make that work – and just from a regular camera. So maybe one day this will be a way to just use all of the existing cinematography hardware, and get rid of the glasses. With the right screen, you could play that back to the audience, and they could move their heads and feel like they’re actually there.”
Findings were reported in the journal Optics Letters. Source: Harvard press release.