Computer graphics and digital video have come an incredibly long way since their early days, yet most people can still easily tell what is digitally rendered from what is footage of reality. Three new papers recently presented by Harvard scientists at SIGGRAPH 2013 (the acronym stands for Special Interest Group on GRAPHics and Interactive Techniques), the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, are the latest efforts toward perfecting digital imaging, and their findings are fascinating to say the least.
One of the papers, led by Todd Zickler, computer science faculty at the Harvard School of Engineering and Applied Sciences (SEAS), tackles a difficult subject in digital imaging: how to mimic the appearance of a translucent object, such as a bar of soap.
“If I put a block of butter and a block of cheese in front of you, and they’re the same color, and you’re looking for something to put on your bread, you know which is which,” says Zickler. “The question is, how do you know that? What in the image is telling you something about the material?”
To answer this question, the researchers had to delve into how humans perceive and interact with objects, and how we can inherently tell certain material properties apart. For instance, when you look at a familiar object, you can assess its mass and density without touching it, simply from its appearance and texture. For a computer this is far more difficult, but if it can be achieved, a device with a mounted camera could identify what material an object is made of and know how to handle it properly (how much it weighs, or how much pressure can safely be applied to it) the way humans do.
The researchers’ approach is based on translucent materials’ phase function, part of a mathematical description of how light scatters as it refracts and reflects inside an object. That scattered light is all we actually see: what our eyes perceive is only the light that bounces off and through objects, not the objects themselves. The space of possible phase function shapes is vast and perceptually diverse to the human brain, which has made past attempts at modeling it extremely difficult.
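To make the idea of a phase function concrete, the sketch below evaluates the Henyey-Greenstein function, a standard single-parameter phase function commonly used when rendering translucent media. It is only an illustration of what such a function looks like; the paper explores a much broader family of phase functions, and this particular one is not claimed to be the authors' model.

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: the probability density that light
    scatters by an angle theta inside a medium, controlled by a single
    anisotropy parameter g in (-1, 1).

    g > 0 favors forward scattering, g < 0 backward, g = 0 is isotropic.
    """
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# Two media with the same color can still scatter light very differently:
for g in (0.0, 0.5, 0.9):
    forward = henyey_greenstein(math.cos(0.0), g)        # straight ahead
    sideways = henyey_greenstein(math.cos(math.pi / 2), g)  # 90 degrees
    print(f"g={g:.1f}  forward={forward:.3f}  sideways={sideways:.3f}")
```

Changing the single parameter g already produces visibly different translucency; the challenge the researchers faced is that realistic materials span a far richer space of such functions.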
Luckily, today scientists have access to a great deal of computing power. Zickler and his team first rendered thousands of computer-generated images of one object with different computer-simulated phase functions, so each image’s translucency was slightly different from the next. From there, a program compared each image’s pixel colors and brightness to those of every other image in the space and scored how different the two images were. Through this process, the software created a map of the phase function space according to the relative differences of image pairs, making it easy for the researchers to identify a much smaller set of images and phase functions that were representative of the whole space. Finally, human participants were invited to browse through pairs of images and judge how different they appeared, providing insight into how the human brain tells materials like plastic or soap apart just by looking at them.
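A minimal sketch of that mapping idea follows, assuming the simplest possible choices: per-pixel differences as the image comparison and classical multidimensional scaling as the embedding. The placeholder random arrays stand in for the rendered translucent objects; the actual paper's distance measure and layout method may differ.

```python
import numpy as np

def pairwise_image_distances(images):
    """Given rendered images (N x H x W x 3 array), return an N x N matrix
    of simple per-pixel L2 differences between every pair of renderings."""
    n = images.shape[0]
    flat = images.reshape(n, -1).astype(np.float64)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = np.linalg.norm(flat[i] - flat[j])
    return dist

def embed_2d(distances):
    """Classical multidimensional scaling: place each image as a point in 2-D
    so that point distances approximate the image distances, producing a
    'map' of the phase-function space."""
    n = distances.shape[0]
    center = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * center @ (distances ** 2) @ center   # double-centered squared distances
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:2]              # two largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Usage with placeholder data standing in for the rendered images:
renders = np.random.rand(20, 64, 64, 3)
coords = embed_2d(pairwise_image_distances(renders))
print(coords.shape)  # (20, 2): one point per simulated phase function
```

Points that land close together on such a map correspond to phase functions that produce similar-looking images, which is what lets a small representative subset stand in for the whole space.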
“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognize materials,” says Zickler.
Looking at a display as if through a window
A second paper involving Zickler is just as intriguing. Imagine an adaptive display, inherently flat and thus 2-D, that changes how it renders objects according to the angle you view it from and the lighting around it, just like looking through a window.
The solution takes advantage of mathematical functions, called bidirectional reflectance distribution functions (BRDFs), that describe how light arriving from a particular direction reflects off a surface toward a particular viewing direction.
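The toy example below shows the basic idea of a BRDF: a function of both the light direction and the viewing direction, so moving either one changes how bright the surface looks. It uses a simple Lambertian-plus-Blinn-Phong model purely for illustration; the reflectance models in the actual paper are not specified here and are presumably far richer.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf_blinn_phong(light_dir, view_dir, normal,
                     diffuse=0.7, specular=0.3, shininess=32.0):
    """Toy BRDF: the fraction of light arriving from light_dir that is
    reflected toward view_dir off a surface with the given normal.
    Because the result depends on both directions, a display that knows
    the viewer's angle and the room's lighting can re-render its content
    accordingly."""
    l, v, n = normalize(light_dir), normalize(view_dir), normalize(normal)
    h = normalize(l + v)                   # half vector between light and view
    diff = diffuse / np.pi                 # Lambertian term (view-independent)
    spec = specular * max(np.dot(n, h), 0.0) ** shininess
    return diff + spec

surface_normal = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 1.0, 1.0])
print(brdf_blinn_phong(light, np.array([0.0, -1.0, 1.0]), surface_normal))  # viewed off-axis
print(brdf_blinn_phong(light, np.array([0.0, 1.0, 1.0]), surface_normal))   # viewed near the highlight
```

The same surface returns different values for the two viewing directions, which is exactly the behavior a flat screen must reproduce to feel like a window rather than a picture.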
From the professional artist’s studio to the amateur’s bedroom
The third paper, led by Hanspeter Pfister, An Wang Professor of Computer Science, looks at how to optimize and manipulate vivid colors. At the moment, imposing a particular color palette on a video requires professional artists to brush and edit it manually, frame by frame. Amateur filmmakers therefore cannot achieve the characteristically rich color palettes of professional films.
“The starting idea was to appeal to a broad audience, like the millions of people on YouTube,” says lead author Nicolas Bonneel, a postdoctoral researcher in Pfister’s group at SEAS.
Pfister says his team is working on software that will let amateur video editors choose from various templates, say the color palettes of Amélie or Transformers, and then simply mark what is foreground and what is background; the software does the rest, interpolating the color transformations throughout the video. Bonneel estimates that the team’s new color grading method could be incorporated into commercially available editing software within the next few years.
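As a rough sketch of what example-based color grading means in its simplest form, the snippet below shifts and scales each color channel of one frame so its statistics match those of a reference "look". This mean-and-standard-deviation matching is a classic baseline, not the authors' method, and the random arrays are placeholders for a home-video frame and a film still.

```python
import numpy as np

def transfer_color_statistics(frame, template):
    """Simplest example-based color grading: shift and scale each color
    channel of `frame` so its mean and standard deviation match those of
    `template`. Both are float arrays of shape (H, W, 3) with values in [0, 1]."""
    graded = np.empty_like(frame, dtype=np.float64)
    for c in range(3):
        src, ref = frame[..., c], template[..., c]
        scale = ref.std() / (src.std() + 1e-8)
        graded[..., c] = (src - src.mean()) * scale + ref.mean()
    return np.clip(graded, 0.0, 1.0)

# Usage with placeholder frames standing in for a home video and a film still:
home_video_frame = np.random.rand(240, 320, 3)
film_still = np.random.rand(240, 320, 3) ** 2.0   # darker, more contrasty "look"
print(transfer_color_statistics(home_video_frame, film_still).shape)
```

A per-frame transfer like this would flicker on real footage, which hints at why the team's contribution of interpolating color transformations smoothly across a video, and separating foreground from background, matters.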