Before the age of smartphones or even digital photography, people immortalized moments the old-fashioned way: on film. The problem is that all physical things decay, and the older a photo, the higher the odds that it is cracked, torn, or otherwise damaged. Some people may enjoy this kind of retro aesthetic, but when the deterioration is extensive it can be distressing to family members who have only a few old photos of departed loved ones or of distant ancestors they've never met.
A powerful new AI tool built by Chinese scientists is here to help, though. The program uses a neural network called the Generative Facial Prior-Generative Adversarial Network, or GFP-GAN for short, to remove wrinkles, spots, grain, and other telltale signs of weathering from old photos, restoring them to something close to brand-new sharpness. Remarkably, the results look convincing even when all you have is a low-resolution image. Here's how it works.
Generative adversarial networks (GANs) are algorithmic architectures that pit two neural networks against each other: a generator, trained to produce new examples, and a discriminator, which classifies examples as either 'real' or 'fake'. Trained together, the pair learns to generate synthetic data that can pass for real data.
GANs excel at automatically discovering and learning patterns in input data, then generating output that is almost indistinguishable from the original.
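To make the adversarial idea concrete, here is a toy sketch of the two-player training loop. Everything in it is an illustrative assumption on my part (PyTorch, fake 1-D data, tiny networks), not anything resembling the actual photo-restoration model: the generator learns to turn random noise into samples the discriminator can no longer distinguish from "real" data.

```python
# Toy GAN sketch (hypothetical setup: PyTorch, 1-D data drawn from N(3, 0.5)).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 4-D random noise to a synthetic 1-D sample.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores a sample as real (1) or fake (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, 1) * 0.5 + 3.0  # "real" data the generator must imitate
    fake = G(torch.randn(32, 4))           # generator's current attempts

    # Discriminator step: tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

# Draw a batch of synthetic samples from the trained generator.
samples = G(torch.randn(64, 4)).detach()
```

The key design point is the alternation: each network's loss is defined by the other's current behavior, which is the "adversarial" part of the name.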
Because this is a GAN, the restored images generated by the new AI do not reproduce the actual, original image. Instead, every element added to replace visible signs of degradation and sharpen the picture is a model guess, made of pixels that were never there. The obvious downside is the risk that the restored portrait no longer depicts the person in the original photo, and this was often the case with older image-generation GANs. The new GFP-GAN, however, is good enough that you probably won't be able to tell the fake pixels are there, unlike previous alternatives that could end up altering the identity of the people in the photos.
The new tool uses a pre-trained version of an existing GAN, NVIDIA's StyleGAN-2, to guide the authors' own model at multiple stages during the encoding process. It's essentially a GAN guiding a GAN, and the end result is higher fidelity around the eyes and mouth, the regions that matter most for preserving a person's likeness.
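The "GAN guiding a GAN" idea can be sketched schematically. The toy modules below are entirely hypothetical stand-ins of my own (tiny layers, made-up shapes, nothing from the actual GFP-GAN code): an encoder maps the degraded photo to a latent code plus spatial features, a frozen pre-trained generator supplies "clean" facial features from that code, and the encoder's spatial features scale and shift the prior's features before decoding back to an image.

```python
# Schematic of the GFP-GAN idea with hypothetical toy modules
# (not the paper's architecture or code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)    # degraded image -> spatial features
        self.to_latent = nn.Linear(8 * 16 * 16, 32)  # spatial features -> latent code
    def forward(self, x):
        feat = torch.relu(self.conv(x))
        return self.to_latent(feat.flatten(1)), feat

class FrozenPrior(nn.Module):
    """Stand-in for the pre-trained GAN prior (StyleGAN-2 in the real tool)."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32, 8 * 16 * 16)
        for p in self.parameters():
            p.requires_grad = False                  # the prior stays frozen
    def forward(self, latent):
        return self.fc(latent).view(-1, 8, 16, 16)   # "clean" facial features

class Restorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.prior = FrozenPrior()
        self.scale = nn.Conv2d(8, 8, 1)              # encoder features modulate the
        self.shift = nn.Conv2d(8, 8, 1)              # prior's features (scale & shift)
        self.to_rgb = nn.Conv2d(8, 3, 3, padding=1)
    def forward(self, degraded):
        latent, feat = self.encoder(degraded)
        prior_feat = self.prior(latent)
        fused = prior_feat * self.scale(feat) + self.shift(feat)
        return self.to_rgb(fused)

restored = Restorer()(torch.randn(1, 3, 16, 16))
```

The point of keeping the prior frozen is that it has already learned what sharp, realistic faces look like; the encoder only has to steer it toward the particular face in the damaged photo.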
However, the restored photos do contain guesswork by the model, so the portraits will show at least a 'slight change on identity', the researchers note in the paper detailing their work. The blurrier and more damaged the photo, the higher the chance that introducing thousands of new pixels will make the portrait look quite different from the actual person in the original. The way GANs currently work, there's not much to be done about that, but personally, I think most people won't mind for most photos.
You can try the tool yourself using the upload function on the AI's GitHub page.