

New AI program creates realistic 'talking heads' from only an image and an audio clip

Anyone can now speak like Obama -- digitally.

Mihai Andrei
November 24, 2023 @ 3:29 pm


Image generated by AI (not in the study).

The landscape of generative AI is ever-evolving, and in the past year it has really taken off. Seemingly overnight, we have AIs that can generate images or text with stunning ease. This new achievement ties right into that trend and takes it one step further. A team of researchers led by Associate Professor Lu Shijian of Nanyang Technological University (NTU) in Singapore has developed a computer program that creates realistic videos reflecting the facial expressions and head movements of the person speaking.

This concept, known as audio-driven talking face generation, has gained significant traction in both academic and industrial realms due to its vast potential applications in digital human visual dubbing, virtual reality, and beyond. The core challenge lies in creating facial animations that are not just technically accurate but also convey the subtle nuances of human expressions and head movements in sync with the spoken audio.

The problem is that humans produce a vast range of facial movements and emotions, and capturing that entire spectrum is extremely difficult. The new method, however, seems to capture it all: accurate lip movements, vivid facial expressions, and natural head poses, all derived from the same audio input.
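The study describes these outputs only at a conceptual level. As a purely hypothetical illustration of what a model like this must predict for every frame of video, the targets could be bundled along these lines (the field names and dimensions below are assumptions, not taken from the paper):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class FaceFrame:
    """Hypothetical per-frame targets for an audio-driven talking-face
    model; field names and sizes are illustrative, not from the study."""
    lip_params: np.ndarray          # mouth/lip shape coefficients
    expression_params: np.ndarray   # broader facial expression coefficients
    head_pose: np.ndarray           # e.g. yaw, pitch, roll of the head

# One plausible frame: neutral expression, head facing straight ahead.
frame = FaceFrame(
    lip_params=np.zeros(20),
    expression_params=np.zeros(44),
    head_pose=np.zeros(3),
)
```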

Diverse yet realistic facial animations

A DIRFA-generated ‘talking head’ created from just an audio recording of former US President Barack Obama and a photo of Associate Professor Lu Shijian. Credit: Nanyang Technological University

The research paper in focus introduces DIRFA (Diverse yet Realistic Facial Animations). The team trained DIRFA on more than 1 million clips from 6,000 people, drawn from an open-source database. The engine doesn’t focus only on lip-syncing; it attempts to derive the entire range of facial movements and reactions.

First author Dr. Wu Rongliang, a Ph.D. graduate from NTU’s School of Computer Science and Engineering (SCSE), said:

“Speech exhibits a multitude of variations. Individuals pronounce the same words differently in diverse contexts, encompassing variations in duration, amplitude, tone, and more. Furthermore, beyond its linguistic content, speech conveys rich information about the speaker’s emotional state and identity factors such as gender, age, ethnicity, and even personality traits.”

Once trained, DIRFA takes in a static portrait of a person and an audio clip and produces a 3D video showing the person speaking. The output isn’t perfectly smooth, but the facial animations are consistent.
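To make that input/output contract concrete, here is a rough, hypothetical Python sketch of the kind of pipeline the text describes: extract features from the audio, map them to per-frame animation parameters, then render the portrait accordingly. Only the feature-extraction step uses a real library (librosa); the model and renderer are stand-in stubs, not DIRFA’s actual code, and the file names are placeholders.

```python
# Hypothetical sketch of an audio-driven talking-face pipeline.
# Only the feature-extraction step uses a real library (librosa);
# the mapping model and renderer are illustrative stubs.

import numpy as np
import librosa

def extract_audio_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Load speech and compute MFCCs, one feature vector per audio frame."""
    audio, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T  # (frames, 13)

def predict_animation(features: np.ndarray) -> np.ndarray:
    """Stand-in for the learned audio-to-animation mapping: a real model
    would predict lip shapes, expressions, and head pose per frame."""
    return np.zeros((features.shape[0], 64))  # 64 hypothetical parameters

def render_frames(portrait_path: str, animation: np.ndarray) -> list:
    """Stand-in renderer: a real system would warp or synthesize the
    portrait according to each frame's animation parameters."""
    return [f"frame_{i}" for i in range(animation.shape[0])]

features = extract_audio_features("speech.wav")     # hypothetical input
animation = predict_animation(features)
frames = render_frames("portrait.jpg", animation)   # hypothetical input
```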

“Our program also builds on previous studies and represents an advancement in the technology, as videos created with our program are complete with accurate lip movements, vivid facial expressions and natural head poses, using only their audio recordings and static images,” says corresponding author Associate Professor Lu Shijian.

Why this matters

Far from being just a cool party trick (or, in the wrong hands, a disinformation tool), this technology has important and positive applications.

In healthcare, it promises to enhance the capabilities of virtual assistants and chatbots, making digital interactions more engaging and empathetic. More profoundly, it could serve as a transformative tool for individuals with speech or facial disabilities, offering them a new avenue to communicate their thoughts and emotions through expressive digital avatars.

While DIRFA opens up exciting possibilities, it also raises important ethical questions, particularly around misinformation and digital authenticity. To address these concerns, the NTU team suggests incorporating safeguards such as watermarks that indicate the synthetic nature of the videos. But if there’s anything the internet has taught us, it’s that there are ways around such safeguards.
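The article doesn’t detail the team’s proposed mechanism, but a visible per-frame label is the simplest version of such a safeguard. Here is a minimal Pillow sketch of the idea (the function name and label text are illustrative, not from the study); as the text notes, a determined actor could simply crop or paint over it, which is why robust invisible watermarking remains a hard problem.

```python
# Minimal sketch of a visible synthetic-content label, stamped onto
# one generated video frame. Illustrative only; not the NTU mechanism.

from PIL import Image, ImageDraw

def stamp_synthetic_label(frame: Image.Image,
                          label: str = "AI-generated") -> Image.Image:
    """Draw a visible label in the bottom-left corner of a frame."""
    out = frame.copy()
    draw = ImageDraw.Draw(out)
    draw.text((10, out.height - 20), label, fill=(255, 255, 255))
    return out

frame = Image.new("RGB", (256, 256))  # stand-in for a generated frame
stamped = stamp_synthetic_label(frame)
stamped.save("frame_watermarked.png")
```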

It’s still early days for all AI technology. The potential for important societal impact is there, but so is the risk of misuse. As always, we should ensure that the digital world we are creating is safe, authentic, and beneficial for all.

The study was published in the journal Pattern Recognition.
