

Wartime deepfakes are the new face of propaganda. Can we still trust our eyes?

New study tries to make sense of the evolving world of deepfake misinformation in wartime news.

Tibi Puiu
October 25, 2023 @ 8:55 pm


Credit: AI-generated, DALL-E 3.

Deepfakes — videos and voice recordings that have been manipulated by AI to impersonate real people — are the next iteration in online misinformation. This highly convincing doctored footage can be abused to impersonate politicians and celebrities, extract money from unsuspecting people in elaborate hoaxes and cons, and target women with nonconsensual pornographic deepfakes.

This is still a novel technology, and we’ve yet to see the full scope of its impact on society. In a new study, researchers at University College Cork in Ireland explored the implications of deepfakes in wartime, as seen in the Russo-Ukrainian conflict. The findings bring to light concerns about trust, misinformation, and the very nature of truth.

The new propaganda frontier

Deepfakes are advanced digital forgeries created using artificial intelligence, particularly deep learning techniques. By training on vast amounts of data, these algorithms can generate eerily realistic video or audio recordings of real people saying or doing things they never actually did. The term “deep” refers to the deep neural networks used in their creation. While the technology has impressive, legitimate applications — such as in filmmaking, video game design, and voice synthesis — it also poses ethical and security concerns.
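To make the “deep” part concrete, here is a heavily simplified sketch, in PyTorch, of the shared-encoder, per-identity-decoder idea behind classic face-swap deepfakes. The layer sizes and data are invented for illustration and have nothing to do with the study; real systems use convolutional or generative architectures at far higher resolution, but the swap mechanism is the same: encode one person’s face, decode it as another’s.

```python
import torch
import torch.nn as nn

# Toy sketch of the face-swap autoencoder idea. One shared encoder learns a
# common representation of faces; a separate decoder per identity learns to
# reconstruct that person. All sizes here are illustrative placeholders.

class Encoder(nn.Module):
    def __init__(self, dim=64 * 64 * 3, latent=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim=64 * 64 * 3, latent=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (not shown) would reconstruct A-faces through decoder_a and
# B-faces through decoder_b. At inference, routing A's latent code through
# B's decoder renders A's pose and expression with B's appearance:
face_a = torch.rand(1, 3, 64, 64)    # stand-in for a real video frame
fake_b = decoder_b(encoder(face_a))  # the "deepfaked" frame
print(fake_b.shape)                  # torch.Size([1, 3, 64, 64])
```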

There is widespread anxiety among AI ethicists that the technology could make it increasingly difficult to tell what’s real among a glut of convincing fake news. And out of all possible scenarios, it is during times of war that deepfakes are perhaps the most concerning.

In early 2022, not long after Russia launched its full-scale invasion of Ukraine, a fake video of Ukrainian President Volodymyr Zelensky started circulating on social media and hacked Ukrainian news websites. The video showed Zelensky appearing to tell his soldiers to lay down their arms and surrender.

Later that year, the mayors of several European capitals were embarrassingly duped into holding video calls with a deepfake of their counterpart in Kyiv, Vitali Klitschko. Around 15 minutes into the video conferences, a fake but convincing Klitschko started talking about how Ukrainian refugees were cheating the German state out of social benefits, and appealed to the mayors to send Ukrainian refugees back for military service.

“There were no signs that the video conference call wasn’t being held with a real person,” the office of the mayor of Berlin, Franziska Giffey, said in a statement.

But it isn’t just Russia that is weaponizing deepfakes for propaganda purposes. Two can play that game. In June 2023, Ukrainian hackers broadcast a fake emergency message on several Russian radio and television stations, showing a nervous President Vladimir Putin declaring martial law after Ukrainian troops had crossed into Russian territory. And early into the war, Ukraine used a combination of video game footage and deepfake images to manufacture the myth of the ‘Ghost of Kyiv’, a supposed ace pilot who downed more than 40 Russian jets before dying heroically in battle.

Researchers from University College Cork set out to explore the impact of deepfakes in wartime scenarios, the first study of its kind. By analyzing nearly 5,000 tweets from X (previously known as Twitter) during the first seven months of 2022, the team sought to understand public reactions to these digital deceptions.

“This research is important because there is very limited empirical research on how mis/disinformation deepfakes are impacting social media already. When we talk about deepfakes we often choose to focus on the future harm/benefits rather than looking at how deepfakes are impacting our online spaces now. Our research shows how the potential for deepfakes in conflict has been in some ways realized during the Russo-Ukrainian war,” John Twomey, a psychologist at University College Cork and lead author of the study, told ZME Science.

The team qualitatively assessed the tweets using thematic analysis, tagging each tweet with codes and looking for commonalities between them to surface recurring themes. “This method suits analyzing real-world textual data as it can be used to gain a good critical idea of what they contain,” Twomey said.
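As a rough illustration of what that coding step can look like in practice — the tweets, tags, and counts below are invented placeholders, not the study’s data or codebook — each item gets one or more codes, and code frequencies and co-occurrences are then tallied to suggest candidate themes:

```python
from collections import Counter
from itertools import combinations

# Illustrative sketch of the coding step in a thematic analysis. The tweet
# IDs and codes are hypothetical examples, not the study's actual codebook.
coded_tweets = {
    "tweet_001": ["accusation_of_fakery", "real_video"],
    "tweet_002": ["deepfake_awareness", "distrust_of_media"],
    "tweet_003": ["accusation_of_fakery", "distrust_of_media"],
}

# Tally how often each code appears across the corpus.
code_counts = Counter(code for codes in coded_tweets.values() for code in codes)

# Tally which codes tend to appear together in the same tweet.
pair_counts = Counter(
    pair
    for codes in coded_tweets.values()
    for pair in combinations(sorted(set(codes)), 2)
)

print(code_counts.most_common())  # most frequent codes
print(pair_counts.most_common())  # codes that co-occur, hinting at themes
```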

As they delved deeper, it became clear that onlookers of the Russo-Ukrainian War were having a lot of trouble discerning the line between reality and fiction. But the biggest problem wasn’t that people were getting duped, although this is also a concern. Instead, the general impression is that people can’t trust their eyes anymore, and their suspicions and doubts now extend to legitimate media.

“Our research shows that it is easier and more common for deepfakes to be used to sow doubt. For example, by falsely accusing media of being a deepfake to challenge its authenticity. Though that by no means diminishes the possibility for deepfakes to be used to deceive people and the negative consequences of that,” Twomey added.

‘(Deep)fake news!’

University College Cork researchers examining deepfake videos. Credit: University College Cork, Image by Max Bell.

As the study uncovered, the mere possibility of deepfakes made many doubt the authenticity of actual footage from the conflict. This can turn into a huge crisis of trust in an already shaky media landscape. Even before deepfakes, some people were questioning events that unquestionably happened, such as the Holocaust, the moon landing, and 9/11, despite ample video proof. Deepfakes not only erode our trust in video and audio evidence; they threaten to rewrite history itself to suit a nefarious agent’s agenda.

“I’ve certainly become more worried about two things. Firstly, the harms of falsely accusing real content of being AI-generated. Secondly, the worries of deepfakes becoming a buzzword used to discount real videos. Our research shows that there are already conspiracy theories accusing real videos of politicians as being deepfaked,” Twomey said.

Improving awareness about deepfakes will prove increasingly important to safeguard our democracy. However, there’s a twist. The study revealed a paradoxical effect: while raising awareness can help educate the public about deepfakes, it might also erode trust in legitimate videos. As the number of people who are aware deepfakes exist increases, so will the number of false accusations and suspicions surrounding legitimate media.

This kind of unhealthy skepticism, where genuine content is discounted as artificial, is a new and important challenge that we’ll have to grapple with for years to come. As deepfakes become more sophisticated, the onus falls on us, the consumers of news, to navigate the tricky waters of misinformation and seek the truth.

“Not everything is fake but it is a good thing to know what a deepfake is and how to treat suspected deepfakes,” Twomey said.

The findings appeared in the journal PLoS ONE.

Quick guide on how to spot deepfakes

  • Inconsistent Lighting and Shadows: Look for unnatural lighting on the subject’s face or inconsistent shadows. Real videos have consistent lighting, while deepfakes may struggle with this detail.
  • Facial Distortions: Pay attention to the eyes, mouth, and hairline. Deepfakes might produce glitches or blurring in these areas.
  • Audio-Visual Mismatch: The movement of the lips might not sync perfectly with the audio. Any delay or inconsistency can be a red flag.
  • Blinking Patterns: People naturally blink regularly. Deepfakes, especially earlier versions, might not replicate this behavior accurately (see the sketch after this list).
  • Background Noise: Listen for unnatural background sounds or inconsistencies in audio quality.
  • Emotional Inconsistency: The facial expressions might not match the emotion conveyed by the voice or the context of the conversation.
  • Digital Artifacts: Look for pixelation, unusual patterns, or other digital artifacts that seem out of place.
  • Source Verification: Always check the source of the video or audio. If it’s not from a reputable source, be skeptical.
  • Deepfake Detection Tools: Utilize available software and online platforms that are designed to detect deepfakes. These tools analyze videos for inconsistencies that the human eye might miss.
  • Trust Your Gut: If something feels off about the video or audio, it might be worth investigating further.
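To show how one of these checks might be automated — a rough heuristic, not a real deepfake detector, and the video path is hypothetical — the sketch below uses OpenCV’s bundled Haar cascades to flag frames where a face is visible but no eyes are detected, a crude proxy for blinking:

```python
import cv2

# Crude blink-rate heuristic using OpenCV's bundled Haar cascade models.
# This is an illustration of the "blinking patterns" check above, not a
# production deepfake detector: Haar cascades miss eyes for many benign
# reasons, so the result is at best a weak signal worth a closer look.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_blink_rate(video_path: str) -> float:
    """Return the fraction of face-bearing frames with no detectable eyes."""
    cap = cv2.VideoCapture(video_path)
    face_frames = blink_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:  # analyze the first detected face only
            face_frames += 1
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            if len(eyes) == 0:          # eyes closed -- or a detector miss
                blink_frames += 1
    cap.release()
    return blink_frames / face_frames if face_frames else 0.0

# Example usage (the file name is a placeholder):
# print(estimate_blink_rate("suspect_clip.mp4"))
```

A near-zero rate (the subject never blinks) or a wildly erratic one can both be suspicious; dedicated detection tools combine many such cues with learned models rather than relying on any single heuristic.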

