

Millions of Americans are falling for AI-generated content on Facebook

With the 2024 U.S. election on the horizon, AI-generated content is flooding social media, blurring the lines between authentic and synthetic content.

Mihai Andrei
October 31, 2024 @ 10:38 pm


As the 2024 U.S. presidential election draws near, social media is more saturated with disinformation than ever before. Traditional disinformation tactics are still at play, but now we also face AI-generated disinformation — an issue that remains largely unchecked.

A recent report by the Center for Countering Digital Hate (CCDH) highlights the impact of AI-generated images. These often depict fictitious veterans, police officers, and everyday citizens, gaining millions of interactions and swaying public opinion. These images, meant to elicit emotional responses, aren’t labeled as AI-generated on Facebook, despite the platform’s policy on transparency.

Capture from the report showing AI-crafted political propaganda featuring fake humans.

The CCDH, an NGO that aims to stop the spread of online hate speech and disinformation, analyzed around 170 AI-generated posts circulated between July and October 2024. These images were shared more than 476,000 times and gathered over 2.4 million interactions. None were labeled as AI-crafted, despite Facebook’s own policies.

The images tend to follow a similar formula. They incorporate powerful symbols like American flags or soldiers and latch onto current issues like veterans’ rights or immigration.

A prominent example highlighted in the report depicts a fabricated veteran holding a sign reading: “They’ll hate me for this, but learning English should be a requirement for citizenship.” This post alone accrued 168,000 interactions, with most commenters expressing their agreement. Other images show fake veterans advocating against student loan forgiveness or pushing for a veterans’ month to rival Pride Month, all designed to resonate with key (typically conservative) voter demographics.

Despite subtle signs of AI generation — distorted hands, misaligned or nonsensical text on uniforms, and vague backgrounds — most users seem to be unaware they are interacting with artificial content. And so, they are unknowingly contributing to the spread of digital disinformation.

AI is already fooling people

Meta, Facebook’s parent company, introduced AI labeling policies early in 2024, promising transparency for AI-generated images. The CCDH found no sign of these labels on the posts it analyzed.

It’s unclear whether Facebook was unable or unwilling to tag these images as AI-made. Either way, the posts ended up fooling a lot of people. Users who rely on the platform’s safeguards remain largely in the dark, unable to tell whether an image is a genuine endorsement or a synthetic creation.

AI-generated image of Facebook logos.

Additionally, the report highlights that Facebook’s user-reporting tools provide no clear way to flag suspected AI-generated content. While users can report posts for hate speech or misinformation, there is no specific option for manipulated media. This gap leaves Facebook users without a clear route to alert moderators to AI-generated political content that could skew perceptions during a critical election period.

The CCDH also found that the images, though clearly aimed at the U.S. public, were made by pages outside of the U.S. Of the ten most active pages analyzed, six are managed from outside the United States, from countries like Morocco, Pakistan, and Indonesia. These foreign-administered pages collectively attracted over 1.5 million interactions on their AI-generated content, shaping discourse on U.S. policies from abroad. Despite their foreign administration, the pages present themselves as authentically American, featuring personas that appear homegrown.

These images are aimed at a particular demographic

The messages often target vulnerable, less tech-savvy voters. These fabricated images exploit emotional appeals and patriotic symbols, which makes them highly influential and very dangerous. Images of fake veterans, for example, aim to evoke respect and admiration, adding weight to the political messages they appear to endorse. For many voters, military service is deeply tied to patriotism, making these endorsements highly persuasive.

The approach also targets frustrated voters.

Example of an AI-made political meme from the report.

The report describes numerous instances where these artificial veterans appear with political statements, such as “Veterans deserve better than being second to student loans” or “Maybe it’s just me, but I believe veterans deserve far better benefits than those offered to undocumented immigrants.” Both sentiments target specific political frustrations among certain voter segments, appealing to those who feel their values are underrepresented or neglected.

These tactics reflect a broader trend in online disinformation, where AI-generated personas cater to niche political identities, crafting messages tailored to resonate with specific groups. This follows the now-classic disinformation playbook: by simulating “average” American views, these posts tap into cultural debates and amplify divisive topics. AI just puts a new spin on it.

Tech companies should take responsibility

The simplest fix would be to make it easier for users to report suspected manipulated media. However, this alone won’t solve the problem: by the time enough reports come in and someone actually checks the image, the damage will already be done.

As AI continues to advance, social media platforms must adapt their policies to ensure the technology is used responsibly. The onus cannot solely be on users to spot manipulation. For Facebook, this means implementing reliable detection and labeling processes that actively inform users when they encounter synthetic content.

Platforms like Facebook wield a great deal of influence over public opinion, and their policies (or lack thereof) have real-world implications for democratic processes. With the U.S. presidential election approaching, it’s more important than ever for companies to be transparent and to tackle disinformation head-on. Unfortunately, that doesn’t seem to be happening.

As the line between authentic and artificial content blurs, society needs a clear sense of how to deal with synthetic disinformation, and of who bears responsibility for it. This type of problem will only get worse.

The report can be read in its entirety here.

