

Millions of Americans are falling for AI-generated content on Facebook

With the 2024 U.S. election on the horizon, AI-generated content is flooding social media, blurring the lines between authentic and synthetic content.

Mihai Andrei
October 31, 2024 @ 10:38 pm


As the 2024 U.S. presidential election draws near, social media is more saturated with disinformation than ever before. Traditional disinformation tactics are still at play, but now we also face AI-generated disinformation — an issue that remains largely unchecked.

A recent report by the Center for Countering Digital Hate (CCDH) highlights the impact of AI-generated images. These often depict fictitious veterans, police officers, and everyday citizens, gaining millions of interactions and swaying public opinion. These images, meant to elicit emotional responses, aren’t labeled as AI-generated on Facebook, despite the platform’s policy on transparency.

Capture from the report showing AI-crafted political propaganda.

The CCDH, an NGO that aims to stop the spread of online hate speech and disinformation, analyzed around 170 AI-generated posts spread between July and October 2024. These images were shared more than 476,000 times and gathered over 2.4 million interactions. They were not labeled as AI-crafted, despite Facebook’s own policies.

The images tend to use a similar approach. They incorporate powerful symbols like American flags or soldiers and try to target current issues like veterans’ rights or immigration.

A prominent example highlighted in the report depicts a fabricated veteran holding a sign reading: “They’ll hate me for this, but learning English should be a requirement for citizenship.” This post alone accrued 168,000 interactions, with most commenters expressing their agreement. Other images show fake veterans advocating against student loan forgiveness or pushing for a veterans’ month to rival Pride Month, all designed to resonate with key (typically conservative) voter demographics.

Despite subtle signs of AI generation — distorted hands, misaligned or nonsensical text on uniforms, and vague backgrounds — most users seem to be unaware they are interacting with artificial content. And so, they are unknowingly contributing to the spread of digital disinformation.

AI is already fooling people

Meta, Facebook’s parent company, introduced AI labeling policies early in 2024, promising transparency for AI-generated images. The CCDH found no sign of these labels.

It’s unclear whether Facebook was unable or unwilling to tag these images as AI-made. Either way, the posts ended up tricking many users. People who rely on the platform’s safeguards remain largely in the dark, unable to discern whether an image is a genuine endorsement or a synthetic creation.

AI-generated image of Facebook logos.

Additionally, the report highlights that Facebook’s user-reporting tools provide no clear way to flag suspected AI-generated content. While users can report posts for hate speech or misinformation, there is no specific option for manipulated media. This gap leaves Facebook users without a clear route to alert moderators to AI-generated political content that could skew perceptions during a critical election period.

The CCDH also found that the images, clearly aimed at the US public, were made by pages outside of the US. Of the ten most active pages analyzed, six are managed from outside the United States. They are based in countries like Morocco, Pakistan, and Indonesia. These foreign-administered pages collectively attracted over 1.5 million interactions on their AI-generated content, shaping discourse on U.S. policies from abroad. Despite the foreign administration, these pages present themselves as authentically American, featuring personas that appear homegrown.

These images are aimed at a particular demographic

The messages often target vulnerable, less tech-savvy voters. These fabricated images exploit emotional appeals and patriotic symbols, which makes them both highly influential and dangerous. Images of fake veterans, for example, aim to evoke respect and admiration, adding weight to the political messages they appear to endorse. For many voters, military service is deeply tied to patriotism, making these endorsements highly persuasive.

The approach also targets frustrated voters.

Example of an AI-made political meme from the report.

The report describes numerous instances where these artificial veterans appear with political statements, such as “Veterans deserve better than being second to student loans” or “Maybe it’s just me, but I believe veterans deserve far better benefits than those offered to undocumented immigrants.” Both sentiments target specific political frustrations among certain voter segments, appealing to those who feel their values are underrepresented or neglected.

These tactics reflect a broader trend in online disinformation, where AI-generated personas cater to niche political identities, crafting messages tailored to resonate with specific groups. This is the classic disinformation playbook: by simulating “average” American views, these posts tap into cultural debates and amplify divisive topics. AI just adds a new spin to it.

Tech companies should take accountability

The simplest way to address this would be to make it easier for users to report suspected manipulated media. However, that alone won’t solve the problem: by the time enough reports accumulate and someone actually checks the image, the damage will already be done.

As AI continues to advance, social media platforms must adapt their policies to ensure the technology is used responsibly. The onus cannot solely be on users to spot manipulation. For Facebook, this means implementing reliable detection and labeling processes that actively inform users when they encounter synthetic content.

Platforms like Facebook wield a great deal of influence over public opinion. And their policies — or the lack thereof — have real-world implications for democratic processes. With the U.S. presidential election approaching, it’s more important than ever for companies to be transparent and tackle disinformation. Unfortunately, that doesn’t really seem to be happening.

As the line between authentic and artificial content blurs, society needs a clear framework for dealing with synthetic media, and for deciding who bears the responsibility. This type of problem will only get worse.

The report can be read in its entirety here.
