

Deepfake videos are already taking over the internet

Deepfake scams are on the rise, and AI-equipped scammers are doing their best to blur the already delicate line between reality and fabrication.

Rupendra Brahambhatt
October 16, 2023 @ 2:38 pm


Imagine waking up one day to discover that everyone — including your family members, friends, neighbors, and even some strangers — is mocking you. The reason? The previous night, they all saw a video of you proclaiming yourself to be the reincarnation of Jesus. 

You would never make such a video, right? Well, you wouldn’t need to. There are people out there who can make one without ever filming you, using AI deepfake tools.

Image credits: ThisIsEngineering/Pexels

For now, you aren’t a likely target unless you happen to be a celebrity. But it’s already happening. Recently, film star Tom Hanks posted on Instagram about a deepfake video of him promoting a dental plan.

“BEWARE!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” Tom Hanks said on Instagram.

This is far from the first time a celebrity deepfake has been used to scam people online.    

Deepfake videos are all over the internet

A deepfake is a piece of synthetic media, often a video or audio clip, created with artificial intelligence (AI) techniques to make it appear as if someone said or did something they never actually said or did. Deepfakes are generated by training machine learning algorithms on a large dataset of images or audio clips so they learn the unique features and movements of a person’s face or voice. Once trained, these algorithms can generate highly realistic and convincing fake content.

In short, any doctored voice, image, or video of a real person created with deep learning counts as a deepfake. People create such content either for fun, to rack up views, or to scam others.

In 2017, a Reddit user named “deepfakes” began posting pornographic content in which celebrities’ faces were superimposed onto the bodies of performers in explicit videos. These clips were among the earliest examples of deepfakes.

However, deepfakes gained wider popularity in 2018, when social media users began posting movie scenes with actors’ faces swapped. Another notable early example was a video in which former U.S. President Barack Obama’s face was digitally manipulated to say things he never said.

Filmmaker Jordan Peele released that video in April 2018 to raise awareness about the potential misuse of deepfake technology. Since then, AI technology has improved dramatically, and deepfakes have become more convincing and dangerous than ever.

For example, much like Tom Hanks, CBS anchor Gayle King and YouTube star MrBeast recently posted warnings on their social media channels about deepfake videos that use their voices and faces to sell phony products.

“Lots of people are getting this deepfake scam ad of me. Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem,” MrBeast said on Twitter.

How the heck are deepfake videos even made?

Creating a deepfake video is a multi-step process driven by artificial intelligence. It begins with collecting a substantial dataset of images of the target person’s face, along with a dataset of the person whose face is to be replaced.

This can be done using videos and images already available on the target person’s social media. The collected data is then processed with facial recognition algorithms that align facial features, such as the eyes, nose, and mouth, so they match as closely as possible between the source and target.
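As a rough illustration of that alignment step, the sketch below uses the open-source face_recognition library (built on dlib) to locate the facial landmarks a face-swap pipeline would line up between two images. The file names are placeholders, and real tools use far more elaborate alignment than this.

```python
# Illustrative sketch only: detect facial landmarks in a source and a target image
# so they can be aligned. Assumes the open-source `face_recognition` package;
# "source_face.jpg" and "target_face.jpg" are placeholder file names.
import face_recognition

source_img = face_recognition.load_image_file("source_face.jpg")
target_img = face_recognition.load_image_file("target_face.jpg")

# Each call returns a list of dicts mapping features (eyes, nose, lips, chin, ...)
# to pixel coordinates, one dict per detected face.
source_landmarks = face_recognition.face_landmarks(source_img)
target_landmarks = face_recognition.face_landmarks(target_img)

if source_landmarks and target_landmarks:
    # Compare a few keypoints a face-swap pipeline would typically align.
    for feature in ("left_eye", "right_eye", "nose_tip", "top_lip"):
        print(feature,
              "source:", source_landmarks[0][feature][:2],
              "target:", target_landmarks[0][feature][:2])
```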

Next, deep learning models called Generative Adversarial Networks (GANs) are used to train two neural networks against each other: the first, called the generator, creates the fake imagery, while the second, the discriminator, tries to spot differences between the real footage and the generated version.

This back-and-forth continues until the generated content closely resembles the target. The result is then refined with lighting and color-grading software. That takes care of the picture; next comes the voice, for which audio synthesis software matches the lip movements to synthesized speech. Finally, the video is ready.
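To make the generator-versus-discriminator idea concrete, here is a minimal, hypothetical training loop in PyTorch. It is a toy sketch, with tiny fully connected networks operating on flattened face crops, not the architecture any real deepfake tool ships, but it shows the adversarial back-and-forth described above.

```python
# Toy GAN training step: a generator learns to produce fake face crops while a
# discriminator learns to tell them apart from real ones. Sizes are arbitrary
# assumptions for illustration.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # flattened 64x64 RGB face crop (assumption)
NOISE_DIM = 128

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),          # outputs a fake face crop
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                           # real-vs-fake score (logit)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_faces: torch.Tensor):
    batch = real_faces.size(0)
    fake_faces = generator(torch.randn(batch, NOISE_DIM))

    # 1) Discriminator learns to label real crops 1 and generated crops 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real_faces), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_faces.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Generator learns to fool the discriminator into outputting 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_faces), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In practice, the generator in face-swap tools is usually an encoder-decoder conditioned on the source face rather than on random noise, but the training dynamic is the same: each network’s mistakes become the other’s training signal.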

Deepfakes are possibly the most dangerous AI technology

Unfortunately, deepfake scams are not limited to selling cheap products. In March 2019, an executive at a U.K.-based energy firm received a deepfake audio call he believed was from his boss, the CEO of the firm’s German parent company, and authorized a $243,000 transfer to a supplier in Hungary.

The money never reached the supplier in Hungary; instead, it was funneled into multiple bank accounts in Mexico. And this is just one of the many things scammers can do with the technology.

A study published in 2020 suggests that deepfakes may emerge as the biggest security challenge posed by AI-based technologies over the next 15 years.

“Humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence, despite the long history of photographic trickery. But recent developments in deep learning have significantly increased the scope for the generation of fake content,” the study authors note.

They believe deepfake technology could be used for various criminal purposes. For example, scammers can pretend to be children when talking to elderly persons over video calls to trick them into sending money. They can make fake phone calls to gain access to secure systems or create fabricated videos of politicians to manipulate public opinion.

However, not everything about deepfake technology is bad. Like any other AI application, it has its upsides, and there are ways it can genuinely benefit people.

For instance, some health experts are using deepfake technology to treat patients with visualization disorders such as aphantasia. Some educators also use deepfakes to improve the quality of online classes.

Beyond this, deepfakes could also benefit mental health patients and enrich the experience of visitors at museums and historical sites.

However, in the wrong hands, this technology does pose a serious threat. Therefore, the time has come for social media companies, policymakers, and AI experts to work together and devise ways to prevent the misuse of deepfake technology. 
