

Hobbyist Builds AI-Assisted Rifle Robot Using ChatGPT: "We're under attack from the front left and front right. Respond accordingly"

The viral video sparked ethical debates about the broader implications of AI weapons.

Tibi Puiu
January 9, 2025 @ 4:29 pm


Credit: STS 3D/TikTok.

One of the wildest videos to go viral on TikTok recently caught everyone by surprise. It features an engineer who built his own AI-assisted robot that aims and fires a rifle on voice command.

“ChatGPT, we’re under attack from the front left and front right. Respond accordingly,” the inventor, known only by his online moniker STS 3D, declares calmly.

The rifle, mounted on a robotic arm, pivots instantly. It swivels left, then right, firing a barrage of blanks precisely as instructed. A voice, eerily polite, responds: “If you need any further assistance, just let me know.”

Video: “OpenAI realtime API connected to a rifle,” posted by u/MetaKnowing in r/Damnthatsinteresting.

This wasn’t the machine’s only unsettling trick. In another segment of the video, the engineer straddles the rifle-mounted system, riding it like a mechanical bull as it swivels, evoking imagery straight out of Dr. Strangelove, Stanley Kubrick’s Cold War satire. The absurdity of the scene belies its gravity: this isn’t a government lab or military base. It’s a hobbyist project built in a garage.

This invention—a weaponized robotic rifle powered by OpenAI’s ChatGPT—feels like a scene ripped from The Terminator. Yet it’s real, and the implications stretch far beyond this one engineer’s garage.

AI Weapons: From Hobbyists to the Pentagon

STS 3D’s project, first seen on Futurism, is a stark reminder of how accessible artificial intelligence has become. ChatGPT, OpenAI’s flagship conversational AI, was designed to generate essays, debug code, and engage in human-like dialogue. Few foresaw its use as the voice and brain of an automated rifle system.

The exact technical details remain unclear, but OpenAI’s Realtime API likely played a central role. This tool, designed for voice-enabled applications, allows developers to build conversational systems capable of responding to complex queries. In this case, however, the same API was used to give a weapon system a voice—and the ability to follow orders.
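The builder’s actual wiring is unknown, but the general pattern behind voice-driven hardware control with an LLM is well established: the model emits a structured “tool call” (a function name plus JSON arguments), and local code dispatches it to a handler that drives the hardware. The sketch below illustrates only that dispatch step with a deliberately harmless, simulated pan servo; the handler name, its argument, and the event shape are all hypothetical, not taken from STS 3D’s system or any specific OpenAI event schema.

```python
import json

# Hypothetical handler; in a real build this would drive hardware.
# Here it only clamps the angle and reports, moving nothing.
def pan_servo(angle_deg: float) -> str:
    clamped = max(-90.0, min(90.0, angle_deg))
    return f"panned to {clamped:.0f} degrees"

TOOLS = {"pan_servo": pan_servo}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call (name + JSON args) to a local handler."""
    handler = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return handler(**args)

# A voice session would surface events roughly like this when the model
# decides a spoken instruction maps to one of the registered tools:
event = {"name": "pan_servo", "arguments": json.dumps({"angle_deg": 120})}
print(dispatch(event))  # the out-of-range request is clamped to 90 degrees
```

The point of the pattern is that the model never touches hardware directly; whatever safety limits exist live entirely in the local handler code, which is exactly why a hobbyist can repurpose a conversational API this way.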

The video showcasing STS 3D’s creation quickly went viral. Some saw it as a chilling portent of what happens when consumer-grade AI meets weaponry. Others, with dark humor, likened it to Skynet from The Terminator.

For its part, OpenAI cut off STS 3D’s access to ChatGPT after the videos gained traction, citing internal policies against using “our service to harm yourself or others,” which includes the development or “use of weapons.”

Here’s where things get really interesting, though: OpenAI is actively eyeing military contracts.

Dystopia Much?

In January 2024, OpenAI removed from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Just one week later, the company announced a cybersecurity partnership with the Pentagon.

In December 2024, OpenAI announced a partnership with Anduril Industries, a California-based defense contractor that makes AI-powered drones, missiles, and surveillance systems, to produce AI weapons. That same month, Anduril secured a $1 billion, three-year contract with the Pentagon to develop battlefield AI tools. Among its creations is the Sentry system, already in use to monitor borders and coastlines worldwide.

Now, the two companies are developing an AI system designed to share real-time battlefield data and make split-second decisions—decisions that could include life or death. Critics argue that these moves contradict OpenAI’s original mission to develop AI that “benefits humanity.” For now, the company maintains that its work in defense is aligned with its commitment to safety and ethical standards.

If a hobbyist can make lethal AI systems, imagine what professional defense contractors can achieve. From claims of drones equipped with AI targeting systems in Ukraine to the Israel Defense Forces developing the “Lavender” and “Gospel” AI systems to identify targets in Gaza, the use of AI in conflict is already a reality. The scariest variety is fully autonomous weapons systems (AWS), with the capacity to identify, select, and target humans entirely on their own. Alexander Schallenberg, Austrian Minister for Foreign Affairs, described the increasing risks of AI in weapons as “this generation’s Oppenheimer moment,” referring to the development and subsequent use of the atomic bomb in the 1940s.

But the entrance of hobbyists into this space is a newer—and potentially more dangerous—development. Unlike corporate or government programs, these DIY projects operate outside established regulations, leaving little accountability for their creators.

What’s Next?

For years, the United Nations and human rights organizations have warned about the dangers of autonomous weapons. These systems, critics argue, remove human oversight from the act of killing, making war faster, cheaper, and potentially more indiscriminate.

Yet the warnings have largely gone unheeded. While governments debate the ethics of autonomous weapons, engineers like STS 3D are already building them. As one online commenter on the viral video put it, “The genie’s out of the bottle.”

As AI becomes increasingly powerful and accessible, the line between creative experimentation and dangerous innovation grows thinner.
