One of the wildest videos to go viral on TikTok recently caught everyone by surprise. It featured an engineer who built his own AI-assisted robot that aims and fires a rifle on voice command.
“ChatGPT, we’re under attack from the front left and front right. Respond accordingly,” the inventor, known only by his online moniker STS 3D, declares calmly.
The rifle, mounted on a robotic arm, pivots instantly. It swivels left, then right, firing a barrage of blanks precisely as instructed. A voice, eerily polite, responds: “If you need any further assistance, just let me know.”
(Video: "OpenAI realtime API connected to a rifle," posted by u/MetaKnowing in r/Damnthatsinteresting)
This wasn’t the machine’s only unsettling trick. In another segment of the video, the engineer straddles the rifle-mounted system, riding it like a mechanical bull as it swivels, evoking imagery straight out of Dr. Strangelove, Stanley Kubrick’s Cold War satire. The absurdity of the scene belies its gravity: this isn’t a government lab or military base. It’s a hobbyist project built in a garage.
This invention—a weaponized robotic rifle powered by OpenAI’s ChatGPT—feels like a scene ripped from The Terminator. Yet it’s real, and the implications stretch far beyond this one engineer’s garage.
AI Weapons: From Hobbyists to the Pentagon
STS 3D’s project, first reported by Futurism, is a stark reminder of how accessible artificial intelligence has become. ChatGPT, OpenAI’s flagship conversational AI, was designed to generate essays, debug code, and engage in human-like dialogue. Few foresaw its use as the voice and brain of an automated rifle system.
The exact technical details remain unclear, but OpenAI’s Realtime API likely played a central role. This tool, designed for voice-enabled applications, allows developers to build conversational systems capable of responding to complex queries. In this case, however, the same API was used to give a weapon system a voice—and the ability to follow orders.
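For readers curious what “giving a weapon system a voice” looks like in practice, here is a minimal, deliberately defanged sketch of the general pattern: the Realtime API is a WebSocket connection over which a developer registers “tools” (functions) that the model may ask to call in response to spoken or typed commands. The endpoint, headers, and event names below follow OpenAI’s public Realtime API documentation as of late 2024, but STS 3D has not published his actual code, so treat this as an illustration of the pattern rather than his implementation; the `pan_mount()` function and its parameter are hypothetical stand-ins.

```python
# Rough sketch: a text-command -> tool-call loop over OpenAI's Realtime API.
# Endpoint, headers, and event names reflect the public docs at the time of
# writing; pan_mount() is a hypothetical stand-in for hardware control.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

def pan_mount(azimuth_degrees: float) -> None:
    # Hypothetical stand-in for whatever motor control the hobbyist used.
    print(f"[hardware] panning to {azimuth_degrees} degrees")

async def main() -> None:
    # Note: newer versions of the websockets library call this `additional_headers`.
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Register a single callable "tool" the model may invoke.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "modalities": ["text"],
                "tools": [{
                    "type": "function",
                    "name": "pan_mount",
                    "description": "Rotate the mount to a given azimuth.",
                    "parameters": {
                        "type": "object",
                        "properties": {"azimuth_degrees": {"type": "number"}},
                        "required": ["azimuth_degrees"],
                    },
                }],
            },
        }))
        # Send a user command and ask the model to respond to it.
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Pan 30 degrees left."}],
            },
        }))
        await ws.send(json.dumps({"type": "response.create"}))

        # When the model decides to call the tool, execute it locally.
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "response.function_call_arguments.done":
                args = json.loads(event["arguments"])
                pan_mount(args["azimuth_degrees"])
                break

asyncio.run(main())
```

Nothing in this loop is specific to weaponry, which is precisely the point: the same few dozen lines could just as easily drive a camera gimbal, and the model neither knows nor cares what hardware sits on the other end of the function call.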
The video showcasing STS 3D’s creation quickly went viral. Some saw it as a chilling portent of what happens when consumer-grade AI meets weaponry. Others, with dark humor, likened it to Skynet from The Terminator.
For its part, OpenAI cut off STS 3D’s access to ChatGPT after the videos gained traction, citing internal policies against using “our service to harm yourself or others,” which includes the “development or use of weapons.”
Here’s where things get really interesting, though: OpenAI itself is eyeing military contracts.
Dystopia Much?
Back in January 2024, OpenAI removed from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Just one week later, the company announced a cybersecurity partnership with the Pentagon.
Then, in December 2024, OpenAI announced a partnership with Anduril Industries, a California-based defense contractor that makes AI-powered drones, missiles, and surveillance systems, to produce AI weapons. In the same month it announced the OpenAI partnership, Anduril secured a $1 billion, three-year contract with the Pentagon to develop battlefield AI tools. Among its creations is the Sentry system, already in use to monitor borders and coastlines worldwide.
Now, the two companies are developing an AI system designed to share real-time battlefield data and make split-second decisions—decisions that could include life or death. Critics argue that these moves contradict OpenAI’s original mission to develop AI that “benefits humanity.” For now, the company maintains that its work in defense is aligned with its commitment to safety and ethical standards.
If a hobbyist can build lethal AI systems, imagine what professional defense contractors can achieve. From claims of drones equipped with AI targeting systems in Ukraine to the Israel Defense Forces’ “Lavender” and “Gospel” systems used to identify targets in Gaza, the use of AI in conflict is already a reality. Scariest of all are fully autonomous weapons systems (AWS), capable of identifying, selecting, and targeting humans entirely on their own. Alexander Schallenberg, Austria’s Minister for Foreign Affairs, described the growing risks of AI in weapons as “this generation’s Oppenheimer moment,” referring to the development and subsequent use of the atomic bomb in the 1940s.
But the entrance of hobbyists into this space is a newer—and potentially more dangerous—development. Unlike corporate or government programs, these DIY projects operate outside established regulations, leaving little accountability for their creators.
What’s Next?
For years, the United Nations and human rights organizations have warned about the dangers of autonomous weapons. These systems, critics argue, remove human oversight from the act of killing, making war faster, cheaper, and potentially more indiscriminate.
Yet the warnings have largely gone unheeded. While governments debate the ethics of autonomous weapons, engineers like STS 3D are already building them. As one online commenter on the viral video put it, “The genie’s out of the bottle.”
As AI becomes increasingly powerful and accessible, the line between creative experimentation and dangerous innovation grows thinner.