

Weapons shouldn't be able to decide themselves to end a life - Hawking, Musk, Wozniak sign letter requesting the ban of autonomous weapons and military AI

A very large number of scientific and technological luminaries have signed an open letter calling for the world's governments to ban the development of "offensive autonomous weapons" to prevent a "military AI arms race."

Alexandru Micu
August 7, 2015 @ 9:43 am


One of the cornerstone events in Frank Herbert’s fictional Dune Universe is the Butlerian Jihad – an empire-wide crusade against thinking machines and AI of any kind.

Jihad, Butlerian: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as “Thou shalt not make a machine in the likeness of a human mind.”

A militant group calling themselves the Titans exploited humanity’s over-reliance on technology to gain dominion over the entire human race. They transplanted their brains into mechanical bodies, becoming immortal and nearly unstoppable, and enslaved humankind. But by granting too much power over their computerized empire to the AI Omnius, they were in turn overthrown by it. The rogue program saw no value in human life, and the deaths it caused drove humanity to rise up in revolt and, after its final victory, ban AIs and computers forever.

A photo from the ‘Campaign to Stop Killer Robots’ which called for a pre-emptive ban on lethal robot weapons in 2013.
Image via observer.com

The tale has all the makings of a great story – a hero you feel for, humanity as the underdog, and overbearing robot overlords. And, according to many researchers, programmers and tech experts, it may have something even more important, something every good story needs.

It may have a kernel of truth

Elon Musk and Stephen Hawking have both previously warned of the dangers of advanced AI. Musk said that AI is “potentially more dangerous than nukes,” while Hawking was far more optimistic, merely saying that AI is “our biggest existential threat.”

The two have added their names to those of a very large number of scientific and technological heavyweights who have signed an open letter to be presented at the International Joint Conferences on Artificial Intelligence (IJCAI) in Buenos Aires tomorrow. Noam Chomsky, the Woz, and dozens of other AI and robotics researchers have also signed the letter, calling for the world’s governments to ban the development of “offensive autonomous weapons” to prevent a “military AI arms race.”

Most of the letter addresses the issue of today’s “dumb” robots, vehicles and munitions being turned into smart autonomous weapons. Cruise missiles and remotely piloted drones are acceptable, the letter says, because they cannot choose to destroy or kill by themselves, as “humans make all targeting decisions.”

So where do we draw the line?

The letter voices the concern of many scientists that weaponizing AIs is a slippery slope that could very well lead to our extinction. The development of fully autonomous weapons that can fight and kill without human intervention should be nipped in the bud, scientists agree. And the letter warns us that once the first AI is weaponized, many more will follow:

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the letter reads.

Later, the letter draws a strong parallel between autonomous weapons and chemical/biological warfare:

“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”

The letter is being presented at IJCAI by the Future of Life Institute. It isn’t entirely clear who the letter is addressed to, other than the academics and researchers who will be attending the conferences. Perhaps it’s just intended to generally raise awareness of the issue, so that we don’t turn a blind eye to any autonomous weapons research being carried out by major military powers.

The main issue with AI in general, and autonomous weapons in particular, is that they are transformational, game-changing technologies. Once we create an advanced AI, or a weapons system that can decide for itself whom to attack, there’s no turning back. We couldn’t put gunpowder or nuclear weapons back in the bag, and autonomous weaponry would be no different.

There will always be Ix and Tleilax.

To tie the Dune parallel up in a neat little bow: in Herbert’s fictional universe, the planets Ix and Tleilax design and produce technology that was outlawed by the Butlerian Jihad but is tolerated by the Empire – a kind of technological “gray area.”

And the same issue stands with the letter. The history of global technology regulation warns us that making this kind of statement is much easier than realizing what it asks for. What exactly do we ban, and how do we make sure the ban sticks? The thousands of scientists who have signed the letter to ban military use of AI may have inadvertently created restrictions on their own ability to share software with international collaborators or develop future products.

As Patrick Lin, director of the Ethics & Emerging Sciences Group at California Polytechnic State University, told io9.com:

“Any AI research could be co-opted into the service of war, from autonomous cars to smarter chat-bots… It’s a short hop from innocent research to weaponization.”
