Are you ready and excited for the human-robot war? Well, Boston Dynamics and five other robotics companies aren’t. Together, they have signed an open letter warning of the dangers of weaponizing robots and software.
This document enshrines a pledge from six leading tech firms — with Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics, and Unitree signing alongside Boston Dynamics — not to weaponize their creations. The pledge underscores the potential dangers of applying cutting-edge robotics, computing, and data-processing technologies towards warfare, not only to armed combatants, but to humanity as a whole.
No terminators
“Untrustworthy people could use [weaponized robots] to invade civil rights or to threaten, harm, or intimidate others,” the letter explains. “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues.”
The letter’s pledge extends to “advanced-mobility general-purpose robots”, as well as the software that drives them. The six companies also said they will take steps to ensure that customers don’t weaponize their products either — although exactly how this will be achieved is yet to be determined.
Despite this hard stance, the letter makes clear that the signatories take no issue with “existing technologies” that governments use to “defend themselves and uphold their laws.”
These include products such as Boston Dynamics’ Spot — a dog-like robot — that police and fire departments use in hazardous situations. Although the robot can and should play a part in such work, Boston Dynamics explains that it is not designed for surveillance or to replace police officers, and that the decision to use force, and its application, should never be handed over to a machine.
It is safe to say that most people today do not like the idea of autonomous weapon systems. Such devices operate without human supervision and can use lethal force if and when their software deems it appropriate. Recent developments in the war in Ukraine, where civilian drones are being retrofitted into anti-tank and anti-personnel grenade delivery systems, are a reminder of just how easily modern technology can be turned to war-making, and how effective it can be. Adding artificial intelligence and high-mobility robotic bodies would only compound the destructive potential of such weapons.
This isn’t the first time scientists or engineers have warned about the immense threat weaponized robots can pose to humanity. Back in 2015, Elon Musk, Steve Wozniak, and the titan of science Stephen Hawking added their names to a very long list of researchers and engineers calling for a worldwide ban on the development of “offensive autonomous weapons”. Musk has since embarked on further efforts to protect humanity from destruction at the hands of AI.
Although the open letter from these companies is commendable and echoes public and academic sentiment across the planet, politics doesn’t seem to dance to its tune. A meeting of the United Nations Convention on Certain Conventional Weapons last year failed to reach consensus on banning weaponized robots, largely because of objections from countries already developing such machines, including the U.S., the UK, and Russia.