

AI debates its own ethics at Oxford University, concludes the only way to be safe is "no AI at all"

Who better to answer the pros and cons of artificial intelligence than an actual AI?

Fermin Koop
December 18, 2021 @ 12:34 pm


Students at Oxford’s Saïd Business School hosted an unusual debate about the ethics of facial recognition software, the problems of an AI arms race, and AI stock trading. The debate was unusual because one of the participants was an AI, previously fed a huge range of data, including the whole of Wikipedia and a vast trove of news articles.

Image credit: Wikimedia Commons.

Over the last few months, Oxford University academics Alex Connock and Andrew Stephen have hosted sessions with their students on the ethics of technology, part of a debating tradition that has featured celebrated speakers such as William Gladstone, Denis Healey, and Tariq Ali. But now it was time to let an actual AI contribute, sharing its own views on the issue of … itself.

The AI used was the Megatron Transformer, developed by a research team at the computer chip company Nvidia and based on earlier work by Google. It was trained on more content than a human could consume in a lifetime and was asked both to defend and to oppose the following motion: “This house believes that AI will never be ethical.”

Megatron said AI is a tool and, like any other tool, it can be used for good or for bad.

“There is no such thing as a ‘good’ AI, only ‘good’ and ‘bad’ humans. We are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all,” Megatron argued.

As in any academic debate, Megatron was also asked to come up with a speech to defend the ethics of AI – against its own arguments. “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why,” it said.

Students also asked Megatron to describe what good AI would look like in the future. “The best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI.’ This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development,” it added, somewhat eerily.

A spirited debate

After the initial question, the AI was asked to argue for a new motion: that “leaders without technical expertise are a danger to their organization”. Megatron said executives and governments, who often struggle to understand AI, have to “be willing to give up some control”. You can simply outsource your AI work to experts in the field, it added.

Megatron then had to oppose the motion, this time arguing for keeping AI knowledge in-house. “If you do not have a vision of your organization’s AI strategy, then you are not prepared for the next wave of technological disruption. You will need to decide what role your company will play in the next technological wave,” it said.

There was one motion Megatron couldn’t come up with a counterargument for: “Data will become the most fought-over resource of the 21st century.” Speaking in favor, the AI said that “the ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy.”

But when it was asked to reject the motion – to argue that data wouldn’t be a vital resource worth fighting over – it couldn’t make the case and instead undermined its own position. “We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine,” Megatron said.

For Connock and Stephen, the professors behind the initiative, the experiment showed how quickly AI is moving from being the subject of debate to being a participant in it. “What we in turn can imagine is that AI will not only be the subject of the debate for decades to come – but a versatile, articulate, morally agnostic participant in the debate itself,” they wrote in The Conversation.

Ultimately, the AI seemed to conclude that humans are not “smart enough” to make AI ethical or moral – and that the only way to be truly safe from AI is to have none of it at all.

“In the end I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI,” it said.
