

Endowing AI with confidence and doubt will make it more useful, paper argues

More like us, in other words.

Alexandru Micu
June 6, 2017 @ 8:39 pm


Hard-wiring AIs with confidence and self-doubt could help them perform their tasks better while recognizing when they need help or supervision, a team of researchers believes.

Image credits: Tero Vesalainen.

Confidence — that thing we all wish we had at parties but can thankfully substitute with alcohol. Having confidence in one's own abilities is generally considered a good thing, although, as a certain presidency shows, too much of it can annoy the whole planet. That's an important point to consider now that we're toying around with creating actual minds in the form of AI. So would confidence, and its mirror twin doubt, prove of any use to a thinking machine?

That’s the question a team of researchers led by Dylan Hadfield-Menell from the University of California, Berkeley, set out to answer. We already know part of the answer — we know what happens when machines get overconfident, he says. A perfect example is Facebook’s newsfeed algorithms. They were designed to suggest articles and posts matching people’s interests, based on what those people click on or share. But by following these instructions to the letter, they ended up filling some feeds with little but fake news. A sprinkling of self-doubt would have been a great boon in this case.

“If Facebook had this thinking, we might not have had such a problem with fake news,” says Hadfield-Menell.

The team believes the answer lies in human oversight. Instead of showing every article or post the algorithm thinks a Facebook user wants to see, a more uncertain system would be prompted to defer to a human referee in case a link smells fishy.

But knowing that doubt can make our machines better at what they do isn’t the same as knowing how, or how much of it, should be built in. So the team set up an experiment to determine how a robot’s sense of its own usefulness could be used in the creation of artificial intelligence.

The off-switch

The team designed a mathematical model of a human-robot interaction they call the “off-switch game” to see how a machine’s confidence levels would impact its interaction with us. Less of a game per se and more of a simulation, it basically consists of a robot with an off switch which is given a task to do. A human overseeing the bots can press this button at any time to stop the robot, but on the other hand, the robot can choose to disable this switch if it so desires.

Not very surprisingly, when the machine was given a high degree of confidence it would never allow the human player to switch it off. In contrast, a robot endowed with low confidence would always allow the player to shut it down, even if it was performing its task perfectly.
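The behavior described above can be sketched as a toy decision rule. This is a hypothetical simplification for illustration, not the paper's actual model: the robot believes its action is worth `believed_value` to the human, is only `confidence` sure that belief is right, and decides whether to let the human shut it down by comparing expected payoffs.

```python
def robot_allows_shutdown(believed_value: float, confidence: float) -> bool:
    """Toy version of the off-switch decision (illustrative, not the paper's model).

    The robot thinks its current action is worth `believed_value` (> 0) to the
    human, but is only `confidence` (0..1) sure that belief is correct.

    - If it defers, the human presses the switch and the payoff is 0.
    - If it disables the switch and its belief is right, it earns
      `believed_value`; if wrong, we assume the action is equally harmful
      and it earns `-believed_value`.
    """
    expected_if_disabled = confidence * believed_value + (1 - confidence) * (-believed_value)
    expected_if_deferring = 0.0
    # Defer to the human whenever deferring is at least as good in expectation.
    return expected_if_deferring >= expected_if_disabled
```

Under this rule a highly confident robot (say `confidence=0.95`) disables the switch, while an insecure one (`confidence=0.3`) always allows shutdown — mirroring the two extremes the experiment found.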

Hadfield-Menell believes this is a good indication that we shouldn’t make AIs too “insecure”. For example, an autonomous car tasked with taking the kids to school in the morning should never let a child take control. In this case, the AI should be confident that its own ability is greater than that of the children and refuse to relinquish control. But if you were in the car and told it to stop, it should comply. The best robots, he adds, will be those that can best balance these two extremes.

While the idea of a robot refusing a command to stop or shut down might seem a bit scary or far-fetched (and has been debated at large in the past), context is everything. Humans are fallible too, and you wouldn’t want a robotic firefighter to abandon a rescue or stop putting out a fire because it was mistakenly ordered to. Or a robotic nurse to stop treating a delirious patient who orders it to shut down. This confidence is a key part of AI operation and something we’ll have to consider before putting people and AIs side by side in the real world.

The issue is wider than simple confidence, however. As machines will be expected to make more and more decisions that directly impact human safety, it’s important that we put a solid ethical framework in place sooner rather than later, according to Hadfield-Menell. Next, he plans to see how a robot’s decision-making changes with access to more information regarding its own usefulness — for example, how a coffee-pot robot’s behavior might change in the morning if it knows that’s when it’s most useful. Ultimately, he wants his research to help create AIs that are more predictable and make decisions that are more intuitive to us humans.

The full paper “The Off-Switch Game” has been published on the preprint server arXiv.
