Hard-wiring AI with confidence and self-doubt could help it perform its tasks better while recognizing when it needs help or supervision, a team of researchers believes.
Confidence: that thing we all wish we had at parties but which can thankfully be substituted with alcohol. Having confidence in one's own abilities is generally considered a good thing, although, as a certain presidency has shown, too much of it and you annoy the whole planet. That's an important point to consider, given that we're toying around with creating actual minds in the form of AI. So would confidence, and its mirror twin, doubt, prove of any use to a thinking machine?
That's the question a team of researchers led by Dylan Hadfield-Menell from the University of California, Berkeley, set out to answer. We already know part of the answer, he says: we know what happens when machines get overconfident. A perfect example is Facebook's news feed algorithms, which were designed to suggest articles and posts matching people's interests, based on what they click on or share. But by following those instructions to the letter, they ended up filling some feeds with nothing but fake news. A sprinkling of self-doubt would have been a great boon in that case.
“If Facebook had this thinking, we might not have had such a problem with fake news,” says Hadfield-Menell.
The team believes the answer lies in human oversight. Instead of showing every article or post the algorithm thinks a Facebook user wants to see, a more uncertain system would be prompted to defer to a human referee whenever a link smells fishy.
But knowing that doubt can make our machines better at what they do isn't the same as knowing how, or how much of it, should be built in. So the team set up an experiment to determine how a robot's sense of its own usefulness could be put to work in the design of artificial intelligence.
The off-switch
The team designed a mathematical model of a human-robot interaction they call the "off-switch game" to see how a machine's confidence level would shape its interactions with us. Less a game per se and more a simulation, it consists of a robot with an off switch that is given a task to do. A human overseeing the robot can press the switch at any time to stop it, but the robot, in turn, can choose to disable the switch if it so desires.
Unsurprisingly, when the machine was given a high degree of confidence in its own usefulness, it would never allow the human player to switch it off. In contrast, a robot endowed with low confidence would always allow the player to shut it down, even when it was performing its task perfectly.
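To make that trade-off concrete, here is a minimal sketch of the expected-value reasoning behind the result. It assumes a perfectly rational human overseer who presses the switch exactly when the robot's action is genuinely harmful; the function name, numbers, and belief model are illustrative and not taken from the paper.

```python
import numpy as np

def expected_values(mean_utility, uncertainty, samples=100_000):
    """Toy sketch of off-switch-style reasoning (illustrative, not the paper's exact model).

    The robot believes the utility u of completing its task is roughly
    Normal(mean_utility, uncertainty). A perfectly rational human overseer
    presses the off switch exactly when u < 0.
    """
    u = np.random.normal(mean_utility, uncertainty, samples)

    act_anyway = u.mean()              # disable the switch and act on its own judgment
    switch_off = 0.0                   # shut down and accomplish nothing
    defer = np.maximum(u, 0).mean()    # wait: the human stops it whenever u < 0

    return act_anyway, switch_off, defer

# High uncertainty (low confidence): deferring beats acting unilaterally,
# because the human filters out the genuinely bad cases.
print(expected_values(mean_utility=0.5, uncertainty=2.0))

# Near-zero uncertainty (total confidence): deferring adds nothing,
# so the robot has no incentive to leave the switch enabled.
print(expected_values(mean_utility=0.5, uncertainty=0.01))
```

With a wide belief, letting the human filter out the bad cases has higher expected value than acting unilaterally; as the belief narrows toward certainty, that advantage vanishes, mirroring the behavior the researchers describe.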
Hadfield-Menell believes this is a good indication that we shouldn't make AIs too "insecure". For example, if you task your autonomous car with taking the kids to school in the morning, it should never let a child take control. In this case, the AI should be confident that its own ability is greater than the children's and refuse to hand over the wheel. But if you were in the car and told it to stop, it should relinquish control. The best robots, he adds, will be those that can best balance these two extremes.
While the idea of a robot refusing a command to stop or shut down might seem a bit scary or far-fetched (and has been debated at length in the past), context is everything. Humans are fallible too, and you wouldn't want a robotic firefighter to abandon a rescue or stop fighting a blaze because it was mistakenly ordered to, or a robotic nurse to stop treating a delirious patient who orders it to shut down. This kind of confidence is a key part of how an AI operates and something we'll have to consider before putting people and AIs side by side in the real world.
The issue is wider than simple confidence, however. As machines are expected to make more and more decisions that directly impact human safety, it's important that we put a solid ethical framework in place sooner rather than later, according to Hadfield-Menell. Next, he plans to study how a robot's decision-making changes when it has access to more information about its own usefulness, for example, how a coffee-pot robot might behave differently in the morning if it knows that's when it's most useful. Ultimately, he wants his research to help create AIs that are more predictable and that make decisions humans find more intuitive.
The full paper, "The Off-Switch Game," is available as a preprint on arXiv.