The ultimate goal of artificial intelligence (AI) research is to fully mimic the human brain. Right now, humans still have the upper hand, but AI is advancing at a phenomenal pace. Some argue that AIs built on artificial neural networks still have a long way to go, pointing to how easily such systems can be fooled by ambiguous images (e.g. patterns resembling television static). However, a new study suggests that humans aren’t necessarily any better: in some situations, people make the same wrong calls a machine would. We’re already not that different from the machines we built in our image, the researchers point out.
“Most of the time, research in our field is about getting computers to think like people,” says senior author Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite—we’re asking whether people can think like computers.”
Quick: what’s 19×926? I’ll save you the trouble: it’s 17,594. It took my computer a fraction of a fraction of a second to spit out the right answer. But while we all know computers are far better than humans at crunching raw numbers, they’re quite ill-equipped in other areas where humans perform effortlessly. Identifying objects is one of them: we can easily recognize that an object is a chair or a table, a task that AIs have only recently begun to perform decently.
AIs are what enable self-driving cars to scan their surroundings and read traffic lights or recognize pedestrians. Elsewhere, in medicine, AIs are now combing through millions of images, spotting cancer and other diseases in radiological scans. With each iteration, these machines ‘learn’ and deliver better results the next time around.
But despite considerable advances, AI pattern recognition can sometimes go horribly wrong. What’s more, researchers in the field worry that nefarious agents might exploit this fact to purposefully fool AIs. Tweaking just a few pixels can sometimes be enough to throw off an AI. In a security context, this can be troublesome.
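To make the pixel-tweaking idea concrete, here is a minimal sketch that assumes nothing about the networks used in the study: a toy linear “classifier” over an 8×8 patch, and a per-pixel nudge just large enough to flip its decision. Real adversarial attacks target deep networks and typically use their gradients, but the underlying point, that a small change in pixel values can change the answer, is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.standard_normal((8, 8))   # toy classifier: score = sum(weights * image)
image = rng.random((8, 8))              # toy 8x8 "image", pixel values in [0, 1]

score = float(np.sum(weights * image))
label = int(score > 0)

# Nudge every pixel against the sign of its weight, by just enough to push the
# class score across zero; the per-pixel change is a fraction of the 0-1 range.
eps = (abs(score) + 1e-3) / np.abs(weights).sum()
perturbed = image - np.sign(score) * eps * np.sign(weights)
new_score = float(np.sum(weights * perturbed))

print(f"per-pixel change: {eps:.4f}")
print(f"label before: {label}, label after: {int(new_score > 0)}")
```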
Firestone and colleagues wanted to investigate how humans fare in situations where an AI cannot come to an unambiguous answer. The research team showed 1,800 people a series of images that had previously tricked computers and gave the participants the same kind of labeling options that the machine had. The participants had to guess which of two options the computer had chosen: one was the computer’s decision, the other a random answer. The video below explains how all of this works.
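For a concrete sense of how such a two-alternative task is scored, here is a small, purely hypothetical sketch. The trials and responses below are made up for illustration; only the scoring logic, agreement with the machine measured against the 50% chance baseline of a two-option guess, reflects the design described above.

```python
# Hypothetical toy data: each trial pairs the machine's label with a random foil
# and records which of the two the participant picked. This is not the study's
# real data; it only illustrates how agreement is tallied.
trials = [
    {"machine": "bagel",     "foil": "armadillo", "participant": "bagel"},
    {"machine": "pretzel",   "foil": "lampshade", "participant": "pretzel"},
    {"machine": "starfish",  "foil": "guitar",    "participant": "guitar"},
    {"machine": "armadillo", "foil": "teapot",    "participant": "armadillo"},
]

agreements = sum(t["participant"] == t["machine"] for t in trials)
agreement_rate = 100 * agreements / len(trials)

# With two options per trial, random guessing lands at 50% on average;
# the study reports participants agreeing with the machine about 75% of the time.
print(f"agreement with the machine: {agreement_rate:.0f}% (chance level: 50%)")
```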
“These machines seem to be misidentifying objects in ways humans never would,” Firestone says. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”
The participants chose the same answer as the computers 75% of the time. Interestingly, when the game was changed to give people a choice between a computer’s first answer and its next-best guess (say, a bagel versus a pretzel), humans validated the machine’s first choice 91% of the time. The findings suggest that the gap between human and machine isn’t as wide as some might think. As for whether the people who took part in the study thought like a machine, I personally think the framing is a bit off. These machines were designed by humans, and as such their behavior is modeled on our own. If anything, these findings show that machines are behaving more and more like humans, not the other way around.
“The neural network model we worked with is one that can mimic what humans do at a large scale, but the phenomenon we were investigating is considered to be a critical flaw of the model,” said lead author Zhenglong Zhou. “Our study was able to provide evidence that the flaw might not be as bad as people thought. It provides a new perspective, along with a new experimental paradigm that can be explored.”
The findings appeared in the journal Nature Communications.