

Humans and computers can be fooled by the same tricky images

The gap between humans and AI is getting narrower by the day.

Tibi Puiu
March 23, 2019 @ 12:00 am


Computers interpreted the images as an electric guitar, an African grey parrot, a strawberry, and a peacock (in this order). Credit: Johns Hopkins.

The ultimate goal of artificial intelligence (AI) research is to fully mimic the human brain. Right now, humans still have the upper hand, but AI is advancing at a phenomenal pace. Some argue that AIs built on artificial neural networks still have a long way to go, given how easily such systems can be fooled by certain cues like ambiguous images (e.g., television static). However, a new study suggests that humans aren't necessarily any better: the findings show that, in some situations, people make the same wrong decisions a machine would. We're already not that different from the machines we built in our image, the researchers point out.

“Most of the time, research in our field is about getting computers to think like people,” says senior author Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite—we’re asking whether people can think like computers.”

Quick: what's 19×926? I'll save you the trouble: it's 17,594. It took my computer a fraction of a second to give me the right answer. But while we all know computers are far better than humans at crunching raw numbers, they're quite ill-equipped in other areas where humans perform effortlessly. Identifying objects is one of them: we can easily recognize that an object is a chair or a table, a task that AIs have only recently begun to perform decently.

AIs are what enable self-driving cars to scan their surroundings and read traffic lights or recognize pedestrians. Elsewhere, in medicine, AIs are now combing through millions of images, spotting cancer or other diseases from radiological scans. With each iteration, these machines ‘learn’ and are able to come up with a better result next time.

But despite considerable advances, AI pattern recognition can sometimes go horribly wrong. What's more, researchers in the field worry that nefarious agents might exploit this fact to purposefully fool AIs. Reconfiguring just a few pixels can sometimes be enough to throw off an AI. In a security context, this is troublesome.
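To see why a handful of pixel changes can flip a classifier's answer, here is a minimal sketch using a toy linear model. Everything in it (the model, the weights, the labels, the perturbation size) is invented for illustration and is not taken from the study; it only demonstrates the general "nudge every pixel slightly in the worst direction" idea behind adversarial examples.

```python
import numpy as np

# Toy sketch of an adversarial perturbation: for a linear model, the
# gradient of the score with respect to the input is just the weight
# vector, so a small, carefully signed nudge to every pixel can flip
# the predicted class. All numbers here are invented for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # pretend these are learned weights
x = rng.normal(size=100)   # pretend this is a flattened input image

def predict(v):
    return "parrot" if w @ v > 0 else "guitar"

score = w @ x
# Per-pixel budget just large enough to push the score past zero.
eps = 1.05 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(w) * np.sign(score)

print(predict(x), "->", predict(x_adv))   # the label flips
print("largest per-pixel change:", np.abs(x_adv - x).max())
```

The same principle scales to deep networks, where the gradient is obtained by backpropagation; each pixel moves by an amount too small for a human to notice, yet the prediction changes completely.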

Firestone and colleagues wanted to investigate how humans fare in situations where AI cannot come to an unambiguous answer. The research team showed 1,800 people a series of images that had previously tricked computers and gave the participants the same kind of labeling options that the machine had. The participants had to guess which of two options the computer had chosen, one being the computer's decision, the other a random answer.

“These machines seem to be misidentifying objects in ways humans never would,” Firestone says. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”

Computers identified the following images as a digital clock, a crossword puzzle, a king penguin, and an assault rifle. Credit: Johns Hopkins.

The participants chose the same answer as the computers 75% of the time. Interestingly, when the game was changed to give people a choice between a computer's first answer and its next-best guess (e.g., a bagel or a pretzel), humans validated the machine's first choice 91% of the time. The findings suggest that the gap between human and machine isn't as wide as some might think. As for whether the people who took part in the study thought like a machine, I personally think the framing is a bit off. These machines were designed by humans, and their behavior is modeled on ours. If anything, these findings show that machines are behaving more and more like humans, not the other way around.

“The neural network model we worked with is one that can mimic what humans do at a large scale, but the phenomenon we were investigating is considered to be a critical flaw of the model,” said lead author Zhenglong Zhou. “Our study was able to provide evidence that the flaw might not be as bad as people thought. It provides a new perspective, along with a new experimental paradigm that can be explored.”

The findings appeared in the journal Nature Communications.
