

Emotional computers really freak people out -- a new take on the uncanny valley

Be like us, but not us.

Alexandru Micu
March 13, 2017 @ 7:27 pm


New research shows that AIs we perceive as too mentally human-like can unnerve us even if their appearance isn’t human, furthering our understanding of the ‘uncanny valley’ and potentially directing future work into human-computer interactions.

Image credits kuloser / Pixabay.

Back in the 1970s, Japanese roboticist Masahiro Mori advanced the concept of the ‘uncanny valley’ — the idea that humans will appreciate robots and animations more and more as they become more human-like in appearance, but find them unsettling as they become almost-but-not-quite-human. In other words, we know how a human should look, and a machine that ticks some of the criteria but not all is too close for comfort.

The uncanny valley of the mind

That’s all well and good for appearance — but what about the mind? To find out, Jan-Philipp Stein and Peter Ohler, psychologists at the Chemnitz University of Technology in Germany, had 92 participants observe a short conversation between two virtual avatars, one male and one female, in a virtual plaza. The characters talked about their exhaustion from the hot weather, then the woman described her frustration at having so little free time and her annoyance at waiting for a friend who was running late, and the man expressed his sympathy for her plight. Pretty straightforward small talk.

The trick was that while everyone witnessed the same scene and dialogue, the participants were given one of four context stories. Half were told that the avatars were controlled by computers, and the other half that they were human-controlled. Independently, half were told that the dialogue was scripted and half that it was spontaneous, so each of the four context stories went to one quarter of the participants.

Out of all the participants, those who were told that they’d be witnessing two computers interact on their own reported the scene as more eerie and unsettling than the other three groups did. People were fine with humans or script-driven computers exhibiting natural-looking social behavior, but when a computer showed frustration or sympathy on its own, it put people on edge, the team reports.

Because the team elicited this response purely through the framing they presented — the scene itself never changed — they call the phenomenon the ‘uncanny valley of the mind,’ distinguishing the effect of a machine’s perceived personality from that of its appearance. Emotional behavior, they note, can seem uncanny all on its own.

In our own image

Image credits skeeze / Pixabay.

The main takeaway from the study is that people may not be as comfortable with computers or robots displaying social skills as they think they are. It’s all fine and dandy if you ask Alexa about the CIA and she answers or shuts down, but expressing frustration that you keep asking her that question might be too human for comfort. And with social behavior, the effect may be even more pronounced than with appearance alone — because appearance is obvious, but you’re never sure exactly how human-like the computer’s programming is.

Stein believes the volunteers who were told they were watching two spontaneous computers interact were unsettled because they felt their human uniqueness was under threat. If computers can emulate us, the thinking goes, what’s stopping them from taking control of our technology? In future research, he plans to test whether this uncanny-valley-of-the-mind effect can be mitigated when people feel they have control over the human-like agents’ behavior.

So are human-like bots destined to fail? Not necessarily — people may have found the situation creepy precisely because they were only witnessing it. It’s like having a conversation with Cleverbot, only a cleverer one. A Clever2bot, if you will. It’s fun while you’re doing it, but once you close the conversation and mull it over, you feel like something was off with the talk.

By interacting directly with the social bots, humans may actually find the experience pleasant, thus reducing its creepy factor.

The full paper “Feeling robots and human zombies: Mind perception and the uncanny valley” has been published in the journal Cognition.
