New research shows that AIs we perceive as too mentally human-like can unnerve us even if their appearance isn't human, deepening our understanding of the 'uncanny valley' and potentially guiding future work on human-computer interaction.
Back in the 1970s, Japanese roboticist Masahiro Mori advanced the concept of the 'uncanny valley': the idea that humans will appreciate robots and animations more and more as they become more human-like in appearance, but find them unsettling once they become almost-but-not-quite-human. In other words, we know how a human should look, and a machine that ticks some of the boxes but not all is too close for comfort.
The uncanny valley of the mind
That's all well and good for appearance, but what about the mind? To find out, Jan-Philipp Stein and Peter Ohler, psychologists at Chemnitz University of Technology in Germany, had 92 participants observe a short conversation between two virtual avatars, one male and one female, in a virtual plaza. The characters talked about their exhaustion from the hot weather, after which the woman spoke of her frustration at having so little free time and her annoyance at waiting for a friend who was running late, and the man expressed his sympathy for her plight. Pretty straightforward small talk.
The trick was that, while everyone witnessed the same scene and dialogue, participants were given one of four context stories. Half were told that the avatars were computer-controlled, and the other half that they were human-controlled. Independently, half were told that the dialogue was scripted and half that it was spontaneous, so each of the four context stories went to a quarter of the group.
Of all the participants, those who were told they'd be witnessing two computers interacting on their own reported the scene as eerier and more unsettling than the other three groups did. People were fine with humans or script-driven computers exhibiting natural-looking social behavior, but when a computer showed frustration or sympathy on its own, it put people on edge, the team reports.
Because the team elicited this response purely through the context they presented (every participant saw the identical scene), they call the phenomenon the 'uncanny valley of the mind,' distinguishing the effect of a robot's perceived mind on humans from that of its appearance, and noting that emotional behavior can seem uncanny on its own.
In our own image
The main takeaway from the study is that people may not be as comfortable with computers or robots displaying social skills as they think they are. It's all fine and dandy if you ask Alexa about the CIA and she answers or shuts down, but if she expressed frustration that you keep asking her that question, it might be too human for comfort. And with social interactions, the effect may be even more pronounced than with appearance alone: appearance is obvious, but you're never sure exactly how human-like the computer's programming is.
Stein believes the volunteers who were told they were watching two spontaneous computers interact were unsettled because they may have felt their human uniqueness was under threat: if computers can emulate us, what's to stop them from taking control of our own technology? In future research, he plans to test whether this uncanny-valley-of-the-mind effect can be mitigated when people feel they have control over the human-like agents' behavior.
So are human-like bots destined to fail? Not necessarily: people may have found the situation creepy because they were only witnessing it. It's like having a conversation with Cleverbot, only a cleverer one. A Clever2bot, if you will. It's fun while you're doing it, but once you close the conversation and mull it over, you just feel like something was off with the talk.
By interacting directly with social bots, humans may actually find the experience pleasant, reducing the creepiness factor.
The full paper “Feeling robots and human zombies: Mind perception and the uncanny valley” has been published in the journal Cognition.