Robots admitting to making a mistake can, surprisingly, improve communication between humans — at least during games.
A new study led by researchers at Yale University found that in a game played by mixed human-robot teams, having the robot admit its mistakes fosters better communication between the human players and improves their experience. A silent robot, or one that only offered neutral statements such as reading out the current score, did not produce the same effect.
Regret.exe
“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author.
“Our study shows that robots can affect human-to-human interactions.”
Robots are increasingly becoming part of our lives, and there's no reason to assume this trend will stop; in fact, it's overwhelmingly likely to accelerate. Because of this, it's important to understand how robots impact and influence human behavior. The present study focused on how the presence and behavior of robots influence communication between humans working as a team.
For the experiment, the team worked with 153 people divided into 51 groups of three humans and one robot each. The groups were asked to play a tablet-based game in which teammates worked together to build the most efficient railroad routes they could over 30 rounds. The robot in each group was assigned one of three patterns of behavior: it would either remain silent, utter a neutral statement (such as the score or the number of rounds completed), or express vulnerability through a joke, a personal story, or an acknowledgment of a mistake. All of the robots occasionally lost a round, the team explains.
“Sorry, guys, I made the mistake this round,” the study’s robots would say. “I know it may be hard to believe, but robots make mistakes too.”
“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”
People teamed with robots that made vulnerable statements spent about twice as much time talking to each other during the game, and they reported enjoying the experience more than people in the other two kinds of groups, the study found. Participants in teams with either the vulnerable or the neutral robot also communicated more than those in groups with a silent robot, suggesting that a robot engaging in any form of conversation helped spur its human teammates to do the same.
“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” said Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science at Yale and a co-author of the study. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task.”
“Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”