Artificial intelligence doesn’t need to rival our own to have an impact on people’s lives — even “dumb AI” can help humans out, a Yale University study shows.
Much of today's talk about AI revolves around the idea of it matching or even surpassing human intelligence. But as the technology takes its first unsteady steps, that debate has little to do with what AI can actually do right now. AI is just not there yet.
So, with that in mind, can AIs with far lower capacity than our brains still complement human activity? A team led by Nicholas Christakis, a professor of sociology, ecology and evolutionary biology, biomedical engineering, and medicine at Yale, co-director of the Yale Institute for Network Science (YINS), and the study's senior author, turned to the realm of video games to find out.
The researchers built their experiment around an online cooperative game that required groups of people to work together towards a collective goal. In the game, the 4,000 participants recruited for the study were joined by a host of bots programmed to act with one of three levels of behavioral randomness. This meant the bots sometimes deliberately made a 'mistake' in the context of the game, more or less often depending on their programming.
The game, "a networked color coordination game", ran on breadboard, a software platform developed at Yale, and embedded the players in 20-node networks (230 such networks were used for the study). Groups of 3 bots were sometimes added to a network. These bots were anonymous (participants couldn't tell whether a given player was human or AI) and were usually placed in different parts of the social network.
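To make the setup concrete, here is a minimal, self-contained sketch of a game of this kind. The graph, the color palette, the helper names, and the exact noise values are illustrative assumptions, not the study's actual code; the essential mechanic is that each player must end up with a color different from every neighbor, and a noisy bot occasionally makes a random move instead of the locally sensible one.

```python
import random

COLORS = ["green", "orange", "purple"]

def pick_color(node, neighbors_of, current, noise=0.0):
    """Myopic best response: avoid the neighbors' colors, but with
    probability `noise` deliberately make a random (possibly wrong) move."""
    if random.random() < noise:
        return random.choice(COLORS)  # the bot's deliberate 'mistake'
    taken = {current[n] for n in neighbors_of[node]}
    free = [c for c in COLORS if c not in taken]
    return random.choice(free) if free else random.choice(COLORS)

def solved(neighbors_of, current):
    """The game is won when no two adjacent nodes share a color."""
    return all(current[a] != current[b]
               for a in neighbors_of for b in neighbors_of[a])

# A tiny 5-node ring stands in for the study's 20-node networks.
neighbors_of = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
current = {n: random.choice(COLORS) for n in neighbors_of}

step = 0
while not solved(neighbors_of, current) and step < 1000:
    node = random.choice(list(neighbors_of))  # players act asynchronously
    noise = 0.1 if node in (0, 2) else 0.0    # nodes 0 and 2 play as noisy bots
    current[node] = pick_color(node, neighbors_of, current, noise)
    step += 1

print(f"solved in {step} steps: {current}")
```

Running the sketch a few times shows the intuition the study tests: a purely greedy network can get stuck in configurations where no single player can improve things, and a little randomness can shake it loose.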
"We mixed people and machines into one system, interacting on a level playing field," explained Hirokazu Shirado, the paper's lead author. "We wanted to ask, 'Can you program the bots in simple ways?' and does that help human performance?"
Bot me up!
The team reports that the bots boosted the overall performance of the human players, and that bots placed in central locations "meaningfully improved the collective performance of human groups", reducing the mean time for solving problems by 55.6%. Furthermore, this effect became more pronounced as the tasks became more difficult.
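For illustration, here is one way the "central placement" idea could be expressed in code. The study varied where in the network the bots sat; the choice of betweenness centrality and the networkx-generated graph below are our own assumptions, a sketch rather than the paper's method.

```python
import networkx as nx

# A stand-in 20-node network (the study used 230 networks of 20 nodes).
G = nx.random_regular_graph(d=3, n=20, seed=42)

# Rank nodes by betweenness centrality (an illustrative measure of
# "centralness") and host the three bots at the top-ranked positions.
centrality = nx.betweenness_centrality(G)
bot_nodes = sorted(centrality, key=centrality.get, reverse=True)[:3]

print("bots placed at nodes:", bot_nodes)
```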
It wasn't only the bots' good plays that gave the network a boost; even their errors helped.
“Behavioural randomness worked not only by making the task of humans to whom the bots were connected easier, but also by affecting the gameplay of the humans among themselves and hence creating further cascades of benefit in global coordination in these heterogeneous systems,” the paper notes.
So, in other words, the bots set off a domino effect inside their networks. Their activity made the game easier for the human players around them, who in turn made it easier for players further out, driving overall efficiency up. Although each AI was designed to be a sub-par player compared with a human, the bots could, in a sense, help the players help themselves.
Understanding the dynamics inside mixed AI-human groups could shape how we think about the technology in a wide variety of scenarios, the team says. For example, human and machine drivers will likely share the road for some time to come, and understanding how the two interact could help design AIs that react more intuitively to human drivers. Joint human-AI military applications could also benefit from the findings, as could online environments built around human-AI interaction.
The full paper “Locally noisy autonomous agents improve global human coordination in network experiments” has been published in the journal Nature.