We tend to look only at the most recent feedback when gauging our own competence, a new paper reports. The findings can help explain why people or groups stick to their beliefs even in the face of overwhelming evidence to the contrary.
A team of researchers from the University of California, Berkeley thinks that feedback, rather than hard evidence, is what makes people feel certain of their beliefs when learning something new or when trying to make a decision. In other words, people’s beliefs tend to be reinforced by the positive or negative reactions they receive in response to an opinion, task, or interaction, not by logic, reasoning, or data.
“Yes, but you see, I’m right”
“If you think you know a lot about something, even though you don’t, you’re less likely to be curious enough to explore the topic further, and will fail to learn how little you know,” said study lead author Louis Marti, a Ph.D. student in psychology at UC Berkeley.
“If you use a crazy theory to make a correct prediction a couple of times, you can get stuck in that belief and may not be as interested in gathering more information,” added study senior author Celeste Kidd, an assistant professor of psychology at UC Berkeley.
This dynamic is very pervasive, the team writes, playing out in every area of our lives — from how we interact with family, friends, or coworkers, to our consumption of news, social media, and the echo chambers that form around us. It’s actually quite bad news, as this feedback-based reinforcement pattern has a profound effect on how we handle and integrate new information into our belief systems. It’s especially active in the case of information that challenges our worldview, and can limit our intellectual horizons, the team explains.
It can also help explain why some people are easily duped by charlatans.
For the study, the team worked with over 500 adult subjects recruited through Amazon’s Mechanical Turk crowd-sourcing platform. Participants were placed in front of a computer screen displaying different combinations of colored shapes, and asked to identify which combinations qualified as a “Daxxy”.
If you don’t know what a Daxxy is, fret not: that was the whole point. Daxxies are make-believe objects that the team pulled out of a top hat specifically for this experiment. Participants weren’t told what a Daxxy is, nor were they given any clues about its defining characteristics. The experiment was designed to force the participants to make blind guesses and to see how their choices evolved over time.
In the end, the researchers used these patterns of choice to see what influences people’s confidence in their knowledge or beliefs while learning.
Participants were told whether they picked right or wrong on each try, but not why their answer was correct or not. After each guess, they reported whether or not they were certain of their answer. By the end of the experiment, the team reports, a clear trend had emerged: the subjects consistently based their certainty on whether they had correctly identified a Daxxy in their last four or five guesses, not on all the information they had gathered throughout the trial.
“What we found interesting is that they could get the first 19 guesses in a row wrong, but if they got the last five right, they felt very confident,” Marti said. “It’s not that they weren’t paying attention, they were learning what a Daxxy was, but they weren’t using most of what they learned to inform their certainty.”
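To make that gap concrete, here is a minimal sketch in Python (not the paper’s model; the five-guess window and the two toy certainty measures are assumptions made purely for illustration) of how a learner who judges certainty only from recent feedback can feel fully confident despite a poor overall track record.

```python
# Toy illustration only: compare a certainty estimate based on recent
# feedback with one based on the full history of feedback. The 5-trial
# window and the scenario (19 wrong guesses, then 5 right) are assumptions
# taken from the quote above, not the paper's actual model.

def recency_certainty(feedback, window=5):
    """Certainty as the fraction of correct answers in the last `window` trials."""
    recent = feedback[-window:]
    return sum(recent) / len(recent)

def full_history_certainty(feedback):
    """Certainty as the fraction of correct answers over all trials so far."""
    return sum(feedback) / len(feedback)

# 1 = correct guess, 0 = wrong guess: 19 misses followed by 5 hits.
feedback = [0] * 19 + [1] * 5

print(f"Recency-based certainty: {recency_certainty(feedback):.2f}")    # 1.00
print(f"Full-history certainty:  {full_history_certainty(feedback):.2f}")  # 0.21
```

Under the recency rule, the learner in Marti’s example reports full confidence even though fewer than a quarter of their guesses were correct overall.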
Instead, Marti says, learners should base their certainty on observations made throughout the learning process, without discounting feedback entirely.
“If your goal is to arrive at the truth, the strategy of using your most recent feedback, rather than all of the data you’ve accumulated, is not a great tactic,” he said.
The paper “Certainty Is Primarily Determined by Past Performance During Concept Learning” has been published in the journal Open Mind.