Academic journals regularly publish intriguing and challenging psychological studies – but according to a massive new review, we should take those studies with a big grain of salt. A four-year project by 270 researchers attempted to replicate 100 experiments published in three of the most prestigious journals; only 36 produced similar results.
Social science has taken quite a “beating” in recent years: one of its most prolific authors was caught fabricating data, leading to more than 50 retracted papers – most of them in top-level journals. Then one of the field’s rising stars was found to have manipulated data in a study on attitudes toward homosexuals, and fabricating scientific data appears to be done on an industrial scale in China. These events, among many others, led to the Reproducibility Project: Psychology, led by Brian Nosek of the University of Virginia.
Most studies didn’t hold up.
“Less than half — even lower than I thought,” said Dr. John Ioannidis, a director of Stanford University’s Meta-Research Innovation Center, who once estimated that about half of published results across medicine were inflated or wrong. Dr. Ioannidis said the problem was hardly confined to psychology and could be worse in other fields, including cell biology, economics, neuroscience, clinical medicine, and animal research.
Reproducing a study’s results lies at the very core of science. After all, if a study doesn’t produce reproducible results, what value does it have? What confidence can you have in its data? Of course, things often aren’t so straightforward – the free will of human subjects can make it impossible to recreate experimental conditions exactly – but generally speaking, when a study and a verification study don’t add up, there are three possibilities: the original study was wrong, the replication was wrong, or subtle differences in how the two were carried out affected the outcome. Just think about it: even today, almost 100 years later, people are still testing Albert Einstein’s theory of relativity, even though it’s the backbone of modern physics. A failed replication doesn’t necessarily mean the science is wrong.
Cody Christopherson of Southern Oregon University comments:
“This project is not evidence that anything is broken. Rather, it’s an example of science doing what science does,” says Christopherson. “It’s impossible to be wrong in a final sense in science. You have to be temporarily wrong, perhaps many times, before you are ever right.”
Psychology is an especially difficult field in which to reproduce studies, and this raises a major problem. To succeed in academia, you need to publish new stuff – stuff that no one’s ever tried – even though, often, everyone would be better off if you double-checked something that someone has already done.
“To get hired and promoted in academia, you must publish original research, so direct replications are rarer. I hope going forward that the universities and funding agencies responsible for incentivizing this research—and the media outlets covering them—will realize that they’ve been part of the problem, and that devaluing replication in this way has created a less stable literature than we’d like.”
In other words, there are two things we should be doing. First, take these studies with a grain of salt – wait for results to be confirmed and double-checked by others. Second, encourage that verification work! Right now, we’re not only handing out perverse incentives for flashy new research, we’re also removing valid incentives for solid, valuable replication work. Journals carry a big part of the blame too: they prioritize positive results and almost completely ignore negative ones.
“We see this as a call to action, both to the research community to do more replication, and to funders and journals to address the dysfunctional incentives,” said Brian Nosek, a psychology professor at the University of Virginia and executive director of the Center for Open Science, the nonprofit data-sharing service that coordinated the project, which was published Thursday and funded in part with $250,000 from the Laura and John Arnold Foundation.