Although scientific publishing is by far the best system we have for advancing our understanding of the world, that system is far from perfect.
In the early 2010s, a new term gained currency in several fields of research: the replication crisis. Researchers had discovered that many published studies are difficult or impossible to replicate or reproduce. Because replication is an essential pillar of the scientific method, this has grave consequences, and it forced us to reconsider many things we had taken for granted, especially in medicine and psychology.
There’s much to be said about the replication crisis, but a new study highlights one particular aspect of it: how non-replicable studies are cited.
Citations can make or break a scientific career; the more citations a study or author accumulates, the more important and influential that work is considered to be. But according to the new study, research that is less likely to replicate is also more likely to be cited.
The problem is not entirely new to experts; to some extent, researchers in various fields are already aware of it.
“We also know that experts can predict well which papers will be replicated,” write the authors Marta Serra-Garcia, assistant professor of economics and strategy at the Rady School, and Uri Gneezy, professor of behavioral economics also at the Rady School. “Given this prediction, we ask ‘why are non-replicable papers accepted for publication in the first place?’”
“Interesting” results
The problem, Serra-Garcia suspects, is that the review teams of academic journals face a trade-off. Before a paper can be published, it must pass peer review, in which experts in the field scrutinize the work. A study on something well-known and established, with useful results but without a ‘wow’ factor, is likely to be reviewed very harshly. Reviewers tend to be more lenient, however, when the results are more “out there.”
The same thing happens in the media: studies with striking or otherwise more interesting findings are more likely to be picked up, even when their validity is more questionable.
“Interesting or appealing findings are also covered more by media or shared on platforms like Twitter, generating a lot of attention, but that does not make them true,” Gneezy said.
Serra-Garcia and Gneezy analyzed data from three influential replication projects, which systematically tried to replicate the findings of papers in top psychology, economics, and general science journals such as Nature and Science. In economics, 61% of 18 studies were successfully replicated, and a similar figure was found for general science (62%). That is already less than ideal, but in psychology things were far worse: just 39 of 100 experiments could be successfully replicated.
The disparity in citations is striking. Papers that failed to replicate accumulated, on average, 153 more citations than papers that replicated successfully. Even when the researchers took into account several characteristics of the replicated studies (the number of authors, the rate of male authors, the details of the experiment, and the field), the relationship between citations and replicability was unchanged.
The citation gap also widens over time. In other words, this is not a fad in which studies with unusual results get quoted at first until researchers catch on: the effect persists, and on average, papers that could not be replicated accumulate 16 more citations per year.
“Remarkably, only 12 percent of post-replication citations of non-replicable findings acknowledge the replication failure,” the authors write.
An impactful problem
To see just how big a problem this is, you need look no further than the vaccine-autism controversy. It all started with a study published by Andrew Wakefield in 1998. The study has long since been retracted, and Wakefield’s methods were shown to be not just fraudulent but also cruel to the participants. Yet despite numerous studies disproving Wakefield’s work, claims that autism is linked to vaccines persist.
The problem could be alleviated by improving the way scientific publishing works. Academics are under tremendous pressure to publish, especially groundbreaking papers that attract many citations. If unreliable papers are more likely to gather citations, then academics have an incentive to produce exactly that type of study. Modern science is generally built on small, incremental progress rather than big breakthrough leaps, but incremental progress isn’t flashy.
The authors hope to draw attention to this problem and to encourage researchers (and readers) to keep in mind that findings that are interesting and appealing may not always be replicable.
“We hope our research encourages readers to be cautious if they read something that is interesting and appealing,” Serra-Garcia said. “Whenever researchers cite work that is more interesting or has been cited a lot, we hope they will check if replication data is available and what those findings suggest.”
The study is published in Science.