In the world of academic publishing, metrics like citation counts hold significant sway — but what if these metrics could be easily manipulated? A recent experiment showed just how fragile and manipulable the system can be. This is a tale of deception, academic dishonesty, and one very scholarly feline.
Larry the academic
Larry Richardson has 132 citations. He has several published papers and is apparently an accomplished academic. However, Larry Richardson is a cat, and this is just the start of the problem.
The story began when Nick Wise, an academic researcher, stumbled upon an advertisement from a paper mill promising to boost citation counts and h-indexes on Google Scholar profiles. Google Scholar is a freely accessible search engine that indexes the full text or metadata of academic papers. It matters to researchers because it offers a comprehensive, accessible way to discover scholarly literature, track citations, and measure the impact of their work across diverse disciplines. It’s also widely used by universities and research institutes to get a quick sense of the quality of an academic’s work. But it’s severely flawed and easy to manipulate.
Wise sent the ad to Reese Richardson, and Richardson had a kooky idea. He used his grandma’s cat, Larry, to show just how easy the system was to game. He wanted to make Larry the most accomplished academic writer. Surprisingly, there was competition.
Accomplished authors
In 1975, a physicist co-authored a paper with his cat. F.D.C. Willard, also known as Chester, became the co-author of a high-quality physics paper when his owner, physicist Jack H. Hetherington, added him as a co-author. Hetherington, having written the paper in the first-person plural, faced rejection from the journal unless he could account for the use of “we.” Rather than rewriting the paper, he cleverly included his cat under the pseudonym F.D.C. Willard (Felis Domesticus, Chester, Willard). The paper was accepted and published.
For Larry Richardson, the way to becoming an accomplished author was different: trickery.
First, Richardson (the human) wrote 12 nonsensical papers using MathGen, an automated generator of fake math papers, and made Larry their sole author. Then, he “wrote” 12 more papers under different authors, each citing all 12 of the previous papers. The goal was to rack up 12 × 12 = 144 citations.
The papers were uploaded to ResearchGate, a social platform for academics, and within a couple of weeks, all but one had been indexed by Google Scholar (it’s not clear why that one paper wasn’t).
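Since the story turns on citation counts and h-indexes, it’s worth making the arithmetic concrete. Here’s a minimal sketch (the `h_index` helper is my own illustration, not part of the experiment): the intended scheme yields 144 citations, while 11 indexed papers at 12 citations each would account for the 132 citations on Larry’s profile.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

# Intended scheme: 12 papers, each cited by all 12 citing papers.
intended = [12] * 12
print(sum(intended))      # 144 citations
print(h_index(intended))  # h-index of 12

# If one paper goes unindexed: 11 papers at 12 citations each.
indexed = [12] * 11
print(sum(indexed))       # 132 citations, as on Larry's profile
print(h_index(indexed))   # h-index of 11
```

Either way the scheme lands a double-digit h-index for about an hour of work, which is the whole point: the metric is cheap to fake.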
This basically made Larry Richardson officially history’s most cited cat (according to Google Scholar, at least).
The problem with academic publishing
As funny as this is, it highlights a real problem with academic publishing. Quantitative metrics like citation counts and h-indexes are often used to evaluate researchers, yet they can easily be manipulated.
For many scientists, this creates a “publish or perish” environment, which further creates perverse incentives to prioritize quantity over quality, engage in dubious practices like citation rings or self-citation, and even resort to using services that artificially inflate their metrics. This undermines the integrity of scientific research and the credibility of academic evaluations. This reality calls for a critical reassessment of how we measure and reward scholarly contributions.
Granted, the vast majority of researchers stay well away from this sort of practice. But Larry’s case shows just how easily the system can be gamed.
Reese Richardson concludes:
“Of course, this isn’t about making a cat a highly cited researcher. Our efforts (about an hour of non-automated work) were to make the same point as the authors of this aptly titled pre-print: Google Scholar is manipulatable. Despite the conspicuous vulnerabilities of Google Scholar (and ResearchGate), the quantitative metrics calculated by these services are routinely used to evaluate scientists.
For a fairer scientific enterprise, we ought to ditch quantitative heuristics like citation count, impact factor and h-index altogether (see the Declaration on Research Assessment, DORA). Services like Google Scholar, Web of Science, Scopus and ResearchGate could bring us a long way towards this ideal by no longer providing these metrics to users. However, if these services are bent on keeping citation-based metrics around, they should at least make manipulating their products a little more difficult.”
You can read the entire story on Reese Richardson’s blog, including adorable photos of Larry being compensated for his “hard work”.