Citation counting is killing academic dissent

Junior scholars have always needed to curry favour with their seniors, but quantifying research impact exacerbates the problem, says Jonathan R Goodman

November 25, 2019

Some years ago, I was present at a philosophy debate about the degree to which people can be compared to animals. One of the key questions discussed was whether people tend to compete the way animals do, and whether we let our desire for success get in the way of what’s right.

When that latter argument was advanced, the moderator responded: “But that’s the whole point of analytic philosophy – to put reason and truth ahead of our personal feelings.”

Today I regret not asking the question that came to mind: why, then, do philosophers not publish anonymously? What’s the point of putting our names to ideas if we don’t publish to compete?

It has struck me since that the philosopher’s point was not only untrue but at odds with how academic publishing works. Across disciplines, the past few decades have seen publication become indispensable for career development – giving rise to a whole slew of techniques to game the system to academics’ own advantage.

One recently highlighted example is the prevalence of self-citation across academic research, whereby authors repeatedly cite their own work to increase its perceived impact. Another is the formation of citation cartels, in which groups of academics strike an unwritten deal to cite one another’s work, regardless of its quality or relevance.

And regardless of anonymity, the increasing use of citations and impact factors to judge academics has given referees and academic editors the opportunity to coerce others into citing their own works as the cost of publication.

Not everyone uses these tactics, of course, and they are looked down upon. But in a crowded academic environment, people can become desperate to make their work stand out. According to a 2015 article in PLoS One, researchers have reported feeling pressured to play the citation-maximisation game.

Aside from these known methods of exploitation, reliance on quantification is having a less obvious but arguably equally pernicious effect. A study published in Proceedings of the National Academy of Sciences has shown that only about 2.4 per cent of articles published in immunology receive negative citations – defined as published, negative criticism of a particular paper. This is a startlingly low number given how strongly researchers often disagree with each other’s interpretations.

Moreover, the authors also found that researchers in closer geographical proximity to one another were less likely to cite each other’s work negatively. This suggests that it may be particularly “socially costly” to criticise neighbouring scientists, who may otherwise be the most likely to cite the author’s own paper.

Negative citations, the authors write, may help to move research forward by calling into question data, arguments or assumptions of published work. It follows that the degree to which researchers are trained against – or even just implicitly discouraged from – openly disagreeing with others will correspond to a slowing of scientific progress.

The need to curry favour with one’s seniors has undoubtedly been present throughout the history of universities, but the quantification of academic impact is exacerbating that problem. Researchers may now have to choose between rigorous, honest criticism of published work and their own career advancement. Henry Kissinger is often credited with saying that the passions in university politics run so high precisely because the stakes are so small; in the era of publication, however, it seems that the passions run so low precisely because the stakes are so high.

Researchers often insist that they don’t take citation metrics seriously. But given the enormous number of articles and journals in every area of inquiry, some form of quantification is necessary to sort through them. Moreover, no one really believes that their superiors aren’t counting their wins, and this perception is all the incentive junior scholars need to keep their critiques to themselves.

These problems, ultimately, are cultural ones. Even if citation scoring were ended tomorrow, people would no doubt remain wary of constructively criticising each other – especially their superiors. But if editors and supervisors don’t make greater efforts to encourage dissent rather than sycophancy, the next generation of researchers – even those in the famously combative discipline of philosophy – may not have the chance to say anything meaningful at all.

Jonathan R. Goodman is a doctoral student at the Leverhulme Centre for Human Evolutionary Studies, University of Cambridge.

POSTSCRIPT:

Print headline: Citations, citations, citations

Reader's comments (1)

Citation counts of individual articles aren't a perfect metric for the reasons you outline, but as an indicator of the impact of a piece of research, they are a lot better than the impact factor of the journal in which it was published. More widespread use of article-level citation metrics would help to break the monopoly of certain journals, end the dominance of the impact factor and correspondingly reduce the coercive practices at journals that you mention. At the institution I work for, getting published in certain journals has become an absolute raison d'être, fetishised to the point that nobody gives a hoot whether the work is read (or cited). Publication is all that matters and no thought is given to what happens next. High-profile publications hold the potential for impact, whereas citation counts actually demonstrate it. I thought as scientists we cared about evidence?
