Call to remove ‘hyper-authored’ papers from research metrics

Web of Science study says articles with more than 100 authors or involving dozens of countries can artificially boost citation impact

December 11, 2019
[Image: Crowded King’s Cross station. Source: iStock]

Scholarly articles authored by more than 100 people produce such “unpredictable” and “incoherent” effects on metrics that they should be removed from analyses of research performance, a new report argues.

According to the report on multi-authored papers from the Web of Science Group’s Institute for Scientific Information, about 95 per cent of research worldwide still has 10 or fewer authors.

However, it notes that the number of papers featuring “complex authorship” – with large numbers of authors and countries listed – has noticeably increased in recent years.

In particular, the number of articles indexed in Web of Science with more than 1,000 authors more than doubled from 2009-13 to 2014-18. Meanwhile, papers with authors from more than 50 countries were virtually non-existent five years ago but dozens were published from 2014 to 2018.

Although such papers still represent a tiny proportion of global research output, they can have a hugely distorting effect on the analysis of research performance, the report – Multi-authorship and Research Analytics – warns.

One example detailed in the study is the effect that papers with a large number of authors can have on the overall citation impact of countries with small research bases.

For instance, for large research nations such as the UK and US, papers with high author counts tend to have around 2.5 times the citation impact of “typical” articles with 10 or fewer researchers listed.

But for some smaller research nations, the study found that the citation impact of multi-authored papers can be several times higher than their typical output, potentially artificially inflating their overall citation impact.

It details the case of Sri Lanka, where the citation impact of multi-authored research is more than 10 times higher than for its research with fewer than 10 authors. As the presence of multi-authored research forms a relatively large share of the country’s total output, this boosts its overall citation impact score above the UK and US.
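
To make the arithmetic concrete, here is a minimal sketch of how a mean-based indicator behaves, using invented figures rather than the report’s data: a handful of highly cited, hyper-authored papers barely moves the average of a large output but can dominate the average of a small one.

```python
# Illustrative sketch with invented numbers (not the report's data):
# how a handful of hyper-authored papers can lift a small research
# nation's mean citation impact above that of a much larger one.

def mean_impact(n_typical, cnci_typical, n_hyper, cnci_hyper):
    """Mean normalised citation impact across a mixed portfolio."""
    total = n_typical * cnci_typical + n_hyper * cnci_hyper
    return total / (n_typical + n_hyper)

# Large research base: hyper-authored papers cite at ~2.5x the
# typical rate but are a vanishing share of a huge output.
large_nation = mean_impact(100_000, 1.2, 200, 3.0)

# Small research base: hyper-authored papers cite at ~10x the
# typical rate and form a relatively large share of total output.
small_nation = mean_impact(2_000, 0.8, 150, 8.0)

print(f"large nation: {large_nation:.2f}")  # 1.20
print(f"small nation: {small_nation:.2f}")  # 1.30
```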

The report also finds that a paper’s citation impact rises with each additional country involved in the research, again showing the potential distorting effect of “complex authorship” on metrics.

It recommends that articles with “hyper-authorship” – those with more than 100 authors and/or 30 country affiliations – should ideally not be included in analyses of research performance.

“These articles are, to put it simply, different: they have unpredictable, incoherent effects that can sometimes be very large. There is a strong argument for removing such data from all associated analyses at national as well as at institutional level,” it says.

It adds that articles with more than 10 authors should also be “acknowledged and separately described” in data analyses because of how they can “influence interpretation”.

Jonathan Adams, director of the ISI and a co-author of the report, said that, as well as affecting metrics for countries and institutions, multi-author papers could clearly have a major distorting effect on performance measures for individuals.

“If you took an individual researcher’s portfolio, they could have a whole raft of more typical papers with good/bad/indifferent citation performance and you bring in one of these papers…then clearly it is going to boost their average hugely,” he said.

“That is another reason why you would not want those papers to simply be dropped into the pool if you were doing some kind of performance assessment.”
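
The same arithmetic in miniature, again with invented citation counts rather than anything from the report or the interview: a single hyper-authored paper dropped into an otherwise typical portfolio swamps the researcher’s citation average.

```python
# Invented citation counts for illustration: a typical portfolio,
# then the same portfolio with one hyper-authored paper added.
citations = [4, 0, 12, 7, 2, 9, 1, 5]
print(sum(citations) / len(citations))   # 5.0

citations.append(2_500)                  # one hyper-authored outlier
print(sum(citations) / len(citations))   # ~282.2
```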

He added that although it was not the ISI’s role to dictate how research analysis was done, it hoped that the study would generate discussion in the wider bibliometric community about how to deal with such papers.

“They are great papers, they are terribly important, they could clearly contribute to a lot of really key innovation, but they are not normal papers.”

simon.baker@timeshighereducation.com

Reader's comments (1)

The root cause of this problem is the assumption that ‘citation impact’ is a meaningful measure of research performance. Citation impact is simply a count of citations to a paper (or a normalisation of those citations against the average for an arbitrarily defined ‘field’). It is a bibliometric concept, not a research concept. Citations cannot directly tell you whether research is high quality, replicable, innovative, accurate, or valuable. They can only tell you whether the research has been cited and how often, not why. They are a proxy for performance because they indicate the utility of the research, but there is no quantitative measure of research quality against which to calibrate them. There is therefore no specific logic or rationale for removing outliers that isn’t just an arbitrary decision. Why treat papers with more than 10 authors differently from a paper with nine? Why are 30 countries the limit for valid multinational research collaborations? Outliers affect any measure that uses a mean, but perhaps using a mean to measure aggregate research performance is the real problem.
