
Are research output measures more worthy than critical review?

Citation in high-impact journals valued more than scholarly assessment

June 12, 2014

Source: Alamy

Paper trail: journal impact factors are a straightforward way to signal quality

Academics’ desire to be judged on the basis of their publication in high-impact journals indicates their lack of faith in peer review panels’ ability to distinguish genuine scientific excellence, a report suggests.

The report, Evidence for Excellence: Has the Signal Overtaken the Substance?, written by Jonathan Adams, chief scientist at Digital Science, and Karen Gurney, a consultant to the firm, analyses trends in the outputs submitted to the past three research assessment exercises.

It finds that the proportion of submissions made up by journal articles has increased significantly, from 62 per cent in 1996 to 75 per cent in 2008. This reflects significant declines in the number of monographs submitted by social scientists and in the number of conference papers submitted by engineers – although books and chapters remain popular in the arts and humanities. Rather than a “massive cultural shift”, the report says this “looks much more like a change in behaviour, not in what was being written but in what was being offered for assessment”.

The report suggests the reason is that journal impact factors give academics a simple and widely used way to signal the quality of their papers published in those journals, whereas "similar databases are still only superficial for conference proceedings and books".

“Perhaps the change in behaviour…is evidence that numbers inexorably overcome real cultural preferences,” the report, which was published on 9 June, says.

An analysis of the 2008 RAE reveals that 14 high-impact journals each accounted for more than 500 of the nearly 81,000 articles submitted. Three journals with particularly high impact factors – Nature, The Lancet and Science – accounted for many more RAE submissions than the number of eligible outputs they contained, indicating that many co-authored papers had been submitted by more than one institution. One Nature paper had been submitted by all 12 UK institutions that had authors who had contributed to it.

Although 500 eligible UK Nature papers were not submitted to the RAE, 418 of the total 1,510 submissions of outputs from the journal were not UK-authored papers. These were either papers by academics recruited from abroad or “ephemera”, such as letters or editorials, which were often not cited at all.

“It might seem to make obvious sense for researchers to choose to submit papers from journals that had particularly high impact,” the report says. But it notes that the review panels – including those for the 2014 research excellence framework, whose results will be known in December – are barred from considering journal impact factors and are supposed to base their assessments of outputs on reading them.

Impact factors are an average of the number of citations that papers in the journal receive over a certain period. But the report points out that the figure is skewed by small numbers of highly cited papers, meaning that most papers garner fewer citations. Yet some academics submitted low-cited papers in high-impact journals in preference to more highly cited papers published in lower ranking journals.
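The skew the report describes is the familiar gap between a mean and a median. A minimal sketch, using entirely invented citation counts, shows how a couple of highly cited papers can pull a journal's impact factor well above what a typical paper in it receives:

```python
# Hypothetical illustration (all numbers invented): the impact factor is
# the mean citation count per paper, which a few very highly cited
# papers can inflate far beyond the typical paper's count.
from statistics import mean, median

# Invented citation counts for ten papers in one journal's window
citations = [120, 45, 8, 5, 3, 2, 2, 1, 1, 0]

impact_factor = mean(citations)    # the headline "journal quality" number
typical_paper = median(citations)  # what the middle paper actually receives

print(impact_factor, typical_paper)  # → 18.7 2.5
```

Here two outliers account for most of the citations, so a paper drawn at random from this journal would most likely be cited far less often than the impact factor suggests, which is the report's point about low-cited papers in high-impact journals.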

“The real substance of what academics thought was the best marker of research excellence was displaced for review purposes by outputs that gave the simplest signal of achievement,” the report says.

“The kudos of the well-cited journal was a marketing signal outweighing the individual item.”

paul.jump@tsleducation.com


Reader's comments (2)

We should bear in mind that academics are not necessarily going to submit what they think is their best work, and perhaps not even what they think the REF panel will think is best. They may submit what will be best judged by the internal panels that screen the work and assign a star ranking to it for internal management purposes. That will then depend on what criteria management thinks are best.
Thank you David. I have been smiling again and again over how much of the travesty of the moment is captured in your comment. It does get worse, when the academic in question does not know who the evaluator (called by one good colleague "God") is, or why he gave a 1,2,3 or 4 to any given item submitted. It was apparently a single individual in some cases, judging… well, employability of the staff. Terse. Then, the intermediaries (those handling money both to "God" and to themselves in order to prepare the academic submission, which of course they do not understand) started calling everyone to meetings, asking them to comment how they would improve… their performance!!!! I do not have to tell you what happens when an academic starts questioning what they mean. Monty Python movies are no longer humorous, they have become documentaries of academic life in Britain. https://www.youtube.com/watch?v=7WJXHY2OXGE