Overhaul of metrics usage could cut frequency of REF

Chair of review into use of metrics suggests REF interval could be cut to once a decade

3 April 2014

A metrics-based “mid-term review” could allow the time interval between research excellence framework assessments to be extended to once a decade, the chair of a new independent review of the use of metrics in research assessment has suggested.

James Wilsdon, professor of science and democracy at the University of Sussex, told Times Higher Education that such a suggestion went slightly beyond the terms of reference of the review, which was launched on 3 April by the Higher Education Funding Council for England.

“But inevitably these are the sorts of questions you start to ask once you have properly interrogated the metrics.

“Given that – as everyone accepts – [submitting to] the REF is a fairly time-consuming exercise, is there a way of using metrics to do a lighter-touch mid-term review, and then do a full-blown panel-based review every 10 years instead of every six?” he asked.

In 2008 and 2009, Hefce ran a pilot exercise on metrics-based research assessment, which Gordon Brown, then prime minister, favoured. It concluded that metrics were “not sufficiently robust to be used…as a primary indicator of quality”.

However, David Willetts, the universities and science minister, was set to announce at a Universities UK conference on universities’ contribution to economic growth that he had asked the funder to revisit the issue.

Professor Wilsdon said the review would consider more than just the REF and would not produce specific recommendations for the next exercise – likely to be in 2020 – because it would be “premature” to do so before Hefce’s formal evaluations of the 2014 exercise.

But he aimed to make some “fairly intensive efforts” to learn from the experience of the 2014 panels, some of which are using metrics to inform their judgements, following the conclusion of their work in December. He hoped that publication of the review in spring 2015 would be early enough for it to inform the shape of the next REF.

He said he and his “first-rate” steering committee would approach the issues with an open mind and consult widely.

One of the most significant developments in recent years was the rise of alt-metrics, which measure factors such as the number of mentions a paper receives in social media. He was “well aware” of the strong criticisms they had attracted in some quarters, but he was open to the idea that alt-metrics could be used to supplement or even replace the case studies used by the 2014 REF to assess the wider impact of research since they potentially enhanced the “ability to track and measure impact in real time”.

The review will also examine the use of metrics by universities. A number of institutions have been criticised recently for relying heavily on metrics to manage performance or to select people to submit to the REF.

Professor Wilsdon hoped to “come up with practical, useful guidelines about what are appropriate and inappropriate uses of metrics in an institutional context”.

“The ideal is that you find a healthy and sensible balance of metrics- and non-metrics-based systems that can be used in a much more realtime way to ensure you don’t have an 18-month lead-up to the REF that is very time-consuming and burdensome on academic culture and practice,” he said.

paul.jump@tsleducation.com

Reader's comments (3)

So basically, the government did not like the conclusion of the previous review that metrics were not robust enough and wants it revisited, presumably until someone comes up with the "correct" conclusion.
The use of metrics was, very sensibly, ruled out by the current REF. Metrics corrupt science. They encourage gaming, hype and outright dishonesty. Altmetrics are the worst of the lot. To get a high score all you have to do is include the word penis in your tweet. "Altmetrics are numbers generated by people who don’t understand research, for people who don’t understand research" See "Why you should ignore altmetrics and other bibliometric nightmares" http://www.dcscience.net/?p=6369 Science is hard enough without having to contend with corruption by administrators who don't understand it.
“The ideal is that you find a healthy and sensible balance of metrics- and non-metrics-based systems that can be used in a much more realtime way.” Oh really, that's utter nonsense. Metrics provide perverse incentives. They encourage dishonesty and reward short-termism. That's been obvious for some time now to anyone who actually does science (as opposed to hanging round its fringes talking policy). Metrics are a way for people who don't understand science to give a phony sense of precision to politicians who don't understand science. Productivity management is a way of ensuring that future Nobel prizewinners get fired before they get started. Science is hard enough without having to contend with such nonsense.