
Uncharted territory

The REF's conflation of intellectual quality and geographical scale makes little sense and may have negative consequences for UK research, argues Alastair Bonnett

April 5, 2012





What is the difference between research that is of nationally recognised quality and research that is of internationally recognised quality? It may sound like an odd question. But if, like me, you are being submitted to the 2014 research excellence framework, it is an important one, because these geographical categories are central to the REF's quality criteria.

As a geographer, perhaps I should be feeling smug that spatial units are so highly regarded. They have a common-sense appeal. To be internationally acknowledged will seem to many to be obviously better than being nationally known; while being "internationally excellent" and "world-leading" are obviously better still. These kinds of labels have enabled the UK to trumpet the fact that its universities are global players. It's an important message - and politicians, funders and students need to hear it. In the words of the former chief executive of the Higher Education Funding Council for England, David Eastwood, the final research assessment exercise (the REF's precursor) confirmed in 2008 "that the UK is among the top rank of research powers in the world". The 2001 exercise elicited the same response from the organisation's previous chief executive, Sir Howard Newby, who said it proved "the UK's position as one of the world's foremost research nations".

It's a vital argument. It's also a good news story. But maybe it is because I'm a geographer that I cannot help thinking about the assumptions, and the consequences, of the connection of quality to scale. This isn't about offering up another attack on the REF or the principle of assessment. Since moaning about assessment has become something of a bonding ritual for a generation of academics I shall, no doubt, be leaving a few readers hungry for more blood on the carpet. But the appetite for polemic has meant that some of the most interesting (and constructive) questions about the way we do assessment have not been asked. One of these is how and why we divvy up research by scale.

Let's step back a few years, to the University Grants Committee's research ranking exercise of 1986. It may have been the first such exercise of its type in the world. So it is not surprising that it was perceived as a hit-and-miss affair. The UGC's subcommittee of experts classified departments as outstanding, above average, average or below average. The question was asked, "compared to what?". It was widely felt that the methods used lacked rigour. "By any test", concluded Trevor Smith, then pro-principal of Queen Mary College, in the wake of the exercise, it "was a pretty rough and ready lash-up of techniques".


Very little of this critical commentary suggested that what was needed was more geographical specificity. But by the time of the next exercise, in 1989, that was what had happened. A five-point rating scale was introduced, with international and national quality applied across all disciplines. Later years saw refinements, notably the dropping in 2008 of what previously were termed "attainable levels" (of national or international excellence) and the introduction of a scale of "recognised nationally", "recognised internationally", "internationally excellent" and the new, highest, category of "world-leading". These were useful refinements ("recognised" is a far clearer qualifier than the baffling "attainable levels"). But the basic idea that value can and should be judged in terms of geography has held firm.

Why? I have delved into the archives and still can't find any serious defence of this aspect of the assessment criteria. Yet it seems to have been accepted almost immediately by the academic community. Indeed, feedback from the 1996 exercise reported that "panels generally found no difficulty in interpreting the concepts of research of 'national' and 'international' standards of excellence". Moreover, similar criteria have spread across the planet. The 2010 Excellence in Research for Australia initiative judged research to be above or below "world standard", while New Zealand's new "performance-based" assessment exercise applies a distinction between world-class, national and institutional-level quality.


The fact that people have taken to this approach so intuitively and so widely might seem like the end of the matter. But I don't think it should be. And I'm not entirely alone. One can easily find less sanguine feedback, such as that given by the Town and Country Planning panel after the 2008 exercise. According to John Punter (who chaired the panel) and Heather Campbell (deputy chair), panel members "all commented on the difficulties posed by the application of the criteria, in particular in attempting to distinguish research that was 'world-leading' from 'internationally excellent' or simply 'recognised internationally'".

The panel's difficulties are understandable: this terminology is far from our usual fare. It does not derive from the commonest way we judge research value, the criterion of journal peer review. Journals ask referees to apply a variety of measures of merit, nearly always centred on an assessment of originality, substantive content and rigour. National and international quality don't come into it.

Perhaps I'm overly sensitive to the way spatial units are used and abused. I encounter "common-sense" geographical fallacies all the time. They range from environmental determinism to the assumption that social or even natural processes have neatly bordered causes and forms. The idea that a statement of value - such as excellence - takes national and international forms makes me worry that we are in the presence of another such error. Such fallacies create problems. By introducing a potentially misleading spatial context they can skew our judgement and shape our expectations.

When we claim that the quality of work on quantum theory, calculus or Kantian ethics is recognised at a national level, or that it is internationally excellent, do we know what we are saying? Can we explain what territory has to do with quality? Can we explain why being international is better than being national? In short, although the desire that our research should be up there with the best in the world is a good and necessary thing, why should we assume that this worthy end defines the means by which it is assessed?

Within the collaborative and globally connected world of academic research, in which most journals are edited and produced within more than one country, the idea that we can distinguish distinct geographical spheres of recognition or value looks like an anachronism. Moreover, the fact that research is referenced beyond the UK is not a reliable reflection of its merit. In the social sciences, for example, theories about globalisation get an international take-up that has little to do with their originality and rigour.


It must also be acknowledged that there are many ways of defining what is international. Having one's work published and recognised in the developing world may be just as "international" as having it widely referenced in the US. But more fool the scholar who submits a paper in Uzbek for assessment. Often there are good reasons for this bias. Academic institutions around the world are not doing work of equal worth. Part of the problem with privileging all things "international" is that widely applied distinctions end up as tacit agreements. When you treat something highly varied as something homogeneous, all sorts of demarcations and discriminations, sensible and otherwise, creep in unannounced.

One of the potentially positive consequences of introducing geographical scale into an assessment process is that it results in the recruitment of international experts (or - and the distinction may be important - specialists with international expertise) to validate the process. However, it is perhaps symptomatic of the problems that accompany these categories that the role and status of these experts have been variable, across exercises and even between panels. The review of the 2001 RAE by Sir Gareth Roberts concluded that future exercises should ensure "a significant international presence on each sub-panel and panel". The 2014 REF has seen this ambition translated into a desire to recruit a portion of panel members with "experience of leading research internationally". Even so, it is surely noteworthy that of the 22 members of the Arts, Humanities and Area Studies main panel, only two are from an institution outside the UK, while the Medicine, Psychology, Biology and Agriculture main panel has five non-UK members out of a total of 21 (of the 385 subpanel members from these two streams, none is from outside the UK).

Although working out what is original and important work is something that academics do all the time, knowing how to judge national and international quality - and who should do the judging - is open to a variety of interpretations. We also need to ask what implications these criteria have for research that is designed to feed into national or regional debates. Assessment bodies are sensitive about this issue and go out of their way to footnote their guidelines with reassurance. The 2014 REF stipulates that: "'world-leading', 'internationally' and 'nationally' in this context refer to quality standards. They do not refer to the nature or geographical scope of particular subjects, nor to the locus of research nor its place of dissemination. For example, research which is focused within one part of the UK might be of 'world-leading' standard. Equally, work with an international focus might not be of 'world-leading, internationally excellent or internationally recognised' standard."


It's a useful statement. But why are we in a situation in which it is a necessary one? Once one has introduced scale into the assessment process, is it plausible to add the rider that quality judgements must be scale-free?

I don't know whether differentiating national and international quality is having an impact on where and how we do research. But it is a question that is worth asking. I took it to one of the UK's foremost experts on regional studies, John Tomaney, of the Centre for Urban and Regional Development Studies at Newcastle University. "What you end up being is pragmatic in relation to these exigencies," he explained, adding that the way the assessment criteria are framed "draws you away from the local, even the national". Tomaney suggested that a researcher who focused her or his work around a long-term commitment to a particular locale (especially one in the UK) would be taking a risk that would not be borne by those interested in "big picture" theorisations "based on skitting around the world". However, Tomaney's major concern was that the scaling of excellence may not produce the kind of research that the UK needs. He concluded the interview with a provocative thought: whether the prioritisation of the international "produces social science which contributes more usefully to solving social and economic problems in the UK - I think that is a very open question".

That may be the ultimate irony of this story. Concerns about national well-being and success drive assessment exercises. It would be a perverse outcome if they were encouraging researchers to privilege non-UK agendas and stages. I suspect that many academics hardly need much of a push in that direction. We have long been one of the most footloose professions. A cosmopolitan self-image is one of the vanities of academic life. The scaling of excellence dovetails with engrained prejudices.

Can we think of a better approach? Combining a system that can produce loud post-assessment headlines about "world-beating" research while getting rid of unnecessary attempts to tie value to scale is surely not beyond us. For example, within a schema based upon the assessment of originality, rigour and importance, all mention of national or international quality could be dropped but the top tier or tiers flagged as comparable to, or exceeding, the best research being done elsewhere in the world. Perhaps this, too, is far from perfect. What I hope we can agree on is that, if we are to have assessment exercises, let us make sure we know what the criteria mean and what their consequences might be. As much as I love geography, I look forward to a REF in which geography plays little part.

