Informed opinion the key to matters of reputation

February 17, 2011

It is an eye-wateringly powerful critique. The influential journalist Malcolm Gladwell, writing in the current edition of The New Yorker, shines a harsh light on the US’ national college rankings.

The piece, “The Order of Things: what college rankings really tell us”, is a salutary read.

It raises some fundamental issues, not least the fact that, as Gladwell puts it, “there’s no direct way to measure… how well a college manages to inform, inspire and challenge its students”. While there are some sound indicators of research quality, Gladwell is correct that teaching quality resists direct measurement, and it is the responsibility of all those who rank to be open about this. We must explain the proxies we use and be frank about the limitations of the available data.

Gladwell is perhaps at his most scathing when describing the attempt to produce rankings that are both “comprehensive and heterogeneous”. You simply cannot, he argues, compare a wide range of different types of university – from massive rural public institutions with low tuition fees to small, urban, elite private institutions that cost the earth – while at the same time using a wide range of performance indicators.

This attempt at both breadth and depth is a serious problem with domestic university league tables, especially as they are directed at the student consumer. Many attempt to place all of a nation’s higher education institutions on a single hierarchical list, based on a wide range of proxies – something Gladwell describes as “an act of real audacity”.

The Times Higher Education World University Rankings seek to compare only a select world elite and have a heavier emphasis on research. We list just 200 institutions, no more than 1 per cent of the global number (and possibly a lot less, depending on whose figures you use for the number of universities in the world).

While the institutions in our top 200 have different histories, cultures, sizes and structures, they all share broadly similar characteristics: they recruit from the same global pool of leading administrators, academics and students; they push the boundaries of knowledge with world-class research, published in leading international journals; they teach at both the undergraduate and postgraduate levels; and they tend to be well resourced.

We stop our official ranking list at 200 institutions, despite having data on many more universities, because we recognise that the deeper you go, the smaller the data differentials and the more you risk comparing apples with oranges. As the rankings database grows, we will seek to work harder to ensure that we compare, as far as possible, apples with other apples (perhaps by treating small specialist institutions differently, for example).

One of the most powerful sections of Gladwell’s essay castigates the use of reputational surveys in college rankings.

“Reputational ratings are simply inferences from broad, readily observable features of an institution’s identity, such as its history, its prominence in the media, or the elegance of its architecture. They are prejudices,” he writes. Worse, these prejudices stem from the rankings themselves.

But there is hope. Michael Bastedo, an education sociologist at the University of Michigan, and an expert on the weaknesses of reputational ranking measures, says in The New Yorker article that they sometimes work: when, for example, professors in a discipline are asked to rate others in their field.

Such respondents “read one another’s work, attend the same conferences, and hire one another’s graduate students, so they have real knowledge on which to base an opinion”, he says.

This principle, I’m glad to say, is the foundation of the academic reputation survey carried out for the Times Higher Education World University Rankings. Look out for your invitation to participate – we value your expert views.
