I am a non-executive director for a couple of private companies. From time to time, my executive colleagues ask me what I do during all the “holidays” when I am not teaching. It’s all good-natured teasing, I think, but it is also a difficult starting point for explaining to outsiders what an academic does all day – including at weekends. I respond with questions such as “Would you like to have all your clients wandering around your premises most of the year?” or “Imagine there is something that is crucial to your performance evaluation but it always feels as if there is no time for it”. But this cuts no ice, and I can see what they are thinking – “You academics have all this free time and yet all you do is moan.” We have all heard versions of this narrative before.
So then I give them a list of all the different dimensions across which individual academic performance is now assessed: research quality (“REF-ability”), teaching quality (student satisfaction), public engagement, beneficial impact on society, administration and academic management, fundraising and, yes, something called citizenship – meaning voluntary, often invisible, activity to sustain academic culture for its own sake, which brings benefits to a wider group than oneself or even one’s department. This list gets some attention. “That is just silly. How can you possibly be good at all those very different things?” they say. How indeed?
The sheer range and diversity of things that now define academic activity is new. While the various components are extensively discussed, the size and shape of the portfolio is not. Even the most diehard supporter of performance evaluation for academics would be forced to ask whether it is reasonable and sensible to expect such a range of qualities at the individual level. We have always known that higher education organisations are complex and pursue multiple objectives, but more granular definitions of performance will make these tensions more apparent at the individual level. This makes the mentoring of anxious younger colleagues correspondingly difficult, not least because of the underlying hypocrisy of a system that, very often, implicitly makes research success the decisive factor in advancement, subject to adequate performance in other areas. So colleagues ask me questions such as “Can I afford to be a good academic citizen?” and “Do I really need to improve my teaching if it is already good enough?”
The second striking, and new, feature of the performance management landscape is the creation and expansion of performance infrastructure: new roles such as impact officers, expanded roles for press and communications offices, and – as discussed in Times Higher Education last week (“Mass observation”, Features, 23 October) – investment in big databases with the potential to track different kinds of academic activity and its external footprint. Nothing less than a new kind of organisational self-knowledge is being created. We do not yet know the kinds of things that it will be possible to track and measure in the future, but their use in performance evaluation will be inevitable. And make no mistake – these data management systems are already highly proprietorial, regarded as a source of competitive advantage in austere funding environments.
A third observation about performance measurement is that no one can claim ignorance of its side-effects, such as the incentive it creates to discontinue vital activity such as the evaluation of journal submissions and grant applications. That is yesterday’s starting point for discussion. It is no longer a surprise to be told that performance systems create “illusions of control”, needless “audit trails” or a “false precision”; that outcomes quickly become targets with perverse effects (Goodhart’s law); that they make as many aspects of performance invisible as they do visible; and that they lead to conservatism in research and teaching. Readers of Times Higher Education and leaders of higher education organisations know all this already, and little has changed.
Given that the Pandora’s box of performance management systems cannot easily be closed, the pressing issue is not diagnosis (or moaning, as some would see it). The question is how those charged with performance management in universities can execute and sustain sensible and balanced conversations with staff about performance: conversations that may begin with metrics such as student satisfaction scores but do not end with them. This might require confident leaders to create “performance free” time and space in which genuine innovation might be fostered. And it might require greater attention – privately, so that they do not become just another set of targets – to the soft and elusive indicators of academic citizenship.
As for my boardroom colleagues, they will always take some convincing about how hard some academics work, doing so many different things so brilliantly. And I certainly dare not tell them I am currently on paid sabbatical leave.