Everyone knows that universities are taking the National Student Survey more seriously than ever. But much less is known about the effect this is having on higher education.
To address this deficit, in the 10th year of the NSS, I carried out a qualitative study into academics’ views on the impact of the survey. My findings suggest that its effect goes well beyond simply giving students an opportunity to evaluate their programme and identify areas for improvement.
The data suggest that the NSS is encouraging a more instrumental attitude to education among students: the questionnaire itself, institutional responses to concerns, and shifts in curricula are all contributing to this move. An economistic register has been reinforced by the introduction of the £9,000 tuition fee, encouraging students to consider whether they are getting “value for money”.
In such a context, higher education may increasingly be regarded as a transaction where students pay for something that academics “deliver”.
Those involved in my study reported that some senior managers who oversee the survey take a punitive attitude to academics as a result of their NSS evaluations.
This is evident in the ways in which the results are distributed, the public nature of the comparisons that are made, the requirements to respond to issues raised and the combative tone of much of the discussion around the survey results.
Academics are required to respond – and quickly – to concerns that are raised, despite the fact that these concerns may not represent a significant problem. Scholars reported that poor NSS scores are referred to “again and again and again”, emphasising the impact of those scores on the people concerned.
My research suggests that survey results represent a series of mediations and approximations, producing a distinctly muddy picture of how improvements might be made. Low scores may reflect the evaluations of a very small number of students, and sometimes the NSS is used to express disgruntlement about something quite outside the remit of the survey.
Where academics explored a problematic score, and tried to address it, it was noticeable that some students who had been “satisfied” then became “dissatisfied” (and vice versa); the NSS is, as many people describe it, a “blunt instrument” that often cannot differentiate between a real problem and a superficial problem.
It highlights only what has gone wrong (or appears to have gone wrong), and overlooks the fact that the absence of anything going wrong does not necessarily mean that everything has gone right.
The notion of continuous “improvement”, which is now commonplace in academic departments, discourages thinking about the complexity of educational issues.
Jo Frankham is reader in educational research at Liverpool John Moores University. For a copy of her full report, email j.frankham@ljmu.ac.uk.