Critics say student evaluations of teaching (SETs) are skewed by ingrained biases against minority groups and that their results should never be used for professional assessment purposes. But a new analysis has found that SETs are so susceptible to factors unrelated to teachers and courses that their results should be disregarded anyway.
A La Trobe University review of 183 SET-related studies has found that issues that have nothing to do with teachers’ identity – such as class size, website quality, university cleanliness and even food options in the canteen – also skew the results. Student characteristics such as gender, age and disciplinary area influence the evaluations as well.
“That student demographics alone impact on SET results demonstrates just how flawed the system is,” says the paper, published in the journal Assessment and Evaluation in Higher Education. “The existing literature makes it clear that SET results are strongly influenced by external factors unrelated to course content or teacher performance. This analysis raises the question of how any university [can] justify the continued use of SETs.”
Author Troy Heffernan said researchers had spent decades exploring how SETs disadvantaged academics on the grounds of gender, racial background, disability and sexual orientation, with women and academics from minority groups routinely given less favourable evaluations than white, able-bodied males.
But the focus had now turned to even more basic methodological shortcomings, with evaluations influenced not only by characteristics of teachers that are irrelevant to their performance but also by the background traits of the students themselves.
An estimated 16,000 higher education institutions around the world regularly conduct SETs, the review found. Dr Heffernan said their administrators might not appreciate the fundamental weaknesses of data that appeared “sound”.
“On the surface, it seems like a great system. You have a class of 100. You ask them if they like the class or course. Over 100 students, you would think you’re getting some form of objective answer.”
Cost considerations also contribute to the continued use of SETs, he said. “The fact is, universities want this data – they want to understand how [to] improve classes – and student evaluations [are] a very quick, cheap way to get instant data.”
Dr Heffernan said none of the reviewed studies had reported favourable findings about SETs, although they had differed on “how damaging” evaluations were. SETs appeared less slanted against minority academics in the humanities than in science-based subjects, for example.
Some academics say they value feedback from SETs, both positive and negative. Dr Heffernan said some institutions conducted evaluations without using the results for career progression purposes. “The main problem is when a majority of universities use this information for hiring, firing and promotion.”
He said qualitative feedback sourced through student support teams would deliver more useful information than quantitative data from students. “Back and forth” dialogue about what “worked” in classes, and what students liked, would be better than “grading someone one to five”.
“But that takes time and money,” he noted. “In a post-Covid austerity-measure world, most universities probably aren’t prepared to do that right now.”