Tide turns on ‘inherently biased’ student evaluations of teaching

Universities begin to phase out module feedback exercises in favour of less formal check-ins and peer reviews

February 1, 2024
A man in stocks at a modern-day fete. Source: Getty Images

More universities are being urged to scrap or revise end-of-module student evaluations as concerns mount about the toll they take on lecturers’ mental health.

Institutions in the UK, the US and Australia have moved away from using the once-pervasive exercises in recent years, but they remain commonplace despite a growing body of evidence that they often contain abusive comments and are statistically flawed.

An alternative is to run a more informal midpoint “check-in” that focuses less on quantitative data and more on what could be changed in the remainder of the course, said Athanasia Daskalopoulou, a senior lecturer in marketing at the University of Liverpool.

She authored a recently published paper in which UK academics recount being called “fat” or facing disparaging comments about their dress, age and accents.

The University of Southampton scrapped compulsory end-of-module questionnaires last academic year, and special permission must now be obtained to use them. If granted, course leaders are instructed to “take care” over how this is done “in recognition of the concerns regarding conscious and unconscious bias”.

“We have moved instead to mid-module questionnaires in most circumstances,” a spokesman said. “The reason for this is so that any suggestions students make may help to improve their own experience, in addition to benefiting those in the next cohort.”

Newcastle University has also moved to a similar system, with informal check-ins focused on aspects of a course where things can be adjusted immediately to suit students’ needs.

Mid-point evaluations are common across the sector, but most universities use them in tandem with a more formal university-run exercise at the end of a course.

It is these that have proved most controversial, said Dr Daskalopoulou, because the data is often shared widely and can be used in appraisals and decisions about promotion.

“In mid-point evaluations, students direct their energies towards what they see as working and not working so they can benefit in the learning experience; whereas at the end it is usually an evaluation of how they felt about the person teaching, or if they were happy with their grade,” she said.


Dr Daskalopoulou’s study, which features interviews with academics at UK business schools, sought to discover how people were affected by the surveys on a personal level.

One participant, a white, female professor given the pseudonym Sara, says: “I got fat. I got that put on two evaluations… Then just negative comments about what I look like… the ones that really stick in my mind are the ones about my weight… it kind of really hurts.”

Another, known as Mark, says: “It takes one comment to kind of question yourself and your own career.”

Dr Daskalopoulou predicted that, as evidence emerges from the institutions that have moved away from end-of-module evaluations, “maybe more universities will look to find alternative ways of hearing the student voice without linking evaluations to other aspects”.

Alongside switching to less formal evaluation processes, Dr Daskalopoulou said some universities were also considering removing the anonymity of students who make highly offensive comments, so that they know there could be repercussions, although she noted that this raised clear ethical concerns and was not always straightforward.

Troy Heffernan, a senior lecturer in education at the University of Manchester and co-author of a study published last year that argued that universities’ use of student evaluations left them vulnerable to legal action from distressed staff, agreed that some institutions were moving away from their use.

Through his consultancy work with more than 30 universities worldwide that have reappraised their evaluation processes in the past two to three years, he said, most had moved to a system of peer review of teaching and internal revision of course content.

“These methods are time-consuming, but for universities who have admitted that student evaluations are inherently biased and bad for everybody – though worse for anyone who isn’t a white, straight, able-bodied, heterosexual, middle-class man – this is how you check course content and teacher quality without relying solely on student populations,” he said.

But Dr Heffernan cautioned that while some universities were evolving, they still represented only a fraction of the total number of institutions worldwide, and it should be seen as a long-term process.

“While we are seeing a change, at this point the change is only small when this was a practice that was taking place in a huge majority of the globe’s 16,000 higher education institutions,” he said.

tom.williams@timeshighereducation.com

Reader's comments (6)

These surveys tend to be filled in by students who did not attend, got lower marks than they felt entitled to or have some axe to grind. Managers treat them as a source of ammunition even if only one or two out of 30 or 40 expressed any kind of opinion. They should all stop until someone comes out with a use for this flawed methodology.
A well-known academic statistician remarked that most surveys work only for those who like doing surveys or those who are angry. At best in HE they capture some sense of what is going on in a particular cohort, but at worst they enable certain students to exact revenge on tutors, their course and the institution.
I tell my students that my email is always open for comments on module content and delivery... I receive very few, however, but I highlight those who do respond via weeknotes as well as taking action. Very few bother to do the mandatory (at least, for me to run) mid-module surveys; I'm lucky if there are more than 2 or 3 responses out of a class of 400+. There is evidence (sorry, reference not to hand right now) that professors who stretch and challenge their students, a hallmark of good university teaching, get rated lower than those who do not push students as hard. Any student making personal remarks about an academic's girth ought to be up on disciplinary charges. OK, so the survey is anonymous, but the group as a whole should be informed that irrelevant rude comments are not acceptable and, were the perpetrator known, would have consequences.
I don't think it is always the angry ones who participate in surveys. Maybe the problem is that the results of the surveys are seen as being used to penalise people, and that is why you get those who take advantage of the opportunity. Some of the survey questions beg these kinds of responses. It beggars belief that in institutions where people design and run surveys as part of their research day in and day out, those people are not asked to design something properly. Some external agency has to come in and design things for the management. The other thing with suggesting that surveys are useless is that this argument can then be extended to staff surveys. Very convenient.
Response rates in large classes are often below 10%, yet these data are still taken seriously by the higher-ups, including for probation and progression. Students are being asked to complete up to 20 surveys per year. Could you be bothered, unless you had an axe to grind? And as for construct validity...
The module evaluations can be gamed by staff who give gifts and other treats to bribe students to rate them highly. It works for those staff, and some have risen up the ranks to the top using these surveys as evidence of being an excellent teacher. So it is possible to pay your way to good scores. The system is ineffective; it does not measure the quality of teaching.
