As someone who has been involved with the recent teaching quality assessment exercise in geography, both as an assessor and as a member of an assessed department, I have extremely serious misgivings about the whole exercise.
First there is the lunacy of putting so much effort into a process which produces (with the exception of one department) one of only two results. I would expect the distribution of quality to be a normal one, with very few departments rated unsatisfactory (one, in fact), very few at the other extreme, and many departments clustered around the satisfactory/excellent borderline. Obviously this process has been revised for the next batch of subjects, but those departments recently claiming excellence and assessed as satisfactory have to live with that label for many years.
As an assessor, I felt that the training given was extremely basic, especially in relation to the evaluation of teaching sessions. We were shown three videos and asked to rate them on a 12-point scale. One was unanimously satisfactory, but there was no consensus about whether the others were unsatisfactory or excellent respectively. I recall being told that "you'll know excellent when you see it". I am not at all sure that there is (or indeed can be) any consistency here from one assessor to the next; as an assessor I was often far from certain that I was giving the right assessment on the satisfactory/excellent borderline.
This leads to the issue of the assessment teams; in a small team of, say, four assessors it is entirely possible to have a couple of assessors setting standards too high. They would reduce the number of excellent sessions observed, and could easily affect the overall outcome compared with what a "softer" team might have done. It is well known, though often denied, that the percentage of excellent observed sessions is the overriding criterion. The Higher Education Funding Council for England admits there is a lack of consistency between departments in the whole process.
I have seen a team of four assessors assessing a department's claim for excellence, three of whom came from departments which had not themselves claimed to be excellent; this is hardly peer assessment.
Then there is the worrying issue of the subject lead assessor who went to some departments being assessed and not to others, apparently toughening up the assessors with comments to the effect that excellent should be exceptional, and so effectively interfering in the process in some departments. I have asked the director of the Quality Assessment Division of HEFCE which departments this assessor visited, but have received no reply. He should go to all departments or to none. And it should be possible for all departments to be excellent!
Finally there is the problem of an assessment team which makes errors, misses documents, and does not ask for necessary information. What are we to make of a team which, in its first quality feedback report, managed to:
- misstate the departmental aims
- accuse the department of not linking more strongly with the local community, when there is no reference to such links in either the university or departmental aims
- miss entirely a two-page document on Monitoring Student Progress, and then criticise the department for poor monitoring
- describe a 20-year-old mentor system as "recently developed"
- state that one course had no representative on the staff-student committee, when her name is in the minutes?
In addition, they failed to speak to the past or present chairman of the department about staff development policies, to the department or university safety officers about safety, to the chairman of the staff-student committee about student participation, or to the course tutor about assessment methods and outcomes or about why some committees are not formally minuted. On all these topics they made unsubstantiated critical remarks.
I have asked the director how many mistakes a team has to make before he would do something about it; I have received no reply. There must be the option for a department to appeal, especially on the grounds of maladministration.
The fundamental problem behind the TQA process is that the assessment is not itself assessed in any real way while an assessment visit is under way. HEFCE admits that consistency cannot be expected, and yet the decision made by a group of assessors is apparently irrevocable. Unless this basic problem can be resolved, the validity of the whole exercise is in doubt. Justice and fairness must be done and be seen to be done.
Dr B. P. Hindle, Manchester Geographical Society