Why you should write feedback to your students before they've submitted
Starting at the end seems counterintuitive, but anticipating students' strengths and weaknesses, and automating your responses accordingly, comes into its own for large cohorts
When I set about writing this piece, I drafted the final paragraph first. I always do. I want to know where I'm trying to get to when I set off on any kind of thought (or, indeed, real) journey. I need to know where I want to land.
To some, it's counterintuitive to start at the end of a thing, and students find it a novel idea. "Write the conclusion of your essay first," I urge them. And when I lead on assessment planning, I urge colleagues to do the same. "Write your feedback to your students first," I want to say. "Yes, I know they haven't done the work yet. Yes, I know it's months away…"
The way we assess students鈥 progress should be embedded in a comprehensive plan that is built around a clear understanding of what it is we want them to learn. In my experience, the more strategic the approach to assessment within a teaching team, the better the outcomes for the students.
If we are clear from the outset about what we want our students to learn, and if we have experience in supporting previous students through that learning, then we will be familiar with the strengths and weaknesses we usually encounter along the way. This familiarity shows in the number of times we find ourselves writing the same feedback comments on, for example, students' essays.
Teachers have long used pre-constructed text to deliver feedback, so the basic idea of this piece is not new. Many tactics allow markers to select feedback items from a bank of pre-written resources, which lets useful things be said to each student without crafting bespoke text in every case.
I want to go a little further and suggest that these things can be done in more algorithmic ways, and that, crucially, this allows us to work at scale. The key is to know what kinds of strengths and weaknesses we are likely to encounter across the cohort of learners. If we can anticipate these, we can create ways to attach the right kind of feedback, automatically, to any given student鈥檚 work.
Calls to action
For this tactic to be successful, the feedback must be high level and strategic. In the course of marking something as complex as an essay, we simply cannot automate the correction of every specific error that might appear in the work. Indeed, the kind of feedback that this strategy enables is more the "call to action" type. By this I mean the "revise your understanding of theory X" type of feedback, rather than feedback articulating the specific limitations of this student's understanding of theory X.
Strategic "call to action" feedback is appropriate for tasks that aim to assess high-level learning. It puts the ball in the student's court, saying: "Work at developing your knowledge of this" instead of trying to do that work for them. The student will have opportunities to ask for further help if needed. If those opportunities are not there, then something more fundamental is wrong with the teaching strategy, and no amount of feedback can correct it.
How it can work
The mechanics of automation are easy to achieve. A front-end digital form is completed for each piece of assessed work. Markers assign a rating to each of the grading criteria and perhaps enter certain standard codes (which denote routine comments such as "work on your referencing", "ensure you provide evidence wherever possible", "you have developed a really strong argument" and so on).
These ratings and codes are uploaded automatically to a database (or spreadsheet) and predetermined formulae are used to generate advice that is genuinely contingent upon the performance of each student. It is then a relatively trivial matter to send this out in bespoke emails, by means of a standard set-up. Links to learning and enrichment resources that are relevant to each individual are included to encourage them to do something with the feedback.
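To make these mechanics concrete, here is a minimal sketch in Python of the "codes to comments" step. It is illustrative only: the code labels, comment text and row layout are assumptions standing in for whatever a real marking form would export.

```python
# A minimal sketch of turning markers' codes into feedback text.
# In practice the rows would come from a spreadsheet or form export;
# a small inline sample stands in for that here.

# Hypothetical bank of standard codes and their pre-written comments.
CODE_COMMENTS = {
    "REF": "Work on your referencing.",
    "EVI": "Ensure you provide evidence wherever possible.",
    "ARG": "You have developed a really strong argument.",
}

def feedback_for(row: dict) -> str:
    """Turn one marker's form entries into a block of feedback text."""
    lines = [f"Dear {row['name']},", ""]
    for code in row["codes"].split(";"):  # markers enter e.g. "REF;ARG"
        comment = CODE_COMMENTS.get(code.strip())
        if comment:
            lines.append(f"- {comment}")
    return "\n".join(lines)

# Two sample rows standing in for the exported marking data.
for row in [
    {"name": "Student A", "codes": "ARG"},
    {"name": "Student B", "codes": "REF;EVI"},
]:
    print(feedback_for(row), end="\n\n")
```

Keeping the comment bank as a simple lookup table means the teaching team can revise the wording between iterations without touching the logic that assembles the emails.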
At the simplest level it might look like this:
- Student A. Upper second grade: You did very well on X area. Here's a link to further reading that you might find of interest.
- Student B. Third class grade: Well done for passing in X area, but there is some evidence of misunderstanding. Revise the set material from week Z of the module.
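Continuing the illustrative sketch above, those grade-band rules might be encoded as templates keyed by band; the band names, link and week placeholders are assumptions, not a prescribed scheme.

```python
# Illustrative grade-band templates, extending the sketch above.
BAND_ADVICE = {
    "upper_second": "You did very well on {area}. Here's a link to further "
                    "reading that you might find of interest: {link}",
    "third": "Well done for passing in {area}, but there is some evidence "
             "of misunderstanding. Revise the set material from week {week} "
             "of the module.",
}

def banded_advice(band: str, area: str, link: str = "", week: str = "") -> str:
    """Fill the template for a given grade band with this student's details."""
    return BAND_ADVICE[band].format(area=area, link=link, week=week)

# Students A and B from the examples above (the link and week are placeholders).
print(banded_advice("upper_second", "X area", link="https://example.org/reading"))
print(banded_advice("third", "X area", week="Z"))
```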
When a team starts working on its feedback in this way, we find that the main constraint on the advice that can be constructed is the team's capacity to imagine. Is it as good, in an absolute sense, as a tutor writing bespoke feedback to every student? Of course not. But it is highly effective when it comes to making the best use of available teaching resources.
Context
I'm not suggesting that this algorithmic approach be used in relation to all types of assessment tasks, merely that it has its place. It is particularly useful in the case of essay-based exams, and it comes into its own when we need to do these things at scale. In this piece, I am thinking about the specific challenges of working with large cohorts of students.
It is important to note that this does not entail adding more to the long list of things tutors have to do. Instead, it requires a reorganisation of when things are done and front-loads some of the effort. Indeed, in our experience, whatever extra work this approach adds upfront is balanced by savings of time and effort at the back end of the process.
Interestingly, it's at this back end, when the marking has just been done, that teachers are best placed to plan how the feedback might be better shaped next time round. It only takes a few iterations of this cycle to end up with a set of marking criteria and associated feedback, made fully available to students, that are highly sensitised to the assessment needs of a unit of learning. And that is to everyone's benefit.
In one important respect, this is rocket science. I assume that when designing a rocket, one plans, first of all, where it is intended to land. In most other respects, these ideas are simply good common sense. But, as is often the case with common sense, the sense turns out to be not as common as we might like.
Andy Grayson is an associate professor in psychology at Nottingham Trent University. He has worked in higher education for more than 30 years and provides leadership on learning, teaching and assessment.