I will freely confess to hating feedback. Even the anticipation of reading something mean-spirited or critical makes me feel wobbly – the actual experience of it knocks me sideways for days.
You might say I’m in the wrong job, but it seems I’m not alone – this week my Twitter algorithm has been serving up examples of academic colleagues reporting some of the unkind comments students have left in the latest round of module feedback, and how upset they feel about it.
Now, don’t get me wrong – feedback, done properly, is the breakfast of champions, helping us sharpen our thinking, and moderate our perspective. I’m not shy about giving feedback or, when there’s a relationship of trust and respect, hearing it (and acting on it). Any professional would say the same.
But it’s difficult to engage with feedback that doesn’t offer the benefit of the doubt that you were trying your best, that makes ad hominem attacks or criticism rooted in prejudice, or that is worded in a way that suggests the author has forgotten there’s a human being who has to read and digest their words.
For academics, there is a lot to unpack with the module feedback form – they may have had little input into what feedback is gathered or how it’s done; unlike when feedback is sought on a corporately-produced event or product, academics soliciting module feedback are subjecting themselves to a very personal critique of something that touches directly on their professional identity; and they may be expected to review and digest it and come up with actions by themselves.
That’s a toxic cocktail of disempowerment, vulnerability, and isolation right there. Crucially, as well as potentially affecting people’s wellbeing and motivation, those are not really the best conditions for making meaningful change in response to feedback.
Pain relief
Just not asking for feedback isn’t really an option, so it’s worth thinking through how to make the collection of student feedback less painful. One suggestion doing the rounds is that it be someone’s job (ie someone unrelated to the teaching of the module in question) to screen out the purely nasty comments, leaving only the constructive ones.
You could go one further and ask module leaders to swap feedback datasets, summarise the key points on each other’s behalf, and follow up with a friendly peer conversation discussing it. Or – tech confidence permitting – you could make it technology’s problem, and get one of the many AI software programmes available to give you the highlights rather than reading every word. If nothing else, the blunders of the tech could make the process a bit funnier.
But it could also be worth thinking about whether solicitation of feedback could be an opportunity to increase trust and deepen relationships, rather than doing the opposite.
While I’m only speculating here, I’d hazard a guess that being asked the same basket of generic questions on the same form at the end of every module is more geared to produce student cynicism and apathy than thoughtful critique (in my case, I drew the same smiley face on every module feedback form I have ever been obliged to complete).
I suppose that there is some departmental or institutional value in being able to compare and RAG rate module feedback to check things aren’t going horribly wrong on anyone’s watch, but I’d also suggest that the degree of detail required at that level is quite minimal, and probably straightforwardly numerical – ie X per cent of students agree that the module was well structured, free of timetabling issues, and covered useful material, that the assessment was well understood, that there was a good breadth of learning resources, and so on.
If the data flags a serious issue it will most likely already be known about because it will have been blindingly obvious, or it will be something that can be explored in more depth in the design of the next iteration of the module, with the next cohort of students. You probably also need a prompt making clear that if there is a genuinely serious issue (like bullying, or fraud, or reading off the PowerPoint), students are encouraged to raise it through the established channels.
Talk talk
When it comes to how the actual learning happens, though, asking students to think about, and discuss, at points throughout the module, how they’re getting on with learning, what’s working for them, and what isn’t, and share tips on overcoming challenges, shifts the dynamic.
Rather than seeing the module as something that students retrospectively pass judgement on, that has been “delivered” by academics, it asks students to take part in a thoughtful discussion about what their needs and expectations are, what can reasonably be adjusted or enriched, where their module leader could try to remove barriers and where they may need to make changes to their own practice.
It needn’t be through in-person conversation, and it needn’t be the module leader who runs it, if there are other skilled staff who would be better placed. It could be co-facilitated with course reps and have the added benefit of generating an agenda for the next rep meeting. And the only requirement would be that module leaders feed back to their department or faculty meeting any ideas or issues that couldn’t be implemented or solved within the module.
As ever in these moments, there is not a single doubt in my mind that absolutely masses of academics are already doing this, or an appropriately theorised and tested version of it that wasn’t thought up on a quiet Thursday afternoon.
I hypothesise that the reason module feedback forms traditionally ask for written commentary is not that thoughtful teachers don’t already know how their students are getting on, it’s because there’s a worry that there are some who don’t really know how to facilitate a conversation like this confidently and productively. There might also be a concern that students don’t have the confidence – or possibly, in some cases, the capability? – to take part in a conversation like the one I’ve described.
But challenges of scale, engagement, and confidence notwithstanding, the ability on both sides to talk about what’s happening in a module and how the learning is going seems pretty essential to effective higher education. In other words, if either staff or students can’t do it, maybe they need to learn how.
My institution has removed this problem by putting out a centrally controlled end-of-year feedback form on all modules, which has only Likert-type response options and no place for any free-text feedback. This appears to be so they can build institution-wide data dashboards. Not surprisingly, on courses with which I am involved, response rates have dropped – I wouldn’t feel motivated to complete a feedback form that didn’t allow me to write my own thoughts, and the students appear to agree. As you say, Debbie, many of us regularly ask for constructive feedback from our students so we can pick up issues as the academic year progresses.
As we’ve seen recently, some students will use free-text feedback to attack simply for their own self-gratification, with female staff being targeted for the worst abuse. A single male student can be a problem, but a group of students collaborating and submitting similar feedback can be absolutely devastating. Unfortunately, some universities also use student feedback to start capability enquiries, which in the current financial situation can put you at the top of their ‘hit-list’ for ‘cost reduction’ even if the ‘customer’ feedback turns out to be rubbish.
One way to address this is to build in – as you suggest, Debbie – a much more integrated and continuous approach to how we evaluate our teaching and the student experience.

For many years as a module and course leader I evaluated pretty much every session I taught via a variety of mechanisms, both old school (post-its, flip chart paper) and tech based (Google Forms) – these were always quick two-minute jobs and students soon got used to them. I also engaged them in quick chats during the taught session (and before/after if time allowed): how’s it going, what are you enjoying, what can I/we do to improve things? Are you worried about anything?

At the end of each module, I built in dedicated evaluation time in session – 15 minutes for them to talk to each other with me out of the room, focusing on four key aspects of the module: the organisation of it; the teaching and learning experience; the resources; and the assessment. I asked them to make a few notes anonymously (on paper or Padlet), then when I came back in, their job was to highlight one key thing from each area. I then asked them to complete the digital module evaluation form (still within the session). The whole thing took about 30 minutes, and I provided snacks for smaller cohorts (for larger cohorts we all brought something to share).

Using this process I got great constructive feedback from them and excellent engagement with the survey itself. I always made sure to let them know what I’d changed or would be changing as a result of their feedback. And I did this at the start of the module for the next cohort, so that they knew that student feedback made a difference, as well as including a summary on the Blackboard site. If this all sounds like a lot, it really didn’t take much time out of the module, and I’d argue that this kind of thing is as important as making sure we cover all the content (which they can access on the VLE anyway…) – it helps to create a genuine learning community.