In a bunch of experiments published in a psychology journal a few years back, researchers asked participants to identify occasions when they were (and were not) able to control their temptation to buy things on impulse, and to estimate how much credit card debt they’d be willing to incur in order to buy an item they really wanted.
You’d have thought that the more mistakes they remembered making, the less likely they’d be to clock up big debts on the plastic. But as it turned out, those who remembered loads of times that they had failed to rein in their spending racked up just as much debt as those who reflected on loads of successes.
Apparently we don’t like looking back on our failures – which stops us learning from them – and we overestimate our ability to cope with subsequent challenges, trying to apply lessons from last time even when the situation has changed. Eventually the situation changes so much that endless iterative fixes have to give way to more fundamental change.
There’s also something important about sunk-cost fallacies and letting go of understandings and strategies that served us well in the past. See also the hygiene theatre of wiping surfaces over an airborne virus, and “examinations next week will go ahead as planned”. We love to execute the plans that we have worked on, but they don’t half obscure our thinking about what the right course of action is now.
Omicron and on and on
I raise all of this partly because of where we are in higher education in early January 2022 over Covid.
If I had one consistent critique of senior colleagues in the sector during the early part of the pandemic, it was that despite all those commitments to the “student experience”, a tendency to see higher education in “what we have to deliver” terms rather than in “how will students experience their week” terms was looking at things through the wrong end of the telescope – and coupled with a lack of practice in scenario planning, led to all sorts of mis-steps (mainly around mental health and the various implications of household self-isolation) when what we’d planned met the reality of Covid life for students in September 2020.
So did we learn?
First, a big shout out to those bravely pressing ahead with traditional in-person pencil and paper exams in less than draughty halls over the next couple of weeks. After all, your high academic standards are predicated on standing firm against those who might be actually ill, or those who want to avoid actually catching Covid. Giving students the choice between cough suppressant and June resits ought to teach them a bit of grit. Stiff upper lip! Tally ho!
To be fair, in the parts of the sector where January is a major timed assessment month, most universities spent last term planning to hold (or at least move) assessment online. Omicron changes the delivery game insofar as a provider needs to cope with teaching activity that might have volumes of staff and students isolating off-campus – but it doesn’t change it much if you’ve been hyflexing, or your “delivery” plans are ready to switch to online at a moment’s notice. Omicron? No trouble.
The problem is that the assumption that moving assessment online this month will work may prove faulty once we consider the level of absences already being seen across the sector amongst staff, let alone students. As Covid centrists have been saying all Christmas, even if you believe Omicron to be “milder” (intrinsically, or via immunity, or both), the fact that everyone’s getting it all at once is a pain. And as soon as halls and HMOs fill back up, Omicron is going to rip around those who’ve not yet had it very fast indeed.
In other words – even if you’re optimistic about Omicron being roughly as dangerous as a bad cold or mild flu, higher education’s systems are only set up for a small number of us to get a bad cold or mild flu at any given time. We’re not set up for vast proportions of our own staff and students to get it at the same time as each other. And we’re certainly not set up for vast proportions of our own staff and students to be ill at the same time as a major timed assessment period – however “online” we have prepped to make it.
Plenty of detriment to go around
That routes us around to fresh conversations about safety nets and “no detriment” policies, which it is clear many had hoped to move on from this academic year.
Ultimately, those sorts of policies were a form of structural, mass mitigation for when large numbers of students’ studies were impacted or disrupted in ways outside of their control. The line has tended to be “nope, we’re back to individualising the issues here, folks”.
And so given the volumes of students and staff likely to have been heavily disrupted or off sick over the past few weeks, or likely to be in the middle and back end of January, universities with lots of deadlines or timed assessments this month have three options (not necessarily mutually exclusive):
- Move or delay any assessment (deadlines or episodes) to after Omicron;
- Develop mass/structural mitigations;
- Fingers in ears, eyes shut, be alright surely, messy messy, mop up later praising “resilience” of our people.
We’ll doubtless muddle through eventually, although even if assessment happens, quite who is going to mark it all – and when – is another puzzle.
As ever, signalling to students early and often not just what we are doing, but also that we’ve noticed what they’re experiencing – and building both empathy in our comms and specific mitigations that address it – is always a great idea.
The wider problem is potentially where we are with assessment generally out of the back of the pandemic. And the problems with assessment exacerbated and highlighted by it are both so wide and deep that they require collective thought and action now – before we tumble into a genuine crisis post-pandemic, where we won’t have Covid to cover it.
Adapt or die
Even before the wet markets of Wuhan, the sector was finding it difficult to adapt assessment to an age of mass higher education participation – because many still want and need the results of assessment to signal whether someone is better than someone else rather than just what’s different about them and what they can do. That problem hasn’t gone away.
Well before the Prime Minister was telling people it was still safe to shake hands, “grade inflation” looked like an intractable problem for which few have strategies they genuinely believe will work. As we’ve learned during Covid, arresting the growth rate is only good if cases go on to actually decline. And because nobody is seriously talking about a wholesale shift from criterion-referencing to norm-referencing, this problem hasn’t gone away either.
For at least a decade, students have been using the National Student Survey to tell us that they don’t believe their work has been assessed fairly. The more interconnected they become, the more they feel injustice at the diversity of practice on their course and across the country. That review of the external examiner system that the Office for Students has been studiously ignoring will be like taking Calpol for a migraine – or wearing a cloth mask for Omicron.
For an impossibly long time, it’s been getting clearer and clearer that expecting all students to be taught, develop, be assessed and complete at the same pace and to the same timeline as every other student is a completely broken, “deficit and sticking plaster” approach. Either we’re serious about diversity or we’re not. That problem has also, very much so, not gone away.
But ironically it’s the shift to online – the thing we’ve told ourselves that we’ve got really good at – that really puts the spanner in the works.
None of it works any more
The high-stakes, single-shot, supervised episode of assessment – which I’ll lazily refer to here as the “exam” – has had its fairness and efficacy debated for a very long time. The apparent news from around the sector that moving away from them has been key to closing attainment/awarding gaps in several universities piles on the pressure to kill them off altogether.
They have stumbled on in some contexts – generating an inevitable arms race in online “proctoring” that already appears to be a zero-sum game, with resources diverted from teaching and learning and students finding ways to cheat anyway.
So maybe the long(er) form, produced piece of assessed work is the right answer?
One of the first bits of academic work I produced as a student in 1995 was on the way in which people can pretend to be someone else online – both in relation to the ability to become someone else, but also the way in which that would ultimately reduce trust between us, because we start to gather evidence that what we see or hear isn’t always as it is in reality. And yet here we are on both exams and assessment.
A ban on domestic essay mills in England is like sticking our thumb up to a flood. The tools now available that facilitate what we used to call collusion and plagiarism are astonishing – my sense is that very shortly not only will detecting their usage become impossible, but even if we think we can, the public won’t believe us. And they all feel like using a calculator in the early 1970s – eventually, using versions of these tools well will become the skill we want to assess, rather than something we try to engineer out of the assessment.
I should add that music piracy in the end was killed off by addressing why it was that folks turned to its convenience to start with. But “tough on the causes of crime” is a good example of the sort of solution we always think we ought to have identified at the time that we find hard to implement in the moment. And especially hard at the moment.
Killing off exams and essays would leave us with the live assessment of tasks – what I’ll again lazily refer to here as the viva/presentation. There’s a whole wedge of amazing innovation here to draw on right across the sector – but not only does finding ways to engineer out prejudices and discrimination that can crop up in these kinds of assessment feel under-researched, these types of authentic assessment often feel like they could never scale in those parts of the university doing the heavy lifting on class sizes and high staff-student ratios.
Whichever way you look at it, the more we try to maintain ways for students to prove they’re better than others, or for us to prove that (some of) our students are better than others’, the more it all falls apart. Trickle and flood, and all that.
The science and art of possibility and politics
I don’t have a suitcase full of suggestions here – but my excuses are that my break has been disrupted by Covid like lots of others’, and anyway, this is a problem that demands big picture, collective and radical thinking rather than individual fixes to address the news story that is nipping at us today.
As such, the diametric opposite of what will work is the sort of press/government/regulator-led interventions we’ve seen in recent years, with OfS pretending that its statistical analyses of grade inflation are somehow irrefutable science rather than artefacts of art and politics – all ending in the chaos of that weird in(ter)vention over spelling last summer.
What’s needed here is for the sector itself to take the lead – some future-focussed thinking on assessment that involves students, academics, employers and the public in a way that is grown-up, with leaders prepared to synthesise the complex issues.
That might feel like a stretch while we’re all in “batten down the hatches and cope with Covid” mode – but the good news is that we know of lots of learning and teaching professionals across the country who are already beavering away on projects and pilots that will deserve our time, attention, approval and scaling smarts just as soon as Omicron is a memory.
Do get in touch if you’re involved in any of that thinking – we’d love to see it highlighted here on the site.