Ever since universities minister Michelle Donelan started telling students that they could apply for tuition fee refunds if the “quality isn’t there”, students have rightly been asking, “well, what do you mean by quality?”
Many have therefore been surprised to learn that the judgement made of the academic “quality” of their programme is not something they are legally entitled to challenge or complain about. Academics own that.
Until the pandemic, that painful realisation usually kicked in when students submitted an academic appeal. Partly because they don’t know what standards universities are supposed to be maintaining, and partly because they aren’t keen to complain about the people about to mark them, few students complain about poor “quality” teaching or support ahead of assessment – and if that then contributes to their failing academically, they angrily discover that they can’t challenge their marks on that basis, because that’s “academic judgement”.
The magical power of academic judgement is theoretically established on the basis that it’s not just one academic making the decisions – some peer review is involved internally, and academics from other providers appear at exam boards externally to moderate the marking (usually, notably, without moderating the assessment design), and perhaps to chip in some views on the course more generally.
But as concerns have grown about “quality and standards” in a context of significant and rapid expansion and grade inflation, how “fit for purpose” are these processes of peer review?
A review about review
I raise all of this because news reaches us that Universities UK (UUK) and GuildHE have asked the Quality Assurance Agency (QAA) to work with them to support universities to review and improve external examining practices.
They say that their recent progress review of efforts to tackle grade inflation demonstrates that universities are already taking steps to review how they use external examiners – but “inconsistencies remain” that they say “could undermine confidence in degree classifications”.
You can say that again. The UK’s external examiner system might have been described in 2003 as a “guardian of the reputation of UK higher education” by the old DfES, but it came in for stinging criticism by the Innovation, Universities, Science and Skills Committee in 2009. It argued that:
- The remit and autonomy of external examiners is often unclear and may sometimes differ substantially across institutions in terms of operational practices;
- The reports produced by external examiners are often insufficiently rigorous and critical;
- The recommendations in external examiners’ reports are often not acted upon – partly because their remit is unclear; and
- The appointment of external examiners is generally not transparent.
It concluded that:
- The QAA should work with providers to create a UK-wide pool of academic staff recognised by the Quality Assurance Agency from which providers would select external examiners;
- A reformed QAA should be given the responsibility of ensuring that the system of external examiners works;
- That should include ensuring that standards are applied consistently across institutions;
- There should be the development of a national “remit” for external examiners;
- Clarification was needed on what documents external examiners should be able to access, the extent to which they can amend marks and the matters on which they can comment;
- That should be underpinned with an enhanced system of training, which would allow examiners to develop the generic skills necessary for multi-disciplinary courses; and
- The system should also be transparent and external examiners’ reports should be published without redaction.
The problems identified in the report eleven years ago feel contemporary and yet also exacerbated by a decade of expansion. The recommendations remain largely undelivered.
Back to the future
In 2010, Universities UK responded to the IUS Committee report by announcing a review of external examining. It said that it would:
- Address the need to develop Terms of Reference for the role, to support consistency;
- Reinforce the specific role of external examiners in ensuring appropriate and comparable standards;
- Analyse the level of support given by institutions to external examining, both financial and professional; and
- Identify current and future challenges and changing practice (such as modularisation) and their implications for external examining.
Unsurprisingly, that Finch review found that external examining arrangements in the UK were working well generally (they always do), but the report offered recommendations for “increasing the degree of consistency” across institutions and “improving levels of transparency”, and strongly advised the adoption of its recommendations by universities as soon as possible. These included:
- The role of the external examiner should be comprehensible to students, the media and the general public;
- A national set of minimum expectations for the role of external examiners should be developed, and should be adopted by each institution;
- A national set of criteria should be established for the appointment of external examiners, and adopted by each institution;
- Improvements to induction for all external examiners;
- Core content for all external examiners’ report forms;
- All external examiners’ reports made available, in full, to all students.
Not everyone was thrilled with this set of proposals. Peter Williams, by then a former head of the Quality Assurance Agency, said the plans represented at least the fourth attempt at reform in 40 years, but that “not a lot has happened previously”, and the report left “several elephants in the room”.
“There is no mention of the very considerable resources needed to implement the changes, no mention of the competing demands of research on academics’ time and no real attempt to get to grips with comparability,” Williams said.
On the idea that the system ensured a First was at the same standard as a First from anywhere else, he said “the term ‘broad comparability’ is pretty weaselly. How broad is broad?”
And he added that while the document was “much more realistic” about the limitations of the “still vitally important” role of externals, he said that without better pay, their only motivation was a sense of obligation. “This won’t be sustainable if the workload increases substantially – it’s already very shaky,” he warned.
The problems identified in that report still feel contemporary and exacerbated by expansion. And workload did increase, substantially. Yet many of the recommendations still remain undelivered.
Strike up the band
In 2015, a Higher Education Funding Council for England (HEFCE) that was becoming increasingly interested in “quality” commissioned the then Higher Education Academy (HEA) to conduct a(nother) review of external examining arrangements in the UK. Its brief was to consider how the recommendations of the Finch Review had been implemented, to assess whether the arrangements for external examining were fit for purpose, and to look ahead to the “changing higher education environment of 2025” (moocs, hoverboards, etc).
It concluded that there were a number of issues that needed to be addressed in order to establish external examiners as “key contributors” to the assurance of academic standards and to offer “confidence” in those standards to stakeholders. These issues were (and you’ll see a pattern forming here):
- Professionalisation of the role;
- Support from their home institutions to undertake the role;
- Calibration of examiners’ academic standards;
- Clarity in the role and remit; and
- The impact of award algorithms and regulations (including deepening externals’ understanding of these so that their judgements were robust).
That review found that “empirical research provided clear evidence of the inconsistency and unreliability of higher education assessors”. Academics “have little knowledge of the difficulties and complexities of reliable marking”, and “they … lack experience of providers across the sector”, “receive limited support for the task from their own institutions”, and 10% had experienced pressure “not to rock the boat”.
On that basis, it recommended that the sector:
- Develop a standardised and clarified role and remit for external examiners that rebalanced academic and quality standards and was agreed across the UK sector;
- Accelerate the professionalisation of the external examiner role;
- Ensure that external examiners take part in regular calibration of their standards organised by their disciplinary communities, drawing on existing UK and international methods for calibration;
- Organise systematic training to develop further knowledge and more consistent perspectives on the role, standards, assessment literacy and professional judgement;
- Establish a central appointment process, managed by the sector but independent of individual providers;
- Adopt equitable and appropriate remuneration;
- Undertake to support and recognise external examiners in their home institutions including development of staff for the role, clear reward and recognition for the role, appropriate resourcing including time, and effective use of examiner knowledge and experience;
- Review the impact of differential award algorithms and regulations amongst degree-awarding bodies on outcomes for students and academic standards.
The problems identified in that report still feel contemporary and have all been further exacerbated by expansion. And yet still, many of the recommendations remain undelivered.
Here I go again on my own
These days the Office for Students (OfS) has managed to run an entire review on quality and standards without so much as mentioning the external examining system – and doesn’t seem especially interested in the comparatively low score that its own NSS generates every year on “marking and assessment has been fair”.
Nevertheless, it is proposing that while academics might contribute to some of its thoughts, at the core some qualitative judgements on quality will be made by its staff, and some quantitative judgements will be made by a comparison of metrics to absolute baselines (and by comparison to benchmarked outcomes in the TEF).
In other words, when it comes to making judgements about quality, the English regulator has decided that the judgement of academics – expressed partly through the external examiner system – is no longer sacrosanct. So why do university complaints processes and the courts still assume that it is when students are trying to complain?
Do not pass go
Back in 2010, sector legal expert David Palfreyman described immunity from adjudicatory or judicial scrutiny on the basis of academic judgement as the UK higher education system’s “get out of jail free card”.
He noted that similar deference to expertise in other sectors had rightly disappeared – particularly where service users had been able to demonstrate that the supplier of a service had failed in their contractual duty to perform a promised service with reasonable skill and care. That’s an inconsistency that remains.
But if we’re going to avoid thousands of opportunistic appeals (as we’re about to see over Level 3 teacher-assessed grades in Examnishambles 2) it remains important that we try to make this work. Academics need the support, time and resource to assess with reasonable care and attention, internal moderation needs to be brought up to scratch, and external examining needs both to work and to be seen to be working, consistently.
But when many academics argue that the workload model they operate under prevents them from doing the initial marking with the requisite reasonable care and skill, you’d have to assume there’s a powerful cultural incentive for both internal and external moderators not to rock those boats and cause the initial markers to get it in the neck.
Maybe it will be different this time. Universities UK and GuildHE have announced that they are working (again) with the QAA (again) to review the external examining system, with a view to considering (again):
- Advising on activities that, at a minimum, external examiners should always expect to undertake and be consulted on;
- Establishing the content and format requirements for effective training and CPD of external examiners;
- More consistent use of and reference to sector recognised standards and national frameworks;
- Increasing use of longitudinal data and appreciation of local contexts (including degree outcomes statements in England and Wales), supported by training, to assess performance across and within cohorts;
- Reviewing eligibility criteria and qualifications required for appointment of external examiners, including incorporation of industry and PSRB expertise;
- Improving transparency and consistency in processes for responding to external examiner reports, advice, and concerns.
This really ought to be the last chance saloon for the sector to get this creaking system in order. Dismissing complaints on the basis of higher education staff’s magical capacity for academic judgement maintains the sector’s outward stance of “we know best” – all while it repeatedly admits, in report after report, initiative after initiative, pilot after pilot, that actually, we still don’t know best at all.
That the regulator has now rejected the magical powers as a mere conjuring trick is worrying. That lawyers, surgeons, plumbers and social workers long ago lost this flimsy defence, but the academy retains it in plain sight of repeated, unaddressed and systemic failure, is astonishing. That we’re fifty years on from the first time someone tried to fix this in higher education is unsurprising – but given how proud we all are of the rapidity and scale of change in HE just lately, it would be nice if we could deal with this one in the slipstream.