Ever since universities minister Michelle Donelan started telling students that they could apply for tuition fee refunds if the “quality isn’t there”, students have been rightly asking “well, what do you mean by quality?”
Many have therefore been surprised to learn that the judgement made of the academic “quality” of their programme is not something they are legally entitled to challenge or complain about. Academics own that.
Until the pandemic, that was a painful realisation that usually kicked in when students submitted an academic appeal. Partly because they don’t know what standards universities are supposed to be maintaining, and partly because students aren’t keen to complain about the people about to mark them, few students complain about poor “quality” teaching or support ahead of assessment – and if that then leads to them failing academically, they angrily discover that they can’t challenge their marks on that basis, because that’s “academic judgement”.
The magical power of academic judgement is theoretically established on the basis that it’s not just one academic making the decisions – some peer review is involved internally, and academics from other providers appear at exam boards externally to moderate the marking (notably, usually without moderating the assessment design), and perhaps to chip in some views about the course more generally.
But as concerns have grown about “quality and standards” in a context of significant and rapid expansion and grade inflation, how “fit for purpose” are these processes of peer review?
A review about review
I raise all of this because news reaches us that Universities UK (UUK) and GuildHE have asked the Quality Assurance Agency (QAA) to work with them to support universities to review and improve external examining practices.
They say that their recent progress review of efforts to tackle grade inflation demonstrates that universities are already taking steps to review how they use external examiners – but “inconsistencies remain” that they say “could undermine confidence in degree classifications”.
You can say that again. The UK’s external examiner system might have been described in 2003 as a “guardian of the reputation of UK higher education” by the old DfES, but it came in for stinging criticism by the Innovation, Universities, Science and Skills Committee in 2009. It argued that:
- The remit and autonomy of external examiners is often unclear and may sometimes differ substantially across institutions in terms of operational practices;
- The reports produced by external examiners are often insufficiently rigorous and critical;
- The recommendations in external examiners’ reports are often not acted upon – partly because their remit is unclear; and
- The appointment of external examiners is generally not transparent.
It concluded that:
- The QAA should work with providers to create a UK-wide pool of QAA-recognised academic staff from which providers would select external examiners;
- A reformed QAA should be given the responsibility of ensuring that the system of external examiners works;
- That should include ensuring that standards are applied consistently across institutions;
- There should be the development of a national “remit” for external examiners;
- Clarification was needed on what documents external examiners should be able to access, the extent to which they can amend marks and the matters on which they can comment;
- That should be underpinned with an enhanced system of training, which would allow examiners to develop the generic skills necessary for multi-disciplinary courses; and
- The system should also be transparent and external examiners’ reports should be published without redaction.
The problems identified in the report eleven years ago still feel contemporary, and have been exacerbated by a decade of expansion. The recommendations remain largely undelivered.
Back to the future
In 2010, Universities UK responded to the IUSS Committee report by announcing a review of external examining. It said that it would:
- Address the need to develop Terms of Reference for the role, to support consistency;
- Reinforce the specific role of external examiners in ensuring appropriate and comparable standards;
- Analyse the level of support given by institutions to external examining, both financial and professional; and
- Identify current and future challenges and changing practice (such as modularisation) and their implications for external examining.
Unsurprisingly, the resulting Finch review found that external examining arrangements in the UK were generally working well (they always do), but the report offered recommendations for “increasing the degree of consistency” across institutions and “improving levels of transparency”, and strongly advised universities to adopt them as soon as possible. These included:
- The role of the external examiner should be comprehensible to students, the media and the general public;
- A national set of minimum expectations for the role of external examiners should be developed, and should be adopted by each institution;
- A national set of criteria to be established for the appointment of external examiners, adopted by each institution;
- Improvements to induction for all external examiners;
- Core content for all external examiners’ report forms;
- All external examiners’ reports made available, in full, to all students.
Not everyone was thrilled with this set of proposals. Peter Williams, by then a former head of the Quality Assurance Agency, said the plans represented at least the fourth attempt at reform in 40 years, but that “not a lot has happened previously”, and the report left “several elephants in the room”.
“There is no mention of the very considerable resources needed to implement the changes, no mention of the competing demands of research on academics’ time and no real attempt to get to grips with comparability,” Williams said.
On the idea that the system ensured a First was at the same standard as a First from anywhere else, he said “the term ‘broad comparability’ is pretty weaselly. How broad is broad?”
And while the document was, he said, “much more realistic” about the limitations of the “still vitally important” role of externals, without better pay their only motivation was a sense of obligation. “This won’t be sustainable if the workload increases substantially – it’s already very shaky,” he warned.
The problems identified in that report still feel contemporary and exacerbated by expansion. And workload did increase, substantially. Yet many of the recommendations still remain undelivered.
Strike up the band
By 2015, a Higher Education Funding Council for England (HEFCE) that was becoming increasingly interested in “quality” had commissioned the then Higher Education Academy (HEA) to conduct a(nother) review of external examining arrangements in the UK – to consider how the recommendations of the Finch review had been implemented, to assess whether the arrangements for external examining were fit for purpose, and to look ahead to the “changing higher education environment of 2025” (MOOCs, hoverboards, etc.).
It concluded that there were a number of issues that needed to be addressed in order to establish external examiners as “key contributors” to the assurance of academic standards and to offer “confidence” in those standards to stakeholders. These issues were (and you’ll see a pattern forming here):
- Professionalisation of the role;
- Support from their home institutions to undertake the role;
- Calibration of examiners’ academic standards;
- Clarity in the role and remit; and
- The impact of award algorithms and regulations (including deepening externals’ understanding of these, so that their judgements are robust).
That review found that “empirical research provided clear evidence of the inconsistency and unreliability of higher education assessors”. Academics “have little knowledge of the difficulties and complexities of reliable marking”, and “they … lack experience of providers across the sector”, “receive limited support for the task from their own institutions”, and 10% had experienced pressure “not to rock the boat”.
On that basis, it recommended that the sector:
- Develop a standardised and clarified role and remit for external examiners that rebalanced academic and quality standards and was agreed across the UK sector;
- Accelerate the professionalisation of the external examiner role;
- Ensure that external examiners take part in regular calibration of their standards organised by their disciplinary communities, drawing on existing UK and international methods for calibration;
- Organise systematic training to develop further knowledge and more consistent perspectives on the role, standards, assessment literacy and professional judgement;
- Establish a central appointment process, managed by the sector but independent of individual providers;
- Adopt equitable and appropriate remuneration;
- Undertake to support and recognise external examiners in their home institutions including development of staff for the role, clear reward and recognition for the role, appropriate resourcing including time, and effective use of examiner knowledge and experience;
- Review the impact of differential award algorithms and regulations amongst degree-awarding bodies on outcomes for students and academic standards (a toy illustration of how two such algorithms can diverge follows below).
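To make that last point concrete, here is a minimal sketch – in Python, with entirely invented module marks and two deliberately simplified, hypothetical classification rules (a flat credit-weighted mean versus a final-level-weighted “exit velocity” mean), neither of which is any real provider’s algorithm – showing how the same transcript can fall either side of a classification boundary depending purely on which algorithm the degree-awarding body uses.

```python
# Illustrative only: the module marks and both classification rules are
# invented for this sketch - no real provider's algorithm is modelled here.

# (credits, mark, level) for a hypothetical student profile
modules = [
    (20, 62, 5), (20, 64, 5), (20, 60, 5),   # level 5 (second year)
    (20, 74, 6), (20, 75, 6), (20, 71, 6),   # level 6 (final year)
]

def classify(mean_mark):
    """Map a weighted mean mark onto a classification band."""
    if mean_mark >= 70:
        return "First"
    if mean_mark >= 60:
        return "2:1"
    if mean_mark >= 50:
        return "2:2"
    return "Third"

def flat_mean(mods):
    """Algorithm A: credit-weighted mean of every contributing module."""
    credits = sum(c for c, _, _ in mods)
    mean = sum(c * m for c, m, _ in mods) / credits
    return round(mean, 1), classify(mean)

def exit_velocity(mods):
    """Algorithm B: final-level modules weighted three times as heavily."""
    weight = {5: 1, 6: 3}
    weighted_credits = sum(c * weight[lvl] for c, _, lvl in mods)
    mean = sum(c * m * weight[lvl] for c, m, lvl in mods) / weighted_credits
    return round(mean, 1), classify(mean)

print("Flat credit-weighted mean:", flat_mean(modules))      # (67.7, '2:1')
print("Final-level-weighted mean:", exit_velocity(modules))  # (70.5, 'First')
```

Same marks, two different classifications – and real algorithms layer borderline zones, discounting of weakest marks and rounding conventions on top of this, which is precisely the comparability problem that external examiners are being asked to get to grips with.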
The problems identified in that report still feel contemporary and have all been further exacerbated by expansion. And yet still, many of the recommendations remain undelivered.
Here I go again on my own
These days the Office for Students (OfS) has managed to run an entire review on quality and standards without so much as mentioning the external examining system – and doesn’t seem especially interested in the comparatively low score that its own NSS generates every year on “marking and assessment has been fair”.
Nevertheless, it is proposing that while academics might contribute to some of its thoughts, at the core some qualitative judgements on quality will be made by its staff, and some quantitative judgements will be made by a comparison of metrics to absolute baselines (and by comparison to benchmarked outcomes in the TEF).
In other words, when it comes to making judgements about quality, the English regulator has decided that the judgement of academics – expressed partly through the external examiner system – is no longer sacrosanct. So why do university complaints processes and the courts still assume that it is when students are trying to complain?
Do not pass go
Back in 2010, sector legal expert David Palfreyman described immunity from adjudicatory or judicial scrutiny on the basis of academic judgment as the UK higher education system’s “get out of jail free card”.
He noted that similar deference to expertise in other sectors had rightly disappeared – particularly where service users had been able to demonstrate that the supplier of a service had failed in their contractual duty to perform a promised service with reasonable skill and care. That’s an inconsistency that remains.
But if we’re going to avoid thousands of opportunistic appeals (as we’re about to see over Level 3 teacher-assessed grades in Examnishambles 2), it remains important that we try to make this work. Academics need the support, time and resource to assess with reasonable care and attention, internal moderation needs to be brought up to scratch, and external examining needs both to work and to be seen to be working, consistently.
But when many academics argue that the workload model they operate under prevents them from doing the initial marking with the requisite reasonable care and skill, you’d have to assume there’s a powerful cultural incentive for both internal and external moderators not to rock those boats and cause the initial markers to get it in the neck.
Maybe it will be different this time. Universities UK and GuildHE have announced that they are working (again) with the QAA (again) to review the external examining system, with a view to considering (again):
- Advising on activities that, at a minimum, external examiners should always expect to undertake and be consulted on;
- Establishing the content and format requirements for effective training and CPD of external examiners;
- More consistent use of and reference to sector recognised standards and national frameworks;
- Increasing use of longitudinal data and appreciation of local contexts (including degree outcomes statements in England and Wales), supported by training, to assess performance across and within cohorts;
- Reviewing eligibility criteria and qualifications required for appointment of external examiners, including incorporation of industry and PSRB expertise;
- Improving transparency and consistency in processes for responding to external examiner reports, advice, and concerns.
This really ought to be the last chance saloon for the sector to get this creaking system’s house in order. Dismissing complaints on the basis of higher education staff’s magical capacity for academic judgment maintains the sector’s outward stance of “we know best” – all while it repeatedly admits, in reports, initiatives and pilots, that actually, we still don’t know best at all.
That the regulator has now rejected the magical powers as a mere conjuring trick is worrying. That lawyers, surgeons, plumbers and social workers long ago lost this flimsy defence, but the academy retains it in plain sight of repeatedly unaddressed and systemic failure, is astonishing. That we’re fifty years on from the first time someone tried to fix this in higher education is unsurprising – but given how proud we all are of the rapidity and scale of change in HE just lately, it would be nice if we could deal with this one in the slipstream.
A very useful run through the history of reviews into (external) examining in the UK, but I am concerned that they and you focus so much on external examiners (declaration of interest: I am one) who, as you acknowledge, are only part of the wider system for grading the performance of students undertaking a programme of assignments. I wholeheartedly agree with you that the (professional) academic judgement line is untenable when justifying and/or defending individual and collective awarding of grades for student performance. But for me the question is one of how HEIs individually and collectively operate a system or systems for grading the performance of students that can be scrutinised or challenged, and who undertakes that scrutiny – including whether external examiners are part of that or not (they are not, apparently, in American HEIs – https://www.quora.com/Do-American-universities-have-external-examiners-for-undergraduate-courses-the-way-UK-universities-do-If-not-how-do-they-ensure-standards-are-consistent-or-adjudicate-borderline-cases). So that would mean deciding whether some quasi-judicial body or the courts are the final arbiters of whether an individual has been graded fairly.
I think one of the key lines in this is:
‘In other words, when it comes to making judgements about quality, the English regulator has decided that the judgement of academics – expressed partly through the external examiner system – is no longer sacrosanct.’
There’s been quite a lot of discussion and focus on the ways in which the OfS’s approach has signalled the end of co-regulation. What’s gone much less remarked is the way in which their approach has fundamentally undermined, for English HE, the peer review approaches that have previously been dominant. These approaches weren’t perfect and there were areas where development was needed (e.g. instead of throwing out peer review through QAA reviews in English HEPs, we should have refocused them so we were looking at outcomes data as well as qualitative evidence and processes), but it feels as though we’ve lost a huge amount of value without the sector fighting hard enough for the value of peer review.
Completely agree, Richard, with your points on peer review and refocussing QAA reviews to consider the quantitative. I would also note the associated lack of any publicly available detailed reporting – surely prospective students, students and other stakeholders are entitled to such?
This feels a bit like shutting the door after the horse has bolted, as OfS have said – publicly – that their intention is to remove any requirement for external examiners from the Regulatory Framework… which, perhaps oddly, does not form an element of the current consultation. But there’s a fair bit of history before the history – some of which I addressed in this https://wonkhe.com/blogs/analysis-external-examining-system-past-its-sell-by-date/ – and my conclusions were that, first, the size and diversity of the sector is such that we are not able even to consider assuring comparability of standards of student achievement. Second, were this even possible, examiners are not in a position to assure comparability of the academic regulations used to measure student achievement, such as how algorithms for degree classification might work. Third, examiners do not have guidance in the form of nationally agreed descriptors for a First, a 2:1 etc. in order to establish whether standards are in any way comparable across the sector. We’ve had a go at the third point here (and OfS intend to include this as a sector-recognised standard). We’ve tried to have a go at the second through the work of UUK, GuildHE, UKSCQA etc. on algorithms, and AdvanceHE are having a good go at the first through their work on calibration and their Professional Development Course for external examiners (funded by OfS and others) – but with the OfS declaration, the last chance saloon might be more flogging a dead horse?
Andy Lane asks what body would/could properly supervise and review academics in the professional application of their assessment practices. In the third (2021) edition of Farrington & Palfreyman on The Law of Higher Education (OUP, and a snip at £200 for 1,000+ fun-packed pages) we set out at para 12.44 (pp 470–471) a case where the Court did actually delve into the mysteries of marking – dredging up marks of 47 & 40 the first time around, then 71 & 65 arising from a remarking ordered by the Court; that was challenged by the university, and so another mark of 51 eventually emerged! The case helps answer the question posed by David Warren Piper way back in 1994: ‘Are Professors Professional? The Organisation of University Examinations’…
As for the broader question of quality in HE delivery, the options for setting the standards and sustaining them (even perhaps enhancing them) are: 1) reliance on the innate pedagogical professionalism of academics; 2) reliance on the internal management of quality controls; 3) reliance on the external policing of quality controls by some form of inspection agency; 4) reliance on the requirements of professional bodies where degree courses are linked to entry to certain careers; 5) reliance on the student-qua-consumer being able to enforce a fair, comprehensive and standardised U-S B2C contract to educate, using consumer protection law and having been honestly informed about what the £9,250 will be buying.
The OxCHEPS Occasional Paper No 60 (2016) sets out how the information asymmetry problem in HE recruitment might be addressed, while the 2021 edition of TLHE yet again (as with the 2012 second edition) sets out such a suggested standardised U-S contract. I fear the 2027 fourth edition will still be calling for the same – by then some 35 years after my co-author first raised the need for contractual clarity in 1992! The HE Industry, through its Trade Body the UUK, sadly and egregiously shows no political nous in ever engaging in timely anticipatory self-regulation – and time will tell whether the CMA and/or the OfS step in to force external regulation upon the industry by ensuring all prospective applicants are given appropriate material information that leads to the student being able to rely on enforceable terms under the contract to educate.
No surprise here. Ms Lapworth has a personal view that external examining isn’t worth the paper it’s written on; and there’s nobody within the regulator (or its Board) to offer an alternative view. For years there has been an acknowledgement that external examining is like democracy – “the worst form of government except for all others”. The professional development course is excellent, but of course that’s also resource-intensive; and calibration is excellent in theory, but really needs something closer to a national curriculum to work. Externality – rather than necessarily external examiners – has been the gold standard; and yes, we’ve let that slide. Peer review, rather like co-regulation, belongs in the ‘golden age’ which never seemed that golden at the time. Unaccountable appointees are the arbiters now. Sic transit gloria mundi.
Yes, the most striking and worrying (though not, in the current context Jon sets out, surprising) aspect of the current OfS consultation on quality and standards is the bit part that specialist academic expertise plays in the future that OfS is set on implementing. I would fully expect it to have little impact (having seen the way in which OfS responded to the consultation responses on Phase 1), but it’s down to the sector, in its responses to this round of consultation, to make the case that externality is essential to the credibility and effectiveness of the national framework/system for quality and standards.
Jon, indeed. I note Ms Lapworth also dislikes periodic review and programme approval – as if such things should clearly pop out of the ground, fully formed! I would honestly feel more comfortable with the criticism of these processes if I hadn’t read the messes that were the OfS’ attempts to crowbar numerical measurements into quality processes in their consultations parts I and II. At least with external examining there is an attempt at parity, rather than appearing to design systems to get rid of politically and financially inconvenient providers.