In the UK’s quality assurance system, there’s one feature – the external examining system – which is assumed to be immutable.
But does assuming that ‘well, we’ve always done it, so it must be right’ blind us to other options? Rather than tweak the system, as has regularly been proposed over the years, we ought to be more radical. Do we need an external examining system at all?
First, a little history…
The external examining system was set up in the 1830s to enable Durham to assure potential students (and others) that their degrees were comparable to those of Oxford or Cambridge. By the 1950s, the system was adopted by the National Council for Technological Awards (NCTA) and then from the 1960s onwards, by the Council for National Academic Awards (CNAA). Today, the Quality Assurance Agency still expects, through the Quality Code, that external examiners will provide informative comment on whether academic standards are comparable with those in other UK degree-awarding bodies of which the examiners have experience.
Twenty years ago, Ronald Barnett noted that “… we have to doubt that the external examining system ever fulfilled the responsibilities placed on it. It appears likely that the idea was always a fiction; we just did not recognise it as such.” Barnett’s perspective was based on a system inadequate for supporting the demands of a mass higher education system. He noted at the time that there were around 100 universities and around one million students. Two decades on, the sector has around 1.8 million undergraduate students studying at over 500 institutions.
Stop Me If You Think You’ve Heard This One Before
Barnett was not alone in doubting that the system was able to assure the comparability of standards of student achievement. The 1985 Lindop Report noted that there was confusion around the role of the external examiner. In 1989, the Council for National Academic Awards (CNAA) hoped that training would address such confusion. In 1994, the Higher Education Quality Council (HEQC) joined the debate, suggesting that comprehensive training was all that was required. A year later, HEQC commissioned a report called The External Examiner System: Possible Futures. Proposals included a national register and a national policy for training.
Clearly, the difficulties had not been resolved by 1997, when Dearing entered the fray with recommendations for a register of external examiners, managed by QAA. In 2009 the Commons Select Committee for Higher Education got involved, also recommending the creation of a register of external examiners along with new training for examiners.
Most recently, the Higher Education Academy (2015) also questioned whether the external examining system was effective in safeguarding standards. The report also noted that examiners may lack experience of different providers across the sector. With around 150 higher education institutions, 270 colleges of further education offering higher education, and an increasing number of ‘alternative’ providers, it’s easy to see that this is probably the case. The report concludes with recommendations designed to strengthen a system aimed at assuring comparability of standards at discipline level. It did not propose a more comprehensive review and a potential re-casting of purpose of the system.
The repetitive nature of these reports and reviews raises the question: why does the sector insist on tinkering with the system rather than returning to first principles?
What Difference Does It Make?
Might there be reasons for retaining the external examining system? If the system does not assure the maintenance and comparability of standards, then what does it offer? Its value lies in its ability to identify matters for improvement, and good practice that might enhance the quality of the student learning experience. However, given the system’s extraordinary cost to the sector, it is questionable whether this return justifies the time and effort that external examining demands.
There are three fundamental flaws in the current system. First, the size and diversity of the sector is such that we cannot even begin to assure comparability of standards of student achievement. Second, even were this possible, examiners are not in a position to assure comparability of the academic regulations used to measure student achievement, such as the algorithms used to determine degree classifications. Third, examiners have no guidance in the form of nationally agreed descriptors for a First, a 2:1 and so on, against which to establish whether standards are in any way comparable across the sector.
The UK is nearly unique in having a system of external examining at undergraduate level – only Malta, Denmark and Norway prevent it from being completely so. We could instead rely on second and/or double marking and internal moderation to assure standards within institutions. Universities and colleges already have systems for the management of quality and the assurance of standards. To claim that external examiners add materially to these processes is a fiction. Let’s use them for what they’re good for – enhancing the student experience – but let’s not kid ourselves, or our students, that they’re doing more than that.