Sue Rivers is an independent higher education consultant and a Principal Fellow of the Higher Education Academy.
Some years ago, I caused a stir when I first sought to play my part in the external examiner system. I advertised my availability, qualifications and motivation to take on the role on the Jisc external examiners’ email list.
According to ‘Disgusted of Poppleton’, who lost no time in selecting the ‘reply to the entire Universe’ option, I had besmirched the list by using it as if it were a type of Tinder for academics! However, as even one critic admits, there is some difference between the individual external examiners (EEs), who want to do their bit to enhance the student learning experience, and the system itself.
The external examiner system was once hailed as a “guardian of the reputation of UK higher education” by the DfES, but it was also criticised by a House of Commons Committee in 2009. There are two main areas of critique: professionalism and guarding standards. Given HEFCE’s recent proposals for reforming the EE system, I also want to give an update on the Higher Education Academy’s project on the EE system and its pilot training for EEs.
If the role of EE ought to be professionalised, then the starting point must be admission to the profession. The advent of the Jisc email list (as it operates today) is a positive step towards more open recruitment. Despite this, the selection of EEs has been criticised for lack of transparency, such as appointing ‘people you know’ (recently dubbed a ‘chumocracy’), and for a strong tendency for those in charge of appointing EEs to do so from institutions like their own, in similar parts of the sector.
An Indicator in the UK Quality Code creates a person specification for EEs. An induction for new EEs is now compulsory in many higher education institutions. However, some inductions concentrate on the institution’s quality procedures and exam board systems rather than on the substance of the role itself. In terms of professional standards, individual EEs have varying experience of comparator institutions, which may affect (read: probably does affect) the accuracy and consistency of their judgements. There is an expectation in some institutions that staff will take on EE roles, but there is a lack of appropriate professional recognition and reward.
The Quality Code indicates that EEs should normally hold no more than two EE appointments for taught programmes/modules. This may prevent EEs accessing a wider comparator group and inadvertently discriminates against highly experienced people willing and able to take on more than two EE roles. For the individual, the role provides exposure to diverse practices and is generally seen as positive for personal development and career prospects. In some institutions, being an EE is considered mandatory for performance review and promotion, yet it is not always easy to obtain an EE post and so there are some people disadvantaged as a result. A recent advertisement for an EE post on the Jisc list in which prior EE experience was mandatory led some to ask: how are those without EE experience ever to get it?
Each institution with degree-awarding powers is responsible for setting the standards for its awards, and for ensuring that its graduates achieve those standards. According to the Quality Code, external examining is one of the principal means for maintaining academic standards within autonomous higher education providers. When I eventually did become an EE (by replying to an advert on the Jisc list), I was presented with a heap of cardboard boxes containing exam scripts to look at. One of the ‘clear fail’ scripts caught my attention immediately. It consisted entirely of one short paragraph of supremely spidery writing accompanied by three very large stains. Each stain was scrupulously circled in a different colour. The circles were labelled, respectively: ‘blood’, ‘sweat’ and ‘tears’! Nevertheless, I was pleased to confirm that this had indeed failed to address the question at hand.
One of the advantages of the EE system is that, as an external quality process, it is uniquely concerned with academic standards (measured by the output of student achievement) as well as quality standards (measured by input and focussed on other aspects of the assessment cycle). This is important because it is possible to have high quality inputs without this necessarily leading to good academic standards. However, one of the questions about the EE system is whether it is fit to do the job of maintaining academic standards. Students may not understand the EE role and, outside the sector, there may be an assumption in some quarters that EEs are the sole guardians of higher education standards. Any perceived weaknesses in the EE system could, therefore, impact strongly on public confidence in universities and higher education more broadly.
The Quality Code refers to ‘threshold standards’ (the minimum acceptable level of achievement that a student has to demonstrate to be eligible for an academic award), but there is a public (and even a sector) expectation that EEs should judge the quality and standards of a programme at one institution compared with ‘national’ standards across the sector. This is despite the fact that institutions often expect EEs to make comparative judgements based on their own experience of similar programmes, rather than on a wider basis.
An EE’s capacity to act as the ‘guardian’ of standards, such as to prevent grade inflation, is affected by the limits of their remit; for example, they may have little power to safeguard programme-level award standards where institutional algorithms are applied to affect students’ class of degrees. EEs themselves have commented that their role appears to be changing from ‘additional marker’ to ‘moderator’: they now look at samples of student work rather than all of the student assessment, and generally do not change individual marks.
The EE system should, in theory, provide many rich examples of good practice which EEs can take back to their home institutions to enhance students’ learning. However, a lack of clear systems and institutional imperatives for reporting and implementing this probably acts as an impediment to bridging the gap between theory and practice.
A HEFCE Review in 2015 concluded that the EE system should be retained but strengthened, and the role professionalised so that EEs would be better able to provide reliable judgements about the standards set by institutions and measure student achievement against them. HEFCE favoured establishing a mechanism by which subject examiners could compare students’ work and judge student achievement against the standards set, to improve comparability and consistency. There should also be research into classification algorithms, to determine a sensible range of ‘approved’ algorithms according to the desired outcomes.
Consequently, the HEA was awarded a contract by HEFCE (on behalf of the devolved administrations) to lead a project on degree standards which has two linked aspects:
The pilot EE Training Programme aims to ensure that EEs understand their guardianship role in national standards and to increase their understanding of calibration. It aims to enhance their knowledge of assessment, using key reference points. Participants (aspiring, new and experienced EEs) assess samples of anonymised student work and share their experiences of this process, together with their views on other authentic issues and scenarios, in workshops, although there is no formal assessment. It will be interesting to see how consistent their marking is and what can be learned from the pilot about institutional approaches to marking, for example, differences in applying deductions for poor writing, spelling, and grammar.
If the EE role is to be professionalised, there must be a single, transparent recruitment process, so that entry to the profession is based on merit. There must be appropriate and consistent reward and recognition for the EE role, including published national fee scales which take into account the volume, level and complexity of the work undertaken.
The advent of an EE training programme is to be welcomed if it ensures that key principles of the role are understood by all EEs across the sector. If the HEA programme is to be adopted beyond the pilot phase it must be owned by the sector as a whole, whether or not the onus is on the home institution to train its staff to be EEs elsewhere.
A key output of the project should therefore be a sector-owned process of external examining staffed by professionals. The idea of training in subject-based calibration of standards is a critical factor and raises important questions about the reliability of calibration and the longevity of training in it, paving the way for EE professional development. At the moment we have institution-level regulations and it is possible that the result of subject calibration will reveal a need for subject-level regulations. On the other hand, this raises the risk that discipline-based calibration will emphasise disparities between disciplines.
If the external examiner system were to end, one possible replacement would be a Grade Point Average scheme in which individual Deans or Professors determine final marks. If so, this might result in grade inflation, as in the USA, which would surely threaten confidence in the credibility of UK higher education. Alternatively, if a national body were established to provide independent verification of standards, this would not square well with the notion of institutional autonomy.
So while the external examiner system may not be perfect, it may well be better to mend the peer show rather than end it.