It’s been good this past week to see issues around external examining debated on Wonkhe – but missing from the debate has been external examining’s role in protecting (or failing to protect) students’ interests.
One of my favourite ways to wind up HEFCE when it was dabbling in the “student interest” (around 2012-13) was to suggest that its ‘secret list of universities facing collapse’ would have to be made public if the body was truly operating in the student interest. Of course, they never released it because, as the great and the good know best, the only way to protect the student interest is if everything is kept secret.
The ‘we know best’ attitude is frequently invoked by elders trying to evade scrutiny in ancient sectors – Parliament, the BBC, the law – until the position looks so unjustifiable in the twenty-first century that a scandal can easily engulf them.
We operate in a mysterious world, where basic rights, redress and natural justice are still missing even though they have long been established elsewhere. This systematic letting down of students is justified by the need to protect sacred cows such as ‘academic freedom’ and ‘academic judgment’. Such concepts are woven into the mythical romance of higher education, yet they increasingly look like straw men. Or even like justifications for bullying the indebted and powerless out of getting what they’ve paid for.
It is bad enough that it has taken the Competition and Markets Authority’s intervention to expose just how weak sector-owned regulation was in ensuring students were not mis-sold on their open day. But sadly, in its search for a comparable product, the CMA’s guidance lazily picked the easy consumer products – buildings and course content – rather than the more subtle failures in service. The CMA missed a core aspect of that service: the development and maintenance of academic standards, against which student work is assessed, degree classifications issued, and life chances enhanced or restricted.
Given that the marking and grading process has such a huge impact on whether a student will ever pay back their fees, the passenger on the Clapham Omnibus might reasonably assume that if marking is not being done effectively or with due care and attention, a student might be able to mount a challenge. Yet we know they can’t. Because of ‘academic judgment’. Because ‘reasons’. Literally because ‘we know best’.
Back in 2010, David Palfreyman described immunity from judicial scrutiny on the basis of academic judgment as UK higher education’s “get out of jail free card”. Palfreyman argued that similar magical defences of expertise in other sectors have rightly disappeared, particularly where service users can demonstrate that the supplier of the service has failed in their contractual duty to perform a promised service with reasonable skill and care.
Given that educational outcomes are co-produced, it is, of course, reasonable for universities to argue that judgments have to be possible, lest every failing student vexatiously blame poor teaching or unfair marking. But surely the defence disappears if the assessment was carried out without reasonable skill and care, and the system designed to assure and maintain academic standards in that assessment was poor or failing? If that were the case, surely students would be able to sue?
There is sufficient evidence to be concerned that this is indeed the case. UCU has shown that teaching staff are having to do more marking and assessment outside of reasonable working hours. If assessment and feedback have not been palmed off onto postgraduate research students, then they are probably being done hurriedly against a deadline set by a pro vice chancellor who needs to eke out an extra point or two on the NSS.
And when it comes to external examining, HEFCE’s recent review found that “empirical research provides clear evidence of the inconsistency and unreliability of higher education assessors”, that academics “have little knowledge of the difficulties and complexities of reliable marking”, that they “lack experience of providers across the sector” and “receive limited support for the task from their own institutions”, and that 10% had experienced pressure ‘not to rock the boat’.
Dismissing complaints on the basis of higher education staff’s seemingly magical capacity for academic judgment maintains the sector’s outward stance of ‘we know best’, while simultaneously admitting in reports, initiatives, and training pilots that actually, we don’t know best at all. That lawyers, surgeons, plumbers and social workers long ago lost this flimsy defence, while academics retain it in plain sight of systemic failure, is quite astonishing.
Katie Akerman is correct when she points out that there are “three fundamental flaws” with the external examining system. What she misses is that it’s probably only this wispy justification that keeps the bulk of student complaints out of court altogether.
So that’s the real test for the new Office for Students. Of course, it matters whether its board has a student on it – I wouldn’t do the job I do if I didn’t fundamentally believe that students are capable of discharging governance duties in HE with more than reasonable skill and care. But regardless of its title and governance, when it concludes, like others before it, that the secret list must remain secret and that academic judgment is sacrosanct, it will confirm again that it is not an office for students, but for the interests of universities.
To be honest, I think you are a bit behind the times here, Jim. Long-established OIA guidance (see case study 55 at http://www.oiahe.org.uk/news-and-publications/Recent-Decisions-of-the-OIA/case-studies.aspx, for example) is that ‘academic judgment’ relates to the assessment tasks set and the marks given. If the setting or marking process is not followed correctly, then that failure is not a matter of academic judgment. The same applies if the university just delivers the course poorly (same link, case 53).
So whilst the Academic Judgment card does still exist, it only gets you out of a very small set of jails in rather narrowly defined circumstances these days.
It’s perfectly possible to set and mark correctly against the process whilst being overworked, underpaid, stressed and distracted on a short-term, zero-hours contract. See the Guardian exposé on the HE precariat. What would a student have to do – say they saw their lecturer yawning?
“The CMA missed a core aspect of that service: the development and maintenance of academic standards, against which student work is assessed, degree classifications issued, and life chances enhanced or restricted.”
This is perhaps where the QAA came undone; there’s no section of the Quality Code on academic regulations, so – in a sense – examiners are being set up to fail, because of sector-wide inconsistency in academic regulations. For example, different institutions treat mitigating/extenuating circumstances differently (some only allow claims in the case of failure, some allow claims whatever the outcome, some operate ‘fit to sit’ policies); different institutions treat failure differently (some allow an automatic right to re-sit, some don’t; some allow compensation within or between modules, some don’t); and different institutions treat academic malpractice/misconduct differently (in the penalties applied and so on).

Curran and Volpe (Degrees of freedom: An analysis of degree classification regulations, 2003) put together a fascinating analysis of the differences such variation can make to outcomes – weighting alone can move a student’s final classification. Take a student with a second-year average of 50 and a third-year average of 70: on a 20/80 weighting (20% allocated to the second year, 80% to the third) the weighted mean is 66, while on a 30/70 weighting the exact same marks produce 64 – and near a classification boundary, a two-mark swing like that is the difference between a First and a 2:1 (see the sketch below). As Curran and Volpe note, “The most important implication of the analysis is that students with similar marks profile can be awarded different degree classifications depending on the institution that they attend.” The problem is, therefore, perhaps more nuanced than HEFCE have considered in suggesting the development of algorithms for classifying degrees, because academic regulations are far more complex than simply giving information on how an award will be calculated.
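A minimal sketch of that arithmetic, in Python. The classification boundaries (70/60/50/40) are assumed conventional values, and the second marks profile is hypothetical; real institutions layer rounding and borderline rules on top, which is precisely the variation at issue:

```python
# Sketch: how year weightings alone can move a final weighted mean across
# a classification boundary. Boundaries (70/60/50/40) are assumed here;
# real institutions add rounding and borderline rules, which vary further.

def weighted_mean(year2_avg, year3_avg, w2, w3):
    """Final mark as a weighted mean of second- and third-year averages."""
    assert abs(w2 + w3 - 1.0) < 1e-9, "weights must sum to 1"
    return w2 * year2_avg + w3 * year3_avg

def classify(mark):
    """Assumed conventional boundaries: First >= 70, 2:1 >= 60, and so on."""
    for boundary, label in [(70, "First"), (60, "2:1"), (50, "2:2"), (40, "Third")]:
        if mark >= boundary:
            return label
    return "Fail"

profiles = [(50, 70),   # the marks profile discussed above
            (50, 75)]   # a hypothetical profile that straddles the boundary
for y2, y3 in profiles:
    for w2, w3 in [(0.2, 0.8), (0.3, 0.7)]:
        mark = weighted_mean(y2, y3, w2, w3)
        print(f"averages {y2}/{y3}, weighting {int(w2*100)}/{int(w3*100)}"
              f" -> {mark:.1f} ({classify(mark)})")
```

With the second profile, identical marks yield a First (70.0) under one institution’s weighting and a 2:1 (67.5) under another’s – exactly the institutional lottery Curran and Volpe describe.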
Your passenger (and prospective student and employer?) on the Clapham Omnibus might reasonably expect regulations at Oxford to be more or less similar to those at Oxford Brookes (now where have I heard that one before?).
I’m not (yet?) convinced that HEFCE’s proposals for calibration activities will resolve the matter, either. HEFCE notes “…the progress made by the Australian higher education sector in providing opportunities for markers and examiners to share and develop their views about academic output standards through calibration activities. Australia’s Group of Eight (roughly equivalent to the UK’s Russell Group) has developed a ‘Quality Verification System’, to calibrate standards.” As ‘Group of Eight’ rather suggests, it’s a group of eight institutions. The Competition and Markets Authority reckons the UK has around 900 providers of higher education. So it is not at all clear how this will work (or even whether it could work).
A possible solution might be to develop sector-agreed descriptors for a First, a 2:1 and so on – which would make the proposed calibration activity better informed, and might genuinely strengthen the external examining system for students (and examiners, institutions and so on)?
(And thank you for picking up on this!).
Two matters:
In my view the Quality Code, Expectation A2.1, does cover academic regulations: ‘In order to secure their academic standards, degree-awarding bodies establish transparent and comprehensive academic frameworks and regulations to govern how they award academic credit and qualifications’.
The HEA did some valuable work on external examining some years ago (c. 2007–2009, I think), which found some legitimate issues with calibration. Separately, one of the real challenges is whether external examiners adequately fulfil each role expected of them: examining and moderating; commenting on whether the institution has correctly applied its assessment processes, taking account of the FHEQ and subject benchmark statements as necessary; commenting on the comparability of standards with other institutions; and offering advice and guidance to the institution on its programmes. How many examiners truly engage deeply with each role? Is there conflict between the accountability part of the job and the advice-and-guidance part? Do we expect too much of them? And so on…
I am not at all sure that the Quality Code gives any insight into what academic regulations might look like, though – deliberately so, given the sector’s attachment to institutional autonomy – but this means there is significant variation across the sector, as outlined in my comments above… The 2015 HEA/HEFCE report on external examining is fascinating – although elements of it have been taken and mis-used, for example: “However, 40% of quality officers report that their institutions have changed their award algorithm(s) in the last five years to ensure that it does not disadvantage students in comparison with those in similar institutions…” – which sounds shocking, until placed in the context that “Responses were received from quality officers employed at 98 of the 159 institutions with degree-awarding powers (62%).” (See the back-of-envelope sketch below.)
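To make that denominator point concrete, a back-of-envelope sketch (assuming, as the report’s wording suggests, that the 40% applies to the 98 respondents rather than to the whole sector):

```python
# Back-of-envelope: re-basing the "40% changed their algorithm" headline
# against all institutions with degree-awarding powers, not just respondents.
respondents = 98               # quality officers who answered the survey
population = 159               # institutions with degree-awarding powers
changed = 0.40 * respondents   # respondents reporting a changed algorithm

print(f"known changers: {changed:.0f} of {population} "
      f"({changed / population:.0%} of the sector)")
# -> roughly 39 of 159 (about 25%); the 61 non-respondents are simply unknown.
```

So the defensible sector-wide claim is nearer ‘at least a quarter’ than 40% – the headline needs its denominator.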
It’s certainly always struck me as bizarre that universities can get away with basically implying their procedures are guaranteed to be free of certain types of error.
I wouldn’t want university assessments to go the same way as GCSE and A-level, where pretty well everyone who doesn’t like their grade and isn’t too far off the next one will go for a re-mark – it would be an unworkable system, for a start. But surely there needs to be a halfway house that acknowledges errors can be made and provides some means of correcting them.