Scotland’s national assessment review gets up and running
Jim is an Associate Editor (SUs) at Wonkhe
Last month QAA published the first Targeted Peer Review report under Scotland’s Quality Concerns Scheme – a review of the University of Glasgow prompted by the case of Ethan Brown, a 23-year-old geography student who died by suicide on what should have been his graduation day, three months after being wrongly told he didn’t have the credits to graduate.
The review didn’t cover the individual circumstances of Ethan’s death, but it did examine whether the errors that led to a student being given the wrong outcome were isolated or systemic.
The answer was unambiguous – Glasgow’s assessment framework posed “systemic risks” to both academic standards and the quality of the student experience.
I noted at the time that the SFC’s decision to commission a national review of assessment policies across all Scottish institutions suggested the regulator shared the concern that this may not be a Glasgow-specific problem.
SFC and QAA have now published the scope for that review – and while the broad strokes were expected, the detail of how it’ll work explains the scale of concern.
The review will cover all credit-bearing provision within SFC’s funding arrangements that leads to a recognised award, wherever it’s delivered. That’s pretty broad, with the scope explicitly noting that the awarding status of the provider will be taken into account. If you’re a college delivering an HND validated by a university, and that university’s awarding arrangements are shaky, the provision appears to be in the frame.
Four phases
The review runs across four phases. Phase one is a desk-based exercise – QAA will go through its existing evidence base to identify where matters similar to those found at Glasgow have already been considered or observed at other Scottish HEIs. That covers all HEIs in Scotland and wraps up by the end of April 2026.
In other words, QAA is going back through its own files – ELIR reports, annual discussions, previous enhancement themes, any concerns flagged through existing channels – to build a picture of where the warning signs already existed.
If phase one turns up evidence that similar issues were visible in existing material, the follow-up question is obvious – why wasn’t anything done about them at the time? Scotland’s enhancement-led model is built on the premise that collaborative, developmental review processes lead to improvement without the need for heavy-handed intervention. If the existing evidence already contained red flags about assessment integrity and awarding arrangements, the model has some explaining to do.
Phase two is methodology development – designing the approach for deep-dive reviews into a sample of institutions, with lines of enquiry adapted from the Glasgow TPR. QAA will develop a sampling method for the shortlist, but interestingly institutions will also be given the opportunity to put themselves forward to be included in the sample.
Institutions that volunteer are signalling confidence in their own arrangements, and if the review finds problems anyway, the reputational hit is softened by the fact they invited scrutiny. Institutions that don’t volunteer but get selected face a different set of optics.
Phase three involves on-campus deep-dive reviews of sampled institutions, running from May to November 2026. These will include evidence submissions, desk-based analysis by a peer review team (including a student reviewer, as with the Glasgow TPR), campus visits, and published reports. If significant concerns are found at any individual institution during this phase, they get referred straight to the Scottish Quality Concerns Scheme – the same mechanism that led to the Glasgow review in the first place.
And SFC has left itself room to go further. The scope notes that based on the evidence gathered through phases one to three, SFC “will also consider whether further work is needed more widely across the tertiary sector.”
Then phase four is the enhancement bit – a summary report of good practice and areas for improvement, plus a programme of sector-wide enhancement activity on assessment regulations and awarding arrangements, open to all colleges and universities. That’s the concession to Scotland’s enhancement-led culture – even a review born out of serious regulatory concern ends with a collaborative improvement programme.
What it doesn’t say
Absent is any mention of checking whether past awards were correctly conferred. The Glasgow TPR noted that the university had checked more than 700 student records in the School of Geographical and Earth Sciences and found at least two students with mistaken outcomes – but hadn’t extended that analysis to its other 23 schools. The national review’s scope says nothing about whether sampled institutions will be expected to conduct similar retrospective checks.
There’s also no mention of how students or students’ associations will be involved beyond the inclusion of a student reviewer on peer review teams. The Glasgow TPR benefited from student testimony that revealed how opaque grade calculations were from the student end, and from SRC Student Advice Centre staff identifying grade calculation as the most common area of student confusion. If the national review is going to work, it needs structured engagement with student representatives at every sampled institution – not just a student reviewer parachuted in for the visit.
The timeline, too, is worth noting. Phase one concludes by April 2026. Phase three runs May to November 2026. Phase four is the lessons-learned and enhancement work. For students currently enrolled at institutions where similar problems may exist, the full cycle of this review won’t complete until well into the 2026-27 academic year at the earliest.
This is tricky stuff for QAA – the whole enhancement-led model is supposed to be light on finger wagging and condemnation, so as to encourage improvement and honest reflection. That approach is harder to justify in the context of a student tragedy – but everyone doubling down to learn honest lessons, rather than building box files full of defensiveness, is probably the best way to retain that model.
This was also one of the long-term flaws in the old QAA methodologies – there was a push, back in the earlyish 2000s, to get HEIs to standardise classification algorithms and processes across a provider via QAA reviews, but it was inconsistently applied and (seemingly) driven by the views of individual assessors. Reports from that era sometimes include very clear recommendations for change, and sometimes make no mention of the issue at all.
Not that I don’t miss them, sitting here in England (*glances sadly at the much-delayed TEF dashboard, and at emails about repeated errors in the underlying spreadsheets*) – and it will be interesting to see whether there is any follow-up on this side of the border.