You can’t go on social media right now without seeing graduates celebrating, or lamenting, their degree results.
The fateful envelope may have given way to a page on the university intranet, but the nerves and confusion remain the same. And it is the latter that the UK Standing Committee for Quality Assessment (UKSCQA) is seeking to address with the publication of a new set of guidelines and principles for degree algorithm design. The aim is that the way a degree grade is calculated from modular or yearly assessment results should be transparent, fair, and consistent.
You may recall from February of this year David Allen’s detailed analysis of the problem of the degree algorithm for Wonkhe – and his proposal of a national algorithm for degree marks and classifications to sit alongside provider systems. A glance at the comment section suggests that Allen’s ideas sparked a great deal of debate – and it is heartening to see his work (including the article itself) cited in the research underlying the report. But in some ways the report is eclipsed by this earlier analysis – it is good to see a survey describing current practice but in all honesty we need more than 69 responses in such a diverse, four-nation sector.
Give me six
As you may well expect, UKSCQA doesn’t go quite as far as a national algorithm. Instead we get six sector-wide principles, which are worth quoting in full:
“To be effective, an algorithm must:
- provide an appropriate and reliable summary of a student’s performance against the learning outcomes, reflecting the design, delivery and structure of a degree programme
- fairly reflect a student’s performance without unduly over-emphasising particular aspects, with consideration being taken at the design stage of how each element within a method of classification interacts with other elements
- protect academic standards by adhering to the current conventions and national reference points used to define classification bands and boundaries
- normally be reviewed at least every five years – or alongside national cyclical review timetables – to ensure algorithms remain relevant and appropriate, with input from across the provider, including students, academic and non-academic staff, and accrediting bodies
- be designed and reviewed in a way that is mindful of the impact of different calculation approaches to classification for different groups of students
- be communicated and explained clearly to students, both in how it works and why”
Like most principles drafted in workshops, these offer little on their face for the majority of people to disagree with. It is the second section, on “implementing the principles”, that may cause arguments.
Weight a minute
UKSCQA identifies four common weighting schemes used in degree algorithms. “Exit velocity” covers courses with a single, final point of assessment in year three of a standard three-year degree; “emphasis on exit velocity” looks at the final two years of marks with greater weight placed on the latter; “equal weighting” weights the final two years equally; and “level 4/8 inclusion” adds aspects of a student’s performance in year one but weights towards the final year. This presumption towards weighting away from year one might reflect current practice, but it is fair to argue that it could also put students off applying themselves in their first year of study.
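The four schemes are all, in effect, weighted means over yearly average marks. The sketch below illustrates the idea; the specific weights are hypothetical examples of my own, not figures from the report, which does not prescribe exact percentages.

```python
# Illustrative sketch of the four weighting schemes described above.
# The weight vectors are invented for illustration only.

def classify_mean(year_marks, weights):
    """Weighted mean of yearly average marks (year one first)."""
    assert len(year_marks) == len(weights)
    return sum(m * w for m, w in zip(year_marks, weights)) / sum(weights)

marks = [58.0, 62.0, 68.0]  # mean mark per year of a three-year degree

exit_velocity      = classify_mean(marks, [0, 0, 1])  # final year only
emphasis_on_exit   = classify_mean(marks, [0, 1, 2])  # final two years, weighted to the last
equal_weighting    = classify_mean(marks, [0, 1, 1])  # final two years, equal weight
year_one_inclusion = classify_mean(marks, [1, 2, 3])  # all years, weighted to the final year
```

For the sample marks above, the same student lands anywhere from roughly 64 to 68 depending on the scheme chosen, which is exactly why variation between providers matters.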
The sector is urged that, “where possible”, variation within these four schemes should be kept to a minimum. There’s a similar limitation expressed across the grand theories of “discounting” – the practice of not counting modules with lower marks (maybe excepting the final year, or core modules). This is apparently fine now and then, but “it is important that any form of discounting is minimised to reduce its inflationary potential and ensure the title of the degree awarded is not misleading”.
There’s also clarity that discounting should not be used as a proxy for mitigating circumstances. In a way this puts the onus on students to seek and confirm mitigation at the time the problem affecting their studies occurred, rather than just counting on the bottom two (say…) modules not being used to calculate the final degree score. We get a suggestion that modules should be designed away from reliance on a single point of assessment so mitigation at this late stage is needed less, but we don’t get any real examination of what kind of modules are most often discounted and why, or indeed whether there is a pedagogic rationale for discounting in the first place.
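To make the “inflationary potential” concrete, here is a hypothetical sketch of discounting: excluding the lowest-scoring modules from the final average. The specific rule shown (drop the two lowest marks that are neither final-year nor core) is an invented example of the kind of rule a provider might specify, not a rule from the report.

```python
# Hypothetical discounting rule: drop the lowest-scoring modules,
# excepting final-year and core modules (a common-sounding but
# invented example, not taken from the report).

def discounted_mean(modules, drop=2):
    """modules: list of dicts with keys 'mark', 'final_year', 'core'."""
    droppable = sorted(
        (i for i, m in enumerate(modules) if not m["final_year"] and not m["core"]),
        key=lambda i: modules[i]["mark"],
    )[:drop]
    kept = [m["mark"] for i, m in enumerate(modules) if i not in droppable]
    return sum(kept) / len(kept)

modules = [
    {"mark": 50, "final_year": False, "core": False},
    {"mark": 40, "final_year": False, "core": False},
    {"mark": 70, "final_year": False, "core": False},
    {"mark": 65, "final_year": True,  "core": False},
    {"mark": 55, "final_year": False, "core": True},
]

plain = sum(m["mark"] for m in modules) / len(modules)  # 56.0
discounted = discounted_mean(modules)                   # about 63.3
```

Even this modest rule lifts the example average by around seven points, which is the inflation the report is worried about.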
Over the borderline
It is, after all, exam board season, so let us discuss borderline marks and rounding up. This form of moderation acts directly upon, and via, academic judgement – though the report notes that rule-based approaches should be “encouraged to avoid the potential for a discretionary approach”, this does run the risk of complicating already fiendishly nuanced algorithms. UKSCQA recommends that any adjustment of a classification should be rule-based and anonymous – and that we should consider a maximum zone of consideration of two percentage points from the grade boundary, with no additional rounding.
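A rule-based borderline check might look something like the sketch below. It uses the report's suggested maximum zone of consideration of two percentage points, with no rounding; the secondary rule (a majority of final-year marks in the higher band) is a hypothetical example of the kind of anonymous, discretion-free rule a provider might adopt.

```python
# Sketch of a rule-based borderline uplift. The two-point zone follows
# the report's suggestion; the "majority of final-year marks in the
# higher band" rule is an invented illustration.

BOUNDARIES = {70.0: "First", 60.0: "2:1", 50.0: "2:2", 40.0: "Third"}
ZONE = 2.0  # maximum zone of consideration, in percentage points

def classify(overall_mark, final_year_marks):
    for boundary, label in sorted(BOUNDARIES.items(), reverse=True):
        if overall_mark >= boundary:
            return label
        if boundary - ZONE <= overall_mark < boundary:
            # Anonymous, rule-based uplift: no rounding, no discretion.
            in_band = sum(1 for m in final_year_marks if m >= boundary)
            if in_band * 2 > len(final_year_marks):
                return label
    return "Fail"
```

So a student on 68.5 with final-year marks of 72, 71, 65, and 74 would be uplifted to a First, while the same overall mark with final-year marks of 65, 66, 71, and 62 stays a 2:1 – a transparent outcome, but one an exam board can no longer argue about.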
Rounding and borderlines are two of the most complex and contentious parts of degree classifications. Students attempting to calculate their degree based on rounded module marks and a partial understanding of borderline regulations can often be disappointed, and academics can feel constrained where a rule-based approach does not allow for specific circumstances.
Throughout all of this, there is a commendable focus on making the process as clear as possible for the student. There’s a presumption, for example, that each student’s grade should be calculated using only one algorithm – bad news for courses and providers that have a “best of the two” approach. Changing an algorithm mid-course could have consumer protection legislation implications. It’s not clear how an algorithm or wider approach that deviates sharply from the approaches set out in the report would be seen by regulators or in complaints.
Clarity and simplicity – laudable goals as both are – don’t necessarily do anything to address grade inflation. And it is the latter that leads the publicity released around the report. It is a shame that we’re not in a position to be arguing for clearer and more reliable algorithms on their own merits, but complying with these recommendations should at least allow us to be clear that the growth in “unexplained” first-class degrees is not explained by academic mendacity.
I’m grateful to David Allen for his support in analysing this report, though all errors are my responsibility.