You can’t go on social media right now without seeing graduates celebrating, or lamenting, their degree results.
The fateful envelope may have given way to a page on the university intranet, but the nerves and confusion remain the same. And it’s the latter that the UK Standing Committee for Quality Assessment (UKSCQA) is seeking to address with the publication of a new set of guidelines and principles for degree algorithm design. The message is that the way a degree grade is calculated from modular or yearly assessment results should be transparent, fair, and consistent.
You may recall from February of this year David Allen’s detailed analysis of the problem of the degree algorithm for Wonkhe – and his proposal of a national algorithm for degree marks and classifications to sit alongside provider systems. A glance at the comment section suggests that Allen’s ideas sparked a great deal of debate – and it is heartening to see his work (including the article itself) cited in the research underlying the report. But in some ways the report is eclipsed by this earlier analysis – it is good to see a survey describing current practice but in all honesty we need more than 69 responses in such a diverse, four-nation sector.
Give me six
As you may well expect, UKSCQA doesn’t go quite as far as a national algorithm. Instead we get six sector-wide principles, which are worth quoting in full:
“To be effective, an algorithm must:
- provide an appropriate and reliable summary of a student’s performance against the learning outcomes, reflecting the design, delivery and structure of a degree programme
- fairly reflect a student’s performance without unduly over-emphasising particular aspects, with consideration being taken at the design stage of how each element within a method of classification interacts with other elements
- protect academic standards by adhering to the current conventions and national reference points used to define classification bands and boundaries
- normally be reviewed at least every five years – or alongside national cyclical review timetables – to ensure algorithms remain relevant and appropriate, with input from across the provider, including students, academic and non-academic staff, and accrediting bodies
- be designed and reviewed in a way that is mindful of the impact of different calculation approaches to classification for different groups of students
- be communicated and explained clearly to students, both in how it works and why”
Like most principles drafted in workshops, there is little on the face of these for the majority of people to disagree with. It is the second section, on “implementing the principles”, that may cause arguments.
Weight a minute
UKSCQA identifies four common weighting schemes used in degree algorithms. We have “exit velocity”, which covers courses with a single, final point of assessment in year three of a standard three-year degree; “emphasis on exit velocity”, which looks at the marks from the final two years with a greater weight placed on the latter; “equal weighting”, which weights the final two years equally; and “level 4/8 inclusion”, which adds aspects of a student’s performance in year one but weights towards the final year. This presumption towards weighting away from year one might reflect practice, but it is fair to argue that it could also put students off applying themselves in their first year of study.
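To make the differences between the four schemes concrete, here is a minimal sketch of each as a function of three yearly average marks. The specific weights used for the “emphasis on exit velocity” and “level 4/8 inclusion” schemes are illustrative assumptions, not figures from the report – providers vary, which is rather the point.

```python
# Sketches of the four weighting schemes described above, assuming a standard
# three-year degree summarised as one average mark per year. The weights in
# the last two functions are illustrative assumptions, not report figures.

def exit_velocity(y1, y2, y3):
    # Classification rests entirely on the final year's marks.
    return y3

def emphasis_on_exit_velocity(y1, y2, y3, w3=0.7):
    # Final two years count, with greater weight on the final year.
    return (1 - w3) * y2 + w3 * y3

def equal_weighting(y1, y2, y3):
    # Final two years weighted equally; year one is ignored.
    return (y2 + y3) / 2

def level_4_inclusion(y1, y2, y3, weights=(0.1, 0.3, 0.6)):
    # Year one contributes, but the weighting tilts towards the final year.
    w1, w2, w3 = weights
    return w1 * y1 + w2 * y2 + w3 * y3
```

Note how a student with marks of 50, 60 and 70 across the three years would receive a different final mark under every scheme – exactly the kind of cross-provider variation the principles are trying to rein in.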
The sector is urged that, “where possible”, variation within these four schemes should be kept to a minimum. There’s a similar limitation expressed across the grand theories of “discounting” – the practice of not counting modules with lower marks (perhaps excepting the final year, or core modules). This is apparently fine now and then, but “it is important that any form of discounting is minimised to reduce its inflationary potential and ensure the title of the degree awarded is not misleading”.
There’s also clarity that discounting should not be used as a proxy for mitigating circumstances. In a way this puts the onus on students to seek and confirm mitigation at the time the problem that affected their studies happened, rather than just counting on the bottom two (say…) modules not being used to calculate the final degree score. We get a suggestion that modules should be designed away from reliance on a single point of assessment so that mitigation at this late stage is needed less, but we don’t get any real examination of what kind of modules are most often discounted and why, or indeed whether there is a pedagogic rationale for discounting in the first place.
Over the borderline
It is, after all, exam board season, so let us discuss borderline marks and rounding up. This form of moderation acts directly upon, and via, academic judgement – and though the report notes that rule-based approaches should be “encouraged to avoid the potential for a discretionary approach”, this does run the risk of complicating already fiendishly nuanced algorithms. UKSCQA recommends that any adjustment of a classification should be rule-based and anonymous – and that we should consider a maximum zone of consideration of two percentage points from the grade boundary, with no additional rounding.
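A rule-based, anonymous borderline check of this kind might look something like the sketch below. The two-point zone of consideration follows the report’s suggestion; the boundary values are the conventional UK classification bands, but the function itself is an illustration of the idea, not the report’s specification.

```python
# Illustrative rule-based borderline flagging, assuming the suggested maximum
# "zone of consideration" of two percentage points below a grade boundary.
# Boundaries follow conventional UK bands; the rule itself is a sketch.

CLASS_BOUNDARIES = {70: "First", 60: "2:1", 50: "2:2", 40: "Third"}

def classify(mark, zone=2.0):
    """Return (classification, borderline) for a final weighted mark.

    `borderline` is True when the mark sits within `zone` points below a
    higher boundary - i.e. it is eligible for a rule-based, anonymous
    review by the exam board, rather than any automatic rounding up.
    """
    awarded = "Fail"
    for boundary in sorted(CLASS_BOUNDARIES, reverse=True):
        if mark >= boundary:
            awarded = CLASS_BOUNDARIES[boundary]
            break
    borderline = any(0 < b - mark <= zone for b in CLASS_BOUNDARIES)
    return awarded, borderline
```

Under this sketch a final mark of 68.5 is awarded a 2:1 but flagged for the board’s rule-based consideration, while 65.0 is a 2:1 with no flag – the discretion lives in the published rule, not in the room.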
Rounding and borderlines are two of the most complex and contentious parts of degree classifications. Students attempting to calculate their degree based on rounded module marks and a partial understanding of borderline regulations can often be disappointed, and academics can feel constrained where a rule-based approach does not allow for specific circumstances.
Throughout all of this, there is a commendable focus on making the process as clear as possible for the student. There’s a presumption, for example, that each student’s grade should be calculated using only one algorithm – bad news for courses and providers that have a “best of the two” approach. Changing an algorithm mid-course could have consumer protection legislation implications. It’s not clear how an algorithm or wider approach that deviates sharply from the provided approaches would be seen by regulators or in complaints.
Clarity and simplicity – laudable goals as both are – don’t necessarily do anything to address grade inflation. And it is the latter that leads the publicity released around the report. It is a shame that we’re not in a position to be arguing for clearer and more reliable algorithms on their own merits, but complying with these recommendations should at least allow us to be clear that the growth in “unexplained” first-class degrees is not explained by academic mendacity.
I’m grateful to David Allen for his support in analysing this report, though all errors are my responsibility.
This doesn’t seem to address the key problem with our classification system, in which 0-40 is a fail, then three narrow bands (40-49, 50-59, 60-69) of which only the last really counts, and then a wide ‘excellent’ band from 70 to 100. The Burgess Report thought it was daft and recommended the GPA system. Why are we so obsessed with keeping this ridiculous, antiquated, no longer ‘fit for purpose’ system?
Yes, I’ve long thought this. Though my understanding is that in the humanities at least the full range was never meant to be used. Traditionally you marked to the different degree levels, assigning things like A--, AB, BA etc – which collectively gave you a useful ten-point scale for each class (more than enough to discriminate between stronger and weaker 2:1s, etc). Obviously you need at some point to assign a numeric value to these grades, to calculate an overall average, and my understanding was that the 50-point scale was then mapped across 30-80 in an apparent percentage, so that degree results in the humanities could be loosely compared to those in science degrees, where they use a more straightforward percentage system. (It could be I’ve got this provenance wrong, in which case I am happy to be corrected!) But if you’re going to shift to using the ‘full range’ of what is assumed to be a standard percentage system, surely you need 20-point degree classes? I.e. 0-19 = fail, 20-39 = 3rd, 40-59 = 2:2, etc.
Paul – naturally, I would agree with you. Indeed, there is an irony that the contentious issues of borderline decisions and potential uplifts and rounding are simply an artefact of the current classification system. The use of a GPA-style classification [over a wider range of marks] negates the need to look at borderline marks in the first place. Otherwise, my main concern rests with this continued emphasis on ‘exit velocity’, where the final year marks attract a higher weighting. This faith in exit velocity is not founded on any robust evidence. Who’s to say that the tendency for final year marks to be higher than those of the previous year isn’t simply because year two marks dipped below those achieved in year one? If this is the tendency then universities need to look at what is happening in year two – and not solve the issue with a tweak to the weightings. As for discounting … that’s another rant
The first principle, “against LOs”, is problematic. Which LOs: module or course? Then there is the issue of conflation ….
It may be that there are disciplines where Level 5 outcomes are equally as important as Level 6, because content is covered – and assessed – which is crucial, but is never covered again. I can imagine this being the case in law, and possibly areas of medicine and health. In disciplines where the complexity builds throughout the course, there’s a pretty strong argument for measuring where someone is at the end of the course – what they can do now – rather than where they were a year ago when dealing with less complex problems. Indeed some students achieve far better at Level 5 but struggle with the more advanced learning. It would be strange indeed to see them rewarded equally, if we want the outcomes to be comparable
My husband gained 72% overall for his degree and left with a 2:1 thanks to the norm referencing of the early 90s – if he did it all again today and got 68% he could have his First?! Personal anecdotes aside, this seems to be a complete about-turn on the intentions of the UKSCQA paper of October 2017 on Understanding Degree Algorithms:
… there would be a risk to the confidence of sector stakeholders if an institution were simply to upgrade all students who fall into a borderline or classification boundary. In effect, this practice would introduce a different set of final degree classification boundaries, and undermine both conventional practice and confidence in sector standards. Such practice, if it exists, is not acceptable.
So, in response to “protect academic standards by adhering to the current conventions” the current convention would be to not allow for a borderline as we were told not to?!
Sounds like we need some research into exit velocity, though…
And last but not least – including students, academic and non-academic staff, and accrediting bodies – what is this ‘non-academic’?!
(I think the 6th principle got collapsed into the 5th?)