The grade inflation agenda puts regulators in a terrible bind. And it’s all down to the idea of institutional autonomy. Two new initiatives attempt to square this circle with more information.
On the one hand it is clearly in the interests, and some would argue the remit, of the OfS to be sure that what higher education offers is – in the unfortunate language of the marketplace – a quality product, and that outcomes are clear and dependable. On the other hand, it can’t intervene directly in the development and delivery of said product, or even place requirements on the way it is delivered.
It’s a familiar problem with a familiar solution: the producers in the market come together to agree a common set of standards and expectations, and then voluntarily abide by them. This approach has another advantage – it unites the UK HE system in action, despite the growing divergence in quality assurance processes and regulatory practice between England and the other three home nations.
So the UK Standing Committee on Quality Assurance (UKSCQA), on behalf of the sector, has agreed two new initiatives to put information about degree standards into the public domain during the 2019-20 academic year. This applies in England and Wales, but providers in Scotland and Northern Ireland will already be aware that the proposal in the UKSCQA statement of intent has found its way into the quality enhancement framework and the annual provider review process respectively.
Inside the sausage machine?
Both HESA and the OfS publish data on the way degree classification profiles (the percentage of graduates who get a first, and so on) have changed over time in your institution, but you’d have to dig around in some spreadsheets to find it. You’d then have to trawl through a backlog of institutional press releases to find out what – if anything – justifies any profile changes, and what your provider proposes to do about it.
So, from 2019-20, providers will be expected to publish this information in a publicly accessible Degree Outcomes Statement on their website – and UKSCQA has published guidance, and a handy checklist, to help providers understand what needs to feature. This will be an evaluative document – the guidance breaks the expected content down as:
- “What has happened” – the institutional degree classification profile data
- “What has changed” – analysis of how this classification profile has changed over time
- “Why it has changed” – commentary on why more firsts are suddenly being awarded, and details of any plans/internal reviews that have been put in place.
What this isn’t is a description of how degree classifications are arrived at or how your regulations work. This new document needs to be short – between two and three pages – and strategically focused. Not least because it will be signed off by the governing body.
The guidance sets out a proposed structure for these statements – and as usual there would need to be a fairly good reason to do things another way. But remember throughout that this is a voluntary, sector-led code – there are no teeth.
Why Johnny didn’t get a 2(i)
We’ve covered degree algorithms on the site before, but suffice it to say the process that determines whether you get a first class degree or an upper second is a complex one. Though it is generally set out somewhere in an institution’s policy documentation, it is not exactly accessible or comprehensible.
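To make that complexity concrete, here’s a minimal sketch of the kind of rules a typical algorithm combines. Every number in it – the thresholds, the year weightings, the borderline zone, the lifting rule – is an illustrative assumption, not any real provider’s regulations.

```python
# A minimal, hypothetical degree algorithm. Every number here is an
# illustrative assumption, not any real provider's regulations.

THRESHOLDS = [(70, "First"), (60, "2(i)"), (50, "2(ii)"), (40, "Third")]

def weighted_mean(modules):
    """Credit-weighted mean mark for a list of (mark, credits) pairs."""
    total = sum(credits for _, credits in modules)
    return sum(mark * credits for mark, credits in modules) / total

def classify(year2, year3, year_weights=(0.3, 0.7), borderline=2):
    """Combine level averages, then apply a simple borderline rule."""
    overall = (year_weights[0] * weighted_mean(year2)
               + year_weights[1] * weighted_mean(year3))
    for boundary, label in THRESHOLDS:
        if overall >= boundary:
            return label, round(overall, 1)
        # Hypothetical "zone of consideration": lift a borderline
        # candidate if at least half their final-year credits sit in
        # the higher class.
        if overall >= boundary - borderline:
            above = sum(c for m, c in year3 if m >= boundary)
            if above * 2 >= sum(c for _, c in year3):
                return label, round(overall, 1)
    return "Fail", round(overall, 1)

# One student, under this particular set of made-up rules:
year2 = [(68, 60), (66, 60)]
year3 = [(74, 40), (72, 40), (62, 40)]
print(classify(year2, year3))  # ('First', 68.6) - lifted by the borderline rule
```

Change the year weighting, the zone width, or the lifting rule, and the same transcript yields a different class – which is why a one-line summary of “the algorithm” rarely exists.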
Though some may enjoy watching red-faced institutions claim that the class of 2019-20 were just spectacularly capable, you’d have to consider that at least one audience for these statements would be recent graduates whose concerns about the number of firsts and upper seconds awarded would be largely driven by a need to understand why they didn’t get one. Like Gavin, a graduate from a university in Yorkshire, who told me: “When I was at university, you could count the number of students on my course who got firsts on one hand, but now grade inflation has become entrenched in higher education”. For me, this is where the focus on strategy rather than process lets us down.
The recommended structure does include sections on classification algorithms and assessment practices, but this will necessarily be at a very high level to fit onto three pages of “clear English” alongside all the other stuff. The guidance notes that you may “already publish” a “clear description” of your algorithm(s) and zones of consideration. I’d love to see one.
But what even is a 2(i)?
There’s been a sporadic sector movement (remember the Burgess Review?) pushing for the UK to move away from the current classification system to a grade point average system like that used in US higher education – advocates argue that the current system is imprecise, opaque, and antiquated.
The current system arose in the early 19th century at Oxford and Cambridge, at a time when few undergraduates even studied for “honours”. Candidates were ranked by achievement, with the top 25 per cent distinguished from the middle 50 per cent, and so on. Classifications were also a hot topic in the early 1990s, with proposals that led to the eventual availability of a detailed transcript alongside a certificate.
We all carry a model of the system in our minds (a first is above 70 per cent, and so on), but the complexity of the algorithms makes such approximations dangerous. And what does “above 70 per cent” mean to an employer?
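Two invented rule sets make the point – the same transcript, two defensible algorithms, two different degrees. The discounting fraction in the second rule is made up, though discounting a student’s weakest credits in some form is a real feature of some regulations.

```python
# Two invented rule sets applied to the same hypothetical transcript,
# showing why "a first is above 70" is an unreliable mental model.

marks = [74, 73, 72, 71, 55, 52]  # six equally weighted final-year modules

# Rule A: simple mean of all modules.
mean_all = sum(marks) / len(marks)             # 66.2 -> a 2(i)

# Rule B: mean of the best two-thirds of modules (the fraction is
# made up; discounting weakest credits does happen in some form).
best = sorted(marks, reverse=True)[: 2 * len(marks) // 3]
mean_best = sum(best) / len(best)              # 72.5 -> a first

print(f"Rule A: {mean_all:.1f}, Rule B: {mean_best:.1f}")
```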
What UKSCQA has attempted to do is capture the vagaries of these distinctions in language. A noble aim, but in reality we are left with a lengthy set of descriptors outlining a simple ranking of modifiers – the word “strong” seems to denote a 2(ii), a 2(i) sees a shift to “thorough”, and a first broadly means “exceptional”.
Strong work in semantics
The idea is that these common descriptors can be used in course design and approval, or in staff development. This at least sees the sector singing from the same hymn sheet as regards a 2(i) being for graduates with a “thorough” understanding of a topic – but surely prompts further questions on what “thorough” means.
If you are setting out such broadly applicable descriptions you are in danger of adding nothing tangible to the subject-specific learning goals and outcomes that already exist in course documentation. Indeed, the QAA’s subject benchmark statements provide a ready source of such language, and professional bodies another.
With such rubrics already available, and the cascading modifiers approach familiar to anyone who has designed or led a programme in HE, what exactly do these non-exhaustive generic descriptors actually add? The idea of consistency in measures of learning is attractive, if unlikely – a mention of a provider’s adherence to these descriptors in its degree outcomes statement seems the likely endpoint. And I’m not sure who benefits from that.
Of course, the classification algorithm is just part of the academic regulatory story. The myriad of progression regulations universities adopt makes achieving a degree something of a lottery, as SACWG and NUCCAT have shown over the years.
In fact it’s even worse than Harvey suggests. University assessment systems do things with numbers (marks) that a first-year statistics student would fail for – adding and averaging marks drawn from different ranges, treating judgements of quite different outcomes as if they were measurements on the same scale.
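To illustrate (with invented cohort statistics): the same raw mark can represent very different performances in modules with different marking cultures, yet a simple average treats them as interchangeable.

```python
from statistics import mean

# (student's mark, cohort mean, cohort standard deviation) -
# all of these cohort figures are invented for illustration.
history = (68, 62, 4)    # a narrow-band marking culture
maths   = (68, 55, 15)   # marks spread across the full range

def z(mark, mu, sigma):
    """Standardised score: how unusual a mark is within its module."""
    return (mark - mu) / sigma

# A typical algorithm treats the two marks as identical...
print(mean([history[0], maths[0]]))   # 68 either way

# ...but relative to each cohort they are very different performances.
print(round(z(*history), 2))   # 1.5 standard deviations above the mean
print(round(z(*maths), 2))     # 0.87 standard deviations above the mean
```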