The detail around the next steps of the TEF subject pilot contains one glaring omission. Grade inflation – after a brief, supplementary appearance last year – will no longer play a part.
Described as “unhelpful” in the context of subject TEF, the metric was to return to provider-level TEF, as “panellists found the metric was limited in its current form, suggesting it is not precise enough to enable them to make judgements about why grades may have risen”. There were suggestions that accounting for prior attainment was one aspect where the metric could be improved. None of this, of course, stopped DfE from trumpeting how this consultation response meant that grade inflation was being clamped down upon. Grade inflation is always good for a headline.
The UK Standing Committee for Quality Assurance has since commissioned Universities UK, GuildHE, and QAA to write a report on grade inflation in the form of a consultation paper, to which responses are currently being sought. Meanwhile, as the TEF documentation made clear, OfS would work on refining the abandoned TEF measure for its own regulatory purposes.
This work will be used by the OfS to inform its regulatory approach to addressing grade inflation. One of the four primary regulatory objectives of the OfS is to ensure that ‘qualifications hold their value over time’ and this objective is underpinned by a condition for registration that is applied to all providers.
Now read on
Which brings us neatly to today, and a short technical report from OfS that purports to offer a more accurate metric for the grade inflation “crisis” that is currently engulfing the sector. To cut a long story short, it doesn’t – and the way it has been presented and trailed is actively occluding our understanding of the issue.
Readers will now realise that I’m about to lay into the methodology – so let’s get a few things up front first. The sector does have to deal with the problem of grade inflation – anyone who spends any time at all talking to academics and administrators knows that it is happening, though often in less visible ways that could be (and often are) argued to be attempts to reflect more accurately the work submitted by students.
In so far as we can have an “explainable” proportion of first class degrees, the Office for Students is wringing its hands over the growth in the number of “unexplained” first class honours at each institution since the class of 2010-11. According to the data (and the assumptions underpinning it), nearly all of the rise in the proportion of first class degrees is “unexplained”. This sounds like a big problem, but there are (yes) some questionable aspects to the methodology.
The first of these is the choice of 2010-11 as “year zero”. It’s arbitrary to the extent that TEF3, earlier this year, used 2007-8 as a comparator year. There’s no particular reason to assume that the right number of graduates got first class honours in 2007-8 or 2010-11, or any other year. Seven years of data isn’t a magic number – some institutions would have changed radically in size and subject mix over this period, others would be broadly the same. But an inflation measure needs a baseline and a bit of information about why this was chosen might have helped. One is tempted to suspect that the data beforehand was embarrassingly poor quality.
What the OfS analysis does next is to look at attainment by demographic characteristics. From this it can calculate the effect that each characteristic would have, and estimate a coefficient for each. This gives evidence for some of HE’s darker little secrets – Black students attain less well than white students, those without traditional entry qualifications do less well than those with them, students from POLAR quintile 5 do better than those from quintile 1. And so on.
Here’s what the raw data looks like. Just a note on entry qualifications: the OfS is suggesting that more than a quarter of all graduates in 2010-11 had no level 3 qualifications. Which, apart from anything else, suggests that it is not a good idea to build this data into the OfS reference model.
Counterintuitively, these demographic disadvantages are baked in to the model of what years beyond 2010-11 should have looked like. OfS compares this with the following years, with the percentage point difference between the number of observed and expected firsts plotted as “unexplained” first class honours degrees for each year.
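To make the mechanics concrete, here is a minimal sketch of that observed-versus-expected calculation. This is not the OfS model – the baseline probability, the characteristic adjustments, and the cohort below are all invented for illustration – but it shows how frozen baseline-year adjustments turn any improvement in attainment into an “unexplained” gap.

```python
# Illustrative sketch only: not the OfS model. The baseline rate, the
# adjustments, and the cohort data are hypothetical stand-ins for the
# fitted coefficients the report describes.

# Hypothetical baseline-year (2010-11) probability of a first, plus
# additive adjustments (in probability points) per characteristic.
BASELINE_P_FIRST = 0.15

ADJUSTMENTS = {
    "polar_q5": 0.04,        # higher-participation neighbourhoods do better
    "no_level3_entry": -0.05,  # non-traditional entry qualifications
    "black": -0.06,          # the attainment gap the analysis surfaces
}

def expected_firsts(cohort):
    """Expected number of firsts for a cohort, with each graduate
    represented as a list of characteristic flags, using the frozen
    baseline-year adjustments."""
    total = 0.0
    for graduate in cohort:
        p = BASELINE_P_FIRST
        for characteristic in graduate:
            p += ADJUSTMENTS.get(characteristic, 0.0)
        total += max(0.0, min(1.0, p))  # clamp to a valid probability
    return total

def unexplained_pp(cohort, observed_firsts):
    """Percentage-point gap between observed and expected firsts --
    what the report labels 'unexplained'."""
    return 100 * (observed_firsts - expected_firsts(cohort)) / len(cohort)

# A later cohort of four graduates: two with no flagged characteristics,
# one Black graduate, one from POLAR quintile 5. Two firsts observed.
cohort = [[], [], ["black"], ["polar_q5"]]
print(expected_firsts(cohort))      # 0.15 + 0.15 + 0.09 + 0.19 = 0.58
print(unexplained_pp(cohort, 2))    # (2 - 0.58) / 4 * 100 = 35.5
```

Note what the sketch makes plain: the negative adjustment for Black students lowers the expected count, so if those students’ actual attainment improves, the gap – and hence the “unexplained” figure – grows.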
So an institution that recruits disproportionately well among young Black people and works hard to raise attainment will see a large “unexplained” increase in first class honours. Despite this, OfS is threatening to use the full range of its regulatory powers against providers that fail to address unexplained grade inflation.
Nice try, but try again
This, bluntly, is what happens when the OfS relies on data rather than contextual evidence. For me, the best approach to unexplained results is to ask institutions to explain them. I’m sure there are a number of great reasons – reasons that should be celebrated – that attainment has improved. I’m sure that many first class graduates and their employers would be happy to chip in. Maybe some current students could add information on how hard they are working and how well they hope to do.
The other problem I have with this release is the decision to release institutional-level data. This seems engineered to get journalists to run it as a league table story. By the time you read this, I’m sure all the usual papers will be splashing with their “top 10 institutions awarding too many first class degrees” pieces – without any accompanying notes on how this is assuming a natural order where poor kids do worse than rich ones, and white kids better than black ones. You’ll excuse me for not playing that game here. You can see individual results below, and if you want a league table please feel free to build your own.
The thing is, there is very likely some grade inflation in the system. Institutions are under a lot of pressure to perform better – partly to meet performance targets, and partly because some supremely idiotic league tables include the number of first class degrees in their rankings. For me, a requirement to publish degree classification rules at the start of each course, and to commit not to change these for that cohort, would do much more to address the legitimate issue without dragging the sector through the mire for no good reason.