It was a shame, when we finally got sight of the TEF3 grade inflation data from OfS, that it was presented in a format so difficult to work with. Individual pdf files – one per institution – must have been really difficult for the panel and executive to use.
Of course, if the OfS actually did have the data in another format and still chose to release it in such an unwieldy manner then this would certainly be against the spirit of the accessibility sections of the Code of Practice for Statistics.
It’s a data wonk nightmare – and one from which the sector can now awaken because I’ve put them all in Tableau for you.
Where does this come from?
You’ll recall that the semi-inclusion of grade inflation within TEF3 came from one of those Jo Johnson moral panics that beset the later months of his ministerial tenure. It was back in September last year during his Universities UK conference speech. Wonkhe’s Ant Bagshaw covered the initial furore, though the idea eventually wound up in the specification for the new Teaching Excellence and Student Outcomes Framework. Every institution with degree awarding powers was required to complete a HEFCE-provided form with their own records of degree classes awarded for 10 years ago (or as near as they could) and the last three years.
In all, 20 institutions submitted the required data – if they didn’t, or claimed data wasn’t available when it should have been, they could have been disqualified from TEF altogether. All but four provided the ten-year comparator data too – Trinity St David, SOAS, and Aberystwyth provided 2013-14 data instead, and Trinity Laban was only able to provide one year of data as a newish holder of degree awarding powers.
TEF assessors were provided with this information, as the aforementioned pdfs, for each institution for which it was available. They were instructed to use it as a measure of “rigour and stretch” (TQ3), with an uncaveated rise in the proportion of first class and 2:1 degrees awarded over the last ten years seen as evidence of grade inflation (and thus a fall in rigour), whereas a fall in these proportions that could be linked in the provider submissions to “clear institutional policies and practices” was seen as an increase in rigour. Assessors were given a sector average level of grade inflation, which was emphatically not to be used as a benchmark – rather, the guidance was clear that all grade inflation is negative.
What does the data tell us?
It’s pretty negative, on the face of it.
Every institution for which data is presented showed evidence of grade inflation when comparing the most recent year of first class awards with the supplied historical comparator – in some cases a difference of up to 20 percentage points. Most institutions also showed a steady increase over the most recent three years, all of which were substantially above the earlier figure.
Every institution showed a rise in the number of first class degrees, and a fall in the number of 2:2, third class or other honours degrees. Looking at the raw numbers it appears that the “ordinary” or other non-honours degree has pretty much died out.
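For anyone wanting to replicate the comparison above from the transcribed data, the arithmetic is simply a percentage point difference between each recent year and the comparator year. A minimal sketch – the institution and all its figures below are invented for illustration, not taken from the OfS release:

```python
# Illustrative proportions (%) of awards for a hypothetical institution --
# these figures are made up, not drawn from the OfS pdfs.
comparator = {"first": 12.0, "2:1": 45.0}  # 2007-08 comparator year

recent = {
    "2014-15": {"first": 24.0, "2:1": 50.0},
    "2015-16": {"first": 27.0, "2:1": 51.0},
    "2016-17": {"first": 32.0, "2:1": 51.0},
}

def pp_change(year_props, comparator_props):
    """Percentage point difference from the comparator year, per degree class."""
    return {cls: year_props[cls] - comparator_props[cls] for cls in comparator_props}

for year, props in sorted(recent.items()):
    print(year, pp_change(props, comparator))
```

In this made-up case the most recent year shows a 20 percentage point rise in firsts over the comparator – the sort of headline gap seen at the top end of the real data.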
What doesn’t the data tell us?
Resits, basically. We don’t know to what extent degree candidates are simply not accepting lower awards, and instead choosing to resit elements of their course to achieve a higher award. We also do not know to what extent institutions are encouraging this – whether in light of the continued idiocy of certain parts of the rankings industry in including “percentage of first class degrees” in league tables, or out of care for students (and with a weather eye on DLHE metrics).
The simple proportions are also less reliable for smaller institutions, where you would expect to see a greater fluctuation year on year and cohort by cohort. And we don’t (yet – this may come in future years when the data is derived centrally from HESA) get any splits – of particular interest here would be prior qualifications, but we already know that various student attributes are a good predictor of final grade.
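To see why small cohorts make simple proportions less reliable, a back-of-envelope binomial standard error gives a feel for the year-on-year wobble you would expect by chance alone. The 25% first-class rate and the cohort sizes here are purely illustrative assumptions:

```python
import math

def se_percentage_points(p, n):
    # Standard error of an observed proportion, in percentage points,
    # for a cohort of n graduates with an underlying award rate p.
    return 100 * math.sqrt(p * (1 - p) / n)

p = 0.25  # assumed underlying first-class rate (illustrative)
for n in (50, 500, 5000):
    print(f"cohort of {n}: +/- {se_percentage_points(p, n):.1f}pp")
```

Under these assumptions a 50-graduate cohort fluctuates by around six percentage points purely at random, so a single year’s movement at a small provider tells you very little.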
How was the data used?
In all honesty it’s difficult to see any link between this particular measure and shifts from the initial hypothesis (see my flags article for more detail on how we worked that out) to the final awards. It would, of course, have been one of several supplementary and contextual measures taken into account, alongside the institutional statement.
So where is the data?
It’s here. Or, if you want to view it full screen – here.
I’ve three tabs for you (along the top) – the first lets you look at all the data for each institution, the second shows the percentage point difference from the comparator year (I’ve omitted the four institutions that didn’t use the 2007-08 comparator here), and the third shows you proportions as percentages for each institution and year of available data. The source data (as pdfs in a zip file) is available from OfS, but because I am kind you can download my transcription from within the Tableau if you want it for anything.