Bang! – grade inflation in TEF3

The OfS finally released the TEF3 "grade inflation" data. Wonkhe's DK renders it in a more usable format to help us make sense of what it says.

David Kernohan is Deputy Editor of Wonkhe

It was a shame, when we finally got sight of the TEF3 grade inflation data from OfS, that it was presented in a format so difficult to work with. Individual pdf files – one per institution – must have been really difficult for the panel and executive to use.

Of course, if the OfS actually did have the data in another format and still chose to release it in such an unwieldy manner then this would certainly be against the spirit of the accessibility sections of the Code of Practice for Statistics.

It’s a data wonk nightmare – and one from which the sector can now awaken because I’ve put them all in Tableau for you.
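
If you’d rather repeat the exercise than take my word for it, here’s a minimal sketch of the sort of script that turns a folder of one-table-per-page pdfs into a single CSV. The folder and file layout are my assumptions, and pdfplumber’s table detection would almost certainly need tuning against the real OfS pdfs.

```python
import csv
import glob

import pdfplumber  # pip install pdfplumber

rows = []
for path in sorted(glob.glob("ofs_grade_inflation/*.pdf")):  # hypothetical folder
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            table = page.extract_table()  # first table found on the page, or None
            if table:
                # Keep the source file alongside each data row; skip the header row.
                rows.extend([path] + row for row in table[1:])

with open("grade_inflation_all.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```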

Where does this come from?

You’ll recall that the semi-inclusion of grade inflation within TEF3 came from one of those Jo Johnson moral panics that beset the later months of his ministerial tenure – this one dating from his Universities UK conference speech back in September last year. Wonkhe’s Ant Bagshaw covered the initial furore, though the idea eventually wound up in the specification for the new Teaching Excellence and Student Outcomes Framework. Every institution with degree awarding powers was required to complete a HEFCE-provided form with their own records of degree classes awarded ten years ago (or as near as they could manage) and in each of the last three years.

In all, 20 institutions submitted the required data – if they hadn’t, or had claimed data wasn’t there when it should have been, they could have been disqualified from TEF altogether. All but four provided the ten-year comparator data too: Trinity St David, SOAS, and Aberystwyth provided 2013-14 data instead, and Trinity Laban, as a newish holder of degree awarding powers, was only able to provide one year of data.

TEF assessors were provided with this information, as the aforementioned pdfs, for each institution where it was available. They were instructed to use it as a measure of “rigour and stretch” (TQ3), with an uncaveated rise in the proportion of first class and 2:1 degrees awarded over the last ten years being seen as evidence of grade inflation (and thus a fall in rigour), whereas a fall in these proportions that could be linked in the provider submissions to “clear institutional policies and practices” was seen as an increase in rigour. Assessors were given a sector average level of grade inflation, which was emphatically not to be used as a benchmark – rather, the guidance is clear that all grade inflation is negative.

What does the data tell us?

It’s pretty negative, on the face of it.

Every institution where data is presented showed evidence of grade inflation when comparing the most recent year of first class awards with the supplied historical comparator, in some cases up to a 20 percentage point difference. Most institutions also showed a steady increase over the most recent three years, all of which were substantially above the earlier figure.
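
For the avoidance of doubt, the differences here are in percentage points, not relative percentages – a quick sketch with made-up numbers:

```python
# Illustrative figures only – not any institution's actual data.
comparator_firsts = 0.12  # share of first class awards in the comparator year
latest_firsts = 0.32      # share of first class awards in the most recent year

change_pp = (latest_firsts - comparator_firsts) * 100
print(f"Change: {change_pp:+.1f} percentage points")  # Change: +20.0 percentage points
```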

Every institution showed a rise in the number of first class degrees, and a fall in the number of 2:2, third class or other honours degrees. Looking at the raw numbers it appears that the “ordinary” or other non-honours degree has pretty much died out.

What doesn’t the data tell us?

Resits, basically. We don’t know to what extent degree candidates are simply not accepting lower awards, and instead choosing to resit elements of their course to achieve a higher award. We also do not know to what extent institutions are encouraging this – whether in light of the continued idiocy of certain parts of the rankings industry in including “percentage of first class degrees” in league tables, or out of care for students (and a weather eye on DLHE metrics).

The simple proportions are also less reliable for smaller institutions, where you would expect to see a greater fluctuation year on year and cohort by cohort. And we don’t (yet – this may come in future years when the data is derived centrally from HESA) get any splits – of particular interest here would be prior qualifications, but we already know that various student attributes are a good predictor of final grade.
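
To illustrate the small-institution point (with invented figures, not TEF data): a Wilson interval around a 30% rate of firsts is far wider for a cohort of 50 than for one of 5,000.

```python
from statsmodels.stats.proportion import proportion_confint

# Invented figures: the same 30% rate of firsts, at two very different cohort sizes.
for cohort in (50, 5000):
    firsts = round(0.30 * cohort)
    low, high = proportion_confint(firsts, cohort, alpha=0.05, method="wilson")
    print(f"n={cohort}: 30% firsts, 95% CI {low:.1%} to {high:.1%}")
```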

How was the data used?

In all honesty, it’s difficult to see any link between this particular measure and shifts between the initial hypothesis (see my flags article for more detail on how we worked that out) and the final awards. It would, of course, have been one of several supplementary and contextual measures taken into account, alongside the institutional statement.

So where is the data?

It’s here. Or, if you want to view it full screen – here.

I’ve three tabs for you (along the top) – the first lets you look at all the data for each institution, the second shows the percentage difference from the comparator year (I’ve omitted the four institutions that didn’t use the 2007-8 comparator here), the third shows you proportions as percentages for each institution and year of available data. The source data (as pdfs in a zip file) is available from OfS, but because I am kind you can download my transcription from within the Tableau if you want it for anything.
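
And if you’d rather work with the transcription in code than in Tableau, here’s a rough sketch of how the second and third tabs could be reproduced in pandas – the file name, column names, and comparator label are my assumptions, not anything fixed by the workbook.

```python
import pandas as pd

# Hypothetical transcription with columns: provider, year, classification, count.
df = pd.read_csv("grade_inflation_all.csv")

# Tab three: each classification as a share of all awards in that provider-year.
totals = df.groupby(["provider", "year"])["count"].transform("sum")
df["share"] = df["count"] / totals

# Tab two: percentage point difference from the comparator year
# (labelled "2007-08" here purely for illustration).
comp = (
    df[df["year"] == "2007-08"]
    .loc[:, ["provider", "classification", "share"]]
    .rename(columns={"share": "comparator_share"})
)
df = df.merge(comp, on=["provider", "classification"], how="left")
df["diff_pp"] = (df["share"] - df["comparator_share"]) * 100
```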

7 responses to “Bang! – grade inflation in TEF3”

  1. Two thoughts:
    Firstly, it is absolutely right that OfS expects that ‘all students from all backgrounds’ with the ‘ability and desire to undertake HE’ should be supported by institutions to achieve success and progress to work or further study, and it is equally positive that we are charged with reducing the attainment gaps between different groups of students (for example the BME attainment gap). However, if measures are effective, this ought to lead to grade improvement. (NB, this is different from grade inflation and the TEF metrics can only measure grade change – not tell us if this is due to inflation or improvement).

    Secondly, since league tables reward higher proportions of good degrees with more points, and NSS satisfaction ratings show a clear correlation with degree outcomes, there will be powerful incentives for institutions to find ways to drive up the percentage of good degrees – points really do mean prizes.

    If the designers and developers of TEF are serious about wanting to avoid unintended consequences, they need to look across all the various performance measures to see how they work together.

  2. “And we don’t […] get any splits – of particular interest here would be prior qualifications”

    Bearing in mind of course that these prior qualifications themselves may be subject to grade inflation, or its inverse.

  3. David, thank you for pulling together this limited data set.

    There is another aspect to grade data, and that is the Transparency Condition that sits within HERA.
    http://www.legislation.gov.uk/ukpga/2017/29/section/9/enacted and in particular:
    (e) the number of students who attained a particular degree or other academic award, or a particular level of such an award, on completion of their course with the provider.
    This is almost certainly a description of publishing the number of students by degree classification, and then splitting this data by:
    (a) the gender of the individuals to which they relate;
    (b) their ethnicity;
    (c) their socio-economic background.

    Of course, a subscription to HeidiPlus would expose this data by gender and ethnicity (subject to rounding)

  4. The data certainly show significant rises in the proportion of students achieving good honours degrees (running a standard significance test on selected universities; a sketch of such a test appears after these comments).

    What would also be interesting to look at is the median or average marks awarded, assuming each university has a common mark scale. These data would reveal whether there has been a shift in skew (the most radical would be a shift from negative to positive skew) and whether the improvement is driven by higher marks awarded to students or changes in the method of classification.

    As has also been alluded to by other commentators (e.g. David Radcliffe, above), at a more granular level it is worth looking at whether performance gaps have closed. For example, if in 2007 more men than women achieved first-class degrees but this gap has closed, then grade inflation is clearly not the only possible explanation of the change observed. The same applies to the closure of attainment gaps among different ethnic groups or between students drawn from different quintiles of the household income distribution.

    A worst-case scenario would be a fall in marks awarded, a rise in the proportion of firsts or upper-seconds, and a widening of attainment gaps. A best-case scenario would be a rise in marks awarded, a rise in the proportion of good degrees, and closure of attainment gaps.

  5. My concern here folks is that this is a numbers based analysis, not a quality-based analysis.

    Dare I suggest that, in a world where standards are scrutinised at every turn under the fees structure and separately through TEF, it is entirely possible that better teaching = better outcomes for students…

    In a world where Government are intervening with models to “drive quality” in education, surely the increase in more good honours would be something to celebrate if authentic?

    The two approaches from OfS are totally at odds with each other. Drive quality up, but curb the number of good honours if the improvement in quality is realised? Surely metrics, in this instance, give only a partial (at best) understanding of good honours and what they are actually worth.

  6. Incidentally, there are not that many HEIs which allow resits on demand for full marks – you’d largely need ECs for that, and it’s a calculated risk for any student to take. It overlooks the fact that ECs are indeed a much-needed vestige for those whose studies are interrupted by ill health and personal difficulty. I’m not sure resits in themselves are that large a factor in this.
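
A minimal sketch of the kind of two-proportion significance test mentioned in comment four, using illustrative counts rather than real TEF figures:

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts, not real TEF figures: firsts awarded out of all
# classified degrees, comparator year vs most recent year.
firsts = [120, 320]
cohorts = [1000, 1000]

z_stat, p_value = proportions_ztest(firsts, cohorts)
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")  # a tiny p flags a significant rise
```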
