
Grade inflation: a clear and present danger

Iain Mansfield argues that we can no longer ignore the issue of grade inflation at university.

Iain Mansfield is Director of Research and Head of Education and Science at Policy Exchange

Grade inflation is endemic in our higher education system. What’s more, its pace appears to be accelerating, and neither high tariff nor low tariff institutions are immune. The regular newspaper articles showcasing the latest growth in firsts and 2:1s have become an expected feature of the summer, yet the sector has still failed to grip this nettle. Too often, debates over whether grade inflation really exists still take the place of genuine attempts at reform.

There are at least three principal reasons why unchecked grade inflation should concern those who care about the sector:

  • It plays into the hands of those who wish to devalue the sector. For anyone wishing to denigrate the value of a degree, or to complain that too many people are going to university, grade inflation presents an open goal. If a 2:1 is clearly not worth what it was forty years ago, it is easy to extrapolate – correctly or not – to the worth of HE as a whole.
  • It threatens institutional autonomy. The freedom to set standards and classify degrees is one of the pillars of institutional autonomy. If grade inflation continues to worsen, it may reach a stage where the government, supported by public pressure, could no longer ignore it. It’s hard to see how government could act decisively in this area without fundamentally impinging upon institutional autonomy – and that line, once crossed, would be hard to restore.
  • It undermines the credibility of the sector. The strength of the HE sector in public debate should lie in its ability to muster logical arguments well informed by robust evidence. When the sector casts aside the evidence to make a self-interested argument on grade inflation, it fundamentally weakens its credibility to engage in other areas it may care about, such as student migration.

More than most other ‘hot media topics’, grade inflation has far-reaching implications. People might be concerned about vice-chancellor salaries, for example, without necessarily questioning the rigour or robustness of the education being offered. In contrast, grade inflation strikes at the very heart of higher education’s integrity.

Is it really happening?

The raw facts are undeniable. Almost three-quarters of students now secure a first or upper second, compared to 66% in 2011/12 and fewer than half in the mid-1990s. Looking only at first class degrees, the proportion of students receiving a first rose to 26% in 2016/17, up from 17% in 2011/12. If we go back to 1994, the statistics are even starker: then, only 7% of students received a first.

But if these are the facts, what of the putative explanations?

One often advanced is that it simply reflects rising attainment on entry. This is implausible: between 1994 and today the average prior attainment of those entering HE decreased as participation widened. This may have had other benefits, but it could not have driven a more than three-fold increase in the number of firsts. More detailed statistics also disprove this theory: a recent HEFCE report found that the proportion of 2:1s and firsts awarded rose for students with all but the very highest levels of prior attainment, with the largest increase in firsts being amongst students with BBC at A-Level.

The other argument often proffered is that students are working harder, or that teaching has improved dramatically. There may be some truth to these statements, but if they were to explain an increase of this scale, we would expect to see a corresponding improvement in the ability of graduates. Yet we do not. A recent survey found that 25% of employers had needed to provide remedial training on functional skills for graduates. OECD reports have found that only a quarter of graduates have high-level literacy skills, while 7% lack basic skills in English and maths. To quote Andreas Schleicher, the OECD’s Director for Education and Skills: “You can say in the UK that qualification levels have risen enormously – a lot more people are getting tertiary qualifications, university degrees – but actually a lot of that isn’t visible in better skills.”

These factors clearly do not explain the tremendous rise in the proportion of good degrees. But even if they did, there would still be a problem. The purpose of degree classification is not to measure the performance of graduates against an arbitrary standard set out in the 1990s, but to provide meaningful differentiation for employers, for further study and for the graduates themselves. It is difficult to argue that a system in which nearly three out of four graduates get the top two grades is fulfilling that purpose.

What can be done?

The hard truth is that grade inflation must not simply be halted, but reversed.

No one can deny that universities are in an invidious position. League tables, which sadly persist in using the proportion of 2:1s and firsts as a measure of quality, provide constant pressure to ratchet up grades, even before we consider the way that grade inflation can be used to flatter other key measures, such as the NSS (unsurprisingly, students tend to be more satisfied when they’re given good grades). And naturally, no university wants to disadvantage its own graduates by awarding too many 2:2s in a world where many employers use the 2:1 as a hard cut-off.

The government’s recent initiatives, such as the UK Standing Committee on Quality and Standards and the inclusion of grade inflation in the TEF, are worthy – but they are unlikely to be enough by themselves. The problem has become embedded. Moving to a grade point average is likewise no answer – grade inflation is as rife in the US as it is here, and no university can tackle it alone, as Princeton has found. Decisive, meaningful, collective action is the only way forward – and, just to make it more challenging, it must be done in a way that preserves institutional autonomy over standards.

The most obvious way forward would be for a sufficiently large subset of the sector to agree that they will not award more than a certain proportion of firsts and 2:1s each year – perhaps 15% firsts and 40% 2:1s. This could, if desired, be averaged across subjects, or even calculated as a three-year rolling average, to account for natural variation between cohorts. This would restore meaningful differentiation to the system and place a natural cap on league-table-driven inflation, whilst fully preserving each institution’s autonomy over standards. Oxford and Oxford Brookes would not need to debate the relative standards of their degrees; they would simply both agree to maintain the proportions for the students they admit, and set standards accordingly.
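
To make the arithmetic concrete, here is a minimal sketch (in Python) of how such a cap check might work under a three-year rolling average, using the 15% and 40% thresholds suggested above. It is purely illustrative: the function, the award counts and the data structure are all hypothetical, not part of any existing system.

    # Illustrative sketch only: hypothetical counts; thresholds taken from the proposal above.
    def within_cap(yearly_awards, cap_firsts=0.15, cap_upper_seconds=0.40):
        """Check award proportions over a trailing three-year window against the caps.

        yearly_awards: list of per-year dicts with hypothetical counts under
        the keys 'firsts', 'upper_seconds' and 'total'.
        """
        window = yearly_awards[-3:]  # trailing three-year window
        total = sum(y['total'] for y in window)
        share_firsts = sum(y['firsts'] for y in window) / total
        share_upper_seconds = sum(y['upper_seconds'] for y in window) / total
        return share_firsts <= cap_firsts and share_upper_seconds <= cap_upper_seconds

    # Example: three hypothetical cohorts of 1,000 graduates each.
    history = [
        {'firsts': 140, 'upper_seconds': 390, 'total': 1000},
        {'firsts': 155, 'upper_seconds': 400, 'total': 1000},
        {'firsts': 150, 'upper_seconds': 405, 'total': 1000},
    ]
    print(within_cap(history))  # True: 14.8% firsts and 39.8% 2:1s on average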

A couple of obvious questions present themselves:

  • What is a “sufficiently large subset”? There is no clear answer to this, but it needs to be large enough, and contain sufficiently many prestigious institutions, to force both league tables and employers to take notice. Perhaps 50 institutions, including at least half the Russell Group, might be sufficient, but other combinations would do.
  • How to get from here to there? Moving from a system in which more than a quarter of students get firsts to one where only 15% do could prove painful. One option, therefore, would be to tweak the degree classification to ensure differentiation, whilst maintaining the principle of limiting the proportion receiving the highest grades. Perhaps the starred first could make a comeback, or perhaps the 2:1 could be split further (or renamed).

It is easier to propose solutions than to implement them, and even more so when implementing them requires collective action. But if the sector is to restore faith in the degree classification system, and in higher education itself, something along these lines must be attempted.

15 responses to “Grade inflation: a clear and present danger”

  1. It’s the wonks who cause grade inflation. The systematisation and bureaucratisation of learning is so thorough, as are the punishment/reward mechanisms for lecturers, that student learning these days is about learning the rules and how to work them, rather than the content per se.

    The reason ‘remedial’ intervention may be needed by employers is that this systematisation trains students’ mindsets to be more and more disconnected from the world of work: a world in which encounters are not recorded and accompanied by .ppt slides, where tasks are not accompanied by grading criteria, and where assessments of achievement are complex, indeterminate, but of real significance to the individual.

  2. Yes, the quality of education must be re-established, since the infiltration of privatisation and the replacement of ‘learning for its own sake’ by commercial ends, by Tory governments. I agree there should be a more rigorous scale of achievement. Degrees are, after all, not status symbols but actual measures of one’s attainment, judged by standards of the highest and broadest integrity.

  3. No no no

    We have a criterion-referenced system: we agree what ‘first class’ work looks like and give it a grade – we don’t decide that 15% of work will be first class and give the best 15% that grade.

    When we accumulate those marks together we do have the excitement of the overall degree classification, and different universities have different rules. But that’s a different story. If the sector, or a big part of the sector, wanted to move to a *norm-referenced* system, then you would need to move away from the classified honours system (it would be invidious to compare students from two systems with the same-looking outcomes).

    The difficulty would be agreeing the systems. Would Oxford and Oxford Brookes both agree that only their best 15% of students would get an A? Could Oxford Brookes even agree that 15% of historians and 15% of mechanical engineers would get an A?

    Just look at the tortuous discussions about GPA – it took years to bring it forward, politicians went very lukewarm on it, and we’re left with a few examples in play but without a single agreed system to run it. And that’s a simple system.

    Remove degree outcomes from league tables. It just adds another incentive to universities to chase the best prepared students, rather than students who might benefit most from the education on offer.

  4. I agree with Mike, but I hope he can explain to his Registry colleagues and to Deans why, when operating with a criterion-referenced system, you can’t then ask for comparability of grade outcomes between modules, subjects, degree programs.

    “The purpose of degree classification is not to measure the performance of graduates against an arbitrary standard set out in the 1990s, but to provide meaningful differentiation for employers, further study and for the graduates themselves”. No. The purpose of degree classification (and I’m not defending this in itself) is to assess achievement against a set of criteria appropriate to the subject, which are well publicized to students, and which offer guidance to students on how to improve their work. These are not arbitrary standards; they are congruent with the benchmarked learning outcomes of the subject and reflect progression within the degree program. Assessing this way is not, as Mike points out, compatible with a demand that Firsts/2.1s should be rationed in some way.

    I expand on my arguments here: https://academicirregularities.wordpress.com/2017/08/06/firsts-among-equals-why-have-the-number-of-first-class-degrees-increased-so-dramatically/

  5. If, as the last two comments say, the degree classification system assesses achievement against set criteria, there still remains a question to be asked and answered: how can it be that so many more students are getting the highest grades, without being able to perform accordingly afterwards? As the article argues, the evidence does not support the idea that all the students have got objectively better against some set criteria. On this view, then, a very hard look at the criteria themselves seems to be called for.

    I agree that “morally” a result should say what an individual student has achieved in their subject, rather than comparing them with the rest of their cohort. This viewpoint makes education seem more “an end in itself” than “a means to an end”, e.g. getting a job or a place on a PhD programme. Maybe as an academic I prefer this viewpoint because I value education and improving one’s capacity for logical, critical and abstract thought (to name some of the improvements a university education should bring). But is that really realistic in our current environment? Are school leavers choosing to go to university to improve their minds or to get a job afterwards? I have the impression that for the majority of students it is the latter. So while the author’s statement of what degree classification is for sounds at first more employer-focused than I as an academic find comfortable and acceptable, it does perhaps reflect the reality of many students.

    I think that the argument of what a degree classification measures, objective achievement or placement within a cohort, does not affect the author’s argument about the problems of grade inflation. It does perhaps put into question the precise suggestion of a remedy, but then perhaps an alternative suggestion is needed. The problem remains present.

  6. I still can’t believe that the author believes that working harder is something that makes you more employable across the board.

    Someone doing a non-vocationally oriented course (i.e. all the humanities, some theoretical STEM courses like Cambridge CompSci, etc.) gains few to no ‘hard’ skills that are valued by employers. Working hard and getting a First is only an indicator of their *soft* skills (patience, resilience, good planning, ability to keep calm in exams, etc.). And yet, employers who need to ‘re-educate’ their graduates with new skills are addressing the *hard* skills that they’re lacking – issues of substance.

    My point is that you can work as hard as you want to get a degree in any but the most directly vocational courses, but that you’ll need the additional training either way. Saying “employers still think our grads don’t know sh*t” is an argument for reforming university courses. It’s not an argument for saying that our students are more or less lazy than they were 15 years ago.

  7. Thank you for all the comments. A criteria-based system would be compatible with what I propose: the universities would simply have to revise the criteria so that roughly the appropriate proportion of students would get each grade (they should have a good estimate of the capability of their students) – and then probably run a calibration exercise every few years, given that the criteria-based system has shown itself to be highly prone to an upwards ratchet effect.

    Liz, your article was interesting. I understood it to say that, with a criteria-based system, students are effectively getting better at exam technique; i.e. it encourages learning (if not teaching) to the test. I note you say that approach to learning is flawed – I’d agree with you there, and the fact that the criteria-based system encourages it seems like a strong argument against using it. Either way, if that’s the explanation, I’d argue it still doesn’t reflect a genuine increase in students’ abilities (and it doesn’t explain why almost 50% of universities have adjusted their algorithms to give more firsts in the last few years).

    Mike, we already have different systems giving results that look similar: every university grades what are sometimes very different courses to different standards. And I would love to remove ‘good degrees’ from league tables, but the compilers aren’t going to do that of their own volition, and I hold freedom of the press in even greater esteem than I do academic freedom. A solution has to be found outside that.

    At the end of the day, I agree with Julia: the problem remains present. We don’t need to understand the mathematics of summing infinite series to know there’s something wrong with Zeno’s paradox – we just need to see that the arrow reaches its target. And from the fact that the huge rise in firsts doesn’t match a comparable rise in genuine capabilities, we know that the classification system isn’t working. The solution I suggested may or may not be the best, but one does need to be found and implemented.

  8. I think this depends greatly on what you mean by “hard” and “soft” skills. The most important thing students learn at university is how to think, and how to learn (more efficiently). They literally improve their brains. So, yes, maybe a student doesn’t know exactly all the small details in their new job, but the “soft” skills you quote are I think very secondary to the most important skills: thinking critically, logically, independently, creatively; problem solving; evaluating data and evidence, etc (weighted slightly differently depending on the degree). These mean that in their jobs, they can learn the environment of the new job quickly, and apply their brains successfully to a large variety of situations.

  9. As someone who has taught (and marked coursework) at Oxford Brookes University for over 20 years, I can confirm that there is pressure on the academics to award more firsts, driven by the effect of that percentage on league tables. While I am glad that I work in a department that is behind the curve on grade inflation, I do understand that we have to move with the times, and that ultimately our graduates will suffer as well as us if we don’t “catch up”.

    How do we do it? Our marking criteria have been revised over the years but it’s more about how generously those criteria are applied as well as nuances in the calculations (such as being able to discount some of the lowest marks). The written definition of what constitutes a first isn’t hugely different from the criteria I’ve seen for the MBA at the University of Oxford’s Saïd Business School. Expectations count for a lot.

    I certainly feel that I penalise poor spelling and grammar less than I used to. I used to have a bugbear about effect/affect but nowadays there are bigger fish to fry. In a world where larger numbers of students have “blue cards” to show that they have dyslexia it seems unfair to overly penalise the students who are writing in their second language or who probably could get a dyslexia certificate if they tried.

  10. Has anyone repeated the Volpe & Curran study of 2003, “Degrees of freedom: An analysis of degree classification regulations”? V&C found that “The most important implication of the analysis is that students with similar mark profiles can be awarded different degree classifications depending on the institution that they attend.” V&C also state that “Table 3 shows the wide variation in the methods and the resulting spread in the minimum average required to qualify for the award of a first class degree. The lowest minimum is 50.8% at the University of XXX and the highest minimum is 68.75% at the University of XXX. The average of the fifty-eight institutions is 61.91%…” That’s curious, to say the least: an average of 61.91% for a First?

    So, if I were asked to consider grade inflation at any particular institution, I’d start with their academic regs and how the award is calculated as a starter for ten. I completely agree that grade inflation needs considering, and I also completely agree with the comments on norm-referenced marking NOT being the way to do it. UUK & GuildHE’s work might cast some light? If I had a minute (!) I’d get hold of the HESA stats back to around 1987/1988 (i.e. coinciding with incorporation out of local authority control for some institutions) and look to see whether significant events – the 1992 Act, Dearing in 1997, and the adoption of an outcomes-based approach – caused spikes. And look at entry quals, staff-student ratio, research intensity etc. – any data that might correlate.

    Identifying the cause of the problem might then identify a solution to the problem – assuming we think we have a problem and are not assuming we think we have a problem because we are being told we have a problem!

    PS: the V&C paper is here, https://www.researchgate.net/profile/John_Curran9/publication/228603949_Degrees_of_freedom_An_analysis_of_degree_classification_regulations/links/542539850cf26120b7ac7e56/Degrees-of-freedom-An-analysis-of-degree-classification-regulations.pdf?origin=publication_detail if anyone is interested.

  11. Iain,

    If we had an answer (and I really don’t think norm referencing is that answer), we would need to make a change in nomenclature similar to the shift happening in GCSE. As with that exam, change would come in gradually, so you’d need something to say whether a degree was classified according to old-variant or new-variant rules – especially if it was as radical as setting a fixed proportion of marks to be awarded.

    Let’s get back to the problem. Is it that too many people are meeting the criteria for first class work, as we understand it, seen through the lens of thousands of different courses in hundreds of different providers? Or is it that we want fewer people to be put in the ‘top’ class? If the latter, then the Gove GCSE solution applies – make a ‘First’ the same as a 9 and give it to a fixed number of people who meet the criteria for the top grade. It’ll be arbitrary and will lead to all sorts of problems.

    With degree classification, the paradox is Sorites.

  12. Putting to one side whether IM’s proposed cure is worse than the disease, I’d like to rewind to the start of the piece and ask whether grade inflation actually matters. IM suggests three reasons why it does matter:

    It plays into the hands of those who wish to devalue the sector.
    It threatens institutional autonomy.
    It undermines the credibility of the sector.

    This ignores the possibility that the sector has adopted grade inflation as the best available solution to a set of problems and that, by and large, the solution is accepted politically and socially.

    If grade inflation is curbed, the sector would face two problems.

    Firstly, students would attempt to use legitimate means to maximise their grades, by finding ways of gaming the system. They would, for example, compete with peers and seek ways of undermining their confidence; there would be little incentive to cooperate and even less to help others become better learners. In the event of any trade-off between protecting a grade and additional learning, students ought always to prefer grade maximisation unless incentives favouring another behaviour are somehow built into assessment. So, for example, a rational student would take the easiest route available to attaining a given target grade. There would also be incentives to pressurise, manipulate or complain about academics, and to gain more attention/time from instructors at the expense of other students.

    Secondly, removal of grade inflation, where it has been attempted in the USA, has resulted in a flight of applicants and a deterioration in well-being (= even greater demand for welfare services).

    Provided that a university runs a fair and competitive admissions process, and provided that employers invest in recruitment systems that do not rely exclusively on degree grade signalling – what is intrinsically wrong with grade inflation? If it is a problem, why does it exist – doesn’t its existence indicate that it is in fact a solution?

  13. Mike is absolutely right. This paper offers a fair analysis of the situation, but some kind of norm referencing is definitely not the answer, for all the reasons Mike has argued. The solution is calibrated external examiners in each subject discipline, as the current HEA-led project is developing. If successful, a 2.1 in history at Oxford Brookes should be the same standard as a 2.1 at Oxford. Which institution gets more will be most interesting and a good metric for the TEF!

  14. I’m sorry Iain, but a criterion-based system is not in line with what you propose, because you are structuring it backwards. You want to fix the criteria to give you the grade proportions you/the university want. But the criteria should be focussed on what is required to pass the learning outcome – just, well or exceptionally. It is perfectly reasonable that from year to year the proportion getting a particular grade will change, perhaps because of the quality of the cohort, or the teaching!
    Only calibrating academics in the discipline across the sector can truly make grading – and hence classifications – reliable.

  15. I don’t think we should let the ‘employers say’ comment go untested. Comments from employers have not been systematically or formally gathered over the years (or indeed challenged for their evidential base) in any reliable sense. Employers are not a homogeneous group. Anecdotes about graduates not being able to do what they could do ‘back in the day’ are not a sound basis in themselves on which to question graduate achievement. Surely the demands of any workplace have changed over the years, and employers may wish to test out, train for and instil different skills emerging as a result of these changes in the workplace, not necessarily addressed in a degree course? Otherwise why would Deutsche Bank recruit theologians from Cambridge to become accountants?
