Fears of grade inflation are on the rise. Again. But the data suggests that government pressure on universities might be misplaced and is risking an unfair double blow for young students.
Once universities complained to the government about grade inflation. Back in that age of number controls and students “scrambling” for places, some universities believed that eroded A level standards were inflating the numbers achieving top grades. This, they argued, was making their job of selecting the “best of the best” impossible.
Now the grade inflation complaint runs the other way. The government wants to know why more students are gaining “good” (seemingly, first or 2:1 class) degrees than its statisticians calculate they deserve, and grade inflation is suspected: this time by universities overly keen to please their paying customers. Ministers have been sharply critical.
It has been quite a reversal in just a decade. In working with universities at dataHE we spend a lot of time helping them gauge and respond to changes in grades. Our analysis suggests these grade inflation complaints are two sides of the same coin. It is likely that the true attainment of today’s young people is being seriously underestimated, putting them at a disadvantage, and damaging universities in the process.
The quiet decade for A level grades, and what went on before
Few factors are more important to the statistical understanding of higher education than the simple summary measure of A level points achieved from the highest graded three A levels. This ranges from 18 (for three A* grades) to 3 (for three E grades), with each increase in grade yielding an additional point. In pretty much any analysis of what goes on in the HE system, this will be the most powerful factor. The distribution of 18-year-old UCAS applicants achieving each of those point totals has been remarkably stable in recent years (Figure 1).
Figure 1: Cumulative distribution of achieved A level points (young UCAS applicants)
Source: End of Cycle data resources 2018, www.ucas.com
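The scoring described above can be sketched in a few lines of code. The per-grade values are inferred from the stated endpoints (three A* grades = 18 points, three E grades = 3 points, one point per grade step); the function name is ours, for illustration only.

```python
# Points per grade, inferred from the endpoints given in the text:
# three A* = 18 points, three E = 3 points, one point per grade step.
GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def a_level_points(grades):
    """Sum the points from the highest-graded three A levels."""
    scores = sorted((GRADE_POINTS[g] for g in grades), reverse=True)
    return sum(scores[:3])

print(a_level_points(["A*", "A*", "A*"]))   # 18
print(a_level_points(["A", "B", "B"]))      # 13
print(a_level_points(["B", "C", "D", "E"])) # highest three count: 9
```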
The trend remains becalmed when we convert it from points to the typical A level grade achieved by these young HE hopefuls (Figure 2). We do this so we can compare it to the distribution of all grades awarded at A level, published by JCQ (Figure 3). There are numerous technical differences between these populations. But they are not likely to alter the obvious conclusion from the graphs: the boringly stable profile of grades attained by young UCAS applicants in recent years springs from the boringly stable profile of grades awarded by the A level awarding bodies.
Figure 2: Cumulative distribution of achieved average grade (young UCAS applicants)
Source: End of Cycle data resources 2018, www.ucas.com
Figure 3: Cumulative distribution of achieved grades (all awarded A level grades)
Source: Published JCQ data, jcq.org.uk
Life wasn’t always this quiet for A level grades. Between the mid-1980s and 2010, grades were intended to reflect the absolute level of attainment of candidates. That is, to be absolutely referenced (or “criterion-referenced”), rather than relatively referenced (or “norm-referenced”). Under this system, if those taking A levels did better than their peers from previous years, then grades would change. More higher grades would be awarded. Fewer lower grades would be awarded. And the average grade awarded would increase.
And increase it generally did. The profile gradually shifted from lower to higher grades (Figure 4). And the mean grade achieved rose (Figure 5). Those trends stopped in 2010.
Figure 4: Cumulative distribution of achieved grades (all awarded A level grades)
Source: JCQ provisional sequence 2001-2019, QCA final sequence 1992-2000, awarded grades only
Figure 5: Mean grade achieved (where awarded, A* remapped)
Source: JCQ provisional sequence 2001-2019, QCA final sequence 1992-2000
The abrupt change of trend in 2010 isn’t a mystery. Or an accident. This was when Ofqual – reacting to that earlier round of grade inflation complaints – deliberately changed the concept of grades to be more relative in nature. This is the “comparable outcomes” policy, where the expectation is that the distribution of grades from year to year will be broadly the same. In part, this was to prevent candidates being disadvantaged whenever there was a major change to A levels.
But it also changed the underlying assumptions of what grades measure through time. In practice, the comparable outcomes period looks very similar to the quota system used before the 1980s when grades were strictly relative. An A grade simply meant you were in the top 10 per cent who entered in that particular year. It wasn’t meant to say anything about how attainment levels were changing through time. And didn’t.
Suppose people were simply getting better
Measuring attainment is a difficult specialism. There are differing views on what is best. Some argued that the trend in increasing A level grades prior to 2010 was grade inflation, pure and simple. But what if it wasn’t?
Suppose that the increase in grades in that absolute-measurement period was, in truth, mostly a steady rise in underlying educational attainment. Over time you would expect that rise to drive both higher proportions entering for the exam, and for the grade distribution of the results to shift upwards. This wouldn’t be exceptional. Education attainment levels have generally increased through time. As an extreme example, literacy among the adult population in the UK is a lot higher now than it was hundreds of years ago.
If A level attainment was increasing steadily prior to 2010 then it seems likely it would have continued to do so. It just isn’t allowed to show up in higher grades anymore. Suppose people had continued to do better. This would mean we are now in a world where A levels suffer from the opposite of grade inflation: grade deflation. Instead of grades becoming easier to achieve through time (“inflated” relative to real attainment), they become harder to achieve (“deflated” against real attainment).
Extrapolating forward the previous trend of increasing attainment can give an indication of how large this deflation effect has become. It turns out to have now reached around 0.3 of a grade per exam entry (Figure 6).
Figure 6: Mean grade, actual and a ‘no deflation’ model
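The shape of that extrapolation can be sketched as follows. This is a minimal illustration of the method, not our actual model: the mean-grade series here is invented for the purpose (a linear pre-2010 rise, then a flat post-2010 mean), and the parameter values are assumptions chosen only to show how a deflation figure of roughly 0.3 of a grade emerges.

```python
# Illustrative sketch of the deflation estimate, NOT the real JCQ series.
# Fit a linear trend to the pre-2010 mean grade per entry (E=1 ... A*=6),
# extrapolate it to 2019, and compare with the roughly flat mean actually
# awarded under comparable outcomes.

# Hypothetical pre-2010 mean grades: an assumed steady rise, for illustration.
pre_2010 = {year: 3.30 + 0.033 * (year - 2000) for year in range(2000, 2010)}

# Least-squares line through the pre-2010 points (closed form).
xs, ys = zip(*pre_2010.items())
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

actual_2019 = 3.60  # assumed flat post-2010 mean, illustrative only
expected_2019 = intercept + slope * 2019
deflation = expected_2019 - actual_2019
print(f"modelled deflation in 2019: {deflation:.2f} grades per entry")
```

With these invented inputs the gap comes out at about 0.33 of a grade per entry, in the same ballpark as the figure quoted above.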
With a few further assumptions, we can convert this calculation into its (rough) equivalent for the A level points held by young UCAS A level applicants. If real attainment had continued to increase, in line with its long term trend, then the average applicant would have achieved just under ABB in 2019. In fact, they were awarded just under BBB. So, with these assumptions, the “comparable outcomes” induced grade deflation has robbed the 2019 applicant of a full A level grade. Their results would have been a grade better if they were converted to “2010 money”.
Figure 7: Actual and corrected achieved points for young applicants
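The conversion from per-entry deflation to applicant points can be checked with simple arithmetic. The 0.3 figure is the modelled deflation quoted above; treating an applicant as holding three equally deflated entries is our simplifying assumption.

```python
# Rough conversion: per-entry grade deflation -> points across three A levels.
deflation_per_entry = 0.3  # modelled figure quoted in the text
entries = 3                # highest-graded three A levels
points_lost = deflation_per_entry * entries
print(points_lost)  # roughly 0.9, i.e. about one full grade in one subject

# Sanity check against the ABB vs BBB example (A = 5 points, B = 4 points):
abb = 5 + 4 + 4  # 13
bbb = 4 + 4 + 4  # 12
print(abb - bbb)  # one point = one grade step in one subject
```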
Does any of this matter?
University entry is mostly a form of relative competition within a single exam cohort. So grade deflation doesn’t automatically cause an entry problem for applicants – if universities understand what grades are doing. But there might be areas where this powerful grade deflation could be causing problems for young people and universities. Here are two examples.
The first is the damage from the charge that the sector is “dumbing down”. This charge holds that, in contrast to the past, universities are now admitting people whose attainment is simply not good enough for higher education. That the average A level grades of UCAS acceptances have been going down provides fuel for this view. There is plenty to argue about here in terms of who university is, and isn’t, for. But declining A level entry grades shouldn’t be the trigger for that argument. If you correct for the modelled grade deflation (Figure 8), average grades held by UCAS applicants who get into university have not been going down. They have been going up.
Figure 8: Recorded and deflation adjusted A level points for young UCAS placed applicants
The second problem is where post-2010 grade data is used for analysis through time. Particularly so if that analysis is used by government to pursue policy. Which takes us back to those sharply worded complaints of degree grade inflation that the government has levelled at universities, and its calls for action to stop it.
These rest on Office for Students statistical models of degree grade inflation. A level attainment is a very powerful factor in that model, and rightly so: the stronger your A level grades, the better your odds of getting a higher class degree.
But the way the model is built effectively assumes that A level grades are an absolute measure of educational attainment that are stable through time. With this model construction, if universities maintain their academic standards then it is inevitable that the neglected A level grade deflation will pop up as degree grade inflation. But it would be a false signal. Degree quality would be unchanged. It is the measure of the input quality that has changed.
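The false-signal mechanism can be shown with a toy simulation. This is our own construction, not the OfS model: the attainment drift, cohort sizes, grade cutoff, and fixed degree threshold are all assumed numbers chosen only to illustrate the logic. True attainment rises each year; an A grade is pinned to the top 10 per cent of that year’s cohort (comparable outcomes); the degree standard is absolute and unchanged. A model that reads the A grade as a stable absolute measure then sees the “good degree” rate among A-grade students climbing, and reports inflation where there is none.

```python
# Toy illustration (our construction, not the OfS model): rising real
# attainment plus a fixed year-on-year grade distribution produces an
# apparent rise in good-degree rates at a given grade, even though the
# degree standard itself never moves.
import random

random.seed(0)

def cohort(year, size=10_000):
    # True attainment drifts upward over time (assumed, for illustration).
    drift = 0.03 * (year - 2010)
    return [random.gauss(drift, 1.0) for _ in range(size)]

def good_degree_rate_at_grade_A(year):
    students = cohort(year)
    # "Comparable outcomes": grade A = top 10% of that year's cohort,
    # regardless of absolute attainment.
    cutoff = sorted(students)[int(0.9 * len(students))]
    a_students = [s for s in students if s >= cutoff]
    # Degree standard is absolute and fixed: a "good degree" requires
    # attainment above a constant (assumed) threshold.
    return sum(s > 1.6 for s in a_students) / len(a_students)

rates = {year: good_degree_rate_at_grade_A(year) for year in (2011, 2015, 2019)}
for year, rate in rates.items():
    print(year, round(rate, 3))
```

The printed rate rises across the years even though the degree threshold is constant throughout: under these assumptions, all of the apparent “inflation” comes from the input measure, not the output standard.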
Our proposed A level grade deflation might not be a big enough effect to account for all the degree grade increases seen. But it would be a very substantial effect. We think that this, and other potential weaknesses in the model, do amount to reason enough to look again at the models and their conclusions. Meanwhile, government might want to think again about its pressure on universities to make it harder for students to get “good” degrees. Otherwise a double whammy for young people looms: those who have already been hit by deflated A level grades risk being hit again with a lower degree class than their attainment deserves.