Fears of grade inflation are on the rise. Again. But the data suggests that government pressure on universities might be misplaced, risking an unfair double blow for young students.
Once universities complained to the government about grade inflation. Back in that age of number controls and students “scrambling” for places, some universities believed eroded A level standards were piling up the numbers achieving top grades. This, they argued, was making their job of selecting the “best of the best” impossible.
Now the grade inflation complaint runs the other way. The government wants to know why more students are gaining “good” (seemingly, first or 2:1 class) degrees than their statisticians calculate they deserve, and grade inflation is suspected – this time by universities overly keen to please their paying customers. Ministers have been sharply critical.
It has been quite a reversal in just a decade. In working with universities at dataHE we spend a lot of time helping them gauge and respond to changes in grades. Our analysis suggests these grade inflation complaints are two sides of the same coin. It is likely that the true attainment of today’s young people is being seriously underestimated, putting them at a disadvantage, and damaging universities in the process.
The quiet decade for A level grades, and what went on before
Few factors are more important to the statistical understanding of higher education than the simple summary measure of A level points achieved from the highest graded three A levels. This ranges from 18 (for three A* grades) to 3 (for three E grades), with each increase in grade yielding an additional point. In pretty much any analysis of what goes on in the HE system, this will be the most powerful factor. The distribution of 18-year-old UCAS applicants achieving each of those point totals has been remarkably stable in recent years (Figure 1).
Figure 1: Cumulative distribution of achieved A level points (young UCAS applicants)
Source: End of Cycle data resources 2018, www.ucas.com
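The points measure described above is simple enough to sketch in code. This is an illustrative implementation under the stated mapping (one point per grade step, E=1 up to A*=6); the function name and grade labels are ours, not from any published specification.

```python
# Illustrative sketch of the A level points measure: sum the points of
# the highest-graded three A levels, assuming one point per grade step.
GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def top_three_points(grades):
    """Return total points from the highest-graded three A levels."""
    points = sorted((GRADE_POINTS[g] for g in grades), reverse=True)
    return sum(points[:3])

print(top_three_points(["A*", "A*", "A*"]))    # 18, the maximum
print(top_three_points(["E", "E", "E"]))       # 3, the minimum
print(top_three_points(["A", "B", "B", "C"]))  # 13: only the best three count
```

Note that a candidate with four A levels is scored on the best three, which is why the measure is comparable across applicants taking different numbers of subjects.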
The trend remains becalmed when we convert it from points to the typical A level grade achieved by these young HE hopefuls (Figure 2). We do this so we can compare it to the distribution of all grades awarded at A level, published by JCQ (Figure 3). There are numerous technical differences between these populations. But they are not likely to alter the obvious conclusion from the graphs: the boringly stable profile of grades attained by young UCAS applicants in recent years springs from the boringly stable profile of grades awarded by the A level awarding bodies.
Figure 2: Cumulative distribution of achieved average grade (young UCAS applicants)
Source: End of Cycle data resources 2018, www.ucas.com
Figure 3: Cumulative distribution of achieved grades (all awarded A level grades)
Source: Published JCQ data, jcq.org.uk
Life wasn’t always this quiet for A level grades. Between the mid-1980s and 2010 grades were intended to reflect the absolute level of attainment of candidates. That is, to be absolutely referenced (or “criterion-referenced”), rather than relatively referenced (or “norm-referenced”). Under this system, if those taking A levels did better than their peers from previous years then grades would change. More higher grades would be awarded. Fewer lower grades would be awarded. And the average grade awarded would increase.
And increase it generally did. The profile gradually shifted from lower to higher grades (Figure 4). And the mean grade achieved rose (Figure 5). Those trends stopped in 2010.
Figure 4: Cumulative distribution of achieved grades (all awarded A level grades)
Source: JCQ provisional sequence 2001-2019, QCA final sequence 1992-2000, awarded grades only
Figure 5: Mean grade achieved (where awarded, A* remapped)
Source: JCQ provisional sequence 2001-2019, QCA final sequence 1992-2000
The abrupt change of trend in 2010 isn’t a mystery. Or an accident. This was when Ofqual – reacting to that earlier round of grade inflation complaints – deliberately changed the concept of grades to be more relative in nature. This is the “comparable outcomes” policy, where the expectation is that the distribution of grades from year to year will be broadly the same. In part, this was to prevent candidates being disadvantaged whenever there was a major change to A levels.
But it also changed the underlying assumptions of what grades measure through time. In practice, the comparable outcomes period looks very similar to the quota system used before the 1980s when grades were strictly relative. An A grade simply meant you were in the top 10 per cent who entered in that particular year. It wasn’t meant to say anything about how attainment levels were changing through time. And didn’t.
Suppose people were simply getting better
Measuring attainment is a difficult specialism. There are differing views on what is best. Some argued that the trend in increasing A level grades prior to 2010 was grade inflation, pure and simple. But what if it wasn’t?
Suppose that the increase in grades in that absolute-measurement period was, in truth, mostly a steady rise in underlying educational attainment. Over time you would expect that rise to drive both higher proportions entering for the exam, and for the grade distribution of the results to shift upwards. This wouldn’t be exceptional. Education attainment levels have generally increased through time. As an extreme example, literacy among the adult population in the UK is a lot higher now than it was hundreds of years ago.
If A level attainment was increasing steadily prior to 2010 then it seems likely it would have continued to do so. It just isn’t allowed to show up in higher grades anymore. Suppose people had continued to do better. This would mean we are now in a world where A levels suffer from the opposite of grade inflation: grade deflation. Instead of grades becoming easier to achieve through time (“inflated” relative to real attainment), they become harder to achieve (“deflated” against real attainment).
Extrapolating forward the previous trend of increasing attainment can give an indication of how large this deflation effect has become. It turns out to have now reached around 0.3 of a grade per exam entry (Figure 6).
Figure 6: Mean grade, actual and a ‘no deflation’ model
With a few further assumptions, we can convert this calculation into its (rough) equivalent for the A level points held by young UCAS A level applicants. If real attainment had continued to increase, in line with its long term trend, then the average applicant would have achieved just under ABB in 2019. In fact, they were awarded just under BBB. So, with these assumptions, the “comparable outcomes” induced grade deflation has robbed the 2019 applicant of a full A level grade. Their results would have been a grade better if they were converted to “2010 money”.
Figure 7: Actual and corrected achieved points for young applicants
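The extrapolation behind this estimate can be sketched as follows. The numbers below are illustrative stand-ins, not dataHE’s actual series: we assume a mean grade per entry rising linearly before 2010, then held flat by comparable outcomes, and compare the continued trend with the flat actual.

```python
# Minimal sketch of the 'no deflation' extrapolation: fit a linear trend
# to the pre-2010 mean grade, project it forward, and compare with the
# flat post-2010 actual. All numbers here are illustrative assumptions.

def fit_trend(years, values):
    """Ordinary least-squares slope and intercept for a linear trend."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
             / sum((x - mean_x) ** 2 for x in years))
    return slope, mean_y - slope * mean_x

# Assumed mean grade per entry (C=3.0, B=4.0), rising ~0.033/year pre-2010
pre_2010 = {year: 3.0 + 0.033 * (year - 2000) for year in range(2000, 2011)}
slope, intercept = fit_trend(list(pre_2010), list(pre_2010.values()))

actual_2019 = pre_2010[2010]              # held flat after 2010
modelled_2019 = slope * 2019 + intercept  # trend continued
deflation = modelled_2019 - actual_2019
print(round(deflation, 2))  # ~0.3 of a grade per entry, as in Figure 6
```

With these assumed inputs the gap comes out at roughly 0.3 of a grade per entry, which across three A levels is how a deflation effect of around a full grade per applicant can accumulate.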
Does any of this matter?
University entry is mostly a form of relative competition within a single exam cohort. So grade deflation doesn’t automatically cause an entry problem for applicants – if universities understand what grades are doing. But there might be areas where this powerful grade deflation could be causing problems for young people and universities. Here are two examples.
The first is the damage from the charge that the sector is “dumbing down”. This charge has it that – in contrast to the past – universities are now admitting people whose attainment is simply not good enough for higher education. That average A level grades for UCAS acceptances have been going down provides fuel for this view. There is plenty to argue about here in terms of who university is, and isn’t, for. But declining A level entry grades shouldn’t be the trigger for that argument. If you correct for the modelled grade deflation (Figure 8), average grades held by UCAS applicants who get into university have not been going down. They have been going up.
Figure 8: Recorded and deflation adjusted A level points for young UCAS placed applicants
The second problem is where post-2010 grade data is used for analysis through time. Particularly so if that analysis is used by government to pursue policy. Which takes us back to those sharply worded complaints of degree grade inflation that the government has levelled at universities, and its calls for action to stop it.
These rest on Office for Students statistical models of degree grade inflation. A level attainment is a very powerful factor in that model. And rightly so because the stronger your A level grades the better your odds of getting a higher class degree.
But the way the model is built effectively assumes that A level grades are an absolute measure of educational attainment that are stable through time. With this model construction, if universities maintain their academic standards then it is inevitable that the neglected A level grade deflation will pop up as degree grade inflation. But it would be a false signal. Degree quality would be unchanged. It is the measure of the input quality that has changed.
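The mechanism can be illustrated with a toy calculation. Everything below is invented for illustration: a model calibrated on 2010 data predicts the share of “good” degrees from recorded A level points, while actual outcomes track true attainment. When comparable outcomes holds recorded points flat, the rise in outcomes shows up as an “unexplained” residual.

```python
# Illustrative toy model (invented numbers) of the false signal: degree
# outcomes follow *true* attainment, but the model only sees *recorded*
# A level points, which comparable outcomes holds flat.

true_points_2010, true_points_2019 = 12.0, 13.0  # real attainment rises
recorded_2010, recorded_2019 = 12.0, 12.0        # recorded grades held flat

def predicted_good_share(recorded_points):
    """Toy model calibrated on 2010: good-degree share = 0.05 * points."""
    return 0.05 * recorded_points

actual_good_2019 = 0.05 * true_points_2019       # outcomes track real attainment
predicted_2019 = predicted_good_share(recorded_2019)

unexplained = actual_good_2019 - predicted_2019
print(f"{unexplained:.2%} of 'good' degrees flagged as unexplained inflation")
```

In this toy setup degree standards never move, yet the model attributes a five percentage point rise in good degrees to “inflation”, purely because the input measure was deflated.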
Our proposed A level grade deflation might not be a big enough effect to account for all the degree grade increases seen. But it would be a very substantial effect. We think that this, and other potential weaknesses in the model, do amount to reason enough to look again at the models and their conclusions. Meanwhile, government might want to think again about its pressure on universities to make it harder for students to get “good” degrees. Otherwise a double whammy for young people looms: those who have already been hit by deflated A level grades risk being hit again with a lower degree class than their attainment deserves.
Thanks for this very helpful and illuminating analysis. It puts the finger on the key issue of relative vs absolute grades.
What would the percentage A grades (or average points) be by say 2040 if the 1992-2009 trends continued?
To add to this, we would surely expect more students to do better than historically given the dramatic and fairly recent changes in the learning environment. When I was a student, for example, it took 3 weeks to get hold of a copy of an article through an inter-library loan if we didn’t hold the journal, I had to hand-write assignments, there was no allowance made for any form of learning disability, marks were not transparent, there was no google, no VLE, no laptops…need I go on? Students can now gather and assimilate vast quantities of high quality information – and…
Attempts by the sector to explain away unprecedented improvements in degree outcomes risk fuelling public perceptions of a defensive, self justifying University system. Universities need to get a bit more savvy…
Interesting. There is also, I believe, a mathematical issue; for some years pressure was put on universities to use the full range of marks in the First Class classification i.e. 70 – 100% whereas previously many had used e.g. 70 – 85% to be closer to the available range of marks in other degree bands. It seems to me that this change could also cause an upward trend and that this should be analysed. If true, it is another reason why the current narrative regarding “grade inflation” is unfair to students as it may in fact be that previous students…
Hmmmm
“if A level attainment was increasing steadily prior to 2010 then it seems likely it would have continued to do so.” Actually no. Ceilings do exist.
Also, this argument surely only works if the degree class inflation were solely about “classification compared to A-level grades” rather than “percentage of higher classifications awarded”
At the ex poly where I teach, students are in general weaker and spoon fed. Teaching may well have improved but students who in the past would have achieved a 2:2 are now getting 1sts.
This analysis seems to assume that the data is measuring a relatively static system over time. While some significant backwaters may assume the system is never-changing in fact this is far from the truth for the system as whole. In fact there have been some quite massive macro/system level changes going on since the 1980s, even more so since the 1970s when A level grades were quite explicitly based on relative performance. These changes include: 1) growth of nursing as a major degree level discipline, replacing previous entirely separate arrangements where A levels played no significant part; 2) massive decline…
I agree. Although we do have some excellent students, generally there are many students who at Level 6 are really not independent learners. We are measured on internal student satisfaction with specific units. We are expected to spoon feed students e.g. seeing past examples of work, lots of assignment support workshops etc. This means students are not really thinking for themselves any more. Therefore many students receive a lot of assistance to get them a higher grade that they probably could not have achieved alone.
Thanks for the comments. Some clarifications: Under the extrapolated trends in the analysis, average attainment of A level UCAS applicants in 2040 would be around ‘AAA’ compared to just under ‘BBB’ (thirty years earlier) in 2010. In our experience, long run data series generally do show education levels increasing rather than ceilings being hit. Indeed, eventually grade systems and entire levels of assessment often become saturated, typically associated with progressively higher ages of leaving full-time education. Literacy is mentioned as a very long run example. More recently population attainment of 5 A-C at GCSE/O level has risen from around 10%…
The assumption that BTEC students are weaker at degree level primarily because of poorer GCSE entry is a contested concept. BTEC and similar qualifications tend to emphasise work-based and vocational skills, such as technical ability and team-working, rather than the rote learning and assessment revision techniques needed for A levels. I recall a review by ‘blue chip’ employers over 20 years ago came up with a profile of ‘ideal’ graduates that exactly matched the skills of those pursuing BTEC qualifications (including HND), rather than those delivered by the honours level academically-focussed Russell Group universities entering with A levels, from which…
The claim based on data that there is not a ceiling only makes sense if you assume grading standards are constant. They aren’t (speaking here as a teacher for over 25 years )
Cath is absolutely right. And as many of us have argued, it’s questionable whether in fact there are really any standards at all given that there is no systematic comparison of assessment standards across institutions or even departments. Unless assessors within disciplines calibrate their marking standards with each other this whole question about grade inflation remains at best speculation and guess work – however much one may try to use maths to prove a case!
‘Spoon feeding’ methods such as seeing past examples, assignment support workshops etc. help to enhance students’ independent learning through the development of assessment literacy. I wonder whether an academic would consider submitting an article for publication without looking at many other examples of papers published in the chosen journal, or without research and publication support workshops or feedback from colleagues on drafts? Why shouldn’t students have similar opportunities to help them improve?
‘Many students receive a lot of assistance to get them a higher grade that they probably could not have achieved alone.’ I call that ‘teaching’ and don’t think it should be discouraged in Universities.
At the ex poly where I teach, I invariably write the students’ aims and objectives for their dissertations. Most are sorely incapable of doing this themselves. It’s certainly not teaching, it’s spoon feeding. I have colleagues who practically write the whole thing for them.
The change in A-level grading introduced in 2010 could well affect estimates of ‘class of degree inflation’. It would be worth re-running the OfS modelling using estimated post 2010 A-level grades adjusted to make them approximately criterion referenced. That said, my best guess is that such a ‘what if’ calculation would not make a material difference to the estimate of class of degree inflation. The A-level scores will typically refer to the academic year three or more years earlier than graduation, so the 2010 change in A-level grading can only impact the 2013 and later graduation years. If the missing…
Thanks John for that very good point. You’re absolutely right: if grade deflation was all that was going on, A levels are taken in the year of entry, and the model has got everything under control, then we would expect to see a linear progressive effect (like the gap between the lines in Figure 8 above) in ‘unexplained’ degree class awarded from 13-14 onwards (it is only 2011 A levels that start to fall behind where they ‘should’ have been). This isn’t what the raw figures show – with some increases in ‘unexplained’ between 2010-11 and 2012-13 as you note. However…
[…] (2) Who you get an offer from and what the conditions are, for example, together with censoring effects at the limits of the points scale. The relationship has also changed through time for a number of reasons, possibly including the deflation of exam awarded grades (https://wonkhe.com/blogs/grade-inflation-run-wild/). […]
[…] Grade inflation is an inexact description of what happens when students attain higher grades from one year to the next. This may be because students or teachers are getting better, exams are getting easier, or because the covid-19 shock to the education system produced unforeseeable and uncontrollable consequences. In reality there is no way to accurately pinpoint reasons for what is a deeply ambiguous situation. Nevertheless, shifting attitudes to grade inflation are a social and political issue that have influenced education policy and A-level outcomes in recent decades. […]
[…] the proportion at higher grades rose steadily, from 2010-2019 the distribution was designed to be more or less fixed. Since 2020 annual moves, each of previously unimaginable magnitude, changed the distribution. […]