Each year, most university applicants receive a set of predicted grades for their A level (or equivalent) qualifications two terms before they sit their final exams. In 2020, a second predicted grade, set around the time the exams would have happened, replaces the A level itself.

There is no comprehensive official guidance as to how UCAS predicted grades are arrived at, and – in general – they overestimate actual A level results by around two to three grades across three A levels. Research by Cambridge Assessment in 2014 found that 43 per cent of predicted grades were accurate, and 43 per cent were over-optimistic.

Opinions as to the usefulness of A levels in predicting higher education success are mixed, and older readers may recall that the government once argued that GCSEs were the best predictor of the ability to get a good degree. And yet, in general, higher education works for most students. Very few fail to complete their studies – the UK is an international leader in retaining students. And, in general, students are satisfied with their courses. Just under two thirds would not change their provider or course after three years of study.

So we can be cautiously optimistic that grades in 2020 will – on the whole – be fit for purpose in terms of admissions. But an overall judgement is not the same as being fair in every single individual case.

What happens when exams are cancelled?

Following the decision to close all UK schools and cancel exams in light of the current Covid-19 emergency, significant attention has been focused on the way the absence of A level results will affect university admissions. There have been sensible statements from all of the expected sensible people – urging providers to be flexible, to avoid moving to blanket unconditional offers, and to put the interests of the entire 2020-21 student cohort first.

To support this "business as usual where possible" stance, students will be awarded "calculated" grades for any exam they were entered for. This choice of language is deliberate – "predicted" grades sound speculative, whereas calculated grades sound like the application of hard science.

Under emergency guidance, assessment centres (generally schools, colleges, and similar) will have to submit two pieces of information for each student in every subject.

  • An estimated grade – based on what they would expect the student to obtain if assessment were proceeding as normal.
  • Their place in an ordered list of all candidates for that subject in that centre.

Ofqual and DfE will then use a new statistical process to generate the grades that go to students and to UCAS. This will take into account the overall spread of marks across the country, and the historic performance of that particular test centre. Importantly for universities, there is no indication of the date these will be released – although the hint is that it would be slightly earlier than the usual second week of August.
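The two pieces of information above can be pictured as a simple record per student per subject. This is a sketch only – the field names (`estimated_grade`, `rank`) and the `Submission` type are invented for illustration, not official – but it captures the shape of what centres submit, including the rule (noted below) that tied rankings are forbidden.

```python
# Hypothetical sketch of a centre's submission; names are illustrative,
# not taken from any official Ofqual/DfE specification.
from dataclasses import dataclass

@dataclass
class Submission:
    student: str
    subject: str
    estimated_grade: str  # the teacher's holistic estimate, e.g. "B"
    rank: int             # position in the centre's ordered list (1 = best)

def validate(entries):
    """Tied rankings are expressly forbidden: within each subject,
    every rank in the ordered list must be unique."""
    by_subject = {}
    for e in entries:
        by_subject.setdefault(e.subject, []).append(e.rank)
    for subject, ranks in by_subject.items():
        if len(ranks) != len(set(ranks)):
            raise ValueError(f"tied ranks in {subject}")
```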

The estimated grade

Teachers get surprisingly little guidance on the initial grade estimation. In a nutshell, it is expected that this “holistic professional judgement” will draw on:

  • records of each student’s performance over the course of study, including for example progress review data and available work
  • performance on any non-exam assessment – where these already exist for re-sitting students
  • AS results in that subject if these exist
  • performance on any class or homework assessments and mock exams
  • previous results in the centre in this subject, and the performance of this year’s students compared to those in previous years
  • any other relevant information

There’s no precise rubric or weighting – in an education system that has done much to de-privilege professional judgement, it honestly feels quite old-fashioned. How teachers perform against these expectations will be fascinating to watch.

For students with disabilities, teachers will be asked to estimate performance as if the expected mitigation (more time to do the exam, a scribe, an alternative assessment) had been applied. For home-schooled students and others who have studied outside of their assessment centre, further guidance will follow.

The ordered list

After this, teachers or teaching teams will rank pupils from best performing to worst performing, in a confidential list that will be submitted alongside the estimated grades. The guidance urges teachers to:

“discuss the rank order and come to a shared view of the standard being applied within their centre. We recognise that this will be challenging for some centres and in some subjects, and in the current circumstances.”

The difficulty in separating the performance of two students who are both likely to get, say, a B is noted (tied rankings are expressly forbidden) – and we also see an awareness of the problems in dealing with large cohorts where no one teacher will know the work of every student.

Statistical standardisation

Somewhat terrifyingly, the precise statistical process to be deployed has not yet been finalised. We are told it will draw on:

  • expected grade distributions at national level
  • results in previous years at individual centre level
  • the prior attainment profile of students at centre level

and that the rank order of students within a centre will not change. Drawing primarily on the existing curve and prior attainment at centre level, whole centres will be moved up or down. This will be a delicate operation, and the stated aim – to ensure students are not unfairly advantaged or disadvantaged – is a noble one.
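Since the precise statistical process has not been finalised, we can only guess at its shape. One hypothetical way to move a whole centre towards a target grade distribution while leaving the submitted rank order untouched is to allocate grades by cumulative quota – this is an invented sketch, not the actual Ofqual method:

```python
# Hypothetical sketch: the real Ofqual process is not yet finalised.
# Given a centre's students in rank order (best first, no ties) and a
# target grade distribution (e.g. derived from the centre's historical
# results), assign grades by cumulative quota. Because students are
# processed strictly in rank order, the rank order is preserved.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def standardise(ranked_students, target_distribution):
    """ranked_students: list of names, best first.
    target_distribution: dict grade -> expected proportion (sums to 1)."""
    n = len(ranked_students)
    result = {}
    cumulative = 0.0
    boundary = 0
    for grade in GRADES:
        cumulative += target_distribution.get(grade, 0.0)
        upper = round(cumulative * n)  # cumulative quota in whole students
        for student in ranked_students[boundary:upper]:
            result[student] = grade
        boundary = upper
    # any students left over after rounding get the lowest grade
    for student in ranked_students[boundary:]:
        result[student] = "U"
    return result
```

Even in this toy version, the delicacy of the operation is visible: a student's final grade depends as much on the centre's historical curve and their neighbours in the ranking as on their own estimated grade.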

Has anything like this ever been done before?

Yes. Students who miss examinations through serious illness are sometimes awarded calculated grades. Though this is not a widespread phenomenon, the government guidance suggests Ofqual will be drawing on this experience in designing the new system.

There is pre-existing Joint Council for Qualifications guidance on the “special consideration” process, which outlines just how rare this calculation option is. In most cases, those who miss an exam through illness and have not completed more than 25 per cent of the assessed work (that would be most A level candidates, following the move away from coursework in recent years) would be offered an alternative date to sit their exam. Even students who die or become terminally ill and thus miss a final exam would get a letter of recognition rather than a calculated grade.

How are UCAS predicted grades usually calculated?

By far the most appropriate comparison is the process of producing “predicted” grades for UCAS. This is widespread, and takes place in just about every school and college in the UK. However, this is not – as far as we are aware – a standardised process, and it only happens for students applying to university ahead of their exams.

The only guidance out there seems to be from UCAS itself – this short page represents what is offered to the wider category of “advisers”. Predicted grades are meant to be:

  • In the best interest of applicants
  • Aspirational but achievable
  • Determined by professional judgement
  • Data-driven

But should not be:

  • Affected by pressure from students or guardians
  • Influenced by university entry requirements or behaviours

Schools are encouraged to set and document their own policies, but it is very surprising to see such sparse guidance at a national level. Another UCAS page notes that “some schools are more likely to ‘over predict’ than others”, suggesting that the principles above are not consistently applied.

The expertise and experience of teachers are vital in informing A level predictions, but predictions should also be informed by prior attainment at key stage 2 (SATs) and at GCSE. These grades contribute to what could best be described as a sniff test – “students like this, who enter my class with these grades, generally end up with these A levels” – plus a little extra for the “aspirational but achievable” component.

This draws on everything from the way the student performs in class, to the quality and organisation of the work they are handing in, to their overall attitude towards learning.
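That sniff test could be caricatured as a lookup plus one grade of optimism. The mapping below is entirely illustrative – the numbers, the `HISTORY` table, and the `predict` function are made up, not anyone's real model:

```python
# A caricature of the "sniff test": map prior attainment to the grade
# students with similar profiles typically achieve, then nudge upwards
# for the "aspirational but achievable" component. Entirely illustrative.

GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # worst to best

# Hypothetical mapping: mean GCSE points -> typical A level grade at this centre
HISTORY = {8: "A", 7: "B", 6: "C", 5: "D"}

def predict(mean_gcse_points, aspirational=True):
    typical = HISTORY.get(round(mean_gcse_points), "E")
    if not aspirational:
        return typical
    i = GRADES.index(typical)
    return GRADES[min(i + 1, len(GRADES) - 1)]  # one grade of optimism
```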

Adding to that

This is very much a professional judgement, and therefore difficult to systemise or prescribe. Though it is not perfectly reliable by any stretch of the imagination, it is reliable enough to base an entire offer-making structure on.

The Ofqual guidance attempts to make this process more accurate – remember “calculated”, not “predicted”? – but the only data added to this existing practice is the way the student has engaged with their learning in the 12 weeks between the point when predicted grades were released and the point when schools closed. And the ordered list.

The method suggests that the standard A level distribution will be preserved via a national moderation. In other years, a similar normalisation process takes place for exam results. It has been argued that this grading to a curve is a big part of the long-lamented difference between predicted and actual grades.

For future generations of educational researchers this will be a fascinating point of reference – for the first time we will have two complete sets of predicted grades, the second made two terms later and (I assume) without the “aspirational” component. Will teachers scale their predictions back?

And one final piece of good news. Most of the summer term of a traditional A level year is devoted to revision, so we don’t need to be too concerned about students missing new learning. Indeed, at a stressful point in their lives in a very stressful year, the decision to cancel exams gives 18 year olds a chance to look after themselves and others.
