Johnny Rich is Chief Executive of outreach organisation Push, and of the Engineering Professors’ Council, and a consultant.

In advance of the May 13th deadline for responses to DfE’s consultation on admissions, everyone is hoisting their flags.

UCAS has come out in favour of PQO (post-qualification offers) and now the University and College Union (UCU) and NEON have published a report supporting PQA (post-qualification applications).

Sadly, there are two major things wrong with their report – what they think the problem is and what they think the solution is.

Problem solving

The credentials of the UCU/NEON report are great – Graeme Atherton is a leading expert on disadvantaged students, and UCU reports are usually hard to contradict. But they argue that the big problem that needs solving is the use of predicted grades. It isn’t.

Even if it were, PQA would only make the real problem worse unless it was accompanied – or better still, preceded – by a raft of changes to the bigger picture, which the DfE isn’t even contemplating let alone planning.

The report says that most predicted grades are “wrong”. That’s true, but only as predictions of what the actual grade will be. They are not necessarily wrong in terms of being fair assessments of the student in question.

Predicted grades are decided by the expert view of a teacher who knows the potential of their students based on their past performance across a course.

Meanwhile, actual grades are an assessment of how a student did in an exam on a particular day when their dog might have just died, they’re on their period and the room’s too hot. What’s more, actual grades aren’t even an accurate assessment of that.

As Dennis Sherwood has shown, second examiners regularly disagree with the awarded mark and, as Ofqual’s head Dame Glenys Stacey told the Education Select Committee, the grades are only correct give or take a grade.

Saying that predicted grades are a bad predictor of actual grades is like saying that a calculation of how many sweets are in a jar based on its weight is a bad predictor of how many you might guess there are. It is, but that doesn’t mean it’s not useful if what you actually want to know is the number of sweets.

Tackling unfairness

If it comes to it, I’d rather admissions were based on a student’s future potential than on their past performance.

But there’s no denying it – predicted grades are on average higher than actual grades and, if they are on average less generous to disadvantaged students, then the UCU report has a point in saying that their use is unfair and a problem. So, is that the case?

It would seem not. Or at least, not always. Mark Corver’s excellent piece for HEPI’s collection of essays Where next for the future of admissions? suggests the opposite is true.

There is however a pattern of high-attaining disadvantaged students whose predictions lead them to “undermatch” – that’s to say, they pick an institution that is less selective in terms of grades than they could have achieved with their actual grades.

This framing assumes they’ve made a reluctant choice to “settle” for those institutions, which is disparaging to both the students and the institutions involved. It’s perfectly possible for someone capable of entry to, say, Oxford to make a positive choice to study somewhere else. They have the option of UCAS Adjustment if their choice was indeed made reluctantly. Vanishingly few people choose that option.

In any case, when we refer to “undermatched” disadvantaged students, we’re talking about around a thousand students a year. Overhauling the entire system in favour of an untested model for the sake of 0.2% of the cohort is a massive risk, particularly given that the unintended consequences of such a change may not only damage the other 99.8% of applicants, but may even make matters worse for the very students we want to help most.

Flaw filler

Don’t get me wrong. The current system is flawed. Deeply. And its biggest flaw is one that affects more than 10 per cent of the cohort.

In order for unplaced students to find places and unfilled places to find students, the current system requires the Clearing process. It’s called a “process” but it’s more of a free-for-all. Decisions on both sides are made in the heat of the moment in the face of rising desperation.

Unsurprisingly, matches made through Clearing are often sub-optimal. In fact, there’s a correlation between the numbers arriving at an HEI through Clearing and its drop-out rate. Correlation is not causation, but we all know the saying “act in haste, repent at leisure”. Dropouts are just the tip of an iceberg of bad matches.

During the summer and after the exam results, it’s difficult for applicants to access good careers advice, to take time to absorb it, to research, to explore options (visit unis) and generally make good choices without undue pressure.

Students with the greatest disadvantages (socioeconomic or other) are most in need of support and guidance (because their needs may not be typical of more traditional students), but are least able to get it.

Schools can’t help – various studies have shown that careers advice is weakest in the schools/colleges with the lowest progression rates. And parents can’t help – by definition, these students come from backgrounds with less experience of higher education.

It’s not surprising then that students from disadvantaged backgrounds are the ones most likely to use Clearing.

So, if you’re looking for a problem in the current system that stacks the odds against disadvantaged students, look no further than Clearing.

Screwball scramble

Which brings me to what’s wrong with the proposed PQA solution. In practice, it does away with predicted grades and with putting a small proportion of students into Clearing by, in essence, putting everyone into Clearing instead.

All decision-making would happen over the summer, when teachers, uni administrators, careers advisors and families were all hoping or expecting to be on holiday. Slow, considered advice and guidance would disappear in favour of grab-it-while-you-can approaches.

If you think the current system has encouraged some HEIs to use unfair competitive practices like “conditional unconditional” offers, just wait and see what they come up with when their entire recruitment is based on what they can achieve in a massively compressed time-frame.

Of course, it wouldn’t be all of their recruitment. Some places would be taken by international students who would need to be offered places earlier in the cycle (or we run the risk of compromising competitiveness on the international HE market).

The most selective universities would be keen to use this as an opportunity to fill as many spaces with international students as possible, leaving fewer spaces for the very disadvantaged students we want to help.

Indeed, unless regulation steps in (which would require legal changes as HEIs’ autonomy over admissions is explicitly protected), those unis might also offer early secured places to UK students on an international fee basis using – you guessed it – predicted grades.

To many privately educated students and their schools, those fees may not look so bad if that’s the price of a guaranteed place. With that certainty, they can also get to the front of the queue for housing options and even part-time job opportunities (should they want or need them).

Anyone going through PQA, though, would instead have to sort out their accommodation, funding, care for dependants and so on, all at speed.

A conditional offer

So PQA looks attractive at first glance, but the unintended consequences – especially for disadvantaged students – might herald a wild west era of admissions with higher costs, fewer opportunities and worse outcomes for the most disadvantaged students.

We should only – I repeat, only – get behind the idea if it comes with:

  1. Guaranteed ways to embed careers information, advice and guidance into the process from an early stage (Year 9 and beyond).
  2. Greater regulation to avoid unfair competitive practices (and such regulation may have its own damaging effects on institutional autonomy, which we might decide mean PQA isn’t worth the candle anyway).
  3. Penalties for failure to maximise wider access and equity of opportunity, including perhaps recruitment targets for students from disadvantaged backgrounds and reserved places on courses.
  4. Changes to Level 3 assessment processes to ensure they recognise potential as well as summative performance.
  5. Changes to the HE academic year (a January start for everyone, perhaps?) with financial and career support for disadvantaged students from the period when school ends until the university term begins.

None of these “only ifs” is included in the current proposals for PQA – so, as they say on Dragons’ Den, for those reasons, count me out.

7 responses to “Why put every student into the scramble of clearing?”

  1. Very good article – the discussion of how unfair exams are needs to be heard more. Policy makers still see them as some sort of gold standard.

  2. “Meanwhile, actual grades are an assessment of how a student did in an exam on a particular day when their dog might have just died, they’re on their period and the room’s too hot. What’s more, actual grades aren’t even an accurate assessment of that.”

    Thank you. The more often this gets pointed out, the better.

  3. There are no ‘actual grades’ even – only moderated grades. The system of moderated grades (ie grade boundaries) sustains an illusion that grades achieved in different years are comparable to each other. They are not.

  4. “All decision-making would happen over the summer, when teachers, uni administrators, careers advisors and families were all hoping or expecting to be on holiday. Slow, considered advice and guidance would disappear in favour of grab-it-while-you-can approaches.”

    Why is that? With PQA all the students can research their target institutions and courses beforehand and create a ranking of sorts, “if I get 5 A* I apply there, if I get BBC, I apply there”. There is no need to scramble on the results day.

    1. @Iorek In theory there is no need for a scramble. In practice, that is exactly what there would be. You can’t just hope future generations will magically behave in ways that they have never behaved previously, when they’re being given no incentive to do so and additional reasons not to.
      I run an outreach organisation (Push) that works in schools supporting students during the decision-making process. It is – or at least should be – a long drawn-out process of challenging their assumptions, seeding ideas, drip-feeding information, testing options, rinsing and repeating. Many schools struggle to get students to engage in this process when the students feel they have more immediate concerns like exams coming up and grades to get. The students with the least family support/pressure and the schools dealing with a wider set of post-18 pathways have the greatest struggles – ie. the most disadvantaged students.
      If you change the system by (i) expecting them to rationally weigh up a far wider set of options, and (ii) removing the urgency of an application deadline, they are even less likely to engage. Even those who do try to engage will be overwhelmed by the choice and sheer volume of information they feel they ought to absorb. This kind of cognitive overload is already a significant challenge to the process of optimised decision-making. It would be far worse with less of a road map to follow step by step and without the guidance of predicted grades.
      That brings me to another issue: you would not get rid of predicted grades, which are supposedly the main problem that PQA is meant to solve. Teachers would need to give guidance to students to help them narrow their choice down from around 80,000 courses at over 400 institutions in the UK alone (never mind studying abroad, not going to uni, doing a degree apprenticeship and so on), so they would be making unofficial predictions anyway. As these predictions would be unofficial, students (and parents) would give them less weight and – as happens often now – you’d have CDD candidates refusing to accept that they won’t get a place to do medicine and highly capable students from non-traditional backgrounds not casting their net wider than their local provider.
      Instead of getting rid of predicted grades, you’d simply make them less useful and drive their use underground, where the deleterious effects of any in-built prejudice would be unseen and harder to counteract. As it happens, PQO would basically be no better than PQA in this regard.
      Furthermore, the evidence suggests that the very problem we are hoping to solve with unconditional offers is that they let students feel they can take their foot off the gas. One interpretation of this research is that the current offer process encourages students to aspire to attain. If you remove actual offer-making and just tell students to do as well as they can and then see what they can get with whatever they’ve got, then, at best, you might not improve the foot-off-the-gas situation. Indeed, at worst, you might be making the problem mainstream by putting everyone in the position of having no specific target rather than just the unconditional offer-holders.
      To return to your point about the need to scramble, just compare PQA to Clearing. In theory, there is no need for a scramble in Clearing. In practice… well, anyone who has experienced it from either side can tell you what it’s actually like.
      Now, take Clearing and remove all the earlier deadlines, multiply the number of people going through the system simultaneously by a factor of about seven and throw in staff on holidays – that’s PQA. It’s more of a recipe for a thorough scrambling than advice from Delia Smith on eggs.

  5. Excellent article, Johnny – very clear analysis of how PQA (though perhaps not PQO) is not the right answer. I think there are two intertwined issues here.

    The first is the method we should use to assess whether an individual should be admitted to a university – and I agree that the notion that A-levels are “right” whereas teacher assessments are not, simply because they don’t accurately mirror A-levels, is fundamentally flawed. We need an approach to assessing the potential to benefit from university – whether that be exam-driven, teacher assessed or (my own preference), some combination of the two.

    And the second issue is when applications to university should be made, and offers given, in relation to the point at which that assessment of potential (however it is done) is made.

    I fear the DfE is set on resolving the latter issue without first dealing with the former.
