Over the last few weeks we’ve all seen heartbreaking individual stories of young people who were set to achieve their ambitions to progress to university, overcoming barriers such as poverty, or disability, only to fall foul of the SQA or Ofqual algorithm. Small numbers overall, to be sure, but looming large in the public eye.
Though the government asked universities at the last minute to be “flexible” in making admissions decisions, this request followed months of finger-wagging over unconditional offers, the re-introduction of student number controls, and an apparent attack on the practice of using contextual information in making admissions offers.
Institutions like Worcester College, Oxford, which have decided to ignore awarded grades and honour the offers they have already made, are to be applauded – and it would be great if the rest of the Oxbridge colleges did the same.
But Oxbridge is an anomaly in the admissions system, as most colleges make only a handful of offers over the numbers they expect to admit, because in most cases applicants make their grades.
For the majority of universities, coming out at the right place numbers-wise means offering more places than you have available, on the assumption that some students will go elsewhere, not make their grades, and so on. For some courses numbers will be fixed anyway because of constraints on placements, equipment or space, so the requested flexibility is not as easy as it sounds – especially with only a few days’ notice.
We’ll see over the next few weeks how this immediate mess might be resolved. Certainly, universities should be making it a priority to admit people where it’s feasible to do so, and especially where there’s indication that their awarded grades are out of step with past performance. The government should let it be known that number controls are No Longer A Thing, or at least that anyone pointing to the A level debacle as a reason for exceeding planned numbers should be let off the hook.
But assuming that this is an anomalous year, and there are no specific lessons to be learned about the politics of awarding grades, might there be some takeaways to apply as university admissions come back into the policy spotlight in the coming year?
One exam to rule them all
One thing that’s been made clear is that the system’s dependence on a single set of exams makes it weaker and more precarious. The whole argument over post-qualification admissions hinges on whether it is possible to conduct university admissions in a time-bound window over the summer – there’s been talk of moving A levels earlier, starting the university year later, and so on. But all this assumes that it’s physically impossible to make a decision about a candidate without having their A level grades.
Had England – like Wales – retained AS levels, universities would have had reasonably recent, validated information on which to base offers for candidates who had missed their grades. AS levels aren’t perfect – even if you don’t agree with Michael Gove’s critique as education secretary that there was too much opportunity for resits to bump up grades – but they’re certainly better than nothing.
But you can take it even further. One of the reasons that predicted grades are so frequently askew is that there is so much riding on them. It’s a teacher’s effort to predict performance in a single set of exams, rather than an assessment of a pupil’s general level of competence or preparedness for university-level study.
The debates about respecting teachers’ judgements miss the mark in that respect – if it’s about predicting performance, it’s reasonable to give your student the benefit of the doubt. You’re judging what you hope they’d be capable of on a good day – and for many students, the days they sit their A levels are not guaranteed to be good days.
There’s also, of course, the issue of potential bias in the other direction, where factors like race, disability and socio-economic status could unduly influence teachers’ assessment of likely future performance.
And let’s not forget that the reason the algorithm disproportionately affected disadvantaged young people is the pre-existing, enormously unjust, gap in attainment between socio-economic groups. The algorithm only laid bare what is already known: young people at independent schools are genuinely more likely to get As and A*s. It’s only this year, when it’s an algorithm rather than an exam that’s produced the injustice, that it’s playing out as a national scandal.
Scrap A levels
If we scrapped national exams and schools moved to more of a regular lower-stakes assessment and GPA-like system, with regulation of the standards of schools’ awards rather than the national qualifications, universities would have plenty of reasonably robust information on which to base offers.
Whisper it: perhaps many, even all, of those offers could legitimately be unconditional. Being a good student – even being among the “best” – needn’t come down to your performance during six weeks in May and June. It could be the mature judgement, backed by evidence, of teachers and admissions tutors.
Schools and universities could work together – as they do now – to create learning opportunities for young people aiming to study particular subjects, but unlike now, these could be credit-bearing, allowing those young people the opportunity to demonstrate their academic readiness to their chosen universities.
There’s nothing particularly special about A level performance – exams suit some young people more than others; some will be having a good day, others not, and so on. And there are many down sides: stress and pressure on the pupils, teaching to the test to the exclusion of other opportunities for personal and academic development, and the inevitable annual scramble to match people to places in August.
There would still be a need for something like Clearing – especially for applicants whose performance had improved towards the end of the school year and those who hadn’t secured an offer, whose circumstances had changed, or who had come late to the process. But there would be much less angst about not having got into the university you’d been planning for and dreaming about for months.
Far too much about the university admission system is based on the idea that performance in a national exam is the gold standard which cannot be challenged. A whole host of young people enter university with BTECs, having quietly completed a number of units with continuous assessment. Maybe it’s time the question “what did you get in your A levels?” was consigned to the dustbin of history.
It all sounds great in theory, but … 1) to my mind the net effect of that ongoing low-stakes assessment would feel to kids as if they were always doing coursework. And coursework is stressy. For any perfectionist young person it would feel like they could never make an error ever – not good for MH. 2) Not only would it be perpetual coursework for kids, but also for teachers. They’d feel they needed to be always on the case ensuring no child underachieved. 3) Plus coursework-type assessments tend to really advantage those with supportive home backgrounds, or…
> The algorithm only laid bare what is already known

That’s not the case: the algorithm introduced entirely new injustices. That’s why, for instance, Rye St Antony, a poorly-performing independent school with small classes, went from 18% A/A* grades in 2019 (below the national average) to 48% A/A* grades in 2020 (better than every state school in the country). That’s not a problem that was already there; it’s an injustice introduced by Ofqual.

> It’s only this year when it’s an algorithm rather than an exam that’s produced the injustice

The exam exposes inequality of various sorts (that’s the entire…
I thought the whole idea was that past performance was another means of standardisation. If a school has been over-optimistic to ‘chance their arm’, why didn’t the past performance standardisation bring it back into alignment with previous years? How did that change get through? I get that schools and teachers are on the league table treadmill so will chance their arm, but the algorithm should have spotted that as an anomaly – class size small or large, state or independent, all not relevant. Couldn’t the computer system have had a quality / data check where grades being proposed for a…
Measuring a student’s ability to grasp higher education is a difficult one. I bombed in my A-levels due to a number of self-inflicted reasons (girls, terrible exam technique, crap module grades) and only scraped into an HND on the back of a D and 2x U’s. I had to take a gap year because my UCAS form had been mis-filed by the school. But I worked during my gap year for the Year in Industry program and found I excelled at my HND due to a lack of exams and a more adult relationship with my lecturers. I completed a…
Helen, the point about escaping past performance standardisation is that it did not apply if the cohorts were small. And that is absolutely right statistically.
Carl, actually yes we are in some cases. We can’t look things up the whole time.
Just to give you a really obvious example – if as a teacher I have to look something up every single time a child asks a question, they’d not have much respect for me. And I don’t see most emergency surgery being carried out with the procedure checked on the internet throughout, either.
Of course exams aren’t always the best form of assessment in every circumstance. But that doesn’t mean they have no place.
One of the things the Editor gets wrong is the blaming of “the algorithm”, which is no more and no less than a particular set of rules for classifying a result. The problem ultimately was the fact that some test centres were too small for moderation of teacher assessments to be possible. The rules could not be applied, so the unmoderated assessments had to be used, creating winners balanced by losers. The logic of this piece is essentially that rules are imperfect and that, therefore, there should be no rules. People that tear up the rule book are to…
There is just one thing I would like to point out, although I disagree with other things too. Exams are meant to test your ability to store and apply a large amount of information all at once, in a way that coursework can’t. With coursework, there’s the issue that you can learn all about one topic for a certain assessment, then you’ll have forgotten about it a few weeks later. I’ve done it. It’s much easier than the continual, sustained focus most people require to prepare for final exams. When you’re giving a speech, you can’t read a sentence at…
Continuous assessment is not the only alternative to centrally-administered exams. I know the Swiss system quite well. There are no national exam boards: schools set their own assessments, which are then validated by a system of mutual moderation between schools. My friend over there, who is a very experienced (indeed now retired) teacher, regularly visits other schools to inspect and validate their annual exam round, including, I think, viva voce exams with at least some students. He is accredited to do this. It is rigorous but locally controlled, by people who know their courses and know their students. And there…
The American school way of assessment is far better than the UK’s one-exam-means-all approach. Besides that, I like the US credit system too: students are given more responsibility and freedom in choosing subjects.