After the results debacle in 2020, Gavin Williamson indicated that ministers were again putting their trust in teachers to mark GCSEs and A-Levels.
He confirmed algorithms would not be used to work out fair grades. This came after students from deprived areas and smaller cohorts had their results downgraded by what Boris Johnson described as a “mutant algorithm”. The government had previously instructed the Office of Qualifications and Examinations Regulation (Ofqual) to create a system that ensured grades were comparable to previous years, to minimise grade inflation.
Memories are short, and lessons forgotten.
The Office for Students (OfS) has used statistical modelling to estimate degree classification results and complains that students in some institutions did better than their algorithm predicted. The OfS algorithm bases degree classification predictions on factors including gender, ethnicity, disability and university participation (POLAR4) data. But is one minister’s “grade inflation” another minister’s “getting in and getting on”?
Reflection is required beyond the media outcry the OfS analysis generated – the same media that, only last year, decried the use of similar algorithms as students protested their results.
Contradiction should be avoided
The OfS report analyses graduate attainment from 2010-11 to 2020-21 – and sets out, by institution, the expected proportion of first and upper second class degrees compared to those actually awarded. It concludes that 14 per cent of the change in “good” degree (first or upper second class) attainment since 2010-11 is unexplained by changes in the graduate population.
Williamson should have shared the lessons he learned. Not only are memories short, but the OfS is in danger of appearing to contradict itself, with the newly appointed John Blake asserting universities’ duty to ensure that students of all ages – and from all backgrounds – can achieve in order to enhance their careers and their lives. The OfS has a duty to promote equality of opportunity.
While Blake argues for merit, the grade inflation pundits argue for their predictions based on gender, ethnicity, disability and Participation of Local Areas (POLAR4), which make better results “unmerited”. Dare students escape their personal characteristics and their predecessors’ disadvantages? The computer says no. It’s to be a regulatory concern. It’s important therefore that the freedom of English higher education providers to determine the way students are taught, supervised and assessed is protected in law.
Removing attainment gaps lifts all outcomes
On the other side, there is a problem. Attainment gaps remain for some groups, who are less likely to achieve a first or upper second class degree classification, even after controlling for prior attainment. We need to close these attainment gaps, not bake them in based on past performance and prejudice.
Universities are working on this issue. For example, in 2019 “Raising Awareness, Raising Aspiration” (RARA), a collaborative project between the University of Sheffield, King’s College London and the University of Portsmouth, investigated the extent to which enhanced personal tutoring helped reduce attainment gaps for Black and minority ethnic students and those from lower socioeconomic groups. The success of this and similar projects was to create life chances, allowing students to beat the algorithm, to beat the odds, and create outcomes which may be unexplained but are not unexplainable.
The OfS report uses hypothetical models to estimate the effect of closing (reducing to zero) the existing attainment gaps. The analysis shows that if the methodology is adapted to remove attainment gaps for these characteristics, the “unexplained” attainment falls to 11 per cent. But we learnt through RARA that having a point of access through a personal tutor, and a systematic approach to personal tutoring with clarity on rights and responsibilities in that relationship, supported all student attainment – both closing the attainment gap and lifting attainment across the cohort. Interventions which close attainment gaps support all students.
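The arithmetic behind figures like these can be sketched in miniature. The following is a toy illustration with invented numbers – not the OfS methodology or its data – showing how an “unexplained” share falls when a model’s predictions are adjusted to assume attainment gaps are closed:

```python
# Illustrative sketch only: all numbers are invented, not OfS data.
# The idea: compare the observed share of "good" degrees (firsts/2:1s)
# with the share a model predicts from graduate characteristics;
# the difference is labelled "unexplained".

observed_rate = 0.82       # hypothetical observed good-degree rate
predicted_rate = 0.68      # hypothetical prediction from characteristics

unexplained = observed_rate - predicted_rate

# Hypothetical adjusted model with attainment gaps set to zero:
# predicted attainment rises, so less is left "unexplained".
predicted_no_gaps = 0.71
unexplained_after = observed_rate - predicted_no_gaps

print(f"Unexplained: {unexplained:.0%} -> {unexplained_after:.0%}")
```

The point of the sketch is that “unexplained” is a residual: it depends entirely on what the model chooses to treat as explicable.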
Using this methodology, graduates and universities may decide to treat their “unexplained” results with pride. Universities transform lives. Students transform their life chances through education. The Sutton Trust says universities are among the most powerful engines for social mobility we have in this country, putting disadvantaged young people into well-paid and rewarding careers.
Future aspiration beats past prejudice
While we can’t ignore the increase in good honours awarded, care is needed when using models that rely on historical data that today’s students cannot control. Past data holds biases that a predictive algorithm can end up replicating. I would argue that algorithms should not use characteristics which have historically, and unfairly, held some groups back. Using gender, ethnicity, disability and POLAR4 (a relative measure of educational disadvantage) as predictors risks baking in past disadvantage.
I attend graduation ceremonies and see the impact that qualifying has on people. Treating graduates who beat the odds as errant statistics, and controlling their grades, will mess up people’s life chances. The 2019 Conservative manifesto promised to build a Britain in which everyone can make the most of their talents, regardless of their background. Students who want to work hard, to achieve, to realise their ambitions, and to do so based on their merits – to beat the odds – should not be limited by a predictive algorithm. Neither should the providers who support them.
It is certainly true that increasing percentages of students are gaining good honours degrees. Indeed, many are going on to study master’s degrees in an attempt to distinguish themselves from the crowd in a competitive jobs market. A Conservative government seeking to reward aspiration could bring Level 7 qualifications into its flagship Lifelong Loan Entitlement in recognition of this fact.
Some comparisons are false
There has been action across our sector on safety net and no-detriment policies to mitigate the impact of Covid-19 on assessment. The Russell Group published a joint statement on approaches to ensuring fair assessment and protecting the integrity of degrees. UUK produced a report and set of principles for effective degree algorithms in July 2020. But there are limits to such sectoral initiatives. UUK stated there was variation across the sector in how algorithms are designed, reflecting differences in teaching and assessment and skills within specific degree programmes.
In other words, schools have a national curriculum and national examinations, so A-Level and GCSE grades need to be comparable. This is not true between universities and academic subjects. No-one seeks to compare a BMus from the Royal Academy of Music (which stands, as it happens, accused of giving out too many firsts) with a BEng from Imperial College. Both are marks of quality. They are different in every other aspect.
Any national or institutional regulatory frame for degree classification must embrace unique disciplinary and institutional differences. The diversity of higher education is its strength, and uniformity has limits. The external examiner system recognises this. The subject benchmark statements which define academic standards inside institutions recognise this. There will be considerable variation in good honours rates at departmental level, with fluctuations between years of intake. Scaling down results at the institutional level to fit a standardised model would create, not remove, unfairness.