Some interesting things happened to degree attainment in 2019-20. It was widely observed that attainment rose and that some previously stubborn awarding gaps narrowed, which many attributed to the widespread use of “no detriment” assessment policies in summer 2020. Think again.
Detailed analysis of the impact of no detriment, such as that published this week by the University of Exeter alongside its annual Degree Outcomes Statement, demonstrates that we need to consider more nuanced influences on degree attainment.
At Exeter, where the percentage of “good honours” degrees rose by four percentage points, only 1.4 per cent of students’ degree classifications were increased through the implementation of the no detriment benchmark. The gap between mature students and younger students closed by six percentage points, despite no difference between the groups in the rate at which no detriment was used.
There was widespread concern in April 2020 that the benchmarks calculated as part of no detriment policies would see students opting out of trying in the summer exams, leading to grade inflation. It was widely thought that high-stakes summer exams, which traditionally led to lower grades, would essentially be null and void, as students would simply carry their better coursework grades through to their degree outcomes.
Changing approach to assessment during Covid-19
The published evidence from Exeter, and anecdotal evidence from elsewhere, points to a more complex picture. There were many other changes to assessment put in place at the time: exams moved online and became open note, and many changed from being tightly time-limited to 24-hour or longer windows.
But perhaps most importantly the stakes were lowered, the pressure released, and students knew that they could try their best in a lower-risk environment. As Sunday Blake, Exeter Students’ Guild President 2020-21, put it on Twitter: “I witness how the No Detriment policy has given students the time, space, and reassurance to first and foremost prioritise their own health and safety, whilst also working to the best of their ability, and achieving a degree classification that is indicative of their true ability.”
There were concerns across the sector about the impact of open note, non-time-limited assessments, with more widespread opportunities for academic misconduct, and there are reports of more academic misconduct offences than in previous years.
However, it would be disingenuous to say that there were no positives to the change in mode of assessment. If it wasn’t the benchmark that raised attainment and closed awarding gaps, it was most likely a combination of these other factors: assessment type, scheduling, and the environment in which students could complete their assessments.
A fairer future for assessment
This evidence gives insight into the context for existing awarding gaps, and points to actions educators can take to ensure that gaps don’t open up again post-pandemic. If it is our assessment types and structures that perpetuate awarding gaps, then we should change our approach to assessment so that we assess students more fairly across different demographics and educational backgrounds.
There undoubtedly is a place for closed-note, time-limited exams. Some subject areas are incredibly challenging to assess online and open book: in maths, for example, it can be easy to look up solutions and proofs, yet constructing them is precisely the skill that needs to be assessed. I teach first year chemistry, and there are limited ways to demonstrate understanding of molecular orbital diagrams for homonuclear diatomics; they are all in the textbooks or on the web. However, we can and must look to use alternative forms of assessment which, as often as possible, assess more real-world skills in a more real-world form, time, and place.
We must also recognise that awarding gaps are not an individual institutional problem; they are a sector problem. There is variation between institutions, but we all have awarding gaps and we can all do better. We cannot compete with each other on this. We must collaborate to address the issues and make sector-level change in the way we assess, to ensure equality of opportunity for our students.
We can also only effectively address awarding gaps if we know where they are, who they affect, and what the impact of interventions like no detriment has been. At Exeter we found that when we tried to look at the intersectionality of impacts, our datasets became too small, so we need to share experiences and analyses if we’re serious about making a change.
The data analysis at Exeter did throw up one particularly interesting observation: students from the lowest participation neighbourhoods were more than twice as likely to use their no detriment benchmark as other students. So the students from the areas with the least experience of higher education were the least likely to be able to perform to the best of their ability in this period of additional stress, or perhaps the least likely to have access to the computing and internet facilities required to undertake online exams. There is undoubtedly more here that requires further exploration in a deeper dive into awarding gaps, but it is clear that the policy affected some student groups far more than others.
We’ll also want to look at the range of policies put in place for 2020-21, and while these are often more complex to analyse, as they are more nuanced in their implementation, we mustn’t shy away from asking the questions just because it is hard. We now have an additional year of understanding about setting and delivering online and open note exams and assessments, and we must look at the impact these have had on attainment and awarding gaps.
So don’t be shy: if you’ve analysed the impact your institution’s policy had on students and awarding gaps, share it. And if you haven’t, well, perhaps you should. Let’s pool our collective knowledge and lessons learned to improve things for all our students. We really should be all in this together.