Does exit velocity actually exist? Or is it merely a convenient fiction, one that should not be used to shape degree algorithms?
The Universities UK/GuildHE Understanding Degree Algorithms (2017) report showed that of 100 responding institutions, 87 used exit velocity in their algorithm for calculating degree classification. But what is exit velocity, and why should we use it?
The standard definition comes from the work of David Allen, who explains that “The notion of an exit velocity comes from the widespread belief that the student’s marks generally improve from year two to year three.” However, as Allen notes: “Given there is little or no research into exit velocity the true intention for rewarding it by higher weightings on year three marks seems misjudged.”
Do grades improve?
There is surprisingly little research into exit velocity – I found a 2015 paper by Mark Betteney which examined whether there is any truth in the commonly held belief that grades for undergraduate students improve from year two to year three, based upon a case study for BA (Hons) Primary Education with Qualified Teacher Status (QTS) students. As with Allen, Betteney describes exit velocity as students achieving better grades in their final year of study than in previous years of study.
I’m based at the University of Chichester – a small, post-2003 university which offers a broad academic portfolio, and prides itself on being a widening participation institution. We use a fairly standard algorithm to calculate degree classification.
For most undergraduate provision (excluding placements, which are marked as pass/fail and therefore fall outside the usual 240 credits), all successfully attained credit at Level 5 and Level 6 is used, weighted at 40:60 on the basis that students achieve exit velocity in their final year of study.
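As a rough sketch, a 40:60 weighting of this kind amounts to a simple weighted average of the two level marks. The function names and classification boundaries below are illustrative assumptions for typical UK honours bands, not the university’s actual regulations:

```python
def classification_mark(level5_avg: float, level6_avg: float) -> float:
    """Combine Level 5 and Level 6 credit-weighted averages at 40:60."""
    return 0.4 * level5_avg + 0.6 * level6_avg

def classify(mark: float) -> str:
    """Map a final mark to a UK honours band (typical boundaries, assumed)."""
    if mark >= 70:
        return "First"
    if mark >= 60:
        return "Upper Second (2:1)"
    if mark >= 50:
        return "Lower Second (2:2)"
    if mark >= 40:
        return "Third"
    return "Fail"

# A student averaging 58 at Level 5 and 62 at Level 6 gets a weighted
# mark of roughly 60.4, so the final-year uplift tips them into a 2:1:
print(classify(classification_mark(58, 62)))
```

The example illustrates why a small final-year improvement only matters near a boundary: the same two-point rise for a student averaging in the mid-60s would change nothing.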
Across 970 students completing in 2019-20, the Level 6 mark was, on average, higher by just 2.6 per cent for an individual student (unlikely to impact classification unless at the boundary). There was, however, a range across different schools and institutes within the university, which roughly reflected the proportion of Firsts and Upper Seconds awarded: the largest increases were for ensemble-based provision and the smallest for programmes within business and sports.
It should be noted that for the 2019-20 academic year we operated two different algorithms for calculating degree classifications, to ensure that no student was disadvantaged during the pandemic. The minor increase of just 2.6 per cent is perhaps surprising, then, given this more generous approach to calculating degree classification.
Many programmes did see an increase of some kind but some showed a decrease in marks for students between Levels 5 and 6. About one-third of students in business fields and in creative industries did not benefit from exit velocity. With sports, about a quarter did not benefit. The only area where students did consistently benefit was in the arts and humanities.
So there are considerable differences in different subjects, and this means that we cannot simply determine whether exit velocity is fact or fiction.
When the sector-wide results became apparent via the annual HESA release, Wonkhe (1 February 2021) noted that “the big question is what are we doing to final year students in semester two that takes so many from a first to a 2:1 in a regular year?”. This was in response to noted changes in graduate attainment, seen following adoption of “no detriment” or “safety net” policies by degree-awarding bodies.
For the most part, these used a calculation based on student achievement to date (at the point institutions shifted to online provision in March 2020), versus a calculation for all qualifying attainment that would usually be considered in calculation of classification.
If more Firsts and Upper Seconds were awarded based on semester one attainment only, the logic was that something in semester two acted to impede student attainment trajectory.
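In code, a comparison of that kind amounts to awarding on whichever basis is more favourable to the student. This is a minimal sketch, assuming both calculations reduce to a single percentage mark; the function name is illustrative and real “no detriment” policies varied considerably between institutions:

```python
def no_detriment_mark(semester_one_avg: float, full_year_avg: float) -> float:
    """Award on whichever calculation basis yields the higher mark.

    semester_one_avg: mark calculated from attainment to the point of
    lockdown (assumed here to be the end of semester one).
    full_year_avg: mark calculated from all qualifying attainment.
    """
    return max(semester_one_avg, full_year_avg)

# A student whose semester-two marks dipped is protected by the
# semester-one calculation:
print(no_detriment_mark(68.0, 64.5))
```

Seen this way, the sector-wide pattern follows directly: whenever semester-one attainment alone exceeded the full-year figure, the safety net locked in the higher mark, which is exactly the case where more Firsts and Upper Seconds would appear.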
One issue to note here is that students do not neatly start and complete 60 credits in each of the two semesters. They may start 60 credits in the first semester but complete only 30 by its end, leaving 90 credits to be completed in the second semester (which will almost always include the independent project or equivalent).
Some modules are all-year modules, such as for the independent project. This means that students have far fewer marks for their first semester on which to consider what their final award might be. Workload – completing 90, 105 or 75 credits in the second semester – might adversely affect outcomes for the second semester versus the first.
Considering a smaller set of data from students at the University of Chichester completing in 2018-19, there is an average improvement of 2.5 per cent between semester one marks and semester two marks. However, the range runs from -18.3 per cent to 24 per cent.
No discernible logic
Reviewing final classifications shows that the stronger students get stronger between the two semesters of study in their final year. Beyond that, there is no pattern by programme of study in velocity between the two final-year semesters: there is an even mix of arts and science disciplines, of business, sports, ensemble, creative, and social science subjects. There is no discernible logic to exit velocity between final-year semesters.
Betteney’s research showed that the answer to the question of whether grades improve from year two to year three was both “yes” and “no”. The case study from the University of Chichester gives exactly the same outcome, and for a year in which the algorithm for calculating classification was more generous. If there is limited evidence for exit velocity in an unusual year, then we should ascertain what happens in a usual year, given that algorithms are predicated on exit velocity being a fact rather than an entirely possible fiction.
Further work is clearly required in this area, given exit velocity is instrumental in informing the design of algorithms for calculating the classification of a degree. Ascertaining whether exit velocity is fact or fiction should then positively influence how academic regulations manage exit velocity in the weighting of algorithms for classification.