Is the concept of exit velocity for final-year students fact or fiction?

Katie Akerman ponders whether there is any logic behind giving higher weighting to the final year of study

Katie is Director of Quality and Standards at the University of Chichester.

Does exit velocity actually exist? Or is it merely a convenient fiction, which should not be used to determine algorithms?

The Universities UK/Guild HE Understanding Degree Algorithms (2017) report showed us that of 100 responding institutions, 87 used exit velocity in their algorithm for calculating degree classification. But what is exit velocity, and why should we use it?

The standard definition is from the work of David Allen, and explains that “The notion of an exit velocity comes from the widespread belief that the student’s marks generally improve from year two to year three.” However, as Allen notes: “Given there is little or no research into exit velocity the true intention for rewarding it by higher weightings on year three marks seems misjudged.”

Do grades improve?

There is surprisingly little research into exit velocity – I found a 2015 paper by Mark Betteney which examined whether there is any truth in the commonly held belief that grades for undergraduate students improve from year two to year three, based upon a case study for BA (Hons) Primary Education with Qualified Teacher Status (QTS) students. As with Allen, Betteney describes exit velocity as students achieving better grades in their final year of study than in previous years of study.

I’m based in the University of Chichester – a small, post-2003 university which offers a broad academic portfolio, and prides itself on being a widening participation institution. We use a fairly standard algorithm to calculate degree classification.

For most undergraduate provision (those without placements which are marked as pass/fail and are, therefore, excluded from the usual 240 credits) all successfully attained credit is used from Level 5 and Level 6 and this is weighted at 40:60, on the basis that students achieve exit velocity in their final year of study.
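The 40:60 weighting described above can be sketched in a few lines. This is an illustration only, using common UK honours boundaries (70/60/50/40) as an assumption rather than the University of Chichester's actual regulations, and hypothetical marks:

```python
# Illustrative sketch of a 40:60 Level 5 / Level 6 weighted classification.
# Boundary values are assumed common UK conventions, not any institution's rules.

def weighted_degree_mark(level5_avg: float, level6_avg: float,
                         weights: tuple = (0.4, 0.6)) -> float:
    """Combine Level 5 and Level 6 credit-weighted averages."""
    w5, w6 = weights
    return w5 * level5_avg + w6 * level6_avg

def classify(mark: float) -> str:
    """Map a final mark to an honours band (assumed boundaries)."""
    if mark >= 70:
        return "First"
    if mark >= 60:
        return "Upper Second (2:1)"
    if mark >= 50:
        return "Lower Second (2:2)"
    if mark >= 40:
        return "Third"
    return "Fail"

# A hypothetical student whose average rises by 2.6 points at Level 6:
final = weighted_degree_mark(58.0, 60.6)
print(round(final, 1), classify(final))  # 59.6 – still a 2:2, away from a boundary
```

As the figures later in the piece suggest, a small Level 6 uplift only changes the outcome when the weighted mark is already sitting near a boundary.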

Across 970 students completing in 2019-20, the Level 6 mark was, on average, higher by just 2.6 per cent for an individual student (unlikely to impact classification, unless at the boundary). There was, however, a range across different schools or institutes within the university, which roughly reflected the proportion of Firsts and Upper Seconds awarded, with the majority being for ensemble-based provision and the fewest being for programmes within business and sports.

It should be noted that for the 2019-20 academic year we operated two different algorithms for calculating degree classifications, to ensure that no student was disadvantaged during the pandemic. The minor increase of just 2.6 per cent is therefore perhaps surprising, given the more generous approach to calculating degree classification.

Many programmes did see an increase of some kind but some showed a decrease in marks for students between Levels 5 and 6. About one-third of students in business fields and in creative industries did not benefit from exit velocity. With sports, about a quarter did not benefit. The only area where students did consistently benefit was in the arts and humanities.

So there are considerable differences in different subjects, and this means that we cannot simply determine whether exit velocity is fact or fiction.

Semester variance

When the sector-wide results became apparent via the annual HESA release, Wonkhe (1 February 2021) noted that “the big question is what are we doing to final year students in semester two that takes so many from a first to a 2:1 in a regular year?”. This was in response to noted changes in graduate attainment, seen following adoption of “no detriment” or “safety net” policies by degree-awarding bodies.

For the most part, these used a calculation based on student achievement to date (at the point institutions shifted to online provision in March 2020), versus a calculation for all qualifying attainment that would usually be considered in calculation of classification.

If more Firsts and Upper Seconds were awarded based on semester one attainment only, the logic was that something in semester two acted to impede student attainment trajectory.
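The mechanism described in the two paragraphs above amounts to taking the better of two qualifying averages. A minimal sketch, with illustrative function names and marks that are assumptions rather than any institution's actual policy:

```python
# Minimal sketch of a "no detriment" / "safety net" calculation: the student
# receives whichever is higher of (a) their average at the point provision
# moved online and (b) their full-year average. Data is illustrative only.

def safety_net_mark(pre_shift_avg: float, full_year_avg: float) -> float:
    """Return the better of the two qualifying averages."""
    return max(pre_shift_avg, full_year_avg)

# A student whose semester two marks dipped keeps their earlier standing:
print(safety_net_mark(70.2, 67.8))  # 70.2 – a First on semester one attainment
```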

One issue to note here is that students do not very neatly start and complete 60 credits in each of the two semesters. They may start 60 credits in the first semester but only complete 30 by the end of the semester, leaving 90 credits to be completed in the second semester (which will almost always include the independent project or equivalent).

Some modules are all-year modules, such as for the independent project. This means that students have far fewer marks for their first semester on which to consider what their final award might be. Workload – completing 90, 105 or 75 credits in the second semester – might adversely affect outcomes for the second semester versus the first.

Considering a smaller set of data from students at the University of Chichester completing in 2018-19, there is an average improvement of 2.5 per cent between semester one marks and semester two marks. However, the range falls between -18.3 per cent through to 24 per cent.

No discernible logic

Reviewing final classification shows that the stronger students get stronger, between the two semesters of study in their final year. Of these students, there is no pattern in terms of programme of study for velocity between the two semesters in the final year of their programmes. There is an equal mix of arts and science disciplines, of business, sports, ensemble, creative, and social science subjects. There is no discernible logic to exit velocity between final-year semesters.

Betteney’s research showed that the answer to the question of whether grades improve from year two to year three was both “yes” and “no”. The case study for the University of Chichester gives the same outcome, and for a year in which the algorithm for calculating classification was more generous. If there is limited evidence for exit velocity in an unusual year, then we should ascertain what happens in a usual year, given that algorithms are predicated on exit velocity being a fact rather than an entirely possible fiction.

Further work is clearly required in this area, given exit velocity is instrumental in informing the design of algorithms for calculating the classification of a degree. Ascertaining whether exit velocity is fact or fiction should then positively influence how academic regulations manage exit velocity in the weighting of algorithms for classification.

10 responses to “Is the concept of exit velocity for final-year students fact or fiction?”

  1. In my experience, the rationale for a higher weighting in the final year isn’t exit velocity as such, it’s the fact that students at L6 are completing higher level learning outcomes critical to their programmes, and that higher performance at L6 is therefore indicative of a greater mastery of the subject, and therefore worthy of greater weight in the degree classification.

    The great subject algorithm wars of the early 2000s (when the QAA were pushing for HEIs to have single algorithms across all disciplines) were thus fought between subjects inclined to recognise differences between levels of study (e.g. most science) and those who were not (e.g. Law, English, Maths), with the former often favouring 30/70 splits, and the latter wanting 50/50 (we signed a peace treaty at 40/60).

    If it were just exit velocity which drove the higher weightings, then we’d be giving more weight to the final year because students typically did better in the final year, which would just be giving (even) higher grades because the grades were higher?

    1. Yes, that is my understanding as well. Using exit velocity is intended to measure students’ achievement at the end of their programme.

  2. Very interesting article and, given the often quoted ‘sophomore slump’ that can occur at L5, could an equal 50/50 split across the latter 2 years help with this? Very difficult to decide during the challenges of Covid, and its additional variables muddying the field, but definitely an interesting area to pursue further.

  3. Thanks to Katie for encouraging us to spend some time on an important topic. As Andy notes above, support for higher weightings at Level 6 is often based on the feeling that because the award being made is a Level 6 award the classification should arguably be based to a greater extent on performance at that level. Similar logic is used to exclude Level 4 completely, along with the view that this supports transition into higher education.

    Given persistent attainment gaps across the sector and Chichester’s interest in WP it would be interesting to see if Katie’s analysis reviewed differences between disadvantaged groups. There is an assumption that exit velocity style regulations help by allowing students more time to demonstrate their full potential. Indeed, there are still some exit velocity regulations that only use Level 6 marks where this is advantageous to students for final classification, when compared to an overall average. These rules are becoming rarer due to the trend towards simpler regulations where marks are calculated once to a specified weighting. I have sat in many an award Board (in previous roles) where a true exit velocity regulation meant someone, often studying HE at a College, would get a First having ‘clicked’ in the final year and done brilliantly, although their overall average was comparatively weak.

    Over time I have concluded there is ‘no right or wrong answer’ to classification weightings. The question then morphs into something different, would the sector be (or be perceived to be) more robust on standards if we agreed and maintained the same approach to degree classification?

    1. Hello Andy and Adam – thank you for your comments. I did also ponder whether universities weight the final year of study more heavily to encourage students to ‘do better’ in their final year/emphasise their skills, knowledge, experience, or whether it is based upon a belief that ‘exit velocity’ exists and should be recognised by the algorithms used to calculate and classify awards. But reading other institutions’ academic regulations suggested that it is the latter: for example, UCL state that, in determining Honours Degree borderline cases, they use evidence of exit velocity in the candidate’s performance. Similarly, Aston University state that they recognise “…the importance of exit velocity in student achievement, and this is reflected in the significant weighting placed on the final Stage of study”.

      But the Higher Education Academy’s (now Advance HE) work on Grade point average: Report of the GPA pilot project 2013-14 – describes exit velocity as “… weighting the final level or levels of study to recognise achievement in the more taxing elements of the programme… providers stated that they would seek to retain exit velocity of some kind. There was a strong concern to recognise students’ developed skills and attainment in the latter stages of their programmes and not to penalise them unduly for early weaker marks.”

      So, do we/should we privilege the importance of the final year because of development of knowledge etc rather than just because higher grades are achieved? Is there an agreed definition for exit velocity and, if so, should it be recognised?

      I completely agree on the need for more data analysis! I need to find me a statistician…

      Katie

      1. Hi Katie – the UCL borderline criteria you cite have been phased out now. But I believe the rationale was to emphasise the importance of achievement in the final year. We also include first year marks (with a lower weighting) in our algorithm.

      2. Hi Katie,

        I think Exit Velocity is just the ‘speed’ (grade) at which a student leaves their degree. It’s a measure of student performance in the final year, and it’s deemed to be positive when a student shows an improving trajectory by demonstrating higher performance in the final year/level of their award compared to their previous year/level(s). It may or may not exist (received wisdom is it does; I have access to 10+ years of internal data so would certainly have the ability to test that at my own institution, should there be time as we look at 2020-21 assessment performance). I think that works alright as a definition.

        I think a student with positive Exit Velocity will benefit from a degree algorithm which prioritises performance in the final year, but that prioritisation doesn’t exist *because* of Exit Velocity (as above, it’s because the LOs in the final year better represent the totality of the LOs for the programme).

        I also suspect, as per Thom’s comment below, that an average increase of even +2-3pp would have a possibly significant effect on degree outcomes. Certainly at my own HEI a smaller increase in avg marks during the last couple of years led to a much larger increase in firsts / 2:1s (because marks are not evenly distributed on the 0-100 scale, and there is quite a lot of bunching around the upper 2:1 boundary).

        I think the last couple of years (with No Detriment Safety Nets) has shown just how much you can affect degree outcomes by making relatively small changes to algorithms (and that more substantial changes can have a huge impact – like those HEIs with a 20-25pp increase in firsts in 2019/20). We’ll certainly be doing some very careful modelling (and at subject level as well as overall) if we look to tweak ours.

  4. When we modelled student marks, a shift of 2.6 per cent had a substantial impact on degree classification. Most students are in the range 50-70, not spread out across the range from 0 to 100, and there are often clusters just below boundaries – particularly at 40 per cent and 70 per cent. Within each band, a surprising proportion of students are therefore close to a boundary. You can test this intuition with a thought experiment: if 90 per cent of students are in the range 50-70 per cent and the distribution is roughly uniform, then about a quarter of the students in each grade band (50-59.9 and 60-69.9) sit within 2.5 points of the boundary above, so 0.25 * 90 = 22.5 per cent of students would be lifted over a boundary by a 2.5 per cent increase.

    So the proportion depends on the observed ranges and the distribution within grade boundaries. It could be further boosted by bringing students into consideration in discretionary bands, for those universities whose exam board processes shift boundaries or consider borderline students. My guesstimate is that a 2.6 per cent increase would raise the classification for about 15-30 per cent of students, depending on other factors. If the increase was not uniform but higher for borderline students than for non-borderline students, this could easily be higher. Indeed, that is perhaps what you would expect, as second years averaging 51.5 per cent and second years averaging 58.5 per cent typically have different motivation in their final year.

    1. My original post got slightly garbled in the editing, but thinking about this further one can simplify the intuition that a uniform 2.6% increase leads to a large proportion of students with higher grades:

      Very few students are more than 10 percentage points from a grade boundary. If you assume a roughly uniform spread within each band then over 25 per cent will be shifted above a grade boundary. For this estimate to be wildly off you have to assume a very weird distribution (with few students near the upper grade boundary) or many students above 80 per cent. If you look at a large set of student marks, neither of those things holds – you do get clustering at boundaries, and most (virtually all?) students are in the range 30-80.
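The thought experiment in this thread can be checked with a quick simulation. This is a sketch under the simplifying assumption that marks are uniformly distributed between 50 and 70 – an idealisation for illustration, not real cohort data:

```python
# Simulate the effect of a flat +2.6-point shift on classification bands,
# assuming (purely for illustration) marks uniform on 50-70.
import random

random.seed(1)
marks = [random.uniform(50, 70) for _ in range(100_000)]

def band(mark: float) -> str:
    """Assumed honours bands: First >= 70, 2:1 >= 60, else 2:2."""
    return "First" if mark >= 70 else "2:1" if mark >= 60 else "2:2"

# Count students whose band changes when every mark rises by 2.6 points.
lifted = sum(band(m + 2.6) != band(m) for m in marks) / len(marks)
print(f"{lifted:.1%} of students move up a band")  # roughly 2.6/20 per boundary, ~26% overall
```

Under these assumptions a uniform 2.6-point shift lifts roughly a quarter of students over a boundary, consistent with the back-of-envelope figures above; a real cohort's clustering just below boundaries would push the figure higher still.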

  5. I have not done any statistical analysis of this, so I comment with hesitation, but as a young academic (many years ago) I remember a student asking me about the weightings. I confessed I didn’t know much beyond exit velocity, but I also observed that because the first 2 years of the degree were core and compulsory modules, and all final year modules were electives, students generally did better in the modules they chose to study rather than had to. Is there anything to this, do you think?
