Are you a higher education regulator? Are you bored of all your regular metrics?
Well, to make up for a podcast-free week without the award-winning “yes but does it correlate” segment, I thought I’d get hip to the latest trend from the South West – multiplying two metrics together.
The Office for Students is currently running a top-secret consultation on a new indicator it is calling “start to success” (S2S). Representations are being sought based on a letter I’m pretty sure I’m not meant to know about – with the indicator itself to be unveiled to a waiting world late in November.
The four UK performance indicators (non-continuation, widening participation, graduate employment, and – until 2015 – research outputs) are all derived from existing data collections. As I understand it, S2S will itself be derived from two of these derived indicators – the (T5) projected outcomes from the non-continuation UKPI and an assemblage of data from Graduate Outcomes on “highly skilled” outcomes (which I suspect is what will happen to the employment of leavers metrics).
The data exists, so…
I’ve asked OfS about this new measure and the above represents the limits of what I have been told. What follows is an interpretation of other things I have heard informally about the plans.
In some ways, these metrics are both measures of probabilities – the likelihood that any given entrant will graduate, and the likelihood that any given graduate will get a highly skilled job (or, I believe, enter further study – though there’s no multivariate public data to play with). So you could see S2S as a measure of the probability that any given entrant would end up with a good job – and you could get to that simply by multiplying the two numbers together.
These two metrics are also – with the National Student Survey’s fall from grace – the last two meaningful TEF metrics standing. Indeed, non-continuation and highly skilled employment were cited as “other, more robust” course level metrics in the ministerial diatribe that led to the NSS review. They’re clearly not robust at course level – but what about at provider level? Or provider and CAH2 subject level? This seems tailor-made for an updated “B3 bear”.
This is a homebrewed alternative S2S – I’ve multiplied the percentage projected continuation to a degree award by the percentage of students in employment that reported being in a highly skilled (SOC 1-3) role in the latest Graduate Outcomes data. As the generated scores are unwieldy (the top score is 10,000, and I don’t want to give the University of Cambridge the chance to claim it is “over 9,000”, however delighted that would make the sector’s SROC/Dragonball Z fandom) I have divided by 1,000 to give a handy 10 point scale.
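For the avoidance of doubt, the arithmetic here is trivial – multiply the two percentages and rescale. A minimal sketch, using invented percentages rather than real provider data:

```python
# Homebrewed S2S as described above: multiply projected continuation (%)
# by highly skilled employment (%), then divide by 1,000 for a 10 point scale.
# The example figures below are hypothetical, not drawn from any real provider.

def s2s_score(projected_continuation_pct: float, highly_skilled_pct: float) -> float:
    """Combine the two percentage metrics into a score out of 10."""
    return (projected_continuation_pct * highly_skilled_pct) / 1000

# A hypothetical provider with 92% projected continuation and 75% of
# employed graduates in SOC 1-3 roles:
print(s2s_score(92, 75))    # 6.9
print(s2s_score(100, 100))  # 10.0 - the (unattainable) ceiling
```

Note that because the score is a product of probabilities, it is always lower than either input – a provider has to excel on both measures to score well.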
For podcast fans, a correlation plot is on the other tab.
Why this is a terrible idea
Cow Eye University College is a tiny, fictional provider, which generally has a cohort size of around 100. These students primarily hail from a fairly deprived local area. Cow Eye does a valuable job meeting niche local skills needs, but Covid-19 and Brexit have hit local companies hard – so demand for both high-skilled graduates and students for in-term employment has fallen sharply.
Falls in both continuation and “highly skilled” graduate employment can fairly be said not to be a function of the quality of teaching offered by Cow Eye – it employs the same teachers teaching the same courses as it did last year. The small cohort size amplifies the impact on both underlying measures, and multiplying them together to reach the score compounds the effect. If this score were used for any regulatory purpose, Cow Eye would see a detriment compared to larger providers that recruited more widely. The fact that OfS is already nervous about sample sizes of less than 250 students suggests that many providers (much less parts of providers) would not feature at all.
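The compounding effect is easy to demonstrate. In the sketch below (with invented figures for a Cow Eye-sized cohort of 100), a shift of just two students on each underlying measure – routine year-to-year noise at this size – knocks around five per cent off the combined score:

```python
# Hedged illustration of small-cohort noise compounding under a multiplied
# metric. All rates are invented for illustration, not real data.

def s2s(continuation_pct: float, highly_skilled_pct: float) -> float:
    return continuation_pct * highly_skilled_pct / 1000

# Baseline year: 90 of 100 entrants continue; 70% of employed graduates
# report highly skilled (SOC 1-3) roles.
baseline = s2s(90.0, 70.0)   # 6.3
# Next year: two more non-continuers and two fewer highly skilled roles -
# each metric moves by two percentage points.
shocked = s2s(88.0, 68.0)    # 5.984
# The multiplied score compounds the two small shifts into a larger swing.
print(round(100 * (baseline - shocked) / baseline, 1))  # 5.0 (per cent fall)
```

At a provider recruiting thousands, the same handful of students would barely register in either percentage, let alone the product.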
It’s a problem that is going to be faced by any proportion-based metric. Small providers – a fair percentage of OfS’s registration list – either do not have sufficient populations to reliably generate publishable metrics or are adversely affected by changes involving one or two students that would be mere noise at a larger provider. To be clear, this is a statistical point, not a dismissal of the underlying concern – any student who leaves higher education having signed up for a course is a concern in one way or another.
Only two non-public providers have public (by which I mean statistically useful) data on both these metrics – anything that attempts to include independent providers is going to have to radically lower data standards, making it less useful as a measure or as the “indicator for prospective students” that prospective students never actually use.
What could it tell us?
The metric as I presented it would tell us how likely a person entering a provider would be to end up with a good job at the end of it. Almost. In actual fact both underlying metrics omit certain outcomes – for example we don’t see students who leave within 50 days of starting a course, and we don’t see graduates who don’t respond to the Graduate Outcomes survey. And I’ve ignored graduates in further study or those who are unemployed.
All metrics are indicative – no data is an accurate description of a complex student or graduate population. That’s not to say what we have isn’t useful, or doesn’t offer us insights to steer further investigation. But combinations of metrics amplify this problem – a reason to be nervous about even the best of the league tables. And in the years around 2020, more than ever.
The worry is that we are looking again for “low quality courses” or similar, rather than providing useful information. While this appears to be a ministerial priority, it is a priority to no-one else.