There’s a major problem with (the) graduate outcomes (survey)
David Kernohan is Deputy Editor of Wonkhe
The Office for Students' condition B3 draws on three measures: continuation and completion are derived from HESA Student data, while progression is based on the Graduate Outcomes survey.
But each of these underlying data sources – and thus B3 itself – faces questions about reliability and validity next year.
The most startling reversal in fortune is the most recent – next year’s Graduate Outcomes data release will be the first based on fewer than half of eligible UK graduates submitting responses. Just 46.3 per cent of home students responded to the survey this time round – across all domiciles, the response rate works out at just 39 per cent.
This is by far the lowest response rate on record, and continues a multi-year pattern of decline. It is important to note that these are not final figures, but the completed rate is unlikely to be above 50 per cent.
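As a back-of-envelope illustration of how those two figures fit together, the overall rate is just a weighted average of domicile-level rates. A minimal sketch in Python, with entirely hypothetical cohort sizes and response counts:

```python
# Illustrative only: hypothetical respondent and population counts by domicile,
# showing how a 46.3 per cent home rate can sit alongside a 39 per cent
# overall rate once lower-responding domiciles are included.
populations = {"UK": 500_000, "EU": 60_000, "Non-EU": 190_000}   # assumed cohort sizes
respondents = {"UK": 231_500, "EU": 15_000, "Non-EU": 46_000}    # assumed response counts

for domicile, pop in populations.items():
    print(f"{domicile}: {respondents[domicile] / pop:.1%}")

overall = sum(respondents.values()) / sum(populations.values())
print(f"All domiciles: {overall:.1%}")   # ~39.0%
```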
Part of this year’s poor showing can be laid at the door of a cyberattack. We understand that a distributed denial of service (DDoS) attack on the survey system provider early in October meant “over a week of telephone surveying at much reduced capacity” and the loss of “an estimated 9,000 interviews”.
But this unfortunate event affected only the cohort D collection – the smallest of four cohorts, comprising primarily students who began (and/or completed) their studies at a non-traditional time.
Half and half
Fifty per cent is an important psychological boundary for surveys that claim (as Graduate Outcomes does) to be a population-level (or census) data collection.
With more full-time UK graduates not answering the survey than answering it, we face difficult questions about the representative value of what is, in effect, a self-selecting sample. Previous HESA work had not identified a notable skew among responding graduates, but such research is by nature limited – and it referred to a much higher response rate.
There were gasps when this news was relayed to Jisc’s Student Data Forum – with many sector representatives asking difficult questions about the future… of data futures.
Jisc’s Head of Surveys, Gosia Turner, told us that:
Although this year’s response rates don’t match our targets, they are still exceptional for a national survey of this kind at over 40% for most groups of graduates. Every year we analyse the survey responses to ensure that the data we have is representative of the year’s graduates in general, and of each sub-population (such as by subject or level of qualification).
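For illustration, the kind of sub-population representativeness check described there might look something like this sketch – comparing respondents’ subject mix against the full cohort’s with a chi-square goodness-of-fit test. All shares and counts are invented, and this is not Jisc’s actual methodology:

```python
# A sketch of a representativeness check: does the subject mix of survey
# respondents match the known subject mix of the full graduating cohort?
# All shares and counts below are invented for illustration.
from scipy.stats import chisquare

population_share = {"Business": 0.25, "Health": 0.20, "STEM": 0.35, "Arts": 0.20}
respondent_counts = {"Business": 2_300, "Health": 2_100, "STEM": 3_400, "Arts": 1_900}

total = sum(respondent_counts.values())
observed = list(respondent_counts.values())
expected = [population_share[s] * total for s in respondent_counts]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
# A small p-value flags a respondent subject mix that differs from the
# cohort as a whole, i.e. possible non-response bias by subject.
```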
While the sector’s designated data body will make the best of a disappointing outcome, providers investigated on the basis of a low progression rate may rightly question how responses from fewer than half of their graduates can form the basis of a determination of teaching quality.
The worst is yet to come
If it were just one of OfS’ three B3 measures in question, you might expect the regulator to spend a year focusing on the other two.
Unfortunately, future continuation and completion metrics will both be based on data collected during the flawed and heavily caveated 2022-23 Student data process, and on a 2023-24 collection that shows some signs of being worse.
If you have a tangential awareness of the travails of Data Futures, you may be labouring under the impression that the problems were addressed during the last cycle, and the data currently being collected with respect to the 2023-24 academic year will be a return to form. You would be wrong.
We are aware of 17 providers that have not yet submitted their HESA return – the deadline has been quietly extended. These include 10-12 large, well-known providers covering nearly 10 per cent of the entire student population, and there are discussions afoot as to whether they should be named officially.
A combination of a failure to address aspects of the collection (at both provider and sector level) that did not work well last year, staff attrition, and poor morale has left the sector struggling to cope with current demands. The 2022-23 data bears several large asterisks – 2023-24 may prove similarly stellar when we get the open data.
Again, when OfS comes knocking (or even before), will your provider be content to be judged on flawed data?
Data Futures is a nightmare, and with the planned expansion of data (TNE, UK validation etc.) and the currently unimplemented plans for more regular data returns, it is likely to continue to get worse (potentially significantly worse for some providers) over the coming years.
It is easy to see the in-principle logic of taking the GO survey away from providers, but long-term drops were widely predicted, and if they continue we will be looking at a survey which has a response rate half the size of DLHE’s.
It still feels like HESA, OfS etc have their heads in the sand…
There appears to be a mistake in this article. Cohort D is not the smallest Cohort. Generally it is the largest cohort for Graduate Outcomes as it contains graduates who completed their studies from 1 May – 31 July in the academic year in question. So this typically covers the majority of the undergraduate population. Therefore Cohort D is usually the most important and covers the population of most interest with the data.
I’m glad you pointed that out as I was questioning my whole understanding of the process based on that!
Yes, that’s a typo/error and should read Cohort A.
Not necessarily – at my institution Cohorts B and C have the lowest number of graduates in them.
I am afraid I have to agree with Gosia: the response rate for GO remains very high, and while it’s not at the historic DLHE levels it’s also not directly comparable, as the DLHE rate was boosted in ways that were likely to skew responses. A central collection remains by far the best way to ensure comparable data. Of course, the drop in response rates makes it more important that the survey outcomes are cross-checked against LEO to see whether those responding are representative of the wider population – this is much more valuable than looking for demographic skews.
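A rough sketch of the kind of outcome-based cross-check described there (all data simulated, and real LEO linkage is far more involved) might compare an outcome distribution for respondents against non-respondents:

```python
# A sketch of an outcome-based cross-check: compare (simulated) earnings for
# graduates who responded to GO against those who did not, rather than only
# checking demographic mix. The `responded` flag is assigned at random here,
# so in this toy example no bias should be detected.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
earnings = rng.lognormal(mean=10.1, sigma=0.4, size=10_000)  # hypothetical cohort
responded = rng.random(10_000) < 0.40                        # ~40 per cent response

stat, p = ks_2samp(earnings[responded], earnings[~responded])
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
# A large statistic (small p) would suggest respondents' outcomes differ
# systematically from non-respondents', i.e. evidence of non-response bias.
```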
I agree with David that bringing in TNE before in-year data is another significant misstep by the OfS based on personal agendas. Ensuring that there is proper regulation of UK-based HE is surely much more important than data on overseas activity, where the context will be so hard to understand. It’s not as if they are using the aggregate data that they already have – or at least we’ve not seen any investigations launched off the back of it, which surely we would have done if it was a burning platform requiring urgent action.
Hello David,
Fascinating article as always. I am not sure that Cohort D is the smallest cohort, as it collects students who graduated between 1 May and 31 July 2023 – for us and many others that will be the UK first degree full-time (UK FD FT) group, typically one of the largest groups studying at university. Our Cohort D is around three times the size of the others. There are also interesting implications for league tables, which primarily use UK FD FT graduates – a group that mainly sits within Cohort D. So if league table providers don’t weight over two years of data to smooth out issues this year, it could make for interesting reading in September. The OfS, as far as I am aware, has yet to release its prioritisation mechanisms for this academic year (do correct me if I missed something here), so providers are not entirely sure what is or is not going to be top of its list to look at. Finally, sample size has really important implications for B3: the 90% and 95% confidence intervals used as thresholds to start investigations and take regulatory action are generally wide and less accurate on smaller samples, meaning they are less likely to identify courses genuinely below the threshold. Still, don’t take my word for it – here are two papers exploring the issue: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2024.2382258 and https://www.tandfonline.com/doi/pdf/10.1080/03075079.2023.2196292
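To illustrate that confidence-interval point with invented numbers: Wilson 95 per cent intervals around the same observed progression rate widen sharply as the number of responding graduates on a course shrinks.

```python
# A sketch of the confidence-interval point: Wilson 95% intervals around the
# same observed progression rate, for shrinking numbers of responding
# graduates on a course. All figures are hypothetical.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

for n in (200, 50, 20):          # responding graduates on a course
    k = round(0.55 * n)          # observed positive-outcome count
    lo, hi = wilson_interval(k, n)
    print(f"n={n}: observed {k / n:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```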
First, the minor nitpick. Gosia’s surname is ‘Turner’.
More to the point: Graduate Outcomes has a representative sample and meets ONS quality standards for an Official Statistic. Here’s the most recent ONS report:
https://osr.statisticsauthority.gov.uk/publication/assessment-of-compliance-with-the-code-of-practice-for-statistics-higher-education-graduate-outcomes-data-and-statistics/
ONS don’t just hand this accreditation out for fun – they’re currently withholding it from their own Labour Force Survey.
Obviously we’d like the response rate to be higher but otherwise I agree with Richard Puttock, who, of course, knows more about this topic than anyone.
If anything it does highlight that DLHE response rates were almost improbably high, which is grimly amusing as I recall all sorts of people obsessing over non-response as if merely having an 80% response rate for a census sample of several hundred thousand people was an unforgivable flaw.
Gosia, Richard, Charlie and the ONS assessment paint a nuanced picture of the fitness of GO for regulatory and strategic planning purposes, compared with a simpler historical or heuristic view – e.g. that a survey of this kind needs a 50% response rate.
But what is the “so what”? What could or should be done differently next time? One imagines Jisc could spend more money – public money, sector money – to drive up the response rate, via various means. But what if that made no difference to the value of the data, in relation to GO’s purpose? Perhaps, at a time of constrained resources, a financial angle can help focus on what is most important.
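One hypothetical way to frame that question: for a large cohort, the sampling error on a headline rate is already tiny at a 40 per cent response, so extra spend buys little extra precision – the real risk is non-response bias, which a higher rate does not automatically remove. A sketch with invented figures:

```python
# Back-of-envelope: sampling error on a headline rate at different response
# rates for a hypothetical 100,000-graduate cohort. Error shrinks with the
# square root of n, so the gain from chasing a higher rate is modest; any
# systematic difference between respondents and non-respondents remains.
from math import sqrt

cohort = 100_000
p = 0.70                                   # assumed true positive-outcome rate

for rate in (0.40, 0.50, 0.60, 0.80):
    n = int(cohort * rate)
    se = sqrt(p * (1 - p) / n)             # standard error, ignoring bias
    print(f"response rate {rate:.0%}: n = {n:,}, standard error = {se:.3%}")
```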
I declare the same interest as Charlie above, i.e. I work for Jisc.
Are we getting to a point where LinkedIn could provide a more useful data sample than the Graduate Outcomes Survey?
It could also show more about what a graduate has been up to since leaving university than the snapshot survey-week approach does.
Great point. Using my own institution as an example, I can easily find alumni on LinkedIn who haven’t responded to the Graduate Outcomes survey; it would be great if we could do more during the survey window to remind graduates via LinkedIn to take part.