The OfS has recently announced, following a DfE study, a desire to use LEO for regulation. In my view this is a bad idea.
Don’t get me wrong, the Longitudinal Education Outcomes (LEO) dataset is a fantastic and under-utilised tool for historical research. Nothing can compare to LEO for its rigour, coverage and the richness of the personal data it contains.
However, it has serious limitations: it captures earnings, not salary, so for everyone who chooses to work part time it will seriously underestimate the salary they command.
And fundamentally it’s just too lagged. You can add other concerns around those choosing not to work and those working abroad if you wish to undermine its utility further.
The big idea
The OfS is proposing to use data from three years after graduation, which I assume means the third full tax year after graduation, although it could mean something different; no details are provided. Assuming my interpretation is correct, the most recent LEO data, published in June this year, relates to the 2022-23 tax year. For that to be the third full tax year after graduation, we are talking about the 2018-19 graduating cohort (and even if you count the third tax year including the one in which they graduated, it’s the 2019-20 graduates). The OfS also proposes to continue using four-year aggregates, which makes a lot of sense to avoid statistical noise and deal with small cohorts, but it does mean that some of the data will relate to even earlier cohorts.
The problem, therefore, is that if the proposed regime had been in place this year, the OfS would only just have got its first look at outcomes from the 2018-19 graduating cohort, who were of course entrants in 2016-17 or earlier. Viewed through this lens, it is hard to see how any serious regulatory tools could be applied to a provider failing on this metric but performing well on others, especially if it is performing well on those based on the still lagged but more timely Graduate Outcomes survey.
It is hard to conceive of any course that will not have had at least one significant change in the 9 (up to 12!) years since the measured cohort entered. It therefore won’t be hard for most providers to argue that the changes they have made since those cohorts entered will have had positive impacts on outcomes, and the regulator will have to give some weight to those arguments, especially if they are supported by changes in the existing progression indicator or the proposed new skills utilisation indicator.
A problem?
And if the existing progression indicator is problematic, then why didn’t the regulator act on it when it had it four years earlier? The OfS could try to argue that it’s a different indicator capturing a different aspect of success, but this, at least to this commentator’s mind, is a pretty flimsy argument and is likely to fail because earnings are a very narrow definition of success. Indeed, by having two indicators the regulator may well find itself in a situation where it can only take meaningful action if a provider is failing on both.
The OfS could begin to address the time lag by looking only at the first full tax year after graduation, but this would undoubtedly be problematic, as graduates take time to settle into careers (which is why GO is at 15 months), and of course the interim study issues will be far more significant for this cohort. It would also still be less timely than the Graduate Outcomes survey, which itself collects the far more meaningful salary rather than earnings.
There is of course a further issue with LEO in that it will forever be a black box for the providers being regulated using it. It will not be possible to share the sort of rich data with providers that is shared for other metrics meaning that providers will not be able to undertake any serious analysis into the causes of any concerns the OfS may raise. For example, a provider would struggle to attribute poor outcomes to a course they discontinued, perhaps because they felt it didn’t speak to the employment market. A cynic might even conclude that having a metric nobody can understand or challenge is quite nice for the OfS.
The use of LEO in regulation is likely to generate a lot of work for the OfS and may trigger lots of debate but I doubt it will ever lead to serious negative consequences as the contextual factors and the fact that the cohorts being considered are ancient history will dull, if not completely blunt, the regulatory tools.
Richard Puttock writes in a personal capacity.
But aren’t you completely forgetting the main problem with LEO data, as with all graduate statistics? i.e. the first thing that all statistics students get taught is that correlation doesn’t prove causation. So if a History graduate gets a job as an M&S buyer, then the LEO data will record their pay and employability as if it is the fact that they studied History for 3 years that ‘caused’ them to end up having a career as a buyer and becoming good at choosing the colour range for the next autumn collection, which is a complete nonsense. Yet nobody seems to…
I don’t think this is as big a flaw as you say; there’s naturally going to be some element of the students’ aptitude, social class, and straight-up luck, but the whole idea is to look holistically at the provider and their departments, rather than the individual student. If an institution has 100% of their history grads working at high pay, they’re probably doing something right in their history department, even accounting for other factors.
Of course there is a risk that there is some extraneous causal factor that we are unable to account for that is manifesting in spurious correlations, perhaps social capital or motivation. However, there has been extensive work on LEO and GO that attempts to account for other potential causal factors such as student background/personal characteristics or aptitude (as measured by prior attainment) that leaves course effects as the most likely causal factor.
Have you seen the effect of Prior Academic attainment as explored in my report ‘Why is the Average Graduate Premium falling’? (you’ll find it at Universitywatch.org). I really don’t think you can dismiss this as merely a ‘potential’ causal factor. I am looking forward to the new Graduate & Postgraduate Labour Market Outcomes data due in 2026 that will show graduate premium broken down for Prior academic attainment for the first time.
I largely agree, but note that the consultation proposes to discount actions planned or taken that have not already improved outcomes. It specifically says that they expect to manage that through the cyclical nature of the revised TEF. Effectively, “Don’t tell us that what you are doing *will* improve outcomes. Show us that it *has*.” This is likely to make the lag (and the black box issue) more difficult to deal with.
Agreed, although I can see why they want to take this approach. The issue is how to interpret evidence of improved outcomes through the progression indicator; you surely can’t ignore better outcomes on GO either, whether in job type or salary.
The OfS will not be able to trace the historical sequence of events causing the probable outcomes. Or to put it another way, there is no audit trail, only an input and an output: as you say, a black box. What then are the purpose and functions of the OfS?
I totally agree with Richard about the practicalities of using this data for regulatory purposes, but what seems to be overlooked is the broader issue of whether it is meaningful in the first place to associate graduate earnings three years after graduating with institutional performance. The Treasury has for decades nurtured a desire to link funding to earnings and has, in a peculiar and perverse way, partly achieved that through the loan repayment scheme, such that those that earn more pay back more. It now seeks to focus attention through regulation on those that don’t pay back some or…