If you’re a Wonkhe regular, you may have noticed that we have been somewhat preoccupied with two massive higher education data releases in recent weeks: the Teaching Excellence Framework, and Longitudinal Education Outcomes data.
Both TEF and LEO contain, in their own way, measures of the employment of graduates. Two of the six main TEF metrics use DLHE data for students 6 months after graduation – looking both at all graduates reporting employment, and those reporting “highly skilled” employment. LEO draws on HMRC salary data for “sustained” employment 1, 3 and 5 years after graduation.
However, LEO also looks at salaries, and implicitly values highly paid graduates above all others. LEO offers more granularity than TEF – offering salary ranges by subject, where TEF is (for now) institution level only, and shows only deviation from a benchmark for that institution.
Employment (drawn from DLHE) is already an established UK Key Performance Indicator. Highly skilled employment was derived from the same dataset directly for TEF – for a primer on how this was done we turn to the DfE research report that describes the development of the “highly skilled” metric.
Peter Blyth and Arran Cleminson, the report authors, note that “It is […] worth noting that development of the Longitudinal Employment Outcomes (LEO) dataset may offer an alternative or complementary measure of highly skilled employment in the future” – and there was strong sector support for (LEO-style) “linked tax, benefits and education data” in the HESA NewDLHE consultation. But none of this was going to be ready in time for TEF year two. Even now, LEO remains a technically ‘experimental’ release (that said, TEF remains technically a ‘trial’).
The original research team at IFS who dug into LEO have already done their own comparison of the LEO results with what DLHE and the UK Labour Force Survey suggest. They found that survey data tends to overreport gender disparities and underreport the very highest salaries.
Blyth and Cleminson agree. “It is possible that non-response [to DLHE] is (negatively) correlated with the outcome of interest.” Basically, this means that graduates outside of employment or further study are less likely to respond to the DLHE survey, and so it is implied that graduates in skilled roles may be more likely to respond to it.
There is no control for this aspect in the setting of benchmarks in TEF (it is difficult to imagine how one might be applied), but the employment metric is benchmarked against:
- Entry tariff
- Subject of study
In contrast, the “highly skilled” metric is also controlled for:
- POLAR (a measure of HE participation in a local area)
- Disability
It is curious that the “employment” metric is not controlled for either of these aspects. Both are well known to affect graduate employment as well as earnings. It is also worth noting that Blyth and Cleminson found that institutional REF performance and “age” of institution (I assume date of foundation as a university?) were statistically significant in determining graduate employment.
How do TEF and LEO compare?
As a part of Wonkhe’s week of LEO we calculated institutional average salaries, taking into account each institution’s subject spread, for graduates five years out. This wasn’t to build a league table (we didn’t think it demonstrated much) but to build a series that we could compare against other data.
So how well do deviations from the weighted sector benchmarks for employment at 6 months after graduation (in TEF) predict a higher salary 5 years later (in LEO)?
Caveats first. These are statistics for two different cohorts graduating nearly a decade apart. An institution can change a lot in ten years, as can the graduate job market and the general economic outlook. DLHE (which underpins TEF) is self-reported, and the “highly skilled” subset is derived from comparing reported employment type to SOC job codes 1-3. In contrast, LEO is derived from what employers report to HMRC and does not control for job type or skills status. And, obviously, I’ve only used institutional datasets where full data exists for both TEF and LEO.
Remember – my LEO figures are averages of the median salary after 5 years for all students at an institution. So the figures below are the averages of the average of a median salary, which isn’t enormously meaningful.
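To make that construction concrete, here is a minimal sketch of a subject-weighted institutional average of LEO median salaries. All subjects, counts, and salary figures are invented for illustration – they are not from the LEO release.

```python
# Hypothetical LEO median salaries (5 years after graduation) by subject
median_salary = {"Economics": 33000, "English": 23000, "Nursing": 24000}

# Hypothetical graduate counts by subject at one institution
graduates = {"Economics": 150, "English": 300, "Nursing": 250}

# Weight each subject's median salary by that institution's subject spread
total = sum(graduates.values())
weighted_average = sum(
    median_salary[s] * graduates[s] for s in graduates
) / total

print(weighted_average)  # -> 25500.0
```

Averaging these weighted figures across institutions then gives the “average of the average of a median” described above – useful for comparison against other series, but not a number to read too much into on its own.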
| TEF Highly Skilled Flag (core, dominant, all yrs) | LEO average median salary (5 yrs) | Number of institutions with flag |
|---|---|---|
| ++ | £25,857 | 45 |
| + | £26,794 | 7 |
| = | £25,416 | 42 |
| - | £25,748 | 8 |
| -- | £23,515 | 29 |
| All positive | £25,983 | 52 |
| All negative | £23,998 | 37 |
| TEF Employment Flag (core, dominant, all yrs) | LEO average median salary (5 yrs) | Number of institutions with flag |
|---|---|---|
So the answer would appear to be: not very well. This does make sense – the changes that happen in the working lives of 21-26 year-olds would only rarely show a linear progression between an entry-level and a higher-level role in the same industry. But there are a few artefacts that bear further investigation.
Positive flags and lower salaries?
One interesting figure here is the fact that institutions with a positive TEF employment flag (number of students in employment against benchmark) have a substantially lower average median salary for graduates after five years. This is not repeated for graduates in highly skilled employment.
This feels counter-intuitive, but what is happening here looks fairly straightforward. 14 institutions have negative or double negative flags for TEF employment – of these, all but three also have a double negative for TEF highly skilled employment (the exceptions? LSE with a ++, Bolton and Rose Bruford with an =). Removing LSE as a clear outlier (with a very, very high institutional average), we see an average salary of £22,622 – still more than the positive employment rating average, but a lot more reasonable.
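The effect of dropping a single extreme institution can be sketched with invented numbers (these are not the real institutional averages): one very high value in a small group drags the mean well away from the rest.

```python
# Invented average salaries for a small group of institutions;
# the last entry plays the role of the "LSE-style" outlier
salaries = [22000, 21000, 23000, 24000, 22000, 60000]

mean_with_outlier = sum(salaries) / len(salaries)
mean_without_outlier = sum(salaries[:-1]) / len(salaries[:-1])

print(round(mean_with_outlier), round(mean_without_outlier))  # -> 28667 22400
```

The outlier-free mean sits close to the bulk of the group, which is why the negative-flag average looks much more reasonable once LSE is excluded.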
Back to LSE
But what about LSE’s double negative for “employment”? The panel agreed that “whilst progress to highly skilled employment or further study, which is notably above benchmark for many student groups, progression to employment or further study is notably below benchmark”. In their provider statement, LSE note a caveat to their “employment” flag – that when compared to a more conventional institution they have a very low number of UK domiciled undergraduate students in any given cohort. A low sample size means that the performance of an individual graduate would have a proportionally greater effect on the overall score.
So those 51 LSE graduates who were unemployed according to DLHE may have a lot to answer for. In their provider statement they note that students who report as unemployed tend – based on the findings of LSE-conducted interviews – to be waiting for the “right” (graduate) job.
The POLAR and disability figures for the LSE were made available in the “contextual data” tab of the Metrics sheet. It’s a shame to ruin a great story but there’s no indication that the LSE are secretly educating the working classes about the benefits of the neoliberal consensus… the socio-economic make-up of their undergraduate body does not seem to be the source of their variant metric.
Of course, POLAR and disability should clearly and consistently be controlled for in all DLHE-based metrics. But a much bigger issue is that for small sample sizes, small numbers of outliers can have a big effect. If we are going to do something like TEF that requires that metrics are used for everyone, we are going to need to flag and correct for these issues a bit better.
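The small-sample point is easy to illustrate with hypothetical cohort sizes (again, invented numbers, not DLHE data): the same handful of graduates reporting unemployment moves a small cohort’s employment rate far more than a large one’s.

```python
def employment_rate(employed: int, cohort: int) -> float:
    """Share of a cohort reporting employment."""
    return employed / cohort

small_cohort = 100   # hypothetical small UK-domiciled cohort, LSE-style
large_cohort = 2000  # hypothetical large institution

# In each case, five additional graduates report as unemployed
small_shift = employment_rate(95, small_cohort) - employment_rate(90, small_cohort)
large_shift = employment_rate(1900, large_cohort) - employment_rate(1895, large_cohort)

# Five graduates shift the small cohort's rate by 5 percentage points,
# but the large cohort's by only 0.25 percentage points
print(small_shift, large_shift)
```

This is why a benchmark deviation that earns a flag at one institution could be statistical noise at another – and why flagging rules need to account for cohort size.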