September marks the start of the next round of Graduate Outcomes data collection.
For universities, that means weeks of phone calls, follow-up emails, and dashboards that will soon be populated with the data that underpins OfS regulation and league tables.
For graduates, it means answering questions about where they are, what they’re doing, and how they see their work and study 15 months on.
A snapshot
Graduate Outcomes matters. It gives the sector a consistent data set, helps us understand broad labour market trends, and (whether we like it or not) has become one of the defining measures of “quality” in higher education. But it also risks narrowing our view of graduate success to a single snapshot. And by the time universities receive the data, closer to two years have passed since those students graduated.
In a sector that can feel slow to change, two years is still a long time. Whole programmes can be redesigned, new employability initiatives launched, employer engagement structures reshaped. Judging a university on what its graduates were doing two years ago is like judging a family on how it treated the eldest sibling – the rules may well have changed by the time the younger one comes along. Applicants are, in effect, applying to a university in the past, not to the one they will actually experience.
The problem with 15 months
The design of Graduate Outcomes reflects a balance between timeliness and comparability. Fifteen months was chosen to give graduates time to settle into work or further study, but not so long that recall bias takes over. The problem is that 15 months is still very early in most careers, and by the time results are published, almost two years have passed.
For some graduates, that means they are captured at their most precarious: still interning, trying out different sectors, or working in roles that are a stepping stone rather than a destination. For others, it means they are invisible altogether: portfolio workers, freelancers, or those in international labour markets where the survey struggles to track them.
And then there is the simple reality that universities cannot fully control the labour market. If vacancies are not there because of a recession, hiring freezes, or sector-specific shocks, outcomes data inevitably dips, no matter how much careers support is offered. To read Graduate Outcomes as a pure reflection of provider performance is to miss the economic context it sits within.
The invisible graduates
Graduate Outcomes also tells us little about some of the fastest-growing areas of provision. Apprentices, CPD learners, and in future those engaging through the Lifelong Learning Entitlement (LLE) all sit outside its remit. These learners are central to the way government imagines the future of higher education (and in many cases to how universities diversify their own provision), yet their outcomes are largely invisible in official datasets.
At the same time, Graduate Outcomes remains prominent in league tables, where it can have reputational consequences far beyond its actual coverage. The risk is that universities are judged on an increasingly narrow slice of their student population while other important work goes unrecognised.
Looking beyond the survey
The good news is that we are not short of other measures.
- Longitudinal Education Outcomes (LEO) data shows long-term earnings trajectories, reminding us that graduates often see their biggest salary uplift years into their careers, not at the start. An Institute for Fiscal Studies report highlighted how the biggest benefits of a degree are realised well beyond the first few years.
- The Resolution Foundation’s Class of 2020 study argued that short-term measures risk masking the lifetime value of higher education.
- Alumni engagement gives a richer picture of where graduates go, especially internationally. Universities that invest in tracer studies or ongoing alumni networks often uncover more diverse and positive stories than the survey can capture.
- Skills data (whether through Careers Registration or employer feedback) highlights what students can do and how they can articulate it. That matters as much as a job title, particularly in a labour market where roles evolve quickly.
- Case studies, student voice, and narratives of career confidence help us understand outcomes in ways metrics cannot.
Together, these provide a more balanced picture: not to replace Graduate Outcomes, but to sit alongside it.
Why it matters
For universities, an over-reliance on Graduate Outcomes risks skewing resources: so much energy goes into chasing responses and optimising for a compliance metric, rather than into supporting long-term student success.
For policymakers, it risks reinforcing a short-term view of higher education. If the measure of quality is fixed at 15 months, providers will inevitably be incentivised to produce quick wins rather than lifelong skills.
For applicants, it risks misrepresenting the real offer of a university. They make choices on a picture that is not just partial, but out of date.
Graduate Outcomes is not the enemy. It provides valuable insights, especially at sector level. But it needs to be placed in an ecosystem of measures that includes long-term earnings (LEO), alumni networks, labour market intelligence, skills data, and qualitative student voice.
That would allow universities to demonstrate their value across the full diversity of provision, from undergraduates to apprentices to CPD learners. It would also allow policymakers and applicants to see beyond a two-year-old snapshot of a 15-month window.
Until we find ways to measure what success looks like five, ten or twenty years on, Graduate Outcomes risks telling us more about the past than the future of higher education.