The outcomes-focused phase of OfS’ quality and standards review is nearly upon us, and with it the development of performance indicators promised as part of the looming consultation.
As the regulator works out how best to assess graduate outcomes, it’s helpful to reflect on historical interpretations of success, why we don’t know what success looks like now, and what the future vision is.
First destinations, first steps
While the intense focus on graduate outcomes within the sector is a relatively recent phenomenon, collecting and assessing information on university leavers’ destinations is not. Institutions were doing this for their own purposes long before a new(ish) sector statistics agency introduced a formal collection exercise in the mid-1990s and HESA’s First Destinations survey was born.
In 1999 HESA crunched First Destinations data to create the first official UK performance indicator (PI) for graduate outcomes, focusing on the proportion of leavers who entered employment shortly after university. Progression to further study was added in 2011, ten years after First Destinations evolved into DLHE which enjoyed a 17-year reign as the sector-standard measurement of graduate activity before being replaced by Graduate Outcomes in 2018.
The revised UKPI survived throughout, even with the obvious flaw that, in failing to distinguish between the types of work graduates went on to, it didn’t really provide a meaningful measure of performance. Leavers in low-skilled or low-value jobs counted equally with those in “graduate level” roles.
Part of the reason HESA’s metric never evolved is that it didn’t have to. Despite offering the analytical advantage of sector benchmarking (and providing eager marketeers with impressive stats about student success), more useful ways to evaluate performance were found elsewhere. By the time providers started to pay really close attention to graduate outcomes as drivers of reputation and recruitment, league table rankings, which first appeared in the 1990s, had long since established summary metrics from DLHE data. These calculations involved various ways of recording progression to highly skilled employment or further study and were widely adopted by institutions as internal PIs. Later they formed the basis of analogous approaches to provider assessment when a new regulatory era of higher education ushered in the OfS and TEF.
So much for the history of PIs. Before looking to the future, it’s worth pausing on a rather astonishing present. While “good” graduate destinations have become perhaps the byword for provider quality, 18 months after Graduate Outcomes results were first released and three years since data collection began, there is still no official stance on how to measure them.
HESA was widely expected to lead development of a new UKPI to match the new survey, having announced plans to do so in early 2020. Eyebrows were raised back then about the perceived late timing of this work, but more widespread surprise followed when it was abandoned in 2021 as part of HESA’s move to distance itself from performance measurement. HESA’s position is understandable (although unfortunate, because its expertise and influence are significant), but earlier disclosure would have been welcome to encourage more timely progress elsewhere.
OfS is likewise culpable for delaying the process, and was promising imminent work on PI development in consultation with the sector when survey results were first published. But without immediate regulatory need for one (TEF would have looked absurd using DLHE data when Graduate Outcomes data was available, but the old exercise never really progressed past 2018), urgency drifted before things were swept up in the wider overhaul of quality and standards.
Without much official direction or innovation from within the sector, and with new rankings to produce, league table compilers again led the development of performance measures for the first Graduate Outcomes results, typically by reworking the old DLHE metrics on highly skilled employment and/or further study. HEIs, needing a recognised means of assessing and comparing performance, adopted these measures internally before OfS, without making much headway in developing a PI in consultation with the sector, wrangled a version into the (experimental) Start to Success and Proceed provider assessment measures. Sound familiar?
The introduction of Graduate Outcomes should have prompted a fresh look at graduate destinations and the construction of a more nuanced interpretation of what “good” outcomes look like, one which considered new perspectives advocated by a range of conversant sector stakeholders, including providers and representative organisations like AGCAS.
This could have included undertaking a more detailed and contextual examination of activity, assessment of contribution to key skills or civic needs and, crucially, consideration of graduates’ own views of success. But however we define “good”, I’d like to move away from the idea that it can be usefully reduced to a single binary measure based on crude classifications of current employment and study.
This approach might have served well in the past but we’re in a different place now, with different data and where judgements about outcomes performance have different and more profound implications. It’s not just providers who matter here either. It doesn’t seem desirable for choice among a diverse student body to be influenced by such a condensed construal of success. Nor does it appear useful to construct a limited, UK-wide PI for four regulators who seem to be moving in rather different directions. Graduate Outcomes data (and the environment it sits within) is complex. Stakeholders need more advanced performance indicators to help make good sense of it.
Arguably PI development should have started years ago as part of the planning for Graduate Outcomes. But does that matter right now when OfS is about to take this work forward?
Possibly. I worry that an endeavour which deserved dedicated attention will be lost within the wider piece and that with historical interpretations of performance recently embedded in the sector, a single stale definition of success will be mooted as the starting point for OfS proposals. If it is, let’s hope there’s time and scope for genuine consultation and consideration of alternatives. We’ve long been without a good approach to measuring outcomes and can hold out a little longer to get one.