This article is more than 3 years old

Why we should all mourn the disruption to student outcome data

However we feel about data in regulation, we should be concerned that insights into equality we need will be lost thanks to Covid-19, says Deborah Johnston.
Deborah Johnston is deputy vice chancellor (academic framework) at London South Bank University

Writing about the fallout from the disruption to student outcome data caused by Covid-19, I have never been more anxious that my readers work their way carefully through my article.

To set matters straight immediately, I have been one of the many critics of the simplistic uses to which such data are often put, the “leaguetablification” that Wonkhe has frequently written about. From the overreaching use of LEO and the use of inappropriate benchmarks for graduate outcomes in OfS regulatory conditions, to the misuse of statistical indicators in TEF metrics, I’ve stood shoulder to shoulder with those protesting about the way data is used to both regulate and rank the sector.

I should then have been celebrating when it became clear that Covid-19 may have dealt a deathblow to the reliability of our key student outcome indicators, and certainly seems to have derailed many of the ways that the data is commonly used to evaluate us. Indeed, many commentators must have been rubbing their hands with glee as doubts have been cast about the wisdom of completing this year’s NSS cycle.

When we have been navigating an onslaught of data-based criticism about the value of the sector, doesn’t it make sense to welcome this moment when the tables are turned and we call into question the value of the data? However, we should all pause. It is not only the reliability of the datasets that may be lost, but also a necessary challenge to the sector.

Covid-19 and the unravelling of student outcome data

Wonkhe has already recognised the manifold problems that Covid-19 has created for the use of student data. It’s unclear how we will be able to interpret the results of this year’s NSS. Student continuation, award and employment data will almost certainly be affected, and in ways that will be uneven, localised and complex.

There are two implications. First, it’s not clear how the data relating to this year can be compared to other years, and so the usefulness of this data in providing information about trends in student satisfaction and outcomes will be diminished.

Second, this disruption to data comparability puts extreme pressure on a range of indicators that are compiled to both regulate and rank the sector. From league tables to the regulatory conditions of the OfS, these approaches use benchmarks and averages to interpret data from any one observation. These assessment criteria will be so much harder to justify that it’s not clear whether the resulting conclusions will be reliable. Two examples illustrate this.

Most commentators expect at least an immediate impact on graduate employment prospects due to Covid-19. If the likely blow falls particularly hard on those in certain sectors or locations, do we have the right benchmarks to separate out these factors, and is it then appropriate to judge the value of a university degree by indicators on employment or earnings in a period when the labour market has suffered a shock? And, if student continuation for a certain university plummets in the coming year, what does the data tell us? That the university has been particularly unresponsive, or that there has been a particularly large Covid-19-related shock to its students?

Suddenly the use of student outcome data to interpret university performance seems unwise. However, that doesn’t mean that we should be pleased that we’re likely to be heading into a future in which student outcome data will suddenly have much less power to judge us.

Paradise lost?

I’d argue that the use (and abuse) of student outcome data has ignited important initiatives to improve inclusion and re-energise debates about the value of HE. While marketisation has led to a frenzy of concern about league table positions and NSS scores, it was the creation and refinement of a suite of student outcome data that really challenged the sector to set out its impact. Through the imposition of the TEF, Access and Participation Plans, and the earnings debate, universities have had to respond to issues of value and inclusion.

This challenge was good for the sector in two ways. First, it forced universities to look at value. From the individual work of universities to align themselves, for example, to the UN SDGs, to the wider work of UUK to look at the sector’s social impact, we see powerful narratives to move the government away from interpreting value through narrow student earnings metrics. This has been important for more than policy makers. Universities have had to win over public sentiment and indeed the morale of their own staff, and integrated reporting approaches promise that we will be better at value creation in the future.

Second, universities have been forced to look deeply into their data and to understand how students are affected by HE provision. Never before have we had such attention to teaching quality and the broad factors that lead to awarding and progression gaps by race, ability and income. And this has underpinned many positive initiatives – new compacts with students to work on student well-being and racism, and a broader and deeper focus on understanding what it is that students of all kinds value.

Back to the future?

Like many, I’m critical of simplistic data exercises and narrow definitions of value – and we all want to see the outcomes of initiatives to eliminate the racial awarding gaps, improve inclusion and support student well-being. But data on student outcomes has been the grit in the oyster – the sector’s need to interpret and contextualise this data has undeniably given new energy to debates about the value, student experience and inclusiveness of university education.

Does anybody want to go back to a world where universities will not have to justify their performance, at least partially, by evidence about what their students do or feel? With a dearth of reliable student data, will we return to a world where university teaching is assessed only by numbers on access, or indeed, to a world where research data is king? And this is particularly important as we will need to understand how Covid-19 affects different groups of students. If we move to a world where student outcome data is simply discounted because of Covid-19, I for one worry that we will be weaker as a sector.
