
Covid-19 will last a long time in regulatory data

These are strange times. But their very strangeness offers a particular series of challenges to the way we use data in higher education regulation.

David Kernohan is Deputy Editor of Wonkhe

The past few years of regulatory changes in higher education have led to what I’d like to call the “leaguetablification” of everything.

Should you be allowed to register with OfS? Should you have access to specific funding streams? Are you any good at teaching and student outcomes? Are your students “satisfied”? Are you widening access and participation properly? Are you awarding too many firsts? Are you admitting too many students with unconditional offers? Are too many of your students not completing? Are too many graduates not getting good jobs? Do you have a “low quality” course? Are you financially sound?

The answer is always in the data. And more specifically, the answer is always in a grab bag of several years of data – often compared with a grab bag of several years of earlier data to see if you are getting better, or worse. And if you are getting worse, you’d better believe that the Office for Students will use the full extent of their powers.

The day the data went weird

Any dataset for the 2019-20 academic year is now a mess. Chances are 2020-21 will be similar. I’m not holding out great hopes for 2021-22 either. Not a mess in terms of credibility or – dare I say – accuracy. A mess in terms of the numbers being irrevocably skewed by circumstances far beyond the control of higher education providers – not even the Daily Telegraph could lay Covid-19 at the door of universities.

Data will be affected differently for different providers. Subject mix will play an important part; region and provider type will play another. We won’t know exactly how these modifiers will play out until the data starts coming in.

Let’s take a few examples. Non-continuation rates will probably rise, as students are pulled into managing family commitments or vital work. Employment rates 18 months after graduation will drop too, even as the economy, we hope, begins the slow path to recovery. Student satisfaction is anyone’s guess at this stage – somehow the National Student Survey is continuing for a cohort of students dealing with (as we all are) shock, confusion, and grief.

In a rational world this would simply be valuable information for institutional staff trying to plan support and advice offers, and then an interesting phenomenon for future generations of higher education data nerds to plot on Tableau. And, as Rachel Hewitt points out on HEPI, it would be a challenge for the ever-popular university league tables.

Judgement days

Alas, such data also oils the wheels of a regulatory system that has become far more automated than many are comfortable with. Interventions are designed and implemented based on metrics-driven indicators, and (occasionally) reportable events. There are thresholds on the indicators – not all public – and when institutional data passes these thresholds, action should be taken.
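To make that mechanism concrete, here is a minimal sketch in Python. The indicator names and threshold values are entirely invented for illustration (the real OfS thresholds are not all public), but the logic shows how indifferent this kind of flagging is to why a number moved.

```python
# Minimal sketch of threshold-driven flagging. The indicator names and
# threshold values are invented for illustration, not OfS policy.

INDICATOR_THRESHOLDS = {
    "continuation_rate": 0.80,   # flag if below 80% (hypothetical)
    "progression_rate": 0.60,    # flag if below 60% (hypothetical)
}

def flag_provider(indicators):
    """Return the names of indicators that fall below their threshold."""
    return [
        name
        for name, threshold in INDICATOR_THRESHOLDS.items()
        if indicators.get(name, 1.0) < threshold
    ]

# A provider whose 2019-20 continuation dips for reasons wholly outside its
# control gets flagged in exactly the same way as one with a genuine problem.
print(flag_provider({"continuation_rate": 0.78, "progression_rate": 0.65}))
# -> ['continuation_rate']
```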

Were this a simple, first-order, sector-wide effect, then you could mitigate it by simply changing the thresholds. But the effects of Covid-19 will be complex and hyperlocal – every decision (even Jim Dickinson’s “B3 bear”) is now a judgement call. This approach will be familiar to many OfS staff as the old and noble way of HEFCE – but in a higher-stakes competitive environment it seems likely to end up in more court cases.

Just to give one example here – the likely rise in non-continuation could be exacerbated by a move to online provision. Fans of the class-of-2012 MOOCs still lament the fact that online learning has a markedly higher dropout rate than in-person study. A solid argument could be constructed that holding providers to their previous standards here is a bit off.

Three years later

This gets worse. Remember up at the top of this piece when I told you that we often look at this data in three-year “buckets”? That’s how TEF works, for example. And KEF. The purpose of this practice is to offer a broader-based perspective, making sure that one outlying year does not have a disproportionate impact on your Gold award or suchlike.

Two or three years of Covid-19-inflected data poisons any number of these buckets. The effects may be longer lasting (remember that there’s 10 years of data in every TEF workbook), unpredictable, and provider-specific. In such circumstances it is even more difficult than usual to claim with even a pretence of a straight face that TEF is robust, fair, or comparable to previous years – even for people who make that claim now.
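To put rough numbers on the bucket problem, here is a back-of-the-envelope Python sketch. The continuation rates are invented; the point is simply how many successive three-year windows a shock year sits inside.

```python
# Back-of-the-envelope look at three-year "buckets" (rolling windows).
# The continuation rates below are invented purely for illustration.

rates = {
    "2017-18": 0.90,
    "2018-19": 0.91,
    "2019-20": 0.80,   # Covid-affected year (invented figure)
    "2020-21": 0.82,   # Covid-affected year (invented figure)
    "2021-22": 0.90,
    "2022-23": 0.91,
}

years = list(rates)
for i in range(2, len(years)):
    window = years[i - 2 : i + 1]
    average = sum(rates[y] for y in window) / 3
    print(f"{window[0]} to {window[-1]}: {average:.3f}")

# The two shock years sit inside four successive three-year windows, so four
# assessment cycles in a row carry their imprint: the "poisoned bucket".
```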

There’s five years of data in the dataset that underpins access and participation plans, and those plans have an impact on up to £3,000 of income per home student. We don’t yet know how Covid-19-related events will have an impact on students from different backgrounds – or if the underlying societal effects of these backgrounds will remain. As we are comparing effects across academic years, all 1.6GB of data is going to become nearly useless.

There’s three years of data in KEF, which has an impact on HEIF allocations. KEF is much more nuanced than its just-another-excellence-framework acronym suggests, and you could argue that the benchmarking groups and explicit focus on externalities mean that there will be less impact. But we still do not know for sure.

Here to regulate

If you are working at home somewhere in South Gloucestershire or north Bristol, you may feel like I am being unfair. Of course the Office for Students will not unfairly penalise providers for events outside of their control. Of course our regulator will join with the QAA and HESA in providing support and mitigation for universities and colleges, and a bad 2019-20 NSS or continuation rate should not result in a condition of registration.

But this isn’t just next year, or the year after. The practical effects – on finance, on recruitment, on student behaviour – will last longer than that. And the data effects will last – based on current systems – for a decade or more. I’m not even going to mention LEO, because no one ever, under any circumstances, should be contemplating using a dataset as flawed but fascinating as LEO in regulation.

It’s not just the data-driven regulatory systems that need to be rethought and redesigned as a result of Covid-19. It’s the entire ideological underpinning of the structures that require them.
