We’re talking about burden again.
Well, the government is, with the same wearying predictability of a minister six months into their role.
They’ve consulted with the sector and the sector has complained that the statutory and other reporting obligations are not sustainable. The department has responded with “something must be done”. This “something” was set out in a policy paper I read with my head mostly in my hands.
Starting from the wrong end
That’s not to say a new conversation about burden is unwelcome. But we’re looking for burden in the wrong places. Somewhat counter-intuitively, the minister appears to be exacerbating the problem by considering burden in data silos.
I’m not just talking about the NSS. Within the DfE’s same Sanctuary building there are entirely different regulatory models for FE and HE. That’s a heck of a prize when we’re considering burden. Especially now with so much FE blending into HE. Yet we still preserve two entirely separate reporting protocols mostly based on the same data. This is symptomatic of the siloed approach to reporting. It may be in the same building but it feels like two different planets.
Since that isn’t even on the burden reduction agenda, let’s focus on the NSS where again it’s hard to argue against a review. The issues reach far beyond the burden of collection. But even when we focus on that, the recommendations do not appear to be a fix. It’s merely going to be differently burdensome.
A proper review
This is not new nor is it news. Nearly ten years ago there was a proper strategic review of data collection. This spawned the HEDIIP programme and the data landscape project.
HEDIIP, and specifically the data landscape project, was not perfect. But it did try to live up to its mission of being “a once in a generation opportunity to transform the data landscape” – a mission defined in the 2011 White Paper “Students at the Heart of the System”.
The HEDIIP Data Landscape has many goals but two are worth reminding ourselves of here:
- Collect once, use many times. Data’s superpower is its utility. Over 80 different collections were asking many of the same questions. Collecting that data once and distributing it was obvious, if complex.
- Stay closer to real time. The historic issue with the HESA Student return was the 18-month gap between collection and publication.
What the Data Landscape did well was what further and higher ed in the UK does well. It consulted, it collaborated, and it compromised. The outcome was an observed model which reflected not only the data the sector was reporting externally but the relationships between those data points. This was crucial to meet the two goals of utility and timeliness. If we consider the regulatory and reporting environment we find ourselves in now, this is exactly the approach that would hoover up requirements for COVID-19 type information.
A less well documented outcome of the data landscape was the sector governance project that followed it. This attempted to address the perennial problem of calculating the value of a collection against the cost. It attempted to level-up the relationship between requester and supplier through a code of practice and a robust assessment process.
This was always a challenge where that requirement often stems from a statutory instrument – often bluntly expressed as “the government needs this data”. But it set out a framework where trust and transparency were placed at the heart of the debate. Calculating institutional burden is hard, assessing value is harder but this should not have stopped us trying.
So why do we find ourselves back in pre-2011 days dragged into individual data silos? That’s a short question with a long answer. There’s a prevailing thought that the difficulty in implementing the landscape through Data Futures is the answer. I’m not qualified to comment on that but I will say we’re putting the cart before the horse.
The thing about Data Futures
Two elephants were in the room. Firstly, the sector was – let’s be charitable – lukewarm on the model. Providers did not agree that it would be less burdensome, and some of the data collectors were concerned about losing their grip on individual data collections.
Some of the issue was messaging. There was going to be more burden to get over the hump and into a new way of operating. But after that burden would absolutely come down and – crucially – future burden could be managed in a far more sustainable fashion.
The second issue is related. Many – if not most – universities have not prioritised good data management practice outside of data under external scrutiny. Terrified is not too strong a word for senior leaders’ response to data that has not been through a six-month cleansing process before being allowed out of the building.
The argument that most of that data has many uses in every university did not get through. On the face of it, it seems logical that creating and maintaining trusted student, finance and staff datasets would – over time – save providers money, improve their own data capability and satisfy regulators and other stakeholders.
Logic and HE are once again not the happiest of bedfellows. So is the Data Landscape dead? Certainly the original mission has been sliced by a thousand cuts. The number of use cases for the current model appears to be one. We’ve not really moved the dial at all – instead we’re back to looking at individual collections and declaring them burdensome. The conversation around cost versus value has been silenced.
So what’s next? Another review of Data Futures apparently. It feels like standing on the deck of the Titanic and shouting “Iceberg sighted behind”. We have abandoned strategy for short termism and questionable tactics.
There’s an irony that accurate and timely data is essential to getting us the right answers. On burden, we’re asking the wrong questions.