You can’t fix OfS’ quality proposals without fixing the NSS
Jim is an Associate Editor (SUs) at Wonkhe
But the thing that’s been really nagging at me since October is the cheek of proposing to fundamentally restructure quality assessment without looking at whether the National Student Survey (NSS) – which sits at its heart – is fit for the purpose OfS wants it to serve.
Not that we’re allowed to say so in the consultation. Page 47 says:
We are not proposing changes to the National Student Survey as part of this consultation.
The regulator then spends the next 54 pages explaining how NSS data will be the primary evidence source for assessing whether providers meet their B conditions, how it will inform student experience ratings that could restrict growth or degree awarding powers, and how it represents:
the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support.
In other words, OfS is proposing to assess whether providers meet regulatory conditions that were introduced in 2018 and have evolved since, using a survey that was designed in 2004 and last substantially revised six or so years ago, without any systematic attempt to align the questions to the requirements they’re supposed to evidence.
B real
B1 requires courses to be “up to date” with current scholarship and professional practice, with learning outcomes that are “credible” and build coherently toward qualifications – but the NSS doesn’t ask whether course content reflects current knowledge, whether modules connect meaningfully to each other, or whether what students learn actually prepares them for their intended destinations.
Under B2, providers must ensure their staff teams are collectively sufficient in number, appropriately qualified and skilled, and accessible to students – but the NSS asks only whether staff are “good at explaining things” and whether students have been “able to contact staff when I needed to.”
A course could be taught entirely by overworked PhD students on zero-hours contracts with no relevant professional experience, and as long as they answered their emails and explained things clearly, the NSS would record satisfaction. The survey doesn’t ask whether there are enough staff to provide proper support, whether they’re qualified for what they’re teaching, or whether high turnover disrupts learning.
Under B2’s requirements around resources, providers must fund “physical and digital learning resources” without additional charges, ensure adequate IT infrastructure and study spaces, and help students understand and avoid academic misconduct. The NSS captures none of this specifically, asking instead whether library resources “have supported my learning well” without establishing whether students had to pay £200 for essential software licences, whether they could find somewhere to study during exam season, or whether anyone ever explained what plagiarism actually means beyond “don’t use ChatGPT.”
B4 requires assessments to be “valid” and “reliable”, using methods that are “appropriate” for testing intended learning outcomes – concepts with specific technical meanings in assessment theory. But the NSS asks only whether marking criteria were clear, whether assessment was fair, and whether feedback was helpful – perception questions that tell us nothing about whether the MCQ exam actually tested understanding of qualitative research methods, or whether the group presentation genuinely measured individual critical thinking capabilities.
OfS knows the NSS has problems – it’s just chosen to ignore them rather than fix them. Take the organisation and management theme, which OfS has excluded entirely from TEF indicators – despite its own press release this July highlighting it as one of the sector’s weakest-performing areas, noting that disabled students in particular reported “significantly worse experiences” and that the gap between disabled and non-disabled students was growing.
That release said institutions across the sector could be doing more to ensure disabled students get the high-quality higher education experience they are entitled to – yet when it comes to using this data for regulatory assessment, OfS has decided it doesn’t count. How can something be important enough to warrant a press release about sector-wide failure, yet irrelevant to quality assessment?
Postgrads and community
Then there’s the mysterious case of the vanished postgraduate pilots. Back in 2018, OfS promised to develop a postgraduate taught (PGT) NSS, with pilots running in 2019 and 2022 – the latter involving over 100,000 students across 42 providers. The pilots included questions on value for money, mental health and wellbeing support, and whether courses met expectations set at recruitment. Then… silence. No results published, no explanation for why development stopped, and no transparency about what was learned.
It took Freedom of Information requests to reveal what OfS discovered – around 60 per cent positivity on whether universities care about students’ mental health and wellbeing, just 55-57 per cent on value for money, and systematic concerns about organisation and course management that mirror undergraduate patterns. Rather than grapple with these findings or explain why the pilots didn’t proceed, OfS just buried the data – all while websites direct prospective PGT students to undergraduate NSS scores without mentioning that not a single postgraduate was surveyed.
Even more damaging than what the NSS doesn’t measure is what OfS has actively removed from it, apparently without considering how this would affect its utility as a regulatory tool. The learning community questions – “I feel part of a community of staff and students” and “I have had the right opportunities to work with other students as part of my course” – were dropped after 2022, despite OfS’s own blended learning review identifying that isolation during online delivery was a significant concern and that courses without opportunities for small group teaching would “likely raise compliance concerns.”
OfS justified removing these questions by claiming that some students interpreted them as being about their “wider sense of community” rather than specifically their learning community – something it said was “particularly true of students from some ethnic groups.” But rather than investigate why different student populations experience community differently, or develop better questions that capture this crucial dimension of education, OfS just abandoned the measure entirely.
The mental wellbeing question that replaced them asks only about awareness of support services – a provider could have three overwhelmed counsellors for 20,000 students with six-month waiting lists, and as long as students knew the service existed, the score would be positive. The question tells us nothing about whether support is accessible, effective, or appropriate for diverse student needs.
What makes OfS’ insistence on treating the NSS as immutable especially vexing is that the solution is both obvious and, with the Lifelong Learning Entitlement approaching, inevitable. Once students can fund individual modules through the LLE, with exit qualifications at Levels 4 and 5, the right to give feedback on each unit of credit becomes absolute. A student paying for a 30-credit module deserves the same quality assurance infrastructure as someone completing a traditional three-year degree.
Regulatory attention should match the unit of funding. If providers can charge for individual modules, they should be accountable for the quality of individual modules. If students fund their education unit by unit, quality assurance must operate unit by unit.
Compulsory module-level evaluation with published results and demonstrated institutional responses would transform everything – students’ unions could draw on systematic evidence about actual student experience rather than attempting to characterise entire institutions from fragments, patterns would emerge before they become crises, and students would see whether their feedback actually leads to change.
Yet OfS proposes to assess “student experience” without requiring the basic feedback infrastructure that would evidence whether providers actually listen to and act on student views.
Meanwhile, the consultation maintains institutional-level assessment that produces single medals telling prospective students almost nothing useful – a 40,000-student university gets the same single rating as a 200-student specialist college, obscuring enormous variations in quality within large providers whilst potentially penalising small institutions doing heroic work in challenging circumstances.
A student choosing Megaville University for its TEF Gold might find their specific course has continuation rates, satisfaction scores, and graduate outcomes that would warrant “requires improvement” if assessed separately – but those poor outcomes are invisible, averaged away in the institutional medal.
Fudge and crumble
Reading between the lines, what becomes clear is that OfS isn’t interested in fixing the fundamental infrastructure problems that prevent meaningful quality assessment – it’s interested in consolidating its existing approach whilst raising the stakes.
The shift from Bronze meaning “achieving excellence” to “meeting absolute minimums” isn’t about improving quality – it’s about creating more levers for regulatory intervention without having to prove that intervention is justified by reliable evidence.
The complete abandonment of educational gain – despite students consistently requesting it – suggests that OfS has given up trying to capture what students actually want from their education in favour of what can be benchmarked in spreadsheets.
The timeline that leaves PGT students waiting until the 2030s for proper oversight, whilst transnational education (TNE) and postgraduate research (PGR) students have no timeline whatsoever, creates a hierarchy of regulatory protection that has nothing to do with risk or fees and everything to do with administrative convenience.
And the silence on why 35 per cent of student representatives didn’t feel free to be honest in their student submission suggests OfS either doesn’t know or doesn’t care that its flagship student engagement mechanism is compromised by power dynamics it refuses to acknowledge.
But as I say, the biggest problem is treating the NSS as immutable revealed truth rather than the creaking, misaligned, incomplete instrument it has become.
Until OfS accepts that meaningful quality regulation requires an NSS that aligns its questions to regulatory requirements, extends coverage to all students, captures what actually matters for student success, and is supplemented by module-level evaluation showing whether institutions genuinely respond to feedback – it’s building elaborate regulatory castles on foundations of sand.