If we cast our minds back to 2005, the four UK higher education funding bodies ran the first ever compulsory survey of students’ views on the education they receive – the National Student Survey (NSS).
Back then the very idea of a survey was controversial: we were worried about the impact on the sector's reputation, the potential for response bias, and the possibility that students would be fearful of responding negatively in case their university downgraded their degree.
Initial safeguards
These fears led us to make three important decisions, all of which are now well past their sell-by date. These were:
- Setting a response rate threshold of 50 per cent
- Restricting publication to subject areas with more than 22 respondents
- Only providing aggregate data to universities.
At the time all of these were very sensible decisions designed to build confidence in what was a controversial survey. Twenty years on, it’s time to look at these with fresh eyes to assure ourselves they remain appropriate – and to these eyes they need to change.
Embarrassment of riches
One of these rules has already changed: responses are now published where 10 or more students respond. Personally, I think this represents a very low bar, determined as it is by privacy more than statistical reasoning, but I can live with it especially as research has shown that “no data” can be viewed negatively.
Of the other two, first let me turn to the response rate. Fifty per cent is a very high response rate for any survey, and the fact the NSS achieves a 70 per cent response rate is astonishing. While I don't think we should be aiming to get fewer responses, drawing a hard line at 50 per cent creates a cliff edge in the data that we don't need.
There is nothing magical about 50 per cent – it's simply a number that sounds convincing because it means that at least half your students contributed. A 50 per cent response rate does not ensure that the results are free from bias: if, for example, propensity to respond were in some way correlated with a positive experience, the results would still be flawed.
I would note that the limited evidence available suggests that propensity to respond is not correlated with a positive experience, but it's an under-researched area and one the Office for Students (OfS) should publish some work on.
Panel beating
This cliff edge is even more problematic when the data is used in regulation, as the OfS proposes to do as part of the new TEF. Under OfS proposals, providers that don't have NSS data – either due to small cohorts or a "low" response rate – would have NSS evidence replaced with focus groups or other types of student interaction. This makes sense when the reason is an absolute low number of responses, but not when it's due to not hitting an exceptionally high response rate – as Oxford and Cambridge failed to do for many years.
While focus groups can offer valuable insights, and usefully sit alongside large-scale survey work, it is utterly absurd to ignore evidence from a survey because an arbitrary and very high threshold is not met. Most universities will have several thousand final year students, so even if only 30 per cent of them respond you will have responses from hundreds if not thousands of individuals – which must provide a much stronger evidence base than a handful of focus groups. Furthermore, that evidence base will be consistent with every other university's, creating one less headache for assessors in comparing diverse evidence.
The 50 per cent response rate threshold also looks irrational when set against the 30 per cent threshold for the Graduate Outcomes (GO) survey. While any response rate threshold is inherently arbitrary, applying two different thresholds needs rather more justification than the fact that the surveys are able to achieve different response rates. Indeed, I might argue that the risk of response bias is higher with GO for a variety of reasons.
NSS to GO
In the absence of evidence in support of any different threshold I would align the NSS and GO publication thresholds at 30 per cent and make the response rates more prominent. I would also share NSS and GO data with TEF panels irrespective of the response rate, and allow them to rely on their expert judgement supported by the excellent analytical team at the OfS. And the TEF panel may then choose to seek additional evidence if they consider it necessary.
In terms of sharing data with providers, 2025 is really very different to 2005. Social media has arguably exploded and is now contracting, but in any case attitudes to sharing have changed and it is unlikely the concerns that existed in 2005 will be the same as the concerns of the current crop of students.
For those who don’t follow the detail, NSS data is provided back to universities via a bespoke portal that offers a number of pre-defined cuts of the data and comments, together with the ability to create your own cross-tabs. This data, while very rich, does not have the analytical power of individualised data and is still subject to suppression for small numbers.
What this means is that if we want to understand the areas to improve, we’re forced to deduce them from a partial picture rather than being laser-focused on exactly where the issues are – and this applies to both the Likert scale questions and the free text.
It also means that providers cannot form a longitudinal view of the student experience by linking to other data and survey responses they hold at an individual level – something that could generate a much richer understanding of how to improve the student experience.