David Kernohan is Deputy Editor of Wonkhe

It’s fair to say that reviewing the NSS was not top of the Office for Students’ list of priorities this autumn.

Until recently, the expectation had been that the NSS would expand – to first and second year undergraduates, and to postgraduates (there was even a pilot study). In a regulated, customer-focused sector like HE, there was a sense that regular data on customer satisfaction was a prerequisite. NSS was built into TEF, and was a central part of the OfS’s regulatory warning systems.

And now this review. Mandated by the DfE, it represents a retreat from the “students at the heart of the system” narrative that has dominated higher education policymaking for the last 10 years. It represents a return to an earlier idea – often argued for in the grey academic literature – that learning need not be entertaining or immediately satisfying.

The big news is on the 2021 survey – it will still go ahead (because OfS has already paid Ipsos MORI a very large sum of money), but providers no longer need to promote the survey internally. We might also see a different level or mode of publication to align with the new direction of travel (and to stop troublemakers like me plotting results, I imagine…).

That remit in full

The brief announcement today, following the September OfS board meeting, makes it clear how significant this review is seen as being. The terms of reference are:

  1. Assess the bureaucratic burden the NSS places on providers and how this could be reduced.
  2. Explore the unintended consequences of the NSS for provider behaviour and how these could be prevented, including whether the NSS drives the lowering of academic standards and grade inflation.
  3. Examine the appropriate level at which the NSS could continue to provide reliable data on the student perspective on their subject, provider and the wider system, and what could be done without depending on a universal annual sample.
  4. Examine the extent to which data from the NSS should be made public, including the implications of Freedom of Information (FoI) legislation.
  5. Ensure the OfS has the data it needs to regulate quality effectively.
  6. Ensure the NSS will stand the test of time, and can be adapted and refined periodically to prevent gaming.
  7. Ensure the UK wide role of the survey is considered in any recommendations.

Most of the immediate action is around the first three points, though arguably the latter terms will turn out to be more significant.

Burden

The NSS is an externally administered survey. Providers are currently asked to promote the survey to final year students during the spring term, and in return are given access to detailed responses at provider and subject level to give them the opportunity to design and implement enhancement. The first part of this can in no way be seen as burdensome – so we must assume that it is the use of the results that causes issues.

Universities can and do investigate areas of provision where NSS data signals that students are experiencing problems – remember that only one question out of 27 relates to course satisfaction specifically; the others examine key “hygiene” issues like resource availability and course organisation. Good universities – that is to say, most of them – will have early warning of these issues through their own systems of module and course surveys and other student feedback mechanisms.

To take the popular “university as gym” metaphor, you would want to know if customers were saying the rowing machine was broken and one of the lights was out in the changing rooms – it has an impact on satisfaction, but also on retention and quality. And if the NSS does not provide such information, it will be gathered elsewhere.

Consequence

The idea that providers would act in suboptimal ways given student feedback is a curious one, and of all the strangeness in the DfE announcement this is the bit that seems to have been plucked from the air. I am unaware of any credible evidence that providers are making courses easier or less rigorous based on NSS results. There is, to be clear, an argument that NSS results are not a great basis for applicants to make course decisions (the same could be said for salary and employment data) but there is little evidence that survey results have this kind of influence.

And the idea that NSS – a survey filled in by final year students in the spring of their graduating year – somehow drives grade inflation is almost laughable. Again, the first task of the OfS review should be to identify evidence for this criticism. Here’s my attempt to help:

[Chart: the OfS “unexplained firsts and 2:1s” measure plotted against the difference between each provider’s NSS Q27 agree score and its benchmark]

You’re looking at the OfS’s own “unexplained firsts and 2:1s” metric plotted against the difference between each provider’s Q27 agree score and its NSS benchmark. There is no relationship whatsoever between these two variables.
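If you want to repeat that kind of check yourself, it takes only a few lines of Python. The sketch below is illustrative only – the file name and column names (provider_data.csv, q27_agree, q27_benchmark, unexplained_firsts) are placeholders of my own, not the actual OfS field names, so you would need to map them onto whatever extract you download.

```python
# Illustrative sketch: plot an "unexplained firsts and 2:1s" measure against
# each provider's NSS Q27 agree score minus its benchmark.
# File and column names are placeholders, not real OfS field names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("provider_data.csv")  # one row per provider (placeholder file)

# Difference between reported Q27 agreement and the provider's benchmark
df["q27_vs_benchmark"] = df["q27_agree"] - df["q27_benchmark"]

ax = df.plot.scatter(x="q27_vs_benchmark", y="unexplained_firsts", alpha=0.6)
ax.set_xlabel("NSS Q27 agree minus benchmark (percentage points)")
ax.set_ylabel("Unexplained firsts and 2:1s (percentage points)")

# A correlation coefficient near zero is consistent with "no relationship"
print(df["q27_vs_benchmark"].corr(df["unexplained_firsts"]))
plt.show()
```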

Reliability

NSS is a survey with a very high response rate. Around 70 per cent of all final year university students complete it – giving us (for the most part) numbers so large that it can be treated (with caveats) as a population study. Certainly, the response rate is in the same statistical ballpark as the Destinations of Leavers from Higher Education survey (77 per cent), which provides the currently fashionable data on “highly skilled employment”.

There are issues when you drill right down to lower-level subject areas – issues that the rubric for the Unistats dataset elegantly deals with by stepping up a level when looking at small populations. But at most providers, enough students in a large course or subject area will respond to offer a useful split metric. And without an “annual universal system” (actually a very large annual sample) it would not be possible to do this.
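To make that “stepping up” concrete, here is a minimal sketch of the kind of fallback rule the Unistats rubric describes: publish the most detailed figures that have enough respondents behind them, otherwise fall back to a broader level. The threshold and the data shape here are my own illustrative assumptions, not the published rules.

```python
# Illustrative sketch of an aggregation fallback: report a split metric at the
# most granular level that still has a usable number of respondents behind it.
# The threshold (10) and the dictionaries below are assumptions for illustration.
MIN_RESPONSES = 10

def pick_reporting_level(course: dict, subject: dict, provider: dict) -> dict:
    """Return the most granular set of figures with enough responses."""
    for level in (course, subject, provider):
        if level["responses"] >= MIN_RESPONSES:
            return level
    return provider  # last resort: provider-level figures

course = {"level": "course", "responses": 6, "q27_agree": 81.0}
subject = {"level": "subject", "responses": 48, "q27_agree": 84.5}
provider = {"level": "provider", "responses": 2300, "q27_agree": 83.2}

print(pick_reporting_level(course, subject, provider))  # steps up to subject level
```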

An allegations procedure exists to provide a warning of “inappropriate influence” within a subject area or provider. The OfS (in England) will receive feedback on concerns from an NSS allegations panel, investigate credible allegations, and make amendments where needed. Though this is a secretive process (with no publication requirement or feed into regulatory conditions) I have never seen any evidence of data being suppressed or restated after publication – perhaps more openness around these instances would help.

Administration

The last four terms are more interesting to me. One of the strengths of the existing NSS is that it is nationally published to a high, independently audited standard, to the requirements of Official Statistics. Publication makes the data a single source of truth – no-one can claim a satisfaction score higher (or lower) than it actually is, and the information is available to academics, support staff (including SUs), and managers equally. A retreat from this position would be puzzling – the only beneficiaries would be providers with low or falling scores.

Term five opens a huge can of worms – what data does the OfS need to regulate quality effectively? Well, for a while it needed multiple student data collection points, and now apparently it does not. Once it needed a TEF to provide an incentive to improve the quality of provision above a baseline; now… well, we’ll see. Once it definitely and emphatically needed a system of regular inspections of provider quality processes, unlike most other European countries – and I feel like this may be up for debate too.

The NSS’s role in regulation is complex. It is an indicator of student (customer) concern, and has recently played a part in understanding differences in satisfaction across student and course characteristics. It shouldn’t really be used as an absolute cut-off for direct intervention (as in regulatory condition B1), but it should certainly indicate where some actual quality assurance may need to be considered.

The term on standing the test of time is curious – surely the fact that it has survived 15 years with periodic refinements (made not to prevent gaming, for which there is very little evidence, but to improve the usefulness of the data) would suggest there is not a huge problem here.

But there is a huge problem around term seven. The NSS (along with big parts of the HESA data collection) is UK-wide – the three other regulators will have strong views on how the survey changes, and may wish to strike out in another direction. This was a problem as soon as this review was announced – it is about to become a larger one.

9 responses to “The National Student Survey’s radical roots and branches”

  1. I like how providers don’t have to promote the survey. Surely that’s a form of gaming if they don’t like what they hear or where it ranks them in tables. By not promoting the survey, they would avoid hitting the 50% participation threshold for results to be released anyway. ‘Open to gaming’ was one of the things the DfE accused the NSS of!

  2. It is odd that the DfE is asking for yet another review of the NSS. The NSS has been very recently reviewed – it is fit for purpose. The ‘concerns’ expressed are not new and have no solid basis. While a survey cannot be perfect, it’s the best available option. Instead of the NSS asking students to feed back on their experience, maybe the DfE wants to predict what students think – just like they predicted A level results?

  3. Not sure what Pauline means by ‘it’s the best available option’ – far from it. There has never been a robust validity study; it seems reliable, but we do not really know what it is measuring. There are other more intensively researched tools available, such as the NSSE. In terms of fit for purpose, if one of the main aims of the NSS is to help applicants make decisions, then the evidence is that it fails. With respect to quality, the NSSE is a much better fit, as its focus is on outputs whilst the NSS has a focus on inputs.

  4. There is much gaming of the system already – our sister department holds an NSS ‘games night’ with ~£450 of free pizza, drinks and snacks. Bribery and corruption???

  5. It is very tempting to look to the NSSE for an alternative. If the link between self-reported student engagement and positive student outcomes could be demonstrated it would possibly show us useful information on what students do during their studies. The NSSE also has an appealing emphasis on the broader student experience which could supplement our understanding of how students generate the wider benefits of going to University (with associated understanding of differences between groups etc).

    The UK Engagement Survey developed initially by Advance HE/HEA doesn’t appear to have achieved major traction – perhaps this is its moment.

  6. I’d be pretty sure that Ipsos MORI’s own promotion could get most institutions over the threshold of 50% without institutional help (though probably not as many subjects and other units which all have the same threshold as the institutional total). The great unknown is whether that would produce a different result than getting 70-75% with institutional promotion. However, assuming it must have at least some impact, the OfS have basically laid the conditions in which some institutions will be ‘gaming’ by simply doing what they’ve been encouraged to do every year, and potentially others not. Asking around, I don’t know any colleague elsewhere who doesn’t think they’re going to have to promote as normal because it’s too risky not to.

  7. Iain, what is the evidence that it fails to help applicants make decisions? My sense is that it does, as improved NSS results for a subject tend to lead to a rise in applications the following year in my experience.

  8. Hi Karl, the actual evidence is that an increase in NSS score leads to only very marginal increases in applications, and only when a provider shifts by over 30 places in the league tables.

  9. I would disagree with the assumption that promotion of the survey is not a burden on the institution. A lot of time and effort from multiple professional departments and most academic departments goes into our promotion efforts.
