How might NSS results be published in 2023 and beyond?

Here's what we will likely see on NSS results day this year.

David Kernohan is Deputy Editor of Wonkhe

The Office for Students’ track record on the National Student Survey isn’t exactly stellar – but the changes proposed to publication plans for 2023 onwards are straightforward and uncontroversial.

You’ll recall that 2023 will see an entirely revamped survey instrument – one that replaces the old five-point agree/disagree Likert scales with four-response direct questions, takes the opportunity to add some questions (mental health, freedom of expression), and loses others (on timetabling, feeling part of a community, and – in England – the overall question).

Given these changes, you’ll be unsurprised to learn that the old stalwart “percentage agree” indication that you see in most coverage has morphed into a “positivity” measure – showing the proportion of respondents who indicated a preference who chose the two most positive responses. If you don’t like this, don’t worry – the full results are also available, so you can roll your own composite measure (I suspect I’ll be doing a “negativity” measure again).
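To make the arithmetic concrete, here’s a minimal sketch of how a “positivity” measure (and its mirror) would be computed from a four-option direct question – the response counts are invented for illustration, not real NSS data:

```python
# Sketch of "positivity" and "negativity" composite measures for a
# four-option direct question. Counts are hypothetical, not real NSS data.

def positivity(counts):
    """Share of respondents who indicated a preference that chose the
    two most positive of the four options.
    counts: response counts ordered least to most positive."""
    return sum(counts[-2:]) / sum(counts)

def negativity(counts):
    """Mirror measure: share choosing the two most negative options."""
    return sum(counts[:2]) / sum(counts)

responses = [2, 10, 5, 3]  # hypothetical counts, least to most positive
print(f"positivity: {positivity(responses):.0%}")  # 8/20  -> 40%
print(f"negativity: {negativity(responses):.0%}")  # 12/20 -> 60%
```

Note the two measures aren’t simple complements of one another on a four-point scale – there is no neutral midpoint, so every preference counts toward one side or the other.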

And reflecting changes elsewhere in the OfS data portfolio, we will get new splits for mode (now including apprenticeships alongside full and part-time study) and level (now separating out students studying an undergraduate course with postgraduate components). The traditional subject and provider slices (down to CAH level 3) will remain, but at a sector-wide level there have been some tweaks to what personal characteristics are shown – again bringing NSS in line with other OfS data publications – the old “25 and above” age category, for instance, will split into “25-29” and “30 and above”.

Fans of benchmarks will be delighted to see them extended to every level of aggregation, and the inclusion of an indication of the contribution a provider makes to its own benchmark. There are some changes to the way these benchmarks will be calculated (in line with the results of the data indicators consultation). Level of study is added to the list of factors used in calculation; for ethnicity, the “unknown” category is merged into “White” and a “non-UK domiciled” category is added; and for sex, the “other” category is merged into “female” – though these are only benchmarking changes, not presentational ones.

Elsewhere there’s a hint of wait and see – the plan is eventually to openly publish results from the bank of healthcare questions, but this is pending an analysis to be conducted in 2023. There’s still no intention to openly publish open-text responses, to reflect provider departmental structures (this information is actually collected by HESA but I’ve never seen it published!), or to publish low response rate data (there’s some super-nerdy reasoning for this) – all these remain in the provider portal for internal use.

One familiar feature that you may miss this year is the grouped responses for scales/themes – given the changes to the survey and the need to test assumptions, these won’t feature in the initial 2023 data but will instead turn up in the autumn. Given that these scales are used in TEF, this will be a second NSS day to mark in your calendar (though the pending TEF will only use already-existing NSS data).

And if you’ve strong views on the use of Welsh language within NSS publications, you are encouraged to respond on this issue.

The deadline for responses is 26 May.

5 responses to “How might NSS results be published in 2023 and beyond?”

  1. OfS now lapsing into self-parody: “We do not anticipate consulting again on
    changes to the thresholds, provided that they are made using the general approach. This is
    because any such changes would be drawn from the application of statistical techniques, and
    we consider that only a small minority of consultation respondents would be able to respond
    authoritatively on these matters”

  2. Given the changes to the NSS I think the OfS is right (note possible author bias) to focus on the positivity score. With a five point scale looking at negativity was important, but now I think it creates a risk of a negative perception surrounding performance that could be to the detriment of the sector, especially for international students. We are better off focussing on degrees of good rather than creating negativity.

    I think the most controversial aspect of this is the inclusion of student demographics in the benchmarks although I am sure the OfS views this as adequately covered in the earlier B3 consultations as it is not discussed further here. Are we really happy that disabled students perceive themselves to have a different (possibly worse) experience?

  3. The technical notes also matter, as they contain some important changes, particularly re the thresholds for releasing the data – including via the provider portal for internal use.

    Under the proposal, instances with almost unanimous results in one response category would have the c100% (and reciprocal c0%) values replaced by a text code, which would cause problems for anyone calculating either a positivity or negativity score.

    Furthermore, the text in paragraph 10 of the technical notes states that results would be published “only if the response rate is less than 100 per cent or the responses are not unanimous”, which would have even wider implications.

    For example, if an institution received 20 responses from all 20 NSS-eligible students on a particular course, and these 20 responses were distributed across all of the four response options (eg 2 / 10 / 5 / 3), then under the proposed changes the results would be suppressed for that question because of the 100% response rate. And not just suppressed in the public-facing data, but also in the data provided for internal use via the provider portal – paragraph 13 states that explicitly.

    1. Thanks Em. I’m the senior press officer at the OfS, and I’ve been speaking with policy colleagues as we thought it would be helpful to provide some clarification on this. This suppression would only apply in cases where both the response rate is 100 per cent and the responses are unanimous, or close to unanimous. This is so that respondent confidentiality is protected – we would avoid publishing data which showed that everyone responded in the same way.

      In the example given, we would publish the responses: the response rate is 100 per cent, but the responses include a mix of positive and negative options. The responses would be suppressed if there was a 100 per cent response rate and all, or almost all, were either negative or positive. For instance, if the response rate was 100 per cent and the results were distributed across the four response options as 0 / 0 / 11 / 9, this would mean that 20 responses were positive and none were negative. To protect the confidentiality of the 20 respondents who responded positively, the result would indicate DPH (Data Protection, High) to show that the results were suppressed, but they were close to 100 per cent positive.

      In the consultation response and subsequent publication on how we will publish NSS results, we will include clear wording on how data will be suppressed.

  4. Tom: This is beyond ridiculous. Providers will just know that ‘high suppressed’ means 100% or close to, and ‘low suppressed’ means 0% or close to. Meaning no confidentiality is protected (you simply know that everyone in that class gave you a good or bad score) and providers will just change the text to 100%/0% scores for the purpose of data reporting. What is achieved by this, other than annoying people?
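The suppression behaviour debated in this thread can be sketched roughly as follows – a hedged illustration only: the near-unanimity threshold (95 per cent here) and the low-end “DPL” code are assumptions for the sketch, not figures the OfS has published:

```python
# Sketch of the suppression rule as clarified in the reply above:
# suppress only when the response rate is 100% AND responses are
# unanimous or near-unanimous. The 95% threshold and the "DPL" code
# are illustrative assumptions, not published OfS specifications.

def suppress_code(responses, eligible, threshold=0.95):
    """Return a suppression code, or None if results can be published.
    responses: counts per option, least to most positive (4 options).
    eligible:  number of NSS-eligible students."""
    total = sum(responses)
    if total < eligible:            # response rate below 100%: publish
        return None
    positive = sum(responses[-2:]) / total
    if positive >= threshold:       # (almost) unanimously positive
        return "DPH"                # Data Protection, High
    if positive <= 1 - threshold:   # (almost) unanimously negative
        return "DPL"                # hypothetical low-end counterpart
    return None

print(suppress_code([2, 10, 5, 3], 20))  # mixed responses -> None
print(suppress_code([0, 0, 11, 9], 20))  # all positive    -> DPH
```

Under this reading, the 2 / 10 / 5 / 3 example from comment 3 would be published despite the 100 per cent response rate, while the 0 / 0 / 11 / 9 case would be replaced by a code – which is exactly the information-leakage point comment 4 raises.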
