This year’s NSS results are a misleading mess

Last week, as well as the public NSS results, the Office for Students (OfS) published provisional TEF results.


On the latter it set out a series of guidelines aiming to protect the interests of students, including by “seeking to prevent students from being exposed to misleading or inaccurate information about TEF outcomes”.

There’s not, sadly, a comparable set of guidelines surrounding the National Student Survey (NSS).

Even if you ignore the fact that this year we are being robbed of discovering the percentage of students who feel part of a community, or what they think of their timetabling, the lack of guidance – coupled with publication dangerously close to Clearing – means that this year’s circus of comms from providers about their results feels especially partial, and often close to meaningless.

I’m particularly struck by the own-goal of a narrative that adds up to “hey, everything’s getting better” when the money is getting much, much worse. Who throws more money at a sector with apparently dramatically improving satisfaction scores?

Anyway, here are some highlights from the messaging I’ve noticed since Thursday morning.

Thumbs up

A significant number of providers seem to have taken the new “positivity score” and compared it to the old positivity score in their press releases, without telling editors or their readers that the question format has changed significantly.

So for example right now I’m looking at a press release that proclaims that “compared to the university’s results last year, there was a 10.3% rise in students’ rating of academic support”.

Even if you ignore that the provider actually means an increase of 10.3 percentage points (an important difference), and ignore that last year there were three questions making up this category where this year there are only two, last year this university got a 13 per cent negativity score on this category – and this year it’s 19 per cent.

That’s partly because “neither agree nor disagree” isn’t an option this year – but you wouldn’t know that from a whole clutch of providers’ press releases.
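To make the trap concrete, here’s a minimal sketch in Python – the figures are invented for illustration, not taken from any provider’s actual results – showing why a “10.3% rise” claim is ambiguous, and why positivity and negativity scores can climb at the same time once the neutral option disappears.

```python
# Hypothetical figures only - illustrating the "percentage vs percentage point" trap.
last_year = 72.0   # per cent positive, 2022 format (agree/disagree, neutral option available)
this_year = 82.3   # per cent positive, 2023 format (direct question, no neutral option)

point_change = this_year - last_year                          # 10.3 percentage points
relative_change = 100 * (this_year - last_year) / last_year   # roughly a 14.3% rise

print(f"{point_change:.1f} percentage point increase")
print(f"{relative_change:.1f}% relative increase")

# With "neither agree nor disagree" gone, every respondent now falls on one side
# or the other - so a provider's positivity score and its negativity score can
# both go up at once, exactly as in the 13% -> 19% negativity example above.
```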

Full marks

The “choose your own adventure” nature of the presentation of the results has long been an issue, but this year’s attempts to pick out the set of comparator providers, subject areas or questions that shows a given university in a good light feel especially preposterous.

One especially enthusiastic university headlines with “100% positivity: National Student Survey reveals praise” but has to pick out Events and Entertainment Management at its London campus, and Artist Designer Maker: Glass and Ceramics to get there on teaching – omitting to mention the 35.4 per cent score on Sciences (non specific) or the 67.3 per cent score on Counselling, Psychotherapy and Occupational Therapy.

One university says its “student positivity score of 85.3%” means that the university is 4th in the UK when compared to other institutions included in the Guardian University Guide table. You have to read down to the small print to get to the asterisked note that the score is based on questions 1-24, omitting the questions on the SU, wellbeing and free speech.
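To show how much the choice of question set can matter, here’s a small sketch in Python with entirely invented scores for two hypothetical providers, assuming (per that press release’s own small print) 24 core questions plus the three omitted items – the headline figure, and the ordering, shifts depending on which subset you average.

```python
# Invented scores for two hypothetical providers - not real NSS data.
providers = {
    "Provider A": {"core_q1_24": 85.3, "su_wellbeing_free_speech": 61.0},
    "Provider B": {"core_q1_24": 84.1, "su_wellbeing_free_speech": 79.5},
}

for name, s in providers.items():
    # Weight by the assumed number of questions in each group (24 core, 3 omitted).
    all_questions = (s["core_q1_24"] * 24 + s["su_wellbeing_free_speech"] * 3) / 27
    print(f"{name}: Q1-24 average = {s['core_q1_24']:.1f}, all-question average = {all_questions:.1f}")

# Provider A comes out ahead if you only average questions 1-24, but the order
# flips once the SU, wellbeing and free speech questions are included - which is
# why the asterisked small print about which questions were averaged matters.
```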

Another says that it’s “4th in the UK for overall student positivity and first for assessment and feedback for full-time, first-degree students”, omitting to mention that it’s about fiftieth once you’re looking at “other undergraduate”.

You think that I’m strong

The mental wellbeing question is just as misleading as we predicted. For example, I’m looking right now at a university press release that says “NSS scores 2023 show that [we are] in the top 5 London universities for Mental Wellbeing Services … among our students”.

Another says that it has “strong results” in the “mental wellbeing category”. Another says that amongst the Russell Group, it received the highest positive responses to 27 questions that asked students to rate their “academic experience, mental wellbeing, resources and support”.

The problem is that the question mentions neither students’ wellbeing nor the quality of the services designed to improve it – it measures how well services are communicated to students. But you’d never know that from a huge swathe of the releases in the wild.

Breaking ranks

When the Office for Students ran its formal consultation on changes to the NSS, it noted that there were concerns about the use of the old summative question 27 by the media in England when reporting on the survey’s outcomes:

The current question 27 on overall satisfaction is the most commonly used metric in league tables, and its removal might make the results less susceptible to ranking.

Oh really? It took THE less than five hours to wrangle the spreadsheets into a results ranking, which it said shows St Andrews, West London and LCA London as top.

Takes two

This year to address what it called the “growing salience” (amongst politicians and the press) of the issue of freedom of expression on campus, OfS ostentatiously introduced a question asking respondents how “free” they felt during their studies “to express their ideas, opinions, and beliefs”.

By the time we got the results, OfS appeared to be rather less proud of the question – it omitted to mention the 86 per cent positivity rating from students in its press note, highlighting lower scores for “subject specific resources”, intellectually stimulating courses and communication on wellbeing services instead. You’d certainly never know that it was the third highest-scoring question on the survey.

In the media former King’s VP Jonathan Grant said that he was “not surprised” by results showing a small minority of students having free speech concerns, arguing that the result showed that “the debate on freedom of speech is overblown and not backed up by the evidence”.

Not to be outdone, architect of the Free Speech Bill and former DfE SpAd Iain Mansfield called that “an astonishingly bad take”, framing the 14 per cent choosing negative responses as akin to staff reporting harassment:

It is those with minority views who will be cancelled. The Brexiteers, the gender critical feminists, those who oppose ‘decolonisation’. And, yes, those on the left who have radical views on Palestine, or Marxism. Those crowing about ‘only’ 14% of students saying they are not free to speak are using the same false logic as someone who polls an 80% white organisation and boasts that ‘only’ 14% of respondents report racism.

Of course the ultimate “bad take” is to frame the negative result as a type of bullying and harassment rather than a lack of confidence. Quantitatively, we were 1 per cent off on our polling, so I’m confident that our qualitative work – demonstrating that the big issue is one of confidence – is right too.

Forced off the fence

In the review of the survey, OfS opted to replace statements that students agreed with, felt ambivalent about or disagreed with, with direct questions whose response options were tailored to each question.

That means, for example, that where in 2022 students were asked on a five-point scale whether they agreed that “Staff are good at explaining things”, this year they were invited to respond on a four-point scale to “How good are teaching staff at explaining things”, with the top two answers each time melded into a positivity score.

The problem is that while that reframing worked for some questions, it didn’t for others. So last year’s “My course has provided me with opportunities to explore ideas or concepts in depth” became “To what extent have you had the chance to explore ideas and concepts in depth”, with the four new choices being “large extent”, “some extent”, “small extent” or “not at all”.

That not only renders the results impossible to compare with last year, it means that Qs 5, 7, 8, 22 and 23 aren’t even meaningfully comparable with other questions. You’d never know that from plenty of provider press pieces.
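If it helps to see the mechanics, here’s a minimal sketch in Python using made-up response counts – the function and the example items are mine, not OfS’s published methodology – of how a “top two answers” positivity score gets built, and why melding “some extent” in with “large extent” isn’t the same exercise as melding “mostly agree” in with “definitely agree”.

```python
# Made-up response counts - illustrating the "top two answers" positivity construction.
def positivity(counts, positive_options):
    """Percentage of respondents who picked one of the designated positive options."""
    total = sum(counts.values())
    return 100 * sum(counts[o] for o in positive_options) / total

# 2022-style item: five-point agreement scale with a neutral middle option.
agreement_2022 = {"definitely agree": 40, "mostly agree": 30, "neither": 15,
                  "mostly disagree": 10, "definitely disagree": 5}

# 2023-style item: four-point "extent" scale with nowhere neutral to sit.
extent_2023 = {"large extent": 45, "some extent": 35, "small extent": 15, "not at all": 5}

print(positivity(agreement_2022, ["definitely agree", "mostly agree"]))  # 70.0
print(positivity(extent_2023, ["large extent", "some extent"]))          # 80.0

# "Some extent" is a far weaker endorsement than "mostly agree", so the two
# top-two scores aren't measuring the same thing - which is why Qs 5, 7, 8, 22
# and 23 can't fairly be set against last year, or against the other questions.
```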

Don’t mention the postgrads

Over the past decade, the percentage of full-time students studying at postgraduate level has grown from about 17 per cent to one in four. But despite OfS Director of External Relations Conor Ryan taking to Wonkhe in 2018 to note that “their voice is not well represented when it comes to strategic thinking in HE or in wider policy development”, and despite OfS’ two pilots of a PG NSS – one in 2019 and one in 2022 – news on a PG NSS has gone decidedly quiet.

The problem is that you’d never know that the survey is UG only from a large selection of the press releases – the Russell Group omits to mention it, Universities UK keeps shtum, and a whole raft of individual providers like this one, this one, this one, this one and this one (I could go on) fail to mention it too.

Academic interests

The failure of OfS to address the wording of the question on SUs is astonishing – research from 2017 showed that asking students about their SU’s effectiveness in representing their “academic interests” was often interpreted as whether the SU was interested in the same academic subjects as them.

But worse is that the question continues to be put to everyone. A whopping 17 per cent of respondents ticked “This does not apply to me”, which would be fine were it not for the fact that, by my reckoning, at least half of the top twenty providers on the SU question don’t actually have an SU at all – suggesting that students are answering anyway. We manage filtering over placement questions – how hard can it be to do the same for SUs, given that the regulator is about to… regulate them directly over free speech?

Expectation management

As I often say, one way of looking at the NSS is that it’s basically a sector-wide consensus statement on what makes a good (academic) student experience – like a bill of academic rights.

But if that’s the case, shouldn’t we publish that to students at the start of their course as a set of expectations they should have, rather than just ask students about it at the end? Wouldn’t it be a great international recruitment tool for UK HE? Wouldn’t that help them understand what they can raise during their course? Shouldn’t it link more explicitly to OfS’ “B” Quality and Standards definitions and/or the UK Quality Code?

And shouldn’t we also take some steps to ask students which of the elements are more, less or not at all important to them so we can capture and understand the diversity of students and providers?

Perhaps all of this is inevitable, and the real learning is from the results internally. That might be true – but as we noted here, it’s often the case that providers leap to action planning without interrogating the scores first. That’s hard enough when there’s only one qual question and we never see even a sector-level analysis of answers to it – and timings-wise, it’s tricky when results emerge in the first week of July, let alone the second week of August.

There is real learning to be had from the NSS – but only if you take care. As such, my top tip remains this – whoever you are, don’t do an NSS action plan without first involving students in the question of why scores are the way they are. Skipping the hypothesis step, or getting it wrong, is almost worse than not acting.

***Since publication of this blog, OfS has drawn my attention to its Data Quality report, on the basis that it “included guidance saying that the 2023 results could not be straightforwardly compared with those from previous years”. It does indeed say that – but it doesn’t, unless I’m missing something, offer any guidance on how the results should be presented to the public – which, as I say above, is in sharp contrast to the guidance issued over the TEF.

5 responses to “This year’s NSS results are a misleading mess”

  1. & at least one provider has leant on their partners’ strong performance by publishing that they are 1st in London and the South East (using their registered population) – some of their partners aren’t even in London and the South East.

    Whether you use the average of all questions, or the 26 questions in common, can we all agree that for it to be fair, all institutions with degree awarding powers should be included based on their taught population? If you do that, it doesn’t really matter which question set you use – the top 10 doesn’t move around all that much.

    Theme and question level sophistry (particularly YOY comparisons as pointed out) is a bigger concern IMO.

  2. The OfS holds providers accountable for all registered students, and that population is used in regulation and in published stats for providers. None of these homemade rankings are comparable due to the multiple filters which might be used (specialist or not, mode, level, region). The best we can hope for is for providers to be clear about their population in communications, so they can be compared to official statistics by keen, sceptical students.

    1. Poor take. The OfS does hold providers accountable for all students, but they allow you to view/write about them separately for a reason. If it’s being used to market an institution’s taught courses, then it should be restricted to their taught population – fine if one says ‘institution X and our partners’ ranked first, but I don’t think that was the case.

  3. Great article Jim. I’ve long thought, even before this year’s unholy mess that is the NSS, that ‘NSS action plans’ (usually knee-jerk) are a daft idea. We should have thoughtful, data-informed, student-informed (co-created ideally) student experience plans (as part of broader holistic institutional strategic plans), covering the whole student lifecycle, with measurable short, medium and long term actions, kept under constant review, and updated as new data gives us insights as to progress (or not as the case may be). But that’s generally not what happens… So often the NSS (and thus the student experience) is siloed off as belonging to a specific group of staff to ‘solve the issues’, instead of being seen as everyone’s responsibility…

  4. Rebecca, you are spot on. Identifying and responding to NSS issues is a collective endeavour, not least trying to understand what lies behind the responses to the survey. Sometimes the only change that might be required is one of communication (at least initially) so that students understand an assessment better and have a clearer expectation of when feedback will be received. But much of this requires more time and effort than is available before the next survey rolls around.
