Where is the debate about the future of the National Student Survey?

Camille Kandiko Howson evaluates progress on efforts to review the UK's national survey of students - and finds an absence of meaningful debate.

Dr Camille B. Kandiko Howson is Associate Professor of Education at Imperial College London

I looked under the sofa, behind the curtains, under the rug. Nothing. Where is the debate about the future of the National Student Survey?

The government’s call for a “root and branch” review of the NSS a year ago seems to have resulted in a jumble. And while the review may move away from the word “satisfaction” and from an overall judgement on a course, the end product may well please no one.

It has been a decade since the decision to treble tuition fees and place students “at the heart of the system”. The rationale was that prospective students, armed with more consumer information, would drive competition and raise quality, and the main vehicle the government offered for this was the National Student Survey.

But as it stands, it offers little information beyond what it did ten years ago, or in the 1990s when the original survey was developed. There is a new(ish) regulatory approach through the Office for Students. But what is the role of students, student voice and student engagement – both in the NSS and the wider regulatory regime?

Transformational review?

The OfS-led review has been carried out in two phases, with phase one complete. Phase two has involved workshops, evidence sessions and consultations – based on five narrow areas. Only one of the five actions touches on the purpose of the survey, addressing the content of what it covers. OfS says the exercise will:

“Review current questions to ensure they remain fit for purpose and stand the test of time. This will include the removal of the term ‘satisfaction’ from any summative question or aggregate score to replace question 27.”

I managed to nab one of the spots on a roundtable hosted by the Office for Students (not by invitation, of course, but by the happenstance of procrastinating on Twitter). The scope of what was under discussion sounded like tinkering with the survey without an overarching idea of what it is for, and what the resulting data could be used for.

There was debate about different topic areas – like whether and how questions on mental health and well-being could be included, and how different institutions or sub-groups of students are marginalised by the survey. And as with the last review of the NSS, a key theme was student engagement. But what this meant was, well, as clear as what is happening with this “root and branch” review.

Aligned to regulation?

Selecting what to measure is a highly political undertaking, based on value judgements about the purpose of higher education. One could argue these are embedded in the regulatory objectives of the OfS, and thus it would make sense to align those objectives with the primary way for students to give feedback on the quality of their course. However, what seems to be underway is a pick and mix review of topics, sampling, data presentation and policing.

What the review, and any sector discussion, is missing is debate about the overall direction and purpose of the survey, the link with the quality assurance approach and the role of a highly bureaucratic survey in the era of a light-touch, data-driven regulator.

Is the data from the NSS going to form the basis of the revamped Teaching Excellence and Student Outcomes Framework (TEF), as the first iteration did? Will the student voice aspect be increasingly marginalised? Or removed altogether?

Is the data from the survey primarily meant to inform student choice, as was its original remit? Or for institutional enhancement, as the Pearce Review suggested for the TEF? Or is it now part of a pact between students and the regulator for action against institutions?

Or all of those?

The role of students?

Without an overarching logic, there can be no rationale for decisions about what to include or how to frame those questions. For example, should institutions be responsible for students’ mental health? Physical health? What about sustainability and climate change? Value for money?

More broadly, what is the role of students, notions of student engagement, or the representative role of students’ unions? Are current political topics up for grabs – freedom of speech, wokeness, decolonisation? Public roundtables and sessions with various stakeholder groups will elicit plenty of feedback, but will also produce an incoherent survey, muddled data and a lack of responsibility.

Under OfS, there has been a move away from student engagement in the quality assurance regime, with the sector fighting back to keep student engagement alive in the Quality Code. And the purpose, and usefulness, of the OfS Student Panel remain to be seen. However, there was much interest in including more questions on student engagement in the NSS.

Satisfaction, engagement or something else?

What seems forgotten is that the UK has its own national survey of student engagement, the UK Engagement Survey (UKES), administered by Advance HE. There are decades of research across multiple countries on the conceptual framework, the validity of survey instruments, and case studies of using engagement data for enhancement.

There is a science behind developing surveys. The consumer theory basis of satisfaction surveys places the student in the role of customer, meaning the responsibilities and contribution of the student as learner are not represented. The satisfaction basis of the NSS is premised on the relationship between students’ expectations and their subsequent experiences.

Engagement is about how students participate in educationally purposeful activities, and how the institution supports students and offers an environment for this to happen. Engagement-based surveys have found more variation within institutions than across them, which is why they do not publish institution-level rankings.

The NSS was specifically designed to provide data to compare courses across different institutions. But as the survey has evolved, its remit has expanded, with some questions more relevant to central services and others pertinent at an institutional level (or beyond, in the case of students’ unions).

Despite the integration of a few engagement-based questions into the NSS, the two approaches to surveying the student experience have largely been seen in opposition. This (open-access) paper, written with my colleague Dr Frederico Matos, delves into these different approaches in detail, and highlights the need for a survey, and the resulting data, to have an integrated aim if it is to be valid and fit for purpose.

Student voice about what?

On one level there’s a debate about whether the survey should look at the student academic experience (as it was narrowed towards in the middle of the last decade) or the wider student experience – if you are able to meaningfully separate the two, that is.

On another you could interrogate whether the survey should explore students’ views on quality, or whether it could ask about their perceptions of learning gain.

There’s another debate about the use of the Likert scale, the use of both “doesn’t apply to me” and “neither agree nor disagree” options, and reporting which only ever considers active satisfaction rather than active dissatisfaction (a course where 40 per cent of students actively disagree shows the same headline agreement score as one where they all sit on the fence).

There are important questions about qualitative comments, how “optional” any optional banks should be, and whether all questions need to be asked to all students (particularly the ones that relate to the institution as a whole rather than the course).

But these are not debates we’re having – and if OfS is having them, we’re not invited.

Above all, students need to know what their role is – customers reporting on their very expensive product? Learners evaluating the effort they put into their own experience? Informers reporting on bad institutional behaviour?

Continuing in a pick and mix fashion risks losing the benefits of the NSS (which some argue passionately for) and leaving the sector with a Franken-survey – one that would “satisfy” no one.

6 responses to “Where is the debate about the future of the National Student Survey?”

  1. Great article, thank you Camille.

    Your point about engagement, and raising it here, is an important one. I’ve run the NSS at my institution for more than 10 years. Throughout my time, it has been extremely clear that students struggle to reflect on their course across two, three or four years, often focussing on one or two experiences at a modular level rather than feeding back holistically at programme level.

    The NSS values all students’ responses equally, which I feel makes it a rather blunt tool. For example, we value feedback from students who have barely attended their course equally with that from students who have never missed a class. Their positions to give an accurate view of their experiences are therefore significantly different.

    NSS needs a radical rethink. Student voice is extremely important, and the language the survey uses needs to reflect this. There is a very consumerist approach to the question set which bypasses pedagogy in favour of value for money.

    One route out of this is to rethink what we want the data for. Is it for programme enhancement or a marketing tool? Using the current data for the TEF versus for the consumer experience (Discover Uni) involves opposing positions and requires different questions.

  2. I think Camille’s article is spot on; we now don’t know who the survey is for, what it is for, or indeed what it is actually measuring.

  3. And as for the Postgraduate version (recently back from the dead), all the above, but much more so…

  4. I think there is a good and current reason for the existence of the NSS that is working for many providers. It may not be the one it was designed for, and it may not be the one that a new NSS will eventually be aimed at.

    The NSS does make many providers think about and improve the student experience, and deal with the issues that students talk about. For me it’s one of the few examples of league tables actually motivating providers to improve what they do. Publishing the NSS data so that third parties can rank and dissect it was never part of the original NSS plan, but league tables doing just that made many providers sit up and listen to student comments in a way that just wouldn’t have happened without the desire to move up the slippery pole of our domestic rankings.

    The mere existence of a survey that is open to large numbers of students and is mostly out of the influence of universities will carry on having this effect, regardless of how much we angst over the detail or complain about the statistical significance of the data. So let’s embrace and celebrate the fact that it has done, and probably will continue to do, a lot of good for millions of students.

  5. ‘The course is well organised and running smoothly’ and ‘I have been able to contact staff when I needed to’ are the two key questions for a reader of these reports for courses. Teaching questions are the next most important area. If a course is getting less than 70% for the well organised and running smoothly question, even with a small sample size, there is a red flag issue that as a potential student you want to ask the staff about at an open day. Likewise with the contacting staff question. While I agree that variations in sample sizes, and sometimes rather small ones, can skew results, either of these two questions receiving a low score is cause for alarm and warrants an explanation from the university. Filling in the survey yearly would be far more beneficial too, as reflecting back over 3-4 years is hard. Having completed the survey in the past 10 years as a mature student, I found that while overall satisfaction related to the course in its entirety, many other questions related to the final year.

  6. Great piece Camille! The purpose of the NSS has always been contested territory, even when I was involved in its pilot way back in 2005. It has always been an extremely blunt survey instrument, and the review would be an opportunity to, if not get it right, then at least get it better with the value of hindsight. However, the NSS is not without merit if triangulated with other sources, for instance course or module level data. The latter offer more granular data which can help contextualise NSS outcomes.
