I looked under the sofa, behind the curtains, under the rug. Nothing. Where is the debate about the future of the National Student Survey?
The government's call for a “root and branch” review of the NSS a year ago seems to have resulted in a jumble. And while the review moves away from the word “satisfaction” and from an overall judgement on a course, the end product may well please no one.
It has been a decade since the decision to treble tuition fees and place students “at the heart of the system”. The rationale was that prospective students, armed with better consumer information, would drive competition and raise quality – and the main vehicle the government offered for that was the National Student Survey.
But as it stands, it offers little new information from 10 years ago, or the 1990s when the original survey was developed. There is a new(ish) regulatory approach through the Office for Students. But what is the role of students, student voice and student engagement – both in the NSS and the wider regulatory regime?
The OfS-led review has been carried out in two phases, with phase one now complete. Phase two has involved workshops, evidence sessions and consultations, based on five narrow areas. Only one of the five actions touches on the purpose of the survey – addressing the content of what the survey covers. OfS says the exercise will:
“Review current questions to ensure they remain fit for purpose and stand the test of time. This will include the removal of the term ‘satisfaction’ from any summative question or aggregate score to replace question 27.”
I managed to nab one of the spots on a roundtable hosted by the Office for Students (not by invitation of course, but happenstance of procrastinating on Twitter). The scope of what was under discussion sounded like tinkering with the survey without an overarching idea of what it is for, and what the resulting data could be used for.
There was debate about different topic areas – like whether and how questions on mental health and well-being could be included, and how different institutions or sub-groups of students are marginalised by the survey. And as with the last review of the NSS, a key theme was student engagement. But what this meant was, well, as clear as what is happening with this “root and branch” review.
Aligned to regulation?
Selecting what to measure is a highly political undertaking, based on value judgements about the purpose of higher education. One could argue these are embedded in the regulatory objectives of the OfS, and thus it would make sense to align them with the primary way for students to give feedback on the quality of their course. However, what seems underway is a pick and mix review of topics, sampling, data presentation and policing.
What the review, and any sector discussion, is missing is debate about the overall direction and purpose of the survey, the link with the quality assurance approach and the role of a highly bureaucratic survey in the era of a light-touch, data-driven regulator.
Is the data from the NSS going to form the basis of the revamped Teaching Excellence and Student Outcomes Framework (TEF), as the first iteration did? Will the student voice aspect be increasingly marginalised? Or removed altogether?
Is the data from the survey primarily meant to inform student choice, as was the original remit of the survey? Or for institutional enhancement, as the Pearce Review suggested for the TEF? Or is it now part of a pact between students and the regulator for action against institutions?
Or all of those?
The role of students?
Without an overarching logic, there can be no rationale for decisions about what to include or how to frame those questions. For example, should institutions be responsible for students’ mental health? Physical health? What about sustainability and climate change? Value for money?
More broadly, what is the role of students, notions of student engagement, or the representative role of students’ unions? Are current political topics up for grabs – freedom of speech, wokeness, decolonisation? Public roundtables and sessions with various stakeholder groups will elicit plenty of feedback, but also an incoherent survey, muddled data and a lack of responsibility.
Under OfS, there has been a move away from student engagement in the quality assurance regime, with the sector fighting back to keep student engagement alive in the Quality Code. And the purpose, and usefulness, of the OfS Student Panel remains to be seen. However, there was much interest at the roundtable in including more questions on student engagement in the NSS.
Satisfaction, engagement or something else?
What seems forgotten is that the UK already has a national student engagement survey – the UK Engagement Survey (UKES), administered by Advance HE. There are decades of research across multiple countries on the conceptual framework, the validity of survey instruments, and case studies of using engagement data for enhancement.
There is a science behind developing surveys. The consumer theory basis of satisfaction surveys places the student in the role of customer, which means the responsibilities and contribution of the student as learner are not represented. The satisfaction basis of the NSS is premised on the relationship between students’ expectations and their subsequent experiences.
Engagement is about how students participate in educationally purposeful activities, and how the institution supports students and offers an environment for this to happen. Engagement-based surveys have found more variation within institutions than across them – hence such surveys do not publish institution-level rankings.
The NSS was specifically designed to provide data to compare courses across different institutions. But as the survey has evolved, the remit of the survey has expanded, with some questions more relevant to central services, others pertinent at an institutional level (or beyond in the case of students’ unions).
Despite the integration of a few engagement-based questions into the NSS, the two student experience survey approaches have largely been seen in opposition. This (open-access) paper, written with my colleague Dr Frederico Matos, delves into these different approaches in detail, and highlights the need for a survey – and the resulting data – to have an integrated aim if it is to be valid and fit for purpose.
Student voice about what?
On one level there’s a debate about whether the survey should look at the student academic experience (as it was narrowed towards in the middle of the last decade) or the wider student experience – if you are able to meaningfully separate the two, that is.
On another you could interrogate whether the survey should explore students’ views on quality, or whether it could ask about their perceptions of learning gain.
There’s another debate about the use of the Likert scale, about offering both “doesn’t apply to me” and “neither agree nor disagree” options, and about reporting that only ever considers active satisfaction rather than active dissatisfaction.
There are important questions about qualitative comments, how “optional” any optional banks should be, and whether all questions need to be asked to all students (particularly the ones that relate to the institution as a whole rather than the course).
But these are not debates we’re having – and if OfS is having them, we’re not invited.
Above all, students need to know what their role is – customers reporting on a very expensive product? Learners evaluating their own effort in their student experience? Informers reporting on bad institutional behaviour?
Continuing in a pick and mix fashion risks losing the benefits of the NSS (which some argue passionately for) and leaving the sector with a Franken-survey – one that would “satisfy” no one.