It is very strange to set up a review of a longstanding element of the higher education system and to explicitly tell those conducting it what they are expected to find. Yet the government’s announcement of the review of the National Student Survey does exactly that, spelling out in fewer than 500 words precisely what is expected.
More disturbingly, the evidence base for these expected findings ranges from the deeply contentious to the downright non-existent. For anyone committed to evidence-informed policy making, it is a profoundly unsettling document.
Standards are slipping
The first blithely asserted bombshell is:
Since its inception in 2005, the NSS has exerted a downwards pressure on standards within our higher education system.
If we were to take this assertion seriously we’d be asking what on earth universities have been doing to allow educational quality to be corroded for 15 years. In reality, it is very difficult to imagine how providing universities with information about students’ experiences of studying has forced standards down.
All we are offered in support of this is the assertion:
There is valid concern from some in the sector that good scores can more easily be achieved through dumbing down and spoon-feeding students, rather than pursuing high standards and embedding the subject knowledge and intellectual skills needed to succeed in the modern workplace.
This is a rhetorical device that will be familiar to anyone who has read government documents relating to the Teaching Excellence Framework: an unidentified group is cited who just happen to agree with the government’s position, and their concerns are thereby validated.
In essence, the case seems to be that universities face a stark choice between maintaining high standards and seeking student satisfaction. Large-scale reviews of the NSS – such as this ten-year study or this large-scale analysis – simply do not support this assertion. Student satisfaction with courses most strongly relates to the quality of the teaching and the organisation of the programme.
For the avoidance of doubt, I should point out that, despite the government’s assertion, poor teaching and disorganised degree programmes do not provide a secret path to the maintenance of academic standards.
The second shock is the sudden claim that the NSS was never intended as an instrument to inform students’ choice of degree programmes. We must all have imagined the resources and effort poured into Unistats and Discover Uni, because apparently the NSS was only ever intended to be “a helpful tool for providers and regulators.”
This revisionism informs the request for the review to consider whether NSS data should be made public. This is amazing (and not in a good way) on two levels. First, it is deeply unethical to ask students for their views of the quality of their degree programmes but not to make those views public. Second, it would appear to go against all principles of data protection that those who are asked to comment on the quality of a degree programme are not allowed to know the outcomes based on their data.
Next we learn that another problem with the NSS is that it:
[does] not correlate well with other, more robust, measures of quality, with some of the worst courses in the country, in terms of drop-out rates and progression to highly skilled employment, receiving high NSS scores.
As David Kernohan points out, retention rates and employment outcomes don’t correlate with each other either, which more than slightly undermines the case made here.
There is, however, plenty of evidence that employment outcomes are shaped by the state of the economy, gender, ethnicity, disability, geography, and social class more than they are by the quality of the course you study – here’s one example from many.
We are living in strange times indeed, when the NSS – a measure of quality developed over decades, based on an instrument validated in government-backed reviews in Australia and the UK – is dismissed because it does not relate to the government’s currently favoured quality measure.
Goodness only knows how the authors of this document will react when, based on this logic, they realise that the employment outcomes during the Covid-19 pandemic tell them that there has been a complete collapse in the quality of university education over the last three years.
It is only fair to note that some institutions have tried to game their performance on the NSS and some have introduced overly bureaucratic approaches to manage this. But it is a complete overreaction to use this to rubbish an instrument that has provided students with useful insights into the quality of degree programmes, and is used by course teams across the country to inform efforts to improve the quality of their courses.
The final blow comes at the end when, from out of nowhere, we suddenly learn that, not only has the NSS lowered standards, but it has also fuelled grade inflation. The only possible explanation for this is that the NSS has so blinded universities to their educational responsibilities that they just give students whatever degree outcomes they ask for. Unsurprisingly, no evidence is offered to support this link.
Developing measures of educational quality that are not open to institutional gaming and genuinely inform students about the quality of courses is very important. This is why the agenda set for this review is so troubling, and why it is so shocking that a radical root-and-branch review has been set up on such falsehoods, while being given fewer than three and a half months to reach its conclusions.
What becomes clear is that there may well have been a decline in standards: a decline in the standards of policy makers in setting the terms of an important review of a key element of the higher education system.