An Office for Students report concludes that the National Student Survey will continue broadly as is for the foreseeable future.
It will be run in 2020-21 in the same way it was run this year, as a population survey. Results will be published at a statistically robust level, and there will be efforts to improve user guidance and – in particular – to raise student awareness and use of the published resources.
Further forward, there will be some changes to individual questions – along the lines of the 2017 NSS review. The word “satisfaction” will disappear from question 27. And the Office for Students will undertake further examination of whether running the survey with a large, stratified sample – or biennially – would provide usable data while reducing burden.
Our survey said
A student survey, a survey for other NSS users, a series of structured interviews, five round tables, discussions within OfS, and detailed statistical analysis conducted by OfS were all unable to identify evidence of grade inflation linked to NSS scores. There was no correlation between grade inflation and increases in overall satisfaction or reported intellectual stimulation.
The OfS did not find any evidence of widespread gaming of the survey. The current survey was identified as the best of the available options by staff and students – cancelling the survey would not appreciably reduce administrative burden, since providers would still need to collect information on the opinions and experiences of students in order to iterate improvements in teaching and support for learning.
A section on “concerns about the NSS” refers only to criticisms of the survey by the Department for Education. Despite the original request from DfE, closely mirrored in the terms of reference of this review, OfS has been unable to identify any other substantive criticism. There was, to be utterly even-handed, some concern about the perception that this survey of the student academic experience was a satisfaction survey – and some academic staff were concerned at the heavy-handed use of survey results within a provider to seek out problematic areas of provision.
Faces on posters, too many choices
The large container carrier that is the National Student Survey appears to be pretty much shipshape. Certainly, there is no evidence other than in the minds of DfE advisers that it is blocking the Suez Canal of high-quality student experience. Measures to address this imagined blockage are far more likely to damage our ability to understand how students feel about their higher education experience than improve anything.
This all puts the Office for Students in an uncomfortable position. We’ve not (yet) seen the English regulator go against DfE guidance in this way – at the time the consultation exercise began, we commented on a survey design that seemed to aim at getting people to say quotably negative things about the NSS. It turns out that it’s not just us who respond poorly to such obvious leading questions.
There was clearly an appetite at DfE to “solve” the “problem” of the NSS, but the sector has spoken up in favour of current arrangements. The annual trickle of reports that academics are encouraging students to say nice things about them (low double figures every year, at best) does not make for a crisis of confidence.
I appreciate this is a bit of an “inside baseball” point, but usually whenever OfS sends out a press release, we get a comment from a DfE minister (Gavin if it is “free speech” or any opportunity to say that some university courses aren’t good, Michelle otherwise). Despite multiple attempts to contact DfE, we did not get a comment on the outcome of the NSS review it asked for.
Update (9.30am, 30 March): A DfE spokesperson (you’ll note, not a minister) told us:
This government values the views of students in higher education and is committed to effectively gathering data on their academic experience, whilst also avoiding pressures to reduce standards. We are pleased the Office for Students will be removing ‘satisfaction’ from the National Student Survey and look forward to working closely with the OfS through the next phases of the review to understand what more needs to be done.
Too many shadows, whispering voices
Our earlier article on the consultation included a graph that I make no apologies for repeating here:
As the OfS put it:
Data can be used, however, to rule out causal hypotheses. For example, if event B often occurs in the absence of event A, we cannot claim that A is the sole cause of B. Our approach, therefore, has been to explore whether the data we hold is consistent with the hypothesis that the NSS causes grade inflation.
Reader, it should come as no surprise that the data OfS holds is not consistent with the hypothesis that the NSS causes grade inflation. The report plots the overall trend in the rising proportion of first class degrees against sector-level answers on intellectual stimulation and overall satisfaction, and changes in the proportion of firsts against changes in overall satisfaction over eight years (the graph above shows the difference against the benchmark for 2020 results).
This makes it clear that there is no evidence for provider-level grade inflation either as a response to, or an influence on, provider NSS results. The report hedges this slightly by asserting that:
if there is a causal relationship between the two, it is complex and that multiple factors are involved. While we heard no evidence to suggest the mechanism is through provider-level behaviour, feedback from some individual academics suggest there could potentially be some effect at an individual level
The technical report on grade inflation adds that “colleagues” (both internal and external to OfS) reflected that the acceleration of grade inflation took place at around the same time that fee limits were increased. “Perhaps”, it is noted, “an increased emphasis on competition and the market led to accelerated grade inflation”.
There’s also a fascinating piece of analysis on overall satisfaction by degree classification – this shows a clear, but stable, correlation between degree class and overall satisfaction. There’s a similar correlation for responses to the “intellectual stimulation” question, suggesting students who work harder are more satisfied. We should also note that students did not have their final results at the point they completed the survey.
If, when, why, what, how much have you got?
You don’t need to look far to find providers working to drive up NSS response rates – by reminding students to complete the survey, and by providing space for them to do so. What is against the rules – and leads to the suppression of survey results in particular academic areas of particular providers – is any encouragement to complete the survey in a particular way. Since it came into being, the OfS has upheld 24 individual complaints about this over three years (compare that to the number of responses to the survey during the same period). Annual complaint numbers were similar during the HEFCE years.
The workshops with students held as a part of this investigation confirmed that
student opinions were not likely to be swayed by staff influence or incentives like free pizza or coffee
Which is pretty much as you’d expect. The evidence is that students take these surveys seriously, both as a way to feed back on their own experience and (as I hinted this year) to improve the experience of future cohorts. There is no evidence of any systematic attempt to influence survey responses.
One aspect of the survey that OfS may be looking into is the use of a five-point Likert scale for responses. There’s a body of literature suggesting that, given an odd number of points on a scale, respondents are more likely to cluster on the middle value than they would be on an even-numbered scale. This makes both intuitive and statistical sense, although a fundamental design change like this would come at the expense of being able to construct a time series or compare cohorts using existing data.
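To make the comparability point concrete, here’s a minimal sketch using invented figures (not NSS data, and not anything from the OfS report): the headline measure reported from the current scale is the share answering in the top two categories, and any move to a four-point forced-choice scale would make that figure depend on an unknowable assumption about where the neutral respondents would have gone.

```python
# A minimal sketch (hypothetical figures, not NSS data) of why moving from the
# current five-point scale to a four-point forced-choice scale would break
# comparability with the existing time series. The headline measure is the
# share of respondents answering in the top two categories.

five_point = {
    "definitely agree": 0.38,
    "mostly agree": 0.37,
    "neither agree nor disagree": 0.12,
    "mostly disagree": 0.09,
    "definitely disagree": 0.04,
}

headline = five_point["definitely agree"] + five_point["mostly agree"]
print(f"Five-point headline agreement: {headline:.0%}")

# On a four-point scale the 12% of 'neither' responses have to land somewhere.
# A comparable headline figure depends entirely on how they would have split,
# which cannot be recovered from historic data.
for share_to_agree in (0.0, 0.5, 1.0):
    adjusted = headline + five_point["neither agree nor disagree"] * share_to_agree
    print(f"If {share_to_agree:.0%} of neutral answers become 'agree': {adjusted:.0%}")
```

On those invented numbers the headline figure could sit anywhere between 75 and 87 per cent depending on the assumed split – which is the sense in which a redesigned scale would cut the new series off from the old one.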
Have you got it, do you get it, if so, how often?
Up front, there doesn’t appear to be any general concern with the burden that NSS places on providers – and students’ only concern is the possibility of survey fatigue. If the NSS didn’t exist, it is clear that the sector would invent it – or invest in alternative national and local surveys in a way that would add administrative burden.
But because the DfE is so certain there is an issue, OfS will be investigating two alternative options – a large sample-based survey, stratified by provider and subject; and a biennial version of the current survey. Neither would give data as good as the current design (students, providers, and other respondents were clear about that) – but there was some suggestion that the biennial version in particular would decrease administrative burden. I’m not so sure.
The large stratified sample version would require institutions to work with the survey contractor and OfS to identify and contact a representative sample. OfS suggests that this would represent 60 per cent of the current provider burden – a figure that seems to me both low and unevidenced. The probability of a student being one of the (250,000) selected would lie between 60 and 100 per cent – and one in ten providers would end up with all of their students involved anyway. This method may also bring a reduction in central costs (collecting responses from 250,000 rather than 311,432 final year undergraduate students is estimated to represent 75 per cent of the current costs). And all this to provide comprehensive and statistically reliable results at provider level only – just over 60 per cent of CAH3 subjects would have usable results, compared to 75 per cent now.
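A rough illustration of why those selection probabilities vary so much (my own sketch, with invented cohort sizes and precision targets, not the OfS report’s calculation): if each provider-by-subject stratum has to yield a statistically reliable estimate on its own, the required sample is a large share of a small stratum and effectively all of a very small one.

```python
# A rough illustration (invented numbers, not the OfS report's calculation) of why
# a sample stratified to give reliable results for each provider-by-subject group
# ends up selecting most students in small groups and all of them in the smallest.
import math

def required_sample(population, margin=0.04, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion to the given margin of error,
    applying a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)           # finite population correction
    return min(math.ceil(n), population)

# Invented final-year cohort sizes for three provider-by-subject strata
for name, cohort in [("large stratum", 400), ("mid-sized stratum", 120), ("small stratum", 40)]:
    n = required_sample(cohort)
    print(f"{name}: cohort {cohort}, sample needed {n} ({n / cohort:.0%})")
```

On those invented figures the sampling fractions come out at roughly 60, 84 and 95 per cent, and anything much smaller than the smallest stratum would simply be surveyed in full – which is the shape of the 60 to 100 per cent range the report describes.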
The biennial survey idea simply splits the sector into equal halves, and runs the existing survey for the final year undergraduate cohort of one half each year. This effectively disenfranchises about half of all students in each cohort, reduces the amount of data collected, destroys the annual time series of sector-level data, and leaves provider-level results open to the impact of one-off events like industrial action (or global pandemics…) in a way that comparators would not be. Though staff at providers would have to run the survey only once every two years, the overall cost of the exercise at a central level would not reduce by much, and I would be very surprised if the costs halved overall.
Which do you choose, the hard or soft option?
Were this a normal review of NSS, conducted in normal times, the result would be obvious – there’d be some tweaking to the questions, some changes in emphasis, but the survey overall would remain the same. There would be no case to make radical changes based on the available evidence.
But this is not a normal review. The strength of language in the initial commission from DfE, and the strength of feeling this implies, suggests that action is expected despite – not because of – the available evidence. This is a dangerous road to go down in policymaking terms, and all credit should go to the OfS for reporting the evidence as it stands in the face of such pressure.
I’m tempted to ask What Have I Done to Deserve This?