Every year, institutions seem to place an increasing focus on the National Student Survey as a benchmarking exercise.
And every year there are stronger marketing campaigns encouraging students to complete the survey – and some institutions offer benefits for those who do. Scroll through NSS on Twitter and you get an interesting take. There are the inevitable shiny graphics from universities or departments who have done well against their benchmark, or have finished in the top X position – quite rightly, as they should be proud of themselves; people offering critiques of the system as a whole; analysis of the results and what they mean; and, perhaps my personal favourite, university and SU staff posting pictures of themselves waiting at their laptops with a mug of tea or coffee, and perhaps even breakfast in the shot.
The question is, why do we put so much focus on the NSS over the other measures of student satisfaction out there? The accompanying OfS statement for 2019's results suggests that we still need to improve the feedback universities collect, with Nicola Dandridge stating:
We will continue to develop the National Student Survey, ensuring it remains an invaluable tool for capturing student opinion and driving improvements across the sector – both for the benefit of current students and generations to come
It is of course the largest single data collection exercise among undergraduates, but could we gain more valuable data another way? Should we be relying on a single final take to analyse a three, four or even five year journey at undergraduate level, captured at the very end point of what is ultimately a transformational time for anyone who completes a degree programme?
Having data is a great thing, but only if it is used and interpreted correctly. For example, question 1 'Staff are good at explaining things' and question 2 'Staff have made the subject interesting' both ask students to rate all the staff they encounter on their course as a single entity, failing to capture inevitable differences from module to module. If this data is not used alongside modular feedback then in effect it offers nothing more than a snapshot view of three years.
More worryingly, some departments and students' unions report that their funding is linked to NSS outcomes. For this to make sense, the results would need to be mapped against performance metrics showing active engagement, completion rates, graduate outcomes, modular feedback and more – a wealth of data that is already collected, and that must sit alongside the NSS to enable any concrete analysis.
What does the data show us?
There is data that we can compare across multiple demographics, covering Learning Opportunities, Academic Support, Learning Resources, Learning Community and Student Voice. The breadth of this focus on academic experience is helpful, but does it represent the overall picture of what a degree in the UK is? The Student Experience section of the Office for Students Strategic Mandate 2019 opens with the line:
The OfS should continue its work supporting the student experience with a focus on wellbeing, mental health, welfare and harassment and hate crime
Only one of the 27 NSS questions – 'I feel part of a community of staff and students' – gets near this, and it has a low score of 69. This is hardly surprising given the widely accepted mental health crisis facing our campuses, and perhaps simply rubber-stamps what we already know from research such as Wonkhe's on student loneliness.
So if this is the focus that the OfS has from the government, why isn't it something we measure? Why does the NSS effectively say that student life comes down to academic experience alone? We need questions that focus on wellbeing, mental health, welfare and harassment if we are to understand national and institutional progress.
Benchmarking against other institutions is of course a useful thing that the NSS exclusively enables for the undergraduate experience. But what if your institution uses the Additional Questions, particularly those for students' unions – not least because of the widely known flaws in question 26? You might get data on sense of belonging, local community and skills, as these make up a core aspect of an SU's charitable objectives and purpose. Yet there is a complete inability to benchmark these against others – while you can view the average for the country, you have no idea who you are being compared against: it could be hundreds of similar-sized institutions, or it could include providers that don't even have SUs.
Trying to use this data to make improvements for the benefit of students poses another challenge. By the time the data is made publicly available, those students have already graduated. Obviously comparisons can be made with previous years' data to monitor trends, and if there has been an increase or decrease there can be (at best) an educated guess as to what you did that worked – then comes the search through the open text feedback, in the hope there will be something matching the area you want to look at that offers that golden nugget of 'why'.
But if you want to go out and gain further student feedback, you have to approach students from subsequent year groups about feedback they didn't give you – a bit like the classic Two Ronnies Mastermind sketch. This will bring general feedback from a current cohort, but their experiences can be vastly different from the year before.
Sitting next to other data
There are strengths to this data and ways it can be used. The demographic breakdowns should sit alongside attainment rates. For example, if you find lower satisfaction among BME students and know that you have a black attainment gap, then this really is a driver to go out and establish why, and to ensure that this work sits alongside your Access and Participation Plan.
It should really be a tool for measuring impact, sitting alongside the OfS A-Z guidance on the student demographics providers should focus on. Why is it that the data cannot be broken down by refugee status, low household income and socio-economic status, carers, care leavers and so on? Surely, as a measure for judging student satisfaction against an institution's overall profile, these need to be reflected so that the NSS correlates with Access and Participation Plans.
Of course, the inevitable 'Value for Money' issue was raised by Chris Skidmore on the NSS website:
We know the quality of teaching and assessment is a huge factor in a student’s experience of university and whether they feel they’re getting value for money.
It's odd that we use this to determine 'Value for Money'. Value is subjective, and when it comes to money it is determined by what you can afford. £9,250 – or, perhaps more controversially, a future £7,500 – would surely represent greater 'Value for Money' if you view it as cheap in comparison to a private education. The data available does not show a breakdown of home versus international students, yet paying a different price for what is ultimately the same product surely produces a different view of its value. It's like going to the cinema and buying your sweets from a different shop beforehand, because it's better value than paying three times as much for the same thing inside.
Given the survey does not directly ask whether a degree represents value for money, it is worrying if the OfS and the minister intend to use it as a benchmarking exercise for value.
We can do better
I'm a believer that there is value in the data obtained from the NSS, but that value comes from it being the best UK-wide comparison tool available. That doesn't mean the NSS can't be improved. The opportunity to collect this level of data needs to be more frequent and open to students in all years, so that we can distinguish changing patterns and see whether satisfaction shifts from year to year. Free text needs to be available not just for an overall statement but for each question, to provide richer qualitative data. The questions themselves need examining and repurposing to be relevant to the breadth of student life, academic and lived experience alike. Access and participation must be integrated into the NSS. And there should be an opportunity for further analysis with those who have taken the time to fill it in – through follow-ups, focus groups, modular feedback and academic representation systems – in order to see continual improvement and better higher education.