
Proceed and NSS? not so much

Why does student satisfaction data bear so little relationship to outcomes or continuation? We've got the data.

David Kernohan is Deputy Editor of Wonkhe

The charges made in the Department for Education’s plan to reduce bureaucratic burden back in 2020 still don’t stack up.

Former Secretary of State Michelle Donelan made the troubling assertion that:

good scores can more easily be achieved through dumbing down and spoon-feeding students, rather than pursuing high standards and embedding the subject knowledge and intellectual skills needed to succeed in the modern workplace

It sounds to me like she was setting up an opposition between good NSS scores and employer esteem. Handily, we have a way to test that at subject level.

Progression

But if you plot NSS scores against progression (to a highly skilled full-time job, to further study, or to another “good” outcome) you can see little to no relationship between these variables.

[Interactive chart: NSS scores against progression rates, by provider and subject]

For any given subject some providers will do well on NSS, some on progression, some on both, and some on neither. There’s a notable tendency (look at the colours) for more traditional providers (pink, yellow) to score well on progression – this could be linked to the profile of students that attend such providers, or to brand recognition among employers. But it bears no relation to the student experience as reported by actual students.
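If you’d rather quantify “little to no relationship” than eyeball a scatter plot, a rank correlation within each subject area does the job. A minimal sketch follows – the file and column names are placeholders, standing in for whatever the OfS data downloads actually give you:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical file: one row per provider/subject pairing, carrying an
# NSS agreement score and a progression rate from the OfS downloads.
df = pd.read_csv("nss_progression_by_subject.csv")

# Correlate within each CAH2 subject area, so we compare like with like.
for subject, group in df.groupby("cah2_subject"):
    rho, p = spearmanr(group["nss_agreement"], group["progression_rate"])
    print(f"{subject}: rho={rho:.2f} (p={p:.3f}, n={len(group)})")
```

A rho near zero for most subjects is the statistical version of the scatter you can see above.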

Clearly, if anyone is gaming the NSS quietly enough to avoid scrutiny but profoundly enough to shift aggregate performance, there is no easy way to spot them on this plot. But maybe we could see them in another place.

Continuation

The other end of Proceed looks at the predicted chance of a member of a given subject cohort completing their course. Students could, theoretically, be persuaded en masse to game a survey – but if a student is annoyed or frustrated enough with their course to leave, that is picked up in the OfS data that underpins this model.

[Interactive chart: NSS scores against continuation rates, by provider and subject]

If students were being incentivised to score an unpleasant course well, we’d see high NSS scores alongside high dropout rates – exactly the same signature as one of those challenging but rewarding courses that some people seem to like. (Honestly, if you have a student attrition problem you have a problem either in your admissions process or in your student support systems.)

But a play with the chart (you can look at different subject areas at CAH2 level, and at any NSS question you are interested in) does not reveal much evidence of any of this. Again, more selective providers have better continuation rates, but it’s all looking rather tenuous. Students just like some courses and don’t like others.
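If you want to go hunting anyway, the test is easy enough to sketch – the file and column names below are invented for illustration:

```python
import pandas as pd

# Hypothetical file: one row per provider/subject pairing.
df = pd.read_csv("nss_continuation_by_subject.csv")

# The "gaming" signature: a cohort that says it loves the course (top
# quartile on NSS) while leaving in unusual numbers (bottom quartile
# on continuation).
happy = df["nss_agreement"] >= df["nss_agreement"].quantile(0.75)
leaky = df["continuation_rate"] <= df["continuation_rate"].quantile(0.25)

print(df[happy & leaky][["provider", "cah2_subject"]])
```

Anything this flags could, of course, just as easily be one of those challenging-but-rewarding courses – which is rather the point.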

Proceed

Coming soon to a bus, train station, or little frame next to the urinals near you, the Proceed metric (yes, I know it is an acronym, but I refuse to dignify it by spelling it out) is meant to offer applicants a quick means of assessing the quality of an advertised course. Well, of an advertised subject area at one of two levels.
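(For the record, as I read the OfS methodology, the arithmetic behind Proceed is simple: the projected completion rate multiplied by the rate of progression to highly skilled work or further study. The figures below are invented:)

```python
# Proceed, as I read the OfS methodology: projected completion multiplied
# by progression to highly skilled employment or further study.
projected_completion = 0.85  # invented: share of entrants expected to complete
progression = 0.70           # invented: share of graduates with a "good" outcome

proceed = projected_completion * progression
print(f"Proceed: {proceed:.1%}")  # Proceed: 59.5%
```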

Now, I’m old enough to remember the Key Information Set (KIS) – and those widgets drawn from Discover Uni data that sit alongside online course pages on provider websites. This data does include student satisfaction alongside graduate employment data – it also used to include information of more direct interest to applicants, like accommodation costs, contact hours, and assessment records.

On other adverts we are used to seeing – even alongside statutory investment warnings – review data from people who have used the service. But perhaps Proceed is a close enough proxy to obviate this?

[Interactive chart: NSS scores against the Proceed metric, by provider and subject]

Apparently not.

Data for applicants?

I’ve never been convinced that chucking data at 17-year-olds is the way to “fix” applicant behaviour. There have been enough surveys suggesting that what really works is direct personal contact with students and staff on their intended course to make me wonder about the review model again – but the NSS feels like a source of data that is close enough to this kind of information to make sharing it worthwhile.

I know the word is now verboten, but if you are going to have customers you need (as Paul Simon once noted) to keep them satisfied. The fact that student satisfaction does not have any relationship (positive or negative) to outcomes or continuation data suggests either that students are focused on fripperies or that students do not judge their courses by outcomes. I know which answer makes more sense to me.

3 responses to “Proceed and NSS? not so much”

  1. There are, however, notable negative relationships between satisfaction with the assessment questions and the Proceed metric.

  2. I may be being thick here, but how is a student even supposed to judge their course by its outcome when they take the NSS in their final year, not a year after graduation – they don’t yet have an outcome?

    1. They’re not. NSS has nothing to do with graduate outcomes.

      Some people (in the Department for Education) have suggested that universities are making courses less rigorous in order to get better NSS scores. If courses really had been getting easier we’d see worse graduate outcomes – this doesn’t appear to be the case.
