Team Wonkhe slack chat: the NSS

The National Student Survey 2017 results are published today, alongside details of a consultation process for the establishment of an NSS for taught postgraduate students. But, twelve years on – and amidst the now annual round of discussion – what does the NSS actually tell us, and what do we think it means? Is it the right instrument to understand the modern student experience, and, even if it is useful at undergraduate level, will it work for postgraduate courses? Members of team Wonkhe took to Slack to discuss the issues.

Content has been lightly edited for clarity and to add relevant links. This is an experimental format for us (cribbed from the supremely wonky FiveThirtyEight), so do please comment – and continue the conversation – below.

dk: (David Kernohan, Associate Editor) We’ve probably all got the #nss2017 column open on twitter, and just like every year we’re seeing proud institutions alongside serious criticisms of the entire process. This year the pressure is even greater as an NUS boycott means that a number of prominent institutions aren’t eligible to have their data published. Twelve years on from the first run, what can the NSS really tell us about the student experience?

david: (David Morris, Deputy Editor) As I noted in my blog this morning, I think a good starting point on this is Graham Gibbs: “NSS scores for a degree programme provide a useful indication of where there might be problems, but rarely point directly to the cause of problems let alone to possible solutions…”

nona: (Nona Buckley-Irvine, Policy Assistant) Yeah, I agree. I think NSS can be useful on an institutional level, particularly when it comes to the open text comments. The issue is these aren’t up for grabs in the public domain

ant: (Ant Bagshaw, Deputy Director) On the point about pride vs. meaning, one thing that has become a truism in HE (see LEO and TEF results too) is that the default is to crow about wins and decry/ignore the exercise where you don’t do well. I’m not sure how much the actual results reach prospective students

nona: That’s where you’ll get the info on lecturers not turning up to lessons, poor wifi connection in a particular institution, delays to timetables causing mass frustration. But the raw figures really aren’t that telling at all, and it’s difficult to draw comparisons between different institutions e.g. the arts student experience is radically different from that of medics

ant: Overall satisfaction is pretty useless, and strange that HEFCE gives that out under embargo (and not the other details)

ant: It does lead the story that way

ant: Useful internally is true also for TEF, why the split metrics are important, and also the prospect of subject-level; agree with @nona, though that’s not today’s story

dk: There are genuine concerns about the figures – just a glance over the full results shows sample sizes in the low teens for many individual courses. Back when the NSS was being developed at HEFCE (by the same chap who is leading on the TEF there, history fans!) there was loads of concern about the issues that low sample sizes could cause. Many steering group members argued for error bars – but they didn’t get them.

david: This is interesting because it’s only recently (one or two years ago?) that the publication thresholds were actually reduced, and HEFCE were adamant that this was statistically sound

dk: And there was a great deal of argument as to whether it should be even possible to construct a league table. That’s why you can’t easily build one from unistats – a feature not a bug.

david: It makes the limitations quite acute for many HE in FE and alternative providers

dk: Indeed – and for smaller specialist providers like arts colleges

ant: But true for nearly everyone at programme level. Somewhere in the institution will be a small result

ant: What’s the alternative?

dk:  @ant is there a need for any kind of national comparison at all? what do we really learn from it?

ant: Unis do something with the results, actually pay attention to the good/bad

ant: Not sure they’d do that if not public, harder to see that from PTES/PRES

nona: I think there can be some use to national comparisons, but they need to be done in a more nuanced way rather than focussing on the overall league table.

david: There’s a tension at the heart of the exercise here – on the one hand, the data is best used as a formative and enhancement tool, but it also needs that ‘whip’ of being made public to be effective, and for providers to take notice

dk: But they pay attention to their own internal module feedback surveys just as much if not more than the NSS (which is very much a blunt instrument that can’t be used to assess the impact of specific interventions)

nona: @dk I think that’s a problem with the concept of student satisfaction. e.g. at LSE, where I was a student rep, we fared much better on internal module feedback surveys than on the NSS. The problem was much wider than just course delivery, and the NSS was effective in drawing that out

ant: @david, yes; @dk, not my experience

david: Well one bonus of the NSS in that regard is that it measures *programmes*, not modules, and thinking about programme level interventions is essential for improving L&T and student learning

ant: Modules too important for that

dk: @nona I agree with you that nuance is required. So is it our fault (as commentators) that the stories are all about rankings?

nona: @dk yes, it is our fault! Everyone wants to crunch data in a way that produces a story – in a competitive HE market the rankings are usually the story. Although some commentators are worse than others…

dk: @nona the public nature of the NSS lends itself to a more political use than just simple feedback. As the NUS boycott has demonstrated.

david: Absolutely – it’s a hard circle to square, but the feedback function is still there and still important, if overshadowed

nona: @dk it does. But on the other side of the coin, with internal feedback I would say students aren’t necessarily that engaged with the implications of what they fill out – aka can be quite passive in filling out the forms

david: An HEA report in 2014 found it to be the most effective and cost-efficient L&T intervention HEFCE had overseen over the last 15 years or so

nona: Well that’s a good news story

david: The important thing with the feedback aspect is that institutions need to follow it up with more qualitative forms of engagement

dk: @david but it’s not an intervention, it’s a metric. Sensible actions based on findings are the intervention.

ant: @dk, that’s a useful analysis, good to see boycott for what it is – a political act – one which could have damaging consequences for SU-uni relations, use of NSS for enhancement

ant: Seems to me like we have consensus: NSS as league table = bad; high quality enhancement within institution = good (whether uses NSS or not)

dk: So we like feedback, but we’re leery about public point-scoring.

nona: Oh how we’re leery

ant: We enjoy the point scoring for sport

ant: The spin-o-meter in overdrive

david: the public point-scoring is all flotsam and jetsam – the really interesting thing is how it’s used on the ground

dk: @david the ground war not the air war.

david: shock and awe

dk: So what would fix the NSS? Could we just keep it secret?

nona: I just don’t think the NSS is the answer. Genuinely working in partnership with students’ unions to enable them to produce independent reports on the student experience that are then genuinely acted on seems far better to me

david: yes, but what incentive do universities have to do that without the ‘whip’ of public results? I don’t necessarily disagree, but I just don’t know the answer to that question

nona: Also there are other things that the NSS doesn’t capture in its one size fits all approach – e.g. financial anxieties, mental health etc. There are unique stresses depending on the institution that would likely impact the overall satisfaction score. If you could design a bespoke one then much better

nona: @david a problem indeed.

dk: @david league tables as incentives to improve is peak Michael Barber?

nona: Performance-related pay for VCs?

ant: I think the state of PGT education answers that; there isn’t improvement in quality without the public evaluation

dk: @nona some institutions do have bespoke questions alongside the NSS – but we don’t know how these are used in practice.

ant: And they have their own internal surveys, so scope for nuance

david: @dk it is, and this is why i found Shan Wareing’s article on Wonkhe back in April a really interesting intervention

david: suggested new forms of public accountability that are perhaps more sophisticated than what Barber has advocated in the past

dk: @david top on-message citation game here.

david: but would they work? who knows

dk: @ant do you think a PGT NSS would work?

ant: yes

ant: as much as UG NSS works?

dk: @ant so as fodder for press offices in August and (occasionally) use in planning the next run of a programme?

nona: Setting your own qs as an institution is questionable

ant: @nona, what’s wrong with own questions? Identify/solve problems in bespoke way, no?

nona: @ant As with any org setting their own questions, you can set them up in a way that gives you the answer you want.

nona: to be cynical

ant: @dk, I still think unis pay more attention to internal action *because* of public nature

nona: In a way I think that a PGT NSS would be more useful than the UG NSS, because it is a greater financial investment and I think students do want more of a ‘value for money’ assessment – e.g. career changers; and it’s easier to measure across (usually) one year

ant: @nona, but that’s a problem regardless; a lot of this discussion assumes genuine desire for improvement for students

nona: @ant and I think that is a potentially generous assumption for some institutions

dk: @ant tho the actions themselves are not always public. Or commensurate (institutions have closed courses that perform badly in the past).

ant: Why not close a course that performs badly?

dk: @ant define “perform badly”!

david: I don’t think it’s necessary to defend the management decisions made as a result of the survey in order to defend the survey

ant: Doesn’t recruit students, doesn’t satisfy students

dk: @ant some courses are – bluntly – a hard slog but eventually worth the effort.

david: NSS doesn’t kill courses, bad teaching and/or bad management kills courses

david: absolutely, but i’m very wary of that being used as an argument not to use them

dk: @david NSS is a warning light that suggests action may be needed. But is it accurate, and are the correct actions always clear?

david: @dk – not always no, that’s precisely what Gibbs says

ant: @dk, understand that there are some worthy courses that perform poorly when evaluated by students, and yes there needs to be nuanced management

ant: but why not take student views into account on this!?

dk: @ant again, nuance.

david: this is where effective management and leadership is vital

dk: @david I think we’re in agreement there.

david: careful handling of data and follow up engagement with students and staff crucial

nona: @david i don’t think it’s an argument not to use them, but about how it’s used

nona: @ant I do think that’s where we need to be careful in terms of how women/BME lecturers and teachers are rated worse

david: @nona – the evidence on this is far from conclusive, but needs further investigation

ant: @nona, don’t disagree, though that doesn’t seem to be an NSS issue – yes at course level

dk: @nona @ant a very good point. So much unconscious bias can creep in to survey instruments.

david: again, the programme-level of the NSS actually guards it more against bias than module surveys, which tend to grade individuals

david: the research literature is split on the extent to which students are prone to bias in evaluations

Further reading

ant: If we’re in the referencing game, look at my piece on subject-level TEF; opportunity for benchmarking between subject areas

david: I’d recommend: The 2014 HEFCE review of the NSS

Implications of ‘Dimensions of Quality’ in a market environment from the Higher Education Academy

The role of HEFCE in teaching and learning enhancement from the Higher Education Academy

dk: For the early history, try the 2004 HEFCE consultation responses, and the resources relating to the pilot year.

4 responses to “Team Wonkhe slack chat: the NSS”

  1. “Nona: @ant I do think that’s where we need to be careful in terms of how women/BME lecturers and teachers are rated worse

    david: @nona – the evidence on this is far from conclusive, but needs further investigation”

    Can you point to studies which don’t show bias against women and BME lecturers, please?

  2. Herb Marsh’s meta-analyses argue SETs are “relatively unaffected by a variety of variables hypothesized as potential biases”.

Marsh, H. W. (2007). Students’ evaluations of university teaching: A multidimensional perspective. In R. P. Perry & J. C. Smart (Eds.), The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective (pp. 319-384). New York: Springer.

Marsh, H. W. & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective. American Psychologist, 52, 1187-1197.

But there are lots of studies that argue the opposite, of course. The main one for the NSS is a regression analysis of characteristics that might influence higher scores, one of which they found to be having white teachers, but I think there could be more variables to investigate here.

Bell, A. R. & Brooks, C. (2016). Is There a Magic Link between Research Activity, Professional Teaching Qualifications and Student Satisfaction?

  3. I don’t understand how a student can judge how satisfied they are if they have only ever been to one provider.

    It’s like asking someone who’s only ever had Tesco beans if they are satisfied with the beans when they have never tasted Heinz!
