Jo Johnson has announced that the weighting of the National Student Survey in the Teaching Excellence Framework will be halved. It is a decision that further reduces the student voice in the higher education policy landscape and replaces it with a stronger emphasis on institutional metrics.
The operational direction of policy has been clear for a while: metrics and output measures, deliverology and value for money. While it has been tempting to dismiss these changes as negative and damaging, a new level of accountability has been developing that could be argued to reflect principles of higher education as a social good, albeit under a very different accountability regime. And indeed, Jo Johnson’s speech refers explicitly to the ‘legitimacy’ of the sector, to student views through the HEPI-HEA Student Academic Experience Survey and to the value-for-fees argument. The rhetoric is there, but the commitment to actual students is now diminishing.
In 2011, HE policy became driven by ‘Students at the Heart of the System’. We were told the introduction of fees was going to be empowering for future students. Then the Office for Students was announced in 2015 but, despite its promising name, there was no student voice within the structure or governance of the proposed OfS. Following persistent objection to this, there will now be a member of the governing board with specific expertise on the student experience – a rather indirect approach to representing the student voice. At least the TEF recognised the importance of putting students at the centre: the NSS plays a major role, and there are students on TEF panels.
The student voice
The NSS has a track record of giving students a voice. Over more than ten years, the survey has given institutions, students, academic programme teams, students’ unions and, importantly, prospective students a wealth of information on the student learning experience. For a good part of the sector, the NSS and related student engagement metrics have influenced teaching practices more than almost any other policy measure. The recently reviewed NSS now also explicitly invites students to rate how effectively they think student feedback has been acted upon. The inclusion of those student voice questions shows that the sector itself is taking student representation and student interests seriously. Student engagement and partnership have long been part of quality management mechanisms as well as university governance practices. And now the sector has reflected that collective commitment to engagement with the student voice in its public accountability.
This year the student response to higher education policy has been particularly interesting. When the NUS decided to campaign against the TEF and student fees, they looked for a route into the new policy environment. And so they boycotted the NSS to send a message to the government. In a few institutions the student voice fell silent, with the intention of affecting some of the datasets for the subject-level TEF trials ahead. Students at the Heart of the System cuts both ways. And while Jo Johnson quite rightly recognises that “the NSS remains an extremely valuable source of information”, he nonetheless diminishes the student voice it represents, precisely when the students’ representative body chose to use it to speak out against the TEF.
Where did the other half go?
When the student interest weighting is halved, what is the disappearing half replaced with? And will it be relevant to students – prospective and current? We knew that weighted contact hours would be introduced. And we expected salary data through LEO. Jo Johnson also announced grade inflation metrics – not entirely unexpected either, but surprising nonetheless. He has moved from initially pressing for a grade point average to having (rapidly assembled) national degree classification standards, whose effective implementation will ultimately be measured through the grade inflation metric. Standards become a matter of policy control rather than an academic matter – a controversial move in itself which will generate debate for years to come.
Policy imperative or student interests?
A positive aspect of the TEF is the benchmarking of data not by institution, but according to student characteristics (including subject, age, background and so on). Used wisely, such student-centred benchmarking can help show how different groups succeed across the sector and what kind of educational practices deliver good outcomes for a diverse student population. In a sense, student-centred benchmarking has given a new, data-driven, cross-sector voice to particular groups of students. This has reinvigorated the debate on inclusivity and ultimately supports efforts towards greater equality for students with, say, protected characteristics. In itself, a great gain.
But the newly announced metrics are different. Neither the new grade inflation metric nor contact hours relate sensibly to student characteristics, whereas progression, graduate destination or students’ views clearly do. The new metrics are no more than institutional performance statistics.
Their inclusion moves the teaching excellence debate further away from the learning by students whose education TEF is meant to judge. We can only hope that TEF submissions and TEF panel members continue their emphasis on actual teaching excellence and the student learning experience.
‘Halving the NSS weighting’ may have been a statement designed to pacify objections from within the sector, but underestimating the student voice can ultimately come at a cost. This government linked teaching excellence to fee levels as a policy lever in order to change the sector. So whenever TEF gets discussed, fees are also on the table and students will be interested. And the last election showed us how politically relevant the student vote can be.
The NUS need to put their hands up and accept some responsibility for this. The boycott was a perverse exercise which diminished the power of the student voice and weakened the standing of one of the few tools available to students with real bite in producing positive change within HEIs. Thanks to the government response, they are now effectively joint creators of a second spectacular own goal in the standards game. Yes, the government are equally culpable, but if the NUS’s childish pursuit of short-term disruption had been checked, the TEF, the sector and, crucially, students might all be the better for it.
The very basis of the NUS boycott was that the NSS ‘was never designed’ to be included within a metric carrying such heavy weighting and then used to determine teaching excellence (TEF outcomes and further fee caps).
So really, the ‘student voice’ needs to make its mind up!
Actually, the boycott has been incredibly successful. It has demonstrated three things:
1. How flimsy the TEF’s goal of measuring ‘teaching excellence’ is when it is happy to move away from teaching measurements so easily – proving that its real goal is differentiating tuition fees
2. The boycott has managed to expose the TEF’s most glaring flaw: that the government will openly game it in order to ensure its most elite institutions still succeed. Had it not been the Russell Group who boycotted, not least Oxford and Cambridge, I doubt they would be doing this.
3. The TEF clearly doesn’t put students at the heart of the system – students have openly and powerfully illustrated that they don’t want to take part in the TEF and its link to fees. They don’t want it to rate their universities with flawed and insufficient metrics; they don’t want it at all – but the government is still bulldozing through with it, effectively ignoring the student voice
What is childish is that the government, instead of taking a step back and thinking ‘huh, students really don’t like the TEF, how can we make it better for them?’ (which is what the House of Lords did), are dismissing the legitimate concerns of thousands of people in favour of their own quick fixes.
The NSS is a poor measure of student voice, and students have never claimed otherwise. I talk to many students who either don’t know what it’s for, think they should give positive responses to make themselves more employable, or think it is used against them (e.g. in the case of raising fees). It is ridiculous that the TEF should use NSS responses to raise fees while at the same time claiming that this is ‘what students want’. That is what the boycott was about.
The proposals haven’t “halved” the student voice as it was never really there in the first place. Instead of making the system better, however, the government have taken away students’ only (very flawed) bargaining tool. It’s like a slap in the face.
With over 3 million students having completed the NSS in the past decade, it’s a far better metric than any alternative. Instead, the NUS complain because it consistently shows that students’ unions perform poorly (usually 15% below the university’s score on the overall satisfaction question) and exposes them for the sham that they are.
If anything, the TEF needs to use all 27 NSS questions, including overall satisfaction (Q27), and not just questions 1-14.
The NUS have ruined the only thing that made them relevant – unis only ever cared about them because of the NSS and now we will see the decline of unions over the next 5-8 years.
But this just isn’t true, Mark. The NSS measures satisfaction, not quality. That is why the metric is a poor choice for the TEF. Happy or unhappy students are not a measure of teaching quality; they are a measure of satisfaction. And satisfaction, as I’m sure psychologists would agree, is influenced by too many things beyond the control of a university’s teaching resources.