Jo Johnson has announced that the weighting of the National Student Survey in the Teaching Excellence Framework will be halved. It is a decision that further reduces the student voice in the higher education policy landscape and replaces it with a stronger emphasis on institutional metrics.
The operational direction of policy has been clear for a while: metrics and output measures, deliverology and value for money. While it has been tempting to dismiss these changes as negative and damaging, a new form of accountability has been developing, one that could be argued to relate to principles of higher education as a social good, albeit under a very different accountability regime. Indeed, Jo Johnson’s speech refers explicitly to the ‘legitimacy’ of the sector, to student views gathered through the HEPI-HEA Student Academic Experience Survey, and to the value-for-fees argument. The rhetoric is there, but the commitment to actual students is now diminishing.
In 2011, HE policy became driven by ‘Students at the Heart of the System’. We were told the introduction of fees was going to be empowering for future students. Then the Office for Students was announced in 2015 but, despite its promising name, there was no student voice within the structure or governance of the proposed OfS. Following persistent objection to this, there will now be a member of the governing board with specific expertise on the student experience – a rather indirect approach to representing the student voice. At least the TEF recognised the importance of putting students at the centre: the NSS plays a major role, and there are students on TEF panels.
The student voice
The NSS has a track record of giving students a voice. Over more than ten years, the survey has given institutions, students, academic programme teams, students’ unions and, importantly, prospective students a wealth of information on the student learning experience. For a good part of the sector, the NSS and related student engagement metrics have influenced teaching practices more than almost any other policy measure. The recently reviewed NSS now also explicitly invites students to rate how effectively their institution has engaged with student feedback. The inclusion of those student voice questions shows that the sector itself takes student representation and student interests seriously. Student engagement and partnership have long been part of quality management mechanisms as well as university governance practices. And now the sector has reflected that collective commitment to engagement with the student voice in its public accountability.
This year the student response to higher education policy has been particularly interesting. When the NUS decided to campaign against TEF and student fees, it looked for a route into the new policy environment. And so it boycotted the NSS to send a message to the government. In a few institutions, the student voice went silent, with the intention of disrupting some datasets for the subject-level TEF trials ahead. ‘Students at the Heart of the System’ cuts both ways. And while Jo Johnson quite rightly recognises that “the NSS remains an extremely valuable source of information”, he nonetheless diminishes the student voice it represents, precisely when the students’ representative body chose to use it to speak out against TEF.
Where did the other half go?
When the student interest weighting is halved, what replaces the disappearing half? And will it be relevant to students – prospective and current? We knew that weighted contact hours would be introduced, and we expected salary data through LEO. Jo Johnson also announced grade inflation metrics – not entirely unexpected either, but surprising nonetheless. He has moved from initially pressing for a grade point average to having (rapidly assembled) national degree classification standards, whose effective implementation will ultimately be measured through the grade inflation metric. Standards thus become a matter of policy control rather than an academic matter – a controversial move in itself, and one that will generate debate for years to come.
Policy imperative or student interests?
A positive aspect of TEF is the benchmarking of data not by institution, but according to student characteristics (including subject, age, background, etc.). Used wisely, such student-centred benchmarking can help show how different groups succeed across the sector and what kinds of educational practices deliver good outcomes for a diverse student population. In a sense, student-centred benchmarking has given a new, data-driven, cross-sector voice to particular groups of students. This has reinvigorated the debate on inclusivity and ultimately supports efforts towards greater equality for students with, say, protected characteristics. In itself, a great gain.
But the newly announced metrics are different. Neither the new grade inflation metric nor contact hours relate sensibly to student characteristics, whereas progression, graduate destinations and students’ views clearly do. The new metrics are no more than institutional performance statistics.
Their inclusion moves the teaching excellence debate further away from the learning by students whose education TEF is meant to judge. We can only hope that TEF submissions and TEF panel members continue their emphasis on actual teaching excellence and the student learning experience.
‘Halving the NSS weighting’ may have been a statement designed to pacify objections from within the sector, but underestimating the student voice can ultimately come at a cost. This government linked teaching excellence to fee levels as a policy lever in order to change the sector. So whenever TEF gets discussed, fees are also on the table and students will be interested. And the last election showed us how politically relevant the student vote can be.