Alongside the results for individual institutions, HEFCE has produced a summary guide intended to give the TEF results some context and to ‘help’ members of the press interpret them.
Unfortunately, it seems that the TEF team has missed some important lessons over the last few months, and persisted with the fiction that the TEF results measure teaching excellence in some very specific ways.
This problem first became apparent when the TEF technical consultation response was published back in September. In order to explain the newly-introduced categories of Gold, Silver and Bronze (recall that they were originally ‘outstanding’, ‘excellent’ and ‘meets expectations’), the consultation response provided descriptors setting out how these judgements ought to be interpreted. But these descriptions were fictions, making claims about specific aspects of provision within an institution which – while probably relevant to teaching excellence – were simply not being measured as part of TEF.
The notes accompanying HEFCE’s TEF press release – repeated in its ‘short guide to TEF’ – will perpetuate confusion about the exercise’s scope and meaning. Take the section on “learning environment”, which claims that TEF measures teaching excellence including “the effectiveness of resources and activities (such as libraries, laboratories and work experience) which support learning and improve retention, progression and attainment.” But look at the metrics that actually underpin this area of “teaching excellence”:
- Non-continuation (HESA)
- Academic support (NSS scale 3). That’s questions 10-12 from the pre-2017 version of the survey:
  - I have received sufficient advice and support with my studies
  - I have been able to contact staff when I needed to
  - Good advice was available when I needed to make study choices
As was pointed out at the time of the technical consultation, the choice of NSS questions (and you can debate separately whether they’re meaningful) does not include the questions that directly ask students about learning resources: question 16 on library provision, question 17 on IT, and question 18 on specialist equipment.
After errors in the technical consultation response were pointed out on Wonkhe, Labour’s spokesman Gordon Marsden raised the matter with Jo Johnson during the Higher Education and Research Bill’s committee stage in the House of Commons last October. Yet, come June 2017, the TEF team has still failed to understand its own exercise, or to present accurate information about it.
The absence of meaningful data on learning resources in TEF may turn out to have a serious impact on institutions. Student services are standing ready to work on TEF, and academic publishers too have an interest in supporting institutions in this area. But until TEF actually measures students’ experiences of learning resources – which would be possible using the NSS – it’s ridiculous for HEFCE to claim that TEF judgements reflect these inputs.
Unpacking the HEFCE statement further, where in TEF is the claimed data on the impact of “work experience” on retention? And given that non-continuation is included, but measures of attainment are not, how can any link be drawn between resources and students’ academic outcomes? At best, this is just sloppy drafting. At worst, it’s a wilful lie.
There are lots of problems with TEF, but if those promoting the exercise can’t demonstrate even the most basic understanding of what it measures, it’s difficult to have any faith in it at all.
I seem to recall the TEF consultation response summary saying that the learning resources questions from the NSS would not be used because they were being reworded for the 2017 iteration – ignoring the fact that several of the questions that *were* used were also being reworded.