Today the government released its response to the TEF Technical Consultation and so we finally have more detail about how the TEF will work in its second year, although the plans raise some fresh questions about the exercise.
Let’s first recall that the Teaching Excellence Framework is misnamed: it is not an evaluation of teaching as one might define ‘teaching’ in a university, or in normal conversation for that matter. The breadth of the exercise’s metrics – covering employability and equality of opportunity, for example – shows that TEF is really a broader ‘student experience’ framework, or a ‘non-research’ evaluation.
And that’s just the start of where things get complicated. Providers (check here to see if you made the list) will send in their submissions to the exercise in January 2017 and results will be published in May. The idea is to influence prospective students’ decisions about entry for 2018.
All eligible providers – those in good standing with QAA and the funding councils, i.e. meeting the prevailing quality threshold – will be allowed to charge fees uplifted in line with inflation for 2018 if they participate in TEF. They also need an Access Agreement or an equivalent statement, and valid data. Entering TEF is optional, and providers that sit out this round can have a go in a future year. So far, so clear.
What’s the fuss about?
Let’s start at the top. The original TEF proposals said there would be three grades of outcome judgement: Meets Expectations, Excellent and Outstanding. These will be replaced by Bronze, Silver and Gold. So that’s much clearer then.
In the government’s response to the consultation, it is rightly noted that the original judgements were hopelessly indistinguishable: can you (or a prospective student) discern a meaningful difference between Excellent and Outstanding? Perhaps it was the influence of Rio 2016, or just a pithy nugget from a respondent, that swayed DfE towards the new categorisation. The document quotes the UCL response, which draws a parallel with the Athena Swan scheme. That scheme is widely recognised within the HE sector, but do its judgements mean anything beyond universities?
Perhaps this is a case of grade deflation (not the usual complaint in education). Passing one’s QAA review was supposed to be something of a ‘gold standard’ in international higher education. Yet merely meeting that gold standard will now earn only a lowly Bronze award – the minimum that virtually every eligible provider will receive by default.
The medal system might satisfy an ‘all must have prizes’ mentality, but it risks forcing what is actually a fine-grained judgement into three uneasy buckets: will the sector really bear the idea that more than half of its ‘excellence’ is merely Silver? That’s the anticipated distribution, with 20% Bronze, 50-60% Silver and 20-30% Gold.
Metrics mania
Alongside the consultation response, DfE has released a review from the Office for National Statistics of the data underpinning the metrics element of TEF. The DfE response includes an update on the recommendations made in the ONS report. The ONS report is a little dry but makes interesting reading; most helpfully, if you just want a summary of the role of metrics in TEF, here it is:
“The TEF metrics will be comparing institutions’ performance to benchmarks that account for some of the characteristics of their student intake. Analysis will aim to show whether institutions are significantly above or below their benchmark and [these] comparisons will be used in TEF assessments. TEF judgements will be based on the benchmarked metrics and qualitative provider submissions.”
In refining the exercise, DfE has agreed a small number of changes to the metrics used in TEF, including to the way highly-skilled employment is benchmarked and to how the top and bottom of the distributions are flagged.
In its analysis of the factors affecting highly-skilled employment, DfE has produced a document which makes for interesting reading. It concludes, amongst other things, that highly-skilled employment correlates with the age of an institution and its REF score – but there is no evidence that these are determinants of excellent teaching. Usefully, it also points out that providers should only be judged on factors within their control. Expect more to be said on this matter.
You’re kidding, right?
Concerningly, the descriptions of the medal ratings include statements which bear no relation to the underlying data that informs the award. Take Silver, for example: “High quality physical and digital resources are used by students to enhance learning.” The proposed metrics don’t even include the ‘learning resources’ section of the National Student Survey, so how can TEF possibly differentiate between providers on the learning resources they offer? It is a post-hoc inference to claim that because students have done well, they must have had the right inputs.
Where’s the challenge?
In responding to the publication, the sector mostly reiterated the usual lines about the international reputation of UK HE. Universities UK said, rightly, that the challenge is developing a framework to suit everyone. University Alliance was not convinced by the detail of some of the metrics, and the Russell Group laboured the point that TEF is in a pilot phase and needs time to reach maturity. There are many valid, but competing, definitions of excellence out there, and the exercise should recognise them: not everyone is playing out a real-life version of Brideshead Revisited.
Is there any good news?
It isn’t all bad: DfE has improved the TEF by adding an appeals process for cases of procedural irregularity, and by revising the list of evidence providers can submit to make their case for a prize.
Trial by fire
The TEF is ploughing ahead. The technical consultation response is littered with reassurances that ‘it’s only a trial year, honest’, and that may be enough to salvage the exercise. There is certainly real opportunity in TEF, but a clumsy ‘trial’ year could confirm all the doubters’ worst fears about the threats it poses.
They changed the names of the outcomes, but there are still gaping holes in the TEF. This may be a trial, but it could end up with DfE in the dock.