Learning TEF lessons

The release of the TEF lessons learned report, highlighted in Jo Johnson's UUK speech, means more changes to everyone's favourite measure of teaching excellence. Catherine Boyd and David Kernohan take a look at the implications.
Catherine is a former Executive Officer at Wonkhe.


David Kernohan is Deputy Editor of Wonkhe.

Today saw the release of the TEF: Lessons learned report from the Department for Education, alongside a speech from Jo Johnson at UUK’s conference. Those expecting in-depth analysis and data on the running of the TEF process will be disappointed. However, the changes that are being made could have a quite significant impact.

Furthermore, the changes announced today are to be implemented straight away, in TEF year 3. The TEF continues to progress at great speed, leaving the sector little time to prepare ahead of TEF year 3 submissions, which will be due sometime in the new year.

National Student Survey – 50% off

The big news for many will be the decision to cut the weighting of the National Student Survey-derived metrics in half. For many prestigious institutions, it was the NSS metrics that dragged them down to Bronze. This change will come too late to forestall the initial reputational damage of this year’s awards, but could lead to a rash of upward movement in TEF3. It also provides an incentive for providers with Bronze or Silver awards and historically poor NSS results to enter the TEF again next year, in the hope of upgrading their award.

The change will also open up questions of comparability between TEF2 and future iterations of the exercise. Furthermore, the partially successful NUS boycott of the NSS has affected TEF3 enough for the DfE to bring in mitigations. Institutions with non-reportable NSS metrics due to the boycott will either have a core metric created from scores aggregated across the three years, or have the metric omitted from their calculation altogether. Either way, the NSS metrics have been significantly downgraded for TEF3.

Up to the mark

As called for by the Russell Group and others, high and low absolute metric values will be indicated alongside performance against benchmarks, and will be used to inform the award-level hypothesis where there are no flags for significant deviation from the benchmark. This will be particularly useful where metric comparisons are not usable due to low numbers of students in a particular group, and should mean that split metrics have more of an effect on panel decisions. In particular, changes to the way data derived from part-time students is presented will increase the value and utility of splits in that area.

The process of benchmarking itself, drawn from key performance indicators, was explained less fully in TEF documentation than other aspects of the exercise. It is to be hoped that better explanations of the derivation and validity of benchmarks will also inform the work of the panel.

Grade expectations

Jo Johnson has announced the introduction of a new TEF metric that will “recognise providers who are genuinely tackling grade inflation, and hold to account those who are not”.

Until now, grades have not informed the TEF metrics, so this may seem a surprising addition for the next phase. However, Jo Johnson has always shown concern about grade inflation in the sector. When he first came into post as universities minister, he announced his view that Grade Point Average (GPA) was the solution. The first proposed version of TEF required HE providers to declare whether they used GPA as a prerequisite (although it would not influence TEF awards); this quickly disappeared in TEF year 2. UUK and GuildHE were also tasked with a project looking at degree classification algorithms – complementing the HEA’s external examiners contract from HEFCE – to produce guidance for the sector.

Alongside this new metric, the newly established OfS will publish annual data on the number of degrees awarded at each classification. This will allow it to challenge institutions with inflated grades – but prompts the question of how the data will be benchmarked. Do we currently have the ideal split between degree classifications? Is there an ideal split? As is often the case, what the metric is compared against will be hugely important.

It is unclear how this metric will work in an outcomes-based process. But the move is a continued effort to position TEF as the antithesis of league tables, which Jo Johnson argues encourage grade inflation by rewarding institutions for the number of high-class degrees they award.

Writing about league tables on Wonkhe, Paul Greatrix often likes to postulate a table of tables. TEF is a very new entrant to this market for arbitrarily ordered lists of universities, and the choice to include metrics on grade inflation is an attempt to mitigate the effects of metrics in other tables that incentivise a rise in first-class degrees. It is a differentiating move, and perhaps even a disruptive one. We already know that many league tables will not include TEF awards in their calculations.

LEO (nearly) joins the metric club

The sector has finally received confirmation that the LEO dataset will play a part in TEF year 3. This follows numerous experimental releases over the past year, covered extensively on Wonkhe. The data comes with plenty of caveats and often offers depressing results for institutions, particularly given great variation at a regional level and very visible issues linked to race and gender. Careful consideration will need to be given to how this data is benchmarked, as employment metrics have not been regionally benchmarked thus far.

LEO will be a supplementary metric alongside the existing graduate salary metrics from DLHE. This makes sense, as HESA’s new Graduate Outcomes survey, which will replace the DLHE, plans to incorporate LEO into its data by 2020. Until then, using LEO alongside DLHE allows the government to experiment with the salary outcomes data and get the method right first.

For those who have argued that salary outcomes are a crude or inappropriate measure of teaching excellence, this is bad news. Alongside the halving of the NSS weighting, it looks like the TEF is becoming more focused on student labour market outcomes than on teaching excellence.

What now? And why now?

The “key findings” section of the report tries to give the impression that everything is fine with the TEF – but these apparently minor changes may actually be quite major. Downgrading the metrics that brought in the actual opinions of students, and adding metrics focused on what students are assumed to care about, moves the TEF away from something done on behalf of applicants towards something done for reasons of policy implementation.

As with the contemporaneous announcement on senior staff pay, we see here reactive policy making – a response to a hostile media environment, rather than a continuation of Johnson’s initial desire to update and streamline sector regulation. When the Higher Education and Research Bill passed into law, we might have been forgiven for thinking that – for all the faults in the new legislation – we would at least see stability and an end to headline-chasing. After today, we wouldn’t be so sure.
