This article is more than 6 years old

TEF results – The full core metrics results

We can now present a table of the full TEF core metric results. Click through for some data-led fun.

We have frequently argued that the Gold, Silver, and Bronze labels for TEF results are devoid of much real meaning. They are not sufficiently graded to help students make choices, nor to encourage a culture of continuous improvement (particularly in those institutions that obtain Gold).

So there is some insight to be gained from looking at a full list of the core metric scores. This shows us the extent to which institutions deviated from their benchmark scores on each of the six core metrics (their Z-score). These are:

  • Teaching on my course (NSS)
  • Assessment and feedback (NSS)
  • Academic support (NSS)
  • Non-continuation (HESA and ILR data)
  • Employment or further study (DLHE)
  • Highly skilled employment or further study (DLHE)

As well as the headline core metrics there are ‘splits’, which look at variations in each of the core areas by (amongst others) gender, ethnicity, age and disability. The aim of the split metrics is to establish how students from different backgrounds fare on the various measures relative to their peers. We’ll have more analysis of the splits to come.

The combined Z-scores do not reflect how the panel made its initial judgements, which instead used a ‘flagging system’ to highlight significant deviations from benchmarks. Nonetheless, the summed score gives a good idea of which institutions perform, on the whole, significantly above or below their benchmark. This generally fits alongside their ultimate TEF outcome, but as you scroll through the list you’ll see some divergence.

This table uses what we think are the most common assumptions. We’ve taken values from the core metrics for each institution’s majority mode of delivery, and we have not looked at ‘provisional awards’ at all (simply because no data is available for them). In many cases HEFCE have omitted or rounded data for data protection purposes; we’ve treated these omitted values as zero in the Z-score sums, on the assumption that such low sample sizes would not be reliable anyway.
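For the curious, the summing approach can be sketched roughly as follows. This is an illustrative sketch only, not the actual calculation behind the table: the metric names mirror the six core metrics above, but the institution data shown is entirely made up, and the real figures come from HEFCE’s published workbooks.

```python
# Illustrative sketch: sum per-metric Z-scores for an institution,
# treating values suppressed by HEFCE (absent or None) as zero.
# All figures below are invented for demonstration purposes.

METRICS = [
    "Teaching on my course",
    "Assessment and feedback",
    "Academic support",
    "Non-continuation",
    "Employment or further study",
    "Highly skilled employment or further study",
]

def combined_z(scores: dict) -> float:
    """Sum an institution's Z-scores across the six core metrics.

    Suppressed metrics (missing or None) count as zero, on the
    assumption that a low sample size would be unreliable anyway.
    """
    return sum(scores.get(metric) or 0.0 for metric in METRICS)

# Hypothetical institution with one suppressed metric
example = {
    "Teaching on my course": 1.2,
    "Assessment and feedback": -0.4,
    "Academic support": 0.9,
    "Non-continuation": 2.1,
    "Employment or further study": None,  # suppressed by HEFCE
    "Highly skilled employment or further study": 0.5,
}
print(round(combined_z(example), 1))  # 4.3
```

A positive sum indicates performance above benchmark overall, a negative sum below it; note that zeroing suppressed metrics slightly compresses the totals of smaller providers.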

So what we present is not a true ‘panel-less TEF’, but it is a close approximation of one. It also helps show where some institutions may have fallen significantly short of their benchmarks, and where they need to put in work to improve.

Data notes: Blank entries are for metrics that HEFCE did not publish due to insufficient data.