You’ll forgive us for being less than excited at the prospect of analysing the results of TEF4, now released by the Office for Students (OfS).
It’s partly because this year we’re only looking at the longer end of the provider tail – newer and smaller providers, along with those who have submitted for a resit. It’s partly because there’s a big review of the TEF underway, which could throw any or all of these results up in the air, depending on how bold Dame Shirley Pearce and her panel are prepared to be and how the Department for Education (DfE) and then OfS respond. It’s also partly because we saw the underpinning data back in January – which, lest we forget, revealed that no provider in the Russell Group would be getting a Gold if we’d only looked at Stage 1a of the magic algorithm process and ignored the contextual data.
How it “works”
Contextual data? Magic algorithm? For those that don’t follow this closely, a brief reminder of how all of this “works”. First, a bunch of metrics are obtained for each provider. These are weighted according to a particular formula, and performance is benchmarked. This first stage sees the calculation of what we call an initial hypothesis, based only on the core metrics for the dominant mode of provision (full- or part-time). Each of the core metrics is compared to a benchmark, and a significance flag is generated, indicating how meaningful the difference between the two is.
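For a sense of what that flagging step looks like in practice, here’s a minimal sketch in Python – the z-score test and the two/three percentage-point materiality thresholds here are our assumptions for illustration, not the published TEF specification:

```python
# A sketch of turning one core metric into a significance flag.
# The thresholds below are assumed for illustration only.

def significance_flag(indicator: float, benchmark: float, std_error: float) -> str:
    """Compare a provider's indicator to its benchmark and return a flag."""
    diff = indicator - benchmark   # percentage points above/below benchmark
    z = diff / std_error           # how many standard errors away
    if z >= 3.0 and diff >= 3.0:
        return "++"
    if z >= 1.96 and diff >= 2.0:
        return "+"
    if z <= -3.0 and diff <= -3.0:
        return "--"
    if z <= -1.96 and diff <= -2.0:
        return "-"
    return ""                      # no flag: not significantly different

# e.g. a continuation rate of 89% against a benchmark of 93%
print(significance_flag(89.0, 93.0, std_error=1.2))  # prints "--"
```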
Still with us? These flags are displayed as double positive (++) through to double negative (--). As my colleague David Kernohan always points out, the use of this gradation is frequently criticised as statistically problematic – a single significance flag in the sector key performance indicators is equivalent to a double flag for TEF – which makes the TEF single significance flag not particularly “significant”.
Single and double flags are treated the same for the purposes of calculating the hypothesis – but the six core metrics are not. The NSS-derived metrics – for teaching, academic support, and assessment/feedback – are weighted at half that of the other three.
So the formula goes as follows:
- Positive flags worth a total of 2.5 or more, and no negative flags – Gold
- Negative flags worth a total of 1.5 or more – Bronze
- Otherwise – Silver
This gets us to our initial hypothesis – Stage 1a of the TEF assessment process. Oh, and by the way – institutions with similar numbers of full- and part-time students have a 1a calculated from both modes of delivery.
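Put together, the Stage 1a arithmetic is simple enough to sketch in a few lines of Python. The metric names below are ours, but the half-weighting of the NSS-derived metrics and the 2.5/1.5 thresholds follow the formula above:

```python
# A sketch of the Stage 1a initial hypothesis, as described above: the three
# NSS-derived metrics carry half the weight of the other three, and single
# and double flags count the same.

NSS_METRICS = {"teaching", "academic_support", "assessment_feedback"}

def initial_hypothesis(flags: dict) -> str:
    """flags maps each core metric to '++', '+', '', '-' or '--'."""
    positive = negative = 0.0
    for metric, flag in flags.items():
        weight = 0.5 if metric in NSS_METRICS else 1.0
        if flag in ("+", "++"):
            positive += weight
        elif flag in ("-", "--"):
            negative += weight
    if positive >= 2.5 and negative == 0:
        return "Gold"
    if negative >= 1.5:
        return "Bronze"
    return "Silver"

flags = {
    "teaching": "+", "academic_support": "++", "assessment_feedback": "+",
    "continuation": "+", "employment": "+", "highly_skilled": "",
}
print(initial_hypothesis(flags))  # Gold: 0.5*3 + 1 + 1 = 3.5, no negatives
```

In that worked example, three half-weighted NSS positives plus two full-weighted positives total 3.5 with no negatives – comfortably a Gold hypothesis.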
Next, some other things happen. In some cases nothing – if the initial hypothesis is looking clear at this stage, it is unlikely the award will change. Otherwise we move on to Stage 1b, which tests the initial hypothesis against absolute values and the split metrics. And then Stage 2 brings in factors that could affect performance against the core and split metrics – the supplementary metrics (two graduate outcome salary metrics drawn from LEO data, and a grade inflation measure) and anything relevant in the provider statement.
Time travel
As well as all of the above, we should remind ourselves about the timelines. TEF awards have generally been good for three years, but providers have been free to re-enter voluntarily before their previous award expires. The intention to introduce subject TEF has thrown the three-year bit up in the air too – right now it means that there won’t be any awards in 2020, and then everyone will be in for 2021 – which means that awards made in 2017 will have (mainly) lasted four years, and those handed out today will only last two… resulting in OfS hosting a single set of award results that in fact contains multiple assessment points, multiple TEF methodologies and multiple tactics from providers!
This all creates some interesting lines of enquiry. First, are there any providers entering for the first time (or having to enter this year) whose results are better or worse than the raw metrics predicted back in January? That would indicate that the contextual data has had an impact, and it’s fascinating to try to work out how. Second, are there providers that didn’t have to re-enter this year but did, presumably in an attempt to bump up their medal – and if so, did that gamble pay off? And finally, are there providers mysteriously missing from the revised full dataset – and what might that indicate?
As predicted?
Let’s look first, then, at the newbies who’ve performed differently from what we might have predicted back in January when we got the dataset. Most match their Stage 1a metrics hypothesis. But there are some outliers.
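Mechanically, finding them is just a join between the January workbook and the new outcomes. A sketch of the sort of thing involved, assuming hypothetical file layouts and column names (neither matches the real OfS releases):

```python
# A sketch of the outlier hunt: compare our January Stage 1a hypotheses
# against the published TEF4 awards. File names and columns are hypothetical.
import pandas as pd

hypotheses = pd.read_csv("stage1a_hypotheses.csv")  # ukprn, provider, hypothesis
outcomes = pd.read_csv("tef4_outcomes.csv")         # ukprn, award

merged = hypotheses.merge(outcomes, on="ukprn")
outliers = merged[merged["hypothesis"] != merged["award"]]
print(outliers[["provider", "hypothesis", "award"]])
```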
We can’t find anyone that has outperformed their 1a hypothesis. There are, though, a clutch of providers who didn’t achieve their predicted grades. Bournemouth and Poole College, for example, should have been looking at a Silver but got a Bronze. The statement gives us clues as to why – the provider is “significantly below benchmark for full-time student continuation rates”, and the panel judged that this was “partially addressed” in the submission – code for “must try harder”.
Similarly, Regent’s University London ought to have been looking at a Silver but ended on Bronze – in its case “continuation rates are below the provider’s benchmark” and the panel “deemed this was partially addressed in the submission”. As in previous iterations, across the piece it looks like the panel has not taken kindly to submissions that merely justify below-par performance, but is more comfortable with warm words on addressing it.
Bank it?
Next let’s look at those taking the opportunity to try to gamble their existing award away for a better one. To be absolutely fair, that might not be the reason – the institutions in question might have decided to enter every year to keep themselves honest and their rating fresh. They might have thought “we could well go down, but we’ll learn things”. But you’ll forgive us for assuming that anyone with an award that wasn’t due to expire this year has been hoping for a bump up.
The major winners here were Staffordshire University and the University for the Creative Arts (both Silver to Gold), as well as the University of Roehampton and the University of Wales Trinity Saint David (both Bronze to Silver). The Stage 1a hypothesis suggested that UCA might stay Silver, but it’s clear from the provider submission that the panel was impressed with a range of interesting aspects – “outstanding levels of stretch”, “assessment methods involving self, peer and tutor reflection for formative and summative feedback”, and “an institutional strategy for employability embedded across all curricula” among the plaudits. A handful of those in the long(er) tail also went up – ALRA, Riverside College, Reaseheath College, RNN Group, Nelson College London Limited, and Leeds College of Music all improved their medal, although in some cases they were required to re-enter this year.
Some, on the other hand, will be wondering why they bothered. The University of Sheffield, the University of Central Lancashire, the University of Sussex, and Teesside University all held at Silver, and the University of East London and the University of Salford both held at Bronze – although in all six cases these results match the Stage 1a hypotheses.
We can’t find anyone that gambled and came off worse for it – although City of Liverpool College, the Trafford College Group, RTC Education and the University of Law all ended up down, having not been awarded the full three-year medal and being required to enter again this year. What’s fascinating about this group is that news of their Gold or Silver status was plastered all over their own websites and a bunch of others – and whilst OfS has very hastily amended its site (and Unistats) to record the new results, the range of other guides, news sites and Wikipedia pages may not be so quick – and the providers themselves may not be in much of a rush to correct.
Gone but not forgotten
One of the frustrating things about the way the data has been presented is that it erases any history. If you’re browsing the OfS site and looking at anyone that’s not brand new this year, you can’t see their previous provider statements or their previous result. And even more intriguingly, there appear to be some (we think four) previous TEF award holders that no longer hold them, and on the OfS site it’s like they never did. The data from previous exercises is available – but you’d have to be something of a supersleuth to find it. And it’s hard to spot those who have dropped out.
We think Pearson College, for example, went from a “provisional” award in TEF2 to a proud “Silver” in TEF3, but it is now missing from the main TEF outcomes page and the OfS register lists its TEF award as “none”. We had guessed that this was because registration condition B6 only requires providers to participate in TEF if there are more than 500 students on higher education courses – and thought Pearson may have dropped below that threshold. But it turns out that B6 isn’t in force yet, and TEF4 was still voluntary. It would help if OfS clearly displayed the history of a provider’s ratings, their entry into the process, and who no longer holds an award. In fact, ideally OfS would show all the data it holds on any provider in a single place, rather than making us hunt through multiple Excel sheets and PDFs.
What’s next
While we don’t want to detract from those institutions buying up their local greetings card shop’s stock of gold or silver balloons, you’ll forgive us for not getting too excited about all of this. Even if we set aside the timeline complexities and the breadth of critique that has been tumbling into the independent review, TEF is strange precisely because its provider-level focus makes both perfect sense and no sense at all.
What do we mean? Some of those holding a TEF Bronze award have fewer students in total than some Gold holders’ average business studies lectures. The “level playing field” does indeed treat all vice chancellors (or equivalent) as equal, but from a signalling perspective the exercise could give applicants radically different results for identical course, cohort or campus performance, depending on the overall size of the provider – and it means that a student experiencing poor provision is more likely to see it fixed the smaller the provider they’re in. That’s the problem that subject TEF was supposed to fix – yet many are speculating (and some even predicting) that the review will kill it off. It may well happen, but from a student perspective perhaps the only thing worse than eating sausages is knowing what is in them.
And by the way, as I’ve said before: if your position is that TEF is a bit arbitrary, that awards aren’t comparable across the sector because of the benchmarking, that the signalling is faulty, and that a bizarre algorithm that few understand ends up sorting institutions into four overly simplistic categories – now you know how students feel about the UK degree classification system.
Highbury College used to be TEF Bronze, but now has no rating – the published HESES/HEIFES data for 2018-19 shows that it has 165 students, so it’s not in breach of condition B6. Pearson and DN Colleges were both Silver but now have no TEF rating, even though Pearson has 1,055 students and DN Colleges 2,115.