And the government says that it “helps prospective students to select the best provision for them.”
The idea here is that an assessment of the quality of provision sends a signal to students about the quality they will experience if they enrol on a programme at that provider.
OfS even has analysis (in both a blog and a dashboard) that claims things like “26 per cent of entrants aged under 21” are “accessing an outstanding experience and outcomes” – or “gold”, as it’s known here.
But some of those students may well be “experiencing” quite poor teaching, achieving poor outcomes, or having a poor wider experience. And plenty of students in non-gold providers will be having an excellent experience with outstanding outcomes.
There’s not really much wrong with an exercise that hands out awards to universities in this way. We all need a pick-me-up from time to time, and it’s nice for advertising purposes.
But its efficacy as a formal “consumer signalling” exercise? I’m not so sure.
- If I were to ask 100 students whether they realise that the outcomes component is benchmarked against what would be expected of providers with similar student bodies, I can pretty much guarantee that all 100 would say no. Benchmarking might make the exercise “fairer” from a “value added” point of view for providers, but students looking at the outcomes ratings will still feel cheated if and when they discover the “truth”. The same is true for experience.
- The core metrics consist of four years’ worth of data, so some of them date from 2018-19 and refer to experiences and outcomes derived from years before that. A lot could have changed since then – and almost certainly has.
- And because the awards last four years, a student might choose a provider in 2026 based on data from the mid-2010s, on the assumption that the performance will persist to 2030. That feels less than optimal.
- As well as assuming that the signal helps a student predict future quality, it only makes sense if it broadly represents the whole provider – that is, if all subjects, levels, modes and so on are reasonably consistent. But in a large provider, there’s pretty much no chance that’s true. See also various student characteristics.
- Provider size matters. Plenty of small providers have been awarded bronze. But if those providers were delivering exactly the same experience and outcomes as part of a huge university, it’s entirely possible that said university would have been given gold. That’s highly problematic – despite the dropping of subject-level TEF along the way.
- Somewhat surprisingly, providers were able to choose whether to include apprenticeship provision or validated provision. I doubt that prospective students on such programmes will be warned that the medal being flaunted at them in advertising might have ignored their course.
- Subcontracted-in students and transnational education students are not separated out as categories in data or ratings. That also raises questions about the way those providers advertise themselves in those contexts.
- And prospective postgraduate taught (PGT) students will also doubtless be told about the rating that their potential university has received, even though it says nothing about their level of study – with similar problems regarding advertising.
- Given where overall sector funding is, the prospect of most universities maintaining the quality of the experience over the next four years is low – but students won’t be told that or warned in any way – and the deterioration will differ by subject and provider type.
- If you’re throwing a 50th wedding anniversary party for a relative this weekend, you’ll want to signal your joy – but in some towns and cities you’ll struggle to procure balloons. All because of an arguably misleading consumer signalling exercise for your local university.