The summary panel statements, provider submissions, and student submissions are out for the majority of providers involved in the latest round of the Teaching Excellence Framework.
What are those? Well, at least half of the value of each provider's TEF award and sub-awards was calculated using qualitative methods – based on documentation submitted by someone in the university, and by some kind of student representative body (in most cases a students' union). What comes out the other end is a panel statement, which sets out what the panel made of it all in a more eloquent way than the medal table.
There are shedloads of these things out, so there’s no way any human is going to read all of them (stay tuned for non-human attempts!). What we’re going to do here is set out the features of a few representative panel reports (chosen largely at random, but we’ve largely gone for non-traditional providers rather than universities because that’s most of the sector). What, in other words, does “gold-gold-gold” (or “bronze, bronze, requires improvement”) feel like?
Take a bow – the Royal Academy of Dramatic Art got a “triple-gold”.
Right at the top we (the reader) get the magic words:
“Typically, the experience students have at the Royal Academy of Dramatic Art and the outcomes it leads to are outstanding.”
Thereafter comes a list of outstanding features in each of student experience (4 points) and student outcomes (5 points). It seems that merely “very high quality” isn’t as good as “outstanding”; there are points here again under experience (2) and outcomes (1).
It is notable that these are not in general couched in terms of what the metrics say, but rather in terms of what is (likely) noted in the submissions. Certainly, there are three points relating to educational gains under outcomes, whereas experience includes praise for the experience of the teachers and high contact hours.
Beyond this there are brief descriptions of TEF and of the provider, before we get to the more detailed narrative comments, starting with student experience. Here’s the first notable aspect: “Gold-Gold-Gold” does not mean a provider is perfect, so there’s a note that:
though there is insufficient evidence that staff professional development and academic practice is a very high quality or outstanding quality feature, the panel did not find any features to be of concern.
Which is nice. The text proceeds through the aspects within experience and then outcomes, noting points of interest along the way. There’s evidence of contextual judgement – progression rates are low because theatres were closed during lockdowns, and because a two-year foundation degree structure means graduates complete the Graduate Outcomes Survey only three months after completing a top-up year. This latter is directly taken from the submission, upon which the panel placed “considerable weight”.
If you are wondering how “educational gains” worked, this is a good panel judgement to read – the whole thing is “outstanding”, apparently. Reading what the panel says, there’s clearly a link to good course planning, employment-focused aspects, embodied learning, and tailored support.
The end is curious – though the “best fit” language is consistent in this sample.
The panel considered the overall ‘best fit’ rating to be ‘Gold’, based on the student outcomes and student experience aspect ratings.
Silver-Silver-Silver: Regent’s University, London
A round of applause for Regent’s University. If Gold is “outstanding” then Silver is “very high quality”, so these are words that feature a lot in this document. Here, there are six “very high quality” features in student experience, and two very high quality (and three “outstanding”) features in student outcomes.
Again these highlights are not based on the indicators – we get notes of commendation on everything from “real world challenges” as a learning focus to “fostering a sense of belonging”. What really shines through is the amount of attention paid to the submissions – clearly the big differentiator this time round.
Reading through the longer narrative, we find ourselves looking for where the panel offered hints on what would make for a Gold. Here’s one example, that shows how split metrics have been used within these judgements:
The panel noted that indicators showed below very high quality for students from disadvantaged backgrounds. This is a small group, but there is limited focus or mention of interventions for this group in the provider submission. Because of this, the panel did not rate the feature as outstanding but it noted some strong and convincing statements on teaching practice, even though it is too early to see the potential impact.
The panel noted that, aside from the industry partnership and some innovation, there was limited evidence of course content and delivery that inspires students to actively engage with and commit to their learning, skills and knowledge development.
Which just prompts questions as to how you could measure the linkage between curriculum design and inspiration.
Bronze-Bronze-Bronze: London Northeastern
London Northeastern used to be the New College of the Humanities, and is a “high quality” provider. Yes – “high quality” is the language used to indicate bronzeness.
There’s not a list of high quality features, but we do see a list of four “very high quality” aspects of student experience, alongside one “very high quality” and one “outstanding” aspect of student outcomes. One of the outcomes examples is about the way educational gains are articulated to students.
A new notable phrase…
there was not enough evidence to judge three features as very high quality.
…suggests that “high quality” is generally defined as a lack of “very high quality”, which is a curious linguistic construction. A lot of the language is about a limited spread of good practice, or good practice localised in a particular area – there’s no direct language that tells you about anything London Northeastern may, in the judgement of the panel, be getting wrong.
The other repeated term is “insufficient evidence”, again giving the impression that the submission rather than the provision may be in question.
We’ve also looked at several university providers where one of experience and outcomes was Gold and the other was Silver. That gave the panel the choice of Gold or Silver for the main award – so we’ve looked for reasons for going Silver, rather than Gold.
Across the sample, the big message is consistency. Frequently, the panel noted differences in the quality of provision across subject areas. There were also a number of comments arguing that some universities failed to provide compelling evidence that their approaches – particularly in student outcomes – were fully embedded and tailored to all student groups.
The panel in particular seems to have been down on “Experience Gold” and “Outcomes Silver” universities – and also seems to have been quite critical where great outcomes aren’t being experienced by underrepresented groups. Similarly, in the one university where outcomes were Gold and experience was Silver, the resulting Silver rating is put down to typicality.
Then we’ve had a look at where the two components differed but the panel tipped things upwards – and the reverse is pretty much true. Gold-rated universities apparently demonstrated that their high-quality features applied broadly across all student groups, particularly those from underrepresented backgrounds – and that their educational approaches were well embedded and tailored to meet the needs of different students.
There’s also clearly something going on around demonstrating strategy – statements that tipped up rather than down frequently discuss collaboration (for example with SUs or employers), rehearse making targeted interventions based on student feedback, and outline continual refinement and “upward” trajectories over the four-year period.
Requires improvement – outcomes
Inter-ED UK gets an overall Bronze, with the judgement:
Typically, the experience students have at Inter-ED UK Limited and the outcomes it leads to are high quality, and there are some very high quality features.
If you just read that – or noted three “very quality” (sic – we think this should be “very high quality”) aspects of the student experience, and two ”very high quality” aspects of student outcomes – you would be unaware that this provider apparently “requires improvement” on outcomes.
“Requires Improvement” is positioned as an absence of a TEF rating – “improvement is required” in order to obtain one. What improvement?
Reading down into the main narrative brings to light an “area of concern” – namely completion rates. As this is one of the things that can be seen in data before submission, Inter-ED UK made two mitigating arguments:
- Widening access has led to lower completion rates across the sector
- Students don’t progress where they “lack a sense of belonging, struggle to engage, and experience financial hardship”
The panel gave the second reason a little consideration – but placed limited weight on it as there was no evidence of the scale of the issue, or for support offered by the provider.
Requires improvement – experience
St Mellitus College Trust is another overall Bronze, but with “Requires improvement” for student experience. We get one “very high quality” feature for experience (on academic support and the learning environment), and one “very high quality” and one “outstanding” (continuation and completion) feature for student outcomes. Curiously, there’s no mention of a concern.
It is, in other words, “insufficient evidence” all the way down – again suggesting an issue with submission writing rather than providing higher education.
What can be learned from all this
It was always clear that this round of the TEF would be more qualitative and less quantitative than previous iterations. The upside of that is that there is no sign of very broad judgements being made with only limited contextual input; the downside is that TEF sails quite close to a competitive bidding round – albeit one that ends with no projects being awarded and no money shared out.
DK has run a fair few competitive bidding rounds in his time, and his preferred focus was on how “shovel-ready” a project was – how quickly it could get started, how much resource and expertise was already in place. Judgements on excellence are notoriously difficult to make in such situations – and in TEF there are no delivery measures to fall back on.
The other obvious point of comparison is the QAA Higher Education Review – both float at a similar level of abstraction, but the QAA approach seemed a little more focused on outcomes – in terms of what a provider needed to do. The TEF does sail remarkably close to marking an extended essay – with points made about a lack of evidence rather than a lack of quality.
These statements are for providers as much as anyone else, so it remains to be seen what use will be made of them. The public statements feel more like a transparency exercise than any meaningful attempt to engage applicants, advisors, or the wider public with the quality of higher education in England.