If you want the TL;DR version of the latest slew of TEF documents, it’s that TEF3 will be a lot like TEF2 in shape and outcomes, but with some tweaked metrics and the exciting new additions of supplementary metrics and a grade inflation measure. For those of you looking for more, read on for an update on what’s changed.
The amendments to the exercise which Jo Johnson announced at the beginning of September have been spun out into two hundred pages of policy documents. Inevitably, there are still plenty of holes in the policy, including the statement that “the results are generally perceived as credible and reflecting teaching excellence across the sector.” Let’s recall that the exercise does not measure teaching, and move on from there.
It’s clear that the Department for Education also realises that the name needed changing and the exercise will be called the ‘Teaching Excellence and Student Outcomes Framework.’ Disappointingly, the department still wants to use TEF as the acronym (rather than TEaSOF, or “tease-off”). It’s yet another missed opportunity to improve the exercise.
Here beginneth the lesson
From the lessons learned document [pdf], of which we had a taster in September, we have the rationale for halving the impact of NSS and changing the way in which the initial hypothesis of awards is made:
- A provider with positive flags (either + or ++) in core metrics that have a total value of 2.5 (after accounting for the weighting set out in 7.10 [The three core metrics based on the NSS have a weight of 0.5. The other three core metrics have a weight of 1.0.]) or more and no negative flags (either – or – – ) should be considered initially as Gold.
- A provider with negative flags in core metrics that have a total value of 1.5 or more should be considered initially as Bronze, regardless of the number of positive flags.
- All other providers, including those with no flags at all, should be considered initially as Silver.
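The three bullets above amount to a simple weighted decision rule, which can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the metric names are invented placeholders, and only the weights (0.5 for the three NSS-based core metrics, 1.0 for the rest) and thresholds (2.5 and 1.5) come from the specification.

```python
# Hypothetical sketch of the initial-hypothesis rule described above.
# Metric names are illustrative; weights follow para 7.10 of the spec:
# the three NSS-based core metrics weigh 0.5, the other three weigh 1.0.

NSS_METRICS = {"teaching", "assessment", "academic_support"}  # weight 0.5
OTHER_METRICS = {"continuation", "employment", "highly_skilled"}  # weight 1.0


def initial_hypothesis(flags):
    """flags maps metric name -> '+', '++', '-', '--' (or the metric is absent)."""
    def weight(metric):
        return 0.5 if metric in NSS_METRICS else 1.0

    positive = sum(weight(m) for m, f in flags.items() if f in ("+", "++"))
    negative = sum(weight(m) for m, f in flags.items() if f in ("-", "--"))

    if negative >= 1.5:
        return "Bronze"  # regardless of the number of positive flags
    if positive >= 2.5 and negative == 0:
        return "Gold"
    return "Silver"  # everything else, including no flags at all
```

Note the ordering: the Bronze test comes first because it overrides any positive flags, and Gold requires no negative flags at all, so a provider with a single negative flag can at best start at Silver.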
There’s also an explicit reference to NUS’s boycott of NSS:
“If a provider does not have reportable metrics for the 2017 National Student Survey and there is evidence of a boycott of the NSS by students at that provider, the provider shall be treated as if it had reportable metrics for that year for the purposes of eligibility and award duration.”
These measures, while retaining NSS within the exercise, diminish its role and the position of the student voice, as Gwen van der Velden has argued.
One thing that isn’t changing is the names of the award categories. This knotty question is one of many that have been left for the more formal review of TEF which is mandated in the HE and Research Act. The Independent Review is due in 2018-19, the results of which will inform the exercise from 2019-20. While some thought was given to changing the names, there was this gem on the steps being taken to explain the meaningless terms to baffled audiences worldwide:
“During TEF year Two we recognised that explaining TEF to an international audience would be a challenge, specifically to communicate the subtle message that TEF bronze shows teaching excellence – and builds upon very high national quality assurance thresholds throughout the UK. We have worked with stakeholders to try and mitigate this risk – e.g. through developing an international script – and will continue to do.”
A subtle message indeed.
Within the specification [pdf] of TEF3, we have plenty of detail about the changes made to the exercise. Highlights include the inclusion of LEO (graduates’ salary data) and the new grade inflation metric as trailed last month. Other changes include allowing providers with a majority of part-time provision to submit additional evidence to make their case, giving the Director of Fair Access (and Participation, as the role has been renamed) a role in eliminating ‘game playing’, and the power of referral to reevaluate a provider against the threshold quality level.
TEF assessors will know whether a provider is in the top or bottom decile of the metrics on absolute, rather than benchmarked, scores: “Where a metric is flagged, the flag will form the basis of determining the initial hypothesis. However, where a metric is not flagged, a high or low absolute value will be treated as, respectively, a positive or negative flag in that metric.” This is perhaps the most regressive change to TEF’s design, moving it further from the ‘added value’ approach based on benchmarked data towards something more akin to a traditional ranking system.
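The quoted rule is a straightforward fallback, and a short sketch makes the precedence clear. This is a hypothetical illustration only: the function name, decile thresholds, and values are invented, and the benchmarked flag always takes priority as the quotation describes.

```python
# Hypothetical sketch of the absolute-value fallback quoted above.
# Decile thresholds and values are illustrative, not from the spec.

def effective_flag(benchmark_flag, absolute_value, top_decile, bottom_decile):
    """Return the flag used when forming the initial hypothesis.

    A benchmarked flag, where present, always stands. Only for an
    unflagged metric is an absolute score in the top or bottom decile
    treated as a positive or negative flag respectively.
    """
    if benchmark_flag is not None:
        return benchmark_flag
    if absolute_value >= top_decile:
        return "+"
    if absolute_value <= bottom_decile:
        return "-"
    return None
```

The design point is that benchmarking is no longer the whole story: a provider whose benchmarked performance is unremarkable can still pick up positive or negative flags purely on raw scores.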
A new grade inflation measure will attempt to measure the “rigour and stretch” of a provider’s provision with institutions providing data on degree classifications over the years. They’ll have to make a case for why their numbers have changed to convince the assessors that any increase in firsts or upper seconds isn’t a result of rampant lowering of standards.
A true document for wonks, the analysis of final awards [pdf] is an in-depth analysis of how various external factors did – or did not – influence the awards, in order to learn lessons for TEF3. The major headline is that there was no statistically significant evidence that provider type, tariff level or student characteristics (ethnicity, gender or disability) are associated with particular award levels. Considering the role that benchmarking has played in TEF, this is good news for the architects of the exercise. When the initial year two results came out, Wonkhe undertook some analysis of the ‘London effect’, a topic which has been considered very carefully by DfE. Although a small difference in award allocations was found in the London/South East area, it wasn’t statistically significant. While there won’t be benchmarking by region in TEF3, panel members will have more information about institutions’ recruitment of local students.
- Just for the hard-core wonks, there’s a note on the lessons learned document which shows that it wasn’t updated following the announcements at Conservative Party conference, referring to the repayment threshold of student loans at £21,000.
- Despite some good advice on Wonkhe, there’s a repeat of an error in the description of providers, in which the quality of their learning resources is described without any metrics relating to these.
- And if you haven’t had enough of all this TEF lark then you can enjoy the analysis of TEF metrics [pdf].
HEFCE will publish its guidance for TEF applicants this month and a Benchmarking Review in November. HEFCE’s TEF baton will pass to OfS in April. Then we’ll all wait with bated breath for the results of TEF3 next year and the Independent Review.
You can download the latest documents here.