
Should we finally admit that TEF doesn’t inform students?

David Morris argues that the TEF could work - if we let go of the idea that it should inform student choice.

David Morris is the Vice Chancellor's policy adviser at the University of Greenwich and former Deputy Editor of Wonkhe. He writes in a personal capacity.

I was fortunate enough to attend one of the consultation events for the ongoing independent review of the Teaching Excellence and Student Outcomes Framework (TEF), led by Shirley Pearce.

With a great deal of policy and legislative attention focused somewhere between Dundalk and Derry right now, one could almost be forgiven for forgetting that something so mundane and niche as TEF was causing the government its greatest legislative headaches only two years ago.

With the final plans for the first subject-level exercise being made on the basis of two years of pilots, now seems as good a time as any to step back and take a broader look at the purpose and principles behind TEF. Such a wide lens appears to be in Dame Shirley’s brief – as she calmly explained at last week’s briefing, the review is very much focused on the “dichotomies and tensions” surrounding TEF policy making. These include the balance between subject-level versus provider-level assessment, qualitative versus quantitative methods, and the promotion of consistency versus the recognition of diversity within the sector.

Nonetheless, Dame Shirley – a former vice chancellor at Loughborough, and now chair of LSE – was also sure to remind the assembled university representatives that few government regulators are so benevolent as to ask providers for their views on their own regulation. Though this review came about at the sector’s behest, in protest at TEF’s introduction, there seems little chance – on the basis of these sessions at least – that it will lead to the outright abolition of the exercise.

Still, the discussions were sufficiently stimulating to get the grey matter working. At Greenwich we have just begun in earnest our preparations for the first subject-level TEF, whatever particular form it might take, and are beginning to get a sense of the real impact (both positive and negative) that it might have on the institution. With that in mind, here are my own reflections on where Dame Shirley and her team could have the most impact in making TEF better.

Be clear about purpose

Part of the government’s problem in persuading the sector, students, and the wider public of the need for TEF has been its insistence that it is about enabling better student choice. This is clearly complete tosh, as is being borne out by early data on students’ general unawareness of, and indifference to, an institution’s TEF rating.

Long-time readers of Wonkhe may well remember that the real genesis of TEF (and indeed of the entire new regulatory regime) came as much from government officials’ belief that universities were held insufficiently accountable for teaching quality under the old quality assurance regime, particularly compared to research, as from any Tory ideologue’s insistence on creating a market for student choice.

Indeed, TEF would stand a far better chance of working well if the government and the OfS were more honest about this purpose, rather than dressing it up as a benevolent attempt to “empower” student consumers. Students have shown indifference, and indeed outright opposition (such as the NSS boycott), to such efforts.

Greater honesty about TEF’s role in asserting the public as well as the student interest in university accountability would also better reflect what we have finally acknowledged about higher education funding: ultimately, the taxpayer is footing most of the bill. Acknowledging this fact, as well as the wider limits of marketisation, could lead to an accountability exercise with greater scope for nuance and recognition of diversity, and one more conducive to actually making teaching and learning better.

Don’t throw the benchmarking out with the bathwater

It is easy to slate TEF, and indeed many have taken the time to do so, because the exercise presents many problems, contradictions, and strange outcomes. But we shouldn’t overlook the instances where TEF has pointed us in the direction of a more progressive and fairer assessment of the state of the UK university sector.

This is most notable in the benchmarking of TEF metrics, by far the biggest leap forward in assessing UK universities’ quality of student experience on their actual merits rather than on irrelevant and archaic qualities such as ancientness, research power, or international prestige. Benchmarking is what distinguishes TEF from the traditional media league tables, by acknowledging that different institutions’ student characteristics give them different starting points from which to be evaluated.
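To make the principle concrete, here is a minimal sketch of how a benchmark-style comparison works. It assumes a simplified two-characteristic model with made-up numbers; real TEF benchmarks draw on many more student characteristics and on actual sector data.

```python
# Illustrative only: hypothetical sector-wide continuation rates
# for each student group (age band x entry tariff).
sector_rates = {
    ("young", "high_tariff"): 0.95,
    ("young", "low_tariff"): 0.88,
    ("mature", "high_tariff"): 0.90,
    ("mature", "low_tariff"): 0.82,
}

# One provider's (hypothetical) student mix, with proportions summing
# to 1, and its actual continuation rate.
provider_mix = {
    ("young", "high_tariff"): 0.10,
    ("young", "low_tariff"): 0.40,
    ("mature", "high_tariff"): 0.10,
    ("mature", "low_tariff"): 0.40,
}
actual_rate = 0.87

# The benchmark is the rate the sector as a whole would achieve with
# this provider's student mix, so performance is judged against a
# like-for-like expectation rather than a raw league-table average.
benchmark = sum(provider_mix[g] * sector_rates[g] for g in provider_mix)
print(f"benchmark: {benchmark:.3f}, difference: {actual_rate - benchmark:+.3f}")
```

On these invented numbers the provider sits fractionally above its benchmark, even though its raw rate would look mediocre in a conventional league table: that is the point of benchmarking.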

I really hope that the Pearce Review does not abandon this approach. If TEF abandons benchmarking and moves in a more qualitative direction, the spectre of the early-nineties teaching quality assessments might begin to emerge, with judgements on the quality of teaching made almost in lockstep with perceptions of prestige and research quality. This would be a huge step backwards.

The biggest criticism of benchmarking – heard primarily from the higher echelons of the Russell Group – is that it provides confusing information for students. But were TEF to abandon its aim of enabling consumer choices (as explained above), this would become much less of a concern.

Get rid of LEO

Regular readers of Wonkhe will know that I am far from a LEO cynic. Indeed, I am genuinely enthused about the potential of richer data on graduate employment outcomes to support better policy making in higher and further education and in the youth labour market, and to aid efforts to make society more just.

But beyond the ideological objections (which are well documented elsewhere), on a practical level TEF is not the right place for the DfE to play with its sparkly new toy. The piloted inclusion of two new supplementary LEO metrics in TEF appears to have produced bizarre results. On a brief examination of the national data, the spread of benchmarked outcomes across providers appears to be very narrow, with few providers securing either a positive or a negative flag. Under the current flagging system, if a new TEF metric does not show a sufficient spread of performance, it is hard to see how it will aid panel decision making or provide much value.
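To illustrate why a narrow spread produces few flags, here is a minimal sketch of the kind of flagging rule described above. The thresholds are my own illustrative assumptions, not the published TEF specification: a flag requires both a material difference from benchmark and statistical significance.

```python
def flag(difference_pp: float, z_score: float,
         diff_threshold: float = 2.0, z_threshold: float = 1.96) -> str:
    """Return '+', '-' or '' for one metric's difference from benchmark.

    difference_pp: provider value minus benchmark, in percentage points.
    z_score: significance of that difference.
    Both thresholds are illustrative assumptions, not TEF's actual values.
    """
    if abs(difference_pp) >= diff_threshold and abs(z_score) >= z_threshold:
        return "+" if difference_pp > 0 else "-"
    return ""

# When benchmarked outcomes cluster tightly, almost no provider clears
# the difference threshold, so most metrics end up unflagged:
print(repr(flag(0.4, 1.1)))   # '' - typical of a tightly clustered metric
print(repr(flag(3.1, 2.5)))   # '+' - a rare positive flag
```

If nearly every provider's LEO difference from benchmark is a fraction of a percentage point, the rule above returns an empty flag almost everywhere, which is precisely the narrow-spread problem the pilot data seems to show.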

Then there is the lag effect of LEO’s inclusion in TEF. If TEF 2020-21 goes ahead as planned, it will include assessment of the graduate employment and salary outcomes of students who entered university in 2008 (ie my own fresher year), based on those graduates’ employment outcomes in the 2014-15 tax year. This seems nonsensical, both for fairly assessing institutional performance and for providing information to applicants. Most providers, I am sure, will struggle to provide adequate context for data that is so dated, and panels will no doubt have to allow for that. In that sense, the inclusion of LEO in TEF feels like a real waste of everyone’s time, including that of the DfE analysts who might be able to put this new dataset to far more purposeful use.

Get past the niche subject

I was initially favourable towards the adoption of “Model B” as the basis for subject aggregations in TEF. Some form of subject-level exercise seems critical to ensuring that TEF is based around a relevant unit of assessment: everyone who works in a university knows how much good teaching and the student experience depend on subject, departmental, or even programme-level factors, which vary enormously within institutions. A by-exception approach, as piloted last year, also seems insufficient.

However, the reality of the current CAH level 2 assessments means that a huge number of providers will undergo the arduous process of preparing subject submissions for tiny subject areas covering few students, programmes, or staff. In many cases, such “subjects” will be incoherently aggregated (or disaggregated) in relation to the student experience, areas of research expertise, or administration.

This is partly a result of TEF’s overriding need for cross-comparability, in order to function as useful market information. If that aim were abandoned, there would be greater flexibility to aggregate subjects more sensibly, in reasonable sizes that reflect the different ways in which universities are organised.

I know that DfE officials have been scratching their heads over this problem for several years now, and I don’t have an answer for them. But a widening of scope and a renewed focus on enhancement rather than marketisation may give them the leeway needed to find a reasonable solution to the subject-aggregation conundrum. Right now, we do not have one.

Look to the future

It is easy to forget that TEF has only been with us for four years. One of the reasons for that has been the sheer pace of development and revision: as soon as academics and wonks have got to grips with one methodology, DfE has proposed a new one. This is understandable in the early days of TEF’s development, but will become less so if and when the exercise becomes more established.

In the long run, a clear, transparent, and fair decision-making framework will be needed to oversee TEF’s continued evolution and to agree any changes. Indeed, concerns about the lack of such a framework were one of the main reasons for parliamentary opposition to TEF in the first place. An independent review every few years seems silly, but we only have to look at the recent controversy over the new REF guidance to see what happens when stakeholders within higher education believe that decisions have been made unfairly or in a particular group’s interest.

It would be great if the Pearce Review could get out ahead of this issue and recommend a means by which a fair and impartial guardian of TEF could oversee future developments and changes, balancing the interests of government, providers, students, staff, and the wider public.

One response to “Should we finally admit that TEF doesn’t inform students?”

  1. Really interesting and honest article, David, thank you.

    To add to the section “Be clear about purpose”, I would note that not a single core metric in TEF actually looks at quality (surely a major oversight). The NSS is a measure of satisfaction, not quality; and, as we know, students can often fold experiences into their feedback that are beyond a university’s control.

    The continuation rate metric is equally problematic and does not accurately reflect the broad range of reasons students may have for not continuing their studies (often these are welfare or personal issues, not curriculum-based).

    The employability metrics are also flawed for use in TEF. There are neither regional weightings in place nor recent mappings of what constitutes graduate employment. Large city universities benefit from higher regional earnings, such as through the inclusion of London weighting, which skews the metric towards the South East. This metric also puts the onus on universities to secure the best jobs for their students (and we should be encouraging students to apply for quality jobs), yet surely students have a responsibility for their own career choices, which may not be driven by income! So why is this a metric aligned to teaching quality?
