Jo Johnson recently announced that the subject level TEF pilot would be extended for an extra year, ready for implementation in TEF year 5 (2019/20). HEFCE will deliver the first year of pilots in 2017-18, before the transition to the Office for Students in 2018-19.
Little is known about how subject level TEF will work, though such are the challenges that it worryingly looks like it’ll be a rather large square peg being forced into a much smaller round hole. Previous iterations of subject level policy, such as QAA subject review, were so expensive, burdensome, and widely hated that they were ditched altogether. So how will these problems be avoided in a subject level TEF?
Subject to approval
Without sending you into a semantic stupor, I’m forced to begin by asking: what is a ‘subject’, and how do you define it? Alastair Robertson has argued that universities’ systems are set up for programmes rather than subjects, which allows for instances such as joint honours programmes. But could the Department for Education really mean ‘course’, sometimes a subcategory of a programme? The recent Higher Education Data and Information Improvement Programme (HEDIIP) New Landscapes project report outlined the variation in standard definitions of courses and programmes among the sector’s own various data collectors and users. As it happens, the term ‘subject’ isn’t that meaningful or transferable across the whole sector.
Grouping subjects to allow comparability is a difficult task. At present, they are grouped by JACS codes, which contain hundreds of subject codes that fit within a hierarchy, broadly grouping into 19 key subject areas at the top level. This categorising of subject areas allows for comparability, such as when browsing UCAS and Unistats. However, many have been unhappy with how subjects are grouped, and the implied hierarchy of JACS codes has led to inconsistent application of them. Consequently, there are accusations of gaming the system, as subjects could be coded on a tactical basis to hide poor provision, especially where it comes to NSS results.
This system is changing: JACS will be replaced by HECoS (Higher Education Classification of Subjects) in 2019. HECoS separates the JACS hierarchy from the coding system, instead offering a flat structure. The aim is to make it harder to sort subjects inconsistently and ensure greater transparency.
However, the transition is still ongoing. Subject codes from JACS do not map easily across to HECoS. HESA is currently developing a Common Aggregation Hierarchy (CAH) to ensure standard aggregation from one system to the other.
This new hierarchy, which is separate from the subject coding (HECoS), will likely inform the grouping of the TEF subject areas. Regardless, keeping everyone happy with subject groupings is an impossible task and will cause unavoidable outrage in many departments and institutions.
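For the technically minded, the design difference is easy to sketch: under HECoS the detailed subject codes are flat, and any grouping lives in a separate aggregation layer. The codes and group labels below are invented for illustration, not HESA’s actual frames:

```python
# Illustrative sketch only: a flat HECoS-style code list with a
# separate CAH-style aggregation layer. The codes and group names
# here are hypothetical; the real frames are published by HESA.

# Flat coding: each detailed code maps to one subject label, with no
# hierarchy baked into the code itself.
hecos_style = {
    "X00001": "English literature",
    "X00002": "Pure mathematics",
}

# The aggregation hierarchy sits separately, grouping detailed codes
# into broader areas for comparison (e.g. in a subject level TEF).
cah_style = {
    "X00001": "English studies",
    "X00002": "Mathematical sciences",
}

def reporting_group(code: str) -> str:
    """Return the broad aggregation group for a detailed subject code."""
    return cah_style[code]

print(reporting_group("X00001"))  # English studies
```

Because the grouping is a separate mapping rather than part of the code, changing how subjects are aggregated for TEF would not require re-coding the underlying records.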
As easy as one-two-three?
TEF, as currently designed, is still primarily a metrics-driven exercise. It also takes a significant amount of administration to deliver. At a subject level, this could become a mammoth task: for universities, the Department for Education, and HEFCE (later OfS). Furthermore, there are some easily foreseeable complications regarding the availability of data for 2020 and its application across different subject areas.
The current TEF benchmarks institutions based on their student profiles, such as high/medium/low tariff entry, socioeconomic background, and ethnicity, using weighted sector benchmarks devised by HESA. This means that different institutions have to meet different benchmarks depending on the makeup of their students. The system arguably ensures controls on extraneous variables, such as prior student ability, allowing for ‘relative’ comparability. However, others have argued that you should measure teaching excellence on the best ‘absolute’ outputs. They suggest that by creating weighted benchmarking, you mislead prospective students who think they are getting the best teaching when they are actually getting the best relative to a complex weighted system.
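The core of that benchmarking logic is a weighted average: a provider’s benchmark for a metric is the sector-wide rate for each student group, weighted by that provider’s own mix of students. A minimal illustration follows; the groups, rates, and intakes are invented, and HESA’s actual methodology is considerably more involved:

```python
# Sketch of weighted benchmarking. The student groups, sector rates,
# and provider intakes are invented for illustration; HESA's real
# benchmarking uses many more factors.

sector_rates = {          # sector-wide continuation rate per group
    "high_tariff": 0.95,
    "medium_tariff": 0.90,
    "low_tariff": 0.82,
}

def benchmark(student_mix: dict) -> float:
    """Weighted average of sector rates, weighted by the provider's mix."""
    return sum(sector_rates[g] * share for g, share in student_mix.items())

# Two providers with different intakes face different benchmarks.
selective = {"high_tariff": 0.8, "medium_tariff": 0.2, "low_tariff": 0.0}
recruiting = {"high_tariff": 0.1, "medium_tariff": 0.3, "low_tariff": 0.6}

print(round(benchmark(selective), 3))   # 0.94
print(round(benchmark(recruiting), 3))  # 0.857
```

This is what ‘relative’ comparability means in practice: the recruiting provider can be judged excellent against a benchmark of 85.7% continuation while the selective provider is judged against 94%.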
Whichever side of the debate you take, benchmarking creates an additional layer of complexity when applied to subject level TEF. Across the different metrics used, there are large differences among the various subject areas. This may not appear too problematic, as subject level TEF would compare results within subject areas across institutions rather than comparing different subjects to each other, i.e. it would compare English across different institutions, rather than compare English with Maths. However, the variation in metrics between subjects may force the government to set additional benchmarks across the various subject areas. This could mean that, within the same university, subject A could get a gold rating even though its non-continuation rates are higher than those of subject B, awarded bronze.
LEO and New DLHE
The announcement from Jo Johnson that the subject TEF pilot would be extended for another year, ready for implementation in TEF year 5 (2019/20) is good news for the employment metrics. HESA is currently in its final stage of consultation for the New DLHE model to be delivered in 2018, ready to publish the results in January 2020.
New DLHE will link to student identifier data, including DfE’s Longitudinal Education Outcomes (LEO) salary data. TEF might incorporate these salary metrics, something the government hinted at in the early consultations. Although New DLHE won’t be ready for the subject level TEF pilots, the LEO data will be released at subject level in the coming months (see the release for Law, already available). For certain subject areas, this may be considered very dangerous, particularly those where success is rarely measured by salary (e.g. creative arts).
There is significant variation in NSS results between different subjects in most universities, which is typically masked by overall scores. It is very rare for an institution to have good NSS results across all subject areas, although there are some exceptions, including Oxford and the Open University. In fact, there is more variation in NSS results between subjects within institutions than there is between institutions themselves. This is one of the main rationales for introducing a subject level TEF.
However, there are concerns about the robustness of NSS data at subject level, because survey completion rates might leave datasets too small to be reliable. There is an optimum balance between the number of categories (the number of subject areas) and the number of students in each. As you break results into more categories, you find greater variation, but the data become less reliable. Too many subject areas could lead to weaker datasets, but too few might not accurately reflect the actual subjects taught and could mislead students.
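The granularity-versus-reliability trade-off can be illustrated with the standard error of a survey proportion: the uncertainty around a satisfaction score grows as the cohort answering shrinks. The cohort sizes below are invented for illustration:

```python
# Illustrative only: the standard error of an NSS-style satisfaction
# proportion shrinks with respondent numbers, so slicing an
# institution into ever finer subject areas makes each score noisier.
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a sample proportion p with n respondents."""
    return math.sqrt(p * (1 - p) / n)

p = 0.85  # e.g. 85% satisfaction
for n in (2000, 200, 20):  # whole institution -> subject -> small course
    print(n, round(standard_error(p, n), 3))
```

At 2,000 respondents the score is precise to within about one percentage point; at 20, the uncertainty is around eight points, swamping most real differences between providers.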
Chris Husbands, the TEF panel chair, has gone to great lengths to emphasise that this year’s TEF is not a strictly metrics-driven exercise. Providers can currently submit a fifteen-page written submission as part of their TEF application, providing contextual information alongside their metrics. No doubt these took great effort to put together at an institutional level. So just imagine delivering – and assessing – one per subject area.
Clash of the medals
The overall stated aim of the TEF is to improve the information that can inform student choice, and thus improve quality in the system. Many of the consultation responses to the 2015 Green Paper recognised the value of subject level assessments for student choice and quality improvement.
However, the introduction of the Gold, Silver and Bronze awards since then creates the problem of ‘medal clash’. What will happen when a provider gets a higher institutional level TEF than a particular subject, and vice versa? Pity the marketing departments left with that conundrum, and the CMA which will have to arbitrate.
The government has outlined how it might aggregate subject level TEF outcomes to produce an institutional award. There is the potential for embarrassment here if, as a consequence, there was a change to providers’ initial institutional level TEF award.
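One plausible aggregation, offered purely as a sketch and not the government’s published method, would score each subject award and take a student-weighted average. The thresholds and numbers below are invented:

```python
# A purely hypothetical aggregation of subject awards into an
# institutional award: score the medals, weight by student numbers,
# and map the weighted mean back to a medal. The government's actual
# proposed method may differ substantially.

SCORES = {"Gold": 3, "Silver": 2, "Bronze": 1}

def institutional_award(subjects: list) -> str:
    """subjects: list of (award, student_count) pairs."""
    total = sum(n for _, n in subjects)
    mean = sum(SCORES[award] * n for award, n in subjects) / total
    if mean >= 2.5:
        return "Gold"
    if mean >= 1.5:
        return "Silver"
    return "Bronze"

# A large strong subject can mask a small weak one - the 'medal clash'.
print(institutional_award([("Gold", 900), ("Bronze", 100)]))  # Gold
```

Even under a scheme this simple, a provider could hold a Gold institutional award while a Bronze subject sits underneath it, which is precisely the clash described above.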
There are other unanswered questions about how the subject level TEF will work:
- With institutions aware of their weaker subjects, would they be allowed to volunteer particular areas?
- What would this mean for each subject area if too few institutions entered for a subject?
- How would the aggregated award work if subject TEF worked on a voluntary basis and they didn’t have all the data? Will it have to be mandatory?
Finally, there is the matter of fees. If institutional TEF ratings are linked to fees after all (the Lords amendment is far from secure), how would universities justify their Gold tuition fees for Bronze courses? Considering this matter last spring, the Commons BIS Select Committee stated in its report on the TEF and quality assessment that “there is a logic to tuition fees operating at a subject level in accordance with the relevant TEF score”.
All of this would only add to the administrative burden, not least for the Student Loans Company, and there would no doubt be fears about the side effects for widening participation and the size and shape of the loan book. Right now, differential fees at this level seem unthinkable and implausible, but they are not impossible.
The reputational and financial impact a subject level TEF could have on the sector is considerable. However, there are significant problems to be solved on the road ahead. In this context, Johnson’s extension of the subject level pilots is a very sensible and very necessary step.
Join Team Wonkhe and a host of expert speakers on June 8th in London to explore TEF and the future of teaching excellence. Reflect on the ups and downs of TEF’s journey so far, and look ahead to the next steps for quality and excellence in UK higher education. Sign up for The Incredible Machine: What next for TEF?