As we move through TEF3 and into the realm of subject-level TEF, it’s timely to revisit and reflect upon how we use the TEF. One suggestion from the Office for Students is that the TEF “…encourages providers to work with their students to identify, pursue and maintain excellence.”
So, taking the standard QAA enhancement definition as using evidence to inform deliberate steps to improve the student learning experience, the suggestion is that the TEF has the potential to be used as an enhancement tool, given that the act of pursuing excellence implies a desire to achieve continuous improvement. But what are the possibilities and pitfalls of using TEF in this way?
Can TEF enhance teaching practice?
There is no doubt that the TEF provides institutions with a valuable quantitative data set, one which can be used as a starting point for evidence-based internal review, scrutiny and dialogue as part of an enhancement agenda. If used as a constructive conversation starter around performance, then some value can be seen in using the data as an enhancement tool. The principle of using benchmarked data within TEF is key here too: enhancement activity is then, in theory, conducted in the context of comparing “like with like” rather than against absolute measures.
The act of producing a provider written submission is an important part of the process, reflecting that TEF assessment decisions are not made on metrics alone, but represent a holistic judgement based upon TEF metrics and the provider written submission together. So, perhaps the very act of putting together the written submission also provides an opportunity for us to engage with an enhancement agenda. By reflecting upon TEF metric performance within the written submission, providers have an opportunity to outline the qualitative evidence base in relation to enhancement, evaluation and impact, within the context of their own overall institutional strategic approach to improving the student experience.
On the face of it, perhaps we do have a valuable enhancement tool in the form of TEF. However, I would suggest that by using TEF in this way we face a number of challenges, potential pitfalls and unintended consequences.
First, using the data as an evidence base to inform enhancement work presents challenges of interpretation and application, because each of the data sets relating to NSS metrics, student continuation rates, and employability reflects a different student cohort and, as we know, is historical. They give a snapshot of “what has been” for different student cohorts rather than what is happening now for a clearly identified group of students, suggesting an element of the horse having bolted in relation to the data being useful for enhancement purposes.
Second, and a particular problem for small higher education providers, the data can sometimes be unreportable due to low student cohort numbers, severely diminishing the utility of such a data set.
Third, the introduction of “absolute” measures during TEF3 has, in my view, undermined the possible use of TEF as an enhancement tool. Absolute measures detract from the key principle of TEF as a benchmarking exercise: the introduction of high and low absolute values into TEF metric workbooks indicates an element of benchmarking “creep” which does nothing to instil confidence in TEF for enhancement purposes.
Fourth, the introduction of grade inflation metrics during TEF3 is of questionable value. Such a metric does not consider the contexts within which providers are operating. Providers have robust and detailed mechanisms for ensuring fair and equitable assessment of student work, including the use of external examiners to calibrate sector-wide, a system that contributes positively to the enhancement agenda and to which the grade inflation metric adds little value.
Does subject TEF support student choice?
The possible introduction of subject-level TEF presents further pitfalls in relation to using TEF as an enhancement tool. For example, the recent consultation document suggests using level two of the Common Aggregation Hierarchy as the classification system (CAH2, with 35 subjects). This is deeply problematic.
At my university, categorising subjects on the basis of CAH2 would not allow us to represent our provision well, especially where subject areas contain multiple and distinctive courses with different levels of exit award and differing modes of delivery. In addition, I suspect that subject groupings are not actually helpful to students, who are making decisions about courses, not subject areas.
For enhancement purposes, CAH2 does not allow for the isolation of evidence pertinent to individual courses and so is rendered almost useless. How do we know what to target for enhancement if it is buried within a broad subject grouping?
The consultation asks for views around the introduction of a measure of teaching intensity. In my view, the proposed measure has no meaning and no connection to excellence, value or quality, let alone enhancement. There is the potential for the information to be misleading, as it will need specialist and careful interpretation.
TEF in Wales
Perhaps the pitfalls outweigh the possibilities in relation to use of TEF as an enhancement tool and this is one reason why my own institution and other higher education providers within Wales may well be reflecting upon whether it is worth participating in TEF at all. Unlike in England, where providers which register with the Office for Students will have to take part in TEF, participation is voluntary in Wales, as it is in Northern Ireland and Scotland.
Subject TEF has the potential not to do justice to programme provision; it is not clear whether it will really help prospective students; it will consume valuable time and resources; and it does not really add value to the enhancement toolkit. There is a real possibility that Welsh universities will not voluntarily participate going forwards. Of course, this has the potential to exacerbate divergence, undermining the coherence of a UK-wide higher education offer, with implications for public perception of quality. This is a direct result of the DfE taking a consistently England-centric approach to TEF, which has skewed the focus of policy implementation away from the devolved nations and their specific contexts with regard to quality assurance and enhancement in particular.
We have to grapple with the challenges of how to evaluate success and impact and of course we are all working hard to ensure continuous enhancement in relation to student learning opportunities, but for the reasons I’ve outlined the TEF is beginning to look like an extremely blunt instrument in this respect. Proceed with caution, if indeed you choose to proceed at all.