When the Teaching Excellence Framework was first conceived it had multiple lofty goals – raising the profile of learning and teaching and incentivising its enhancement, recognising excellence, and informing student choice.
Former vice chancellor Shirley Pearce, in her independent review of TEF, considered the various purposes claimed for TEF, and concluded that its primary purpose should be to enhance learning and teaching for all students across demographics and courses. The independent report articulated a theory of change that focused primarily on conversations and activity inside institutions:
The clear message from senior leaders responsible for teaching and learning is that the process of engaging with TEF has significant potential in the enhancement of provision.
The response from government and the Office for Students didn’t disagree that quality enhancement is important (how could they?) but tied it firmly to the wider agenda on quality assessment and institutional accountability. And student choice.
The theory of change is set out in the regulatory guidance from OfS:
The TEF aims to incentivise a higher education provider to improve and to deliver excellence above these minimum requirements, for its mix of students and courses. We intend that TEF ratings will create this incentive by putting a spotlight on the quality of providers’ courses, influencing providers’ reputations and informing student choice.
In other words, TEF is about the external signals that it provides to the market.
Before leaning too heavily on the difference, we should acknowledge that the different dimensions of TEF are mutually reinforcing to some extent. Enhancement conversations and activity inside institutions are lent a certain urgency by public assessment, and public identification of excellence across institutional missions offers new insight that can inform the sector’s work on enhancement.
But given the competing high level discourses on how TEF works to make things better, we wondered about how institutions experienced the current iteration of the TEF process and whether they believed it to be worthwhile for their own internal purposes and missions as well as for the signals sent to an external audience.
Chatting with colleagues working in learning and teaching at different universities across the sector about their experiences, we were struck by how conflicted they felt their experience to be. What started as a generalised, off-the-record chat about how the process felt from inside institutions (answer: frenetic) turned into a deeper reflection on what institutions are afforded to improve learning and teaching.
Bless this mess
Reading the TEF guidance you get the sanitised version of universities’ learning and teaching activity. It posits a fragrant, hygienic world of inputs, activities, and outcomes, with underpinning evidence stitching them all together. The reality is, of course, very different and much messier.
There’s the institutional data – in a lot of cases the public data will have told institutions roughly what they already knew from their own internal metrics, but in some cases there will have been a good bit of interpretive work and sense checking on the part of planning teams.
Then there’s all the digging across teams and departments to find out what’s going on and what institutional evidence is actually available. In each department or subject area there are going to be different individuals responsible for different elements of courses. And their understanding and articulation of the interrelationship between activity and outcome will have been variable.
Then there’s the tension between second-guessing what the Office for Students wants to hear – presenting something that pretends everything is always already neat and tidy – and reflecting the messier truth of institutions’ strengths and weaknesses in all their glory.
Questions about how to handle education gain illustrate this tension. This iteration of the TEF invited universities to include in their submission reflection on their internal understanding of education gain – hypothetically the most robust measure of quality available, in that it sets out to capture “distance travelled” by students in terms of acquisition of knowledge, skills, and capabilities while at university. The national learning gain project of 2015-17 had concluded that attempting to capture learning or education gain in national metrics would not be appropriate – meaning that behind-the-scenes efforts to measure it at local level were quietly put on ice.
We’ve heard of a few different responses to this prompt – in one case a university took the opportunity to interrogate its own practice and create and consult on a new education framework aligned more closely to its graduate attributes. In others, work that had been deprioritised after the conclusion of the national learning gain project had to be hastily unearthed and dusted off. Presumably a handful either had some longer term work to report on or simply ignored the issue in their submissions. But there will have been lots of anxiety about which way to jump, and speculation about the “right” answer.
Stitching all this together, in what some felt to be an unreasonably short timescale, into something that reads as both coherent for public consumption and authentic for the internal audience has clearly taken no small degree of skill, patience, and ingenuity. Not every institution will have pulled it off, and some will have made a pragmatic decision, in light of the high stakes involved, to sacrifice authenticity in favour of a glossier public presentation.
Despite all these challenges, at this early stage, what we’re seeing is an enormous degree of pride across the sector in the pedagogic stories and good practice that the TEF process has enabled institutions to unearth and/or construct. In some cases at least, TEF has generated focused analysis of practice, prompted conversations and comparisons across different subject areas, and encouraged closer alignment of activity where otherwise there might have been unhelpful siloes.
It has also strengthened the internal case for generating better evidence about pedagogy, and allowed institutions to reflect on and raise the profile of their efforts to develop academic practice. Some of this is in how it’s been set up and what OfS asked for in submissions, but some of it is about how those working in universities grasped the opportunity and leverage that TEF offers.
We’d expect a more robust evaluation of this TEF process to happen – especially since, if the next TEF is not to take place for four years, there’s time to make refinements and allow institutions to prepare, as they do for REF.
Inevitably the evaluation will take OfS’ objectives and theory of change as its starting point. But it would be helpful for that evaluation, rather than mounting a retrospective defence of regulator and government policy, to consider carefully how national policy instruments can create conditions as favourable as possible for universities to actually do the work of teaching enhancement.
One vital aspect of this is ensuring institutions can have confidence that their written submission will be taken seriously and count in the assessment of results. While most in the sector acknowledge the value of data and metrics, few believe that metrics are capable of capturing the whole story of the quality of a university’s education, and many are concerned about the implications for institutional practice of a tilt too far towards a data-led understanding of quality.
The risk is that if a belief takes hold that only the metrics matter, only activity that demonstrably moves the dial on metrics – and moves it quickly, for preference, rather than sustainably – will be attempted. And because evidence of what works is more slippery than is typically appreciated, there’s a risk that the sector lowers its ambitions for what can actually be changed based on the ease of measuring it.
One way of mitigating this risk – and we’d be surprised if it weren’t already on OfS’ roadmap – would be to analyse all written submissions not only to extract “good practice” in relation to student outcomes but to surface where institutions are exploring novel pedagogies and practices even where they are not being leaned on by the regulator to do so. Some work with Advance HE on evidence and pedagogy could also be helpful to create trust and confidence that OfS gets the challenges involved in working meaningfully with metrics.
At the very least OfS should be planning to give universities as much notice as humanly possible of any changes to the next iteration of TEF, informed by open consultation on the experience of this iteration. Assessment of that experience could usefully start now, before results are published and the experience is filtered through them, and while the process is fresh in the sector’s mind.
A high-handed response would scoff at this proposition and argue that good, autonomous universities should be across their data, evidence, and practice and the TEF should simply be translating that work. That could be technically true, but it would not be fair or reasonable. If the TEF is to support teaching enhancement at all, it needs to take meaningful account of the conditions for that work to take place.
Debbie would like to thank the colleagues who took the time to chat about their experiences of TEF.