
The value of a national TEF is in enhancing university learning and teaching

The TEF will serve public and student interests best if it drives teaching enhancement in universities. Shirley Pearce, independent reviewer of the TEF, sets out what that could mean in practice.

Shirley Pearce is the independent reviewer of the Teaching Excellence and Student Outcomes Framework (TEF).

It is great that the TEF review is finally out in the open and the government accepts most of my recommendations.

The government response from the Department for Education agrees that there should be a clear purpose for a revised TEF and that enhancement of provision should be the goal. This clarity of purpose makes redesign more straightforward.

The government also accepts the principles for redevelopment that the review proposed: a redesigned TEF should be transparent, relevant and robust. Each of these principles matters in its own way.

Students should have a voice, but not just through the NSS

To give just one example of the transparency principle, I was concerned that the NSS was being used as a proxy for too many aspects of quality. Not only did this raise statistical concerns about multiple comparisons but it also used students’ impressions to infer absolute quality. Students’ judgements about their experience are important but they are not an absolute measure of quality. There is no absolute measure of educational quality.

We proposed that a revised TEF should be transparent about where students’ judgements fit into the assessment process, with an explicit “student satisfaction” aspect in the structure. The NSS has played an important part in the assessment of student experience and, whatever the future of the current NSS, students’ views of their education should always be part of a national assessment of excellence.

The government has agreed that the student voice is important but does not agree with the name “student satisfaction”. The response suggests an element called “student academic experience” but does not yet say what nationally comparable metric might be used. Our other proposal, that the submission process should enable the student body to contribute independently, would ensure a student voice, but it would not enable national comparisons. The student submission should not be treated as an alternative to a nationally comparable metric of student views, such as the NSS, within the structure of the TEF framework.

Subject data drives useful conversations, but public ratings are not robust

The review made a significant number of other process and statistical recommendations, drawing on the tremendous work of the Office for National Statistics (ONS). The statistical risks in the TEF are at their most acute when making subject level ratings, which is why we concluded that TEF should not proceed with public subject ratings. At the time of the review this was one of the most polarising considerations, and I am pleased that the government has accepted this recommendation.

Although I, and the TEF advisory group, found undue risk in developing subject ratings, we were impressed by the consistently expressed view (by lovers and haters of TEF) that the provision of subject level data with splits for the various benchmark characteristics had a value within institutions in identifying areas for improvement.

In particular, institutional leads for teaching and learning were clear that the benchmarked data enabled them to pull levers for change in their institutions. The data helped them gain institutional attention for areas that needed investment or improvement. We concluded that there was a value to enhancement, within institutions, of access to the granularity of information in the subject level data.

We also consider that the way in which providers identify and respond to their own variability in subject performance is a key part of their enhancement process, and that an assessment of this should be incorporated into the overall provider TEF ratings.

This should be the case for all universities. Even high performing universities may have pockets of less successful teaching and learning, and this must not be overlooked if all students are to thrive. I welcome the government response making clear that the TEF exercise, however it develops, should apply to all universities.

Institutional ownership of some metrics could help balance diversity and accountability

A message from all parts of UK higher education during the review was that “we are different”. Different HE providers have different missions and provide different learning experiences and opportunities for their different groups of students. There was also a strong message that formal teaching is only one part of the overall range of learning opportunities that comprise high quality HE.

This diversity of mission across providers, with many different kinds of learning outcomes, makes it impossible to judge all provision against a single set of metrics. Students want a choice, and the country (and the world) needs a range of provision.

This is why we recommended that a redesigned TEF should request institutionally determined measures, to enable each provider to demonstrate how they measure their own performance against their specific mission. This would supplement nationally comparable metrics.

Overall ratings should be derived from both qualitative and quantitative data. This enables institutions to articulate and demonstrate how they create an excellent educational environment and how they facilitate their students’ learning. The government agrees that qualitative and quantitative data are important. Using an overall judgement to determine the final ratings also allows contextual factors, such as the devastating Covid-19 pandemic, which may have differential impacts across the sector, to be taken into account.

Taking the TEF forward

We have the Higher Education and Research Act (HERA) 2017 to thank for the unusual opportunity to involve the sector in a review of policy at a relatively early stage of its development.

The TEF advisory group brought expertise and care to the multiple strands of the review: the listening sessions, the call for views, the statistical analysis from the ONS, the survey of international experts, the British Council report, the surveys from UCAS of current and prospective applicants, and our interviews with employers. Together, these provide a rich source of data and ideas for all interested in educational enhancement. I can only touch on some of the issues we raised and addressed. I hope you will enjoy reading the report itself!

The government has asked the Office for Students to take forward the redevelopment of TEF in the light of all this evidence. I welcome that and hope the sector, which had such a significant input into the evidence base, can continue to share its expertise with OfS as it consults on practical proposals to implement the changes.

3 responses to “The value of a national TEF is in enhancing university learning and teaching”

  1. “The value of a national TEF is in enhancing university learning and teaching”

    Good one, if it was intended as a joke. Nothing improves teaching better than a good helping of bureaucracy, pseudo-scientific measures, and form filling. Sure.

  2. I know Shirley Pearce means well, but it is the above we always end up with if government and its agencies get involved. Always.

  3. I have to disagree with you here, Tom. It is probably possible for institutions to use Heidi Plus to access equivalent data (and construct visualisations of this data) at subject level, but I’m not sure this ability (or access) is widespread across institutions (a view confirmed by fellow participants within the Heidi Plus Advisory Group), or whether central planning & insight teams have the resource to achieve this on a regular basis, the ability to meaningfully connect this to local data sets outside of the HESA statutory returns, or even suitable organisational architecture in which to insert this level of analysis on a routine basis.

    The provision of this data and its benchmarked flags in pilots, notwithstanding the limitations expressed so effectively by ONS, did enable management teams & academic colleagues to gain insights on the relative positions of groups of students across subject areas, in ways that I hadn’t observed before, and which did shift the focus of review processes around T&L – as summarised above by SP (“[the] consistently expressed view (by lovers and haters of TEF) that the provision of subject level data with splits for the various benchmark characteristics had a value within institutions in identifying areas for improvement”).

    Is it not possibly unhelpful to only consider whether teaching has improved, without considering the corollary impact of that teaching on learning and achievement?

    The difficult part is obviously how you take action to address perceived underperformance amongst certain groups – where much of teaching and learning is (in my experience) delivered on a one-to-many basis, plus how you effectively connect the provision across academic and professional service areas, whilst of course being mindful of the grade inflation vultures that circle above (despite Gavin Williamson’s reminder to the OfS yesterday that “[they] should focus on driving up quality”).
    But whilst something might be difficult, surely that doesn’t mean we shouldn’t try?
