Subject-level TEF’s good intentions could have unintended consequences

When students make the transition from school or college to university, what should they expect from higher education teaching and learning?

Their experience of education in the years preceding university will most likely have been of guided learning, closely supervised by a teacher at the head of a classroom, and an incrementally increasing emphasis on examination preparation. This preparation is geared to maximising pupil attainment, with the concomitant metrics being key indicators of the school’s success.

Students who apply to higher education should ideally do so with an informed view of what might be different before they start their course. These differences could be defined by the nature of the dialogue between staff and student, or perhaps the dynamic ebb and flow between teaching, learning, and independent study – in short, an experience of “delivery” that is equally responsive to individual learning styles and shifts in subject parameters.

What seems to be missing from the sector’s proliferation of performance indicators and metrics is a way to help a range of audiences better understand the relationship between teaching inputs and student success. In short, it’s about time that the sector developed and applied a measure of teaching in the interests of promoting consistency and comparability, for applicants and students.

So the question arises: does the answer to this complex issue reside in the ongoing pilot of the subject-level Teaching Excellence and Student Outcomes Framework (TEF)?

The value-for-money quest

As we were reminded by universities minister Sam Gyimah, the subject-level TEF pilot is part of a wider quest to ensure value for money and high-quality university teaching.

“Prospective students deserve to know which courses deliver great teaching and great outcomes – and which ones are lagging behind,” the minister said.

HEFCE’s 50-institution pilot spans nursing, engineering, creative arts and design, history and archaeology, and business and management – a mix of disciplines intended to capture a variety of provider types and professional requirements – and its focus is the experience of full-time undergraduates.

So, if the pilot’s test metrics are confirmed and subjects are graded individually as gold, silver or bronze, what longer-term implications and issues will higher education institutions face?

The viability question

A key question from the outset has been the viability of teaching intensity either as context or as a fit-for-purpose subject-level TEF metric.

Can a quantitative formula predicated on contact hours and staff-student ratios be a credible proxy for excellence? Or put more crudely, does teaching quantity equal teaching quality? The further element in this aspect of subject TEF is the reception for what has been delivered, garnered from students’ views through an NSS-like survey.

A pre-emptive answer to the viability question was offered in 2010 by Graham Gibbs’ Dimensions of Quality paper for the Higher Education Academy, in which he laid bare the best predictors of educational gain as “measures of the educational process: what institutions do with their resources to make the most of whatever students they have.”

These predictors are, in Gibbs’ view, neither a function of university facilities nor of student satisfaction with those facilities, “but concern a small range of fairly well-understood pedagogical practices that engender student engagement.”

“Class size, the level of student effort and engagement, who undertakes the teaching, and the quantity and quality of feedback to students on their work are all valid process indicators,” he argued.

This would suggest that the factors at the forefront of the pilot are important, but far from the full range of considerations of what it takes to successfully deliver an appropriately challenging and rewarding higher education experience.

Broadly speaking, one wonders if the formulaic measurement of teaching intensity, as modelled in the pilot, may have unintended consequences for curriculum design, delivery, and engagement.

The privileging of low staff-student ratios in search for a strong Gross Teaching Quotient (GTQ) may well push universities towards the “teacher-in-classroom” paradigm ahead of a richer variety of learning methods, such as peer learning or experiential learning – and certainly may not sufficiently acknowledge modes of study outside the conventions of a full-time undergraduate delivery model.

The formula for calculating the intensity of teaching may well assert itself in unexpected ways, ways that could de-value autonomous learning which, in turn, may reduce curriculum space for the mastery of practice-based skills. The value of time and space for practice and reflection in areas as diverse as the creative arts and film through to medicine, veterinary practice, and engineering shouldn’t be overlooked. In subjects where reflexive, research-informed practice leads to the development of high-level skills, the volumetric measurement of teaching quality seems to slightly underestimate the learner.

What has really been gained for students if the measurement of teaching intensity finds its corollary in learning passivity?

Practicalities

There are also practical considerations.

The data required to calculate the pilot’s GTQ must be accurate and verifiable – perhaps drawn from institutional timetables or evidence-based provider narratives. But what should be included and excluded – time with technicians or demonstrators, careers sessions, library inductions – or will it just be lecture and tutorial time?

And for those drafting timetables, faced with the TEF’s positive disposition towards small-group teaching, a tension is almost certain to grow between staffing budgets, physical space, contact time, and real (or perceived) value for money.  

Threshold weightings tied to group size may also mean that timetabling and curriculum choices drive class sizes to arithmetic extremes.
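To see how threshold weightings could produce such extremes, consider an illustrative quotient that weights each contact hour by a factor tied to group size. The band boundaries and weights below are invented for the sake of the example, not the pilot’s actual values:

```python
# Illustrative teaching-intensity quotient: each contact hour is weighted
# by a factor that drops as class size crosses a band threshold.
# Band boundaries and weights here are hypothetical, not the pilot's.

def band_weight(class_size: int) -> float:
    """Return a per-hour weight for a given class size."""
    if class_size <= 20:
        return 1.0   # small-group teaching: full credit
    elif class_size <= 60:
        return 0.5   # mid-sized class: half credit
    else:
        return 0.25  # large lecture: quarter credit

def teaching_quotient(sessions) -> float:
    """sessions: list of (contact_hours, class_size) pairs for a module."""
    return sum(hours * band_weight(size) for hours, size in sessions)

# Two modules with identical total contact time (30 hours):
lecture_heavy = [(30, 120)]  # one large lecture series
split_groups = [(30, 20)]    # the same hours in groups of 20

print(teaching_quotient(lecture_heavy))  # 7.5
print(teaching_quotient(split_groups))   # 30.0
```

Because the weight jumps at each band boundary, a timetabler quadruples the score simply by capping groups at the threshold – without any change in total teaching time, let alone teaching quality.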

And what of niche courses or subjects with modest cohort sizes, which may fall outside subject-level TEF because their survey samples are too small to be eligible for a rating? What fate would befall such small courses? Without a subject-level TEF rating, what chance would they have to attract students in an increasingly competitive environment?

Unintended consequences and good intentions

This is not an argument against trying to understand and define teaching excellence at subject level; rather, these are thoughts, from the perspective of a curriculum designer, on issues the pilot is likely to encounter and which will need to be weighed carefully.

In short, let’s be wary of crafting unintended consequences based on good intentions.

2 responses to “Subject-level TEF’s good intentions could have unintended consequences”

  1. Much of this is right but we should not over-state the importance of ‘teaching intensity’.

    Teaching intensity data was just one of two ‘supplementary metrics’ and will not be the determining factor in whether subjects are rated gold, silver or bronze within the pilot.

    The TEF subject pilot included the full set of benchmarked metrics covering student experience and outcomes that are used in the main TEF. It also included comprehensive provider submissions explaining institutional approaches to learning and teaching at subject level.

    So, while this piece makes several good points about teaching intensity metrics, problems around teaching intensity should not determine our overall response to the proposal to introduce subject level TEF.

  2. A simple premise in good teaching is knowing when to be quiet and to let the learners learn, maybe with a nudge or two but no more. We are hitting this hard in entrepreneurial education where the learners have to think, and to see alternative options, not just the ‘right way’.
