If you’ve spent time looking over the data sheets underpinning the Teaching Excellence Framework, you may have noticed that not every box is filled. Small sample sizes mean that it’s not possible to report the data, a particular problem for Further Education Colleges and Alternative Providers. TEF data is suppressed for samples with fewer than 10 students (shown in the tables as ‘N’), for a low response rate (‘R’), or where there is insufficient data to form the benchmark (‘SUP’).
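The three suppression codes above amount to a simple decision rule. The sketch below is illustrative only: the ‘N’ threshold of 10 students comes from the article, but the function name, field names, and the 50% response rate cut-off are assumptions, not taken from any official HEFCE methodology.

```python
def suppression_flag(n_students, response_rate, benchmark_available,
                     min_students=10, min_response_rate=0.5):
    """Return a suppression code for one cell of a TEF metrics table.

    'N'   -- fewer than 10 students in the sample
    'R'   -- response rate below the (assumed) minimum
    'SUP' -- insufficient data to form the benchmark
    None  -- the value can be reported
    """
    if n_students < min_students:
        return "N"
    if response_rate < min_response_rate:
        return "R"
    if not benchmark_available:
        return "SUP"
    return None


# A cell with only eight respondents is suppressed outright:
print(suppression_flag(8, 0.9, True))  # prints "N"
```

The ordering matters: a tiny sample is flagged ‘N’ before any response rate or benchmark check is reached, which matches how the blank cells cluster for small providers.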
This is hardly ideal, but is to be expected. It highlights the difficulty of reporting data broken down by gender, ethnicity, full-time/part-time status, and so on. But that’s not the only problem. To understand the student experience in a more granular way, we should be thinking about how the ‘split metrics’ characteristics intersect, not just treating them as isolated characteristics. That might compound the data reporting problem, but it shouldn’t stop us trying to find a way to report the data differently.
Without being able to look at the experience of individuals across multiple groups, we are missing a big part of the issues with the student experience in UK higher education. You may recall David Cameron’s espousal of anonymous admissions as a means to boost ethnic minority admissions and strive for gender equality, or Theresa May’s contention that white, working-class males are less likely to go to university. These are live political issues, and the TEF panel should have had the data to understand how individual institutions have responded to them.
As things stand:
- A TEF panel member or assessor can know at a glance whether a good overall institutional performance against a metric is reflected in the performance of BME students. But they cannot know except by extrapolation and guesswork whether Black women have a worse experience than the student body overall.
- A panel member or assessor may spot an issue of poor attainment amongst white students, but will not be able to know for sure whether ethnicity rather than disadvantage is the correlated factor – a college may recruit predominantly white students from POLAR quintile 1 areas.
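The arithmetic behind both of these problems is worth making concrete: crossing two characteristics multiplies the number of cells while the cohort stays the same size, so cells fall below the reporting threshold. The sketch below uses an entirely hypothetical cohort of 71 students (all counts invented for illustration) and the 10-student threshold mentioned above.

```python
from collections import Counter

# Hypothetical cohort of 71 students, each tagged with ethnicity and
# gender. All group names and counts are invented for illustration.
students = (
    [("White", "Female")] * 22 + [("White", "Male")] * 25 +
    [("Black", "Female")] * 6  + [("Black", "Male")] * 7 +
    [("Asian", "Female")] * 6  + [("Asian", "Male")] * 5
)

MIN_CELL = 10  # the reporting threshold behind the 'N' flag

# Single-characteristic splits: every group clears the threshold.
by_ethnicity = Counter(ethnicity for ethnicity, _ in students)
by_gender = Counter(gender for _, gender in students)

# Intersectional split: the same cohort crossed by both characteristics.
by_both = Counter(students)
suppressed = {cell for cell, n in by_both.items() if n < MIN_CELL}
# Four of the six intersectional cells now fall below the threshold,
# even though every single-characteristic group was reportable.
```

In this invented example, every ethnicity group and every gender group is large enough to report, yet four of the six crossed cells would be suppressed – exactly the gap a panel member faces when trying to assess, say, the experience of Black women at a particular provider.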
We should note at this point that several institutions did break down their split metrics more finely in their written submissions, but this was far from universal and it would be fair to assume this was done where it would show the institution in a positive light.
A little history lesson
Columbia Law School Professor Kimberlé Crenshaw coined the term ‘intersectionality’ in her famed 1989 essay in order to discuss the tendency to “treat race and gender as mutually exclusive categories of experience and analysis.” In recent years, the concept of intersectionality has typically encompassed more multi-faceted analysis of the barriers faced by disadvantaged societal groups, covering issues such as gender, disability, LGBTIQ+ identification, ethnicity, religion…
Intersectionality, put simply, is analysing the multiple barriers that people can face in line with their identities – examining race, gender, disability, sexuality, and the role that these identities play in compounding further inequality. By taking an intersectional approach, you acknowledge that there is more than one type of disadvantage – and advantage – that exists, and analyse these different types of disadvantage to get a fuller picture.
However, due to its origins in race theory and black feminism, the term ‘intersectionality’ is often contested when discussing characteristics other than race or gender. Intersectionality is a huge, active issue that is currently being addressed both critically and in policy making. It has become an essential tool in beginning to understand variation in human experience in a subject-focused and experience-informed way. And the way data is designed and analysed within exercises like TEF is exactly where this critical lens should be applied.
What can be done?
We asked HEFCE about this and were told that the multi-variable diversity of institutional student bodies was reflected within the benchmarking process. But this, however welcome, is not the same as being able to analyse individual intersectional issues – and we can be clear that the TEF assessment panel was not able to do this to inform its judgement.
We’ve recently seen a commitment from HEFCE to address issues around small sample sizes for subject area TEF pilots. Similar methods could be used to examine intersectional issues.
We’re still not sure exactly how written submissions influenced the decision making of the TEF panel. But they were evidently influential, with significant movement in institutional judgements from the initial hypotheses suggested by the data. The Equality Challenge Unit argues that the TEF submission is one of the ways to challenge the sector in its approach to equality. ECU told Wonkhe: “Equality of experience – including an intersectional approach – needs to be addressed. A dedicated section of the [provider’s written] submission on this could be useful, or otherwise clear guidance as to how to embed this in any analysis of split metrics.”
This idea, and some way of reporting on intersectionality, should form part of TEF’s ‘lessons learned exercise’ as refinements to the process are made for future years. There is an opportunity to develop a TEF that addresses myriad forms of marginalised identity. The purpose of the TEF is to drive improvement in students’ experiences. A fundamental part of that is understanding your student body, and with such diverse cohorts an intersectional approach to teaching should be key.