If you’ve spent time looking over the data sheets underpinning the Teaching Excellence Framework, you may have noticed that not every box is filled. Small sample sizes mean that it’s not possible to report the data, a particular problem for Further Education Colleges and Alternative Providers. TEF data is suppressed for samples with fewer than 10 students (shown in the tables as ‘N’), a low response rate (‘R’), or insufficient data to form the benchmark (‘SUP’).
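The mechanics are simple enough to sketch. The snippet below applies the rules described above to a single split metric; it is an illustration only, not HEFCE’s actual code. The ten-student threshold and the ‘N’, ‘R’ and ‘SUP’ flags follow the published tables, while the field names and the 50% response-rate cut-off are assumptions made purely for the example.

```python
# Illustrative sketch only: not HEFCE's actual suppression logic.
# The ten-student threshold and the 'N' / 'R' / 'SUP' flags follow the rules
# described above; the field names and the 50% response-rate cut-off are
# assumptions for illustration.

def report_or_suppress(metric):
    """Return a displayable value or a suppression flag for one split metric."""
    if metric["students"] < 10:
        return "N"       # too few students to report
    if metric["response_rate"] < 0.50:
        return "R"       # response rate too low (illustrative threshold)
    if not metric["benchmark_available"]:
        return "SUP"     # insufficient data to form the benchmark
    return f"{metric['value']:.1f}"

example = {"students": 8, "response_rate": 0.62,
           "benchmark_available": True, "value": 84.3}
print(report_or_suppress(example))   # -> "N"
```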
This is hardly ideal, but is to be expected. It highlights the difficulty of reporting data broken down by gender, ethnicity, full-time/part-time, and so on. But that’s not the only problem. To understand the student experience in a more granular way, we should be thinking about how the ‘split metrics’ characteristics intersect, not just treating them as isolated characteristics. That might compound the data reporting problem, but it shouldn’t stop us trying to find a way to report the data differently.
Without being able to look at the experience of individuals across multiple groups, we are missing out on a big part of the issues with student experience in UK higher education. You may recall David Cameron’s espousal of anonymous admissions as a means to boost ethnic minority admissions and strive for gender equality, or Theresa May’s contention that white, working-class males are less likely to go to university. These are live political issues, and the TEF panel should have had the data to understand how individual institutions have responded to them.
As things stand:
- A TEF panel member or assessor can know at a glance whether a good overall institutional performance against a metric is reflected in the performance of BME students. But they cannot know, except by extrapolation and guesswork, whether Black women have a worse experience than the student body overall.
- A panel member or assessor may spot an issue of poor attainment amongst white students, but will not be able to know for sure that ethnicity rather than disadvantage is the correlated factor – a college may recruit predominantly white students from POLAR1 areas.
We should note at this point that several institutions did break down their split metrics more finely in their written submissions, but this was far from universal and it would be fair to assume this was done where it would show the institution in a positive light.
A little history lesson
Columbia Law School Professor Kimberlé Crenshaw coined the term ‘intersectionality’ in her famed 1989 essay in order to discuss the tendency to “treat race and gender as mutually exclusive categories of experience and analysis.” In recent years, the concept of intersectionality has typically included a more multi-faceted analysis of the barriers faced by disadvantaged societal groups, covering issues such as gender, disability, LGBTIQ+, ethnicity, identification, religion…
Intersectionality, put simply, is analysing the multiple barriers that people can face in line with their identities – examining race, gender, disability, sexuality, and the role that these identities play in compounding further inequality. By taking an intersectional approach, you acknowledge that there is more than one type of disadvantage – and advantage – that exists, and analyse these different types of disadvantage to get a fuller picture.
However, due to its origins in race theory and black feminism, the term ‘intersectionality’ is often contested when discussing characteristics other than race or gender. Intersectionality is a huge, active issue that is currently being addressed both critically and in policy making. It has become an essential tool in beginning to understand variation in human experience in a subject-focused and experience-informed way. And the way data is designed to be analysed within exercises like TEF is exactly where this critical lens should be applied.
What can be done?
We asked HEFCE about this and were told that the multi-variable diversity of institutional student bodies was reflected within the benchmarking process. But this, however welcome, is not the same as being able to analyse individual intersectional issues – and we can be clear that the TEF assessment panel was not able to do this to inform its judgement.
We’ve recently seen a commitment from HEFCE to address issues around small sample sizes for subject area TEF pilots. Similar methods could be used to examine intersectional issues.
We’re still not sure exactly the ways in which written submissions influenced the decision making of the TEF panel. But they were evidently influential, with significant movement in institutional judgements from the initial hypotheses suggested by the data. The Equality Challenge Unit argues that the TEF submission is one of the ways to challenge the sector in its approach to equality. ECU told Wonkhe: “Equality of experience – including an intersectional approach – needs to be addressed. A dedicated section of the [provider’s written] submission on this could be useful, or otherwise clear guidance as to how to embed this in any analysis of split metrics.”
This idea, and some way of reporting on intersectionality, should form part of TEF’s ‘lessons learned exercise’ as refinements to the process are made for future years. There is an opportunity to develop a TEF that addresses myriad forms of marginalised identity. The purpose of the TEF is to drive improvement in students’ experiences. A fundamental part of that is understanding your student body, and with such diverse cohorts an intersectional approach to teaching should be key.
Not all intersections are equal, in the same way that not all benchmarking factors are equal. The first stage in determining whether intersections are important is to look at the national benchmarking dataset and run a multi-component analysis on it; unfortunately, HEFCE do not provide the population sizes for each benchmarking group, so this is impossible for institutions to do and would have to be undertaken by HEFCE or DfE.
From that analysis it might be found that certain intersections show profound differences that cannot be explained by treating them as independent variables, and it is those intersections that we as a sector need to focus on. However, from a university’s perspective, six split characteristics give 15 different dual intersections (C(6,2) = 15), and commenting on all 15 without any idea of how dependent they are on each other would be fruitless.
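To make the arithmetic concrete, here is a minimal sketch of enumerating those pairwise intersections and flagging one that deviates from a simple additive, independent-factors expectation. The six characteristic names, the rates, and the materiality threshold are all invented for illustration; a real version would need the benchmarking population data mentioned above.

```python
from itertools import combinations

# Six illustrative split characteristics; any pair of them is one intersection.
characteristics = ["gender", "ethnicity", "age", "disability", "POLAR", "mode"]
pairs = list(combinations(characteristics, 2))
print(len(pairs))   # C(6, 2) = 15 dual intersections

# Toy check of one intersection against an additive "independent factors" model:
# if the two characteristics acted independently, the intersection's rate would
# sit close to the overall rate plus each marginal effect. A large gap flags a
# pair worth investigating. All numbers below are invented.
overall_rate = 0.85                           # e.g. overall satisfaction
marginal = {"female": +0.02, "black": -0.03}  # illustrative marginal deviations
expected = overall_rate + marginal["female"] + marginal["black"]
observed = 0.76                               # invented rate for the intersection
if abs(observed - expected) > 0.05:           # illustrative materiality threshold
    print(f"flag this intersection: expected {expected:.2f}, observed {observed:.2f}")
```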
I agree that we need to have a conversation about intersections, but first we need the data to show whether there is a problem, and to what extent; then we need to focus only on those intersections that show a measured deviation from what would be expected by treating them as independent factors.
I’m wondering why ‘class’ or ‘wealth’ is not mentioned in regards to intersectionality. Surely the ‘inequality’ itself in terms of access to resources etc. would be one of the largest contributors? Perhaps I have missed something; or perhaps this results from a socially constructed silence of some form?
Looking at NSS specifically, it’s very challenging for institutions to do their own intersectional analysis. Those brave enough to navigate the clunky results portal rather than rely on the one-dimensional main downloads can cross-reference subjects and respondent attributes, but it’s a long-winded process and hampered by the reporting thresholds which suppress units below ten. If HEFCE would release anonymised individual-level data to institutions, we could do far more.
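As a sketch of what that would allow: with a hypothetical anonymised individual-level extract, the cross-tabulation becomes a few lines of analysis rather than a trawl through the portal. The column names and toy records below are invented; only the below-ten suppression threshold comes from the published reporting rules.

```python
import pandas as pd

# Hypothetical anonymised individual-level extract; column names and records
# are invented for illustration.
df = pd.DataFrame({
    "subject":   ["Law", "Law", "Law", "History", "History"],
    "ethnicity": ["Black", "White", "Black", "White", "Black"],
    "gender":    ["F", "F", "M", "M", "F"],
    "satisfied": [1, 1, 0, 1, 1],
})

# Cross-tabulate any combination of attributes in one step, then apply the
# same small-cell suppression the published outputs use.
cells = df.groupby(["subject", "ethnicity", "gender"])["satisfied"].agg(["count", "mean"])
cells["reportable"] = cells["count"] >= 10   # suppress cells below ten
print(cells)
```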
The other NSS/TEF frustration I have is that the TEF data was the first time institutions ever received their NSS scores split by POLAR – it’s only now been added to the NSS outputs this year (replacing SEC, Richard!). It felt rather like being judged on something that we hadn’t been allowed to see.