David Kernohan is Deputy Editor of Wonkhe

The waiting is over – and, if we treat REF 2021 in the way it is intended to be treated, there is a curious sense of anticlimax.

The Research Excellence Framework is designed to identify pockets of world-class research conducted between 2014 and 2020 in subject areas within universities, and to make a very broad assessment of the global standing of UK research.

And REF is not a league table – there are very few sensible “who’s up? who’s down?” stories to tell – “winners and losers” framings contribute to the harm caused by poor research culture without actually offering any illumination or insight. Neither – as I noted on Monday – is REF a key to large amounts of funding that will change the course of an institution. But it is important, and there is a lot we can learn from today’s release.

If you just want to look up how your provider/unit of assessment did, the data visualisations are towards the end of the page. But there are a lot of other things we can learn about the nature and spread of excellent research first.

Nations and regions

The headline should be that the post-Stern review changes to the process (which methodologically make the results incomparable to previous iterations) have worked – 84 per cent of assessed research is either “world leading” (4*) or “internationally excellent” (3*). Nearly every provider that submitted to REF had at least some research judged to be “world leading”.

In part, these results are due to more staff being submitted and a smaller pool of outputs being used to assess them. The increase in full-time equivalent staff (FTE) entered into the exercise was concentrated in providers not traditionally perceived as “research intensive” – giving us evidence of high-quality research (and, importantly, evidence for the impact of that research) in every part of the UK.
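
For the curious, the output volume arithmetic works roughly like this: REF 2014 asked for up to four outputs per submitted individual, while REF 2021 asked for 2.5 outputs per submitted FTE in total, with between one and five attributed to any one researcher. Here is a minimal sketch of that rule (my reading of the published guidance, so treat it as illustrative):

```python
# Rough sketch of the REF 2021 output volume rule, as I read it:
# 2.5 outputs per submitted FTE in total (rounded), with each
# submitted staff member attributed between 1 and 5 outputs.

def required_outputs(fte: float) -> int:
    """Total outputs a submission must contain for a given FTE."""
    return round(2.5 * fte)

# A 20 FTE unit needs 50 outputs under the 2021 rules, against the
# 80 it could have needed under the four-per-person rules of 2014.
print(required_outputs(20.0))  # 50
```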

So let’s start with the size and shape of research in the UK. Here the size of each mark on the map, and of each bar on the chart, shows the FTE submitted by each university to your selected unit of assessment. The intensity of purple denotes the proportion of research rated as “world leading” (4*) in the REF submission. London does tend to get hectic on maps like this, so I’ve added a regional filter.

[Full screen]

If you wanted to think about the impact of this iteration of REF, consider that larger and darker marks here are likely to lead to more REF-related funding (QR in England) coming into an institution than smaller and lighter ones.
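
To put rough numbers on that: mainstream QR in England has in recent years weighted 4* research at four times the rate of 3* (with 2* and below attracting nothing), multiplied by submitted FTE and a subject cost weight. The sketch below shows why a large, dark mark earns disproportionately more than a small, light one – the quality profiles, FTEs, cost weight, and pot size are invented for illustration, not Research England’s actual figures.

```python
# Illustrative sketch of a mainstream-QR-style allocation. The 4:1
# quality weighting reflects recent English practice; everything
# else here (profiles, FTEs, cost weight, pot) is invented.

QUALITY_WEIGHTS = {"4*": 4.0, "3*": 1.0, "2*": 0.0, "1*": 0.0, "u/c": 0.0}

def volume_measure(fte, profile, cost_weight=1.0):
    """Quality-weighted volume: FTE x weighted quality profile x cost weight."""
    quality = sum(QUALITY_WEIGHTS[star] * share for star, share in profile.items())
    return fte * quality * cost_weight

# Two hypothetical submissions to the same unit of assessment:
large_dark = volume_measure(60, {"4*": 0.45, "3*": 0.40, "2*": 0.15})
small_light = volume_measure(12, {"4*": 0.20, "3*": 0.45, "2*": 0.35})

# Shares of a purely illustrative £1m pot:
pot = 1_000_000
total = large_dark + small_light
print(f"large, dark mark:  £{pot * large_dark / total:,.0f}")  # ~£898,000
print(f"small, light mark: £{pot * small_light / total:,.0f}")  # ~£102,000
```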

We can also start to consider (NUTS 3) regional research power in more detail. Here I’ve tried to show the proportion of submitted research in a UoA in each region that is rated 4* (the height of the bar), and the FTE submitted (the width of the bar). The position and size of each dot show institutional performance in the same terms, while the colour refers to my own grouping of providers with similar attributes.

[Full screen]

Here we see evidence of excellence in every part of the UK, and the contribution of many types and sizes of institution to that evidence. If you look at the “impact” profile you can see traditionally research-focused providers contributing to the way research benefits the economy and society alongside institutions historically associated more with teaching. For example, more engineering research from Manchester Metropolitan University is judged as having world class (4*) impact than at another famous institution on the same road.

But even considering outputs only (which some people like to imagine define the “real” quality of research) the engineering UoA shows Nottingham Trent University making a better showing than another well-known provider in that East Midlands city. There are fascinating surprises like this throughout the release.

Types of provider

A mission group lens tells another story – despite powerful contributions from many providers, the Russell Group submitted more staff and saw more of its submitted evidence assessed as world class. This is the case for most, but not all, units of assessment – smaller specialised providers do well on the world-class measure in biological sciences, and other pre-92 providers do admirably in archaeology.


[Full screen]

But the value of using mission groups as a unit of analysis is not high – again and again you see individual providers of all sorts offering precisely the kind of “pockets of excellence” that REF 2021 was designed to find. Our understanding of the full gamut of research is stronger for this demonstration of diversity. Much love to the University of Bedfordshire English research community!

Subject effects

The 34 REF units of assessment are derived from one or more of the HESA cost centres – with submissions to a UoA based in part on institution-level staff attribution to cost centres in HESA returns. What that means is that we cannot always cleanly map REF results to departments, faculties, schools, or research centres within providers.

And because each unit of assessment has its own panel – with its own expectations of what “world class” looks like in that domain – we can’t really use this information to compare subject areas. Moderation happens at numerous levels within the REF process, but moderation only goes so far.

So we can note, for example, that research judged “world class” appears to be less common in the social sciences, even with a proportionally larger pool of submitted staff. And we can say that there was more 4* research found in “public health, health services, and primary care” than in any other subject. What we can’t assume, however, is that the UK is better at research where there is a higher proportion of 4* work – or that it is “easier” or “harder” to be world class in a particular domain.

This visualisation shows the proportion of research at each of the five possible REF ratings (alongside the FTE submitted as the smaller bar) for every main panel (top) and UoA (bottom) in the exercise. I’m conscious that the bottom chart is hard to read, so if you click on a main panel at the top the bottom chart zooms in to show you just the UoAs under that main panel.

[Full screen]

What results mean to providers

Though the proportion of 4* and 3* research in each provider (and the number of FTE attached to that subject domain) has a direct bearing on the amount of QR (or similar) funding a provider receives, allocations are not hypothecated at source. Providers may choose to assign QR funds based on REF performance – giving more funds to centres with a track record of excellence – but they can choose to spend the money in any way they want. You don’t have to look hard to find providers that:

  • Use QR to invest in “world class” research where it has already been identified
  • Use QR to invest in creating new areas of “world class” research
  • Use QR to provide support for research where it is financially required (those big toys the physicists use are really expensive, journal subscription costs are always going up) or to support capacity between large projects
  • Use the money to maintain the institution more generally (from fixtures, fittings, and maintenance to subsidising loss-making activity like teaching home undergraduate students).

If you are in a provider you should not assume that areas of research that did not perform as well as hoped will see cuts – there may be a strategic need for that research at a provider, local, or national level. It is possible that research conducted right at the end of the period was “world class”, but there wasn’t enough of it to outweigh the less notable work done in 2014 and 2015 that was used to meet the submission volume requirements.

This data starts discussions – it should not end them.

Results in full

These last two charts are very similar to the ones I published on Monday for REF 2014 – showing institutional performance within a unit of assessment, and the performance of each submission from a provider. The coloured boxes show the proportion of research at each quality level for each provider/UoA combination (actually each provider/submission combination – I’ve been marking multiple submissions and joint submissions throughout), and the small pink bars show the submitted FTE (you can see the proportion of eligible FTEs that this represents in the tooltip).

Filter by UoA

[Full screen]

Filter by provider

[Full screen]

The burden

Any mention of the REF is always accompanied by complaints about burden and cost. The REF is expensive, and it is a lot of work. To assess this REF, 1,120 panel members (comprising UK and international academics and research users) joined more than 1,000 Zoom calls to support the assessment of 185,594 research outputs and 6,781 impact case studies.

This was supported by a supremely talented executive, and the work of 76,132 academics was brought to the panel by the efforts of research managers and administrators.

It is, in other words, a lot of hard work by a lot of people.

In return we get a way for the sector to be held accountable for (and to make arguments for the further funding of) the research it performs and shares. And a way to assess – via the “gold standard” of peer review rather than any suspect proprietary metrics – the quality of UK research (and the value of global collaborations involving the UK) against international peers. REF will likely change next time round (the consultation on the future of research assessment closed last week).

If REF did end, we’d end up inventing something quite similar to replace it.


For anyone who needs a guide to reading and using the above visualisations, I’ve made a quick video.
