KEF 2023 results

The least well-known but best designed of the sector frameworks gets a new iteration for 2023.

David Kernohan is Deputy Editor of Wonkhe

The reason the Knowledge Exchange Framework (yes, it is KEF day) passes largely without comment while the equivalent Teaching Excellence Framework (TEF) and Research Excellence Framework (REF) releases drive gigabytes of breathless commentary and interpretation is that KEF has been impeccably well designed.

KEF clusters similar providers for comparison. This means that we are comparing institutions with similar characteristics rather than, for example, looking at how a large research-driven provider compares with a small arts college.

Here’s how those clusters fit together for 2023 (membership is updated for this year, using the same methodology as previously along with new data). Note that there’s a bunch of providers, particularly in the ARTS cluster, that have chosen not to enter KEF (it’s still optional for anyone who doesn’t get HEIF allocations). We do get the data, but we don’t get the names.

[Interactive dashboard]

And here’s what they mean:

  • ARTS Specialist institutions covering arts, music and drama
  • M Smaller universities, often with a teaching focus
  • J Mid-sized universities with more of a teaching focus
  • E Large universities with a broad discipline portfolio across both STEM and non-STEM
  • STEM Specialist institutions covering science, technology, engineering and mathematics
  • X Large, highly research-intensive and broad-discipline universities
  • V Very large, very highly research-intensive and broad-discipline universities

As the name suggests, KEF covers knowledge exchange (sometimes known as “third mission”) activity. This is actually an unhelpfully large category, covering everything from graduate startups to regeneration income through to research collaboration and intellectual property. We could broadly describe KEF as a measure of the way in which a provider interfaces with (and brings value to) the world outside academia – contributing to economic growth and levelling up on the way.

There’s no single KEF score, no medal table. Providers are assigned quintiles in each of seven “perspectives”, with these benchmarked against the average quintile assigned to the cluster they have been placed in.

I’ve done a results dashboard to help you see the provider that interests you in the full KEF context. This shows the seven “perspective” quintiles on the left (a lighter colour means a higher engagement level) with the cluster average in the dark blue bars. Mouse over a perspective to see the metric quintiles (based on a three-year average) that underpin it. I’ve included the actual metric values in the tooltips, though these are not as important as you might think.

[Interactive dashboard]

You’d be forgiven for thinking this is all a bit, well, dull. The use of quintiles and comparators – and the year-on-year changes to clusters – means that it is quite hard to get a sense of who is “up” and who is “down” in each iteration. And again, this is the point. For that matter, different providers have different strategic approaches in each area, so a “quintile 1” rating may well mean that area is simply not a priority rather than being a cause for concern. Very few arts colleges enter commercial research partnerships – it’s just not a part of that landscape.

You can simply rank providers by each metric – I’ve sketched this out below – but this is not really all that useful. On a sector level, providers turn up largely where you would expect them (especially if you already know the HE-BCI data).

[Interactive dashboard]

What the annual KEF offers is hugely important to those working in the fields it covers. With many providers still finding their way in, say, commercialising generated IP, it is helpful to know what other similar places are doing. The reflective commentaries (renewed for this year, and available from the official KEF dashboard) are possibly even more important than the quintiles, in that they offer direct insight into strategies and innovation. It’s in the nature of knowledge exchange staff to share their practice.

So year-on-year changes, artificial rankings, and false comparisons add little to KEF. If only this were true of some other, higher profile, frameworks.
