Elizabeth (Lizzie) Gadd is Head of Research Culture & Assessment at Loughborough University.

Research England has announced that it will be releasing the REF results on 12 May 2022, with submitting institutions getting early notice on 9 and 10 May.

Having experienced the marathon of submission, followed by a year of audit hurdles, the final leg for institutions is to analyse their results and face the inevitable scramble to tell the world how well they’ve done – irrespective of how well they’ve done.

We’re (all) top 10!

In the past, UK HEIs have been ridiculed for all claiming Top 10 status as a result of slicing the results data in many and various ways. Now I’ve never really had a problem with that in one sense, because I believe every university in the UK is ‘top 10’ in something: we all have our strengths, and REF’s stated ambition is to reward that ‘excellence wherever it may be found’. However, the Metric Tide report recommended that all UK universities make a commitment to the responsible use of metrics, and such a commitment has been suggested as one indicator HEIs can use to support their Institutional REF Environment Statement.

So as someone whose job it is to start thinking about how we might present our REF results when they appear, and someone who cares deeply about responsible metrics, I am wondering how we can be just as responsible in presenting our metrics as we are in compiling them.

Hark to humility

The temptation to sift through the results data to identify any and every ‘win’ is understandable. There might be some joyous surprises when we discover just how much of our research in some disciplines is considered world-leading. Or how strong our research environments in some areas are considered to be. There might also be some nasty shocks on which we need to run a post-mortem. However, in the world of research that we’re seeking to celebrate here, the practice of digging through the data to find good stories to tell is seriously frowned upon. ‘HARKing’ (Hypothesising After the Results are Known) and ‘p-hacking’ (seeking only to report statistically significant findings) are the focus of many sector calls for more responsible research and innovation. So should we be doing it with our REF results?

Another thing no-one really wants to admit – certainly not HEIs if they’ve done well, and certainly not Research England, who invest so much in producing a high-quality process – is that academic assessments are difficult and subject to uncertainty. Despite all the unconscious bias training and attempts at broadening the diversity of panel membership, assessors will still battle biases towards big names, big institutions, certain disciplines and people like themselves. And despite ostensibly being a process of ‘expert peer review’, many have questioned whether, given the many sub-disciplines being reviewed and the size of the panels, this is true in every case.

Of course, assessments are an art, not a science. Panellists are seeking to reduce a huge range of qualities (originality, significance, rigour, reach, and everything assessed in REF environment statements) to a single digit. There will be disagreements amongst panellists which will be resolved through various mechanisms, but which on another day might have been resolved differently. Indeed, Waltman and Traag put out a call in 2019 for Research England to collect peer review uncertainty data so we can get a measure of the REF’s reliability. And ARMA asked REF EDAP if they’d run an Equality Impact Assessment on output scoring, which they have agreed to do.

Whilst no-one can doubt that the REF offers a thoughtful and rigorous approach to national research evaluation, we should accept that the results will be robust but not gospel. As such, shouldn’t we present them with an appropriate dose of humility?

The rankness of ranking

Another unhelpful practice in the world of research evaluation is ranking. Given how competitive the REF is, it’s sometimes hard to remember that it’s not an actual competition. It’s not the purpose of REF to rank universities, and Research England presents the results in strictly alphabetical order. It’s the universities and news outlets that snatch at the spreadsheets and sort by grade point average (GPA). But whilst benchmarking yourself can be instructive, as can identifying world-leading environment statements and case studies, the inevitable “we’re the best” messages that result from ranking are not helpful.
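For those unfamiliar with how those rankings are derived: a unit’s GPA is simply the weighted average of its quality profile, with each star level multiplied by the percentage of activity assessed at that level. Here is a minimal sketch of that calculation – the profile figures are purely illustrative, not any institution’s real results:

```python
def ref_gpa(profile):
    """Grade point average of a REF quality profile.

    `profile` maps star level (4, 3, 2, 1; 0 = unclassified)
    to the percentage of activity assessed at that level.
    """
    return sum(stars * pct for stars, pct in profile.items()) / 100

# Illustrative profile: 40% 4*, 45% 3*, 13% 2*, 2% 1*
print(round(ref_gpa({4: 40, 3: 45, 2: 13, 1: 2, 0: 0}), 2))  # 3.23
```

Sorting institutions by that single number is precisely what collapses a nuanced quality profile into a league table.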

There is a tension between competition and cooperation in academia. We can’t win competitive grant-funding without cooperating with other institutions. However, when it comes to national exercises like REF, it’s every institution for itself. But every researcher knows that addressing the world’s complex problems depends on collaboration. So if we care about cultivating a research environment in which collegiality prospers, shouldn’t we be foregrounding our partnerships rather than seeking to promote ourselves at others’ expense?

At the Global Research Council’s Responsible Research Assessment event in November 2020, UKRI CEO Dame Ottoline Leyser suggested that it might be possible to present the REF results in such a way that ranking was prevented. And it would, of course. We could agree to refuse to release the data in spreadsheet form (although someone somewhere would reverse-engineer it). Or refuse to release individual HEI results at all, presenting just a profile of the UK results and leaving the sharing of university-level results up to each university (although that might lead to unverifiable claims of research excellence). But perhaps the best way to avoid ranking is to design it out of the assessment altogether.

It’s interesting that of the three national university assessment exercises – TEF, KEF and REF – the REF is the only one that enables universities to claim that they’re ‘the best’. TEF is a threshold-based exercise and many universities win Gold. KEF is a cluster-based profiling exercise where universities are expected to show different strengths. REF is the only one where each university is given an overall score to two decimal places, allowing victors to be declared. And due to the criteria used – “internationally excellent” and “world-leading” – such victory claims are not limited to the UK. If 100 per cent of your research is “world-leading”, then you must be one of the best in the world, right? (Even though, as the Hidden REF has shown us, not all research-enabling individuals and activities are represented by the current exercise.) And of course, because we prize research success over teaching, enterprise, and other forms of university mission, if you win at REF, you win at universitying.

Research metrics rebooted

So what do we do? How can we present our REF results in a way that celebrates our strengths as individual HEIs, but does not result in an undignified scramble to make overblown claims at the expense of so many of our partner institutions? Here are three thoughts:

Present the results in line with your own institutional mission

Decide upfront what matters to you and seek to showcase your results in line with those ambitions. This might include highlighting your research environment even though it technically accounts for only 15 per cent of the overall result, focussing on flagship discipline areas, or demonstrating global connectivity through the international reach of your case studies.
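To see where that 15 per cent figure comes from: in REF 2021 the overall quality profile is a weighted combination of three sub-profiles, with outputs at 60 per cent, impact at 25 per cent and environment at 15 per cent. A rough sketch of that combination – the sub-profiles here are invented for illustration only:

```python
# REF 2021 sub-profile weightings: outputs 60%, impact 25%, environment 15%
WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}

def overall_profile(sub_profiles):
    """Weighted combination of the three sub-profiles into an overall profile.

    Each sub-profile maps star level to the percentage of activity
    assessed at that level.
    """
    overall = {stars: 0.0 for stars in (4, 3, 2, 1, 0)}
    for name, profile in sub_profiles.items():
        for stars, pct in profile.items():
            overall[stars] += WEIGHTS[name] * pct
    return overall

# Invented example: a very strong environment profile moves the headline
# numbers far less than outputs do, because of its 15% weighting.
print(overall_profile({
    "outputs":     {4: 30, 3: 50, 2: 18, 1: 2, 0: 0},
    "impact":      {4: 50, 3: 40, 2: 10, 1: 0, 0: 0},
    "environment": {4: 75, 3: 25, 2: 0,  1: 0, 0: 0},
}))
```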

Present the results as part of the total picture of your institutional success

Remember the REF only offers a partial window onto your overall research performance. And whilst your research is mission-critical, you have other missions too. Offering up your REF results in context helps fulfil the “humility” principle of the Metric Tide report. We can celebrate what the REF results do tell us whilst also presenting the narrative that the REF results leave untold: the stories of lives changed, advances made, people included, research opened up and plans for the future.

Present the results in a way that foregrounds collegiality, rather than superiority

Avoid unnecessary comparisons with other institutions – in particular, showing off your rank. If you have to benchmark, use an appropriate peer group rather than a whole cohort of unrelated institutions. And highlight the success of your region and/or nation, as well as the role you played in that success.

I really hope that, as a nation, we keep our responsible metrics principles uppermost in our thoughts as we present our REF results. I hope we can see them as a learning point rather than just a marketing opportunity; something that enables us to collectively celebrate our strengths rather than to judge individual weaknesses. To my mind, this is the foundation we need as we move into a post-pandemic research recovery phase together.

4 responses to “Using REF results responsibly”

  1. Interesting propositions. Still, the problem lies not only in the presentation of results, but in the lack of assessment of the collaborative approach inherent in the best research. Perhaps greater weight to the unit and institutional environment statements could showcase this.

  2. One option would be not to publish the results at all but simply use them in the formula to determine QR, which is why quality assessment was introduced in the first place.

  3. Useful blog, thanks; but I agree with Patrick: it’s not just about the presentation of results, but rather that a quest for “excellence” is making us blind to the damage we have done to serious scholarship – the REF is pregnant with unintended consequences that are making the university sector a nasty dog-eat-dog world that is also weirdly anti-intellectual. Please see my WonkHE blog “How we get what we value”: https://wonkhe.com/blogs/how-we-get-what-we-value/