So the FRAP outcomes are finally with us and as expected, the next REF is going to bring a greater focus on research culture.
REF2021’s Environment Statements are now REF2028’s People, Culture & Environment (PCE) Statements and the weighting has increased from 15 per cent to 25 per cent. The culture aspect will be assessed at both institution and discipline-level using a yet-to-be-developed “framework relating to research culture that will define core data or evidence requirements, while offering some flexibility for HEIs to tailor submissions to their own circumstances and priorities.”
This approach has largely been welcomed but perhaps predictably, not in all quarters. One of the more vocal detractors has been former Department for Education advisor (and current Director of Research at Policy Exchange) Iain Mansfield, who despaired at the prospect of “outstanding research… being slashed to just 50 per cent of the total score” whilst “running the right EDI schemes [became] fully half as important as actually producing outstanding research.” When folk countered that a positive research culture might actually be a prerequisite to producing more outstanding research, he argued that if this was the case, we’d all be doing it anyway.
He has a point.
It’s quite common to hear of research culture expenditure being justified in terms of the gold-and-glory that will surely result if we only get in there quick enough and spend enough fast enough. But the truth is that developing positive research cultures is not about short-term competitive advantage but about the long-term health of the sector. Research culture is a hygiene factor. We need to set the standard below which we must not fall, rather than making research culture the next big competition in Higher Education. It’s about stemming the loss: the loss of good people (through lack of diversity, poor leadership, toxic behaviours, lack of career paths, recognition, and reward) and the loss of quality (through questionable research practices, closed and irreproducible research), and not a short-cut to gain.
Research culture competition
So whilst I disagree with Iain Mansfield that it’s a mistake to allocate 25 per cent of REF outcomes to research culture, we need to make sure this has the desired long-term effect. The risk of pitting us all against each other in some unholy research culture competition is that hyper-competition was at the heart of so many of our unhelpful research cultures in the first place. In fact, a lot of the research culture challenges we face are outwith the agency and reach of individual institutions, leaving collaboration as our only mechanism to create real change.
One thing is for sure: if we don’t get this right and research culture does become the next big competition in HE, we all know who’s going to win: our large, old and wealthy friends, the Very Research Intensives. Not only do they do more research – a fundamental prerequisite when it comes to research culture – they also benefit from many other forms of social and economic ‘research capital’.
For starters, they have access to far more research culture funding. The Wellcome Trust are currently deliberating as to which of their already well-funded institutions they are going to further fund to improve their research cultures. Similarly, the research culture funding allocated to English HEIs by Research England is based on institution size. The have-nots get £50K, the have-lots get £1M. And whilst you might argue that more researchers require more support, remember there are economies of scale here. A research culture post costs the same whether it’s in Cumbria or Cambridge, but it will absorb a much lower proportion of a £1M budget than that of a £50K one. (As Terry Pratchett taught us, it’s expensive to be poor).
The other privilege available to the privileged of course, is direct access to a host of exclusive networks to support their efforts: Researchers 14 (for researcher developers in Russell Group HEIs), the Brunswick Group (for REF aficionados in Russell Group HEIs), the Research Directors Group for Russell Group HEIs, and of course Research Libraries UK (for Russell Group HEIs and friends thereof)… You get the picture.
There is a very real fear that, although the introduction of research culture assessment has the potential to level a playing field distorted by previous REF environment assessments, the new approach may simply reward cumulative advantage, given the macro-culture in which our individual research cultures sit.
What to do?
In March I was invited to attend a Cambridge Science and Policy workshop exploring how the REF could do more to recognise positive and inclusive research cultures, the report from which is cited in the REF Initial Decisions. Despite having to wrestle with my conscience about attending an “inclusive research cultures” event on the evening of a strike day in half-term which consisted of 50 per cent Cambridge folk, 90 per cent Russell Group, I was ultimately glad I did.
Firstly, there seemed to be agreement that research culture was a hygiene factor that could be defined by a core set of indicators. I pointed out that this is exactly how open access was treated in REF 2021. There were no star-ratings; you either hit the 95 per cent threshold or your score got dinged. End of.
There was an acceptance that all institutions are in a different place when it comes to research cultures. And what mattered was not so much where you were, but whether you were on the journey and how far you had come: a distance travelled model.
And there seemed to be agreement that making research culture assessment a competitive process with rank-able outcomes would militate against the collaborative and collective approaches to research culture improvement that we all need. Instead of being incentivised to hide our research culture good practice behind internal firewalls to avoid another institution getting the REF benefits, we should be required to share our successes – and failures.
Whilst the concept of having a core set of research culture indicators – inspired in part by our Harnessing the Metric Tide report recommendation of using “Data for Good” – seems to have made it into the REF Initial Decisions, it is unclear whether these are to form a baseline of acceptability or a set of rateable dimensions.
Again, the Initial Decisions’ idea of “tailoring submissions to institutional priorities” feels like an interpretation of the “distance travelled” approach but is not quite the same thing. Context is king when it comes to research culture assessment. And of course, a critical factor when considering someone’s travel history, is the resources they have available to them to do the travelling. ‘Research capital’ is heavily size- and location-dependent. (Don’t forget our Welsh, Scottish & Irish friends have no dedicated research culture allocations). Travelling 50 miles on foot is quite a different thing to travelling 50 miles business class in a private jet.
Finally, the suggestion of sharing case studies based on research culture activity (a la the Scottish Funding Council’s Outcome Agreements) doesn’t seem to have made it at all. I think this is a shame, because as I have argued before, whilst summative evaluation might motivate you to improve, it is formative evaluation that helps you to make the improvement. Knowing that Loughborough has a four-star research culture is all well and good, but knowing how they got there is far more useful to us in ultimately ensuring the long-term health of the sector.
At the workshop, I drew on Ottoline Leyser’s concept of “net contribution”, to suggest that institutions should be assessed not only on how they improve their own research culture but how they support others to improve theirs. This is a joint enterprise, after all.
Everything to play for
I mean, it’s early days and there is everything to play for here. The joint funding bodies are nothing if not consultative, and a consultation is scheduled for the Autumn/Winter to flesh out some of the detail. We need to ensure that all institutions get a fair hearing here. There is no reason why a smaller, less research-intensive institution shouldn’t care as much about their research community and the standards they uphold as a larger research intensive, and this should be reflected in the assessment design and outcomes.
Establishing baseline levels of acceptability and context-specific expectations will be critical (what did you do with what you had?) and should leave institutions with a clear sense as to where and how they might improve, not reduce them to a number on a ghastly research culture ranking.
I should like to acknowledge the helpful feedback and input of Dr Helen Young on an earlier draft of this piece.