Is research culture really too hard to assess?

Yolana Pringle and Ben Tatler make the case that the REF pause should be the moment to build on the most substantial sector-wide collaboration ever undertaken on research environments

Yolana Pringle is Deputy CEO and Director of Partnerships and Programmes at the Careers Research & Advisory Centre (CRAC).

Ben Tatler is Dean for Research Culture and a Professor of Psychology at the University of Aberdeen.

Assessing research culture has always been seen as difficult – some would say too difficult.

Yet as REF 2029 pauses for reflection, the question of whether and how culture should be part of the exercise is unavoidable. How we answer this has the potential to shape not only the REF, but also the value we place on the people and practices that define research excellence.

The push to assess research culture emerged from recognition that thriving, well-supported researchers are themselves important outcomes of the research system. The Stern Review highlighted that sustainable research excellence depends not just on research outputs but on developing the people who produce them. The Harnessing the Metric Tide report built on this understanding, recommending that future REF cycles should reward progress towards better research cultures.

A significant proportion of what we have learnt about assessing research culture came from the People, Culture and Environment indicators project, run by Vitae and Technopolis, and Research England’s subsequent REF PCE pilot exercise. Together with the broader consultation as part of the Future Research Assessment Programme, this involved considerable sector engagement over multiple years.

Indicators

Nearly 1,600 people applied to participate in the PCE indicators co-development workshops. Over 500 took part, drawn from 137 institutions and spanning all career stages and a wide range of roles. Representatives from ARMA, NADSN, UKRN, BFAN, ITSS, FLFDN and NCCPE helped facilitate the discussions and synthesise messages.

The workshops confirmed what many suspected about assessing research culture. It’s genuinely difficult. Nearly every proposed indicator proved problematic. Participants raised concerns about gaming and burden. Policies could become tick-box exercises. Metrics might miss crucial context. But participants saw that clusters of indicators used together and contextualised could allow institutions to tell meaningful stories about their approach and avoid the potentially misleading pictures painted by isolated indicators.

A recurring theme was the need to focus on mechanisms, processes and impacts, not on inputs. Signing up for things, collecting badges, and writing policies isn’t enough; there needs to be something meaningful behind them. This doesn’t mean we cannot evidence progress, only that the evidence needs contextualising. The process of developing evidence against indicators would incentivise institutions to think more carefully about what they’re doing, why, and for whom.

The crucial point that seems to have been lost is that REF PCE never set out to measure culture directly. Instead, it aimed to assess credible indicators of how institutions enable and support inclusive, sustainable, high-quality research.

REF PCE was always intended to be an evolution, not a revolution. Culture has long been assessed in the REF, including through the 2021 Environment criteria of vitality and sustainability. The PCE framework aimed to build on this foundation, making assessment more systematic and comprehensive.

Finance and diversity

Two criticisms levelled at PCE have been the sector’s current financial climate and the difficulty of assessing culture fairly across a diverse range of institutions. These are not new revelations. Both were anticipated and debated extensively in the PCE indicators project.

Workshop participants stressed that the assessment must recognise that institutions operate with different resources and constraints, focusing on progress and commitment rather than absolute spending levels. There is no one-size-fits-all answer to what a good research culture looks like. Excellent research culture can look very different across the sector and even within institutions.

This led to a key conclusion: fair assessment must recognise different starting points while maintaining meaningful standards. Institutions should demonstrate progress against a range of areas, with flexibility in how they approach challenges. Assessment needs to focus on ‘distance travelled’ rather than the destination reached.

Research England developed the REF PCE pilot following these insights. The pilot was deliberately experimental, testing more indicators than would ultimately be practical, as a unique opportunity to gather evidence about what works, what doesn’t, and what is feasible and equitable across the sector. Pilot panel members and institutions were co-designers, not assessors and assessees. The point was to develop evidence for a streamlined, proportionate, and robust approach to assessing culture.

REF already recognises that publications and impact are important outputs of research. The PCE framework extended this logic: thriving, well-supported people working across all roles are themselves crucial outcomes that institutions should develop and celebrate.

This matters because sustainable research excellence depends on the people who make it happen. Environments that support career development, recognise diverse contributions, and foster inclusion don’t just feel better to work in – they produce better research. The consultation revealed sophisticated understanding of this connection. Participants emphasised that research quality emerges from cultures that value integrity, collaboration, and support for all contributors.

Inputs

Some argue that culture is an input to the system that shouldn’t be assessed directly. Others suggest establishing baseline performance requirements as a condition for funding. However, workshop discussions revealed that setting universal standards low enough for all institutions to meet renders them meaningless as drivers of improvement. Baselines are important, but alone they are not sufficient. Research culture requires attention through assessment, incentivisation and reward that goes beyond minimum thresholds.

Patrick Vallance and Research England now have unprecedented evidence about research culture assessment. Consultation has revealed sector priorities. The pilot has tested practical feasibility. The upcoming results, to be published in October, will show what approaches are viable and proportionate.

Have we encountered difficulties? Yes. Do we have a perfect solution for assessing culture? No. But this REF is a huge first step towards better understanding and valuing the cultures that underpin research in HE. We don’t need all the answers for 2029, but we shouldn’t discard the tangible progress made through national conversations and collaborations.

This evidence base provides a foundation for informed decisions about whether and how to proceed. The question is whether policymakers will use it to build on promising foundations or retreat to assessment approaches that miss crucial dimensions of research excellence.

The REF pause is a moment of choice. We can step back from culture as ‘too hard’, or build on the most substantial sector-wide collaboration ever undertaken on research environments. If we discard what we’ve built, we risk losing sight of the people and conditions that make UK research excellent.
