The proposal for the 2028 Research Excellence Framework to devote a quarter of the score to “People, Culture and Environment” (PCE) has generated much debate across the sector.
Concerns have focused on the lack of an agreed definition of “research culture”, how it is to be measured and assessed, and the relatively short time frame for implementing the new statement.
Many have also questioned whether the REF is the appropriate mechanism for recognising and rewarding culture – given its context-dependent nature, and the challenge of measuring and comparing across very different institutions. Indeed, one of the key concerns is that research culture could be undermined by introducing a dynamic of inter-institutional competition and potentially narrow, distorting metrics.
But are these fears proportionate to what’s being proposed? And does the proposal really present such a radical shift? The comments that follow are intended to reassure on both fronts – and provide a pragmatic way forward, which hopefully we can all swing behind.
In high definition
Let’s take the definition first. The most common articulation is the Royal Society definition, which – to paraphrase – describes research culture as the set of norms and beliefs that shape the conduct and communication of research and influence researchers’ career paths.
To operationalise this broad definition, we clearly need to identify some key dimensions. I am going to suggest five – which should sound quite familiar.
One of these will need to capture the “People” element of the PCE. Here, the new REF could build on the REF 2021 Environment rubric, which asked institutions to describe “the institution’s staffing strategy, support and training of research students,” and “evidence about how equality and diversity in research careers is supported and promoted across the institution.” This could be expanded to capture broader aspects of employment conditions, career support and EDI.
Second, the PCE will need to capture the growing significance of “responsible research”. In REF 2021, the Environment narrative asked for information on research ethics and integrity, and on open access. This section might be broadened to capture more detail on open research, reproducibility, trusted research and aspects of international security, and environmentally sustainable research practices.
Third, no research environment can be properly assessed without understanding the submission’s overall strategy and the key research themes and groupings it comprises – along with some of their key achievements. This section could also incorporate a description of the unit’s contribution to [academic] knowledge and understanding, including the spread of outputs created across its research community over the REF period.
The fourth element is the unit’s infrastructure, facilities and other resources, which are so crucial for a flourishing environment. Again, this part was already covered in 2021, and should be retained to encourage appropriate investment and support for sustaining excellent research.
Finally, the PCE will need to capture the unit’s broader contribution to society: its strategy for impact and engagement, how it supports these, and the spread of impact and engagement activity across the unit.
Across all of these elements, both EDI and collaboration beyond the institution could be rewarded as key ingredients of a flourishing environment – incentivising institutions to demonstrate how they have cooperated, shared and learned through working with others.
Getting out the measuring tape
How would we measure these five elements? This part is crucial, as adopting crude metrics risks narrowing institutions’ focus and encouraging gaming and other distorting effects. This suggests the need for judicious use of two types of measurement.
First are some general contextual metrics, at institutional and/or unit level, which provide background and can help panels triangulate what they read in PCE narratives against the data. Previous REFs included research income and PGR completions. We might want to add one or two further metrics aimed at capturing key elements of culture, for example related to diversity in staff composition or promotions (see the useful list provided in The Metric Tide). We might also include a threshold expectation of signing up to key concordats or other sector codes, although we need to be careful not to overload institutions.
Second, metrics to evidence research culture will need to be flexible – able to capture the specific context of the unit. Research culture will vary across units, linked to their trajectory, disciplinary norms and institutional priorities. Moreover, a strong research culture will involve co-creating objectives and approaches with the research community – rather than imposing uniform, top-down priorities.
Units should also be rewarded for the value-add they produce over time. This might include demonstrating progress since 2014 where possible – but importantly, the plans set out in 2028 should be used as a baseline for assessing progress in subsequent REFs.
Such accounts will necessarily involve a narrative component – albeit with clear guidance on the questions to be addressed. However, narrative should not be equated with lack of rigour. Each claim will need to be evidenced with data – whether in the form of specific examples, or with metrics.
The latter could be drawn from a basket of suggested metrics, from which submissions could select those that best evidence their narratives. This would ensure a degree of overlap in the metrics deployed across units, allowing for a comparative dimension, but without shoehorning narratives into a one-size-fits-all template.
The PCE could be assessed based on the 2021 criteria of vitality and sustainability – but with additional criteria related to inclusivity and collaboration, to incentivise and capture the behaviours we want to promote across the sector.
By increments
A key advantage of the approach sketched above is its incremental nature: it builds on REF 2021, retaining the content of the Environment narrative but expanding it to capture more elements of people and culture. This would justify expanding its score from 15 per cent to, say, 20 per cent – especially if it incorporated the proposed statements on impact and engagement, and on contribution to knowledge and understanding.
And it would do so in a way that introduced more rigour into assessment – but without sacrificing aspects of context and trajectory that are so crucial to understanding cultures.
Not least, the suggested approach would build in criteria that reward and incentivise the behaviours that REF should be promoting: inclusivity, openness, collaboration and learning. It is certainly not beyond our wit to build these virtues into how we assess and reward this crucial component of research excellence.