There’s a lot of cross-sector debate about how we can assess something as nebulous as research culture.
In January, the REF 2029 Steering Group announced that it is entering an extensive period of engagement with the research community to develop a set of indicators for the newly proposed People, Culture and Environment (PCE) part of the assessment.
To help inform this conversation, we wanted to share what we have learned from developing indicators for research culture at Newcastle University to track the impact of our five-year Research Culture Action Plan.
In collaboration
We wanted to ensure that measuring research culture change across our own institution is meaningful, builds trust, and avoids unintended consequences for the research community. We didn’t want to create metrics by committee, but to develop them in collaboration with the research community. We also wanted to take a balanced and pragmatic approach. After all, there’s no such thing as a bad metric, only an irresponsibly used metric. This message sits at the heart of our university’s statement on the responsible use of metrics, and of our development of a Research Culture Index (RCI) to help us track our institution’s collective progress in enhancing our research culture.
We began by defining what a positive research culture looks and feels like, working with 100 members of our community across career stages, job families, and disciplines. From this, we identified a basket of measures to track institutional improvements in research culture against this framework. Measures included in our RCI had to be feasible and viable, and were deemed to create positive – rather than adverse – consequences for research culture change. By starting with what we value, we generated a list of metrics that are diverse and, we trust, fit for purpose.
We learned lessons throughout the development of the index. The process (we employed the INORMS SCOPE approach) challenged our institutionalised thinking about which metrics are the most useful, and which – for this exercise, at least – we should park.
The metric side
We found that, with some probing, some potential metrics proved unsuitable for a number of reasons. Some could only sit on our “wish list”: they needed system improvements to make them viable. A good example was the adoption of preprints as a publishing route – we currently have no way to reliably record this across the institution.
Probing was invaluable not only for identifying metrics that were unsuitable (at least for now), but also for surfacing a set of metrics that align with our values for a positive research culture, and by which we can be held to account on the things that really matter to our research community. These are likely to be measures that other universities could readily collect.
For example, under the value of “fairness and inclusion”, we included a measure of diversity of ethnicity and gender across the UKRI population for grant awards and applications. In line with sector-wide trends, the data confirm what we already suspected: the proportion of Newcastle University UKRI grant holders from ethnic minorities remains low. As part of our Research Culture Action Plan, it is important that we better understand the reasons behind this and work with colleagues both at Newcastle University and across the research ecosystem to address it.
Our completed index consists of 17 measures – five of which are now part of refreshed key performance indicators, reported annually to our university’s council on the progress of our wider university vision and strategy. These are: the number of grant applications and awards with co-investigators from multiple academic units; diversity of ethnicity and gender across the UKRI population for grant awards and applications; Postgraduate Research Experience Survey overall satisfaction (benchmarked against the Russell Group median); completion rates for doctoral students within four years; and the percentage of open research deposits in our institutional repository.
While our index may not be applicable to every institution or need, our approach, and what we learned from it, could help inform how metrics for research culture are developed. In terms of its utility for the forthcoming exercise, one important consideration will be the level at which the SCOPE framework is applied, as indicators developed for unit-level and institution-level assessment are likely to look very different.
For example, some of our RCI measures could be applied at UoA level (e.g. take-up of the 10 days of professional development stipulated in the Researcher Development Concordat could work at any scale), and others could not (e.g. at UoA level, datasets would likely be too small to assess diversity in UKRI grant applications and awards by ethnicity and gender).
Research communities at the heart of the system
For the record, we have welcomed the REF’s proposed increased emphasis on research culture. Research culture is a crucial cornerstone of research excellence. However, we would advise an approach whereby institutions can identify, tackle, and evaluate research culture issues in their own contexts.
Like others in the sector, we hope that research culture will be measured in the upcoming REF in a way that rewards the journey travelled, rather than pitting institutions against each other in a spiralling research culture competition. This approach would also enable universities which, like us, have already invested in listening to their communities and identifying priorities, to keep focussed on the issues that matter most to those communities. We want to improve the experiences of the people who deliver, support and enable research. Only by creating meaningful improvements in our research culture will we ensure a sustainable system for delivering world-leading research in the future.
We are heartened to see that, in the Initial Decisions report, the REF is committed to codifying research culture in a way that adheres to the principles of responsible research assessment and ensures metrics are aligned to an institution’s research activity. With the adoption of the SCOPE framework, we know from our own experience that measures can be developed according to an institution’s local context. Most importantly, this methodology puts people – our research community – at its heart. It is designed to build metrics that have been subject to a 360-degree analysis of their unintended consequences for individuals.