There is a multi-directional relationship between research culture and research assessment.
Poor research assessment can lead to poor research cultures, as the Wellcome Trust’s 2020 survey made very clear.
Assessing the wrong things (such as a narrow focus on publication indicators), or assessing the right things in the wrong way (such as societal impact rankings based on bibliometrics), is having a catalogue of negative effects on the scholarly enterprise.
Assessing the assessment
In a similar way, too much research assessment can also lead to poor research cultures. Researchers are among the most heavily assessed professionals in the world: they are assessed for promotion, recruitment, probation, appraisal, tenure, grant proposals, fellowships, and output peer review. Their lives and work are constantly under scrutiny, creating competitive and high-stress environments.
But there is also a logic (Campbell’s Law) which tells us that what gets assessed gets attention: if we assess research culture, this can drive greater investment in improving it. It is this logic that the UK joint HE funding bodies have drawn on in their drive to increase the weighting given to the assessment of People, Culture & Environment (PCE) in REF 2029. This makes perfect sense: given the evidence that positive and healthy research cultures are a vital ingredient of research excellence, it would be remiss of any Research Excellence Framework not to attempt to assess, and therefore incentivise, them.
The challenge we have comes back to my first two points. Even assessing the right things in the wrong way can be counterproductive, as can increasing the volume of assessment. And given that research culture is such a multi-faceted concept, the worry is that the assessment job becomes so huge that it quickly turns burdensome, harming the very research cultures we want to improve.
It ain’t what you do, it’s the way that you do it
Just as research culture is not so much about the research you do as the way you do it, so research culture assessment should concern itself not so much with the outcomes of that assessment as with the way the assessment takes place.
This is really important to get right.
I’ve argued before that research culture is a hygiene factor. Most dimensions of culture relate to standards that it’s critically important we all get right: enabling open research, dealing with misconduct, building community, supporting collaboration, and giving researchers the time to actually do research. These aren’t things for which we should offer gold stars but basic thresholds we all should meet. And to my mind they should be assessed as such.
Indeed, this is exactly how the REF assessed open research in 2021 (and will again in 2029). They set an expectation that 95 per cent of qualifying outputs should be open access; if you failed to hit that threshold, the excess closed outputs were simply unclassified. End of. There were no GPAs for open access.
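To make the pass-or-fail character of that policy concrete, here is a minimal sketch of the threshold logic as I understand it; the function name and figures are illustrative, not the REF’s actual audit process.

```python
# A minimal sketch of the REF 2021 open access threshold logic, as I
# understand it; illustrative only, not the REF's audit code.
def unclassified_outputs(in_scope: int, closed: int) -> int:
    """Return how many closed outputs fall outside the tolerance.

    The expectation was that 95 per cent of qualifying outputs be
    open access, i.e. a 5 per cent tolerance for closed outputs.
    Excess closed outputs were simply unclassified: no sliding
    scale, no GPA.
    """
    tolerance = int(in_scope * 0.05)  # closed outputs permitted
    return max(0, closed - tolerance)

# e.g. 200 in-scope outputs permit 10 closed ones, so submitting 14
# closed outputs would leave 4 unclassified
assert unclassified_outputs(200, 14) == 4
```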
The tender for the PCE indicator project recognised the nature of research culture as a hygiene factor by proposing “barrier to entry” measures. The expectation seemed to be that, for some elements of research culture, institutions would have to meet a certain threshold, and that if they failed they would be ineligible even to submit to the REF.
Better use of codes of practice
This proposal did not make it into the current PCE assessment pilot. However, the REF already has a “barrier to entry” mechanism of course: the completion of an acceptable REF Code of Practice (CoP).
An institution’s REF CoP is about how it proposes to deliver its REF, not how it delivers its research (although there are obvious crossovers). And REF have distinguished between the two in their latest CoP Policy module governing the writing of these codes.
But given that REF Codes of Practice are now supposed to be ongoing, living documents, I don’t see why they shouldn’t take the form of more research-focussed (rather than REF-focussed) codes. It certainly wouldn’t harm research culture if all research performing organisations had a thorough research code of practice (most do, of course), one covering a uniform range of topics that we all agree are critical to good research culture. This could be a step beyond the current Terms & Conditions associated with QR funding in England. And it would be a means of incentivising positive research cultures without ‘grading’ them. With your REF CoP, it’s pass or fail. And if you don’t pass first time, you get another attempt.
Enhanced use of culture and environment data
The other way of assessing culture to incentivise behaviours, without it leading to any particular rating or ranking, is simply to start collecting & surfacing data on the things we care about. For example, the requirements to share gender pay gap data and to report misconduct cases have focussed institutional minds on those things without any associated assessment mechanism. If you check the Higher Education Statistics Agency (HESA) data on the proportion of male to female professors, you can see the ratio in most UK institutions heading in the right direction year on year. This is the power of sharing data, even when there’s no gold or glory on offer for doing so.
And of course, the REF already has a mechanism to share data that informs, but does not directly make, an assessment, in the form of ‘Environment Data’. In REF 2021, Section 4 of an institution’s submission was essentially completed for them by the REF team, which extracted from the HESA data the number of doctoral degrees awarded (4a) and the volume of research income (4b), and from the Research Councils the volume of research income in kind (4c).
This data was provided to add context to environment assessments, not to replace them. And it would seem entirely sensible to me to identify a range of additional data – such as the gender & ethnicity of research-performing staff groups at various grades – to better contextualise the assessment of PCE, and to get matters other than the volume of research funding up the agendas of senior university committees.
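Purely to illustrate the shape of this data, here is a hypothetical sketch of REF 2021’s externally-sourced Environment Data fields alongside the kind of additional contextual fields suggested above; every name and figure is invented, and none of it is drawn from REF guidance.

```python
# Hypothetical sketch of REF 2021 Environment Data plus possible
# additional PCE context fields; all names and values are invented.
environment_data = {
    # Completed by the REF team from HESA data
    "4a_doctoral_degrees_awarded": 310,
    "4b_research_income_gbp": 45_000_000,
    # Completed from Research Councils' records
    "4c_research_income_in_kind_gbp": 2_500_000,
}

# The kind of additional contextual data proposed above: proportions
# of research-performing staff by grade, gender, and ethnicity
pce_context_data = {
    ("professor", "female"): 0.31,
    ("professor", "ethnic_minority"): 0.12,
    ("lecturer", "female"): 0.48,
}
```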
Context-sensitive research culture assessment
That is not to say, of course, that Codes of Practice and data sharing should be the only means of incentivising research culture. Culture was a significant element of REF environment statements in 2021, and we shouldn’t row back on it now. Indeed, given that healthy research cultures are an integral part of research excellence, it would be remiss not to allocate some credit to those who do this well.
Of course, there are significant challenges to making such assessments robust and fair in the current climate. The first is the complex nature of research culture – the fact that no framework is going to cover every aspect that might matter to individual institutions. Placing boundaries around what counts as research culture could mean institutions cease working on agendas that are important to them because they ostensibly don’t matter to the REF.
The second challenge is the severe and uncertain financial constraints currently faced by the majority of UK HEIs. Making the case for a happy and collaborative workforce when half of it is facing redundancy is a tough ask. A related issue is the hugely varying levels of research (culture) capital across the sector, as I’ve argued before. Those in receipt of a £1 million ‘Enhancing Research Culture’ fund from Research England are likely to make a much better showing than those doing research culture on a shoestring.
The third is that we are already half-way through this assessment period, and we are only expected to get the final guidance in 2026 – two years prior to submission. Given the financial challenges outlined above, this will make this new element of our submission especially difficult. It was partly for this reason that some early work on the assessment of research culture was clear that it should celebrate the ‘journey travelled’ rather than the ‘destination achieved’.
For these reasons, to my mind, the only things we can reasonably expect all HEIs to do right now with regard to research culture are to:
- Identify the strengths and challenges inherent within your existing research culture;
- Develop a strategy and action plan(s) by which to celebrate those strengths and address those challenges;
- Agree a set of measures by which to monitor your progress against your research culture ambitions. These could be inspired by some of the suggestions resulting from the Vitae & Technopolis PCE workshops & Pilot exercise;
- Describe your progress against those ambitions and measures. This could be demonstrated both qualitatively and quantitatively, through data and narratives (a rough sketch of what such a record might look like follows this list).
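As a purely hypothetical sketch, the four steps above might boil down to a minimal monitoring record along these lines; every field name, measure, and figure here is invented for illustration rather than drawn from any REF guidance.

```python
# A hypothetical, minimal research culture monitoring record; all
# fields, measures, and figures are invented for illustration.
culture_plan = {
    "strengths": ["strong postgraduate research community"],
    "challenges": ["recognition and reward for technical staff"],
    "actions": ["deliver a technician career development programme"],
    "measures": {
        # baseline, latest, and target for an invented survey score
        "staff_survey_culture_score": {
            "baseline": 3.2, "latest": 3.6, "target": 4.0,
        },
    },
    "progress": "Survey scores improving year on year; narrative "
                "case studies in development alongside the data.",
}
```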
Once again, there is an existing REF assessment mechanism open to us here, and that is the use of the case study. We assess research impact by effectively asking HEIs to tell us their best stories – I don’t see why we shouldn’t make the same ask of PCE, at least for this REF.
Stepping stone REF
The UK joint funding bodies have made a bold and sector-leading move: using the mechanism of assessment to focus research performing organisations’ attention on the people and cultures that make for world-leading research endeavours. Given the challenges we face as a society, ensuring we attract, train, and retain high-quality research talent is critical to our success. However, the assessment of research culture has the power to make things either better or worse: to incentivise positive research cultures, or to entrench burdensome and competitive cultures that don’t tackle the issues that really matter to institutions.
To my mind, given the broad range of topics institutions are working on in the name of improving research culture, where we are in the REF cycle, and the financial constraints facing the sector, we might benefit from a shift in the mechanisms proposed to assess research culture in 2029 – and from seeing this as a stepping stone REF.
Making better use of existing mechanisms such as Codes of Practice and Environment and Culture data would assess the “hygiene factor” elements of culture without unhelpfully attaching star ratings to them. Ratings would be better applied to the efforts institutions take to understand, plan, monitor, and demonstrate progress against their own, mission-driven research culture ambitions. This is where the real work is, and where real differentiation between institutions can be made when contextually assessed. Then, in 2036, when we can hope the sector will be in a financially more stable place, and with ten years of research culture improvement time behind us, we can assess institutions against their own ambitions and ask whether they are starting to move the dial on this important work.
I agree with much of what you say here, Lizzie, but wondered at what level you think case studies should be used – both institutional and unit level? I also think there is scope for more tightly defining what institutions/units include in their PCE statements and laying out what possible evidence could be cited (possibly with main panel variants, as before, to recognise disciplinary differences); the current proposals could be a starting point for this (but are certainly not the end point).
Thanks Elizabeth! I guess what I’m wondering is whether we should move away from ‘PCE statements’ altogether in 2029, and instead ask HEIs to describe the approach they are taking to understand, plan, & monitor improvements to their research cultures. Case studies (& data) could be offered as early evidence of the progress they are making at both unit and institution level. I feel this would give HEIs greater flexibility to make context-specific and genuine change across a greater range of culture domains than currently covered by the PCE enablers, and deliver that ‘journey travelled’ assessment everyone was calling for so clearly in the PCE workshops.
What a great article. I agree with the hygiene part especially, but think we could adopt something similar to the way wine awards work: a bronze medal means technical excellence with no flaws; silver and above means that plus expressiveness of place and character; trophies are best in class. A scale like this could address the hygiene factors and then move to typicity in terms of research mission. It would also really force universities to make their strategies authentic. Mine wants to change the world but has no measure of what success would look like. Your proposal would force change in ways that are contextually valid, and may also get minds focused on achieving what is possible.
I’m sorry, but all of this is not going to improve research culture. You cannot impose culture from above or via quantitative metrics. Campbell’s law, which you cite, says that using a quantitative indicator will “distort and corrupt the social processes it is intended to monitor”. As you point out in the beginning, doing this assessment is counterproductive and burdensome. In order to avoid being burdensome, which is your goal, the UK should simply not do any of this “assessment”.
Here’s an example. The University of Reading wants to do better in the REF and get better outputs and better culture score. So, it creates a Peer Review of Outputs form that all researchers have to fill in, multiple times a year. The idea is that this will incentivize researchers to get better feedback and thus have a better research culture and produce outputs with more stars. But having to fill in a form is not going to make someone do better work. I wish it would. I wish that having “a set of measures” to “monitor our progress on” would make people want to do good work and care about finding the truth about the world. But it will not. And by just producing more paperwork on top of everything else, it is actually making the situation worse.
The idea that the REF will lead to improvements in research has no evidence or coherent logic behind it. Where is the best research done? One of the top candidates is the USA. We should copy their REF-equivalent. Oh wait, they don’t have one.
How do we know that the best research is done in the USA if there is no agreed way to assess its quality?