The state of peer review

A new report by IOP highlights the fragility of the peer review ecosystem

James Coe is Associate Editor for research and innovation at Wonkhe, and a partner at Counterculture

Peer review holds up the whole research ecosystem. Through a sense of obligation, good will, and professional courtesy, researchers evaluate each other's work and in doing so help ensure the quality of the research on which we all rely. It is an admirable and crucial part of our research infrastructure, but it is fragile due to an ever-growing workload.

IOP Publishing has published the results of a survey of 3,064 physical science researchers on their perceptions and experiences of peer review.

Half of this cohort reported an increase in the number of reviewer requests they received in the last three years; 47 per cent of respondents receive fewer than one peer review request per month; 54 per cent of respondents feel they receive the right number of requests; and only 35 per cent of early-career researchers say they have enough time for peer review relative to the number of requests they receive. Around 35 per cent of respondents think generative AI will have a negative impact on the peer review process, compared to 29 per cent who think it will have a positive impact.

This survey is limited to the fields in which IOP Publishing publishes research, so it shouldn't be taken for granted that the physical and environmental sciences are representative of the experiences of all researchers. The survey is also global, so caution should be taken in assuming its trends are entirely applicable to the UK.

The IOP report paints a picture of a peer review system which relies on the good will and high workloads of researchers. It hints at similar themes to, but does not echo as strongly, work by the likes of Publons back in 2018, which found a system creaking under the weight of administrative demands, and another multi-university exercise to crowdsource views on peer review, which found it can be "slow, opaque and cliquey, and it runs on volunteer labour from already overworked academics."

IOP found that the proportion of respondents reporting bias has fallen from 24 per cent in 2020 to 16 per cent today. Respondents stated that the most prevalent form of bias was against authors from specific regions or countries. Alarmingly, one respondent shared that "I have to admit that I already assume a paper to be of low-quality if the author is based in [COUNTRIES REDACTED]." Again, this is a self-selecting survey drawn from a set of disciplines which is largely male-dominated. However, it is interesting to see equity in peer review discussed in relation to country of origin, an issue which generally seems to get less attention in other reviews of peer review.

It is also interesting to see discussion on why people choose to undertake peer review in the first place. After all, it is usually done without pay and it is time consuming and challenging. Respondents said their main motivation for reviewing papers was interest in the paper itself. Reputation of the journal and expectations of the scholarly community were the second and third most important motivating factors when accepting an invitation to peer review. In short, the sector’s single most important approach to validating research is held up by interest and altruism.

Elsewhere, 36.5 per cent of respondents believe that open-source generative AI will have little or no impact on the peer review process. IOP itself notes that it has undertaken "extensive testing and research to find hallmarks of generative AI in manuscripts and peer review reports," and it maintains a ban on the use of generative AI in writing or augmenting peer review reports. This policy could skew the responses, but it seems exceptionally unlikely that AI will not significantly impact peer review in the future, given the wide range of literature and policy already speculating on and constraining its application and impacts.

As more attention is paid to the global impacts of replicability challenges, error, and outright fraud, the quality of peer review is important in holding up the very basis of academic research. The major challenge is that, despite some areas of improvement, there seems to be no way out of the current system, which prizes more research, therefore requires more peer review, and in turn places more pressure on reviewers.

One response to "The state of peer review"

  1. The last sentence is the key one. There are more and more requests made, and the pressure of workload means that I, like many other academics, have to turn down many of these. Peer review is not rewarded in terms of money or career progression/promotion/appointment. Indeed, one could argue that currently researchers could better spend the time spent peer reviewing on writing papers or grant applications themselves. I'm not saying it should be like that, but there is little incentive to take on more peer reviewing.