Expensive, time consuming, and unpopular – why is it so hard to end grant funding peer review?

Peer review of grant funding is time consuming, expensive, inefficient and yet seemingly invulnerable. James Coe looks to a new report for a door out of the gilded cage

James Coe is Associate Editor for research and innovation at Wonkhe, and a senior partner at Counterculture

Peer review seems to have survived as the least bad of a range of possible options for grant funding.

Nobody likes spending the majority of their research time writing grant applications, reviewers do not like reviewing innumerable bids that have no chance of winning, and the system has to discriminate between bids of almost identical quality in the allocation of millions and millions of pounds of public money.

And yet, this system somehow endures. It isn’t necessarily for lack of good ideas – the Research on Research Institute is full of them – it’s that cultural norms are hard to shift. Versions of lotteries may, in some circumstances, be no less arbitrary than the current system – but they somehow feel unfair. Despite the absolute centrality of how funding is allocated, it has received less scrutiny, fewer column inches, and – a few notable interventions aside – less attention than other research issues. And like much of higher education, the way things have always been done is a driving force for the way things will continue to be.

Fees and loathing in late funding

It is useful to think of peer review of grant funding as two interlocking markets with different kinds of inefficiencies. The producers of grant bids – academics and professional services staff – face a low chance of success and must expend a great deal of effort to get funding. They could choose to spend their time doing other things, or raise money in other ways, but grant funding functions not only as a way to allocate funds but as a marker of prestige. It is an external validation that an idea, or even a researcher, is good in the eyes of their peers. A lottery by definition does not carry that kind of signal.

As long as peer review grant funding exists, this kind of inefficiency is inevitable. There are things that could be done to make criteria more specific, forms shorter, information recyclable, and so on. However, this can only go so far where the grant-funding architecture remains as it is today. Even worse, as institutional funding becomes tighter universities are incentivised to chase more funding (questions of full economic cost aside), spending even more precious research time on diminishing prospects of return.

For the overall research system, a slightly separate market exists. The benefit of large open funding calls is that, in theory, they surface the largest collection of ideas. Again, in theory, the greater the competition for funding, the more good ideas can be funded – though this rests on the assumptions that there is a correlation between the quality of bids and the quality of projects, that researchers suitable for a bid will always bid for it, and that the bid design is good enough to attract the right bidders. These enormous challenges aside, the basic theory holds. The more ideas there are, the more ideas there are to choose from – which should mean more good research is funded.

The obvious consideration is whether it would be possible to design a system which still surfaces large numbers of research projects while radically reducing the bureaucracy on both ends of doing so. A new report out in pre-print – The costs and benefits of research grant peer review – may have some of the answers.

Universal basic research income

The most interesting idea within the report is that the emphasis on reforming funders is misplaced. This is because the vast majority of costs – the report estimates up to 89 per cent – are borne by applicants, not by funders. Clearly, universities will only reduce their own costs if they feel there are incentives to do so, but the question of what universities might do to reduce their own grant-funding bureaucracies is an interesting one.

The report puts forward two places where this bureaucracy emerges. The first is that, because success rates are so low, universities are incentivised to “gold-plate” funding bids. One of the more radical proposals is for a

basic research income to all researchers who meet a set ‘quality’ criterion. If structured in a way that would be generous enough to allow support of substantive research activity, but at a threshold to incentivize the pooling of funding through collaborations, this could significantly reduce the need for and demand for extramural research support.

The report is light on details on how this would work in practice. It’s possible to imagine this funding as a kind of more generous QR funding with a smaller allocation for grant funding. Equally, it could be a kind of trifecta funding where there is QR, a funding top-up for a minimum quality threshold (perhaps assessed by REF to remove duplication), and a strategic collaboration fund (maybe something like HEIF). Matthew Smith for the Council for the Defence of British Universities imagined a kind of “UBRI” where all academics would be awarded a basic research income, scaled depending on their discipline, which they could use as they pleased. It is entirely correct that this model would reduce bureaucracy, but it would also preclude a national funder like UKRI directing research policies aligned to government.

AI, again

The final idea for cost reduction is to look at the use of lotteries and AI through a cost-efficiency lens. As the report points out, its own surveys suggest concerns over bias and highlight challenges around giving applicants feedback. Perhaps, in an era of constrained resources, institutional exhaustion, and (some) desire for experimentation, the idea of a lottery could have its time to shine. Though the fear is always that, once researchers believe they are missing out based on chance rather than merit, a lottery will struggle to gain favour.

The purpose of reports like this is to push the envelope on what is possible. A system which is high cost, low success, and little loved should not be treated as an inevitable part of the research system. Any new ideas which emerge not only have to tackle the extreme complexity of the system but make the case that any upheaval would be worth it.

1 Comment
Bobby
26 days ago

The basic research income described is (from what I understand from Canadian colleagues) basically how Canadian Discovery grants work.

Historically, NSF grants in the US worked in much the same way: basically all good researchers in theoretical STEM areas had an NSF grant (in those areas mainly used for “summer money”).