The history of quality-related (QR) funding is the history of research assessment.
A Technopolis evidence review undertaken to inform the 2016 Stern review of the Research Excellence Framework sets out a pacey history of how QR came into being. In 1986 the University Grants Committee introduced a system of allocating funding based on research performance. The Research Selectivity Exercise, as it was known, ran through the 1980s before being replaced by the Research Assessment Exercise, which in turn gave way in 2014 to the Research Excellence Framework (REF) we all know and love.
QR funding is a bit of a misnomer, as it is really several separate funds with slightly different allocation methodologies brought together under a single umbrella. There is funding for research degree supervision; funding for supplementing work with charities; funding for research with business; funding for engagement with policymakers; and funding for national research libraries.
When most people refer to QR funding, they mean what is known as mainstream QR funding. This accounts for two thirds of the £1.97bn annual funding allocation, and is calculated based on quality (the REF results), volume of activity, and the cost of research.
The reason James likes QR and why parts of the sector really like QR is because of its flexibility. Mainstream QR funding can be used to crowd in new investment, test out the unpredictable, prop up existing activity, explore the difficult to fund, bring in new partners, and – not least – supplement competitive project funding that does not actually cover the full economic costs of research projects. It is the fund for the not yet done, not quite clear, or not quite sure if it might work. It also covers – in theory at least – the less glamorous work of overheads and infrastructure costs.
There are a number of notable reviews of research that bump into QR – the aforementioned Stern review concluded QR is “vitally important” for UK universities; the Smith/Reid review of international research and innovation collaboration advocated for more QR funding; and the Reid review of research and innovation in Wales suggested that increasing QR should be among the highest priorities for research growth.
It’s noticeable, however, that the focus of reviews is more typically on the impact of the REF processes for assessing research quality – in terms of the kind of research that is valued, or the impact on researchers’ careers. With QR itself there’s a long-standing debate about the relative weighting of world-leading and nationally excellent research when it comes to funding allocations. Rarely has the case been made that the allocation of QR funding itself should come into question.
To some extent the recent Nurse review of the research, development, and innovation landscape follows a similar trajectory – Nurse argues that QR is “essential and valuable to universities.” But he is not convinced it is working as it should, and recommends that both sides of the dual support system of competitive grant and QR funding should be reviewed.
Part of his concern is about the ongoing problem that university research is essentially loss-making – as Jonathan Grant points out, successive governments have not addressed this problem, yet research continues to be precariously subsidised by other income sources – mainly international student fees.
But the other part of Nurse’s concern seems to be about what universities are doing with their QR allocation, and whether they are making the best use of it, recommending:
Government, working with UKRI, the UK higher education funding bodies and the wider sector, should consider more transparent mechanisms to provide assurance and accountability on QR funding.
Opacity on how QR is spent by universities, Nurse concludes, “hampers a wider assessment of how public funding impacts the UK RDI landscape.”
While Nurse cautions against introducing additional bureaucracy to research accountability, his argument raises a prospect that must surely be alarming to university leaders – that in tracking more closely the uses to which QR is put in universities, government could start taking views about the uses to which it ought to be put.
It could too easily turn into the kind of “universities can’t necessarily be trusted to do the right thing” narrative that the research landscape has enjoyed somewhat more protection from than the teaching side of the operation.
Stand up and be counted?
If, therefore, government concludes that the full-scale review of the research funding landscape that Nurse recommends is required, universities may need to be prepared. Not to demonstrate that QR is spent appropriately according to an external logic – as that would mean conceding that there either is or should be an external logic to which universities are accountable – but that there is an internal strategic logic to the allocation in terms of universities’ aspirations for their research environment.
In other words, universities need to show, not that their strategy is the right one – because that’s a question for a university and its stakeholders, not the government or research funders – but that they are acting strategically.
For example, if an institution’s QR allocation is routinely topsliced for central administration and maintenance of infrastructure and then allocated to faculties or subject areas according to an internal algorithm with minimal direction or oversight of expenditure, that might be perceived as insufficiently rigorous – though a case could certainly be made that decisions about where to direct expenditure within subjects should be made at subject level.
To take another example, although QR funding explicitly follows research excellence, the internal allocation of funding does not have to follow the same logic. This can be because some areas of research cost more, because research areas are at different points in their development, or for any number of internally sound reasons. The point is not necessarily how the money is spent but whether there is a clear strategy behind spending decisions.
Increasingly, universities have defined strategies and intentions for research, and QR offers the kind of medium term financial predictability that can allow strategies to be funded over a number of years, so it could be relatively straightforward to articulate the link between strategy and funding decisions.
But the question then might be raised about whether, having allocated funding in accordance with strategic priorities, universities are very clear about the impact of that funding on advancing specific objectives, or are in a position to reprofile funds if the strategy evolves. While there’s no question that much of the research itself is excellent and impactful, how much do universities conduct their own research on how the research gets that way?
A further question might also be raised about cross-subsidy, specifically where QR is used to plug the deficit where project funding does not cover the full economic cost of research. While maintenance of research infrastructure and environment is a wholly legitimate use for QR funding, an external observer might expect that the effect should be that research as a whole is thereby put on a sustainable footing rather than requiring further cross-subsidy from other income sources – and wonder why it is that, despite this flexibility, the research funding system is still considered to be unsustainable.
The answer must be that either the pattern of project funding allocation leaves gaps that cannot be plugged by QR (plausible) or (also plausible) universities are making a strategic decision to undertake more research than is actually funded, and rely on a continued cross-subsidy to make the numbers stack up. Understanding these patterns and the decisions that underpin them could in theory help to work towards a more robust approach to sustainable research funding – but it could also lead to some uncomfortable conversations.
At this point it would be straightforward to call for more reporting and more accountability – we’re certainly curious about the answer to these questions. But crucially, this can’t be accountability at any cost – not if finding out were to suck in so much resource and researcher time on form filling that it actively harmed the research endeavour itself.
If and when QR is reviewed, the very act of reviewing could create a logic that leads to more reporting, and more bureaucracy. Whoever fronts that effort will no doubt be very senior and very well apprised of the need to strike a balance between bureaucracy and accountability. Universities can almost certainly help by being robust in resisting the former, and open and clear about the value of what they are doing on the latter.