The HE green paper signals a radical overhaul of the research funding system, but leaves the detail to be filled in by Sir Paul Nurse’s review of the research councils.
Like the fourth of the New Testament gospels, Part D of today’s HE green paper, on “reducing complexity and bureaucracy in research funding”, is its most enigmatic. While there can be little doubt about the desired destination – ministerial aspirations for a simpler funding system have been well-trailed – the roadmap from here to there remains sketchy, and requires a certain amount of reading between lines. This is also the briefest section of the green paper: compared to the loving attention lavished on every detail of the TEF, the future of the REF merits just two pages of discussion.
Sir Paul Nurse looms large. While he’s no John Chilcot, slippage in the timetable for his review of the research councils (originally scheduled for the summer or early autumn) has prevented a neat sequential flow from Nurse to the green paper and then to the spending review. Once the decision had been made to abolish HEFCE and create the Office for Students (OfS), the green paper had no choice but to address the architecture of the research funding system, given HEFCE’s role in the REF and the allocation of quality-related funding. But a lot now hinges on Nurse’s conclusions, described as a “critical input” which the green paper doesn’t want to pre-empt.
At the same time, the green paper clearly ups the ante for Nurse. Early drafts of his review, which are now in circulation, suggest that he will recommend a series of incremental reforms, which will preserve the seven councils, but create a new ministerial-led committee to set overarching strategy and priorities for research funding. This would fit with the approach that the councils themselves have proposed in their ‘Research Councils Together’ plan.
But the fundamental reshaping of the HE institutional landscape envisaged in the green paper sits somewhat uncomfortably with such modest tinkering. The absence of any reference to the ‘Research Councils Together’ plan in the green paper, and the more radical thrust of Sajid Javid’s BIS2020 agenda, suggest that the option of a research council merger remains open for now, and Nurse will need to make a compelling case if he wants to preserve the status quo. Rumours suggest that his interim findings may still be published ahead of the 25 November spending review; but I wouldn’t be surprised if debates still raging behind the scenes hold it up for a little longer.
One aspect of the green paper that will be widely welcomed is its unambiguous endorsement of the dual support funding system. As the paper says, “Dual support is established and respected…[and] sustains a dynamic balance between research which is strategically relevant and internationally peer-reviewed and research which is directed from within institutions.” As a result, “we are committed to the retention of the dual support system as part of a reshaped research funding landscape.”
Of course, in a post-HEFCE world, the question then becomes who will administer the quality-related (QR) side of dual support? The green paper floats a range of options, including the creation of a new body specifically for this purpose. But given the wider push for simplification, the clear logic is for QR funding to move over to the research councils.
This raises a number of issues. First, there are questions of governance: how will the QR budget line be protected (a ring fence within a ring fence?) such that it can’t be raided to make up shortfalls in the research council system, or indeed to meet new priorities that emerge from Nurse’s overarching committee? The green paper acknowledges the need to “ensure the integrity of the dual funding system”, but in the heat of a funding crisis two or three years down the line, it may prove harder to maintain this separation. The sector will be watching closely to see what safeguards are put in place, and how watertight these are.
A second set of questions concerns capability: over successive cycles of RAE and now REF, HEFCE has built up unique expertise in the design and administration of research assessment processes, and in the way that these interact with HEI systems and cultures. No such expertise exists in the research councils, which by necessity interact with HEIs in more siloed, disciplinary ways. The ‘Research Councils Together’ plan makes no reference to how they would meet this new function. And a potential role for the councils in the REF/QR also lies beyond the formal terms of reference for the Nurse review (though this may now be added to his report). Elsewhere in the green paper, there is a welcome acknowledgment of the need to preserve the expertise that resides within HEFCE. As part of this, I think it will be imperative that the experience and knowledge of key individuals, such as David Sweeney, Steven Hill and colleagues in HEFCE’s research policy team, is retained and moved across to the research councils. Otherwise, the councils will be starting from scratch to rebuild a sophisticated evaluation machine without any input from its principal designers and engineers.
A final set of questions concerns methods. The green paper makes sensible points about the need to minimise the costs and administrative burden of processes like the REF. This leads it to reopen the issue of metrics, which my recent report The Metric Tide addressed in detail, and to consider a range of options for future REF exercises “such as making greater use of metrics and other measures to ‘refresh’ the REF results and capture pockets of research excellence in between full peer review.”
I met yesterday afternoon with Jo Johnson to discuss metrics in the context of the green paper, and without breaching any confidences, we talked about why – in the view of my independent review group – a wholesale shift to a metric-based REF would be impossible, even on the slightly extended timetable of 2021. It is of course possible to design an algorithmic method for the allocation of QR funding, based on available bibliometrics. But the conclusion of my report still holds: “No set of numbers, however broad, is likely to be able to capture the multifaceted and nuanced judgements on the quality of research outputs that the REF process currently provides.”
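To see why such an algorithmic allocation is easy to build but hard to defend, here is a deliberately naive sketch. All institution names and figures are hypothetical, invented purely for illustration: each HEI’s share of a QR pot is made proportional to its citation-weighted journal output, which means anything not indexed in citation databases (performances, exhibitions, compositions, many monographs) scores zero and simply vanishes from the formula.

```python
# Hypothetical sketch: a purely bibliometric QR allocation.
# Institutions and citation counts are invented for illustration.

def allocate_qr(pot, citations_by_hei):
    """Split a QR funding pot in proportion to each HEI's citation count."""
    total = sum(citations_by_hei.values())
    return {hei: pot * c / total for hei, c in citations_by_hei.items()}

shares = allocate_qr(1_000_000, {"HEI A": 500, "HEI B": 300, "HEI C": 200})
print(shares)  # → {'HEI A': 500000.0, 'HEI B': 300000.0, 'HEI C': 200000.0}
```

The arithmetic is trivial; the problem is the input. An HEI whose best work is practice-based contributes nothing to its own citation count, however excellent the research.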
Metrics can support expert peer review in the REF (as they did for some panels in 2014). But they can’t supplant it. And I shared with the minister some new data to illustrate why. Whatever Elsevier and Thomson Reuters may say, the challenge of bibliometric coverage and robustness across disciplines is still significant.
The table below highlights a sample of those HEIs for which journal articles were a minor component of research submitted to REF 2014. It also includes a few larger research-intensive HEIs (Oxford, Cambridge, UCL, LSE etc) for which the percentage is much smaller but the number of ‘missing’ outputs is significant. These figures demonstrate how much excellent research would not be counted using a purely citation-based approach.
| HEI | % of non-journal outputs submitted to the REF | Number of ‘missing’ outputs | QR funding FY2015/16 (£) |
|---|---|---|---|
| Royal Academy of Music | 93 | 63 | 308,066 |
| University for the Creative Arts | 86 | 89 | 377,469 |
| University of the Arts, London | 81 | 352 | 3,519,985 |
| Royal College of Music | 74 | 56 | 461,794 |
| Glasgow School of Art | 70 | 125 | |
| University of Oxford | 24 | 2,025 | 139,061,600 |
| University of Cambridge | 21 | 1,534 | 120,096,538 |
| University of Bristol | 15 | 603 | 46,556,048 |
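The percentages above are simple to compute once you have submission records. The sketch below shows the calculation on invented data (the output types and counts are hypothetical, not drawn from the actual REF 2014 submissions); the point is that every output whose type is not a journal article falls outside standard citation indexes and becomes ‘missing’ under a purely bibliometric approach.

```python
# Hypothetical illustration: share of non-journal ('missing') outputs
# in a REF-style submission. Data below is invented for the sketch.

def non_journal_share(outputs):
    """Return (% of non-journal outputs, count of non-journal outputs)."""
    missing = sum(1 for o in outputs if o["type"] != "journal-article")
    return round(100 * missing / len(outputs)), missing

sample_submission = (
    [{"type": "journal-article"}] * 7
    + [{"type": "performance"}, {"type": "exhibition"}, {"type": "composition"}]
)
print(non_journal_share(sample_submission))  # → (30, 3)
```

For a conservatoire submitting mostly performances and compositions, that first figure climbs above 90, as the table shows for the Royal Academy of Music.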
So on metrics, while the thrust of the green paper, to incorporate quantitative data more intensively in research evaluation, is right, there are no easy wins here. In measuring research outputs, there are serious issues of coverage, especially across the arts, humanities and social sciences. And for impact, the task is even harder: as both my review and a separate study by King’s College London and Digital Science show, we simply don’t have reliable quantitative indicators that can capture the richness and diversity of impacts in the near-7,000 REF case studies.
Instead of diving headlong into the metric tide, the focus for the next five years should be placed firmly on improving the robustness, coverage and interoperability of the datasets that we have, and applying them responsibly in the management of the research system. This is just one of the issues in the green paper which will I’m sure provoke a lot more debate over the coming weeks.