After a distinguished career spanning academia, international institutions and HM Treasury, Nicholas Stern came to wider public attention in 2006 with the publication of his review of the economics of climate change. This hugely influential 700-page report provoked debate worldwide by making the economic case for early action on climate change in stark and compelling terms.
Ten years on, Lord Stern is back at the helm of another government review, which many are hoping will bring a similar clarity to arguments over the Research Excellence Framework. Stern 1.0 described climate change as the “greatest market failure the world has seen” which, if left unabated, could cost the world between five and twenty per cent of GDP each year. I imagine even the staunchest critics of the REF would admit, grudgingly, that it pales into relative insignificance. But the opportunity presented by Stern 2.0, to have one of the world’s top economists take a cool, hard and evidence-based look at the benefits and burdens of the exercise, is to be welcomed.
Stern 2.0 should also fill in some of the gaps created by the elliptical and disjointed coverage of research policy in the HE green paper and Nurse Review. It’s not that the REF has been neglected: within the £700+ billion that the UK Government spends each year, the £1.6 billion QR budget must be a candidate for the most intensively audited portion. On top of months of scrutiny by the four main and thirty-six sub-panels that determined the results of REF 2014, HEFCE has undertaken a comprehensive series of reviews of impact, costs, interdisciplinarity and other aspects. And my independent panel’s contribution, The Metric Tide, has added to the mix.
But it’s fair to say that much of this has focused on the detail: poring over the design and performance of the 2014 evaluation machine, with a view to pimping, tweaking and putting it back on the road towards REF 2021.
Earlier this month, under a freedom of information request, HEFCE released its draft consultation on the next REF (cancelled by Jo Johnson shortly before its scheduled publication, to clear the way for the green paper). It shows that HEFCE was intending to canvass views on the number and spread of panels; the potential introduction of a 5* rating; whether the impact component should rise to 25 per cent; and the possibility of decoupling individual staff from unit-of-assessment outputs.
There are lots of sensible, pragmatic suggestions in HEFCE’s consultation that, if implemented, could go some way towards simplifying the next exercise and reducing its administrative burden. But it presents one issue as settled: the aims and purposes of research assessment. And it’s here that I think the Stern Review could make a further important contribution, by opening up a discussion about the purposes of the REF, how these have evolved over successive cycles, and whether they need to be redefined.
This is something that Lord Stern hinted at in the joint letter he wrote with Sir Paul Nurse to the THE in December 2014, a few days before the REF results were announced:
“As presidents of the British Academy and of the Royal Society, spanning most areas of research covered by the REF, we believe it is time to ask crucial questions about whether we are assessing quality in the most sensible way, and whether the burden could be reduced and the value of the process enhanced… we urge that we begin by focusing on the big questions before being swamped by the detail.”
The Stern Review’s call for evidence (which is open for responses until 24 March) appears to lean towards a narrow interpretation of this question, stating at the outset that “the primary purpose of the REF is to inform the allocation of quality-related research funding.”
But as all good HE wonks know, this is in fact only one of three stated purposes of the exercise. HEFCE defines these as follows:
1. Allocation: “Our higher education funding bodies use the assessment outcomes to inform the selective allocation of their grant for research to the institutions which they fund.”
2. Accountability: “The assessment provides accountability for public investment in research and produces evidence of the benefits of this investment.”
3. Benchmarking: “The assessment outcomes provide benchmarking information and establish reputational yardsticks, for use within the higher education (HE) sector and for public information.”
To be fair, the call for evidence goes on to acknowledge that “data collected through the REF…can also inform disciplinary, institutional and UK-wide decision-making” and that “the incentive effects of the REF shape academic behaviour”.
But I believe Stern needs to tackle the purposes question more explicitly, and consider how and why, as the exercise has evolved, it has accrued at least two additional functions within the research system. These are not formally acknowledged, but anyone who has participated in, managed or analysed the REF will recognise them:
4. Influencing research cultures and behaviours. Initially the RAE incentivised (and, it’s fair to say, improved) productivity. But in the most recent cycle, it has been the biggest driver of a serious shift towards embracing and valuing impact across the system (which, despite initial misgivings, many now see as positive). As the HEFCE consultation confirmed, the next cycle is likely to be used to nudge change through the system in other areas, through its requirements over open access and (perhaps) the use of unique identifiers such as ORCID. Given the reach of the REF into all corners of university research, it’s arguably the most efficient and effective way of introducing changes of this kind.
5. Performance management within HEIs – this is the most controversial purpose, but there’s no denying that across the university system, the REF is now used as a de facto management framework for research activities (just as the TEF will become, over time, for teaching). Whether one thinks this is good or bad, the REF has progressively been internalised and institutionalised as a convenient tool and framework for HEI leaders and managers to monitor performance and make strategic (and sometimes unpopular or controversial) management decisions, while subtly shifting the responsibility for these onto the REF, HEFCE or the government.
Many of the arguments we have as a sector about the REF result from people talking, literally, at cross-purposes. If you insist that it is all – or primarily – about QR allocation, then your approach to redesigning the exercise will be very different to that of someone who sees value in HEFCE’s three, or my five, purposes. You might, for example, argue that metrics are the answer to a purely algorithmic challenge of allocation. Similarly, in the debate over costs and burden, the range of purposes you include will fundamentally alter your cost-benefit equation.
One of the points we make in The Metric Tide is that radically simplifying or metricising the REF won’t remove the need for functions 4 and 5 within the research system. A lot of the time, effort and energy that currently goes into managing the REF as we know it would simply be diverted into optimising performance against university rankings, or for other funders, using slightly different criteria (and no doubt requiring the purchase of costly analytical services from Elsevier, Thomson Reuters et al. to underpin fresh efforts to outperform others).
Let me illustrate this point with a personal example: a few weeks ago, I moved to a new job at the University of Sheffield, which includes (alongside research and teaching) more of a strategic function to support impact and engagement across our Faculty of Social Sciences. So I’ve joined the ranks of the many hundreds of academics and professional staff across our universities who have such elements in their job descriptions, and are regularly (and fairly!) satirised by Laurie Taylor and Twitter accounts like @ass_deans.
If I look at my diary for the next two months, there are already a dozen or more meetings in there with “REF” in the title: ostensibly planning and preparing for an exercise which won’t take place until 2021, and for which we don’t yet even know the rules! Some would see this as a symptom of the insane burden of the assessment system. But I’m not one of them: even if we didn’t have a REF, such meetings – which are mostly sitting down with departments and talking sensibly about their strategies for research, wider impacts and staff development – would still need to take place in some form.
I think a lot of the grumbling we hear about the REF results from this confusion of purposes. Many academics dislike being managed, or nudged to think about things like impact. But management in some form isn’t going to go away, nor is the demand for accountability for public expenditure on research.
I’m certainly not suggesting that we as a community can’t improve the way we run the REF, nor the way that we manage our universities. My review’s proposal for a framework of ‘responsible metrics’ forms part of this progressive case for a more open, transparent, democratic and supportive culture of university management.
But the REF has become a convenient scapegoat for a lot of the frustration that researchers (and some managers) feel about the way their institutions are run, and the encroachment of bureaucracy of various kinds on their autonomy and freedom. Some of this is caused by the REF, but changes to the REF won’t make it go away. The anthropologist David Graeber captures our dilemma eloquently in his book The Utopia of Rules, when he tries to unpick why “Nobody seems to like bureaucracy very much – yet somehow, we always seem to end up with more of it.”
So my hope is that the Stern Review will look in detail at the changing purposes of the REF – and the way that these frame how we calculate benefits, costs and burden. If all we want from the REF is a QR allocation tool, then we can certainly build that in an algorithmic, metric-based way. But the exercise now serves far more than this single function. Just as Stern 1.0 redefined the way we think about the economics of climate change, so Stern 2.0 has the opportunity to reframe our approach to research management and assessment.
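To make that contrast concrete, here is a minimal sketch of what a purely algorithmic, metric-based allocation of a QR-style pot could look like: each institution’s quality-weighted volume is calculated, and the budget is divided pro rata. The institutions, volumes, quality profiles and weights below are invented for illustration; this is not HEFCE’s actual funding formula.

```python
# Toy, metric-based allocation of a QR-style budget.
# All institutions, volumes, profiles and weights are hypothetical;
# this is an illustration, not the actual HEFCE funding formula.

QR_BUDGET = 1_600_000_000  # hypothetical annual pot, in pounds

# Hypothetical submissions: staff volume plus the share of outputs at each star rating.
submissions = {
    "University A": {"volume": 900, "profile": {"4*": 0.30, "3*": 0.50, "2*": 0.20}},
    "University B": {"volume": 400, "profile": {"4*": 0.45, "3*": 0.40, "2*": 0.15}},
    "University C": {"volume": 150, "profile": {"4*": 0.10, "3*": 0.55, "2*": 0.35}},
}

# Assumed quality weights: only the top ratings attract funding, with 4* weighted most heavily.
QUALITY_WEIGHTS = {"4*": 4.0, "3*": 1.0, "2*": 0.0}

def weighted_volume(entry):
    """Staff volume scaled by the quality-weighted profile of outputs."""
    return entry["volume"] * sum(
        QUALITY_WEIGHTS[star] * share for star, share in entry["profile"].items()
    )

totals = {name: weighted_volume(entry) for name, entry in submissions.items()}
grand_total = sum(totals.values())

# Divide the pot pro rata to quality-weighted volume.
allocations = {name: QR_BUDGET * total / grand_total for name, total in totals.items()}

for name, amount in allocations.items():
    print(f"{name}: £{amount:,.0f}")
```

The point of the sketch is simply that, if allocation were the only purpose, the job really could be reduced to arithmetic of this kind; the argument above is that purposes 2 to 5 would not disappear along with the paperwork.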
The Royal Charters behind the research industry tend to have three objectives – the contribution to intellectual development; the contribution to the economy; and the contribution to society.
However, the industry tends to ignore the latter two objectives and concentrate on citations. And the REF is one of the institutional features that embed this approach into the system.
Given that the research industry receives public money in much the same way that benefit recipients receive benefits, I would suggest we might wish to consider the same sort of rights-and-responsibilities conditionality regime, in which the recipient of the subsidy is expected to behave in ways that maximise the chances of achieving all the objectives set out in the Royal Charter.
Consequently, as well as completing the research, researchers should have obligations (with financial sanctions if they don’t follow them through?) to undertake activities that they – in conjunction with the Research Councils – believe will maximise the usefulness of the research to the economy and society [e.g. reporting results to an organisation that promotes innovation, or going back to your old school or college to communicate the results and promote interest].
Hi James,
Refreshing thoughts on management and HEIs. Thank you for taking the time; hopefully academia will soon realise that such accountability is a necessary evil.
Wilsdon is quite wrong about the quality and impact of the 2006 Stern Review, and in calling Stern a top economist.
Richard Tol – or should I say Richard Troll!
Impact, as Stefan Collini and others have shown, is a charade of no intellectual merit. And your piece does not address the worst features of the REF: the consistent rewards for those who lie, the complete absence of any objectivity in the evaluation of scholarship, the pretence that submissions are read, when it is all too clear that most of them are not read. Nor do you suggest ways in which it might be possible to prevent the REF being used as a reason to sack staff.