Am I REF-able? This is a question that plagues UK-based academics, giving rise to numerous myths and half-myths about what will be valued highly in the REF, the UK’s research assessment exercise.
Myths about the research assessment process were so prominent that in 2018 the REF 2021 Director wrote a Wonkhe article to dispel them. In the article’s comments, however, were numerous accounts of REF implementation quite different from the official rules. In my PhD research on UK university audit processes (2013-2018) I found something similar: REF myths became REF rules in some departments and universities, complicating the question of what or who is ‘REF-able’ in practice.
In my research, I compared the REF 2021 guidelines to the experiences of five UK-based physicists who were implementing the REF in their departments. They expressed a number of mythologised, or at least deeply suspicious, interpretations of the rules, which guided their implementation. Many myths were rooted in more general ideas about what would be valued in physics or in academia; given the intensity of pressure around the REF process, interviewees tried to optimise their submissions to avoid perceived risk.
Interviewees were not necessarily accepting myths unthinkingly; rather, they were taking a risk-averse, marginal-gains approach to the process, often reading the official rules with deep suspicion and anticipating the subjective reading practices of their fellow physicists on the sub-panel, who assessed the submissions.
Interdisciplinary research is risky!
Three interviewees discussed concerns about interdisciplinary research being riskier and therefore seemed more likely to sift out interdisciplinary work from their departmental submission or have it queried by higher-ups. The concerns about interdisciplinary research largely focused on reviewers having insufficient expertise to assess interdisciplinary outputs, and that this would lead them to hedge their bets and give lower ratings.
One interviewee was also concerned that interdisciplinary researchers who published across disciplines would be unfairly assessed within their sub-panel, seemingly unaware of the cross-referring mechanism, whereby a university/sub-panel could send outputs to other sub-panels who had more appropriate disciplinary expertise. This was a straightforward misunderstanding of the REF processes, which was likely to be rectified when he read the updated guidance or through guidance from other REF staff at his university.
Concerns about insufficient expertise are slightly harder to dispute, as sub-panellists are unlikely to have the specific expertise of all interdisciplinary papers submitted. However, this is a general issue with peer review, the bedrock of all academic assessment, from funding decisions to job applications to publication decisions, not just an issue with interdisciplinary research or REF.
The REF Director’s myth-busting article addressed this interdisciplinary topic, highlighting that analysis of REF 2014 outputs did not show interdisciplinary papers doing any worse than non-interdisciplinary papers. This concern about interdisciplinary research was so prevalent that REF 2021 introduced a whole new panel, and sub-panellist positions, relating to interdisciplinary research, to reassure academics that their research would be assessed fairly; this does not appear to have reassured them. The lack of trust many academics have in the REF as a process, and the level of pressure universities place on staff to “do well”, mean that even when academics know the rules, they do not believe they will be implemented, or implemented fairly, and any perceived risk will be managed or sifted out of submissions.
Secret metrics will be used! Papers are not read!
Another common concern was around the use of metrics and other proxy measures to judge the quality of publications, including publication venue, journal impact factors, and citation data.
The REF guidance explicitly states that journal impact factors, hierarchies of journals, and publication venue/medium will not be used in output assessments – and yet my interviewees were suspicious of these statements. They thought that markers of prestige might influence sub-panellists’ assessments and thus took these into consideration when putting together submissions.
At a local level, in many departments and universities, metrics and prestige markers are being used to decide which publications get included, regardless of the official rules or what happens in sub-panels.
When it comes to citation data, it is more complicated, because these metrics are used by some sub-panels, including physics. Citation data is just one part of assessing the “academic significance” of an output, and this is used in context. However, my interviewees had extremely suspicious engagements with the guidelines. One stated:
any physicist will sit down and think about how many publications a member of the panel is going to have to go through, how little expertise they’re going to have in some of the things they are looking at … clearly people are going to be using impact factors and citation data and all those kinds of things … we all know that the reality is that data will get used.
The climate of distrust in the REF process means that no matter what the official guidance says, some academics will not believe it. But my interviewees were also being very practical – we know academics are overworked and have little time for reading; we know what is “valued” in academia or in specific disciplines and how this might influence sub-panellists’ assessments; we know peer review is imperfect.
So my interviewees were anticipating the subjectivity of the sub-panellists’ assessments and trying to game this by optimising submissions. I began to think about this as a form of professional sport; seeking marginal gains and trying to avoid marginal losses.
Marginal gains and managing micromanagers
A marginal gains approach to the REF emerges because of the intensity of the REF process as a method of distributing some government research funding to 3* and 4* rated work, alongside the use of results in university league tables and university marketing. Some of my interviewees were enthusiastic participants in the REF game, pursuing a pseudo-scientific precision in optimising their submissions. Others saw their departmental REF responsibilities as a way to minimise hassle and disruption to academic colleagues, with one interviewee describing his role as “a bit of a bulwark against silly demands” from management in order to protect his colleagues’ time so they could get on with actually doing research.
Higher-up managers and professional REF-focused staff in universities often tried to intervene in departmental decisions, for example by querying the risky nature or suitability of outputs, despite often having little to no relevant disciplinary knowledge and being fuelled instead by the general REF myths discussed above. This resulted in interpretative tussles over outputs, with academics with departmental REF responsibilities pushing back, drawing on their own disciplinary expertise, the actual guidelines, and advice from others who were, or had been, involved in REF sub-panels to legitimise their interpretations.
These interpretative tussles demonstrate how much depends on what one person or department decides is “REF-able”, which means the REF is experienced very differently in different contexts. When academics with REF responsibilities push back against managers’ REF anxiety and extreme micromanagement, they can meaningfully reduce the REF audit burden and stress in their department.
The mythic power of the REF
The REF has become an enormous part of academic life, reshaping the ways academics do research, publish, and produce impact. While the REF, like the weather, is often a safe topic to discuss amongst academics – we hate it, down with the REF! – this presents too simplistic a picture.
The REF functions as a lightning rod for frustrations and anger at issues not exclusively connected to the REF process, like overwork, precarity, and undemocratic universities. The REF in general performs legitimacy and excellence to justify government research funding and judges academic research in ways that are consistent with general academic peer review practices. But the intense audit burden and pressure on individual academics, what many experience as the worst parts of the REF, are implemented at a local level through REF gameplaying by management and academics; just one more element of undemocratic managerial universities and academics’ internalisation of competitive academic funding structures.
If research money is to be selectively distributed using an audit process (and perhaps it should not be), then the REF is a less bad process than many others, maintaining academic peer review as opposed to using blunt metrics or more direct government intervention.
In closing, this is not a celebration of the REF, but rather a complication of simple complaint narratives around REF as an external imposition about which we academics can do nothing. Instead, we can look at ways that academics are pushing back in departments, across universities, and on the sub-panels themselves, to challenge the worst excesses of REF gaming.