It is nearly time to dust ourselves down, pack our Research Excellence Framework (REF2021) bags and return to the day job of seeking to undertake world-leading research.
While we wait anxiously in the REF2021 lobby for our results, the UK higher education funding bodies have moved surprisingly swiftly to launch a Future Research Assessment Programme, quickly followed by a commissioned warts-and-all report evaluating perceptions of, and attitudes towards, REF2021 during the final submission stages.
Now, following the recent trend of wall-to-wall consultations and reviews, the UK funding bodies are seeking feedback on the REF2021 process from both higher education institutions and individuals, with responses due by 26 January 2022.
Same old REF? Not so fast
Word on the REF street is that the next Research Assessment will be seeking an evolution, not a revolution. For sure, REF2021 is a slick, well-oiled machine with a rule book that covers every conceivable scenario and bends over backwards to ensure a level playing field for all who want to enter the REF Crystal Maze. But, if REF2014 is anything to go by, it may cost over £250m in total effort to the sector, so perhaps there are some radical new ways to make the whole assessment process simpler, quicker and arguably even fairer. Here are some suggestions to stimulate that debate.
We should retain the three components of assessment that featured in REF2021 – outputs, impact and environment – but replace the Environment Statement with just five key performance indicators. These could be standard metrics that all universities record: for example, the ratio of staff submitted to the number of early-career researchers (ECRs), total Unit of Assessment (UoA) income per full-time equivalent (FTE), postgraduate conferrals per FTE, external independent fellowships per FTE, and Research Council income per FTE. This means no narrative at all.
We could save all those painful hours spent writing and revising what is essentially an operational document. I hope that we don't micro-manage submitting institutions – and instead trust them to develop their research culture and govern their operations both openly and fairly, answerable to their constituents rather than to a funding body.
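As a thought experiment, a metrics-only environment return could be little more than a handful of ratios computed from data the sector already holds. Here is a minimal sketch; every field name and figure is hypothetical, not anything the funding bodies have specified:

```python
from dataclasses import dataclass

@dataclass
class UoAReturn:
    """Hypothetical Unit of Assessment return built from standard records."""
    staff_submitted: int   # headcount of staff submitted
    staff_fte: float       # total staff FTE submitted
    ecr_count: int         # early-career researchers (ECRs) submitted
    total_income: float    # total UoA research income (£)
    pgr_conferrals: int    # postgraduate research degrees conferred
    fellowships: int       # external independent fellowships held
    rc_income: float       # Research Council income (£)

def environment_kpis(r: UoAReturn) -> dict:
    """The five ratios suggested above, in place of a narrative statement."""
    return {
        "staff_per_ecr": r.staff_submitted / r.ecr_count,
        "income_per_fte": r.total_income / r.staff_fte,
        "pgr_conferrals_per_fte": r.pgr_conferrals / r.staff_fte,
        "fellowships_per_fte": r.fellowships / r.staff_fte,
        "rc_income_per_fte": r.rc_income / r.staff_fte,
    }

# e.g. an invented 42-FTE unit with 9 ECRs, £3.2m income, 38 PGR
# conferrals, 6 fellowships and £1.4m Research Council income:
print(environment_kpis(UoAReturn(45, 42.0, 9, 3_200_000, 38, 6, 1_400_000)))
```

Whether five numbers could ever capture a research environment is exactly the sort of question the consultation should test, but the compliance cost of producing them would be close to zero.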
Game over?
Alongside this, we should reduce the number of outputs submitted by a Unit of Assessment. Sticking with a Stern-like formula, the total volume of outputs could be set at two times the staff FTE submitted, down from the 2.5 times required in REF2021. That cuts the volume of outputs by a fifth, saving 20 per cent of the time spent assessing outputs in one quick swipe. It may also ensure panels see the very best outputs in UoAs.
We need to close all doors on any possible game playing. Setting up a researcher-led working group to learn whether and how institutions find opportunities to optimise the rules in their favour could help us reconsider, for instance, whether institutions should submit all research and academic staff (HESA codes 2 and 3) to a Research Assessment. This could stop local definitions from contributing to sector-wide variation in what constitutes “significant responsibility for research”. A rigid, level playing field would mean time saved not hunting for permutations that optimise a submission beyond excellence – and would let the evidence speak for itself.
And we should encourage and facilitate more joint submissions. Allowing large and small units to come together as one submission would stimulate collaboration, especially across local geographical regions. Fewer submissions mean less panel work, even in the utopia of a no-narrative Environment Statement. Research culture and environment may benefit through economies of scale, and reduced competition should save hours in the long term.
Waiting for rules
Perhaps most controversially, we shouldn’t announce the guidance and criteria for the next Research Assessment until one year before the submission deadline. Overnight, this would slash the number of institutional meetings, internal reviews and externally led mock Research Assessments. It would also leave less time for any last-ditch attempts to manipulate a submission. Just like a large grant application, it would focus minds in a few hectic months.
Are any of these too radical for consideration? Would they even make a dent in the REF juggernaut as it powers towards the next Research Assessment? How about one more suggestion to think about?
If we accept the principle that any Research Assessment is a form of competition with cash prizes (quality-related funding, or QR), then why is it that the UK funding bodies only set the prize rules after the competition has finished? Surely all entrants are entitled to know the rules of the competition before they start the race? But maybe that is one revolution too far?
Comments

The challenge with the REF is that no-one likes it, but there is no agreement on what would be better. I think there are some good suggestions here, but would make a few points. Whilst the cost is high and we should always seek to minimise it, when it is used to decide how to spend over £10 billion across the REF cycle it represents a few per cent. I may be cynical, but I don’t think we can remove gaming. Whatever rules are put in place, institutions will, and arguably should, maximise the outcomes for the institution and its…
“Perhaps most controversially, we shouldn’t announce the guidance and criteria for the next Research Assessment until one year before the submission deadline.” If you do that you’ll need a *massive* simplification of the submission guidance and criteria, well beyond anything indicated in this post – and this isn’t so much about the number of publications as about the type and complexity of the information required per publication. It took well over 18 months end-to-end for the national submission system to be developed and debugged this time round because – in part – of all the various cases in the several hundred pages…
Thanks for the provocation, Phil.
Just to add to cim’s last suggestion: if we believe the rhetoric, we’re all world class, or at least internationally excellent. So a volume-driven funding formula might be adequate. But as well as using academic FTE (T&R and R-only), perhaps it should also include PGRs and research grant & contract income, to reflect the necessary funding of those elements. A return to the major and minor volume indicators of the past?
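For what it’s worth, a volume-driven allocation along those lines could be a back-of-the-envelope calculation rather than an assessment exercise. A rough sketch follows; the weights and all the inputs are entirely invented for illustration, not a proposal from anyone:

```python
def qr_share(unit, sector_totals, weights=(0.6, 0.2, 0.2)):
    """Hypothetical volume-driven QR formula: a unit's share of a fixed pot,
    built from its share of sector staff FTE, PGR FTE and grant income.
    The weights are invented for illustration."""
    fte, pgr, income = unit
    tot_fte, tot_pgr, tot_income = sector_totals
    shares = (fte / tot_fte, pgr / tot_pgr, income / tot_income)
    return sum(w * s for w, s in zip(weights, shares))

# e.g. an invented unit with 40 FTE, 25 PGRs and £2m grant income,
# in a sector of 20,000 FTE, 12,000 PGRs and £1.5bn grant income:
pot = 2_000_000_000  # illustrative annual QR pot (£)
print(pot * qr_share((40, 25, 2_000_000), (20_000, 12_000, 1_500_000_000)))
```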