It is nearly time to dust ourselves down, pack our Research Excellence Framework (REF2021) bags and return to the day job of seeking to undertake world-leading research.
While we wait anxiously in the REF2021 lobby for our results, the UK higher education funding bodies have moved surprisingly swiftly to launch a Future Research Assessment Programme, quickly followed by a commissioned warts-and-all report evaluating perceptions and attitudes towards REF2021 during the final submission stages.
Now, following the recent trend of wall-to-wall consultations and reviews, the UK funding bodies are seeking feedback on the REF2021 process from both higher education institutions and individuals with responses due by 26 January 2022.
Same old REF? Not so fast
Word on the REF street is that the next Research Assessment will be seeking an evolution, not a revolution. For sure, REF2021 is a slick, well-oiled machine with a rule book that covers every conceivable scenario and bends over backwards to ensure a level playing field for all who want to enter the REF Crystal Maze. But, if REF2014 is anything to go by, the exercise may cost the sector over £250m in total effort, so perhaps there are some radical new ways to make the whole assessment process simpler, quicker and arguably even fairer. Here are some suggestions to stimulate that debate.
We should retain the three components of assessment that featured in REF2021 – outputs, impact and environment – but replace the Environment Statement with just five key performance indicators. These could be standard metrics that all universities record: for example, the ratio of staff submitted to the number of early-career researchers (ECRs), total Unit of Assessment (UoA) income per full-time equivalent (FTE), postgraduate conferrals per FTE, external independent fellowships per FTE, and Research Council income per FTE. This means no narrative at all.
We could save on all those painful hours writing and revising what is essentially an operational document. I hope that we don’t micro-manage submitting institutions – and instead trust them to develop their research culture and govern their operations both openly and fairly, answerable to their constituents rather than a funding body.
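To make the first suggestion concrete, here is a minimal sketch of how those five indicators might be computed from figures a university already records. The record fields and function names below are purely illustrative – they are not an actual REF or HESA schema.

```python
# A hypothetical per-UoA record; all field names are illustrative only.
from dataclasses import dataclass

@dataclass
class UoAReturn:
    staff_headcount: int            # staff submitted (headcount)
    staff_fte: float                # staff submitted (FTE)
    ecr_headcount: int              # early-career researchers (assumed > 0)
    total_income: float             # total UoA research income (£)
    pgr_conferrals: float           # postgraduate research conferrals
    fellowships: int                # external independent fellowships held
    research_council_income: float  # Research Council income (£)

def environment_kpis(r: UoAReturn) -> dict[str, float]:
    """The five proposed ratios that would replace the narrative
    Environment Statement."""
    return {
        "staff_per_ecr": r.staff_headcount / r.ecr_headcount,
        "income_per_fte": r.total_income / r.staff_fte,
        "pgr_conferrals_per_fte": r.pgr_conferrals / r.staff_fte,
        "fellowships_per_fte": r.fellowships / r.staff_fte,
        "rc_income_per_fte": r.research_council_income / r.staff_fte,
    }
```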
Game over?
Alongside this, we should reduce the number of outputs submitted by a Unit of Assessment. Sticking with a Stern-like formula, the total volume of outputs could be cut from the 2.5 times staff FTE required in REF2021 to two times the staff FTE submitted – saving 20 per cent of the time spent assessing outputs in one quick swipe. It may also ensure panels see the very best outputs in UoAs.
We need to close all doors on any possible game playing. Setting up a researcher-led working group to learn whether and how institutions find opportunities to optimise the rules in their favour could help us reconsider, for instance, whether institutions should submit all research and academic staff (HESA codes 2 and 3) to a Research Assessment. This could stop local definitions contributing to sector variation in what constitutes “significant responsibility for research”. A rigid, level playing field would save the time currently spent hunting for permutations that optimise a submission beyond excellence – and let the evidence speak for itself.
And we should encourage and facilitate more joint submissions. Allowing large and small units to come together as one submission would stimulate collaboration – especially across local geographical regions. Fewer submissions mean less panel work, even with the utopia of no narrative for the Environment Statement. Research culture and environment may benefit through economies of scale, and less competition should save hours in the long term.
Waiting for rules
Perhaps most controversially, we shouldn’t announce the guidance and criteria for the next Research Assessment until one year before the submission deadline. Overnight this would slash the number of institutional meetings, internal reviews and externally led mock Research Assessments. It would also leave less time for any last-ditch attempts to manipulate the submission. Just like a large grant submission, it would focus the mind on the submission in a few hectic months.
Are any of these too radical for consideration? Would they even make a dent on the REF juggernaut as it powers towards the next Research Assessment? How about one more suggestion to think about?
If we accept the principle that any Research Assessment is a form of competition with cash prizes (quality-related funding, or QR), then why is it that the UK funding bodies only set the prize rules after the competition has finished? Surely all entrants are entitled to know the rules of the competition before they start the race? But maybe that is one revolution too far?
The challenge with the REF is that no-one likes it, but there is no agreement on what would be better. I think there are some good suggestions here, but would make a few points.
Whilst the cost is high and we should always seek to minimise it, when the REF is used to decide how to spend over £10 billion across the cycle, that cost represents only a few per cent.
I may be cynical, but I don’t think we can remove gaming. Whatever rules are put in place, institutions will, and arguably should, maximise the outcomes for the institution and its staff.
Only setting the rules one year in advance would be challenging for aspiring researchers for whom the REF is important (whether we agree it should be or not), and for research professional staff the career roller coaster would be even more severe. I think a rolling programme, with one panel submitted and assessed each year, would ease the pressure on institutions and Research England, giving better career progression for professional services staff.
“Perhaps most controversially, we shouldn’t announce the guidance and criteria for the next Research Assessment until one year before the submission deadline.”
If you do that you’ll need a *massive* simplification of the submission guidance and criteria, well beyond anything indicated in this post – and this isn’t so much about the number of publications as about the type and complexity of information required per publication. It took well over 18 months end-to-end for the national submission system to be developed and debugged this time round because – in part – of all the various cases in the several hundred pages of guidance. And similar amounts of work needed doing at each institution.
Some examples of things that wouldn’t be allowed under a 12 month release-to-submission cycle:
– different rules for data requirements for panels or sub-panels. Physics research isn’t like Classics research? Tough: one rule for everyone, or it won’t be possible to build and test the technical backends (on both the national side *and* at every submitting institution!) by the deadline. Bye bye double weighting, contextual information on outputs, etc.
– open access requirements? Got to be scrapped entirely. One field in the final submission; even a *simplified* flowchart of how to assess its contents barely fits on an A3 sheet of paper. It took us *months* to get to the point where we were confident that the data we were submitting there would pass audit, and it wouldn’t have been significantly less time if we were submitting 20% less. And that was just one field. There were tens of fields on a typical output record.
– Stern’s “non-portability” ideas or anything else concerning Cat B staff? No chance – you didn’t tell people at the start of the cycle to collect/preserve that data, so now they don’t have it. Submit current staff only and go back to REF 2014 rules on this (the ones with all the alleged “game playing” about last-minute appointments).
Regardless, I would quite welcome this sort of simplification, but then people will complain – quite rightly – that it’s unfair on the Physicists/Classicists/ECRs/whoever. So there needs to be at least 2-3 years’ notice (and full-cycle notice for some things) just so that the data can physically be collected in the first place. This REF came very close to collapsing under its own bureaucratic weight as a result. The next one may actually do so…
As far as my suggestions for REF simplification go, I think it needs to be rather more radical:
– get the REF-distributed funding
– divide it between institutions according to their FTE of research staff on the previous year’s HESA return
– all sorted in a morning
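For illustration only, that formula is small enough to fit in a few lines of Python; the pot size, institution names and FTE figures below are invented.

```python
# A minimal sketch of a flat, FTE-proportional split of the QR pot.
def flat_allocation(qr_pot: float, fte: dict[str, float]) -> dict[str, float]:
    total_fte = sum(fte.values())
    return {inst: qr_pot * f / total_fte for inst, f in fte.items()}

# Invented figures: a £2bn pot split across three institutions.
shares = flat_allocation(2_000_000_000, {"A": 1200.0, "B": 450.5, "C": 80.0})
```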
Thanks for the provocation, Phil.
Just to add to cim’s last suggestion, if we believe the rhetoric we’re all world-class, or at least internationally excellent. So a volume-driven funding formula might be adequate. But as well as using academic FTE (T&R and R-only), perhaps it should also include PGRs and research grant & contract income, to reflect the necessary funding of those elements. A return to the major and minor volume indicators of the past?
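A sketch of that weighted variant, extending the flat split above – the indicator weights here are placeholders, not anything the funding bodies have proposed.

```python
# Placeholder weights for the three volume indicators.
WEIGHTS = {"fte": 0.6, "pgr": 0.2, "income": 0.2}

def weighted_allocation(qr_pot: float,
                        volumes: dict[str, dict[str, float]],
                        weights: dict[str, float] = WEIGHTS) -> dict[str, float]:
    """Split the pot by each institution's weighted share of academic FTE,
    PGR numbers and research grant & contract income."""
    totals = {k: sum(v[k] for v in volumes.values()) for k in weights}
    return {inst: qr_pot * sum(w * v[k] / totals[k] for k, w in weights.items())
            for inst, v in volumes.items()}
```

Because the weights sum to one, the shares still sum to the whole pot – the choice of weights is the only policy decision left.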