David Kernohan is Deputy Editor of Wonkhe

Expected late last year, Adam Tickell’s interim report from the independent review of research bureaucracy is the first to leave the higher education governmental logjam in 2022.

It’s not something that will knock you off your feet, but it has a lot of interesting things to say, and tees up a promising final report to come (still due in “early 2022” – I’d be looking at before the end of this parliament).

The genesis of this report comes from the same bureaucracy jeremiad as OfS’s ill-fated review of the National Student Survey back in September 2020. Tickell, his secretariat, and his panel began work in March 2021 – talking to pretty much everyone (funders, policymakers, vice chancellors, researchers, and research teams) about their experiences and expectations around research bureaucracy. There was also a call for evidence with 250 responses, and a look at international comparators. As such, this publication can be seen as reflecting back the views of the sector – and though there’s no explicit consultation at this point, I’m sure he’ll hear about it if any of this feels particularly wrong.

So there are no specific recommendations here – it’s a principles-based direction of travel with a few “early ideas” sprinkled in.

Systemic shock

The report takes a whole (UK) system approach to examining how research is done in higher education settings (other settings will be considered in phase two, research conducted by businesses is out of scope, and the review of the REF exercise is separate, though some evidence has been shared between the two), so we are looking at some very high level changes. We also have to see this effort in the context of stuff like Paul Nurse’s landscape review, internal and external reviews of UKRI, the above-mentioned review of REF, the R&D People and Culture Strategy, and the wider SFC review.

Say what you like about Amanda Solloway – but she did kick off a lot of reviews.

Tickell has in his sights practices that are “excessive, ineffective, or simply unnecessary” – whether through poor initial design or through subsequent events changing administrative requirements without a consequent change in processes. Administration does expand over time – the root cause is an accretion of initiatives and priorities rather than wholesale consideration of the system – and while this does point to the need for regular deep reviews, I’m not sure of the case for this many!

There were 461 calls for proposals issued by UKRI in 2020-21 – a number that will likely rise with the planned expansion of public R&D funding. Charities (more than 150 in medical and healthcare related fields alone), public bodies, and businesses, both in the UK and further afield, are increasingly important players in what is a very complex space.

Said in six words

Harmonisation, simplification, proportionality, flexibility, transparency, and fairness are the abstract nouns of the moment. It’s a smart set of priorities, in that it would be difficult for anyone to oppose them. Nobody out there wants to see duplication of effort, complexity for the sake of complexity, or an inflexible approach that uses rules to stop research happening. Yet in any number of cases in higher education more generally we use different names and acronyms for the same concepts, different definitions for data collection, and different requirements for doing essentially the same tasks.

If we’ve had the Augar review, then this shows every sign of being the Augean review – clearing out a lot of the cruft that accumulates over the years as ideas ossify into processes.

But note the last priority – fairness. Sometimes, fairness needs bureaucracy to make it work. We’d never know, for example, that we were disproportionately awarding grants to male researchers if we didn’t collect data on the sex of applicants. The report argues that “bureaucracy can both support and erode fairness” – monitoring is important, but asking for too much at the same time erects barriers that can keep less well-resourced applicants (often from minority groups) out of the game altogether.

Not included in the principles is “sustainability” – but those two short paragraphs at the end will be repeatedly cited. For example:

To effect lasting change, the review’s recommendations must avoid destabilising the system by prescribing rapid, swingeing cuts to bureaucracy, which would almost certainly be followed by its equally swift return.

Quite.

Our survey says

We get a tantalising sniff of some quantitative data – but beyond telling us that the application process and institutional systems are the main pain points, there’s not much we can learn from it. In both cases, more than 150 of the 250 respondents cited them as the main sources of “unnecessary” bureaucracy. For anything more we need to wait for the final report.

The qualitative stuff picks up the concerns over application, alongside monitoring, in-grant management, and digital platforms, and gives us a glimpse of the “key ideas” that the remainder of the review will explore in depth. We’ll also have to wait till part two for “ideas” of any sort on internal provider bureaucracy and communications.

Assurance, reporting, and monitoring

We can look forward to a comprehensive analysis of assurance processes later this year, but for now – aside from the usual strictures of effective data management, “collect once”, and format standardisation – the big idea is “risk-based assurance”. In other words, the idea that universities that perform a lot of research will be subject to less scrutiny (just as in OfS monitoring – there’s even the spectre of periodic or risk-based assurance rather than working on a project-by-project basis).
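To make that concrete, here’s a minimal sketch of how risk-based tiering might work – purely illustrative, with made-up thresholds and a hypothetical Provider record; the report proposes no formula of this kind, and the first comment below makes a fair point about track record versus volume:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    active_grants: int   # size of current research portfolio
    audit_findings: int  # issues raised in previous assurance reviews

def assurance_tier(p: Provider) -> str:
    """Illustrative tiering: scrutiny follows track record, not just volume."""
    if p.audit_findings == 0 and p.active_grants >= 20:
        return "periodic portfolio review"   # lightest touch
    if p.audit_findings <= 2:
        return "annual sample of projects"   # middle tier
    return "per-project assurance"           # full scrutiny

# A provider with a clean audit history gets the lighter-touch regime
print(assurance_tier(Provider("Example University", 45, 0)))
```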

So although this feels sensible, in practice it puts a disproportionate burden on providers and settings where less research is carried out – precisely where support and advice would be most needed. It would add to the already considerable infrastructure barrier that concentrates research in places that already do it. For this reason, it’s good to see some interest in collective resources to support compliance – but if we are serious about expanding the pool of research activity rather than reinforcing existing structures, we need a lot more of this kind of thing.

Applying for funding

Because people always want funding, and because we want to make best use of it, funding applications have developed as a way to ensure that limited funds are spent in the most beneficial way. But when the volume of applications means that fewer than a quarter are successful, we do need to take a step back.

Academics (and specialist support staff) spend a lot of time writing applications for funding that they don’t get – and it is difficult to see this as time well spent. Competitive funding is squarely out of fashion in other parts of higher education (witness the screams of agony around OfS teaching capital or claims for world-leading small and specialist funding) but it still dominates the research world.

There are some radical ideas out there (lotteries are almost as effective as bidding at allocating funding, and staggeringly more efficient!) but here we fall back on the old standard of the two-stage review, with limited information required at the first stage. Unless this kind of thing is handled carefully, you end up writing the second stage bid (and financial documentation) anyway, just so you can confidently submit a shorter expression of interest that will hit the scheme requirements. There are similar issues with post-award assurance – imagine landing a project and then failing on technicalities – and with caps on institutional submissions, which can stop the best ideas even reaching reviewers.
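For the curious, here is a toy sketch of the lottery idea – my illustration, not anything the report proposes; the pass/fail “fundable” flag is a stand-in for a real peer review threshold:

```python
import random

def lottery_allocation(bids, budget, seed=None):
    """Toy funding lottery: randomly draw fundable bids until the budget runs out.

    bids: list of (name, cost, fundable) tuples, where `fundable` is a
    simple pass/fail quality check standing in for peer review.
    """
    rng = random.Random(seed)
    pool = [b for b in bids if b[2]]  # only bids that clear the threshold
    rng.shuffle(pool)
    funded, spent = [], 0
    for name, cost, _ in pool:
        if spent + cost <= budget:
            funded.append(name)
            spent += cost
    return funded

bids = [("A", 40, True), ("B", 60, True), ("C", 30, False), ("D", 50, True)]
print(lottery_allocation(bids, budget=100, seed=1))
```

The attraction is that everything after the quality threshold is cheap: no second-stage bid, no scoring panels, no feedback wrangles – which is exactly the efficiency claim made for lotteries above.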

To be clear, the report is aware of these issues – these ideas are flagged for further examination, not roll-out just yet. Ditto the “improvements to peer review”, which if applied poorly end up reducing the complexity (and thus, sometimes, the fidelity) of review criteria.

Grant implementation and in-grant management

Try telling a successful awardee that they must be pleased to be getting on with some research now the grant has been approved. After they’ve stopped laughing hysterically and/or weeping you will learn about the realities of project management (a discrete and often undervalued role in its own right). Contracts need to be agreed, collaboration agreements need to be signed off – both can get lost in impenetrable legal wrangling for months – and then there is the exciting world of procurement to explore.

To be clear, providers have specialist staff to support on these issues, and funders often provide templates and additional advice, but there is always an edge case. The review will look into the relationship between provider rules, the templates where they exist, and wider industry practice. But this is difficult stuff that extends well beyond what you would traditionally consider research practice.

As a project manager, one of your key relationships is with a programme manager (full disclosure: I used to be one). This is the person who can approve your virements and budgetary reprofiling, extend your project if needs be, and deal with other issues on behalf of the funder. It’s an undervalued skillset that involves balancing a sincere desire to get stuff done against the need to obtain value for public money – I did it for comparatively small sums, but the pressures for larger projects and programmes are many and varied. Where things can be standardised and templates developed, this is good – but don’t forget the edge cases (did I say there are always edge cases? I meant in every project in the programme…)

Improving digital platforms and systems

You don’t (often) write an application, or report on a project, as a Word document or in Outlook. There are specialist tools and platforms for every research-related interaction – all the way from the application system (the late and little-lamented JeS) to the repository where you store and curate the data your project generated, and at every point in between.

These systems often don’t interoperate at all, or can’t cleanly draw key bits of information from one system to another without calamities of every sort. As always with information technology, the shiny front end is of little value if you need to enter your project data for the seventy-third time. Some of these systems are owned by funders, some managed by providers, some by publishers. Some are hand-coded legacy systems, some are commercial behemoths. The report sets out a modest aspiration to get this stuff standardised and working together nicely – something that sounds straightforward until you start doing it.
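As a sketch of what “collect once” plus format standardisation might look like in practice – the record shape and field names here are entirely hypothetical, not drawn from the report or any real platform:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ProjectRecord:
    """Hypothetical 'collect once' record: core metadata is entered once,
    then exported to whatever shape each downstream system expects."""
    grant_ref: str
    title: str
    principal_investigator: str
    start_date: str  # ISO 8601, e.g. "2022-04-01"

    def to_funder_json(self) -> str:
        # e.g. a funder's grants platform that ingests JSON
        return json.dumps(asdict(self))

    def to_repository_metadata(self) -> dict:
        # e.g. a data repository that wants Dublin Core-ish keys
        return {
            "identifier": self.grant_ref,
            "title": self.title,
            "creator": self.principal_investigator,
            "date": self.start_date,
        }

# The grant reference below is made up for illustration
rec = ProjectRecord("XX/0000001/1", "Example project", "Dr A. Researcher", "2022-04-01")
print(rec.to_funder_json())
print(rec.to_repository_metadata())
```

The hard part, of course, is not writing the exporters – it is getting funders, providers, and publishers to agree on the single record in the first place.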

What’s next

Tickell and the panel expect to keep the stakeholder channels open, and even conduct system-wide tests of their recommendations to avoid unintended consequences. It feels to me that they are identifying the key problems – but, as I hope I have made clear, there are no easy answers or quick wins in these spaces.

As the panel will be aware, there are also overlapping attempts to fix some of this stuff at provider, regional, national, and global levels, alongside a rich heritage of previous projects with reports, findings, and recommendations. There is a skilled (if often demoralised) group of people who spend their lives working on these problems, and they could and should be drawn into these debates. And – worryingly, not mentioned in the report – disciplinary differences are key, with some subject communities running infrastructure of their own.

3 responses to “Research bureaucracy – can you have one without the other?”

  1. I’m sure DK has said this before: concentrating research in a few universities is counter to the government’s attempts to level up, and this is then reinforced by a risk-based regulation system which offers a lighter-touch reward for those with more research money. Surely a risk-based approach should be based on track record rather than volume per se?

  2. Originality, rigour, and significance of what’s being proposed should be key, not institution. The diversity of institutions and researchers getting quality grants is lamentable, and often the feedback-by-‘committee’ approach used by some funding bodies, with no recourse to question it, does little to improve things. For research to play a significant part in ‘levelling up’ will require quite major changes.

  3. Much research, probably most, is neither read nor useful. We all know this but the game goes on. Were there less useless research, the burden of managing it would be less, and less complex.
