The Stern review – Building on Success and Learning from Experience: An Independent Review of the Research Excellence Framework – has just been released.
Let’s start with the good news: the review reports that the UK has a highly productive research base. The headline judgement is that the Research Excellence Framework (REF, and its predecessor, the Research Assessment Exercise) has had a positive impact. The dual support funding system, with its quality-related (QR) funding allocations, is important for supporting the research base. QR is worth around £2bn per year across the UK, so there’s a need to judge the quality of the research before handing out more cash. Seems reasonable, no?
The bad news is that the exercise of judging quality is expensive, at around £250m for the last one, REF2014. There are also problems with institutional behaviour and the gaming of the system: universities with deep pockets were able to play the academic transfer market, buying in star players just in time for the researcher census date. The introduction of impact evaluation, via case studies, to the last REF was not uniformly welcomed and further changed behaviour: the number of case studies required varied with the number of researchers submitted in each area, so there was an incentive to omit some people if a suitable impact submission wasn’t lying around.
The recommendations of the Stern review of REF are that not too much should change. This is tweaking the rudder slightly rather than aiming for a completely different location.
The recommendations:
1. All research active staff should be returned in the REF.
There’s still plenty of room to play the game of who is, and isn’t, research active. But it should remove one gaming element introduced for the last REF, for which both the absolute results and the data on the proportion of staff submitted were released. This allowed for multiple rankings, with some universities opting for smaller, higher-quality submissions over more comprehensive ones.
2. Outputs should be submitted at Unit of Assessment level with a set average number per FTE but with flexibility for some faculty members to submit more and others less than the average.
The most recent exercise required a fixed number of four outputs per person (reduced for early career staff and in other personal circumstances): this recommendation would reduce the average to two, but allow flexibility of up to six outputs from a single academic. So a Unit of Assessment returning 30 FTE would submit around 60 outputs in total, however they are spread across its staff. This should make for more of a departmental submission, with the most productive academics contributing more to the exercise.
3. Outputs should not be portable.
If you can’t take your outputs with you, there isn’t much of a last-minute transfer market. This should be one the institutions will like, but perhaps not those individual academics who have benefitted from (or were hoping for) recruitment or retention offers around REF time.
4. Panels should continue to assess on the basis of peer review. However, metrics should be provided to support panel members in their assessment, and panels should be transparent about their use.
One of the options had been for the REF to go metrics-only, but metrics are not welcomed by everyone, either within or between disciplines. This is a compromise position, and some assessment panels will likely rely more heavily on metrics such as citations than others. But is this the thin end of the wedge?
5. Institutions should be given more flexibility to showcase their interdisciplinary and collaborative impacts by submitting ‘institutional’ level impact case studies, part of a new institutional level assessment.
This is a departure from the existing REF format in which assessment has been made at disciplinary (Unit of Assessment) level: it should provide high-quality fodder for marketing departments and allow for some interesting cross-disciplinary examples of research impact to come forward.
6. Impact should be based on research of demonstrable quality. However, case studies could be linked to a research activity and a body of work as well as to a broad range of research outputs.
In the REF2014 exercise it was not always easy to draw a direct link between submitted research and the impact claimed. A loosening of the definition should be welcomed by institutions, allowing them to tell the tale of the relationship between research and impact in a greater variety of ways.
7. Guidance on the REF should make it clear that impact case studies should not be narrowly interpreted, need not solely focus on socio-economic impacts but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching.
In another response to the complaints about the role of impact in REF2014, this should allow a greater likelihood of success for individual case studies, thus easing the burden on their authors.
8. A new, institutional level Environment assessment should include an account of the institution’s future research environment strategy, a statement of how it supports high quality research and research-related activities, including its support for interdisciplinary and cross-institutional initiatives and impact. It should form part of the institutional assessment and should be assessed by a specialist, cross-disciplinary panel.
For REF2014 there was a whole industry in authoring submissions, whether within institutions or by engaging outside consultants. These ghost writers will be pleased to see an opportunity for their creative side in coming up with the most convincing-sounding ‘future plans’ section of the submission. Start polishing your hyperbole.
9. That individual Unit of Assessment environment statements are condensed, made complementary to the institutional level environment statement and include those key metrics on research intensity specific to the Unit of Assessment.
By contrast with recommendation 8, this is a welcome piece of housekeeping to avoid repetitious submissions.
10. Where possible, REF data and metrics should be open, standardised and combinable with other research funders’ data collection processes in order to streamline data collection requirements and reduce the cost of compiling and submitting information.
Straightforward. What’s not to like?
11. That Government, and UKRI, could make more strategic and imaginative use of REF, to better understand the health of the UK research base, our research resources and areas of high potential for future development, and to build the case for strong investment in research in the UK.
This is rather cheeky, both in its tasking of UKRI to be more ‘imaginative’ – the body doesn’t exist yet and may never, depending on what happens to the Bill – and in its clear request for cash. It isn’t wholly surprising that the researchers with their authorial paws on the document are also seeking additional money for UK research. In the Brexit context, with its negative implications for EU research funding, that request is all the more important.
12. Government should ensure that there is no increased administrative burden to Higher Education Institutions from interactions between the TEF and REF, and that they together strengthen the vital relationship between teaching and research in HEIs.
This is the most under-baked section of the report, which is understandable given that the TEF is yet to operate. The statement that “TEF criteria for teaching quality will include the extent to which it is informed by the latest in research, scholarship or professional practice” shows yet another issue for the TEF and its metrics approach. We should hope that the extent to which teaching is informed by research is not just a function of it taking place in an institution with a high REF score: these are not the same thing.
How would I rate this submission?
It’s a well-written report, so due congratulations to the staff at BEIS for their work, and it proposes some politically palatable suggestions. There isn’t any particularly radical change proposed: some will be disappointed by the half-measure on metrics, and some by the non-portability of personal research outputs. Will it really reduce the financial burden of the exercise on the sector? It’s not obvious where material savings will come from.
There’s a nice little note on the ‘other important purposes’ of REF which includes the charming line: “It provides a periodically updated reputational benchmark, which is based on rigorous peer judgement by fellow academics.” For those ‘winners’ in REF – and the members of the Steering Group are mostly from institutions with large QR allocations – there is some enjoyment in the exercise and the bragging opportunities. REF suits some: the Stern report will keep those people happy.
And finally, for those of you out there looking for a short history of research assessment, I recommend the appendices to the report as a handy guide on the topic.
What’s next?
There will be a consultation by the end of 2016, with results to be published by summer 2017.
What are the implications of the ‘not portable’ rule for people who publish ‘between jobs’? It’s worrying, as you need a book to get a job, but then that book will potentially be wasted…
I imagine it will make no substantive difference. Everyone else will be in the same boat, so their outputs from xyz University will also not matter in REF terms. So the job will be offered on the basis of their ability/reputation (demonstrated by that book for instance) and their potential for future REFable research and outputs/impacts. IMHO you need the book to demonstrate your ability and research capability – no one should be appointing you (even in the current system) just because it’ll be REF submitted.
Yes, as someone who is doing just that I would appear to be, to use the technical jargon, screwed.
Different gaming will take place.
University of Hertfordshire is not a research university.
Well that is just palpably not true – I have seen excellent research from Univ of Hertfordshire in areas such as History and Psychology, for instance.
Yes indeed – that should have read ‘research centric’. Apologies for any unintended slight…
There’s a good reading list of current opinion here:
http://blog.history.ac.uk/2016/07/stern-review-an-initial-bibliography/