
Busting REF myths

There are plenty of common beliefs about the REF that simply aren't true. Kim Hackett and Catriona Firth take the time to put you right.

Kim Hackett is the REF Director at Research England.


Catriona Firth is Head of REF Policy at Research England.

Over the years we’ve heard many REF myths, and with the publication last week of the draft guidance and criteria, we’ve been keeping our ears to the ground for any that are gathering particular momentum.

They range from apparent misunderstandings to more bizarre ideas: the suggestion that the REF team has a call centre of staff to deal with queries (we don’t), that impacts on policy can’t be submitted (they certainly can), and that the panels use journal impact factors to assess outputs (the criteria underline more than once that they won’t). Yet despite our best efforts to dispel many of these points in the guidance, some myths are particularly enduring. So we focus here on three of the more persistent ones we’ve encountered.

‘Only journal articles can be submitted’

Equity is an underpinning principle of the REF: the guidance and criteria enable recognition of excellence across all types of research and forms of output, and their assessment on a fair and equal basis. We have provided a glossary defining the full range of output categories for REF 2021, and the criteria make clear that the panels welcome all forms of output meeting the definition of research. Excellence was found across all output types submitted in REF 2014. Feedback from that exercise does suggest there was some uncertainty about the submission requirements for some types of practice research outputs. With this in mind, Main Panel D (where the largest proportion of these outputs were returned) have set out more guidance on what should be submitted this time round. This is intended to ensure the panel has access to the research dimensions of the output, and to give institutions the confidence to present their best research in whatever form it has been produced. Views on this are welcomed during the consultation.

‘The discipline-based UOA structure means that interdisciplinary research will be disadvantaged’

There is little evidence that interdisciplinary research was disadvantaged in REF 2014. In fact, analysis of the outcomes indicated that outputs identified by institutions as interdisciplinary were found to be of equal quality to other outputs. However, there is a concern that institutions did not feel confident submitting interdisciplinary research. Working with our Interdisciplinary Research Advisory Panel (IDAP), we have introduced a number of measures intended to reassure institutions that interdisciplinary outputs will continue to be assessed robustly and fairly. At the heart of this is a network of expert interdisciplinary advisers, who will provide guidance to their own sub-panels and liaise with the advisers on other panels to ensure the equitable assessment of interdisciplinary outputs.

‘You can’t have a high-scoring impact case study based on public engagement (PE)’

This is simply not true. A 2017 report by the NCCPE did not find any significant difference in the scores awarded to case studies featuring public engagement in 2014. However, we know that institutions can be nervous about submitting PE-based case studies, which are perceived to be ‘risky’ and difficult to evidence. There is often confusion about where dissemination ends and impact begins – does being on the telly ‘count’ as impact? Engagement can be an important pathway to impact and can play a vital role in creating a compelling narrative. Participation or viewing figures, for example, can provide valuable evidence of the reach of the impact. But the significance of the types of impact most commonly associated with public engagement – impacts on understanding, participation and awareness – can be tricky to evidence. The REF panels are clear that they welcome all types of impact and have included an extensive (but not exhaustive) list of impacts and indicators in the ‘Panel criteria’, including plenty of examples of public engagement.

New myths to come?

The changes to REF 2021 following the Stern review are likely to bring with them a new set of myths. In fact, we’d already like to make clear that we’re not expecting to see any half-outputs submitted: rounding should be applied to give a whole number of outputs for return! If researchers and research managers are feeling bemused, baffled or incensed by any REF myths, we would encourage you to reassure yourself with a rummage through the guidance on our website www.ref.ac.uk, or with a word with your institutional REF contact.

4 responses to “Busting REF myths”

  1. On the other hand… journal articles make up the vast majority of outputs submitted to the REF; academics often use journal impact factors to select which articles to submit because they don’t understand the REF assessment criteria, or don’t have any systematic way to apply them if they do. Public engagement impacts are less likely to be submitted because their benefits are difficult to quantify and thus sound way less impressive. The guidance that researchers and research managers want is a clear explanation of how the scoring criteria will be applied and managed. They won’t get this.

  2. I am currently conducting research on open access repositories and academic engagement and can confirm that the belief in the crucial nature of high impact-factor journals for the REF is alive and well.

  3. It is worth pointing out that the NCCPE study simply identified case studies that mentioned some element of public engagement (PE), but many of these treated PE as an add-on to another impact, which was their main focus. It is also worth noting that NCCPE were not able to statistically analyse the difference in scores (partly, I assume, because we know the scores for so few), and even if they were able to do this, a finding of no significant difference wouldn’t mean much given that many of the case studies were not primarily PE-based. In my own research on the same dataset, we have also been unable to find a statistically significant relationship, but this is because we found so few PE-focussed case studies in our sample of cases that we knew scores for. I think the answer at present is that we don’t know if PE-focussed case studies did any better or worse than others.

  4. What you fail to mention or see is that there is a great mismatch between the official REF guidance (and rhetoric) on quality assessment and institutional practices within universities (internally justified by/attributed to the REF by management).

    One problem that I see is that too many universities (mis)use the REF (an assessment of a collective unit for the purpose of research funding allocations) and develop individualised performance targets, rightly or wrongly, related to the REF. The latter are often based on proxy measures that are not part of the official REF. In a business school context that means using the dreadful ABS (AJG) journal ranking/list, for example (in other disciplines it might be impact factors, citations and the like).

    Another problematic practice is the use of internal mock-REF exercises, the setting of fixed interim publication targets, and their appropriation for probation, promotion and capability procedures and decisions about individual academics. This means that ECRs in particular are forced to adopt a purely instrumental approach to research (i.e. research is increasingly determined by what gets published in a highly ranked journal, not by what is worthwhile, challenging, and interesting as a research topic) and are at the mercy of senior academics and administrators with little or no understanding of a specific subject area.

    Related to the last point above are the paradigmatic biases, political interests and ideological blinkers worn by senior academics making these internal pre-submission non-blind assessments, who are usually also the gatekeepers who dominate the “leading journals” and represent the “mainstream” of a discipline (which makes it very difficult to publish in emerging or niche areas or to go against the dominant research paradigms and so on). None of these inherent problems of research assessment has been addressed or resolved by the REF, but they already have manifest negative effects on individuals and disciplines at large.

    In my department, for instance, colleagues have continually been bullied and mistreated (sorry, “supported”) on the basis of such practices; some have been driven to resign and move on or retire, some have succumbed to demands, while others have paid dearly with their physical or mental health due to the uncertainty and stress created (I know this is only anecdotal evidence, but I hear similar sad stories from colleagues all around the UK).

    Amongst colleagues we call that approach “management by threat and fear” (fundamentally based on coercive and authoritarian power), to which the REF (and TEF) are instrumental if used as individualised KPIs. For all their good intentions, policy makers in the UK ought to have a better understanding of the (inadvertent) consequences of said policies.

    Scratch the shiny surface and you see a very ugly undercoating that blemishes UK HE.

    In a nutshell, it is not so much a question of whether academics “believe in the crucial nature of high impact-factor journals” (some may have internalised it, yes) or that “they don’t understand the REF assessment criteria” (most academics are clever enough to understand, you know), but a question of self-preservation: a reaction to the institutional practices that academics face, and an attempt to ward off the adverse concrete consequences for their careers and lives.

    I wonder who the real mythmakers are in the audit-obsessed world of UK HE.
