
REF impact – a new mode of scholarly distinction?

Richard Watermeyer reports on research into how research impact was assessed in REF2014.

Richard Watermeyer is Reader in Education and Director of Research in the Department of Education at the University of Bath.

In REF2014 – for the first time – the research of UK academics was evaluated for societal and economic impact. This new dimension of research assessment provoked controversy and the suspicion of many, who perceived in it both an extension of the long arm of government and an intensification of universities’ bureaucratisation.

While “REF impact” may remain contentious today, an accurate understanding of how it is evaluated is rare in the sector. For many researchers and their universities this is cause for concern, not least because the weighting for impact has increased from 20% (in REF2014) to 25% (in REF2021), and with it the significance of the impact case study (ICS) as a lever of quality-related (QR) research funding. From a sociological perspective, a study of REF impact is important for the way it is challenging and changing a sense of what counts as research.

Opening the box

Here I report on our efforts in prising open the black box of impact evaluation, and on our conversations with REF evaluators (academics and user-assessors) – from social science and arts and humanities disciplinary sub-panels – about their experience of adjudicating REF impact.

REF impact signals the emergence of a new and ambiguous space of research excellence. Our interviewees spoke candidly of their emotional and intellectual vulnerability in making judgements that were simultaneously divergent from, yet enabling of, what we have called “new modalities of scholarly distinction”.

Correspondingly, assessors discussed the challenge of forming a group concept and group style that harmonised academic and user-assessor perspectives, and which had the potential to mitigate the risk of polarised views. Yet, perhaps most significantly, their characterisation of impact as a ‘game-changer’ for research was related to their experience of impact evaluation as a (pseudo-scientific) process in which theory (largely absent) and evidence (of variable quality) came a distant second to the persuasiveness of prose, and in which evaluation criteria were considered loose or counter-intuitive, especially in relation to public engagement.

An absence of theory

User-assessor respondents (in greater numbers than their largely oblivious academic equivalents) spoke of their surprise at what they saw as the absence of theory – or, more specifically, the failure of impact case study authors to account for how their research had effected change:

The people who I work with as partners and stakeholders around promoting research in our sector draw on theory. And that was the sort of thing I was expecting to see.

Because it neglected theories of change, the ICS was felt to be an exclusively descriptive and non-critical document. It might thereby be argued to contravene a scholarly convention that privileges theory in the explanation of research and as a route to academics’ critical reflexivity.

Variable evidence

Respondents spoke of the challenge of interpreting and trusting the evidence put forward by ICS authors:

So what kind of weighting do you give to a personal testimony over and above a letter of commendation; if it was solicited, what did it mean? Did we know the person wasn’t just a friend of the person of the case study?

Concurrently, they revealed their fear of being unduly influenced by the stylistic virtuosity of ICS authors and by those authors’ skill in marketing their impact achievements:

I think with impact it is literally so many words of persuasive narrative broken up into two or three sections, which are inadequate in themselves to giving any kind of substance… nothing you can hang your judgement on…

They also addressed issues of evidence bias, and instances of evaluators’ instinctive and involuntary leaning towards more tangible, obvious, or compelling forms of evidence:

It was obvious that some things were much easier to evidence with a very concrete piece of evidence… I think we are all kind of tangling ourselves with a dilemma of not wanting to privilege particular kinds of evidence. In effect, what that did was privilege certain kinds of impact.

Unsurprisingly, then, nearly all respondents admitted to having either avoided or only superficially consulted the underpinning evidence presented within an ICS – and this on the basis of formal guidance. In fact, respondents generally acknowledged that any consultation of evidence should occur only as a last resort, where there was sufficient doubt about the claims presented in the ICS:

We were told not to look at the corroborating evidence unless there was a problem… So really we ignored it.

Applying criteria

HEFCE’s evaluation criteria for impact in REF2014 were viewed by respondents as dynamic, evolving, and habitually open to interpretation:

I think you’ve got to recognise that the panel had quite an influence on the criteria.

Yet they were also seen as unhelpful, closed, and restrictive – and perhaps contrary to common sense, particularly in the context of public engagement, which the REF rules dictated was a route to, rather than a form of, impact. This was a rule that many admitted to flouting:

I think we tried to be sympathetic to people who were doing that kind of thing, but you know at the same time it was hard to recognise that within the rules.

At the same time, respondents advocated a wider and more sophisticated set of evaluative criteria and indicators (though they cautioned against a reliance on, or inadvertent privileging of, metrics) to provide for a more holistic and better-informed review:

Although we don’t want metrics that will pre-empt the assessment, we need to find some kind of indicators and measures that we could use.

Final thoughts

We are left, then, to ponder a system of performance evaluation – or perhaps performative evaluation? – which is foreign to any established scholarly paradigm of research excellence. We have heard about a break with theory and evidence, the capricious application of criteria, and the potential entrapment of evaluators by the guile of ICS authors as impact merchants. We thus observe the emergence of a new and expressly unscientific form of scholarly distinction, and the further estrangement of academics from a sense of who they are and what they do (best)… and with REF2021 approaching, we see the potential for an ever-widening gulf.
