Could this be the end of the impact factor and h-index in UK higher education? The fevered response by some to last Thursday’s (February 8th) RCUK statement, marking the signing of the San Francisco Declaration on Research Assessment (DORA) by all seven of the UK’s research councils, would seem to suggest so. Librarians present at ‘The turning tide’ event held on the same day, discussing responsible approaches to research metrics, were particularly (alright, ironically) giddy at the thought of being able to update their training slides.
Farewell to the h-index?
For those who are unfamiliar, the h-index is an author-level metric that tries to measure both the citation impact and the productivity of a researcher's publications. A researcher has an h-index of h if h of their papers have each been cited at least h times; it is calculated from their most cited papers and the number of citations each has received elsewhere.
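As a minimal sketch of that definition (an illustration only, not how any citation database actually implements it), the h-index can be computed from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # The paper at position `rank` still clears the threshold.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher with papers cited [10, 8, 5, 4, 3] times has an h-index of 4:
# four papers have at least four citations each, but not five with five.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

So my own h-index of 1, mentioned below, simply means one paper with at least one citation.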
I for one won’t miss these particular metrics, with my own h-index currently hitting the giddy heights of 1, although Cameron Neylon reported that many well-qualified applicants for a recent postdoc position at his institution were keen to report their own numbers. Other unloved and unlovely measurements such as grant income, field-weighted citation impact (FWCI), Journal Impact Factor (JIF) and all forms of university rankings also took a kicking on the day.
The event, hosted by the Forum for Responsible Metrics, was billed as an exploration of “the emerging culture of responsible metrics in research”, although the reality was a little more mundane for the most part, due to the low numbers of academics in the room. This was queried on the event hashtag on Twitter, after a show of hands revealed around ten were present, though it was patiently explained that unless they had a professional or research interest in metrics, researchers were unlikely to have time to attend. However, research support staff, funders, publishers, library workers and other stakeholders ensured that the event ‘sold out’ quickly. Metrics are no longer a niche topic, and news stories about ‘irresponsible’ use of quantitative approaches to research and researcher assessment abound – including the use of FWCI in redundancy consultations.
Doing absolutely nothing
Following some typically entertaining opening comments from HEFCE’s David Sweeney, we got to the meat of the morning session, as Paul Ayris from UCL dug into the results of a recent survey on research metrics at UK research organisations. Unsurprisingly, to anyone following this issue, there’s not much doing. Engagement levels are low and qualitative responses to the question on what institutions were doing around responsible metrics initiatives, such as the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto for research metrics, included some completely irrelevant answers and the brutally honest “absolutely nothing”.
The first panel, on challenges and solutions to responsible metric use, had the word “culture” in the title, but didn’t really go there – it was more of a pragmatic account from funders and policy folk, with little that was new to anyone familiar with recent discussions around scholarly communication, peer review and the sort of people who do bibliometrics work. Evelyn Welch of King’s offered the most insight, sharing the horrified responses of humanities and social sciences academics to the wrong sorts of measurement and the unfortunate effects of dubious TEF teaching metrics overtaking the longer-established research variety.
The impact of metrics
After lunch, it was time for the researcher’s perspective, and it’s not too much of a brag to say that it was probably the most popular session, despite me being on the panel. I shared the stage with Kyra Sedransk Campbell and Chris Jackson from Imperial, an early career researcher and full professor respectively, and policy wonk and former early career researcher Adam Wright from the Royal Society. We had about five minutes each to talk about the impact of metrics on our own careers and perspectives, and then the audience Q&A seemed to tease even more provocative statements out of us. Adam Tickell asked some tough questions around the problem of dismissing metrics entirely when the numbers of academic staff and publications are so high. I’m not sure we had the solutions.
Chris suggested that the main barrier to responsible use of metrics was senior academics, and jokingly suggested ignoring everyone over 50 in favour of speaking to Master’s students and beyond. Adam and Kyra movingly drew our attention to the number of great people leaving academia, both of their own volition to do other work that is considered lesser by the academy, and due to the pressures on mental and financial stability caused by an overly metricised and uncertain academic culture. I tried to avoid the trap of being the easily dismissed ideologue, but did try to highlight the problems inherent in field normalisation, STEM bias, spending lots of money on metrics products when permanent academic contracts are scarce, and failing to measure the right things or capture all outputs – especially when relying too heavily on single suppliers.
A responsible research culture
The final panel looked at how metrics fit into the wider agendas around open science, responsible innovation and hiring practices – and the relationship between policy and implementation. Unfunded mandates are difficult to enforce, as we saw in the ten years or more of open access mandates before the RCUK, Wellcome, HEFCE etc policies came in and started to change behaviour.
We then broke into groups to discuss and share best practice. Some people had no examples of good practice, so put forward ideas and suggestions such as the ‘bio-sketches’ recommended by Stephen Curry in his Nature piece on DORA. Our group started from a similar doomy “there isn’t any” place, but it became apparent that recent guidelines from Wellcome, including on diversity in panels and reporting inappropriate use of metrics, would be useful to different groups across the sector, and there is a role for HR and ‘critical friends’ outside of faculties and departments to support responsible research culture.
Adam Tickell rounded off the event with some final comments, including the need at times for policy to come before culture change (the example given was the seatbelt law) and the way in which metrics have already led to behaviour change in themselves, as well as “making a lot of money for Elsevier”. I just wanted to get that out there, as it wasn’t me saying it for once.