Reproducibility – a seemingly elegant and clear-cut concept in isolation – lies at the heart of a tangled web of incentives, career structures and publication practices.
But in its recent report, the House of Commons Science, Innovation and Technology Committee offers a rich set of proposals that aim to untangle this web.
While some of the report’s recommendations lack specificity, a number of them are unusually granular, from a meaty set of objectives and processes for the UKRI Committee on Research Integrity to a multipronged approach to assess and fund statistical input into research and statisticians’ career paths. The committee hasn’t shied away from getting into the nitty-gritty in this inquiry.
I’m looking through you
A prominent theme of the report is the importance of research transparency. The committee even goes so far as to say, “we use the term reproducibility to broadly refer to the transparency and quality of research.”
This emphasis on transparency – described in the report as openness to enable robust assessment of research data, methods and conclusions – is hugely welcome. Transparency is not only the means to reproducibility; it is also, compared with reproducibility, a more practical, achievable goal – and more objective, more easily audited and applicable across disciplines.
The research transparency agenda has already led to the development of numerous initiatives such as the Concordat on Open Research Data, institutional policies and collaboration via the UK Reproducibility Network, and publication practices such as preregistration, Registered Reports and reporting guidelines. It has also driven major funders to adopt policies promoting data sharing.
There aren’t yet clear-cut mechanisms, however, for monitoring compliance with funder policies, nor for rewarding grant applicants who have previously shared their data or adopted other transparent research practices. This gap can reduce the impact of these policies.
The committee recommends going a major step further by making transparency a “prerequisite of top-scoring research” in the next REF. Now that’s a sentence that warrants re-reading. As REF is one of the strongest levers influencing researcher and institutional behaviour, this recommendation, if adopted, would represent a fundamental shift towards valuing research practice alongside results and impact.
The next REF is widely anticipated to be leveraged for research culture ambitions, so a focus on transparency (beyond open access) would fit well here. The devil would be in the detail, of course.
The one risk associated with the committee’s expansion of the term reproducibility to encompass transparency and quality is the potential confusion arising from the headline recommendation of “reproducibility as a condition of grants awarded for empirical research”. If you hadn’t read the committee’s definition of reproducibility, this recommendation could be interpreted as requiring an outcome – reproducibility of research findings – that is impossible to guarantee and impractical to monitor.
While you may think I am splitting hairs, we have to be careful here, because on this nuance hinges a crucial change in culture towards openness. As a researcher, whether others can reproduce your findings is largely outside your control. Yes, you can provide all the information someone would need to run your study, but if they have different software, antibodies or research participants, your meticulous record-keeping may not be mighty enough to preserve your findings in the face of unsuitable conditions.
Reproducibility is not synonymous with validity or integrity (in fact, for inherently noisy disciplines like biology and psychology, a perfectly reproducible result should raise eyebrows, and not in a good way). As a sector, if we penalise researchers whose research findings don’t immediately or clearly reproduce, we shoot ourselves in the foot, because knowing they can be penalised in this way is exactly what discourages researchers from sharing their methods and data in the first place.
While reproducibility remains a crucial benchmark for trusting (certain types of) research findings, transparency should be our benchmark for trusting researchers. This relates to a broader, necessary move towards valuing openness and honesty over novel, “publication-worthy” findings – towards acceptance of the inconvenient messiness of the pursuit of knowledge. This would encourage more researchers to share their methods and data.
As a sector, if we can use the committee’s report as a catalyst to shift incentives and culture in this direction, we can foster a stronger foundation of trust in both research and researchers.