Across UK higher education, research culture has become highly measurable. Universities now hold more data than ever on staff experience, career progression, inclusion, workload and collaboration. Research culture is monitored through staff surveys, Athena Swan submissions, REF People, Culture and Environment narratives, internal culture programmes, and workforce analytics.
Yet a question is emerging: why does so much insight about research culture fail to translate into visible, day-to-day change in how research environments actually operate? The sector is not short of research culture evidence. It is short of ways to use it.
By translation, I do not mean simplifying insight into slogans. I mean the practical work of turning fragmented evidence into decision rules, prompts and accountabilities that shape what leaders actually do: how workload is set, how authorship is handled, how people are recruited and promoted, which teams are resourced, and what trade-offs are treated as acceptable.
At the moment, universities are generating insight faster than they can use it.
For most academic leaders, culture evidence arrives as a request for attention in an already overloaded decision environment. Unless insight is translated into the language, timing and risk frameworks leaders already use, it is likely to sit alongside decision-making rather than shaping it. At present, culture evidence often arrives as an additional reporting responsibility, rather than as something that makes core decisions easier to take with confidence.
The problem is fragmentation, not absence
Research culture evidence sits across human resources, equality, diversity and inclusion, researcher development, research finance and research culture programmes. Each area generates useful insight, but each operates with different evidence logics, reporting cycles and definitions of success.
In practice, this often produces a familiar institutional experience. The same issue shows up in multiple places at once: staff survey comments about workload, equality submissions highlighting progression bottlenecks, exit interview themes about burnout, culture programme feedback about psychological safety. The insight is not missing. It is simply not held together.
Leaders, meanwhile, receive multiple dashboards and narratives that are not aligned to the decisions they are about to make. Culture evidence arrives as commentary alongside performance evidence, not as an input that changes the decision itself. It is very hard to act on “everything, everywhere, all at once” – particularly when accountability is split across committees and functions.
This is the structural mechanism by which culture becomes reportable without becoming changeable. Fragmentation means no single picture of what is happening. No single picture means weak ownership and unclear responsibility. Weak ownership means insight does not enter decision cycles in time. When insight arrives late, it is logged, discussed and often agreed with, but it does not change the decision that has already been made.
As research environment expectations evolve, the focus is increasingly shifting from demonstrating that culture has been measured to demonstrating how culture evidence shapes incentives, behaviours and ultimately research outcomes.
Culture is made in decisions, not documents
One risk in the current debate is treating research culture as something that sits alongside research activity. Culture is produced through research activity, via everyday decisions.
Research culture is enacted through decisions about workload protection, recruitment, progression, authorship, line management, resource allocation and role recognition. These decisions are made in leadership environments shaped by funding pressure, performance metrics, student demand and institutional strategy, and they often happen far away from formal culture governance structures.
If culture evidence is not translated into the decision frameworks leaders use, culture work becomes a parallel conversation: something an institution can evidence without necessarily changing how it behaves.
It is also why the sector’s recurring focus on better metrics only takes us so far. In many institutions, the barrier is not measurement sophistication. It is that insight is not reaching the points where power is exercised.
Why translation is now the critical capability
This is often where institutions feel the gap most clearly: they know something is wrong, but cannot yet see which decisions are producing it.
As culture evidence becomes more distributed, a different capability comes into view: the ability to integrate, interpret and embed insight across systems.
This is not mainly a technology problem. It is an institutional design problem: who has the remit, legitimacy and skill to convene evidence across functions and convert it into something leaders can act on without it being dismissed as “soft”, “nice to have” or “someone else’s job”.
Qualitative insight is particularly important here. Qualitative evidence is often treated as a way to add voice to a report. Its more important function is diagnostic and design-led: it reveals how decisions are actually experienced, where incentives are misaligned, and which rules and routines produce predictable harms. It helps determine not only what is happening, but what should be measured and governed because it is driving behaviour.
In other words, qualitative insight is not simply about describing culture. It is how institutions locate the decision points that produce culture.
This shift reflects a broader move across research systems towards understanding how evidence, incentives and decision structures interact to shape behaviour across teams, disciplines and career stages.
From reporting maturity to decision maturity
The first phase of research culture reform has been about visibility: establishing that culture matters, creating accountability, generating evidence and building narratives. That work has value. It has made previously invisible experiences harder to ignore.
The next phase is about decision maturity: ensuring culture evidence changes decisions, not just documents them.
A practical way to think about this is as a translation stack: integrate insight across evidence systems, interpret what that combined picture means for risk, equity, sustainability and performance, and embed it into decision routines so insight arrives before decisions, not after.
Done well, translation is how institutions turn culture evidence into better research performance and more sustainable research careers. If institutions cannot make this shift, research culture will remain highly reportable and only intermittently changeable. Universities will get better at evidencing activity, while staff experience continued misalignment between what is said and what is rewarded. Leaders will experience initiative fatigue. Staff will experience consultation fatigue. And culture work will start to look, to many, like theatre.
The sector is not short of evidence. The challenge now is ensuring that evidence shapes the decisions that shape research itself.
A really clear analysis of the need to move from evidence to decision. Are there examples of emerging decision maturity anywhere? I would think the details would vary hugely depending on institutional size and local autonomy versus central direction, as well as on how institutional culture interacts with disciplinary culture (be that academic disciplines or professional disciplines).
I think this is still emerging, but one example is a cross-institutional project I’m working on that is trying to map what “decision maturity” looks like in practice.
Rather than assessing culture in isolation, we’re mapping the system of decisions that shape return-to-research outcomes (e.g. contracts, funding, workload, policy (institutional and local), manager discretion) and using that to work with stakeholders on where intervention is actually possible.
It’s early, but the aim is to move from insight sitting alongside decisions to insight being built into how those decisions are made.
I have written more about this methodology here: https://whiterose.ac.uk/news/designing-research-culture-reflections-from-our-parental-leave-project/
I think that variation is exactly where this gets interesting, more granular in a useful way, and where qualitative approaches can really add something.
In the above work we used purposive sampling to capture variation across institution, career stage, contract type, and research context (disciplinary and work structure/environment). The aim wasn’t representation, but to understand how return-to-research experiences unfold under different structural conditions.
What that allows you to see is not just variation, but patterns across it: how similar dynamics (e.g. around workload models/teaching, funding, discretion) show up differently depending on context.
That’s where I think “decision maturity” starts to become visible: not as a single model, but as how institutions recognise and respond to those patterns in practice.
Reading it took me back to some work I did a few years ago where we were trying to make sense of what I used to call the institution’s “rich data reservoir” sitting across EDI, HR, finance, REF, impact & more. Everything was there, just not speaking to each other.
Once we started connecting those datasets, the picture of research culture became much more real. Not just who is in the system, but how progression actually unfolds, where success rates diverge between application & award stages, how research income is distributed, and where certain minority groups quietly drop off over time.
What became clear quite quickly is that these patterns don’t sit on their own. When you layer in things like leave data, REF participation, panel roles, you start to see the conditions shaping progression – workload, caring responsibilities, access to opportunities, visibility in decision-making spaces. That’s where you start to see culture in how the system actually runs.
It also shifted how I think about collaboration & leadership. You start to see where collaboration concentrates, where participation thins out, and who is present in the spaces that matter most. That tells you a lot about how inclusive the system actually is, beyond headline metrics.
And this is where I connect strongly with the piece on translation. The barrier was never the data itself, it was access, alignment & making it usable across teams. Without that, the “reservoir” stays fragmented and never quite turns into insight.
This is such a clear articulation of it, especially the shift from “who is in the system” to how progression actually unfolds over time.
What you describe feels like a really important step towards translation, making patterns visible across what would otherwise sit in silos.
The next challenge I’m seeing is how that kind of integrated insight actually gets pulled into decision processes (e.g. workload, recruitment, allocation of opportunity), rather than sitting alongside them. That feels like the point where it starts to increase the possibility of changing the system itself.
I am not convinced the thesis here is true; all the universities I have worked at in the UK have made huge changes in processes and decision culture in the last twenty years.
There are of course always data sets that point to more improvements (assuming we can agree on the priorities). Monitoring that those changes made have been positive and that there are no unintended side effects is tricky, and it takes longer than the average PVC tenure to find out if the changes made have indeed worked as intended.
Perhaps what is missing is a kaizen culture, of continuous improvement and continuous monitoring of outcomes.
I think that’s a really important perspective: there has been significant change over time, and I agree that evaluating long-term impact is complex, particularly given leadership cycles.
The question I’m trying to surface is slightly different: why, despite that change, similar patterns continue to appear across multiple datasets and contexts.
My sense is that this is less about whether change is happening, and more about whether insight is consistently entering decision points early enough to shape those changes in practice, rather than being reviewed after the fact.
Continuous improvement is clearly part of the picture, but it depends on how evidence is integrated into decision-making in the first place, which is where the idea of “translation” comes in.
Really good article. We tried to create space for this kind of reflection within Wellcome’s Institutional Funding for Research Culture call. However, because the call focused on innovative ideas to advance research culture, we may not have highlighted the need for widely translatable insights as clearly at the outset as we should have, in hindsight. That said, other initiatives are also working in this area. The COMET project by UKRN, for example, is examining and articulating the case for evidence‑informed and holistic approaches to institutional culture change, particularly in relation to decision-making: https://www.ukrn.org/comet/.
I would be interested in knowing what the costs might be for institutions undertaking this work, and who the most appropriate thinkers or experts are to lead it.
Thank you, Shomari, this is a really helpful reflection, particularly on translatability.
The work we’re doing across the White Rose universities is starting to show what this looks like in practice.
What’s emerging is that this isn’t a single intervention or programme. It’s a form of work that sits across institutions, roles and decision points and depends on being embedded enough to understand how things actually operate, while also connected enough across networks to see patterns.
In practice, that has meant:
working across institutions to compare how similar situations play out
building coalitions across functions (HR, research culture, finance, EDI, leadership) around shared moments of decision
mobilising people across institutions to surface and socialise emerging insights, so that they can inform conversations as they develop rather than afterwards
spending time in conversation with people at different levels to understand how things actually work on the ground
and writing publicly as the work emerges, to test and refine what is transferable
In terms of cost, it’s less about large-scale programmes and more about sustained capacity for this kind of translational work: roles or secondments that can move across boundaries, and time for synthesis and sharing. This is particularly important in terms of deepening impact and socialising change.
On expertise, it sits at an intersection that isn’t always formally recognised: qualitative research, facilitation, systems thinking, institutional and research culture literacy, and the ability to build alignment and translate across institutional contexts.
My sense is that making this work more visible, and explicitly supported, is the next step.