In the Evaluation Collective – a cross-sector group of like-minded evaluation advocates – we have reason to celebrate two related interventions.
One is the confirmation of an evaluation library helmed by TASO and HEAT – the other is John Blake’s recent Office for Students (OfS) blog What’s next in equality of opportunity regulation.
We cheer his continued focus on evaluation and collaboration (both topics close to our collective heart). In particular, we raised imaginary (in some cases…) glasses to John Blake’s observation that:
Ours is a sector founded on knowledge creation, curation, and communication, and all the skills of enquiry, synthesis and evidence-informed practice that drive the disciplines English HE providers research and teach, should also be turned to the vital priorities of expanding the numbers of students able to enter HE, and ensuring they have the best chance to succeed once they are admitted.
That’s a hard YES from us.
Indeed, there’s little in our Evaluation Manifesto (April 2022) that isn’t thinking along the same lines. Our final manifesto point addresses almost exactly this:
The Evaluation Collective believe that higher education institutions should be learning organisations which promote thinking cultures and enact iterative and meaningful change. An expansive understanding of evaluation such as ours creates a space where this learning culture can flourish. There is a need to move the sector beyond simply seeking and receiving reported impact.
We recognise that OfS has to maintain a balance between evaluation for accountability (they are our sector regulator after all) and evaluation for enhancement and learning.
Evaluation in the latter mode often requires different thinking, methodologies and approaches. Given the concerning reversal of progress in HE access indicated by recent data, this focus on learning and the enhancement of our practice seems even more crucial.
This brings us to two further collective thoughts.
An intervention intervention
John Blake’s blog references comments made by the Evaluation Collective’s Chair Liz Austen at the Unlocking the Future of Fair Access event. Liz’s point, which draws on a soon-to-be-published book chapter, is that, from some perspectives, the term intervention automatically implies an evaluation approach that is positivistic and scientific – usually associated with Type 3 causal methodologies such as randomised controlled trials.
This kind of language can be uncomfortable for those of us evaluating in different modes (and can even spark the occasional paradigm war). Liz argued that much of the activity we undertake to address student success outcomes, such as developing inclusive learning, teaching, curriculum and assessment approaches, is often more relational, dynamic, iterative and collaborative, as we engage with students and other stakeholders and draw on previous work and thinking from other disciplinary areas.
This is quite different to what we might think of as a clinical intervention, which often involves tight scientific control of external contextual factors, closed systems and clearly defined dosage.
We suggest, therefore, that we might need a new language and conceptual approach to how we talk and think about evaluation and what it can achieve for HE providers and the students we support.
The other area Liz picked up concerned the burden of evaluation not only on HE providers, but also the students who are necessarily deeply integrated in our evaluation work with varying degrees of agency – from subjects from whom data is extracted at one end through to co-creators and partners in the evaluation process at the other.
We rely on students to dedicate sufficient time and effort to our evaluation activities. To reduce this burden and ensure we’re making effective use of student input, we need better coordination of regulatory asks for evaluation – a key point also made by students Molly Pemberton and Jordan Byrne at the event.
As it is, HE providers are currently required to develop and invest in evaluation across multiple regulatory asks (TEF, APP, B3, the Quality Code etc). While this space is not yet too crowded (the more the merrier), it will take some strategic oversight to manage what is delivered and evaluated, why and by whom, and to look for efficiencies. We would welcome more sector work to join up this thinking.
Positing repositories
We also toasted John Blake’s continued emphasis on the crucial role of evaluation in continuous improvement.
We must understand whether a metric moving is a response to our activity; without a clear explanation of why things are getting better, we cannot scale or replicate that impact; and if a well-theorised intervention does not deliver, good evaluation can help others redirect their efforts.
In support of this, the new evidence repository to house the sector’s evaluation outcomes has been confirmed, with the aim of supporting our evolving practice and improving outcomes for students. This is another toast-worthy proposal. We believe that this resource is much needed.
Indeed, Sheffield Hallam University started its own (publicly accessible) one a few years ago. Alan Donnelly has written an illuminating blog for the Evaluation Collective reflecting on the implementation, benefits and challenges of the approach.
The decision to commission TASO and HEAT to develop this new Higher Education Evidence Library (HEEL) does, however, raise many questions about how material will be selected for inclusion, who will make the selection and the criteria they will use. Here are a few things we hope those organisations are considering.
The first issue is that it is not clear whether this repository is primarily designed to address a regulatory requirement for HE providers to publish their evaluation findings or a resource developed to respond to the sector’s knowledge needs. This comes down to clarity of purpose and a clear-eyed view of where the sector needs to develop.
It also comes down to the kinds of resources that will be considered for inclusion. We are concerned by the prospect of a rigid and limited selection process: useful and productive knowledge is contained in a wide range of publications, and we would welcome a curation approach that recognises the value of non-academic publications.
The contribution of grey literature and less formal publications is often overlooked. Valuable learning is also contained in evaluation and research conducted in other countries and, indeed, in different academic domains within the social and health sciences.
The potential for translating interventions across different institutional and sector contexts also depends on sharing contextual and implementation information about the target activities and programmes.
As colleagues from the Russell Group Widening Participation Evaluation Forum recently argued on these very pages, the value of sharing evaluation outcomes increases the more we move beyond reporting technical and statistical outcomes to include broader reflections and meta-evaluation considerations. The more we collectively learn as a sector, the more opportunities we will see for critical friendships and collaborations.
While institutions are committing substantial time and resources to APP implementation, we must resist overly narrowing the remit of our activities and our approach in general. Learning from failed or even poor programmes and activities (and evaluation projects!) can be invaluable in driving progress.
Ray Pawson speaks powerfully of the way in which “nuggets” of valuable learning and knowledge can be found even when panning less promising or unsuccessful evaluation evidence. Perhaps a pragmatic approach to knowledge generation could trump methodological criteria in the interests of sector progress?
Utopian repositories
Hot on the HEELs of the TASO/HEAT evaluation library collaboration announcement we have put together a wish list for what we would like to see in such a resource. We believe that a well-considered, open and dynamic evaluation and evidence repository could have a significant impact on our collective progress towards closing stubborn equality of opportunity risk gaps.
Submitting to this kind of repository could also support the professionalisation of HE-based evaluation and be good for organisational and sector recognition and career progression.
A good model for this kind of approach is the National Teaching Repository (self-upload, no gatekeeper; its tagline is “Disseminating accessible ideas that work”). It also tracks the impact and reach of submissions by allocating each a DOI.
This is an issue that Alan and the Sheffield Hallam team have also cracked, with submissions appearing in scholarly indexes.
We are also mindful of the increasingly grim economic contexts in which most HE staff are currently working. If it does its job well, a repository could help mitigate some of the current constraints and pressures on institutions. Where we continue to work in silos, we risk wasting resources by reinventing the same intervention and evaluation wheels in isolation across a multitude of HE providers.
With more openness and transparency, and by sharing work in progress as well as on completion, we increase the possibility of building on each other’s work and, hopefully, finding opportunities for collaboration and sharing the workload – in other words, efficiency gains.
Moreover, this moves us closer to solving the replication and generalisability challenges: evaluators working together across different institutions can test programmes and activities across a wider set of contexts, resulting in more flexible and generalisable outcomes.
Sliding doors?
There are two further challenges, which are only nominally addressed in John Blake’s blog, but which we feel could have significant influence on the sector impact of the repository of our dreams.
First, effective knowledge management is essential – how will time-pressed practitioners find and apply relevant evidence to their contexts? The repository needs to go beyond storing evaluations to include support that helps users find what they need, when they need it, and to draw out implications for practice.
Second, drawing on the development of Implementation Science in fields like medicine and public health could help maximise the repository’s impact on practice. We suggest early consultation with both sector stakeholders and experts from other fields who have successfully tackled these knowledge-to-practice challenges.
At this point in our thinking, before concrete development and implementation have taken place, there is a multitude of possible future repositories and approaches to sector evaluation. We welcome TASO and HEAT’s offer to consult with the sector over the spring as they develop their HEEL, and we hope to engage in a broad and wide-ranging discussion of how we can collectively design an evaluation and evidence repository that is not just a collection of artefacts but one that plays an active role in driving impactful practice. And then we can start talking about how the repository itself can be evaluated.
John Blake will be talking all things evaluation with members of the Evaluation Collective on 11 March. Sign up for EC membership for more details: https://evaluationcollective.wordpress.com/