“Evaluate, evaluate, evaluate” is a mantra with which anyone engaged in widening participation in recent years will be all too familiar.
Over the past decade, and particularly in the latest round of Access and Participation Plans (APPs), the importance of evaluation and of evidencing best practice has risen up the agenda, becoming integral to the intervention strategies that institutions are committing to in order to address inequality.
This new focus on evaluation raises fundamental questions about the sector’s capacity to sustainably deliver high-quality, rigorous and appropriate evaluations, particularly given its other regulatory and assessment demands (REF, TEF, KEF and so on).
For many, the more exacting standards of evidence have triggered a scramble to deliver evaluation projects, often commissioned from external organisations, consultancies and experts at considerable expense, to produce what the Office for Students’ (OfS) guidance defines as Type 2 or Type 3 evidence (capable of supporting correlative or causal inference).
The need to demonstrate impact is one we can all agree is worthy, given the importance of addressing the deep-rooted and pervasive inequalities baked into the UK HE sector. It is therefore crucial that the resources available are deployed wisely and equitably.
In the rush for higher standards, it is easy to be lured in by “success” and forget the steps necessary to embed evaluation within institutions, to ensure that a plurality of voices can contribute to the conversation, and to drive a wider shift in culture and practice.
If we listen only to those best placed to deliver large-scale evaluation projects and to communicate their findings loudest, we risk overlooking a huge amount of impactful and important work.
Feeling a part of it
There is no quick fix. The answer lies in the sustained work of embedding evaluative practice and culture within institutions, and across teams and individuals – a culture imbued with the values of learning, growth and reflection, over and above accountability and league tables.
Evaluation Capacity Building (ECB) offers a model or approach to help address these ongoing challenges. It has a rich associated literature, which for brevity’s sake we will not delve into here.
In essence, it describes the process of improving an organisation’s ability to do and use evaluation, by supporting individuals, teams and decision makers to prioritise evaluation in planning and strategy, and to invest time and resources in improving knowledge and competency in this area.
The following “keys to success” are the product of what we learned while applying this approach across widening participation and student success initiatives at Lancaster University.
Identify why
We could not have garnered the interest of those we worked with without having a clear idea of the reasons we were taking the approach we did. Critically, this has to work both ways: “why should you bother evaluating?” and “why are we trying to build evaluation capacity?”
Unhelpfully, evaluation has a bad reputation.
It is very often seen by those tasked with undertaking it as an imposition, driven by external agendas and accountability mandates – a perception not helped by the jargon-laden and technical nature of the discipline.
If you don’t take the time to identify and communicate your motivations for taking this approach, you risk falling at the first hurdle. People will be hesitant to attend your training, grapple with challenging concepts and commit their limited resources to evaluation unless they have a good reason to do so.
“Because I told you so” does not amount to a very convincing reason either. When identifying your “why”, it is best to do so collaboratively, considering the specific needs, values and aspirations of those you are working with. To that end, you might want to consider developing a Theory of Change for your own ECB initiative.
Consider the context
When developing resources or a series of interventions to support ECB at your institution, you should at all times consider the specific context in which you find yourself. There are many models, methods and resources available in the evaluation space, including those provided by organisations such as TASO, the UK Evaluation Society (UKES) or the Global Evaluation Initiative (BetterEvaluation.org), not to mention the vast literature on evaluation methods and methodologies. The possibilities are both endless and potentially overwhelming.
To help navigate this abundance, you should use the institutional context in which you are intending to deliver ECB as your guide. For whom are you developing the resources? What are their needs? What is appropriate? What is feasible? How much time, money and expertise does this require? Who is the audience for the evaluation? Why are they choosing to evaluate their work at this time and in this way?
In answering these and other similar questions, the “why” you identified above will be particularly helpful. Ensuring the resources and training you provide are suitable and accessible is not easy, so don’t be perturbed if you get it wrong. The key is to be reflective and to seek feedback from those you are working with.
Surround yourself with researchers, educationalists and practitioners
Doing and using evaluation are highly prized skills that require specific knowledge and expertise. The same applies to developing training and educational resources to support effective learning and development outcomes.
Evaluation is difficult enough for specialists to get their heads around. Imagine how it must feel for those for whom this is not an area of expertise, nor even a primary area of responsibility. Too often the training and support available assumes high levels of knowledge and does not take the time to explain its terms.
How do we expect someone to understand the difference between correlative and causal evidence of impact, if we haven’t explained what we mean by evaluation, evidence or impact, not to mention correlation or causation? How do we expect people to implement an experimental evaluation design, if we haven’t explained what an evaluation design is, how you might implement it or how “experimental” differs from other kinds of design and when it is or isn’t appropriate?
So, surround yourself with researchers, educators and practitioners who have a deep understanding of their respective domains and can help you to develop accessible and appropriate resources.
Create outlets for evaluation insight
Publishing findings can be daunting, time-consuming and risky. For this reason, it’s a good idea to create more localised outlets for the evaluation insights being generated by the ECB work you’ve been doing. This will allow the opportunity to hone presentations, interrogate findings and refine language in a more forgiving and collaborative space.
At Lancaster University, we launched our Social Mobility Symposium in September 2023 with this purpose in mind. It provided a space for colleagues from across the University engaged in widening participation initiatives and with interests in wider issues of social mobility and inequality to come together and share the findings they generated through evaluation and research.
As the title suggests, the event was not purely about evaluation, which helped to engage diverse audiences with the insights arising from our capacity building work. “Evaluation by stealth,” or couching evaluative insights in discussions of subjects that have wider appeal, can be an effective way of communicating your findings. It also encourages those who have conducted the evaluations to present their results in an accessible and applied manner.
Establish leadership buy-in
Finally, if you are planning to explore ECB as an approach to embedding and nurturing evaluation at an institutional level (i.e. beyond the level of individual interventions), then it is critical to have the buy-in of senior managers, leaders and decision makers.
Part of the “why” for the teams you are working with will no doubt include some approximation of the following: that their efforts will be recognised, that the insights generated will inform decision making, and that their analyses will make a difference and be shared widely to support learning and best practice.
As someone supporting capacity building endeavours, you might not be able to guarantee these outcomes. It is therefore important to focus equal attention on building the evaluation capacity and literacy of those who can.
This can be challenging and difficult to control. It depends on, among other things, the established culture and personnel in leadership positions, their receptiveness to new ideas, the flexibility and courage they have to explore new ways of doing things, and the capacity of the institution to utilise the insights generated through more diverse evaluative practices. But the rewards are potentially significant, both in supporting the institution to continuously improve and in helping it meet its ongoing regulatory requirements.
There is great potential in the field of evaluation to empower and elevate voices that are sometimes overlooked, but there is an equal and opposite risk of disempowerment and exclusion. Reductive models of evaluation, preferencing certain methods over others, risk impoverishing our understanding of the world around us and of the impact we are having. It is crucial to have at our disposal a repertoire of approaches that are appropriate to the situation at hand and that foster learning as well as value assessment.
Done well, ECB provides a means of enriching the narrative in widening participation, as well as many other areas, though it requires a coherent institutional and sectoral approach to be truly successful.