
Brief encounters of the evaluation kind

Julian Crockford disentangles the competing demands on access and participation work to prove its impact and efficacy

Julian Crockford is a researcher and evaluator in the Student Experience Evaluation and Research team at Sheffield Hallam University.

On the Office for Students (OfS) blog, John Blake – its new director for fair access and participation – highlights the importance of evidence-based practice through an extended film metaphor.

It’s a great reference, and the key point he makes is that while there are places where widening participation theory and practice may meet, it is often as it is for Celia Johnson and Trevor Howard in Brief Encounter – aware there is something powerful between them, but not quite able to make the longer-term connection.

I see what he’s doing there – but for me it’s more like Serena and David in Four Weddings and a Funeral, where, thanks to a newly acquired and somewhat sketchy grasp of British Sign Language, early conversations are riddled with mistakes and missed meanings. This metaphorical discussion is taking place between sector practitioners and those formulating the policy and guidance.

When you’re talking at cross purposes, you’ll never be able to take it to the next level.

Well, I thought it over a lot, you know, I wanted to get it just right

Although I’ll grudgingly accept the label of “evaluation nerd” used in the blog, for many of us “evaluation advocate” is the preferred term. Evaluation is the Swiss Army knife of practice and delivery. It can tell us “what works”, where best practice is taking place, or which interventions provide the biggest bang for the buck.

Crucially, it can also help us think deeply about these activities and our practice to support us in delivering the outcomes we want. Evaluation can be a really powerful exploratory tool for professional reflection. But it can’t do all these things at once.

Return on investment and strategic decision-making require an evaluation approach that aligns with their black-box logic – we’re not interested in how an intervention works, we just want to prove that it does. This requirement leads to demands for “robust” and clear evidence, preferably expressed quantitatively or, even better, in binary form – the threshold measures proposed in the current B3 student outcome measures consultations, for example. Hence the shove from some parts of the sector towards trial-based quasi-experimental designs.

On the other hand, issues of practice development and outcome improvement require a theory-driven approach, which explores the nuts, bolts and inner workings of our WP activities to understand exactly how they achieve the outcomes we want, all the better to improve them or transfer their active ingredients to other contexts. There’s definitely room for both!

Another wedding invitation. And a list. Lovely

The role of evaluation in WP policy has changed significantly over the last couple of decades. OFFA kicked off by focusing primarily on monitoring and tracking, before the 2008 financial crash forced a pivot towards value for money approaches. In recent years, OFFA and subsequently OfS have moved closer to theory-driven approaches, insisting that interventions should be underpinned by both an evidence base and an articulation of why and how we think they work.

As a sector of practitioner-evaluators we should be able to do all of these things – although, to be honest, I suspect value for money analysis is still going to need quite a bit of unpacking… The challenge, though, is that these imperatives have all been collapsed into each other in the regulatory guidance. This mixed messaging has made it really difficult for the sector to respond effectively, as we vacillate between monitoring and tracking, trying to demonstrate return on investment, and seeking out and sharing best practice. No wonder WP evaluators often feel caught between a rock and a hard place.

The opportunity we have now, following John Blake’s intervention, is to untangle these different imperatives and articulate clear, appropriate research questions, so we can be sure about exactly what is expected and what we’re trying to do. We can then consciously select the methods and approaches that will best deliver, rather than being pushed down a particular methodological path.

I think we both missed a great opportunity here

A second set of crossed wires concerns the relationship between research and practice. In recent years, the direction of travel for OfS has been to build capacity within the sector and embed evaluation thinking as an integral part of WP practice. This makes total sense, and we’ve seen significant progress.

This particular “nerd” is also very happy to agree that effective evaluation should be collaborative. The example John Blake gives of an emergent school evaluation ecosystem is spot on – practitioners and researchers swapping places, assuming each other’s ideas and positions, and forging a direct two-way connection between the ivory tower and the classroom.

Yet attempts to match academics and WP practitioners within universities have not delivered to the same degree. Perhaps this is because the performative pressures on academics (grant capture and publish-or-perish) don’t align with the smaller scale (and limited funding) of typical WP evaluation projects. At the same time, no matter how dedicated to objectivity they are, WP practitioner-evaluators marking their own homework run the risk of introducing inadvertent bias. So we need to rethink carefully how this is going to work.

We should note, for example, that independence is still very much possible within institutions. Experienced evaluators based within HE providers usually bring a very valuable objectivity relative to the projects they evaluate, alongside extensive contextual and institution-specific knowledge, which enriches the outcomes of their work.

Whether institutionally based or “independent”, academic researchers and evaluators bring significant technical expertise – including the complexities of designing and implementing trial-based designs, if that’s your bag – whilst WP practitioners bring years’ worth of professional wisdom. Practice-based occupations such as nursing, social work and, yes, teaching have long recognised the central role of phronesis – the social, professional and practical wisdom grown from years of working directly with the people they serve.

There is, then, plenty of room for matchmaking. Researchers dropping in to observe or evaluate WP interventions may bring a technical perspective or framework through which to classify and interpret what they see, but it is unlikely they will be able to match the nuanced, complex understanding of people working day in, day out with disadvantaged and under-represented young people. Those are two very powerful sets of expertise right there! And we need them talking to each other. In the same language.

But I don’t think we can leave it to chance to ensure they will hook up and get the best out of each other. To meet John Blake’s expectations, as well as those of evaluation advocates across the sector, I suggest we step back and do some careful collaborative thinking about how we structure the conversation, the approach, and the emerging relationship between technical and professional wisdoms, to get the best and most relevant outcomes from each.

The good news is that these discussions are already taking place, and new movements are springing up in the space that John Blake’s blog has opened up. My own organisation, Villiers Park, will soon be announcing a new Pracademic project, which partners widening participation practitioners with academics who can help them formalise their experience and expertise so that it can be published and shared.

The last few months have also seen the launch of The Evaluation Collective, a group of WP and student-focused evaluators led by Liz Austen, Sheffield Hallam University’s Head of Evaluation and Research, and Rachel Spacey, Research Fellow in Higher Education at the University of Lincoln. The group is determined to steer sector thinking about the broader values, purpose and potential of WP and student-focused evaluation. It feels like this conversation is only just starting.

3 responses to “Brief encounters of the evaluation kind”

  1. Great piece Julian. Not a sci-fi film buff, but, for me, we should aim for what has to be the cheesiest feel-good film ever: Love Actually 🤔. The sentiment of recognising the value of others, regardless of position, surely rings true for inclusive approaches to evaluation. Going down a more elite form of what I call know-all evaluation will only lead to a Nightmare on Eval Street… (sorry, couldn’t resist).

    I would take slight issue with the slant on independent evaluation: surely, if you get your evaluation design right at the outset, it minimises bias and increases transparency. You could then use an ‘independent’ evaluation process, or input, as a moderating lens, or where there are complexities that require additional evaluation expertise. Critical capacity building which is accountable has to be the best route forward, unless we have a trust issue at play…

  2. The emphasis placed on ‘independent’ raises several questions around what this actually means and the expectations of OfS. In a free-market context, such as higher education, independent evaluation may suggest the involvement of private sector or third-sector consultants to ensure ‘validity’.

    However, being independent also includes being

    “free from external pressure if it [evaluation] is to produce meaningful evidence in support of institutional learning and effective and accountable decision making.” https://drive.google.com/drive/u/0/folders/15-kwir_6G6Kl8NfYNdZJzP5GePEHcujx

  3. Naomi, just had a quick glance at your posted links and these look incredibly interesting for the sector. Looking forward to reading fully to gain more insights into how the concept of ‘independence’ is nuanced. Thanks for sharing 🤓
