Of Magic Bullets and Black Boxes – the role of evaluation in WP outreach

A recent OFFA report evaluates the evaluation of widening participation activity in institutions. Julian Crockford asks what makes for a useful evaluation in this field.

Julian Crockford is the Widening Participation Research and Evaluation Unit Manager at the University of Sheffield.

Those of us engaged in evaluating university widening participation interventions were likely to be both cheered and disappointed by the publication in October of a report by the Office for Fair Access (OFFA) on the institutional evaluation of access agreement activity. Cheered because we now have a whole publication making a case for the importance of what we do, and disappointed because the report exposes the lack of progress we have made as a sector in effectively evaluating the impact of our widening participation efforts.

The report’s executive headlines set the agenda from the outset. The good news is that the sector is cracking on with developing its evaluation activities, with 100% of HEIs conducting at least some evaluation of their access agreement activities in 2015-16. From here on in, however, things get a little weird. The report turns out to be an evaluation of our evaluations, a meta-evaluation, if you will.

Getting meta

OFFA’s report sets out to rate the sector’s activity across a number of measures: the extent to which evaluation practice is advanced or embedded, and the level at which impact is measured (from ‘happy sheets’ to organisational or societal impact). Finally, the report considers coverage: what proportion of OFFA-funded outreach activities are being actively evaluated. For example, OFFA point to a 9% increase in institutions evaluating their financial support provision, but note that this still left £37 million ‘unevaluated’ in 2015-16.

In short, OFFA tell us a fair bit about how much universities are doing in terms of evaluation, but much less about what they’re actually doing or why. Full disclosure: as an experienced HE-based WP evaluator, I have a lot of appreciation for what OFFA do in this area. I think they expertly tread a line between the Government’s baldly stated concern with return on investment and what they know to be the situation on the ground for outreach practitioners in institutions. Accordingly, over the past few years, they have employed a wide range of tools to nudge, cajole and encourage institutions to improve their evaluation practice, of which this report is only the latest manifestation.

Bottom lines

In terms of WP evaluation, the bottom line is, of course, financial. Jo Johnson noted in his 2016 letter to the Director of Fair Access: ‘we would like you to require more from institutions in the information they provide to you about how they use evaluation and reflective practice, and the expertise they draw on to help them make their investment decisions.’ Les Ebdon clearly took the hint, subsequently stressing that ‘evaluation is key to squeezing maximum impact from every pound and every hour invested in widening participation’. But a slightly more opaque bottom line is the oft-stated need to understand ‘what works’ in terms of WP outreach. The implication is that, once evaluation has identified it as such, a successful outreach intervention is like a magic bullet, creating social mobility and a demographically balanced population in our universities wherever it is aimed.

I would like to suggest, however, that effective evaluation can be about more than this. As a sector, by failing to think harder about evaluation and what it can do to support social mobility, we are missing a fairly crucial trick. Moreover, given the generally parlous nature of current HE finances, vice chancellors might be forgiven for having an eye on OFFA-countable income (which can represent 30-40% of additional fee income for some research-intensives) and wistfully considering where else it might be used. It is up to us, as evaluators, to make the case for how and why the money currently invested in widening participation is being wisely and effectively spent.

Social, or science?

There are, perhaps, two schools of thought about how best to evaluate widening participation interventions.

  • The first adopts a ‘scientific’ approach informed by clinical models, and is exemplified by the Sutton Trust’s 2015 report Evaluating Access, which pushes randomised controlled trials, or, at a pinch, quasi-scientific designs featuring comparator groups. From this perspective, widening participation practice operates as a kind of black box; we don’t really need to know what happens inside, as long as we can prove that something changes that plausibly correlates with it.
  • The second approach sees social reality, and indeed human beings themselves, as inherently complex and context-dependent, arguing that any impact from outreach interventions is likely to be equally complicated, and therefore not easily reducible to a set of metrics.

The ‘what works’ agenda and the promotion of evidence-informed policy tend to lean towards a black box approach – just show me evidence that something’s happened. But in so doing, they ignore the crucial question of why something works. I want to suggest that this is not enough, and nor is it enough just to be seen to be ‘doing’ evaluation, as a cynical reading of OFFA’s report might conclude. As evaluators, we should be looking for magic bullets, those elements of a successful intervention that we know can transform lives and increase social mobility. But should we find them, we then need to take them apart and work out their active ingredients. Only by doing so will we have a chance of understanding which elements of outreach interventions might be transferable, might work in different contexts, or might work for different types of students.

Reading their report, one can’t help but feel that OFFA know all this, and that the lack of detail about HEIs’ evaluation practice is strategic, less a lack of ambition on their part than a response to the sector’s slow progress in developing its evaluation practice. We could even read the report as an attempt to cover for us, to hide the fact that we’re not evaluating deeply enough, hard enough, or, indeed, honestly enough, to assess the detailed essence of our widening participation practice.

Why evaluate?

In short, evaluation should not just be about coverage or spread, but about actively interrogating what we do: not just to prove that it provides a satisfactory return on the very significant investment we make as a sector, but also to understand more about the young people we’re working with, their lives and motivations, and to discover how we can support them better. Only when we do this will we be able to respond to the question posed by a forthcoming OFFA conference, Why Evaluate?, which takes place later in November.
