We need realistic evaluations of HE access

Lauren Bellaera of the Brilliant Club outlines how more realistic widening participation (WP) evaluations can improve our understanding of what is effective in different contexts.

Lauren Bellaera is the Director of Research and Impact at The Brilliant Club.

Many so-called ‘gold standard’ research methods (e.g. Randomised Controlled Trials, RCTs) are not well suited to messy real-life settings, such as schools or universities. And the challenge gets even more complex when you work with different schools, in multiple areas.

On top of that, institutions (and the schools they partner with) need to know if access interventions, such as mentoring schemes and summer schools, are ‘working’ now – they can’t always wait several years for a longitudinal study. But there is a solution. With meaningful measures and an appropriate design, it’s possible to see the impact of interventions in months, not years. This article focuses on three key areas that need to be considered when measuring the impact of an intervention.

Be in control, but be active

OFFA’s ‘The Evaluation of the Impact of Outreach’, published last year, provides clear guidance and proposed standards for the evaluation of outreach by universities and colleges. A key message is that, where possible, a control or comparison group is important. This is true, and whilst there is a huge focus in the research community on whether individuals can be randomly allocated to a control group, we also need to consider what constitutes a control group.

The whole point of a control group is to have a fair comparison, so the control group should be identical to the intervention group except that the control group does not receive the ‘active ingredient’ that is anticipated to affect performance. An active control group is where individuals take part in a similar activity to the intervention group minus the ‘active ingredient’. At The Brilliant Club, The Scholars Programme sees PhD tutors deliver a series of tutorials based on their research, leading to a final assignment. In this context, we use an active control group to establish whether a specific learning design feature (e.g. higher-order questions; one-to-one feedback) has an impact on disadvantaged pupils’ academic outcomes above and beyond our standard programme. This type of design allows us to understand what it is about the intervention that is working, not just that the intervention is working (i.e. it allows us to see into the black box of an RCT). The importance of identifying active ingredients within WP interventions is also discussed in Julian Crockford’s Wonkhe blog.
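To make this concrete, here is a minimal sketch of how such a comparison might be analysed, assuming a simple pupil-level dataset with one row per pupil. The file name and column names are hypothetical illustrations, not The Brilliant Club’s actual data.

```python
# Hedged sketch: estimating whether an 'active ingredient' (e.g. one-to-one
# feedback) adds anything above the standard programme. All names below are
# hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

pupils = pd.read_csv("scholars_programme_outcomes.csv")  # hypothetical file

# 'active_ingredient' is 1 for pupils who received the extra feature and 0
# for the active control arm, who completed the standard programme.
model = smf.ols("final_mark ~ active_ingredient", data=pupils).fit()
print(model.summary())
```

Because both groups received everything else in common, the coefficient on the group indicator estimates the additional effect of the ingredient itself.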

There are of course ethical and methodological challenges with placebo trials in education research, not least the question of whether it is right to systematically provide interventions for some pupils and not others. One way to address these challenges is by using ‘data lab’ approaches, where synthetic control groups can be assembled from administrative datasets.

To intervene or not: complexity vs simplicity

WP interventions are complex, and they tend to be doing more than one thing at once. For example, an intervention can involve both attainment-raising and aspiration-raising activities. Typically, in psychology experiments, if we have two groups (intervention and control), we manipulate one variable. This is because if we manipulate more than one variable we do not know which manipulation has caused the results. That said, in the real world, interventions are complex and multi-faceted, so should we be assessing the impact of interventions through the lens of complexity or simplicity?

Complex interventions refer to interventions that contain multiple interacting components, whereas simple interventions have a linear pathway between the intervention and the outcome. It has been suggested that interventions are neither inherently ‘complex’ nor ‘simple’; it depends on the perspective you adopt and the framing of your research question and analysis. For instance, a multi-faceted WP intervention could establish the interaction between a number of different components on pupil outcomes (a complex perspective), or the impact of a single component within the intervention could be explored (a simple perspective). A really great example of a study that adopted a complex perspective is provided by James Mannion and Professor Neil Mercer in Learning to learn: improving attainment, closing the gap at Key Stage 3.
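As a rough illustration of the two perspectives, the same (hypothetical) dataset could be analysed either by estimating one component’s effect on its own, or by modelling how components interact. The dataset and variable names below are invented for the sketch.

```python
# Hedged sketch: 'simple' vs 'complex' framings of one multi-faceted
# intervention. Dataset and column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wp_intervention.csv")  # hypothetical dataset

# Simple perspective: a linear pathway from one component to the outcome.
simple = smf.ols("attainment ~ mentoring", data=df).fit()

# Complex perspective: main effects plus the interaction between components.
complex_view = smf.ols("attainment ~ mentoring * aspiration_workshops",
                       data=df).fit()

print(simple.params)
print(complex_view.params)
```

Neither model is ‘right’ in the abstract; the choice simply reflects whether the research question is about a single component or about how components work together.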

Pupil characteristics drive outcomes

When evaluating WP interventions, the biggest challenge is controlling for pupil characteristics – variables that are not of primary interest in the intervention, but which can influence the outcomes (i.e. confounding variables). Pupils’ prior attainment is an example of a confounding variable, as are gender, ethnicity, and school context – the list goes on (and on). The key thing, I think, is to be pragmatic when controlling for variables.

Ultimately, in every evaluation there are going to be numerous confounding variables – some known, most unknown. Ideally, you would randomly assign individuals to groups – this should counterbalance any differences – and then, as an added precaution, measure key confounding variables and include them in your analysis (e.g. prior attainment). If this is not possible, and it rarely is, then match groups on key confounding variables (either at a group level or an individual level), and again include the variables in your analysis. For example, you might match the schools in the intervention group and the control group on school attainment and the proportion of pupils on Free School Meals – a common proxy for disadvantage. Importantly, when evaluating WP interventions, we are working with a group of pupils with certain characteristics unique to this group, so it is essential that these characteristics are identified and built into the matching process and analysis plans early on.
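For readers who analyse their own data, a minimal sketch of that final step – including the matching variables as covariates once groups have been matched – might look something like this. All file and variable names are hypothetical.

```python
# Hedged sketch: adjusting for known confounders (prior attainment, free
# school meal proportion) when comparing matched intervention and comparison
# schools. Names are illustrative, not a real dataset.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("matched_schools.csv")  # hypothetical matched dataset

# Including the matching variables in the model adjusts for any residual
# imbalance left over after matching.
adjusted = smf.ols(
    "pupil_outcome ~ intervention + prior_attainment + fsm_proportion",
    data=schools,
).fit()
print(adjusted.params["intervention"])  # adjusted intervention effect
```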

The message with all of the above is that, as WP practitioners, we will always be evaluating interventions under conditions that are not optimal, and this is also true of education research more widely. So, we do the best we can do in the given context, and this means that, as is fast becoming my favourite saying, ‘[we] don’t let the perfect become the enemy of the good’.

The Brilliant Club is running a series of research seminars, in partnership with UCL, looking at how academic research is being used to inform practices in schools and widening participation. The next seminar is taking place on Monday 29th January and focuses on ‘Promoting Oracy Skills in the Classroom’. This article is from an accompanying four-part series with Wonkhe on the theme of ‘impact in school-based university access work’.

3 responses to “We need realistic evaluations of HE access”

  1. This is a very thoughtful and interesting post. Great to see the OFFA guidelines on impact evaluation being taken up. The right mix of pragmatism and purism seems to be the way forward for WP evaluation.

    Something I have consistently come across is the tendency to ‘pick winners’ in allocating to control and intervention groups. E.g. if working with schools, there can be a tendency to choose a ‘keen’ teacher or class or pupils to participate in the intervention. This is natural – people want to be helpful and for the intervention to ‘work’. They are used to being judged negatively when things don’t go well, but in a trial it matters less whether the intervention works than what we learn from doing it. The risk in picking winners is that they would have gone on to get the outcome we are seeking without the intervention.

    Pragmatically though, if it weren’t for ‘keen’ schools then many interventions would not happen in the first place. And the suggestion here about looking at process and mechanism, not just outcome, is a good one.

    More articles like this please WonkHE!

  2. Excellent article, Lauren. Great to see a balanced intervention in the long running to-RCT or not to-RCT debate. The pragmatic note you sound is really welcome, in particular your acknowledgement that WP evaluators are often expected to try squeezing an assessment of long-term outcomes into a much shorter time frame. Your observation about the difference between simple and complex interventions puts me in mind of Neil Harrison and Richard Waller’s excellent paper, ‘Evaluating outreach activities: overcoming challenges through a realist ‘small steps’ approach’ (2016), and their suggestion that another way to isolate the active ingredients of these kinds of complex, ‘messy’ real-world interventions is to ensure that there is a detailed theory of change underpinning the intervention, and that evaluators realistically consider causality, measurability and appropriate timescales when designing their evaluation approach.
