John Blake was announced last week as the new Director for Fair Access and Participation at the Office for Students (OfS).
The appointment, which signals a renewed focus on university-school partnership, puts a former school teacher and senior leader in one of the most critical positions in higher education.
This offers an exciting opportunity. John will no doubt bring a wealth of expertise and lessons from schools and has already signalled a focus on working with universities to “improve attainment for disadvantaged young people throughout their schooling”.
But beyond a welcome focus on attainment, and on developing closer university-school links, there is a more fundamental lesson that higher education, and in particular those who work in improving student success, can learn from the school sector – the importance of “What Works”.
Where’s your evidence?
In 1999, Professor Rob Coe, in his inaugural lecture at Durham University, set out a manifesto for evidence-based education. “Education may not be an exact science”, he explained, “but it is too important to allow it to be determined by unfounded opinion, whether of politicians, teachers, researchers or anyone else”.
Optimistically, Rob predicted that “Before long, everything fashionable, desirable and good will be ‘evidence-based’. We will have Evidence-Based Policy and Evidence-Based Teaching, Evidence-Based Training – who knows, maybe even Evidence-Based Inspection.” Rob was right.
Over the following 20 years, the education sector in England underwent an evidence-informed transformation. This was driven, in large part, by the establishment in 2011 of the Education Endowment Foundation (EEF), the ‘What Works’ centre for education in England, which aims to use evidence to close the attainment gap.
When Rob presented his manifesto in 1999, you could count on one hand the number of truly robust randomised controlled trials (RCTs, widely considered the gold standard of evidence) that had taken place in English schools. Today, more than half the schools in England have participated in an RCT. Most of the ethical objections to RCTs in schools have fallen away, with headteachers across the country showing an increasing desire to engage in the creation and use of evidence.
More broadly, evidence literacy across schools has continually improved. Beyond getting involved in research, teachers and senior leaders now cast a discerning eye over programmes, approaches, and policies, expecting them to be evidence-based.
The sector’s leaders have followed suit. Significant policy change is now regularly informed by evidence, most notably demonstrated by the sector’s response to the pandemic. The announcements of a National Tutoring Programme, and of a targeted programme of small-group language support for early years children, were both underpinned and justified by robust evidence. Meanwhile, Ofsted’s latest inspection framework is underpinned by an evidence review, and the inspectorate has even begun producing its own reviews of evidence.
These fundamental changes, accompanied by a proliferation of organisations intent on improving the use of evidence in schools (such as the Chartered College of Teaching and Evidence Based Education), mean that we can now convincingly claim to have one of the most evidence-informed school sectors in the world.
Meanwhile in higher education
Thankfully, colleagues in this sector, specifically in access and student success, have started on the same journey. Recognising the promise of evidence-based practice, various institutions and organisations have begun exploring how we can use evidence to improve access, retention, attainment, and progression, particularly for underrepresented groups.
My own institution, King’s College London, for instance, founded a What Works team in 2018. The team, now part of the Social Mobility & Widening Participation Department, aims to contribute to the understanding of what works in enabling people to access and succeed at university, and has made rapid progress since its foundation.
Alongside conducting social research with over 100 students and delivering several institution-wide surveys (measuring critical student success concepts such as belonging and self-efficacy), the team has run several RCTs involving nearly 12,000 pupils and students. These include an RCT, run in collaboration with TASO, designed to measure the effectiveness of K+, our flagship multi-intervention post-16 widening participation initiative.
Beyond King’s, the establishment of TASO is a hugely exciting development. Set up in 2019 with funding from the OfS and designated as the What Works centre for access and student success, its stated mission is to improve lives through evidence-informed practice in higher education. Its work has begun at pace, and an independent review published last week noted that it is set to fill some of the priority evidence gaps in access and participation.
HE professionals should continue to embrace this new direction towards evidence-informed practice. Admittedly, we still have some way to go. As Advance HE’s literature review on student success interventions identified this month, institutions must fix the ‘leaky pipeline’ of impact evidence. Specifically, as other recent reviews have argued, there is a particular ‘dearth of causal evidence’ which we must address. But we should not be disheartened.
The experience of the school sector in England in the last 20 years demonstrates what can be achieved for pupils if we focus on “what works”. Let us commit to our own manifesto of evidence-based access and student success.
Comments

Reading this, I can’t help but think of the persisting inequalities in school outcomes experienced by working-class children. Inequalities, for example between FSM recipients and other children, have remained stubbornly static over the last 10 years. If the answer to these inequalities is to be found in the kind of changes to learning and teaching that an RCT can test, how long should we expect it to take for a ‘what works’ approach to translate into meaningful reductions in inequality? Is there reason to believe that the same approach applied in WP will be able to ameliorate gaps in…
A really fair challenge, Jessica! Thanks for such a thoughtful comment. I don’t think small numbers of RCTs are the full answer – but they are a start. Some of the EEF’s most important results from RCTs have demonstrated substantial improvements for FSM children and are now being scaled up to thousands of schools across the country (the best example being the Nuffield Early Language Intervention). Trials like this, and subsequent scale-up work, can help to close the gap. It is my understanding that the gap was closing prior to COVID, but it is very difficult to know exactly why, of course!…
This is a dangerously naive argument which is as likely to harm students’ education as to benefit it. A well conducted, carefully designed and analysed RCT never warrants a policy conclusion as simple as “intervention X works”. At best it warrants the highly localised conclusion “on average, for the sample who took part in the trial, intervention X resulted in higher scores than intervention Y for a particular outcome”. Unless the planned policy context happens to be a situation in which intervention Y is the existing approach, intervention X is the planned replacement, the outcome one intends to impact is…
Thanks for such a detailed and thoughtful response, Adrian! You certainly raise some of the challenges of using RCTs, but I don’t think our positions are completely at odds. I agree that one RCT doesn’t mean you can say ‘this intervention works’ in all contexts; as I’ve said in the comment above, not all RCTs are equal – of course, they need large enough samples to support making decisions from them, while they also need appropriate outcome measures, limited attrition, and other markers of robustness; but also, critically, they should be accompanied by high quality process evaluations, so the context, the…
Sorry, but the argument here is just as flawed. Larger samples will not help with the task of transporting a result from RCT to policy. Larger samples at best help with precision of parameter estimates and with justifying the exclusion of the randomisation process as playing the sole causal role in the local difference in outcomes. Similarly reduced attrition will help answer the question “x worked better than y on average for the sample in the study in the context of the study” but will not substantially improve our knowledge about whether x is good policy anywhere. There is something…