“Evaluate, evaluate, evaluate” is John Blake’s first aspiration for widening participation and student success.
The OfS’s new Director for Fair Access and Participation set out this “unapologetically nerdy” intention this week, reflecting an emerging consensus that HEIs need to improve their evaluation.
As I explained in November, this focus is hugely welcome and will allow universities to follow in the footsteps of our school counterparts.
The question now becomes: how does John Blake take the sector with him? He may well expect some resistance to a renewed focus on what works, so how can we prepare and encourage the sector to embrace robust evaluation? We must build the will, the capacity, and the momentum.
Building the will
First and foremost, to embed better evaluation across universities’ access and participation work, we must win the argument. While there has been extensive progress in many areas, the case for high-quality evaluation still needs to be made in some quarters.
We can start to make this case by sharing the immense value that derives from effective evaluation; disseminating the brilliant outputs of organisations like TASO (the What Works Centre for access and student success) is a good place to start. It is also critical to share examples of evaluations that surprised the sector: believing that your intervention works does not mean it actually does, which is precisely why robust evaluation matters. Learning once more from schools, the Education Endowment Foundation has conducted numerous evaluations where the impact was not what the delivery team expected or intended, including interventions that turned out to have a negative impact on pupils. Sharing these examples, and thus building the case for evaluation in HEIs, is imperative.
However, we must also recognise the reality that not every access and participation intervention evaluated by HEIs will show a positive impact. It is really important that this does not dissuade universities from embarking on evaluation projects. A fear of failure, or of risk to institutional reputation, should not prevent good evaluation practice. This is where the OfS can come in, and seemingly intends to, with Blake suggesting that he is “keen to explore the ‘sandbox’ of regulation to give providers committed to generating robust evidence the space to do so”.
An evaluation that demonstrates no impact is not useless, provided its design allows us to draw lessons from the intervention and change our practice in future. The OfS recognises this and appears keen to provide the psychological safety required for universities to take the leap of faith.
Building the capacity
Once the argument for better evaluation is won, Blake may meet resistance that points to resource constraints. In a survey of 88 HE providers last year, 52 per cent reported that “there is a lack of internal capacity to generate evidence”, and 53 per cent that “Time/resource pressure makes it very difficult to use evidence and/or robustly evaluate our activities”. This needs to be challenged.
In terms of time and resource, effective evaluation enables the more efficient use of capacity: it is an investment rather than a cost. If we are not robustly evaluating our access and participation work, we cannot know whether we are using our time and resources effectively. It is far better to do less, but evaluate it well, and it may be useful for the OfS to state this explicitly.
On expertise, the sector is brimming with evaluation know-how; unfortunately, it has not always found its way into access and participation work. When the Education Endowment Foundation began its work ten years ago to evaluate interventions across the school sector, it set up a “Panel of Evaluators” comprising world experts in how to conduct evaluation in education. Thirteen of these 24 evaluators are UK universities, and we need to capitalise on their expertise within their own sector.
The aforementioned TASO has also provided a wealth of resources and support to the sector in recent years, while a plethora of third sector organisations (such as the Brilliant Club) offer expert evaluation consultancy. The expertise is there; we need to direct it towards access and participation and share it across the sector.
Building momentum
Delivering robust evaluation within an institution can seem an overwhelming task; building evaluation will and capacity across the sector, even more so. So, to build momentum and support, we should take advantage of the low-hanging fruit and rack up some quick wins. Within an institution, access and participation departments could:
- Develop theories of change for their interventions
- Write evaluation protocols for their programmes
- Conduct reading groups that share the findings of recent evidence reviews (such as the Advance HE-funded review by Liz Austen and colleagues on access, retention, attainment and progression).
More broadly across the sector, a first quick win, requiring no further evaluation work, would be to encourage universities with existing evaluations to share their findings with other institutions. An OfS-commissioned survey last year found that less than 60 per cent of the sector share their evaluations with other providers, and it is critical that this changes. Sharing and collectively learning from findings is an integral element of an effective evidence infrastructure.
These quick wins should accompany the planning of longer-term, larger-scale, robust evaluation work, building momentum and awareness of the value of evidence across the sector.
A renewed emphasis on evidencing what works in access and participation is to be welcomed. Now we must build the will, the capacity, and the momentum to deliver.