How to evaluate an access and participation plan

Anna Anthony reveals five tried and tested principles for successful evaluation of access and participation plans

Anna Anthony is Co-Director at the Higher Education Access Tracker (HEAT) Service

Predictably, the Office for Students’ (OfS) updated regulatory guidance for higher education providers producing their access and participation plans (APPs) maintains a strong emphasis on evaluation.

John Blake praises the early-submitting providers in “Wave 1” for investing in evaluation, both by hiring evaluation specialists and by training existing staff. This is something I have observed in my role as Co-Director of HEAT.

In particular, I have witnessed the trend for providers to appoint new staff, or deploy existing staff, to oversee the evaluation of their whole APP. This role is often known as the “APP Evaluation Manager”, although titles vary.

These APP evaluators have tough jobs and, arguably, are some of the most pressured among us, as they are responsible for planning, and then delivering, the evaluation of every intervention listed in their institution’s APP.

This covers an enormous scheme of work, often taking place across different departments and involving numerous staff, all of whom may be using separate systems to record data.

APP evaluators must ensure that data for all these interventions are being collected and recorded consistently, so that they can eventually be used to show impact and, ultimately, translated into evidence that satisfies OfS.

This is a daunting task even for the hardiest among us, so, in an attempt to make the lives of APP Evaluation Managers a little easier, here are five tried and tested principles for successful evaluation.

These principles have been informed by the expertise and experience of the HEAT membership, where universities and third sector organisations have worked together for over a decade to build evaluation capacity and capability through the sharing of costs, resources and knowledge.

Understanding what is involved in evaluating an APP is also very relevant to senior leaders who hold the purse strings, as it is critical to ensure that APP evaluators have adequate resources to accomplish this essential task properly.

Establish a system to enable a “single source of truth”

As good quality data are the bedrock of good evaluation, the first principle is that every APP evaluator must have a system in place – in essence, a central database – in which they bring together, and monitor, all data collected within the institution that are relevant to their APP delivery.

As mentioned above, these data may be collected in different departments, by different people, with different approaches to coding and recording. It is crucial that the APP Evaluation Manager oversees the recording of all essential data relating to interventions on a central system. It is also their task to ensure that those data are as high quality and as consistent as possible.

Once these data are brought together on to a single, shared system within the institution – a huge amount of work – the system represents a ‘single source of truth’ and provides the basis for all analyses within the institution. Having all analyses based on data from a central system gives integrity to the evaluation and guards against siloed working within the institution.
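To make the idea concrete, here is a minimal sketch (in Python, with invented departments, column names and coding schemes – it is not a description of HEAT’s system or of any provider’s actual setup) of the kind of consolidation step a “single source of truth” implies: departmental records arriving in different shapes, standardised into one shared table.

```python
# Hypothetical sketch of a "single source of truth" consolidation step.
# File shapes, column names and coding schemes are illustrative assumptions.
import pandas as pd

# Each department records participation in its own format.
outreach = pd.DataFrame({
    "student_ref": ["A101", "A102"],
    "activity": ["Summer School", "Campus Visit"],
})
success = pd.DataFrame({
    "StudentRef": ["A102", "A103"],
    "InterventionName": ["peer mentoring", "Peer Mentoring"],
})

def standardise(df, mapping, department):
    """Rename columns to the shared schema and normalise coding."""
    tidy = df.rename(columns=mapping)
    tidy["intervention"] = tidy["intervention"].str.strip().str.title()
    tidy["department"] = department
    return tidy[["student_ref", "intervention", "department"]]

central = pd.concat([
    standardise(outreach, {"activity": "intervention"}, "Outreach"),
    standardise(success, {"StudentRef": "student_ref",
                          "InterventionName": "intervention"}, "Student Success"),
], ignore_index=True)

print(central)  # one consistent table that all APP analyses can draw on
```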

Wave 1 providers have told us that the HEAT system provides a one-stop-shop for APP evaluation, giving them oversight over the wide array of interventions being delivered across their institution.

Link theories of change to data on delivery

Something that all evaluation experts agree on is the need for a well-thought-out theory of change. Indeed, so deep is the consensus on this that OfS has built theories of change into its reporting template (see p38 of Regulatory Advice 6).

All APP evaluators will therefore have been through (Wave 1), or are now going through (Wave 2), the process of creating theories of change for the intervention strategies that appear in their APP. These theories of change come with a commitment to evaluating the outcomes listed within them.

On this basis, all APP Evaluation Managers need a way of recording the theories of change from their APP in their data system and then, importantly, linking them to the actual delivery data they will use to evaluate their APP.
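What that linkage looks like in practice will vary by provider, but the rough sketch below (Python, with entirely invented outcome names and measures) illustrates the principle: each outcome in a theory of change is paired with the measure that will evidence it, so gaps in data collection become visible early.

```python
# Hypothetical sketch of linking a theory of change to delivery data.
# Names and structures are illustrative assumptions, not a real data model.
from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str      # an intermediate or final outcome in the theory of change
    measure: str   # how it will be evidenced (survey, tracking, records)

@dataclass
class InterventionStrategy:
    title: str
    outcomes: list[Outcome] = field(default_factory=list)

# Delivery/measurement data actually recorded so far.
recorded_measures = {"pre/post ASQ survey", "participation records"}

strategy = InterventionStrategy(
    title="Pre-entry attainment raising",
    outcomes=[
        Outcome("Increased subject confidence", "pre/post ASQ survey"),
        Outcome("Improved GCSE attainment", "tracked attainment data"),
    ],
)

for outcome in strategy.outcomes:
    status = "data in place" if outcome.measure in recorded_measures else "MISSING"
    print(f"{strategy.title}: {outcome.name} -> {status}")
```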

At HEAT we have developed a system to enable the membership to do this, allowing APP evaluators to monitor data collection and ensure everything is in place to enable them to report on their targets in four years’ time.

Track students to access key outcomes

‘Tracking’ is now widely accepted as providing a key dataset for evaluating the impact of interventions. As OfS remarks in its guidance, tracking ‘provide[s] useful longitudinal data as part of an evaluation’ (p58).

Tracking services like HEAT provide members with outcome data from our longitudinal tracking study. This includes exam attainment from the Department for Education’s National Pupil Database, used to evaluate pre-entry attainment-raising interventions, and data from HESA to examine impact on participation in HE. These data are notoriously difficult to access and would cost institutions far more in time and money to obtain individually than as part of a collective.
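At its core, tracking is a linkage exercise: participants recorded at the point of delivery are matched, via a shared identifier, to outcome records that arrive years later. The minimal sketch below (Python, with made-up identifiers and outcomes; real linkage to NPD or HESA data sits behind strict data-sharing agreements) shows the shape of that join.

```python
# Hedged sketch of longitudinal tracking as a join on a pseudonymised ID.
# Identifiers and outcome fields are invented for illustration only.
import pandas as pd

participants = pd.DataFrame({
    "student_ref": ["A101", "A102", "A103"],
    "intervention": ["Summer School", "Summer School", "Mentoring"],
})
he_outcomes = pd.DataFrame({
    "student_ref": ["A101", "A103"],
    "entered_he": [True, True],
})

tracked = participants.merge(he_outcomes, on="student_ref", how="left")
tracked["entered_he"] = tracked["entered_he"].fillna(False).astype(bool)

# HE entry rate per intervention, years after delivery
print(tracked.groupby("intervention")["entered_he"].mean())
```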

Tracking data should always form part of a well-thought-out evaluation design and should be used alongside data for the intermediate outcomes identified in the intervention’s theory of change. Through TASO we now have a set of sector-wide resources to help us, such as the pre-entry MOAT, which helps choose appropriate outcomes, and the validated Access and Success Questionnaire (ASQ), which measures intermediate outcomes through surveys.

All these tools have been integrated into the HEAT system so they can be used together, alongside tracking data, to build up a picture of evidence.

Don’t fear the “Types” of evidence

Evaluation methods, and what counts as credible evidence, can be a controversial issue. The debate around quantitative versus qualitative methods dates back at least 100 years! OfS asks providers to have regard to its Standards of Evidence, with Type 3 “causal” evidence being something we are often told is in short supply within the sector.

To reach Type 3, the system we use must be capable of storing (and tracking) data, not just for participants, but also for comparator and control groups. Of course, HEAT already does this.

Yet Type 3 evidence (aka Randomised Controlled Trials (RCTs) and Quasi-Experimental Designs (QEDs)) can be taxing for providers to achieve. I have observed evaluators left feeling paralysed when they are not able to meet these standards. In these cases, “perfect” really is the enemy of “good”.

And there is a lot of “good” evaluation we can do with the largely descriptive data we can access through a tracking service. HEAT provides standard comparisons, such as school averages, with further breakdowns by prior attainment and level of disadvantage, which can often meet strong Type 2 standards of evidence. In my experience this approach is underutilised but, when triangulated with other data points such as survey data and qualitative data, it can build up a convincing compendium of evidence.
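As a rough illustration of how simple a useful Type 2-style comparison can be, the snippet below sets a made-up participant progression rate against a made-up benchmark for comparable schools; a real analysis would also break this down by prior attainment and level of disadvantage.

```python
# Illustrative Type 2-style comparison with invented numbers.
participant_entrants, participant_total = 62, 100   # tracked participants entering HE
benchmark_rate = 0.41                               # e.g. average for comparable schools

participant_rate = participant_entrants / participant_total
difference = participant_rate - benchmark_rate

print(f"Participants: {participant_rate:.0%}  Benchmark: {benchmark_rate:.0%}  "
      f"Difference: {difference:+.0%}")
```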

Collaborate to save costs and resource

At a recent partnership event John Blake spoke about the importance of collaboration in intervention delivery and evaluation. Working in partnership with other higher education institutions and third sector organisations should be at the forefront of planning, rather than an afterthought.

Given the cost and resource savings that collaboration can bring, this is almost certainly a good idea. Yet there can be practical barriers, especially when multiple partners in different locations need to access shared systems.

Being part of a near-national collective, HEAT member institutions can easily share data through the system they are already using, facilitating peer evaluation. This avoids siloed working and promotes a collaborative, honest and open research environment, which is more likely to yield high quality evidence.

And this works for national-level evaluation too. Being part of the HEAT collective also means that members contribute to the generation of national level evidence – see our latest impact reports for APP funded delivery and Uni Connect delivery.

The latter report, comprising the largest available sample of government-funded outreach participants, has been used by OfS in its report to inform future policy and funding decisions around the Uni Connect programme. Together we have a collective power to build evidence that ministers may listen to – and even hear.
