Liz Austen is Associate Dean Teaching and Learning (Social Sciences and Arts) at Sheffield Hallam University


Rebecca Hodgson is a Professor of Higher Education at the University of Manchester

How do we prove that what we’re doing to help students both “get in” and “get on” is making a difference?

A review we worked on for Advance HE (Access, retention, attainment and progression: an integrative review of demonstrable impact on student outcomes) cemented much of what the HE sector already knows.

Interventions that show a university cares, that its students matter, and that are designed from a student-centred perspective have an impact.

Building on that, the report discussed a range of specific and multi-faceted interventions across access, retention, attainment, and progression outcomes.

Harder than it looks

The types of evidence we reviewed were both interesting and challenging. Our focus on finding empirical and causal evidence created seemingly strict inclusion criteria – but definitions of student outcomes appeared more fluid than anticipated, and identifying the standard of evidence was not always straightforward.

A more pressing concern was that whilst some studies were evaluating the impact of their interventions on short or medium term outcomes – such as learning gain or the development of skills or behaviours – there was a lack of further evidence on longer term student outcomes. This, too, was not a new revelation.

In light of this, we developed “Guidelines for Demonstrable Impact” which aim to address the under-investment in higher education evaluative practices over the long term. These recommendations still apply, and are perhaps more relevant in light of the proposed overhaul of Access and Participation Plans.

What practitioners should do

At the practitioner level, we suggest that practices need to be sustained long enough for impact to be identified. Practices need to become “sticky” beyond funding deadlines, niche personal interests, and time commitments.

Let’s take the example of “active learning”, which our report discussed in relation to its impact on retention and attainment. At the level of the course, this requires a commitment from course leaders and those in similar roles to ensure a shared understanding and culture of pedagogical practice that, for example, prioritises “mattering” and embeds it beyond the efforts of one or two enthusiastic team members.

This means ensuring that all staff “matter” as well – building effective inductions for new colleagues that model effective approaches and designing shared materials and methods that enable time-poor staff to embed successful student-centred approaches in their own practice.

Sound evaluation strategies, including resourcing and a commitment to long term evidence generation, should be built into project plans and business cases whilst practice is being researched, tested, and embedded. It’s not unusual to see new developments, initiatives or interventions planned (and in some cases funded) without any reference to how the work will be evaluated and any impact assessed.

Access to good quality data, and confidence in using it, are crucial. Colleagues working in access, retention, attainment, and progression should be enabled, via structured support, to use existing institutional (or sector) data and to establish baseline evidence against which change can be demonstrated, where needed.

What researchers and funders should do

Next, we recommend clearer identification of short, medium and long term outcomes in evaluations of interventions, and we challenge funders to be more astute when setting funding deadlines, especially around the budget year end. In addition, we suggest that impact evidence should be collated across outcome measures, which will require joined-up working beyond institutional silos.

For example, a pre-entry enrichment or extra-curricular intervention that seeks to improve students’ reported confidence in the short to medium term may improve access, and also retention, attainment and/or progression outcomes in the long term. Few outcome studies coherently map these long term connections beyond an intention or a hypothesis.

Progression evidence emerged as a particular methodological gap in the impact pipeline, and one that requires attention.

To move beyond an institutional focus, and to scale evidence of impact, researchers and practitioners should have access to appropriately pitched capacity building in how to conduct causal evaluations, replicate small scale empirical studies (across institutions, courses, and student groups), develop qualitative evaluation data, and evaluate programmes which combine multiple interventions.

We then suggest that the regulatory reporting cycle of Access and Participation Plans (whatever that may now look like) provides a good opportunity to schedule sector wide evidence reviews across access, retention, attainment, and progression – given the commitments to evidencing impact within them.

What providers should do

There are steps that institutions can take to support robust impact evidence, starting with the identification of any restrictions and barriers.

For example, providers should review existing well-established processes within the “academic calendar” to ensure that they are not inadvertently contributing to the short-term focus and reporting of enhancement work. By encouraging an iterative, longitudinal approach to data use and analysis, institutions can build evaluative processes that naturally contribute to the generation of impact evidence over the longer term.

Our challenging call to action is to encourage “stickiness” beyond practice itself and into the leadership and governance of practices that are known to enhance access, retention, attainment and progression. In our report, we recommend that institutions support and resource longevity in intervention design and evaluation, moving away from short-term “quick wins” and investing in projects which are designed, implemented, and evaluated over a significant period (for example, two to three years).

For many, this will mean a shift in organisational culture, requiring shared leadership and ownership of projects across multiple stakeholders, and reducing the risks to project continuation and completion. We suggest that institutional projects are designed to ensure accountability for the measurement of long-term impact.

This requires leaders to adopt a “what works” approach, informed by the existing evidence base and literature, as well as a meticulous commitment to longitudinal evaluation and review – rather than a shiny new initiative every year. This commitment extends to an appreciation of diverse evidence types, from practitioner reflections to students’ voices, and of the value of qualitative evidence of impact.

Understanding students’ perspectives on their higher education experience is an essential part of the data jigsaw, adding breadth and depth to analysis and informing associations.

What the sector should do

The suggestion that universities may be asked to raise pre-HE educational standards as a revised focus of APP work means that impact measurement over the long term is about to get even more complicated.

There is a supportive culture of evaluation for those designing and delivering access interventions. Practice sharing and the dissemination of methods and findings is common and supported by sector agents such as NEON, Villiers Park, and NERUPI, in addition to OfS.

In the teaching and learning space (student retention and attainment outcomes), the culture of evaluation is slightly different – less service orientated and more aligned to quality enhancement through review – although QAA and QAA Scotland have developed a visible commitment to evaluation in recent years.

We did find evidence of robust evaluation research methodologies being used within pedagogic research that reported impact on student outcomes, but the findings were overwhelmingly positive. Noting the work of Dawson and Dawson on reporting bias in higher education research, we recommend recognising the value of reporting failure, to avoid an over-reliance on good practice and positive outcomes. Funding cultures, and the pressure to present positive evidence for professional development, are two possible contributing factors.

To encourage the collation of progression evidence, we recommended that the higher education sector create a valued profile for research and evaluation of progression outcomes, alongside but distinct from the policy-led Graduate Outcomes agenda. This would include providing inclusive spaces for practitioners and researchers to work together to explore progression outcomes (communities of practice, funding bids, rewards and recognition), and would avoid the burden of evaluation resting solely on careers and employability services.

These often-stretched practitioners could be supported by those working on access, retention, and attainment interventions, by encouraging a forward look at progression outcomes rather than relying on progression evaluation research to look backwards at retrospective outcomes.

These recommendations challenge sector stakeholders to collectively fix the “leaky pipeline” of impact evidence, with the aim of enhancing student outcomes. The full report is supplemented by a series of Advance HE podcasts, webinar recordings and summits.