
Solving gaps in evaluation needs wicked solutions

There's pressure on universities over gaps in evaluation when supporting students to get in and get on. Liz Austen on what might be done to close them

Liz Austen is Head of Evaluation and Research at Sheffield Hallam University

Yet again, a TASO report has concluded that there is limited sector evidence of the causal impact of interventions that aim to close equality gaps in higher education.

Since 2020, a variety of TASO reports have reached similar conclusions and recommended more and better evaluation across the sector.

This is echoed by the Office for Students (OfS) which is about to set expectations for an increase in the “quality and volume” of evaluation within institutions as part of updated Access and Participation Plans.

This perceived lack of progress on gathering evaluative evidence could lead to various conclusions. I believe the positivist focus on collating causal evidence is too restrictive – but with colleagues, I have written about this before.

A further conclusion is that the sector is stagnating, grappling with issues in intervention design and evidence generation that have become “wicked” problems: open-ended and with no obvious solution.

I also believe that the sector has not been sufficiently supported to develop evaluation confidence, which is a crucial step in the development of effective practices. Continuous calls to “evaluate causally” and “increase quality and volume” may only contribute to the problem. No-one learns by just being told to “do more” and “do it better”.

Capacity building from within

In 2021, a team from Sheffield Hallam University published a similar recommendation in a review of evidence of demonstrable impact on access, retention, attainment, and progression: the sector needed better evaluation and evidence of impact on long-term student outcomes. But we didn’t stop there.

With further support from AdvanceHE, we developed a pedagogically informed resource to build confidence with intervention design and evaluation (#ChangeBusters, Theory of Change Game). This resource is now being used to develop evaluation confidence across the HE sector.

I have been involved in other work which has embedded evaluation capacity building. During a three-year independent evaluation of the Scottish Enhancement Themes, we built capacity building for confidence and skills into the commission, and designed a Universal Evaluation Framework, accessible to all, to support better evaluation design and reporting in the future.

In a participatory evaluation of the PGCert/MA Student Engagement at the University of Winchester, we provided evaluation capacity building for participants (students) to enable meaningful evaluation co-design.

With a similar ethos, the Evaluation Collective committed to supporting HE practitioners with evaluation, recognising varying levels of confidence, experience, skills, and knowledge. Most recently, we asked the sector to tell us about the wicked evaluation issues that are troubling them, in the hope that we could listen and respond. My reflections on these responses are themed around the value of evaluation, evaluation methods, evaluation planning, and evaluating specific topics.

Wicked values

Institutional responsibility for evaluation differs across the sector but likely occurs in most job roles involving student access, success, and progress. How evaluation is valued and supported in these spaces can be a wicked issue.

This includes managing the pressure to report positive outcomes for flagship initiatives or the sensitivities of reporting “no impact” when impact is so desperately needed (think recent TEF and future APP). How to champion the importance of evaluation and gain the support of institutional leaders is also a concern that can present as a barrier to progress.

Wicked methods

Whilst airtime is being devoted to the quantitative assessment of impact, the sector is trying hard to explore alternative routes to evidencing causality. Alongside the general use of qualitative methods, there is interest in creative (but no less robust) evaluation methods, in evaluation with small samples, and in engaging methods that can be employed over the long term.

Here we see a desire to adopt methods which best fit the context being evaluated, rather than a methods-first approach.

Wicked planning

Making sound evaluation decisions at the planning stage is essential. It appears that the sector is fighting hard to manage evaluation expectations within considerable constraints on time and resource. There is also a desire to learn from and collaborate with others, but the ways of doing this (or of making time for it) are often unclear.

Embedding evaluation at the intervention design stage is still an issue; a lack of evidence-informed rationale or proposed outcomes remains a challenge for those seeking to evaluate impact. There are also practical issues to grapple with: knowing that student comparison groups would strengthen an impact evaluation, for example, but lacking the knowledge and skills to implement them and make sense of the resulting data.
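
On that last point, and purely as an illustration rather than a recommendation: at its simplest, a comparison-group analysis asks whether average outcomes differ between students who received an intervention and similar students who did not. A minimal sketch, assuming Python with scipy and using invented marks data, might look like this:

```python
# A minimal, illustrative sketch of a simple comparison-group analysis.
# The marks data are invented and the design choices are assumptions,
# not a recommended evaluation approach.
from statistics import mean

from scipy import stats

# Hypothetical final module marks: students who received an intervention
# versus a comparison group who did not.
intervention = [62, 68, 71, 59, 74, 66, 70, 63]
comparison = [58, 64, 60, 55, 67, 61, 59, 62]

# Welch's two-sample t-test: is the difference in mean marks larger than
# chance variation would suggest?
t_stat, p_value = stats.ttest_ind(intervention, comparison, equal_var=False)

print(f"Mean difference: {mean(intervention) - mean(comparison):.1f} marks")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even a tidy result from a test like this only shows an association; without random assignment or careful matching it cannot by itself support a causal claim, which is exactly where design-stage thinking comes in.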

Wicked topics

Finally, there are some pertinent questions which require appropriate evaluation to be designed and implemented, but where progress is wavering. These include: How to evaluate educational gain? How to evaluate disparity in degree awarding? How to evaluate attainment-raising in schools? How to evaluate learning and teaching interventions?

These are important topics but increasingly difficult to evaluate due to varying definitions, a multitude of diverse stakeholders, and complex educational spaces.

Help us to help you

The Evaluation Collective has been funded by QAA Membership to work collaboratively to respond to these crowdsourced wicked issues. We want the sector to feel listened to and supported in its efforts to embed effective evaluation, access practical solutions, build confidence, and ultimately design better interventions which lead to improved student outcomes.

We are currently running a series of Twitter polls (@evaluation_c) which invite followers to vote on the most wicked of wicked evaluation issues. There is still time to add yours to our open-access Padlet. This summer, we will publish the first edition of an agony-style zine which responds to these issues.

We will reflect on these wicked evaluation issues and provide a range of suggested ideas for how to progress. We hope to launch this publication at an in-person event later this year. In addition to a variety of other activities from the Evaluation Collective in the coming months, we hope that our approach will start to fill the gap between sector expectations of evaluation and the reality of experiences. You can subscribe to our mailing list here to join us in this endeavour.

The QAA-funded Collaborative Enhancement Project is being led by Sheffield Hallam University on behalf of the Evaluation Collective, in partnership with the University of Reading, University of Sussex, University of Lincoln, Staffordshire University and Queen’s University Belfast. This project will be discussed at the QAA Membership Member Network event on Thursday 20 April.

2 responses to “Solving gaps in evaluation needs wicked solutions”

  1. Thanks Liz – a really informed and helpful response to some quite frustrating claims. I really struggle with the whole idea that we can find causal connections and use control groups in higher education – where so many factors (in and out of the study environment) influence and frame behaviours and outcomes. If only we could take our students and put them in a sealed lab and remove all social and cultural history and ongoing social and cultural contact eh? (Joke claxon – just in case anyone took me seriously for a second).

    Some time ago I had the privilege of leading a team working on a DfE funded longitudinal evaluation (a whopping nine-year study) following the first cohort of secondary school students to study citizenship education as a national curriculum subject in England. Many lessons were learnt along the way – but a key finding writ large in the final report (after I had left the project and moved back into HE) was that, even after nine long years of extremely complex and expensive quantitative and qualitative evaluation drawing on a sample of 24k students across 169 schools and some pretty sophisticated statistical analysis, we could only really show preliminary indications of the impact of citizenship education on citizenship outcomes over and above the impact of other factors. If only current government departments and quangos could learn from the wealth of work they have already funded before rather than reinvent wheels!

    For anyone who is interested in a bit of evaluation history – the final report of the Citizenship Education Longitudinal Study (CELS) can be found here https://www.gov.uk/government/publications/citizenship-education-in-england-2001-2010-young-peoples-practices-and-prospects-for-the-future-the-eighth-and-final-report-from-the-citizenship-e

  2. Thanks for this post! I think we (people in HE) need to consider evaluation theory and practice in the simplest possible terms that we can, so as to not overcomplicate the act of doing evaluation in what is already a supercomplex educational system and/or setting (being evaluated). Better Evaluation is a great open resource, which people may find useful: https://www.betterevaluation.org/

    These open access research articles may also help (both are educative, situated in HE, and cite useful evaluation-focused references):

    Boyle, F., & Cook, E. J. (2023). Developmental evaluation of teaching quality: Evidencing practice. Journal of University Teaching & Learning Practice, 20(1). https://ro.uow.edu.au/jutlp/vol20/iss1/11

    Cook, E. J. (2021). Evaluation of work-integrated learning: A realist synthesis and toolkit to enhance university evaluative practices. International Journal of Work-Integrated Learning, 22(2), 213-239. https://ro.ecu.edu.au/ecuworkspost2013/10403

    Finally, think of evaluation as an opportunity, not as a scary obstacle to overcome. It can be fun to show the impact of your work!
