Before coalescing into the Office for Students (OfS), a key focus for the Office for Fair Access (OFFA) was the effective and robust evaluation of widening participation outreach and student support.
Just as significant, from our perspective, as a university-based research and evaluation unit, was OFFA’s broad view of evaluation, in both methodological and structural terms.
Recognising that evaluation at a local (individual institution) level might only ever result in outcomes that have incremental and local implications, OFFA took a whole-sector view and expressed an expectation that individual universities do likewise, asking them to think about how their evaluation approaches and outcomes could have an impact beyond their own institutional context. In its last full set of guidelines for institutions preparing their 2017-18 Access Agreements, OFFA noted:
As well as looking at your institutional progress, we will also be looking closely at your contribution to the sector’s progress – for example, the quality of your evaluative work; your record in contributing to the sector’s understanding of effective practice by delivering and sharing research…
The Widening Participation Research and Evaluation Unit (WPREU) was set up by the University of Sheffield in May 2012 to research and evaluate the impact of its range of widening participation (WP) activities. Reflecting OFFA’s expectation, we have just published a book that marks half a decade of inquiry, gathering reflections on what WP can be seen to encompass and on the various methods available to, and used by, those researching and evaluating in the field. Over the last five years, the unit has undertaken a breadth of work, examining, amongst other topics: persistent issues of access; student experience; success and progression; and financial support and student budgeting. This work, amidst an ever-changing policy landscape, has – we hope – progressed the institution’s understanding of what’s working for whom and why, whilst also feeding into and being informed by debates across the wider sector.
Our book attempts to straddle the practitioner/researcher divide, providing accessible insight into the issues we face together as a sector, while also offering some direction for institutions and policymakers based on the lessons we have learned. Below we offer a flavour of what is discussed.
Issues of access
Access is an undoubtedly complex and thorny issue – the sector’s lack of progress in widening access has been the focus of much debate and consternation. Are we doing enough, and what’s more, can we show that the things we are doing work, or successfully achieve their stated aims? In the wake of the recent “NCOP action week”, and as consortia move into year two, focused, local and impartial outreach work is well and truly back on the agenda.
In this section of the book, we discuss the higher education policies that have characterised the sector’s contemporary approach to outreach. These include the need to raise “aspiration”; the need to raise attainment; the need to encourage a sense of belonging; and the need to provide appropriate financial support. Taking these broad areas as a starting point, we critically explore how they interact with the issues faced by a number of underrepresented student groups, and further, how these sorts of interventions are used to boost the participation of such groups in HE, with varying levels of success.
Researching and evaluating WP
How do we create knowledge about the efficacy of the programmes and interventions we deploy to aid widening access and participation? With much at stake, in both moral and economic terms, the methodologies and associated methods we use are of the utmost importance.
A recently published OFFA report establishes a three-tiered evaluation practice framework. Of specific note is level 3, achieved when HEIs are able to evidence an intervention’s causal effect through control or comparison group analysis. Taking a plural approach, WPREU has approached evaluation from a number of angles, and has fed into wider debates around the utility of various approaches in a WP context. Ensuring we can design and deliver evaluations that offer information both on causal impact and on how programmes are implemented and received on the ground is, in our view, what will give us the best evidence of their efficacy.
To this end, the book considers and explores a range of evaluative approaches, with many taking quite divergent notions of social reality as their starting point, offering us different perspectives on whether, why and how programmes work. These include randomised controlled trials (RCTs), realist evaluation, participatory action research, narrative inquiry, and longitudinal designs.
As was discussed at last year’s ‘Why Evaluate’ conference, the existence of different methodological approaches across the sector is no bad thing – and is an inevitable consequence of its diversity. However, there must be transparency as to why an approach is taken, and what questions it seeks to address. We hope the contributions in this section of the book offer fellow WP researchers and evaluators some support in making these decisions.
Student experience, success and progression
There has been an increased policy focus in recent years around what happens to WP students once they get to university, and what happens after they navigate their way through an undergraduate degree. The full “student lifecycle” approach to WP is one WPREU subscribes to – something evident in the growing diversity of our work.
We must consider the wide range of programmes and interventions deployed to support WP students when they get to university, including the movement towards inclusive learning and teaching practices, and the re-conceptualisation of financial support as an integral mechanism for student success and progression. It is worth taking an in-depth look at the constituent parts of the student budget, given its inherent complexity and changeability – something illuminated through both the “Sheffield Student 2013” longitudinal tracking project and our yearly financial support evaluations.
These outcomes show how sufficient and stable maintenance support underpins the student experience, enabling meaningful engagement within the institution as well as planning ahead. Fundamentally, a balanced personal budget creates time and space, and so helps students to shape their time at university, both in terms of academic engagement and extracurricular participation. Further, additional financial support enables students to choose more suitable part-time work: for those students for whom it is an inevitable part of student life, jobs situated within the university are seen to provide more flexibility and a higher level of pay.
A recent report by the Economic Affairs Committee of the House of Lords called for the discontinued system of maintenance grants and loans to be reinstated, with support reflecting the true cost of living. Further, a working paper from the Centre for Global Higher Education (CGHE) by Gayardon and colleagues discussed the adverse consequences of a high student debt burden on life events such as starting a family and purchasing a home, as well as “physical and mental health, career choices, and the decision of whether to pursue postgraduate education”.
Our own research highlights the need for a whole institution approach to WP, ensuring it is embedded across the institution, and that support for students is linked up between central teams and academic departments.
Where are we going?
Despite the undoubted challenges that remain, we would argue that WP research and evaluation is in a much better place than it was in 2012 – with progress coming as a result of the work delivered by a thriving and growing community of researchers and practitioners, a community we are proud to be a part of.
Of course, OFFA’s sudden absorption into the new OfS is still taking place, and it is not yet clear what final form the entity as a whole will take, or what its angle on a sector-wide approach to evaluation might be. Early indications – from the OfS’s recently published Business Plan for 2018-19, for example – suggest that its thinking around evaluation approaches is itself mutating and might become increasingly centralised and OfS-led, with priority given to the regulator commissioning its own research and administering a central ‘Evidence and Impact Exchange’.
It would be a real shame if OFFA’s previous commitment to collaboration and cross-sector reach were to dissolve in this new mix. Arguably, the many and complex challenges of evaluating complicated domains such as widening participation can only be addressed by a sector joining forces, working bottom up, to share, collaborate, and negotiate ‘what works’ from a ‘coal-face’ practitioner perspective. It was this belief, in particular, that drove our decision, as a collective of widening participation researchers and evaluators, to pool our combined experience and thinking, and make it available to our peers across the sector. We hope this prompts discussion, disagreement and new ideas for solutions, thus informing and offering a guide to policy development at institutional and sector levels.
As we enter this new regulatory environment, one that will (rightly) stress the importance of consistent, robust and in-built evaluation, the need to invest in evaluation, and to think critically about why and how we do it, is self-evident. The outcomes and thoughts gathered in the book represent our current vantage point, from which we hope to move forward across the next five years.
Five years of WPREU: Critical reflections on evaluation, policy and practice in widening participation and student success is available to download for free here.
Thanks for an interesting article, WPREU colleagues. As your article welcomes some friendly disagreement, here’s mine…
Having been involved in WP research and evaluation for getting on for 15 years now (crikey), I actually do think it’s time for some form of centralised evaluation control. I think it’s fair to say that despite ongoing focus from the various funders, governments and individual institutions, the success of evaluation of widening participation outreach (and more latterly student success and progression) has been somewhat mixed. A considerable contributing factor in this has been a distinct lack of standardisation across the sector. Take the Aimhigher programme as an example, which was deemed unsuccessful and subsequently scrapped, not least due to a perceived lack of evidence of its impact. There were 45 Aimhigher area partnerships, with 45 different evaluation plans and 45 different evaluation methodologies. With no standardisation, it was notoriously difficult to separate the good, the bad and the indifferent. There was some very good and robust evaluation work taking place, but, with little centralisation, much of this was confined to the shelves or obscure repositories.
With the cessation of the Aimhigher programme, the 45 area partnerships have been replaced by 120 or so individual institutions, only this time with 120 or so evaluation plans and 120 or so different evaluation methodologies, of varying quality but little standardisation.
The current ‘collective responsibility’ for evaluation remains a sticking point. Does collective responsibility go far enough to overcome the limitations of previous (and current) structures? How can we ensure a consistent approach if we continue with the devolved approach? A more centralised approach to evaluating how the access, student success and progression stages of the life-cycle contribute to social mobility would ensure a consistent and robust approach. Of course, this would involve working with a number of HEIs and data agencies – particularly those with established evaluation methodologies and data tracking systems. But it is sensible to free universities from at least some of the responsibility for evaluation. This would free up resource so universities can focus on formative evaluation, strive for continuous improvement and inform their access and success provision.
The relatively new OFFA standards of evaluation that your article notes (three levels) are helpful, but currently it is individual institutions that determine which level they are at for their different activities. This is subject to huge variation in (mis)interpretation. Perhaps the time has come for those of us in the WP research and evaluation field to embrace a potential new direction. After all, we are all striving for the same thing, and maybe, just maybe, a degree of centralisation will help us prove as a sector, once and for all, ‘what works’.