The Office for Students (OfS) has confirmed its approach to regulating access and participation from 2024.
For those who are already across the detail, the new guidance will hit familiar notes. Following the consultation, providers will be expected to undertake their own risk assessment of which student groups are at disproportionate risk of not achieving good outcomes – that could be about student demographic groups, geography, prior educational experience, or the intersection of one or more of these, for example. They will then need to explain, using evidence and a theory of change, what they plan to do to mitigate those risks, along with the associated specific objectives and numerical targets for entry, persistence, completion, and so on.
Access and participation plans (APPs) will run on a four-year cycle, rather than five, and providers will also be required to pay due regard to a set of national priorities for access and participation, particularly the challenge of raising attainment in schools. Whatever a provider does, it will also need to evaluate to understand whether it is actually achieving what it sets out to do.
Providers have a good deal of discretion in deciding what counts as a significant risk and what their strategic priorities should be – though OfS may, of course, disagree and send a plan back for resubmission. But there is a requirement to pay due regard to the national Equality of Opportunity Risk Register (EORR) – which gives a heavy steer on exactly how the analysis should take shape – and we’re seeing it for the first time now.
The idea is that a provider’s APP should “respond” to this list of research- and data-derived risks to access and participation at a national level.
It’s not that you should develop a plan to address every item on the risk register – more that you could use the EORR to interrogate your own data and understand who may be at risk and how this may manifest itself. Certainly, OfS expects that provider actions will contribute to reducing these risks – and, crucially, that evidence gathered through the APP monitoring process can be used to improve and iterate the EORR in years to come.
The development of the register is admirably scholarly – a TASO literature review and other relevant research papers sit alongside data from HESA (cheekily credited to OfS), DfE, and UCAS collections. There’s even material gleaned via consultation responses, and a helpful “improve the EORR” button that will allow you to feed in new findings and insight.
The idea of risk
If you’re an old-school project or programme manager by training (and old enough to have been around before the rise of agile methods) you will be familiar with the idea of a risk register – in its simplest form, a list of possible risks, graded by likelihood and severity, alongside mitigations designed to reduce either the likelihood of each risk manifesting or the severity of its implications.
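For anyone who hasn’t met one, that classic structure can be sketched in a few lines of Python – a purely hypothetical illustration of the likelihood-and-severity grading described above, not anything OfS prescribes (all names, scores, and example risks here are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a simple risk register: graded by likelihood and
    severity, with mitigations that aim to reduce one or the other."""
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain) - illustrative scale
    severity: int    # 1 (negligible) to 5 (critical) - illustrative scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # A common convention: rank risks by likelihood x severity
        return self.likelihood * self.severity

# An invented register, loosely echoing the kinds of risks in the EORR
register = [
    Risk("Students lack information about their HE options",
         likelihood=4, severity=3,
         mitigations=["Targeted outreach and guidance"]),
    Risk("Cost pressures affect course completion",
         likelihood=3, severity=5,
         mitigations=["Hardship funds", "Flexible timetabling"]),
]

# Highest-scoring risks first, as a programme board would review them
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:2d}  {risk.description}")
```

The point of the multiplication is simply that a likely-but-minor risk and an unlikely-but-catastrophic one can be compared on the same scale – which is exactly the calculus the national EORR, as we’ll see, does not attempt.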
Risks in the national EORR do not come with this likelihood and severity calculus, but they do have indicators – ways in which the impact of a risk plays out in data. If you see an unexpectedly low value in the data for students in a particular group, you have found an indicator – the risk itself is the underlying issue, not what you see in the data. This of course raises questions around causality – how can you be sure in tracing an indicator back to a given risk? We get this language:
Indications of risk may be the result of a risk, but they may also result from something else. We encourage providers to evaluate access and participation activities and explore whether a risk may be contributing to the indication of risk.
Which is a more circular way of making the same point. It’s not enough to link a pattern in the data to a plausible explanation – you need to get stuck in and do the research. Mind you, a lack of data can also be a risk – and this may be indicated by the inability to see an indicator in the data you do have.
Here are the 12 national risks from the EORR:
- Students may not have equal opportunity to develop the knowledge and skills required to be accepted onto higher education courses that match their expectations and ambitions.
- Students may not have equal opportunity to receive the information and guidance that will enable them to develop ambition and expectations, or to make informed choices about their higher education options.
- Students may not feel able to apply to higher education, or certain types of providers within higher education, despite being qualified.
- Students may not be accepted to a higher education course, or may not be accepted to certain types of providers within higher education, despite being qualified.
- Students may not have equal opportunity to access a sufficiently wide variety of higher education course types.
- Students may not receive sufficient personalised academic support to achieve a positive outcome.
- Students may not receive sufficient personalised non-academic support or have sufficient access to extracurricular activities to achieve a positive outcome.
- Students may not experience an environment that is conducive to good mental health and wellbeing.
- Students may be affected by the ongoing consequences of the coronavirus pandemic.
- Increases in cost pressures may affect a student’s ability to complete their course or obtain a good grade.
- Students may not have equal opportunity to access limited resources related to higher education, such as suitable accommodation.
- Students may not have equal opportunity to progress to an outcome they consider to be a positive reflection of their higher education experience.
Each of these comes with an impact (so for risk 1 we hear about differential level 2 and level 3 attainment, limited level 2 subject choices, the progression rate, and on-course success). And each has a list of likely groups that may be affected – presented without intersectional analysis or scaling – we hear, for instance, that white British students are at risk from risk 1, but we don’t know whether this is at the same likelihood or severity as for students with care experience. Whether and how these risks should be of concern at institution level is for individual providers to take a view on.
Strictly speaking, completion is an indicator, not a risk, though the inclusion of completion data drives the morning headlines. As John Blake reads it:
This is profoundly unfair. Students who have overcome obstacles to get into higher education should not find further barriers in their way through their studies.
The data in question is actually from 2017-18 entrants – we’ve plotted it below for all providers so you can see how profoundly this indicator varies from setting to setting.
It’s an interesting thing to focus on – given that non-continuation is pretty much a non-completion risk factor by definition and we have known about the impact of student backgrounds on this measure since the halcyon days of the UK performance indicators – and one that perhaps runs some risk of offering heat (why are universities failing students from disadvantaged backgrounds?) rather than light (what is it about having a disadvantaged background that makes completing a university course harder, and what can universities do to help?).
DK’s dashboard notes: the data here is being pushed to the limit to do something it has not been designed to do – not all splits are available, and for some you may need to adjust the population or other factors to make it work. This dashboard only shows the raw data rather than allowing you to see gaps, and only some intersections are available – please see the OfS version for those functions. And yes – we have one of these for Access, Continuation, Attainment, and Progression too!