Both were focussed on access and outcomes, and the extent to which we might reasonably hold providers responsible for performance:
It cannot be right that those students’ entry to higher education is used to polish the laurels of providers who are consistently and persistently not delivering on the quality of teaching and support those same students need to thrive in higher education, and succeed after graduation. The access and participation plan process can do more to prevent this.
I absolutely reject any suggestion that there is a trade-off between access and quality – if providers believe the regulation of quality justifies reducing their openness to those from families and communities with less experience of higher education, or who have travelled less common, often more demanding, routes to reach them, they should be ashamed of themselves.
As such, one of the things we’re watching in real time – not least via the use of core metrics – is what Blake described as a process where “our access and participation work will be brought into firmer alignment with our approach to quality.”
But here’s the puzzle I’ve been thinking about.
I can see the rationale for OfS taking a subject area in a large university, and saying “this must hold its own on continuation, progression and completion” rather than poor scores in a subject area being buried in the averages.
I can also see the argument for a renewed emphasis on gaps for students with particular characteristics in success and progression rates rather than just access. Again, slicing them out stops them being buried in averages.
But isn’t the point that the two intersect?
For example: the OfS dashboards land, and you're then interrogating the numbers internally.
If you are looking at a progression (to graduate employment) gap for mature students at provider level, you might scratch your head and think “I wonder why that is”. And you might reasonably put that down to the distribution of mature students at your provider – you may well find mature entrants clustered in a couple of subjects which tend to have poorer outcomes.
Alternatively, you might be staring at a subject area with poor outcomes (as many in the sector have been doing in op-eds) and be arguing "Ah well, that's down to the fact that that area does the heavy lifting on access".
And the thing is, in part, that would be right on both counts. Insofar as outcomes are always partly within someone's control and partly outside it, both would be reasonable justifications for the part that sits outside the control of the person you're talking to.
But that doesn’t help on the bit you can hold someone responsible for, now does it?
In fact, the danger is that it just creates a merry-go-round of blame and justification that doesn't actually help people to make things better.
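To see why both justifications can be simultaneously true and simultaneously unhelpful, here is a minimal sketch with entirely invented numbers. It shows how a provider-level progression gap for mature students can be a pure composition effect: no gap exists within any subject, yet one appears at provider level because mature students cluster in the subject with the weaker progression rate.

```python
# Illustrative only - all figures are invented for the sketch.
# (subject, group) -> (students, number progressing to graduate jobs)
data = {
    ("Law",      "mature"): (20, 16),    # 80% within subject
    ("Law",      "young"):  (180, 144),  # 80% within subject
    ("Business", "mature"): (80, 48),    # 60% within subject
    ("Business", "young"):  (120, 72),   # 60% within subject
}

def rate(group, subject=None):
    """Progression rate for a group, optionally within one subject."""
    pairs = [v for (s, g), v in data.items()
             if g == group and (subject is None or s == subject)]
    students = sum(n for n, _ in pairs)
    progressed = sum(k for _, k in pairs)
    return progressed / students

# Within each subject there is no gap at all...
for subj in ("Law", "Business"):
    assert rate("mature", subj) == rate("young", subj)

# ...but at provider level a gap appears, because mature entrants
# cluster in the subject with the poorer progression rate.
provider_gap = rate("young") - rate("mature")
print(f"provider-level mature gap: {provider_gap:.1%}")  # prints 8.0%
```

The same arithmetic runs the other way: a subject's poor headline outcomes can be entirely down to its entrant mix. Which is precisely why publishing the gaps only at one level of aggregation lets everyone point at the other.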
So here’s the thing. If the APP gaps – access, continuation, completion and progression – were all published at subject level, you could do better things.
You could, for example, look at a law school with an over-representation of white middle-class students and start to interrogate why that is.
You could ask yourself why Afro-Caribbean students doing geography are doing worse on graduate jobs than other students doing geography.
And so on. Surely the APP, B3 and TEF regimes need to work off data that intersects in the way I’ve described? Gaps by subject area.
The point is partly about data that is useful, partly that I’m interested in OfS’ data capacity supporting internal discussions, and partly about the fact that I’m interested not just in students getting graduate jobs, but in ensuring our professions (which HE plays a central role in delivering training for) are diverse.
Put another way, it still looks very possible to me on what’s being proposed that a disproportionate number of medics will share particular student characteristics, and for that not to end up being picked up by the regulatory regime. That would surely be a problem.