
Proceed with caution – measuring positive outcomes needs context

Proceed claims to project the likelihood a student will complete their course and progress to a positive outcome. But with UKPIs disappearing, it all feels a little out of time.

David Kernohan is Deputy Editor of Wonkhe


Jim is an Associate Editor (SUs) at Wonkhe

Data has a poignancy. It has a structure of meaning beyond what the numbers tell you, and beyond the comparisons you can make.

Data design reifies societal conceptions around what is important, what is meaningful, and what is valuable. Today we’re dealing with the loss of a valuable old dataset, and examining the advent of a new one of uncertain utility. Ironically, the new one is very much based on the old.

Proceed to the runway

We’ve been covering the advent of Proceed since it was called Start to Success, and we laughed uproariously at the first attempt at publication (as “Entry to Professional Employment”) just before Christmas. It combines existing data on non-continuation and skilled employment to give a projection of progression between the two – if a person enters a given provider to study a given subject, how likely are they to end up in a professional job or other “positive outcome”?
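To make the arithmetic concrete, here’s a minimal sketch in Python of how the two measures combine – on our reading, the headline figure is simply the projected completion rate multiplied by the positive outcomes rate (treat the names and structure as illustrative, not official):

    def proceed_score(projected_completion: float, positive_outcome_rate: float) -> float:
        """Projected chance that an entrant completes *and* reaches a positive outcome.

        Assumes the headline measure is a simple product of the two underlying
        rates - our reading of the approach, not an official OfS formula.
        """
        return projected_completion * positive_outcome_rate

    # Illustrative numbers only: 90% projected completion, 75% positive outcomes
    print(f"{proceed_score(0.90, 0.75):.1%}")  # 67.5%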

Proceed, therefore, is a contrived acronym of noughties Jisc project-like beauty: “Projected completion and employment from entrant data.” Since the first attempt, there have been a few other tweaks:

  • Students that move to another provider to study are no longer counted in the statistics for their first provider.
  • The range of “positive” outcomes has been expanded. Graduates may end up doing many things – if just one of those is “positive”, the graduate counts as having a positive outcome.
  • If a graduate goes travelling, cares for someone, or retires after graduation these are counted as a “positive” outcome.
  • There’s more contextual information available on interim studies, and there are sector-adjusted benchmarks.

Data is presented on a subject and provider basis, but there are also measures by provider only and by subject (CAH2) only. Here’s the first one, showing absolute scores and differences from benchmarks:

[Interactive visualisation: absolute Proceed scores and differences from benchmarks, by provider and subject]

(note: numbers relating to low volumes of students in either of the base measures are suppressed)

The completion projections relate to 2018-19 graduates, whereas the progression data comes from the 2017-18 cohort. There’s a flag for low Graduate Outcomes response rates, which we’ve shown in the tooltips. The data covers students at English higher education providers that were registered with the OfS in late October 2020. The benchmarks are similar, but not identical, to the HESA UK Performance Indicator (UKPI) benchmarks.
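For the avoidance of doubt about what is shown and what is hidden, here’s an illustrative sketch of the suppression and flagging logic described above – the thresholds are placeholders for illustration, not the actual OfS values:

    def presentation_status(completion_n: int, outcomes_n: int,
                            go_response_rate: float,
                            min_n: int = 25, min_response: float = 0.5) -> dict:
        """Illustrative suppression/flagging for one provider-subject row.

        min_n and min_response are placeholder thresholds - the real values
        sit in the OfS methodology, not in this sketch.
        """
        if completion_n < min_n or outcomes_n < min_n:
            # low student volume in either base measure: numbers suppressed
            return {"suppressed": True, "low_response_flag": None}
        return {
            "suppressed": False,
            # flagged (and shown in the tooltips) when the Graduate Outcomes
            # survey response rate for the cohort is low
            "low_response_flag": go_response_rate < min_response,
        }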

This very much feels like an attempt to answer the “poor quality course” conundrum. For those just getting up to speed, Gavin Williamson is very clear that he doesn’t like such courses, but defining them is another matter entirely. We’ve had some fun having a try on Wonkhe.

It doesn’t actually provide an answer, of course, because it talks about subjects of study (at quite a high level) rather than actual courses. Subject coding is a whole world by itself, so we can’t reliably convince ourselves that Proceed has anything to say about a particular course unless we already know how it is coded. And don’t worry:

The OfS has no current plans to use this data for our regulatory purposes, although the indicators we will use in future to regulate quality and standards are subject to consultation.

A bear in the woods

That said, you’d have to assume that the scores will at least give both university governing bodies and the regulator a place to start looking when the hunt begins for providers to focus on once absolute baselines are set. Already Gavin Williamson has said:

This government has a manifesto commitment to tackle low quality higher education and drive up standards, and this data proves there is much more work to be done. Our landmark Skills Bill makes clear the power of the Office for Students to take much needed action in this area, including its ability to enforce minimum standards for universities on course completion rates and graduate outcomes, and I look forward to seeing the results of this work.

…and you’d have to assume that some of the more eye-widening numbers will be quoted from both the back and front benches.

In addition, the covering doc states that in discussions with careers advisers and with the OfS student panel, it was felt that the measure would be useful to prospective students and their advisers, though “appropriate presentation and contextual information would be important to improve understanding of the data.”

Now the data is published, it may be too late for that – we are surely only a few weeks away from a national newspaper tipping the subject and provider scores into a table for comparison purposes, with a covering quote from Chris McGovern on the “bottom thirty”.

Goodbye, UKPI

The old orthodoxy of higher education data (of which Proceed is a part) is that outputs are linked to inputs. Nowhere is this relationship made clearer than in the HESA UKPIs – data that explicitly takes into account the context in which it is generated.

On the HESA blog, Jonathan Waller tells the long story of UKPIs beautifully. Going right back to the early 1990s – and thus pre-devolution – these indicators exist to facilitate comparisons between providers. Once spanning domains as diverse as research performance and graduate destinations, the current crop covers widening participation performance and non-continuation.

Both of these data collections have contextual effects embedded. We know, for instance, that students from disadvantaged backgrounds are less likely to continue in higher education, and we know that the regional setting and the subject mix on offer from a provider act as constraints on who will be recruited. This is basic, basic stuff.

But the emerging approach (as seen in the skills bill) is that providers should be providing equivalent, excellent outcomes to all students – no matter who they are or where they come from. It’s a difficult proposition to argue against at face value, and because nobody ever deals with data-driven policy at anything other than the most superficial level, it now has every chance of being backed by legislation.

So the next set of UKPIs, as used in Proceed, will be the last. The interest simply isn’t there for UK-wide approaches to data (a crying shame), the steering group (UKPISG) hasn’t met since 2018, and attempts to secure buy-in from funders and regulators have failed. Some of the data will migrate into standard HESA datasets, but not the benchmarks – which will be scrapped in their current form. Development on an indicator linked to Graduate Outcomes will cease. All of which might cause Proceed some problems too.

In brief: against output equity

Education (or “skills”) is not a device to fix fundamental societal problems. It can help – education can be a wonderful, transformative experience – but it is not a solution in and of itself. This disconnect is exacerbated by a keenness to define what success looks like – a “good” job, a “good” salary, or even a “good” qualification is not a universal good. If we take the modern centre-left position on the primacy of self-actualisation and societal benefit, we situate these values within personal and community goals rather than economic ones.

Fundamentally – if you would feel personally satisfied as a professional artist, it is the role of civic structures to support you in achieving this goal. You’d be happier and more fulfilled, and society would benefit from your art and from your contentment. This position is damaged by the reality of low pay and poor working conditions.

The levelling up agenda very much takes the other perspective. Sure, personal fulfilment is important, but it is driven by meaningful work that has an economic value. Society also benefits from an economically useful working population, as it drives more money into the local area. It’s a funny world where the left position deals with individuals and the right deals with society and class, but bear with us – the left position sees individuals as dealing with existing disadvantages; the right position largely does not.

These two philosophies are visible in higher education data when we think about start points and endpoints. Either we take the starting point for an individual as defined and adjust the endpoint according to desire and aptitude, or we take the endpoint as a requirement and imagine (there is no other word) that the differences in starting point don’t exist.

Benchmarking corrects for the difference in starting points in a cohort of students. It’s based on characteristics and choices rather than being truly personalised, but it is a decent enough proxy. It means we can ask awkward questions, like what would happen if students enrolled at Oxford went to another provider: the “levelling up” position is that the students would do better because Oxford is better; the traditional position is that they would still have to deal with underlying inequalities through no fault of their own.
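For readers who haven’t met benchmarking before, here’s a minimal sketch of the UKPI-style calculation – indirect standardisation, in effect. The groups and numbers are invented for illustration; real benchmarking uses factors like entry qualifications, subject, and age:

    def benchmark(provider_mix: dict, sector_rates: dict) -> float:
        """A UKPI-style benchmark: the rate a provider would record if each
        of its student groups performed at the sector-wide rate for that group.
        """
        total = sum(provider_mix.values())
        return sum(n * sector_rates[g] for g, n in provider_mix.items()) / total

    # A provider recruiting mainly from groups with lower sector-wide rates
    # gets a lower benchmark - the context that a raw score ignores.
    mix = {"high_tariff": 200, "low_tariff": 800}      # headcounts
    rates = {"high_tariff": 0.85, "low_tariff": 0.65}  # sector-wide outcome rates
    print(f"{benchmark(mix, rates):.1%}")  # 69.0%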

Data futures

Whatever the future role of data in regulation, the demise of openly available and transparent benchmarks will hurt public understanding of the quality and nature of higher education. Absolute data is valuable, but benchmarks offer a clear and simple (for the user) way to compare providers or subject areas without having to mentally model the impact of student characteristics.

If you remove this way of accounting for demographic difference from data, this does not mean that the demographic differences cease to exist. It just makes it harder to compare providers fairly. And we’re not (yet) cynical enough to imagine that this is a policy goal.

The big question on Proceed is whether universities respond to a low score in a subject by backing out of the provision, or by seeking to improve student and careers support for students in it. That’s not straightforward – it’s partly also about student number volumes generally and the cost of running that subject specifically – but we’d expect those sorts of considerations and discussions to come up within academic and corporate governance meetings in the next cycle.

5 responses to “Proceed with caution – measuring positive outcomes needs context”

  1. Is there a convenient place where I can check which professions or job titles fall into the definition of “professional employment”? I quickly looked through the OfS website but I couldn’t find anything.

    Thanks!

  2. Hi Jason. It isn’t convenient, but if you google SOC2020 (standard occupational classification) then you’ll find what you’re looking for. This is a list of all known job titles, grouped into nine categories of differing skill levels. Any job grouped as SOC1-3 is counted as professional. There is a tool called CASCOT that helps with finding jobs, but I’m unsure if it’s been updated to the new SOC2020. Have fun!

    1. And thereby hangs a tale! Not all of the sub-groups in groups 1-3 of SOC2020 (or even SOC2010, which would have been used as the basis for this experimental “Proceed”) are “highly skilled”, but some sub-groups from elsewhere are.

      The “highly skilled” element is taken from reporting by employees (including the old DLHE returns!), so whether or not a job is “highly skilled” has less to do with whether the job itself requires a particular level of qualifications, and more to do with whether the people who do it tend to have them!

  3. Hi both.

    Can you tell me if this still applies:

    If a graduate goes travelling, cares for someone, or retires after graduation these are counted as a “positive” outcome.

    I can’t find anything like this in OfS regulatory documents or on their website.

    Thanks
