David Kernohan is Deputy Editor of Wonkhe

What is policy, after all, but a series of discussions about data definitions?

The Consultation on constructing student outcome and experience indicators sits at the heart of the two parallel exercises covering TEF and B3 baselines, and related work on access and participation. In essence:

  • Compliance with registration condition B3 will draw on student outcomes indicators (continuation, completion, progression)
  • TEF assessments will draw on student outcomes and experience indicators (adding the National Student Survey – NSS – to the mix)
  • Access and participation regulation will use indicators about access to higher education, and student outcomes (access, continuation, degree outcomes, progression)

The intention is that these indicators will be defined consistently, making for a less confusing and more comprehensible system. The same data on student outcomes, for example, could also be used when OfS reports on its own key performance measures, for risk-based regulatory monitoring, for any other kind of sector-level analysis, and for information published to students via Discover Uni.

Abandon hope all ye who enter here

So far, you may think, so good. However, this approach turns something that should be done in a calm and considered way, drawing on advice from relevant stakeholders – data definitions – into something with a very visible political implication, which would be seen in attempts to maintain a UK-wide system of data collection and use, and in relationships with the independent Designated Data Body (currently HESA).

OfS has tried very hard to make this all sound dull and complex – we get five early paragraphs hoisting the equivalent of a “here be dragons” sign in our path:

This is a technical consultation, making proposals about the detailed construction and implementation of approaches to the analysis of individualised student data returns submitted by higher education providers, and the application and interpretation of advanced statistical methods.

But do not abandon hope, gentle reader, because it really isn’t as hard as all that. It’s just long. Really, really long.

In a nutshell

Right at the top, for instance, we get broad definitions of each of the key indicators as they would be constructed:

  • Continuation – the percentage of students continuing in their study of a higher education qualification (or having completed it) one year and 15 days after they started the course (two years and 15 days for part time) – see the sketch after this list
  • Completion – the percentage of students who complete a higher education qualification (there’s a chance to comment on whether this should be a cohort-tracking measure, or a compound indicator similar to the current HESA UKPI projections for continuation)
  • Progression – drawn from Graduate Outcomes survey data: progression to “managerial” or “professional” employment, or to further study, 15 months after a qualification has been awarded.
  • Student experience – rates of agreement in the National Student Survey, at scale rather than question level.
  • Access, and degree outcomes – no change to current measures.
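For the continuation measure in particular, the moving parts are just a start date, a mode of study, and a census point. Here’s a minimal sketch of how that census date might be derived – the function name and the simplified date arithmetic are my own illustration, not anything specified in the consultation.

```python
from datetime import date, timedelta

def continuation_census_date(start: date, part_time: bool = False) -> date:
    """Date at which a student must still be studying (or have qualified)
    to count as continuing: one year and 15 days after the course start
    for full-time students, two years and 15 days for part-time."""
    years = 2 if part_time else 1
    # Illustrative only - ignores the 29 February edge case
    return date(start.year + years, start.month, start.day) + timedelta(days=15)

# e.g. a full-time student starting on 26 September 2022 is checked on 11 October 2023
print(continuation_census_date(date(2022, 9, 26)))  # 2023-10-11
```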

Who’s in charge?

We learn that OfS (not HESA) will centrally derive these measures from raw data, on the same basis, for all providers. The regulator takes the view that:

it is not possible to rely on UK Performance Indicators or other existing measures published by HESA or the ESFA, because none of those measures (singularly or collectively) use definitions which are consistent with the OfS’s proposed policy priorities for assessment of student outcomes, nor provide complete and consistent coverage of providers registered with the OfS.

In other words, because OfS has chosen to design and use definitions that are not commonly in use elsewhere – and because it hasn’t consulted with other data users beforehand – it needs to construct these indicators itself rather than use a dedicated and respected designated data body to do this independently.

The trend in wider policy circles has been towards the use of unimpeachably independent data sources, most notably the Office for National Statistics and the Office for Budget Responsibility. The Department for Work and Pensions doesn’t get to muck about with the definition of “unemployment” for political gain, and the Treasury can’t randomly invent a new measure of inflation to flatter economic activity it likes and downplay activity that it does not. It’s not made clear why OfS thinks it can do otherwise with higher education data.

There also seems to be an appetite within OfS for producing classifications. There are times when custom classifications are unavoidable, but for the likely use cases in higher education regulation – and, indeed, in the coming world of a single tertiary sector – it would be a much better look if these slices were derived independently.

Ones and zeros, all the way down

A big feature of the B3 consultation is the conversion of detailed measures into binaries – reducing proportions to a simple “positive” and “not positive” judgement. There is a welcome presumption that where it is not clear into which of these categories a data point falls, it should be considered as positive or neutral – more nuance would clog the pipes of regulation and result in different interpretations for different providers. And regulatory decisions will not be made on the basis of data alone – data flags areas of concern, but interventions are designed based on the context of each individual provider – which is as close as you can get to having contextual benchmarks while still looking like you don’t have to.
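As a rough illustration of that logic – with outcome labels that are entirely my own, not OfS’s – the reduction looks something like this:

```python
# A minimal sketch of the binary reduction described above. The outcome labels
# and the "treat ambiguity as positive or neutral" rule are illustrative only.

POSITIVE = {"continued", "completed", "professional_employment", "further_study"}
NEUTRAL = {"transferred_to_another_provider", "unclear"}  # excluded from the calculation

def classify(outcome: str) -> str:
    """Collapse a detailed outcome into positive / neutral / not positive."""
    if outcome in POSITIVE:
        return "positive"
    if outcome in NEUTRAL:
        return "neutral"
    return "not positive"

def indicator_rate(outcomes: list[str]) -> float:
    """Share of positive outcomes, with neutral cases removed from the denominator."""
    counted = [classify(o) for o in outcomes if classify(o) != "neutral"]
    return sum(1 for c in counted if c == "positive") / len(counted)

print(indicator_rate(["continued", "withdrew", "transferred_to_another_provider", "continued"]))
# 0.666... - the transfer is neither rewarded nor penalised
```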

But in rounding off the data in the indicators themselves in this way we are left with a few intriguing anomalies. Such as:

  • Any level of further study will be treated as a positive outcome (including study below the level of the previous qualification)
  • Counting students who do not complete in the expected time, but are actively studying, as having completed.
  • Counting students who complete a qualification different from the one they started as having completed.
  • Treating the outcomes of students who have transferred to a different provider as neutral (and thus removing them from the continuation calculation entirely)

Here I’m not intending to make a value judgement as to whether these are, in and of themselves, bad decisions – I am flagging them as “interesting” when compared to the general direction of travel for OfS. Certainly, deliberately choosing to lose fidelity in understanding further study, as compared to current practice, has quite the vibe.

If you loved the Access and Participation dashboard (and I know there must have been a few of you) you will be delighted to learn that both TEF and B3 data will feature on similar delights of data visualisation in future (though there will still be .csv and .xlsx files available). What’s striking here is that both dashboards will feature the same data on outcomes, derived in the same way, but only the TEF version will include benchmarks, and only the B3 data will be available at the full level of granularity.

These dashboards and datasets will be updated annually, with providers getting more support on submission and losing the right to ask that data is not published.

It’s like JACS, but for student populations

I do love a common overarching hierarchy, and it looks like we’ll be getting one for future OfS data releases. At the top level sit four populations:

  • Registered population
  • Taught population
  • Taught or Registered (ToR) population
  • Partnership population

You’ll recognise the first two from NSS releases, the third could loosely be defined as “all students affected by decisions at a provider” (taught, registered, subcontracted in, subcontracted out), and the fourth covers subcontracted out and validated provision only. This will allow different regulatory functions to look at the population of most relevance to that purpose – and adds a welcome eye on the experiences of partnership students, which are often lost in mainstream data.

Beneath populations sit indicators, as trailed in the phase 1 consultation on quality and standards, and broadly equivalent to the list under “in a nutshell” above. Indicators would only be available for each population, mode of study, and level of study – there would be no “overall” value for indicators.

And beneath indicators sit splits, familiar from TEF data. These can include intersections (as seen in the access and participation data) and are designed to be used primarily to ensure “equality of opportunity”. Splits, I should also add, are the kind of considerations that are currently used in the construction of benchmarks.

As a “for instance”, the indicators used for B3 would examine:

  • Taught or registered, taught, or partnership populations
  • Continuation, completion, or progression indicators over full-time, part-time, or apprenticeship modes and what looks like all levels of study.
  • Time series, subject, characteristics, course type, and partnership arrangements as univariate splits, with characteristics showing one of age, domicile, disability, FSM, ethnicity, ABCS quintile, sex, IMD quintile, and geography of employment quintile.

If your eyes glazed over there, don’t worry. It just means that if you want to know about the progression of full-time undergraduate taught population students by domicile at a given provider, you are in luck. If you wanted to know about full-time undergraduate taught population students domiciled in the UK studying engineering at a given provider, you are out of luck. And so forth.
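To put the same point another way, the data menu is a three-level hierarchy with only one split applied at a time. A sketch of that structure – with names that are mine, not the OfS data model – might look like this:

```python
# A rough sketch of the hierarchy as described: populations at the top, indicators
# beneath them, and a single (univariate) split beneath that. Illustrative only.

B3_MENU = {
    "populations": {"taught_or_registered", "taught", "partnership"},
    "indicators": {"continuation", "completion", "progression"},
    "modes": {"full_time", "part_time", "apprenticeship"},
    "splits": {"time_series", "subject", "characteristic", "course_type", "partnership"},
}

def available(population: str, indicator: str, mode: str, splits: tuple) -> bool:
    """True if the combination is on the (sketched) B3 menu - at most one split allowed."""
    return (
        population in B3_MENU["populations"]
        and indicator in B3_MENU["indicators"]
        and mode in B3_MENU["modes"]
        and len(splits) <= 1
        and all(s in B3_MENU["splits"] for s in splits)
    )

print(available("taught", "progression", "full_time", ("characteristic",)))            # True - e.g. a domicile split
print(available("taught", "progression", "full_time", ("characteristic", "subject")))  # False - no intersections here
```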

Implicit in this is that other regulatory uses of this data may mean that OfS selects other options from the OfS data menu – the information you seek could be in another castle, as it were. TEF, it appears, is only concerned with the taught or registered population and undergraduate study, while the access and participation dashboard is only interested in the registered population, though it also adds additional indicators by year aggregation.

In attempting to standardise data definitions, OfS has only made it more apparent that there have been some fairly arbitrary choices about data use. Do students at partner institutions represent a different access and participation profile? It’s a fair question (some would say a pressing question), but the answer has been designed out of the data that is used in this area of regulation.

It gets worse

So, there are some issues spotted already, but it’s worth it in the race to standardised and comprehensible data use, right? Not so fast. Because the Access and Participation dataset has already been in use (in those five-year plans that Michelle Donelan unceremoniously axed a few weeks back), the data in that collection will gradually transition to the refined definitions, while the new B3 and TEF collections will go straight to the new definitions. Meaning that data use will not be standardised for a while yet.

Parts of this are unavoidable – with DLHE no longer a thing, the transition to Graduate Outcomes needs to be managed. But the upshot is that you will be writing your A&P plans based on data with one definition, and seeing performance against that plan measured against another.

And here’s another one. You know how the Lifelong Loan Entitlement is bringing modular approaches to funding qualifications at level four and above? The design of the student outcome and experience indicators (those used in the new TEF, in other words) explicitly excludes students studying a module (rather than a course) at a provider for the foreseeable future. I can see why this has been done, but I can’t help but think there is another consultation on the way that might make OfS rethink this approach. The sheer scale and scope of the set of decisions within this consultation mean that few would be keen to consider it twice in three years.

It’s enough to make one consider the wisdom of introducing a set of definitions underpinning a data-driven system of regulation in the run-up to a complete rethink of funding and participation modes. Instead we get the half-hearted promise of another data definition consultation on higher technical qualifications, and one on modular provision of higher education.

More on mobility and retention

If you’re a UK-registered student, but you are studying overseas, the Office for Students doesn’t care about collecting data on your outcomes or experience (good news, though: the QAA does). With the expansion of offshore and remote provision I do feel like we should know at least something about what happens to these students (who pay fees to English providers to study qualifications awarded by English providers), but we’d have to look outside of OfS publications (maybe at the HESA Aggregate Offshore Record) to do so.

And you may recall my lasting bugbear about students who leave their provider within two weeks. I can’t blame OfS for this, but the presumption that students who leave their course within 14 days of commencement shouldn’t count towards statistics – nobody (other than those absolute beasts at ESFA) even collects data on them – still worries me a little.

The old HESA continuation KPI actually removed students who left within 50 days – the OfS presumption of a two-week cut-off is a departure (even if it is based on the largely nonsensical idea that “cooling off” periods are standard in HE and that loan liability starts after 14 days – on the latter, keen Wonkhe readers will know that this is quite variable and can refer back to a learning aim start date or the date that actual study commenced). There’s a lost opportunity here to synchronise continuation data with SLC returns – showing, for the majority of home students (or nearly all, if we ever get round to offering Sharia-compliant loans), attrition after the point at which they become liable for fees.
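In case it helps, here is a toy illustration of what that cut-off actually does – students who leave within the grace period simply never enter the continuation population. The 14- and 50-day figures come from the text above; everything else is invented.

```python
def counted_in_continuation(days_enrolled: int, cutoff_days: int = 14) -> bool:
    """False if the student left within the grace period, so never appears in the statistics."""
    return days_enrolled > cutoff_days

leavers_days_enrolled = [5, 13, 20, 60, 200]
print([d for d in leavers_days_enrolled if counted_in_continuation(d)])                  # proposed cut-off: [20, 60, 200]
print([d for d in leavers_days_enrolled if counted_in_continuation(d, cutoff_days=50)])  # old KPI cut-off: [60, 200]
```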

This 14 days thing comes back in another curious decision – OfS will identify an entrant for a given year as someone starting a course between 17 July and the following 16 July. This differs from the current HESA data collection period (1 August to 31 July, since you ask), and the stated reason is to allow for that 14-day grace period – if you started on 16 July and were still studying on 1 August, you are definitely a student. Something that has never been a problem in the history of higher education data has, in other words, now become a problem.
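A quick sketch of the difference between the two year boundaries – the function names are mine, and labelling a year by the calendar year in which it starts is just for illustration:

```python
# Dates come from the consultation as described above; the functions are illustrative.
from datetime import date

def hesa_reporting_year(start: date) -> int:
    """Current HESA collection year runs 1 August to 31 July."""
    return start.year if start.month >= 8 else start.year - 1

def ofs_entrant_year(start: date) -> int:
    """Proposed OfS entrant year runs 17 July to the following 16 July."""
    return start.year if (start.month, start.day) >= (7, 17) else start.year - 1

# A student starting on 20 July 2022 sits in the 2021-22 HESA year
# but the 2022-23 OfS entrant cohort.
print(hesa_reporting_year(date(2022, 7, 20)), ofs_entrant_year(date(2022, 7, 20)))  # 2021 2022
```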

Casually adding data burden like a boss

We learn that OfS has decided that we need to collect degree classification information for things that are not first degrees, even though the use of degree classification information as an indicator feels like it has been deprecated. This probably makes sense in the light of the oncoming LLE that is ignored elsewhere, but it feels like the kind of thing that HESA should decide on based on the needs of all data users.

On the other hand, the fact that people who study multiple higher education qualifications sequentially (students on top-up courses, for instance) will appear multiple times in progression data – because they need to be surveyed for Graduate Outcomes multiple times – is merrily hand-waved away. We want to gather data on all qualifications (again), but this time there is an impact on the quality of progression data.

Meanwhile intercalating students – like medical students who might move to a new provider for a year to study a standalone qualification – appear in the entry cohort for that provider. Why do that, rather than using GMC data collected to address this very issue and concentrating on the main registration and qualification aim? No idea.

Again and again we see the limitations of existing data in these decisions, and the OfS presumption that it would rather add data burden than use someone else’s definitions (or, gods forbid, agree definitions with others) feels rather old-fashioned here.

Cohort or compound

Let’s return to the vexed issue of completion measures. Intended as a complement to continuation data, showing where a student has completed a course, this is currently collected in HESA data as “qualifiers” – students who have gained a qualification in the year in question. It’s slightly problematic in that we don’t know if the achieved qualification was the one they were aiming for (they wanted a degree but gained an HND as an exit qualification instead), but it is a well understood measure.

The gold-standard way to address this problem would be the first of OfS’s alternatives – tracking individual students from entry to completion on their chosen course. This is a cohort measure, but one with a severe lag – we wouldn’t get the first data on students who started last September until 2024 at the earliest. The platinum-standard (see, there is another metal!) method would be to track credit accumulation. We can’t do this because of data limitations that feel rather like the kind of thing we need to fix in the next couple of years.

The other way, as noted, owes a methodological debt to the completion projections in the old HESA UKPI (although OfS noisily denies this – the difference appears to be that you look over six cohorts and subtract proportional non-continuation from 100, rather than using the most recent four years of data to come up with a historic model). Using what we know about rates of withdrawal at each point in the student journey to calculate the likelihood of students completing the qualification they started presents one clear upside: more timely data. The downside is that we have invented this data based on the experiences of lots of different groups of students, making it very difficult to use it as a means of regulation – one, for instance, that can see the impact of provider attempts to improve completion.
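For the avoidance of doubt about what a compound indicator is actually doing, here is a back-of-envelope sketch of the general idea as I read it – stitching together observed non-continuation at each stage of the course from different recent cohorts – rather than the consultation’s exact formula.

```python
# A back-of-envelope sketch of a compound completion indicator: subtract
# per-stage non-continuation (each figure observed from a different recent
# cohort) from 100. Illustrative only, not the consultation's exact method.

def compound_completion(stage_noncontinuation_pct: list[float]) -> float:
    """Estimate a completion rate from per-stage non-continuation percentages."""
    return 100.0 - sum(stage_noncontinuation_pct)

# Example: 8% leave in year one, 4% in year two, 2% in year three
# (each figure borrowed from a different cohort's most recent data).
print(compound_completion([8.0, 4.0, 2.0]))  # 86.0
```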

The UKPI completion rate method was good enough for Proceed, but not for regulation – whichever method is chosen will probably replace the UKPI in the Secretary of State’s measure of choice. We’re clearly being steered towards the compound indicator despite the flaws – for me, if you want to measure completion you need to actually measure completion, however difficult that makes it to build toy metrics to impress ministers. But that’s just me.

My definition is this

One of the nice things about Proceed is a sensible definition of a positive graduate outcome. A green tick in Proceed comes with news that a graduate is travelling, caring, retiring, studying, or working in a professional job. This stays – and I welcome it without caveat, though the official reasoning seems to hint that these are merely negative outcomes that a provider has no control over. The activities have to be listed as a “main activity” too, so gods forbid you are working all hours in a non-graduate job to support your ability to be a carer.

We’re at the very top level of SOC (major groups 1-3) too, so we’re not exactly claiming fidelity to the idea of a “graduate job” here. There are many examples of roles primarily undertaken by graduates that are not in groups 1-3, so perhaps a bit more fidelity could be useful here – OfS are aware of this, but because no possible approach is absolutely perfect(!) we are sticking with what we have: a graduate can go into their desired job, one that the course directly leads to, and have it not count as a positive outcome.

The “skills level” thing is faintly suggested as an alternative – it is arguably a better one. Even my old examples of printmaking (and footwear production) crop up as graduate jobs that graduates do that are somehow not “desirable”. Value judgements in data definition? This feels like policy to me.

Characteristic splits

Grouping data on personal characteristics is – arguably – an act of violence. There is a literature that suggests that such groupings suppress the essentially fluid nature of personhood, and attribute political and personal implications that may not accord with a person’s own perceptions of their identity.

I mention this because OfS intend to adhere to the principle that a split “should provide meaningful information that is capable of supporting reliable interpretations”, putting a use-case rather than a descriptive-case quality bar on the definition of these categories. There’s a risk here that a defined group becomes defined by disparities in higher education performance and attainment for reasons other than those which could be reasonably expected to cause such a disparity. It may be that white working-class boys have a lower rate of access to university-level study than any other group, but it is arguable whether this is because of things providers might be getting wrong in recruitment or offer making, or whether this is due to wider societal norms that suggest white working-class boys would be more likely to choose other post-compulsory destinations.

As OfS reserves the right to come up with any splits that it fancies for regulatory purposes, we do need to be alive to the possibility of p-hacking here.

Meanwhile, we know from survey data that trans and non-binary students have a very different experience of higher education to those who identify as male or female. Here’s what OfS says about them (though, to be clear, it does earlier on “recognise the importance of considering all categories of sex”, but notes low proportions in current datasets):

We also acknowledge that values of ‘other’ may be returned to reflect characteristics of a student’s biological sex, but previous guidance associated with collection of the underlying data item also permits a degree of ambiguity for use of the value to reflect a student’s gender identity. As a result, we propose to define the split indicator to show student outcomes and experiences of male and female students.

There’s still more

On and on this very technical and very dense consultation goes. In a sane world there would not be a presumption that people with the skills to parse these important decisions have the time to respond by 17 March, especially if you factor in two other major consultations from OfS, a consultation on the LLE from DfE, and whatever is going on with Data Futures this week. This is a process that should have started back in 2018 with a proper consultation on underpinning principles, and proceeded through partnership and close collaboration with domain experts in each data area.

It’s a ridiculous way to make policy – everything from the construction of split metrics to the occupation coding to benchmarking methodology is rammed in here, while the scaffolding questions are sparse and general. The likelihood of any one response engaging meaningfully with the entirety of this material (alongside parallel and expected policy consultations) is limited. Got a view on why student experience benchmarks for part-time students shouldn’t include information on sex? It’s right there on page 133 of 193 (in the table above paragraph 433).

Why are benchmarks and statistical reliability only a thing in TEF? We never really get a response on that. Why is 23 the magic number of students for data suppression – why not 33? Why are the principles for benchmarking factor selections guiding rather than binding, if they are intended to build public trust (Annex D, my friend, page 169)?

There’s enough in here for a whole series of consultations – as would be expected if you are basically designing a higher education data system from the ground up for purposes that no-one is particularly happy about (and you just need to look at the phase two resources to see that). As it stands, OfS will get what it wants again and we’ll deal with a similar consultation in two years when it all falls apart ahead of the Lifelong Loan Entitlement. Happy days.
