Jim is an Associate Editor at Wonkhe


David Kernohan is Deputy Editor of Wonkhe

It’s easy to miss something you’re not looking for – especially when you’re in the midst of cobbling together an institutional response to a global pandemic.

But something really important – “landmark”, even – has happened in the history of the English higher education regulator. Put simply, our old friend the B3 bear has been up in court and has scored a comprehensive victory – one with far-reaching ramifications for how regulation could pan out in the future.

You’ll know in general that the sector in England largely abandoned its old way of assessing “quality” in the middle of the last decade, and in its place we now have a regulator that carries out its activities on the basis of judgments about things that can be counted.

You’ll also recall that many of the original proposals a decade ago for things that might be counted – like contact hours, or the teaching qualifications of academic staff – fell by the wayside, but a few remain: employment outcomes, graduate salaries, and student satisfaction. And super-fans will know that during the initial provider registration process, OfS – having said “no” to some applications on the basis of outcomes, and having slapped conditions of registration and enhanced monitoring on others – eventually revealed the “baselines” it was working to.

The problem with blogs that update on the way OfS’ outcomes-based regulation is panning out is that a lot of exposition is required – it’s easy to bust our (unpublished) internal editorial guidelines on word count doing “the story so far” before we even get to the update. There’s a whole truckful of things on the site that provide a helpful way in – but if you need to be choosy, try here on provider registration refusals, and here for our introduction to what we’re affectionately calling the B3 bear.

Summer daze

Back in July 2019, OfS announced that it had refused the registration application of the Bloomsbury Institute (which ages ago was called the “London School of Business and Management”) – rendering it unable to recruit new (funded) students and obliging it to apply to OfS for special designation to “teach out” the rest. OfS refused registration for two reasons – a judgment against the E2 (management and governance) condition (“a lack of credibility in its student and financial forecasts”), but more importantly the B3 (quality) condition.

For clarity, OfS took the view that the Institute’s performance on continuation rate data (evidencing the proportion of students progressing from their first to their second year of study), and on rates of progression to graduate employment (in particular to professional and managerial jobs or postgraduate study), showed that the Institute had failed to demonstrate that it “delivers successful outcomes for all of its higher education students”, which are “recognised and valued by employers and/or enable further study for all of its students”.

In October it then emerged that the High Court had granted Bloomsbury permission for a judicial review of OfS’ decision, on a number of grounds. Bloomsbury argued that OfS had:

  • Ignored the Quality Assurance Agency’s (QAA) positive assessments and commendations of Bloomsbury Institute’s quality;
  • Failed to consider the characteristics of the Institute’s students, and in doing so contravened its own obligation to promote equality of opportunity in relation to participation in higher education;
  • Failed to consult when setting the baselines that it applied in deciding to refuse registration to a provider;
  • Discriminated against the Institute by holding it to identical standards as higher education providers with very different student bodies;
  • Acted contrary to its own guidance – with a disproportionate impact on black, Asian and minority ethnic students and mature students – in refusing to treat foundation year students differently to first year students in its assessment.

There’s now been that judicial review, and following lengthy hearings and extensive written submissions, Bloomsbury’s case has been dismissed on all grounds.

If you’re keen to know more about how OfS’ outcomes-based regulation works – or if you’re just bored and/or self-isolating – the full judgment should definitely be on your reading list, as it sets out pretty clearly the background, the legal battle and the rationale for the outcome. But if not, here are some edited highlights.

Outcomes

As a reminder, 30.2 per cent of students starting degree courses at Bloomsbury between 2014-15 and 2016-17 did not continue their studies into a second year. Of all degree students graduating from Bloomsbury in 2015-16 and 2016-17, only 32.7 per cent progressed into professional or managerial jobs, or went on to postgraduate study. These, argues OfS, are weak outcomes.

A big part of the case was Bloomsbury’s grumble that the outcomes “thresholds” and calculations used to determine whether a provider’s numbers were of “significant concern” were essentially a secret until after it had been assessed. We went into more detail about how the magic calculations worked here – and you could forgive Bloomsbury for making an assumption based on OfS’ “Regulatory Advice 1”, whose paragraph 8 said that OfS “does not intend to set out numerical performance targets” to meet.

But it turns out that OfS didn’t mean it wouldn’t use numerical performance targets at all, just that it wouldn’t set them out in advance – because whilst “performance against an indicator” did form a big part of the overall context for assessing risk, OfS only used them as part of a broad, flexible approach that took into account the context for an individual provider:

“For example, when monitoring continuation rates, a decrease for an individual provider could mean performance had worsened. However, levels of absolute performance need to be considered in the context of performance across the sector as a whole and might be considered to be of less concern in the wider context.”

Imagine, if you will, underperformance against a threshold value as a little red light that goes on to encourage the fine folk in the registration team to take a closer look. It’s not the basis by which registration decisions are made, but it’s a handy indicator of where something might be wrong.

Effectively, that’s why the judge accepted that the baseline thresholds that ended up being used weren’t revealed to Bloomsbury in advance – they were only a part of the judgment, and OfS had (sort of) said it would operate in this way in the consultation on its regulatory framework.

Context yes but benchmarks no

Another thrust of the Bloomsbury case was that OfS hadn’t taken into account its performance on widening participation (WP) and the sorts of students it was educating – the argument used by much of the college HE sector when critiquing the OfS approach. This is, of course, a bit different to the way something like the TEF works.

But all OfS had to do here was point out that it had consulted on this use of absolute minimum performance already. Paragraph 350 of the Regulatory Framework makes clear that:

“the OfS requires all providers to meet a minimum level of performance with respect to all the B conditions, including B3, rather than meeting ‘sector-adjusted benchmarks’ which vary to take into account a variety of factors such as student demographics and subject areas.”

It goes on:

“In other words, there will be a single minimum level of performance across the whole of the higher education sector… notwithstanding this ‘one-size fits all’ approach to minimum levels of performance, the context in which the provider is operating will be taken into account.”

OfS’ argument here is that benchmarking the minimum performance would allow poor outcomes to be justified by comparison with other providers whose performance was also poor – risking that some students, particularly those from disadvantaged backgrounds, would inevitably experience worse outcomes. The judge translates this as “such an approach would have authorised lower standards for higher education providers which had a large proportion of disadvantaged students”.

The context bit then applies after the stats – for example, if there was a particular problem with a certain course and the provider was planning to close the course, OfS argued that this would be taken into account.

Eventually we learn that the confidential, internal metrics concern thresholds in OfS’ “Decision-Making Guidance” were actually set in May 2018 – which does raise the question of why it took until October 2019 for OfS to publish them. Leaving aside whether it was lawful to withhold them, wouldn’t publishing them have saved a lot of heartache and expense?

Umming and aahing

Then we get to what we might call the “fascinating behind the scenes” bit of the story. By September 2018 (when everyone was getting antsy about registration delays) Bloomsbury was asking for updates – and it turns out that in October a draft “Decision Letter” was prepared by OfS assessors that said it had decided to register Bloomsbury, but to impose a “specific ongoing condition of registration” because its assessment of outcomes data suggested that Bloomsbury was at increasing risk of breaching Condition B3.

It turns out the letter was never finalised and not sent – and in fact, it was one of a number of draft assessments that were prepared so that OfS’s Provider Risk Committee (“PRC”) could consider some “generic issues” that were cropping up in the assessments.

And this bit deserves the full detail. Internal OfS assessors at that stage noted that only two years’ worth of data were available, and that the provider had recently made various changes. So the recommendation was that the evidence did not “conclusively demonstrate” that Bloomsbury was not delivering successful outcomes for all of its current students – and therefore that OfS should decide that Bloomsbury satisfied all of the conditions.

This recommendation went to PRC on 19 November 2018 – and during the meeting, as well as considering Bloomsbury’s application, it held a “general discussion” on the further education sector. It agreed that OfS should “remain mindful” that students were unlikely to be receiving a high quality education experience if the provider’s outcomes fell below OfS’ baselines, and resolved that the PRC was minded to adopt a “strict line” towards those providers which did not satisfy the baselines for B3.

And there’s the thing. There are interesting arguments about the extent to which you can meaningfully hold providers responsible for, say, continuation or employment outcomes – and at least some evidence to suggest that a provider might have the same “quality” of outputs, teaching or student support as others, but that the type of student it recruits might mean worse outcomes. Just the other day I was chatting to a provider on the bubble of the baselines, worried that too many degree course students were leaving mid-year for all sorts of reasons entirely unrelated to the provider’s efforts – in many cases positive reasons – but whose leaving would look terrible for B3.

But to whatever extent those arguments hold water, they didn’t (and still don’t) with OfS’ PRC – so the baselines, judged on outcomes, are very much here to stay.

And this, by the way, is what I mean when I say that the “interests” of students aren’t always as clear cut as OfS pretends. For all the trumpeting about OfS and student engagement, I’ll bet the PRC didn’t chat to OfS’ student panel, the affected students at Bloomsbury, or – more generally – the sorts of students that it has tended to recruit, to ask them about this “strict line”, why students aren’t continuing, or about the impacts of the decision on them.

See if it sticks

Bloomsbury tried plenty of other arguments – in fact a full read suggests it threw pretty much everything at the wall to see what stuck. But one argument that stood out was its attempt to claim that because QAA had said it was OK – and because QAA was the “Designated Quality Body” – QAA’s view should have held.

The judge was having none of that. Noting that QAA had no experience in judging metrics-based “outcomes”, the court accepted OfS’ argument that “the Regulatory Framework made clear that the OfS would work with the QAA in relation to Conditions B1, B2, B4 and B5, but not in relation to Condition B3” – carefully differentiating between standards (QAA) and quality.

Interestingly, unless I’ve missed it, the thing Bloomsbury didn’t try was to argue that there may well be plenty of large providers on the register where individual subject areas have as many students as, but worse outcomes than, Bloomsbury. The question of consistency of decision making – and the divide between provider level and subject level metrics – isn’t really in the judgment at all.

Blame it on the baselines

So what happens next? In this document on the baselines, OfS gave us a heads up that performance indicators for providers would be updated annually “in around March” to incorporate the most recent year of student data once it becomes available (although it says “there is not a published timeline for this”). And the update becomes interesting for all sorts of reasons.

First, there are still some providers with registration applications pending. Then there are some new providers that made it on to the register who, as of this month, will have enough data banked with HESA and HEFCE to allow OfS to make a “sound” B3 baseline judgment – which could push them under, and back off the register (putting their student protection plan instantly to the test).

Then there’s Gavin Williamson’s letter to OfS from last September when he said:

“Where… there are unacceptable levels of drop-out rates or failures to equip students with qualifications that are recognised and valued by employers… we fully support the OfS in using the full range of monitoring and enforcement powers it now has at its disposal.”

He also “fully supported” OfS’ intention to “revisit the minimum baselines” used when making regulatory judgements about student outcomes, exploring where “current baseline requirements might be raised” and wanted to see “even more rigorous and demanding quality requirements” to apply to providers in the future.

And in response in its Christmas annual review, OfS said:

“We will consult on raising these baselines so that they are more demanding, and on using our regulatory powers to require providers to improve pockets of weak provision.”

Raise it on the baseline

There are a number of ways OfS could raise the baseline. Right now, for each level of study and each mode of participation, OfS sets two percentages that generate three categories (sketched in code after the list below).

  • One is called “not of concern”, where the proportion of the provider’s students who experience a particular outcome is higher than X. Let’s call that “green”.
  • Another is called “of concern”, where the proportion of the provider’s students who experience the outcome is between X and Y – “amber”, if you will.
  • “Significant concern” is where the proportion of the provider’s students who experience the outcome is less than Y. That’s a flashing red.
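For the avoidance of doubt, here’s a minimal sketch of that traffic-light logic in Python. The X and Y values are invented placeholders – OfS sets the real thresholds per metric, level and mode of study – and the behaviour exactly on the boundary is our assumption, not something the judgment spells out:

```python
# A minimal sketch of the traffic-light logic described above.
# X and Y are illustrative placeholders, not OfS' real thresholds,
# and the boundary behaviour (exactly X or exactly Y) is our guess.

def classify_outcome(rate: float, x: float, y: float) -> str:
    """Map a provider's outcome rate (0-100) to an OfS-style category."""
    assert x >= y, "the 'not of concern' threshold must sit above the other"
    if rate > x:
        return "green"   # not of concern
    if rate >= y:
        return "amber"   # of concern
    return "red"         # of significant concern

# Illustrative only: suppose X = 85 and Y = 75 for full-time continuation
print(classify_outcome(82.3, x=85.0, y=75.0))  # -> amber
print(classify_outcome(71.0, x=85.0, y=75.0))  # -> red
```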

As a reminder, here’s what that looks like for the “continuation” metric (for full-time students).

So one thing that OfS could do is just to raise the percentages. Even if it didn’t, worsening sector-wide performance on a given metric would have an impact. But there are other options.

In round one, having produced split metrics by student characteristics (age, POLAR, IMD quintile, ethnicity, disability, sex and domicile), OfS decided that if 75 per cent or more of a provider’s students fell into demographic groups with at least one outcome of significant concern (ie “red”), then Condition B3 would be “unlikely to be satisfied”. It could have said that no students should fall into a demographic group experiencing “red” outcomes, but it “did not consider it proportionate to set the threshold at 0 per cent” as this was likely to have resulted in a “very large proportion of the sector not satisfying the baselines”. Hence one way of raising the bar would be to pick a point somewhere in between.
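To make the mechanics concrete, here’s a hedged sketch of how a test like that might be computed. The groups, the student records and the way students map onto groups are all invented for illustration – the real calculation lives in OfS’ internal decision-making guidance:

```python
# A toy version of the demographic-split test described above. Every
# name and number here is invented; only the 75 per cent rule is OfS'.

# Demographic groups whose split metrics showed at least one "red"
# (significant concern) outcome - hypothetical
red_groups = {"mature", "IMD quintile 1"}

# Hypothetical student records: the groups each student falls into
students = [
    {"mature", "IMD quintile 1"},
    {"young", "IMD quintile 4"},
    {"mature", "POLAR quintile 3"},
    {"young", "IMD quintile 1"},
]

# Proportion of students in at least one group with a "red" outcome
affected = sum(1 for groups in students if groups & red_groups)
share = affected / len(students)

# OfS' round-one rule: 75 per cent or more, and Condition B3 is
# "unlikely to be satisfied"
verdict = "unlikely to be satisfied" if share >= 0.75 else "not triggered"
print(f"{share:.0%} of students affected -> B3 {verdict}")
```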

Once the baselines are (re)set, we’re assuming that a provider already on the register whose outcomes would have caused a rejection had it been trying to get on will be chucked off – notwithstanding the “contextual factors” bit.

If you’ve followed this far, you might have started to sense that this feels unfair if OfS knows everyone’s marks and then sets the grade boundaries afterwards – but that ship sailed in the provider registration process. And if you’re a lateral thinker, you might have spotted that this might be a good way of controlling student numbers, if you wanted to. But that would depend on the result looking right, wouldn’t it? After all, you wouldn’t want the outcome to hit the “north”. Or WP. Or “provider diversity”.

You could even do this at subject level within a provider, if you consulted on doing so. OfS told us that “we have committed to review our approach to the way we assess student outcomes and this will be subject to consultation”.

Does my refused registration look big in this

So – we thought we’d at least have a go at looking at who’s close to coming a cropper in relation to the B3 bear. But we hit several brick walls.

For now, we have only looked at the continuation indicator – as this appears to be the one that has most often caused providers to fall foul of the regulator so far (there are also baselines for completion, degree outcome gaps, and graduate employment). Trouble is, we don’t have access to the characteristics splits referred to above, so we can’t do the whizzy thing with the splits described in the methodology.

OfS indicators are constructed to show a provider’s performance in aggregate over a time series – covering up to five years for which indicators could be derived, using the most recent eight years of student data returns. Replicating that would be a job too complex even for DK, who works with the public data that mere mortals outside of OfS would use to understand continuation rates within providers.

And as a reminder – for this exercise OfS uses students registered at a provider, not just taught there, and all students (UK, EU and non-EU, undergraduates and postgraduates), unlike the more restricted student set in the TEF and the UK performance indicators.
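To give a feel for what “in aggregate, over a time series” means in practice, here’s a rough sketch – the cohort sizes are invented, and the real derivation has many more wrinkles (trackability, data availability and so on):

```python
# A rough sketch of pooling a continuation indicator over several
# cohort years, per the description above. All numbers are invented.

# (cohort year, students in the denominator, students who continued)
cohorts = [
    ("2014-15", 800, 610),
    ("2015-16", 850, 640),
    ("2016-17", 900, 700),
]

# Pool the cohorts rather than averaging the yearly rates, so that
# larger cohorts carry proportionately more weight
denominator = sum(n for _, n, _ in cohorts)
continuing = sum(c for _, _, c in cohorts)
print(f"Aggregate continuation rate: {continuing / denominator:.1%}")
# -> Aggregate continuation rate: 76.5%
```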

So the tableau below couldn’t be more incomplete if it tried – but at least you can play with the slider on a slice of the outcomes data and see who falls below 75 per cent, and who might fall below, say, 80 or 85 per cent if OfS “raised the regulatory baseline(s)”.
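And if you’d like the gist of that slider exercise without the interactive version, here’s a toy equivalent – the provider names and rates are entirely made up:

```python
# Which (invented) providers dip below each candidate baseline?
rates = {"Provider A": 73.1, "Provider B": 78.4, "Provider C": 86.0}

for baseline in (75.0, 80.0, 85.0):
    below = sorted(p for p, r in rates.items() if r < baseline)
    print(f"Below {baseline:.0f}%: {below or 'nobody'}")
```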

Where’s the data?

[Interactive visualisation: continuation rates by provider]

You may, at this point, be idly wondering whether the Office for Students – as a producer of national and official statistics, with the badge and the statutory instrument and everything – should perhaps see its way to publishing this data. Quite apart from the interest we have in transparency in regulation, it would also be helpful (for instance) in understanding the differences between the experiences of home and international students. Given that OfS is quite happy to publish nonsense about “unexplained” first class degrees, it feels like a curious omission. OfS said questions over whether it would publish the metrics by mode/level/characteristic split by provider “will be part of [its forthcoming] consultation”.

The arguments that Bloomsbury Institute made – effectively that it was unable to understand in advance that its provision would struggle to pass muster – could equally be levelled by anyone unable to see the data that would be used. That group would include many smaller providers without the in-house data capacity (or even the Heidi Plus subscription) to work it out themselves. And crucially, not being able to see everyone’s data stops us from being able to assess whether OfS is being consistent in its decisions. But maybe that’s the point.

Corks a poppin

All of this, by the way, is why we think those in the sector popping corks about the “delay” to subject-TEF need to think hard. Remember what those board papers said. While three different models of subject level ratings have been trialled, “it does not yet enable robust and credible ratings to be produced at subject level”. But “from the time when the [new] proposed approach [to institution TEF] is published for consultation”, OfS will “supplement this” by publishing metrics at subject level “as they become available”. And given the evidence they already have on “variability between subjects”, there is an imperative to “demonstrate subject differences” within the TEF metrics, while recognising the constraints on producing subject level ratings in the next phase.

That means, folks, that soon enough individual subjects at universities – some holding more students than many of the providers below the B3 thresholds – will be exposed as flashing “red”: a hard position for OfS to maintain long term. Either OfS will use the powers we think it already has to refuse funding for those subjects in those providers, or at the very least it might threaten to do so (and consult on doing so) as performance at subject level becomes exposed for the first time. In any event, not only is OfS’ B3 bear here to stay – we expect it will be popping up with much more regularity in the months and years to come.
