
Are student number controls coming back to English higher education?

In a long read, Jim Dickinson pulls the pieces together to think through how student number controls might return to English higher education

Jim is an Associate Editor (SUs) at Wonkhe

Lots of people seem to be worried about the reintroduction of student number controls to the English undergraduate system.

As I write, eight higher education providers have had their applications to join the Office for Students register publicly refused – one has since been let on, and of the remaining seven, six had refusals justified at least in part on the basis of their (low) continuation rates.

Two providers have, for similar reasons, been given formal conditions of registration that “prohibit any increase in the number of students and higher education courses”. And right now I count 21 providers with student numbers recorded by HESA for 2018-19 that haven’t yet made it onto OfS’ register.

Sure looks like student number controls to me.

So what’s going on, and what could be coming next? Are the “controls” we’ve seen so far mere experiments – live pilots for what could come next? Could the next phase see the English regulator being altogether tougher, and playing a central role in the spending restraint that government will require? We thought we’d take a look back at the clues and see if we could piece together a picture.

Winners and losers

The first thing to reflect on is who won and lost the election. In a Dominic Cummings, “baddies and goodies” sort of way, the (liberal) elites lost the election. Virtue-signalling snowflakes lost the election. Remainers lost the election. The left lost the election. Students and academics lost the election. The 50% of young people that go to university? They lost the election.

When a tiny proportion of young people go to university and you’re trying to make the idea of people paying personally towards it palatable, talk of a “graduate premium” makes lots of sense. But once half the population goes, you’re saying that the other half will be “left behind”. And in many ways, that other half – a half painted vividly in the opening sections of the Augar report – just “won” the election.

In my pre-election prediction envelope, I wrote down “it’s going to be really uncool to talk about the graduate premium”. And then in the blink of an eye, there’s UUK urging us all to stop talking about the “graduate premium”, having not stopped talking about it for twenty years – marching vice chancellors directly into a trap.

Back to the future

Those clues, then. Let’s start here – with 2010’s cross-party commissioned Browne review. “We are relying on student choice to drive up quality”, said the review. “Students … will decide where the funding should go; and institutions will compete to get it.” But did it pan out like that?

  • Mistake one was that no-one competed on price (to the surprise of absolutely no-one outside of DfE) – so the system was more expensive than it was supposed to be.
  • Mistake two was hiding the true cost of the system by making loans to students and their institutions whose cost was not included in the deficit – turning those in their twenties against a government whose failure with this group in turn needed to be blamed on higher education.
  • Mistake three was developing progressive system features whose progressiveness only dawned on people at the other end of their life if they’d been economically unsuccessful.
  • Mistake four was assuming that “quality” could be metricised and fed back to applicants to help them make their choices – only a small group in the middle choose like this, with everyone else choosing on the basis of tariff point currency or distance from home.
  • As the decade continued, mistake five was letting every provider expand as much as they wanted – causing the purveyors of those course-level After Eight mints to quietly reduce the cocoa content and put fewer mints in a box, pushing the mid-market to near collapse, and pushing everyone to do things that just look so… uncouth.
  • Mistake six was assuming that demand for HE and HE’s willingness to accept students onto its courses would be roughly consistent – but in fact student numbers grew as a proportion of the population, removing any fiscally helpful system-wide impact of a contracting market of 18-year olds.
  • And mistake seven was letting a croaky-voiced Theresa May over-react to 2017’s snap election result – not so much with her “let’s cut fees” Augar review, but with a knee-jerk yank of the graduate repayment threshold from £21k to £25k overnight – shifting the burden of the cost of the system from about 65 per cent graduate and 35 per cent state to around 50:50.

Golden Browne

We should also reflect on Browne and metrics. Right now there’s concern about an over-reliance on a single type of metric to signal value, but is it any wonder? It feels like a long time ago now, but 2011’s “Students [applicants] at the heart of the system” white paper that responded to Browne was all about metrics. It called on higher education institutions “to provide a standard set of information about their courses” (the largely ignored Key Information Set (KIS)) and “make it easier for prospective students to find and compare this information”. The white paper had plenty of cunning ideas on this front.

David Willetts wanted to put contact hours in there, but the sector organised against it. He encouraged institutions to publish “information for students about teaching qualifications, fellowships and expertise of their teaching staff”, but that was too complicated (and the data quality was terrible), so the sector killed it off. The white paper invited the Higher Education Public Information Steering Group (HEPISG) to consider a “National Student Survey of taught postgraduates” – fiercely resisted and still only ever piloted. Willetts wanted “data showing the type and subjects of actual qualifications held by previously successful applicants” – still never implemented. And the paper called for data on employment and earnings outcomes to be “analysed and presented in a variety of formats to meet the needs of students, parents and advisors”.

In other words, the sector has spent a decade trashing all of the government’s ideas for metrics, and now salary and “satisfaction” are the only ones left – being boiled into benchmarked TEF medals.

Government choices

That gives a government keen to get the cost of higher education under control some choices. It could apply some blanket LEO-data thresholds to the institutions, courses, or students it is prepared to fund – but that would be awfully blunt, and would probably disproportionately harm the North (unless it performs some unlikely data alchemy), the very place where money needs to be seen to be pouring in.

You could propose that entitlement to student finance in the future be determined by a minimum entry standard, based on aptitude. This was one of the central proposals in Browne – but again its bluntness (and likely impact on WP) saw it killed off both back then and when it was floated during the Augar review.

That leaves another tempting alternative. Remember that Michael Barber’s Office for Students (that’s Michael “key member of the Browne review panel” Barber) is modelled much more closely on the Number 10 delivery unit he used to run than on a traditional regulator. What if OfS is offering DfE an already worked-up way to get the cost of higher education under control? We’ve certainly been left some clues to this end.

Those clues in full

Note the political framing around the Teaching Excellence (and now Student Outcomes) framework. TEF was always supposed to be usable as a tool to influence fee levels and therefore course funding. At the launch of TEF year 2, Jo Johnson took the trouble to say that “the purpose of the [subject] pilots is not to test whether to proceed to subject assessment, but to determine how best to do so”.

Then in September, Gavin Williamson said to OfS:

I would like the OfS to publish subject level TEF in 2021. This should be alongside the implementation of a new TEF model to be developed following the publication of the government response to Dame Shirley Pearce’s Independent Review of TEF undertaken under Section 26 of HERA 2017.

This new model should ensure the TEF is seamlessly integrated into OfS’s approach to the regulation of quality more broadly.

It’s widely believed that September’s letter to OfS was Johnson’s – but his resignation over Brexit meant that it was signed by Gavin Williamson – and it was this letter that announced the roll-out of subject-level TEF. Chris Skidmore was said to be cool on subject TEF, but Skidmore has gone, and Williamson remains.

Dame Shirley Pearce’s review of the TEF will include a close look at the metrics, work on international attitudes to the exercise, the views of applicants on its usefulness, and broader debates about relevance, cost, usefulness, and fairness (benchmarking). In other words it’s likely to be a refining exercise, whose recommendations could strengthen the TEF rather than weaken it.

The review has now been complete for over six months, and it’s likely that DfE’s response to it is largely complete too. Could the delay be about lining up the response to the review with the response to Augar? And does anyone really think that the funding status quo will prevail in a scenario where Dominic Cummings’ Treasury is demanding 5% cuts to departmental budgets – particularly now that higher education spend is in the post-fiscal illusion mix?

Regulatory evidence

Let’s then look at some of the things OfS itself has said. The Conservative manifesto said that it would “continue to explore ways to tackle the problem of grade inflation and low quality courses”. But what’s a “low quality course”? OfS’ annual review – published in December – said that it sets:

numerical baselines for indicators such as continuation, completion and employment as part of our assessment of the outcomes delivered for students. Our view is that a minimum level of performance should be delivered for all students, regardless of their background or what and where they study.

We looked at those “B3 baselines” on the site. Note that that Williamson letter also asked OfS to:

use the evidence you have gathered through the registration process to identify where current baseline requirements might be raised to ensure that providers deliver successful outcomes for all students.

And that Christmas OfS report said that:

We will consult on raising these baselines so that they are more demanding, and on using our regulatory powers to require providers to improve pockets of weak provision.

At the time we noted that West London College has a formal condition of registration on the basis of some of its outcomes – yet on its website it’s boasting about its TEF Silver award, which denotes “the excellent outcomes achieved by our students”. When I asked OfS whether it would be taking steps to reconcile the logic of the two approaches (baseline-based ongoing registration, and the TEF), I got a short and sharp “Yes”.

Then at the launch of its review into Access and Participation plans in January, it said:

We are also actively seeking to align our regulation of access and participation more closely with other elements of the regulatory framework such as the TEF and the quality conditions in the regulatory framework, to ensure that these different aspects of our regulation work consistently and coherently with one another. Our consultation on the TEF during 2020 will explore how it can more effectively support access and participation priorities.

Then look at the things OfS board papers tell us about its funding review. The proposed framework that’s coming soon for recurrent funding is to be based around three themes.

  • First is courses, where the proposed (but not yet fully disclosed) approach will support the “quality” of provision, the “choice and experience” available to students, and the “wider economic and social benefit” of higher education. Understanding costs within this theme “would be important”, as would “driving value”.
  • The second theme will be students, where investment is to be focussed on “where it can most effectively reduce gaps in equality of opportunity in access, success, and progression”, moving towards an “explicit link” between funding and the commitments in access and participation plans.
  • And the third will be providers, where investment will “protect and promote world leading provision”.

And then, finally, look at what the minutes of November’s OfS board told us. It will implement a new TEF “in a way that is coherent with” its regulatory conditions. The subject TEF trials “highlight the importance of addressing subject variation”, but do not yet enable robust and credible ratings to be produced at subject level. Nevertheless, to influence behaviour and incentivise improvement, OfS will “publish [supplementary] metrics at subject level as they become available.”

Note the subtle difference. “Given the evidence on variability between subjects, there is an imperative to demonstrate subject differences within the TEF metrics, assessments and outcomes” whilst recognising the constraints on producing “subject level ratings in the next phase” (my bolding). In other words, subject-level medals aren’t yet ready. But that won’t stop OfS publishing subject level metrics (that others could turn into medals).

And once those subject-level metrics cats are out of the bags, is it really tenable for OfS not to implement baseline outcomes regulation at subject level – given that, in many universities, entire subject areas dwarf individual colleges that have already been told “no”? See also access and participation – where the truth about hitherto hidden progress on access and participation in the elite professions (as opposed to easy-to-expand business and social sciences courses) could be about to be laid bare.

What if?

All of that leaves us with some tantalising questions.

  • What if eventually OfS brings together its three main regulatory activities – baseline monitoring, TEF and Access and Participation – into a single, mega-machine operating at subject level?
  • What if the DfE Augar response takes back some of the subsidy given to the sector (via the headline fee cut Theresa May so desperately wanted) and distributes it via OfS – making its funding review much more meaningful?
  • And what if the system of B3 baselines – already used to control and restrict funding and places – is fanned out around the sector at subject level?

The reality is that Government right now commits to funding higher education only above a baseline quality threshold – constructed from a mix of metrics. It’s an essentially automated, metrics-driven system of funding and regulation. Human input is needed to set the thresholds, and then as the data flows in it’s a case of “computer says no” for provision that doesn’t hit the mark. If you want to spend less on it – or at least control the spend – you can raise the threshold, as Gavin asked OfS to do in September.
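To see just how mechanical that is, here’s a minimal sketch of a threshold-driven “computer says no” check and of what happens when the baselines are simply raised. It is entirely illustrative: the metric names, thresholds and figures are invented for the purpose, not OfS’s actual B3 baselines or data.

```python
# Illustrative only: metric names, thresholds and figures are invented,
# not the OfS's actual B3 baselines or data.

# Humans set the baselines once...
BASELINES = {
    "continuation_rate": 0.75,  # proportion continuing into a second year
    "completion_rate": 0.70,    # proportion completing the course
    "progression_rate": 0.60,   # proportion in graduate-level work or further study
}

def computer_says_no(provision_metrics: dict, baselines: dict = BASELINES) -> list:
    """Return the metrics on which a provider (or subject area) falls below baseline."""
    return [
        metric for metric, threshold in baselines.items()
        if provision_metrics.get(metric, 0.0) < threshold
    ]

# ...then the data flows in. A hypothetical subject area at a hypothetical provider:
provision = {"continuation_rate": 0.81, "completion_rate": 0.68, "progression_rate": 0.55}

failures = computer_says_no(provision)
if failures:
    print("Below baseline on:", ", ".join(failures))  # registration or funding at risk
else:
    print("Meets all baselines")

# Raising every threshold (as the September letter invites OfS to consider)
# shrinks the set of provision the system will fund - no further human judgement needed.
tougher = {metric: threshold + 0.05 for metric, threshold in BASELINES.items()}
print("Below tougher baselines on:", ", ".join(computer_says_no(provision, tougher)))
```

The point of the sketch is the asymmetry: the expensive, contestable work is in setting the thresholds; applying them to more granular units of provision is close to free.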

But doing so at provider level is blunt. So what is stopping it operating at a more granular level than the provider? Data Futures is unlikely to start providing real-time data any time soon, and data quality (especially where data is directly linked to institutional viability) is clearly open to question. DfE’s admission that region of graduate residence should join sex, prior qualifications, provider region, and subject area as factors that significantly affect salary data in LEO means that we are unlikely to get information of the quality needed to measure accurately against baselines. There’s a “major review” of the NSS coming. The Royal Statistical Society’s criticisms of the TEF methodology may well have holed the exercise below the waterline.

And in truth the big difference between TEF and baseline regulation is really that there are major differences in approach to benchmarking. Ministers might well be prevaricating over alternative systems which result in different sets of unpalatable “losers”. But the delays could equally be covering attempts to make the framework look respectable.

Because here’s the thing. If OfS really is bringing together different aspects of its regulation to work “consistently and coherently” with one another, it’s a hell of a lot of work if the net result is some “enhanced monitoring” and a handful of tiny colleges being told “no”. Having largely stood aside when QAA’s role was sidelined and “outcomes” became the thing, the sector should probably brace itself for those judgments to start operating, one way or another, at subject level.

One response to “Are student number controls coming back to English higher education?”

  1. “And in truth the big difference between TEF and baseline regulation is really that there are major differences in approach to benchmarking.”
    I couldn’t agree more, and benchmarking is a fundamental difficulty within the TEF as well due to the exercise’s mixed objectives of improving quality (take latent advantages into account to adjust expectations), and informing student choice (show students which providers/departments will bestow any kind of advantage upon their future prospects).
