David Roberts is chair and co-founder of The Knowledge Partnership.


Amy Ross is a senior market analyst at The Knowledge Partnership.

For most UK universities, revenue derived from enrolment, both tuition fees and charges for other services, is the major source of income. Having an attractive range of courses is the most critical factor in achieving healthy recruitment. The flip side is that the teaching of programmes also accounts for a hefty share of a university's costs.

The delicate financial state of the UK sector, even before the pandemic, was in no small measure due to the weak governance of university programme portfolios. This has resulted in an overly complex range of programmes being all too typical, adding cost and risk, whilst simultaneously confusing applicants.

We all know that research is subsidised by tuition fee income, but there is also significant cross-subsidy of unpopular courses using income secured by the more popular ones. This is a PR disaster waiting to happen at a time when the sector is asking for increased public funding.

Take languages, where the lowest proportion of programmes attract viable numbers. I am fully behind an argument to retain courses in strategically important niche subjects. Many of SOAS's language/culture courses in, say, Sanskrit, Tamil or Tagalog enrol tiny numbers, but our full engagement with the world relies on them. Most modern language programmes struggle to recruit, but these departments also contribute to preparing graduates for many growing global careers.

There are other examples of such underpinning subjects, but this neither explains nor justifies the clear oversupply of programmes in many fields of study.

I sympathise with those trying to calculate what the optimal range of courses should be; balancing the desire to maximise appeal whilst managing costs is a complex process. But that is the case in many other sectors and businesses too.

If you get it wrong, and I think the data shows that most universities do, there is a financial price to pay. Maintaining too many small programmes also adds complexity to the timetable, which hits NSS scores, creating reputational risk.

Many students also pay a price if cross-subsidisation between programmes is rife. To some extent this is tolerated but if the data were more transparent, serious questions might be asked. In New Zealand 20 years ago, I interviewed social science undergraduates who complained their fees were subsidising high cost, high salary courses in engineering. At that time, the fees paid by students were but a few hundred dollars.

Fewer, better supported, fully resourced programmes would probably result in better experiences and outcomes for all stakeholders.

Offering too many programmes does more than just thin out enrolments across the portfolio: it is actually a barrier to recruitment, reducing total enrolments and thus student-related income. Fewer courses would not reduce participation.

There is a natural density of course provision given a university's academic/subject footprint. The optimum density varies by subject but, counterintuitive as it sounds, offering four single honours courses is likely to mean a department attracts fewer students than would three – assuming it is the right three.

Customer confusion is the theoretical context, and Mary Portas' TV show provided many retail examples we can all relate to: cluttered shops filled with an array of display goods from which bemused shoppers retreated empty-handed. Faced with a bewildering array of courses, prospective applicants do likewise.

Analysis of courses

I looked at 2018-19 enrolments across the 11,000 single subject undergraduate courses recorded by HESA. Joint honours and interdisciplinary programmes were excluded.

We are all familiar with Pareto's Principle – that roughly 80 per cent of the effects (eg enrolments) come from 20 per cent of the causes (courses). In our analysis, 80 per cent of the enrolments came from 37 per cent of the courses, and 90 per cent from just over half the courses.

So here is the mind-boggling reality: the most popular 111 courses admitted the same number of students as the least popular 5,329 courses.

The summary is similar, and just as startling, for postgraduate courses. The most popular one per cent of courses admitted 15 per cent of students (30,000), and the most popular ten per cent of courses a sizeable 94,000 students – nearly half (48 per cent) of the total.

Or at its most sensationalist, the most popular 101 courses admitted the same as the least popular 5,585 (55 per cent).
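
For anyone wanting to run this kind of concentration check on their own course-level data, a minimal sketch follows. It assumes a simple table with one row per course and an enrolments count; the file name and column names are placeholders, not HESA field names, and it is not the exact pipeline behind the figures above.

```python
# Minimal sketch only. File and column names are assumptions for illustration.
import pandas as pd

courses = pd.read_csv("ug_single_subject_2018_19.csv")  # hypothetical extract

# Rank courses from most to least popular
ranked = courses.sort_values("enrolments", ascending=False).reset_index(drop=True)
total = ranked["enrolments"].sum()

# Share of all enrolments accounted for by the top N courses
ranked["cum_share"] = ranked["enrolments"].cumsum() / total

for target in (0.80, 0.90):
    n = int((ranked["cum_share"] < target).sum()) + 1
    print(f"{target:.0%} of enrolments come from {n} courses "
          f"({n / len(ranked):.0%} of all courses)")

def tail_matching_head(top_n: int) -> int:
    """How many of the least popular courses does it take to match the
    combined intake of the top_n most popular ones?"""
    head_total = ranked["enrolments"].head(top_n).sum()
    tail_cum = ranked["enrolments"][::-1].cumsum()  # cumulate from the bottom up
    return int((tail_cum <= head_total).sum())

# Head-to-tail comparison of the kind quoted above
print(tail_matching_head(111), "least popular courses match the intake of the top 111")
```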

Yes, the data is flawed because the HESA course title field includes anomalies, but a forensic evaluation indicates these account for no more than two to three per cent of the courses in the cleaned dataset we have used.

So what are these data anomalies that mean those using the raw HESA data are likely to generate somewhat inaccurate outputs?

Well, some universities report courses with work experience as distinct from the same programme without the sandwich element. However, this is not very common and accounts for about 200 courses in the data. The same issue applies to courses with a Year 0 or foundation year (which is in truth FE Level 3).

Then there are the four-year undergraduate Masters courses that sit alongside a three-year equivalent – BEng and MEng being a good example. These share a lot of common modules and assessment, but in reality they are rarely the type of course that attracts very few students. In any case, most universities report BEng/MEng students as one programme, as some students subsequently transfer between the two. We estimate there are 550 undergraduate Masters courses where a separate student cohort is reported, with most (800) reported collectively.

Specialist pathways, such as BSc Biosciences (Oncology), are increasingly common, but enrolments are only assigned to the pathway if the student has committed to it at the time they enrol. So in this example, where the choice of pathway is deferred, all the students are assigned to BSc Biosciences.

In the PGT data there are very few duplicates, and prior to running the analysis we removed PGCert and PGDip courses where there was an associated Masters programme. Clearly all share the same curriculum, but some universities report their numbers at each level where they market the certificate and diploma stages as distinct propositions – mostly in health, education and business fields.

But there is another fundamental flaw in the HESA data – you can only identify courses if there is at least one enrolment, so all those courses with no admissions magically disappear. Fortunately, at TKP we maintain a dataset of all the courses that universities market, and a quick piece of maths showed that in 2018 universities collectively marketed well over 1,000 more single subject courses than were recorded in the HESA data. One can only assume they admitted no students.
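
That quick piece of maths is essentially an anti-join between a list of marketed courses and the courses present in HESA. A sketch is below; the file names and join keys are hypothetical placeholders, not the structure of the TKP or HESA datasets.

```python
# Illustrative sketch only: file names and join keys are assumptions.
import pandas as pd

marketed = pd.read_csv("marketed_courses_2018.csv")       # all courses on the market
recorded = pd.read_csv("hesa_recorded_courses_2018.csv")  # courses with >= 1 enrolment

keys = ["provider", "course_title"]
merged = marketed.merge(recorded[keys].drop_duplicates(), on=keys,
                        how="left", indicator=True)

# Courses marketed to applicants but absent from the HESA record,
# i.e. presumably admitting no students at all
unrecorded = merged[merged["_merge"] == "left_only"]
print(len(unrecorded), "marketed courses with no recorded enrolments")
```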

Excess courses create inefficiency

Of course the headline figures will exaggerate inefficiency, because virtually every course shares at least some modules with other courses. Some universities are lean and mean, with relatively few modules for the scale of their admissions; others carry far more than is typical.

We have no evidence that the latter perform better in terms of NSS or attract more applications, but they will carry more costs and make the timetable more complex. A decade or more of experience in this realm tells us that, for most types of university, the coherence of a course is the most critical characteristic in marketing programmes, not the amount of module choice on offer.

While offering a lot of courses built on a bank of shared modules may appear efficient, the danger is that the product range looks decidedly inauthentic, with courses appearing indistinct from one another.

Crucially, our analysis shows that offering too many courses within a subject field not only leads to cannibalisation, it very often results in a reduction in the total number of students enrolling for that subject at the host university. So what seemed like an effective model – a range of courses sharing a lot of modules at low marginal cost – actually results in lower tuition fee income. Less can mean more.

We can trace this problem back to the adoption of modular frameworks, a shift that made it (too) easy to create more, but all too often indistinct, programmes. I call this the Lego Effect.

When the government talks about poor value degrees they mean those not leading to jobs with graduate-level salaries. But these figures suggest that it is vice chancellors who ought to be asking the question about low value degrees from a sustainability perspective.

A few years ago, as part of the I-MAP work funded by HEFCE, we asked universities how many FTEs they used to determine whether a course was covering its full costs or to approve a new course.

The results indicated 25-35 for a first degree and 15 for a Masters. This was of course an average, and costs vary by subject, but as the analysis for Augar showed, for most subjects the variance is small, which is reflected in the top-up grants made by OfS.

So using 25/15 students, how does the sector’s provision fare? The table shows data for all courses at general public universities recruiting at least one full-time student.

I have separated the Russell Group from other established pre-92 universities, and the “new wave” (those awarded university status after 1992) from their post-92 (mostly ex-polytechnic) counterparts to give a more nuanced picture.

University type     % courses enrolling a     % courses enrolling a
                    minimum of 25 UGs         minimum of 15 PGTs
Post-92s                    59                        42
Russell Group               55                        51
Established                 55                        36
New Wave                    38                        22

The surprise is probably that the post-92s have a better undergraduate hit rate than other universities. Less of a surprise is that the Russell Group fares best at PGT. Significant variation by subject is observed, but most subjects fall in the 30-50 per cent range, with a sector-wide average of just 42 per cent.
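
For those wanting to reproduce a viability check of this kind, the calculation itself is straightforward. The sketch below assumes a course-level table with an enrolments count, a UG/PGT flag and a university-type label; the column and file names are illustrative assumptions, not the fields of the underlying data.

```python
# Sketch under stated assumptions: one row per course, with an "enrolments"
# count, a "level" flag ("UG" or "PGT") and a "group" university-type label.
import pandas as pd

courses = pd.read_csv("courses_by_university_group.csv")  # hypothetical file

THRESHOLDS = {"UG": 25, "PGT": 15}  # the 25/15 viability benchmarks used above

def viable_share(level: str) -> pd.Series:
    """Percentage of courses in each university group meeting the threshold."""
    subset = courses[courses["level"] == level]
    meets = subset["enrolments"] >= THRESHOLDS[level]
    return (meets.groupby(subset["group"]).mean() * 100).round(0)

summary = pd.DataFrame({
    "% courses >= 25 UG": viable_share("UG"),
    "% courses >= 15 PGT": viable_share("PGT"),
})
print(summary)
```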

If we compare courses with start-up business survival or product success rates, 42 per cent would be a fair benchmark. The difference is that failing businesses go bust and unappealing products get killed off. Unviable courses seem to remain on life support for years.

So there is a very good case for saying that far too many courses are on university books, and far too many have little appeal to students. Unless the sector shows it is addressing this issue, claims it needs more funding are likely to fall on deaf ears.

Reasons for the proliferation of unviable programmes

Having worked in this field for 30 years, and taken a specific interest in this problem, here are my top reasons, all of which I think are visible in most universities.

Put very simply, far too many new courses are approved based on wishful thinking when it comes to forecasting enrolments. And too many courses stay on the books when there is no evidence they will ever attract viable numbers. So course portfolios proliferate faster than the scale of applications and enrolments. Ergo financial pressure.

Greater accountability for failure to meet business case forecasts for course enrolment would probably mean fewer new programmes would be approved in the first place. Last year I obtained feedback from key staff in 30 universities and none reported that the actual enrolment performance of new courses was monitored and then compared with what the proposing department said would be achieved. So how does the sector know how effective (or not) its approval processes are?

There is still deference to an academic-led culture; those with market insight have little leverage on decisions to approve courses even where the evidence of likely demand is thin. This is particularly the case at Masters level.

Too often universities launch courses based on a novel research theme, forgetting that, as universities are at the forefront of creating knowledge, the rest of society, and prospective students in particular, need time to become aware of this emergent field and understand it, well before they are willing to enrol on a specialist course.

Many universities seem reluctant to close unviable courses. This is partly cultural, but also a reflection of the challenge of teaching out programmes and the associated bureaucracy. Teaching out one-year Masters courses, though, is not a major obstacle.

Some are worried about the potential bad publicity of closure, with campaigns to “save our course” initiated by alumni or unions. In truth, such campaigns usually find life when a department or subject is earmarked for closure, not individual courses.

The predilection for devolved financial management within the sector, with budgets monitored at school or departmental level, encourages the retention of small programmes with their modest but valued revenue streams. The idea that fewer courses might attract more students seems just too counterintuitive.

Cutting the costs allocated against a course (staff time, for example) is complex, particularly if the lecturers are on permanent contracts and teach niche topics on other programmes. This is the house of cards problem. Portfolios are interdependent, and removing one card has consequences elsewhere. To a head of school, doing nothing may seem a good option.

An issue associated specifically with smaller and newer universities is that many have been stimulated into setting ambitious student growth targets by analysis suggesting there are financial and other benefits from scale. I have my doubts, especially when being friendly and personal is simultaneously promoted as their point of difference.

Smaller institutions have limited resources, but they know that it is courses that attract students/revenue so many have set in train ambitious product development plans. But in attempting to grow on the cheap, they proliferate their portfolios within existing subjects as an alternative to strategically investing in new fields with strong demand that expand their subject footprint. Thankfully I now detect a more thoughtful approach is emerging across this set of institutions.

What needs to change?

Whilst success in a market is not completely in a university's own hands, the rate of success of its courses mostly is, because the great majority of failures are highly predictable.

Regular, systematic, evidence-based evaluations of portfolios are essential as markets become more competitive and stressed, and the rate of social and technological change increases. This process must be fully integrated within the planning of faculties and schools.

Portfolios need to be rationalised with more choice within programmes (pathways) rather than offering an array of choice at the point of enrolment. Deferred student choices are likely to be more informed choices.

Fewer new courses should be approved, as a consequence of more robust market-based scrutiny of proposals. All new course concepts should be evaluated in the round, rather than on an ad hoc basis.

More primary research into the best new course concepts in innovative fields is needed. Will the courses find a market? Which features of the proposed design will most enhance the appeal of the course to applicants and, where relevant, to employers and other stakeholders?

Too often the expertise of an academic or their research team finds expression as a full Master’s degree, when it would be more appropriate for it to enrich the curriculum of an existing programme, perhaps through a specific module or pathway. The risk of approving new postgrad courses that rely on the niche expertise and unique networks of one or two academics ought to be given more weight in the QA process.

The use of independent critical friends can ensure that decisions are not based on power and position, but on the evidence and from a market perspective. Data is only as good as its interpretation, and those evaluating the data often lack wider market experience and mindset.

5 responses to “Universities must address the oversupply of courses”

  1. All aspects of this article correspond with my experience of the matters at hand. What’s also important, though, is the manner and extent to which HE providers need appropriate governance structures in place to ensure the regular and effective application of the processes and procedures identified in the piece. Quite often this comes down to the relative roles and responsibilities of academics in taking (or too frequently not taking!) the kind of decisions required to implement successfully the very sage advice provided.

  2. This is all so very true of all the institutions I have worked in. Courses get approved on the basis of projected student numbers, but very rarely is there a review of these projections one or two years in. If there were, a lot of these courses would be found to have failed to meet their projections. Neither does there tend to be any real consideration of the impact on other similar courses within the portfolio.

    I suspect that a number of the 1,000 courses found to have been marketed but not to have a single enrolment are a consequence of the tendency to approve courses late in the recruitment cycle. These courses don’t recruit, and their starts are then often deferred for a year, if they ever start at all.

  3. I’m not sure the causality is the right way around here. IME poor recruitment leads to a lot of programme development rather than the other way about.

  4. Interestingly, one of the universities to introduce more choice (Keele) then ended up having very high NSS scores for quite a few years in succession, even though not all students used that flexibility to pick modules on different courses. It could be an exception to prove the rule, of course, as it was known for Dual Honours courses in various combinations, so students may have been predisposed to like choice. However, in my experience elsewhere, students do like to have choices in their curriculum (so arguing for module choice), even if you were to restrict the number of courses available.
