
How universities identify “low quality” courses

Universities UK's framework for programme review attempts to shed more light on an often-ignored facet of quality assurance. David Kernohan changes the bulb.

David Kernohan is Deputy Editor of Wonkhe

Ahead of a big OfS consultation dump due later this week, Universities UK is attempting to get ahead of the game with the release of a framework for programme reviews.

A programme review – for those mercifully uninitiated – is an internal examination of the quality and scope of a course (or group of related courses) with a view to either making iterative adaptations to make it even better, or putting it out of its misery. Although in policy terms we often think of quality assurance as an external process – conducted by the QAA or via a series of largely arbitrary historic metrics – the actual work happens autonomously in these smaller scale, often externally invisible, processes.

So for me, a big part of the point of publishing this framework is the high-profile reminder that this kind of internal process is an important component of the way higher education develops its course portfolios, and is arguably the reason why it remains so hard for anyone to sensibly identify a “low quality” course from metrics alone.

How it should work

Programme reviews are usually conducted annually, with a more intensive review taking place in around a third of providers about once every three years – though a provider will also step in wherever and whenever it sees something to be concerned about. This concern is most likely to stem from application numbers (in nearly every provider), from student (or graduate) feedback, from continuation rates, from a high or growing cost of delivery, or from one of a basket of output metrics. As UUK puts it, alongside the current regulatory darling that is output metrics, the key measures used should include a consideration of:

  • Financial viability – do enough people want to do the course to bring in enough money to run it?
  • Quality – is the course able to offer appropriate learning opportunities to students (often with reference to the UK Quality Code and associated guidance)?
  • Standards – what do assessment results and student feedback tell us about the course?
  • Professional requirements – what do employers (or PSRBs) require from the course, and does the course deliver that?

As can be seen from that list, decisions around course closure are not made on the basis of metrics alone – a great deal of qualitative data is also involved. This “soft” data will include conversations with and reports from external examiners, employers, students, graduates, outreach teams, and academic staff.

Because course closure can carry a significant financial cost for providers, and is hugely detrimental to students currently on the course, a university needs to be very clear and confident about the basis for such a decision. Short of closing a course, a provider may instead decide to put together an action plan to enhance provision.

Why now?

It’s not as if ministers and the regulator in England have even tried to hide their desire to identify “low quality courses” via external data on outcomes. This process has been through various iterations (Subject TEF, LEO data…) but recent comments have focused on the findings of the OfS’ Proceed metric, which combines a loosely defined graduate-level outcome measure with what amounts to the old continuation UKPI and – despite recent posturing that would suggest otherwise – is benchmarked.
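For those who haven’t dug into the methodology, a rough sketch of how the headline Proceed figure is put together may help. This is my reading of the OfS documentation rather than anything UUK sets out, and the numbers are invented for illustration:

\[
\text{Proceed} \approx \text{projected completion rate} \times \text{positive graduate outcomes rate}
\]

So a course with a projected completion rate of 90 per cent, where 60 per cent of graduates go on to professional employment or further study, would score roughly 0.90 × 0.60 = 0.54, or a Proceed value of 54 per cent. It is this single combined figure, rather than the two components separately, that has attracted ministerial attention.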

Here’s a plot of Proceed data by subject area from last year, so you can see what we are talking about. This one plots the two components on separate axes rather than the combined core measure, but you can still see what we might call “areas of concern”:

[Interactive chart: Proceed components by subject area]

As we went over at the time, this is a very blunt instrument – subject coding is a whole other world of complexity, and there is no sense in which it can identify low quality “courses” per se. There are also some concerning correlations between Proceed scores and some student characteristics, for example (at provider level):

[Interactive chart: Proceed scores against student characteristics, at provider level]

What it looks like UUK is doing here is providing a gentle reminder that better and more nuanced provider-level systems to identify and remove low quality courses already exist and are (broadly) effective. It’s not in a provider’s interest to run a course that students and employers are unhappy with – and, as the framework rightly points out, it is equally not in a provider’s interest to run a course that loses a pile of money every year, whether by failing to recruit or by having costs substantially beyond what is sustainable.

There’s also a sense in which talking about the actual process cuts across the disingenuous way that ministers use “low quality” language to take a swipe at subject areas or types of provision that they don’t like. Certainly, if you run an arts or humanities course you would have felt your ears prick up a few times since 2017.

Why a framework?

Because of the autonomous nature of universities, nobody can point to a national standard in this space. It is possible, for instance, that a course considered viable at one provider would be quickly closed at another.

So while UUK’s framework stops short of setting national standards, it does helpfully describe the way the process should be working and the kind of things that need to be taken into consideration. We get clarity – for example – that metrics should be used contextually to flag concerns for further investigation rather than as simple cut-off points, that diversity and innovation should be taken into account, and that measures of quality (supposedly objective and inputs-based) and value (supposedly subjective and outputs-informed) should both be considered.

There’s a specific section on the use of data, pointing to possible data sources for contextual and core metrics, that adds more specifics to the principles you’ll remember from the Quality Code guidance. What’s striking here is the range of reference points – we see a discussion of the expected data sources (NSS, continuation, completion, graduate outcomes) alongside more qualitative stuff (graduate views on career progress, learning gain) and complex measures like social mobility, high skilled work in local areas, and contribution to culture.

What’s new?

It’s also interesting to see a consideration of wider institutional strategy – for example it may make sense for a provider to offer a course that links specifically to local needs or areas of strategically important research. But if you’re thinking that strategic links leave latitude for personal agendas, this is coupled with an expectation that decisions are made transparently and consistently – with this framework’s major innovation being a request to UUK members in England to publish an annual report on their course review process (starting in early 2023).

This annual report will be a brief high-level overview at provider level, so there’s no expectation of a report per course. A methods statement like this would allow external and internal stakeholders to hold a university to account if they felt a particular decision had been made poorly. In particular, we see a demand for details of thresholds and comparator groups that would shed a lot of light on what can often be an opaque process – an approach that would reassure regulators and the public that enhancement and course closure plans were based on meaningful review findings.

What’s missing?

It would have been good to see more here on the use of student and graduate voices within these processes. After all, students and graduates are the ones having the experiences that these interventions are meant to improve – and the executive summary highlights that the quality of provision and assessment (with output metrics a very poor third) are their primary concerns.

It’s a missed opportunity because many providers do this very well – it is hardly an unorthodox practice – and more could learn from the approach. In-year module and course surveys, and indeed the free-text portions of the NSS, are very valuable sources of insight, but nothing beats the understanding gleaned from even a short conversation with an actual learner. Maddeningly, what we do get is material on consulting students about the choice of metrics – a valuable activity in itself, though one that possibly misses the point.

I’m also not convinced by the split between quality and value – these are two variables that interact (surely good quality delivery may in itself be expressed as value). In separating them we ignore a lot of nuance – something this publication implicitly rebukes OfS and DfE for doing – and risk suggesting a trade-off where none exists.

And though there is a certain bravery in publishing a guide to closing courses during a period of heightened industrial action, we could usefully have seen more on managing the impacts on the staff and students involved. The framework does nod in this direction, but right now we need more than a nod.
