Richard Harrison is Deputy Secretary at the University of York, but is writing in a personal capacity

A decade since his passing, David Watson’s work remains a touchpoint of UK higher education analysis.

This reflects the depth and acuity of his analysis, but also his ability as a phrasemaker.

One of his phrases that has stood the test of time is the “quality wars” – his label for the convulsions in UK higher education in the 1990s and early 2000s over the assurance of academic quality and standards.

Watson coined this phrase in 2006, shortly after the 2001 settlement that brought the quality wars to an end. A peace that lasted, with a few small border skirmishes, until HEFCE’s launch of its review of quality assessment in 2015.

War never changes

I wasn’t there, but someone who was has described to me a meeting at that time involving heads of university administration and HEFCE’s chief executive. As told to me, at one point the registrar of a large and successful university effectively called out HEFCE’s moves on quality assessment, urging HEFCE not to reopen the quality wars. I’ve no idea if the phrase Pandora’s box was used, but it would fit the tenor of the exchange as it was relayed to me.

Of course this warning was ignored. And of course (as is usually the case) the registrar was right. The peace was broken, and the quality wars returned to England.

The staging posts of the revived conflict are clear.

HEFCE’s Revised operating model for quality assessment was introduced in 2016. OfS was established two years later, leading to the B conditions mark I, followed later the same year by a wholesale re-write of the UK quality code that was reportedly largely prompted and/or driven by OfS. Only for OfS to decide by 2020 that it wasn’t content with this, to repudiate the UK quality code, and to implement from 2022 the B conditions mark II (new, improved; well maybe not the latter, but definitely longer).

And a second front in the quality wars opened up in 2016, with the birth of the Teaching Excellence Framework (TEF). Not quite quality assessment in the by then traditional UK sense, but still driven by a desire to sort the sheep from the goats – identifying both the pinnacles of excellence and depths of… well, that was never entirely clear. And as with quality assessment, TEF was a very moveable feast.

There were three iterations of Old TEF between 2016 and 2018. Then came the repeated insistence that subject-level TEF was a done deal, leading to huge amounts of time and effort on preparations in universities between 2017 and early 2020, only for subject-level TEF to be scrapped in 2021. At which point New TEF emerged from the ashes, embraced by the sector with an enthusiasm that was perhaps to be expected – particularly after the ravages of the Covid pandemic.

And through New TEF the two fronts allegedly became a united force. To quote OfS’s regulatory advice, the B conditions and New TEF formed part of an “overall approach” where “conditions of registration are designed to ensure a minimum level” and OfS sought “to incentivise providers to pursue excellence in their own chosen way … in a number of ways, including through the TEF”.

Turn and face the strange

So in less than a decade English higher education experienced: three iterations of quality assessment; three versions of TEF (one ultimately not implemented, but still hugely disruptive to the sector); and a rationalisation of the links between the two that required a lot of imagination, and a leap of faith, to accept the claims being made.

Pandora’s box indeed.

No wonder that David Behan’s independent review of OfS recommended “that the OfS’s quality assessment methodologies and activity be brought together to form a more integrated assessment of quality.” Last week we had the first indications from OfS of how it will address this recommendation, and there are two obvious questions: can we see a new truce emerging in the quality wars; and given where we look as though we may end up on this issue, was this round of the quality wars worth fighting?

Any assessment of where we are following the last decade of repeated and rapid change has to recognise that there have been some gains. The outcomes data used in TEF, particularly the approach to benchmarking at institutional and subject levels, is and always has been incredibly interesting and, if used wisely, useful data. The construction of a national assessment process leading to crude overall judgments just didn’t constitute wise use of the data.

And while many in the sector continue to express concern at the way such data was subsequently brought into the approach to national quality assessment by OfS, this has addressed the most significant lacuna of the pre-2016 approach to quality assurance. The ability to use this to identify specific areas and issues of potential concern for further, targeted investigation also addresses a problematic gap in previous approaches that were almost entirely focused on cyclical review of entire institutions.

It’s difficult though to conclude that these advances, important elements of which it appears will be maintained in the new quality assessment approach being developed by OfS, were worth the costs of the turbulence of the last 10 years.

Integration

What appears to be emerging from OfS’s development of a new integrated approach to quality assessment essentially feels like a move back towards central elements of the pre-2016 system, with regular cyclical reviews of all providers (with or without visits, to be decided) against a single reference point (albeit the B conditions rather than the UK Quality Code). Of course it’s implicit rather than explicit, but it feels like an acknowledgment that the baby was thrown out with the bathwater in 2016.

There are of course multiple reasons for this, but a crucial one has been the march away from the concept of co-regulation between regulators and higher education providers. This was a conscious and deliberate decision, and one that has always been slightly mystifying. As a sector we recognise and promote the concept of co-creation of academic provision by staff and students, while being able to maintain robust assessment of the latter by the former. The same can and should be true of providers and regulators in relation to quality assurance and assessment, and last week’s OfS blog gives some hope that OfS is belatedly moving in this direction.

It’s essential that they do.

Another of David Watson’s memorable phrases was “controlled reputational range”: the way in which the standing of UK higher education was maintained by a combination of internal and external approaches. It is increasingly clear from recent provider failures and the instances of unacceptable practices in relation to some franchised provision that this controlled reputational range is at risk. And while this is down to developments and events in England, it jeopardises this reputation for universities across the UK.

A large part of the responsibility for this must sit with OfS and its approach to date to regulating academic quality and standards. There have also been significant failings on the part of awarding bodies, both universities and private providers. The answer must therefore lie in partnership working between regulators and universities, moving closer to a co-regulatory approach based on a final critical element of UK higher education identified by Watson – its “collaborative gene”.

OfS’s blog post on its developing approach to quality assessments holds out hope of moves in this direction. And if this is followed through, perhaps we’re on the verge of a new settlement in the quality wars.
