David Kernohan is Deputy Editor of Wonkhe

If you work in higher education you will eventually hear about the “quality wars” – a running disagreement that can be seen as a struggle for the soul of higher education.

Like much in higher education policy, it was an idea crystallised by David Watson. In attempting to characterise the historic changes in thinking around the quality of higher education provision, Watson noted in 2006 that:

the audit society and the accountability culture have collided (apparently) with academic freedom and institutional autonomy

Though the actors have changed, the two sides have remained visible for over 30 years. The creation of the QAA in 1997 could be seen as the start of an unstable interregnum – a peace that wavered memorably in 2001 and 2008 and was finally shattered in 2014.

But we are getting ahead of ourselves.

What do we mean by quality in higher education?

In many ways the entire long-standing disagreement is a question of definition. What we mean by quality is intimately tied up with what we mean by higher education. It is far easier to describe how we measure quality (“quality assurance”) or improve quality (“quality enhancement”) than to fully define what we understand by quality – an issue that is still live today.

The older debates, as we will see, focused on the related idea of “academic standards” – a concept tied to academic quality but based on comparability between providers rather than any external measure.

At base we could define quality assurance as the mechanisms and processes that allow those experiencing or paying for higher education to be confident of reliable and appropriate outcomes. The QAA uses language around “students working towards a UK qualification get the higher education experiences they are entitled to expect”, and uses an examination of institutional processes and practice as a primary means of understanding this.

Conversely, the Office for Students links teaching quality to the wider concept of “value for money”, and uses student satisfaction and destination data to measure this – alongside policing minimum quality thresholds in regulation.

A brief history of quality

As long as there have been universities there has been a concern about their quality and usefulness – a concern that has frequently been used for wider political and societal purposes. The historical default is that a university has a responsibility for its own practice – or in other words, autonomy. There are two generally accepted deviations from this – the first is (since the establishment of the University of Durham in 1832) the use of external examiners to ensure cross-provider comparability, the second is the input of professional or statutory bodies into the curriculum.

But the university has never been the only provider of what we would recognise as higher education – other providers have other traditions. In the UK the polytechnic system (the other half of the binary divide) was overseen by both the Council for National Academic Awards (CNAA) and Her Majesty’s Inspectorate (HMI). While the amount of intervention from these bodies varied from provider to provider (the idea of “risk-based regulation”), this was substantially more centralised governmental oversight than the university system.

With the writing on the wall regarding the merging of the two main UK systems, 1990 saw the establishment of an Academic Audit Unit at the Committee of Vice Chancellors and Principals (CVCP, the forerunner of Universities UK). The Academic Audit Unit was developed to check institutional compliance with a series of codes of practice (the forerunners of the quality code!) created by an earlier CVCP Academic Standards Group. These covered things like external examiners, post-graduate training, and appeals processes. Providers generally complied with this guidance, but there were a number of areas where local practice was very different.

Getting through 24 provider reviews in a little over three years, this unit was incorporated into the Higher Education Quality Council (HEQC) when the binary divide was closed in 1992 – though this could also be seen as the unit being merged into the CNAA quality support group. HEQC continued with the pattern of institutional reviews established by both organisations.

The first quality wars

Meanwhile, the new funding councils, HEFCE in particular, were conducting detailed reviews of subject provision. This was not a new development – reviews of provision in particular subjects of interest have a long history – but this was a more systemised approach. The first formal HEFCE subject review – of Law – was published in January 1995, following work in 1993-94.

Though there was initially some talk of the grades given in these reviews being linked to funding via the allocation of student numbers, this transformed into a link between quality assurance (QA) and quality enhancement (QE), via a programme of funded projects called the Fund for the Development of Teaching and Learning (FDTL). The idea of a link between funding and QA has re-erupted frequently since (most notably around attempts to link TEF to fee levels), but has never successfully been implemented at anything other than a threshold level.

Two parallel systems of quality assurance – HEQC at the institutional level, the funding councils at the subject level – were widely seen as overly burdensome. The funding council involvement in subject review – with a wider duty on funding councils to “secure that provision is made for assessing the quality of education provided in institutions for whose activities they provide, or are considering providing, financial support” in the 1992 act – was generally seen as a threat to institutional autonomy.

The “quality wars” of popular legend refer to the tensions between these two competing models of quality assurance – the consensual, sector-owned, provider level views taken by HEQC and the imposition of the subject review process seen very much as a tool of government.

Something had to give, and in 1997 a joint planning group recommended the establishment of a new, independent agency to take on both the subject and institutional quality functions – the Quality Assurance Agency.

To Gloucestershire, and beyond

A new, independent (but nominally sector-owned) agency was seen as a way around the perceptions of an attack on autonomy. But it was the other end of the argument, that of burden, that caused the first problem.

On 21 March 2001, Secretary of State David Blunkett responded to lobbying by the HE sector by promising to implement a “40 per cent reduction in the assessment of higher education teaching”. This manifested itself as a cessation of mandatory subject review, with the possibility of audit trails from institutional review reaching back into faculties remaining until 2005. The resignation of the QAA’s first chief executive – John Randall – was seen as a scalp claimed by a resurgence of institutional autonomy. Blunkett’s decision came in an atmosphere of antipathy – with talk of refusals and boycotts.

But, as is often the case, there was a quid pro quo. A 2001 HEFCE consultation called for the release of information about quality that led – eventually – to TEF, the National Student Survey and Discover Uni. Student information, previously confined to the prospectus and the newspaper league table, had become another potential front in any future quality war, with a 2005 review group recommending the removal of some of the more qualitative aspects (external examiner summaries?) from Unistats.

HEFCE, in particular, retained a programme of quality enhancement that owed a lot to HEQC, taking on things like the FDTL and the Teaching Quality Enhancement Fund (TQEF). I’ve covered some of this history elsewhere, and there’s even a commemorative tea towel.

In 2009, the House of Commons Innovation, Universities, Science and Skills Select Committee published a report that was sharply critical of the QAA. Spurred in part by complaints from a number of academics, MPs produced a detailed (and still relevant – there’s even stuff on grade inflation) report that made a case for substantial reform. In some ways this went against the likely intentions of the complainants – there was a sense that a return to reviewing quality directly, rather than processes, was needed.

The second quality wars

But before this call for reforms had time to work its way through the system, an election and the Browne Review prompted a much wider (read: about fees) national debate about HE policy. A 2011 proposal from HEFCE to link funding to quality assurance was lost in the dust – the similar hints in the Browne report led to a HEFCE suggestion there was “no formulaic way” for quality to be incorporated into funding (or transitional student number) decisions.

2013 also saw a new institutional review method from the QAA, the Higher Education Review, which (in England, at least) brought FE colleges and alternative providers into the system on a level playing field. The old standard cyclical reviews were replaced with a risk-based approach that allowed more attention to be paid where it was needed.

But, on 7 October 2014, the synergy between regulator and agency was cast into doubt. Funders, with HEFCE taking the lead, had always negotiated a contractual relationship with QAA – but the next renewal would be put out to open tender, or – possibly – taken in house by HEFCE. (Recall here that QAA had been operating using powers delegated from the funding councils.)

A great deal was written as to why, but the consensus was that this was an act of aggression on the part of a HEFCE facing radical reconfiguration or closure as the new funding model took hold. The thinking was that responsibility for quality assurance should be placed on provider governing bodies, with HEFCE topping this up with a quinquennial review.

But no-one expected the Teaching Excellence Framework! First showing up in the 2015 Conservative Manifesto, and then fleshed out in Green and White papers, this was an explicitly metrics-based approach to quality that was positioned (oddly) as quality enhancement rather than assurance.

Designation (that’s what you need)

The passage of the Higher Education and Research Act saw a surprising amount of discussion about quality in parliament, and all of it was focused on TEF. But seasoned quality watchers were more interested in the role of the designated quality body.

Though HEFCE had eventually re-appointed the QAA to do quality assurance, the reapplication process was not one that impressed the DfE. This could well have been a factor in the decision to dismantle HEFCE (another story suggested it was a refusal by senior agency staff to move HEFCE to Swindon).

Appointing a “Designated Quality Body”, by statute, was an attempt to put an end to quality power games – placing the responsibility for identifying a suitable body on the new Office for Students, but requiring approval and oversight from the Minister. As widely expected, the QAA became the DQB, and almost immediately realised the problems that this caused.

The issue was financial – the DQB is required to fund DQB-style activities (quality assurance of providers, in England) through a specific subscription that cannot be spent on anything else. The QAA does a lot of other things, not least the Quality Code and Subject Benchmarks, which are UK wide. So another subscription is required to pay for these – and subscribing to the same organisation twice is seldom popular.

Meanwhile, providers are also subscribing (in England) to the Office for Students – who have, in opting for primarily data-driven regulation, diminished the importance of the more qualitative operation of the QAA institutional reviews, guidance and provider visits.

Peace in our time?

The debates around ownership and agency in quality continue, though the field of battle may have shifted. The widely reported overreach of the OfS in the Data Futures programme shows that the regulator has an appetite for the ownership of the data it now uses in regulation. Both the QAA and HESA are in a weaker financial position following success in the designation process, with constraints on subscription funding leading both to propose a complex dual funding model.

The first quality war was fought with autonomy, the second with agency and remit. The third will be fought with data. We started off with concerns about approaches to quality – with the sector lining up against the government. The second war was between what were widely seen as two arms of the same creature, with the sector interests resigned to sniping from the corners.

Data-driven approaches push the focus of quality assurance not just out of the classroom, as institutional review did, but out of the university altogether. A focus on input and output measures (as in TEF, and with LEO) treats the activity within a provider as unknowable – the QAA examination of processes at least ensured that a provider knew what was going on in the lecture theatre and lab. The third quality war is a methodological skirmish, a debate between qualitative and quantitative approaches, or between sampling and big data, that mirrors wider global problems.

But the idea of a UK wide HE sector, as seen by prospective international students, is becoming a problem too. The quality environment is increasingly fragmented. I’ve focused mainly on England here, but in Scotland the early 00s focus on enhancement themes has been retained, whereas in Wales recent changes following the Diamond review mean that providers can seek assurance from any body registered with EQAR (the European Quality Assurance Register for Higher Education), though all providers have chosen the QAA.

Juggling the four systems, and changes to the English approach in particular, has made it hard for the QAA to make a continued case for the excellence of what it does to EQAR, ENQA and others – a sorry state of affairs. In many ways the history of quality in HE in the UK has influenced the world, but we now find ourselves struggling to match up to the standards we ourselves set.

Read David Watson’s seminal 2006 article, “Who killed what in the quality wars?”, here. With grateful thanks to the QAA.
