Ticking boxes misses the point
When QAA announced in 2022 that it would give up its role as England's Designated Quality Body (DQB), it said that it was not sustainable to satisfy both the Office for Students' specifications for DQB work in England and the expectations that flowed from its registration on the European Quality Assurance Register for Higher Education (EQAR), which is based on the ESG.
The ESG underpins that EQAR registration – agencies have to demonstrate “substantial compliance” with the standards and guidelines through an external review.
EQAR temporarily suspended QAA's registration earlier in 2022 because aspects of its English DQB activities – notably restrictions on publishing reports and on student involvement – were judged non-compliant with ESG expectations. That forced QAA first to change its practice, and then to confront the deeper structural conflict between OfS's design and ESG norms.
Now OfS says that its proposed overall approach “broadly includes all the elements required” by the ESG, and that it will:
…explore… whether adherence would involve further adjustment to the future system and any steps that would be needed to apply to be listed on the European Quality Assurance Register for Higher Education.
Its argument is that England’s proposals contain the right ingredients – self-evaluation (provider submissions), peer review (academic and student assessors), published outputs (TEF reports and, in some cases, investigation reports), follow-up (regulatory monitoring and conditions), and a cyclical element (a rolling TEF that covers all providers).
On that view, the system is ESG-compliant “in substance” even if it does not look like the classic continental agency model with cyclical institutional reviews.
The problem is that this fundamentally misreads what ESG is trying to do.
Purposes, values and relationships
The ESG – adopted in 2015, and currently being revised again – are not a neutral shopping list of quality assurance components that can be assembled in any order.
They are a framework built around specific purposes, values and relationships. Quality assurance serves the twin purposes of accountability and enhancement, and assumes that "higher education institutions have primary responsibility for the quality of their provision and its assurance". The standards explicitly connect QA to quality culture, academic freedom, academic integrity, public responsibility and the "social dimension" (access and participation) of higher education.
That framing matters. ESG is not neutral about who owns quality. It starts from the position that institutions are responsible for their own quality, that external quality assurance exists to test how well that internal responsibility is being discharged, and that the whole system should support improvement as well as provide assurance to students, governments and society.
Part 1 of ESG sets out what institutions should be doing internally – programme design and approval, student-centred learning, assessment, student admission and progression, teaching staff, learning resources, information management, public information, and complaints and appeals.
Part 2 describes what external quality assurance should look like – self-evaluation, peer review, “normally” including a site visit, published reports, and follow-up.
Part 3 defines what a quality assurance agency should be – independent from both government and providers, with multi-stakeholder governance, transparent procedures, and itself subject to cyclical external review against ESG.
The whole point is to create a system where, when agency A in country X says “this provision is sound”, agency B in country Y can trust what that means – because both are working to the same underlying standards and both have themselves been reviewed as ESG-compliant through EQAR procedures.
England is not Latvia
The obvious response is – so what? England is not Latvia – English degrees are globally recognised. Why should a domestic regulator contort itself to satisfy a European framework when providers can just get on with being excellent and students can judge the results?
A response of that sort treats ESG compliance as purely about mutual recognition and comparability – important for cross-border mobility and partnerships, but fundamentally optional for a large, confident system – rather than about the design of quality assurance itself.
But the ESG model encodes specific answers to questions that matter regardless of whether you care about EQAR listing – and those answers cut across the direction in which OfS is now travelling.
Who defines quality? Under ESG, the answer involves multiple stakeholders – academics, students, employers, professional bodies – operating through institutions and an independent agency whose governance cannot be dominated by government or providers.
Under the OfS model, quality is increasingly defined through ministerially-steered regulatory conditions and outcomes metrics, with OfS sitting as a non-departmental public body in the Department for Education’s sponsorship orbit and its priorities shaped by statutory guidance. That difference is not abstract – it shapes what “quality” means in practice and who has standing to contest it. And who knows who could be in government next.
How does external scrutiny drive improvement? The ESG and the wider European quality assurance literature assume that peer-based external review – with site visits, dialogue and engagement with internal QA – creates different internal dynamics to metrics monitoring and the threat of sanction. Jethro Newton’s work on external quality monitoring famously talks about the “implementation gap” between quality policy and classroom practice, arguing that intensive audit regimes tend to generate documentation and compliance rather than transformative teaching change unless they take seriously the conditions of academic work.
The broader impact literature finds robust evidence that quality assurance has led to clearer documentation and transparency, but much thinner evidence of direct positive impact on teaching and learning – while evidence of compliance-oriented side-effects is stronger.
The European University Association’s (EUA) long-running “quality culture” projects make a related point from inside institutions. Almost all providers now have internal QA structures. The recurring concern is that these are experienced as externally-driven compliance machinery rather than as genuine tools for reflection and enhancement. Where internal QA is primarily about feeding evidence into external dashboards, the cultural shift that actually improves teaching and student experience never quite happens.
What protections exist against politicisation? ESG’s insistence on independent agencies with multi-stakeholder governance is not some technocratic tic – it is designed as a buffer between day-to-day politics and what counts as “good” learning and teaching.
The House of Lords Industry and Regulators Committee’s inquiry into OfS concluded that the regulator has “a lack of independence from the Government” and at times appears to implement political priorities rather than regulating straightforwardly in students’ and providers’ interests – a conclusion echoed in commentary from bodies like the Institute for Government and in sector responses.
A regulator whose strategy and priorities are set within a ministerial sponsorship framework is not the same thing as an agency whose independence from government is itself periodically tested through ESG-based external review.
Not so normal
And anyway, beyond the architectural mismatch, there are concrete areas where what OfS is now proposing diverges from ESG expectations.
ESG 2.3 expects that external QA processes will “normally” include a site visit, with meetings and interviews with different stakeholder groups. OfS’s proposals do not provide for routine site visits as part of TEF; TEF is explicitly designed as a desk-based exercise drawing on data and written submissions, with in-person work limited to investigations when risk is identified through monitoring. That is a very different reading of “normally” to the one embedded in ESG practice across much of the European Higher Education Area (EHEA).
ESG 2.8 expects that all institutions undergo cyclical external quality assurance, with the European Association for Quality Assurance in Higher Education (ENQA) and EQAR practice typically interpreting that as implying a roughly five- to seven-year review cycle for each provider. In the English proposals, TEF becomes the only universal cyclical mechanism – but it is not a full institutional quality review against all Part 1 standards, and it is heavily metrics-driven. Targeted investigations under the B-conditions are clearly risk-based, not cyclical. Treating that combination as equivalent to ESG-style cyclical QA is, at best, a stretch.
ESG 3.2 requires agencies to be organisationally and operationally independent from both government and providers, with governance arrangements that prevent any single stakeholder from dominating. OfS is a non-departmental public body sponsored by DfE, with a Board appointed by ministers and duties and priorities set through primary legislation and statutory guidance. That may be an appropriate model for a domestic regulator, but it is not what ESG has in mind when it talks about agencies.
ESG 2.7 expects agencies to have clear complaints and appeals procedures for their own QA decisions, distinct from general public-body complaints routes or judicial review. OfS has internal review processes and, ultimately, the possibility of judicial review of its regulatory decisions, but nothing that looks like a dedicated, transparent appeals mechanism embedded in its quality assessment procedures in the way ESG envisages. ENQA's recent thematic analysis of agencies' follow-up and appeals work underlines how central such mechanisms are in ESG-aligned systems.
The consultation also proposes “an evolving pool of academic and student assessors” for TEF, but leaves open – and in places encourages – the idea that OfS staff themselves may sit on assessment panels or take a direct role in ratings decisions. That moves further away from independent peer review and closer to internal regulatory judgment. It was constraints on student involvement and report publication in QAA’s English DQB work, set through OfS specifications, that EQAR pointed to when it temporarily suspended QAA’s registration in 2022.
Yeah but we’re better
OfS might reasonably respond that all of this architecture talk misses the point. Its job is to protect the interests of students in England, not to satisfy continental frameworks. If outcomes-based conditions and TEF ratings drive providers to focus on what matters – completion, employment, student satisfaction – then the system is working, regardless of what ENQA or EQAR think.
There is no question that OfS's focus on outcomes has concentrated institutional minds on continuation, completion and graduate employment in ways that previous quality regimes did not.
But the evidence on whether high-stakes, outcomes-focused regulation actually improves educational quality – as opposed to optimising behaviour around the metrics – is much less reassuring. Newton’s “implementation gap” work, Harvey and Williams’ reviews, and ENQA and EUA syntheses all point to the same pattern – external quality monitoring tends to produce better documentation, clearer processes and more sophisticated data systems, while demonstrable direct impact on teaching practice and student learning is hard to pin down and often indirect at best.
None of this means regulation is pointless or that poor outcomes should be ignored. It does suggest that a principally metrics-and-sanctions-focused approach may be good at changing what institutions report and how they manage their dashboards, without necessarily changing what students experience in seminars, labs and supervision meetings.
If the goal really is both accountability and enhancement – and if England wants to remain meaningfully connected to the European quality assurance conversation rather than permanently sitting in the “non-compliant” column of Bologna implementation maps – a different model is available.
The core idea would be a genuine two-tier system – OfS as regulator, setting minimum conditions, monitoring risk and imposing sanctions where necessary; and a designated quality body operating as an ESG-compliant agency with independent governance, peer-based review methods including site visits, and EQAR registration.
That is broadly the architecture Parliament put into the Higher Education and Research Act – OfS on funding and conditions, a separate DQB doing external quality assurance – even if the version that emerged in practice proved incompatible with ESG.
In that kind of rebuilt model, the regulator would set and enforce baseline conditions, use data and intelligence to identify risk, and trigger investigations when evidence suggests serious problems at a provider. The agency would design and run ESG-compliant external QA – cyclical institutional reviews against Part 1 standards with self-evaluation, peer panels including students, site visits, published reports and follow-up. It could also be commissioned to conduct targeted reviews of a course, campus or partnership when OfS flags risk, while retaining methodological autonomy and publishing its findings.
The crucial point is that triggered, risk-based reviews would sit on top of a cyclical baseline, not instead of it. ESG doesn’t forbid targeted investigations – many EHEA systems have “extraordinary” reviews when institutions hit trouble. What it does not recognise as compliant is treating occasional investigations and a (mostly) metrics-based ratings scheme as a complete substitute for regular independent external review for all providers.
England’s current trajectory – a powerful regulator, metrics-driven monitoring, desk-based ratings, and investigations when things go wrong – may be defensible on its own domestic terms.
What it is not is ESG-compliant quality assurance in anything more than a superficial sense. Pretending otherwise does nobody any favours – least of all the students whose interests are often invoked to justify the system, but who, in the European conversation that produced the ESG, were supposed to be partners in designing it, not just lab rats whose outcomes it counts.