How you secure, enhance and measure academic quality depends on what you think it is.
Unfortunately, there is no consensus on what quality is: the public thinks it’s one thing, the HE sector another, and the English regulator a third. And while each approach has its pros and cons, this lack of consensus is going to make any effort to assess “value” in higher education incoherent.
What is put in
There is a sense in which academic quality is incorporated in the person of the student. Universities (in theory) recruit only the students who have demonstrated sufficient academic ability to cope with the rigours of university learning. This view appears in the Robbins report of 1963, which established the principle that “courses of higher education should be available for all those who are qualified by ability and attainment to pursue them.”
Though this position doesn’t exclude the possibility that higher education learning environments can adapt to the preferences and needs of students, it does assume that there is a measurable state of “readiness” for higher education that prospective students should acquire.
And though this view is highly unfashionable as a way of thinking about academic quality, it still holds sway in grumbles within and outside universities about whether too many students are being admitted; it fuels the public reputation of universities (“good” universities are the ones that “good” students apply to); and it underpins the idea, floated but ultimately not proposed by the Augar review, that there should be an attainment threshold for entry to degree programmes.
What is done
The sector’s dominant mode of thinking about quality, driven by QAA codes and guidance, is focused on what happens to students while they are at university. The old pre-2018 Quality Code had a lot in it that described the sorts of processes that reviewers would expect to see evidence of, for things like developing and reviewing course content, collection and publication of information, managing a student complaints system and so on.
Though there was a reasonable degree of consensus for quite a long time that university processes were the thing you wanted to measure to assess quality, it’s fair to say that much of what the QAA has traditionally assessed had nothing to do with educational quality in the specific sense of students’ learning gain.
The old Quality Code was noticeably reticent on some of the specifics of teaching and learning: what sort of teachers students are exposed to, and what pedagogical approaches are favoured. As Graham Gibbs observed in 2010, these aspects of quality are the most fruitful for drawing meaningful links between the stuff universities do and the amount that students learn. But these were not assessed, primarily on the basis that to do so would be burdensome and trespass on institutional autonomy.
What comes out
And so we come to OfS’ outcomes-based approach which holds that quality is both an outcome in itself and the outcome you get once a student has engaged with a course of learning. Measuring in a consistent and efficient way whether students have actually learned anything is hard to do, hence the argument over the comparability of degree standards and the demise of the learning gain projects. But you can measure other things that are considered important, if not necessarily directly and consistently associated with students’ learning (satisfaction, retention, employment, graduate salary).
In practice this means that the updated Quality Code presents baseline quality as a series of outcomes – courses are well-designed, students are supported – rather than as a series of processes. The supposed merit of this approach is that HE providers are now free to adopt any practice that achieves the desired outcome – regulators will take no direct interest in the specifics of what goes on inside providers unless the student outcomes data or a trigger-happy reporting officer gives cause for concern. And it’s assumed that the measurable student outcomes – retention, employability et al – are a reasonable test of academic quality.
The merit of the outcomes-based approach is that the bureaucratic processes of internal and external quality review are jettisoned, and providers are freed up to be innovative in achieving outcomes. It suits a diverse sector with new kinds of providers doing things in different ways, and promises a system that is more efficient and more responsive to students.
That’s the theory – what’s the reality?
It was never about the tick boxes
Quality professionals are understandably thrown by the shift, and we’d expect some teething problems. But from what we’re hearing, providers are not exactly embracing the opportunity to innovate in delivering quality for students.
Some are simply maintaining the established, tried and tested processes developed under the old quality regime. Which makes sense if you have confidence in the old ways as a guarantor of quality, but does risk failing to adapt to evolving regulatory and student expectations.
Some have stripped back those processes to simply focus on the data they will be judged on. In practice, this means that where there are no obvious problems in a subject area, no further questions are asked. The whole quality process is still driven by external accountability, just in a different way – and the scope for student involvement is significantly reduced.
Some new providers who are at the very beginning of their quality journey are frustrated by having to develop quality processes without any central guidance about what’s been proven to be effective. It’s understood that the behemoth of the old Quality Code might be overkill, but the stripped-back post-2018 version doesn’t give them much to go on either.
While we’ll probably not fully grasp the implications of the change in approach for some decades, what should not be underestimated at this stage is the effect of the loss of the quality culture, which at its best brought students, academics and professional staff together to talk about and improve academic quality.
For some – and we appreciate not everyone will feel this way – rather than the biggest tick-box exercise in history, the Quality Code functioned as a cultural repository for how the higher education community was thinking about quality – for all students, not only those for whom outcomes data exists. Rather than constraining innovation, it arguably established a community of practice within which creativity could flourish. And it should be noted that the Scottish sector has acted to protect this culture through retaining its enhancement-driven approach to quality.
Value and quality
And this is where you get the emerging culture clash. OfS is keen to ensure that students get value from their investment (you could call it money, but you could equally call it time or effort). And though the pre-2018 quality regime didn’t strictly assess or measure the specific pedagogical practices that the evidence suggests are valuable for student learning, it did assess a lot of things that mattered to students, things that gave them reason to value their experience. Plus, it created the conditions for students to be involved in debating what those things were and how they should be delivered.
Ironically, the data-driven approach feels to some much more like ticking boxes, as it does not always leave space for qualitative interrogation of the specifics of what students are experiencing. OfS’ value for money strategy (though “statement” would probably be a better word) exposes the threadbareness of a purely data-driven approach to quality.
In theory a value judgement can be made by comparing quality outcomes to cash input – but when we can’t decide what the outcome is (Is it quality? Enhancement? Graduates?) or even what the cash in is (what should be included in an assessment of the full costs of HE?), and we want students to be the judges, but know that they’re not making those judgements in the same way the OfS is, then the old regime begins to look, if not exactly a streamlined approach to arriving at value judgements, then certainly a lot more productive in creating value.
This need not be a call for a return to the good old days. The question is whether it’s possible for the HE sector to act outside the parameters of frameworks imposed by regulators. There is scope to develop a quality culture around the new regime – but it would require greater voluntary association and engagement, and rebuilding from the ground up. This, in effect, is what the QAA’s membership scheme proffers – it will be for institutions to decide whether this approach is the most effective available.