Ok, it was the quiet news period between Christmas and New Year, but who had money on subject benchmark statements hitting the national headlines again?
According to the Telegraph, QAA “advises universities on course standards”, which is closer to reality than much of the earlier coverage about benchmarks. So the Telegraph article is slightly more nuanced than those from earlier in the year (I’m sure taking its prompt from Vicki Stott’s and David Kernohan’s Wonkhe articles on what the status and standing of QAA benchmarks actually are).
Still, it did set me wondering about the continuing currency of the mythology about benchmarks. And what interested me wasn’t that the national press doesn’t understand the nuances of OfS regulation, institutional autonomy and QAA guidance. It was that the press is writing up the views of colleagues from within the sector about the benchmarks.
How benchmarks are really used
I completely agree with Vicki’s and David’s descriptions of what the benchmark statements are and aren’t, and how they should be used. And about the significant value of embedding inclusive education in the new generation of benchmark statements, which Ailsa Crum has highlighted on Wonkhe. I’ve also seen time and again constructive engagement with the benchmarks as part of programme design processes, to the benefit of programmes and their students.
But I don’t think the criticisms from colleagues in universities (seen in the articles linked above, and in the comments on them) are all being made in bad faith. So where do they come from?
Sometimes we underestimate the half-life of things within higher education. Of course subject benchmark statements are reference points to support effective programme design. That isn’t how they started life though. When they were being developed in the late 1990s they were being badged (to quote Paul Greatrix’s book on the First Quality War, Dangerous Medicine) as “broadly prescriptive”.
Of course QAA quickly (and rightly) moved away from this, clearly stating from around 2001 that benchmarks were reference points. However, some colleagues perhaps have long memories. Others don’t, but it’s still possible that the idea of benchmarks as requirements has become embedded in some institutional and/or departmental cultures. So while it seems a long time ago, there’s perhaps a bit of this still in play.
And there’s also how universities have treated benchmarks. In the time I’ve worked in higher education I’ve not been aware of any universities that treated subject benchmarks as setting out requirements that must be complied with. But I have seen instances where the use of benchmarks in programme design has almost shaded into a “comply or explain why you don’t” approach. Of course some colleagues might interpret this as an implicit requirement to comply. More likely though is that academics under huge workload pressures see a comply or explain approach (or something they feel looks like this), and think that just complying is the most effective use of incredibly scarce time.
We also need to think about how subject benchmarks get used within academic departments. I’ve seen an instance where a head of department blatantly distorted the status of the benchmarks to try to impose an approach to programmes on a department, against legitimate questions and concerns from colleagues.
This highlights something that we don’t always acknowledge enough. Whatever the formal status of benchmarks, in practice they can be political documents, used in ways that were not intended. And this isn’t always the “bad” institution misbehaving towards academic departments and colleagues. I’ve seen two examples where departments have used subject benchmarks as a specious justification for curriculum overhauls, which in reality were primarily about freeing up time for staff research by reducing contact hours and student module choice.
Now, I know that this doesn’t reflect the reality of what subject benchmarks are intended to be, or what they are, in many institutions. And I agree with other writers that, if used properly, subject benchmarks are potentially even more valuable now to support meeting the new OfS B conditions. But it might suggest where at least some of the current criticism of benchmarks, from some colleagues in the sector, may be coming from.
And the misunderstandings might be more widespread than we think. In seven years delivering development sessions for new programme directors, I always emphasised the status (as well as the value) of benchmark statements as reference points, not requirements. It frequently struck me how many pairs of eyebrows were raised when I said this. All of which reinforces the importance of those of us supporting the development, review and improvement of programmes continuing to emphasise what benchmarks are and what they aren’t, and the value they can bring when properly used. I also wonder, though, if it sheds a little light on one of the more perplexing elements of the new OfS B conditions.
Many of us in the sector have been surprised at the strength of criticism from OfS towards the UK Quality Code and other established quality and standards reference points (the injunctions to abandon many aspects of established QA approaches have at times felt like a re-enactment of the parable of the scorpion and the frog). There were also many scratched heads when OfS’s response to the consultation on the new B conditions included the claim that “providers should note that there are likely to be some parts of the Code which would lead to practices that we would consider non-compliant with our regulatory requirements”.
Perhaps at least part of the answer lies in the multiple and imperfect ways in which the existing, established quality and standards requirements (still in place of course in three of the four constituent jurisdictions of the UK) such as subject benchmarks have been understood.