Last year, we saw three policy developments where the gap between negative headlines and the full weight of regulatory enforcement was a matter of weeks. To prepare for future policy uncertainty, we should investigate what lies underneath the official soundbites.
In 2017, complaints about excessive pay for vice chancellors resulted in a new regulatory tool for the justification of salaries. Similarly, concern about the denial of free speech quickly resulted in an oft-repeated ministerial response that universities must do more to promote free speech and be seen to do more. There followed the creation of a “public interest principle” in the regulatory framework.
When it comes to the third example – grade inflation – there is some hard evidence (through HESA’s stats) behind the argument, and the sector has put its virtual policy hands up to the accusation. On Wonkhe, William Hammonds from Universities UK said: “It is essential that the sector actively addresses these complex challenges to maintain confidence in academic standards and protect the value of a degree to students and employers.” There’s recognition that the number of first and upper second class degrees awarded has increased without any corresponding evidence of rising standards. But is the source of this “problem” quite the one we’ve been led to believe? And is the proposed work on degree algorithms any use?
Behind the headlines
In September 2017, Jo Johnson spoke to the assembled vice chancellors at Universities UK’s annual conference. Berating the sector for insufficient control of standards, he repeated (not for the first time) the claim that “The Higher Education Academy has found that nearly half of institutions had changed their degree algorithms to ‘ensure that their students were not disadvantaged compared to those in other institutions’.” Look back at the HEA’s report [pdf], however, and you can see that the “half” refers only to those respondents who had the information (around 40% of the total), and the “institutions” are in fact the individuals who responded to the survey. For the sample of quality managers, 98 institutions (of 159) were represented, but the survey reported 126 responses, so some institutions were clearly counted (at least) twice.
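To see how little of the “nearly half of institutions” claim survives unpacking, it’s worth running the arithmetic. A minimal sketch using only the figures quoted above (treating the reported “c. 40%” and “nearly half” as exactly 40% and 50% is my simplification):

```python
# Rough arithmetic behind the "nearly half of institutions" claim,
# using only the figures quoted from the HEA survey above.

total_responses = 126          # responses in the quality managers sample
institutions_represented = 98  # distinct institutions behind those responses
institutions_in_scope = 159    # institutions the sample could have covered

share_with_information = 0.40  # c. 40% of respondents had the information
share_who_changed = 0.50       # "nearly half" of those respondents

# The headline claim actually rests on about a fifth of responses.
effective_share = share_with_information * share_who_changed
print(f"{effective_share:.0%} of responses")  # -> 20% of responses

# And responses are not institutions: 126 responses from 98 institutions
# means at least 28 duplicated voices.
duplicates = total_responses - institutions_represented
print(f"at least {duplicates} duplicate responses")

coverage = institutions_represented / institutions_in_scope
print(f"{coverage:.0%} of institutions represented")  # -> 62%
```

On those numbers, “nearly half of institutions” becomes roughly a fifth of responses, drawn from under two-thirds of institutions, with some counted more than once.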
We should also note that UUK’s degree algorithms report found that a smaller proportion of institutions had changed their behaviour to align degree algorithms with competitors: “Fewer than 10% of institutions indicated that they had made changes to award regulations with the intention of aligning the profile of their awards with comparator institutions or the wider sector.” So while Johnson used this information to show how the sector had supposedly misbehaved, the evidence cited to support the accusation doesn’t exactly stack up.
While this instance of mis-quoting is a source of some annoyance to the policy wonk in pursuit of evidence-led initiatives, it masks an even bigger issue: degree algorithms are irrelevant. Johnson himself, in the same speech to UUK, said: “At the very heart of this issue is a lack of sector-recognised minimum standards for all classifications of degrees.” Changing the algorithms by which classifications are made is a sideshow. The calibration of standards is too arbitrary, between subject areas and between institutions, and the costs of “solving” it would be immense. Comparability of degree outcomes is a fiction which persists within the HE sector, and there doesn’t seem to be any proposal which would create a system of easily comparable standards. To call for standardising algorithms is to miss the point that the award of marks is far from an exact science.
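To make concrete why tinkering with algorithms is a sideshow, consider a toy illustration. Both algorithms and the classification thresholds below are invented for the example (real institutional rules are more varied still):

```python
# Toy illustration: two invented degree algorithms classify the same
# set of module marks differently. Thresholds and rules are
# hypothetical; the point is that "the algorithm" is not the same
# thing as "the standard".

marks = [68, 71, 66, 72, 69, 65]  # final-year module marks, equal credit

def classify(average: float) -> str:
    if average >= 70:
        return "First"
    if average >= 60:
        return "Upper second (2:1)"
    if average >= 50:
        return "Lower second (2:2)"
    return "Third"

# Algorithm A: simple mean of all modules.
mean_all = sum(marks) / len(marks)

# Algorithm B: mean of the best four modules (a common style of rule).
best_four = sorted(marks, reverse=True)[:4]
mean_best = sum(best_four) / len(best_four)

print(classify(mean_all))   # Upper second (2:1), mean = 68.5
print(classify(mean_best))  # First, mean = 70.0
```

Identical marks, two classifications. Standardising on a single formula would still not tell us whether a 68 in one subject at one institution means the same as a 68 anywhere else.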
The quality of debate
The abuse of statistics and the twisting of quotes are hardly new phenomena. Nor should we be surprised that people are selective with the evidence when bolstering a political argument. But I’d like to see more robust challenge when this happens in higher education because, left unchallenged, false accusations risk becoming policy truths.
Of much more concern than the grade inflation quote is the National Audit Office’s use of a finding from the HEPI/HEA student experience report about students’ perceptions of value for money. It’s prominent in the report and the first line of the press release: “Only 32% of higher education students consider their course offers value for money.” But the provenance of the stat is hard to trace from the report itself: you have to already know where it comes from. It’s likely (almost certain) that, in the post-18 review of fees and funding, this statement will be used uncritically as a fact: “the NAO says…”
But we know that this is the finding of one survey, of students at a particular point in time, and it needs to be seen in context (it’s hard to know how to define value at any given point for a service that we expect students to benefit from over a lifetime). OfS has conducted its own research into perceptions of value for money; this doesn’t provide a definitive answer either, but it’s another data point to consider in the debate. We need more evidence, but we also need to be highly critical of the evidence we have, and not assume that any particular set of results presents a full truth (even if it supports our argument).
Stay alert
Whether the issue is grade inflation, VC pay or value for money, let’s call out mis-quotes when we see them. With the post-18 funding review outcomes so critical to the health and future of the higher education sector, the stakes are as high as they could be for the use of evidence in current HE policy making. We shouldn’t let a good headline or pithy soundbite get in the way of understanding what’s important.
On 21 March, the government opened a call for evidence on the post-18 funding review. It is accepting submissions until 2 May 2018.
The other lesson? UUK should push back against the government, and they should push back very hard indeed.