Last week saw the publication of The Guardian university league tables, which are compiled by my colleague at Kingston, the meticulous Matt Hiely-Rayner. The Guardian league tables are prized in the sector, partly because lots of academics read The Guardian, but mostly because they only take account of measures related to teaching quality.
The stand-out performance in the 2015-16 ranking of UK universities is that of Coventry University, which moved up from 27th place the previous year to the nosebleed-inducing heights of 15th. Coventry, the one-time parish of HEFCE Chief Executive Madeleine Atkins, who was its vice chancellor from 2004 to 2013, now outperforms 15 members of the Russell Group, according to the table’s criteria.
The Russell Group describes itself as an association of ’24 leading universities’ and has been synonymous in recent HE speak with the idea of ‘elite’ institutions and the UK’s ‘better’ universities. They are often thought of, including by their own staff, as a self-serving lot whose lobbying line around research funding usually takes the form of ‘what resource we have is ours and what resource you have is ours as well’.
However, if we are to put any store by The Guardian’s statistical measures (a four-year rolling average of HESA, NSS and DLHE data), then the established landscape of mission group dominance may be loosening up. The aspirational Lancaster and Loughborough outperform UCL, Birmingham and Edinburgh in the most recent table, while Heriot-Watt, Falmouth, Aston, Robert Gordon, and Portsmouth all earn honourable mentions in the top 50.
On paper, Coventry looks relatively uncompetitive on ‘spend per student’ and entry tariffs, but they have clearly achieved their position on the back of an excellent performance in the NSS.
Any vice chancellor will cherish the league table that places their own institution in the best possible light. The Complete University Guide for 2016, which takes account of research performance, puts Coventry at a still-respectable 48th, but behind every member of the Russell Group and behind both City and Heriot-Watt.
Those near the lower echelons of the league tables will tell you that such indexes do not matter, that they are invented to sell newspapers, or that they fail to include genuinely significant metrics. For example, what would a league table look like that privileged access data and BME attainment?
However, ever since the Major government first published GCSE exam results by school in the form of a league table in 1992, the idea of the competitive index has secured a decisive role in the psychic space of UK education, and academics in all institutions do care about league tables, regardless of what they say in public.
There are two ways you can think about this. Either worrying disproportionately about league table positions is the sign of a higher education sector set against itself: divided by an acquisitive set of values foreign to intellectual life, and driven by a marketised agenda that skews institutional priorities and seeks to re-establish class privilege at the level of university ranking.
Or league tables merely formalise existing metrics to inform the public, who pay for universities, about relative performance in an already highly competitive, world-leading industry, the ranking of which is intrinsic to the idea of a positional good, which defines the value of a university degree.
As arbitrary composites of un-nuanced data, league tables are perhaps really only useful to the sector as one piece of management information amongst others, to be taken into account during the more slippery exercise of academic judgment, which can never be reduced to an easy arithmetic calculation.
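To be concrete about what that ‘easy arithmetic’ looks like, here is a minimal sketch of the weighted-composite calculation a typical league table performs. Everything in it – the metric names, weights and scores – is invented for illustration and does not reflect The Guardian’s actual methodology.

```python
# A minimal sketch of league-table arithmetic: a weighted sum of metrics.
# Metric names, weights and scores are invented for illustration only;
# they do not reflect any real compiler's methodology.

WEIGHTS = {
    "nss_satisfaction": 0.25,
    "spend_per_student": 0.15,
    "entry_tariff": 0.15,
    "staff_student_ratio": 0.15,
    "value_added": 0.15,
    "career_prospects": 0.15,
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of metrics, each already normalised to a 0-100 scale."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# A fictional teaching-focused institution: modest tariff and spend,
# but a strong NSS result -- the Coventry-like profile discussed above.
fictional_uni = {
    "nss_satisfaction": 92,
    "spend_per_student": 55,
    "entry_tariff": 58,
    "staff_student_ratio": 72,
    "value_added": 80,
    "career_prospects": 75,
}

print(composite_score(fictional_uni))  # -> 74.0
```

The point of the sketch is only that every ‘judgement’ in the output is borrowed from the compiler’s choice of weights; the arithmetic itself is trivial.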
Fans of league tables will point to the success of Coventry and others as evidence that a spirit of competition encourages rather than hinders the disruption of historical hierarchies in the sector. League tables, unlike mission groups, are by definition meritocratic: promotion is always possible if you learn how to play the game.
If the history of universities in the UK is thought of as a 24-hour clock, with the foundation of Oxford at midnight, then mission groups and league tables only make an appearance at 11.20pm. By the same measure, the Universities of Manchester, Leeds and Sheffield only pop up after the watershed, at 9.00pm. In so far as league tables and mission groups are relatively new phenomena, their long-term effects or benefits are as yet hard to judge.
Equally, as late entrants to academic history, they might look to some like a belated attempt to reassert long-standing privileges in an increasingly diverse and ever-fragmenting field. It is not surprising that with the challenge to hierarchy comes resistance to that challenge. Perhaps this is the paradox of universities as part of a modern speculative economy: the contest between the aspiration of an emerging middle class of institutions and the conservation of the historic privilege of an academic aristocracy. History tells us that this dialectic does not necessarily end well for the aristocrats, even if new formations of the ‘elite’ soon emerge.
The Russell Group is not NATO or the European Union; it will not be rushing to expand its membership for aspirants. The point of an exclusive club is that it is select. That is how desire works, and how league table envy enters the world. In this sense, desire is irrational; the heart of a vice chancellor wants what the heart of a vice chancellor wants, irrespective of the evidence of HESA metrics.
It is certainly strange that so many institutional strategies are set up in relation to the contingent and perverse logic of league table position, as if a university and its specific mission will not last beyond the next 12 months. At this time of year, pundits on Match of the Day are fond of saying that ‘the league table does not lie’. The latest Guardian results, however you want to interpret them, might suggest that the crude calculations of an HE league table are sometimes not beyond offering misleading information.
The issue is with the NSS. It is like comparing the opinion of someone in First Class with that of someone in Standard: both might give the flight 8/10, but that doesn’t mean their experiences are of the same quality or worth. This is exactly what happens at UK institutions.
Someone getting AAA at A level is going to want and expect very different things from a student getting BCD. Their NSS responses will reflect this accordingly.
I admire how well Coventry are performing, and this should quite rightly be commended. The Russell Group is, however, principally a group of research-intensive universities, something that Coventry cannot claim to be. Coventry clearly focus on teaching over research, and the weight of their REF submission speaks to that.
The very idea of a league table – ANY league table – is that there is an ordered list from good to bad. In the case of HE, this is ludicrous on many counts:
1) The definition of ‘good’ in HE is highly debatable (and much debated). For some (such as many in the Russell Group), it is research excellence. For others, it is teaching excellence. And for others yet, it is access, student experience, value-for-money, industrial partnerships, cultural innovation, labour force production, etc. A healthy HE sector is diverse, i.e. one that facilitates ALL these different kinds of ‘good’.
2) Condemning some HEIs to the bottom of the league table marks them as the opposite of the ‘good’ ones at the top, i.e. as ‘bad’. Quite apart from the point about diversity above, and the fact that they may have completely different missions from those at the top of the table, the ones at the bottom may not even be bad at whatever spurious definition of ‘good’ is being used. They may simply be not as outstanding. If a sprinter comes last in an Olympic 100m final, it doesn’t necessarily mean they’re a poor runner.
3) League tables mask potential bunching at the top of the distribution, and they don’t show whether there is bunching in the middle. For example – without wanting to detract from Coventry’s enviable performance in recent years – bunching means that, by raising its game just a little in a couple of metrics, a university in the middle of the pack might jump 20 or 30 places in a league table. Meanwhile, in a less bunched part of the bell curve, a similar improvement might make no difference to its position.
4) As Martin points out, most league tables are made up of arbitrary composites of weighted metrics. The slightest shift in that composition can rearrange the whole table to suit whatever the editorial penchant of its compilers might be. The arbitrariness lies not only in the choice of metrics, but also in the mixing: the way you set up your league table defines the results you’ll get, and, even if they wanted to, this is not something league table compilers can do with absolute disinterest and impartiality. (The sketch after this list illustrates both this point and the previous one.)
5) Even which end of the metric you use in a league table is arbitrary. Take, for instance, entry requirements. Every league table that uses them assumes that the higher the entry grades, the better the university. That is a relative position: from the perspective of a student with three Ds at A level, higher grades are most definitely not a mark of ‘good’. What would a league table look like where lower UCAS tariffs scored the HEI more highly? This is not mere facetiousness: the OU was founded on this concept of an open education being a social good. Interestingly, the OU is consistently among the highest performers in the NSS and so is arbitrarily excluded from most league tables for fear of undermining their credibility by performing ‘too well’.
6) As Martin also points out, many metrics that might be relevant are left out. As he says, where’s the access data? And what about cost? Learning gain? Student drop-out?
7) One of the reasons these things are left out is that they’re almost impossible to measure reliably and comparatively. Measuring learning gain will always be difficult – it is certainly not equivalent to degree classifications or employment data. As for teaching quality, as Tori (above) has pointed out, the NSS is a feeble proxy. The NSS has its uses, but measuring comparative teaching quality is not one of them. Throughout league tables, these feeble proxies abound. That’s because, to paraphrase the title of a damning report on league tables last decade, they count what is measured rather than measuring what counts.
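To make points 3) and 4) concrete, here is a toy sketch in the same spirit as the earlier one; every institution, score and weight is invented. It shows, first, how shifting a little weight between two metrics flips a ranking, and second, how bunching lets a small improvement produce an enormous jump in position.

```python
# Toy demonstration of two league-table artefacts: (a) sensitivity to
# metric weightings and (b) bunching. All institutions, scores and
# weights are invented for illustration.

def rank(unis, w_teaching, w_research):
    """Order institutions by a two-metric weighted composite score."""
    scores = {name: w_teaching * t + w_research * r
              for name, (t, r) in unis.items()}
    return sorted(scores, key=scores.get, reverse=True)

# (a) Weight sensitivity: moving ten points of weight from research to
# teaching swaps the top and bottom of a three-way table.
unis = {
    "Newtown":       (92, 60),  # (teaching score, research score)
    "Redbrick Met":  (79, 77),
    "Ancientbridge": (70, 90),
}
print(rank(unis, w_teaching=0.5, w_research=0.5))
# -> ['Ancientbridge', 'Redbrick Met', 'Newtown']
print(rank(unis, w_teaching=0.6, w_research=0.4))
# -> ['Newtown', 'Redbrick Met', 'Ancientbridge']

# (b) Bunching: thirty institutions packed within two points of each other.
bunched = {f"Uni {i}": (75 + 0.07 * i, 75) for i in range(30)}
print(rank(bunched, 0.5, 0.5).index("Uni 5") + 1)  # -> 25 (25th place)

t, r = bunched["Uni 5"]
bunched["Uni 5"] = (t + 2, r)  # a two-point gain in a single metric
print(rank(bunched, 0.5, 0.5).index("Uni 5") + 1)  # -> 1 (1st place)
```

Neither behaviour says anything about the institutions themselves; both are properties of how the table was built.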
I could go on, but what it boils down to is that universities are not Premiership football teams. Scoring goals and winning matches constitute success in football; it is belittling to HE to suggest universities can be compared in the same way. Furthermore, it is very damaging to their interests, and to those of students, because it drives down diversity and choice and misleads students into making poor choices based on metrics that do not matter to them (or anyone else), rather than on what will help them realise their potential.
The author and commentators leave out one of the biggest absurdities of league tables in practice: universities bullying staff into grade inflation. This happens because good degrees are measured: the lower your standards, the more good degrees you award, and the higher your ranking. All a university needs to do is get an external examiner – who works for another UK HEI and is on the receiving end of the same race-to-the-bottom pressure in their own workplace – to confirm that the standards are indeed comparable. It would be naive to believe that the moral corruption that gave us the expenses scandal, phone hacking, NHS data manipulation, etc. wouldn’t also touch UK HEIs. Throw in leveraged balance sheets and ever-increasing VC salaries tied to performance, and you are not far off the worst ethical behaviour ascribed to bankers.