David Kernohan’s reflection “What are tariff groups and why do they not matter?” was an interesting read.
At Jisc, we wondered whether any of the “lot of people” asking David about this were university applicants seeking his expert help in completing their UCAS forms. One imagines these tariff groupings were originally devised for the benefit of applicants, to help them focus their efforts on courses and institutions where their applications were more likely to be successful.
But as often happens, such classifications become proxies for something different – here quality of institution – and may start to take on a life of their own as institutions look to game the measures, with the interests of applicants taking a back seat.
Data brings light
Until, that is, authoritative data comes to the rescue, in this case via UCAS’s decision to publish actual entry grades.
The most notable example of this effect is perhaps university mission groups: originally self-selecting groups acting as a collective voice on strategy and policy for the common interests of their members, they too have become a proxy for quality of institution – particularly the Russell Group, membership of which may significantly influence applicant behaviour.
Returning to tariff groups, it is perhaps stating the obvious that it is not the grades students enter with, but what happens to them once they have enrolled – the value added by the course and wider experience – that is key to understanding the quality of education. But measuring “value add” in higher education is notoriously difficult, as shown by the £4m OfS study into learning gain, which concluded in 2019 “with a whimper”, according to one Wonkhe commentator. In such circumstances we may fall back on what we can measure, rather than what is important.
Can we do better?
David’s analysis is a valiant one, making as much sense as can be made of the “tariff group” paradigm, at least from the applicant perspective, by zooming in on academic subjects – and again, authoritative data comes to the rescue.
But can we bring this discourse to a more rounded conclusion? If we assume that one key purpose of such data and analysis is to benefit prospective student choice (others including benchmarking by providers and graduate employers), and if we assume measurement of “value add” remains out of reach, what are we left with?
Can authoritative data again come to the rescue? Through Discover Uni, applicants have access to rich, authoritative data, built on the HESA student data set, regarded by our international peers as the finest student time series data in the world. And yet research by Heathcote, Savage and Hosseinian-Far shows “[a data-rich model] has been largely ineffective in addressing student choice behaviour” – which is anecdotally backed up by a straw poll of colleagues’ kids applying for university, none of whom had heard of Discover Uni, let alone looked at it. Nevertheless, we argue this data is crucial to the process: it is necessary but not sufficient.
What about narrative, story-telling and culture? Applicant behaviour is human behaviour, with all the nuance that entails. The initial university choice model of Heathcote, Savage and Hosseinian-Far ranks highly “current reputation – historic reputation of institute, opinion of peers, image, national rankings, etc.”; and so we see a direct route for the authoritative data into the narrative, via the various league tables, which receive significant annual media coverage and, more anecdotally, were clearly on the radar in the straw poll of colleagues’ kids above.
The world of the league table
In the UK, the main league tables are driven primarily by the same underlying data – Discover Uni/HESA data. The tables then differentiate themselves from each other by augmenting it with other relevant data, along with different slicing and dicing of the core underlying data, often via bespoke Jisc Tailored Data Sets analysis. But the common data foundation provides a crucial level playing field, and limits the scope for game-playing that may not be in the best interests of applicants.
The differences between the tables, their presentations and their core themes leave ample room for the required narrative to be constructed (via league tables and other routes) and to emerge via the applicant ecosystem.
Although relatively few university applicants may refer directly to the Discover Uni web site, that data plays a fundamental role in ensuring a firm and level data foundation on which the narrative, culture and nuance around UK university admissions are built.
Specifically, by not having a standard good/middling/bad grouping, the data proactively “comes to the rescue” of applicants because, without it, the nuance of human choice could be manipulated and drawn in all manner of different directions. Choosing the right university is hard enough as it is; we should all be grateful for the level foundation our data provides.
A great read, Phil. While I argue elsewhere on the site that we need some data-informed groupings, I would never recommend these as the basis of student choice: there’s simply too much intra-course variation, and students study courses. I do of course support the excellent data curated and shared by HESA and published on Discover Uni. Even if most students use much vaguer metrics to guide their choices, having robust and official data available for fact-checking is important.
Good, thoughtful article. Discover Uni is one of those potentially great services that may benefit from more UX/customer-journey work. Currently it feels more like data distribution than a student-facing service. For example, if you choose a subject it shows all institutions running that subject, and the only filters appear to be location and study mode. So where are the league tables, tariff groups, mission groups, societies, reviews, etc.? Each category would have its own challenges, admittedly, but it’s difficult to claim that something has been “ineffective in addressing student choice behaviour” without then questioning the rationale.
My experience at UCAS fairs and other recruitment events (or talking to friends and family) over several years is that no one has heard of Discover Uni, and almost all decisions about where to apply – and accept – are based on hearsay from friends, family and sometimes teachers, lecturers and the media. Few applicants had even looked at league tables; nor had their parents, friends or teachers. Many applicants are not even going to open days. Human nature, it appears, is not really interested in a marketplace of competing offers. It wants a shortcut, and the narrative – the myth – around some providers and groupings is a much bigger draw than what the data about an institution or course actually says. It appears not to want to investigate further.
My experience is that all this publicly available data fails to shift recruitment for some providers and courses because it isn’t cutting through. At a time when many institutions or departments are struggling financially, it would be nice to think that only genuinely weak provision (Mickey Mouse?) was closing (or likely to close), as the market ideal would have you believe. I would be interested in a study that looked at whether there was any link between the closure of courses, their publicly available data, and the current HE funding crisis. I would love to be reassured that some very good provision – based on the data – is not being lost because of recruitment and, therefore, the financial challenges experienced at some institutions. If good provision is being lost, it is a damning indictment of the failure of the data to have any impact on the HE “market”.