The Higher Education Policy Institute (HEPI) released a study yesterday that ranks universities by one measure of widening participation (WP), a Gini coefficient.
The promotion of widening participation as an important area on which HEIs should be evaluated is a welcome contribution. However, flaws in the research methodology undermine the key message, and may mean the intervention does more harm than good.
From such a ranking one would expect Russell Group universities to do poorly, and indeed that was the case; it was also the finding that generated the most headlines. But one would also expect providers with a very specific mission based around widening participation to fare well. As an Open University academic I eagerly sought our place in the graphic (all academics hate league tables until they perform well in them). But it was absent, as was Birkbeck, which has a similar WP-focused mission. In addition, some providers that do have a specific WP focus and were included, such as Ulster and the University of the Highlands and Islands, seemed to fare poorly. Why was this self-proclaimed attempt at ‘benchmarking’ WP in universities full of such obvious anomalies?
The wrong tool
The answer lies in the methodology. The ranking is based on a Gini coefficient derived from “publicly-available 2016 UCAS POLAR participation data reported by universities”. Herein lies the problem. POLAR is a classification based on the proportion of the young population in an area that participates in higher education. It's not a bad measure of social inequality, but it is “based on the proportion of 18-year-olds who enter HE aged 18 or 19 years old”. It is therefore not a good measure for institutions that have many mature students.
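To see why this matters, it helps to be concrete about what a Gini coefficient over POLAR data actually measures. The sketch below (in Python, with invented entrant counts purely for illustration; the report's exact normalisation may well differ) computes a Gini coefficient over an institution's intake shares across the five POLAR quintiles:

```python
# Illustrative sketch only: a Gini coefficient over POLAR quintile intake
# shares. The entrant counts below are invented; the HEPI report's exact
# calculation and normalisation may differ.

def gini(counts):
    """Gini coefficient of a distribution of entrants across groups.

    0 means entrants are spread equally across all quintiles;
    values approaching 1 mean entrants are concentrated in one quintile.
    """
    total = sum(counts)
    shares = [c / total for c in counts]
    n = len(shares)
    mean = sum(shares) / n
    # Mean absolute difference over all ordered pairs of shares.
    mad = sum(abs(a - b) for a in shares for b in shares) / (n * n)
    return mad / (2 * mean)

# Hypothetical entrant counts by POLAR quintile, from Q1 (lowest-participation
# neighbourhoods) to Q5 (highest).
selective_intake = [300, 500, 800, 1400, 2000]   # skewed towards Q5
wp_intake = [950, 1000, 1050, 1000, 1000]        # near-uniform intake

print(f"selective:  {gini(selective_intake):.3f}")  # roughly 0.34
print(f"WP-focused: {gini(wp_intake):.3f}")         # roughly 0.02
```

Note what this calculation cannot see: mature students fall outside POLAR's definition entirely, so they simply never enter the counts, however central they are to a provider's WP mission.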
For this reason, this year’s Teaching Excellence and Student Outcomes Framework (TEF) also included the Indices of Multiple Deprivation (IMD) as a measure of WP. The IMD combines measures of employment, income, and health to determine the deprivation of a small geographical area. It is not without limitations either, particularly in inner-city areas, where these factors can vary wildly from one side of the street to the other, which is why the TEF exercise used both measures.
By focusing solely on POLAR data from UCAS, the HEPI methodology incorporates implicit assumptions about education that undermine the very point of the study. It ends up focusing on, or at least privileging, data from traditional universities and traditional (young, full-time, campus-based) students. If the aim is to argue that widening participation is an important metric, then that message is entirely undermined if the definition of WP is, ironically, too narrow to include many of the students who qualify.
A better way
A study that showed how providers who prioritise WP perform would be more powerful. In order to meet the needs of WP students, education often needs to be rethought, with open entry, part-time or blended study, and new outreach and community support programmes. By adopting a methodology that disadvantages providers who pursue such approaches and reinforces the conventional model of higher education, the HEPI report does little to advance the WP agenda.
As a research student supervisor, I always advise my students that they shouldn’t choose their methodology first and then try to make reality match it, but this seems to have been the approach here. If the methodology excludes institutions or disadvantages students that a common-sense analysis tells you should be included, then it is time to reconsider the method.
So when HEPI replied on Twitter that “there is not a valid way of including [the OU] in this study as POLAR focuses on young people, the data was sourced from UCAS”, that seems like an admission that the study is flawed. It is not the job of WP-focused providers to make sure they fit HEPI’s methodology, but rather vice versa, especially before reports are pushed to the press.
HEPI have argued that the study is just one contribution to a bigger picture. That may well be the case, but its title does not suggest such a modest intention. The report is called “Benchmarking Widening Participation”. The intention, then, is for it to become a useful metric, and if so, the exclusion of widening participation institutions from the outset is not just annoying, it’s potentially damaging. If it had the more appropriate title of “One measure of widening participation at conventional universities” then this response might not be so forthright, but I suspect the media coverage would not have been so extensive either.
The danger is that such a report reinforces traditional notions of what constitutes a student and of study in higher education, notions that are at odds with the needs of many WP students. League tables always result in a loss of nuance, and this one could make life more difficult for providers who seek to prioritise WP through non-traditional provision. For a study that seeks to raise the profile of WP, that would indeed be an unfortunate outcome.