The Higher Education Policy Institute (HEPI) released a study yesterday that ranks universities by one measure of widening participation (WP), a Gini coefficient.
The promotion of widening participation as an important area on which HEIs should be evaluated is a welcome contribution. However, flaws in the research methodology undermine the key message, and may mean the intervention is more harmful than useful.
From such a ranking one would expect Russell Group universities to do poorly, and indeed that was the case; it was also the finding that generated the most headlines. But you would equally expect providers with a mission built specifically around widening participation to fare well. As an Open University academic I eagerly sought our place in the graphic (all academics hate league tables until they perform well in them). But we were absent, as was Birkbeck, which has a similarly WP-focused mission. Meanwhile, some providers with a specific WP focus that were included, such as Ulster and the University of the Highlands and Islands, seemed to fare poorly. Why was this self-proclaimed attempt at ‘benchmarking’ WP in universities full of such obvious anomalies?
The wrong tool
The answer lies in the methodology. The ranking is based on a Gini coefficient derived from “publicly-available 2016 UCAS POLAR participation data reported by universities”. Herein lies the problem. POLAR is a classification based on the proportion of the young population in an area that participate in higher education. It’s not a bad measure of social inequality, but is “based on the proportion of 18-year-olds who enter HE aged 18 or 19 years old”. It is therefore not a good measure for institutions that have many mature students.
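To make concrete what the report's headline measure actually does: a Gini coefficient over an institution's intake shares across the five POLAR quintiles scores 0 for a perfectly even spread and rises towards 1 as intake concentrates in a few quintiles. The report does not publish its exact calculation, so the sketch below is only illustrative, and the quintile shares in it are invented for the example, not taken from the report.

```python
def gini(shares):
    """Gini coefficient of a set of intake shares.

    0.0 means a perfectly even spread across the groups;
    values approaching 1.0 mean intake is concentrated in few groups.
    """
    n = len(shares)
    mean = sum(shares) / n
    # Sum of absolute differences over every ordered pair of shares,
    # normalised by twice the squared group count times the mean.
    total_abs_diff = sum(abs(a - b) for a in shares for b in shares)
    return total_abs_diff / (2 * n * n * mean)

# Hypothetical intake shares across the five POLAR quintiles
even = [0.20, 0.20, 0.20, 0.20, 0.20]    # perfectly even intake
skewed = [0.05, 0.10, 0.15, 0.25, 0.45]  # skewed to high-participation areas

print(gini(even))              # 0.0
print(round(gini(skewed), 2))  # 0.38
```

Note what the measure rewards: an even spread across quintiles, regardless of how those quintiles are defined. An institution recruiting heavily from low-participation areas would score no better than an even one, and the measure says nothing at all about students the POLAR classification excludes.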
For this reason, this year’s Teaching Excellence and Student Outcomes Framework (TEF) also included the Indices of Multiple Deprivation (IMD) as a measure of WP. IMD combines measures of employment, income, and health to determine the deprivation of a small geographical area. It is not without limitations either, particularly in inner-city areas where these factors can vary wildly from one side of a street to the other, which is why the TEF exercise included both measures.
By focusing solely on POLAR data from UCAS, the HEPI methodology incorporates implicit assumptions about education that undermine the very point of the study. It ends up focusing on, or at least privileging, data from traditional universities and traditional (young, full-time, campus-based) students. If the aim is to argue that widening participation is an important metric, that message is entirely undermined if the definition of WP is, ironically, too narrow to include many of the students who qualify.
A better way
A study that shows how providers who prioritise WP perform would be more powerful. Meeting the needs of WP students often requires education to be rethought, with open entry, part-time or blended study, and new outreach and community support programmes. By adopting a methodology that disadvantages providers who pursue such approaches and reinforces the conventional model of higher education, the HEPI report does little to advance the WP agenda.
As a research student supervisor, I always advise my students not to choose their methodology first and then try to make reality match it, but this seems to have been the approach here. If the methodology excludes institutions, or disadvantages students, that a common-sense analysis tells you should be included, then it is time to reconsider the method.
So when HEPI replied on Twitter that “there is not a valid way of including [the OU] in this study as POLAR focuses on young people, the data was sourced from UCAS”, that seems like an admission that the study is flawed. It is not the job of WP-focused providers to make sure they fit HEPI’s methodology, but rather vice versa, especially before reports are pushed to the press.
HEPI have argued that it is just one contribution to a bigger picture. That may well be the case, but the title does not suggest such a modest intention. The report is called “Benchmarking Widening Participation”. The intention, then, is for it to become a useful metric, and if so, the exclusion of widening participation institutions from the outset is not just annoying, it is potentially damaging. Had it carried the more appropriate title of “One measure of widening participation at conventional universities”, this response might not be so forthright, but I also suspect the media coverage would not have been so extensive.
The danger is that such a report reinforces traditional notions of what constitutes a student, and study in higher education, which are at odds with the needs of many WP students. League tables always result in a loss of nuance, and this could make life more difficult for providers who seek to prioritise WP through non-traditional provision. For a study that seeks to raise the profile of WP, that would indeed be an unfortunate outcome.
Really thoughtful response. For a number of HEIs with proud WP legacies, this report, without upfront caveats, induced a day of damage limitation with local press and elected representatives. A more nuanced account would have been welcome and might have stimulated a more inclusive debate. Ironic, given the WP agenda at stake?
Interesting response to the HEPI report and I agree that the methodology is far from convincing. I take more issue with the notion that an even spread across the five POLAR quintiles constitutes WP. Where did that assumption come from? I think the report treats the concept of equality as synonymous with equity.
It is good of Martin to take the trouble to respond at such length to our latest publication. Here are a few further thoughts in response. 1. HEPI Policy Notes are very short documents (typically just four pages) to stimulate debate and discussion about important issues, as this one has done. They are not designed to be the final word on any issue. Despite its brevity, our report includes a whole section on ‘Challenges’ that discusses the limitations in its own methodology. 2. Two options might have satisfied Martin: use POLAR but find a way to include the OU; or…
Wow, entirely missing the point the response makes there Nick. I’m not arguing that just the OU should be included. I am arguing that your methodology is poor, and punishes WP providers, which is unhelpful. See Brian’s response above – they had to spend a day dealing with the fallout of this. That’s not a helpful intervention. There are many ways the methodology could have been improved beyond your suggestions. You could combine POLAR with IMD for instance. Your response repeatedly seems to be that the methodology comes first, and to reject any criticism. HEPI should listen to the many…
I find myself sympathising with both points – standardised indicators often call for some form of standardisation of the object they want to compare. I was the leader of the European wide EUROSTUDENT project for 10 years, where we strived to provide comparative data on (inter alia) how socially inclusive national higher education systems are. This led us to exclude both distance providers and (often) private providers of higher education in the comparative analysis. But at least we have always strived to include all types of students in the analysis. This has often led to policy-makers from different countries criticising…
By ‘this report’ I meant the HEPI report that was the subject of the author’s article.
For my part I did not read that Martin was calling for any exception in treatment. I would have preferred that HEPI made explicit comment on obvious outliers and through these explain the constraints in the methodology. I understand the brevity point but making space for the obvious would have helped. The HEPI press release drained a whole day in damage limitation. At a time when HE is on the backfoot with the press, the HEPI report was a bit of an own goal for some major WP providers.
Interesting discussion. I would say the biggest flaw is that the report presents the findings as if WP were only about access. Universities that attract a high proportion of WP students but then fail to meet their needs during their studies would perform very well on this methodology. Without a measure that takes into account the success and progression of WP students, I don’t see that the ranking has any utility at all. Surely the ultimate aim of WP should not be just to attract as many WP students as possible, but to ensure that WP students benefit as…
I absolutely agree with Debbi’s point about comparable progress of WP students with their more privileged counterparts. A point reinforced by Vincent Tinto’s analysis that access without opportunity forms, at best, a partial approach to WP and, at worst, a damaging process.