When is widening participation not widening participation?

Martin Weller of the OU critiques yesterday’s HEPI report on widening participation.

Martin Weller is a Professor of Educational Technology at the Open University.

The Higher Education Policy Institute (HEPI) released a study yesterday that ranks universities by one measure of widening participation (WP), a Gini coefficient.

Promoting widening participation as an important area on which HEIs should be evaluated is a welcome contribution. However, flaws in the research methodology undermine the report’s key message, and may make the intervention more harmful than useful.

From such a ranking one would expect Russell Group universities to do poorly, and indeed that was the case; it was the finding that generated the most headlines. But you would also expect providers with a very specific mission based around widening participation to fare well. As an Open University academic I eagerly sought our place in the graphic (all academics hate league tables until they perform well in them). But we were absent, as was Birkbeck, which has a similar WP-focused mission. In addition, some providers that do have a specific WP focus and were included, such as Ulster and the University of the Highlands and Islands, seemed to fare poorly. Why was this self-proclaimed attempt at ‘benchmarking’ WP in universities full of such obvious anomalies?

The wrong tool

The answer lies in the methodology. The ranking is based on a Gini coefficient derived from “publicly-available 2016 UCAS POLAR participation data reported by universities”. Herein lies the problem. POLAR is a classification based on the proportion of the young population in an area that participates in higher education. It is not a bad measure of social inequality, but it is “based on the proportion of 18-year-olds who enter HE aged 18 or 19 years old”. It is therefore a poor measure for institutions with many mature students.
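
The report does not spell out its calculation, but a Gini coefficient over the five equally weighted POLAR quintiles is simple to reproduce. Below is a minimal sketch in Python; the intake shares are invented, and equal quintile weights with shares summing to one are my assumptions, not HEPI’s published method. Note that a perfectly even spread scores zero, which, as a comment below observes, quietly treats an even spread as the definition of good WP.

    def gini(shares):
        """Discrete Gini coefficient over n equally weighted groups.

        `shares` are the fractions of an institution's intake in each
        POLAR quintile (summing to ~1). Returns 0 for a perfectly even
        spread, rising towards 1 - 1/n as intake concentrates.
        """
        n = len(shares)
        mean = sum(shares) / n
        pairwise = sum(abs(a - b) for a in shares for b in shares)
        return pairwise / (2 * n * n * mean)

    even = [0.2] * 5                          # uniform across quintiles
    skewed = [0.05, 0.10, 0.15, 0.25, 0.45]   # hypothetical selective intake
    print(gini(even))    # 0.0
    print(gini(skewed))  # ~0.38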

For this reason, this year’s Teaching Excellence and Student Outcomes Framework (TEF) also included the Indices of Multiple Deprivation (IMD) as a measure of WP. IMD combines measures of employment, income and health to determine the deprivation of a small geographical area. It has its own limitations, particularly in inner-city areas where these factors can vary wildly from one side of the street to the other, which is why the TEF exercise included both measures.
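
The TEF reported POLAR and IMD as separate metrics rather than blending them, but the objection here can be made concrete. The sketch below (Python again; the names and the bottom-two-quintile thresholds are my own, purely illustrative) flags a student as WP if either measure places them in the bottom two quintiles, and consults POLAR only for young entrants, since it is defined on entry at 18 or 19.

    from dataclasses import dataclass

    @dataclass
    class Student:
        polar_quintile: int  # 1 = lowest young HE participation ... 5 = highest
        imd_quintile: int    # 1 = most deprived area ... 5 = least deprived
        age_at_entry: int

    def wp_flag(s: Student) -> bool:
        """Illustrative WP flag combining two measures: POLAR describes
        only 18/19-year-old entry, so it is consulted only for young
        entrants; IMD applies to students of any age."""
        polar_low = s.age_at_entry <= 19 and s.polar_quintile <= 2
        imd_low = s.imd_quintile <= 2
        return polar_low or imd_low

    # A 35-year-old from a deprived area is invisible to POLAR but caught by IMD.
    print(wp_flag(Student(polar_quintile=4, imd_quintile=1, age_at_entry=35)))  # True

On a definition like this, a mature student from a deprived postcode counts; a POLAR-only ranking misses them entirely.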

By focusing solely on POLAR data from UCAS, the HEPI methodology incorporates implicit assumptions about education that undermine the very point of the study. It ends up focusing on, or at least privileging, traditional universities and traditional (young, full-time, campus-based) students. If the aim is to argue that widening participation is an important metric, then that message is entirely undermined if the definition of WP is, ironically, too narrow to include many of the students who qualify.

A better way

A study that shows how providers who prioritise WP perform would be more powerful. To meet the needs of WP students, education often needs to be rethought, with open entry, part-time or blended study, and new outreach and community support programmes. By adopting a methodology that disadvantages providers who pursue such approaches and reinforces the conventional model of higher education, the HEPI report does little to advance the WP agenda.

As a research student supervisor I always advise my students not to choose their methodology first and then try to make reality match it, but this seems to have been the approach here. If the methodology excludes institutions, or disadvantages students, that a common-sense analysis tells you should be included, then it is time to reconsider the method.

So when HEPI replied on Twitter that “there is not a valid way of including [the OU] in this study as POLAR focuses on young people, the data was sourced from UCAS”, that seems like an admission that the study is flawed. It is not the job of WP-focused providers to make sure they fit HEPI’s methodology, but rather vice versa, especially before reports are pushed to the press.

HEPI have argued that it is just one contribution to a bigger picture. That may well be the case, but the title does not suggest such a modest intention. The report is called “Benchmarking Widening Participation”, which signals an ambition to become a standard metric; if so, the exclusion of widening participation institutions from the outset is not just annoying, it is potentially damaging. Had it carried the more appropriate title of “One measure of widening participation at conventional universities”, this response might not have been so forthright, but I suspect the media coverage would not have been so extensive either.

The danger is that such a report reinforces traditional notions of what constitutes a student, and study in higher education, which are at odds with the needs of many WP students. League tables always result in a loss of nuance, and this could make life more difficult for providers who seek to prioritise WP through non-traditional provision. For a study that seeks to raise the profile of WP, that would indeed be an unfortunate outcome.

9 responses to “When is widening participation not widening participation?”

  1. Really thoughtful response. For a number of HEIs with proud WP legacies, this report, released without upfront caveats, induced a day of damage limitation with local press and elected representatives. A more nuanced account would have been welcome and might have stimulated a more inclusive debate. Ironic, given the WP agenda at stake?

  2. Interesting response to the HEPI report and I agree that the methodology is far from convincing. I take more issue with the notion that an even spread across the five POLAR quintiles constitutes WP. Where did that assumption come from? I think the report treats the concept of equality as synonymous with equity.

  3. It is good of Martin to take the trouble to respond at such length to our latest publication. Here are a few further thoughts in response.

    1. HEPI Policy Notes are very short documents (typically just four pages) to stimulate debate and discussion about important issues, as this one has done. They are not designed to be the final word on any issue. Despite its brevity, our report includes a whole section on ‘Challenges’ that discusses the limitations in its own methodology.
    2. Two options might have satisfied Martin: use POLAR but find a way to include the OU; or reject POLAR. Both are unsatisfactory. POLAR is imperfect (as is nearly all data), but ignoring evidence because it does not reveal all that we want would be unacademic. The second option of somehow wrenching the OU in (on a non-like-for-like basis) because we have a sense it might do well could be even worse.
    3. Including the OU would have done little to make the study truly comprehensive anyway because, for example, the study would still exclude alternative providers and FE colleges delivering HE, many of which have impressive stories to tell about successfully delivering higher education to under-represented groups. It would also still exclude other important student characteristics, such as ethnicity. But not every publication has to be encyclopaedic to be useful.
    4. The issue of data not reflecting the OU well because of its unique position in the higher education sector is a much bigger one. For instance, when the OU chose to stand aside from the Teaching Excellence Framework, their Vice-Chancellor explained: ‘What we have found with standardised metrics across the higher education sector is that the distinctive nature of OU students is not easily represented.’ It is hard to disagree.
    5. Unless you believe league tables will be banned or are likely to disappear, both of which seem incredibly unlikely, Martin’s wider concern about their impact is probably best addressed by creating more of them, so that each university’s different strengths and weaknesses in different areas can be understood more fully, enabling a more rounded picture to be built up overall.
    6. We have covered the positive impact of Martin and his colleagues at the OU in other ways, including, for example, in an excellent paper written by the Vice-Chancellor, Peter Horrocks, and in a study, produced with support from the OU, on how to address the decline in part-time learning, called It’s the finance, stupid! The decline of part-time higher education and what to do about it. We shall aim to continue to do so.
    7. Martin’s argument is that the OU is so different, it should have been included. Our view is that it is so different that to have included it in this one short study would have been hard to defend. But perhaps the world is not as black-and-white as this division implies? There is a need for a much wider assessment of the OU’s unique historical role in our higher education system (which may not always be as it has seemed) – as well as the way in which other institutions have sometimes seemed to regard the OU as an excuse not to do things. As Martin Trow wrote in the Higher Education Quarterly many years ago, ‘I yield to no one in my admiration for the Open University … [But] While the Open University is nominally open, we know that well over half of the students could qualify to entry to the universities or polytechnics. … Indeed it can be argued that the existence of the Open University has helped to justify the lack of expansion of the university system itself – one can always point to the Open University as the safety valve which would take increased demand for university entrance by those who did not have the full qualification entry.’ It would be more fruitful to address this bigger question than to worry about whether every single individual dataset reflects the work at every single higher education institution.

  4. Wow, entirely missing the point the response makes there, Nick. I’m not arguing that just the OU should be included. I am arguing that your methodology is poor and punishes WP providers, which is unhelpful. See Brian’s response above: they had to spend a day dealing with the fallout of this. That’s not a helpful intervention. There are many ways the methodology could have been improved beyond your suggestions; you could combine POLAR with IMD, for instance. Your response repeatedly puts the methodology first and rejects any criticism. HEPI should listen to the many WP experts who have expressed concerns about this report rather than simply dismissing them.

  5. I find myself sympathising with both points: standardised indicators often call for some form of standardisation of the object they want to compare. I led the Europe-wide EUROSTUDENT project for 10 years, where we strived to provide comparative data on (inter alia) how socially inclusive national higher education systems are. This led us to exclude both distance providers and (often) private providers of higher education from the comparative analysis.

    But at least we have always strived to include all types of students in the analysis. This has often led to policy-makers from different countries criticising our results as not being representative of national analyses, only to realise, for instance, that the difference was that we had also included students with part-time status and older students, who are frequently studied separately. A consequence of this was, for instance, that average tuition fees were higher, since part-time students often pay more. The latest data set (published in March) provides an interesting insight into age group by highest educational attainment of a student’s parents, contrasting students whose parents graduated from higher education with students whose parents didn’t. For Ireland (the UK doesn’t participate!), the share of students whose parents didn’t graduate from higher education themselves (‘first generation students’) grows from 39% to 62% between the younger and older student groups; see here: http://database.eurostudent.eu/#topic=edupar_3&countries=%5B%22IE%22%5D&focusgroup=e_age

    But I also think that ignoring providers such as open universities is becoming increasingly untenable, since practices of open admission, online and blended learning are being increasingly adopted by the ‘traditional sector’ too. A fair and comprehensive analysis would include these providers. This is surely easier to achieve on a national level than for an internationally comparative study and I would urge data analysts to explore new approaches, which neither standardise higher education providers nor students of higher education in the manner criticised here.

  6. For my part, I did not read Martin as calling for any exception in treatment. I would have preferred that HEPI made explicit comment on obvious outliers and used these to explain the constraints of the methodology. I understand the brevity point, but making space for the obvious would have helped. The HEPI press release drained a whole day in damage limitation. At a time when HE is on the back foot with the press, the HEPI report was a bit of an own goal for some major WP providers.

  7. Interesting discussion. I would say the biggest flaw is that the report presents the findings as if WP were only about access. Universities that attract a high proportion of WP students but then fail to meet their needs during their studies would perform very well on this methodology. Without a measure that takes into account the success and progression of WP students, I don’t see that the ranking has any utility at all. Surely the ultimate aim of WP should not be just to attract as many WP students as possible, but to ensure that WP students benefit as much from HE as their more traditional peers.

  8. I absolutely agree with Debbi’s point about the comparable progress of WP students and their more privileged counterparts, a point reinforced by Vincent Tinto’s analysis that access without opportunity is, at best, a partial approach to WP and, at worst, a damaging process.
