
POLAR, MEM and equality: you don’t have to choose

Mark Corver of dataHE sets out the advantages and disadvantages of POLAR and MEM as equality measures.

Mark Corver is the founder of dataHE and the former director of analysis and research at UCAS

It is good to see equality measurement in higher education is receiving serious attention again. It is important.

But the debate often seeks to establish what measure is “best”. POLAR and MEM were not meant to be this kind of either/or choice. And those responsible for widening participation in universities shouldn’t feel forced to take sides between the two. It can help to go back to basics about what they are designed to do. And what they are not.

POLAR exploration

POLAR (Participation Of Local Areas, developed at HEFCE) is simple. It does what it says: it assigns young people to groups by the observed higher education entry rate of their neighbourhood alone, across a high-resolution geography of neighbourhoods. It doesn’t presume to have an underlying social model of what causes low higher education entry. Instead it serves a policy agenda of “we want to tackle low entry to higher education, whatever lies behind it”.

POLAR’s underlying assumption is that people who live in the same neighbourhood are often more similar to each other than they are to people who live in different neighbourhoods. It is a single-dimensional measure, but it has some multi-dimensional characteristics, simply because the forces that form neighbourhoods are themselves multi-dimensional. It consistently shows high discrimination across a range of higher education-related statistics, reflecting the strong partitioning of residential neighbourhoods in the UK. And it has a small data footprint, making it usable for monitoring (we recommend POLAR3 if looking at trends), targeting (POLAR4) and evaluation (either).
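For illustration only, here is a minimal sketch of a POLAR-style classification in Python: rank neighbourhoods by their young higher education entry rate and cut them into quintiles that each hold roughly a fifth of the young population. The input table and column names are assumptions for the sketch, not the official POLAR specification.

```python
# Minimal POLAR-style sketch (illustrative, not the official methodology).
# Assumes a table of neighbourhoods with a young population count and a
# count of young HE entrants; column names are hypothetical.
import pandas as pd

def polar_style_quintiles(areas: pd.DataFrame) -> pd.DataFrame:
    df = areas.copy()
    df["entry_rate"] = df["entrants"] / df["young_pop"]
    # Rank neighbourhoods from lowest to highest entry rate
    df = df.sort_values("entry_rate").reset_index(drop=True)
    # Cut into quintiles holding roughly equal shares of the young population,
    # rather than equal numbers of neighbourhoods
    cum_share = df["young_pop"].cumsum() / df["young_pop"].sum()
    df["quintile"] = (cum_share * 5).clip(upper=4.999).astype(int) + 1
    return df

# areas = pd.read_csv("neighbourhood_entry_counts.csv")  # hypothetical input
# print(polar_style_quintiles(areas).groupby("quintile")["young_pop"].sum())
```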

There are a large number of things POLAR doesn’t do that people sometimes seem to expect it to. For example, it doesn’t partition people with low household incomes from those with other incomes. If that were your intent it would be much better to form groups by, say, low household income. In terms of higher education equality, you might be forced to use such an income proxy if you couldn’t measure higher education entry directly. But you can measure and target directly by the policy issue you are interested in: low higher education entry. Which is what POLAR does. And does well, within the limitations of being a single-dimension measure. But the multi-dimensional nature of inequality in entry to higher education means no single-dimension measure can properly reflect groups with low entry rates.

Complexity increases

Enter the Multiple Equality Measure (MEM), developed by UCAS.

I see MEM as more of a framework than a measure. The motivation for developing it was to be able to bring the strengths of different single-dimension measures together in a data-led way. This is especially important for dimensions that are associated with equality patterning within neighbourhoods (for example, variations in income) and, even more so, within households (sex). I also thought that it could help to reduce the amount of time and energy the sector spends deliberating over which single measure was best: it hasn’t.

MEM’s underlying statistical model predicts higher education entry probability at individual level using only factors that (by agreement) shouldn’t matter to higher education entry rates. These probabilities are then ranked to form quintiles, like POLAR. Indeed POLAR and MEM are closely related. If you build a MEM model using only indicators for each of the neighbourhoods used in POLAR (essentially saying “the neighbourhood where you live shouldn’t matter to your chances of going to university”) then MEM is POLAR. The strength of the MEM framework over POLAR is that you can add in a whole set of factors together. You can’t go completely wild here, or you will end up with individuals, but you can go a long way. Then you can make that powerful “shouldn’t matter” statement jointly across all the factors, with the resulting statistics showing how far equality is from that statement.
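As a rough illustration of this framework (and emphatically not UCAS’s published model), the sketch below fits a simple logistic regression of higher education entry on a handful of “shouldn’t matter” factors and ranks the predicted probabilities into quintiles. The factor names and the input data are assumptions; restricting the factor list to neighbourhood alone would give a POLAR-like classification, which is the sense in which “MEM is POLAR”.

```python
# MEM-style sketch (illustrative assumptions, not UCAS's actual specification):
# model entry probability from factors agreed to be ones that "shouldn't
# matter", then rank those probabilities into quintiles.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def mem_style_groups(people: pd.DataFrame) -> pd.Series:
    shouldnt_matter = ["sex", "neighbourhood", "fsm", "ethnic_group"]  # hypothetical
    X = pd.get_dummies(people[shouldnt_matter], drop_first=True)
    y = people["entered_he"]  # 1 if the individual entered HE, else 0
    model = LogisticRegression(max_iter=1000).fit(X, y)
    p_entry = model.predict_proba(X)[:, 1]
    # Group 1 = lowest predicted entry probability, analogous to POLAR Q1
    quintile = pd.qcut(p_entry, 5, labels=[1, 2, 3, 4, 5])
    return pd.Series(quintile, index=people.index)
```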

In this way a correctly formulated MEM is conceptually superior to POLAR. And this is reflected in the greater entry rate differentials between the MEM quintiles. But it comes at the cost of very high (and intrusive) data collection requirements on, for example, students taking part in outreach activities, if you want to use it operationally. And to calculate the rates you would need access to the same type of data in the underpinning population estimates too. This is heavy-duty data analysis, which makes MEM more suitable for specialised measurement in very data-rich settings: more the laboratory than the field, where POLAR is the more practical choice. Comparing the two lets you gauge the penalty of using POLAR, as a practical implementation, against a complex multi-dimensional reality.

In comparison

We’ve looked at this at dataHE (using the last detailed data UCAS published on MEM, the 2017 version, and excluding independent school pupils). This shows that using POLAR Q1/Q2 to identify the (lowest 40 per cent) MEM G1/G2 captures just under three quarters of that ‘true’ population group. Not bad for a field measure. The quarter of MEM G1/G2 that POLAR Q1/Q2 misses is mostly men (around 85 per cent), reflecting the strong gradient of entry rate by sex within households (which POLAR can’t see). The free-school-meals (FSM) group forms around 40 per cent of that missed quarter: a very substantial over-representation but, numerically, there are still more “non-FSM” missed than FSM. For the same reasons, FSM alone is poor at capturing that key MEM G1/G2 group.
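The style of overlap calculation behind these figures can be sketched as below, assuming a hypothetical individual-level table that already carries both classifications plus sex and an FSM flag. The numbers quoted above come from dataHE’s analysis of the 2017 MEM data, not from this code.

```python
# Sketch of the POLAR-vs-MEM capture calculation (hypothetical column names).
import pandas as pd

def polar_capture_of_mem(df: pd.DataFrame) -> dict:
    target = df[df["mem_g"].isin([1, 2])]       # the 'true' MEM G1/G2 group
    captured = target["polar_q"].isin([1, 2])   # also in POLAR Q1/Q2
    missed = target[~captured]
    return {
        "capture_rate": captured.mean(),                        # ~0.75 in the article
        "missed_share_male": (missed["sex"] == "male").mean(),  # ~0.85 in the article
        "missed_share_fsm": missed["fsm"].mean(),               # ~0.40, fsm coded 0/1
    }
```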

One drawback with MEM is that its more complicated methodology relies heavily on the model being correctly formulated. It is not clear this has been the case since the latest revision at the end of last year. This introduced admissions policy within state schools (for example, comprehensive/selective/modern/etc) to the set of factors that “shouldn’t matter”. The problem here is that admissions policy acts as a direct measure of individual prior attainment. And individual prior attainment is generally regarded as a factor that “should matter” to higher education entry in the UK system. With admissions policy included, the underlying logic of the MEM model, “only factors that shouldn’t matter”, frays. This makes the classification currently rather less useful from that perspective than it should be.

Equality measurement and activity targeting are too important not to get right. POLAR is a strong single-dimension measure and widening participation staff can feel confident in using it as such. But they should continue to be alert to other equality dimensions – most clearly sex – where entry can vary within neighbourhoods in a way that POLAR can’t see. The MEM framework handles these variations analytically. It can give a more holistic picture of equality. But it needs lots of data and careful statistical handling, which limits how it can be used. Ultimately MEM is the better laboratory measure, but POLAR is more practical. Both can help universities with equality.

9 responses to “POLAR, MEM and equality: you don’t have to choose”

  1. MEM looks like a really interesting measure, but to the best of my knowledge it is not possible to see what is ‘under the bonnet’. The list of contributing factors (‘variables’) is available, but it is not very clear how they have been combined and with what technique(s). This is a shame, because the measure has potential for use in academic research, where it could be independently tested and validated, but for it to be used that way it needs to be open to inspection. David Best’s WonkHE article of 2.15.2019 only says that “MEM is based on sophisticated statistical modelling techniques”.

  2. All well and good saying that MEM > POLAR, but where is MEM data available? It doesn’t appear to be in any of the more recent institution-level UCAS data releases. Also, is MEM better than, or any different to, IMD? This is the better question. The OfS used IMD in their Access and Participation data release. So, with IMD data more readily available, why use MEM?

  3. This still leaves me unclear on how a university would (or even if they should) use MEM as a factor in contextualised admissions for example, which seems to be a key area where POLAR falls down. I echo Paul’s concern that until MEM is properly transparent and open to independent scrutiny by researchers, it’s difficult to see how the sector could trust or make use of it.

  4. The trouble with POLAR is that the “people” who expect POLAR to do things it doesn’t do include policymakers in the DfE and OfS, who design policies based on a misunderstanding of what POLAR actually measures. Read almost any policy document or statement about social mobility coming out of the Department for Education and you’ll see that the conventional wisdom is that coming from POLAR Q1 = coming from a socially disadvantaged background.

    Another example that troubles me is that the OfS “fair access” measure (KPM2) measures the POLAR Q5 versus POLAR Q1 gap in access to the most selective universities. This drives OfS funding and policy, as well as university access activity, to inadvertently discriminate against ethnic minorities.

    Ethnic minorities are half as likely to live in POLAR Q1, thanks to the higher propensity of (disadvantaged) young people from ethnic minorities to progress to HE for a given set of Level 3 qualifications. This drives the fact that no young people in Inner London, and very few young people in Outer London, live in POLAR Q1.

    However, POLAR does not take into account the type of HE that is being accessed. These disadvantaged young people from ethnic minorities tend to enter low-tariff “local” universities and are more likely to study while remaining in their parents’ house rather than get fully involved in student life and the related social networking, to the detriment of both continuation rates and their future careers.

    There is little funding aimed at tackling this “fair access” issue and universities are not incentivised to fix it – the reward is for getting someone from POLAR Q1 into your university (and this is aside from the well-documented incentive to attract students from advantaged backgrounds who live in POLAR Q1 rather than a disadvantaged student from POLAR Q2-Q5).

  5. At UCAS we don’t think MEM should replace POLAR – there’s a time and a place for both. That said, MEM is, in our opinion, the best measure when talking about participation in HE as a whole, and the best way to avoid blind spots.

    Contextual admissions is about an individual’s potential and whilst MEM has an important role here, it can’t replace the individual factors which can’t be picked up by any statistics and are often self-declared information in a UCAS application, for example, as part of a student’s personal statement, or in their supporting reference. Our modernised contextual data service (MCDS) provides individual-level MEM data as well as a context-adjusted grade profile for each applicant to providers (all free of charge).

    We’re keen to promote MEM’s adoption by the HE sector, and are committed to statistical rigour and transparency. That’s why, in October, we published our summary and technical reports about MEM, both available via our website.

  6. Thanks for the comments.

    Technically it is possible to change the ranking/modelling variable for POLAR/MEM from ‘all HE’ to ‘higher tariff HE’ (etc) and recalculate. As anticipated, this can give very different classification results and can be a powerful method for providers with more extreme grade distributions.

    It is very helpful of UCAS to publish the model fit details, but as people have commented, the data is needed too. Equality is rightly set as the top priority for the sector. But at the same time the sector is being starved of the equality data it needs to be effective.

    The obvious way to help is to publish the detailed (but safely aggregate) core data file on the entrants/populations for all the component category combinations that make up the MEM. This would be of immense benefit to researchers and WP staff across the sector. I can’t think of a single reason not to do this.

  7. Elephant in the room – the exorbitant charges by UCAS and HESA to get ‘our’ data – where do they get it from? Our HEIs. You used to be able to download the full UCAS annual weighted data for free – now, if you want them to run a query for a dataset, it will cost thousands of pounds – and they gave some UCAS data to the ADRN, but it had no measure of educational attainment so was basically useless for anything to do with admissions or WP.
