
A cold spot on the TUNDRA

A new measure of area-based participation arrives in England – but what will it change? David Kernohan is your man with the cold weather kit.

David Kernohan is Deputy Editor of Wonkhe

Have you heard of the TUNDRA Wash?

It’s going to prompt some pretty serious questions about HE access and participation – and school standards – in the fens. There’s an arc extending from just outside Grimsby right round to Lowestoft, and as far inland as Ely and Boston, where you would be hard-pressed to find an area outside of quintile one or two on the Office for Students’ new, experimental, TUNDRA HE participation data.

[Interactive map: TUNDRA quintiles by MSOA]

It’s not often you see such a pronounced sub-regional pattern in participation data. We’ve been used to HEFCE’s POLAR measure of area participation, which simply divides the number of 18 year olds from a given area (since POLAR4, the Middle Layer Super Output Area) who participate in HE by the number of 18 year olds living there.
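
For the spreadsheet-inclined, the calculation really is that simple. Here’s a toy sketch in Python with invented counts – note that the published methodology pools several cohorts, and draws quintile boundaries to hold roughly equal numbers of young people rather than equal numbers of areas, which the naive split below doesn’t do:

```python
import pandas as pd

# Invented per-MSOA counts - the real measure pools several cohorts of 18 year olds
df = pd.DataFrame({
    "msoa": ["E02000001", "E02000002", "E02000003", "E02000004", "E02000005"],
    "cohort": [900, 1200, 750, 1100, 980],    # 18 year olds living in the area
    "entrants": [310, 620, 150, 495, 240],    # of those, the number entering HE
})

# The POLAR young participation rate: entrants divided by cohort
df["rate"] = df["entrants"] / df["cohort"]

# Quintile 1 is the lowest-participation fifth. A plain qcut splits areas
# evenly; the published quintiles instead hold equal numbers of young people
df["quintile"] = pd.qcut(df["rate"], 5, labels=[1, 2, 3, 4, 5])
print(df)
```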

Trudging across the TUNDRA, mile after mile

TUNDRA – “tracking underrepresentation by area”, yes we know… – is a slightly more sophisticated beast, looking at the proportion of 16 year old mainstream state school students who participate in HE at age 18 or 19. It manages this by tracking individual learners through the school system and beyond – a linked data approach similar to that employed in LEO.
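
In data terms, that tracking looks something like the join below. To be clear, this is a toy sketch: the real linkage runs across the National Pupil Database and HESA/ILR records, and none of these table or column names are the actual schema.

```python
import pandas as pd

# Toy extracts - every table and column name here is made up
ks4_cohort = pd.DataFrame({
    "pupil_id":   [101, 102, 103, 104, 105],
    "mainstream": [True, True, False, True, True],   # school type at age 16
    "home_msoa":  ["E02000001", "E02000001", "E02000001",
                   "E02000002", "E02000002"],
})
he_records = pd.DataFrame({"pupil_id": [101, 104]})  # entered HE at 18 or 19

# Follow each individual forward: a left join keeps the non-participants,
# which is exactly what a simple area head-count can't do
tracked = ks4_cohort.merge(
    he_records.assign(entered_he=True), on="pupil_id", how="left"
)
tracked["entered_he"] = tracked["entered_he"].fillna(False).astype(bool)
print(tracked)
```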

OfS released this new measure for feedback as part of a laudable interest in developing better data. POLAR has served the sector well, but as it ages its issues are becoming much more apparent – and in the last couple of years we’ve seen some pushback from the sector. UCAS, for instance, is shifting attention to measures of multiple deprivation. Researcher Neil Harrison has described POLAR as a classic example of an “ecological fallacy”. In response, Mark Corver, who effectively invented POLAR as his PhD project while at HEFCE, highlighted the practical advantages of a simple, unsophisticated participation measure. And OfS are still keen to emphasise these benefits.

But the criticisms remain. It is an area measure, so the size of the area in question and the nature of its sub-sections have an impact on the overall figure. The MSOA is designed to have a population of between 5,000 and 15,000 people, and between 2,000 and 6,000 households. If you think of the area around a small town, you can imagine a fairly low participation central district and a leafier suburb with a greater participation rate – clearly an issue for understanding areas and setting targets. Even worse, imagine a town where the majority of people in a single area belong to an ethnic minority – the area average tells you nothing about their participation specifically.

TUNDRAmatic changes?

TUNDRA doesn’t address these criticisms – the areas involved are the same – but it does take action on another issue relating to a local area, namely school choice. It is well known that a high proportion of private school pupils attend university, and if a large number of households in an area send children to a private school this can mask much lower participation rates at other schools in the area.

Similarly, children in non-mainstream provision – for example to address health, learning or emotional needs – are less likely to attend university (OfS gave me a frankly alarming figure of around 1 per cent), and if a large number of pupils in an area attend non-mainstream schools this can mask a higher participation rate at the area’s mainstream schools.
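
The arithmetic of both masking effects is easy to see with invented numbers for a single area:

```python
# One invented area of 200 eighteen year olds
private_pupils, private_rate = 80, 0.90   # private schools: high participation
state_pupils, state_rate = 110, 0.25      # mainstream state schools
other_pupils, other_rate = 10, 0.01       # non-mainstream provision

total = private_pupils + state_pupils + other_pupils
blended = (private_pupils * private_rate
           + state_pupils * state_rate
           + other_pupils * other_rate) / total

print(f"POLAR-style all-cohort rate:       {blended:.0%}")     # roughly 50%
print(f"TUNDRA-style mainstream-only rate: {state_rate:.0%}")  # 25%
```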

Because the National Pupil Database includes the permanent home address of children in state schools, linking this to HE participation and filtering out non-mainstream provision gives you a clearer picture of the performance of children who actually live in the area in question. There’s not a straight read across, as many children cross MSOA boundaries to attend school. But it removes a “university effect” long noted in POLAR, where young people move to an area near their chosen university just before enrolling.
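
Putting the pieces together, a TUNDRA-style rate is then the mainstream-only participation rate grouped by the home MSOA recorded in the NPD, rather than by where the school happens to sit. Again, a toy sketch with made-up records:

```python
import pandas as pd

# Toy linked records (invented schema): one row per pupil at age 16
pupils = pd.DataFrame({
    "home_msoa":  ["E02000001", "E02000001", "E02000001",
                   "E02000002", "E02000002"],
    "mainstream": [True, True, False, True, True],
    "entered_he": [True, False, False, True, True],
})

# Filter to mainstream state provision, then group by home address MSOA -
# a pupil who crosses a boundary to attend school still counts at home
tundra_style = (pupils[pupils["mainstream"]]
                .groupby("home_msoa")["entered_he"]
                .agg(rate="mean", cohort="size"))
print(tundra_style)
```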

As above, this is an experimental measure released for feedback, so we shouldn’t expect to be tearing up our Access and Participation Plans just yet. It clearly takes some steps to tidy up POLAR, though neither the existing nor the suggested data is a perfect measure of participation that can be used to precisely tailor interventions. Although seriously, why request five year plans based on a measure you are already working to replace?

TUNDRA regions

But for data nerds the delight is that we can directly compare POLAR4 and TUNDRA, as both are based around the MSOA. I was particularly interested in winners and losers – areas placed in a lower or higher TUNDRA quintile (or with a lower or higher TUNDRA participation rate) than their POLAR4 equivalent.
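
If you fancy replicating the comparison, both releases key on the MSOA code, so a merge and a crosstab give you the full movement matrix. The sketch below assumes you have saved the two OfS spreadsheets locally as CSV – the file and column names are placeholders, not the OfS originals:

```python
import pandas as pd

# Placeholder file and column names - not the OfS originals
polar = pd.read_csv("polar4.csv")     # columns: msoa, polar4_quintile
tundra = pd.read_csv("tundra.csv")    # columns: msoa, tundra_quintile

both = polar.merge(tundra, on="msoa", how="inner")

# Rows are POLAR4 quintiles, columns TUNDRA quintiles; every off-diagonal
# cell is a winner or a loser
print(pd.crosstab(both["polar4_quintile"], both["tundra_quintile"]))
```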

As an example, there are nine MSOAs that were in POLAR4 quintile 4 that are now in TUNDRA quintile 1, moving from the second highest level of participation on one measure to the lowest on the other. This looks more like a university city effect than a regional one, taking in areas in Portsmouth, Nottingham, and Bournemouth. If you are abnormally attracted to PDF format lists, OfS has one for you here.

[Interactive visualisation: POLAR4 and TUNDRA quintiles compared by MSOA]

Comparing regional average participation rates, TUNDRA shows significantly lower participation in the South East than POLAR4.

[Interactive visualisation: regional average participation rates, TUNDRA vs POLAR4]

As the providers who do most to widen access to HE tend to recruit locally, this will make such institutions based in the South East look comparatively worse. It may be argued that this is a more “accurate” measure, and thus will lead to more effective action to widen participation. But neither TUNDRA nor POLAR4 is “accurate” in any meaningful sense regarding the lived experiences of disadvantaged young people living in each MSOA – indeed, TUNDRA excludes a known low participation group, as those who attend non-mainstream schools are not counted.

So what’s the point of TUNDRA? The data looks better – there are no longer any 100 per cent participation areas, which always struck me as a particularly dubious facet of POLAR4.

[Interactive visualisation: distribution of participation rates]

But focusing participation data, and thus (eventually) participation measures, on mainstream state school attendees feels like a mistake. If we have access to the National Pupil Database, why not have the data follow the pupil, and look at university participation rates based on the individuals that are recruited, rather than one of five buckets based on where they happen to live? I suppose that would make it harder to publish participation data, and thus harder to name and shame providers. Which, I suppose, is kind of the point.

3 responses to “A cold spot on the TUNDRA”

  1. There’s a surprising amount of movement from what seems like a pretty minor methodological change: 18% (224 out of 1,243) of POLAR Q1 areas aren’t in TUNDRA Q1.

    Some local authorities lose all or most of their POLAR Q1 areas. There are some surprising losers e.g. Hartlepool (lose 3 out of 5 Q1 areas), Birmingham (lose 11 out of 27), Dudley (lose 4 out of 10), Stoke (lose 8 out of 21), Wigan (lose 4 out of 11), Doncaster (lose 5 out of 19), Liverpool (lose 5 out of 22). So big funding implications if TUNDRA replaces POLAR in funding allocation formulae for the student premium and identifying NCOP areas, as well as from changing incentives for HEIs.

    Other things don’t change: London is ignored in this measure and has even fewer TUNDRA Q1 areas than it has POLAR Q1 areas – fair enough for uses of this measure aiming to increase the participation of state-educated children in HE, but it raises exactly the same issues as POLAR when the measure is used to track “fair access” to the most selective universities: it incentivises Russell Group universities not to focus on attracting disadvantaged young people from London (with associated unintentional discrimination against many ethnic minorities).

  2. Give it a few years, and POLAR and TUNDRA will be replaced by PLATEAU (Participation and Lifelong Access Targeting of Equal Access to University)…

  3. @ David K – I’m struggling to follow your suggestion here:

    ‘If we have access to the National Pupil Database why not have the data follow the pupil, and look at university participation rates based on the individuals that are recruited, not one of five buckets based on where they happen to live’

    Is that not what the OfS ABCS data release aims to do?
