Life of PI: performance indicators in higher education

Performance indicators might sound dull, but how the sector chooses to evaluate itself in future will have a huge impact on league tables, reputation and institutional success. Post-financial crisis, and with a political desire to create a 'level playing field', shaping the future of performance indicators takes on a new urgency and raises a host of complications that the sector needs to get to grips with. Adam Child takes a look for us.

Adam Child is Senior Policy and Strategy Officer at Lancaster University, having previously held the role of Assistant Registrar. All the views posted here are his own.

For those of us working outside the corridors of power, determining the priorities of policymakers, especially for the longer term, is notoriously difficult. Wonks deploy a full range of analytical skills, examining announcements, reports and speeches for the vital clues that may indicate future political direction.

The run-up to Christmas saw the start of a process that will lead to a set of revised national performance indicators for higher education, the UKPIs. Taken within the context of a changing sector, this process will offer some insight into the priorities of the government and the UK funding councils. And the indicators are likely to have a much longer lifespan than the average political speech.

Performance indicators matter: reading between the lines as the indicators are refined and changed may help us understand the regulatory end point. What will the sector look like in ten years' time?

Institutions routinely include ambitious performance indicators within their own strategic and operational plans. These have a habit of taking on a life of their own, occasionally becoming ends in themselves at the expense of the values they represent. This in turn influences behaviour in ways that may or may not be helpful.

So the difficulty of determining a short and useful list of performance indicators for the whole sector should not be underestimated. In December, HEFCE issued an invitation to comment following the publication of a report it commissioned from NatCen and the IES, How should we measure higher education? The last wholesale review of the UKPIs was in 2006/07, when the policy context was very different.

Justifying government expenditure is a key aim of national performance indicators, so the stakes were raised after the financial crisis. How the sector responds to this new reality and devises ways to evaluate itself will be critical. The NatCen/IES report puts it well, stating that the UKPIs must 'fit with long-term strategies and policy priorities across the whole UK' (p vi).

The current UKPIs cover only a small range of topics: research output, widening participation, non-continuation rates and employment of leavers. In combination with the data from the 2008 RAE, this gives a collection of institution-level data largely focused on research and teaching.

A key recommendation from the recent report is to expand the list to take account of other elements of university activity such as ‘international outlook’ or ‘business engagement’. This seems sensible; universities are far more complex and multi-faceted than the current indicators suggest.

However, we quickly hit a series of problems. The moves afoot to smooth over differences in regulation between institutions of different types (David Willetts' 'level playing field') could actually lead to a more diffuse and diverse sector. The Higher Education Commission neatly summarises these differences, suggesting the steps needed to create that level playing field. For the UKPIs this means including many more, and increasingly diverse, providers of higher education.

Squaring the political desire to see all institutions assessed in a similar manner with the statistical and practical barriers to achieving that will be a challenge, to put it mildly. Indicators applicable to all may be so few in number as to render them virtually meaningless.

Another principle guiding the development of the UKPIs is the need to allow benchmarking of institutions against one another. The intention is that each UKPI should have built within it a clear indication of what represents a positive or negative score. This would make incorporating the data into national league tables very straightforward. Currently, only 'completion rate' data is fed directly into league tables from the UKPIs.

With league table position seen as a proxy for quality, this has potential repercussions: in the new world order, institutional reputation will depend to a greater extent on what is contained within the UKPIs. This will focus the minds of those responsible for those figures, and institutional effort and resource will likely follow. We have seen how this mechanism worked to raise the profile of teaching and learning after the NSS was rolled out in 2005.

Those interested in policy should keep watch for changes to the UKPIs over the coming months and years. What we will have at the end of the process is a suite of data that purports to define what is important in higher education: issues common to all institutions, whatever their history, size or shape. We must keep sight of what the UKPIs represent and the underpinning value judgements behind the decision to include, or not include, a measure as part of the set. This is less a discussion about metrics and more part of the wider debate about the direction of UK higher education itself.