
The messy world of tidy outcomes

For James Coe, the unique strengths of higher education are at risk from the excessive standardisation of targets and performance measures

James Coe is Associate Editor for research and innovation at Wonkhe, and a partner at Counterculture

Earlier this week my colleague Debbie McVitty wrote about the conflict between the tidy data requirements of the Office for Students and the messy, mixed-up world that is universities, with all of their strengths, weaknesses, and quirks.

As she put it:

A high-handed response would scoff at this proposition [giving notice on changes to TEF] and argue that good, autonomous universities should be across their data, evidence, and practice and the TEF should simply be translating that work. That could be technically true, but it would not be fair or reasonable. If the TEF is to support teaching enhancement at all, it needs to take meaningful account of the conditions for that work to take place.

We also heard from Shân Wareing, who wrote about how the TEF had focussed organisational activity and given her university a single source of truth from which decisions can be taken and plans can be made for the future.

Taken together, both pieces speak to an exercise that, while by no means perfect, has implicitly achieved part of its purpose. The TEF may not inform student choice, and it may not have addressed the uneven power dynamics between students and their universities, but it has undoubtedly forced providers to take a moment to consider student experiences and how they wish to present them to the world.

Centrist DAPs

The TEF is also indicative of OfS’ changing role. Across all of its target setting, new initiatives, and interventions, the underpinning ethos is that universities should make decisions based on data and then be held publicly accountable for them. There is disagreement about whether these targets are right or even measurable, but whether it is graduate outcomes, student experiences, access, or student success, data is central to the regulator’s approach.

Returning to Debbie’s idea of the messy and the tidy, one of the central tenets of our sector is not only university autonomy but academic autonomy too. Historically, this means schools, departments, and faculties have grown up with their own ways of working, practices, and approaches to student support, with varying degrees of central control. As well as driving activity on their own, professional services staff variously constrain work where regulation requires it and enable it where it fits organisational purpose.

Universities sit on a spectrum of autonomy. At one end are universities that are largely federated with looser central control; at the other, universities with much stronger central direction. This is not to say that any one way of working is better. Each university alights on this spectrum based on its own circumstances and history. The extent to which it is possible to change the way a university works is a debate for another blog, and the life’s work of many vice chancellors.

In place of strife

The intended consequence of the TEF is that it discourages too wide a divergence by departments, schools, and faculties from a university’s regulated activity. Clearly, it would not be sensible to have every department follow an unrelated set of access or outcome targets.

This means there must be a single point of truth, a single provider who is held to account, and a single submission that captures the sum total of a university’s activity. There is no space for “this subject is just different”, “this school collates and acts on a different set of data”, or “these departments have chosen to act on a different set of student issues”.

The unintended consequence is that measurement could hit the target but miss the point. In the debate over learning gain, it has been loosely accepted that it is hard to measure, and that it is therefore acceptable for providers to come up with lots of ways to describe the work they are doing. To put it another way, whether students are actually improving their knowledge or skills is left to a set of proxy measures. In part, this is surely because there is no target, no measurement, and the activity is too diffuse. Put simply, it is not tidy enough for a regulatory system which abhors mess.

From messiness comes discovery, but variation is not always a good thing at a provider level. The use of split metrics and subject investigations helps to stop larger providers from hiding bad practice within institutional averages. However, in a centralised system, it is much harder to make the case for departments or schools to address issues where there are no central targets. This is not to say university staff will not act on the issues they think are important, but whether universities choose to invest resources in these areas is a separate question.

The future is surely the even greater centralisation of universities. With this will come benefits, like recognition of the importance of the professional services staff who keep data, insight, and operations ticking over. It may bring efficiencies, and it may even bring better student experiences, but it would be a loss if it crowded out the interesting and novel work that comes from the self-directed interests of staff addressing crucial issues which are not measured by the regulator, and therefore not nourished by providers.
