Policy in numbers – what counts without counting?

Ben Williamson contrasts the rise of HE metrics with the wider global distrust of experts, and makes a case for better use of qualitative methods in policy making.

Ben Williamson is a Chancellor’s Fellow in research at the Edinburgh Futures Institute and the Centre for Research in Digital Education at the University of Edinburgh.

Numbers, data, and metrics are the dominant language spoken in higher education today.

Anything that can be counted is counted, and many things that aren’t already measurable are being prepared for measurement. “Putting students at the heart of the system” has become a relentless political project, one that uses student data to present the economic value of a degree.

But numbers should not be the only way of talking about the reality of HE. As numeric values, data cannot account for much of the complexity, messiness, or public value of education, research and management.

One of the ongoing crises of HE is a growing distrust of data-led management and regulation. Alternative approaches to HE data are required to address this, particularly to challenge the highly politicised metrics of market comparison and performance management. Together, academics, policy wonks, sector agencies and policymakers should be seeking to draw on a fuller range of quantitative and qualitative methods to help interpret and narrate the practices and experiences of staff and students in universities.

Number-mania

Data collected about students – their learning, satisfaction, and experience – is currently the focus of a politically motivated “number-mania” sweeping through the sector.

This number-mania can be seen in many current developments. At a large scale, HESA’s Data Futures programme is upgrading the national infrastructure for recording and reporting student data, and its new Graduate Outcomes survey will follow graduates into employment. Jisc has created a national “learning analytics architecture” for analysing historical and real-time student data. The QAA has scraped social media to gather student sentiment data and aggregated it into “collective judgment scores” on student satisfaction, suggesting universities could use that intelligence to anticipate TEF and NSS results.
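
The mechanics behind a “collective judgment score” are not spelled out here, but the basic move – scoring individual posts and aggregating them into a single figure – can be sketched. Below is a minimal, purely illustrative Python example; the lexicon, scoring rule and aggregation are invented stand-ins, not the QAA’s actual method.

```python
# Minimal sketch of lexicon-based sentiment aggregation over scraped posts.
# Everything here is illustrative: the word lists, the scoring rule, and the
# aggregation are simplified stand-ins, not any agency's real pipeline.

POSITIVE = {"great", "helpful", "supportive", "excellent", "love"}
NEGATIVE = {"poor", "unhelpful", "stressful", "terrible", "hate"}

def post_sentiment(text: str) -> int:
    """Score one post: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def collective_judgment(posts: list[str]) -> float:
    """Aggregate per-post scores into a single satisfaction figure in [-1, 1]."""
    if not posts:
        return 0.0
    # Clamp each post to -1/0/+1 so one effusive post cannot dominate.
    clamped = [max(-1, min(1, post_sentiment(p))) for p in posts]
    return sum(clamped) / len(clamped)

posts = [
    "The library staff are so helpful and supportive",
    "Feedback on essays is poor and the process is stressful",
    "Love my course, excellent teaching",
]
print(f"Collective judgment score: {collective_judgment(posts):+.2f}")
```

Every design decision in even this toy version – which words count, how posts are weighted, how scores are combined – changes the resulting number, which is precisely the point the article goes on to make about data being “made”.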

As a data-led regulator, the Office for Students is using its performance metrics to target “bad universities”. It has also stated its commitment to learning gain metrics – based on comparing student competence at the start and end of their studies – as a measure of the quality of education, and it aims to rate providers and courses by graduate earnings. Although a universal learning gain proxy may be unlikely, the commitment reflects a strong political desire to quantify learning as a way of comparing and ranking institutional performance.
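
At its simplest, the arithmetic behind a learning gain metric is a pre/post difference. Here is a minimal sketch with invented scores; real proposals differ in instrument, scale and statistical adjustment, which is partly why a universal proxy remains elusive.

```python
# Minimal sketch of a pre/post "learning gain" calculation. The cohort and
# test scores are invented; the point is only to show the basic arithmetic
# that provider-level comparisons would be built on.

cohort = [
    {"student": "A", "entry_score": 52, "exit_score": 71},
    {"student": "B", "entry_score": 68, "exit_score": 74},
    {"student": "C", "entry_score": 45, "exit_score": 66},
]

gains = [s["exit_score"] - s["entry_score"] for s in cohort]
mean_gain = sum(gains) / len(gains)

print(f"Mean raw learning gain: {mean_gain:.1f} points")
# A provider-level metric would then rank institutions on figures like this,
# treating the instrument's scale as if it were commensurable across courses.
```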

One result of number-mania is that HE has been opened up as a market opportunity for software companies and education businesses to sell data services and solutions to universities. The Department for Education has even seed-funded startup companies to build prototype apps that present graduate earnings data to prospective students. There is “clearly a market opportunity”, said universities minister Sam Gyimah, for apps that show students the return on investment they can expect from certain courses or providers.

No doubt many of these efforts will generate useful data for careful and inventive analysts to examine, visualise and publish. But framing data through an emphasis on performance, market values, competition and comparison reveals that numbers are being treated less like science and more like politics.

Data doubles

This technocratic number-mania rests on some assumptions about statistical authority that social science has questioned. Many social researchers in recent years have begun to study the production and uses of data. One key insight is that data does not exist separate from the social practices that brought it into being.

Data, we now know, is not neutral, innocent, or impartial. It is influenced by the decisions and choices made at every point in its generation. Dynamic social lives lie behind the numbers, and data cannot speak for itself: it has to be made and then trained to talk. This also means that claims to objectivity have to be treated cautiously, since different techniques and analyses could always have generated different numbers and results.

In reality, people with numerical expertise make HE data, and they also make it talk, often by scripting what they want it to say. A university made out of numbers is not the same one that was measured: it is a “data double”, an aggregated representation put together from digital traces. What that data double tells others about the institution shapes what they think of it, influencing their choices and decisions.

Distrusting technocracy

Increasingly people are refusing to trust the numbers or the experts who made them.

Sociologist William Davies argues that a recent plunge in trust in technocratic experts has been mirrored by the rise in populism. People have learned to distrust facts because experts appear to act out of self-interest, or use their privileged position to reproduce their political standpoints. Instead of facts, people have learned to rely on their feelings, often shaped by real-time social media trends, political misinformation, and media manipulation on the web.

The number-mania of HE is proceeding oblivious to fading public trust in numbers and objectivity, or to the general feeling of distrust within universities about excessive performance measurement.

The challenge to technocratic experts in HE data comes less from populism than from scientific scepticism. For instance, both the UK Statistics Authority and the Royal Statistical Society recently berated the Department for Education for misleadingly “messaging” its own statistics about schools. Alongside public distrust in numbers, statistical authorities and social science researchers alike have begun to scrutinise the political handling of data.

It should come as no surprise if authorities and academics with a deep understanding of how data is constructed and (mis)used in society shift their sceptical gaze to HE data in the same way.

Adding quality to quantity

Nothing is wrong with using numbers as sensible measures. The responsible metrics agenda in research assessment is a good example. But if numerical data about students is going to be a meaningful and trusted source of insight, then new approaches are necessary. So how can we report what counts without counting? And how can we avoid talking the language of data and metrics as if everything meaningful about HE can be measured?

If widespread counting is to continue, it should be done in recognition of the limits of numbers, and extend to qualitative methods that may reveal different accounts of HE experience. Qualitative researchers have a huge range of methods and resources for generating insights from complex social spaces and practices. Social scientists can examine the contemporary university up-close, rather than relying on numbers to do so at a distance.

Methods here could include in-depth interviews with appropriate samples of students across different institutions and courses, gathering subjective datasets of personal experiences as an alternative to survey scores. Longer-range ethnographies would help us understand student experience in much more contextualised and fine-grained ways. The real-time REF review aims to evaluate perceptions and experiences of research assessment across different universities and disciplines, yet we lack similarly granular studies of the diverse student experiences that sit beneath the sector’s student data.

Such studies would complement rather than entirely replace statistics. HESA’s Data Futures could, for example, highlight issues in the statistics requiring qualitative examination and interpretation, opening up possibilities for collaboration between sector agency data specialists and social science researchers.

Another option is to co-produce inventive methods by bringing together social and data scientists. Social scientists can bring insights into social dynamics that help data analysts formulate questions, study datasets and generate meaningful visualisations. Done ethically, experiments such as the QAA’s student sentiment study could be extended to text mining and visual analysis of social media images produced by students. Such data would provide qualitative and quantitative entry points for careful interpretive analysis, rather than fixed scores.

Arts-based practices could generate new ways of creating, visualising and engaging with data of many kinds. Numerical data does not always have to be presented as ranked metrics. Digital humanities researchers have experience in inventive analyses and visualisations of textual and visual data too. A willingness to be more experimental with HE data could lead to fresh perspectives on the accomplishments of providers, courses and student cohorts.

A shared narrative

Bringing together the specialists who create large student datasets with qualitative researchers in these ways would lead to the co-production of insights, stimulate new lines of inquiry, and generate new understandings of HE for presentation to the public, policymakers, the media, and the sector itself. These forms of co-production might involve student and staff participants too, making their subjective experiences heard in qualitative rather than only numerical terms.

Changing the HE culture of quantification to accept qualitative accounts as evidence will take time, patience, and perseverance. If HE is to convincingly demonstrate its value and values—both to a sceptical public outside and a sceptical academic body inside—speaking in numbers alone won’t be enough. We’ll need meaningful qualitative narratives too.

8 responses to “Policy in numbers – what counts without counting?”

  1. This is a very interesting piece and is a useful contribution to the debate about the use (mis-use?) of data in HE.

    I think it would be helpful to untangle some of the things that are referenced here. There is a difference between initiatives to upgrade the sector’s data infrastructure (eg Data Futures and Graduate Outcomes) and concerns about the way in which data is used in management, policy and “public information”.

    I think this debate should concentrate on the latter; the former is largely a question of plumbing…

  2. I appreciate what you’re saying Andy. My immediate response is that the ‘plumbing’ matters too – the data infrastructure provides the ‘pipework’ for the flow of data that may then be used in management and policy. And the ‘taps’ can of course be switched to limit or unleash, heat or cool what sprays out. The configuration of the boiler makes a significant difference to the material flowing through it. That at least is how the field of ‘infrastructure studies’ might approach it, and I have tried to in my own work on HE data infrastructure. The infrastructure sets the rules for what can be captured, how it can be analysed, how it can be presented, and how ultimately it can be consumed and used. I want to be careful here to acknowledge the usefulness of those plumbing projects – and the great care taken by the statistical experts who manage them and the analyses they produce – while also exploring how strong political discourse and priorities to measure universities in certain ways might ‘wash back’ into the system to colour or channel what comes out of the taps.

  3. A thoughtful and stimulating article Ben, and very topical; the sector needs to better explain how it creates and sustains value over short, medium and long term periods. The current mono-focus on students and earnings is misguided. However, the message remains current because a significant number of stakeholders attach importance to it. We need to broaden the discussion of what universities are for and how they are a positive influence on everyone, not just those who work and study there. I agree with you on the need for quantitative and qualitative measures to be combined to present a better context. This could also be framed as financial and non-financial measures. There is a very good report out this month from the Financial Reporting Council, titled Performance metrics – how to improve reporting, which echoes the need to consider more than just traditional, narrowly-focussed metrics when considering the performance of an organisation.

  4. That FRC Performance Metrics report looks useful, Phil – especially the ‘principles’ it suggests of transparency in how metrics are calculated and defined, and more narrative accounts of the contexts and cultures that metrics can only partly record. Looks like a key issue in the report is also whether people trust the metrics – are they credible, consistent, reliable, valid etc? Clearly HE metrics have many audiences – policy, biz intel, strategic managers, the media, students – it would be interesting to know how much trust they put in the numbers, or whether they look for alternative sources. Do students really make data-driven decisions about how to choose a degree?

  5. Thanks for a comprehensive and plumbing-filled response.

    I *think* we’re basically in the same place on this: a more powerful infrastructure could enable more questionable things to be done – though it is not the case that the former inevitably leads to the latter.

    For me this emphasises the importance of two things: first, the need to improve data capabilities at all levels across all producers and consumers of data; and second, the importance of good governance across data and information operations, both within institutions and at a sector level.

  6. I agree of course Andy that the infrastructure itself does not inevitably produce specific outcomes – like all tech, the way it is produced is always socially contingent, and the way it is used / misused / not-used also highly shaped by context. But it does also of course make certain kinds of actions possible. I think you’re right about improving data capabilities and governance to protect from mis-use. But the lessons I think we can also take from social science and arts/humanities research about data and infrastructures are that (1) the numbers themselves (or their visualization as metrics) are not ‘reality’ – they could always have been produced in a different way if the infrastructure was built differently, leading to different results and conclusions (2) they risk being decontextualized, and presented as ‘comparative’ metrics as if everything is commensurable (3) the current political fixation on counting and comparing ‘competitor performance’ risks washing back to re-shape the kind of data practices that analysts and statisticians undertake. Comparative data dashboards produced via biz intel packages are one concrete example of this third issue – they strengthen and expand the (politically favoured) model of competitor analysis, indicator monitoring, and performance benchmarking – even if the designers of the infrastructure had no such objective in mind!

  7. Good article. In terms of qualitative feedback on the student experience, at Cardiff we’ve worked with one of our applied linguistics academics to undertake a sentiment analysis of responses to the free text NSS questions. This has been a really interesting area to explore further, as it is a rich dataset. We have also looked at sentiment analysis of our students’ Twitter activity, which was an interesting precursor for some innovative quality assurance work the OfS are – I believe – considering.
