Numbers, data, and metrics are the dominant language spoken in higher education today.
Anything that can be counted is counted, and many things that aren’t already measurable are being prepared for measurement. Putting students at the heart of the system has become a relentless political project of using student data to demonstrate the economic value of a degree.
But numbers should not be the only way of talking about the reality of HE. As numeric values, data cannot account for much of the complexity, messiness, or public value of education, research and management.
One of the ongoing crises of HE is a growing distrust of data-led management and regulation. Alternative approaches to HE data are required to address this, particularly to challenge the highly politicised metrics of market comparison and performance management. Together, academics, policy wonks, sector agencies and policymakers should be seeking to draw on a fuller range of quantitative and qualitative methods to help interpret and narrate the practices and experiences of staff and students in universities.
Data collected about students – their learning, satisfaction, and experience – is currently the focus of a politically motivated “number-mania” sweeping through the sector.
This number-mania can be seen in many current developments. At a large scale, HESA’s Data Futures programme is upgrading the national infrastructure for recording and reporting student data and is setting up the new Graduate Outcomes survey to follow students into employment. Jisc has created a national “learning analytics architecture” for analysing historical and real-time student data. The QAA has scraped social media to gather student sentiment data and aggregated it into “collective judgment scores” on student satisfaction, suggesting universities could use that intelligence to anticipate TEF and NSS results.
As a data-led regulator, the Office for Students is using its performance metrics to target “bad universities”. It has additionally stated its commitment to learning gain metrics – based on analysing student competence at the start and end of their studies – to measure the quality of education and aims to rate providers and courses by graduate earnings. Although a universal learning gain proxy may be unlikely, it reflects a strong political desire to quantify learning as a way of comparing and ranking institutional performance.
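To see why a learning gain proxy is so contested, it helps to spell out the simplest version of the idea: compare a measure of competence at entry with the same measure at exit. The sketch below is purely illustrative; the scores are invented and the “gain = exit minus entry” model is an assumption, not any regulator’s actual methodology.

```python
# Illustrative sketch only: learning gain as a naive pre/post comparison.
# The cohort scores and the "gain = post - pre" model are assumptions
# made for illustration; real learning gain proposals are more contested.

def learning_gain(pre: float, post: float) -> float:
    """Naive learning gain: difference between exit and entry scores."""
    return post - pre

def cohort_gain(scores: list[tuple[float, float]]) -> float:
    """Average naive gain across a cohort of (pre, post) score pairs."""
    return sum(learning_gain(pre, post) for pre, post in scores) / len(scores)

cohort = [(52.0, 68.0), (61.0, 64.0), (45.0, 70.0)]
print(round(cohort_gain(cohort), 1))  # → 14.7
```

Even this toy model shows the problem: a different test, scale, or aggregation rule would produce a different number, yet any such figure could be presented as an objective measure of institutional performance.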
One result of number-mania is that HE has been opened up as a market opportunity for software companies and education businesses to sell data services solutions to universities. The Department for Education has even seed-funded startup companies to build prototype apps that present graduate earnings data to prospective students. There is “clearly a market opportunity”, said universities minister Sam Gyimah, for apps that show students the return on investment they can expect from certain courses or providers.
No doubt many of these efforts will generate useful data for careful and inventive analysts to examine, visualise and publish. But framing data through an emphasis on performance, market values, competition and comparison reveals that numbers are being treated less as science and more as politics.
This technocratic number-mania rests on some assumptions about statistical authority that social science has questioned. Many social researchers in recent years have begun to study the production and uses of data. One key insight is that data does not exist separate from the social practices that brought it into being.
Data, we now know, is not neutral, innocent, or impartial. It is influenced by the decisions and choices made at every point in its generation. Dynamic social lives lie behind the numbers, and data cannot speak for itself. Data has to be made and then trained to talk. This also means that claims to objectivity have to be treated cautiously, since different techniques and analyses could always have generated different numbers and results.
In reality, people with numerical expertise make HE data, and they also make it talk, often by scripting what they want it to say. A university made out of numbers is not the same one that was measured; it is a “data double”, an aggregated representation put together from digital traces. What that data double tells others about the institution, however, shapes what they think about it, influencing their choices and decisions.
Increasingly people are refusing to trust the numbers or the experts who made them.
Sociologist William Davies argues that a recent plunge in trust in technocratic experts has been mirrored by the rise in populism. People have learned to distrust facts because experts appear to act out of self-interest, or use their privileged position to reproduce their political standpoints. Instead of facts, people have learned to rely on their feelings, often shaped by real-time social media trends, political misinformation, and media manipulation on the web.
The number-mania of HE is proceeding oblivious to fading public trust in numbers and objectivity, or to the general feeling of distrust within universities about excessive performance measurement.
The challenge to technocratic experts in HE data comes less from populism than from a position of scientific scepticism. For instance, both the UK Statistics Authority and the Royal Statistical Society recently berated the Department for Education for misleadingly “messaging” its own statistics about schools. Alongside public distrust in numbers, statistical authorities and social science researchers alike have begun to scrutinise the political handling of data.
It shouldn’t come as a big surprise if authorities and academics with a deep understanding of how data is constructed and (mis)used in society turn their sceptical gaze on HE data in the same way.
Adding quality to quantity
Nothing is wrong with using numbers in sensible measures. The responsible metrics agenda in research assessment is a good example. But if numerical data about students is going to be a meaningful and trusted source of insight, then new approaches are necessary. So how can we report what counts without counting? And how can we avoid talking the language of data and metrics as if everything meaningful about HE can be measured?
If widespread counting is to continue, it should be done in recognition of the limits of numbers, and extend to qualitative methods that may reveal different accounts of HE experience. Qualitative researchers have a huge range of methods and resources for generating insights from complex social spaces and practices. Social scientists can examine the contemporary university up-close, rather than relying on numbers to do so at a distance.
Methods here could include in-depth interviews with appropriate samples of students across different institutions and courses, gathering subjective datasets of personal experiences as an alternative to survey scores. Longer-range ethnographies would help us understand student experience in much more contextualised and fine-grained ways. The real-time REF review aims to evaluate perceptions and experiences of research assessment across different universities and disciplines, yet we lack similarly granular studies of the diverse student experiences underlying their data.
Such studies would complement rather than entirely replace statistics. HESA’s Data Futures could, for example, highlight issues in the statistics requiring qualitative examination and interpretation, opening up possibilities for collaboration between sector agency data specialists and social science researchers.
Another option is to co-produce inventive methods by bringing together social and data scientists. Social scientists can bring insights into social dynamics that help data analysts formulate questions, study datasets and generate meaningful visualisations. Done ethically, experiments such as the QAA’s student sentiment study could be extended to text mining and visual analysis of social media images produced by students. This data would provide qualitative and quantitative entry points for careful interpretive analysis, rather than fixed scores.
An arts-based practice could generate new ways of creating, visualising and engaging with data of many kinds. Numerical data does not always have to be presented as ranked metrics. Digital humanities researchers have experience in inventive analyses and visualisations of textual and visual data too. A willingness to be more experimental with HE data could lead to fresh perspectives on the accomplishments of providers, courses and student cohorts.
A shared narrative
Bringing together the specialists who create large student datasets with qualitative researchers in these ways would lead to the co-production of insights, stimulate new lines of inquiry, and generate new understandings of HE for presentation to the public, policymakers, the media, and the sector itself. These forms of co-production might involve student and staff participants too, making their subjective experiences heard in qualitative rather than only numerical terms.
Changing the HE culture of quantification to accept qualitative accounts as evidence will take time, patience, and perseverance. If HE is to convincingly demonstrate its value and values—both to a sceptical public outside and a sceptical academic body inside—speaking in numbers alone won’t be enough. We’ll need meaningful qualitative narratives too.