It appears that metrics will have a major regulatory function in the operation of the Office for Students (OfS). There are to be no more cyclical institutional visits or annual provider reviews to assess quality. Instead, the emphasis falls heavily on ‘leading indicators’ to assess risk across the sector.
But metrics have their drawbacks, particularly when isolated from broader approaches, such as pedagogic theory and behavioural science. For example, deriving lessons from collected student learning data is difficult without knowing the original pedagogic designs of the teacher.
The OfS cannot regulate effectively if it becomes over-reliant on data and indicators. Nor will it elicit trust, which requires accepting uncertainty about future behaviour and tolerating errors, and thus taking on a degree of regulator vulnerability. Empirical evidence suggests that trust in the regulator-regulated relationship is an important driver of compliance and organisational responsiveness.
Of course, if institutions could be trusted all the time, we would not need regulatory-driven accountability. But this does not mean that they should be distrusted most of the time, either. The rationality of data and indicators fits well with notions of procedural justice and fair play that can encourage trust at the sector level. But it does little at the more important local level where personal relationships have to be built to foster trust. ‘Governing at a distance’ through the monitoring of data and key indicators will not be enough to strengthen the situational incentives for moral conduct. And although trust can be abused, targets and performance goals are more likely to work when the regulatory culture is right.
Data-driven models have their own intrinsic problems. Metrics often distort, especially as higher education is a complex activity. When sector or organisational accountability cultures are predisposed to self-regulation, regulatory performance indicators are regarded more as big sticks than as soft, persuasive encouragement. Moreover, big data lends itself to correlation rather than causality. While more and more correlations will continue to be produced, many will be spurious and theoretically unexplained.
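As a minimal illustration (not drawn from any real regulatory dataset), the following Python sketch screens thousands of purely random ‘indicators’ against an equally random target. Impressive-looking correlations appear by chance alone, which is exactly the spurious-correlation problem described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 'target' measure (say, a satisfaction score) and many unrelated metrics.
# Everything here is pure noise: there is no real relationship to find.
target = rng.normal(size=50)
metrics = rng.normal(size=(10_000, 50))  # 10,000 candidate indicators

# Pearson correlation of each candidate metric with the target.
corrs = np.array([np.corrcoef(m, target)[0, 1] for m in metrics])

# Screening enough noise series still yields 'strong' correlations.
print(f"max |r| found: {abs(corrs).max():.2f}")
print(f"candidates with |r| > 0.3: {(abs(corrs) > 0.3).sum()}")
```

With only 50 observations per series, a handful of the 10,000 noise indicators will correlate ‘strongly’ with the target by chance, and none of them is theoretically explained.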
HESA, which already has its work cut out interpreting and understanding a diverse sector within a single data model (notions of a course or a full-time student can vary significantly between institutions), will have many compromises to strike in fulfilling a Designated Data Body remit for the OfS. As the TEF moves to include subject-level assessment of institutions, these data mixes, variable cultures, and concerted and competitive attempts to ‘play the game’ are likely to undermine the very data and processes they are intended to measure.
Metrics will probably have a greater and more accepted role in research evaluation through the REF, although this is likely to vary by discipline. Wilsdon (2015) nonetheless found considerable scepticism among researchers, universities, and representative bodies about the broader use of metrics in research management. Peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals, and individuals.
For some time to come, in both research and teaching evaluation frameworks, we are likely to see a variety of methods, including expert judgement, quantitative indicators and qualitative measures that respect differing institutional cultures. Yet there will be strong sector pushback to insist on transparency – keeping data collections, and the algorithms used, open so that those being evaluated can test and verify the results. Although published data and indicators have the potential to bring greater transparency to processes of risk regulation, they also risk obscuring accountability behind overly technical and opaque machinery.
But the nature of data collection by regulators will change, too. Although still in its infancy, analysis of posts on dedicated patient and student feedback sites, and on Twitter and Facebook, appears to provide relatively accurate representations of patients’ ‘collective voice’ or students’ consumer opinions. In time, social media data may well become of more real-time regulatory interest than the NSS or sporadic quality reviews by the QAA.
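A crude sketch of how such ‘collective voice’ scoring might begin: the word lists and posts below are invented for illustration only, and real systems rely on far richer language models than simple word counting:

```python
# Illustrative word lists; a production system would use trained
# sentiment models, not hand-picked vocabulary.
POSITIVE = {"helpful", "engaging", "excellent", "supportive", "clear"}
NEGATIVE = {"confusing", "unhelpful", "poor", "late", "cancelled"}

def sentiment(post: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "Lectures were engaging and the feedback was clear",
    "Seminars cancelled again and marking was late",
]
scores = [sentiment(p) for p in posts]
print(scores)  # first post scores positive, second negative
```

Aggregated over thousands of posts, even scores this crude can begin to indicate the direction of collective opinion in something closer to real time than an annual survey.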
Widening the data conversation
Data bodies will also have to try harder to stop talking mainly to themselves. They will need to ally with more complementary entities, such as the ‘nudge’ policy unit in the Cabinet Office or university departments of behavioural economics. Regulators outside higher education are showing the way, and the OfS consultative papers (October 2017) show a welcome willingness to adapt some of the monitoring techniques of outside bodies.
A recent report for the Quality Assurance Agency on ‘Data-Driven Risk-Based Regulation’ by this author outlines how regulators outside higher education are increasingly sophisticated in their use of data for evaluative purposes. The Financial Conduct Authority (FCA), for example, has a key objective of ensuring that its big data analysis is underpinned by econometrics. Behavioural economics helps guide how the consumer market is designed and how choices are presented to participants, providing greater transparency and better-informed demand. In retail insurance, the FCA has begun requiring disclosure of the premium consumers paid the previous year. This is proving the most effective way of prompting consumers to shop around and then switch or renegotiate their home insurance policy to achieve substantial savings. The OfS may well be tempted to experiment with such methods once it is fully up and running.
Another such entity, HM Revenue and Customs (HMRC), is, perhaps surprisingly, undertaking leading-edge behavioural work alongside its vast databases to improve taxpayer compliance. Predictive analytics – which integrates data with behavioural insight – is being used to identify taxpayers who are likely to need help in staying compliant. Where a taxpayer is in difficulty and falls into debt, these techniques provide remarkably accurate information on whether they are likely to resolve it relatively soon. HMRC models are also used to predict who is likely to miss key tax-return submission deadlines, so that a gentle nudge in the right direction can be given, avoiding work for HMRC that need never arise.
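HMRC’s actual models, features, and weights are not public. As a purely illustrative sketch of the general technique – a risk score that triggers a nudge above a threshold – something like the following could sit at the core, with every feature and weight here invented for illustration:

```python
import math

def deadline_risk(days_since_last_filing: int,
                  missed_before: bool,
                  has_agent: bool) -> float:
    """Toy logistic score: probability of missing the next deadline.

    The features and weights are illustrative only, not fitted to
    any real taxpayer data.
    """
    z = (-2.0
         + 0.004 * days_since_last_filing   # staleness raises risk
         + 1.5 * (1 if missed_before else 0)  # past misses raise risk
         - 0.8 * (1 if has_agent else 0))     # an agent lowers risk
    return 1 / (1 + math.exp(-z))

# Hypothetical taxpayers; those above the threshold get a reminder 'nudge'.
taxpayers = [
    {"id": "A", "days": 400, "missed_before": True,  "has_agent": False},
    {"id": "B", "days": 200, "missed_before": False, "has_agent": True},
]
for t in taxpayers:
    p = deadline_risk(t["days"], t["missed_before"], t["has_agent"])
    if p > 0.5:
        print(f"nudge {t['id']} (risk {p:.2f})")
```

The regulatory point is the design, not the arithmetic: the intervention (a reminder) is cheap, targeted at predicted non-compliance, and applied before the deadline passes rather than after.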
The OfS should be encouraged to further consider behavioural theory and its various insights, such as those contained in ‘nudge’ theory, and thus design interventions that incentivise compliance from the outset. Although the regulator appears to be setting its face against adopting an ‘enhancement’ function, actively pursuing policy designs to encourage compliance and good regulatory standing among institutions should be high on its priority list. This will save time and resources in the long run.