A game of risk

Regulation is all about managing risk – how well do the proposals in the regulatory consultation do that? David Kernohan sees an uncomfortable financial parallel...

David Kernohan is Deputy Editor of Wonkhe

The English HE sector has a complex risk profile – we all know it is likely that an institution, somewhere, will not meet expected standards at some point. The question is, which one? The regulatory burden placed on all institutions has to be proportional to the potential for each individual institution to have problems.

Regulation is, at heart, the bureaucratisation of risk. Risks that would otherwise be taken on an individual basis are managed, centrally, by a body designated for that purpose. To aid the management of these risks, processes will be devised that allow for either early warning of emerging risks or their direct mitigation.

So – when we look at a consultation on regulatory reform, we need to ask whether the proposals meet these goals, and whether the proposed processes usefully add anything to risk management.

Assessing the assessment

So how does the new framework measure up? The first thing that stuck out for me was the use of a “Basic registration” category for institutions where most risk may exist. English higher education delivery institutions need to be just that – “English higher education delivery institutions” (each word is defined in the consultation) – in order to qualify for this registration category. In terms of ongoing requirements, all that is needed is a (nominal) annual fee and submission of data via the designated data body relating to the higher education courses in question. Institutions in this category cannot apply for degree awarding powers (DAPs), university title (UT) or Tier 4 designation (which would allow for the recruitment of international students).

It’s helpful to think here in terms of investment – the Approved categories are the premium product; Basic is… well… sub-prime. In financial instrument development terms, the OfS is recognising that this tranche of the overall English HE sector is more likely to present quality issues, and flagging it as such in order to limit the risk to the overall package.

You’d think that this approach would be useful in developing proportional risk management processes, which might impose a higher regulatory burden on such institutions. And you would be wrong.

Whose risk?

Despite the avowed focus on students at the heart of the system, the proposed regulatory framework is focused on managing the risk to the state. Only Approved (in receipt of state funding) institutions are required, for instance, to present a Student Protection Plan as a hedge against the possibility of institutional failure. Students at Basic institutions do not have this level of protection – institutional failure here is a risk borne by the student. This is doubly perplexing as we recall that this category of institution recruits primarily from groups that are underserved by traditional HE.

If the OfS is intending to be a market regulator, it must regulate the entire market. It will, after all, be held to account for doing so. If it wants to speak up for the interests of students, it needs to do so for all students. What is actually being done is laudable – accountability, effectively, for the use of state funding. But it is limited – it protects the state as the customer, rather than the student.

Information sharing

This pattern continues with the way information about institutions is presented. The register will be the primary means of gathering reliable independent information on a provider. Students applying to institutions in Approved categories will have a wealth of information to go on, ranging from TEF (suggested as mandatory) to detailed HESA (or other designated data body) statistics and QAA (or other designated quality body) reviews.

But correspondingly little information will be provided in the Basic category. OfS will collect lead indicators, but neither these – nor the overall risk profile of each institution – will be published. This almost certainly comes after concerted sector lobbying, but is – alas – the wrong decision.

The data used to construct these lead indicators is generally in the public domain – student numbers, NSS scores, financial data. But the overall judgement reached by OfS is not shared (as is currently the case with HEFCE financial risk monitoring), despite it being, in the Basic category, the most complete and informed judgement on institutional quality that may exist.
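
By way of illustration only, here is a minimal sketch – in Python – of the sort of crude judgement anyone could already assemble from that public data. Every metric name and threshold below is invented for the purpose, and none of it reflects actual OfS methodology.

    # Illustration only: a crude provider warning flag built from public data.
    # All metric names and thresholds are invented, not OfS methodology.
    from dataclasses import dataclass

    @dataclass
    class PublicMetrics:
        student_numbers: int      # published enrolment
        nss_satisfaction: float   # NSS overall satisfaction, 0.0 to 1.0
        surplus_ratio: float      # operating surplus as a share of income

    def warning_flags(m: PublicMetrics) -> list:
        """Return the warning signs a provider trips on public data alone."""
        flags = []
        if m.student_numbers < 300:     # invented viability floor
            flags.append("very small cohort")
        if m.nss_satisfaction < 0.70:   # invented satisfaction floor
            flags.append("low NSS satisfaction")
        if m.surplus_ratio < 0.0:       # running at a deficit
            flags.append("operating deficit")
        return flags

    print(warning_flags(PublicMetrics(250, 0.65, -0.02)))
    # -> ['very small cohort', 'low NSS satisfaction', 'operating deficit']

The point is not the arithmetic – it is that every input is already public, while the one judgement that synthesises them is not.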

The other potential information source in that category is the OfS formal risk assessment, carried out on registration. Again, this is not published. The current system requires QAA involvement for any kind of sector entry – and this involvement culminates in a published report. It is not clear why this change has been made, except to facilitate a low entry bar to OfS recognition.

And all institutions are expected to self-report “reportable events” such as a change of senior manager or owner, or a fraud investigation. Surely there are more robust ways to check for financial irregularities? Approved providers still submit annual financial returns – why not Basic ones?

Degrees of confidence

The Basic registration category is designed to “provide a degree of confidence for students that is not present in the current system, with providers in the Registered basic category being able to let students and other bodies know that they are recognised by the OfS as offering higher education courses.”

Yet registration in this category says nothing about the quality of said courses, and offers no safeguards for students who take them. The idea that it provides a degree of confidence is laughable – it adds the OfS mark of approval to any institution that can reasonably demonstrate that it is a higher education institution in England – something that most students could probably work out for themselves.

With all providers required to re-register with the OfS, even those currently on the HEFCE register, it feels like now should be the time to manage risk by making registration meaningful. The urge to grow and diversify the sector is a good one – but the interests of all students, be they supported by public funds or not, must be paramount.

Find the full Wonkhe index of all documents published by DfE here and all of Wonkhe’s coverage of the new framework at #Regulation.

3 responses to “A game of risk”

  1. Thanks, David. You make a number of good points but I wonder whether you are being too kind. In my view, the approach to regulation that is proposed by the consultation document is not genuinely risk-based, and, if the proposals were to be implemented, we would be stepping back from the more sophisticated approach that had been advocated by HEFCE.

    My concerns are prompted by your reference in the first paragraph to the attempt to ensure that the ‘regulatory burden’ is ‘proportional to the potential for each institution to have problems’. The operative word is ‘potential’, indicating (as the consultation document puts it) that a risk-based approach should ‘identify and respond to emerging risks early on’, and ‘anticipate the future threats, challenges and opportunities that may not immediately be apparent’.

    How would we (and how would OfS) identify these emerging risks and know that a provider has the potential to have problems? The short answer is that our arrangements for overseeing providers would employ ‘lead indicators’. The term ‘lead indicator’ appears frequently in the consultation document (32 times to be precise), and on page 175 it is correctly defined as an indicator that would ‘allow OfS to anticipate future events’.

    The problem is that most of the proposed indicators fall into the ‘lag’ and not the ‘lead’ category. Consistent with the Department’s preoccupation with an ‘outcomes-based’ approach, TEF results, student progression and non-completion rates, the incidence of complaints and graduate employment data are all measures of a provider’s past and current performance and not of its potential to have problems in the future.

    It is only when the OfS tests a provider’s eligibility for approved status, or for authorisation to award degrees, that there will be a serious assessment of its risk potential and of its competence in handling the risks it encounters. As Catherine Boyd has pointed out elsewhere on this site (What has happened to quality?), once admitted to the fold a provider will not be subject to ‘routine reassessment’. Instead, providers will be monitored using risk indicators of the kind listed in paragraphs 236-243 of the consultation document.

    Problems will happen, things will change. There is no guarantee that, after its initial assessment for registration or DAPs, a provider will continue to present a low level of risk potential, and remain competent in managing any risks that might arise. If the OfS is going to be reliant on the types of indicator and metric described in the consultation document, it won’t know that things are going wrong until the risks have been realised and by then students will have suffered the consequences.

  2. Three genuine questions for Colin:

    1) What is the sophisticated part of HEFCE’s current approach that you feel we should maintain?

    2) What leading indicators do you have in mind? Yes, NSS scores detail past student satisfaction, but looking at a time series of NSS data can indicate the direction of travel of student satisfaction, and therefore help regulators to act before satisfaction descends too far (see the sketch after these questions).

    3) How does one objectively assess an organisation’s ability to manage risk?
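
    On (2), here is a minimal sketch of the direction-of-travel idea – the scores and the alert threshold below are invented purely for illustration:

        # Sketch: treat the slope of a satisfaction time series as a lead
        # indicator. Scores and threshold are invented for illustration.
        def slope(ys):
            """Least-squares slope of ys against years 0, 1, 2, ..."""
            xs = range(len(ys))
            mx = sum(xs) / len(ys)
            my = sum(ys) / len(ys)
            num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            den = sum((x - mx) ** 2 for x in xs)
            return num / den

        nss = [0.86, 0.84, 0.81, 0.77]   # hypothetical four-year series
        trend = slope(nss)
        if trend < -0.02:                # hypothetical alert threshold
            print(f"satisfaction falling at {trend:.3f}/year - act early")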

  3. Thanks, Alex, for your questions. I can’t do justice to them in a brief note but, for the moment, my responses are …

    (1) The ‘sophisticated’ features of HEFCE’s Revised Operating Model (ROM) would include the routine focus on governance and, specifically, the monitoring of institutional performance through the Annual Provider Review process, together with the possibility that Assurance Reviews might be used to check, inter alia, a provider’s risk potential and its competence in the management of risk.

    (2) I would hesitate to extrapolate into the future from NSS outcomes and other statistical measures of an institution’s performance. As the Financial Services Authority has reminded us, ‘past performance is of little indicative value’ – it is an ‘uncertain or potentially misleading guide’.

    On the ‘lead indicator’ question, one might look no further than the Australian quality agency, TEQSA. Many of the measures that the consultation paper describes as lead indicators are, in fact, classified by TEQSA as lag indicators. TEQSA offers us two examples that fall into the ‘lead’ category – staff-student ratios and the proportion of staff on casual contracts.

    (3) We could have a long and philosophical debate on the meaning of ‘objectivity’! That aside, I would argue that, in effect, QAA Audit provided relatively reliable assessments of institutions’ competence in the management of their responsibilities (and risk) and that Quality Review Visits have the potential to do so. Whatever method is used, this would necessarily entail peer judgements based on an assessment of qualitative as well as quantitative data.

    The last point takes me back to the first. Evidence of HEFCE’s ‘more sophisticated’ approach is its recognition (in para 109 of the ROM) that, rather than adopt a ‘crude metrics-driven approach’, assessments of risk should employ what Wilsdon had termed ‘a variable geometry of expert judgement, quantitative and qualitative indicators’.

    The issues you raise are dealt with in some detail in a paper (Risk, Regulation and Quality Management) that you will find on the Academic Audit Associates’ website. And, if you would like to discuss this further, you can contact me through that website.
