The only thing that everyone in the HE sector agrees on is that no one can agree what quality looks like, and there is no blueprint from overseas to draw on. The good news is that there is relevant evidence from other sectors, including schools and hospitals. The bad news is that, measured against this evidence, the current proposals for the TEF do not stand up. In a report published today by HEPI, I look at what we can learn from these sectors and provide two considerations for the Government and the sector to take into account.
Some people in higher education feel uncomfortable about sectoral comparisons, fearing that the sector’s uniqueness will be overlooked. But, when conducted in an informed way, they can deliver meaningful insights. This holds particularly true for the TEF. The need for it is unquestionable: league tables are among the sources of information most heavily drawn on by students, yet they are a poor indicator of quality. This is bad for students, who may make uninformed choices. It is also bad for institutions, whose good work may go unrecognised.
Fortunately, ratings have existed for some time in other markets, including schools, where they have been delivered by Ofsted since the early 1990s, and in the health and social care markets. These sectors are different from higher education but are interesting points of comparison. Quality is similarly difficult to unpick in these services, where users can have very different needs. The sectors have also experienced quite different levels of success. Ofsted ratings were initially established in schools but have since expanded to further education and early years, and evidence suggests that some 57 per cent of parents use ratings when choosing their child’s school. In contrast, while health ratings were first introduced in the late 1990s, they have had a fairly intermittent history and were only reintroduced in 2013 via the Care Quality Commission.
Looking at the evolution of user ratings in these markets, one striking contrast is the degree of stability in the organisations delivering them. While there have been changes to Ofsted’s processes and leadership, the organisation itself has stayed the same. In contrast, in health and social care, there have been numerous changes to the bodies conducting ratings, which have altered their name and substance over time. The Performance Assessment Framework became Star ratings, which then became the Annual Health Check… the story goes on.
Aside from that, the systems share many similar features and experiences, demonstrating Ofsted’s powerful role in shaping the design and evolution of ratings in other markets. Regarding features, both the Care Quality Commission and Ofsted:
- Provide a comprehensive rating across all providers, drawing on the same four-point scale.
- Draw on a rich set of data including good outcomes data, data on the concerns and views of staff and students, and a visit.
- Are increasingly delivered by experts, and at a more granular level (e.g. hospital ward, nursery).
The ratings also sit alongside a wider set of comparable information, on NHS Choices or within school performance tables. All of these points are likely to be important for the TEF too.
Regarding experiences, they have also shared common problems, which should present some warning signs for institutions and the Government. Aside from the most obvious ones, such as gaming and timeliness, these include:
- Inconsistencies and potential bias, with Ofsted’s chief statistician admitting that, despite attempts to recognise context, it is harder for schools with lower-ability intakes to gain ‘good’ or ‘outstanding’ awards.
- Concerns about staff welfare, with many teachers reporting that they want to leave the profession, and evidence of bullying styles of management.
- Cost. While not directly comparable, Ofsted and the CQC have operating costs of more than £100 million, whereas the QAA’s was a modest £14 million in the last year.
The key question is how to maximise the benefits of other public service rating systems while minimising burden and cost. In particular, it will be important to look at ways to integrate the TEF more deeply with the quality assurance system. This might include making better use of the new and improved external examining system, as proposed by HEFCE, to provide information about teaching quality. This would have the advantage of being available at subject level and being drawn from experts who understand the institutional context, but who are also independent of it. This is likely to be particularly important given the current lack of good outcomes data in higher education.
To support this and limit the risk of future instability, which could jeopardise the TEF’s success, the Government should also consider postponing the TEF. The Green Paper proposed using positive QAA review outcomes to award providers a ‘Level 1’ rating. However, there are limits to the usefulness of this information. The current quality assurance regime does not consider standards above a basic threshold level, focussing heavily on processes, and the information already exists via the QAA’s quality kite mark. Low take-up of the kite mark suggests that students do not deem this information valuable: just 54 per cent of eligible higher education institutions use the quality mark, and only 40 per cent of further education colleges.
All in all, there is considerable potential for the TEF to ensure that the Government’s marketisation agenda drives competition in the right places. But to do so, it needs to be done properly. Learning from the mistakes and merits of systems in other sectors will be an important part of this.
Read the report in full here.