
Metrics and quality: do the numbers add up?

Drawing on Australia's quality assurance experience, QAA's Ian Kimber argues for a contextualised and nuanced approach to metrics, both for identifying potential quality risk and for assessing student outcomes and their link to teaching quality.

Ian Kimber is Director of Quality Development at the Quality Assurance Agency for Higher Education.

In a recent open letter to HEFCE and UUK, a group of leading academics questioned the link between teaching quality and students’ outcomes. Their conclusion was, I believe, somewhat reductive: that if the link is not causal and direct, then there is no place for metrics in the Teaching Excellence Framework (TEF) or in any future quality assurance system.

The reality is that, although it’s complicated, there is often a causal link. Take, for example, the often contentious metric of student:staff ratios. There is no doubt that timely and constructive feedback to students following assessment fosters good learning outcomes. If the ratio is high, staff may have more limited capacity to provide this feedback. But of course, any such metric must be considered in context, taking account of trends over time and of delivery model and mode.

There are many other elements of students’ experiences that foster good outcomes, and universities and colleges know this both instinctively and empirically. They use metrics continually in their internal review processes to guide improvements in what they offer, from large-scale data analytics that identify trends in study patterns to detailed scrutiny of NSS outcomes.

In turn, students’ experiences and their outcomes are determined by many different factors. The quality of teaching is one of them, but many other social and cultural pressures are at work – so assessing quality and identifying excellence need to take account of inputs, outputs and context: data plus intelligent contextual analysis.

To contribute to the debate about the use of metrics in quality assurance, QAA (in collaboration with the Economic and Social Research Council) is supporting doctoral research at King’s College London to analyse the predictive validity of data in risk-based approaches. This research, to be presented in November, has found virtually no predictive link between an exhaustive range of indicators and QAA review outcomes, although three risk indicators (related to the age composition of student cohorts, income from research grants and contracts, and the number of full-time teaching staff) emerge as the most significant.
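As a purely illustrative aside, "predictive validity" here means asking whether a model trained on the indicators can anticipate review outcomes better than chance. The minimal Python sketch below is not the King's College London methodology, whose details the research will set out; the indicator names, values and review outcomes are all invented for demonstration.

```python
# Illustrative sketch only: NOT the King's College London methodology.
# The indicators and the synthetic data below are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200  # hypothetical number of reviewed providers

# Hypothetical indicators, loosely echoing the three cited in the research:
# age composition of cohorts, research income, full-time teaching staff.
X = np.column_stack([
    rng.uniform(18, 35, n),      # mean age of student cohort
    rng.lognormal(10, 1, n),     # income from research grants and contracts
    rng.integers(20, 2000, n),   # number of full-time teaching staff
])
y = rng.integers(0, 2, n)        # 1 = adverse review outcome (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")

# An AUC near 0.5 means the indicators carry essentially no predictive
# signal for review outcomes: the pattern the research describes.
print(f"cross-validated AUC: {scores.mean():.2f}")
```

With outcomes generated at random, the cross-validated AUC sits near 0.5, which is what "virtually no predictive link" looks like in practice.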

On the face of it, these findings could call into question the application of metrics to identify areas of ‘risk’ on which to focus quality assurance effort. However, I would argue that the findings underline the need for a contextualised and nuanced approach to the use of metrics, both in terms of identifying potential quality risk and of assessing student outcomes and their link to teaching quality.

In terms of policy-making, I can see that arguing for greater nuance isn’t, on the face of it, helpful. It can be challenging to reconcile nuance and simplicity, but a conservatoire and a research-intensive university are very different beasts. The TEF needs to have crystal clear outcomes, to be understood by students, prospective students, parents and employers. But I think that if the brain-work is put in at an early stage, to really understand the complicated relationships between metrics and contextual elements and how to assess them, something straightforward, accessible and reliable can be devised. The TEF as a proverbial public policy swan, gliding serenely while paddling hard beneath the surface, is perhaps stretching it a bit, but that’s the idea.

So what might be more helpful than big bird analogies? Perhaps to learn a little from the experience of the Tertiary Education Quality and Standards Agency (TEQSA) in Australia. I joined QAA in February 2015 from TEQSA, where I led on regulation and review. Australia does not have a TEF equivalent and TEQSA never planned to use data to incentivise excellence. It has, however, used data and metrics in its regulatory and quality assurance framework in innovative ways.

TEQSA has been described as having a metrics-driven approach, but this is misleading – in reality the data was never looked at in isolation, always in context.

True, TEQSA has had a bumpy ride in setting up its approach – the data collection was initially demanding on institutions, and the agency was accused early on of not taking sufficient note of context to deal with the sector proportionately. This was partly because TEQSA, as the first national agency with a sector-wide remit, decided it needed to establish baseline information from scratch. The centralised, joined-up data collection now in place in Australia was not yet fully established, and in any case the agency did not have full access to the data.

TEQSA interpreted its founding legislation in a particular way, believing it could not differentiate between institutions until it could contextualise judgements against track records of interaction with its own, new framework. The sector pushed back, and the agency found itself under review within two years of opening its doors for business.

TEQSA responded positively, though, and there is now a shared understanding of how the agency will conduct its activities. TEQSA annually updates a risk assessment for each provider: a judgement on the current and future risk to the quality of students’ experiences, reached through a complex interplay of factors.

This assessment gives TEQSA the ability to differentiate between providers and focus its resources where they are most needed. There is a core set of standards applied to all providers, with any extension informed by the risk judgements.
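To make that concrete, here is a minimal sketch of how a contextualised risk judgement might set the scope of review. It is not TEQSA's actual model: the metrics, weights and thresholds below are assumptions, invented for illustration.

```python
# Minimal illustrative sketch, NOT TEQSA's actual framework: the fields,
# weights and thresholds below are invented for demonstration.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    completion_rate: float      # proportion of students completing (0-1)
    staff_student_ratio: float  # students per full-time member of staff
    years_of_track_record: int  # prior interactions with the regulator

def risk_score(p: Provider) -> float:
    """Blend raw metrics with context: a long clean track record
    discounts the risk implied by the raw numbers."""
    metric_risk = (1 - p.completion_rate) + min(p.staff_student_ratio / 50, 1)
    context_discount = 0.5 * min(p.years_of_track_record / 10, 1)
    return max(metric_risk - context_discount, 0.0)

def review_scope(p: Provider) -> str:
    """Core standards apply to all providers; extended scrutiny is
    reserved for those whose contextualised risk is high."""
    score = risk_score(p)
    if score < 0.4:
        return "core standards only"
    if score < 0.8:
        return "core standards plus targeted themes"
    return "core standards plus extended review"

for p in (Provider("Established University", 0.92, 18, 10),
          Provider("New Provider", 0.75, 35, 1)):
    print(f"{p.name}: {review_scope(p)} (risk {risk_score(p):.2f})")
```

The design point is that context (here, a track record with the regulator) discounts raw metric risk, so an established provider with middling numbers is not treated like a brand-new one.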

The UK can learn from this experience. We can use the rich qualitative and quantitative data from many years of institutional interactions with QAA and other agencies to firmly establish a track record and context. There are significant gaps in the data currently available on alternative providers (especially those without degree-awarding powers), which present a challenge; but for providers directly funded by HEFCE, we have the HESA collection, the KIS, and the UK performance indicators for higher education to inform quality judgements.

There is also student satisfaction data available which sheds light on some aspects of the learning experience where appropriate metrics have a role to play. We can take robust metrics and place them in context to establish a quality profile of every provider, and we can use this profile to tailor the frequency and scope of quality assurance activities proportionately for each one. We can also draw from this profile the data and contextual elements that go directly to teaching and learning, taking us towards the integration of a TEF with the broader quality assurance framework.

I absolutely understand the concerns about over-reliance on data. The point is not to rely on data alone, and not to rely on anything else in isolation either. Better, more intelligent use of data has such potential that it cannot be ignored. Internally, institutions can use metrics to point to possible problems, but also to show where there is no need for additional processes of quality assurance. The same potential exists to reduce the burden of external quality assurance, too.

Beyond that, the potential for a much greater focus on student outcomes is positive. These outcomes certainly have complex causes, but ignoring one part of the picture would be an opportunity missed.

One response to “Metrics and quality: do the numbers add up?”

  1. Interesting and thoughtful article. Indeed, we need quantitative and qualitative data, and both should be seen in their context. However, the article confuses and conflates student experience with teaching quality. For one, the student experience is much broader than just the teaching students receive. Second, students are taught by many people while at university, and it would be difficult to see the impact that each lecturer, module or extra-curricular activity has had on outcomes (some of which might take years to show). Finally, big data capture trends across a cohort of students, but they cannot capture the nuances of the personal experience.
