Today the new Unistats site goes live and Key Information Sets are soon to finally emerge, blinking into the sunlight as endlessly cycling widgets designed to add a certain effervescence to course websites. Most of the attention has focused on their role in the march of the market and the rise of the consumer. However, I want to make a separate point about the relationship between the KIS and quality, which I do not believe has been explored in as much depth as it could have been.
First, a detour via the National Student Survey. The NSS has been called many things (some of them unrepeatable on such a reputable blog), but from the Powers That Be there have been two key and familiar messages. Firstly, the NSS has been presented as a means by which students, key stakeholders in the system, can have their say. Secondly, the NSS has been described as a way of capturing educational quality.
These are pretty distinct aims – one is to do with empowerment (of customers, citizens or stakeholders, depending on your allegiance), the other with how we technically and rigorously evaluate student learning. The tension between them is real but insufficiently appreciated, and I think something similar applies to the Key Information Sets, to which the NSS contributes 8 of the 17 items. The KIS have been presented as sets of information useful to students, and that is certainly the most straightforward interpretation. But they have also been presented as reliable indicators of the quality of courses: the HE White Paper said they were opportunities for providers to “illustrate the quality of the experience that they offer”.
Again, those are two very different aims. Educational quality is about students achieving desirable outcomes (i.e. learning), and it is here that what we know from research about quality in higher education becomes relevant. The other aim is about providing information that is useful to students; that is, information to help them make the choices that meet their own immediate desires and ambitions. That is not to say that robust information about quality – such as the total number of hours spent studying, students’ approach to study (deep vs surface), the prevalence of teaching qualifications, or data on student engagement such as that captured by the NSSE – would not be “useful” to students, but it is pretty obvious that such data would be far less eye-catching than numbers relating to future earnings and the levels of satisfaction among previous cohorts.
Yes, the existence and nature of the KIS relate directly to the issue of consumerisation (or customerisation, to be more precise). But putting aside broader questions about the current system (whether it will reduce state expenditure, whether it will put students off higher education, whether it will damage the pedagogical relationship), it is hard to deny that, within that system, students are entitled to information that relates directly to what they want to achieve. And this – as both common sense and the research that gave rise to the KIS would suggest – includes how happy previous students have been with their experience. So in that light, as a set of information relating to students’ aims and ambitions, the KIS is entirely appropriate.
As a reliable guide to the quality of the education that students experience, the KIS may not be quite so helpful. Quality of educational provision in HE is a well-discussed and complex concept, but we do have some clues as to the factors that contribute to it. Graham Gibbs, in Dimensions of Quality, highlights them well, and was in fact quoted at fair length in the White Paper. But those factors don’t appear in the KIS. Yes, gathering that information would be hard; yes, it might make little sense to prospective students; and yes, it would probably not connect much with what they really want to know. Which is just to reiterate the difference between information that students will find immediately useful and information about educational quality.
Nevertheless, there should be space for the provision of more detailed information about the quality of courses, using metrics that have been shown to relate to positive educational outcomes. As Adam Child has already asked on this blog, is the KIS “really the be-all and end-all of higher education”? And this is my concern: not that the KIS contains inappropriate information, but that labelling it as information about quality threatens to block the possibility of more robust (and more complicated and detailed) quality-related information being produced and communicated. If – my worry goes – we already have information about the quality of courses, why would we need any more?
So I share some of the trepidation about the KIS, but my view is not quite as negative. If the KIS can be presented simply as a range of useful information that can guide prospective students, then there is little to complain about. It is not the content, but the labelling, that bothers me.
The new UniStats site is even worse (if glossier) than the last. You can’t find the average NSS score for the HEI as a whole – and if you try to search for a course by initial letter (e.g. History) you will only get HNDs, because BA History is under B, with the rest of the BAs. Mad.
Despite being something of a pro-NSS zealot, I absolutely agree with the argument that it is not a measure of educational quality. What it is, however, is a measure of perceptions of the quality of the organisation and the people who are (supposedly) delivering a quality education, and that in itself makes it powerful. The NSS is a key component of the student voice, and so even where a course may, by other measures, be deemed to be of quality, the NSS gives students the opportunity to say “but it’s a nightmare” – or otherwise! For students and prospective students who now (rightly or wrongly) make a value-for-money judgement about what we offer, this is a key factor. Why go to X, which may be quality but where you won’t have a good time, when Y is supposedly also quality and will also allow you to lead a happy life?
I think the argument in your final paragraphs is a powerful one. The NSS presents a competing focus for both management and the academy, and this is problematic for those who are not used to having to walk this tightrope. Others, of course, have been doing it for years, but often by quirk of personality and personal motivation rather than by design, in my opinion.
The future that current policy is trying to shape, however, is one where we will have no choice but to walk that tightrope or else risk falling to the sharks below. The key is balance – academic quality and student satisfaction (in the terms espoused by the NSS) must not be mutually exclusive; they need to be integrated and embraced. This is not something I think the policy framework properly recognises or enables – it is complex to evaluate (take note, QAA) and complex to attach metrics to that can be used for funding (take note, HEFCE). As such, it is very easy to get hung up on easy evaluations and easy measures such as the NSS, which come in a neat package and allow judgements to be made. Perhaps we should ask ourselves now: who actually cares about quality?