Today the new Unistats site goes live and Key Information Sets are soon to finally emerge, blinking into the sunlight as endlessly cycling widgets designed to add a certain effervescence to course websites. Most of the attention has been focused on their role in the march of the market and the rise of the consumer. However, I want to make a separate point about the relationship between the KIS and quality, which I do not believe has been explored in as much depth as it could have been.
First a detour via the National Student Survey. The NSS has been called many things (some of them unrepeatable on such a reputable blog) but from the Powers That Be there have been two key and familiar messages. Firstly, the NSS has been presented as a means by which students, key stakeholders in the system, can have their say. Secondly, the NSS has been described as a way of capturing educational quality.
These are pretty distinct aims – one is to do with empowerment (of customers, citizens or stakeholders, depending on your allegiance), the other with how we technically and rigorously evaluate student learning. The tension between them is real, and insufficiently appreciated, and I think something similar applies to the Key Information Sets, to which the NSS contributes 8 of the 17 items. The KIS have been presented as sets of information useful to students, and that is certainly the most straightforward interpretation. But they have also been presented as reliable indicators of the quality of courses: the HE White Paper said they were opportunities for providers to “illustrate the quality of the experience that they offer”.
Again, those are two very different aims. Educational quality is about students achieving desirable outcomes (i.e. learning), and makes relevant what we know from research about quality in higher education. The other aim is about providing information that is useful for students; that is, information to help them make the choices that meet their own immediate desires and ambitions. That is not to say that robust information about quality – such as total number of hours studying, students’ approach to study (deep vs surface), prevalence of teaching qualifications or data on student engagement such as that captured by the NSSE survey – would not be “useful” to students, but it is pretty obvious that such data would be far less eye-catching than numbers relating to future earnings and the levels of satisfaction among previous cohorts.
Yes, the existence and nature of the KIS does relate directly to the issue of consumerisation (or customerisation, to be more precise). But putting aside questions about the current system (whether it will reduce state expenditure, whether it will put students off higher education, whether it will damage the pedagogical relationship) it is hard to deny that given the current system, students are entitled to information that relates directly to what they want to achieve. And this – as both common sense and the research that gave rise to the KIS would suggest – includes how happy previous students have been with their experience. So in that light, as a set of information relating to students’ aims and ambitions, the KIS is entirely appropriate.
As a reliable guide to the quality of the education that students experience, the KIS may not be quite so helpful. Quality of educational provision in HE is a well-discussed and complex concept, but we do have some clue as to the factors that contribute to it. Graham Gibbs in Dimensions of Quality highlights them well, and was in fact quoted in fair detail in the White Paper. But those factors don’t appear in the KIS. Yes, gathering that information would be hard; yes, it may make little sense to prospective students; and yes, it would probably not connect much with what they really want to know. Which is just to reiterate the difference between information that students will find immediately useful and information about educational quality.
Nevertheless, there should be space for the provision of more detailed information about the quality of courses, using metrics that have been proven to relate to positive educational outcomes. As Adam Child has already asked on this blog, is the KIS “really the be-all and end-all of higher education”? And this is my concern: not that the KIS contains inappropriate information, but that labelling it as information about quality threatens to block the possibility of more robust (and complicated and detailed) quality-related information being produced and communicated. If – my worry goes – we already have information about the quality of courses, why would we need any more?
So I share some of the trepidation about the KIS, but my view is not quite as negative. If the KIS can be presented simply as a range of useful information that can guide prospective students, then there is little to complain about. It is not the content, but the labelling, that bothers me.