The release today of this year’s Destination of Leavers of Higher Education (DLHE) data will spark the annual debate about the usefulness of this self-reported snapshot of what students are doing six months after they graduate.
We’ve also recently learnt that the Higher Education Statistics Agency will review the data it collects on destinations and outcomes for leavers in higher education.
The DLHE is self-reported in two respects: first by students, and then by universities. This is probably relatively harmless in relation to the overall employment rate of former students. However, it becomes more of a concern when used as the basis for calculating the proportion in graduate-level employment. Descriptions of their jobs need to be accurately reported by students and then accurately coded by universities. In some cases this is straightforward, but in others – in particular for those students entering various forms of self-employment – it becomes reliant on local interpretation. This makes the data inevitably prone to error and potentially liable to gaming.
There are two other significant problems with DLHE: the omission of certain occupations from the Standard Occupational Classification codes on which it is based, most notoriously veterinary nursing, and the fact that many students choose not to pursue their chosen career – including entering a graduate-entry training scheme – immediately after graduating. Admittedly, it does have one virtue: the immediacy that comes with providing data on a cohort only three or four years ahead of the potential applicant. In my view this is not enough to offer redemption.
Admittedly, none of these concerns is new. However, the need to find a better alternative to DLHE is given fresh impetus by three factors.
The first is that, as vice chancellor, my top priority for Nottingham Trent University is to deliver on the promise to our students that we will enable them to transform their lives by gaining the knowledge, skills, and attributes to pursue the career of their choice. I am sure that I am not alone in this aspiration. It seems to me that we – and they – deserve a more robust measure on which to base the judgement on whether we deliver on that promise than the DLHE provides.
Secondly, the forthcoming availability of HMRC tax data to HESA and the Student Loans Company means that we could build a robust measure by selecting the census point at which we present data on average earnings by university and/or by course. This would not be dissimilar to the approach some rankings take to MBA programmes. With secondary education performance data also being brought into the mix, we have the hope of finding a much-needed way to measure added value or learning gain. Other input measures could include: the postcodes of origin of students derived from UCAS; the proportion of eligible courses holding professional accreditation; and further development of the National Student Survey. These would all provide important factors to set alongside pure earnings data.
The third is that employment rates will surely feature in some form in the Teaching Excellence Framework (TEF); in doing so they will impact directly on the financial well-being of universities, whereas currently, whether standalone or contained within league tables, their effect is at best indirect.
These issues were explored at a University Alliance roundtable that I chaired last week, which brought together colleagues from the civil service, employers, the HE sector and league table compilers.
We acknowledged that the emerging connection between the TEF and the level of fee that universities can charge creates an entirely new environment in which to measure teaching success in higher education. There are major challenges to overcome when determining the metrics we might use. First of all, they will be contested in ways that current league table metrics are not, given they will result in winners and losers in the race to generate income. Indeed, it would be unwise to start with most of the current league table measures. For instance, many of the metrics they deploy would be perverse in the TEF – such as staff-student ratios and spend per student which reward inefficiency – or do not adequately reflect the significant government policy objective of widening participation.
Furthermore, this new context means that the measures must command political and public confidence. Integrity is one of the great strengths of higher education in the UK. In the age of the TEF this will be best protected by putting the chosen metrics as far as possible beyond the reach of either error or gaming by individual institutions. Given the record of other sectors over the last two decades, leaving that choice open to institutions would represent a triumph of faith over experience: it assumes that higher education would remain entirely immune to temptation as the financial stakes rise.
Another output measure that TEF may lead us to question is the comparability of degree outcomes across institutions. Some form of national test to assess ‘cognitive gain’ may develop here, as is happening in the US, albeit we heard at the roundtable that the results to date are not particularly useful for cross-institution comparisons. Notwithstanding the HEFCE pilots in this area and the OECD’s AHELO project, which looked at whether it would be feasible to assess what students in HE know and can do upon graduation, such a test will take time to design and deliver.
It may also struggle to gain the support of the sector, which may see it as a step too far. In truth, the evidence from the use of degree classifications in league tables over many years suggests that academics make judgements about individual student performance without any reference to overall institutional standings. Perhaps HEFCE’s proposal of a national register of external examiners to moderate standards more effectively across universities is a practical and palatable way forward here, giving a beefed-up level of verification to the process. In any event, the first iteration of the TEF will need to proceed in the absence of such a national test if it is to facilitate the rise in fees that is becoming pressing in many universities.
However, and to return to where we started, the TEF should not be launched using the DLHE in its current form. In a period when opinion will differ on most aspects of the TEF, I hope we can all agree on that.
4 responses to “Finding new ways to measure graduate success”
I’m all in favour of looking at learning gain and allowing that to inform improvements to the student experience. I’m not in favour of releasing irrelevant and crude data in a league table. I’m still pondering the author’s logic when he presents graduate salary levels as an indicator of ‘value added’ by universities, and appears to see this as a proxy measure for learning gain. This is worrying. I would have hoped a vice-chancellor would have challenged these assumptions, not propounded them. It is vital that universities do not let the data salad made available by the SBEE (Small Business, Enterprise and Employment Act 2015) drive the agenda of academic institutions. Universities are repositories and generators of research and knowledge, not factories for ‘salary men’. The danger is that courses, and whole areas of knowledge, may be axed if they do not produce high earners. Teaching quality and graduate salary are not linked in any way whatsoever. It is more probable that the social class of students will predict their earning potential. There are some disappointingly naive and illogical assumptions in this piece.
A little unfair, Liz. Edward was extremely lucid at the event he describes on the perverse incentives inherent in the current KIS, and I’m sure he is just as au fait with the even worse situations that would arise if we switched to a salary-driven model. Note that he doesn’t advocate using salary data to drive TEF, merely notes that it will (to an extent) exist.
Nevertheless, you’re right that the sector doesn’t want to get too enraptured with the data that arises from SBEE, not merely because of the philosophical or economic weaknesses in salary data as a metric, or other issues with the coverage and nature of the data that will emerge (it will contain no occupational information so will be of limited value to anyone as a stand-alone), but more because, as data including tax information, the raw data is likely to be very, very restricted in access. Planners and wonks hoping to get their hands on that data can forget it. Join the Treasury or HMRC if you want it. It’s not for the likes of you. You can have post-processed information with absolutely no potential identifiers, not individual records.
This Act went through very quietly and I am not sure the populace at large is even aware of what this means for their tax records. I am not sure everyone will necessarily be very happy when they find out.
Poor old DLHE wasn’t designed to meet the needs of the range and volume of stakeholders it is now obliged to support and does need rethinking. The simple point that DLHE is too easily gamed (and it is gamed) is sufficient to condemn it.
Some of the issues singled out here do deserve a little more of an airing, though.
Firstly, we can’t really get by without an outcomes survey of some kind. We could choose not to measure outcomes at all, but the TEF requires it.
We could rely solely on the SBEE, but the data will not contain occupational information, and the weakness of that will swiftly become clear when you see NTU data stating that graduates earned £30,000 in the higher education sector, but with no way of telling what job they were actually doing. That’s ok for extremely crude metrics, but no use to anyone else (given the criticisms of SOC, which I’ll address in a moment, it is also amusing to think that people might be putting their faith in Standard Industrial Classification data, which is far, far worse). Access to the raw data will be subject to very significant restrictions. This isn’t a satisfactory solution.
We could use third-party data (LinkedIn is most commonly cited), but coverage is far from complete, so the data is nowhere near good enough to support the kind of requirements laid upon it. I would like to think that nobody wants a situation where access to UK HE requires signing up to a third-party social media network.
The only solution is to conduct a survey. Here is where the edicts of the DLHE review come in, because it must be *cheaper*. We can argue about the timing of the survey (we do a lot of that every time DLHE is reviewed), and were we starting from scratch, 12 months might well win over 6 months, but the question isn’t as clear-cut. The first thing is that no potential reference point is ‘right’. Almost all the arguments against 6 months as a survey point apply perfectly well to 12 months.
The longer you leave your survey after graduation, the lower your response rate, and the costlier it is to raise it. With league table compilers very keen to maximise sample sizes in order to maximise course coverage, putting your reference point later inevitably means less coverage for your money. We have to think about the trade-offs.
The SOC issue is simpler. If we need an outcomes survey – and we do – and we need occupational data in it – and we do – then the SOC issue comes right with us. SOC is not DLHE; it is the standard occupational classification used in the UK for all occupational analysis. We are not going to get the *main* SOC changed for the HE system, for two reasons. The first is that it would be far too costly. The second is timing: we’re not due a new SOC until 2020, and lead times are measured in years.

The issue Edward identifies is actually with the use of SOC (which is also independent of DLHE) – the perennial question of ‘what is a graduate job’, and how we measure it – a very interesting and complex set of questions that need to be answered for TEF, regardless of DLHE. SOC will remain (it does include veterinary nurses, by the way, at SOC 6131, but they are – incorrectly – not considered a graduate job by the measures currently used), but we may be able to modify it further (we’ve already added a fifth digit for HE purposes to much of SOC levels 1 to 3) and should be able to develop smarter derived classifications for graduate employment.