Imagine, if you will, your standard economics textbook-style factory that produces a single product – your standard economics textbook-style widget.
Because the factory only produces one thing, you can fairly assume that all the money the factory spends (salaries, maintenance, financial analysis…) goes towards the production of widgets – and you can estimate the cost-per-widget with a fair degree of accuracy. Doing this lets you compare WidgetCorp with feared disruptive market entrant Widget.io, which may well have a smaller cost-per-widget. Ah, says the WidgetCorp Director of Strategy, our widgets are of better quality…
That’s pretty much the argument about teaching costs that has raged since time immemorial – with one important difference… universities are not just factories that produce teaching for students. They produce a bunch of other things too – many of which (it is often argued) have an impact on the quality of teaching.
My favourite example would be the idea of “research-informed teaching”. Back in 2010 the Russell Group argued that “research-led learning actively engages students in their learning experience”, suggesting that the research performance of a provider is linked to the quality of the student academic experience.
Aside from reminding us all (again) about the undoubted strength of research in the Russell Group, to me the purpose of such claims is to muddy the waters between teaching and research as academic activities. At Russell Group universities, most full-time academic staff split their time between research and teaching – this would not be true at, say, an FE college. Attempts to quantify the cost of teaching have been primarily based on the TRAC methodology – which uses an academic time allocation survey in an attempt to apportion costs between the two.
By this measure, a research intensive university may look like it is spending less on teaching than a provider where academics teach full time and do not research. Or it may look like it is spending more, as proportions of star researcher salaries are added in. Complicated stuff.
Limited data, limited fidelity
Now I’d love to look at these splits by institution to get a better idea of what is going on – but TRAC data is never released at this granularity. Why? “It’s commercially sensitive data that could influence behaviour in the market – this has become even more relevant as not all providers delivering HE are required to submit TRAC data. So the data collection includes an undertaking that individual institution’s returns are treated as confidential and restricted to the OfS, other UK funding bodies and UKRI,” – that’s what an OfS spokesperson told me. I’d add that the data is not incredibly useful for anything outside what it was designed to do.
But be that as it may – we get an awful lot of financial information linked to cost centre in the new open HESA finance data. And allocations of student numbers to academic cost centres, while a little harder to come by, can be found (if you ask HESA very nicely).
There are a lot of vagaries in the data, so I would hesitate to use this approach to calculate an exact cost per student at cost centre or institution level. But, assuming the vagaries are evenly distributed, I feel a little more confident in looking at rankings and proportional measures.
To be clear – this is not how much teaching a student costs. This is me looking at how much an institution spends per student (both at cost centre level and including some central costs), and how this spending varies between cost centres and institutions. This allows me to predict one overall trend – more research-active institutions and cost centres will have a higher spend per student than less research-active ones. But we can still look at variations from this overall pattern.
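To pin down what I am (and am not) calculating, here’s a minimal pandas sketch of the measure – the frame, the figures, and every column name are hypothetical, not real HESA fields:

```python
import pandas as pd

# Hypothetical tidy data: one row per (institution, academic cost centre).
# "spend" is total expenditure within the cost centre; "students" is the
# student FTE allocated to it. Neither is a real HESA field name.
df = pd.DataFrame({
    "institution": ["A", "A", "B", "B"],
    "cost_centre": ["History", "Physics", "History", "Physics"],
    "spend": [2_100_000, 5_400_000, 1_500_000, 6_200_000],
    "students": [400, 600, 350, 550],
})

# Spend per student within each cost centre. Note this is an institutional
# spending measure, not a cost of teaching - research activity within the
# cost centre is baked into "spend".
df["spend_per_student"] = df["spend"] / df["students"]

# Given the vagaries in the data, ranks are safer than raw values:
# rank institutions against each other within each cost centre.
df["rank_in_cc"] = (
    df.groupby("cost_centre")["spend_per_student"].rank(ascending=False)
)
```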
Research spending is the fly in the ointment here. With the HESA data, there’s not a way to disaggregate teaching and research spending – to do that you need (at the very least) the TRAC workload data, which remains a closely guarded secret, and is not (really) designed to do this kind of thing. That’s why KPMG chose to collect their own data from the 40 institutions in their final sample.
So what we are left with is an assessment of how much money is spent within a cost centre for each student. This tells you more about institutional structure than about financial efficiency – so I’ve also included a by-student split of two other cost centres in some of the institutional figures:
- 202: Central administration and services
- 203: General education expenditure
Obviously this is an arbitrary decision – there are whole careers built on disaggregating cost centres between teaching and research, and any number of smaller cost codes that could be allocated one way or the other. Choosing largish costs like these kept me out of those weeds.
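For the avoidance of doubt about what “included” means, here’s a sketch of that allocation – continuing the hypothetical frame above, with an even per-student apportionment as the assumed (and, as I say, arbitrary) rule:

```python
# Each institution's combined spend in cost centres 202 (central
# administration and services) and 203 (general education expenditure) -
# figures invented for illustration.
central = pd.DataFrame({
    "institution": ["A", "B"],
    "central_spend": [3_000_000, 2_700_000],
})

# Apportion central spend evenly per student. This is one arbitrary rule;
# other bases (staff FTE, space) would give different answers - which is
# exactly where those careers in cost disaggregation get built.
all_students = df.groupby("institution")["students"].sum().rename("all_students")
central = central.join(all_students, on="institution")
central["central_per_student"] = central["central_spend"] / central["all_students"]

df = df.merge(central[["institution", "central_per_student"]], on="institution")
df["spend_per_student_incl_central"] = (
    df["spend_per_student"] + df["central_per_student"]
)
```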
Chart envy
What I really wanted to do was recreate Chart 25 from the KPMG report on the cost of teaching, but with institutional information. I’ve not been able to do that, though I have got an indication of the relative financial power of cost centres within institutions instead. But let’s start with my proud failure – moderated ranked spend per student and subject, plotted against the variation (standard deviation) of that spend. There are two tabs here – one is the academic cost centre spend plus the two central cost codes, the other is just academic cost centre spend.
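For the curious, the two axes reduce to a couple of groupby operations – again a sketch over the hypothetical frame above, not the actual workings behind the chart:

```python
# One row per institution: the two axes of the chart.
chart = df.groupby("institution").agg(
    # Overall spend per student - with or without the central cost codes,
    # depending on which tab you are looking at.
    spend_per_student=("spend_per_student_incl_central", "mean"),
    # Variation: standard deviation of spend per student across the
    # institution's academic cost centres.
    spend_variation=("spend_per_student_incl_central", "std"),
)
# Ranked rather than raw spend, to dampen the vagaries in the data.
chart["ranked_spend"] = chart["spend_per_student"].rank(ascending=False)
```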
On spend per student, we are seeing – as we expected – more money passing through institutions that will also be handling a lot of research. There’s also a higher spend per student for smaller and more specialist institutions, as highlighted by Augar.
Sometimes what is fascinating is what we don’t see. There is no “London effect”. There is no North East effect either. Nor – beyond the high-spending specialists noted above – is there a systematic small and specialist effect, whether considering them as a discrete group or just looking at student numbers (note that I’m excluding monotechnics, as they mess up the graph).
I’m quietly fascinated by the spend variation trend. Differences in spend between cost centres tend to be more pronounced at Russell Group institutions than at post-92s. I think what we are seeing here is the lesser degree of school/faculty autonomy in providers based on old polytechnic structures.
Variations on a theme
To explore this further, I’ve plotted individual cost centres across institutions. Aside from the sheer prettiness of the plot, we’re looking at something heavily nuanced but potentially useful to planners.
The y axis shows where an academic cost centre ranks, on spend per student, against the same cost centre at every other institution (taking into account spending within the cost centre plus a share of the two general cost centres (202, 203), as detailed above). The x axis shows the degree of variation in the calculated spend per student across all academic cost centres within an institution. So each institution’s data appears as a vertical streak of paint blobs (the size of each blob shows the size of the spend in that cost centre).
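In chart-construction terms that boils down to something like the following matplotlib sketch, built on the hypothetical frames above (the blob-size scaling is plucked from the air):

```python
import matplotlib.pyplot as plt

# y: rank of each cost centre against the same cost centre elsewhere
# (re-ranked on the measure that includes the central codes);
# x: the institution's overall variation, so each institution forms a
# vertical streak; blob size: the spend in the cost centre.
df["rank_incl_central"] = (
    df.groupby("cost_centre")["spend_per_student_incl_central"]
      .rank(ascending=False)
)
df = df.join(chart["spend_variation"], on="institution")

plt.scatter(
    df["spend_variation"],
    df["rank_incl_central"],
    s=df["spend"] / 50_000,  # arbitrary scaling for blob size
    alpha=0.6,
)
plt.xlabel("Variation in spend per student across cost centres")
plt.ylabel("Rank within cost centre (across institutions)")
plt.show()
```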
The chart takes a little bit of interpretive thought. For example, the small dot at the top of the De Montfort University streak, for history, shows how that cost centre’s spending ranks across all history cost centres in England – it doesn’t mean history is the most expensive cost centre at DMU. So what does it say about history at DMU? I’d argue it says that history there is competitive with other history cost centres in terms of spending – and that this is true of DMU as a whole (which, like other post-92 universities, has a low level of variation in spend per student across the piece, and tends to rank towards the lower end of each cost centre ranking).
The “CC” tab lets you look at the ranking per cost centre, with the number of students assigned to each cost centre on the x axis – you choose the cost centre via the filter. If we stick with history we see another fascinating trend – cost centres with more students in them tend to spend more per student than those with fewer (and thus, of course, more overall – shown as the size of the blobs).
Why would this be? Well, looking at the groups (colour) it seems that Russell Group providers tend to have more history students – and they will also be likely to be spending more on research. Why do the Russell Group have lots of history students? – arguably because they invest in research.
Go where the data is
Because we don’t have good data on teaching spending, we have to make do with the data we do have. It’s flawed and it is indicative rather than precise. But it is what everyone from the OfS through to Philip Augar and Damian Hinds uses to make such judgements. The answer to our current worries about how “expensive” (or otherwise) teaching in higher education is will come from the force of the argument we make for overall resource – and very likely not from the data that is currently collected.
Comments

I suggest that the Russell Group universities have lots of history students for the same reason I expect they have more Latin students than other universities: they enrol mostly students from high socio-economic status backgrounds, with high cultural capital that values high-culture subjects such as Latin and history.
The underlying problem with all TRAC data is that it is unaudited GIGO – garbage in, garbage out. Sure, heads of institutions sign these numbers off. But they do so assured in the knowledge that the figures themselves are guesstimates. In the ordinary working day of an academic, it is not possible to accurately compartmentalise what they do in this way, even if they could be bothered to do so.
@Rebecca – I agree, as I say in the last paragraph. TRAC is useful as an indicator, it is not useful in understanding precise costs. Because that’s not what it was designed for.
Having submitted to the KPMG exercise, I can say that we also used HESA data and our own management accounts to assist and to reduce the margin for error so that the garbage going in was refined to a large degree. The largest problem is actually in allocating costs which aren’t incurred at subject/school/faculty level, such as buildings, IT and the head office functions. I suspect that most of the 40 participants experienced the same and I would also say that the subject costs relative to one another did make sense, and were way more than £7500!