Sometimes, in quantitative research, you can want a particular result so much that you lose sight of what it means in context.
The drop-out rate for students who accepted unconditional offers is 7.08 per cent, and a detailed Office for Students model suggests we would have seen a non-continuation rate of 6.44 per cent if those same students had received a conditional offer – a percentage point difference of 0.64, and a relative change (as widely reported) of ten per cent.
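Both framings come from the same two numbers, and a few lines of arithmetic show how:

```python
# Percentage point difference vs relative change - the same gap, two framings.
actual = 7.08      # non-continuation rate for unconditional offer holders (%)
modelled = 6.44    # OfS's modelled rate had the same students held conditional offers (%)

pp_gap = actual - modelled        # 0.64 percentage points
relative = pp_gap / modelled      # ~0.099, the "ten per cent" in the headlines

print(f"{pp_gap:.2f} percentage points; {relative:.1%} relative change")
```

Both describe the same gap – the relative version just makes for better headlines.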
But an unconditional offer doesn’t present as big a risk of non-continuation as getting AAA rather than A*A*A* at A level. Or studying maths.
OfS has been very keen to demonstrate that unconditional offers are bad – specifically, that unconditional offers make students work less hard at their A levels (which is bad in itself) and then this lack of A level grinding makes them less successful at university (doubly bad).
This walks the line between an educational harm argument and a moral argument – I’m reminded of some of the grit or resilience arguments from a couple of years back. Student success is forged in the crucible of an eighteen-year-old spending 18 hours a day studying. If you don’t have those two years of pain and suffering you’ll never be a great student. Or something.
That’s the story that’s being sold – let’s kick the tyres and decide whether or not we want to buy it.
It’s a model – is it looking good?
The first thing to remember about these non-continuation figures is that they are compared with a modelled performance, not the actual performance of other students. The 185 students in question would not have been expected to drop out given their provider, course, and personal characteristics. That they did drop out is therefore put down to their offer type.
So how good is the model? It starts with individual student data for those entering HE in 2015-16 and 2016-17. You’ll recall that this January OfS looked at similar data from those entering in 2014-15 and 2015-16 – and identified a statistically insignificant difference in continuation for those with an unconditional offer.
OfS then takes data on the individual students who actually dropped out, and traces them back through the characteristics data to find which characteristics are more common among students who don’t continue.
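This description matches a fairly standard logistic regression, so – purely as an illustration, not OfS’s actual code – here’s a minimal sketch of that kind of model. Every column name and value below is invented; the real model runs on individual student records that aren’t public.

```python
# A minimal sketch of the kind of model described above: logistic regression
# of continuation against offer type plus student characteristics.
# Everything here is invented illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5_000
students = pd.DataFrame({
    "continued": rng.binomial(1, 0.93, n),      # ~7% drop-out, invented
    "unconditional": rng.binomial(1, 0.08, n),  # offer type flag, invented
    "tariff": rng.normal(120, 30, n),           # entry qualifications, invented
})

# The fitted summary reports the same three columns as the OfS annex tables:
# a coefficient estimate, a standard error, and a p value for each term.
model = smf.logit("continued ~ unconditional + tariff", data=students).fit(disp=0)
print(model.summary())
```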
All reasonable so far, but the model has changed between the January and October publications. I’m told that this is to bring it in line with the Association Between Characteristics of Students (ABCs); in practice it means we get more coefficient estimates in the disability and ethnicity categories, and a new IMD category. This is not just a presentational change – it’s a material difference between the two models.
For those of you who read a lot of quantitative papers, there are probably some alarm bells ringing about population sizes too. A significant difference is harder to identify in a smaller population than a larger one, and unconditional offer numbers have grown substantially between the two periods measured. Because we only have two datasets under analysis, and both of them contain a chunk of the same data (the 2015-16 entry data), any change in the findings is driven by a single new year of data. Which is interesting in itself, but feels a bit unsafe.
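To see why this matters, here’s a toy calculation that holds the observed gap fixed and varies the group size. Only the two rates come from the publications; the group sizes are invented:

```python
# The same 0.64 percentage point gap, tested at different group sizes.
import math
from scipy.stats import norm

p1, p0 = 0.0708, 0.0644  # observed vs modelled non-continuation rates

for n in (1_000, 5_000, 20_000):
    # standard error of a difference in proportions, two groups of size n
    se = math.sqrt(p1 * (1 - p1) / n + p0 * (1 - p0) / n)
    z = (p1 - p0) / se
    p_value = 2 * norm.sf(abs(z))
    print(f"n = {n:>6}: se = {se:.4f}, z = {z:.2f}, p = {p_value:.3f}")
```

The same 0.64 percentage point gap reads as noise with a thousand students in each group, and as a significant finding with twenty thousand.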
So after adding a load more data and changing the model, the effect of an unconditional offer is still smaller than the effect of studying health and social care.
To compare the two periods you have to dig back into the (recently updated) “Data analysis of unconditional offer-making” publication from back in January. Table C3 in Annex C shows the coefficient, standard error, and p value for a number of student characteristics including offer type – table D3 in the annexes to the new publication is directly comparable. I’ve plotted them here if you don’t fancy digging through the annexes.
In a nutshell, we’re looking at the likely effect of each student characteristic on the likelihood of a student dropping out (there’s a small worked example after the list). Each of the tables lists, for various characteristics:
- A coefficient estimate – this, broadly speaking, is the effect of a change in a given characteristic compared to the “reference value”, with all of the other characteristics held fixed at the reference values stated in the table. A larger value (in either direction) means the characteristic in question has more of an effect.
- The standard error – this is an indication of how far off the estimate might be. Note that the error can run in either direction: the actual coefficient could be below or above the estimate. A higher value means a bigger likely error – often a sign that there is only a small amount of relevant data.
- A p value – a measure of how likely it is that a result at least this large would arise by chance rather than from the variable under investigation. A p value of 0.001 means there is roughly a 0.001 probability of seeing such a result through chance alone.
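The three columns hang together: divide the coefficient by its standard error to get a z statistic, and the p value follows from that. Here’s a minimal sketch with invented numbers, chosen to echo the case where the standard error dwarfs the effect:

```python
# How the three annex-table columns relate: Wald z = coefficient / standard
# error, and the two-sided p value follows. Numbers are invented placeholders.
from scipy.stats import norm

coefficient = 0.05  # invented: a small positive effect
std_error = 0.08    # invented: a standard error larger than the effect

z = coefficient / std_error
p_value = 2 * norm.sf(abs(z))  # two-sided

print(f"z = {z:.2f}, p = {p_value:.2f}")  # z = 0.62, p = 0.53 - nowhere near significant
```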
So what does this all mean in context? You’ll see, for example, that in the January data release unconditional offers have a p value of 0.30. OfS found that in the 14-15 and 15-16 data a student is very slightly more likely to drop out if they hold an unconditional offer. Alas, the standard error is larger than the effect size, and the p value is high, so it is not a meaningful finding.
One weirdness is that the October estimates look at the likelihood of continuation, while the January ones look at the likelihood of non-continuation. I’m not clear why this change has been made, but I’ve flipped the sign of the October estimates in the visualisation to make them easier to compare.
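Happily, this reversal is legitimate: in a binary logit, swapping the outcome from continuation to non-continuation flips the sign of every coefficient exactly. A quick check on invented data:

```python
# Modelling "continued" and modelling "did not continue" give coefficients
# that are exact mirror images, so reversing the October estimates is safe.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.binomial(1, 0.1, 2_000).astype(float))
y = rng.binomial(1, 0.93, 2_000).astype(float)

continued = sm.Logit(y, X).fit(disp=0).params    # continuation model
dropped = sm.Logit(1 - y, X).fit(disp=0).params  # non-continuation model
print(np.allclose(continued, -dropped))          # True
```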
Because we know that there were more unconditional offers in 15-16 than in 14-15, we can be confident that most of the January effect is due to the 15-16 data. And as this same data is included in the October publication, we can be equally clear that any change to any effect – including the new headline finding of a small but significant negative effect from unconditional offers – is primarily due to adding the 16-17 data (and dropping 14-15).
Predicting the present, not the future
But there’s a lot more fun to be had with the coefficient estimates. Oh yes. For instance – forget unconditional offers – in the latest data a mathematical studies course is far more likely to see students fail to complete. But entry qualifications have a far larger effect. As does being Black or from a minority ethnicity. But POLAR quintiles don’t have as big an effect as you might imagine.
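If you want a more intuitive scale on which to compare these effects, one option is to exponentiate the logit coefficients into odds ratios. The figures below are invented placeholders standing in for the annex tables, not OfS’s estimates:

```python
# Logit coefficients are hard to read directly; exp(coefficient) gives an
# odds ratio relative to the reference category. All values here are invented.
import math

coefficients = {
    "unconditional offer": 0.10,     # invented
    "mathematical studies": 0.35,    # invented
    "AAA entry (vs A*A*A*)": 0.45,   # invented
}

for characteristic, coef in coefficients.items():
    # an odds ratio above 1 means higher odds of non-continuation
    print(f"{characteristic}: odds ratio {math.exp(coef):.2f}")
```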
All this is potentially actionable data – it could be used to design and target interventions to stop students dropping out of their studies. It’s only two years of data, but you could imagine building on it year after year to produce a fairly decent piece of research with real student benefits.
I suppose the continuation of a moral panic over unconditional offers is useful to some people too. Just not students, or those who support them.