There is a reasonably well-known philosophical puzzle called “the trolley problem”.
It’s a moral puzzle that goes along these lines: “There is a trolley thundering down a hill, out of control, heading towards a group of five workers. The workers are oblivious and will remain so until it is too late.
However, there is a fork in the track leading to a solitary worker, who is equally oblivious to the tram and will also remain so until it is too late. Next to you is a lever that controls the points and will divert the tram onto this branch line.”
The question is: do you pull the lever or allow the tram to continue on its current course? Most people, when asked, say they would pull the lever, justifying it as acting for the greater good. The thing about the trolley problem is that there are variations that really make you question your judgement. What if there were no branch line, but pushing someone in front of the tram would be enough to stop it?
As a ‘thinking machine’ capable of doing accidental harm, the self-driving car presents us with a constant trolley problem. Of course, these systems are designed with safety at their core, to avoid doing harm, and yet we are forced to hard-code these choices; we have to tell these morally neutral machines that one choice outweighs another, that there is a value judgement to be made. We have to code them to hit an animal rather than harm a human driver, to drive off a cliff rather than hit a group of bystanders, or to hit a single person to save multiple occupants.
What should universities do?
At the conference of AMOSSHE, the student services organisation, we discussed the case of a university that had been presented with its very own trolley problem. This university was using some incredibly sophisticated data analytics and machine learning which, it had found, could predict the likelihood of a student dropping out. This wasn’t just ‘student x is first in family and from a low socio-economic group, so has a 10% chance of dropping out’; this was data analytics saying this student will drop out: the data predicts it, we’ve modelled it, tested it and observed it to be true, and the observation is repeatable. It’s difficult to express just how compelling this was; the university was quite literally asking, “What do we do?”
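For readers wondering what such a model looks like in practice, here is a minimal, illustrative sketch of a dropout-risk classifier. Everything in it is hypothetical: the feature names, the synthetic data and the choice of a scikit-learn logistic regression are stand-ins for whatever the university’s far more sophisticated system actually did. The point is simply that the output is a probability per student.

```python
# A minimal, hypothetical sketch of a dropout-risk model.
# Feature names, data and thresholds are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: first-in-family flag, standardised socio-economic index,
# attendance rate, and virtual learning environment logins per week.
X = np.column_stack([
    rng.integers(0, 2, n),       # first_in_family
    rng.normal(0, 1, n),         # socio_economic_index
    rng.uniform(0.3, 1.0, n),    # attendance_rate
    rng.poisson(5, n),           # vle_logins_per_week
])

# Synthetic "ground truth": risk rises with low attendance and low engagement.
logits = 1.5 - 4.0 * X[:, 2] - 0.2 * X[:, 3] + 0.5 * X[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model returns a probability of dropping out for each student.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
print("Students flagged above 0.8 risk:", int((risk > 0.8).sum()))
```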
So, what would you do? Do you accept the algorithm as fact and install revolving doors at enrolment? Of course not, and I’m not suggesting for a moment that the university in question was proposing this, but what decisions need to be made?
Choosing the individual
You could use this data to intervene and to trigger support processes: ignore the supposed certainty of the predictive algorithm and hope that this is the one case that is different, that you can do something the algorithm couldn’t account for, and that this will be the thing that tips the scales. But what if the algorithm were sophisticated enough to say, “if you direct attention to this student, they will still fail to engage, and furthermore your intervention will divert resources away from five other students with less intensive support needs who, as a result of this diversion, are now predicted to drop out”? Faced with this prediction, would you still be able to choose the individual? Or do you believe that if you can save the one, you can save each of the five? What if, for each individual saved, you were faced with five more at risk?
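To make the diversion dilemma concrete, here is a toy calculation using invented numbers: a hypothetical fixed budget of adviser hours, one intensive case and five lighter-touch cases. None of these figures come from the university in question; they only illustrate the trade-off the algorithm is assumed to be describing.

```python
# A toy illustration of the diversion dilemma, with invented numbers.
# The budget of ten adviser hours covers either the one intensive case
# (10 hours) or all five lighter-touch cases (5 x 2 hours), but not both.
ADVISER_HOURS = 10

intensive = {"hours_needed": 10, "risk_if_supported": 0.90, "risk_if_not": 0.95}
lighter = [{"hours_needed": 2, "risk_if_supported": 0.20, "risk_if_not": 0.60}
           for _ in range(5)]

# Option A: spend the whole budget on the intensive case.
option_a = intensive["risk_if_supported"] + sum(s["risk_if_not"] for s in lighter)

# Option B: spread the budget across the five lighter-touch cases instead.
option_b = intensive["risk_if_not"] + sum(s["risk_if_supported"] for s in lighter)

print(f"Expected dropouts, choosing the one:  {option_a:.2f}")   # 0.90 + 5 * 0.60 = 3.90
print(f"Expected dropouts, choosing the five: {option_b:.2f}")   # 0.95 + 5 * 0.20 = 1.95
```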
Choices and obligations
So, do we instead choose to ignore the information we have been given? Do we accept that we can only act on what we know to be the moral choice and let fate decide the rest? Or are we equally morally obligated to consider all the information available to us, no matter how unpleasant that prospect may be? To extend our self-driving car comparison: when faced with the ability to choose the lesser of two evils, do we tell the ‘AI’ to switch off and leave things to fate?
At Wonkhe’s ‘Secret Life of Students’ conference, we heard how the power of data can challenge the assumptions we make about others. We also heard that, when dealing with data, we must be careful not to lose sight of the individual: there is no homogeneous ‘student voice’ or ‘student experience’; there are ‘student voices’ and ‘student experiences’.
At present, our analytics struggle with the messy nuances of our chaotic and unpredictable individualism but excel at our more predictable group behaviours. Conversely, our brains are limited in their ability to comprehend big data and see its patterns, but are far better at understanding what the individual in front of us is thinking.
The issue facing us is that our analytics are improving rapidly: the more data they have, the more accurate their predictions become. But how long will it be before our systems decide that the lever at the side of the track is just an unnecessary variable?