Can you predict a student’s wellbeing using administrative data? And can you do it well enough to target support and interventions to the students most in need?
Learning analytics – which uses similar data to deploy academic interventions – is already widespread within higher education.
Just about every large university provider is able to use information about student participation and characteristics to promote appropriate study support or intervention. So why not on an issue that could have far greater repercussions for students?
It should be a straightforward choice – a clear benefit to students, and a way to make the most effective use of scarce provider resources. And a recent OfS-funded project has been a qualified success.
The project
In 2019 the Office for Students funded a project led by Northumbria University (with Buckinghamshire New University and the University of East London as partners) to investigate the possibility of analytics for mental health. This was one of ten collaborative projects (at a total value of £6m) funded as part of the Mental Health Challenge Competition – a funding competition designed to foster collaborative approaches to common issues.
The final evaluation report for this programme of work – including for the Northumbria project – was published in October 2022. The restrictions on activity associated with Covid-19 meant that many of these results, although promising, could not be fully validated for applicability in a “normal” year. Depending on your own personal definition of “normal”, we’ve now had a couple of these, resulting in a further report (out today) from project technology partner Jisc.
The Northumbria project drew on a number of indicators – disability information (including student support recommendations), personal extenuating circumstance information, change of circumstance information, care leaver status, and first language – finding that a combination of these could be used to predict which students were likely to experience wellbeing issues.
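As a very rough illustration of how indicators like these might be combined into a single risk score – the project’s actual model is not published, so the feature names, weights and method below are entirely hypothetical – a simple logistic regression over binary flags is one plausible approach:

```python
# A minimal sketch, assuming binary indicator flags per student. All names,
# weights and data here are invented - this is not the project's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

indicators = ["disability_support", "extenuating_circumstances",
              "change_of_circumstance", "care_leaver", "first_language_other"]

# Hypothetical flags per student (1 = indicator present)
X = rng.integers(0, 2, size=(n, len(indicators)))

# Synthetic "low wellbeing" labels, loosely dependent on the flags -
# a stand-in for the outcomes a wellbeing survey would provide
logits = X @ np.array([0.8, 0.6, 0.4, 0.9, 0.3]) - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]  # a 0-1 risk score per student
print(dict(zip(indicators, model.coef_[0].round(2))))
```

The point of a model like this is not a diagnosis – it is a ranking that tells a human where to look first.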
And yet…
When the project was launched there was considerable concern, particularly among university learning technology specialists, about its ethical basis. Early publicity didn’t quite show the nuanced and human process that was eventually developed – and some believed the project would mean the automated designation of students with mental health issues, to the detriment of the students involved.
It is an iron rule of education technology that most “automated” systems are anything but – and in this case the incorporation of human decision making within the process has served to quell anxieties, while also increasing the cost of the initiative.
But it does prompt the wider question more usually associated with diagnostic predictions: if your background, behaviour, or characteristics made you more likely to experience a life-changing health condition, would you want to know?
For some parts of the population the answer would be an immediate “yes”. Data about us is gathered and interpreted nearly constantly – we carry sophisticated sensors in our pockets and on our wrists, while numerous systems track our behaviours and engagements with online tools. If we can use this information to make ourselves healthier and happier, why wouldn’t we?
The complication is one of agency. Many identified risk factors (our genetics, our socio-economic background) are outside of our control – others (our behaviours, our diets) may technically be within our control but are in practice very difficult to alter. If you knew that something about you made you more prone to mental health issues (to anxiety, say), is there a risk that it would become a self-fulfilling prophecy?
How did they know?
University systems hold a vast amount of information about individual students – Northumbria counted more than 800 data points. To work out which of these could help identify students at risk, the project ran a standardised wellbeing survey from the World Health Organisation (WHO) across multiple waves to build an overall picture of the mental health of registered students.
Comparing this survey data with student behaviour and characteristics yielded a usable range of indicators that appeared to correlate with low wellbeing as measured by the WHO survey. If you know one thing about correlation, it is that it doesn’t imply causation – and the distinction matters here: neither the project nor the university believed that the identified characteristics caused, or inexorably led to, mental health issues.
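For a flavour of what that screening step might look like, here is a toy version – assuming, purely for illustration, a tidy table of binary indicators alongside WHO-5 style scores (a drastic simplification of 800-plus data points). A point-biserial correlation is one standard way to test each indicator against a continuous score:

```python
# A minimal sketch with invented data - the column names are hypothetical
import pandas as pd
from scipy.stats import pointbiserialr

df = pd.DataFrame({
    "care_leaver":               [0, 1, 0, 0, 1, 0, 1, 0],
    "extenuating_circumstances": [1, 1, 0, 0, 1, 0, 0, 1],
    "who5_score":                [68, 32, 74, 60, 28, 80, 40, 52],
})

# A negative r suggests the indicator co-occurs with lower wellbeing scores -
# which, as above, is correlation rather than causation
for col in ["care_leaver", "extenuating_circumstances"]:
    r, p = pointbiserialr(df[col], df["who5_score"])
    print(f"{col}: r={r:.2f}, p={p:.3f}")
```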
This level of information processing and data mapping was resource intensive, and it is worth noting that Northumbria had spent the previous decade updating and improving the quality of its internal systems – other providers, including those originally involved in the project, would likely struggle. Like so much in higher education, good data processes and data governance are required – and those things don’t come cheap.
Interventions
Initially, given that it had high-quality survey data (using the WHO wellbeing instrument) to act on, the university instigated what it called “nudges”. Students who, based on their own responses, were at “high risk” were offered one-to-one support, those at medium risk were pointed to guided self-help resources, and those at low risk were invited to wellbeing workshops. Everyone benefited from a generic signposting campaign.
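In code terms the tiering amounts to little more than a threshold lookup – the cut-offs and score scale below are invented for illustration, and (as described next) the real decision to contact anyone sat with a member of staff rather than with a system:

```python
# A minimal sketch of tiered "nudges", with hypothetical cut-offs. In the
# project the final decision to contact a student was made by a human.
from enum import Enum

class Nudge(Enum):
    HIGH = "offer one-to-one support"
    MEDIUM = "point to guided self-help resources"
    LOW = "invite to wellbeing workshops"

def suggest_nudge(risk_score: float) -> Nudge:
    """Map a 0-1 risk score to a suggested intervention."""
    if risk_score >= 0.7:
        return Nudge.HIGH
    if risk_score >= 0.4:
        return Nudge.MEDIUM
    return Nudge.LOW

print(suggest_nudge(0.82).value)  # -> "offer one-to-one support"
```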
This approach rolled over as the predictive data became usable – but the interventions were not an automated process. Rather, the indicators fed into a data dashboard for each individual student, with the decision to send a “nudge” made manually. Given that 74,717 “nudges” were sent over the lifetime of the project, this required a large amount of manual work. As the report notes:
Putting the analytics project into action required a high level of expert involvement from the team. While it would be possible and desirable to automate some steps, they placed a high value on human decision making when deciding who to contact and how.
This required the recruitment of additional skilled staff resource within the university’s central Counselling and Mental Health Team. Indeed, the data on the dashboards was not made available outside of this central team.
Of course, all of this work would be in vain if students did not look at their emails or messages. If you have ever been responsible for contacting students, you will not be surprised by open rates that hovered between 40 and 50 per cent, and click rates that topped out at around three per cent. As potentially disappointing as these numbers may be, students at higher levels of risk did appear more likely to open and engage with targeted messages, and support services reported that “nudges” led to an increase in student demand. The “nudges” were having an impact.
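Some back-of-envelope arithmetic – using mid-range approximations of the reported rates, so these are orders of magnitude rather than figures from the report – shows what those percentages mean at this scale:

```python
# Rough scale implied by the reported figures (rates are approximate)
nudges = 74_717
opened = nudges * 0.45   # open rates "between 40 and 50 per cent"
clicked = nudges * 0.03  # click rates "topped out around three per cent"
print(f"~{opened:,.0f} opened, ~{clicked:,.0f} clicked")  # ~33,623 and ~2,242
```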
A question of consent
Jisc’s two codes of practice (covering learning analytics, and wellbeing and mental health analytics) informed the approach to consent employed by the project. During the lifetime of the project, students provided explicit and informed consent for their participation (including the use of their data and survey responses to inform personalised interventions). Some 61 per cent of students chose to do this (according to a case study published by the Office for Students), rising to 70 per cent in later years – but as the report notes:
This was an entirely appropriate approach for the research project but it could prove unhelpful in delivering mental health analytics in a business-as-usual setting as there may be legal, safeguarding or other circumstances where students are not permitted to opt out of such interventions.
Most learning analytics systems instead rely on a “legitimate interest” or “contract” lawful basis – this allows all students to be included, but removes agency from the student and as such may leave a university open to legal challenge.
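The practical difference between the two bases is easy to see as a cohort filter – a toy sketch, with hypothetical field names:

```python
# Toy illustration only: under explicit consent, just opted-in students enter
# the analytics cohort; under legitimate interest, everyone is included.
students = [
    {"id": "s1", "consented": True},
    {"id": "s2", "consented": False},
    {"id": "s3", "consented": True},
]

explicit_consent_cohort = [s for s in students if s["consented"]]
legitimate_interest_cohort = list(students)  # included by default

print(len(explicit_consent_cohort), "of", len(legitimate_interest_cohort))  # 2 of 3
```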
What to make of it
Would we want to know? Would our students want to know? Should we be required to tell them?
It is possible to imagine a situation where a student had characteristics that made them more likely to experience periods of mental health challenge – and that during one of these they died by suicide. Would the university be in breach of a duty of care if it held information – in the form of student characteristic and behavioural data indicators – suggesting such an outcome might be likely, and did not act on it?
It has become largely expected that universities will track attendance and engagement (in the form of assessment submissions) and act on this information where a student is at risk of non-continuation or non-completion. Though perhaps in an idealised world this information would be identified via a pastoral relationship with an academic tutor, in practice our mass system means that staff need a little more help to put the right support in place, and data can offer that.
No prediction can be expected to be completely accurate all of the time – but this becomes less of an issue if the resulting intervention is non-harmful. If a weather forecast suggests rain and I return home with a dry umbrella, what have I lost? If a student is not experiencing a wellbeing issue, why would it be a problem if they are encouraged to engage with the university counselling service?
The counter-argument is one of agency – should personal characteristics determine outcomes? I would argue that though this position is valid when it comes to things like the examnishambles (where students from less advantaged socio-economic backgrounds were algorithmically assigned lower predicted exam results), it is not valid where personal characteristics demonstrably have a direct impact on wellbeing.
It’s not a tidy, direct impact – but if things that have happened in your past, or that you are currently experiencing, mean that you might benefit from additional support, using the available personal data to offer that support feels benign enough.
Mandatory?
Edward Peck’s Department for Education-sponsored Higher Education Mental Health Implementation Taskforce has, as you may expect, taken a strong interest in the emerging field of wellbeing analytics. On this particular project, it notes:
Jisc will soon publish its evaluation of the innovative Northumbria Wellbeing Analytics project. We expect this evaluation to demonstrate that it is possible to predict student wellbeing with accuracy, as well as to identify additional students who may otherwise have remained unknown to HEPs’ services. The evaluation is also likely to conclude that the quality, availability and accuracy of data are essential conditions to generate reliable insight on student wellbeing
While the published report is positive overall on the project, it is difficult to agree, based on the evidence presented, that the project is able to predict student wellbeing with accuracy (DfE is dead right on needing good data, mind you).
To be clear – at the start of the report we get “The project successfully proved that it is possible to predict a student’s wellbeing with significant accuracy to add operational value to student support models of intervention,” but that last clause makes this a very different claim from being able to point out which students were going to have support needs based on indicators alone.
Accuracy – in many ways – isn’t the point: the risks inherent in a student being “incorrectly” encouraged to attend a one-to-one session are low (the student in question would likely not bother), though risks in failing to identify a student in need remain high. Any intervention that encourages students to seek support when they need it is an improvement over the status quo.
Wellbeing analytics are not the answer to student mental health support, any more than learning analytics are the answer to academic support. In both cases a useful source of information is made available to inform the targeted promotion of relevant support – but in both cases, you can’t force a student to do something even if it is clearly in their own interests. Initiatives like this are useful, but they augment rather than replace the human element of support – and that is the difficult (and expensive) bit.