This year, 128 members of the University Mental Health Advisers Network (UMHAN) completed a survey covering the 2024/25 academic year, and the results should give the sector pause.
The vast majority – 71 per cent – support students who use AI in their academic work, and 68 per cent have noticed an increase in AI use among the students they work with.
A further 38 per cent have supported students sanctioned by their university for AI use in the last academic year alone.
When asked whether their university had an AI policy or guidance for students – and some members work across multiple universities – only 57 per cent said yes, while 29 per cent didn’t know or weren’t sure, four per cent said no, and four per cent reported a policy was in progress.
This is what worried us. If we know more students are using generative AI and that number is increasing – and if, as HEPI/Kortext found, two-thirds of students believe it’s essential to be able to use it effectively – where’s the guidance about what’s appropriate, acceptable, ethical, or permissible?
UMHAN has around 800 members who support students with their mental health, the majority of whom are accredited practitioners – staff with specific qualifications and/or professional registration with bodies including the Nursing and Midwifery Council and Social Work England, among many others.
They are mental health advisers, who support students experiencing emotional or psychological distress and act as a point of contact throughout their studies, and specialist mental health mentors, usually funded by Disabled Students’ Allowances – DSAs – and employed in-house, through an agency, or on a freelance basis.
Members have been discussing their experiences of supporting students with mental health conditions and their use of generative AI – for good and bad – for some time, which is why we included a set of questions on AI in our recent member survey, developed with UMHAN member and specialist mental health mentor Tara J Murphy.
The guidance gap
When asked about their experience of university AI policies and guidance for students, members described the uncertainty and unease they observed even among students at universities with a policy on AI use – to the point that some students avoid using AI altogether rather than risk getting it wrong. This suggests students need clearer guidance and more support in using AI appropriately.
As a recent survey of students’ use of generative AI found, only 36 per cent of students had received support from their institution with their AI skills.
The Office for the Independent Adjudicator – OIA – noted in its 2024 annual report, reflecting on student complaints about AI and academic misconduct, that higher education institutions need to:
…include more information about the use of generative AI, and to support this with information in course handbooks and module specifications about what is permitted for specific assessments.
More worryingly, some students in receipt of DSAs for assistive technology aren’t using it for fear of getting into trouble, as this specialist mental health mentor noted:
…it is a little confusing for students in receipt of DSA software though as that is sometimes classed as generative AI. I have seen some students stop using their software for fear of doing something wrong which then defeats the point of having it in the first place.
Even where guidance exists, it may not always be as clear as it should be, creating ambiguity that can be particularly frustrating for neurodivergent students who may interpret language literally:
Our institution has good guidance but there is still room for uncertainty and I worry about this for our students, especially those who would struggle with abstract language.
Accused and distressed
Almost all of the complaints the OIA received about AI were from students subject to an academic misconduct procedure. The OIA noted that while the number of these complaints is low, the incidence of academic misconduct linked to AI is rising within institutions. We explored this in some detail – asking what impact sanctions had on students’ mental health and how the process was managed and supported by the university.
What emerged was that some members were supporting students with their mental health precisely because those students had been accused of academic misconduct and found the process stressful, while others found that the students they were already supporting struggled even more.
Significant distress arose for those who felt falsely accused of inappropriate AI use, and concerns were raised about the accuracy of plagiarism or cheating detection systems. While some students did accept responsibility for their actions, others were unaware they’d committed misconduct or were unsure about the boundaries of acceptable AI use.
Waiting for the outcome of such processes, not knowing what they entailed, and their often formal nature also caused stress and worry. Investigations were frequently described as overwhelming, and sometimes as poorly managed or insensitively handled.
Although some members pointed to instances where students were supported by an academic or a students’ union, or signposted to wellbeing or mental health services, and while the OIA has noted that some institutions pursue an “educative rather than punitive approach for minor or first instances of academic misconduct,” one mental health adviser reflected:
It tends to have a negative impact, they often come to us for support in managing the stress associated with the academic hearings and are sign posted to us by their academics.
We also wanted to know what members are observing in relation to the students they support and AI, including both positive and negative impacts. Some members noted that students were finding AI particularly useful for summarising research, planning their time effectively, organising their ideas, breaking down questions, locating journal articles, creating timetables, and managing their workload – something that could be particularly useful for neurodivergent students, as this specialist mental health mentor noted:
Great to break down questions to aid understanding and help with plans. Can set timetables for work to be done. Easily accessible.
AI as therapist
But the most frequently mentioned observation was students using AI as a mental health support tool – like a counsellor or therapist – ranging from seeking reassurance to getting help managing anxiety.
As mental health professionals, our members were understandably conflicted about this – as a self-help tool it might be a useful form of support, but relying on it also had worrying implications, as these mental health advisers shared:
I have heard of students using it as a therapist and in one case a student was using it as a form of self harm. I feel we are not really ahead of this in terms of knowing how best to work with this.
Some students have accessed AI for mental health support with varying degrees of success. Occasional concerning story about suicide options have been offered up by AI.
The mental health charity Mind has announced the launch of its AI and Mental Health Commission, noting the growing number of individuals becoming dependent on, or forming therapeutic relationships with, AI tools that “are not designed, regulated or clinically aligned to provide mental health support.”
Two questions for the sector
Our members take a measured view of AI – most see it as a tool that students need to learn how to use safely, ethically, and critically – and, as such, higher education institutions need clear, firm, consistent policies and support for its appropriate and acceptable use. But our survey data raises at least two key questions for us, our members, and the sector to consider.
First, if we know that it’s often barriers within the higher education environment that can worsen existing mental health conditions or contribute to mental ill health for students, what is being done about those created by or relating to AI use – particularly if they lead to accusations of academic misconduct? As HEPI/Kortext suggested, it might seem that “efforts to safeguard assessments are more advanced than efforts to boost students’ AI literacy” – and, by association, their mental health.
Are students and mental health support staff being included, as key stakeholders, in the development of providers’ academic integrity policies and academic misconduct procedures for generative AI? As Kennedy et al. reflected on their experiences of writing the University of Limerick’s policy:
In retrospect, it would have been beneficial to invite a greater number of representatives from the student body, educational technologists from the staff community or members of the disability support services.
Second, as Phelan discussed, the potential harm for students in using AI for their mental health “is real and affecting students right now,” and yet this aspect of AI use has been largely absent from the sector’s conversations. Iftikhar’s study of AI chatbots identified 15 ethical risks, including a lack of safety and crisis management – failing to refer users to appropriate resources or responding indifferently to crisis situations including suicidal ideation.
This is where the “human connection” provided by our members and mental health professionals in higher education is so crucial – nuanced, empathetic, and responsible care that AI cannot replicate.
As ORCHA – the Organisation for the Review of Health and Care Apps, which recently announced the formation of the ORCHA Digital Health AI Advisory Group – asserts, the challenge is no longer whether to adopt AI, but how to assess risk, governance, and safety in a way that’s consistent, proportionate, and operational.
University Mental Health Day, organised by Student Minds and UMHAN, takes place every March to get the nation talking about student wellbeing and working together to make mental health a university-wide priority. This year’s theme is “human connection.” Join Student Minds on Thursday 12 March 2026 for a free online panel discussing the impact of AI on student mental health, including UMHAN member Tara J Murphy.