With 18 per cent of students reporting mental health difficulties, a figure that has tripled in just seven years, universities are navigating a crisis.
The student experience can compound many of the risk factors for poor mental health – from managing constrained budgets and navigating the cost of learning crisis, to moving away from established support systems, to balancing high-stakes assessment with course workload and part-time work.
In response, universities provide a range of free support services, including counselling and wellbeing provision, alongside specialist mental health advisory services. But if we’re honest, these services are under strain. Despite rising expenditure, they’re still often under-resourced, overstretched, and unable to keep pace with growing demand. With staff-student ratios at impossible levels and wait times for therapeutic support often exceeding ten weeks, some students are turning to alternatives for more immediate care.
And in this void, artificial intelligence is stepping in. While ChatGPT-written essays dominate the sector’s AI discussions, the rise of “pastoral AI” highlights a far more urgent and overlooked AI use case – with consequences more troubling than academic misconduct.
Affective conversations
For the uninitiated, the landscape of “affective” or “pastoral” AI is broad. Mainstream tools like Microsoft’s Copilot or OpenAI’s ChatGPT are designed for productivity, not emotional support. Yet research suggests that users increasingly turn to them for exactly that – seeking advice on breakups, mental health, and other life challenges, as well as help with essay writing. While affective conversations may account for only a small proportion of overall use (under three per cent in some studies), the full picture is poorly understood.
Then there are AI “companions” such as Replika or Character.AI – chatbots built specifically for affective use. These are optimised to listen, respond with empathy, offer intimacy, and act as virtual friends, confidants, or even “therapists”.
This is not a fringe phenomenon. Replika claims over 25 million users, while Snapchat’s My AI counts more than 150 million. The numbers are growing fast. As the affective capacity of these tools improves, they are becoming some of the most popular and intensively used forms of generative AI – and increasingly addictive.
A recent report found that users spend an average of 86 minutes a day with AI companions – more than on Instagram or YouTube, and not far behind TikTok. These bots are designed to keep users engaged, often relying on sycophantic feedback loops that affirm worldviews regardless of truth or ethics. Because large language models are trained in part through human feedback, their output is often highly sycophantic – “agreeable” responses that are persuasive and pleasing – and these can become especially risky in emotionally charged conversations, particularly with vulnerable users.
Empathy optimisations
For students already experiencing poor mental health, the risks are acute. Evidence is emerging that these engagement-at-all-costs chatbots rarely guide conversations to a natural resolution. Instead, their sycophancy can fuel delusions, amplify mania, or validate psychosis.
Adding to these concerns, legal cases and investigative reporting are surfacing deeply troubling examples: chatbots encouraging violence, sending unsolicited sexual content, reinforcing delusional thinking, or nudging users to buy them virtual gifts. One case alleged a chatbot encouraged a teenager to murder his parents after they restricted his screen time; another saw a chatbot advise a fictional recovering meth addict to take a “small hit” after a bad week. These are not outliers but the predictable by-products of systems optimised for empathy but unbound by ethics.
And it’s young people who are engaging with them most. More than 70 per cent of companion app users are aged 18 to 35, and two-thirds of Character.AI’s users are 18 to 24 – the same demographic that makes up the majority of our student population.
The potential harm here is not speculative. It is real and is affecting students right now. Yet “pastoral” AI use remains almost entirely absent from higher education’s AI conversations. That is a mistake. With lawsuits now spotlighting cases in which AI allegedly “encouraged” suicides among vulnerable young people – many of whom first encountered AI through academic use – the sector cannot afford to ignore this.
Paint a clearer picture
Understanding why students turn to AI for pastoral support might help. Reports highlight loneliness and vulnerability as key drivers. One found that 17 per cent of young people valued AI companions because they were “always available,” while 12 per cent said they appreciated being able to share things they could not tell friends or family. Another reported that 12 per cent of young people were using chatbots because they had no one else to talk to – a figure that rose to 23 per cent among vulnerable young people, who were also more likely to use AI for emotional support or therapy.
We talk often about belonging as the cornerstone of student success and wellbeing – with reducing loneliness a key measure of institutional effectiveness. Pastoral AI use suggests policymakers may have much to learn from this agenda. More thinking is needed to understand why the lure of an always-available, non-judgemental digital “companion” feels so powerful to our students – and what that tells us about our existing support.
Yet AI discussions in higher education remain narrowly focused on academic integrity and essay writing. Our evidence base reflects this: the Student Generative AI Survey – arguably the best sector-wide tool we have – gives little attention to pastoral or wellbeing-related uses. As a result, data on this area of significant risk remains fragmented and anecdotal. Without a fuller, sector-specific understanding of student pastoral AI use, we risk stalling progress on developing effective, sector-wide strategies.
This means institutions need to start a different kind of AI conversation – one grounded in ethics, wellbeing, and emotional care. It will require drawing on different expertise: not just academics and technologists, but also counsellors, student services staff, pastoral advisers, and mental health professionals. These are the people best placed to understand how AI is reshaping the emotional lives of our students.
Any serious AI strategy must recognise that students are turning to these tools not just for essays, but for comfort and belonging too, and we must offer something better in return.
If some of our students find it easier to confide in chatbots than in people, we need to confront what that says about the accessibility and design of our existing support systems, and how we might improve and resource them. Building a pastoral AI strategy is less about finding a perfect solution and more about treating pastoral AI seriously – as a mirror that reflects back to us student loneliness, vulnerability, and gaps in institutional support. These reflections should push us to re-centre those experiences and to reimagine our pastoral support provision in an image that is genuinely and unapologetically human.