So far, discussions around AI in higher education have centred on its use in the curriculum and on upskilling students.
What’s lacking is an acknowledgment that our students hold a wide range of sometimes conflicting views on AI, particularly when it comes to ethics, workers’ rights and data privacy.
Students’ unions can and should be much louder in raising these concerns.
The recent consultation on the Data (Use and Access) Bill has already zoomed by. The bill aims to clarify data-sharing rules, and some Lords have raised concerns about AI companies being allowed to use copyrighted materials without initial consent or compensation.
And whilst SUs have been vocal on bills related to housing, employment and transport, the student movement has been missing from this debate. Policy like this will shape both the job market and the learning tools our students use now and will use in future.
What do students think?
Recently, we at Exeter Students’ Guild ran one of our paid monthly surveys to ask students about their views on AI, drilling down as deeply as we could.
We received nearly 800 responses and found that “consensus” was harder to pin down than you might expect.
50 per cent of our students felt “curious” about AI and 40 per cent felt “concerned.” Only eight per cent reported having no feelings about AI, or not knowing how they felt.
Open-text comments expressed animated and principled views, especially on data, the environment and workers’ rights. Even avid and routine users of AI resoundingly believed that AI should only ever be used as a supplementary tool in education.
They cited concerns about the quality of their learning and echoed the ethical concerns of their AI-hesitant counterparts. PGR students were especially in favour of stricter policies, with 58 per cent agreeing and only 18 per cent disagreeing.
AI and privacy
68 per cent of respondents reported concern about AI’s impact on data privacy, and 80 per cent affirmed that they hold data privacy as a personal value.
From an ethical point of view, there were objections to AI models being trained on data without consent. Students are also worried about their own data being accessed unknowingly by third parties and used in ways they haven’t consented to. If a student from an authoritarian country writes an essay on their country’s human rights situation and uses AI tools for research or grammar, can they be sure it won’t be traced back to them?
Even if students don’t use AI to analyse their essay, what happens if a marker, free to act without guidance, runs their essay through these same tools?
In our survey, students were broadly positive about moving towards project-based learning (instead of in-person exams), which presents an opportunity to strike while some universities undergo curriculum reform.
With the government increasingly scrutinising foreign influence (as seen with the Foreign Influence Registration Scheme), surely it should also be concerned about the ability of American tech companies and others to mine data from UK residents?
AI and the future
In addition to offering guidance on data-friendly AI tools, universities should rank tools based on environmental impact.
On the micro-level, students reported worries about their own employment in the shifting job market.
For universities, this means that students need support in understanding and communicating the value of their learned skills. At Exeter, for example, our university is exploring a “skills taxonomy” tool that directly links to modules.
In terms of messaging, universities should be wary of simply telling students that all will be well if they learn how to use these tools, since large numbers face ethical barriers. They should, however, work to address digital inequity for those who do wish to use them, as others have called for.
What next?
Universities and SUs should keep an eye on what comes out of the Data (Use and Access) Bill. As mentioned earlier, allowing AI firms to train on data by default is unfair because it puts the responsibility on the user to “opt out,” rather than requiring them to “opt in.”
Students are putting massive amounts of personal data online, including photos, academic work and creative pieces, and an “opt-out” framework does not cover the situation where someone else (a friend, or a professor) shares what you own and forgets to opt out themselves.
On this, and at the macro level, we’ve found that students see AI as a challenge to labour rights. Why can models be trained on the products of labour, without consent or compensation, when those models then compete with workers?
As universities, treasure troves of data, are courted by AI companies, SUs should ensure that past and current students’ work is protected from being given away to these firms without consent and compensation, and that consent is withheld by default.
We’ve already watched tensions play out at Oxford University Press and Cambridge University Press, where academics were outraged at the prospect of their work being sold to train AI without their knowledge.
I believe that we must move equitably, perhaps at the cost of short-term gains, to avoid setting foundational frameworks that damage students’ labour and privacy rights in the long term.
SUs should not shy away from calling for an “opt-in” data system, pushing universities to secure decarbonisation commitments from external tech partners, embedding “informed choice” as a principle in the curriculum, and ensuring that data from the UK stays in the UK.
We owe it to students to shape emerging systems now, rather than waiting until talk of change or overhaul feels politically unpalatable. We represent students on the things they care about. It’s not just AI in assessment; it’s AI in the world.