Last summer, our student voice team received an intriguing request from the university.
They wanted our help organising a student-staff debate on a hot topic for incoming students during freshers’ week. To the dismay of some of my colleagues, I suggested AI.
Though the apprehension was understandable, I had good reason to care. Our officer election analysis that year revealed that 78 per cent of all manifestos addressing teaching and learning expressed concern about AI. It was also a key component of half our elected student officers’ manifestos. Our VP Postgraduate, Sheeba Naaz, was particularly passionate, having navigated challenges around AI as a student herself.
Despite a general consensus that students cared about AI, the resulting event was, at first glance, a flop.
The lecture theatre was scattered with no more than ten students, facing a student-staff panel of six. Yet for those present, the discussion was fascinating: it revealed a complex landscape of anxiety and opportunity, covering academic misconduct, assessment, graduate outcomes, learning, and the power of dialogue. The seeds for change had been sown.
Painting a picture
Like most students’ union advice centres, ours has seen the number of academic misconduct cases skyrocket in recent years. In one instance, the advice team described an academic who had set a canary trap by planting fake readings in their reading lists, supposedly seeking to smoke out AI-using students by burning them with the technology’s own penchant for hallucination.
More broadly, our conversations with the advice team revealed an atmosphere in which students felt unable to share how they use AI for fear of being accused of cheating. In this sense, universities were operating based on perceptions, rather than knowledge, of actual student use.
Even so, we knew that the staff themselves were also in a bind. One lecturer I spoke to described their head of department stating in no uncertain terms that “teachers are pretending to teach, and students are pretending to learn.”
It was clear that addressing this culture of mistrust between students and staff needed more than just plasters; it needed a deep look at the very heart of why and how we teach and learn.
Building a manifesto
We had heard from our colleagues at King’s Academy that LSE Students’ Union had recently created a student manifesto for “Assessment in the Age of AI” and were excited to see how we could do something similar at King’s, leveraging the existing student-staff partnership on the Transforming Assessments at King’s (TASK) project.
The most crucial aspect of this process would be creating a judgement-free space for student-to-student conversations grounded in the reality of how and why they use, or don’t use, AI tools. We weren’t here to catch them out; we were here to learn.
The first step of this process was to be our AI manifesto labs. These were discussion groups for students to share and reflect on their lived experiences of AI in higher education. To recruit students, we parked ourselves across our campuses and invited students to share one word to describe how they were feeling about AI. These initial interactions demonstrated the wide spectrum of student views, with answers ranging from “optimistic” to “terrified.”
The workshops themselves focused on understanding how exactly students were using AI. We saw that before class even began, AI was aiding in course selection and planning, especially for international students navigating university systems.
During lectures, it clarified, summarised, and translated content. It generated discussion questions, organised literature, and created practice questions, while tools like Consensus and ChatPDF were used to research and find sources.
AI was used to check work against rubrics, prepare exam materials, and analyse assessment feedback. It was evident that the widespread use of AI tools was not merely a shortcut for assignments but also constituted an integrated and personalised approach to learning.
With this wider perspective in mind, we organised a further reflective workshop to begin the work of synthesising and focusing the lab findings. Students prioritised problem statements drawn from previous discussions and identified the foundational principles that could guide our approach to how AI is used at King’s.
In particular, students were keen to highlight the relevance of AI tools to disadvantaged students: commuter students, international students, and students with disabilities. In all cases, students were using AI to address gaps in institutional resourcing and find novel ways of enhancing their learning experience. Suddenly a more nuanced picture of AI in higher education had materialised, one that wasn’t just about teaching but also about filling in support gaps.
What next
By the end of this process, we had a manifesto composed wholly of student insights shaped in conversation with institutional realities.
We settled on five key principles that should guide King’s long-term engagement with AI and outlined three priority areas for immediate action: clear and consistent guidance, thoughtful assessment design, and AI literacy provision.
The vision is much broader than just policy changes. We hope that this manifesto can serve as a guiding document for years to come and act as both an impetus and a tool for students to be active partners in the design and delivery of their education.
We know that AI technology is rapidly evolving. Today it’s ChatGPT, tomorrow it will be wearable technology and AI agents. What’s clear is that a challenge as dynamic as AI requires students to be ongoing partners in educational change.
And students are not only keen to do this; they are capable and innovative.
Throughout our workshops, students expressed the desire not simply to whinge about issues but to put forward solutions and ideas to meet this challenge. To exclude these students would be not simply an injustice but also a strategic error with long-term consequences. The challenge of AI is here, and students are ready. The question is, are we?