The past couple of years in higher education have been dominated by discussions of generative AI – how to detect it, how to prevent cheating, how to adapt assessment. But we are missing something more fundamental.
AI isn’t just changing how students approach their work – it’s changing how they see themselves. If universities fail to address this, they risk producing graduates who lack both the knowledge and the confidence to succeed in employment and society. Consequently, the value of a higher education degree will diminish.
In November, a first-year student asked me if ChatGPT could write their assignment. When I said no, they replied: “But AI is more intelligent than me.” That comment has stayed with me ever since.
If students no longer trust their own ability to contribute to discussions or produce work of value, the implications stretch far beyond academic misconduct. A loss of confidence is affecting motivation, resilience and self-belief, which in turn affects students’ sense of community, assessment grades and graduate skills.
I have noticed that few discussions focus on the deeper psychological shift – students’ changing perceptions of their own intelligence and capability. This shift is a key antecedent of three connected problems: the erosion of a sense of community, over-reliance on AI in learning and assessment, and the underdevelopment of graduate skills.
The erosion of a sense of community
In 2015, when I began teaching, I would walk into a seminar room and find students talking to one another about how worried they were about the deadline, how boring the lecture was, or how many drinks they had had on Wednesday night. Yes, they would sit at the back, not always do the pre-reading, and go quiet for the first few weeks when I asked a question – but they were always happy to talk to one another.
Fast forward to 2025: campus feels empty, and students come into class and sit alone. Even final-year students who have been together for three years may sit with a “friend” but barely say anything as they stare at their phones. I have a final-year student who is achieving first-class grades but admitted he has not set foot in the library once this academic year and barely knows anyone to talk to. This may not seem like a big thing, but it illustrates how few communities and relationships are now formed at university. It is well established that peer-to-peer relationships are among the biggest influences on attendance and engagement. So when students fail to form networks, it is unsurprising that motivation declines.
While professional services, the students’ union and support staff continually offer ways to strengthen community, at a time when students are working longer hours through a cost-of-living crisis, we cannot expect them to attend extracurricular activities, academic or otherwise. Timetabled lectures and seminars therefore need to be at the heart of building relationships.
AI in learning and assessment
While marking first-year marketing assignments – a subject I’ve taught across multiple universities for a decade – I noticed a clear shift. Typically, I expect a broad range of marks, but this year, students clustered at two extremes: either very high or alarmingly low. The feedback was strikingly similar: “too vague,” “too descriptive,” “missing taught content.”
I knew some of these students were engaged and capable in class, yet their assignments told a different story. I kept returning to that student’s remark and realised: the students who normally land in the middle – your solid 2:2 and 2:1 cohort – had turned to AI. Not necessarily to cheat, but because they lacked confidence in their own ability. They believed AI could articulate their ideas better than they could.
The rapid integration of AI into education isn’t just changing what students do – it’s changing what they believe they can do. If students don’t think they can write as well as a machine, how can we expect them to take intellectual risks, engage critically, or develop the resilience needed for the workplace?
Right now, universities are at a crossroads. We can either design assessments as if nothing has changed, pivot back to closed-book exams to preserve “authentic” academic work, or restructure assessment to empower students, build confidence, and provide something of real value to both learners and employers. Only the third option moves higher education forward.
Deakin University’s Phillip Dawson has recently argued that we must ensure assessment measures what we actually intend to assess. His point resonated with me.
AI is here to stay, and it can enhance learning and productivity. Instead of treating it primarily as a threat or retreating to closed-book exams, we need to ask: what do we really need to assess? For years, we have moved away from exams because they don’t reflect real-world skills or accurately measure understanding. That reasoning still holds, but the assessment landscape is shifting again. Instead of focusing on how students write about knowledge, we should be assessing how they apply it.
Underdevelopment of graduate skills
If we don’t rethink pedagogy and assessment, we risk producing graduates who are highly skilled at facilitating AI rather than using it as a tool for deeper analysis, problem-solving, and creativity. Employers are already telling us they need graduates who can analyse and interpret data, think critically to solve problems, communicate effectively, show resilience and adaptability, demonstrate emotional intelligence, and work collaboratively.
But students can’t develop these skills if they don’t believe in their own ability.
Right now, students are using AI tools for most activities, including online searching, proofreading, answering questions, generating examples, and even writing reflective pieces. I am confident that if I asked first-years to write a two-minute speech about why they came to university, the majority would use AI in some way. There is no space – or incentive – for them to demonstrate their skill development.
This semester, I trialled a small intervention after getting fed up with looking at rows of heads buried in laptops. I asked my final-year students to put laptops and phones on the floor for the first two hours of a four-hour workshop.
At first, they were visibly uncomfortable – some looked panicked, others bored. But after ten minutes, something changed. They wrote more, spoke more confidently, and showed greater creativity. As soon as they returned to technology, their expressions became blank again. This isn’t about banning AI; it is about ensuring students enjoy learning and have the space to be thinkers rather than facilitators.
Confidence-building
If students’ lack of confidence is driving them to rely on AI to “play it safe”, we need to acknowledge the systemic problem. Confidence is an academic issue. It underpins everything in the student experience: classroom engagement, sense of belonging, motivation, resilience, critical thinking and, of course, assessment quality. Universities know this, investing in mentorship schemes, support services and initiatives to foster belonging. But confidence-building cannot be left to professional services alone – it must be embedded into curriculum design and assessment.
Don’t get me wrong: I am fully aware of the pressures on academic staff, and telling them to improve the sense of community, assessment and graduate skills feels like yet another time-consuming task. Universities need to recognise that unless workload planning models give academics the freedom to focus on and explore pedagogic approaches, we fall into the trap of devaluing the degree.
In addition, if universities want to stay relevant, they need agile structures that allow academics to test new approaches and respond quickly, just like the “real world”. Academics should not be creating or modifying assessments today that won’t be implemented for another 18 months. Policies designed to ensure quality must also ensure adaptability. Otherwise, higher education will always be playing catch-up – first with AI, then with whatever comes next.
Will universities continue producing AI-dependent graduates, or will they equip students with the confidence to lead in an AI-driven world?