Jim is an Associate Editor (SUs) at Wonkhe

Working in a group, an architecture student heard another student suggest asking AI for a list of rooms in a house.

She replied: “Why don’t you just use your brain?” She told us:

This happened a year ago and I still think about it now. I don’t want to get to a point where I don’t think to use my own brain first.

That a trivial moment left this deep an impression tells us something about what it feels like to be a student paying close attention to what AI is doing to how they think, in a sector that seems to have rarely noticed they’re doing it.

For this year’s Secret Life of Students, we wanted to get past the AI adoption statistics that already exist – the ones that tell us the vast majority of students use AI, that most use it for assessed work, and that ChatGPT dominates – and ask harder questions.

Do students feel they have actually learned what they have produced? What are they weighing up when they decide how to use AI on a specific piece of work? And do they think their assessments test understanding?

A national survey of 1,055 students across 52 providers, weighted for gender and level of study, combined with focus groups involving student reps from across disciplines and levels, helped us produce the findings in Trained to stop learning.
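For readers who want to see what "weighted for gender and level of study" means in practice, here is a minimal sketch – hypothetical population shares and column names, not the report's actual method or data – of how responses can be re-weighted so a sample mirrors the student population:

```python
import pandas as pd

# Hypothetical population shares for gender x level of study (sum to 1.0).
# Illustrative numbers only - not the report's actual weighting targets.
targets = {
    ("female", "undergraduate"): 0.32,
    ("female", "postgraduate"): 0.13,
    ("male", "undergraduate"): 0.27,
    ("male", "postgraduate"): 0.11,
    ("other/unknown", "undergraduate"): 0.11,
    ("other/unknown", "postgraduate"): 0.06,
}

def add_weights(responses: pd.DataFrame) -> pd.DataFrame:
    """Attach a post-stratification weight to each response:
    weight = population share of the cell / sample share of the cell,
    so over-represented groups count for less and under-represented
    groups count for more."""
    sample_share = responses.groupby(["gender", "level"]).size() / len(responses)
    out = responses.copy()
    out["weight"] = [
        targets[(g, lvl)] / sample_share[(g, lvl)]
        for g, lvl in zip(out["gender"], out["level"])
    ]
    return out

# Example: weighted share agreeing with a (hypothetical) survey item.
survey = pd.DataFrame({
    "gender": ["female", "male", "female", "other/unknown"],
    "level": ["undergraduate", "postgraduate", "undergraduate", "undergraduate"],
    "worries_about_dependence": [1, 0, 1, 0],
})
weighted = add_weights(survey)
share = (weighted["worries_about_dependence"] * weighted["weight"]).sum() / weighted["weight"].sum()
```

The effect is simply that percentages quoted later in this piece reflect the sector's mix of students rather than whoever happened to answer the survey.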

And one thing that surprised us wasn’t how students are using AI. It was how seriously many of them are thinking about it.

I never thought I’d find someone like you

Across the focus groups and survey responses, students described personal ethical positions around AI that were often more considered than anything their institution had produced – positions with principles they could articulate and defend.

A computer science student had arrived at a governing rule:

I make sure my use of AI doesn’t inhibit my understanding of the topic. For essay submissions, the final text is written by myself – I don’t want to lose the ability to report on my findings.

An engineering student described a principle of “augmentation, not replacement” – using AI for repetitive tasks while retaining responsibility for core logic and final validation. A philosophy student located the line in ethical self-governance rather than institutional rules:

I work it out based on my own internal ethical values. I understand that university is for learning, so using AI to produce evidence of learning is crossing the line.

A graphic design student drew a precise line:

Using AI to do a final work for me, I say no, but to help me make a final work as a tutor or a supporter or a friend, I will say yes.

Another described AI in terms that deliberately limited its authority:

I believe we should treat AI as a junior researcher, not a tool that provides the final output. It helps with the process, but the final judgment and direction must come from the student.

These are not students waiting for better policy. They’ve outrun it. They’ve worked through the relationship between tools, effort, learning, and identity – and they’ve done it largely alone, with no institutional support and no recognition.

However far it seems

The ethical precision doesn’t stop at broad principles. It extends into how students think about specific modes of use – and the distinctions they draw are sharper than anything in most institutional guidance.

A distinction between structural scaffolding and content generation came up again and again:

From a humanities perspective, for me it’s about whether the ideas are still coming from me – that’s the important thing. I would never ask it “what should I write about?” It would be “here are my ideas, can you provide a structure for this, or some extra reading?”

Whether or not universities agree that’s a valid line, it is the line students are actually operating on – and policy that fails to engage with it will continue to misfire.

A game design student had arrived at a framework built on the principle that foundational learning preserves future choice:

I believe it better to learn the skill from the ground up, because it gives you a choice later on. You either have the skills to deliver it yourself, or you have the skills to better utilise the AI should you want to do it. You know the terminology, you know the output, you know the pipeline of what you’re supposed to be making, so you can catch the AI better.

That reasoning – learn the fundamentals so you can supervise the tool, not be supervised by it – goes further than most institutional policies manage. A computer science student described a similar logic applied in practice:

I make sure that the final report is fully written by me as I don’t want my skills to atrophy over time. In coding assessments, AI can write code for you, and while most of the time it can accurately write it, supervision is required. I make sure to monitor and understand what is produced, and where it makes mistakes I prefer to correct it myself rather than reprompt it.

Students also distinguish between at least six different modes of AI use – from search replacement to structural scaffolding to always-on tutoring to production acceleration – each with a different relationship to learning and different ethical implications.

The same student routinely moves between modes on the same assignment, adjusting based on interest, time pressure, the clarity of the brief, and their relationship with the material.

Any policy framework that treats “AI use” as a single behaviour – permitted or prohibited – is answering a question that bears almost no resemblance to how students actually work.

I can always count on you

Having principles is one thing. Acting on them is another – and some students have gone further, actively using AI to deepen their learning rather than bypass it. An aerospace engineering student described a deliberate strategy:

I used ChatGPT to let me know what I actually needed and what process I should follow. I asked it to question me in return and not give the solutions directly. So I felt confident, because it helped me to confidently ask my inputs to other teammates and get my work done and not look dumb!

A postgraduate student described a similar practice – doing the work first, then using AI to interrogate it:

I’d rather do some work and then show it to the AI to kind of validate it based on what’s actually true. You don’t just see it as “get this work done and forget about it.”

When we asked students in the survey what they weigh up when deciding how to use AI on a specific piece of assessed work, ethics and learning value far outranked detection risk and peer behaviour as decision factors. Any assumption that students are primarily motivated by whether they’ll get caught isn’t supported by the data. Most are weighing up whether the use is right, not whether it’s safe.

But the ethical work is happening in near-total silence. One engineering student said she hadn’t heard others discuss their AI use “as it might look like cheating or like they do not understand the assignment.”

The furtiveness is itself a cost – and a waste. Universities haven’t routinely drawn on their students’ own ethical reasoning, and making space for it to be heard would do more good than another round of policy revision.

Taking off my blues

If these students were being rewarded for their seriousness, the story would end there. They aren’t.

In this data, the costs of unclear AI policy fall hardest on the students most trying to comply – and barely at all on the students who aren’t.

One focus group participant – the groups, run in February and March, drew student reps from a range of disciplines, levels, and universities – described not using AI in their dissertation because they couldn’t get a clear answer about whether AI transcription was permitted for something integral to their argument.

Their supervisor wasn’t sure either, so they omitted it. The work was described as publishable quality. They didn’t receive a distinction – a direct academic penalty for caution over a governance failure that wasn’t theirs.

A nursing student described complete avoidance:

Mainly it’s the fear of not really knowing how to use it effectively without accidentally cheating. And there’s no guidance from the university on what you can use it for and what you can’t. Also, I’m afraid of it changing the way my brain works – losing the ability to find things out myself and write things myself.

There are two distinct fears in that response – and in many others across the data. The first – “accidentally cheating” – is compliance anxiety, driven by ambiguous policy. The second runs deeper – a worry about what repeated AI use does to the student’s own capacity to think, reason, and write.

Fifty-nine per cent of all respondents share that worry. It’s the majority view.

Every time I get myself around you

A PhD student who uses AI as cognitive support for ADHD described what that developmental anxiety actually looks like when it stops being hypothetical:

I had to do a very brief presentation to some peers and I asked ChatGPT to help me draft it. When the day of presenting came I was so anxious and unsure, and then I realised it was because I had depended on GPT to know for me – I was telling myself “why can I not remember my script or even my topic that I’ve done so much work on.” And I noticed a massive difference at the next presentation when I didn’t use it at all and felt much more confident.

This is someone learning in real time what dependency costs, and adjusting – doing the ethical work on themselves, not waiting for a policy to tell them what to do.

Other students who had chosen not to use AI described the decision as principled:

I have not used AI in any of my work despite believing it could help improve my grades. It goes against my morals and would impede my ability to grow and think critically over taking shortcuts.

I have not, and will never, use AI for assessments. The whole point of a degree is to develop critical thinking skills. I realise I take quite a strong stance – but the ability to think through a problem is a dying skill, exacerbated by overreliance on generative AI.

Its biggest challenge is against AI. So we need to keep on proving our skills can be used without it – and use it to improve our work, not do it for us.

These students carry the cost of their choice as anxiety. Most institutional AI guidance treats non-use as the safe option – the absence of a problem. In the survey data, for a substantial minority it’s an active decision made under competitive pressure, and it generates real distress.

I only knew you for a while

Forty-six per cent of all students worry that not using AI puts or would put them at a competitive disadvantage. Among non-users who carry that anxiety, 74 per cent are women.

A humanities student with dyslexia described the bind:

For a person who has dyslexia, having the rephrase ability in Grammarly helps me make my writing clearer – but in the back of my head I always worry it will impact my grade.

She’s caught between a real need for support and a deep sense that accepting it compromises who she is as a learner.

A paramedic science student identified the social dimension:

You do feel like you are putting your all in and others don’t show any interest in lectures yet score really well in assignments and actively state their AI usage when not within earshot of lecturers.

The feeling goes beyond marks. It’s about the devaluation of effort and commitment in an environment where the de facto norm is being set by the most permissive lecturers and the most risk-tolerant students. The full weight of the ethical decision falls on the individual conscience of students who have done the thinking the system hasn’t.

And now I know

AI declarations illustrate the distributional problem in miniature – a mechanism that asks for honesty in conditions that punish it.

Across the focus groups, declaration forms didn’t emerge as a transparency mechanism. They emerged as something that destroys trust. One postgraduate student:

The lecturer will say use your brain, do everything yourself. One tutor will be like, I encourage you to use AI as a guide. Another lecturer will say don’t use AI, do your research. They’re not communicating with each other, and it’s making our brains go crazy. And then at the end they say we need to sign an AI declaration saying we didn’t use AI – and I’m like, but I did use it. It’s not like we don’t use our brains. Sometimes you just need a guide.

A languages student described avoiding AI partly because “as we have to declare use of AI, this can affect how lecturers mark your assessment” – the declaration creating a perceived penalty for honesty rather than an incentive for transparency.

The combined effect is corrosive. Students who use AI legitimately either lie on declarations or avoid AI entirely to dodge the perceived risk of declaring. Students who use AI most heavily are presumably the least likely to declare honestly. The declarations penalise the conscientious and catch none of the heavy users.

Beyond individual courses, it’s no better:

My course says no AI at all, but the university itself says some parts of AI are fine. And the problem is AI is so integrated in everything now that it’s impossible to avoid.

One student emailed her department to ask what was permitted, was told to ask the academic integrity lead, found the module was run by the academic integrity lead, asked him, and was told to go and ask the academic integrity lead. She gave up and stayed up all night completing the assignment manually.

Another described two staff members playing out their disagreement over AI ethics in front of students – leaving the student to guess who might do the marking in order to determine whether AI use was safe. That’s an academic governance failure, not a problem for students to absorb.

There’s a reciprocity problem too:

There’s so much focus on students, but not necessarily on staff. They may be telling you to not use AI for writing or researching – staff shouldn’t be using AI for marking in that case. I think there needs to be that kind of reciprocation.

If students are told not to use AI for producing work, but staff are using AI to produce feedback, the moral authority of the restriction collapses. And the students who have done the hardest thinking about these questions are the ones left holding the consequences.

And all I ever wanted

So the conscientious pay the price – penalised for caution, anxious about compliance, doing ethical work the system doesn’t recognise. But here is where the story turns. Because the qualities these students are developing – independent judgment, ethical reasoning, the ability to think rather than merely produce – are not incidental virtues.

When we ran the correlations in the Trained to stop learning data, they were the strongest predictors of career confidence in the entire dataset. The system is penalising precisely the disposition it should be cultivating.

A common argument holds that assessment should more closely resemble authentic work tasks – that the problem is too much academic abstraction and not enough real-world simulation. If that were true, career-confident students would report greater confidence in work-based formats. They don’t.

The strongest correlates of career confidence are all about intellectual honesty – whether feedback develops thinking, whether stated values match actual rewards, and whether assessment tests understanding rather than production. Feedback quality is the single strongest correlate. The absence of a gap between what a course says it values and what it rewards is close behind.
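We can’t reproduce the analysis here, but as a purely illustrative sketch – hypothetical column names, not the report’s actual dataset or code – the pass behind a finding like that is no more exotic than asking which survey items move most strongly with a career-confidence score:

```python
import pandas as pd

# Hypothetical 1-5 agreement items from a survey, plus a 1-5
# career-confidence score. Names are illustrative, not the report's.
ITEMS = [
    "feedback_develops_thinking",
    "values_match_rewards",
    "assessment_tests_understanding",
    "assessment_resembles_real_work",
]

def career_confidence_correlates(responses: pd.DataFrame) -> pd.Series:
    """Spearman rank correlation of each item with career confidence,
    sorted with the strongest positive correlates first."""
    corrs = {
        item: responses[item].corr(responses["career_confidence"], method="spearman")
        for item in ITEMS
    }
    return pd.Series(corrs).sort_values(ascending=False)
```

On the pattern described in this section, the feedback and values-match items would sort to the top of that ranking and the work-simulation item towards the bottom.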

Among students with low career confidence, 40 per cent strongly agree there is a gap between what their course says it values and what it actually rewards. Among the high-career-confidence group, that falls to four per cent. Only five per cent of the low group feel their assessment primarily rewards thinking. Nearly a third of the high group do.

Oral examinations – among the formats most dismissed as archaic – have the strongest positive correlation with career confidence in the data. Placement-based outputs – the format most overtly designed to look like “real work” – are among the weakest.

The course doesn’t need to look like a job. It needs to build the capacity to do one.

The most career-confident students don’t describe what they value in the language of workplace simulation:

I believe I am being taught the skills and work ethic needed for industry but that nothing can truly compare to hands-on work experience. Even with our live briefs that make us work with an industry client in real time, the university structure is completely different. So while I believe the course is doing a great job I feel I will still have an immense amount to learn from experience.

That student has the most “authentic” assessment imaginable – live briefs with industry clients – and still frames what the course gives them as skills and work ethic rather than workplace replication.

Career confidence also correlates negatively with every measure of AI dependency. The most career-confident students are least likely to worry about falling behind by not using AI. At the bottom of the career confidence scale, students don’t complain about insufficient workplace simulation – they complain about insufficient thinking –

We learn how to implement, rather than think – and if we graduated with only the stuff we learnt inside the curriculum, nothing good would come out of us.

An engineering student put it in terms the career confidence data bears out:

Real learning happened when I applied theory to a lab project – seeing the physical results made the math “click.” In contrast, many complex derivations were just memorised to pass assessments without deep intuition. The difference is that true understanding allows me to troubleshoot and adapt when a system fails, while exam-based learning only works if the problem matches the textbook.

And the sector-level debate about whether AI is simply a graduate skill to be taught produced sharp disagreement in our focus groups –

AI is a skill like using Excel and should 100 per cent be taught – if you don’t, you will be behind in industry.

I am going to have to respectfully disagree that one may be behind in their industry. I really think it depends on the industry.

What are we at university for if not for special learning?

That last question – from a creative writing student – cuts to whether higher education exists to develop the capacity to think, or to certify the possession of transferable technical skills. The career confidence data suggests the students who experience their course as doing the former feel most prepared for their future.

What the data points toward is intellectually honest assessment – rewarding thinking, giving feedback that develops reasoning, and closing the gap between what courses say they value and what they actually reward.

Together in electric dreams

The sector’s response to AI has been built around the assumption that students need to be governed – that the problem is misconduct, the solution is policy, and the mechanism is some combination of detection, declaration, and restriction. The students in Trained to stop learning suggest a different starting point.

They’ve built ethical frameworks that go further than most institutional policies. They distinguish between types of AI use with a precision that no tiered framework matches. They can describe when real learning happens and when it doesn’t. They’ve designed assessment alternatives that would test understanding – and their proposals, while diverse, converge on properties any assessment designer would recognise as sound:

The single way to assess a person’s understanding is an individual presentation on your essay or work in a Q&A manner. If you are confident in your work, regardless of your usage of AI, you will be fine in a Q&A. Of course you can factor in extensions and adjustments for people who require them – but on the whole, if you cannot answer questions about your own piece of work, then that piece of work is not yours.

I would redesign presentations to include discussion and viva elements, allowing students to display deeper knowledge that many know and just don’t communicate.

I would include fake scenarios – like clients – and test the application of what we have learned. Something real.

A presubmitted essay that involves either a video or in-person explanation of the essay to the lecturer. Most people make essays and never talk about the content with anyone, and sometimes never talk about it out loud at all even with themselves. Teaching is one of the best ways to learn something and cement it in your head, so having to submit a complementary explanation would really make learning more efficient – and would mean people could remember and explain it again in future, since they’ve already done it once.

A “Live Troubleshooting Exam.” Give students a pre-designed system with intentional flaws and ask them to find and fix them in real time. This tests technical judgment and safety validation – things a standard report can’t show. It’s impossible to fake because it requires immediate, practical application of engineering principles.

Making space for students to surface and share their ethical reasoning – in seminars, peer discussions, and case study workshops – would do more good than another round of policy revision. And it would do something most current policy actively prevents – treat students as partners in working out what learning means now, rather than subjects to be caught.

The conscientious students in this research aren’t confused. They’re thinking harder about AI and learning than most of the institutions around them. The least the sector could do is stop penalising them for it – what it should do is listen.

2 Comments
Adam Andrascik
30 days ago

The assessment alternatives posed by the surveyed students are excellent examples of capturing learning as it evolves in an AI landscape.

The problem, as outlined in this piece, is the inflexible policies, frameworks and digital infrastructure currently in place that capture final assessment over learning, making it nearly impossible for educators to implement said assessment alternatives when needed.

The flexibility demonstrated by these students’ assessment structures requires a flexible capture system that is frictionless and open all semester, able to be utilised by educators at will in relation to student learning, feedback, technology and progression, if we are to truly treat students as partners.

This is something we are piloting at Provineer for UK and EU universities: centering student process while allowing educators to engage with alternative assessment strategies to more fully capture authentic outcomes, regardless of level or discipline.

My favorite student assessment idea – “A presubmitted essay that involves either a video or in-person explanation of the essay to the lecturer.” Excellent, and one I am seeing time and again.

Sherrill Stroschein
24 days ago

It would be good to talk to academic staff about our perspectives. We are working harder than ever, changing assessments, reading what we can on this. Marking is taking 2-3x as long as it used to, unacknowledged. We are caught between student demands, academic integrity, and a duty of care — and imperatives from above that sometimes conflict with these. So there is your next set of focus groups, I suppose.