Jim is an Associate Editor (SUs) at Wonkhe


Mack Marshall is Wonkhe SUs’ Community and Policy Officer

Two students were in the same focus group, at the same university, on the same morning in February.

One was studying maths, the other computing. Both used AI on their assessed work.

The difference was a single structural feature – whether a future moment existed at which they’d need to demonstrate they actually understood what they’d produced.

“In the back of my mind, I know in a few months I’m going to have to sit an exam and get tested on similar stuff,” the maths student said. “So I do need to actually study and do my own assignment instead of just allowing AI to carry it for me.”

Asked whether the same applied, the computing student – whose module was assessed entirely by coursework – replied simply: “No, it doesn’t, no.”

She wasn’t describing a moral failure. She was describing a strategic response to a system that had removed the only moment at which her understanding would be tested.

She’d used AI to structure her entire assignment on “autopilot” – her word – and acknowledged she hadn’t done well. The AI hadn’t even produced a good grade. But nor had it produced any understanding.

The exchange captured something we think the sector’s debate about AI has largely missed – and it’s one of the central findings from new research we’re publishing soon.

Known unknowns

For the Secret Life of Students 2026, we set out to push beyond what everybody already knows.

We already know the vast majority of UK students use AI, that most use it for assessed work, and that ChatGPT dominates.

We know students reach for it primarily to save time and improve quality, that they worry about cheating accusations and hallucinations, and that most institutions now have a policy.

The questions we wanted to answer sit underneath the adoption statistics. Do students feel they’ve actually learned what they’ve produced? What are they weighing up when they decide how to use AI on a specific piece of work? Do they think their assessments test understanding – and if not, what would?

A national survey of 1,055 students across 52 providers, weighted for gender and level of study, combined with focus groups involving student reps from across disciplines and levels, produced uncomfortable findings.

The full report launches to delegates at the Secret Life of Students and will be here soon – what follows is a small slice of the findings, and the connection between them.

Mind the gap

Nearly half the students in our sample – 47 per cent – worry that their grades don’t fully reflect what they actually know. Thirty-eight per cent admit they sometimes submit work they couldn’t fully explain without going back to their sources. And when we asked how they’d feel if a tutor asked them to explain, without notes, the reasoning behind their last assessed submission, 14 per cent said they’d feel anxious or worse.

The gap between submitting work and understanding it isn’t new, and it doesn’t appear to be primarily caused by AI. It’s existed for as long as higher education has assessed learning by asking students to produce written artefacts – and it’s been getting worse for as long as “produce a digital asset, upload it to a VLE, have it asynchronously marked against a rubric” has been the dominant mode of assessment.

The model has long treated the production of a thing as the symbol of learning. AI has made it trivially easy to produce the thing without any of the learning.

A student who wrote a mediocre essay in 2015 probably absorbed some understanding through the effort of producing it. A student who prompts an AI to structure and populate an essay in 2026 may absorb nothing at all – and the system, as designed, can’t tell the difference.

One student described feeding the assignment brief and the rubric into a large language model, asking it to reverse-engineer a structure from the marking criteria, then populating that structure. The learning outcome becomes a formatting problem. The more precisely a rubric specifies what a good submission looks like, the more machine-readable it becomes – and the less intellectual effort is required from the student.

One postgraduate student, averaging 80 per cent on their course, described the calculation with candour:

If I dislike the subject or don’t find it particularly interesting, my main goal is to just get a good grade, and I will use AI as much as possible to help me – my human contribution is just checking against potential hallucinations, bias, and other things that might drag my grade down.

But the worst story came from a student describing a peer – a diligent international student who had worked hard, avoided AI, and received a lower second – who was, they said, “now forced to use AI.”

In third year, using it, the peer was getting good grades – but when asked if they’d learned anything, said they couldn’t be very certain.

The student telling the story added: “It was literally terrible. It was quite unsettling.” They weren’t shaken by the AI use itself but by what it revealed about the system – that a conscientious student had been trained to stop learning in order to succeed.

Show your working

AI hasn’t just widened the gap between submitting and understanding. For many students in our research, it has turned not-learning into an active choice.

As one student put it:

The difference in learning or not learning is a choice you can make now that AI is around. You didn’t really have that choice before.

But what determines which choice students make? If we return to that pair from the opening, one had an exam coming, and the other didn’t.

That single difference – whether a future moment existed where they’d have to show they understood what they’d produced – changed everything about how they used AI.

The accountability moment doesn’t stop students using AI. It changes how they use it.

“I use AI completely differently depending on whether there’s an exam at the end,” a biomedical science student told us. “If I know I’m going to be sat in a room with a question and no laptop, I’ll get it to explain things to me, then I’ll argue with it, then I’ll get it to quiz me until I can actually do it myself. If it’s just coursework – I just want it to help me get the thing finished.”

In our discussions, students who know they’ll need to demonstrate understanding don’t avoid AI – they prompt it more carefully, interrogate its answers, push on the reasoning rather than accept the first output. They use it to explore the material rather than bypass it.

But when no downstream verification exists, the same students describe something closer to production on autopilot.

A postgraduate student discovered the gap the hard way. When presentation day came, she was “so anxious and unsure” – and realised it was because she’d depended on GPT to know for her.

“I was telling myself, why can I not remember my script or even my topic that I’ve done so much work on?” She noticed:

…a massive difference at the next presentation when I didn’t use it at all and felt much more confident.

Any programme that has moved away from exams without replacing the accountability function – not just the format – has removed the incentive to use AI well along with the incentive to use it less. Students told us that the signal needs to be visible from the start of the module, shaping how they use AI throughout, not just whether they use it at all.

This isn’t an argument for more exams – students describe timed, closed-book exams as tests of memory rather than understanding. And students with social anxiety, depression, or neurodivergent conditions need accountability formats designed with them in mind. One student held both sides of the tension at once:

I do have social anxiety and depression, so on some days I may not be as good at presenting. Both matter.

But students in our focus groups didn’t respond to this tension by arguing for the abolition of accountability. They argued for its redesign.

They proposed multiple low-stakes attempts rather than single high-pressure performances, conversations rather than presentations, uncapped formative practice that builds confidence before anything is graded, and peer explanation normalised as a routine activity rather than reserved for formal assessment.

One computing student suggested assessment where students explain their work in what feels like a normal conversation – “you don’t tell them that it’s graded or anything” – arguing that “it’s difficult for anyone to fail if you’re really grading them based on what they understand, if they’re actually saying stuff they’re interested in.”

The question isn’t whether to build accountability moments. It’s whether those moments feel to students like an opportunity to show off their learning, or like a trap.

And there’s a payoff the sector should notice. Our data shows that oral examinations – among the formats most dismissed as archaic – have a strong positive correlation with career confidence.

Other formats explicitly designed to look like “real work” are among the weakest. Career confidence doesn’t necessarily track superficially vocational assessment formats.

It tracks intellectual honesty – whether feedback develops thinking, whether stated values match actual rewards, and whether assessment tests understanding rather than production. Among students with low career confidence, 40 per cent strongly agree there’s a gap between what their course says it values and what it actually rewards.

Among the high group, that falls to 4 per cent. The course doesn’t need to look like a job. It needs to build the capacity to do one.

Own goal

There are two other findings to highlight here. The first is that much of the AI use that students describe would disappear if universities could fix problems within their control.

Students described adopting AI as a research discovery tool not because they prefer it to library search systems, but because those systems have failed them.

And when assignment briefs are unclear, AI becomes the first interpreter – and sometimes the only one that responds at a useful speed.

Feedback routinely arrives after students have started the next assignment, making the assessment sequence functionally summative regardless of what module handbooks claim.

Students on multiple courses described being expected to have knowledge or skills their programme had never provided – and for them, asking AI questions was a faster way of testing understanding than emailing a tutor and waiting weeks for a response.

If you took away AI tomorrow, every one of these problems would still exist. Almost all of the heavy AI use cases in our data are diagnostic information about what the institution isn’t providing.

The second is the way in which rules around AI appear to reflect the opinions of individual staff rather than anything specific to the discipline.

Staff disagreement is leaking into the operative rules of assessment – one tutor says use it as a guide, another says do everything yourself, the software students are already using has AI built into it – and then at the end students are asked to sign a declaration as if the boundary had been clear all along. That isn’t confusion – it’s unfairness.

And the cost falls hardest on the conscientious. One student avoided AI entirely in their dissertation because they couldn’t get a clear answer about whether even AI transcription was permitted – their supervisor wasn’t sure either. The work was described as being of publishable quality, yet it fell short of a distinction: a direct academic penalty for conscientious caution in the face of a governance failure that wasn’t theirs.

Another student described a moment where two of their staff played out their disagreement over the ethics of AI use in front of them – with the student left having to guess who would be doing the marking in order to decide whether to use AI.

If a university can’t tell students clearly and consistently what’s permitted, where the boundaries lie, and how those boundaries relate to learning – that isn’t a student failure, it’s an academic governance failure. Students shouldn’t be asked to absorb the cost of institutional indecision.

But the research also shows a resource. The students who took part can describe with precision when real learning happens and when it doesn’t. They’ve built personal ethical positions around AI that are often more considered than the policies their universities have produced. They’ve designed alternatives to current assessment that would test understanding.

The sector would do well to treat them as partners in working out what learning means now. That partnership is impossible if students think the system is out to catch them cheating.

Can students be trusted? On the evidence we’ve gathered, the answer is very much yes.

9 Comments
peter j
1 month ago

“my human contribution is just checking against potential hallucinations, bias, and other things that might drag my grade down” Goodness me, that is depressing to read.

T Steven
1 month ago

Thanks for an interesting article. It’s always interesting (and a little scary) to hear feedback from students on how they are approaching learning. I found a couple of points in the article were a little unclear. Would it be possible to expand on them?

“Students described adopting AI as a research discovery tool not because they prefer it to library systems, but because library systems have failed them.” – this statement is directly followed by discussion of assessment briefs and feedback, neither of which are things that libraries are usually responsible for, so I’m uncertain as to where the quoted statement comes from.

“And the cost falls hardest on the conscientious. One student didn’t use AI in their dissertation because they couldn’t get a clear answer about whether AI transcription was permitted. Their supervisor wasn’t sure either. The work was described as publishable quality. They didn’t receive a distinction – a direct academic penalty for conscientious caution in the face of a governance failure that wasn’t theirs.” – I’ve read this several times and I’m still at a loss, is the assertion that the student erred on the side of caution by not using AI and was marked down? As with the previous statement about libraries I feel like there’s maybe some more detail that could be added to clarify the point.

A Heath
1 month ago
Reply to T Steven

As a librarian I’m also curious to know more about how library systems have failed students – I can identify issues with our discovery systems, but it would be good to know the issues that students have. As the previous comment picked up, the following sentence talks about interpretation of an assignment brief, which is not the role of the library.

Sandie D
1 month ago
Reply to A Heath

Interesting & useful to read more about the “why” of AI use from student perspectives & the point about student partnership as a more effective way forward. Agreed it would be helpful to understand the link being made between interpretation of the brief & failed library systems.

Tim
30 days ago
Reply to A Heath

I don’t think they’re failing students. I’ve worked in academic libraries and learning resource centres in the past. There’s always been a need to give students the skills and techniques to use the available resources, and to interrogate the information they find. Many students are so used to “googling” for information and getting the answer from one website (or from AI these days) that they may not feel it’s necessary – or have the skills – to search through multiple sources of information to get what they need. We used to teach students “information literacy”. It’s needed far more these days than it ever was.

Leo McCann
1 month ago

This is an interesting article and I look forward to seeing the full report. The perspective at times seems a little overly pro-student and somewhat anti-university. Too often universities and academics are told that everything they do is outdated, that they’ve got their whole approach to learning and teaching wrong: ‘the lecture is dead’, ‘the essay is worthless’ etc. That’s inaccurate and unfair. Students and universities are trying to figure their way through the tangle created by GenAI. It’s a difficult learning process with much disagreement. And many students are disregarding the advice and regulations given. Overall I agree we need a rethink, involving all parties. The research cited in this article has the potential to help with this. I also agree that the answer might lie in a mixture of assessment – exams, essays and continuous assessment of classroom engagement – not just the single piece of assessment per module that has dominated the approach in recent years.

Juniper
1 month ago

It’s so frustrating on the ground, because it seems senior leadership at many universities are happy as long as the simulacrum of a degree – money in, classes provided (even if students don’t come, because they are working jobs to afford university in a difficult economy), and degrees out (even if largely written by AI) – keeps the financial pipeline flowing. Redesigning assessments is clearly key, but the inertia and administrative barriers in an ever more time-pressured environment for academics are daunting.

Mary
30 days ago

>>Asked whether the same applied, the computing student – whose module was assessed entirely by coursework – replied simply: “No, it doesn’t, no.”

As a careers professional what I find really interesting about this is that she doesn’t perceive a future job interview or the job itself as an accountability moment. That’s the connection that we should be making for students: we should be showing them how the extrinsic and immediate reward of “a good grade” is only a short-term structure to scaffold their learning in order that they can participate in the production of knowledge as professionals, not an end in itself. We’ve got a key role to play in how we make that link up for students.

Tim
30 days ago

I could never understand why feedback and any intervention from tutors come at the end of the assignment. It just becomes something that’s forgotten, and lessons are never learned because the same assignment will never be repeated. Maybe it makes sense for assignments to be submitted at various stages in drafts, or as proposals/part-submissions, for feedback from academics. I realise there’s probably little or no time to fit this in, but it would partly eliminate the reliance on ChatGPT to produce a complete assignment.