Students know exactly where learning happens – and it’s not in the upload

Exams, placements, seminars, peer explanation – students name the moments that made learning stick. Jim Dickinson argues the common thread is accountability, not format

Jim is an Associate Editor (SUs) at Wonkhe

A student had used ChatGPT to help draft a presentation. It went well on paper. But when she stood up to present it, something was wrong:

When the day of presenting came I was so anxious and unsure, and then I realised it was because I had depended on GPT to know for me. And I noticed a massive difference at the next presentation when I didn’t use it at all and felt much more confident.

She wasn’t a student opposed to AI – she used it extensively, including as support for ADHD. But the difference between the two presentations wasn’t about quality. It was about what she knew, and she could feel it in her body, in real time, under pressure.

The AI had produced the artefact. It hadn’t produced the understanding.

That account came up in one of several focus groups we ran for this year's Secret Life of Students. The findings – published as Trained to stop learning, our research into how students are experiencing assessment and learning in an age of AI – draw on a national survey of 1,055 students across 52 providers, weighted for gender and level of study, combined with focus groups involving student reps from across disciplines and levels.

We set out to understand how students experience assessment and AI, not to measure learning gain, which is a different and harder question. But across the focus groups, a related question kept producing unexpectedly rich material – can you think of a moment during your time at university where you really felt you had learned something?

Students answered without hesitation. The same types of moment appeared again and again across disciplines, levels of study, and universities.

Five clusters stand out clearly. A possible sixth – which didn’t emerge from this question directly but appeared consistently enough to include – follows at the end.

What follows is a record of where students feel learning happening, not a claim about what produces it. And that record is telling, because almost none of the moments students describe involve the format that carries most of the weight of most degrees – the private written submission, uploaded to a virtual learning environment (VLE) and marked asynchronously against a rubric.

Stand and deliver

The most widely shared type of learning moment across the focus groups was revision – or more precisely, the intensive period preceding a moment at which students knew they’d have to show understanding without assistance.

Every single thing that I’ve said I’ve learned is because there was an exam coming up. When you’re really studying for an exam, that’s when I learn a lot. And I’m usually grateful that I get to show it off when I’m writing my exam.

This type appeared across almost every discipline in the research. Students described the pressure of having to know – as opposed to having read, or having assembled something – as the condition under which understanding clicked. The exam itself matters less than its approach: the sustained effort of preparing for a moment when the material will need to be recalled and applied without the document in front of you.

Verification pressure is the active ingredient, and an exam is one way of creating it – not the only way. The students describing it weren’t making a case for exams as such, but for the experience of knowing that unassisted performance was coming.

The connection to AI use was articulated clearly by students themselves. As we reported in An accountability moment is what makes AI work for learning, one student described the knowledge of an upcoming exam sitting in the back of her mind throughout the coursework period, making her cautious about what she outsourced.

In the back of my mind, I know in a few months I’m going to have to sit an exam and get tested on similar stuff. So I do need to actually study and do my own assignment instead of just allowing AI to carry it for me.

When asked whether the same logic applied to a peer whose module was assessed entirely by coursework, the reply was simple – “No, it doesn’t, no.” The coursework-only student had used AI to structure her entire assignment on autopilot – her word – and had not done well. Two students, the same institution, the same morning. The difference was one structural feature of their assessment.

Another student described the same mechanism from the other side – how the knowledge that exams were coming changed his relationship with AI during coursework.

When you actually learn from what AI is giving to you, you’re more likely to remember, or it’s easier for you to prep for those exams, and then there’s less fear or tension towards exams.

The exam’s existence was shaping AI use weeks earlier, turning what could have been a delegation tool into something closer to a study partner. The student knew that a moment of unassisted performance was coming, and that knowledge changed how he engaged with the material AI offered.

The pedagogical function of verification pressure depends on students not being able to offload preparation to AI. Students in the focus groups weren't offloading it in that way. But whether AI-assisted revision still creates the same conditions, or whether it pre-empts them, is a question the sector has barely begun to ask.

A longer-arc variant also appeared – the experience of not realising until a later year that the foundations covered in an earlier one had actually been retained. The verification moment that revealed what had been learned arrived years after the original learning was supposed to have happened.

The understanding was stickier than expected, but it took a downstream accountability requirement to surface it – none of the assessments along the way had done so.

Contact hours

Across vocational and creative disciplines, students described a consistent qualitative shift when theory became something they had to do.

The moment of application – executing a clinical skill, performing a scene, writing functional code – produced a feeling of understanding that reading and writing about the same material had not.

I remember learning about it in theory, but actually going out and doing it practically – that’s where my passion and learning strikes more.

The performance made me feel like I’d learned more than the actual coursework part of it.

An environmental science student described a version of this where theory stayed inert until she encountered it physically.

I didn’t fully understand a lot of my reading until I went on my field site visit and took some samples. When I analysed stuff under a microscope, it really put into perspective what I was actually learning – it made it feel more real rather than them just being words on a screen.

In these accounts, assessment is the site of learning, not the proof of it – but only where the assessment is designed to make that possible.

The most revealing example in the data came from a design student who described an assessment requiring a research folder of between 100 and 200 pages, with analysis required on every single page.

So you’re just constantly reflecting and looking back at your design choices throughout the entire project. The design of the assessment actually causes you to have to sort of show your working.

The format required students to account for every decision as it was made, producing continuous reflective engagement rather than a retrospective justification assembled at submission. It’s also resistant to AI-assisted production, because the folder traces a process that can’t be fabricated in advance of having done the thing. Here the learning happens through the assessment design – the opposite of the dominant upload-and-mark model.

A less expected version came from a student preparing for clinical placement. Anxious about calculations she felt underprepared for, she described generating practice scenarios through AI – presenting herself with problems, working through them, then checking her answers.

Having that practice at home is very helpful. I’m less likely to make those critical errors on placement – and I think that reduces a lot more anxiety than going on a hospital ward and you’re like, oh wait, I don’t know what I’m doing, because we’ve spent more time singing in class than working on more practical, important elements.

She had identified a gap in her preparation, constructed her own application environment, and used AI to populate it with realistic problems. The learning was hers – AI was providing the problem set the course hadn’t. If that preparation had been available through the programme, the AI use would have been unnecessary.

The room where it happens

Small-group discussion appeared as the most consistently named site of felt learning across humanities and social science students.

Something different was described as happening in that space – ideas becoming real through dialogue, understanding sharpening when it had to be defended against someone who saw it differently:

The seminars were where the learning happened – that’s where you actually got to debate and protect your ideas. I think the assignment was just the chance to show off how good you were at writing stylistically. I wouldn’t say there was much learning done in the actual essay writing.

The frustration running through this account is that the learning is almost never assessed – it happens in the space that counts for nothing officially. The same student identified what makes it irreplaceable.

I probably would have performed better based on in-class contributions, because I think that I perform better when I get to be reactive.

Reactivity – the chance to respond to what someone else has just said, in a way a static prompt never demands. It requires engagement with uncertainty about what comes next, and it’s what AI is currently very good at simulating away.

A philosophy student described wanting exactly this kind of accountability built into the assessment itself.

I would actually be thrilled if we had to defend our essay in front of our lecturers because it would encourage me to learn more than just content for an essay.

He was describing the kind of pressure that would change the way he prepared – an oral accountability moment that would make the learning deeper because he knew it was coming.

An informal variant appeared in the account of a student on a course where long team projects meant sharing studio space with peers from different disciplines. Learning happened constantly and laterally.

You could just go over and say, ‘oh, that looks really cool, how did you make that?’ And they’re more than happy to explain it. It just builds friendship and communication and relationships.

There was no facilitator, no structure, no assessment. The learning happened because someone was interested in what another person had made, and because explaining it consolidated the understanding of both.

Assembly required

Several students described a distinctive late-stage experience – a moment near the end of a long piece of work, usually when trying to order or present accumulated material, when disparate understanding suddenly cohered. It wasn’t the same as completing work. It was the feeling of the whole thing clicking into place.

A dissertation student described having mostly written her points and trying to put them in order – and then suddenly grasping how the whole thing connected. Another put it like this.

Only really in that process of putting it together as a presentation did I actually start to clock and understand what I’d actually been writing about for 10,000 words.

A conservation student described the same mechanism – the moment when everything connects and you can take what you’ve been trying to figure out and actually use it.

She’d spent two and a half years with concepts from first year that she couldn’t fully grasp. The understanding arrived only when she had to put theory into practice in her final year – applying the material produced the synthesis that studying it alone hadn’t.

The mechanism described across these accounts is the act of assembly itself. The laborious process of putting things in order – working out which argument leads to which, deciding what belongs where – produces the understanding. The synthesis is a product of the process, and doesn’t exist beforehand waiting to be expressed.

AI-assisted writing may specifically displace this. Where AI is used to structure an argument before students have done the intellectual work of accumulating and sitting with material, the synthesis moment is pre-empted – there’s nothing to cohere, because assembly has already happened before any accumulation.

As we reported elsewhere in the Trained to stop learning findings, students who feed a rubric into a large language model and ask it to reverse-engineer a structure are skipping exactly this stage – the stage that, in other students’ accounts, is where the deepest understanding forms. They receive a structure they haven’t earned through the process of trying to impose one on material that resists.

One student described a version of this that didn’t close at submission – an essay tied deliberately to prior personal research and ongoing curiosity, where the synthesis became a starting point. He described still carrying the research forward after the module had finished.

Most synthesis moments in the data are retrospective. This one had a different shape, and the difference appears to have been prior investment in the question.

Muscle memory

A smaller but clear cluster of students described learning anchored to a moment of full intellectual ownership – when they’d built something themselves, without shortcuts, and knew it completely.

The test they applied, consistently and retrospectively, was retention: the work they could still explain clearly months or years later was the work they’d constructed from scratch.

I did the traditional research – going to the library, downloading articles, going through them. I got a distinction. It’s been almost one and a half years. I can still tell you what it was all about.

The same student drew a direct comparison with AI-assisted work done since. Those pieces had also produced good grades, but she could remember them less clearly. The contrast she was drawing was between what remained after submission – deep construction left something in memory and confidence that production for submission did not.

The arts student at the top of this piece described exactly the same thing from the other direction – AI-assisted work that looked fine on paper but left nothing behind when she had to stand up and present it.

Neither account was a moral objection. Both were observations about two types of work and the difference in what they left behind – a claim about learning that the sector’s current approach to AI, structured almost entirely around ethics and detection, doesn’t engage with at all.

A politics student articulated the construction principle as a deliberate strategy.

If I don’t understand something as I research it, I have to be able to learn it and what it means otherwise I won’t put it in my essays.

He had imposed a constraint on himself – nothing goes in unless I understand it – that the university didn’t require but that turned essay-writing into a learning process. Without it, the same essay could have been produced by assembling material the student hadn’t fully engaged with, and the format can’t tell the difference. Only the student’s own standard can.

A version that complicated the picture came from a student whose module had been badly affected by a strike, with no teaching delivered. She’d used AI only to find secondary reading, building the argument entirely independently.

It was just literally as a research tool for secondary reading, so it was all my own learning.

AI found the building materials. It didn’t design the building. She knew the difference – and the knowing of the difference was itself part of what she retained.

Show and tell

This type came from a different question, asked towards the end of one of the sessions – when you explain something to another student, does it feel intimidating or empowering?

Every participant said empowering – without hesitation, without exception. In a dataset where almost nothing produces consensus, that unanimity matters.

Several students went further and described the experience as generative – explaining something to a peer produced understanding that hadn’t fully been there before. A nursing student described it extending well beyond the classroom.

Being able to then educate and teach other people, whether that’s colleagues, whether that’s going out in the community – I think that sparks your knowledge.

What distinguishes this from the seminar room is the complete absence of any formal frame. The explanation happens because one person is curious about what another has made, and the one who made it finds themselves having to account for it in real time.

That combination – unforced interest, real-time response, no stakes – appears to create a particular kind of clarity.

The research can’t establish that peer explanation reliably produces learning in the way the other five types do, but the unanimity and the accounts that accompanied it are strong enough to sit alongside them.

If explaining something to another person is one of the most reliably described learning experiences in the data, building peer explanation into the learning environment as a normal activity – not leaving it to the accident of social connection – seems worth serious attention.

Submit

Five types, a possible sixth, and a pattern that runs through all of them. The moments students describe as learning share a set of properties – effort sustained over time, a requirement to show understanding to someone or something that can push back, and a sense of ownership over what was produced.

Almost all of them involve a form of accountability – to an examiner, to a patient, to a peer asking a question, to the material itself as it resists being ordered.

Some of these moments happen inside assessment – but only where the assessment requires performance, application, explanation, iteration, or accountability. The design folder that demands analysis on every page. The presentation that requires you to stand up and defend what you know. The placement that puts theory into contact with a real patient. These are assessment formats that generate learning through their design.

What students almost never describe is the format that carries the most weight on most UK degrees – the private written artefact, produced alone, uploaded to a VLE, and marked asynchronously against a rubric.

That doesn’t mean the format is worthless. But the sector is relying most heavily on the mode of assessment that students associate least with understanding – and that AI has made substantially easier to complete without showing any.

As we found across the Trained to stop learning research, the real question is whether the structure around students gives them a reason to learn, or merely a reason to produce.

Students in these focus groups could identify the conditions under which understanding became unavoidable. The task of universities is now to create more of those conditions, not fewer.

2 Comments
Nigel Adams
28 days ago

Jim,

As I read your article, I kept agreeing with all the points that the students interviewed for the research had raised.

It reminded me that although it is nearly 20 years since the first students enrolled in the world’s first undergraduate Venture Creation Programme (BSc Business Enterprise – BBE) at the University of Buckingham, many of my alumni tell me that they still remember and use some of the “stuff” I taught them.

Using the subheadings in your article, I think I understand why this has happened.

“Stand and deliver”
My BBE students’ most important “Stand and Deliver” was that, within four months of starting their honours degree, the students had to pitch their business proposal to “Buckingham Angels”. This was to get the funding they needed to start and run their business, which was an integral part of their honours degree. If they failed to secure funding, they had only one more chance to pitch in the summer term. (Buckingham operates four terms each year.)

“Contact hours”
My BBE students were taught a range of business and other research and theoretical topics, which they then applied immediately in their businesses. This resulted in many of my undergraduates not just learning academic theories, but also questioning them, as often the business theories and case studies were based on large corporations, not the start-ups that my students had to run.

“The room where it happens”
At the University of Buckingham, small-group seminars and even smaller tutorial groups are the norm, and this is where much learning occurs. In addition, our BBE students had an Enterprise Hub where they based their businesses.

“Assembly required”
BBE students had to combine all the topics they were taught in order to run their businesses successfully or sometimes their businesses would fail, again resulting in learning outcomes!

“Muscle memory”
My BBE students had to “build something themselves” to run a successful business or even one that failed. During the BBE programme, if a business failed, the students had to again pitch to “Buckingham Angels” if they needed more money to start a new business.

“Show and tell”
I know that BBE students were always explaining things to each other, since the teams running each business had their own responsibilities; as a result, those who were good at marketing helped those who were good at accounting.

“Submit”
BBE graduates have told me that they often wanted to gain more experience before starting their own company, so they applied for jobs. During interviews, employers were amazed by the students’ knowledge and understanding of business; frequently, BBE graduates were told they were not like “normal” business graduates!

The shoveller
26 days ago

Any neuroscientist would tell you that this is how your brain works and there is nothing surprising in here at all. The bottom line is "learning" in the broadest sense requires effort; there are no short cuts. Good to have examples and evidence from students that backs this up though. The tricky bit is that up and down the land multiple people are saying this til blue in the face by way of "student support and development" activity, but the penny only really drops when students actually experience it for themselves, and this often requires placing students in positions they feel uncomfortable with, like being faced by an exam or a presentation. However, if you survey students before and after such activities, the majority identify that these situations had more carrot than stick function for their learning and recognise the benefit.