Jo Irving-Walton is a Principal Lecturer in Learning and Teaching at Teesside University

The sector usually talks about generative AI as if it is primarily a governance problem – something to be addressed through policy statements, detection tools, carefully drafted guidance or AI literacy frameworks.

It isn’t. It’s a dopamine problem.

Generative AI doesn’t simply save time. It produces a small, reliable hit of cognitive relief in a system wired for acceleration. Blank pages recede. Awkward sentences are shaped. Structure appears. The tone sounds composed, assured, finished. At the very least, looming deadlines feel easier to meet.

In institutions defined by overload, performance metrics and constant productivity, that shift from friction to output is not incidental. It is deeply attractive, and it is shaping our behaviour faster than policy can meaningfully respond.

The asymmetry of cost and reward

The sector has become adept at mapping AI’s costs: environmental impact, invisible labour, bias, hallucination, skills erosion, institutional dependency. We can articulate the risks with confidence and often with urgency.

But cost-awareness is no match for immediate reward.

We have seen this pattern before. Fast fashion was convenient before it was environmentally destructive. Social media was connective before it was corrosive. Plastic was revolutionary before it was everywhere… forever. In each case, the benefits were immediate and personal; the costs were distributed, delayed and easier to ignore. We normalised first and interrogated later.

Generative AI follows a similar curve, except this time the reward is cognitive. It smooths the discomfort of thinking-in-progress, compresses effort into output and offers affirmation where there was uncertainty. That shift, subtle as it seems, changes behaviour.

A strange emotional temperature

Listen carefully to the tone of university conversations about AI and something unsettled runs beneath the surface. Some colleagues defend standards fiercely, often without sustained experimentation with the tools they are condemning. Others integrate AI enthusiastically, sometimes faster than reflection can keep pace. In between, staff and students occupy a widening half-light.

Students use AI but hesitate to disclose it fully. Staff experiment privately while speaking cautiously in public forums. Official statements emphasise integrity, yet corridor conversations reveal a mixture of fear, curiosity, relief and pragmatic adaptation. Uncertainty, defensiveness, excitement and quiet guilt circulate at once.

We have, perhaps inadvertently, created a culture where admitting AI use can feel riskier than using it badly.

This isn’t simply disagreement about policy. It’s professional identity meeting reward architecture, and neither side is entirely comfortable with what that collision reveals. If generative AI reliably reduces friction, it will be used. Not because people are unethical, but because it offers relief. It lowers the cognitive cost of production in systems that already reward visible output.

The metric trap

We should also be honest about the environment into which AI has arrived.

Students are not operating in leisurely academic spaces. Recent sector data suggests many are balancing close to 50 hours a week between study, travel and paid work, with average employment hovering around 17 hours per week. In that context, AI does not feel like a shortcut. It feels like a flotation device.

For many students, generative AI does not come across as rupture or disruption. It feels infrastructural: another layer of digital assistance in a world already shaped by predictive text, algorithmic curation and on-demand answers. The tension they describe is often less about rule-breaking and more about ownership: how much of my thinking is still mine, and how much has been quietly scaffolded away?

At the same time, institutions measure speed, celebrate efficiency, monitor turnaround times and completion rates, and track improvements in metrics such as “timely feedback”. When positivity scores rise because feedback is delivered quickly and systems appear more organised, those gains matter reputationally and financially.

If a technology optimises for exactly those pressures, it is not subverting the system; it is thriving within it. The dopamine loop is not operating in isolation. We built the conditions in which it flourishes.

If much of learning happens in the struggle of drafting, in the pause, in the slow working-through of ideas, and yet our systems primarily reward the finished artefact, then we are already structurally aligned with the relief AI provides.

That alignment deserves scrutiny, not in the form of tighter controls but in how we define value. If output is rewarded and friction disappears, judgement will thin. Designing for hesitation, revision and resistance is not inefficiency; it is formation.

From interface to ecosystem

Institutions are embedding chosen AI systems into core platforms at speed. When specific tools become invisible infrastructure, they do more than assist; they habituate. If students encounter generative AI primarily through one or two institutional interfaces, they are not being invited to interrogate AI as a phenomenon. They are becoming fluent in a particular system. There is a difference.

A more deliberate approach would expose learners to an ecosystem rather than a single assistant. It would surface where models disagree, where they overreach, where they are biased or overly fluent. It would treat AI as contested, evolving and imperfect rather than seamless and embedded. And it would begin with a simple question: “how are you actually using AI?”

This is not merely a pedagogical preference. The Department for Education’s 2026 Generative AI Product Safety Standards (for schools and colleges) already warn against cognitive deskilling and manipulative design patterns. Yet institutional infrastructure risks baking in exactly the frictionless engagement those standards caution against.

Technical infrastructure is never neutral. It shapes what becomes habitual and what eventually feels normal.

You can’t out-policy a reward loop

All this is why “AI literacy” on its own can feel insufficient. We do not lack information about risk. What we lack is the intentional friction and the structures that make pause possible before habit settles. We are acclimatising ourselves to comfort. Meanwhile, policies proliferate, attempting to regulate behaviour at the surface while leaving the underlying reward structure intact.

The hard truth is this: you can’t out-policy a dopamine loop. You can only build a different loop. If we continue to reward speed, output and neatness above depth, problem-solving and difficulty, we are not competing with AI; we are training ourselves to resemble it.

The real question is not only how we govern AI, but what we reward and what that reveals about the kind of learning we are prepared to defend.

2 Comments
Charles Knight
26 days ago

“We have, perhaps inadvertently, created a culture where admitting AI use can feel riskier than using it badly” is in complete contrast to what you get if you spend any time in commercial enterprises, where everyone and their dog is happy to tell you about their use of AI.

“Technical infrastructure is never neutral.”

And increasingly, in an age of software as a service, it’s not really under the control of the organisation. Take M365: the deliberately confusing range of options is there to ensure user take-up. Some aspects can be mitigated, but it’s a full-time job and hard to keep up.

Sophie Nicholls
26 days ago

Brilliant article. We need to keep celebrating and making space for ‘the struggle of drafting, the pause, the slow working-through of ideas.’ Real magic happens in that space.