Shame and fear won’t fix students’ AI use

When we published our research on AI in assessment last month, one of the findings that generated the most response was what we called Finding 7.

It was the idea that policy incoherence is a distributional justice problem that consistently punishes the most conscientious students.

The students carrying the heaviest emotional burden weren’t the ones using AI most aggressively.

They were the ones most trying to comply with rules that didn’t function in practice – students who stayed up all night completing work manually because they couldn’t get a clear answer about whether AI transcription was permitted, students who felt “awful” after using Grammarly because they weren’t sure if it counted, students who described the calculation as closer to moral injury than administrative confusion.

We touched on this emotional dimension, but we knew it deserved more attention than we gave it in the report.

Now a new paper from Australian researchers has done very similar work – and what they’ve found raises real questions over where many current approaches are heading.

The study

“Feeling AI: Circulating emotions, institutional climates, and moral boundaries in student use of AI” is published in Higher Education by Glenys Oberg, Yifei Liang, Margaret Bearman, Tim Fawns, Michael Henderson, and Kelly Matthews. It draws on a national survey of 8,021 students across four Australian universities, plus 79 focus group participants.

The researchers use Sara Ahmed’s theory of “affective economies” – basically, the idea that emotions don’t just happen inside individuals but circulate between people, technologies, and institutions, accumulating weight through repetition.

When guilt or anxiety “sticks” to AI, it’s not random – it’s shaped by institutional messaging, policy framing, and the stories that students tell each other.

The quantitative findings show what we might expect – optimism and excitement correlate strongly, scepticism and worry correlate strongly, and many students hold both simultaneously.

More than half expressed optimism about AI, and more than half expressed scepticism. The emotional landscape is ambivalent rather than polarised.

But as ever, it’s the qualitative findings that deserve the closest attention.

The moral weight

Students in the Australian focus groups described AI use in terms that were very familiar. One student captured the tension like this:

I’m so grateful there’s AI… but I feel like I’m getting lazier because I rely on ChatGPT a lot… it planned it out for me, and you stop kind of thinking about things.

Another said simply:

I feel guilty about not being a good student… just taking the easier path.

Policy tends to assume that guilt about AI use signals wrongdoing. But here students had made a considered choice to use a tool that is often explicitly permitted – and still experienced that choice as a moral failing.

The ease that AI provides triggers anxiety about effort and authenticity. As the researchers put it,

…the emotion here is not a trivial side-effect but rather it is doing important work in aligning the student with the ‘right’ kind of academic body.

Students in the Australian study were policing their own boundaries with remarkable precision. One drew the line clearly:

Cutting down paragraphs, grammar stuff, that’s fine. But research? Do it yourself.

Another insisted that using AI for anything beyond technical assistance would make her feel “like a fraud or a robot.”

That’s almost word-for-word what we heard in the UK. Our focus groups surfaced students who had developed personal ethical frameworks more sophisticated than anything their institutions had provided – distinguishing between structural scaffolding and content generation, between AI as tutor and AI as production accelerator, between using AI to understand and using AI to bypass understanding.

The Australian paper gives the phenomenon a name – “moral affective boundaries.” Students are doing serious intellectual work to figure out where the lines should be. And they’re doing it largely alone.

The regulatory divergence

Interestingly, the Australian study was conducted in a specific regulatory context. In 2024, TEQSA (a kind of OfS/QAA hybrid) required all universities to submit institutional action plans in response to AI, explicitly noting that:

…the growing power and increasing availability of gen AI tools raises concerns about the authenticity of student work.

The researchers describe this as creating:

…a specific institutional affective climate of vigilance and suspicion that pre-configured student engagement.

Students had apparently absorbed the framing. One described plagiarism as “drilled into me even in high school.” Another said universities are “looking to catch” students – that using AI felt like “a potential danger” that could expose them to punitive scrutiny. The researchers argue that institutional policies operate through “affective governance” – mobilising shame and fear to enforce academic norms.

The UK hasn’t had the same regulatory intervention, at least not yet. OfS has been comparatively quiet on AI, leaving institutions to develop their own approaches without a national framework. Our research found plenty of anxiety – but it was more often about unclear guidance than about being caught.

The Australian findings suggest that cheating-first regulatory framing doesn’t prevent problematic AI use – it just makes conscientious students feel worse about legitimate use. The researchers note that:

…current institutional responses frequently rely on ‘affective governance’ that mobilises shame and fear to enforce academic integrity… which risks fracturing human pedagogical relationships.

If OfS (via a nudge from DfE in the Skills White Paper) is considering a more prescriptive national approach to AI in assessment, this should give pause. The Australian experience suggests that leading with integrity concerns creates an emotional climate that falls hardest on exactly the wrong students.

Trust as the missing variable

Both studies converge on the relationship between belonging, trust, and AI use.

In our research, students with stronger belonging were markedly less likely to use AI for assessments. Where belonging was absent, students experienced their course as a production line and reached for AI accordingly. The absence of resourced peer learning appeared as a structural driver of AI adoption.

The Australian paper arrives at a similar place from a different direction. It calls for “pedagogical trust”:

…a reciprocal learning relationship open to uncertainty and co-navigated through shared sense-making.

The argument is that AI becomes emotionally fraught precisely because the conditions for trust have broken down. Students don’t know what’s permitted, don’t know how tutors will respond, and don’t feel confident that the system has their interests at heart.

Our research identified the accountability moment as the key design variable – students use AI very differently when they know they’ll need to demonstrate understanding in person. The Australian research identifies the affective climate as the key relational variable – students navigate AI very differently when they trust the institution and feel they belong.

These aren’t competing explanations. Assessment design (the accountability moment) is the structural fix. Trust and belonging (the affective climate) are the relational fix. Universities probably need both.

The resource we’re wasting

But the most notable convergence is what both studies say about students themselves.

The Australian researchers conclude that “students are negotiating complex affective and moral decisions when engaging with AI in their studies” – and that institutions should respond by fostering “critical affective literacy” to “create spaces for emotional and ethical dialogue.”

We concluded that students “are not confused consumers of a broken system” but “thoughtful, ethically engaged people working through real complexity with very little institutional support.” We argued that universities have an untapped resource in students’ own ethical reasoning – and that creating structured opportunities for that reasoning to be surfaced and shared would do more than another round of policy revision.

The students in both studies aren’t waiting for better policy. An engineering student in our focus groups had arrived at a principle of “augmentation not replacement.” A computing student had developed a doctrine that AI use must not “inhibit my understanding of the topic.” The Australian students were drawing lines with equal precision – grammar assistance fine, research assistance not, core intellectual work protected as authentic.

These frameworks were often more considered than those their institutions had produced, and they’re invisible to those institutions – developed in private, shared only with trusted peers, and never formally acknowledged.

What universities could do is create the conditions for this thinking to surface – not through more declarations or surveillance, but through genuine dialogue about what learning is for and where AI fits.

The Australian researchers call this “supporting students as morally engaged, reflective learners through ethically attuned dialogue.” We called it treating students as partners in working this out rather than subjects to be governed. Potato, potahto.
