Disadvantage is a predictor of AI use. But not in the way you might think

Jim Dickinson finds that disadvantaged students use AI as a production shortcut while their better-off peers treat it as a learning tool – and that the hidden curriculum explains why

Jim is an Associate Editor (SUs) at Wonkhe

If I asked you to guess which students lie awake worrying about whether everyone else’s AI use is putting them at a disadvantage, you might guess the poorest ones. You would be wrong.

In our Trained to Stop Learning data, students eligible for means-tested funding are nine percentage points less likely to use AI on their assessed work than those who aren’t eligible.

They feel clearer about the rules. They are far less worried about falling behind. And when given an open text box and asked how they would redesign their assessments, they call for practical, hands-on formats at more than four times the rate of their better-off peers.

None of this fits a simple story about disadvantaged students. The students with the least financial security are not scrambling to keep up with an AI arms race.

They appear to have opted out of it – and they want assessment formats where the whole question is irrelevant.

Who worries, and who doesn’t

For this year’s Secret Life of Students, we wanted to get past the AI adoption statistics that already exist, and ask harder questions.

Do students feel they have actually learned what they have produced? What are they weighing up when they decide how to use AI on a specific piece of work? And do they think their assessments test understanding?

A national survey of 1,055 students across 52 providers, weighted for gender and level of study, combined with focus groups involving student reps from across disciplines and levels, helped us produce the findings in Trained to Stop Learning, out on Monday.

The competitive anxiety gap is one of the most interesting in the dataset. Among students not eligible for means-tested funding, 62 per cent agree or strongly agree that not using AI puts or would put them at a disadvantage. Among eligible students, it’s 45 per cent.

The clarity gap runs in the same direction. Thirty-eight per cent of eligible students say the line between acceptable and unacceptable AI use is “completely clear” to them, compared to 28 per cent of non-eligible students. Non-eligible students are seven percentage points more likely to describe it as “somewhat unclear.”

Some of this is mechanical – if you don’t use AI, you don’t spend much time worrying about where the boundaries are. But the free text responses suggest it runs deeper than that. Eligible students who did describe uncertainty tended to recount specific, contained scenarios – they tried something, assessed it themselves, and moved on.

I asked it to produce an example work based on the topic I had chosen. I read it, deleted and never used it but I debated how acceptable it was.

I was wondering whether planning via AI was allowed, since I tried to plan out when I should do certain parts of the assessments.

Non-eligible students were more likely to describe a systemic fog – uncertainty not about a particular act but about the rules themselves, sometimes even when they hadn’t used AI at all.

I do not use AI, but if I did, I definitely don’t know where the line is and how far you can go due to a lack of clear guidance.

The pattern is consistent with something we found across the broader class analysis in this data. First-generation students are 11 percentage points more likely than continuing-generation students to find the AI boundary unclear.

State and FE students are 15 points more likely than privately educated students to find it unclear. But bursary-eligible students reported near-zero incidence of having used AI in ways they weren’t sure were acceptable. Privately educated students, by contrast, were far more likely to have pushed the boundaries and far more confident about defending what they did.

There are at least two ways to read this. One is that the students with fewest resources are making cautious choices because they have the most to lose – a misconduct charge is a bigger deal when you can’t afford to repeat a year.

The other is that these students have a different orientation to the whole question – less invested in the AI-as-tool discourse, more focused on getting through their degree on their own terms. The data doesn’t resolve this cleanly, but the free text leans toward the second reading.

The production shortcut

Among eligible students who do use AI, the most common use is planning, structuring, and producing. Among non-eligible students, the skew is toward using AI to understand concepts or materials – 32 per cent versus 17 per cent of eligible students.

Students from lower socioeconomic backgrounds are more likely to experience AI as a production tool. Students from higher socioeconomic backgrounds are more likely to experience it as a learning tool.

The single largest differential in the whole class analysis is on “AI helps me learn more effectively” – 66 per cent of privately or grammar-school educated students agree, versus 34 per cent of state and FE students. That’s a 32-point gap. First-generation students are twice as likely as continuing-generation students to say AI helps them produce work but doesn’t help them learn.

When we titled it all Trained to Stop Learning, we were describing what happens when assessment rewards production over understanding – and AI makes it trivially easy to produce without understanding.

But the dynamic doesn’t land equally. The students with less cultural capital, less inherited familiarity with how higher education works, and less financial cushion are the ones most likely to end up on the production side of the split – using AI to get through rather than to get better.

The decision-making data reinforces the split. Students from lower-occupation backgrounds are far more rule-driven when deciding how to use AI – 85 per cent consider the assignment guidelines, compared to 61 per cent of those from professional and managerial backgrounds. They’re nearly twice as likely to factor in the risk of detection. They’re six times more likely to look at what other students are doing.

And they’re significantly more driven by circumstance. Among bursary-eligible AI users, 40 per cent say time pressure and deadlines are a major factor in how they use AI – compared to 17 per cent of non-eligible students. Forty per cent cite assignment difficulty, versus 13 per cent. Those are not small gaps.

These figures describe students reaching for AI not out of laziness or ethical indifference but because the clock is running out, the brief is unclear, and the assignment is hard – conditions that track closely with the financial pressure to work alongside studying, the caring responsibilities that eat into study time, and the workload compression that follows from all of it.

Students from professional and managerial backgrounds, by contrast, are more ethics-driven – 74 per cent consider “what feels right ethically” versus 46 per cent – and more learning-driven, considering whether AI would actually help them learn at a rate 20 points higher.

State and FE students are significantly more likely to worry about whether AI use would undermine their learning – 56 per cent versus 25 per cent of privately educated students. That’s a gap of more than 30 points, and it cuts against any assumption that disadvantaged students are cavalier about AI. They’re not cavalier. They’re anxious about a different thing – not “will I get caught?” in the abstract, but “is this making me worse?”

The hidden curriculum, again

One reason poorer students may relate differently to AI is that they relate differently to the rules that surround it.

First-generation students are 17 percentage points more likely to agree that there are things you need to know to succeed that nobody tells you. State and FE students are 29 points more likely than privately educated students to say they’ve learned from peers about how the system works. Privately educated students are 16 points more likely to agree that official guidance tells you everything you need to know.

This is the hidden curriculum at its plainest – the tacit knowledge that circulates informally among students who arrived without it, while those who arrived already equipped barely notice it exists.

And it matters for AI because the rules governing AI use are, in most institutions, exactly the kind of knowledge that requires confident handling of ambiguity. If you already know how the system works – which rules are real, which are aspirational, which tutors care about what – you can push the boundaries of AI use with relative safety. If you’re still working out which rules matter, caution is the rational response.

Tutor confidence data makes this concrete. Seventy-nine per cent of privately educated students say they would feel confident explaining their AI use to a tutor, compared to 60 per cent of state and FE students – a 19-point gap.

That may not be a gap in competence. It may well be a gap in confidence – in the belief that you can have a conversation with an authority figure about what you’ve done and come out of it unscathed. The students who arrived with the most cultural capital are the ones most sure they could talk their way through it. The students who arrived with the least are the ones most likely to avoid the situation entirely.

Assessment legitimacy data runs parallel. Students eligible for bursaries are 19 percentage points more likely to see a gap between what their course says it values and what it actually rewards. At the same time, they’re 19 points more likely to agree their tutors are interested in how they arrived at conclusions, and 16 points more likely to say their assessments test thinking.

That’s a counterintuitive combination. These students perceive more individual care from their tutors and more structural misalignment in the assessment system at the same time. They can see what their course is trying to do – and they can see that it isn’t doing it.

What they want instead

An assessment redesign question saw 18 per cent of eligible students call for practical, hands-on, applied assessment formats – compared to just 4 per cent of non-eligible students. The gap is the largest on any single theme in the redesign data.

I would redesign one of the assessments to focus more on practical, real-world projects instead of only written assignments. For example, creating a working software application, solving a real computing problem, or developing a small system.

Instead of reflective essays, make the essay more related to practical knowledge within the course.

Eligible students also raised exam and memory-based concerns at a higher rate – 23 per cent versus 16 per cent – and were more likely to mention group work as something they’d change.

The demand for practical assessment formats fits the accountability moment argument we’ve made elsewhere in this series. Students across the data told us that a single structural feature – whether a future moment existed at which they’d need to show they actually understood what they’d produced – changed everything about how they used AI.

The eligible students calling for practical, applied formats are, in effect, asking for that accountability moment to be built into the assessment itself. They want to prove competence through doing – in conditions where AI can’t do it for them, and where the thing being tested is visibly connected to the thing being learned.

Read alongside the production-shortcut finding, this is a pointed critique. The students most likely to experience AI as something that helps them produce but not learn are also the students most likely to ask for assessment formats where production without learning is impossible. They’ve identified the problem and they’re describing the fix.

The system’s blind spot

When I’ve had informal conversations about AI and disadvantage, folk have mostly assumed that poorer students need more access to AI tools, or more training in how to use them, or more generous policies about what counts as acceptable use. The data suggests something different.

It’s not so much that students with the least financial security are falling behind in an AI arms race – although other datasets suggest a real digital divide in paid-for versus free tools. It’s that many of them have opted out of it entirely – and they’re less anxious for having done so. When they do use AI, they’re more likely to experience it as a production shortcut than a learning enhancer – and they know it.

They seem to follow the rules more cautiously, not because they’re confused, but because the cost of getting it wrong is higher for them, and the tacit knowledge that would let them push boundaries with confidence circulates through networks they weren’t born into.

And when asked what they want from assessment, they don’t ask for more AI guidance or better detection or clearer policies. They ask for assessment that would make the whole question irrelevant – practical, applied, visible. Assessment where what you know is what you can do, and no language model can do it for you.

Universities should take notice. The students who have the most at stake are arriving at exactly the same answer as the students who have thought hardest about what learning means – and assessment should work for both.