Creative students are either afraid of being caught or afraid of being left behind

Gemma Veal is Societies and Employability officer at Exeter Students' Guild

In creative writing, every word is chosen by the author to craft their story in their style.

So how can you morally pass off machine learning as you? In an industry where plagiarism discredits you, is this really any different?

The thought of being replaced by a series of programmes and probabilities is unsettling. With its associations with taking jobs from actors, filmmakers, artists and scriptwriters – and with being generally worse and lazier than human creation – AI became a word no one wanted to say in the creative classrooms of my degree.

But what strikes me now, having graduated, is just how rapidly the ground has shifted.

According to the Higher Education Policy Institute’s 2025 Student Generative AI Survey, 92 per cent of students now use AI tools in some form – up from 66 per cent just a year earlier.

On my degree we were still whispering about it, but now it seems almost universal. The survey found that 88 per cent of students have used generative AI for assessments, with the most common uses being explaining concepts, summarising articles, and suggesting research ideas.

The question isn’t whether students are using AI anymore – it’s how, and whether anyone is helping them use it well.

Elevator pitches

A friend of mine tried to get AI to write her poetry in first year but deemed the product terrible. In my second year, using AI to generate mock-up images of a club was praised as thinking outside the box.

AI cover sheets didn’t come into my degree until my final year. I had made it to that point without using AI at all, but in my creative writing classes I started seeing coursemates pull up ChatGPT to draft their elevator pitches while I was writing mine in my notebook.

This got under my skin because I saw it as cheating on the task at hand. I still won the competition with my elevator pitch – but I felt cheated of my time.

In my final year, a teacher told us that although the essay was permitted to be AI-supported, she would be “disappointed” in the class if we used AI, because we should be “using our brains.”

I wasn’t alone in my discomfort. The same HEPI survey found that 53 per cent of students worry about being accused of cheating, while 51 per cent fear getting false or biased results – and women in particular express greater anxiety about these risks.

As historian D. Graham Burnett observed after asking his Princeton students whether they’d used ChatGPT and receiving silence:

“It’s not that they’re dishonest. It’s that they’re paralysed.”

Students seem to have internalised the belief that using AI for coursework is somehow wrong – yet the data shows almost everyone is doing it anyway.

Writer’s fraud

Doing a creative degree, I didn’t want to use AI to replace my ideas. I saw it as writer’s fraud, and I wanted to see myself reflected in my work and take pride in that.

However, I did use Grammarly, as I was losing marks on spelling and punctuation – I could justify it in my head that this didn’t touch my ideas or creativity, not that I voiced this on my degree.

I haven’t yet knowingly seen AI out-compete human creativity, and the value of human creativity still seems to outweigh it. But AI clearly has uses as a tool – and there’s a moral line to walk when it comes to creativity.

Where’s the line? AI-generated photograph? Takes jobs from photographers. AI actors? Takes jobs from actors. AI sound editing tools? Permissible. AI photo editing software? Saves time.

Where you decide to draw that line is in the creative’s hands, and in a line of work where time is money or coursework deadlines are fast approaching, I can see the temptation to cut corners.

How you present your use of AI can then subject you to scrutiny from others who go by a different moral compass – audiences, peers, or university staff.

Mixed messages and growing gaps

What made this harder was the inconsistent messaging. Only 29 per cent of students in the HEPI survey felt their institution encouraged them to use AI, while 40 per cent felt discouraged or outright banned from using it.

Staff literacy has improved – 42 per cent of students now say their instructors are well-equipped to support them with AI, up from just 18 per cent in 2024 – but there’s still a long way to go. The result is confusion – one module says experiment, another says don’t, and students are left to draw their own invisible lines.

There’s also a growing gap in who feels confident using these tools. The survey found that wealthier students, those on STEM courses, and male students are more likely to use AI confidently and frequently, while students from lower-income households, arts and humanities disciplines, and women report significantly lower usage and comfort levels.

If AI skills become essential to employability – and they likely will – this inequality matters.

The question should be – how are we using AI in creative subjects, and how are we standardising this to protect creativity while showing how it can be used as a tool? How do we keep students in the loop so they aren’t left behind or stuck in misconceptions? And how are we standardising a staff approach too?

Because right now, students like me are left to figure it out alone – caught between the fear of being replaced and the fear of being left behind.