AI hasn’t broken assessment; it’s exposed what we were already ignoring

Jayne Pearson and Brenda Williams explore whether staff and students can articulate why they're assessed through essays

Jayne Pearson is a Senior Lecturer in Educational Assessment at King's College London


Brenda Williams is a Reader in Experimental Neuropathology (Education)

Since the arrival of ChatGPT in 2022, higher education has been scrambling to respond.

Much of the conversation has centred on plagiarism, misconduct and whether written assessments can still be trusted as evidence of learning. Universities have issued guidance, updated regulations and encouraged staff to “embrace” AI while designing more “authentic” assessments.

But beneath this activity sits a more uncomfortable truth: AI hasn’t created a crisis in assessment so much as exposed an existing one.

For decades, universities have relied on written assignments in no small part because of the premise that writing facilitates learning. Yet assessment practices continue to focus almost entirely on the final product, leaving the process of writing invisible.

If we want to find sustainable responses to AI, we need to stop treating this as a technical problem and start asking a more fundamental question: what do we actually value about student writing, and how do our assessment systems reflect that?

Writing is central to learning

Our research, currently in press, was conducted in late 2025 with student and staff groups across two UK HEIs. We asked students why they are assessed through essays. Many struggled to give a clear answer.

This is not because they lack insight, but because the purpose of writing is rarely made explicit. When prompted, students consistently linked writing to deeper learning, critical thinking, authenticity and employability. Staff, meanwhile, are often deeply committed to writing as thinking, as a means, or process, through which ideas are formed, tested and refined.

However, our participants confirmed our hypothesis: that this shared belief is undermined by assessment practices that reward only the finished product of written text. The struggle, uncertainty and revision that make writing educationally powerful are relegated to the hidden curriculum.

Students are expected to “pick up” how writing works, often through trial and error, while being judged solely on the outcome. AI has made this contradiction harder to ignore. If writing really is about facilitating thinking, what happens when parts of that thinking are outsourced, or cognitively offloaded, to a machine?

Behind the curtain

Much of the anxiety around AI rests on assumptions about student behaviour.

In practice, our research confirmed the findings of many others: students have a far more pragmatic and cautious relationship with AI tools. Students from both institutions reported using AI primarily to manage the mechanics of writing: summarising readings, structuring arguments, editing language, checking clarity or translating ideas into academic English.

They are often acutely aware of the limitations of AI-generated content and express concern about accuracy, generic output, and the risk that over-reliance on these tools will be detrimental to long-term learning.

Students do not frame AI as a replacement for thinking. Instead, they frame its use as a response to structural pressures: heavy assessment loads, overlapping deadlines and the need to perform consistently across multiple modules. For international students and some neurodivergent learners, AI is also seen as compensating for long-standing inequities in how writing is taught and assessed.

Staff concerns tend to lie elsewhere. When prompted more deeply, it is clear many actually worry less about misconduct and more about what might be lost if parts of the writing process are outsourced: the slow, effortful work through which understanding develops. This concern is often difficult to articulate, particularly in institutional cultures that frame efficiency and productivity as unqualified advantages.

Shedding light

One proposed response to these tensions is to assess not only the final essay, but how it is produced. Our research proposed one such form: the “processfolio”, a concept originating in arts education and adapted here for writing. It asks students to depict the journey of producing a piece of work through a collection of artefacts, for example drafts, plans, feedback, source notes and AI prompts, accompanied by a reflection.

The appeal is obvious. Process-based assessment promises to foreground learning rather than police integrity, reduce reliance on unreliable detection technologies, make tacit expectations about writing more explicit, and acknowledge that writing is already a hybrid human-machine activity. However, while staff were largely enthusiastic about these possibilities, students were more ambivalent.

While students appreciate opportunities for reflection and transparency, they are also highly sensitive to workload and intent. Many are sceptical that documenting their process is in fact beneficial for them, suspecting instead that it primarily serves institutional interests in monitoring AI use. This scepticism deepens when process-based approaches are framed, even implicitly, as tools for deterrence or proof of originality.

The perceived purpose of assessment shapes behaviour. If students believe that documenting their process is a form of surveillance, they are less likely to engage honestly, undermining both trust and educational value.

A crisis of trust

The most contentious issue is grading. Some staff argued that unless the process is assessed, students will not engage with it. Students, however, often strongly resist the idea of grading something they see as personal and linked to their identity.

They worry about subjectivity, bias and being judged not just on what they produce, but on how they think. This is particularly concerning for neurodivergent students, given that the focus on process is often framed through an ableist discourse of linearity, time management and the necessity of “struggle.”

Workload concerns cut across both groups. Adding process documentation as an accompaniment to existing assessments risks exacerbating over-assessment, particularly in modular systems with limited coherence across programmes. There are also legitimate equity concerns. Asking students to retrospectively reconstruct and narrate their process may disproportionately burden those already managing cognitive or organisational challenges.

What emerges here is not opposition to innovative pedagogy, but a crisis of trust. Students are acutely aware of how assessment is used as a control mechanism, and AI has heightened those sensitivities. Any attempt to make learning processes visible without addressing this context risks backfiring.

Reasserting the value of assessment

The most consistent message from both staff and students is that process-based approaches should be used selectively, not universally. They appear most valuable when used during the first-year transition, in formative or low-stakes contexts, as part of programme-level thinking rather than isolated module interventions, and when accompanied by a clear explanation of purpose and limits.

Used well, process-focused activities such as the processfolio can help students understand what academic writing actually involves and why it matters. Used poorly, they become another layer of bureaucratic compliance.

The age of AI has made one thing clear: we cannot continue to rely on essays as proxies for learning while ignoring how those essays are produced. Detection software and declarations of AI use on cover sheets do little to address this mismatch.

What is required instead is a cultural shift that aligns assessment, teaching and institutional messaging around writing as a process of learning, not just a product to be judged.

The question is not how to stop students using AI. It is whether universities are willing to be honest about what they value in student writing, and to design assessment systems that reflect that, even when doing so is uncomfortable, complex and resistant to easy solutions.

3 Comments
Paul Vincent Smith
1 month ago

Thanks for this interesting piece. The idea that writing is a form of thinking – when carried out by humans – is a necessary come-back to arguments relating to the death of writing in universities.

It is indeed the case that for a long time written products could reasonably be taken as the culmination of various learning practices. The essay genre in particular requires that a series of useful competences be exercised in order to bring one successfully to completion. Process approaches are certainly worth considering, although I suppose one has to ask whether it’s always about the journey, or sometimes the destination.

At the moment I find it hard to be dissuaded from two positions. First, that assessment should be based on disciplinary practices, with many iterations of “the same” types of assessment, thus allowing for the “slow learning” that you set out here. I’m not sure I see this as very much divorced from the “picking up, through trial and error” that you cite earlier. At least since David Bartholomae, we’ve had the notion that students “reinvent the university” with every assignment. And disciplinary practices can be wide-ranging and “authentic” in themselves; David Russell’s book “Writing in the Academic Disciplines” shows that something like writing for non-specialist consumption is in fact a well-established form of writing, replicated nowadays in blog posts and similar.

The second position is that our travails with AI assessment will not be solved purely within universities and by reflecting on student behaviours. I’m more concerned with reflecting on university leadership and big tech behaviours.

Peter Jones
1 month ago

I’m sceptical about the idea of a “processfolio.” You say that students would be asked to provide “a collection of artefacts, for example, drafts, plans, feedback, source notes, AI prompts, accompanied by a reflection” – couldn’t those things all be produced by AI anyway? They’re just more written products, after all. You might want to see timestamps etc. and this would at least push students to use AI at appropriate times, but they could still ask AI to generate each step.

The essential problem is that, outside of exams, many kinds of writing tasks commonly set for undergraduates can be generated without meaningful human input. An alternative might be to link tasks to things that happen in the real world that the AI cannot know about. For example, students observe an experiment, take notes, and write a report based on what happened. They could still use AI for the writing, but they would at least have to give it the relevant information.

Another example might be to ask dissertation students to include notes about the discussions they had with their supervisors throughout the year. Supervisors could keep records to compare with these notes. This would encourage engagement throughout, and would also require students to show how they have incorporated something that happened in the real world (the meetings) into their work.

Matt Robb
30 days ago

I agree that there is a problem: in the AI world, the quality of the output from a student may well not relate to the quality of their understanding. You can’t solve this with compliance policies, and you shouldn’t try, because AI is pervasive now in workplaces and will become more so in future.

You can, however, reliably assess the critical thinking and creativity skills that students need to be developing. And, if need be, make scores on those assessments part of the grading.