Some reasonable adjustments may have just become academic misconduct

Lots of universities across the UK permit the use of Grammarly in reasonable adjustment plans for disabled students, and some permit it in certain circumstances for those whose first language isn’t English.

It makes sense as a support mechanism – the software catches spelling mistakes, suggests grammar improvements and helps users communicate more clearly.

But Grammarly’s latest product launch raises an uncomfortable question about where the line sits between support and doing the work for students.

The company has just announced eight new AI agents that go far beyond checking grammar. Each is quite a step away from proofreading, moving into territory that universities would typically consider the preserve of core “human” academic skills.

The Citation Finder agent searches for academic sources that support, dispute or refute claims in a student’s writing. It doesn’t just find sources, it automatically generates correctly formatted citations. This is work that students would normally be expected to do themselves as part of developing research skills and critical engagement with literature.

The AI Grader agent is a doozy. It provides substantive feedback based on uploaded marking rubrics, course information and writing topics. It delivers tailored recommendations and provides estimated grades. Students can basically mark their own work before submission using the same criteria their tutors will use.

The Expert Review agent offers subject-matter expertise and topic-specific feedback to meet “rigorous academic standards”. It goes well beyond checking whether a sentence is grammatically correct – it’s providing the kind of discipline-specific guidance that students would normally get from personal tutors, supervisors or subject librarians.

The Paraphraser agent adapts writing to fit different tones, audiences and styles. It can make work more academic, more professional or more creative. It evaluates the current tone and lets users create custom voices. It’s not correcting errors as much as fundamentally rewriting content to achieve a particular effect.

The Reader Reactions agent goes even further – it predicts how a target reader will respond to a piece of writing. It identifies likely key takeaways, anticipates questions readers might have and flags potential confusion. That kind of anticipatory feedback would normally come from peer review, tutor comments or a student’s own developing sense of audience.

There’s also a Proofreader agent that offers inline suggestions for clarity and confidence, a Plagiarism Checker agent that scans work against databases, academic papers, websites and published works, and an AI Detector agent that scores whether text appears to be AI-generated or human-written. On top of that, students now have access to AI Chat – an integrated assistant within Grammarly’s docs workspace that helps with brainstorming, summarising and generating suggestions throughout the writing process.

Where the line gets crossed

If a student with dyslexia uses Grammarly to catch spelling errors, that may be a reasonable adjustment. But a student using the same platform to have an AI agent find sources, generate citations, provide expert subject feedback, paraphrase their arguments, predict reader reactions and estimate their grade is doing something that looks rather different.

Lots of these functions replace work that’s theoretically central to what universities assess. Finding appropriate sources and evaluating their relevance is a core research skill. Understanding how to construct arguments that anticipate reader questions is part of developing academic writing – learning to write for different audiences and purposes is an explicit learning outcome on many programmes.

When AI agents do the work, what exactly is the student demonstrating? They’re showing they can operate software. But are they showing they can research, think critically, construct arguments or write effectively? That becomes much harder to determine.

Where policies or adjustment plans reference Grammarly, they’re likely to have been drafted before these new agents existed. The assumption was that we were talking about a grammar-checking and proofreading tool.

If that’s the case, the policies haven’t caught up with the software’s evolution. A blanket permission to use Grammarly now means something entirely different from what it meant even a month ago. Students who were told they could use the platform might reasonably assume all its features are fair game.

Universities risk putting students in impossible positions where they’re simultaneously told they can use a tool and that using certain functions of that tool constitutes cheating. Academic staff marking work may not even know what features their students have access to.

It does mean that many universities need to move faster than they typically do when technology outpaces policy. Disability services teams may need to review reasonable adjustment plans that reference Grammarly and specify exactly which functions students may use.

Programme teams and academic departments may need to clarify what’s permitted on their courses. SUs should be pushing for clear communication to students – and universities also need to think about how they communicate these distinctions to staff.

In the longer term, though, making students do all of these things by themselves may be the 2025 equivalent of making students find things using index cards in a library.

On Radio 4 this morning, Peter Howard – professor of economics at Brown University – was asked what his advice would be to someone thinking about the future:

I want to have a set of skills that equip me for the modern era, for the AI era. I think that the skills that will be most valuable relative to the past will be what we would call “people skills”. It would be skills of getting along with other people, leadership skills, skills of communication, personal things that you can’t do by just asking for an intellectual task to be performed by a computer.

As I discussed over on the Post-18 project, we’re back to the same debate – what do we really need humans to do? And do those who fund HE agree?

2 Comments
Charles Knight
18 hours ago

“If that’s the case, the policies haven’t caught up with the software’s evolution.” No policy will keep up in an era of software as a service – it becomes even more nonsensical when you realise that virtually all UK universities provide Microsoft 365, where AI capabilities continue to extend. So many students will be breaching the regulations as currently written daily. The challenge is not even the end-point represented by the assessment, but what happened before – it is now possible to use agentic AI to search the VLE for you, provide summaries, and even answer questions posed there. There is a new mode…

Natalya D
5 hours ago

Universities have been feeding back to DSA for ages that Grammarly Premium (and products like Scholarcy, which is an AI ‘summariser’) are problematic for us within academic guidelines. Some students genuinely Do Not understand the finer distinctions within the tools – that X is OK but Y is not… And it changes so fast we can’t keep up. At the university where I am a disability adviser, we had the policy that if DSA paid for Grammarly the student could use it within some guidelines if it was added to their adjustments document. Except we had no way of reliably knowing…