It looks like you’re trying to assess a student

Next week, Turnitin's AI text detection tool will go live – and the news is causing widespread consternation (and, dare I say it, panic) across the sector.

Jim is an Associate Editor (SUs) at Wonkhe

At Wonkhe towers we were alerted to the news via an excitable press release from its PR firm that read like the firm was trying to get into the hit parade with their new single:

Within 10-14 days, Turnitin is going to announce that the “switch has been flipped” on AI detection. Educators who have Turnitin licenses – and there are lots of them – will immediately be able to see how much AI wrote a student’s paper. We will officially announce this the day BEFORE the switch is turned on.

The FAQ (which isn’t publicly available on its website) reveals that the tool will go live on Tuesday, April 4th – and will be rolled out to everyone, without the option to opt out.

You can’t test the tool before it goes live either, and the AI writing detection indicator and report “are not visible to students”, because of course not.

You might be thinking, as I was, what exactly the tool is purporting to be detecting. Well, it turns out that the tool has been trained on GPT-3 and GPT-3.5 – but the paid-for GPT-4 (which is much better at avoiding false “hallucination” references, for example) has been out for a fortnight now. Turnitin says that it plans to expand detection capabilities to other models “in the future”.

In other words, students who pay for the latest version of ChatGPT might be able to evade detection by the tool – and will at the very least assume they can. What’s $20 when you’ve paid £17k in fees for your MBA this year?

In most universities the policy is that a Turnitin similarity score is a trigger for a proper investigation, in which staff then interrogate the matches the report identifies.

But an AI score won’t facilitate that kind of investigation, because the software doesn’t produce the same answers from the same prompts – there are no matches for staff to check.

Now clearly, a university could – and almost certainly should – instruct staff to ignore the warning signs on the dashboard when they appear.

But it’s almost inevitable that some academic staff who see one of those flashing red lights are going to give students worse grades – either consciously or subconsciously.

Alternatively, a university could attempt to use the score in academic misconduct processes. That’s a recipe for students who can afford lawyers getting off, hundreds of false positives, and an academic misconduct system in instant crisis.

Won’t students just run their text through Quillbot, or even just prompt GPT-4 to create some text that would get past Turnitin? Les Dawson playing the piano springs to mind here.

On January 1st, 2024, the AI tool will become a paid feature – the desperate Turnitin equivalent of Elon Musk trying to charge you to have a Blue Tick.

Of course the wider issue is that the panicky statements and emergency amendments to assessment regulations issued by some universities so far all look like my granddad complaining about newfangled technology and not being able to program the video recorder.

One I was looking at yesterday specifically mentions ChatGPT. Right now OpenAI is testing over 100 GPT-4 plugins, due for release within days – and it’s unlikely they’ll all have a sticker on saying “built on GPT-4” or even “this is generative AI”.

Universities “banning” it for this spring/summer assessment round really need to clarify exactly what they’re banning.

I’ve also seen some statements banning the use of “generative IT tools” in general, or relying on originality statements. Are the spelling and grammar checkers in Word OK? And if so, are AI powered versions, which go a bit further on suggestions and are already being tested in Google Docs and Sheets for some customers, OK?

Where are you drawing the line?

I think Easter is going to give everyone a breather for a week or so – but my guess is that the sector, its regulators and its convening bodies are going to have to step up on handling what’s about to be a spring/summer assessment crisis if we’re not careful.

If nothing else I’d be working with my local students’ union on convening some honest focus groups to draw out some questions and insights into how these tools are being used – because everything I hear suggests that usage is very widespread, and so is talk of usage in WhatsApp groups.

Are students supporting each other with assessment or colluding over cheating? If you’re not confident about the answer, it’s hard to see how rules might be enforced.

How would you handle this, for example, if you saw it in your refectory?

Oh – and universities with computing and IT degrees – how are you hitting the OfS B Conditions on up-to-date content?

To pick up the conversation about how AI is shaping higher education, join us on 19 April for a half day online event: The avalanche is here.

2 responses to “It looks like you’re trying to assess a student”

  1. Must say I’ve been curious about what Turnitin was going to do. Thanks for the update.

    Sticking my neck out: I think Jim is right. I’m always sceptical of any technology that claims to be the next big thing (just look at the various blockchain-related cryptocurrency news articles of late). But this doesn’t feel like storm-in-a-teacup territory – there’s a need for a more fundamental rethink than just a tweak of assessment policies.

    Also: Where’s a “clippy” GIF when you really need one? 🙂

  2. Thanks to concerted action across many different forums, UK HE appears to have secured a last-minute agreement on an opt-out method for institutions who want to evaluate the evidence, test how the AI indicator fits into their current processes, and explain it to staff and students in their own contexts.
