I met Livia Scott earlier this week at a university, and while on campus we saw one of those pull-up banners, which said something like "what does ChatGPT mean to me" or similar.
We thought that was interesting – not least because a student was taking a snap of it. There was no URL, just a QR code.
Fair enough, most students have phones. I scanned it and followed the link. That went to a page of IT Guides, of which the one advertised was the last possible option.
I followed the link and was presented with two further options – Drawbacks of using AI Tools, and a guide to writing useful AI prompts and questions. Both were PDFs to download and read. On a phone.
The former, dated April, banged on about hallucinations and fake references – issues that were major with ChatGPT 3.5 in the early part of the year, but not so much now. It didn't mention any other tools.
The latter was another PDF – and much, much better guides to prompting GPT-4 can be googled.
Neither actually told us what the university's rules were on generative AI, the number one thing students say they want to know – although there was a link to the university's academic integrity policy – a 13-page PDF.
Buried in there was a reference to “AI systems” that framed them in the same way as contract cheating services – “ghosting”.
We haven't a scooby whether tools that make suggestions about grammar, or help students write a first draft, or do the research for them, or help them form arguments and so on actually count. In other words, AI tools as collusion were not mentioned.
In any case, we were wondering why, having evaluated the downsides and given tips on prompts, the actual policy says that any work "produced in part" would be banned.
Ironically, we couldn't even get onto ChatGPT on eduroam there. But when we switched to The Cloud – presumably using the same WiFi beacons – we could.