Are students really that keen on generative AI?
David Kernohan is Deputy Editor of Wonkhe
Nearly four in five students (79 per cent) feel that their university's approach to students using generative AI tools is either about right or not strict enough.
It’s a finding (from a YouGov survey of 1,027 students conducted in June and early July of this year) that suggests students are not motivated to use technology based on large language models to pass their degree. Indeed, 93 per cent of students felt that creating work for assessment using generative AI was unacceptable, with 82 per cent agreeing that using these tools to create even parts of such work was unacceptable.
With commercial generative AI tools first capturing the attention of many just a few years ago, we saw all kinds of predictions about the transformation of just about every aspect of modern life. Companies involved in developing generative AI tools, or even in providing the underlying technology, saw rapid increases in value based on very optimistic projections.
We are, at this point, long past the initial hype, and it very much feels like people are taking a much more critical look at this technology – many firms that tried replacing staff with AI tools have now resorted to rehiring them. I’ve heard it (cynically) argued that one of the most viable use cases for generative AI is facilitating academic misconduct (that, and making pictures of anime characters) – and again, students are not only against their peers using it, they are not even sure it would be any good.
Around half (47 per cent) of students who have used generative AI tools to support their study (perhaps by summarising sources or identifying relevant materials) very or fairly often see answers to their queries based on false claims, or on information that has been made up (“hallucinated”) by the tool itself. And 66 per cent think it likely (or very likely) that their university would spot work created using AI if it was submitted for assessment.
It’s not all negative, however: students who had used the tools (about a third had done so once a week or so, with just over two thirds having done so at any point) seem to feel that they have learned slightly more (or about the same) – and that their marks are slightly higher (or about the same) – than in a counterfactual world without accessible generative AI tools.
It’s perhaps not the avalanche that was predicted, though if some students are reporting learning benefits then this should be taken as evidence that there is at least some good being done. But for institutions looking at investments in new software and new platforms, it all feels like rather thin stuff.