
Setting the curve 4 – the contours of the generative AI debate

Will generative AI stop students from thinking or help them do it better? Educators are split – but Kelly Coate calls for calm

Kelly Coate is provost at Richmond American University London


Debbie is Editor of Wonkhe

The incursion of generative AI tools such as ChatGPT into learning and teaching has posed an immediate threat to academic integrity in assessment, but there is also a longer-term question about what the technology means for teaching and learning at university, and the extent to which it might profoundly reshape the idea of learning.

Asked to rate, on a five-point scale, how significant they believed machine learning and AI tools would be in changing teaching and assessment, more than two thirds (69 per cent) of respondents to the Wonkhe/Kortext survey of educators selected four or five, suggesting that most think it will be significant or very significant.

However, they are not necessarily optimistic about the impact of those changes. When we asked whether, on balance, they felt more optimistic or more concerned about students having access to AI tools, 37 per cent were on the optimistic end of the scale, while 24 per cent were on the more concerned end. The remainder were neutral.

Those who were more concerned often cited worries about academic misconduct – but a wider worry is that generative AI tools will be used as a substitute for deep critical engagement with knowledge, or to “bypass the learning experience” as one respondent put it. The risk of students being “tempted” was a common concern.

I’m not against AI but I think one of the frustrations of HE teaching is that students often want to get a degree but don’t appreciate the skills they are learning on the course – it’s a lot about the assessment and grades. AI will further diminish independent study skills and time management, etc. due to its ease of use, making students less employable. Academic, post-92 university

Those who were more optimistic took the polar opposite view – they said that AI tools could potentially enhance students’ ability to think critically, be creative, and solve problems. They were also more confident about the possibility of teaching students to use the technology with integrity. And there was a pragmatic acknowledgement that if the technology exists and is likely to be used in the workplace, then universities will need to teach students how to use it.

The toothpaste is out of the tube – it would be like saying should we tell students not to use the internet for research – I remember when I was a student and some people tried to stop us using word processors to write essays – if the technology is there of course students should use it. Our job as academics is to keep abreast of new developments and adapt our teaching and assessment accordingly. Learning and teaching professional, specialist institution

It is exciting about making a level of academic literacy available and achievable to students or learners who often feel overwhelmed or intimidated about trying to ‘sound’ or ‘write’ in a formal academic tone. It should allow more space to read, to ‘think’, to critically analyse and explore texts, concepts, wider access and sharing of knowledge and understanding, VERY EXCITING. Member of executive team, FE college

Sometimes a good moral panic about technology can help us to examine our preconceptions about learning

Kelly Coate, Provost, Richmond American University London

When ChatGPT came on the scene and everyone realised the potential implications for student assessment, the debate immediately took on the contours of a moral panic. In fact, it felt disturbingly similar to previous moral panics about technology.

I recall that even when PowerPoint was introduced years ago, there were academics who refused to use it because they worried it would lead to dumbing down – forcing them to reduce their teaching material to bullet points on slides rather than the fluent and complex prose they had been used to. And when MOOCs arrived there was concern that learning would become too “bite-sized” and easy. Most moral panics around new technologies have also included the fear that students would use them to cheat.

Despite an inauspicious start, I actually think that the conversation around AI has moved on much more swiftly than, say, that on the rights and wrongs of lecture capture. What I see now in the sector is sensible conversations about how we support students to use AI appropriately, and support staff to engage students on these issues.

It helps, perhaps, that this is not seen as a top-down initiative driven by university management – higher education is part of a national, even global, confrontation with a novel technology. And AI is affecting so many professions in such a profound way that academia can’t claim to be an exception. There’s also the concern in some quarters that AI could mean the end of civilisation – which can be a bit scary for some, but gives the topic a certain edge: you can’t ignore it, and it’s something you have to have an opinion on.

The assumption that generative AI will actively encourage students to cheat is lazy. In my experience, students may cheat when they are stressed and overwhelmed, so the more sensible conversations we can have about how AI can support effective learning, and the more tools we make available to do this, the more confident we can be that students will be enabled to do the right things.

The learning technology specialists are all over this conversation, running workshops for staff, thinking through how the sector will adapt, sharing practice. Academics have approached academic developers and pitched ideas for new approaches to assessment, and some have pitched for education innovation funding – people recognise that the only sensible response is to try new things and experiment. That said, I’m not convinced that a lot of what we are seeing is genuinely novel – it’s great when educators can integrate use of AI tools into assessment in interesting ways, but I’m not sure we’re asking students to do anything that is substantially new.

I’m also mindful that there is a lot that is beyond any institution’s control about how this technology develops – we’ll need to watch it unfold and be sure we are keeping up with the conversations. Those begging for clarity about exactly what should be done may need to make their peace with the idea that it’s not something they can have.

The bigger challenge, beyond the immediate pressures of integrating AI into day-to-day learning, is reflecting on how changing technology challenges our values and preconceptions as educators. Every technology shift brings gains and losses of skills and knowledge – people learn differently now from how they learned thirty years ago because they were raised on the Internet, and you could argue that curation is increasingly a more important cognitive skill than synthesis.

If in academia we value things like authenticity, integrity, and originality, we need to be able to articulate why those values remain important in the age of generative AI. Doing this can only help students to make meaning from their higher education learning experience – in fact, it’s really what we should have been doing all along.

This article is published in association with Kortext as part of a suite on the theme of universities deploying technology for learning, teaching, and student success. You can download the full report of the survey findings and leaders’ insight over on Kortext’s website here.

3 responses to “Setting the curve 4 – the contours of the generative AI debate”

  1. I am very resistant to AI being used without clear parameters of use in HE. I have used this analogy recently to explain my resistance.

    AI is like a bread maker. A bread maker can make a decent loaf of bread – it’s quick and works and you just need to stick the ingredients in and turn it on. Hey presto, bread for the short of time.

    However, there is no love in a bread maker. You don’t understand how to combine the ingredients yourself or choose and experiment with types of flour, you don’t have to work hard to achieve the end result, and you don’t earn, or feel you have earned, the end product. It is just process.

    AI learning has no soul, no understanding or creativity for the user. It’s an impressive piece of software that fast-forwards you to an end product. It will take the soul out of education if we are not careful in its use. HE should be about the exploration of ideas, and investigation into our subjects and ourselves. Using AI will lead to students who don’t question the world around them and graduates who will struggle in the workplace.

    We need a sector-wide response to AI that sets out a clear position.

  2. It’s not just process. I use a bread maker at home, and in the process have experimented with flour & ingredients, recipes, and flavours, and people in the office love (or hate) my unique Marmite / Granary mashups. I’ve exploited the algorithms, in other words. I don’t feel less of a bread provider because the little paddle did the boring kneading bit for me. At work, I’m looking forward to going forward with AI (possibly with murky guidelines). I’ll be mainly asking my students about the value of the recipes they have chosen: Why did you do this? Why does it taste like this? What if I need something else? I won’t ask so much about which algorithms they used.

  3. Every student (and policy maker) should have to watch a video on how so-called AI works (https://www.youtube.com/watch?v=wjZofJX0v4M is a good one). No matter how much maths you know, by the time you get to the bit where it picks the next word at random you should be cured of any faith you may have had in the technology.

    Generative AI is a parlour trick which is good for creating echo chambers but push its training set boundaries and it will make stuff up faster than Donald Trump giving evidence under oath. Plus, of course, it is all plagiarism all the way down.
