I’m a little worried about how artificial intelligence might affect students’ motivation, sense of identity, and intuition around knowledge production and consumption.
A lot of us in higher education are thinking about using AI tools creatively, perhaps even embracing them. In the long run, this is probably the right thing to do. But it will be difficult to do well, especially if we rush.
The problem of fewer problems
Students already have access to tools that can write for them, improve their writing, perform literature searches, summarise arguments, provide basic critical analysis, create slide decks, and act as a tutor. With the right subscription, you can even set a bot loose online to work through long lists of automated tasks in service of all manner of projects.
So these tools suddenly make it really easy to do all kinds of things, very quickly, that used to be hard and slow. Many of these things used to be essential to the intrinsic value of university work – planning, decision-making, struggling, searching, assessing sources, grappling with difficult texts, and learning to accept the discomfort of uncertainty.
AI tools remove problems. But some of the problems they remove are useful to have and to solve as a human. Students will find it really difficult to distinguish which problems are good to solve quickly with AI and which are more valuable to solve themselves.
And society is entering a whirlwind of new possibilities regarding what is and is not legitimate knowledge or legitimate knowledge work. Students will, for some time, I think, be very confused about what’s okay, what’s worth doing, and what their individual role is within the nuanced ecosystem of consuming and producing knowledge. What’s my role in this education thing anyway? Where’s the intrinsic value of learning? What am I aiming for, and will that be the same in two years?
Students have already been trained by our system to think instrumentally or extrinsically about the outcomes of their work. Many focus on getting the essay done, achieving the passing grade, obtaining the piece of paper. It often seems more difficult for students to see the intrinsic value, the self-transformative value that comes from wrestling with difficult tasks and processes. AI tools will exacerbate this conflict. We may, for a time, lose any clarity at all around what’s exciting and what’s important in university. A kind of built-in nihilism of knowledge work.
Through a scanner, darkly
In the broader information environment, there will be a steep rise in deepfakes, scams, pranks, political mischief and spammy internet content. In the next couple of years, it will become quite difficult to know what's real and what's not, especially in the absence of regulation.
Students, both in their private lives and in their university work, will find it increasingly difficult to know what to trust and what not to. Assessing and reassessing information could become a nearly constant activity. In this environment there’s a risk of exhaustion.
With all of these forces at work, then, it’s important to recognise that student energy, motivation, even identity could be at risk. Again: why am I doing this? What’s important in all this? What is my role?
So before we in higher education rush to integrate AI tools into the classroom, rush to have fun with ChatGPT and all the rest, and risk inadvertently accelerating or elevating these harms, can we resolve to proceed with caution?
Criticality and caution
Universities, at a governance, planning, and policy level, can redouble their commitment to humane and people-centric cultures. This will mean a deep and serious reconsideration of budgets, allocation of resources, spaces, and teaching philosophy. Now more than ever it will be important to increase the ratio of teachers to students.
We must be crystal clear on what’s allowed. That needs to come from the university, but we can also do that in faculties and in individual courses and modules. What is the code of conduct? This will probably have to be reassessed frequently.
We should be crystal clear on what's assessed. Whereas a year ago we might have assessed clarity and correctness of language, for example, that no longer seems fair: it would benefit students using AI tools at the expense of those who are not.
We should also resist the urge to require the use of AI tools and we should try not to advantage those who do. Students who cannot afford to, or who do not wish to, use these tools should not be discriminated against. It will be quite difficult to design assessments so as not to create inequities of accessibility, but this is the right aim to have.
Perhaps most importantly, we should teach criticality and caution around AI, and acknowledge the complexity of the present moment. Talk with students about the confusion, the ethics, and the potential harms.
And finally, we should reintroduce humanity and physicality into learning and teaching wherever possible. Devise ways to let students unplug and interact with the real world, with the physical environment and with each other. Design digital interactions to minimise clicks and tabs and constant surfing and non-stop switching from platform to platform and app to app.
In other words, let’s leverage the embedded criticality in our higher education culture to promote a more humane world, even in the flood of new technologies that challenge equality, humanity, identity, knowledge, and the future of learning.