The real risk of generative AI is a crisis of knowledge

Universities often focus on new technologies’ impact on assessment, but for Joshua Thorpe there are more fundamental questions about how these tools will affect students’ identities as learners

Joshua Thorpe is an Academic Skills Advisor at the University of Stirling

I’m a little worried about how artificial intelligence might affect students’ motivation, sense of identity, and intuition around knowledge production and consumption.

A lot of us in higher education are thinking about using AI tools creatively, perhaps even embracing them. This is ultimately probably the right thing to do. But it’s going to be difficult to do right, especially if we rush.

The problem of fewer problems

Students already have access to tools that can write for them, improve their writing, perform literature searches, summarise arguments, provide basic critical analysis, create slide decks, and take on the role of a tutor. With the right subscription, you can even set a bot loose online to work through long lists of automated tasks in pursuit of all manner of projects.

So these tools suddenly make it very easy, and very fast, to do all kinds of things that used to be hard and slow. Many of these things used to be essential to the intrinsic value of university work – planning, decision-making, struggling, searching, assessing sources, grappling with difficult texts, and learning to accept the discomfort of uncertainty.

AI tools remove problems. But some of the problems they remove are useful to have and to solve as a human. Students will find it really difficult to distinguish between the problems that are good to solve quickly with AI and the problems that are more valuable to solve themselves.

And society is entering a whirlwind of new possibilities regarding what is and is not legitimate knowledge or legitimate knowledge work. Students will, for some time, I think, be very confused about what’s okay, what’s worth doing, and what their individual role is within the nuanced ecosystem of consuming and producing knowledge. What’s my role in this education thing anyway? Where’s the intrinsic value of learning? What am I aiming for, and will that be the same in two years?

Students have already been trained by our system to think instrumentally or extrinsically about the outcomes of their work. Many focus on getting the essay done, achieving the passing grade, obtaining the piece of paper. It often seems more difficult for students to see the intrinsic value, the self-transformative value that comes from wrestling with difficult tasks and processes. AI tools will exacerbate this conflict. We may, for a time, lose any clarity at all around what’s exciting and what’s important in university. A kind of built-in nihilism of knowledge work.

Through a scanner, darkly

In the broader information environment, there will be a steep rise in deepfakes, scams, pranks, political mischief and spammy internet content. In the next couple of years it will become quite difficult to know what’s real and what’s not, especially in the absence of regulation.

Students, both in their private lives and in their university work, will find it increasingly difficult to know what to trust and what not to. Assessing and reassessing information could become a nearly constant activity. In this environment there’s a risk of exhaustion.

With all of these forces at work, then, it’s important to recognise that student energy, motivation, even identity could be at risk. Again: why am I doing this? What’s important in all this? What is my role?

So before we in higher education rush to integrate AI tools into the classroom, rush to have fun with ChatGPT and all the rest, and risk inadvertently accelerating or elevating these harms, can we resolve to proceed with caution?

Criticality and caution

Universities, at the level of governance, planning, and policy, can recommit to values and aims that support humane, people-centred cultures. This will mean a deep and serious reconsideration of budgets, allocation of resources, spaces, and teaching philosophy. Now more than ever it will be important to increase the ratio of teachers to students.

We must be crystal clear on what’s allowed. That needs to come from the university, but it can also be established in faculties and in individual courses and modules. What is the code of conduct? This will probably have to be reassessed frequently.

We should be crystal clear on what’s assessed. So, whereas a year ago we might have assessed clarity and correctness of language, for example, that no longer seems fair: it would benefit students using AI tools at the expense of those not using them.

We should also resist the urge to require the use of AI tools, and we should try not to advantage students who do use them. Students who cannot afford, or who do not wish, to use these tools should not be disadvantaged. It will be quite difficult to design assessments that do not create inequities of access, but this is the right aim to have.

Perhaps most importantly, we should teach criticality and caution around AI, and acknowledge the complexity of the present moment. Talk with students about the confusion, the ethics, and the potential harms.

And finally, we should reintroduce humanity and physicality into learning and teaching wherever possible. Devise ways to let students unplug and interact with the real world, with the physical environment and with each other. Design digital interactions to minimise clicks, tabs, constant surfing, and non-stop switching from platform to platform and app to app.

In other words, let’s leverage the embedded criticality in our higher education culture to promote a more humane world, even in the flood of new technologies that challenge equality, humanity, identity, knowledge, and the future of learning.

4 responses to “The real risk of generative AI is a crisis of knowledge”

  1. Probably the best thing I have read so far about AI in HE; an extremely valuable perspective. That an argument for refocussing on the human purposes of education feels like such a novelty is quite an indictment of the direction things have been taking for some time.

    1. As many colleagues in HE are, I am struggling to make sense of AI and how we need to move forward. I agree with Sarah: this is the best article I have read on the subject for a while – it eloquently echoes my discomfort around the numerous communications, webinars, etc in HE which seem to be advocating ‘embracing’ AI instead of pausing, remembering our values and offering students an alternative to the fast and often superficial world of technology – a space to think, explore, compare, create.
      I have often felt over the years that in education, some technology has been adopted without a great deal of thought. Of course, some tools do help and we all use them, but this recent development is in a different league and requires careful consideration.

  2. Surely universities should be providing AI tools to students, as they do other learning resources, to avoid issues of who has access and who does not.

  3. Thank you, Joshua.
    This really struck a chord with me (does that mean I like what you say because you write what I think I might have been already thinking….?). Anyhow – this is the bit that stood out to me: “AI tools remove problems. But some of the problems they remove are useful to have and to solve as a human.” I also like that you open with “I’m a little worried”. I’m a learning developer, too, and I’m often a little worried. I worry that learning development is often asked to simplify, boil down, make swallowable that which is hard, thorny, unpindownable….
    I see my role as revealing and celebrating complexity. It’s about making space and taking time. It’s about process as well as product. For me, learning is about being human and being authentic. Computer-generated and artificial worry me. Fast and simple worries me.
