How AI could undermine diversity in the curriculum

The biases inherent in artificial intelligence are well known – and could be coming to a classroom near you. Sam Illingworth tackles the dark side of ChatGPT

Sam Illingworth is an Associate Professor in the Department of Learning and Teaching Enhancement at Edinburgh Napier University

The commentary we’ve seen since the end of 2022 about the dangers that ChatGPT and other chatbots pose to higher education assessment does a great disservice to our students, overlooking the opportunity for positive change and growth when it comes to assessment and critical thinking.

But it also misses a far more serious potential challenge to higher education in the UK – ChatGPT threatens to undermine the steps that have been taken in recent years to diversify the curriculum.

The curriculum is an essential tool in shaping the knowledge and values of our students. It determines what students learn and how they learn it. In the UK and other Western countries, the curriculum has historically been dominated by a Western perspective, which contributes to the marginalisation of other perspectives and voices.

Skew whiff

A concerted effort has been made in recent years to diversify the curriculum and bring alternative perspectives to the forefront. This has included the introduction of topics such as postcolonial literature, critical race theory, and feminist theory. Many programmes and modules now have much more diverse reading lists, whilst others use approaches such as critical science agency and place-based education to contextualise how knowledge is created, claimed, and taught.

Despite all these efforts, we still have not done enough to challenge the Western-centric view that pervades our educational institutions, and there is still much work to be done. But if we don’t take a moment to seriously consider the potential impact of ChatGPT on higher education, things may only get worse.

Given that ChatGPT and similar technologies reflect the values of their designers – and often assert those values in their programmed interactions – they are likely to draw on sources that reflect the dominant Western perspective, whether by accident or design.

This means that students who use ChatGPT as a source of information are likely to receive a skewed view of the world, one that reinforces the Western-centric perspective. By perpetuating this perspective, such tools are likely to marginalise other knowledges and voices.

Critical tech

To address this, we need to start talking about the problem. Rather than either hoping that ChatGPT will go away (it won’t) or worrying that all our students will use it to undermine our carefully constructed assessments (they won’t), we need to engage our students in a dialogue about the potential biases inherent in language models like ChatGPT and the importance of critically evaluating the information generated by these models.

Let’s make sure they’re not blindly relying on this digital doppelganger for their coursework, and instead provide them with the skills they need to critically evaluate the information it generates, just like we do with any other source of information.

Universities can support research that explores the potential impacts of artificial intelligence tools on the curriculum and the ways in which they can be used in an ethical and responsible manner. This research can help to inform best practices for the use of such tools in educational settings and prepare us for when weak AI eventually gives way to its stronger relation (if this resonates with any funding bodies or tech philanthropists who want to fund this kind of research, then feel free to slide into my DMs).

Similarly, universities can collaborate with technology companies like OpenAI to help ensure that language models such as ChatGPT are developed and used in a manner that supports, rather than undermines, efforts to diversify the curriculum. This can include providing feedback on the data sets used to train these models and working to ensure that they reflect a diverse range of perspectives and voices.

Walking cure

Finally, let’s not just talk the talk, let’s walk the walk. Let’s incorporate subjects that challenge the Western-centric view of the world and encourage other perspectives in our curricula. Let’s monitor and evaluate the use of language models in our classrooms and make sure they’re not perpetuating Western hegemony but rather challenging, contextualising, and rewriting it.

Rather than ignoring or fearing the impact of AI in education, we need to engage in meaningful dialogue with our students and equip them with the critical thinking skills to evaluate the information generated by these models. Through research, collaboration, and action, we can help ensure that the future of education remains inclusive, diverse, and free from Western-centric biases.

Let’s rise to the challenge and make sure that we continue to shape the minds of our students in a way that reflects the rich tapestry of voices and perspectives that make up our world, rather than those of an empty white text box waiting for a prompt.
