The choices universities and colleges make about AI are political

Ahead of this week's Digifest, Michael Webb and Rebecca Flook confront the complex value systems behind general-purpose AI technology

Michael Webb is Director of AI at Jisc

Rebecca Flook is Principal AI Specialist at Jisc

Let’s start with the uncomfortable truth. Generative AI is not neutral. It never was. And yet much of the public conversation still treats it as if it were simply an advanced calculator, a clever tool that sits above politics. It is not.

The systems now being woven into education are shaped by a remarkably small group of people. Not “the internet” as the source of training material. Not “society” influencing the way we use these tools. They are shaped by a small leadership class in a handful of companies, operating within specific political and economic pressures. Those pressures are translated into the systems themselves: through choices about what data is included in training, how the model is aligned and fine-tuned to prefer some behaviours over others, and how outputs are filtered and moderated in deployment.

If you use ChatGPT, you are using a system shaped by Sam Altman and OpenAI’s board. If you use Gemini, you are encountering decisions made inside Google under Demis Hassabis and Sundar Pichai. If you use Claude, it reflects choices made by Dario Amodei and Anthropic. If you experiment with Grok, you are stepping into Elon Musk’s highly public worldview. In China, models from Alibaba and Baidu reflect a very different political context. In Europe, Mistral represents an attempt at technological sovereignty.

This is not a “great man” story, even if many of the faces are still men. It is more accurately a story of leadership circles, investors, boards, regulators and governments. But it is human. Decisions are being made in rooms, under pressure, about what these systems should optimise for, what they should refuse to answer, what values they should reflect and what risks are acceptable. And those decisions matter.

Whose intelligence?

Publicly, many of these leaders say they are building something potentially dangerous: artificial general intelligence. Hassabis speaks about the need for international coordination and even suggests that a slightly slower pace of development might help society keep up. Amodei has repeatedly framed advanced AI as requiring strong safety commitments. Geoffrey Hinton and Yoshua Bengio, pioneers of the field, have warned of existential risks and argued that even reducing the probability of catastrophe would justify serious intervention.

At the same time, these companies operate inside competitive markets. As Hinton points out, they are legally bound to pursue growth and shareholder value. And they are caught in a geopolitical race, most visibly between the US and China. Regulation is often framed as a risk to national competitiveness. Do you regulate and risk falling behind, or deregulate and risk social harm?

When Google states that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” that is not a neutral claim. It is a political position. It assumes a particular understanding of freedom and governance, rooted in Western liberal traditions. In those traditions, freedom is often framed as freedom from state interference. In other parts of the world, freedom may be understood more collectively, as the ability to fulfil social roles or maintain social harmony, sometimes with stronger state coordination.

These differences are not abstract. They shape product design. They shape what content is moderated, what speech is prioritised, how safety is defined and how risk is distributed.

Alongside existential risk sits a more immediate concern: work. The International Monetary Fund has warned of large-scale labour disruption and rising inequality driven by generative AI. Leaders in the field acknowledge that AI will compress roles, automate tasks and potentially displace workers. Universal basic income is often floated as a partial answer, but income is not the same as dignity. For many people, identity and purpose are tied to contribution. The question is not simply how wealth is distributed, but what kind of society we are building.

Playing it out in education

Our students are not just learning with AI. They are learning through systems shaped by these people, these incentives and these geopolitical tensions. While some embrace the new capabilities, many others are arriving with deep scepticism, rightly questioning the environmental, social, and economic costs of these tools, as we’ve seen in our Student Perceptions of AI research. Increasingly, AI will act as tutor, feedback engine, research assistant and career adviser. It will mediate knowledge and opportunity.

If generative AI is shaped by power, politics and principle, then education cannot treat it as neutral infrastructure. It is not simply a tool that reflects human bias. It is a technology aligned to particular visions of the future.

This is not an argument for stepping away from AI in higher education. On the contrary, universities and colleges should be spaces where complexity is confronted, not avoided. But we owe students honesty. They need to understand not only how to use these systems, but who shapes them, whose interests they reflect and how they might be governed differently.

Because the question is not whether AI will influence education. It already does. The real question is whose vision of the world is being built into the systems our students increasingly rely on. And beyond that, the challenge for higher education is whether we will simply be consumers of these embedded visions, or whether we will assert our own values in the development of the next generation of intelligence. And if so, what would those values be? Criticality, plurality and a commitment to the public good, maybe?

Let’s not forget that the foundational research for these tools started in our universities. Our institutions are not just observers of this story. They are, in many ways, its origin point, and they continue to drive the story forward.

The authors will be speaking on the topic of AI governance at Jisc’s Digifest 2026 conference this week.

4 Comments
cim
1 month ago

A discussion of the politics of generative AI, with respect to the Higher Education sector, should surely also recognise:

1) The underlying industrial-scale plagiarism, IP theft, and disdain for crediting sources used to both construct and operate these tools. Universities should, I think, be more upset about that than they apparently are, given that a human academic operating to those standards should expect unceremonious dismissal.

2) The immense energy costs needed to run a glorified, unreliable chatbot this way, and the environmental impacts of all those new data centres and power plants in terms of energy consumption, water pollution and noise pollution – all of which surely contradicts any “net zero” or other environmental goals a university might have set.

Discussions of the hypothetical dangers of an “artificial general intelligence” (which these chatbots come nowhere near to providing, just like every other “AI” of the last fifty years) form a convenient distraction from the immediate dangers and costs of this technology.

(The proponents of asbestos had a genuinely useful product and were honestly unaware of the downsides, and the current generative AI companies don’t come anywhere near meeting that bar)

Jim
1 month ago

Thankfully, alternatives already exist. The work EDINA have done with the ELM platform shows that, with the right support and backing, a sector-built solution could be viable. It could also avoid what sometimes feels like an inevitable descent into deeper vendor lock-in with the Big Tech companies.

Rich
1 month ago
Reply to Jim

Hear hear – alternatives are possible

Chris
1 month ago

illustrated with AI slop artwork… not a good look