Rachel Maxwell is Director of Sector Engagement and Academic Research at Kortext

The jury is still out as to the scale of transformation AI technology will bring in its wake.

It’s indubitable – confirmed in the latest iteration of HEPI/Kortext research – that the vast majority of students are routinely using generative AI. The pace of technological change in AI is unprecedented – yesterday’s generic LLM slop is today’s sophisticated co-worker. And higher education institutions are under significant pressure to show, individually and collectively, that they have a grip on all this, even as the goalposts shift around them.

As has been widely documented on these pages and elsewhere, AI demands a dual response from institutions. All institutions have had to carry out some degree of rearguard action, even if only temporary, to manage the immediate implications for assessment, academic misconduct, and general risks of misuse of the technology.

But the second – and arguably more significant – task is to start to forecast, describe, and shape a future in which learning is routinely AI-enabled. Within that proposition a range of possible learning futures is embedded, some much more appealing than others. And while in pretty much every higher education institution someone – or a group of someones – has been tasked with leading the thinking, convening the debates and supporting the development of practice that can start to bring the best possible version of that future into focus, that job is far from straightforward.

Clarity isn’t available

During January 2026, as part of our Educating the AI Generation project, Wonkhe and Kortext hosted three private round tables, engaging senior leaders and colleagues from more than 25 institutions across the sector to discuss the institutional response to AI in learning and teaching, and the changing expectations of educators in particular. Our findings show that discourses focused on strengthening and implementing institutional policies may not take account of the way AI is challenging some of the fundamental architecture of learning and teaching management and institutional change.

At the heart of the challenge is a tension between staff’s desire for clarity from institutional leaders about taking a formal position on AI use, and the need for local interpretation of the risks, possibilities and applications of the technology. Participants talked about fostering “productive discomfort”, and a mindset of “curiosity over certainty.” As one senior leader put it: “We understood that the opportunity and risk was going to be different between disciplines, between programmes, even modules. Yet at the same time, we had people saying, well, why isn’t the university producing a policy? Why isn’t it telling us what to do?” Behind those demands, they noted, lay “a lot of worry, fear about what it meant for the future of teaching and assessment and higher education. And also their own concern that they weren’t digitally or AI literate enough.”

Confident exploration of the implications of AI requires a level of psychological safety, as one participant noted: “No one feels able to say, ‘I’m really scared of this, or I don’t understand it, or I’m not interested, or, help. Or, oh, this is amazing and I love it’… I think I’ve spent a lot of time acknowledging and saying to people, ‘yes, I understand this is difficult as a human. This is fundamentally changing, not just how you work, how you live, how you exist, and how society is as well.’”

Leaders are being called on to offer clarity and certainty when there is still a great deal that is unknown. Sessions discussed the various emerging possibilities for how in the coming years the external AI landscape will evolve further, with applications and uses in primary and secondary education; changing expectations of AI competence among graduate employers; and the ways the AI market in general will likely coalesce around particular platforms, applications and use cases.

Institutions may make informed guesses in some of these areas and may have some not inconsiderable power to shape how the future evolves, but it remains a high-stakes – and potentially high-cost – gamble, with clear risks to equity between students and between institutions depending on their access to technology. One leader at a smaller institution described their sense that the sector is gearing up for an AI “arms race” – a metaphor that captures both the reality that different institutions are starting from wildly different places and positions of power, and the distinct prospect of nobody winning as everybody rushes to adopt the technology of the moment whether or not its value has been proven.

There remains a knowledge gap between the sense that things will change and the exact nature of that change, illustrated by one comment about employer perspectives: “I asked CEOs at an employer engagement event who thought students needed AI skills and they all raised their hands. When I asked what they meant by skills there was a lot of looking at the floor and embarrassed coughing.” Another pointed out that many industries are also experiencing a level of whiplash with AI: “it does seem to be that they’re still trying to understand the landscape themselves. So it’s not that they’re always able to articulate to us exactly how they want students to be using it, because it’s such a moving picture for them as well.”

Given that is the case, it is arguably surprising that institutions have to a large degree been expected to work this out for themselves, on an individual basis. As one leader commented: “This is a massive decision for every institution, because whatever is decided will determine the flavour and shape of what student experience is, what staff opportunities are, what the nature of AI literacy and student AI literacy is all about.” The traditional approaches to making policy and developing strategy may not be sufficient – or sufficiently speedy – when the landscape is so uncertain.

A mixed response

Yet despite all the challenges, and the need for human sensitivity and compassion, it does seem reasonable and necessary to address the question of what can be expected of educators in this emerging learning landscape. And it is hard to avoid the sense, arising from our discussions, that the established mechanisms to develop and support effective learning and teaching may need to scale up to achieve the kind of positive human-centred learning experience that the arrival of AI both demands and, deployed appropriately, can help facilitate.

It’s clear that there is a large volume of work going on across the sector to respond to AI. Institutional leaders, together in many cases with designated institutional leads for AI, are convening and supporting debate, strategic discussion, and concrete development and sharing of practice and use-cases for AI. Most institutions are thinking about “AI literacy” and defining the moments AI should show up positively in learning, teaching and assessment, as well as the moments where it should be formally excluded, through formal student journey mapping, curriculum review and assessment reform.

When engaging any community, there are obviously going to be some early adopters and enthusiasts who will be leading the charge in doing things differently, and others who need to be persuaded that the change is worth their time and effort. While the technology continues to evolve and there are many different tools available, those who are curious and willing to experiment are well positioned to find productive ways to deploy AI, allowing for some false starts and failures.

From a leadership perspective, supporting the principle of experimentation or giving “permission to experiment” is seen as critical. One senior leader articulated this sense that experimentation becomes a valued element of professional practice when the landscape is changing rapidly: “We can’t predict where we’re going to be in five years, and we couldn’t have predicted where we are now. All we can do is give people the tools to use them, and the confidence to really step out of comfort zones and push those boundaries, and also reward people who are doing that. We need to start to reward the risk takers.”

But free experimentation is not everyone’s preferred way to develop – we also heard that there is a need for clear examples of the application of AI tools in professional practice or, as one attendee put it, the “opportunity to expose colleagues to quite a focused or structured use case.” Another pointed out that these use-cases need to be developed within disciplines: “The problem is, people attend training; they attend these AI literacy workshops and everything, yet nobody tells them their own use case. For example, how would a physics professor know how to use AI for their physics work? Or a physics student know how to use it for physics work? The same goes for a marketing student, a finance student. So, we discussed that for this to work, it has to be discipline-level or faculty-level training that has to happen, either by champions or people who know that area really, really well.”

Another observed: “I find when I talk to staff I focus on their discipline – their passion – and how it might change and be enhanced with AI. This tends to give a way into them taking control of how they want to contextualise AI in their teaching and research.” There is a strong sense that whether arising from professional anxiety or a critical view of the technology, institutions are having to offer quite a lot of personalised support and encouragement for some educators to engage with the ways AI could change their practice.

While all this work and energy shows the sector could by no stretch of the imagination be accused of complacency in the face of transformation, there are also hints arising from our conversations that there are cultural challenges of the kind that will feel very familiar to anyone tasked with implementing institution-wide change. While every institution could probably point to multiple innovative projects and agendas around AI, there is no consensus within the academic community about the scale of the transformation AI is likely to bring or what the corresponding responsibilities of educators might be.

One leader captured this tension: “What worries me is that we have such diverse views among staff: staff who absolutely embrace AI and just say, ‘yes, we’ll get on, we’re using it, we’ll do, you know, as much as we can with it’ and others who say, ‘no way, we’re not letting AI anywhere near anything in my module, because, it’s just taking over from what students need to learn.’”

Another said: “I think perhaps we’ve been in a situation where we can say, well, I know my discipline, I’m an expert, I can understand, perhaps, how the job market is evolving, and I engage with employers and so on. And AI seems to be putting a cat among the pigeons, where there’s lots of staff who are saying that this is really like a generational divide like nothing else, so we don’t know what to expect. How can we prepare our students for that? And so I feel there’s a big culture change piece there.”

A steep learning curve

What we heard across our conversations was a view that AI is forcing, in potentially quite a positive way, a confrontation with the core value of higher education. As one leader said, “we have to think about what is the purpose of a university, and I think that then it will have to be about those human skills. We’re bringing people together, we’re inducting them into their disciplines, in person, in an active way.” Another specifically expressed enthusiasm for adopting “much more authentic, iterative, process-led assessments that are much less about just producing one final output or artefact at the end of a piece of work, trying to actually capture the student’s developmental journey they go on during their degree programme.”

When asked directly what they will expect to see from educators over the next few years as a response to AI, leaders articulated a sense of a systematic and critical engagement across all their institution’s programmes. For example: “[I would want staff to have] contextualised it, they’ve understood and they’ve considered how it works within their teaching, so that’s their learning outcomes, what they’re doing in the classroom, what they’re including in the content, and how that relates to the assessment, but also how that relates to their wider discipline. And also to employability of students taking their discipline out into the world.”

Another said: “I would say that I would want everyone in my institution to design every assessment in a way that makes conscious and up-to-date choices about AI. Which could be designing and could be designing out, but it needs to be considered and balanced…I would also want everyone to have really considered their learning outcomes in view of the modern world.”

There’s no sense here that the leaders we engaged with plan to enforce AI adoption in particular ways, but there is a clear expectation that educators acknowledge and muster a professional response to the ways that AI is reshaping their disciplines, the expectations of students, and the way they teach and assess. Apart from any wider consideration of quality or educative purpose, it is simply not sustainable for students to have widely disparate experiences of AI-enabled learning, and receive different instructions on what is permissible depending on their module or programme.

Nobody wants the sector to buy credulously into what one attendee dubbed “tech bro hype” or to adopt an uncritical or ill-informed stance in deploying AI technology in teaching and learning. We were struck in the discussions by how institutions seem to be bringing the various tools of higher education to bear on the AI challenge: open debate; experimentation; structured research and development projects; and partnership and co-creation of response with students.

Yet we can’t help but observe that the hoped-for systematic engagement articulated by the leaders we spoke to sits somewhat at odds with the current environment for learning and teaching development in which there are multiple opportunities to engage but often little by way of formal obligation. And if the scale of transformation is as great as some predict, then a lack of deep critical engagement in any quarter potentially becomes a significant issue if higher education institutions find themselves less able to adapt as a result.

Leaders are mindful of the risks, as one put it: “You can’t just say we want to do what we’ve always done, and somehow plug AI in – that’s not going to work this time around.” Another added: “In other industries and in the commercial sector, there wouldn’t be an option not to grow, and not to transform, and not to move forward with the technology available, and they would be working to the best that they could afford.”

While AI remains an emerging technology and the various possible learning futures remain hypothetical, it is possible, and even desirable, to debate, experiment and explore with a coalition of the willing within institutions. But in doing so, higher education leaders are grappling with the possibility that if AI is going to drive large-scale change, the response in terms of educator development may need to be more comprehensive than has traditionally been the case.

This article is published as part of a partnership with Kortext. On Thursday 30 April Wonkhe and Kortext will co-host an online event exploring the themes arising from our Educating the AI generation project – you can find out more and sign up for your free place here.
