Adopting AI across an institution is a pressing leadership challenge

Janice Kay and Rachel Maxwell set out all the elements of a whole-institution AI strategy and the leadership capabilities required to make it real

Janice Kay is Director at Higher Futures, and was formerly Deputy Chair of the Teaching Excellence Framework

Rachel Maxwell is Principal Advisor (Academic, Research and Consultancy) at Kortext

Artificial intelligence is already reshaping higher education, and fast. For universities aiming to be AI-first institutions, leadership, governance, staff development, and institutional culture are critical.

How institutions respond now will determine whether AI enhances learning or simply reinforces existing inequalities, inefficiencies and, frankly, bad practices. This is not only an institutional or sector question but a matter of national policy: the government has committed to supporting AI skills at scale, with an early ambition that a “fifth of the workforce will be supported with the AI skills they need to thrive in their jobs.” Strategic deployment of AI is therefore a pressing HE leadership question.

Whole-institution AI leadership and governance

Universities will benefit from articulating a clear AI-first vision that aligns with their educational, research and civic missions. Leadership plays a central role in ensuring AI adoption supports educational quality, innovation and equity rather than being driven purely by operational efficiency or competitiveness. Cultivating a culture where AI is viewed as a collaborative partner helps staff become innovators shaping AI integration rather than passive users (as the jargon frames it, “makers” not “takers”). Strategic plans and performance indicators should reflect commitments to ethical, responsible, and impactful AI deployment, signalling to staff and students that innovation and integrity go hand in hand.

Ethical and transparent leadership in AI-first institutions is vital. Decision-making, whether informed by student analytics like Kortext StREAM, enrolment forecasts, budgeting, or workforce planning, should model responsible AI use. The right governance structures need to be created. Far be it from us to suggest more committees, but ethics and academic quality boards should have oversight of AI deployment across the education function.

Clear frameworks for managing data privacy, intellectual property, and algorithmic bias are essential, particularly when working with third-party providers. Maintaining dialogue with accreditation and quality assurance bodies, including PSRBs and the OfS, ensures innovation aligns with regulatory expectations, avoiding clashes between ambition and oversight. This needs to happen at individual institution level, but also at sector and regulator level.

Capability and infrastructure development

Staff capability underpins any AI-first strategy, and it needs to be developed across the whole institution, not just among education-facing staff. Defining a framework of AI competencies will help to clarify the skills needed to use AI responsibly and effectively, and there are already frameworks, including from Jisc, QAA, and Skills England, that do this. Embedding these competencies into recruitment, induction, appraisal, promotion and workload frameworks can ensure that innovation is rewarded, not sidelined.

Demonstrating AI literacy and ethical awareness could become a requirement for course leadership or senior appointments. Adjusting workload models to account for experimentation, retraining, and curriculum redesign gives staff the space to explore AI responsibly. Continuous professional development – including AI learning pathways, ethics training, and peer learning communities – reinforces a culture of innovation while protecting academic quality.

Investment in AI-enabled infrastructure underpins an AI-first institution. We recognise the severe financial challenges faced by many institutions, which means that investments must be well targeted and implemented effectively. Secure data environments, analytics platforms, and licensed AI tools accessible to staff and students provide the foundation for innovation. Ethical procurement practices when partnering with edtech providers promote transparency, accessibility, and academic independence. Universities should also consider the benefits and risks of developing their own large language models alongside relying on external platforms, weighing factors such as cost, privacy, and institutional control. The partnership between Kortext, Said Business School, Microsoft and Instructure offers one example of an innovative new education partnership.

Culture and change management

Implementing AI responsibly requires trust. Leaders need to communicate openly about AI’s opportunities and limitations, and in particular address staff anxieties about displacement or loss of autonomy. Leadership development programmes for PVCs, deans, heads of school, and professional service directors can help them manage AI-driven transformation effectively.

One of the most important things to get right is cross-functional collaboration: IT, academic development, HR, and academic quality units must work together to support coherent progress toward an AI-first culture. Adopting iterative change management – using pilot programmes, consultation processes, and rapid feedback loops – allows institutions to refine AI strategies continuously, balancing innovation with oversight.

AI interventions benefit from rigorous quantitative and qualitative evaluation. Indicators such as efficiency, student outcomes, creativity, engagement, and inclusion can offer a balanced picture of impact. Regular review cycles ensure responsiveness to emerging AI capabilities and evolving educational priorities. Publishing internal (and external) reports on AI’s impact on education will be essential to promote transparency, share lessons learned and guide future development. It almost goes without saying that institutions should share practice (what has worked and what hasn’t) not only within their organisations, but also across the sector and with accrediting bodies and regulators.

An AI-first university places human judgment, ethics, and pedagogy at the centre of all technological innovation. AI should augment rather than replace the intellectual and creative capacities of educators and students. Every intervention should be assessed against these principles, ensuring technology serves learning rather than becoming the master of human agency or ethical standards.

Being an AI-first institution is certainly not about chasing the latest tools or superficially focusing on staff and student “AI literacy.” It is about embedding AI thoughtfully in every part of the university. Leaders need to articulate vision, model ethical behaviour, build staff capability and develop students’ ability to become the next generation of AI leaders. Staff and students need time, support and trust to experiment responsibly. Infrastructure and external partnerships must be strategic and principled. There must also be continuous evaluation to ensure that innovation aligns with strategy and values.

When implemented carefully, AI can become a collaborative partner in enhancing learning, facilitating creativity and reinforcing the academic mission rather than undermining it.

This article is published in association with Kortext. Join Janice and Rachel for Kortext LIVE on 11 February in London, on the theme of “Leading the next chapter of digital innovation” to continue the conversation on AI and data. Keynote speakers include Mark Bramwell, CDIO at Said Business School. Find out more and secure your spot here
