One of the significant themes in higher education over the last couple of decades has been employability – preparing students for the world of work into which they will be released on graduation.
And one of the key contemporary issues for the sector is the attempt to come to grips with the changes to education in an AI-(dis)empowered world.
The next focus, I would argue, will involve a combination of the two – are universities (and regulators) ready to prepare students for the AI-equipped workplaces in which they will be working?
The robotics of law
Large, international law firms have been using AI alongside humans for some time, with examples including the drafting of non-disclosure agreements and contracts.
In April 2025, the Solicitors Regulation Authority authorised Garfield Law, a small firm specialising in small-claims debt recovery. This was remarkable only in that Garfield Law is the first law firm in the world to deliver services entirely through artificial intelligence.
Though small and specialised, the approval of Garfield Law was a significant milestone – and a moment of reckoning – for both the legal profession and legal education. If a law firm can be a law firm without humans, what is the future for legal education?
Indeed, I would argue that the HE sector as a whole is largely unprepared for a near-future in which the efficient application of professional knowledge is no longer the sole purview of humans.
Professional subjects such as law, medicine, engineering and accountancy have tended to think of themselves as relatively “technology-proof” – domains where technology was broadly regarded as useful rather than a usurper. The Master of the Rolls, Sir Geoffrey Vos, said in March that AI tools
may be scary for lawyers, but they will not actually replace them, in my view at least… Persuading people to accept legal advice is a peculiarly human activity.
The success or otherwise of Garfield Law will show how the public react, and whether Vos is correct. But the vision of these subjects as high-skill, human-centric domains demanding empathy, judgement, ethics and reasoning is not the bastion it once was.
In the same speech, Vos also said that, in terms of using AI in dispute resolution, “I remember, even a year ago, I was frightened even to suggest such things, but now they are commonplace ideas”. Such is the pace at which AI is developing.
Generative AI tools can be, and are being, used in contract drafting, judgment summaries, case law identification, medical scanning, operations, market analysis, and a raft of other activities. Garfield Law represents a world view in which routine – and once billable – tasks performed by trainees and paralegals will most likely be automated. AI is challenging the traditional boundaries of what it means to be a professional and, in concert with this, challenging conceptions of what it is to teach, assess and accredit future professionals.
Feeling absorbed
Across the HE sector, the first reaction to the emergence of generative AI was largely (and predictably) defensive. Dire warnings to students (and colleagues) about “cheating” and using generative AI inappropriately were followed by hastily constructed policies and guidelines, and the unironic and ineffective deployment of AI-powered AI detectors.
The hole in the dyke duly plugged, the sector then set about wondering what to do next about this new threat. “Assessments” came the cry, “we must make them AI-proof. Back to the exam hall!”
Notwithstanding my personal pedagogic aversion to closed-book, memory-recall examinations, such a move was only ever going to be a stopgap. There is a deeper issue in learning and teaching: we focus on students’ absorption, recall and application of information – which, to be frank, is instantly available via AI. Admittedly, it has been instantly available since the arrival of the Internet, but we’ve largely been pretending otherwise for three decades.
A significant amount of traditional legal education focuses on black-letter law, case law, analysis and doctrinal reasoning. There are AI tools which can already do this and provide “reasonably accurate legal advice” (Vos again), so the question arises: what is our end goal in preparing students? The answer, surely, is skills – critical judgement, contextual understanding, creative problem solving and ethical reasoning – areas where (for the moment, at least) AI still struggles.
Fit for purpose
And yet, and yet. In professional courses like law, we still very often design curricula around subject knowledge, and then try to “embed” the skills elements afterwards. We too often resort to tried-and-tested assessments which reward memory (closed-book exams), formulaic answers (problem questions) and performance under time pressure (time-constrained assessments). These are the very areas in which AI performs well, and in which it is increasingly able to match, or outperform, humans.
At the heart of educating students to enter professional jobs there is an inherent conflict. On the one hand, we are preparing students for careers which either do not yet exist or may be fundamentally changed – or displaced – by AI. On the other, regulatory bodies are often still locked into twentieth-century assumptions about demonstrating competence.
Take the Solicitors Qualifying Examination (SQE), for example. Relatively recently introduced, the SQE was intended to bring consistency and accessibility to the legal profession. The assessment is nonetheless still based on multiple-choice questions and unseen problem questions – areas where AI can outperform many students. There are already tools out there to help SQE students practise (Chat SQE, Kinnu Law), though no AI tool has yet completed the SQE itself. But in the USA, GPT-4 passed the Uniform Bar Exam in 2023, outperforming some human candidates.
If a chatbot can ace your professional qualifying exam, is that exam fit for purpose? The same question arises in other disciplines. Should medical students be assessed on their recall of rare diseases? Should business students be tested on their SWOT analyses? Should accounting students analyse corporate accounts? Should engineers calculate stress tolerances manually? All of these tasks can be completed by AI.
Moonshots
Regulatory bodies, universities and employers need to come together more than ever to engage seriously with what AI competency might look like – both in the workplace and the lecture theatre. Taking the approach of some regulators and insisting on in-person exams – to prepare students for an industry entirely lacking in exams – is probably not it. What does it mean to be an ethical, educated and adaptable professional in the age of AI?
The HE sector urgently needs to move beyond discussions about whether or not students should be allowed to use AI. It is here, it is getting more powerful, and it is never leaving. Instead, we need to focus on how we assess in a world where AI is always on tap. If we cannot tell the difference between AI-generated work and student-generated work (and increasingly we cannot), then we need to shift our focus towards the process of learning rather than the outputs. Many institutions have made strides in this direction, using reflective journals, project-based learning and assessments which reward students for their ability to question, think, explain and justify their answers.
This is likely to mean increased emphasis on live assessments – advocacy, negotiations, client interviews or real-world clinical experience. In other disciplines, it could mean simulations, inter- and multi-disciplinary challenges, or industry-related authentic assessments. These are nothing revolutionary; they are pedagogically sound and have all been successfully implemented. They do, however, demand more of us as academics: more time, more support, more creativity. Scaling up from smaller modules to large cohorts is not an easy feat. It is much easier to keep doubling down on what we already do and hiding behind regulatory frameworks. However, we need to do these things (to quote JFK)
not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone.
In law schools, how many of us teach students how to use legal technology, how to understand algorithmic biases, or how to critically assess AI-generated legal advice? How many business schools teach students how to work alongside AI? How many medical schools give students the opportunity to learn how to critically interpret AI-generated diagnostics? The concept of “digital professionalism” – the ability to effectively and ethically use AI in a professional setting – is becoming a core graduate-level skill.
If universities fail to take the lead on this, then private providers will be eager, and quick, to fill the void. We already have short courses, boot camps, and employer-led schemes which offer industry-tailored AI literacy programmes – and if universities start to look outdated and slow to adapt, students will vote with their feet.
Invention and reinvention
However, AI is not necessarily the enemy. Like all technological advances, it is essentially nothing more than a tool. As with all tools – the stone axe, the printing press, the internet – it brings with it threats to some and opportunities for others. We have identified some of the threats, but also the opportunities that, with proper use, AI can bring – enhanced learning, deeper engagement, and the democratisation of access to knowledge. As with the printing press, the real threat faced by HE is not the tool itself but a failure to adapt to it. Nonetheless, a surprising number of academics are dusting off their metaphorical sabots to try to stop the development of AI.
We should be working with the relevant sector and regulator, asking ourselves how we can adapt our courses and use AI to support, rather than substitute for, genuine learning. We have an opportunity to teach students how to move away from being consumers of AI outputs and how to become critical users, questioners and collaborators. We need to stop being reactive to AI – after all, it is developing faster than we can ever respond.
Instead, we need to move towards reinvention. This could mean: embedding AI literacy in all disciplines; refocusing assessments to require more creative, empathetic, adaptable and ethical skills; preparing students and staff to work alongside AI, not to fear it; and closer collaboration with professional regulators.
AI is being used in many professions, and its use will inevitably grow significantly over the next few years. Educators, regulators and employers need to work even more closely together to prepare students for this new world. Garfield Law is (currently) a one-off, and while it might be tempting to dismiss the development as tokenistic gimmickry, it is more than that.
Professional courses are standing at the top of a diving board. We can choose obsolescence and climb back down, clinging to outdated practices and condemning ourselves to irrelevance. Or we can choose opportunity and dive into a more dynamic, responsive and human vision of professional learning.
We just have to be brave enough to take the plunge.