When I ask apprentices to reflect on their learning in professional discussions, I often hear a similar story:
“It wasn’t just about what I knew – it was how I connected it all. That’s when it clicked.”
That’s the value of dialogic assessment. It surfaces hidden knowledge, creates space for reflection, and validates professional judgement in ways that traditional essays often cannot.
Dialogic assessment shifts the emphasis from static products – the essay, the exam – to dynamic, real-time engagement. These assessments include structured discussions, viva-style conversations, and portfolio presentations. What unites them is their reliance on interaction, reflection, and responsiveness in the moment.
Unlike “oral exams” of old, these conversations require learners to explain reasoning, apply knowledge, and reflect on lived experience. They capture the complex but authentic process of thinking – not just the polished outcome.
In Australia, “interactive orals” have been adopted at scale to promote integrity and authentic learning, with positive feedback from staff and students. Several UK universities have piloted viva-style alternatives to traditional coursework with similar results. What apprenticeships have long taken for granted is now being recognised more widely: dialogue is a powerful form of assessment.
Lessons from apprenticeships
In apprenticeships and work-based learning, dialogic assessment is not an add-on – it’s essential. Apprentices regularly take part in professional discussions (PDs) and portfolio presentations as part of both formative and end-point assessment.
What makes them so powerful? They are inclusive, as they allow different strengths to emerge. Written tasks may favour those fluent in academic conventions, while discussions reveal applied judgement and reflective thinking. They are authentic, in that they mirror real workplace activities such as interviews, stakeholder reviews, and project pitches. And they can be transformative – apprentices often describe PDs as moments when fragmented knowledge comes together through dialogue.
One apprentice told me:
“It wasn’t until I talked it through that I realised I knew more than I thought – I just couldn’t get it down on paper.”
For international students, dialogic assessment can also level the playing field by valuing applied reasoning over written fluency, reducing the barriers posed by rigid academic writing norms.
My doctoral research has shown that PDs not only assess knowledge but also co-create it. They push learners to prepare more deeply, reflect more critically, and engage more authentically. Tutors report richer opportunities for feedback in the process itself, while employers highlight their relevance to workplace practice.
And AI fits into this picture too. When ChatGPT and similar tools emerged in late 2022, many feared the end of traditional written assessment. Universities scrambled for answers – detection software, bans, or a return to the three-hour exam. The risk has been a slide towards high-surveillance, low-trust assessment cultures.
But dialogic assessment offers another path. Its strength is precisely that it asks students to do what AI cannot:
- reflect authentically, connecting insights to their own lived experience.
- reason in real time, responding to questions, defending ideas, and adapting on the spot.
- develop professional identity, practising the kind of reflective judgement expected in real workplaces.
Assessment futures
Scaling dialogic assessment isn’t without hurdles. Large cohorts and workload pressures can make universities hesitant. Online viva formats also raise equity issues for students without stable internet access or a quiet environment.
But these challenges can be mitigated: clear rubrics, tutor training, and reliable digital platforms make it possible to mainstream dialogic formats without compromising rigour or inclusivity. Apprenticeships show it can be done at scale – thousands of apprentices sit PDs every year.
Crucially, dialogic assessment also aligns neatly with regulatory frameworks. The Office for Students requires that assessments be valid, reliable, and representative of authentic learning. The QAA Quality Code emphasises inclusivity and support for learning. Dialogic formats tick all these boxes.
The AI panic has created a rare opportunity. Universities can either double down on outdated methods – or embrace formats that are more authentic, equitable, and future-oriented.
This doesn’t mean abandoning essays or projects altogether. But it could mean ensuring every programme includes at least one dialogic assessment – whether a viva, professional discussion, or reflective dialogue.
Apprenticeships have demonstrated that dialogic assessments are effective. They are rigorous, scalable, and trusted. Now is the time for the wider higher education sector to recognise their value – not as a niche alternative, but as a core element of assessment in the AI era.