Most of the higher education institutions awarded gold for the student experience element of their 2023 Teaching Excellence Framework (TEF) submissions mentioned peer review of teaching (PRT).
But a closer look at what they said leaves the reader with the strong impression that peer review schemes consume a great deal of time and effort with no discernible impact on teaching quality or student experience.
What TEF showed us
Forty of the sixty providers awarded gold for student experience mentioned PRT, and almost all of them (37) called it “observation.” This alone should give pause for thought: the first calls to move beyond observation towards a comprehensive process of peer review appeared in 2005 and received fresh impetus during the pandemic (see Mark Campbell’s timely Wonkhe comment from March 2021). But the TEF evidence is clear: the term and the concept not only persist, but appear to flourish.
It gets worse: only six institutions (just one in ten of the sector’s strongest submissions) said they measure engagement with PRT or its impact, and four of those six are further education (FE) colleges providing degree-level qualifications. Three submissions (one from FE) showed evidence of using PRT to address ongoing challenges (take a bow, Hartpury and Plymouth Marjon universities), and only five institutions (two of them FE) showed any kind of working relationship between PRT and their quality assurance processes.
Scholarship shows that thoughtfully implemented peer review of teaching can benefit both the reviewer and the reviewed but that it needs regular evaluation and must adapt to changing contexts to stay relevant. Sadly, only eleven TEF submissions reported that their respective PRT schemes have adapted to changing contexts via steps such as incorporating the student voice (London Met), developing new criteria based on emerging best practice in areas such as inclusion (Hartpury again), or a wholesale revision of their scheme (St Mary’s Twickenham).
The conclusion must be that providers spend a great deal of time and effort (and therefore money) on PRT without being able to explain why they do it, show what value they get from it, or even ponder its ongoing relevance. And when we consider that many universities have PRT schemes but didn’t mention them, the scale of expenditure on this activity is likely to be larger than the TEF suggests, and the situation even worse than we think.
Why does this matter?
This isn’t just about getting a better return on time and effort; it’s about why providers do peer review of teaching at all, because no-one is actually required to do it. The OfS conditions of registration require only that higher education institutions “provide evidence that all staff have opportunities to engage in reflection and evaluation of their learning, teaching, and assessment practice”.
Different activities can meet the OfS stipulation: team teaching, formal observations for AdvanceHE Fellowship, teaching network discussions, and microteaching within professional development settings. Though not always formally categorised in institutional documentation, these nevertheless form part of the ecosystem within which people seek or engage with review from peers, and they represent forms of peer-review-adjacent practice that many TEF submissions discussed at greater length, and with more confidence, than PRT itself.
So higher education institutions invest time and effort in PRT but fail to explain their reasons for doing so or the benefits it brings, and appear to derive greater value from alternative activities that satisfy the OfS. Yet PRT persists. Why?
What brought us to this point?
Many providers will find that their PRT schemes were started or incorporated into their institutional policies around the millennium. Research from Crutchley and colleagues identified Brenda Smith’s HEFCE-funded project at Nottingham Trent in the late 1990s as a pioneering moment in establishing PRT as part of the UK landscape, following earlier developments in Australia and the US. Research into PRT gathered pace in the early 2000s and reached a (modest) peak in around 2005, and then tailed off.
PRT is the Bovril of the education cupboard. We’re pretty sure it does some good, though no one is quite sure how, and we don’t have time to look it up. We use it maybe once a year and are comforted by its presence, even though its best before date predates the first smartphones, and its nutritional value is now less than the label that bears its name. The prospect of throwing it out induces an existential angst – “am I a proper cook without it?” – and yes of course we’d like to try new recipes but who has the time to do that?
Australia shows what is possible
There is much to be learnt from looking beyond our own borders at how peer review has evolved in other countries. In Australia, the 2024 Universities Accord offered 47 recommendations as part of a federally funded vision for tertiary education reform through to 2050. The Accord was reviewed on Wonkhe in March 2024.
One of its recommendations advocates for the “increased, systematised use of peer review of teaching” to improve teaching quality, insisting this “should be underpinned by evidence of effective and efficient methodologies which focus on providing actionable feedback to teaching staff.” The Accord even suggested these processes could be used to validate existing national student satisfaction surveys.
Some higher education institutions, such as The University of Sydney, had already anticipated this direction, having revised their peer review processes with sector developments firmly in mind a few years ahead of the Accord’s formal recommendations. A Teaching@Sydney blog post from March 2023 describes how the process uses a pool of centrally trained and accredited expert reviewers, standardised documentation aligned with contemporary, evidence-based teaching principles, and cross-disciplinary matching that minimises conflicts of interest. It also integrates directly with continuing professional development pathways and fellowship programs, creating a sustainable ecosystem of teaching enhancement rather than a set of isolated activities – meaning the Bovril is always in use rather than mouldering behind the leftover jar of cranberry sauce from Christmas.
Lessons for the UK
Comparing Australia and the UK draws out two important points. First, Australia has taken the simple but important step of saying PRT has a role in realising an ambitious vision for HE. This has not happened in the UK. In 2017 an AdvanceHE report said that “the introduction and focus of the Teaching Excellence Framework may see a renewed focus on PRT” but clearly this has not come to pass.
In fact, the opposite is true: the majority of TEF Summary Statements were silent on the matter of PRT, and there seemed to be some inconsistency in judgments in those instances where the reviewers did say something. In the absence of any explanation it is hard to understand why they might commend the University of York’s use of peer observation on a PG Cert for new staff, yet judge the University of West London’s achievement of its self-imposed target – 100 per cent completion of teaching observations every two years for all permanent academic staff – to be “insufficient evidence of high-quality practice.”
Australia’s example sounds rather top-down, but it is sobering to realise that, if the TEF submissions are anything to go by, Australian institutions are probably achieving more impact at a lower cost in time and effort than their UK colleagues.
And Australia is clear-sighted about how PRT needs to be implemented for it to work effectively, and how it can be joined up with measures such as student satisfaction surveys that have emerged since PRT first appeared over thirty years ago. Higher education institutions such as Sydney have been making deliberate choices about how to do PRT and how to integrate it with other management, development and recognition processes – an approach that has informed and been validated by the Universities Accord’s subsequent recommendations.
Where now for PRT?
UK providers can follow Sydney’s example by integrating their PRT schemes with existing professional development pathways and criteria, and a few have already taken that step. The FE sector affords many examples of using different peer review methods, such as learning walks and coaching, in combination. University College London’s recent light refresh of its PRT scheme shows that management and staff alike welcome choice.
A greater ambition than introducing variety would be to improve reporting of programme design and develop validated tools to assess outcomes. This would require significant work and sponsorship from a body such as AdvanceHE, but would yield stronger evidence about PRT’s value for supporting teaching development, and underpin meaningful evaluation of practice.
This piece is based on collaborative work between University College London and the University of Sydney examining peer review of teaching processes across both institutions. It was contributed by Nick Grindle, Samantha Clarke, Jessica Frawley, and Eszter Kalman.