
To improve quality assessment we should bring back subject review

Subject review is now historic enough to have passed into sector mythology. David Kernohan calls for its unlikely return as a way out of the current quality impasse.

David Kernohan is Deputy Editor of Wonkhe

It’s not exactly a slogan to bring people to the barricades. Nobody will ever march on Westminster to call for a return to the world of baserooms, inspection visits and FDTL. But we’re faced with a peculiar set of problems to do with quality (and the perception of quality) in higher education – and something very like England’s approach to quality assessment around the turn of the millennium ticks a surprisingly large number of boxes.

Quality wars redux

I’m glossing a lot of history that I’ve covered elsewhere here, but it was a 1991 White Paper (“Higher Education: a New Framework”) that set out an expectation that both performance indicators (though such measures “cannot by themselves provide a comprehensive view of quality”) and external judgements “based on direct observation of what is provided” should play a part in understanding what actually happens in HE providers and whether or not it is any good.

HEFCE’s Teaching Quality Assessments drew primarily on the approach adopted by Her Majesty’s Inspectorate (kind of an Ofsted predecessor) for polytechnics with degrees awarded by the Council for National Academic Awards, rather than the academic audit approach (based on a Quality Code forerunner) that universities themselves were attempting to establish.

Both approaches continued initially – the early years of the new system saw HEFCE’s early inspections of samples from 15 subject level “Units of Assessment” between 1993 and 1995, and numerous Academic Audit visits at provider level (conducted by the Higher Education Quality Council, and previously by a predecessor body spawned out of Universities UK’s forerunner).

When the Quality Assurance Agency was established in 1997 after the Dearing review, it set out to complete the funding council’s round of subject reviews and also to integrate the parallel system of examining institutional processes. QAA Subject Review was originally a measure to reduce bureaucracy while expanding coverage, although it does not now have that reputation.

That bureaucracy in full

The move to a QAA-led approach made the previous sampling approach to subject level quality assurance universal – as the plans progressed, every subject area in every university would be inspected. Initially this would cover 16 units of assessment (ranging from Food Science to General Engineering to American Studies), with assessments made before the end of September 1998.

Providers would develop a self-assessment document setting out the aims and objectives (in 500 words) of their course offer within a subject area. This was followed by an assessment visit – conducted by a combination of subject specialist assessors (usually two, drawn primarily but not entirely from academic staff in other providers) and contract assessors (who would have responsibility for the management of the visit). Activities carried out would include the following:

  • Observation of the various forms of teaching and learning being carried out during the assessment visit (including direct observation of classroom, seminar, workshop and laboratory situations as appropriate).
  • Meetings with current students, and with teaching and non-teaching staff.
  • Meetings with graduates, diplomates and employers, where appropriate.
  • Scrutiny of institutional and course documents, reviews, and reports.
  • Scrutiny of examination scripts, courseware, projects and dissertations.
  • Examination of the student learning resources.
  • Examination of the academic and pastoral support for students.

In later years, quality assurance developed a reputation for focusing on process (how a provider itself managed quality) rather than the provision itself. But the subject review process was arguably the last time that anyone looked in expert detail at the actual quality of teaching in a department. It was an intense experience, and one that institutional managers actively disliked. But they disliked it even more when a revised (“Academic Review”) approach rolled in the process aspects formerly covered by academic audit – a move that led inexorably to the four UK nations taking independent approaches.

In England the last vestiges of Subject Review vanished in 2005, when the QAA lost the ability to drill down into how processes were implemented in subject areas of interest (the so-called “discipline audit trail”), and a typically fractious series of complaints led to a 2008-09 parliamentary review of HE quality assurance in England that put further controls on the remit of the agency.

Best forgotten?

Subject Review was the last time higher education regulators made any serious attempt to assess the quality of teaching provision in higher education. If that seems bluntly put, it is – deliberately so. Everything since has focused on either processes used by providers themselves to assure quality, or on metrics related to the outputs (employment, student satisfaction, continuation).

Interestingly, Dearing felt that subject review style approaches didn’t quite get there either:

“[W]e believe that it is exceedingly difficult for the TQA process to review the quality of learning and teaching itself, rather than proxies for learning and teaching, such as available resources or lecture presentation” (10.68)

This provided the impetus for a call for common terminal standards, arguably setting us on the road to the now dominant “black box” model of higher education where we examine what comes out and what goes in and pay no attention to what happens in the middle.

The Office for Students has committed itself repeatedly to an indicator-driven approach that makes input and output data almost the entirety of regulation. As much as I love arguing about data and what it can tell us, there are big gaps where data can never offer us anything of value.

The limits of regulation

The Office for Students’ “principles-based” approach to regulation has – for all the noise made to the contrary – reached the end of the road. It is becoming a “baseline” regulator, which will only concern itself with instances where indicators have fallen below an absolute (not even a contextual!) threshold. The chosen indicators are clearly influenced by student demographics and subject portfolios, and the impact of this on things like grade inflation measures (where regulatory powers are newly sought) is already becoming clear.

For institutional leaders the question posed is a simple one – is having a regulator setting out what degrees you can award and which students you should be admitting better or worse than a series of quinquennial teaching observations by expert reviewers?

The burden argument will be raised in response – a 2000 audit of the impact of the then-current arrangements by JM Consulting included explicit images of a bank of box files, and tales of skips being filled and photocopiers being broken by the demands of reviewers for more and more paper. Costs per review to providers were cited as being in the tens of thousands of pounds, focused primarily on staff time.

Technology (we wouldn’t even need physical visits these days, much less baserooms) and an increasing standardisation of documentation and process based on statutory and registration requirements mean that both the physical and staff resources required would be lower now. The principle of expert academic review of teaching is long established in internal peer observation exercises and the venerable external examination system – and a more systematic use of student feedback makes provision more responsive to student needs.

Subject review also marks the only time quality assurance has directly driven funding allocations. To support providers in addressing review feedback, HEFCE funded a series of projects – the Fund for the Development of Teaching and Learning (FDTL). Rather than a prize for good performance, these projects were targeted to raise standards of teaching and assessment by developing and sharing good practice. While I would admit I am congenitally predisposed to empowering academic and support staff to reflect on and improve their own work, the idea of sharing practice jarred helpfully with a sector already on the road to competition at all costs.

Where now?

The Covid-19 pandemic laid bare the weaknesses of our current approach to teaching quality.

  • We had no way of knowing anything about the quality of what students were doing online during the pandemic.
  • Our indicators, calibrated to “normal” times, were not reliable measures of quality during the pandemic (if they ever were) and will not be – even under their own terms – for many years afterwards.
  • We are still only able to speculate whether academic standards and practices are being maintained – unhelpful when Vote Leave is still laying into the sector four years on.
  • We had no direct route into specific subject provision – only the (largely unfunded) links maintained by the QAA with professional bodies meant we were able to understand and support the delivery of professional requirements in accredited courses.
  • The currently available regulator commentary (the register) offers no useful information about quality to prospective students, and the student facing information offer (Discover Uni) offers input and output measures of varying levels of applicability and fidelity – applicants themselves tell us that actually talking to people is preferable to either.

Had we a technologically mediated system of actually observing student learning and the development and delivery of teaching, we would have none of these problems. Peer review of research is seen as a gold standard; peer review of teaching isn’t quite there, but it is far from unusual and – pitched right – could very much be a developmental opportunity rather than another stick to beat academics with. Ideally it would not be observations or inspection that would “fail” a subject in a provider, but there needs to be the facility for regulatory action if recommendations are not implemented.

This would be a 180-degree reversal of the direction of recent regulatory practice. But it solves (at a price, though probably not as high as we might think) nearly all current regulatory problems. And it would once again put teaching quality, as recognised by expert teachers, at the heart of regulation.

2 responses to “To improve quality assessment we should bring back subject review”

  1. If I were still teaching, I would welcome a return to Subject Based Reviews (SBRs). I took part in two externally moderated subject reviews (and many internal equivalents) and found them useful in unearthing poor practice on my programme.

    Yes, they were labour intensive, but this was because universities never really planned for these periodic inspections. Put simply, they didn’t collect the evidence as they went along, and didn’t reflect on it in a meaningful way. The whole review process was not hard wired into course management or staff contracts; as a result, the ‘visits’ were viewed as disruptive and usually resented.

    The additional problem was the long gap between SBRs (five or more years), which often meant programme teams ‘took their foot off the pedal’ in terms of quality (or value) management. In this regard the near annual collection of TEF data does (did) focus attention (albeit on only a few variables) and has probably enabled some programmes to identify potential problems early enough to avoid a disastrous Subject Review in the future.

    Subject Reviews could potentially enable better quality assurance of the UK’s international provision, something that is currently not really done with any rigour.

  2. Very good article. I oversaw the administration and support for about 20 different Subject Reviews in the 1990s, and analysed and compared the results and data on them all, attending many useful meetings and conferences to discuss what they showed, internally and externally.

    I believe it was undoubtedly the best process we have yet had for HE quality assurance and improvement, of the many we have had, and one that actually benefitted students through genuine developments in the delivery of programmes, with a strong focus on teaching, learning and assessment rather than research cultures.

    The Vice Chancellors were far too quick to try and put the knife in and demonstrated their lack of leadership in forcing its abandonment in my view.

    Of course there were problems, like the Languages debacle: those universities that opted for a separate review for each single language saw some rather unfairly inflated outcomes, caused by tiny groups of specialists getting their pals in the language ‘chumocracy’ (to use what should be one of the words of the year) to give them maximum marks in return for reciprocating the favour when they visited them. A small element of overall moderation would not have gone amiss. The bureaucracy David Allen describes was certainly real, but there were many hidden and sometimes unexpected benefits of collating data at the level of the teaching department/subject area selected that the gross aggregations of the likes of TEF cannot remotely match.

    On balance it was a good system, well founded and well executed.
