A friend got in touch recently to talk about a video exposé he’d seen about university league-table rankings.
Admittedly, this tells you something about my friends, and the kinds of conversations we have, but nevertheless, the video had been eyebrow-raising for him. It was his final question that got me though:
so are there any good metrics?
From my PhD on data and education in a school context, I learnt that metrics in most systems are gameable, that avoiding perverse incentives is hard, but that metrics can nonetheless provide more-or-less good proxies for the things they purport to tell you about.
Still, it brought me back to something I’ve been pondering this term: the staff-student ratio, and whether it could do with an overhaul.
Staff-student ratios (SSRs) are typically seen as an “input” which supports academic quality, in contrast to other league table metrics which focus on outcomes. They are typically offered at both a university and a subject level.
SSRs tend to be calculated as the full-time equivalent (FTE) number of academic staff who are available to teach relative to the FTE number of students taught. Staff on research-only contracts are not included. A lower ratio is assumed to be better, in that a student might reasonably expect more contact.
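As a minimal sketch of that calculation (using hypothetical figures, not drawn from any real institution):

```python
# Hypothetical figures for illustration only.
student_fte = 1800.0        # FTE number of students taught
teaching_staff_fte = 120.0  # FTE academic staff available to teach
                            # (staff on research-only contracts excluded)

# SSR expressed as students per FTE member of staff; lower is assumed better.
ssr = student_fte / teaching_staff_fte
print(f"SSR: {ssr:.1f} students per FTE staff member")  # 15.0
```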
And while 69 per cent of students in one 2018 survey rated these ratios as being a helpful factor in assessing whether a university provides value for money, most other factors available to select in the survey scored higher. Despite questions about credibility and the lack of correlation with other measures of teaching quality, they remain.
The picture gets more complicated when we understand the data sources used, which, for example, need to map HESA cost centres to subjects. Add to this that staff bought out for research, but who are not on “research-only contracts”, are still included in the ratios, and that some may only teach at Masters level or supervise PhD students. There have also been long-held concerns about how SSRs take into account casual staff.
Wonkhe’s David Kernohan below uses custom HESA data, presented here for the first time, to plot Student FTE against Teaching Staff FTE (rather than Academic Staff FTE). Hover over a bubble and you can see the difference between the Academic Staff-to-student ratio and the Teaching Staff-to-student ratio.
In some cases, these represent significant differences to the published SSRs.
And, with the pandemic showing up in short and long experiences of Covid-19, in interaction with ongoing mental ill health, and in care for children and elders, many staff have taken leave, reducing the numbers actually available to teach. Some optional modules can’t run. Tutorials and marking need covering. These experiences don’t show up in SSRs.
In their defence
At the risk of stating the obvious, staff-student ratios do still tell us something about the numbers of staff who might be available to teach, and they are reasonably comparable between courses, though likely to the detriment of teaching-intensive universities. It’s also plausible that a lower ratio might indicate more resilience or ‘flex’, which is important in periods of additional pressure – such as admitting and educating students during Covid-19, especially for institutions that welcomed unusually large cohorts. There’s also a suggestion that higher SSRs are associated with an institution’s financial resources and greater module choice. And for some courses, accreditation requirements include maximum ratios.
Overhauling the staff-student ratio
I’m proposing two new ratio metrics I’d be interested in seeing. One should be calculable from available data such as TRAC (which has its own reliability issues); the other is not. And it’s fair to note the costs and trade-offs that might be needed to produce the kinds of data I’m talking about. But for now, let’s imagine some fantasy ratio metrics.
Teaching Staff-Student Ratio
For this metric, take all staff who have teaching as part of their workload (for the unit of interest – e.g. university, subject or indeed programme), calculate the fraction of their time given to teaching-related activities, and multiply it by the FTE number of those staff. This can then be used to create a notional teaching staff-student ratio which takes into account the proportion of time, and number of staff, who contribute to that teaching relative to FTE students.
| Proportion of time spent teaching | FTE staff | Notional teaching staff | FTE students | TS-SR |
|---|---|---|---|---|
While this metric might not be that different to current figures at the university level, it likely would be at the subject level – which I would suggest is of most interest to students. The benefit of this metric is that it strips out staffing that only indirectly benefits students (e.g. where a staff member doesn’t contribute to teaching, or to that programme). Hours would include preparation, contact time, marking and personal tutoring. You could debate in the comments whether to exclude administrative roles like programme directors, admissions, or academic misconduct roles, but I’m inclined to exclude these hours.
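The calculation above can be sketched with hypothetical subject-level numbers (illustrative only, not real data), showing how the notional figure diverges from a simple headcount ratio:

```python
# Hypothetical subject-level figures for illustration only.
fte_staff = 40.0          # staff with teaching as part of their workload
teaching_fraction = 0.45  # average fraction of workload on teaching-related activity
student_fte = 360.0

notional_teaching_staff = fte_staff * teaching_fraction       # 18.0
ts_sr = student_fte / notional_teaching_staff                 # students per notional teaching FTE

naive_ssr = student_fte / fte_staff  # conventional headcount-based ratio, for comparison
print(f"Notional teaching staff: {notional_teaching_staff:.1f}")  # 18.0
print(f"TS-SR: {ts_sr:.1f}")  # 20.0, versus a naive SSR of 9.0
```

The same cohort looks very different under the two lenses: 9 students per staff member on paper, but 20 per notional teaching FTE once the time actually spent on teaching is accounted for.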
You would expect to see some differences between institutional types and subjects, but at least this would be clearer than with the current metrics. Research-intensives could, and would need to, make a case for why staff should be spending time doing research, but it would at least give a more realistic picture to students about the time given to teaching.
I also like that this metric would only improve if more staff time was spent on teaching, or if more staff were employed with hours given to teaching. To be sure, we would need to have a conversation about when the percentage of time spent teaching might have a negative impact on quality, and you might notice I’ve avoided any assumption that more contact hours are a mark of quality. While this metric could be gamed by a research-intensive university recategorising ‘research time’ as ‘preparation for teaching time’, that would have its own interesting effects.
Personal tutor: tutees ratio
In honour of the concerns of this site’s Jim Dickinson about what rising student numbers do to personal tutoring, this metric tries to keep an eye on personal tutoring remaining, well, personal. In the sector, it is not unheard of for personal tutors to have 50+ tutees, or for students to have simply a named member of staff they ‘can’ contact but with whom they are unlikely to have built the relationship needed for the support, feedback and advice that great personal tutoring enables.
While we might want to see academic tutoring (in leading subject-specific group tutorials) as distinctive, if sometimes overlapping, this metric is focused on the likelihood that a student has an identifiable person who knows them.
This ratio would provide an incentive to ensure that personal tutoring is shared between staff, not carried out by a few, and at numbers that allow a student to be known and supported by that member of staff. Attention would have to be given to the possibility of this being gamed by employing casual staff to act as tutors to reduce the ratio. Ideally, staff on open-ended contracts would provide greater continuity, and the embeddedness in the organisation to signpost effectively and to work with professional services staff on the challenges that can come from supporting students who are more vulnerable.
A time for alternatives
A good metric may be hard to find, and an ungameable one is probably impossible, but that doesn’t mean we need to stick with the metrics we currently have. While both the HESA and TRAC datasets have their own issues, the changes we’re seeing in the sector, and the pressures on fees, funding models, and students themselves, mean it’s all the more important that our measures reflect what we think matters.