
What’s the use of the staff:student ratio?

Staff:student ratios are familiar from league tables - Matt Finn wonders what they really tell applicants about the student experience

Matt Finn is a senior lecturer in human geography at the University of Exeter.

A friend got in touch recently to talk about a video exposé he’d seen about university league-table rankings.

Admittedly, this tells you something about my friends, and the kinds of conversations we have, but nevertheless, the video had been eyebrow-raising for him. It was his final question that got me though:

so are there any good metrics?

From my PhD on data and education, in a school context, I learnt that metrics in most systems are gameable, that avoiding perverse incentives is hard, but that metrics can provide more-or-less good proxies for the things they purport to tell you about.

Still, it brought me back to something I’ve been pondering this term: the staff-student ratio, and whether it could do with an overhaul.

Staff-student ratios

Staff-student ratios (SSRs) are typically seen as an “input” which supports academic quality, in contrast to other league table metrics which focus on outcomes. They might be offered at a university and subject level.

SSRs tend to be calculated as the full-time equivalent (FTE) number of academic staff who are available to teach relative to the FTE number of students taught. Staff on research-only contracts are not included. A lower ratio is assumed to be better, in that a student might reasonably expect more contact.
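To make the arithmetic concrete, here is a minimal sketch in Python. The function name and FTE figures are illustrative assumptions, not HESA data or methodology:

```python
# Minimal sketch of the headline SSR arithmetic; the figures below
# are invented for illustration, not HESA data.

def ssr(teaching_available_staff_fte: float, student_fte: float) -> float:
    """Students per FTE academic staff member available to teach.

    Staff on research-only contracts are assumed to have been
    excluded before this figure is calculated.
    """
    return student_fte / teaching_available_staff_fte

# e.g. 80 FTE staff available to teach 1,200 FTE students
print(f"SSR = 1:{ssr(80, 1200):g}")  # -> SSR = 1:15
```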

And while 69 per cent of students in one 2018 survey rated these ratios as being a helpful factor in assessing whether a university provides value for money, most other factors available to select in the survey scored higher. Despite questions about credibility and the lack of correlation with other measures of teaching quality, they remain.

The problems

The picture gets complicated once we understand the data sources used: for example, the need to map HESA cost centres to subjects. Add to this that staff bought out for research, but who are not on “research-only contracts”, are still included in the ratios, and that some may only teach at Masters level or supervise PhD students. There have also been long-held concerns about how SSRs take casual staff into account.

Wonkhe’s David Kernohan below uses custom HESA data, presented here for the first time, to plot Student FTE against Teaching Staff FTE (rather than Academic Staff FTE). Hover over a bubble and you can see the difference in Academic Staff to student ratio and Teaching Staff to student ratio.

[Interactive chart: Student FTE plotted against Teaching Staff FTE]

In some cases, these represent significant differences to the published SSRs.

And, with the pandemic showing up in short and long experiences of Covid-19, in interaction with ongoing mental ill health, and in care for children and elders, many staff have taken periods of leave, reducing the numbers actually available to teach. Some optional modules can’t run. Tutorials and marking need covering. These experiences don’t show up in SSRs.

In their defence

At the risk of stating the obvious, staff-student ratios do still tell us something about the number of staff who might be available to teach, and they are reasonably comparable between courses, though likely to the detriment of teaching-intensive universities. It’s also plausible that a lower ratio might indicate more resilience or ‘flex’, which is important in periods of additional pressure, such as admitting and educating students during Covid-19, especially for institutions that welcomed unusually large cohorts. There’s also a suggestion that lower SSRs are associated with greater institutional financial resources and greater module choice. And for some courses, accreditation requirements include maximum ratios.

Overhauling the staff-student ratio

I’m proposing two new ratio metrics I’d be interested in seeing. One should be calculable from available data, such as TRAC (which has its own reliability issues); the other is not. And it’s fair to note the costs and trade-offs that might be needed to produce the kinds of data I’m talking about. But for now, let’s imagine some fantasy ratio metrics.

Teaching Staff-Student Ratio

For this metric, take all staff who have teaching as part of their workload (for the unit of interest, e.g. university, subject or indeed programme), calculate the fraction of their time given to teaching-related activities, and multiply that fraction by the FTE staff count. The result is a notional teaching staff FTE, which can then be set against FTE students to create a teaching staff-student ratio that takes into account both the proportion of time and the number of staff contributing to teaching.

Proportion of time spent teaching | FTE staff | Notional teaching staff | FTE students | TS-SR
0.8 | 10 | 8 | 100 | 1:12.5
0.8 | 100 | 80 | 1000 | 1:12.5
0.8 | 1000 | 800 | 10000 | 1:12.5
0.5 | 10 | 5 | 100 | 1:20
0.5 | 100 | 50 | 1000 | 1:20
0.5 | 1000 | 500 | 10000 | 1:20
0.2 | 10 | 2 | 100 | 1:50
0.2 | 100 | 20 | 1000 | 1:50
0.2 | 1000 | 200 | 10000 | 1:50
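As a sketch of the arithmetic behind the table (a hypothetical function, using the same illustrative figures; none of this is real data):

```python
# Sketch of the proposed TS-SR; rows mirror the illustrative table above.

def ts_ssr(teaching_fraction: float, staff_fte: float, student_fte: float) -> float:
    """Students per notional FTE of teaching staff time.

    teaching_fraction: share of workload given to teaching-related
    activities (preparation, contact time, marking, personal tutoring).
    """
    notional_teaching_staff = teaching_fraction * staff_fte
    return student_fte / notional_teaching_staff

for fraction, staff, students in [(0.8, 10, 100), (0.5, 100, 1000), (0.2, 1000, 10000)]:
    ratio = ts_ssr(fraction, staff, students)
    print(f"{fraction:.0%} teaching, {staff} FTE staff, {students} FTE students -> 1:{ratio:g}")
# -> 1:12.5, 1:20 and 1:50, matching the table
```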

While this metric might not be that different to current figures at the university level, it likely would be at the subject level, which I would suggest is of most interest to students. The benefit of this metric is that it strips out staffing that only indirectly benefits students (e.g. where a staff member doesn’t contribute to teaching, or not to that programme). Hours would include preparation, contact time, marking and personal tutoring. You could debate in the comments whether to exclude administrative roles like programme directors, admissions, or academic misconduct roles, but I’m inclined to exclude these hours.

You would expect to see some differences between institutional types and subjects, but at least this would be clearer than with the current metrics. Research-intensives could, and would need to, make a case for why staff should be spending time doing research, but it would at least give a more realistic picture to students about the time given to teaching.

I also like that this metric would only improve if more staff time was spent on teaching, or if more staff were employed with hours given to teaching. To be sure, we would need to have a conversation about when the percentage of time spent teaching might have a negative impact on quality, and you might notice I’ve avoided any assumption that more contact hours are a mark of quality. While this metric could be gamed by a research-intensive university recategorising ‘research time’ as ‘preparation for teaching time’, that would have its own interesting effects.

Personal tutor: tutees ratio

In honour of the concerns of this site’s Jim Dickinson about what rising student numbers do to personal tutoring, this metric tries to keep an eye on personal tutoring remaining, well, personal. In the sector, it is not unheard of for personal tutors to have 50+ tutees, or for students to have simply a named member of staff they ‘can’ contact but with whom they are unlikely to have built the relationship needed for the support, feedback and advice that great personal tutoring enables.

While we might want to see academic tutoring (in leading subject-specific group tutorials) as distinctive, if sometimes overlapping, this metric is focused on the likelihood that a student has an identifiable person who knows them.

This ratio would provide an incentive to ensure that personal tutoring is shared between staff, not carried out by a few, and at numbers that allow a student to be known and supported by that member of staff. Attention would have to be given to the possibility of this being gamed by employing casual staff to act as tutors to reduce the ratio. Ideally, staff on open-ended contracts could provide greater continuity, and the embeddedness in the organisation to signpost effectively and to work with professional services staff around the challenges that can come from supporting more vulnerable students.
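A short sketch of why the distribution matters as much as the average here; the tutee counts are invented for illustration:

```python
# Sketch: an average tutor:tutee ratio can hide heavy concentration
# of tutoring on a few staff. All counts are invented.
from statistics import mean

tutees_per_tutor = [52, 48, 45, 5, 4, 3, 2, 1]  # hypothetical department

print(f"mean tutees per tutor: {mean(tutees_per_tutor):.1f}")  # -> 20.0
print(f"max tutees per tutor:  {max(tutees_per_tutor)}")       # -> 52

# A 1:20 average looks workable, but three tutors carry over 90 per
# cent of the tutees, so most students sit in groups too large for
# them to be known and supported.
```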

A time for alternatives

A good metric may be hard to find, and an ungameable one is probably impossible, but that doesn’t mean we need to stick with the metrics we currently have. While both the HESA and TRAC datasets have their own issues, the changes we’re seeing in the sector, and the pressures on fees, funding models and students themselves, mean it’s all the more important that our measures reflect what we think matters.

6 responses to “What’s the use of the staff:student ratio?”

  1. Thanks for tackling this subject. You are right that finding an “ungameable” metric may be nigh-on impossible, and how much individual institutions currently “game” their SSRs varies wildly. However, although I agree with the need to look for alternative metrics, your proposed alternatives would be very labour-intensive – particularly the former – and, as you have acknowledged, still open to significant gaming. In my experience it is the universities with the most resource that are best at gaming, and metrics/league tables drive a huge volume of work in HEIs – changing metrics would surely only create more of an industry, wouldn’t it? I don’t have an answer apart from the obvious – just scrap the SSR altogether. And scrap league tables whilst we’re at it!

  2. I’m really sympathetic to that perspective – scrapping the lot does have appeal, and there is a movement of universities removing themselves from league tables. I suppose I do still see value in students having information that might help them understand how courses work and are structured and staffed (with working conditions being learning conditions). But then, as you say, that takes work to produce the data, and comes with all the questions about the trustworthiness of that data.

  3. A group of Planners have been considering the definition of SSRs, and whether the HESA definition needs amending.
    Part of the discussion has been about how SSRs are described – so they don’t indicate what a student would actually experience, or on the flip side what an academic member of staff would experience either. Taking the example of medical students – they experience large scale lectures (400:1), and some of the smallest group “teaching” when in clinical settings, and end up with one of the lowest SSRs of all subjects (which as an average is probably a figure they’d never actually experience).
    So what are we trying to signal to applicants and students through the SSR?
    In proposing ideas for how the academic “teaching” FTE is arrived at, there should also be consideration of what proportion of the student FTE is being taught. The HESA method discounts student FTE for students in industry, and abroad. We have discussed other discounts for where students are on specific placements and where there isn’t specific academic oversight.
    Every student FTE is not the same – a full-time PGT student is studying across 40+ weeks, while a UG student is studying ~30 weeks, so one has more academic input than the other (should a PGT FTE be scaled up?).
    Then there is the question of how to handle PGR student FTEs, which are included in nearly all SSR formulations but aren’t “taught” (and are probably delivering some atypical teaching via lab or examples classes).

    1. All good points! There are definitional and operationalisation difficulties with all the terms – staff, students and the unit of focus (and both staff and students don’t ‘stay’ in their units – e.g. optional modules, teaching on modules across units etc.).
      There was work on a ‘teaching intensity’ metric, wasn’t there? I’m not sure how much traction that gained.

  4. And all this ignores the fact that it is only HESA cost centres that provide any granularity and these are themselves a very flawed proxy for subject or department.

  5. Thanks for an interesting article. At my own institution I proposed a metric very similar to your first one – and indeed the data is included as part of the TRAC data given to departments each year (although TBH it has not yet got widespread traction).
