Postgraduate quality will not be taken seriously until there is a national PG taught survey

Richard Puttock rehearses the arguments against a public annual survey of postgraduate taught students – and finds them all to be spurious when weighed against the benefits

Richard Puttock is Head of Business Intelligence and Data Analytics at the University of Leeds, writing in a personal capacity

It’s time for the Office for Students (OfS) and the sector to take postgraduate taught education seriously and give it the policy attention it deserves.

I’ve been working in higher education for over 30 years and one of the enduring constants over that time is the almost complete lack of policy debate about postgraduate taught students and their experience – somehow they always seem to get lost.

There is occasionally some discussion about diversity – although more recently this has sadly been about the reliance on Chinese students to sustain many universities rather than postgraduate study as a driver of social mobility.

A good example of this is the publication of B3 data by OfS – the first time that a comprehensive set of performance indicators on taught postgraduate students has been published, despite the equivalent undergraduate measures having been around since the late 1990s.

The same is not true for postgraduate research students who have always had their needs championed by UKRI, and before that the individual research councils, who rightly see them as the next generation of research talent.

Back in the day HEFCE even dabbled with projected completion rates for research students – these were fraught with difficulty, yet they still took precedence over similar figures for taught students that would have been easier to calculate and more robust.

Two out of three

The publication of B3 data by OfS (at least in England), and the long-standing publication of the Graduate Outcomes survey by Jisc, means that there is just one glaring omission in the postgraduate taught student landscape – any attempt to ask students about their experience.

There is currently no mandated national survey for postgraduate students, even though 38 per cent of all entrants each year are to postgraduate programmes. How can it be right that such a large group of students have no voice, and that we know so little about their experience?

The first serious discussion of a postgraduate equivalent to the National Student Survey (NSS) appeared in the 2010 white paper Students at the Heart of the System – and in 2013 the UK funding bodies started work on a possible national survey, getting as far as quite detailed pre-consultation during 2017.

OfS continued this work, and as recently as 2022 commissioned a second pilot survey, but since then nothing has happened.

Thanks to the good work of Wonkhe we have some headline figures from those pilot exercises. They show that in aggregate students are positive about their experience, with 86 per cent of respondents answering positively to the question “Overall, how would you rate the quality of your course?” – a question that is sadly unlikely ever to make it into a survey run by OfS, for what can only be described as spurious reasons.

PTES out

Of course, we do already have a national survey – the Postgraduate Taught Experience Survey (PTES) – which has been run for many years by Advance HE. This survey again highlights an overall positive experience for postgraduates at those universities that choose to participate: 83 per cent positive in the latest iteration.

There is very little wrong with the survey itself other than the timing – which means many students will only just have started dipping a toe into their dissertation by the time they complete it.

Advance HE publishes a quite detailed analysis of the survey every year and provides benchmarking data to participating universities. The real problem is that the detailed figures for each university, and for each subject within a university, are not published – and participation is optional, although over a hundred universities took part in 2023.

The lack of detailed publication is not Advance HE’s fault: these features are necessary when you have no power to compel participation. Very few universities would risk taking part in a voluntary survey if the results were public. The lack of telephone responses also hinders response rates, which could surely be improved with a multi-mode survey.

Reputational damage

Way back in 2005, when the UK funding bodies ran the first NSS, many questioned the value of asking for students’ views on their courses and some actively disagreed with the findings – it wasn’t that feedback was poor, it was final year students failing to recognise good feedback!

There were also significant concerns about the potential damage the survey would do to the UK’s international reputation. As the twentieth iteration ends and we eagerly await the results, the NSS is now accepted as a valuable and enduring survey that has focused minds on the undergraduate student experience.

The NSS has its detractors, who argue that the concepts it captures do not necessarily reflect the current academic literature on what makes for a high-quality student experience. However, I would argue that, with a few exceptions driven by political motives, the questions focus on topics that are, and should be, important to students.

I have yet to see a compelling argument as to why it is not important that staff are good at explaining things or that students are able to contact staff when they need to.

During early discussions about a postgraduate survey, many in the sector feared the untold damage that could be done to international recruitment by the UK being one of the few countries with such a survey. Based on the OfS pilots and the Advance HE survey, there is good evidence that the scores in any national survey would be strong enough to support recruitment rather than undermine it.

There is also no evidence that the existence of the NSS has harmed our recruitment of international undergraduates at the national level – and I see no reason why the same should not hold for postgraduate study.

Of course, there may be some subjects at some universities where the results are poor and this damages their recruitment – but it is hard to argue that a mechanism that penalises courses with a poor student experience is a bad thing.

Arguably those universities upping their game will help UK higher education in the global marketplace.

The rational choice

Throughout its history, NSS has been framed as a vehicle to drive choice, as part of a deeply flawed assumption that students make rational choices of where to study based on a detailed analysis of vast swathes of data.

All of the research that I have seen shows this simply isn’t the case – decisions are more likely driven by the heart than the head, with data used as a hygiene factor to rule out poor-quality courses or confirm high-quality ones.

I would argue that, far from being irrational, this is the height of rationality. A prospective student planning to spend three or more years of their life somewhere, and to incur significant liabilities against their future earnings, had better be sure it is somewhere they will feel at home. At the end of the day the differences between many universities on the metrics are, in practical terms, quite small; it is the outliers that students need to be aware of.

Similar research on the way postgraduates make choices indicates that they are heavily influenced by factors such as research strength and detailed course content, with many choosing programmes based on single modules led by world-leading academics. A national postgraduate taught survey would therefore be unlikely to radically change the way students choose their courses.

But that doesn’t mean it is a waste of time and money.

In such a reputation-sensitive sector, the impact of poor results on the standing of universities or faculties will be such that senior leaders will take notice of poor outcomes – and will seek to improve them. Of course, many universities already do this by participating in the Advance HE survey, but arguably they are not the universities that need prodding to take the postgraduate student experience seriously.

Fourteen years on from the white paper, it is time that OfS and its counterparts in the devolved administrations stepped up to the plate and delivered a universal survey of postgraduate taught students, with detailed and benchmarked data published. Better still, OfS could actively include results of the student surveys in their regulation of providers and truly become the office for students.

11 responses to “Postgraduate quality will not be taken seriously until there is a national PG taught survey”

  1. While the B3 data for PGT students is welcome (not to mention the UG framing of the measures), this compares student outcomes against a threshold – for true reputation enhancement of PGT courses we need benchmarked data.

    1. I agree that adding benchmarks for PGT and PGR would significantly enhance the value of the data and allow the OfS to make better judgements taking into account context, as they are legally obliged to do. However, you have to remember that the OfS had to be dragged kicking and screaming into considering benchmarks for UG, where they were already well established.

  2. I think the value of shining a light on PGT, and using a survey, is clear. The question for me is always around timing, though.

    Run the survey early, as Advance HE does, and you don’t pick up dissertations/projects. Run it late, and response rates will be impacted (I’d expect a bump if we got a compulsory survey, but PTES has long struggled with rates in the 20s and 30s).

    1. I agree that there’s a challenge on timing, although I suspect there is a sweet spot in July and August, once students are into the meat of dissertations but before the final crunch time. A compulsory survey would have the benefit of telephone responses, and published data would in time lead to students who were aware of the survey when they applied wanting to pay it forward to the next generation.

      While response rates will be an issue, we shouldn’t get too hung up on the 50 per cent required for NSS publication, as it really doesn’t need to be that high to mitigate response bias risks.

  3. Really good Richard and you know already that you and I agree on how students (quite reasonably) seem to make their choices.

    We know already from outcomes data and labour market supply and demand factors (demand for PGT in the UK is quite different to demand for other qualifications and, crucially, is the main area where demand differs from other skilled economies such as the US and those in Europe) that there are potential areas of concern in PGT provision, and it would be wise for us as a sector to choose to shine a light on it ourselves and take – and be seen to take – appropriate action.

  4. I’m not opposed to a postgrad survey per se, but given the cost and burden, I think we have to be clear about why we’d want to do one and what changes we’d hope to achieve.
    Your comments about student choice are spot on. NSS’s impact on informing choice has been negligible (except indirectly and not altogether helpfully via league tables) and I think we’re agreed that for PG students, the impact would be even less.
    That means we’d be doing it for institutional improvement or for public accountability.
    For improvement, while a compulsory survey would be better than PTES, I have to wonder whether it would be so much better that it justifies the cost and burden. After all, those with PG enhancement as a priority would be mad not to be participating in and using PTES and PRES already. Any others would, sadly, probably be willing to take the hit on low PG survey outcomes.
    That leaves public accountability. Given that public up-front investment in PG fees is a lower proportion of the costs than for UG, the bar for accountability is set lower. However, that’s no excuse to just shrug off the issue. Public money and public interest are vital, so I’d still say we need to find a way to ensure good value is being delivered, whether for the public purse or the personal investment.
    So the question boils down to whether we would be able to paint a sufficiently robust and granular picture to inform the debate. I worry that all the noise and nuance that make it difficult to present a fair understanding of NSS may be even worse in a PG survey. Meanwhile, there are the dangers of inviting un-nuanced and heuristic interpretations of data.
    You’re raising a really important issue and it’s a great discussion to have. I could certainly be convinced that a PG survey is the way to go, but I think it’s at one end of the range of responses we could pursue. Other options include Advance HE making changes to PTES or finding non-survey-based ways to dip into quality assessment.

    1. Those who care about enhancement will already be doing PTES (and PRES), so there’s no cost there. As for those not doing PTES, surely a regulator should care even more about providers that don’t see enhancement as important, as there is a risk they are not even delivering the baseline.

      I think the lack of public money going into PGT is a bit of a red herring, as the total public plus private contribution is high, especially when you consider the opportunity cost of taking time out of earning. Of course, it goes without saying that the OfS regulates on behalf of all students, including those who are subsidising the UK students, so it really should care.

  5. This article claims a consensus about the NSS when no such consensus exists. In reality, many students and the lecturers who teach them day to day do not accept that the NSS has value. Puttock claims ‘I have yet to see a compelling argument as to why it is not important that staff are good at explaining things or that students are able to contact staff when they need to’. Yet I have yet to see a compelling argument as to why endless surveys – and the way tiny fluctuations in those scores from year to year are weaponised by management against teaching staff, despite the answer again and again being that a large majority of students are satisfied with their courses – lead to staff becoming better at explaining things or more contactable. To think that we wouldn’t care about these things without a numerical survey really misunderstands the motivation of those of us who have chosen teaching as our vocation. A more pertinent question would be: why in our lived experience as lecturers are actual students so much less keen on engaging with these surveys than the figure of the student as constructed by managers intent on turning our public services into businesses?

    1. That’s actually assuming that ALL academic staff care about the student experience. For a lot of them, research is what matters. I am one of those ‘managers’ who has to constantly remind (some) staff to do the bare minimum, such as responding to student emails or following university policy such as releasing teaching material in advance. I would absolutely love to ditch any student survey if everyone was doing what they’re supposed to do, but I live in the real world.

  6. I also, oddly enough, live in the real world. So I am aware that not every academic is as committed to their teaching as some of us are. But if you’re still having to remind colleagues about those things despite 20 years of student surveys, that suggests that surveys, and the micromanagement they encourage, are pretty useless as motivation. The underlying problem is that trust between senior management and staff in UK higher education has collapsed. It doesn’t matter how many league tables are set up to monitor performance: there are so many different and contradictory simultaneous imperatives (make your classroom teaching more student-friendly! no, police attendance more strictly! no, put your teaching on virtual learning environments in case it isn’t convenient for students to turn up! no, spend more time emailing students! no, spend more time on institutional emails responding to data requests! no, work more smartly with less emails! no, focus more on postgrads! no, focus more on undergrads! no, focus more on foundation years! no, produce more publications! no, produce less but better quality publications! no, get more research grants! no, research loses us money! no, follow this month’s latest innovation! no, that was last month’s! etc, etc) that their net effect on how academics do their jobs day to day is practically non-existent. No survey can fix the problem that there is simply not enough time during the hours that we are paid to do everything that someone is expecting us to do. So, assuming the days of regularly working lots of free overtime out of goodwill are over (which I think they probably are after the various industrial relations fiascos of the past half decade), inevitably we have to make choices about what to prioritise. So if you want us to prioritise part X of our work, don’t be surprised if part Y of our work becomes less good. Thank you for your understanding.

  7. As someone who fulfils both roles plus research, I do have to agree that the number of surveys done (NSS/PTES, module surveys, course surveys) during a student’s lifetime is a killer for both students and lecturers.
