
Student voice is more than answering surveys

Surveys are not the only means to gather student feedback. Miriam Styrnol outlines five alternative approaches.

Miriam Styrnol is Senior Research and Evaluation Advisor for What Works at King’s College London.

The 2019 National Student Survey (NSS) finishes next week.

So let’s use this landmark in every university administrator’s calendar to tackle the enigma machine that no one in higher education has yet been able to crack: how to integrate student voices into service design without using a survey.

To understand the scale of the problem, it’s worth considering the number of surveys an average third-year student is asked to complete over the course of their final year at my institution:

  • Evaluation of modules: 2-10
  • Evaluation of personal tutor: 1
  • Evaluation of academic tutor: 1
  • Evaluation of centrally provided services: 3-15
  • Evaluation of extracurricular study support: 2
  • Evaluation of non-academic central services: 3-8
  • Evaluation of facilities and estate premises: 1
  • Evaluation of an evaluation questionnaire: 1

Even if each survey is short, how useful is the data from a student who is being asked for the fourth, fifth, tenth, twenty-sixth time for their opinion about the excellent teaching, excellent facilities, and excellent, prizewinning faculty? Especially since, by survey ten, response rates have probably dropped below 10 per cent, meaning the survey is only reaching an unrepresentative subgroup of the student population.

In theory, the NSS and feedback surveys are a good idea. Universities should, and want to, seek information on students’ experiences and needs. Doing so helps us to improve as academics and administrators. But after years of polarising NSS debates, we should take ownership of the problem and introduce research designs that help us to measure and integrate the student voice more effectively. Fewer surveys mean better student data in the long term.

Below, I explore five ways to wean the sector off the addiction to surveying.

1. Don’t ask students if you already know the answer

We need to think about how to use existing student data to answer questions about student experiences, or how current data collection processes can be altered to gain richer insight. We don’t need to ask students how often they use the library, for instance, because we should know this from access data. Consider, too, whether things have really changed enough since the last time you surveyed to justify doing so again. Unless you are evaluating a pilot project, there’s probably a wealth of student data available from previous or similar projects.

2. Think about what you really want to know

Are you asking students to recall or estimate something when you could measure it directly? If you are interested in students’ spending patterns in order to improve financial support services, you can ask students to export their monthly spending from apps such as Monzo; this data can then be analysed (anonymously) to find where trends and average spending behaviour actually lie, as sketched below. Likewise, if you want to know how students navigate feedback on assignments, ask them to review their feedback using a screen recording app. This gives an accurate, unbiased insight into their actual behaviour patterns.
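To make this concrete, here is a minimal sketch of the kind of analysis such an export enables, assuming students share an anonymised CSV of transactions. The file name and column names (student_id, category, amount) are hypothetical illustrations, not part of any particular app’s export format:

```python
import pandas as pd

# Hypothetical anonymised export: one row per transaction,
# with 'student_id', 'category' and 'amount' columns.
df = pd.read_csv("anonymised_spending.csv")

# Total spend per (anonymous) student within each category.
per_student = df.groupby(["student_id", "category"])["amount"].sum()

# Average and spread of spending across students, by category,
# to see where the money actually goes.
summary = per_student.groupby("category").agg(["mean", "median", "std"])
print(summary.sort_values("mean", ascending=False))
```

The point is that the aggregate shows where support is actually needed, without asking anyone to estimate their own spending from memory.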

3. Think about whether you need quantity or depth of responses

Universities miss out on a huge amount of valuable, actionable information that can only be elicited by talking to students. As an example, let’s say you’ve identified that students only engage with your careers service at certain times in their academic journey. Well-prepared focus groups help you to understand students’ thought processes and the reasoning behind them. This, in turn, helps you to rewrite and target the promotion of the careers service so that it becomes more salient for students in the future.

4. Can students contribute as more than just subjects?

Deliberative panel designs are a great way to move beyond students-as-feedback-mechanism. At Royal Holloway, a representative panel of students convenes at key stages over the academic year and is asked to provide input on how students want to shape and steer student experience initiatives. The representative nature of the panel ensures that underrepresented student groups, in particular, get a say in the co-creation of service designs. Because the panel is steered by and for students, the focus is on feeding their suggestions directly into the way we conceptualise and implement our services. The repeated nature of the panel also allows for continuous research and demonstrates a long-term institutional commitment to integrating student voices into service design.

5. If you really must survey, be smart about it

Lastly, if you want to use surveys, then consider designs that are savvy. Initiatives like Warwick’s HearNow appreciate the value of surveys that are student-focused rather than institution-focused. Questions that give participants the opportunity to reflect on how they are feeling and on topics they are passionate about, and that convey a sense that their institution cares, can be as valuable for students as they are for us. For each question, always consider why a student would want to complete it rather than why you want them to do it. Careful incentivisation and relevant content can make surveys a useful tool to inform student-led service design.

Likewise, consider the timing of your survey to allow for a more accurate representation of the way students feel during their degree. While the end of the academic year, when surveys such as the NSS run, seems like a convenient moment to reflect on someone’s experience, research has shown that so-called ‘recall bias’ makes us quite bad at remembering how we felt in a past moment or identifying our reasons for doing something. To avoid this bias, pulse surveys can be circulated at pre-identified key points in the academic year, asking students how things are and what they are doing and feeling at that specific time.

Establishing formal structures, like Heriot-Watt’s Student Survey Management Group, which is tasked with overseeing the quality and quantity of surveys in the institution, can ensure consistency of approach and make sure that you only survey what really matters (and that you only do it once).

Of course, none of the above is a one-size-fits-all solution. But with student engagement and integration as key themes within OfS regulation, ever more university vision statements touching on service co-creation, and students voicing their scepticism around tokenistic feedback channels, it is time to think outside the box and figure out how genuine, collaborative student engagement works, and what it tells us.

4 responses to “Student voice is more than answering surveys”

  1. This advice is rather good for large western universities which have digital systems in place covering all aspects of university education. In many cases, a university has no other way to know its students’ experiences except through surveys. As unreliable as these tools are, they are better than assuming we know how the students are faring.

  2. The number of surveys listed is 39 (at top estimate) and when you add the NSS to it, it becomes 40, or one per week on average. About right. However, mention of the enigma (machine) highlights the issue of feedback, and feedback is part of the NSS too… What is feedback, what importance does it have, and is the desire for feedback (whatever it may be) the very thing that corrupts the whole relationship between university, staff, students and external stakeholders in the first place?

    To do the same thing again and expect different (better) results is madness… Yet who would be courageous enough to walk away from the mechanisms and actually conceive a truly impactful and appropriate relationship mechanism for those “inside the classroom” without the need to beat them with the measuring stick?

    Anecdotally, it offers cold comfort that universities (and courses) are seeing a reduction in engagement with the NSS this year (I think it is particularly marked) and with all other such in-class, cross-institutional surveys.

    Death by survey, or the mismanagement of expectation on all sides, is the true legacy of the NSS et al. The system has to make a break from these mechanical devices and, as the writer urges, find a different way of understanding the relationships we share.

  3. Fascinating article and some useful leads to explore, thank you. When designing any survey, we should always understand exactly why we are asking the question(s) and whether we are equipped (and prepared) to do anything in response to the answer. We should also anticipate that we might be asking the wrong question(s) entirely, which is why it is so important to look closely at free text comments and to talk to students about the results. I look forward to your next piece.

  4. If we are to follow good practice when it comes to collating feedback, we should also be closing the feedback loop in all cases too – reporting back to students on what was discovered and what will be done in response. But with that comes a huge amount of work for communications/engagement teams if we are looking at an average of one survey per week!
