The students are back on campus – but are they? Since the resumption of what was supposed to be normal service post-Covid, university lecturers have been reporting half-empty classrooms and lecture theatres.
There are lots of plausible theories about what’s going on: rising costs pushing students to seek paid work off campus, more students living further from campus and dealing with expensive and patchy public transport, and a decline in students’ mental and physical wellbeing post-pandemic, with associated knocks to academic confidence.
The worry is that students’ absence from campus signals a deeper problem with their engagement with learning, and their sense of connectedness and belonging on their courses – both triggers for early withdrawal and lack of progression. The prospect of students slipping away is a serious one – both for the students’ wellbeing and future prospects, and for the universities that are held accountable for their students’ successful outcomes.
Universities that are grappling with patchy student attendance understandably want to take action to address the risks to student success. Closer monitoring and enforcement of student attendance might present itself as a helpful solution – after all, in theory, if students are in class there is a greater prospect they will be encouraged to engage and connect than if they are not.
But before pulling the trigger on attendance monitoring it’s worth thinking carefully about whether it can be expected to achieve the goal of improving student engagement.
Attendance and engagement
In the student engagement analytics world there’s a lively debate about whether attendance at scheduled contact hours is a meaningful indicator of student engagement. “Attendance is not engagement” is one perspective – where it is pointed out that a student can be present in body but not in spirit.
Attendance, on this side of the argument, corresponds to “presenteeism” in that it incentivises presence over other forms of engagement more directly connected with learning gain – and if approached punitively can unfairly penalise students who are already struggling to juggle paid work, caring responsibilities, or making ends meet financially. If a student does not show up to a lecture but catches up later on the recording, the problem is arguably more the lecturer’s experience of teaching to empty seats than the student’s learning.
At the University of Essex, for example, rolling out the StREAM student engagement analytics system – which tracks a range of indicators of engagement – allowed university staff to identify less engaged students at risk of withdrawal with much greater accuracy than tracking attendance data alone.
On the other hand, some universities developing engagement analytics systems have found that attendance does correlate to some extent with student outcomes – and have made the decision to include attendance as part of their engagement algorithm (ie the basket of measures that in combination produce a single engagement indicator that can help direct universities to the students most in need of intervention and additional support).
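To make the “basket of measures” idea concrete, here is a minimal sketch of how several normalised engagement signals might be weighted and combined into a single indicator. The signal names and weights below are entirely hypothetical illustrations, not the model used by StREAM or any particular institution – and, as the article notes, attendance may be one signal in the basket or omitted altogether.

```python
# Illustrative sketch only: a weighted "basket of measures" combined into
# one engagement indicator. Signal names and weights are hypothetical.

def engagement_score(signals: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Combine normalised signals (each 0.0-1.0) into a single 0-1 score,
    as a weighted average over whichever signals the institution chooses."""
    total_weight = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total_weight

# One institution might include attendance with a modest weight...
weights = {"vle_logins": 0.4, "assessment_submissions": 0.4, "attendance": 0.2}
signals = {"vle_logins": 0.9, "assessment_submissions": 1.0, "attendance": 0.3}
score = engagement_score(signals, weights)  # 0.82

# ...while another might monitor attendance separately and exclude it:
weights_no_attendance = {"vle_logins": 0.5, "assessment_submissions": 0.5}
```

The point of the sketch is the design choice it exposes: whether attendance sits inside the weighted basket, and at what weight, is a policy decision each institution makes against its own evidence, not a property of the analytics itself.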
Others monitor attendance for their own information, but do not include it in their engagement algorithm. The rationale might be around managing timetabling of scheduled teaching more effectively, or to understand students’ patterns of presence on campus, or to comply with UKVI requirements, without drawing conclusions about students’ overall level of engagement.
Importantly, the approach is informed by what the data shows, and by judgement about the cultural meaning the act of monitoring attendance will have in the specific institution. And deciding to monitor attendance also implies having a plan of action for students who do not attend – this might include setting a threshold above which non-attendance is judged to be problematic, and establishing an expectation of whose job it is to trigger a response, as well as what kind of response is appropriate.
Universities in the process of revising their student engagement policies are actively moving away from a punitive approach in which students perceive they will be penalised for failing to show up to scheduled contact time. Frustrating, and demoralising, as it is for educators to show up to half-empty classrooms, if a student is not motivated to show up, threats of retribution and penalties are likely to make things worse rather than better – pushing the student further away rather than restoring connection.
Where universities are using engagement analytics, the simple act of emailing or calling a student who appears not to be engaging to check they are fine can help to signal concern and direct the student to appropriate support – and, if the student is genuinely OK, can help reassure course and module leaders.
But it is also vital to be pragmatic about the scale of the resource available to reach out to less engaged students, as well as the limits of the support that can be provided. Devising a policy that relies on university staff putting in the hours chasing down non-attending students may only serve to raise expectations beyond what can be met.
Feeling supported
The other aspect to the debate is pulling back to look at what it is that students are being expected to show up to. Student polling undertaken for Wonkhe’s pilot Belong student insight platform in January and February found that, of a sample of 1,600 undergraduates, around four in ten had not attended at least one of their last five scheduled contact hours.
But, unexpectedly, reported non-attendance did not seem to correlate with paid work or distance from campus – though qualitative reporting suggested that these, as well as illness, were factors for some students. Feeling a sense of community showed only a modest correlation.
What did show a stronger correlation was student perception of how staff made the subject engaging, and how supported they felt with their learning.
Let’s be absolutely clear on this – student attendance shouldn’t ever be perceived as a referendum on the quality of teaching. The Wonkhe research showed that there’s a lot else going on with students, including illness, work, general low motivation, and more. But the mantra that “attendance is not engagement” holds true here – monitoring attendance may only serve to demonstrate a wider engagement challenge, not solve it.
Instead, actively working on improving student engagement, and acknowledging and supporting students’ choices about where they put their own limited attention and personal resources might just help to address the attendance problem.
This article is published in association with Solutionpath.
You should probably make it more apparent that Solutionpath are the makers of StREAM.
yes, while this is an interesting article, and I agree that attendance does not = engagement, it is to all intents and purposes an ad 🙂
Whilst I agree it’s a sales pitch, I do think it raises a very valid point. Many institutions (mine included) give attendance a high weighting as a predictor of risk of adverse outcomes. It’s clearly not so cut and dried. We need an approach (and tools) that allows for predictive modelling of what weighting to give each engagement activity, mapped to specific programmes – preferably modelled against our historical continuation data.
And there is a real danger that we are going to be required to monitor attendance – we are already seeing this approach applied to our partner institutions and to apprenticeships, and can feel it ‘coming down the line’ for everyone else, despite all our knowledge that monitoring attendance is not necessarily what works to enhance engagement.
There isn’t a one-size-fits-all solution; each course needs a nuanced method of monitoring engagement with delivery. For obvious reasons, attendance on highly practical courses needs more than attempting to catch up after hours. However, having the foresight to plan out the appropriate methodologies, and then ensuring applicants are aware of them before committing to the course, is something we should all be capable of doing. After all, we know what the LOs are – we should know what it takes to achieve them and measure against that accordingly.