
Did 1 in 6 students cheat in online assessments this year?

There's new polling out on the prevalence of online cheating in higher education. Jim Dickinson predicts a riot.

Do you remember the academic integrity good old days, when all we had to worry about was the odd human-based essay mill and some hamfisted Harvard referencing?

There have already been plenty of stories about the scarily good GPT-3 AI software that evades all the usual types of cheating detection – some of them actually written using the software itself just to prove the point.

If essays are over, then maybe authentic assessment is due its time in the sun – were it not for the fact that its enthusiasts always seem to underplay the potential for discrimination to enter the judgements being made, and the impossibility of meaningfully scaling watching humans doing things in the modern, massified university.

All that really leaves you with is exams. And unless we’re swinging back to the kind of paper-and-pencil, high-stakes hellscapes of the past (which also seemed oddly good at maintaining attainment gaps of every stripe) then the online versions we’re left with are going to cause us trouble too.

I raise this because there’s some new polling out on the prevalence of cheating in online assessment that abruptly reminds us that legislation banning essay mills (in England) was very much an analogue solution to a digital problem.

The numbers suggest that 1 in 6 students in the UK have cheated in online exams this academic year. Over half of those surveyed knew people who had cheated in online assessments. Almost 8 out of 10 believed that it was easier to cheat in online exams than in exam halls, and the methods for cheating were often laughably rudimentary – including calling or messaging friends for help during the exam, using Google to search for answers on a separate device, or asking parents to read through answers prior to submission.

And perhaps tellingly, when asked about the morality of cheating in online assessments, a third believed it was either “not wrong” or only “mildly” wrong.

This bit of polling – 900 unweighted student responses from across the UK commissioned by Alpha Academic Appeals and carried out by the Schlesinger Group – is not really sophisticated enough to be relied on as an accurate picture of what’s going on with students and academic integrity.
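
As a rough back-of-envelope illustration (mine, not the pollsters’ methodology), the pure sampling margin of error on the headline figure is actually quite small at this sample size – assuming, generously, a simple random sample:

```python
import math

# Hypothetical sanity check on the headline "1 in 6" figure,
# assuming (generously) a simple random sample of students.
n = 900      # respondents, as reported
p = 1 / 6    # proportion admitting to cheating in online exams

se = math.sqrt(p * (1 - p) / n)   # standard error of a proportion
moe = 1.96 * se                   # 95 per cent confidence half-width

print(f"1 in 6 = {p:.1%} ± {moe:.1%}")   # prints: 1 in 6 = 16.7% ± 2.4%
```

In other words, the sample size isn’t really the weakness – an unweighted, self-selecting sample is, and no number of extra respondents fixes that.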

But the figures do suggest that the sector ought to be worried – and if anything it’s the qual in the response tables that AAA has published alongside that ought to cause the most alarm.

It’s the way that you do it

When asked how they’d done it, one student said they’d used online sources and had help from a third party. Another gave someone parts of their essay where that person was struggling to write their own. One discussed the question with a friend online. Another said they completed an assessment with two other students on the course – they each chose questions to answer and then shared them:

This meant we only had to answer a couple each and they were high quality.

Some of them video-called friends while doing exams. Others shared exam questions in group chats before their peers began the assessment, so students knew some of the questions that might appear.

One respondent says that answers were “consistently posted” in shared chats by students, where whole year groups had access to pooled knowledge. One admits to using Google Translate for Spanish written assessments, and to looking at verb conjugation tables to complete multiple choice answers:

For my linguistics elective I also cheated by looking up the answers and looking at the International Phonetic Alphabet.

Maybe an endless and expensive arms race of online proctoring will prevent this kind of thing from happening – temporarily. But what’s striking is the extent to which the confessions are about collaboration – the sort of thing that is almost certainly happening for coursework and project work too.

Do students even understand the practical boundaries between collaboration and collusion? Why do we assess assuming no collaboration? Does the 24-hour timed exam sit in a space that is seen as a short-form bit of coursework, or as a long time to do an exam? Are we using rules which frame the assessment the wrong way?

What’s fascinating about that is that the original “A Vision of Students Today” viral video alerted us over 15 years ago to the idea that students were using the internet to learn socially, even when sat in lecture theatres designed for a different age.

They might feel ignored, lonely, and exhausted – but the thing they have had is those group chats. Is it any wonder that the generation that have been through Covid can’t resist a little “collaboration” during an online exam?

And should we be surprised that, in a culture that seems to be ignoring something so rife and putting such little effort into stopping it, students’ regard for the “honour” of academic integrity might be a little weakened in the face of, well, everything else?

That’s what gets results

As such, how they did it is one thing. Why they did it is a whole other ball game – and the free-text qual here is even more revealing – because you get a sense that cheating has moved from something that was furtively done by ashamed individuals to something done by almost everyone – and if that’s the case the battle’s lost before the academic integrity lecture even begins.

“I don’t think it is cheating as such, just extra help”, says one student. “It’s wrong but more support from the university should be given”, says another. One student even reckons that an allowance for cheating is priced in:

Our uni doesn’t police cheating therefore it is less stigmatised and is factored into the difficulty of exams.

Another draws a contrast with in-person learning and assessment:

Cheating online isn’t the same as cheating in person. I feel that students are almost expected to cheat during online exams and teachers should expect it. It’s all down to a students guilty conscience and their ability to cheat if they will do it or not. Online learning is already so pressuring and difficult with so many yet so little distractions around. For some people cheating is the only way that aren’t going to let their academic year got to waste.

Some feel they’re being watched by a blind eye:

The university doesn’t attempt to catch cheaters so it feels as though they are just allowing us to cheat in a way.

And others question the assessment method being deployed:

Assessments aren’t a great measure of ability and this cheating in them is only a result of academic pressure, regardless of morality, the need students experience for cheating results entirely from the failures of the educational system and its inability to equally assess everyone

Pesky kids

To really understand what’s going on here, I’d recommend having a read of this detective/thriller-themed version of a set of real-life events from a US academic, called “My students cheated… A lot”. It’s a long and fascinating story that reveals much about both the how and why of modern “cheating” – but it ends on this telling conclusion:

I’m so over trying to deter my students from cheating. There are so many ways I could lock down my courses. Not interested. If real life was about being monitored by proctoring software that spies on you at home and forces you to test under duress, it would be a sad real life.

The alternative I came up with was to open the course like a flower, and let students smell the roses if they wanted to. Most of them hadn’t engaged in the course material at all, so in the second syllabus I gave them all the opportunity they would need.

…The semester from hell ended. Some students still failed. Some did some more plagiarism and failed. But, most of them got decent grades and engaged substantially with the course material. A small win for me and them.

It’s a story that ends well – sort of. It’s also a story that seems to involve being able to do what he did without the Victorian-age assumptions of the right-wing British press breathing down his neck, expecting a lecturer to issue ten lashes to someone who’s written the answers on their arm. And it’s the story of someone who had the time and capacity to listen to his students, understand them, and re-create his teaching and assessment for the online, asynchronous age in a way that inspired at least some of his class to engage without cheating.

I expect that kind of autonomy, freedom and pragmatism will be in short supply when the probably justifiable panic over online cheating in the UK really hits the fan in the coming years.

3 responses to “Did 1 in 6 students cheat in online assessments this year?”

  1. “If essays are over, then maybe authentic assessment is due its time in the sun – were it not for the fact that its enthusiasts always seem to underplay the potential for discrimination to enter the judgements being made, and the impossibility of meaningfully scaling watching humans doing things in the modern, massified university.

    All that really leaves you with is exams. And unless we’re swinging back to the kind of paper-and-pencil, high-stakes hellscapes of the past (which also seemed oddly good at maintaining attainment gaps of every stripe) then the online versions we’re left with are going to cause us trouble too.”

    A pretty accurate summation in this age of Diversity, Inclusion and EQUITY, and one that already causes many employers to pause when looking at university qualifications: is that a First, or an equity First? And with this data being published they will add potential or even probable cheating to their considerations as well. We owe it to our students to eliminate potential cheating as far as possible, so results are as honest as possible. Perhaps honest attainment gaps and ‘real’ differentials between students becoming visible once again will even assist employers in believing a First is a First…

    1. Don’t be so bloody daft. The point there is the same one we’ve been saying for years – test results are an accurate depiction not of students’ knowledge and understanding, but of how well they perform in tests.

      I’m neurodivergent and, despite being one of the smartest kids in the class all the way through school, my test scores have always been terrible. My A-level grades, appropriately enough, were A*DD. The A* was in a coursework-assessed subject; the Ds were from subjects that were assessed by exams. Since that wasn’t enough to get me into university, I spent the next two years doing a BTEC – a coursework-based Level 3 course in the same field – and sailed through with a distinction on every single assessment. That’s not because the work was easier; that’s because I didn’t have to sit exams and try to remember things under pressure. For people with ADHD, actively trying to focus on something actually makes it *harder* to do so – which essentially means our memories disintegrate under pressure. Sitting an exam is like trying to eat soup with a fork.

      Stress is also known to degrade the brain’s ability to function under pressure, and in a similar way. Amongst the main neurological symptoms are poor recall, lack of focus and low motivation – these are three of the most common and most typical symptoms of ADHD, and are the entire reason I test so poorly. The biggest difference is that the stress symptoms are given to spiking based on the presence of stressors, whether actual, perceived, or expected – and no matter the underlying causes, it’s quite likely that the exam itself would contribute. So, someone under significant stress would reasonably also be expected to test poorly.

      I hope it wouldn’t be necessary to explain to a sector colleague why and how someone of a racial, sexual, gender or religious minority might be subject to stress that their majority peers are not, or why they would be more likely to be of a lower socioeconomic status that would compound that stress. But if it is, Google Scholar is => way. The same goes for the role discrimination plays in the first paragraph – why do you think we use blind assessment?
