You can’t hold me to rules you never explained and that I don’t understand

As I type, we don’t have a Designated Quality Body in England.


That role is being fulfilled in-house, by the Office for Students itself.

On the issue of academic misconduct, the B Conditions (of registration) and associated guidance say that academic misconduct includes presenting work for assessment that is not the work of the student being assessed, and includes but is not limited to the use of services offered by an essay mill.

Not clear what it excludes, mind.

Later it says that a provider not taking reasonable steps to detect and prevent plagiarism, students’ use of essay mills, or other forms of academic misconduct by students, would likely be of concern.

Doesn’t say what those other forms could be, mind.

In B4, OfS says that “academic misconduct” means any action or attempted action that may result in a student obtaining an unfair academic advantage in relation to an assessment, including but not limited to plagiarism, unauthorised collaboration and the possession of unauthorised materials during an assessment.

Do you have a sense of which generative AI tools count as “unfair advantage”? Or should count as “unauthorised collaboration”? Or would fall under the definition of “unauthorised materials during an assessment”?

Does OfS? Do students?

For clarity, I’m not talking about students daft enough to take text outputs from GPT-3 and copy and paste them into submitted work. Bang to rights.

I’m talking about the thousands of tools that draw on GPT and other large language models to improve research, writing, synthesis, analysis and creation.

Later in the condition OfS makes clear that providers have a duty to provide support to students on understanding, avoiding and reporting academic misconduct.

Have students been offered that support? Are providers confident that students in the middle of preparing for final assessments know whether the tools they’re using count as cheating or not?

Would saying that you relied on intel from a QAA webinar – or worse, a Wonkhe webinar – count if it ever came to the crunch?

Later in B4, OfS says that providers must design assessments in a way that minimises the opportunities for academic misconduct.

But if nobody’s sure what counts as academic misconduct in a world of generative AI, how can providers design assessments in a way that minimises the opportunities for academic misconduct?

As it stands today, this Google search returns no results. That’s not really on, is it?

It’s not like OfS hasn’t been thinking about AI. In Michael Barber’s Gravity Assist review (“propelling higher education towards a brighter future” etc) in 2021 we learned that:

…sophisticated software like proctoring and automated marking can provide for greater consistency by reducing human error and improving impartiality. Tools such as automated marking for essays which use artificial intelligence can save staff time and provide instant feedback to students, but there are potential risks and ethical issues to consider and a need to ensure that these tools supplement rather than replace staff feedback over the duration of their course.

Was anything done?

To be fair, I think it’s interesting that Gravity Assist tends towards thinking about AI from the perspective of providers rather than contemplating students’ use of it.

At the risk of labouring last week’s point about student engagement and understanding their lives rather than just seeking (and then seeming to set aside) their views, there’s a missed opportunity here.

More importantly, you’d want the lead quality body in England to be discussing the reality: given that we depend largely on the production of digital assets to assess learning, when the robots make producing those assets effortless, or when we can’t easily tell robot from human in their production, someone needs to lead a conversation about what good assessment looks like.

I know, I know, it’s all happening very fast. But it was OfS’ choice to replace the Quality Code, and OfS’ choice to go for detail in the B conditions, so I don’t think I’m being unreasonable in thinking that OfS ought to say something, anything, about generative AI.

Oh, and if you’re a provider thinking “I like the silence, they can’t hold me to rules they haven’t explained or clarified”, then surely the same applies to students you might want to accuse of crossing the line with generative AI this spring too?

3 responses to “You can’t hold me to rules you never explained and that I don’t understand”

  1. Time to embrace new forms of working. Did we hear the same howls of “cheating” when the web was invented, or when books began to include indexes? We must move forward and use new methods to advance learning. Google has almost made our memories redundant – now we need to learn to use writing tools just as we use Google to search for us.
    “Providers”, as named in this article, must set assessments that test students’ skills at using tools to manipulate information.

  2. A key skill we ought to be assessing for is the ability to find resources and collate them into a coherent format that meets clearly stipulated criteria. We cannot ban calculators or the use of search engines. We cannot revert to testing memory as the main criterion of assessment. The challenge lies squarely on us, educators and universities, and we are largely burying our heads in the sand. If we see this as an opportunity and think ahead, we may be able to leap ahead of the AI curve.

  3. I think the really interesting conversation about AI is its potential to change the very nature of knowledge (and education). It won’t be enough to assess knowledge acquisition. It won’t be enough to assess how acquired knowledge is understood. What is authentic knowledge/understanding in the world of AI? I don’t think curation is quite it, either. This technology asks deep epistemological and pedagogical questions. It goes far beyond “is this cheating?”
