During a recent discussion that I attended on generative AI, there was that rare and fleeting thing in higher education – a panel in unanimous agreement, within the first five minutes, no less.
With heads nodding and backs patted over the impossibility of ever really banning (or accurately detecting) students’ use of AI, the panel moved on to a much more interesting discussion – the further complexities of what to actually do with AI tools like ChatGPT, Microsoft Copilot and Google Bard.
But as I sat in front of my Zoom webinar window watching the comments cycle furiously across the screen, I got to thinking – why aren’t we considering the other side of the question: what might AI be able to do for students?
There was an interesting moment when the panel looked at some early examples of staff and student guidelines on the use of generative AI in assessment. Even the more progressive end of the spectrum seemed to have settled on some iteration of the following:
Students must declare the use of any AI tools, and refrain from using them without the express permission of their tutor.
For me, while maybe a little better than an outright ban, this approach still seems to wildly underestimate the impact these tools are about to have on our lives.
Prohibition has a long and storied history of not really working, and that’s for things that, sometimes, are hard to get your hands on in the first place.
As has been pointed out on Wonkhe before, in the case of generative AI tools like Microsoft’s soon-to-be-launched Copilot, the approach above would be more like giving every student a car and then asking them to declare whether they intend to use the brakes.
Any staff member in charge of assessment and regulation needs to start paying close attention to what the next phase of AI will look like.
This isn’t a student logging into an external site like OpenAI’s ChatGPT and typing their essay question in. This is a student logging onto their university-provided laptop, using the university-provided Microsoft package, and being prompted, while they sit in the university coffee shop, whether they’d like Copilot to turn their lecture notes into a presentation, using the university style guides they’ve saved in their local files.
If you find the above situation an affront to academic integrity, you’re going to need to start thinking a whole lot more progressively – because that’s likely just scratching the surface of what the technology can do.
A bear in the woods
In much of the coverage of the impact of AI on higher education, I’ve also been surprised by the lack of consideration of how the sector is regulated.
From a regulatory perspective, for now at least, universities don’t just offer education for education’s sake. We are regulated by outcomes – employment, experience, prospects. Regardless of our personal views, in the eyes of the regulator, more than ever before, students are consumers.
Universities have to ensure that their students are getting something (read: prospects and job readiness) or risk the wrath of a certain B3 bear with its Mickey Mouse degree-beating stick.
This, to me, cuts to the heart of current debates around AI.
If academic integrity is viewed as an abstract ideal – something immovable and unchanging – then a student using Microsoft Copilot to help with an assessed presentation should be punished.
But surely, in reality, academic integrity is not this. It is self-defined – enshrined in individual institutions’ regulations, and in how the individuals within those institutions interpret those regulations.
As such, if we as a sector have control over how we view academic integrity, we must also have the power to shift the dial.
In doing so, we have to look at the world around us in order to better design our rules and regulations to suit the needs and behaviours of our students.
In a world where one university is preparing students for the world of work – encouraging the use of all the digital tools at their disposal (including AI) and designing assessments that reward the creative use of such tools – and another is seeking to restrict that technology to fit an antiquated regulatory framework it can’t be bothered to change, which institution do we think students will choose?
A new flavour of assessment: authentic and original
There’s a reason academics and university support staff are losing sleep over generative AI tools, and that reason is plagiarism.
However, as pointed out in this short but excellent presentation by Dr. Philippe De Wilde, if we as a sector shifted our fixation from seeking out plagiarism at every opportunity to rewarding originality, many of our fears would be allayed.
Imagine, for a second, we paused to ask ourselves – do we want students simply to compile resources and parrot conclusions back in a relatively structured way (someone with a better sense of irony might say, in quite a robotic way), or do we want to design assessments that utilise research skills, new technologies, fact-checking and critical thinking, delivered through alternative means including vivas and presentations?
This opportunity to shift towards portfolio-based assessments which better prepare students for the kind of activity they will actually be carrying out in the world of work should not be underestimated.
It should not be viewed as the gradual decay of academic integrity. If we reduce the number of essays and assessments we ask of our students, we give those students (and academics) more time to engage critically with materials, and ensure that when they are assessed, they are asked to justify their choice of tools and research methodologies (including the use of technologies and potential ethical issues).
The student gets the opportunity to build skills using the kind of tools that will make them more employable, and the course leaders get a potentially much richer, more varied marking experience, driven by in-person conversations about subject expertise.
AI literacy is not something we can, or should, punish students for or seek to prevent. It is something we need to actively build into our curricula and assessment practices. And fast.
To pick up the conversation about how AI is going to impact on higher education, join us for our online event: The avalanche is here on 19 April.