Mark is founder and Editor in Chief of Wonkhe

Until last week, the sector's response to the rise of generative AI was focused on thinking about ChatGPT.

The version of OpenAI’s large language model that most have played with, based on GPT-3.5, does not have access to the live internet, cannot access information from after 2021, and has been quaintly relying on “thumbs up / thumbs down” validation from users to learn whether a response is correct.

It has no internet lookup function, can’t access search engines or library databases, and can’t source references. If it doesn’t know an answer, unless you use the right prompts, it just makes it up – in a pretty convincing manner.

As such, much of the debate has focused in two directions – on detection, on the basis that students might use it to cheat, and on integration, on the basis that teaching and assessing students on using it within academic work is inevitable and/or desirable.

And then everything changed, again, last week. In a big way.

Like Clippy, only on steroids

Bubbling away for a few weeks now has been a beta version of Bing – the search engine that nobody uses – powered by what Microsoft has since confirmed is GPT-4. If you’ve used it, you’ll know that Bing search is already “solving” the problem of AI chat not knowing what the current news is.

But given that most universities give an Office 365 (actually called Microsoft 365 these days) licence to each student, the really big news was Microsoft’s announcement of something called Microsoft 365 Copilot.

Although it will appear as a sidebar in the software, and will also appear in the middle of a document making suggestions on content or correcting grammar mistakes, it really is more than just “putting ChatGPT into Word”.

The demo shared by the firm was extraordinary. In Word, you will be able to ask Copilot to create content for a document on a specific topic or based on data available in another document.

In PowerPoint, you will be able to ask it to create a full presentation from a document, style that presentation in a certain way, and add images, animations and speaker notes.

In Excel, using natural language, you will be able to ask it to create or analyse data in the sheet. In Outlook, the AI will be able to summarise emails and help you make sense of the information in a thread.

And in Teams, it will transcribe and summarise meetings, prepare people with updates on specific projects, work out the best time to schedule a follow-up meeting, and so on.

Crucially, it won’t just analyse information in the files you have open or those on the public internet, but also in other documents or files you have stored in your cloud. So in Word, you will be able to ask it to create a document based on the tone, style or information that you have in another document in your OneNote or OneDrive.

It will, if even bits of the demo are as sophisticated as they look, either improve students’ written and presentational work, transform it and raise it to a new level, or render much of it pointless and obsolete, depending on your subject area or outlook.

Prompt progress

Copilot wasn’t the only thing that was released last week.

OpenAI released GPT-4, having been working on model “alignment” – the ability to follow user intentions while also making the model more truthful and less likely to generate offensive or dangerous output.

The number of “hallucinations”, where the model makes factual or reasoning errors, is now significantly lower, and there are major improvements to “steerability”, which is the ability to change its behaviour according to user requests.

So once you get an output, you can ask it to write in a different style or tone or voice. If you’re paying the fee required to use it and you start prompts with “You are a garrulous academicist” or “You are a terse and sarcastic academicist”, you’ll get notably different responses.

You can also ask it to do X drawing on research about Y. So asking it to write an email warning someone about the dangers of something was last week. Asking it to write an email warning someone about the dangers of something in a way that draws on the latest research about kinds of messages we respond to is next week. Telling us why that works best is the week after that.
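For the curious, here is roughly what that kind of prompt looks like when sent to GPT-4 programmatically – a minimal sketch using OpenAI’s Python library as it stood at launch, with a placeholder API key and invented message wording rather than anything from the demos:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder – substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The "system" message sets the persona and tone described above
        {"role": "system", "content": "You are a terse and sarcastic academic."},
        # The "user" message layers in the task plus the research framing
        {"role": "user", "content": "Write an email warning a colleague about phishing, drawing on research into the kinds of messages people actually respond to."},
    ],
)

print(response.choices[0].message.content)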

An entire new skill in designing prompts for GPT models has now sprung up – but may be obsolete in weeks.

Oh – and it can create entire websites by simply looking at a rough design that you’ve scrawled on a piece of paper, and pass a never-ending list of standardised tests used to gain access to professions.

The “skills” that a huge number of degree-level learning outcome statements contain could be automated in days, if not already.

On the midjourney to the future

There was more. Midjourney, an AI image generator, announced a new version that, among other things, has fixed the problem of hands being incorrectly rendered in its creations. It created the image heading this blog from the following prompt in about five seconds:

1970s style toy robot students at a graduation ceremony in the UK, realistic, natural lighting, detailed, 8k, RTX --ar 16:9

As an aside, some of the versions were pretty cute too.

Google’s PaLM API gives access to another large language model that will soon be built into Gmail, Docs, and its Workspace products.

Indian Twitter rival Koo has integrated ChatGPT to help its users create content, and Chinese search engine Baidu has unveiled a bot called ERNIE that can do creative writing, calculation, business communications and Chinese language understanding.

Grammarly – a tool that already helps students with grammar, spelling and punctuation – is about to launch a new “contextually aware assistant powered by generative AI” called GrammarlyGo, which will be able to write from scratch and help revise existing text in an email or a document. And LinkedIn is about to launch AI tools to help users’ profiles become more attractive and help employers write better job descriptions.

My colleague Jim Dickinson has been thinking about how universities are going to change in the months and years ahead. Right now, the majority of universities that have adopted the principle of “strict liability” for academic misconduct – i.e. whether a student intended to commit an academic misconduct offence or not is irrelevant – ought to be thinking about what it means, this summer, when a student might have broken that rule by clicking on a single button supplied in the software the university has issued – and triggered some kind of detection software.

Not so long ago the sector thought that Covid and the move online was the big revolutionary moment in assessment. By the autumn, a move back to in-person assessment – or a more considered move to re-imagine both the higher-order curricula and skills that will be required in the (near) future, and to address the access issues posed by the private and paid nature of these technologies – will surely be a new necessity.

To pick up the conversation about how AI is shaping higher education, join us on 19 April for a half-day online event: The avalanche is here.

One response to “Like Clippy, only on steroids”

  1. As I keep shouting – the problem in UK universities come September 2023 is not that we can detect AI – it’s that we detect it everywhere. That plus strict liability malpractice policies will bring the system to a halt. You cannot scale current systems to deal with this.

    You have to start again with assessment and policy.
