There was a fascinating moment the other day at our Secret Life of Students event when someone on a panel suggested that academic staff shouldn’t be personal tutors – on the basis that a more complex and diverse student body needs professional coaching, not amateur chats.
It was a comment that got a very significant cheer.
It was interesting partly because most of the sector right now assumes that those who teach must also undertake research and coach students.
An assumption – with a growing student body – whose pressure points dominate the informal discourse surrounding industrial action.
You can’t always get what you want
Officially and for inevitable reasons, conditions never seem to receive as much attention as pay in industrial disputes. But tales of overwork are all over the socials.
When I was chatting away to a former SU officer the other day I was reflecting on the way in which that role has developed over the years – and I think there are parallels.
Full time student leaders do lots of things – leading groups, communicating with members, attending committees, running projects and so on. They do more things, for more people, than they used to.
And each of those things has become more complicated – with less room for error – than it used to be.
We might also safely assume that the different things that a traditional all-rounder academic does have each become more complicated, with more onerous demands on each component.
I’m thinking of all the bits – teaching, assessment, research, pastoral, knowledge exchange and so on.
And I think we can also assume that each of those “higher order” tasks has also become more sophisticated and intense – because leaps in IT have automated the aspects of these tasks that used to represent a bit of down time (although I accept that for design reasons, many will argue that the drudgery is still there too).
For both roles, the student body has at the same time become not only larger but much more diverse – and success now requires an understanding of multiple perspectives, experiences and circumstances.
Yet I worry that our mental models of what good looks like in these roles haven’t really changed.
Ask anyone to think about simplifying the “multiple roles” thing – more folks doing just teaching, more doing just research and so on – and many argue that you may lose the amazing benefits of combining those into a single person.
And if you were to simplify the student diversity thing by streaming them all into different classes or institutions, you would lose the bridging social capital benefits of being in rooms with people “not like me”.
But if in 1992 the sector knew it was intending to massify – say, to triple in student numbers over the following thirty years – would it really have intended to triple the overall research output of the sector at exactly the same pace?
And as such isn’t the question then – when you have what are essentially multidisciplinary roles but you don’t want to grow each of the components of the “multi” at the same pace – how do you scale?
Under pressure that burns a building down
One way to cope with there being more to do with more people is to muddle through and iterate – by handing ever more complex things to already busy academics to deal with.
But eventually you end up squeezing too much from everyone – because you’re demanding more accountability and consistency in the different tasks, and more support for more diverse students. And that’s even if all you did was triple the academics – which universities aren’t.
Or you can hire ever more professional staff to address them – leading to endless fraught interactions and a bizarre combination of overwork and fragmentation of efforts.
But then you have to plonk generative AI into the mix – contending with potentially fundamental shifts in what academics need to do in each of the specialisms, and what we need students to be able to do by the time they graduate.
Because as well as a whole new level of intensive overwork, the real change that we’re on the precipice of is that evidence gathering, synthesis and summarising in a particular style are becoming things that we can get a machine to do instead.
Welcome to the machine, where have you been
If you set or mark essays and you’re not signed up to OpenAI, get an account and test out this week’s launch of GPT-4. A lot of the things I’ve seen people say generative AI can’t do, it already can with the right prompt – and the writing is much better too. Already.
Next, have a look at this week’s launch of Microsoft 365 Copilot, and remember that this will be built into the office software that most universities issue to students in a matter of weeks. This is generative AI that will take your data, plus publicly available data, and fuse it. “Make a PowerPoint out of this blog in my style” will be possible in days.
The capabilities are mind-blowing – and they replace all sorts of tasks outright, which means even the “we get our students to present and do things instead of just essays” brigade will need to rewrite their assessments.
We will need to worry that where Microsoft’s Satya Nadella imagines that taking the drudgery out of work will mean more time spent on magical creativity, the intensity of what’s left will only… intensify.
Meanwhile as it stands, a series of traditional-pace university working groups are slowly setting terms of reference while the world changes – a kind of comforting denial state as the sector braces for a summer of cat and mouse over what counts as cheating. Some of the betas in Google Docs and Word will be running by May. Are we really going to give every student a bag of weed and then set up misconduct panels if they smoke it?
It’s all a bit like when the first calculators appeared and couldn’t do quadratic equations – with people saying “well ChatGPT is fine but it can’t do X or Y”. It probably already can. And it’s not clear to me that every university assessment can be on “writing good prompts for the AI” or “we’ll do a viva on everyone”.
The bigger issue isn’t that the assessment won’t work or that our meaning of cheating will change. It’s that if the process of synthesising, processing and summarising existing information is now so easy to automate, it rips the heart out of almost every undergraduate degree – because those degrees will be developing skills that society no longer needs.
Knowledge is power
One option is to pivot towards the practical and out towards application. This will be easier for some than others, but a glance at the undergraduate degree structure at Roskilde University in Denmark is interesting – because half of the credits are for applied projects. In every subject. Every year.
That just happens to carry reduced academic delivery costs too.
Another option I’ve heard is the idea that all generative AI does is summarise and synthesise current knowledge. It might do it in a higher order way, but we’ll need new stuff for it to work with. On that basis the argument goes that given creativity and new knowledge are what matter, all degrees therefore need to become research degrees.
But are we really saying that we should pivot the entire higher education endeavour toward knowledge creation? How much new knowledge do we need? And if the current HE estate was all engaged in micro pockets of knowledge creation rather than generational knowledge transfer, are we sure that what we have is the best way to organise that?
In other words, as the tech develops, if there won’t be a need to “lit review with some opinions”, are tens of thousands of tiny UG and PGT research projects that nobody will read a smart future?
Embrace the change, escape the mind
What I can see – and this is fuzzy and a bit counterintuitive – is actually something of a move back towards subject immersion rather than modularity, with more group work in which mixed groups of staff and students – diverse in characteristic, level and skill – work on hard(er) questions.
What I can’t fathom is how you assess that when you’re obsessed with the sector acting as a sorting hat. Or, indeed, how that is easily reconcilable with modularity.
I can see that something between a module and a course, run by a team where the teaching “stuff” can grow at a different pace to the research stuff (and in turn at a different pace to the pastoral, KE and so on) – something that has a long form purpose (say over a year) – could be immersive and provide “cosier”, more compelling experiences.
Such units could also be multidisciplinary, and fundamentally place based. The idea of the brilliant/eccentric but ultimately lonely academic isn’t a great fit – but that’s probably a cliche anyway.
Next you have to think about assessment – and perhaps this points to a need to assess not on knowledge, skill or competence but on “contribution”. I’ve always assumed that is how a lot of judgement of others in group settings works anyway, but we pretend otherwise.
And in an age where the unit of resource militates against running the old model thinner, moving more of what we used to call “extra curricular” into credit bearing service learning is surely both desirable and economically essential.
This will need more thought. But the fundamental point, I think, is that the individual student writing the individual essay marked by the individual academic is game over if generative AI can play both roles.
If massification presents a challenge to the breadth of the all-rounder academic, and generative AI presents a challenge to the things that the all-rounder teaches and assesses, we will need to grasp the nettle on the fundamental structures of knowledge production and application both in a university and beyond.
Asking students to sign a thing saying if they’ve used Bing won’t cut it. And nor will sending an academic on a course on how to teach poor students, or how to teach in a way that supports wellbeing.
Even if the sector still sets the essays and students write them, it’s the nagging purposelessness that allows the complexity and efficiency to get to you – while purposefulness is what helps you turn the efficiency to your advantage and helps make the complex simple.
Knowing that what you spend your days doing is worth it gives you much more agency over the activities you agree to undertake. That’s the kind of power that academics, professional services staff and students will need next.
To pick up the conversation about how AI is going to impact on higher education, join us for our online event: The avalanche is here on 19 April.
15 responses to “An avalanche really is coming this time”
The Team Academy approach offers an interesting model – with a lot of team work, group coaching, and assessments which are in part about reflecting on small group projects. This includes some use of literature etc, but much more about what was done, and the evidence for impact etc, including reflections on the process. This means you can actually explicitly bring in AI as a tool to help, but the assessment outcomes need the students to do the work.
Is this the Team academy? https://www.teamacademy.co.uk/issues-we-solve/
Your analysis of the problem is spot on – but I’m not so sure about the solution. Do we have any evidence that most students want to engage in year-long projects and be assessed on their contribution? On paper this sounds great, and is certainly something I would enjoy teaching a lot more than your standard lecture/seminar/essay/exam format. But it is VERY different to what students have experienced at GCSE/A Level and would take most students far out of their comfort zones. There has been a collapse in lecture/seminar attendance since Covid – would students be any more motivated to get out of bed for a meeting for a project that’s due in six months’ time? Plus, my experience of supervising much less ambitious student projects is that students (entirely understandably) need a lot of input and supervision – so it’s not necessarily a workload win.
Completely agree, both that this is a brilliant analysis of the problem and that I’m not sure about the solution. I agree that student feelings on group work more widely make me think that this would not be a popular approach. I also think it comes up against one of the other problems Jim has alluded to, the growing complexity of the student body and their needs (or at least growing awareness of it as something that needs to be accommodated rather than punished, which to be clear I think is a very good thing). Handling reasonable adjustments in a long-term group project isn’t impossible but it can be labour intensive. A lot of students who don’t have a diagnosed disability would feel deeply uncomfortable and anxious working in the way described – you could say that that’s no bad thing given the opportunities for learning from that experience, but I think it would be a hard sell in a consumer-framed HE sector.
Was this written by ChatGPT?
Presumably not, as it actually takes a critical position on something! As others have remarked, this analysis identifies the real issues.
I’m not as pessimistic about the fate of the student essay as this author. One solution is just an extension of adapting questions to open book exams. As long as we continue to assess how well students regurgitate knowledge, of course we’ll be susceptible to AI. But how well can AI use that knowledge to make predictions in novel situations? How well can AI parse messy, complex real life data and draw conclusions? In all honesty, I’m not sure, I need to play with it more. But I am confident that professional and competent educators will be able to adapt assessment to fairly judge student learning. And perhaps, as the author predicts, this will mean universities relying more on specialist teachers.
“But how well can AI use that knowledge to make predictions in novel situations? How well can AI parse messy, complex real life data and draw conclusions?”
Well. A reasonable assumption to make is that either now (with GPT-4) or soon, AI (with relatively minimal inputs) can produce first class quality work in any written assessment at undergrad level, and before long at PGT level. This is what we need to plan for.
… and how many routine assignments actually ask writers to ‘make predictions in novel situations’ or indeed ‘parse messy, complex real life data and draw conclusions’? Certainly GPT4 can do the latter, and probably the former as well as most people.
This is very interesting and thought-provoking, but I think an important point is missed. Even if new technology can gather and synthesise information, we still need students to be able to understand that information, so that they can later make decisions in their workplace. Unless in the future we will rely only on technology to decide for us, with most people being unable to understand where those decisions come from, it will still be important to find a way to test understanding. I am going to be a bit provocative and say that maybe it is time to look again at exams (both in written and oral forms), as a means to test understanding? Of course, together with other forms of assessment mentioned in the article, such as group work and practical applications of what is learnt in class, but given the new potential offered by generative AI technology, I think exams (maybe in a new, more reflective form) may have a comeback.
“we still need students to be able to understand that information, so that they can later make decisions in their workplace”
My thoughts exactly. Asking an AI to summarise a bunch of existing work for you is fine, even ask it to make predictions for you, but a human reading what it produces doesn’t necessarily automatically gain a deep understanding of the field. And it assumes that the information that went into the AI response was correct in the first place. Many students already adopt a surface approach to their subject knowledge because “I can just Google it” or “I can learn what I need from YouTube” – they then end up struggling in project work as they don’t have that deep understanding. We owe it to our students to teach them skills in critical thinking and judging the reliability of source material – even more so in the era of generative AI.
I think it might be time to put down the whiteboard marker and find something else to do.
I agree; the information – however aptly digested by AI, or however easily offered by Google – will still need to get from the page / screen into people’s heads, if they want to use that information in any meaningful way. Switching to decision-driven group project work, inspired by real-time research (not that every kind of research is actually fit for being shared with or pursued by students), won’t get around that fundamental problem.
The drive to intensify comes from a model of human endeavour that says that if we can do more it is always good to do more, so we must do more. This is a wish that will be washed away in the flood. But how about we take the opportunity to not intensify, and instead spend the time freed up on our humanity – on actually enjoying the fruits of our labours instead of just labouring more for fruit we never have a chance to taste?
Great article! But I think we’re missing the point here – this is a neural network that has learned everything we have ever written on the internet and is learning from 100 million humans in real time.
The challenge won’t be assessments – it will be the loss of the highly skilled professions (GPT-4 scored in the 90th percentile on the bar exam, and can do accountancy and even medical diagnosis).
A graduate with critical thinking skills won’t be able to compete with GPT-7, let alone GPT-4.
HE/FE will shrink – the best bet is to add a Python coding or AI prompt writing module to EVERY degree course and add *AI handler* as a graduate attribute.
University will not be seen as necessary as it has been in the last fifty years. It seems certifications will be adequate. Students who are willing to read, become literate and develop critical thinking skills will be the thought leaders in society. Perhaps this seems callous? The youth are in a race to the bottom. They take shortcuts, resist reading, and it may be OK in the future. It’s a brave new world.