Making higher education assessment literate

Artificial intelligence tools represent the latest in a series of challenges to assessment in higher education. Phil Newton asks whether the sector has the skills needed to address these.
Phil Newton is a Professor at Swansea University.

The promise of an accredited formal teaching qualification, including “competences required for all teaching staff”, heralded a new era of professionalisation that would have put teaching on a par with research excellence.

A failure to deliver on those promises means UK higher education is left exposed to some serious challenges in the very near future.

Armchair chat?

At the recent Wonkhe Secret Life of Students event there was an onstage armchair chat with John Blake, Director for Fair Access and Participation at the Office for Students. John had some typically blunt appraisals of many aspects of current policy and practice in the HE sector, but one in particular caught my ear: he bemoaned the apparent lack of assessment literacy shown by academic staff in universities. He contrasted this with his experience in the school sector, where he believes teachers are fully versed in fundamental concepts from assessment theory, such as reliability and validity.

An audience member challenged this (full disclosure: the audience member was me), pointing out a key difference between schools and HE: if someone is teaching in a school, they are normally qualified, having been trained through a postgraduate qualification. Until recently, this was a regulatory requirement.

My question to John Blake was: surely the HE regulator should… regulate? Make it a requirement for HE teachers to be similarly equipped with the tools they need, including assessment literacy? Surely they could deliver on those 20-year-old promises?

A challenge

We face many significant challenges in assessment. Students are cheating in large numbers, simply because they can. Heavy-handed responses, such as remote invigilation or other surveillance, are strongly disliked by students. AI-based chatbots like ChatGPT and Bard are going to cause an avalanche of disruption to our assessments at a scale and, critically, a speed that the higher education sector is simply not equipped to cope with. AI tools will soon be ubiquitous in most word-processing software, so students will all be using them to complete coursework this summer, possibly without even realising it. This rapid disruption is also affecting the graduate jobs our students go on to; another change that needs to be reflected in our assessment practices.

To address these challenges, and actually harness the undoubted positives and promise of these new tools, we need to be properly trained in effective practical teaching skills and AI literacy. A good assessment is valid, reliable, inclusive, authentic and more. An understanding of all these concepts is essential to ensuring that teaching and assessment do what they are supposed to do, and to managing the challenges above.

Is John Blake’s criticism fair? There is little evidence either way; research on the assessment literacy of staff in higher education is, in the words of the research itself, “in its infancy”, and well behind that in schools; this itself suggests John Blake has a point. Our external examiners describe themselves as “assessment illiterate”.

This is despite two recommendations made by the 1997 Dearing Report (26 years ago!):

We recommend that institutions of higher education begin immediately to develop or seek access to programmes for teacher training of their staff.

It is especially important that research outcomes are used to inform policy and improve practice in learning and teaching.

What would good training look like?

Where are we then? There was once the promise of PGCerts in teaching practice, leading to Fellowship (FHEA) of what is now AdvanceHE. There are still many excellent programmes, but the lack of any regulatory requirement for them has left many hollowed out, relying largely, as FHEA does, on reflection on one’s current teaching practice rather than on training in the practical skills needed to be an effective teacher and assessor. In addition to our apparent illiteracy in assessment, equipping staff with FHEA does not seem to help universities perform well on the NSS either. The comparison I made with school teaching is also weakening, perhaps in the wrong direction, with changes to teacher training.

There are lots of effective and evidence-based ways to teach and assess, but we don’t make enough use of them. Why not? Higher education providers will, like any organisation, respond to the incentives placed in front of them; just look at the effort that goes into NSS, TEF and REF returns. Yet John Blake demurred when I asked whether it should be a requirement for HE teachers to be properly trained, even though the regulator is perhaps best placed to deliver both the carrots and the sticks needed to address his own criticism of us.

Higher education is really important; society trusts us to train people for almost all of the important roles required for a modern functioning democracy. It is also really expensive, especially for our students. Compared to schools, accountancy, medicine or law, it seems indefensible that there is no regulatory requirement for us to be qualified to teach and assess. This regulatory weakness is perhaps the single biggest cause of the lack of assessment literacy bemoaned by John Blake and others (for the record, myself included).

Our apparent illiteracy is going to be starkly and imminently exposed as these new disruptive AI tools upend our current assessment policies and practices and revolutionise both teaching and the graduate destinations of our students. There is a real risk that HE will be undone by its own skills gap.
