
Plenty ventured, but what was gained?

David Kernohan traces the history of the learning gain agenda – an agenda whose ambitious goal would probably represent the most significant finding in the history of educational research

David Kernohan is Deputy Editor of Wonkhe

A victim of circumstance – changes of minister, changes of government, and changes of regulator – the learning gain programme was scheduled to end last month.

Some projects have held final conferences and events. Others (notably two large-scale national projects) either concluded early or have never been publicly spoken of.

It’s a far from glorious end to an initiative that set out with a great deal of ambition – to measure “the distance travelled: the improvement in knowledge, skills, work-readiness and personal development demonstrated by students at two points in time” – a goal that would probably represent the most significant finding in the history of educational research.

The learning gain story

The HEFCE learning gain story starts in February 2014 when Vince Cable and David Willetts asked the former funding council to “consider whether there are better indicators, such as measures of student engagement, to provide information on what a high quality student experience looks like”.

This came within the context of a paragraph focused on information for prospective students – on the face of it, an odd place for the programme to spring from. Other requests covered information on how institutions use income and how information on staff teaching qualifications, student feedback, class sizes and workload could most easily be shared.

Nevertheless, in May 2014 HEFCE released an invitation to tender (at the £30,000–50,000 level for a five-month project) to answer: “what feasible and robust mechanisms exist for assessing learning gain and what is the extent of their applicability (i.e. for what purposes and in which circumstances are they most appropriately applied) in an English context”.

Answers were also sought on what use was already being made of learning gain approaches in the UK and whether the National Survey of Student Engagement (now known as the UK Engagement Survey) was valid as a proxy measure of learning gain.

The results of this five-month study – awarded to RAND Europe, of which more is to follow – were finally published nearly a year behind schedule, in September 2015. The document states that drafting was completed in January 2015, and the metadata in the PDF shows it was last edited in March 2015 – which suggests that HEFCE sat on it for a while.

In the interim, HEFCE released a call for institutional project bids on 27 March 2015 – announcing successful projects alongside the long-promised release of the RAND Europe report in September 2015. It was at this point that we also found out about the other ill-fated arms of HEFCE’s Learning Gain project – the National Mixed Methodology Learning Gain project (NMMLGP) and the Higher Education Learning Gain Analysis project (HELGA).

It now seems clear the RAND Europe report set out (with a few more references) pretty much what the council was going to do anyway.

The report recommended that HEFCE run some pilots and workshops, as HEFCE itself suggested in the tender. As you may have guessed – it wasn’t a great report.

Projects set out to use a wide variety of tools and techniques to plot this “distance travelled” – anything from standardised tests to learning diaries. Camilla Kandiko Howson, writing for Wonkhe in July 2017, offers a great overview of what the projects started to get up to.

Measuring impact

To an English sector dealing with the run-up to the 2015 Green Paper, this all felt a little bit like a quick response to Jo Johnson’s UUK keynote, which contained some of the first details of what we came to know and love as TEF. It was just a coincidence, but shell-shocked wonks and academics put two and two together. An improved quantification of learning seemed sensible – Jo Johnson couldn’t really be suggesting that three NSS metrics, two DLHE metrics, and HESA’s student continuation data measured teaching quality. Could he?

This contributed to the suspicion in which the learning gain initiative was held. When details of the impending demise of HEFCE leaked, the initiative came to look both suspect and irrelevant. Hardly a way to drive engagement.

Project after project reported issues with lack of engagement from students and staff. Why would a student complete a test or exercise that had no bearing on their degree, and that was of uncertain benefit? And why would an academic recommend such a course of action to their students while unsure of the underpinning motivation?

This led to the early closure of NMMLGP, with a year still to run. A pending evaluation will report that issues with low response rates and patchy data collection made the national assessment of a learning gain measure unviable. The methodology drew heavily on the (US) Wabash National Study of Liberal Arts Education, which used a range of learning gain measurement tools between 2006 and 2010 but has yet to produce a formal report (though there are plenty of publications). HEFCE appointed IFF Research to administer the online questionnaire – they’ve tested and polished a lot of the sector surveys we know and love, but make no mention of NMMLGP on their site.

The Higher Education Learning Gain Analysis project (HELGA) is even more mysterious. Since inception, we have heard nothing about it at all. The mix of NSS and DLHE seems hauntingly familiar, but it would be great to see an open data set on course accreditation by professional and statutory bodies. HELGA aims to find another learning gain proxy (see above), but without an understanding of what learning gain actually is, it seems a bit, well, TEFish.

Is learning gain measurable?

David Willetts thinks so. He’s got form – anyone who recalls him telling us to get on board the MOOC train before it left the station (it turned out to be a very expensive replacement bus service) will roll the appropriate eye at this point. At a recent project conference he even encouraged academia to come up with a national and public learning gain model before someone else did it for them.

The short answer is – yes – learning gain is measurable. But it is measurable only in terms of the way an individual student understands their own learning. Interventions like learning diaries and reflective writing can prove very useful to students making sense of their own progress. What learning gain may not be is comparable – which on the face of it makes perfect sense. In what world could we say that a student of economics has gained the same quantum of learning as a student of the piano? Before we even get into the profoundly muddy waters of what level the learning is at, we might also reflect on the sobering thought that 15 credits from a 360-credit degree means – in real terms – very little.

Students enter university with supposedly the same starting points – three good A levels, a foundation year, an International Baccalaureate – but we know in practice that they arrive with very different levels of experience, understanding, motivation, and raw talent. A good initial assessment of where a student is starting from is invaluable to supporting learning – running a similar assessment to tell us where they get to after a year is interesting, but one would hope a course’s own assessments would do a better (formative or summative) job.

To look at it another way, learning gain is a bit like research impact. You can’t really measure it by looking at it directly, but you can measure its influence on other measures. A struggling student who blossoms into a master is a clear example of learning gain; a research finding that influences policy is an example of research impact. But in both examples it’s not (really) possible to spot anything in the former that would lead to the latter, however much we may wish it were the case. And both were defined for HEFCE by RAND Europe.

One response to “Plenty ventured, but what was gained?”

  1. I completely endorse the conclusions of this article. Learning Gain is complicated, individual, and develops through the lifetime of the learner, during and after study. If this is an understanding we can hold onto, then the project had value, though perhaps not representing value for money. As the Chair of the Learning Gain Steering Group, I was fascinated by the idea that it might be measurable, but some things are too subtle for numbers. Continuing to think beyond aggregate data is key, and continuing to see individual students vital.
