What did we gain from learning gain?

In 2015 projects were awarded funding to test measures of learning gain. David Kernohan asks what we've gained from the pilots.

David Kernohan is Deputy Editor of Wonkhe

How much learning do students do at university? That was the question that Vince Cable and David Willetts asked the former funding council, HEFCE, to answer back in February 2014.

The idea was to “consider whether there are better indicators, such as measures of student engagement, to provide information on what a high quality student experience looks like”.

Now, the learning gain programme has ended with rather a whimper – three reports covering a pilot national mixed-methods initiative (NMMLGP), thirteen institutional pilot projects, and a peculiarly abortive in-house data-driven attempt (HELGA) have all been published by OfS.

You can always tell when OfS-published projects look like a problem – the first two of these are accompanied by a pinkish-orange box disclaiming that “The report below is independent research which we have commissioned. As such, it does not necessarily reflect the views or official position of the OfS”.

I’ve told the messy birth myth of the Learning Gain programme on Wonkhe before – my efforts earned a swift riposte from OfS’s Yvonne Hawkins, who took issue with my “Eeyorish” perspective on the concept. Whether this made OfS “tiggerish”, or simply “bears of little brain”, was not clear at the time.

Was I right about the plight of the programme? These evaluations offer absolutely no evidence that any kind of sector-wide measure of learning gain would meet even the data integrity standards of the TEF. There’s a great deal of maddeningly inconvenient evidence that problems with programme design and scope caused participants no end of difficulties – but that’s not really what we set out to learn. The reports even resort to the straw-grasping suggestion that future research will benefit from knowing what doesn’t work.

Compare and contrast

But last week also saw the glitzy release of the tiniest of updates to OfS’s grade inflation research. Unlike with learning gain, the full awesome power of the press and public affairs office was brought to bear – there’s been a wall of mainstream coverage, and even everyone’s favourite current Westminster Secretary of State for Education, Damian Hinds, found time to weigh in.

So on one level, we know that OfS knows that there is no evidence that learning gain can be measured by interventions or data analysis. On another, there is very public hand-wringing that the increase in the number of firsts and upper seconds overstates improvements in student learning. Does anyone else see a problem here?

Academics measure learning gain nearly every day. But they can’t do so precisely, as learning is decoupled from both input and output measures. You can’t say a given student will learn more (or less) if they get more lectures – and you can’t say that students with three A*s at A level will learn more (or less!) than those with an Access to HE qualification if both end up with a first.

Looking at LEO salary data, we know that there is no real link between any tentative measure of the quality of the student academic experience (anything from the NSS, to student-staff ratios, to the actual degree classification) and salary after one, three, or five years. We do know that there is a link between institutional and subject choice and salary – but it is at least arguable that the former is a form of socio-economic sorting, and the latter has more to do with the state of the job market.

Question time

This leaves us with a pretty fundamental question – what do universities do, and how do we know they are doing it? This is the question that needs to be answered before we start comparing how well each institution happens to do it, and why. It’s a very complex question, and the learning gain research – whilst flawed at inception – was at least an attempt to understand it.

Some of the people best qualified to answer this question are those who actually research teaching in HE, and support those who teach in developing their practice. The end of the learning gain programme represents the last throw of the dice for a lot of these people. The quality enhancement data revolution means that providers are laying off educational developers to hire data analysts to massage their way to a better TEF. The lack of available funding to support low-level, results-focused practice research (such as that which used to come from HEFCE, the HE Academy and, to a lesser extent, Jisc) means that the business case for employing such a set of skills is being eroded.

Lots of talented educational developers are freelance, staking their livelihoods on the slim bet that learning design would lead to a resumption of the “what works?” culture that did such a lot to improve teaching in the 00s. It wasn’t a good bet, but it was pretty much the only one available.

Now, having dismantled the structures that analyse and support the improvement of actual teaching quality, HEFCE (and latterly OfS) may have done enormous damage to the capacity of the sector. It looks like Tableau isn’t going to fix higher education after all.

9 responses to “What did we gain from learning gain?”

  1. Thanks for an interesting critique. As one of the evaluators of the NMMLGP research, I agree that we have stripped out what is meaningful, replacing it with what is (or isn’t) measurable – to the detriment of all learners and of the sector generally. Pam Tatlow’s thinking about learning gain appears entirely logical: get your co-design pedagogy in place as the norm; negotiate and agree what is meaningful to be learned within a subject; boom! You can then assess what matters.

  2. That’s a great point, and something on which it would be difficult to obtain universal agreement. It is certainly hard to imagine it being measurable or manageable in the way that, for instance, ALPS is for A-levels (https://alps.education/how-alps-works/). And if you can’t define your problem well, the rest is pointless.

    Of course, the idea of learning gain can’t be separated from grade inflation in the current climate. I was pleased, then, to see this mentioned here too.

  3. Incredibly disappointing on every level, given how important learning gain is to supporting ongoing funding of HE.

    NMMLGP looks to have been (on its own admission):
    1) Poorly designed, poorly administered, and poorly communicated.
    2) Inconsistently approached.
    3) Lacking almost any theoretical or cognitive framework. (Student comments are utterly lost, clearly lacking any sense of what they were meant to learn, and mostly framed as ‘oh, I understand now I need to do more but can’t get the motivation’.)

    No wonder it didn’t find anything useful.

  4. The learning gain pilot projects look much more promising, even if they are first steps. Some useful work was clearly done in many of the pilots, including cognitive frameworks and, in some cases, links to grades.

    It would be a shame not to build on this.

  5. Hi Matt – yes, as one of the evaluators of the scheme, I can say your assertions are fairly accurate for phase 1, and reflect what participating HE Providers tended to think, too. I do feel there was some value in the second phase, in which OfS at least had the courage to recognise the fault lines in the original approach, and in letting us explore some students’ perceptions of what LG did, or in most cases didn’t, mean to them. An interesting finding for me was that most students were not that interested in using LG, however defined, as a way to prioritise Providers.

  6. You were totally right, David. Remember that movie scene where Heath Ledger burned all the money as the Joker? Same script here. £4m of egg on face.

  7. It should not really be that much of a black box – there are lots of good and proven approaches and interventions in pedagogy and andragogy, and growing awareness of good practice around blended learning, learning design, etc.

    The question is probably the wrong one. I’d tie learner gain to courses that are underpinned by recognisable professional or national occupational standards – most undergraduate courses with a focus on the workplace should, I think, be based on these. That is a contentious statement.

    There are also many institutions that ignore all this and still live in a world of lectures, tutorials and exam halls – if they deal with bright learners they will be doing just fine, and learners who have never experienced anything else will be fully satisfied with their learning experience.
