How much learning do students do at university? That was the question that Vince Cable and David Willetts asked former funding council HEFCE to answer back in February 2014.
The idea was to “consider whether there are better indicators, such as measures of student engagement, to provide information on what a high quality student experience looks like”.
Now, the learning gain programme has ended with rather a whimper – three reports covering a pilot national mixed-methods initiative (NMMLGP), thirteen institutional pilot projects, and a peculiarly abortive in-house data-driven attempt (HELGA) have all been published by OfS.
You can always tell when OfS-published projects look like a problem – the first two of these are accompanied by a pinkish-orange box that dissembles: “The report below is independent research which we have commissioned. As such, it does not necessarily reflect the views or official position of the OfS”.
I’ve told the messy birth myth of the Learning Gain programme on Wonkhe before, where my efforts earned a swift riposte from OfS’s Yvonne Hawkins, who took issue with my “Eeyorish” perspective on the concept. Whether this made OfS “tiggerish”, or simply “bears of little brain”, was not clear at the time.
Was I right about the plight of the programme? These evaluations suggest that there is absolutely no evidence that any kind of sector-wide measure of learning gain will meet even the data integrity standards of the TEF. There’s a great deal of maddeningly inconvenient evidence that problems with programme design and scope caused participants no end of difficulties. But that’s not really what we set out to learn. They even resort to the straw-grasping suggestion that future research will benefit from what we now know doesn’t work.
Compare and contrast
But last week also saw the glitzy release of the tiniest of updates to OfS’s grade inflation research. Unlike with learning gain, the full awesome power of the press and public affairs office was brought to bear – there’s been a wall of mainstream coverage, and even everyone’s favourite current Westminster Secretary of State for Education Damian Hinds found time to weigh in.
So on one level, we know that OfS knows that there is no evidence that learning gain can be measured by interventions or data analysis. On the other, there is very public hand-wringing that the increase in the number of firsts or upper seconds overstates improvements in student learning. Does anyone else see a problem here?
Academics measure learning gain nearly every day. But they can’t do so precisely, as learning is decoupled from both input and output measures. You can’t say a given student will learn more (or less) if they get more lectures – and you can’t say that students with three A*s at A level will learn more (or less!) than those with an Access to HE qualification if both end up with a first.
If you look at LEO salary data, we know that there is no real link between any tentative measure of the quality of the student academic experience (anything from the NSS, to student-staff ratios, to the actual degree classification) and salary after one, three, or five years. We do know that there is a link between institutional and subject choice and salary – but it is at least arguable that the former is a form of socio-economic sorting, and the latter has more to do with the state of the job market.
This leaves us with a pretty fundamental question – what do universities do and how do we know they are doing it? This is the question that needs to be answered before we start comparing how well each institution happens to do it, and why. It’s a very complex question, and the learning gain research – whilst flawed at inception – was at least an attempt at trying to understand this.
Some of the people best qualified to answer this question are those who actually research teaching in HE, and support those who teach in developing their practice. The end of the learning gain programme represents the last throw of the dice for a lot of these people. The quality enhancement data revolution means that providers are laying off educational developers to hire data analysts to massage their way to a better TEF. The lack of available funding to support low-level, results-focused practice research (such as that which used to come from HEFCE, the HE Academy and, to a lesser extent, Jisc) means that the business case for employing such a set of skills is being eroded.
Lots of talented educational developers are freelance, staking their livelihoods on the slim bet that learning design would lead to a resumption of the “what works?” culture that did such a lot to improve teaching in the 00s. It wasn’t a good bet, but it was pretty much the only one available.
Now, having dismantled the structures that analyse and support the improvement of actual teaching quality, HEFCE (and latterly OfS) may have done enormous damage to the capacity of the sector. It looks like Tableau isn’t going to fix higher education after all.