The Office for Students’ consultation on the Teaching Excellence Framework is (finally) out, two and a half years after Shirley Pearce’s Independent Review was written (and a year after it was published) – and an astonishing five years since the first tranche of TEF awards were granted.
The consultation picks up multiple aspects – Academic experience and assessment, Resources, support and student engagement (under the umbrella of Student Experience), and Positive outcomes and Educational gains (read: Student Outcomes).
So far, much of the debate has been on the proposed timings for the exercise and the mix of metrics that will be used. But what is this “educational gain” that is referenced?
Both the Pearce Review and the consultation briefly mention the programme of work from OfS’ predecessor body on “learning gain”. This was deemed too difficult to measure (as it was so broad a concept), so it has been recommended that it be renamed “educational gain” – allowing it to be even more expansive!
So long, learning gain
The HEFCE Learning Gain programme started with a bang in 2015. There was a huge amount of interest in how meaningful measures of what students were gaining from their experiences in higher education could be developed – going beyond discourses of satisfaction and salary.
The potential link with the newly proposed TEF (and mooted tuition fee increases) brought high-profile interest. There were three strands of work, with 13 pilot projects involving over 70 institutions at the core.
The problem was that the challenge of defining learning gain, the bureaucracy of launching projects and a lack of coherence across a diverse sector all slowed momentum. The programme lost policy interest when it failed to deliver a single universal measure to slot into the blank evidence box for learning gain in the TEF. Like dark matter, we know it’s out there – we just can’t pin it down.
As the programme moved from HEFCE to OfS, there was a clear separation between the TEF and the learning gain programme. Recommendations for how learning gain, in the absence of a single measure, could feed into the TEF were not welcomed by OfS. There was no reporting on the individual pilot projects, only short evaluation reports of the different strands of the programme, released with no fanfare.
Learning gain died, but then, with the publication of the Pearce report, it was resurrected as educational gain. While praise for the concept, and the logic of including it in a teaching excellence framework, were clear, the report was light on detail about what exactly would be measured – and how. This signals how all the existing measures, dashboards, rankings and frameworks in a data-led sector somehow fail to capture the “Zsa Zsa Zsu” magic of higher education.
The desire for a way to account for what students get out of their higher education experience – the desire that kicked off the quest for a measure of learning gain a decade ago – remains. The Pearce Review nobly recommended its inclusion, but no action was taken on defining it. The current Department for Education stance is positive on the idea as well, instructing OfS “to consider if and how educational gain can be reliably measured”.
The answer is yes, it can be. Just not as a single measure across subjects.
Somehow, a 126-page review of the TEF, a 118-page consultation on the TEF, and another 118-page consultation on student outcomes (not to be confused with the 195-page consultation on constructing student outcomes and experience measures) all fail to mention how to measure educational gain.
It is left to institutions, on a very tight proposed timeline with no lead-in, to report on the educational gain of their students (and how this may vary across subjects, etc). And it is left completely open what can be included – whether it is what you hope or intend students will gain, or what you actually measure. No verification process is suggested.
After huge sector investment in exploring learning gain, institutions are now free to define it as anything they want. An extra five pages is proposed to be added to the provider submission page limit to account for reporting on educational gain (among many other requests). As the learning gain programme of work highlighted, this is a complex and nuanced area. It is not something that five pages – accounting for four years of student gains, defined differently across subjects – could ever hope to cover.
Arguably, the inclusion of educational gain in the TEF is a form of virtue-signalling by OfS. It tells us that OfS cares about all the important outcomes of higher education, while also claiming to be a data-led regulator. But it has abdicated its responsibility to invest the time, effort and collaborative work across the sector needed to develop ways to actually account for it.
The learning gain programme showed it was possible to break down the broad construct of learning gain into multiple facets, and that there were robust ways to measure different aspects of it. These could be reasonably compared across relevant groups of subjects and institutional types and sizes. But since the programme ended, OfS has neither progressed this work nor encouraged the sector to take it on.
As it stands, OfS maintains a handful of dead links from the learning gain pilot projects on its website. There is no learning shared from years of effort to measure gains, no suggestions for how to get started, and no “beware, there be dragons” signs to warn off failed avenues. There are no outputs from the two HEFCE/OfS-run strands of the learning gain programme.
As such, simply renaming learning gain “educational gain” and inviting institutions to report on it however they please does a huge disservice to the sector. It mocks the efforts, however imperfect, to capture it. An open definition and an anything-goes measure mean educational gain will never be coherently defined or put in that blank TEF box. It will remain the dark matter “holy grail” of higher education – we can believe it is out there, but we will never understand it.