
Is educational gain the “dark matter” of student outcomes?

In the new Teaching Excellence Framework, universities are told to show how they're measuring students' learning. Camille Kandiko Howson spots a cop-out

Dr Camille B. Kandiko Howson is Associate Professor of Education at Imperial College London

The Office for Students’ consultation on the Teaching Excellence Framework is (finally) out, two and a half years after Shirley Pearce’s Independent Review was written (and a year after it was published) – and an astonishing five years since the first tranche of TEF awards was granted.

The consultation picks up multiple aspects – Academic experience and assessment; Resources, support and student engagement (both under the umbrella of Student Experience); and Positive outcomes and Educational gains (read: Student Outcomes).

So far, much of the debate has been on the proposed timings for the exercise and the mix of metrics that will be used. But what is this “educational gain” that is referenced?

Both the Pearce Review and the consultation briefly mention the programme of work from OfS’ predecessor body on “learning gain”. This was deemed too difficult to measure (as it was so broad a concept), so the recommendation is to rename it “educational gain” – allowing it to be even more expansive!

So long, learning gain

The HEFCE Learning Gain programme started with a bang in 2015. There was a huge amount of interest in how meaningful measures of what students were gaining from their experiences in higher education could be developed – going beyond discourses of satisfaction and salary.

The potential link with the newly proposed TEF (and mooted tuition fee increases) brought high-profile interest. There were three strands of work, with 13 pilot projects involving over 70 institutions at the core.

The problem was that the challenge of defining learning gain, the bureaucracy of launching projects and a lack of coherence across a diverse sector all slowed momentum. The project lost policy interest when it failed to deliver a single universal measure to slot into the blank learning gain evidence box in the TEF. Like dark matter, we know it’s out there; we just can’t pin it down.

As the programme moved from HEFCE to OfS, there was a clear separation of the TEF and the learning gain programme. Recommendations on how learning gain, in the absence of a single measure, could feed into the TEF were not welcomed by OfS. There was no reporting on the individual pilot projects, and only short evaluation reports on the different strands of the programme, launched with no fanfare.

Gain, resurrected!

Learning gain died, but then, with the publication of the Pearce report, it was resurrected as Educational Gain. While praise for the concept, and the logic of including it in a teaching excellence framework, were clear, the detail on what exactly was being measured – and how – was light. This signals how all the existing measures, dashboards, rankings and frameworks in a data-led sector somehow fail to capture the “Zsa Zsa Zsu” magic of higher education.

The desire that kicked off the quest for a measure of learning gain a decade ago – for a way to account for what students get out of their higher education experience – remains. The Pearce Review nobly recommended its inclusion, but no action on defining it was taken. The current Department for Education stance is positive on the idea as well, instructing OfS “to consider if and how educational gain can be reliably measured”.

The answer is yes, it can be. Just not as a single measure across subjects.

Somehow, in a 126-page review of the TEF, a 118-page consultation on the TEF, and another 118-page consultation on student outcomes (not to be confused with the 195-page consultation on constructing student outcomes and experience measures), there is no mention of how to measure educational gain.

It is left for institutions, on a very tight proposed timeline with no lead-in, to report on the educational gain of their students (and how this may vary across subjects etc). And it is left completely open as to what can be included – whether it is what you hope or intend students to gain, or what you actually measure. No verification process is suggested.

After huge sector investment in exploring learning gain, institutions are now free to make it anything they want. A whole extra five pages is proposed for the provider submission page limit to account for reporting on educational gain (in addition to many other requests). As the learning gain programme of work highlighted, this is a complex and nuanced area. Five pages could never hope to cover four years of accounting for all of the gains of students, defined differently across subjects.

Gain-washing?

Arguably, the inclusion of educational gain in the TEF is a form of virtue-signalling by OfS. It tells us that it cares about all the important outcomes of higher education, while also claiming to be a data-led regulator. But it has abdicated its responsibility to invest the time, effort and collaborative work across the sector needed to develop ways to actually account for it.

The learning gain programme showed it was possible to break down the broad construct of learning gain into multiple facets, and that there were robust ways to measure different aspects of it. These could be reasonably compared across relevant groups of subjects and institutional types and sizes. But since the programme ended, OfS has neither progressed this work nor encouraged the sector to take it on board.

As it stands, OfS maintains a handful of dead links from the learning gain pilot projects on its website. There is no learning shared from years of effort to measure gains, no suggestions for how to get started, and no “beware, there be dragons” signs to warn off failed avenues. There are no outputs from the two HEFCE/OfS-run streams of the learning gain programme.

As such, simply renaming learning gain “educational gain” and inviting institutions to report on it in any way they please does a huge disservice to the sector. It mocks the efforts, however imperfect, to capture it. An open definition and an anything-goes measure mean educational gain will never be coherently defined or put in that blank TEF box. It will remain the dark matter “holy grail” of higher education – we can believe it is out there but will never understand it.

3 responses to “Is educational gain the ‘dark matter’ of student outcomes?”

  1. The Black Box of Educational Gain could become the Holy Grail that many employers would like to see accompanying graduating students alongside their degree certificates.

    In this case, the content of the Educational Gain Box would include the progress students have made in developing their self-confidence; the ability to work in teams with others to solve problems and come up with solutions; the ability to communicate and explain better, both verbally and in writing, with other adults about a range of subjects; and reports on other matters such as the development of emotional intelligence.

  2. One potential implication is that this move will make universities even more like schools: learning objectives will be tested by means of assessment objectives that are marked against a prescribed set of details about what the student is expected to have learned in terms of subject content and analytical skills. This closed loop reduces tertiary education to secondary education.

    The move towards continuous assessment has already crushed university students’ ability and inclination to learn in the context of higher education proper – increasingly it’s just a transactional, target-driven enterprise about getting grades, and this diminishes learning.

    See: https://wonkhe.com/blogs/the-performance-of-learning-is-not-higher-education/

  3. All good points – educational gain, however, can only be measured if two things happen:

    (1) Standardised assessment criteria need to be developed to represent each level and used in ALL modules at that level, to avoid module leaders creating their own bespoke criteria, as this creates too much variation to make valuable comparisons of student progress. This should not be challenging, as the QAA sets out level-specific skills in the FHEQ upon which internal QA is based (but these may not currently extend far enough into assessment design).

    (2) UK HE needs to reconsider the degree classification system. Quantitative rubrics can help standardise marking and reveal student learning trends for each criterion – but this is difficult to do when, for example, 70% is classified as a higher-band score, rather than, say, 90%. This has always caused problems in UK HE: some (technical) exams are scored out of 100, while written academic work, although measured in percent, is essentially scored to a maximum of 80-85, causing an offset where students have a variation in assessment methods (and disadvantaging some modules/programmes/departments over others). More importantly, until this is addressed, standardising rubrics is difficult and hinders performance measurement, losing valuable data for HEIs to understand quality. Many in HE will be reluctant to change, as the degree classifications are traditional. However, we must put aside existing/sentimental perspectives and prevailing attitudes if we wish to find modern solutions.
