What will we take away from learning gain?

With the OfS seemingly uninterested, learning gain evaluator Camille Kandiko Howson asks who will take this important work forward.

Dr Camille B. Kandiko Howson is Associate Professor of Education at Imperial College London

To say the Office for Students (OfS) Learning Gain Programme ended with a whimper may be an overstatement.

The press- and communications-heavy regulator did not even manage a tweet to conclude a four-year, £4 million investment. Perhaps unsurprisingly, a project that started a decade ago and has weathered changes to tuition fees and five university ministers has produced findings that do not fit neatly into the current policy landscape. But by no means is “learning gain” going away.

The drivers for the project remain: in education, money is not the (only) bottom line, and there is still a need for additional ways of accounting for value, especially when “value for money” is a major policy driver. The current regulatory framework is premised on a data-driven system, which depends on high-quality data. The topics the OfS does shout a lot about (grade inflation; attainment, progression and success gaps; subject-level comparisons) all require good data on learning gain, however it is packaged.

Furthermore, although the OfS’s work in England and an earlier pilot led by the OECD, the Assessment of Higher Education Learning Outcomes (AHELO), may have mucked up developing comparative learning gain measures, it remains an area of keen interest elsewhere – from national initiatives in Brazil to huge investment across China and Asia.

What we learned

From the pilot projects (the other two strands of work failed to reach the finish line), we know that there is no single silver bullet metric; that learning goes beyond cognitive gain to include behavioural and affective measures; and that robustly measuring learning gain requires multiple indicators. We also have confirmatory evidence that students learn different things in different subjects, and that there are differences across different types of institutions. It remains to be seen whether this knowledge feeds into the current policy discourse.

More pressingly, the pilot projects identified concerns about the quality assurance system, including the reliability and comparability of grades within and across institutions; variations in grading patterns across subjects; and the efficacy of grade moderation schemes and the external examining system. It is not clear who is taking these issues forward.

There is a lack of comparable data on behavioural and affective measures across the sector, largely due to the dominance of the National Student Survey (NSS). This means students’ satisfaction with their course is prioritised over knowledge about the skills and personal development that students gain and how they engage with their course and institution. Due to a lack of coordinated administration, the projects were not able to verify measures of student engagement from the UK Engagement Survey as reliable proxy measures of learning gain, but international evidence suggests this is one of the best options.

Who are the players

It is clear the OfS is not taking this work forward: the regulator has no stated budget for learning and teaching. This opens the learning gain space to a variety of players, including sector bodies such as the Quality Assurance Agency (QAA), Advance HE and Universities UK (UUK) – but where the funding would come from is another question.

There is a role for the disciplines and professions, which have been the traditional arbiters of what, and to an extent how, students learn, through existing disciplinary learning outcomes, subject benchmarks and professional standards. Some are very active in measuring learning gain, with medicine a leading pioneer: the General Medical Council is developing a new national exam.

Institutions have opportunities to develop bespoke measures and instruments. Several of the single-institution pilot projects offer a template for how measures of learning gain can be developed to evidence outcomes from institutional strategic plans and to facilitate enhancement of the student experience. The University of Manchester embedded measures of affective gains across a variety of subjects. My institution, Imperial College London, is exploring the development of measures of learning gain that are valid in a research-intensive STEM setting.

Institutions can also facilitate networking across relevant groupings. Some of this work was primed by the pilot projects, such as the Ravensbourne project, which developed relevant metrics for small, specialist institutions, and The Manchester College project, which focused on institutions delivering Higher Education in Further Education (HE in FE). The LEGACY project brought most of the Russell Group institutions together.

Future partnerships

Such comparisons highlight what went wrong with the OfS approach: trying to find measures that would apply equally across all subjects and institutions. The future of learning gain has two strands. The first concerns “what” and “how much” students are learning, questions for the disciplines and professions to address. Different measures for different subjects would help address gaps in understanding the value identified by the Longitudinal Educational Outcomes (LEO) data.

The second strand involves developing measures relevant to different institutional types. Early on in the learning gain programme I proposed a matrix of measures and relevant comparisons, a version of a “Chinese menu” (I’m not the first), or the development of institutional groupings for education akin to the Knowledge Exchange Framework (KEF) clusters.

We may not have clear outcomes from the OfS learning gain programme, but it is clear that future work will require parts of the sector to work together in new ways. The subject experts and the quality experts need to get together, the education experts and the data experts need to partner up, and researchers and policy makers need to come together, maybe for a cheeky takeaway.