Measuring learning gain isn’t easy, but it has become necessary

Camille Kandiko Howson reports on the opportunities and challenges of the first round of HEFCE projects to find better ways of understanding the learning 'distance travelled' through higher education.

Dr Camille B. Kandiko Howson is Associate Professor of Education at Imperial College London

Trending right now: Stormzy, Love Island and… Learning Gain. Sometimes it really feels like everyone is talking about learning gain. But what is it?

For many in higher education, learning gain came to prominence as one of the three major categories in the Year 2 Teaching Excellence Framework, along with ‘Teaching Quality’ and the ‘Learning Environment’. But proxy measures of employment from the DLHE survey do little to provide clarity about what learning gain really is, or what it can be.

Far better context for learning gain is the recently released report on the Evaluation of HEFCE’s learning gain pilot projects. This report summarises thirteen HEFCE-funded pilot projects exploring the feasibility of measuring learning gain. These are part of a wider HEFCE initiative, which also includes a National Mixed Methodology Learning Gain Project involving ten higher education institutions in England, and analysis linking existing datasets for possible insights into learning gain. There are also similar projects sponsored by the Higher Education Academy and under way at Pearson.

Learning gain can be understood as a change in knowledge, skills, work-readiness, and personal development, as well as enhancement of specific practices and outcomes in defined disciplinary and institutional contexts. Many will be familiar with similar work done in schools on learning gain, as well as the related concept of value-added.
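
To make those concepts concrete, here is a minimal sketch, in Python, of the two most common ways of quantifying gain: a raw gain score (exit score minus entry score) and a simple value-added residual (actual outcome minus the outcome predicted from the entry score). The scores and variable names are invented for illustration; real value-added models used in schools, and those being piloted in higher education, are considerably more elaborate.

    import numpy as np

    # Hypothetical pre- and post-test scores for five students (0-100 scale).
    pre = np.array([45.0, 60.0, 70.0, 55.0, 80.0])
    post = np.array([60.0, 72.0, 78.0, 75.0, 85.0])

    # Raw learning gain: the simple difference between exit and entry scores.
    raw_gain = post - pre

    # Value-added: fit a linear prediction of post-test from pre-test across
    # the cohort, then take each student's residual, i.e. how much better or
    # worse they did than expected given their starting point.
    slope, intercept = np.polyfit(pre, post, 1)
    value_added = post - (slope * pre + intercept)

    for i, (gain, va) in enumerate(zip(raw_gain, value_added), start=1):
        print(f"student {i}: raw gain = {gain:+.1f}, value-added = {va:+.1f}")

The contrast matters: a student entering on 80 may show a small raw gain yet still sit above the cohort trend line, while a student with a large raw gain from a low base may land exactly where the trend predicts.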

The 2011 book Academically Adrift kick-started much of the debate around learning gain. It asked what students in the US were actually gaining from their time and financial investment in higher education, and, worryingly, found that a large proportion of American students were not learning much at all.

What is being measured?

As Christina Hughes, who leads the HEFCE-funded LEGACY project, has noted, what you measure about learning depends on what you value and what you think higher education is for, making it debatable, contentious, and at times political.

A range of purposes for higher education was raised at a recent discussion at the University of Winchester: to develop critical thinking; to enhance employability and get a job; to immerse students in a discipline; to foster a sense of moral purpose; and to educate students to become lifelong learners. And then there is the question of what students think they are there for. While these purposes are not incompatible with each other, different measures are appropriate for different aims.

The measures being used across the pilot projects cover roughly three aspects of learning: affective, behavioural, and cognitive. Affective measures cover how students feel and approach their learning, through constructs including confidence, resilience and mind-set. Behavioural measures focus on activities students do, such as work-placements, research projects, and engagement with virtual learning environments. What many feel are ‘traditional’ measures of learning fall under cognitive measures, such as critical thinking, problem solving and disciplinary cognitive gain.

It gets trickier when it comes to accounting for starting points on one end and outputs on the other. As a raft of recent reports has found, students do not start, or finish, higher education on a level playing field. We lack straightforward measures of what students know when they start higher education. And there are 107 or so different qualifications with which one can enter higher education: everything from portfolios and performances to A-levels, or nothing at all.

Output and outcome metrics include problematic measures such as grades (easily manipulated), employability (defined in a host of ways), and cognitive gain (is it generic or discipline-specific?). Selecting outcome measures brings one back full circle to the question of what higher education is for: developing knowledge workers for the 21st-century economy, creating an engine for social mobility, or training professionals for a functioning society? A combination? Or something else?

Across the projects, the importance of context is emerging: different measures are appropriate for different groups of students, subjects and institutions. For measures to be robust and meaningful, they need to be relevant, and the same approach may not work in disciplines as different as fine art and forestry.

Uses of learning gain data

Once the challenges of what to measure, and how to measure it, are worked out, there are a host of uses for learning gain data:

  • Improving student choice
  • Enhancing student learning
  • Supporting teaching development
  • Driving module and course design
  • Facilitating external engagement
  • Enabling evidence-based quality assessment
  • Informing government regulation
  • Offering market indicators

Across the projects, several are using learning gain data to feed into learning analytics systems. This includes student-level dashboards on progression, data platforms for personal tutors to talk to students about their career planning, and institutional analysis on student attainment gaps.
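
To illustrate the attainment-gap analysis mentioned above, the sketch below computes the gap in mean final marks between two student groups from hypothetical records. The field names and figures are invented for illustration; real analytics platforms draw on far richer linked datasets.

    from collections import defaultdict

    # Hypothetical student records: (student_id, demographic_group, final_mark).
    records = [
        ("s1", "A", 68), ("s2", "A", 72), ("s3", "A", 61),
        ("s4", "B", 58), ("s5", "B", 64), ("s6", "B", 55),
    ]

    # Collect final marks by group, then take the mean of each group.
    marks_by_group = defaultdict(list)
    for _, group, mark in records:
        marks_by_group[group].append(mark)

    means = {g: sum(m) / len(m) for g, m in marks_by_group.items()}
    gap = means["A"] - means["B"]

    print(f"group means: {means}")
    print(f"attainment gap (A minus B): {gap:.1f} marks")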

There are possibilities of learning gain metrics feeding into future iterations of the TEF: as core metrics, as part of the qualitative submission, as subject-level measures, or in a new design of the exercise altogether.

Challenges of measuring learning gain

Alas, if measuring learning gain were easy, we would have developed robust metrics long ago. In addition to the philosophical challenges of the purpose of higher education raised above, and the methodological challenges of entry points and outcome measures, there are also practical challenges.

Getting students to complete additional tests and surveys is not easy, and motivating them to invest time and effort in assessments that do not count can be tough. Many of the instruments developed in other countries, such as the Collegiate Learning Assessment (CLA) used in Academically Adrift, were not designed with the subject-specific nature of English degrees in mind.

There are debates about the reliability of student self-reported data. Some may have noticed the very literal approach to measuring learning gain in this year’s HEPI-HEA Student Academic Experience Survey: “Since starting your course how much do you feel you have learnt? A lot, a little, not much, nothing or don’t know”. Differences were found across a swathe of student characteristics.

And there is what I refer to as the “triangle of doom” (caution: do not Google it if you are feeling stressed) of data protection, data sharing, and research ethics. Balancing what to share, who has access, and what the data are used for against the new data protection laws coming into force is not easy, and it is certainly a debate students need to be part of, across the sector and within institutions.

If measuring learning gain is not easy, why bother? I would argue that in the absence of learning gain measures, we use what is available. At the student level, that means satisfaction data from the NSS, employment data from the DLHE, and earnings data from LEO: all measures I would call awful proxies for learning gain.

28 responses to “Measuring learning gain isn’t easy, but it has become necessary”

  1. Really great article. Totally agree with the final paragraph. Have found it interesting working over the last few months on developing Competency-Based Learning courses, and have found it a surprisingly good basis for thinking about learning gain at a curricular level – especially in the flexibility to include cognitive as well as practical competencies through a more ‘patchwork’ approach to assessment.

  2. Very good article, thank you. Measuring learning gain is not easy – perhaps like most things that are really worthwhile.

  3. Great article. The proxy measures set out for prospective students are awful, but they are very tangible HE outcome measures in a consumer market; presenting a tangible and compelling measure of learning gain is a very tough challenge for HE marketers.

  4. What is the point of learning?
    If we can answer that question we might have something to measure.

  5. We need to measure “learning gain” because… the way we already measure learning through assessment doesn’t work? Isn’t that the underlying issue here? If so why do we hide behind meaningless noun phrases that aim to quantify rather than qualify? I prefer to stick with the verb and tackle the real issue.
