Trending right now: Stormzy, Love Island and… Learning Gain. Sometimes it really feels like everyone is talking about learning gain. But what is it?
For many in higher education, learning gain came to prominence as one of the three major categories in the Year 2 Teaching Excellence Framework, along with ‘Teaching Quality’ and the ‘Learning Environment’. But proxy measures on employment from the DLHE survey do little to provide clarity about what learning gain really is, or what it can be.
Far better context for learning gain is the recently released report on the Evaluation of HEFCE’s learning gain pilot projects. This report summarises thirteen HEFCE-funded pilot projects exploring the feasibility of measuring learning gain. These are part of a wider HEFCE initiative, which also includes a National Mixed Methodology Learning Gain Project, involving ten higher education institutions in England, and analysis of linking existing datasets for possible insights into learning gain. There are also similar projects sponsored by the Higher Education Academy and running at Pearson.
Learning gain can be understood as a change in knowledge, skills, work-readiness, and personal development, as well as enhancement of specific practices and outcomes in defined disciplinary and institutional contexts. Many will be familiar with similar work done in schools on learning gain, as well as the related concept of value-added.
The 2011 book Academically Adrift kick-started much of the debate around learning gain. The book asked what students in the US were actually gaining from their time and financial investment in higher education. Worryingly, it found that a large proportion of American students were not learning much at all.
What is being measured?
As Christina Hughes, who leads the HEFCE-funded LEGACY project, has noted, what you measure about learning depends on what you value and what you think higher education is for, making it debatable, contentious, and at times political.
A range of purposes for higher education were raised at a recent discussion at the University of Winchester: to develop critical thinking; to enhance employability and get a job; to immerse students in a discipline; to foster a sense of moral purpose; and to educate students to become lifelong learners. And then there is the question of what students think they are there for. While these aims are not incompatible with each other, different measures are appropriate for different aims.
The measures being used across the pilot projects cover roughly three aspects of learning: affective, behavioural, and cognitive. Affective measures cover how students feel and approach their learning, through constructs including confidence, resilience and mind-set. Behavioural measures focus on activities students do, such as work-placements, research projects, and engagement with virtual learning environments. What many feel are ‘traditional’ measures of learning fall under cognitive measures, such as critical thinking, problem solving and disciplinary cognitive gain.
It gets trickier when it comes to accounting for starting points on one end and outputs on the other. As a raft of recent reports have found, students do not start—or finish—higher education on a level playing field. We lack straightforward measures to account for what students know when they start higher education. And there are 107 or so different qualifications one can enter higher education with, everything from portfolios and performances to A-levels, or nothing at all.
Output and outcome metrics include problematic measures such as grades (easily manipulated); employability (defined in a host of ways); and cognitive gain (but is this generic or disciplinary-specific?). Selecting outcome measures brings one back full-circle, to asking what the purpose of higher education is: developing knowledge workers for the 21st century economy, creating an engine for social mobility or training professionals for a functioning society? A combination? Or something else?
Across the projects, the importance of context is emerging—different measures are appropriate for different groups of students, subjects and institutions. For measures to be robust and meaningful, they need to be relevant, and the same approach may not work across disciplines as different as fine art and forestry.
Uses of learning gain data
Once the challenges of figuring out what to measure, and how to measure it, are worked out, there are a host of uses of learning gain data:
- Improving student choice
- Enhancing student learning
- Supporting teaching development
- Driving module and course design
- Facilitating external engagement
- Evidence-based quality assessment
- Government regulation
- Offering market indicators
Across the projects, several are using learning gain data to feed into learning analytics systems. This includes student-level dashboards on progression, data platforms for personal tutors to talk to students about their career planning, and institutional analysis on student attainment gaps.
There are possibilities of learning gain metrics feeding into future iterations of the TEF, either as core metrics, as part of the qualitative submission, as subject-level measures or in a new design of the exercise altogether.
Challenges of measuring learning gain
Alas, if measuring learning gain were easy, we would have developed robust metrics long ago. In addition to the philosophical challenges of the purpose of higher education raised above and the methodological challenges of entry points and outcome measures, there are also practical challenges.
Getting students to complete additional tests and surveys is not easy, and motivating students to invest time and effort in assessments that do not count towards their degree can be tough. Many of the instruments developed in other countries, such as the Collegiate Learning Assessment used in Academically Adrift, were not designed with the subject-specific nature of English degrees in mind.
There are debates about the reliability of student self-reported data. Some may have noticed the very literal approach to measuring learning gain in this year’s HEPI-HEA Student Academic Experience Survey: “Since starting your course how much do you feel you have learnt? A lot, a little, not much, nothing or don’t know”. Differences were found across a swathe of student characteristics.
And there is what I refer to as the “triangle of doom” (caution, do not Google if you are feeling stressed) of data protection, data sharing, and research ethics. Balancing what to share, who has access, and what data is used for—all while new data protection laws come into force—is not easy, and it is certainly a debate students need to be a part of, across the sector and within institutions.
If measuring learning gain is not easy, why bother? I would argue that in the absence of learning gain measures, we use what is available. And at the student level that is data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings—all measures I would call awful proxies for learning gain.