Our reading of the TEF submissions’ engagement with the concept of educational gain suggests that there is value for institutions in exploring how educational gain is conceptualised in their context, and how it might be measured or assessed.
The direction of travel, however, is towards institutions using the concept to enhance their own internal shared understanding of the educational endeavour, not towards the higher education sector collectively progressing towards an ever-more refined shared definition of the concept or analytical framework for assessing it.
If this is indeed the case, future iterations of the TEF should encourage assessment panels to make judgements about the quality of thinking and action going into mobilising educational gain to serve broader strategic ends – such as understanding the impact of different pedagogical approaches, or helping students make choices about their own approach to the university experience – rather than judgements about the extent of educational gain the institution is able to demonstrate in its students.
The TEF submissions do, however, offer a diverse range of approaches to assessing educational gain – and surfacing these can help the whole sector learn and reflect as thinking on the topic evolves.
Why was educational gain in TEF in the first place?
Educational gain might be described as the holy grail of higher education quality. If institutions can evidence the gains made by students in their time at university – often described as “distance travelled” – it would provide clear evidence to policymakers, politicians and the electorate that universities should be well-funded and can be trusted to get on with educating students.
In principle, the idea of educational gain is straightforward – a student arrives at university ready to learn and develop and leaves several years later as a graduate. During that time – hopefully – they acquire knowledge, develop skills, and grow in confidence and self-efficacy, mobilising the social, cognitive, cultural and affective facets of learning.
But trying to nail down the specific nature of those gains, never mind measure them with any degree of robustness, is like nailing jelly to a wall. The very purpose of higher education and the qualities it brings to students are highly contested, as are the different approaches taken by different types of universities. And there are a host of different possible measures.
The independent Pearce review of the Teaching Excellence Framework recommended that the idea of educational gain be retained in future iterations of the TEF. An earlier national pilot project on “learning gain” was unable to arrive at a single universal measure that would have application across UK higher education – though it did identify a number of potentially effective measures of different dimensions of learning.
Accepting the recommendation, the TEF guidance that OfS issued invited institutions to articulate their own version of educational gain – a move that caused learning gain expert Camille Kandiko-Howson to argue on Wonkhe:
The inclusion of educational gain in the TEF is a form of virtue-signalling by OfS. It tells us that it cares about all the important outcomes of higher education, while also claiming to be a data-led regulator. But it has abrogated its responsibility to invest the time, effort and collaborative work across the sector to develop ways to actually account for it.
In practice, this meant that many institutions felt obliged to retrofit a concept of educational gain to their current practice around learning, teaching, and curriculum. How institutions approached that challenge – under considerable time pressure at that – will have inevitably shaped what they came up with in their submissions.
Of the 157 provider submissions available to us, the term “education gain” or “educational gain” was used at least once by 116 providers – almost 75 per cent engaging with the concept in some way. The term appeared less frequently in panel statements (66 out of 157) and student statements (24 out of 143).
Adam – who has published with Ben Kotzee a corpus linguistics-assisted analysis of qualitative provider statements in TEF2 – broke out his magnificent number-crunching machine for the whole corpus of TEF provider statements to identify the words that show up most frequently near the term “education* gain”. Not surprisingly, we found frequent derivatives of “measure” (measure, measuring, measures, measured, measurement) and “definition” (definition, define, defines, defined). Other similar terms also showed up regularly: approach, model, evaluate/evaluating, travelled, distance, concept.
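To give a flavour of the mechanics – rather than reproduce Adam’s actual pipeline – the sketch below shows one simple way a collocation count of this kind could be run over a folder of plain-text provider statements. The folder name, the tokenisation and the five-word window are all illustrative assumptions.

```python
# A minimal sketch (not the authors' actual pipeline) of the collocation count
# described above: for every occurrence of "education gain" or "educational gain"
# in a corpus of plain-text statements, tally the words appearing within a small
# window either side of the phrase.
import re
from collections import Counter
from pathlib import Path

WINDOW = 5  # words counted either side of the target phrase (assumed)

def collocates(text: str, window: int = WINDOW) -> Counter:
    """Count words appearing within `window` tokens of "education(al) gain"."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i in range(len(tokens) - 1):
        if tokens[i] in {"education", "educational"} and tokens[i + 1] == "gain":
            left = tokens[max(0, i - window):i]
            right = tokens[i + 2:i + 2 + window]
            counts.update(left + right)
    return counts

totals = Counter()
for path in Path("tef_provider_statements").glob("*.txt"):  # hypothetical corpus folder
    totals.update(collocates(path.read_text(encoding="utf-8")))

print(totals.most_common(20))
```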
For the educational gain strand of our crowdsourced TEF analysis we asked our volunteer readers to log answers to two key questions: how does the provider articulate or define educational gain for their context, and how does the provider evidence educational gain?
Definitions and approaches
There were a variety of ways that providers defined educational gain. Many providers suggested that educational gain is multi-faceted, most commonly incorporating academic/cognitive, personal, and work/employability gains. Some added social impact or citizenship to the mix. But this then left providers confronting the extent to which they could evidence the full range of students’ educational gains. One of our readers suggested they could identify three main ways that providers approached the challenge, and our reading generally concurred with this analysis.
In the first set of approaches educational gain was articulated as good student outcomes, especially in relation to progression to employment, including in some cases where the provider had a particular mission to widen participation and enhance social mobility. The fact of progression – and in some cases, positive employer or wider industry feedback on the quality of graduates – was held to evidence educational gain.
A second set of approaches tied educational gain to a curriculum framework, typically one that incorporated a range of graduate attributes. Success in “embedding” the framework across the range of a provider’s programmes was held to be evidence of achieving educational gain. In some cases – though examples of this were also observed in providers in other categories – providers had adopted a novel set of measures to indicate the success of embedding graduate attributes, for example surveying students on how they felt their course had contributed to growing their personal confidence, career-readiness, or development of a specific attribute. Some cited internal institutional award schemes for recognising students’ achievements outside the academic domain.
A third set of approaches sought to deploy or develop measures to track “distance travelled” in one or more dimensions of learning. Some used the Guardian league table’s established “value added” measure or the government’s Social Mobility Index. Some drew comparisons between access and participation plan data and Graduate Outcomes data. Some cited internal pre-arrival or induction surveys or assessments of student skills, capability, or career-readiness that could then be used both as a steer for introducing tailored support activity, and as baseline data for demonstrating educational gains.
In some cases this took the form of a student portfolio in which students were invited to reflect on and assess their own skills at points during their programme. Some sampled student work at different levels and made an academic assessment of progress. Some made reference to learning engagement analytics as a data source that can act as a proxy for educational gain (ie engagement analytics tend to measure the kind of student behaviours that are associated with learning).
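As a purely illustrative sketch – not drawn from any specific submission – a “distance travelled” measure of this kind ultimately reduces to comparing a baseline score captured at induction with the same instrument repeated later in the programme. The student IDs, scores and scale below are invented for the example.

```python
# Illustrative only: per-student "distance travelled" from a baseline
# self-assessment at induction and a repeat of the same instrument later on.
# Student IDs, scores and the 1-5 scale are assumptions, not real data.
from statistics import mean

baseline = {"s001": 2.4, "s002": 3.1, "s003": 1.8}   # induction self-rating, 1-5 scale
follow_up = {"s001": 3.6, "s002": 3.3, "s003": 3.0}  # same instrument, later in the programme

# Gain is simply the later score minus the baseline, for students present in both waves
gains = {sid: follow_up[sid] - baseline[sid] for sid in baseline if sid in follow_up}

print("Per-student gain:", gains)
print("Mean cohort gain:", round(mean(gains.values()), 2))
```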
In education research terms, only the third approach technically counts as measuring “educational gain”. It’s also notable that many providers indicated that at the point of submitting they had managed to take an initial pass at thinking through educational gain, and signalled their intent to further explore and refine their chosen approach and measures. Strategic and organisational changes cited included setting up education committees and launching specific projects using theory of change methodologies.
Institution-wide frameworks
Two providers in particular presented institution-wide frameworks that adapt their own policy and practice in a structured way in response to educational gain. Whether this reflects a strategic move already in progress or a response to the hint that educational gain will become a key regulatory measure in the future remains to be seen.
One example of such a strategic framework provides a definition of educational gain:
the experiences we provide that make a difference to our students’ lives, enriching and accelerating learning and personal development beyond what otherwise might be achieved.
Students are tracked over three life-cycle stages as part of a Theory of Change approach: “Transitioning – Moving in”, “Developing throughout – Moving through” and “Progression and Employment – Moving on”. There are specific projects at each stage, with some metrics already identified and achieved as well as plans for “mature metrics” in the future. The statement then outlines how all of these projects and metrics will be combined to produce an “educational gain index”, measured at university, subject and course level.
A second example provides an in-depth analysis of an institutional strategy developed since 2015, including some “pragmatic” measures for 2023 with future plans outlined. This example uses almost 1,500 words to outline its strategy. Like the first example, this approach is institution-wide and aims to set out a vision and measure a holistic student experience, including academic achievement, skills for potential career outcomes, mental wellbeing, and opportunities to engage in a wide range of learning inside and outside the classroom.
This large-scale, institution-wide approach is described as being research-informed, referencing involvement in the HEFCE learning gain project as well as peer-reviewed articles by academics at the provider. The project draws on these publications, using their findings to develop “dimensions of learning gain”, with a focus on cognitive gain linked to grades in the first instance. Examples of gain are linked to access and participation, and quantitative gaps are identified for different demographic groups. The research-informed approach continues with methodology and limitations sections and future directions. And impressively (and maybe a little presumptuously?) the provider looks forward to TEF 2027 by engaging in projects with international universities and the Quality Assurance Agency.
Given the difficulty, and the lack of an agreed definition of educational gain, many providers stated that such measurement was difficult or not possible. Other issues raised included: specialist providers questioning whether work on educational gain should be subject-specific or address more general development; the need to look at multi-dimensional measures; the problems with using degree classification; providers stating that their education is “more than a degree”; the contention that student engagement and outcomes may provide some measure, but not enough; and the claim that students have the agency to define their own outcomes in a highly personalised approach, making a consistent measure difficult.
It’s also worth noting that there seemed to be little consistency in panel judgements around educational gain – all of these approaches found favour or didn’t with different panels, and in some cases providers were commended for their approach to realising educational gains for students where they had not specifically explained how they did this in their provider statement.
Keeping the conversation going
Our initial assessment of how providers approached educational gain is that it serves two core purposes for institutions. The first is to deeply interrogate the real impact of pedagogical and extra-curricular provision in ways that can prompt an institution to ask itself searching questions, not only about how it knows it is making a difference to students’ lives through education, but why and how that is happening, and whether it is happening at the scale and depth of inclusivity required. Student outcomes offer a heuristic for some of this analysis, but can’t on their own generate a meaningful theory of change for how they might be improved or gaps addressed. In that sense, mobilising and measuring a concept of educational gain can be strategically very important.
The second purpose is to articulate to students themselves a theory of higher education that can prepare students to make good choices about how they spend their time while registered with a provider and make sense of their experience as part of a developmental trajectory.
Neither case requires there to be a definition or approach that is common across the sector – though there may be some value in sharing practice and establishing some level of consistency or standards that can help to develop approaches further without insisting on a shared measure or basis for comparative numerical judgement between providers.
Though the nature of TEF requires that qualitative judgement is exercised in the interests of establishing a hierarchy of quality, the state of practice at this stage suggests that educational gain should remain firmly at the impressionistic end of the exercise – ie a judgement of the effectiveness with which a provider mobilises its concept in the service of its wider strategic agenda for quality, rather than transitioning educational gain into any kind of hard measure of quality in and of itself.
The range of approaches also surfaced a number of unresolved questions that could focus the sector’s thinking on how to take the conversation forward:
- There is no consensus on whether providers are tracking and measuring individual student progress, or measuring – for example through student surveys – in order to evidence that an adequate proportion of students in any given cohort are achieving or progressing. To some extent this question is a function of a wider one of who “owns” educational gain – the student, through their own agency and effort to progress and realise their distinctive ambitions, or the university, which both sets out the expected journey and endpoint and creates the conditions for these to be realised. Inevitably, the answer is “a bit of both” – but it’s worth thinking through how the way that providers choose to conceptualise and measure educational gain tilts the balance one way or the other.
- The question of who (or what) is qualified to judge educational gain remains open – and there are obviously pros and cons to different approaches. Academics – either individually as assessors or personal tutors, or in groups through processes like programme review – employers, and students themselves are all considered in different approaches to be able to offer robust judgements. We did not see many mentions of externally validated assessments – for example of generic cognitive skills – but hypothetically it would be possible to use these as well.
- The interaction of educational gain with educational and social disadvantage needs further thought – some providers saw the closing of awarding or progression gaps, or positive employment outcomes for less advantaged students as being educational gain; others explained their curriculum framework or graduate attribute approach as being designed in light of the specific demographic of their student body; still others did not really acknowledge that students’ access to educational gains is shaped by their access to social, cultural and economic capital. We found those providers who are actively mobilising the idea of capitals in their conception of educational gain very interesting, and this could be a fruitful area to explore further. It’s also noticeable that some of the most intensively theorised and developed concepts of educational gain can be found in provider submissions from colleges – there is much to be learned from this part of the sector on this topic.
Although the holy grail of a single measure of educational gain remains out of reach, the TEF submissions suggest that by and large the sector has engaged with the challenge of articulating educational gain, albeit with more or less enthusiasm. Our initial analysis suggests that there are very good reasons to continue to pursue the development of a shared, but flexible, idea of educational gain – and open discussion of different models and the practical function each serves in furthering diverse institutional missions can aid in these efforts.
The above article appears as part of a crowdsourced analysis of TEF submissions, panel statements, and student submissions in December 2023. The authors would like to thank the following colleagues who gave up their time voluntarily to take part in this exercise and supported the analysis of educational gain: Brian Smith, Matt Jones, Alex Mortby, Julie Coverly, Liz Cleaver, Mark Peace, Zena Rittenhouse, Pernille Norregaard, Anna Buckett, Antony Aleksiev, Chrissie Draper, Ellie Reynolds, Helen Barefoot, Nick Moore, Maureen McLaughlin, Robert Eaton.
It’s encouraging to see the sector engaging with this issue, and the work presented here is insightful, helpful, and well informed. Learning gain must also be evidenced beyond the university through doing good in the world, global citizenship, being assured and confident, acting ethically and with integrity, being resilient and aware, engaging in social and collective action, having an informed vision, applying sound knowledge and skills, and using one’s agency in a never-ending pursuit of social justice and global responsibility for all, while rejecting tyranny in all its forms.
It does seem odd that the OfS couldn’t at least set out some basic criteria for educational gain even if they weren’t able to articulate a clear definition.
It seems pretty obvious that there has to be some kind of measure of added value rather than just outcomes, in order to avoid institutions rigging the system by restricting recruitment to those students who already have an advantage.
It also seems obvious that any measure has to be multi-factorial to account for the various ways in which education may benefit an individual (development of various capitals, not just knowledge and skills).
It also seems that the opinions of the students and graduates themselves on what constitutes value should play some part (e.g. more weight to graduate voice).
That gives you three dimensions to assess submissions consistently. I wonder why they were not used.
One would hope the OfS had some idea of what they were looking for, or else the TEF panels would not be able to make judgments against the criteria. It’d be helpful to the sector if OfS could share that understanding. In our case, the panel seemed to miss entirely all the evidence we provided about educational gain, so I did wonder if panels themselves did not agree on / fully understand this concept.