Throughout TEF’s development there’s been a lot of criticism of the exercise. This has ranged from the nuanced to the outright destructive, from questions about the particulars of the metrics to questions about whether we need the damned thing at all.
So given these many and varied criticisms, I took to Twitter to invite views from TEF’s critics on how they think TEF could be done better. With a ‘lessons learned’ exercise and a full independent review both in the pipeline, there could be plenty up for grabs in how TEF might look in future years. I asked some of our wisest Wonkhe contributors to suggest how, given the ministerial reins, they would design a method for teaching and student experience accountability for universities. Those were the only parameters.
Here are their resultant AlterniTEFs. We welcome more suggestions in the comments below.
Smita Jamdar – Head of Education, Shakespeare Martineau
Institution-specific targets could become registration conditions
My idea to demonstrate accountability for teaching excellence and student experience is that universities that wish to participate in the AlterniTEF should be required to establish a Student Experience & Teaching Quality Stakeholder Group. This should be made up of students, senior staff with responsibility for teaching and learning strategy, and some or all of: alumni, employers, professional bodies, and relevant experts. The precise mix beyond the staff and students is up to each institution, depending on its nature, mission and character.
The role of the Stakeholder Group would be to work with the governing body to produce a teaching, learning & student experience plan for the institution, and to agree how performance against the plan would be measured, with KPIs and benchmarks appropriate to that institution’s mission and values. This plan would be published and accessible to applicants.
The Stakeholder Group would monitor performance against the KPIs and benchmarks and, working with relevant staff, identify and recommend remedial action where they are not being met. The plan would be formally reviewed annually. Students and others with concerns about teaching or the student experience would be able to raise them with the Stakeholder Group, which in turn could ask the governing body to take action.
The critical point would be that persistent, unresolved complaints about performance against the plan could be raised as a breach of a relevant registration condition with the Office for Students or the designated quality body, and could lead to investigation and enforcement action under HERA.
Participation in the AlterniTEF could be voluntary, on the basis that institutions that do not participate would be sending a signal that they don’t care about teaching and the student experience, and so would be at a disadvantage when recruiting. Or it could be mandatory, on the basis that students paying any fees at all deserve to know their institution pays attention to these things.
My AlterniTEF is based on the following principles:
- That teaching quality and the student experience deserve to sit at the heart of institutional strategy and mission, so the governing body needs a free and unfettered line of sight into how the institution delivers them, and the ability to act if things go wrong.
- That institutional autonomy means teaching quality and the student experience may, and should, mean different things in different institutions, but in no institution can they be ignored.
- That “compare the market” type approaches are unhelpful in HE; what is more important for applicants and students is to see their own institution’s defined approach to teaching and the student experience.
- That adopting this approach would eliminate the need for a subject-specific TEF, which is what students really need but which would be a nightmare to produce.
- That, currently, governing bodies do not have easy and constant access to the student voice. Student governors are not there to act as representatives of the student body (although they will bring an element of student focus to the collective deliberations and decision-making of the governors). Nor do students have a direct line to the governing body to raise their issues and concerns.
Vicky Gunn – Head of Learning and Teaching, Glasgow School of Art
TEF should be about relationships
When it comes to teaching accountability, the recent spate of metrics releases related to the TEF and LEO represents the latest in a line of government attempts to quantify the added value of a university education. Through this quantification, institutions can demonstrate value for money and wise use of public funds.
Before this insurgent algorithmic explosion, quality structures that described processes and regulated the ‘how’ of reporting on teaching reigned supreme. Quality assurance was there to assure, and to incentivise enhancement, as proxies for accountability.
However, both these approaches are faulty in substantive ways well rehearsed in research and academic journalism alike, and are inadequate responses to the accountability problem. Perhaps the accountability problem is inadequately defined?
Let’s imagine, then, that we have the capacity to reset the accountability question, from the politically astute: “how can we prove standards are appropriately high for the investment we’re making and the outcomes are worth it?” to the more socially democratic, “how can we measure the quality of enlargement and enrichment of our communities by our graduates and how does this relate to their experience of higher education?”
The latter is an outcome question. Rather than being simply based on a value-chain (‘what can the student expect from their provider for their money?’) understanding of output, it demands we measure and value co-creation. It thus suggests that to get an accurate measurement for accountability we need metrics produced through relational analyses and cross-referencing.
This sounds laudable, yes, but is it practicable? If I could design a system to materialise effective answers to my accountability question, it would be built on the following triad. Though if I’m honest, all three elements are so clearly dependent on multiple variables that even the most intense algorithm designer would probably be sent into meltdown.
- The quality of the experience of being taught. This should not be a measurement of what a student perceives is lacking, but rather the quality of the reciprocal relationship between student, discipline and institution. At the moment, I think the NSS is a bit stuck in this triad. It doesn’t measure reciprocal relationships, such as student self-perception of engagement, academic perception of the student’s relative engagement, and student level of disciplinary understanding. This would be good information to have.
- The quality of disciplinary learning cross-matched with the quality of abstract and practical wisdom beyond the disciplinary context. This would require a raft of measures to be compared, including learning transfer, discipline knowledge, and then more conventional outcome measures: longitudinal earnings, social capital, work-life balance, and more. Graduate Outcomes and LEO fit here, but in this context would need to be cross-referenced with life satisfaction and learning gain.
- The quality of reciprocal social relations that students enjoy after graduation. The reciprocal social relations I am especially interested in are: the depth and quality of the transfer of ideas and practices, social change, defining and moderating systems to improve outcomes (procedural and productive), and the centrality of the graduate in the community networks in which they find themselves (social capital and network analysis; a rough sketch of what that last measure might look like follows this list). I am not sure our accountability systems have ever managed this; I’m not sure how they would.
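To make that last measure slightly less abstract, here is a minimal sketch, assuming the Python networkx library and an entirely made-up dataset, of how ‘centrality of the graduate in community networks’ might be scored. Nothing here reflects anything TEF, LEO or any existing accountability system actually computes; it is purely illustrative.

```python
# Purely illustrative: treat graduates and community bodies as nodes in a
# network, their working relationships as edges, and use a standard
# centrality score as a crude proxy for "social capital".
# All names and ties below are hypothetical.
import networkx as nx

# Hypothetical ties between graduates and community organisations
ties = [
    ("graduate_a", "school_governors"),
    ("graduate_a", "local_enterprise_board"),
    ("graduate_b", "school_governors"),
    ("graduate_c", "arts_collective"),
]

G = nx.Graph(ties)

# Degree centrality: the fraction of other nodes each node connects to
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

Even this toy version exposes the difficulty Vicky flags: the scores only mean anything relative to the network someone chose to draw in the first place.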
All of these components would still need to be benchmarked for socio-economic and other characteristics that tend to distort outcomes.
Retrospectively, I can see how my system resembles something not out of place in ‘Black Mirror’. But accountability for me is about the quality and impact of reciprocal relationships. A system that can measure and value this is at least worth exploring.
Johnny Rich – Chief Executive of Push, Chief Executive of Engineering Professors’ Council, and HE Consultant
Institutions should set goals and be held accountable against them
To know how far I am from the pub, I need to know not only where I am, but where the pub is. This is the inherent problem with TEF: it attempts to assess a relationship or a process by measuring a single point.
In this case, the single point being measured – and not measured well – is the teaching institution, as if teaching excellence were a property possessed entirely by it; as if excellent teaching exists in a vacuum; as if students have nothing to do with it.
You cannot have excellent teaching in an empty room, and the moment you start to fill it with students, you need to adapt the teaching to their diverse learning styles and needs.
It is tempting, therefore, to declare TEF a futile exercise, conceptually misguided. However, the impulse behind it is a noble one. There will never be a perfect TEF, but we can do much, much better.
Measuring a process that has no single best solution only makes sense if we establish the context. Each HEI (or assessment unit) should be allowed to write a Learning Statement – much like the access agreement they have to write for OFFA – setting goals they consider relevant to their student intake.
OfS – or its nominee – would receive and approve these statements, rejecting them if insufficiently ambitious. Light-touch guidelines would be offered (again, like OFFA’s) suggesting suitable metrics. These should focus on measures that research has demonstrated are related to learning outcomes: student engagement, for example, the proportion of teachers who are qualified to teach and, if HEFCE’s learning gain pilots go well, perhaps some of the criteria used in them.
Student surveys may well have a role, but only as a measurement over time. That is not how the NSS operates (and it was, moreover, conceived as a subject-level exercise, not an institutional one), so an alternative would have to be found.
Three years after publishing its Learning Statement, each institution’s outcomes would be compared with what it had hoped to achieve, and statements would be published. These might even be summarised into general descriptions, highlighting strengths and weaknesses in its approach. But the damaging farce of gold, silver and bronze would be strictly reserved for sports competitions.
Strengths and weaknesses vary between disciplines, and so an institutional TEF is bound to be more of a fudge than a subject-level one. I would have started at the discipline level and, if a suitable appraisal mechanism were developed, then – and only then – might it make sense to aggregate the results to look for institutional patterns.
My proposals would be more expensive, I have no doubt, but there’s a higher price in doing TEF badly. And I believe they would encourage the sector to continue to draw strength from diverse provision, relevant to the individual student and supportive of institutional autonomy.
Ant Bagshaw – Deputy Director, Wonkhe
PG-TEF should take priority
I’d like to focus on TEF as a source of useful information for taught postgraduate students. Rather than look to increase the amount, or complexity, of information available to prospective undergraduates, I’d like to consider where there’s a deficit and try to address it.
I’ve banged this drum before: there’s a taxpayer interest because of the recently introduced loan system, and there’s a basic equity point about the information deficit faced by master’s students compared with those taking bachelor’s degrees. To make things worse, the Department for Education seems to have conveniently forgotten that it promised us a PGTEF in future iterations. Perhaps it’ll remember at some point.
For my AlterniTEF, I need some data sources. I’d be interested in graduate outcomes for postgrads, and also in measures of their satisfaction with the programmes (so my metrics would be similar to those for UGTEF). It’s often said that, given the massification of undergraduate education, it’s now necessary to have a master’s degree for a graduate to differentiate themselves in the labour market. I’d be interested in whether the data backed up that assertion. In cases where postgrad programmes are closely targeted at particular professions, are the graduates really working in those roles? I’m not suggesting they should be required to, but it would be interesting to know, and to be able to compare experiences across subjects and across institutions.
I’d also be interested in whether those with master’s degrees do better than the average not because of the qualification, but because of their social capital. For my PGTEF, I’d like to get an understanding of the added value provided in terms of their impact – if any – on promoting social mobility. I’d keep a benchmarking process too, as I’d want a sense of performance relative to expectations. And I would get rid of the provider statements and present judgements based solely on the metrics.
In terms of incentives, I don’t think we should look to (variably) cap students’ fees, but I would make PGTEF mandatory for access to the PG loan scheme. Then, when results are published, students can vote with their feet, taking into account TEF as well as other factors like location, cost or delivery mode. Thinking this through, the data we’d have to gather – and share – about the postgrad experience would be useful anyway, even if not used for a TEF exercise.
The key is the starting point: what problem are you trying to solve? If it’s more data sources for undergraduates, don’t bother, there’s a bigger prize out there.
I like Smita’s approach, especially the diversity it encourages that some sector metrics seem to deter, but doesn’t it just re-badge what is already going on in most institutions to some extent? It would also presumably be virtually impossible to use to compare institutions with each other – which may be the intended point here, but would seem to be a major stumbling block from a Westminster perspective.
As I read this whilst waiting for the HEFCE extranet to accept the forecast aspect of our annual accountability returns, I find myself wondering why there is a perceived lack of accountability outside the sector, when it seems quite different from my perspective inside it.
Vicky’s triads sound mind-boggling, and presumably any data scientists who could fashion something out of these disparate and intangible constructs would be likely to command an extortionate fee! But can we afford the luxury of waiting for balanced scorecard outcomes with the big data metric tide lapping at the shore? And how do we account for the variation in expectations at the starting point in participation and co-creation, compared to a more absolute concept like quality? I’d love to see practical wisdom popping up in module statements, and wonder if the ‘enlargement of communities by graduates’ means taking more of a Jacob Rees-Mogg approach to family life?
No provider statements, Ant?! Data Futures meets Superman 3 – surely we can’t have the quantitative yin without the qualitative yang? And wouldn’t we expect graduates to have well-developed research skills when it comes to assessing options from existing sources?
Johnny’s proposal seems again to re-brand existing internal activity, by re-imagining the quality review cycle as a published learning statement, and can we equate light touch with high quality? Did someone say pub…