
Cutting through the spin – we can do accountability better than TEF

A recent Advertising Standards Authority ruling has put pressure on universities to be more honest in their marketing, but Charles Heymann argues that TEF will only make this harder, not easier.

Charles Heymann is a consultant in strategic communications and reputation management.

I am the last person to be po-faced about public relations. I have done my fair share of news management, firefighting and darkish arts during more than a decade in the trade.

In a previous life, one of my annual joys was managing DfE’s publication of the GCSE league tables under both Labour and the Coalition. They were all things to all people – an accountability measure; a benchmark of individual pupils’ success; a driver for parental choice; a trigger to intervene in struggling schools; and proof of government policy success.

The reality was they were classic Goodhart’s Law – when a measure like five A*-C grades becomes a target, it ceases to be a good measure. No.10, ministers, the civil service, local government, heads and teachers all had a vested interest in claiming the figures showed an improving system. In truth, we had no clear idea whether school standards were genuinely rising or whether teachers were simply getting better at gaming the system by getting C/D borderline pupils over the line.

And so I have been reflecting on whether the sector risks making the same mistake in how it presents and interprets rankings and ratings – both in the light of the TEF and the post-election fees debate on transparency, accountability and value for money.

That was why I went public in June about the agreement I reached on behalf of the University of Reading with the Advertising Standards Authority (ASA) to stop equating a top 200 place in the Times Higher Education or QS rankings with being in the “top 1%” in the world.

There had been no intention to materially mislead anyone. In good faith, Reading and scores of universities globally based this and similar claims on an accepted rule of thumb that there are around 20,000 institutions internationally – on that arithmetic, a top 200 place puts you in the top 1%.

But after a detailed, constructive dialogue, we accepted the ASA’s view that it was a potential breach of its advertising code. It did not matter that QS publicly insisted that Reading was in the top 1%. The ASA’s view was that the university could not repeat the claim in its marketing, as QS does not formally assess and rank every single university worldwide.

It was a fair cop and the only option was to hold our hands up and phase it out. Reading takes its responsibilities as an advertiser, and to our students, staff and partners, very seriously.

My belief is that prospective students are far too savvy to be overly swayed by league table rankings. However, my comments in the national media caused quite a stir, with various irate competitors privately criticising us for opening Pandora’s box.

University marketers are all big boys and girls. It is down to them to fight their corner or follow Reading’s lead. The fact, however, is that all universities have a joint obligation to better protect integrity and trust in higher education marketing. No university should leave itself in a position where it is knowingly breaching the advertising code. That risks tarnishing their own reputation and, frankly, all of ours.

The deeper debate, perhaps, is that university brands rely on presenting (and being seen to present) robust, evidenced and accurate facts. It goes right to the heart of our integrity as academic institutions and intellectual communities. So we need to ask tougher questions of the rankings we cherry-pick statistics from to sell ourselves.

Should we turn a blind eye to league tables, like QS, which we know lack robust or transparent methodologies and are open to continual gaming of the metrics by institutions across the world?

Should we really use global rankings to market taught courses, when their metrics focus on research impact, output and reputation, not teaching?

Should we major on international tables which include opaque ‘reputation’ surveys or questionable calculations on citations – which mean institutions rise or fall by dozens of places year after year?

And if the answer to these questions is no, are we treating our audiences with the honesty, transparency and openness they deserve?

Mixed messages and TEF

This brings me onto the Teaching Excellence Framework, an attempt to cut through this obfuscation with an officially moderated, government-approved rating system.

The view of many in the sector has been that TEF’s ambition is right; that there are interesting innovations in split benchmarking and self-evaluation; and that while the overall execution is far from perfect, it’s all better than nothing.

That’s fine if you are a wonk. But contorting a limited set of metrics into clunky ratings and self-evaluation submissions falls well short of meeting the demand for greater value for money for taxpayers and graduates.

It does not enable universities to communicate clearly what they do. Nor is it remotely useful for guiding university leaders on where to invest in teaching and learning. Indeed, it may create perverse incentives to pull investment away from crucial activities that do not feed into the TEF.

We can see this from the conflicting and confused messages the sector has sent out. Various Gold-rated universities claim it is irrefutable evidence of their teaching quality, hailing the start of a new post-Russell Group world order and rolling out the bunting and trestle tables. On the other hand, some Bronze-rated institutions have laid into the metrics, the assessment process and TEF’s implementation.

If the sector is not on the same page, then it is not a credible exercise. The bottom line is that TEF has a host of clashing objectives, just as GCSE league tables did in years gone by: differentiating teaching quality between competitors; acting as a lever to raise teaching standards and investment; giving prospective students clear, quality information on which to base their choices; and holding universities to account.

Frankly, it is not realistic that a single, simplistic rating can possibly achieve all these – indeed, it may fail to achieve any. Ministers have set out the next stage of TEF, but it fails to address these fundamentals. It is tough to argue, at this point, that the TEF will ever truly measure teaching quality or value for money in a meaningful way across 300 institutions and tens of thousands of courses.

If students, staff, and our public and private-sector partners do not buy into it as a robust exercise, then it will wither and die. Jo Johnson has, for the moment, removed the one lever the TEF had by kicking its link with tuition fees into the long grass. That’s why the government’s independent review leaves the door open to junking it completely – a classic Whitehall ruse to create policy wiggle room.

Doing accountability better

These failures mean that by putting all its eggs in the TEF basket, the sector arguably avoids accountability. We need a more intelligent approach.

We need to liberate the huge datasets that sit behind institutions’ HEFCE and HESA reporting obligations, instead of, dare I say it, relying on Wonkhe and other number crunchers to do the heavy lifting.

We should enable anyone to access, cut, analyse and manipulate institutional information however they like, using easy-to-use, open-source data visualisation and analytical tools. We should empower students, the media and the wider public, along with consultancies, researchers and policymakers, to easily build their own rankings, benchmarks and ratings – the sketch below gives a flavour of how simple that could be. Measuring learning gain is held up as the holy grail, and given the post-election pressure, HEFCE and the wider sector need to accelerate their work in this area.
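To make that concrete, here is a minimal sketch of the kind of do-it-yourself ranking that open data would permit. It is purely illustrative: the file name, column names and weights are all hypothetical stand-ins, not real HESA fields or an official methodology.

```python
# A minimal, illustrative sketch: build your own "league table" from an
# open institutional dataset, weighting the metrics you actually care about.
# "institutions.csv" and its column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("institutions.csv")

# The user chooses the weights, rather than accepting a publisher's
# fixed (and often opaque) methodology.
weights = {"satisfaction": 0.5, "continuation": 0.3, "spend_per_student": 0.2}

# Normalise each metric to a 0-1 scale so the weights are comparable.
for col in weights:
    df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())

# A single weighted score, then a personal ranking.
df["score"] = sum(w * df[col] for col, w in weights.items())
print(df.sort_values("score", ascending=False)[["institution", "score"]].head(20))
```

Nothing here is sophisticated – and that is the point. Once the underlying data is open, a different ranking is a dozen lines of code away, rather than a publisher’s proprietary product.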

It’s time to treat students and the wider public with more imagination and sophistication than government-sanctioned ratings allow. TEF is a very 1990s, pre-internet solution to accountability. DfE started taking a more enlightened approach to schools data in the early part of this decade – and it’s time for higher education to follow down that path.

Institutions dependent on public subsidy should open up data about themselves, in keeping with the same values of academic freedom the sector purports to embody.

To date, TEF has absorbed huge amounts of time and resource in institutions, but ultimately it has been a limited conversation about putting teaching on a parity of esteem with research. It is little more useful, as it stands, than the media-published league tables it purports to replace.

We can do a lot better than this.
