I’ve been working in the higher education sector for 35 years.
Shortly after I started my career at Trent Poly, we welcomed a promising, up-and-coming band to the SU. They were called Heaven or Paradise or something. No. Nirvana, that was it.
The point is, it was so far back I can barely remember.
A few years later, the 1992 Further and Higher Education Act converted polytechnics into universities. Thirty years on, I have been thinking about what it's been like to work in the redefined sector.
In particular, I've been reflecting on the contrast between what things look like from the outside, and how things are on the inside.
Inter-university valuing
Some things have got better. Universities can now cite metrics intended to reflect the quality of learning and teaching. These have become an important, albeit inevitably imperfect, means for students to make informed choices about where to study.
We’re all suspicious of metrics, in particular when they result in league tables, as they always will. But it’s a good thing that students have access to various forms of empirical evidence about the likelihood of them getting a good deal when they sign up for a course.
There are no guarantees. But being able to find out what percentage of students were generally satisfied with their course, for example, seems to me to be a perfectly reasonable thing to be able to do.
The alternative to the use of these explicit metrics is to return to the old-fashioned, implicit metric of "reputation". That's basically how students used to have to decide things. And it wasn't good enough. Trent Poly never did very well in that regard, and that was never justified. It's always been a good place to study.
The performance of my department – Psychology – is a case in point. When I started working here there were five of us and no one knew we even existed. Now we are one of the biggest providers of accredited psychology courses in the country, with a first-year intake of over 900 undergraduates.
For eight straight years prior to the pandemic we didn’t drop below 91 per cent “overall satisfaction”. That’s some achievement, working at the scale at which we do, and sustained scores like that actually do mean something. And it’s only fair that we can say that out loud to prospective students.
Intra-university valuing
The story is less satisfactory on the inside. Creating a good learning and teaching environment for students is a time-consuming, resource-hungry, expert task. Every department has a set of academics who focus their time and effort in that direction. Some devote whole careers to doing so.
But as a rule those are the workers who end up being valued least. They create the most value for their department, by enabling the recruitment of subsequent cohorts of students – the fees from whom make up the lion’s share of a department’s income.
But none of that value is credited to those who create it. You cannot claim “some proportion” of student fee income in your “income generation” criterion column on your appraisal form.
And this is where the story of metrics becomes more troubling. There is a culture of valuing research higher than teaching in the higher education sector as a whole. One reason for that is that research activities produce metrics that are easier to interpret on an individual level.
Grant capture and peer-reviewed publication are a currency that is much more easily spent on individual career advancement than the more diffuse, collective achievements of teaching teams.
It's one of the things that happens in life. Things that are measurable have a tendency to take priority over things that are less measurable. It is much easier to envisage, and thereafter craft, a career from research publication than to do the same from teaching excellence.
Career currencies
There is also an unchallenged, implicit line of reasoning that goes something like this. Academics should be experts in a field. To be an expert in a field you must have done empirical research in that field. Therefore, as a rule, being research active is more or less an essential feature of the best academic careers.
The problem is, it’s simply not the case that being research active in a field is a necessary condition for expertise. Yes, academics should be experts. And it’s probably the case that to do research in an area you should have expertise in it. But the reverse does not follow.
Of course, publishing your own research in a field is one indicator of expertise. No one would dispute that. But should it really be the thing that is valued above all in the sector? Does that have the right consequences for expertise? And does it have the right consequences for learning and teaching?
What happens, of course, is that in pursuit of the currency that enables career advancement, academics come under pressure to publish research papers in an ever-broadening and diluted array of peer-reviewed publications. Whether that driver leads to healthy outcomes with regard to the advancement of knowledge is a moot point.
On second thoughts, no it’s not. It doesn’t.
A more capacious view of academic careers would be welcome and would benefit the sector. The ideas on "scholarship" set out by Ernest Boyer in 1990 are relevant here.
“Discovery” (research) is one aspect of the kind of scholarship that we should be valuing in higher education. But there is a much wider range of activities that need doing across a team (or department) of scholars to make the whole thing work. And it’s foolish to think that everyone should be able to do everything.
So you need differentiation of activity and function across the team. And the different key areas of activities and functions should be equally valued. Yet they never have been in the university sector – and maybe they never will be.
As a polytechnic we were unambiguously in the business of learning and teaching. And I cannot be the only one in the sector who, proud of their polytechnic heritage, has mixed feelings about the consequences of the 1992 Act.
We dug out that old Howard Newby quote at the weekend – that the English have a genius for turning diversity into hierarchy – when thinking about why the HE sector can't support diversity more. It was a theme through the discussions after the HE sector was brought together in 1992, picked up in the Dearing report, and later Newby led an attempt to consider whether we should fund by "mission" when he was chief executive of HEFCE.
The impetus in the early 1990s was to bring HE provision together – allowing polytechnics to have degree-awarding powers and to become universities was subsidiary. Some non-universities were allowed into the sector (although we created a new binary line between "HE"-funded and "FE"-funded providers) – so we could have had polytechnics in the new HEFCE sector.
What we found was that places that had been thought of as "bad" at teaching were actually quite good, and that (especially outside big science) places that had been thought of as "bad" at research could actually be quite good.
The sector has become more permeable – one of the tricks that the polytechnics had particularly developed (as Andy mentions) was teaching big groups, and we can see that that has spread much more widely through the sector.
It's also interesting to ask whether the "best" teaching – the stuff that students seeking the most selective providers want – has to be found in research-intensive universities. It's not clear that it is, even in the US (Boyer points to that), where outstanding education (which can then be highly selective) can be found outside the research-intensive universities.