David Kernohan is Deputy Editor of Wonkhe

When you think of people who are expected to be accountable for every hour of their working day, two groups come to mind. The first, and largest, comprises those in unskilled or semi-skilled casual labour – the second, highly skilled professionals who charge by the hour.

University staff fit into neither of these groups, and yet if you asked around on campus, you’d learn that data on their workload is submitted to the regulator every year, and widely used in resource and task allocation internally. However, this data bears only a scant relationship to what academics actually do. Why is this?

Reporting and modelling

There are two overlapping concepts that we need to define first.

  • Reporting is the capturing of an employee’s own understanding of their day-to-day activity – and might mean the completion of a timesheet or the keeping of a log of time spent on particular tasks. The word “understanding” is key here – reports are only very rarely an exact indication of time spent on given tasks.
  • Modelling uses data derived from reporting to make predictions about current and future workloads, for use in workload planning or resource allocation. The choices that underpin the development of the model feed into real and painful decisions about staffing and resourcing.

All this generates concern about a topic very close to the hearts of academics – autonomy. There’s a significant literature that suggests the loss of autonomy (in the sense of an individual having agency over their own workplace activity) is demotivating and disempowering. Though managers and staff should both have an interest in achieving efficiencies, the degree of resistance that usually accompanies top-down attempts to bring this about means that they regularly backfire. So, although the direct use of a workload model to constrain academic autonomy is mercifully rare, the suspicion that such use is implied (or planned) is everywhere.

So workload management is, like timetabling, a topic that it is very easy to become very angry about. The idea that an academic is unable to manage their own time is anathema to cherished ideas of professionalism. An impersonal system can be blamed by both subjects and operators for any number of woes, a convenient scapegoat for both painfully real and widely imagined problems. If workload management were a supra-national political body, academia would vote “leave”.

But the fact that is generally missed is that workload models and workload monitoring are only ever supposed to be indicative. Though poorly-informed managers may hint otherwise, there should be no expectation that workload reporting accurately covers every single task an academic may carry out. Most models include an allocation of “headroom” to offer some recognition of this issue.

Digestive TRAC(T) and Subject-FACTS

Certain components of workload monitoring are a regulatory requirement. In England, Scotland, and Northern Ireland, institutions have to make an annual return to their regulator under the title Transparent Approach To Costing (TRAC). Blossoming from an earlier process of collating published financial statements, Annual TRAC (first collected in 2000) supports the monitoring of full economic costing for research projects, and informs the modelling of funding council teaching grants. In both cases the idea is to use data to ensure that funders offer money that fairly compensates institutions for the work that needs to be done. In a people-focused sector like HE, these costs are largely concerned with staff time.

Each institution completes a mighty spreadsheet every year – using, in part, data on the way academic staff use their time. This Time Allocation Survey (provided as annex 3.1a of the TRAC guidance) will be familiar to many – but nearly half of UK institutions now use data from their own workload monitoring processes. Data collected and collated here informs national allocation calculations for teaching (since 2006) and research (since 2005) in most of the UK.

In England, the Office for Students does still allocate a sizable chunk of funding to support teaching in high-cost subjects – and both the decision to describe a subject as high cost and the premium attached to it stem from the use of TRAC(T) data and student numbers to develop what is described as a “Subject-FACTS” (the subject-related full annual cost of teaching a student) for each cost centre at each institution. Though I obviously salute the contrivance of this acronym, it’s not quite the full story, as these calculations exclude the costs of teaching not supported by funding councils, and costs incurred that are not specifically related to the teaching of the subject.
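
To make the arithmetic concrete, here is a minimal sketch of the cost-per-student division at the heart of Subject-FACTS, using entirely invented figures – the real TRAC(T) return and the OfS methodology involve many more adjustments (including the exclusions just mentioned) than this toy calculation shows.

```python
# Illustrative sketch only: a simplified version of the Subject-FACTS arithmetic,
# using invented numbers for one cost centre across three hypothetical providers.
# Each entry is (publicly funded teaching cost in GBP, taught student FTE).
returns = [
    (4_200_000, 900),    # provider A
    (2_750_000, 520),    # provider B
    (6_100_000, 1_240),  # provider C
]

# Per-provider full annual cost of teaching one student FTE in this cost centre
per_provider = [cost / fte for cost, fte in returns]

# Sector-level figure: total cost divided by total FTE
# (an FTE-weighted average, not a simple mean of the provider figures)
total_cost = sum(cost for cost, _ in returns)
total_fte = sum(fte for _, fte in returns)
subject_facts = total_cost / total_fte

print([round(x) for x in per_provider])  # [4667, 5288, 4919]
print(round(subject_facts))              # 4906
```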

Northern Ireland and Scotland also use TRAC(T) to inform their allocations in a similar way – Wales currently does not (due to a different historical approach to subject-based funding) but is consulting on the idea of using it in future.

The availability of data inevitably politicises both its use and its collection – and how closely TRAC and TRAC(T) actually reflect the lived reality of academic staff is (as above) questionable. But, if you assume that any skew in reporting is universal, it is still safe to use the data for the purposes it was designed for – as a proportional indicator across a wide population.

If you’ve been following the Augar review you may recall that DfE commissioned KPMG (who have been instrumental in developing TRAC) to “further inform” the process by taking a look at “how fees and funding compare to the real costs of subject provision” building on TRAC(T) data. This surveyed a group of institutions in an attempt to identify these “real” costs, leading to a month or so of speculation about differential fees.

The MAW of doom?

But how do you get from TRAC to widespread workload monitoring at institutions? With such an effort being made in 2006 to collect institutional data which was then only used on an aggregate basis, HEFCE became interested in ensuring that what was becoming an onerous and time-consuming process would provide a direct benefit to providers. The indirect case – research funding (from funding councils, at least) that actually covered the cost of research – was well made, but applied primarily to research-intensive universities.

So the following year HEFCE’s leadership, governance, and management fund supported the Managing Academic Workload (MAW) project – a consortium led by the University of Salford and based on original work supported by the Leadership Foundation in 2005. The project frequently made recommendations that supported the efficiency of utilising data already collected for TRAC for other purposes, and – conversely – using data collected for other purposes to support TRAC returns.

MAW involved a range of universities and a range of approaches to workload management. Previously the post-92 end of the sector had a figure for the number of hours a year an academic should work – 1650. This was spelled out in the common academic contracts first used by polytechnics. But all institutions had begun to use the idea of a percentage of workload as an indicative suggestion of what was expected of academics – this again would tend to be contractual, variable from case to case, and often secret.

So MAW set out to address inequality in workload allocation with maths – a single transparent system and a single transparent calculation in each institution. The final report concluded that a consensual policy – involving union and staff buy-in – was essential to implementation.
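
As a very rough illustration of what a “single transparent calculation” can look like in practice – every tariff and the headroom figure below are invented for the example, and real institutional models differ widely – the core idea is simply converting activity tariffs into a share of the 1,650-hour contract with some headroom set aside.

```python
# A minimal sketch of a transparent workload allocation calculation.
# All tariff values and the headroom fraction are assumptions for illustration.

CONTRACT_HOURS = 1650   # the figure from the old post-92 contracts
HEADROOM = 0.10         # fraction deliberately left unallocated (assumption)

# Hypothetical tariff: hours credited per unit of each activity
tariff = {
    "lecture_delivered": 3.0,    # one contact hour plus preparation and follow-up
    "student_supervised": 20.0,  # per project student per year
    "module_coordinated": 50.0,  # per module per year
    "admin_role": 150.0,         # per substantive role per year
}

def allocated_load(activities):
    """Return (allocated hours, percentage of the allocatable contract)."""
    hours = sum(tariff[name] * units for name, units in activities.items())
    allocatable = CONTRACT_HOURS * (1 - HEADROOM)
    return hours, 100 * hours / allocatable

# Example: one lecturer's planned year
hours, pct = allocated_load({
    "lecture_delivered": 120,
    "student_supervised": 8,
    "module_coordinated": 4,
    "admin_role": 1,
})
print(f"{hours:.0f} hours allocated = {pct:.0f}% of allocatable time")
# -> 870 hours allocated = 59% of allocatable time
```

In practice the contested part is not this arithmetic but the tariff values and who gets to set them – which is exactly where the report’s emphasis on consultation comes in.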

Enter the consultants

In 2011 KPMG produced a report which found wide variation in practice across the system – few institutions were taking the use of this data seriously (spreadsheets still ruled, linkages to other systems were sporadic) and only a few of the models were TRAC compliant, meaning the data being used to model workload at an institution was even worse than that submitted to TRAC.

It proposed three key principles drawing on the MAW work – equity, transparency, and consultation. Again, what is notable here is the lack of interest in depriving academic staff of agency – the idea is still to improve working conditions and practices.

The report sets out that all “managed hours” and all activities should be included in a workload model to ensure TRAC compliance. Institutions still have difficulty with these two requirements, as the range and distribution of the work academics actually do does not neatly fit into such categories. The tendency of academics – especially younger and female academics – to overwork for reasons that range from insecurity to sheer passion means that a great number of valuable tasks are undertaken that do not fit neatly into managed time.

The current TRAC survey lists teaching, research, support, and “other” as the four principal activities – for the first two these are broken down only by funder; the latter two are either too specific to be usable or too general to be useful. (What is “support for other”? Why do we only look for other income-generating activity?)

This, of course, is only TRAC – concerned primarily with the use of teaching and research funding. For full workload management, other tasks need to be included – administration (everything from admissions responsibilities to institutional committee membership) is an obvious one – teaching preparation and marking are generally separated out from delivery and there are debates around separating wider scholarship from research. And then there’s business and community interaction, student pastoral care…

Making everything fit

It is clearly very nearly impossible to include every task that an academic undertakes in a workload management process, so the case studies described in the KPMG annex display what could politely be termed pragmatism. Some institutions left “headroom” to support uncounted activities, others introduced vague categories to ensure everything could be counted.

Depending on the scope of the system in each particular institution there are tensions between planned and actual activity – some institutions simply treat planned activity as actual activity, others link the system to performance management. Some institutions use input measures (eg number of hours taught), some use output measures (number of students taught).
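
The input/output distinction is easy to state in code – the sketch below uses invented weights purely to show that the same module can attract very different workload credit depending on which measure a model adopts.

```python
# Hedged illustration of input- vs output-based workload measures.
# Both weighting factors are assumptions; real models attach very different values.

def input_based(hours_taught: float, multiplier: float = 3.0) -> float:
    """Workload credit proportional to contact hours (with a prep/marking multiplier)."""
    return hours_taught * multiplier

def output_based(students: int, hours_per_student: float = 1.5) -> float:
    """Workload credit proportional to the size of the cohort taught."""
    return students * hours_per_student

# The same module looks quite different under the two approaches:
print(input_based(33))    # 99.0  – a module with 33 contact hours
print(output_based(180))  # 270.0 – the same module with 180 students enrolled
```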

And there’s no appetite for a single sector-wide system – or even, in many cases, a single institution-wide system. Like much of HE data, TRAC and workload monitoring data is good enough for the task it is designed to do – but it is always indicative rather than absolute, which when you think about it is true for a lot of the data we use every day.

You’d expect by now that there would be a range of competing software tools offering to support institutions in managing this process. One early innovator – the University of the West of England – has spun out its in-house system (Simitive) as a product to sell to others, and this is currently the only such system available in the UK. However, it is used by only a little more than 40 providers, suggesting that the mighty spreadsheet still dominates this corner of the sector. It’s not exactly the sexy end of ed-tech, but arguably we need to be paying far more attention to management information software (MIS) than the sparkly stuff aimed at students.

The problems that are perceived as being the fault of workload management are often a symptom of either a lack of workload management or an incompletely embedded system of workload management. Most implementations of workload management take great pains to be clear that the process does not represent a managerial tool – significant attention is paid to governance and localisation.

Teams responsible for workload management, be this at a central or faculty level, should be – and generally are – working to assuage concerns about modelling inconsistencies. It is widely recognised that different tasks will take different people different times. Less experienced staff may take a significant period to plan a lecture, for instance, where staff at a later stage of their career will be able to adapt earlier iterations speedily.

Even when the software and support is centrally provided, whole-institution workload models are the exception rather than the rule – with many providers running allocations at school or faculty level, and institutionally designed and developed solutions holding sway. Data is now used to root out inequalities by staff characteristics as well as to foster transparency and effectiveness. HESPA runs a workload management interest group which discusses these and other current issues in the field.

In conclusion

Academic workload has unquestionably increased over the past decade – policy, technology, and measurement have all played a part in this. A 2015 Financial Sustainability Strategy Group report into the sustainability of teaching was clear that universities had responded well to a great deal of turbulence, but by now such an ability to respond has reached its limit. The institutional workload model is an attempt to understand at an aggregate level what needs to be done and what is done – two often widely diverging ideas – and then to ensure resources are apportioned accordingly. As to the vast range of jobs that academics do (everything from faculty working group meetings to editing journals to processing expense claims…), this data is rarely collected at a local level, and never at a national level.

The University and College Union are of the opinion that:

Sometimes showing that workloads are unreasonable can be critical in presenting a grievance around stress. This is easier to do if there is an agreed or widely accepted workload model.

What data does exist is useful to ensure that – for central tasks, at least – no-one is significantly over- (or under-) burdened. But if you expected the figures in the workload model to accurately reflect what you do and how long it takes, you will be disappointed. And you should never blame a workload model for an issue that stems from the unethical use of that model by an incompetent manager.

10 responses to “A beginner’s guide to academic workload modelling”

  1. Really interesting piece, and gives a very useful overview from a WAM perspective. However, as with much in this area, time is actually almost absent in the sense it is really a proxy for cost and hence is restricted to ‘clock-time’. This then shows the real conundrum with WAMs, the complexity of HE work resides in a mixture of temporalities – they can’t be reduced into a spreadsheet. And if the WAM isn’t seen as fair or accurate it is bound to be unpopular as it doesn’t reflect reality. Then the problems really begin as consultants and SLTs point to the numbers as ‘truth’.

  2. I would be interested to learn about the other side – how workload is allocated to UK academics. I think most Australian departments or faculties have a formula for allocating work to academics (the formulas are rarely consistent across the university).

    Typically these formulas provide that the maximum workload for a teaching and research academic is some 20 equivalent full time students, which may be made up of undergraduates, taught postgraduates and research supervision. This is discounted for being head of department and other administrative duties, and some universities further discount teaching loads for being research intensive. More commonly, academics use research grants to buy out some of their teaching duties.

    The net result is that the average teaching load is around 17 equivalent full time students, but that this varies markedly by field and by university. So a research star at a research intensive university might have a teaching load of around 5 equivalent full time students, which would be made up entirely of supervising students.

  3. I would love to have 17 full time students per year. What a luxury and what good teaching and learning could happen.

    1. Yes I would be interested to know what Gavin means by this. I have 58 UG and 12 PG students that I am personal tutor for but I am sure he cannot mean this.

  4. Lest Australia seem to be an academic’s paradise, I would point out that these days the figure is more in the 20s, even 30 equivalent full time students. This latter amounts to teaching 8 (semester) units a year, each with enrolments of 30. Most academic workload allocation models in Australia these days are based on the notion of envelopes for teaching, research and engagement that cover the core hours of a working week. Unlike the old notion of the universal 40:40:20, the size of the research envelope flexes with research outcomes, meaning teaching envelopes up to 70 or 75 are possible, as is the research star just supervising research degree students.

    Of course, research productivity as a means of getting out of mainstream teaching does suggest that there is really no teaching-research nexus which opens up a whole other conversation.

  5. Having recently arrived in an academic environment, following 40 years in various private and public sector organisations (including 20 years of filling in a weekly timesheet), to manage a WAMS implementation, I do sometimes marvel(?) at some of the perceived issues in this sector.

    I think the key is not to call it workload management but workload planning to answer: what is it that you and your employer have agreed you should be doing next year, how long have you agreed you should be spending on it, and does it match both institutional and individual aspirations?

    In an age of austerity, with pressures on funding (as per other articles in this issue), isn’t it right that there needs to be accountability from the academic to both employer and customer (i.e. student) to ensure that the important activities are done?

    As an example my son recently completed an MSc and his biggest gripe was that it took 3 weeks for his dissertation supervisor to respond to queries. Whether this was down to poor time management or overload or something else on the part of his supervisor, I’m not sure but can now make some informed guesses.

  6. I think what’s also concerning is that this vague estimator of work which may get done is now widely being used to define who does and doesn’t have ‘significant responsibility for research’ for REF 2021. A purpose for which it was never designed, and likely a poor metric in many cases…

  7. It is the inconsistent application of WAMS that leads to frustration. I know of two universities that claim to use it yet there is a 40% difference in bundles for identical courses. Moreover the ‘tougher’ of these universities requires a bundle allocation almost 20% higher per year. Combined, this means one establishment expects a lecturer to complete almost 90% more work than the other for the same academic year, yet both claim to use WAMS!

    To compound this, some unis then review work allocation at points during the year. Therefore if work allocation has required greater effort during the period – let’s say 100 bundles in an 80-bundle period – the review will identify that for the remaining year you are 20 bundles short. So you are then allocated further work to make up this shortfall. So using WAMS without any reference to calendar and timetable just compounds this problematic methodology.

    Without any transparency and clarity it is very hard to argue against, with establishments just claiming the other has misused the calculation.
