When you think of people who are expected to account for every hour of their working day, two groups come to mind. The first, and larger, consists mainly of those in unskilled or semi-skilled casual labour – the second is highly skilled professionals who charge by the hour.
University staff fit into neither of these groups, and yet if you asked around on campus, you’d learn that data on their workload is submitted to the regulator every year, and widely used in resource and task allocation internally. However, this data bears only a scant relationship to what academics actually do. Why is this?
Reporting and modelling
There are two overlapping concepts that we need to define first.
- Reporting is the capturing of an employee’s own understanding of their day-to-day activity – and might mean the completion of a timesheet or the keeping of a log of time spent on particular tasks. The word “understanding” is key here – reports are only very rarely an exact indication of time spent on given tasks.
- Modelling uses data derived from reporting to make predictions about current and future workloads, for use in workload planning or resource allocation. The choices that underpin the development of the model feed into real and painful decisions about staffing and resourcing.
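To make the distinction concrete, here is a minimal sketch – with invented task names and figures – of how reported hours (each staff member's own account of their time) might feed a simple model that turns them into indicative proportions for planning:

```python
# Reporting: each entry is one staff member's own account of hours spent.
# All names and numbers are invented for illustration.
reports = [
    {"teaching": 620, "research": 540, "admin": 290},
    {"teaching": 710, "research": 400, "admin": 340},
    {"teaching": 560, "research": 630, "admin": 260},
]

def model_allocation(reports):
    """Modelling: aggregate reported hours into average proportions,
    the kind of indicative figure used in workload planning."""
    totals = {}
    for report in reports:
        for task, hours in report.items():
            totals[task] = totals.get(task, 0) + hours
    grand_total = sum(totals.values())
    return {task: hours / grand_total for task, hours in totals.items()}

print(model_allocation(reports))
```

Note that the model's output is a proportion across a population, not a claim about any individual's day – which is exactly the sense in which such data is "indicative".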
All this generates concern about a topic very close to the hearts of academics – autonomy. There’s a significant literature that suggests the loss of autonomy (in the sense of an individual having agency over their own workplace activity) is demotivating and disempowering. Though managers and staff should both have an interest in achieving efficiencies, the resistance that usually accompanies top-down attempts to bring this about means that they regularly backfire. So, although the direct use of a workload model to constrain academic autonomy is mercifully rare, the suspicion that such use is implied (or planned) is everywhere.
So workload management is, like timetabling, a topic it is very easy to become very angry about. The idea that an academic is unable to manage their own time is anathema to cherished ideas of professionalism. An impersonal system can be blamed by both subjects and operators for any number of woes, a convenient scapegoat for both painfully real and widely imagined problems. If workload management were a supra-national political body – academia would vote “leave”.
But what is generally missed is that workload models and workload monitoring are only ever supposed to be indicative. Though poorly-informed managers may hint otherwise, there should be no expectation that workload reporting accurately covers every single task an academic may carry out. Most models include an allocation of “headroom” to offer some recognition of this issue.
Digestive TRAC(T) and Subject-FACTS
Certain components of workload monitoring are a regulatory requirement. In England, Scotland, and Northern Ireland, institutions have to make an annual return to their regulator under the title Transparent Approach To Costing (TRAC). Blossoming from an earlier process of collating published financial statements, Annual TRAC (first collected in 2000) supports the monitoring of full economic costing for research projects, and informs the modelling of funding council teaching grants. In both cases the idea is to use data to ensure that funders offer money that fairly compensates institutions for the work that needs to be done. In a people-focused sector like HE, these costs are largely concerned with staff time.
Each institution completes a mighty spreadsheet every year – using, in part, data on the way academic staff use their time. This Time Allocation Survey (provided as annex 3.1a of the TRAC guidance) will be familiar to many – but nearly half of UK institutions now use data from their own workload monitoring processes. Data collected and collated here informs national allocation calculations for teaching (since 2006) and research (since 2005) in most of the UK.
In England, the Office for Students does still allocate a sizable chunk of funding to support teaching in high-cost subjects – and both the decision to describe a subject as high cost and the premium attached to it stem from the use of TRAC(T) data and student numbers to develop what is described as a “Subject-FACTS” (the subject-related full annual cost of teaching a student) for each cost centre at each institution. Though I obviously salute the contrivance of this acronym, it’s not quite the full story, as these calculations exclude the costs of teaching not supported by funding councils, and costs incurred that are not specifically related to the teaching of the subject.
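At heart this is a simple division. The sketch below illustrates the Subject-FACTS idea – a cost centre's full annual teaching cost divided by its student FTE – using entirely invented figures, and ignoring the exclusions just mentioned:

```python
# Hypothetical figures only - a minimal sketch of the Subject-FACTS idea.
trac_t_returns = {
    # cost centre: (full annual cost of teaching, student FTE)
    "Clinical medicine": (46_000_000, 1_150),
    "Chemistry": (9_800_000, 700),
    "Business & management": (18_200_000, 2_600),
}

def subject_facts(returns):
    """Per-student full annual cost of teaching for each cost centre."""
    return {centre: cost / fte for centre, (cost, fte) in returns.items()}

for centre, cost_per_student in subject_facts(trac_t_returns).items():
    print(f"{centre}: £{cost_per_student:,.0f} per student FTE")
```

A subject whose per-student cost comes out well above the fee level is a candidate for "high-cost" treatment – though the real calculation works across all institutions returning that cost centre, not one at a time.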
Northern Ireland and Scotland also use TRAC(T) to inform their allocations in a similar way – Wales currently does not (due to a different historical approach to subject-based funding) but is consulting on the idea of using it in future.
The availability of data inevitably politicises both its use and its collection – and how closely TRAC and TRAC(T) actually reflect the lived reality of academic staff is (as above) questionable. But, if you assume that any skew in reporting is universal, it is still safe to use the data for the purposes it was designed for – as a proportional indicator across a wide population.
If you’ve been following the Augar review you may recall that DfE commissioned KPMG (who have been instrumental in developing TRAC) to “further inform” the process by taking a look at “how fees and funding compare to the real costs of subject provision” building on TRAC(T) data. This surveyed a group of institutions in an attempt to identify these “real” costs, leading to a month or so of speculation about differential fees.
The MAW of doom?
But how do you get from TRAC to widespread workload monitoring at institutions? With such an effort being made in 2006 to collect institutional data which was then only used on an aggregate basis, HEFCE became interested in ensuring that what was becoming an onerous and time-consuming process would provide a direct benefit to providers. The indirect case – research funding (from funding councils, at least) that actually covered the cost of research – was well made, but applied primarily to research-intensive universities.
So the following year HEFCE’s leadership, governance, and management fund supported the Managing Academic Workload (MAW) project – a consortium led by the University of Salford and based on original work supported by the Leadership Foundation in 2005. The project’s recommendations repeatedly emphasised the efficiency of reusing data already collected for TRAC for other purposes, and – conversely – using data collected for other purposes to support TRAC returns.
MAW involved a range of universities and a range of approaches to workload management. Previously, the post-92 end of the sector had a figure for the number of hours a year an academic should work – 1650 – spelled out in the common academic contracts first used by polytechnics. But institutions across the sector had begun to use the idea of a percentage of workload as an indicative suggestion of what was expected of academics – this again would tend to be contractual, variable from case to case, and often secret.
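The two conventions are easy to reconcile: a percentage split can be read off against the 1650-hour contract figure. A sketch, with an invented split:

```python
# The 1650-hour figure comes from the common post-92 academic contract;
# the percentage split below is invented for illustration.
CONTRACT_HOURS = 1650

allocation_pct = {"teaching": 40, "research": 35, "scholarship": 10, "admin": 15}

# Convert indicative percentages into indicative hours.
allocation_hours = {
    task: CONTRACT_HOURS * pct / 100 for task, pct in allocation_pct.items()
}
print(allocation_hours)
```

The point of the percentage form is that it survives contract variations – the same 40% teaching expectation scales to whatever annual hours figure applies in a given case.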
So MAW set out to address inequality in workload allocation with maths – a single transparent system and a single transparent calculation in each institution. The final report concluded that a consensual policy – involving union and staff buy-in – was essential in implementation.
Enter the consultants
In 2011 KPMG produced a report which found wide variation in practice across the system – few institutions were taking the use of this data seriously (spreadsheets still ruled, linkages to other systems were sporadic) and only a few of the models were TRAC compliant, meaning the data being used to model workload at an institution was worse even than that submitted to TRAC.
It proposed three key principles drawing on the MAW work – equity, transparency, and consultation. Again, what is notable here is the lack of interest in depriving academic staff of agency – the idea is still to improve working conditions and practices.
The report sets out that all “managed hours” and all activities should be included in a workload model to ensure TRAC compliance. Institutions still have difficulty with these two requirements as the range and distribution of the work academics actually do does not fit neatly into such categories. The tendency of academics – especially younger and female academics – to overwork for reasons that range from insecurity to sheer passion means that a great number of valuable tasks are undertaken that do not fit neatly into managed time.
The current TRAC survey lists teaching, research, support, and “other” as the four principal activities – the first two are broken down only by funder, while the latter two are either too specific to be usable or too general to be useful. (What is “support for other”? Why do we only look for other income-generating activity?)
This of course, is only TRAC – concerned primarily with the use of teaching and research funding. For full workload management, other tasks need to be included – administration (everything from admissions responsibilities to institutional committee membership) is an obvious one – teaching preparation and marking are generally separated out from delivery and there are debates around separating wider scholarship from research. And then there’s business and community interaction, student pastoral care…
Making everything fit
It is clearly very nearly impossible to include every task that an academic undertakes in a workload management process, so the case studies described in the KPMG annex display what could politely be termed pragmatism. Some institutions left “headroom” to support uncounted activities, others introduced vague categories to ensure everything could be counted.
Depending on the scope of the system in each particular institution there are tensions between planned and actual activity – some institutions simply treat planned activity as actual activity, others link the system to performance management. Some institutions use input measures (eg number of hours taught), some use output measures (number of students taught).
And there’s no appetite for a single sector-wide system – or even, in many cases, a single institution-wide system. Like much of HE data, TRAC and workload monitoring data is good enough for the task it is designed to do – but it is always indicative rather than absolute, which when you think about it is true for a lot of the data we use every day.
You’d expect by now that there would be a range of competing software tools offering to support institutions in managing this process. One early innovator – the University of the West of England – has spun out their in-house system (Simitive) as a product to sell to others, and this is currently the only such system available in the UK. However it is only used by a little more than 40 providers, suggesting that the mighty spreadsheet still dominates this corner of the sector. It’s not exactly the sexy end of ed-tech, but arguably we need to be paying far more attention to management information software (MIS) than the sparkly stuff aimed at students.
The problems that are perceived as being the fault of workload management are often a symptom of either a lack of workload management or an incompletely embedded system of workload management. Most implementations of workload management take great pains to be clear that the process does not represent a managerial tool – significant attention is paid to governance and localisation.
Teams responsible for workload management, be this at a central or faculty level, should be – and generally are – working to assuage concerns about modelling inconsistencies. It is widely recognised that different tasks will take different people different amounts of time. Less experienced staff may take a significant period to plan a lecture, for instance, where staff at a later stage of their career will be able to adapt earlier iterations speedily.
Even when the software and support is centrally provided, whole-institution workload models are the exception rather than the rule – with many providers running allocations at school or faculty level, and institutionally designed and developed solutions holding sway. Data is now used to root out inequalities by staff characteristics as well as to foster transparency and effectiveness. HESPA runs a workload management interest group which discusses these and other current issues in the field.
Academic workload has unquestionably increased over the past decade – policy, technology, and measurement have all played a part in this. A 2015 Financial Sustainability Strategy Group report into the sustainability of teaching was clear that universities had responded well to a great deal of turbulence, but by now such an ability to respond has reached its limit. The institutional workload model is an attempt to understand at an aggregate level what needs to be done and what is done, two often widely diverging ideas, and then to ensure resources are apportioned accordingly. As to the vast range of jobs that academics do (everything from faculty working group meetings to editing journals to processing expense claims…) this data is not frequently collected at a local level, and never at a national level.
The University and College Union are of the opinion that:
Sometimes showing that workloads are unreasonable can be critical in presenting a grievance around stress. This is easier to do if there is an agreed or widely accepted workload model.
What data does exist is useful to ensure that – for central tasks, at least – no-one is significantly over (or under-) burdened. But if you expected the figures in the workload model to accurately reflect what you do and how long it takes, you will be disappointed. And you should never assign an issue to a workload model that could be blamed on the unethical use of this model by an incompetent manager.