The Office for Students (OfS) has warned providers not to allow a post-pandemic increase in degree classifications to be “baked into” the system.
In its Public Interest Governance Principles, OfS expects that “the governing body receives and tests assurance that academic governance is adequate and effective through explicit protocols with the senate/academic board (or equivalent).”
Notwithstanding the role of senates or their equivalents, governing bodies are ultimately accountable to the regulator for effective management and governance arrangements. Following last week’s OfS announcement, universities might expect governors to test assurance by holding their institutions to account for trends in degree classification, with renewed interest in degree outcomes for 2019-20 and 2020-21 – the years in which the OfS analysis implies there is specific cause for concern about ‘unexplained’ grade inflation (by which it means inflation that cannot be explained by its own statistical modelling).
So how can governors challenge constructively in this space? What questions could they ask, what else could or should they be doing, to help a university evidence that governance of degree classifications has been effective? Put another way, and to adapt a trope seemingly beloved by the regulator, how can universities and their boards demonstrate that their stewardship of academic awards constitutes “a good bake”?
Each year, boards receive academic assurance via the senate about the integrity and credibility of academic qualifications in the form of a degree outcomes statement. This is a summary of the institutional degree classification profile over time, ideally including an explanation of recent patterns. If governors wished to interrogate the degree outcomes statement beyond the data comparing the institution with sector trends (grist to critics’ mill), they might probe the factors that influence or contribute to improved student attainment.
Has the quality of teaching improved? National Student Survey data, the outcomes of external accreditation reviews, and conversations with student representatives will be insightful here. If the institution is especially well prepared, plans for the next Teaching Excellence Framework could shed more light on this. Are more academic staff being recognised by external professional bodies for their learning and teaching achievements? And how is pedagogical research contributing to curriculum development? As a governor, how would you know?
Another factor in improved student attainment could be that students have a better understanding of what is expected of them. Ask whether key metrics such as National Student Survey results confirm this. What are the views of student representatives about the effectiveness and accessibility of marking criteria, and about opportunities for formative assessment and for receiving feedback on it?
There’s external assurance too. Historically, QAA review would provide a respected judgement by impartial experts about the comparability of academic standards with sector norms. Today, in a very different regulatory landscape, confirmation that the quality and standards of degrees are consistent with similar programmes elsewhere is provided by external examiners. But just how familiar are governors with the key outcomes of the external examining process? Governors are likely to receive only a distillation of their reports, in the degree outcomes statement and/or an annual quality and standards assurance report. When (not if) external examiners challenge the robustness of assessment and marking practices, how are issues resolved? Are there any institutional learning points from the external examining process, and have these changed over time? What does a university do with this learning to improve the student educational experience?
To manage the implications of the emergency situation during the pandemic, many providers implemented no detriment or safety net policies and changes to assessment methods. The OfS speculates about their inflationary effect on degree classifications for the last two graduating cohorts, but cautions that these policies are not an excuse. I think it is important, though, that governors understand the key features of these interventions, so that clarity can be provided for future students should they ever need to be deployed again. What assurance have governors received that the university has evaluated the ways such policies supported students in an unprecedented situation?
A strong understanding is needed
All these questions suggest that governing boards may need to strengthen and diversify their mechanisms for receiving and testing academic assurance. Otherwise there is a risk that governors possess only a superficial grasp of how the integrity of academic quality and standards is assured, meaning they could be falling foul of the Public Interest Governance Principles and conditions of OfS registration. I can see a growing imperative for governors to ask questions like those above, to triangulate management reporting with other data and with the lived experience of students and staff.
This means more listening to student representatives beyond student governors, more discussion with staff involved in the delivery and support of education. Appointing champions from the lay membership with specific responsibility for the student voice and academic affairs could help. Taking these steps will increase the demands on governors as well as those providing support with induction and training.
Perhaps more contentiously, it will bring into question the effectiveness of board links with the senate. All of these steps come with their own difficulties. However, if the regulator has raised the stakes for universities to answer for their results, then governing boards surely need to rise to the challenge and support the sector in presenting a credible narrative about its stewardship of grades.
Embedding this enhanced approach to academic assurance would become a key ingredient of good governance.