How our researchers are using AI – and what we can do to support them

Christina Boswell reports on an institutional survey of researchers' use of generative AI at the University of Edinburgh

Christina Boswell is Vice Principal for Research and Enterprise at the University of Edinburgh

We know that the use of generative AI in research is now ubiquitous. But universities have limited understanding of who is using large language models in their research, how they are doing so, and what opportunities and risks this throws up.

The University of Edinburgh hosts the UK’s first, and largest, concentration of AI expertise – so naturally, we wanted to find out how AI is being used. We asked our three colleges to survey how their researchers were using generative AI, to inform what support we provide, and how.

Using AI in research

The most widespread use, as we would expect, was to support communication: editing, summarising and translating texts or multimedia. AI is helping many of our researchers to correct language, improve clarity and succinctness, and transpose text into new media, including visualisations.

Our researchers are increasingly using generative AI for retrieval: identifying, sourcing and classifying data of different kinds. This may involve using large language models to identify and compile datasets or bibliographies, or to carry out preliminary evidence syntheses and literature reviews.

Many are also using AI to conduct data analysis for research. Often this involves developing protocols to analyse large datasets. It can also involve more open-ended exploration, with large language models detecting new correlations between variables and using machine learning to refine their own protocols. AI can also test complex models or simulations (digital twins), produce synthetic data, and generate new models or hypotheses for testing.

AI is of course evolving fast, and we are seeing the emergence of more niche and discipline-specific tools. For example, self-taught reasoner (STaR) methods have a model generate step-by-step rationales and then fine-tune it on those that lead to correct answers, improving its ability to tackle a range of research questions. Retrieval-augmented generation (RAG), meanwhile, enables large language models to draw on external data sources, enhancing the breadth and accuracy of their outputs.
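To make the retrieval-augmented generation idea more concrete, here is a minimal sketch in Python. It is purely illustrative and not part of any university tooling: the toy corpus, the crude word-count "embedding" and the assembled prompt are stand-ins for the vector store and large language model call a real pipeline would use.

```python
# Minimal, illustrative retrieval-augmented generation (RAG) sketch.
# Everything here is a placeholder: a real pipeline would use proper
# embeddings, a vector store and an actual LLM call.
import math
from collections import Counter

corpus = [
    "Dataset A records air quality measurements for Edinburgh, 2015-2020.",
    "Protocol B describes consent procedures for interview transcripts.",
    "Paper C reports an association between green space and wellbeing.",
]

def bag_of_words(text: str) -> Counter:
    """Very crude stand-in for an embedding: lower-cased word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bag_of_words(query)
    return sorted(corpus, key=lambda doc: cosine(q, bag_of_words(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved passages rather than memory alone."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using only the sources below, and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# In a real RAG pipeline this prompt would be sent to a language model;
# here we simply print it to show how retrieval grounds the generation step.
print(build_prompt("What evidence links green space to wellbeing?"))
```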

Across these types of use, AI can improve communication and save significant time. But it also poses significant risks, which our researchers were generally alert to. These include well-known problems with accuracy, bias and confabulation – especially where researchers use AI to identify new (rather than test existing) patterns, to extrapolate, or to underpin decision-making. There are also clear risks around sharing intellectual property with large language models. And, not least, researchers need to attribute the use of AI in their research outputs clearly.

The regulatory environment is also complex. While the UK does not yet have formal AI legislation, many UK and international funders have adopted guidelines and rules. For example, the European Union has a new AI Act, and EU-funded projects need to comply with European Commission guidelines on AI.

Supporting responsible AI

Our survey has given us a steer on how best to support and manage the use of AI in research – leading us to double down on four areas that require particular support:

Training. Not surprisingly, the use of generative AI is far more prevalent among early career researchers. This raises issues around training, supervision and oversight. Our early career researchers need mentoring and peer support. But more senior researchers don’t necessarily have the capacity to keep pace with the rapid evolution of AI applications.

This suggests the need for flexible training opportunities. We have rolled out a range of courses, including three new introductory courses to get researchers started in the responsible use of AI in research, and online courses on the ethics of AI.

We are also ensuring our researchers can share peer support. We have set up an AI Adoption Hub and are developing communities of practice in key areas – notably AI and Health, one of our most active areas of AI research. A similar initiative is being developed for AI and Sustainability.

Data safety. Our researchers are rightly concerned about feeding their data into large language models, given complex challenges around copyright and attribution. For this reason, the university has established its own interface with major large language models, including ChatGPT: the Edinburgh Language Model (ELM). ELM provides safer access to these models, operating under a “zero data retention” agreement so that data is not retained by OpenAI. We are also encouraging researchers to work through application programming interfaces (APIs), which allow them to provide more specific instructions and so enhance their results.
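As a rough illustration of what working through an API looks like, the sketch below uses the openai Python client to send a reusable system prompt alongside a query. The endpoint URL, model name and key are hypothetical placeholders, not ELM’s actual configuration, which researchers would obtain through the university.

```python
# Illustrative only: the base_url, api_key and model name below are
# placeholders, not the real ELM configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://elm.example.ac.uk/v1",  # hypothetical institutional endpoint
    api_key="YOUR_INSTITUTIONAL_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o",  # whichever model the institutional service exposes
    messages=[
        # A system prompt lets researchers set precise, reusable instructions.
        {"role": "system",
         "content": ("You are assisting with a literature review. "
                     "Quote sources verbatim and flag any uncertainty.")},
        {"role": "user",
         "content": "Summarise the main findings of the abstract below."},
    ],
    temperature=0.2,  # lower temperature for more reproducible outputs
)

print(response.choices[0].message.content)
```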

Ethics. AI in research throws up a range of challenges around ethics and integrity. Our major project on responsible AI, BRAID, and ethics training by the Institute for Academic Development provide expertise on how we adapt and apply our ethics processes to address these challenges. We also provide an AI Impact Assessment tool to help researchers work through the potential ethical and safety risks of using AI.

Research culture. The use of AI is ushering in a major shift in how we conduct research, raising fundamental questions about research integrity. When used well, generative AI can make researchers more productive and effective, freeing up time to focus on those aspects of research that require critical thinking and creativity. But it also creates incentives to take short cuts that can compromise the rigour, accuracy and quality of research. For this reason, we need a laser focus on quality over quantity.

Groundbreaking research is not done quickly, and the most successful researchers do not churn out large volumes of papers – the key is to take time to produce robust, rigorous and innovative research. This is a message that will be strongly built into our renewed 2026 Research Cultures Action Plan.

AI is helping our researchers drive important advances that will benefit society and the environment. It is imperative that we tap the opportunities of AI, while guarding against the often subtle risks of its misuse. To this end, we have decided to make AI a core part of our Research and Innovation Strategy – ensuring we have the right training, safety and ethical standards, and research culture to harness the opportunities of this exciting technology in an enabling and responsible way.
