Can AI support academic research?

Much of the debate about AI has focused on student work and academic offences. For Xianghan and Michael O'Dea there are implications for the conduct of research too

Xianghan (Christine) O’Dea is a Subject Group Leader at Huddersfield Business School


Michael O’Dea is a Senior Lecturer in Computer Science at York St John University, and is a Senior Fellow of the HEA

ChatGPT is probably one of the best known of the class of generative chatbot technologies.

This type of chatbot creates “human-like” responses to questions, including relatively complex natural language queries that are not pre-defined, drawing on a very large body of training data.

ChatGPT’s underlying language model, GPT-3, has around 175 billion parameters, which enable it to generate not only responses to a very large number of different types of query, but also different responses to the same query.
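To see why the same query can produce different answers, consider how such models sample their output. Below is a minimal sketch, assuming the openai Python package, an API key in the environment, and a hosted chat model (the model name and prompt are illustrative): with a non-zero sampling temperature, each token is drawn from a probability distribution, so repeated runs of the same prompt diverge.

```python
# A minimal sketch, assuming the openai Python package (v1 client API)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

prompt = "Summarise the main critiques of generative AI in education."

# With temperature > 0 the model samples tokens from a probability
# distribution, so the same prompt can yield different responses.
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
    )
    print(response.choices[0].message.content)
```

Setting the temperature to zero makes the output close to deterministic, which is why identical essays rarely surface twice in practice: users typically interact with the default, non-zero setting.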

In fewer than six months ChatGPT has attracted a great deal of attention from professionals, researchers and educators in the field of further and higher education due to its ability to create realistic and convincing text outputs, such as essays, reports, and even computer programs.

These outputs are extremely difficult, if not impossible, to identify as being AI generated.

While the main focus of the debate has been on the potential negative impact of AI text generators on education, and on assessment in particular, far less attention has been paid to the benefits and potential impact for academic research, and for the writing and publication of research papers.

ChatGPT and academic work

The temptation for academics and researchers – including PhD researchers – to use these technologies to assist with writing research papers or even generate complete articles will only increase as the technology improves.

Already the first peer-reviewed article written jointly by a human author and ChatGPT has been published, appearing in the journal Nurse Education in Practice in early 2023. Meanwhile the International Conference on Machine Learning, one of the most prestigious AI conferences, has banned authors from submitting abstracts and articles written entirely by ChatGPT and other AI text generators.

Recent research from Northwestern University examined how publishers, conference organisers and reviewers can detect whether research paper content was written by a human or by an AI. It found that human reviewers and AI output detectors correctly identified only 68 per cent of 50 AI-generated article abstracts, and wrongly flagged 14 per cent of human-written abstracts as AI generated.
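As an illustration of how automated detection works in principle, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly available roberta-base-openai-detector checkpoint. That classifier was trained on GPT-2 output, so it demonstrates the technique rather than the specific detectors used in the study.

```python
# A minimal sketch, assuming the transformers library and the public
# "roberta-base-openai-detector" model (trained on GPT-2 output, so
# indicative of the approach, not a production-grade detector).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

abstract = "We propose a novel framework for ..."  # abstract under review
result = detector(abstract)[0]
# The classifier returns a label ("Real" or "Fake") with a confidence score.
print(result["label"], result["score"])
```

The Northwestern findings suggest why such tools are only a partial answer: a roughly two-thirds hit rate, combined with false accusations against human authors, is a weak basis for editorial decisions.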

ChatGPT has weaknesses. The GPT-3 model underpinning ChatGPT, which interprets the initial query and generates the formatted response, exhibits a “hallucination” effect: it sometimes invents non-existent references in its responses. These references appear credible, as they are often assembled from fragments of real academic sources, but they are ultimately false. Similarly, while the AI is good at creating syntactically correct answers, these are often semantically incorrect.
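One pragmatic defence against hallucinated references is to check each citation against a bibliographic index. Below is a minimal sketch, assuming the public Crossref REST API and the requests library; the helper function and the example title are hypothetical.

```python
# A minimal sketch, assuming the public Crossref REST API and the
# requests library: look up a cited title and see whether any closely
# matching record actually exists.
import requests

def crossref_matches(reference_title: str, rows: int = 3) -> list[str]:
    """Return the top candidate titles Crossref finds for a citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference_title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# If none of the returned titles resembles the citation, treat it as suspect.
print(crossref_matches("A plausible-looking but possibly invented paper title"))
```

A title match alone is not proof: a careful check would also compare authors, year and DOI, since hallucinated references often splice real titles onto the wrong authors.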

Biases in the data that make up the knowledge base of an AI tool result in biases in the responses it creates. Biased language is also an issue: Meta’s Galactica, a direct competitor to ChatGPT, was withdrawn after three days due to problems with racist language in its responses. The technology in its present state has poor awareness of context or ethics.

Just the start

AI technologies are likely to keep improving in their ability to mimic human writing styles. The use of ChatGPT and similar technologies therefore raises several questions. What proportion of academic work should be allowed to be produced by AI text generators? Should authors be allowed to use AI text generators to edit or enhance their work? And how can the academic community assess the originality and integrity of research publications, in particular PhD theses, in future?

There are potential positive applications, too. ChatGPT, along with other AI chatbots such as Alphabet’s Sparrow and related AI tools such as summarisers (like Scholarcy), has the potential to streamline the research process in a similar way to the “hollowing out” of manufacturing: certain types of work may be done more quickly and with less effort by AI.

For instance, by extracting data from a huge knowledge base and presenting the results in an easy-to-understand, pre-written format, these tools could release researchers from much of the effort involved in tasks such as data collection and literature review.
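As a rough illustration of AI-assisted literature review, the sketch below condenses a single abstract, assuming the transformers library and the widely used facebook/bart-large-cnn summarisation model; the placeholder text stands in for real paper abstracts drawn from a review corpus.

```python
# A minimal sketch, assuming the transformers library and the public
# "facebook/bart-large-cnn" summarisation model: condensing one abstract
# as a small step in an AI-assisted literature review.
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Placeholder for a paper abstract or section text. In practice this "
    "string would be drawn from each paper in the review corpus, and the "
    "condensed outputs collated into a first-pass synthesis for the "
    "researcher to evaluate and verify."
)
summary = summariser(abstract, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```

The point of the sketch is the division of labour it implies: the machine does the compression, while the researcher remains responsible for judging what the compression has lost.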

To use this capability effectively in the production of robust research, researchers will need to extend their existing skill sets, focusing on analysis and evaluation of the logic and semantics of the output, on judging the quality of the answer, and on fact checking.

Notably, these skills are not just relevant to academic research – in the age of fake news and organised misinformation we all need to develop them.
