We’re living in a world of artificial intelligence – it’s academic publishing that needs to change

Scholars are using generative AI to assist them with writing articles, but should they be punished for it? The academic publishing community may need to change its mindset, writes Benjamin Luke Moorhouse

The academic publishing community has been rocked by stories of authors using generative artificial intelligence (GenAI) to write articles and create images for publications. Slightly over 1 per cent of articles published in 2023 were written with the assistance of large language models (LLMs), according to one study. In my view, this is likely to be an underestimate. 

Published articles that have since been retracted or removed because they featured GenAI content were found to include verbatim generic responses from LLMs:

“Certainly, here is a possible introduction for your topic: Lithium-metal batteries are promising candidates for…”

“In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.”  

These cases, and others such as the Business Insider headline “An AI-generated rat with a giant penis highlights a growing crisis of fake science that’s plaguing the publishing business”, highlight the challenges academic publishing faces in adapting to the analytical and generative capabilities of AI tools.

Clearly, GenAI tools have the potential to exacerbate the crisis of confidence in academic publishing because readers are unsure whether what they are reading was written by humans, machines or both. At the same time, academic publishing in many contexts is gripped by a publish-or-perish culture, in which academics have a strong incentive to outsource their writing to GenAI to increase their productivity.

We need to change our mindset towards GenAI tools to reclaim the narrative and restore trust in academic publishing. Here, I’ll offer suggestions on how we can move forward.

What are the guidelines on GenAI use?

In response to the development of GenAI, publishers and journals, following guidance from the Committee on Publication Ethics, now require contributing authors to declare any use of GenAI in their writing process. Times Higher Education also requires authors to declare their use of GenAI, and similar requirements are a common feature of university assessment guidance. However, the voluntary declaration approach does not seem to be having the desired effect. Journal editors I have spoken to say they rarely receive GenAI declarations with submitted articles. A study of university students’ AI use declarations found similarly low compliance. Another study suggests that the academic community fears that disclosing GenAI use will give editors, reviewers and potential readers a negative perception of the authors.

Within universities, students fear being unfairly judged or penalised for accurately declaring their AI use in assessments. These concerns may be valid: according to one study, readers view texts they believe to be machine-translated more negatively than texts they believe were translated by humans. Deception seems preferable to transparency because transparency is seen to carry the greater risk to the author.

How we really engage with AI tools 

The practice of disclosing AI use may not reflect how we use GenAI tools today. GenAI functionality is increasingly embedded in our everyday work tools. Microsoft Word includes Copilot and Grammarly offers AI writing assistance, and these are just two of the many tools we might use as part of our research and writing process. We might use an LLM to “discuss” our initial research ideas, use an AI academic research engine (such as Consensus, Scite or Elicit) to gather and summarise literature, ask another LLM for feedback on the design of a research instrument, and get assistance from MAXQDA’s AI Assist during our data coding. We might switch on Grammarly or Microsoft Copilot while we write, accepting or dismissing suggestions on style, tone and language in real time. This blurs the lines between which uses are acceptable, which uses need declaring and whether we received assistance from AI at all.

How we normalise GenAI in academic writing 

In essence, using GenAI has become normalised in our academic writing processes. Many authors and students turn to GenAI tools to augment their intelligence and abilities but keep human oversight and ownership over the content they create. We need to consider how to reform academic publishing in a world where GenAI is normalised. 

First, we need to shift our mindset from seeing the use of these tools as a sign of deficit to a sign of enhancement. The responsible and ethical use of GenAI with the author’s oversight and accountability is perhaps no different from outsourcing tasks to research assistants or professional proofreaders. The author’s responsibility to verify information and check the accuracy of the tasks completed is the same. 

Second, we must develop methodological models and frameworks that show how to use GenAI ethically, legally and transparently to support authors’ knowledge-production activities. For example, using GenAI to verify human coding procedures could enhance the quality of data analysis. Models in which GenAI is used as part of the research process can be created, and protocols for writing about these processes can be drafted and tested.

Third, authors and students need to be educated about the ethical issues associated with using GenAI tools. With greater AI literacy, authors can take steps to address concerns regarding bias and intellectual property rights.

Once the normalisation of GenAI in academic publishing is accepted, it becomes easier to discuss the ethical and responsible uses of these tools. Users can come out of the shadows, and we can be more open, honest and reflective about our uses of GenAI. In this way, trust in academic publishing can start to be regained. 

I used Grammarly to assist me in writing this article.

Benjamin Luke Moorhouse is an assistant professor at Hong Kong Baptist University.

