

The Ethical Implications of AI in Scientific Publishing



The Turing Test, developed in the 1950s, aimed to determine whether a machine could mimic human intelligence. Since then, artificial intelligence (AI) has grown from a largely assistive tool into one that can generate both written and visual content. As a result, we are seeing a shift in how scientific research is conducted and disseminated. As the use of generative AI tools expands, and their potential in scientific research becomes better understood, important ethical considerations must be addressed.

 

The Alan Turing Institute lists bias and discrimination, a lack of transparency and invasions of privacy among the potentially harmful effects of AI.1 Companies like Google have created frameworks and published ethical AI principles to uphold high scientific standards and ensure accountability.2 However, ethics will remain an area of focus for researchers and publishers alike in order to prevent misuse of this technology.

 

A new era for scientific publishing


AI software applies machine learning algorithms that learn from data and can be trained to make predictions based on observed patterns. For example, scientists could use AI to predict the most promising drug candidates based on data from previous experiments.
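
As a rough illustration of the idea, rather than a description of any specific platform, the sketch below trains a model on hypothetical molecular descriptors and past activity scores, then ranks unseen candidates. The scikit-learn library, the feature layout and the data are all assumptions made for the example.

```python
# Minimal sketch: learning from past results to rank new drug candidates.
# The descriptors and activity scores are synthetic stand-ins; real pipelines
# use curated chemical features and much larger, experimentally derived datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical molecular descriptors (rows = molecules, columns = features)
X = rng.normal(size=(500, 8))
# Stand-in "activity" values that depend on a couple of the descriptors
y = 0.6 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model on past observations
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score unseen candidate molecules and pick the most promising ones
predicted_activity = model.predict(X_test)
top_candidates = np.argsort(predicted_activity)[::-1][:5]
print("Held-out R^2:", round(model.score(X_test, y_test), 3))
print("Top-ranked candidate indices:", top_candidates)
```

The point of the sketch is the workflow, not the model choice: the algorithm learns a pattern from previously observed data and uses it to prioritize which new molecules merit laboratory follow-up.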


Furthermore, tools like DALL-E can generate images, while others, such as Proofig AI, can review visual content and sub-images to identify discrepancies. Applied effectively, these proofing technologies can improve integrity in scientific publishing by flagging duplication and manipulation issues before publication, enabling publishers to correct unintentional errors or reject manipulated manuscripts.


Tackling misinformation


Just two months after its launch in 2022, the artificial intelligence chatbot ChatGPT reached 100 million users.3 Some people use the tool to write poems or ask for advice, but it can also be used to produce scientific content. In July 2023, Nature reported that a pair of scientists had produced a research paper on the impact of fruit and vegetable consumption and physical activity on diabetes risk in under an hour.4 The paper was reportedly fluent and correctly structured. However, ChatGPT had “a tendency to fill in gaps by making things up, a phenomenon often called hallucination.”


Generative AI is also transforming imagery. Users can simply describe an image for platforms like DALL-E, Stable Diffusion and Midjourney, and the software will generate one in a matter of seconds.


These text-to-image systems have become more sophisticated, which makes AI-generated images difficult to detect, even for subject experts. A team led by computer scientist Rongshan Yu from Xiamen University in China created a series of deepfake western blot and cancer images, and found that two out of three biomedical specialists could not distinguish the AI-generated images from genuine ones.5


In response to the potentially harmful risks of some AI-generated image and text tools, many publishers have adapted their editorial policies to restrict the use of AI to generate content for scientific manuscripts. For example, Nature will not accept or credit a large language model (LLM) tool as an author, because AI tools cannot be held accountable for the work.6 Researchers using LLM tools must also document their use in the methods or acknowledgments section. Elsewhere, legal issues surrounding the use of AI-generated images and videos mean that image integrity editorial policies prohibit their use in Nature journals.7


Preventing misuse


Given the risks that generative AI tools pose to accuracy and transparency, academics should not use this technology without clear restrictions. There is a responsibility on the part of researchers, editors and publishing houses to verify the facts. The Committee on Publication Ethics (COPE) and publishers should also issue clear guidelines, updated as AI capabilities develop, outlining when it is appropriate and desirable to use AI technology and when it is not.


One concerning example of AI misuse in the “publish or perish” culture is the emergence of paper mills: organizations that produce fabricated content, including visuals such as charts. After screening 5,000 research papers, neuropsychologist Bernhard Sabel estimated that up to 34% of the neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both figures are well above the baseline of 2% reported in the 2022 COPE report.8


As well as checking written content, AI can automate the image-checking process and make it easier for both researchers and publishers to detect instances of misuse or unintentional duplications before publication. Some image integrity proofing software uses computer vision and AI to scan a manuscript and compare images in minutes, flagging any potential issues. This allows forensic editors to investigate further, using the tool to find instances of cut and paste, deletions or other manipulations.
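
To illustrate the general principle, rather than the workflow of any particular product, the sketch below uses perceptual hashing via the open-source imagehash library to flag near-duplicate figures in a manuscript. The file paths and distance threshold are hypothetical.

```python
# Illustrative sketch of automated duplicate-image screening with perceptual hashing.
# Not the method of any specific proofing tool; paths and threshold are hypothetical.
from itertools import combinations
from PIL import Image
import imagehash

manuscript_figures = ["fig1a.png", "fig1b.png", "fig3c.png"]  # hypothetical figure files

# Compute a perceptual hash for each figure; visually similar images yield similar hashes,
# even after resizing or recompression
hashes = {path: imagehash.phash(Image.open(path)) for path in manuscript_figures}

# Compare every pair; a small Hamming distance suggests possible duplication
THRESHOLD = 8
for (path_a, hash_a), (path_b, hash_b) in combinations(hashes.items(), 2):
    distance = hash_a - hash_b
    if distance <= THRESHOLD:
        print(f"Potential duplicate: {path_a} vs {path_b} (distance {distance})")
```

Flagged pairs would then go to a human integrity editor for closer inspection, since legitimate reuse (for example, a shared loading control disclosed in the figure legend) can also produce a match.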

 

Publishers and integrity teams alike are concerned by the rapid proliferation of new AI tools, especially those capable of creating or modifying images, and by whether it is feasible to detect fake content in manuscripts. As AI platforms become more sophisticated, it will become even harder to detect fake images with the naked eye. Even comparing these images against a database of millions of previously published pictures might prove futile, as AI-created images can appear authentic and unique despite being based on no legitimate data. Integrity experts can no longer depend on manual checks alone and must consider employing countermeasures to AI misuse. Developments in computer vision technologies and adversarial AI systems will therefore be critical for maintaining research integrity.


AI offers many benefits to scientific publishing, but these tools cannot act ethically of their own accord. As AI becomes more widely adopted by both publishers and researchers, integrity teams and organizations such as COPE and the Office of Research Integrity (ORI) should collaborate to establish clear guidelines and standards for its use in content generation. Despite these efforts, manipulated manuscripts and paper mills will persist, so publishers and integrity editors should continue adopting the most suitable technological solutions available for reviewing each manuscript before publication.


About the author:

 

Dr. Dror Kolodkin-Gal is a life sciences researcher who specializes in the development of ex vivo explant models to help understand disease progression and treatments. During his research, he became familiar with the issues surrounding images in scientific publications. Dror co-founded image integrity software provider Proofig AI to help enable the publication of the highest quality science.