Balancing innovation with responsibility: A policy proposal for ethical artificial intelligence use in medical scholarly publication
Maher Mohamad Najm1,2, Moustafa Mohamad Najm3
Author Affiliation
1Consultant, Department of Pediatrics, Pediatric Emergency Center, Hamad General Hospital,
2Clinical Lecturer, Department of Clinical Medicine, College of Medicine, Qatar University, Doha, Qatar,
3Assistant Professor, Department of Computer Systems and Networks, Faculty of Informatics Engineering, Damascus University, Syria
Abstract
The roots of artificial intelligence (AI) as a term go back to the mid-20th century, specifically to the year 1956, when it was first introduced by John McCarthy, who is considered the father of AI. However, AI remained confined to laboratories and research centers, with its use limited to experts, and many aspects of it remained theoretical during the AI winter period until the 21st century. The year 2022 can be considered a turning point in this field, as OpenAI introduced ChatGPT (Chat Generative Pre-trained Transformer), the most famous AI chatbot, which uses various advanced technologies to simulate human conversation for answering questions and generating texts.
DOI: 10.32677/yjm.v3i1.4552
Pages: 1-3
Publish Date: 11-05-2024
Full Text
The roots of artificial intelligence (AI) as a term go back to the mid-20th century, specifically to the year 1956, when it was first introduced by John McCarthy, who is considered the father of AI [1]. However, AI remained confined to laboratories and research centers, with its use limited to experts, and many aspects of it remained theoretical during the AI winter period until the 21st century. The year 2022 can be considered a turning point in this field, as OpenAI introduced ChatGPT, the most famous AI chatbot, which uses various advanced technologies to simulate human conversation for answering questions and generating texts. This breakthrough prompted other leading technology companies, such as Google and Microsoft, to develop their own chatbots [2]. It also marked the beginning of the rapidly growing use of AI in academia and scientific research, with AI-generated articles being submitted to scientific journals, including medical ones, for publication, either with the use of AI acknowledged or without disclosure. Some scientific papers have even been published with ChatGPT credited as one of the authors. Recently, voices have risen to reject this trend, considering it a threat to academic integrity, honesty, and responsibility, and leading to calls for boundaries and regulations on the use of AI tools in academia and scientific publishing [3,4].
Major developments in the world of artificial intelligence are enabling machines to imitate human writing by leveraging the vast information available on the Internet and training modern tools to extract and generate information based on pre-defined models. In response, intelligent tools for detecting non-human authorship are being developed in parallel [5,6]. This should serve as a warning to those working in biomedical research that there must be a commitment to transparency and to the balanced, ethical use of this technology.
In this paper, we provide a simplified explanation of the architecture and functionality of AI chatbots and then explore the ethical issues associated with the use of AI tools in academic writing. We propose an AI usage policy as an attempt to share ideas with others working in the field.
AI-based Chatbots Architecture
Chatbots such as GPT (Generative Pre-trained Transformer), which are built on Large Language Models (LLMs) tailored to a specific goal, are computer programs designed to simulate conversation with human users. They employ a variety of techniques to understand and respond to user inputs, including Natural Language Processing (NLP) and machine learning algorithms. NLP helps chatbots understand the meaning and context of user messages, enabling them to extract relevant information and intent. LLMs are advanced models trained on vast amounts of text data to generate human-like responses. These models use machine learning techniques to process and generate text, allowing chatbots to provide more accurate and contextually relevant responses. Neural networks are a machine learning technique inspired by the structure of the human brain, consisting of interconnected nodes (neurons) organized in layers. They enable the model to learn complex patterns in the data and improve its performance over time. By combining all these techniques, chatbots can effectively engage in conversations with users, making them valuable tools for customer service, information retrieval, and other applications [7-9].
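To give a concrete sense of the "interconnected nodes organized in layers" described above, the following minimal Python sketch (our illustration, with arbitrary layer sizes and untrained random weights, not the code of any actual chatbot) passes a toy numeric input through three fully connected layers. In a real LLM, training adjusts billions of such weights so that the final layer scores plausible next words.

```python
# A minimal sketch of a layered neural network: each layer multiplies its
# input by a weight matrix, adds biases, and applies an activation function.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with random (untrained) weights."""
    w = rng.normal(size=(x.shape[0], n_out))  # connection weights
    b = np.zeros(n_out)                       # biases
    return np.maximum(0, x @ w + b)           # ReLU activation

x = np.array([0.5, -1.0, 0.3, 0.8])  # toy numeric encoding of an input
h1 = layer(x, 8)                     # first hidden layer
h2 = layer(h1, 8)                    # second hidden layer
out = layer(h2, 2)                   # output layer: two scores
print(out)
```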
The diagram in Figure 1 illustrates the architecture of a chatbot and its operation in a simplified manner. Users typically interact with chatbots over the internet. Chatbots utilize NLP to understand and interpret user inputs, such as text or speech. This process involves tokenizing and parsing the input text to extract keywords, meaning, and intent, such as asking a question, making a request, or providing information. Once the chatbot comprehends the user's message and identifies the intent, it uses algorithms to retrieve relevant information from knowledge bases, databases, or other external sources. Additionally, the chatbot considers user input as a source of information. Subsequently, the chatbot generates a response and delivers it to the user in a way that mimics natural conversation. Chatbots use machine learning techniques, such as Neural Networks, to learn and enhance their capabilities. The more information the chatbot receives, the more it learns and improves. That is why companies often introduce their chatbots with a free version initially, allowing them to train and improve without cost, leveraging the vast amount of information provided by users.
Figure 1: Chatbot architecture
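To make Figure 1 concrete, the sketch below traces the same steps in code: tokenize the input, identify the intent, retrieve an answer from a knowledge base, and generate a conversational reply. All function and variable names here are hypothetical illustrations; real chatbots replace each step with trained NLP models, an LLM, and far richer retrieval.

```python
# An illustrative, rule-based stand-in for the chatbot pipeline in Figure 1.
import re

KNOWLEDGE_BASE = {
    "hours": "The clinic is open 8:00-16:00, Sunday to Thursday.",
    "location": "We are located in Building 3, Medical City.",
}

def tokenize(text: str) -> list[str]:
    """Split the user's message into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def detect_intent(tokens: list[str]) -> str:
    """Map keywords to an intent; a real chatbot uses a trained classifier."""
    if "open" in tokens or "hours" in tokens:
        return "hours"
    if "where" in tokens or "location" in tokens:
        return "location"
    return "unknown"

def respond(intent: str) -> str:
    """Retrieve an answer and wrap it in conversational phrasing."""
    answer = KNOWLEDGE_BASE.get(intent)
    if answer is None:
        return "I'm not sure about that. Could you rephrase your question?"
    return f"Thanks for asking! {answer}"

print(respond(detect_intent(tokenize("When are you open?"))))
```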
It is important to note that while chatbots can provide helpful responses to queries, their answers are based on the information they have been trained on, which may not always be correct or complete [9]. Moreover, their training data may not be up to date, especially in free versions such as ChatGPT 3.5, whose training data extends only to 2022. Furthermore, chatbots cannot assess the quality of an information source, so they may provide information from non-authentic sources [10-12].
AI-related ethical concerns in biomedical research field
The ethical concerns surrounding the use of AI-based chatbots in crafting scientific papers revolve around two key dimensions: the first relates to transparency and scientific integrity, and the second to responsibility and accountability for the content [12,13]. Utilizing machine-generated texts in scientific papers raises concerns akin to scientific plagiarism, where the human author claims credit for text generated by the machine. One proposed solution to this ethical quandary suggests including the AI tool used in the authors' list, as some have attempted [14,15]. However, this solution introduces another ethical dilemma, as machines inherently lack the capability to be responsible or accountable for content and potential biases. This limitation stems from the inherent inaccuracy of information on the World Wide Web and the likelihood of incorrect or biased outputs from these tools. The intuitive answer to the question of holding machines accountable for such nuances is a resounding no [16].
The year 2023 witnessed a heated debate on the optimal way to deal with this new development in the world of academic research and publishing. Journals and publishers have already started formulating their own policies to address this issue. While the editor of Science initially decided to completely ban the use of AI tools in writing manuscripts [17] before updating the journal's policies to be more balanced [18], other journals, such as the Journal of the American Medical Association (JAMA) [15,19] and the journal of the American Academy of Pediatrics (Pediatrics) [20], have adopted policies that allow the use of AI under conditions that enhance transparency, accountability, scientific productivity, and integrity. We argue that a complete ban on AI use in academic writing opens the door to undisclosed use, posing a threat to transparency and blocking benefits that involve nothing unethical, such as linguistic and grammatical correction of texts. We therefore believe that the second approach, which allows the ethical use of AI, is more practical and logical: rather than rejecting AI use outright, it regulates that use to prevent data and image manipulation, requires full, detailed disclosure in the Methods or Acknowledgment section [15,18], and keeps the author or group of authors responsible for the content of the text.
Adding the chatbot to the list of references has also been suggested [12]. We argue that this suggestion runs into the same problems of honesty and responsibility: the AI tool itself is not the original source of the information, machines cannot be considered a party that bears responsibility, and the developers of AI tools usually state that they do not guarantee the accuracy of their output. Moreover, artificial intelligence sometimes hallucinates [21,22].
Journals and publishers play a crucial role in regulating AI utilization. Therefore, there is an urgent need for clear guidelines to prevent the misuse of AI tools in academic writing and to maintain the integrity of academic research.
Proposed AI Utilization Policy
Based on the preceding discussion, we propose an AI usage policy comprising the following key points:
- The conventional criteria of the International Committee of Medical Journal Editors (ICMJE) for identifying authorship [23] do not extend to AI tools. As such, it is explicitly prohibited to attribute authorship to any AI tool or list them as authors.
- The author or group of authors assumes complete responsibility for reviewing, approving, and disclosing all AI tools used during the preparation of their manuscript.
- AI tools may be used for technical tasks such as literature reviews, data analysis, and linguistic and grammatical review of texts intended for publication. However, it is the author's responsibility to oversee the reviewing, approval, and referencing of information generated by these tools.
- A comprehensive disclosure is essential when utilizing AI techniques or tools. This disclosure should include details such as the name of the language model or AI tool, its version and extension numbers, the manufacturer, and the methodology employed during its application (an illustrative example follows this list).
- When AI tools are used to construct scientific material, such as literature reviews and data analysis, the disclosure of their use belongs in the Methods section. Conversely, if an AI tool is employed to enhance text quality through linguistic and grammatical review, the disclosure should be placed in the Acknowledgment section.
- AI is permissible for generating illustrative images in scientific material, provided these images undergo thorough review and approval, with the authors bearing full responsibility. However, any manipulation of real images to alter or conceal details, thereby distorting the reality of the image, is strictly prohibited.
- It is important to note that AI tools are not considered primary sources of information. They are not accountable for the conclusions they reach, and as such, they cannot be included in the reference list.
- The journal management has the full right to reject articles if the editor or the reviewers detect any AI usage that is not clearly disclosed by the authors in accordance with the policy. The journal may blacklist the authors and report the issue to their institution if such violations are repeated.
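As an illustrative example of the disclosure called for above (hypothetical wording that authors would adapt to their own case), an Acknowledgment-section statement might read: "ChatGPT (GPT-4, OpenAI) was used to improve the grammar and readability of the manuscript text. All suggested edits were reviewed and approved by the authors, who take full responsibility for the content."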
CONCLUSION
The advancement of AI and the proliferation of AI-based chatbots have revolutionized various aspects of academic research and publishing. However, this development has raised ethical concerns regarding transparency, accountability, and integrity in scientific writing. To address these issues, we have proposed an AI usage policy that emphasizes the responsible use of AI tools. Authors must take full responsibility for overseeing the review, approval, and referencing of information generated by AI tools. Journals and publishers play a critical role in regulating AI usage, and clear guidelines are necessary to ensure that AI tools are used ethically and transparently in academic writing. Implementing such policies will help maintain the credibility and integrity of academic research in the age of AI.
AUTHORS’ CONTRIBUTIONS
All authors made substantial contributions to the reported work, participating in various aspects such as conception, study design, implementation, data collection, analysis, and interpretation, as well as contributing to drafting, revising, and critically reviewing the article, and ultimately approving the final version for publication.
References
- Natale S, Ballatore A. Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence. 2020 Feb;26(1):3-18.
- de Souza LL, Fonseca FP, Martins MD, et al. ChatGPT and medicine: a potential threat to science or a step towards the future? J Med Artif Intell 2023;6:19.
- Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023 Jan;613(7945):620-621.
- Park JY. Could ChatGPT help you to write your next scientific paper? concerns on research ethics related to usage of artificial intelligence tools. J Korean Assoc Oral Maxillofac Surg. 2023 Jun 30;49(3):105-106.
- The Future of AI Detectors: A Comprehensive Evaluation. Available at: https://aicontentfy.com/en/blog/future-of-ai-detectors-comprehensive-evaluation [Last accessed January 1, 2024].
- Elkhatat AM, Elsaid K, Almeer S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int J Educ Integr. 2023;19(1):17.
- Khadija A, Zahra FF, Naceur A. AI-powered health chatbots: toward a general architecture. Procedia Comput Sci. 2021;191:355-60.
- Cascella M, Montomoli J, Bellini V, et al. Writing the paper “unveiling artificial intelligence: an insight into ethics and applications in anesthesia” implementing the large language model ChatGPT: a qualitative study. J Med Artif Intell. 2023;6:9.
- Wagner MW, Ertl-Wagner BB. Accuracy of Information and References Using ChatGPT-3 for Retrieval of Clinical Radiological Information. Can Assoc Radiol J. 2024 Feb;75(1):69-73.
- Deng J, Heybati K, Park YJ, et al. Artificial intelligence in clinical practice: A look at ChatGPT. Cleve Clin J Med. 2024 Mar 1;91(3):173-180.
- Kooli C. Chatbots in Education and Research: A Critical Examination of Ethical Implications and Solutions. Sustainability. 2023;15(7):5614.
- Hosseini M, Resnik DB, Holmes K. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics. 2023;19(4):449-465.
- Memarian B, Doleck T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education: A systematic review. Comput Educ Artif Intell. 2023;5:100152.
- Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620–621.
- Flanagin A, Bibbins-Domingo K, Berkwits M, et al. Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA. 2023;329(8):637–639.
- Ide K, Hawke P, Nakayama T. Can ChatGPT Be Considered an Author of a Medical Article? J Epidemiol. 2023 Jul 5;33(7):381-382.
- Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313.
- Science Journals: Editorial Policies. Authorship and general policies. https://www.science.org/content/page/science-journals-editorial-policies#authorship [Last accessed January 21, 2024].
- Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA. 2023;330(8):702–703.
- Publication Ethics (Pediatrics). Available at: https://publications.aap.org/pediatrics/pages/author-instructions?autologincheck=redirected#artificial_intelligence [Last accessed February 15, 2024].
- Santoni de Sio F, Mecacci G. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philos. Technol. 2021;34:1057–1084.
- Alkaissi H, McFarlane SI. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus. 2023;15(2):e35179.
- ICMJE. Defining the Role of Authors and Contributors. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html [Last accessed February 3, 2024].